[Geoserver-users] Geoserver WMS speed (mapserver)

I played around a bit today with geoserver performance.

To generate my "normal" TIGER images, I've found (using a profiler) that
geoserver spends about 1/3 of the CPU time reading/transforming data,
1/3 drawing it, and 1/3 converting the image to PNG format.

Both the JAI 100% java PNG writer and the JAI CLIB native PNG writer are
slower than the 100% java one that I have in geoserver now. That was a
bit surprising.
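
For anyone who wants to reproduce the comparison, here is a minimal way
to time just the encoding step -- this uses the stock javax.imageio
writer, not necessarily the same writer geoserver has wired in, so
treat it as a sketch of the measurement rather than the real code path:

import javax.imageio.ImageIO;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class PngEncodeTimer {
    public static void main(String[] args) throws IOException {
        // Stand-in for an already-rendered map image.
        BufferedImage img = new BufferedImage(1024, 768, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, img.getWidth(), img.getHeight());
        g.dispose();

        // Time only the PNG encoding step.
        long start = System.currentTimeMillis();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("encoded " + out.size() + " bytes in " + elapsed + " ms");
    }
}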

I looked more closely at labeling, and I've sped it up quite a bit for
road-like cases. This means I have extra cpu time hanging around to do
better label placement!

Take a quick look at the attached images (ignore the labels). The
non-antialiased one was created with mapserver and the antialiased one
with geoserver. I couldn't figure out how to get antialiased thick
lines in mapserver.

I found that mapserver can read data extremely quickly -- way faster
than the geotools postgis reader can.

In the end, mapserver was able to produce maps about 2.5 to 3.0 times as
fast as geoserver. Mapserver is quite slow at drawing thick lines (and,
if you look at the results, not very good at it); for non-thick lines
it's very, very fast.

One of the problems with the way I'm drawing the TIGER images is that
I'm using two FeatureTypeStyles to make the roads look 'thick'. This
means that the lite renderer will go back to the postgis database twice
to read, transform, generalize, and draw the features.
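
The two styles are presumably doing the usual casing trick -- draw every
road once with a wide dark stroke, then again with a narrower light
stroke on top. A minimal Java2D sketch of that effect, with plain Path2D
shapes standing in for the features read from postgis (the real styling
lives in the SLD, of course):

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Path2D;
import java.util.List;

public class CasingSketch {
    static void drawRoads(Graphics2D g, List<Path2D> roads) {
        // Pass 1: wide dark casing (what the first FeatureTypeStyle does).
        g.setColor(Color.DARK_GRAY);
        g.setStroke(new BasicStroke(6f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND));
        for (Path2D road : roads) {
            g.draw(road);
        }
        // Pass 2: narrower light centre line (the second FeatureTypeStyle).
        g.setColor(Color.ORANGE);
        g.setStroke(new BasicStroke(3f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND));
        for (Path2D road : roads) {
            g.draw(road);
        }
    }
}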

There are two quick ways to make this better (both sketched below):

a) draw on two (or more) images at a time (one for each
FeatureTypeStyle), and then combine all the images at the end. The
udig renderer does something similar to this when it's rendering layers
from different sources. Mapbuilder does something similar as well
(it requests each layer individually from the servers). This allows
streaming.

b) store the generalized/transformed features during the first
FeatureTypeStyle rendering, and use them for the subsequent
FeatureTypeStyles. Generalized features tend to be pretty small, so
for most applications you can get away with this w/o blowing through
memory.
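
A rough sketch of option (a), assuming the renderer can be handed one
Graphics2D per FeatureTypeStyle: each style gets its own transparent
image, every feature is drawn to all of them during a single read pass,
and the images are composited in style order at the end (the
OnePassRenderer callback below is hypothetical):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class MultiImageCompositor {

    /** Hypothetical hook standing in for the lite renderer's drawing loop. */
    interface OnePassRenderer {
        void renderOnePass(Graphics2D[] perStyleGraphics);
    }

    static BufferedImage render(int width, int height, int styleCount,
                                OnePassRenderer renderer) {
        // One transparent (ARGB) buffer per FeatureTypeStyle.
        BufferedImage[] layers = new BufferedImage[styleCount];
        Graphics2D[] graphics = new Graphics2D[styleCount];
        for (int i = 0; i < styleCount; i++) {
            layers[i] = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            graphics[i] = layers[i].createGraphics();
        }

        // Single streaming pass over the data: each feature is read,
        // transformed and generalized once, then drawn to every style's buffer.
        renderer.renderOnePass(graphics);

        // Composite in FeatureTypeStyle order onto the final map.
        BufferedImage map = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D out = map.createGraphics();
        for (int i = 0; i < styleCount; i++) {
            graphics[i].dispose();
            out.drawImage(layers[i], 0, 0, null);
        }
        out.dispose();
        return map;
    }
}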
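
And a sketch of option (b), assuming JTS geometries and Douglas-Peucker
generalization (the simplifier choice and the org.locationtech package
name are assumptions on my part -- the point is just to keep the small,
generalized copies around for the later styles):

import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.simplify.DouglasPeuckerSimplifier;

import java.util.ArrayList;
import java.util.List;

public class GeneralizedFeatureCache {
    private final List<Geometry> cache = new ArrayList<Geometry>();
    private final double tolerance;   // generalization distance, in map units

    GeneralizedFeatureCache(double tolerance) {
        this.tolerance = tolerance;
    }

    /** First FeatureTypeStyle pass: generalize, remember, and return the shape to draw. */
    Geometry addAndGeneralize(Geometry raw) {
        Geometry simplified = DouglasPeuckerSimplifier.simplify(raw, tolerance);
        cache.add(simplified);
        return simplified;
    }

    /** Later passes: replay the small in-memory copies instead of re-reading postgis. */
    List<Geometry> cached() {
        return cache;
    }
}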

NOTE: At some point in the future, we might start taking rule <Filters>
and handing them off to the Datastore. I don't think that's going to be
a real problem, but we should be cautious with it.

I know Jessie's done a bunch of work on the Shapefile renderer and
wanted to pass hints off to the datastores so they can all benefit from
the speed increases.

dave


dblasby@anonymised.com wrote:

Take a quick look at the attached images (ignore the labels). The
non-antialiased one was created with mapserver and the antialiased one
with geoserver. I couldn't figure out how to get antialiased thick
lines in mapserver.

What's the cause of the size difference between the two images (53k vs
166k)? Is it that the antialiasing is adding more shades (and bit
depth), and if so, is that adding to the encoding time?

James

dblasby@anonymised.com wrote:

I found that mapserver can read data extremely quickly -- way faster
than the geotools postgis reader can.

Hum... I'd say mapserver is using a binary connection with postgis,
whilst the fastest thing I could get from postgis was using a bytea
encoding (over the text-oriented connection the jdbc driver uses by
default). That is, if the postgis driver is still working with bytea
encoding...
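
For reference, the bytea/WKB approach looks roughly like this: ask
postgis for well-known binary and parse it with the JTS WKBReader,
instead of letting the driver ship geometries as text. Connection
details, table and column names are placeholders, and on older postgis
the function is AsBinary rather than ST_AsBinary:

import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.ParseException;
import org.locationtech.jts.io.WKBReader;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class WkbReadSketch {
    public static void main(String[] args) throws SQLException, ParseException {
        WKBReader wkbReader = new WKBReader();
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/tiger", "user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT ST_AsBinary(the_geom) FROM roads")) {
            while (rs.next()) {
                // The bytea column comes back as raw WKB bytes.
                Geometry geom = wkbReader.read(rs.getBytes(1));
                // ... hand geom off to transform/generalize/draw
            }
        }
    }
}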

There are two quick ways to make this better:

a) draw on two (or more) images at a time (one for each
FeatureTypeStyle), and then combine all the images at the end. The
udig renderer does something similar to this when it's rendering layers
from different sources. Mapbuilder does something similar as well
(it requests each layer individually from the servers). This allows
streaming.

This is the easiest idea to implement for the lite renderer, I guess...
if you see that a data store is associated with multiple symbolizers,
create secondary buffered images to draw on and then blit them to the
main Graphics2D object. In your image the layer is read three times,
twice to draw the roads and once for the labels, isn't it?

b) store the generalized/transformed features during the first
FeatureTypeStyle rendering, and use them for the subsequent
FeatureTypeStyles. Generalized features tend to be pretty small, so
for most applications you can get away with this w/o blowing through
memory.

This is basically what j2d is doing...

Best regards
Andrea Aime