All,
When I upgraded to 1.6 from 1.4, I used my old data directory, and there
were a few things that didn't quite work 'out of the box', but things are
mostly working fine. However, I have one WMS query which isn't working as well
as it did previously and I'm trying to figure it out. The layer is a
PostGIS datastore with ~750,000 polygon records. The query uses an SLD with
a filter to get all records where field 1 = x, then colors each matching
feature on the map according to field 2.
The query takes nearly 2 minutes to return a map the first time you make the
request, but after that all subsequent queries return quickly (1-2
seconds). This leads me to believe that on the first query GeoServer is
either having to load something into memory, or to make the db
connection, or something like that. Any ideas?
And this leads me to another question. When looking at the datastore config
files (catalog.xml), I've noticed that the newer datastores have tags for:
Min connections
Max connections
Validate connections
Estimated extent
while the older datastores (carried over from my 1.4 GeoServer) do not have
these tags. So is it a problem that some of these tags are missing? And what
do these tags do?
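For reference, the relevant fragment of one of the newer catalog.xml entries looks roughly like this (I'm reproducing the element and parameter names from memory, so treat this as a sketch rather than the exact syntax):

```xml
<datastore id="mystore">
  <connectionParams>
    <parameter name="dbtype" value="postgis"/>
    <parameter name="host" value="localhost"/>
    <parameter name="database" value="mydb"/>
    <!-- the new tags that the 1.4-era entries are missing: -->
    <parameter name="min connections" value="2"/>
    <parameter name="max connections" value="10"/>
    <parameter name="validate connections" value="true"/>
    <parameter name="estimated extent" value="true"/>
  </connectionParams>
</datastore>
```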
Thanks,
Steve
Stephen Crawford
Center for Environmental Informatics
The Pennsylvania State University
Hi Stephen,
In 1.6, connection pooling was added. This could be leading
to an initial setup cost for connections, but I can't see it being two
minutes. I suspect the initial cost is calculating the extent of your
data...
Andrea, any thoughts?
_______________________________________________
Geoserver-users mailing list
Geoserver-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/geoserver-users
--
Justin Deoliveira
The Open Planning Project
http://topp.openplans.org
Justin Deoliveira wrote:
> I suspect the initial cost is calculating the extent of your data...
Yeah, that sounds like it, but I don't understand how a GetMap
triggers the computation of the native bbox...
Anyway, it's easy to check. Restart GeoServer, go
to the feature type panel of the dataset in question, click
on compute bbox, submit, apply, _save_. When you do so, the native
bbox will be computed as well and stored permanently.
Stop GeoServer, restart.
Is the request any faster now?
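To compare timings before and after, something like this from the command line works; note the host, layer, and style names below are hypothetical placeholders you'd replace with your own:

```shell
# Sketch with hypothetical host/layer/style names; substitute your own setup.
BASE="http://localhost:8080/geoserver/wms"
QUERY="service=WMS&version=1.1.1&request=GetMap&layers=topp:mylayer&styles=mystyle&bbox=-180,-90,180,90&width=600&height=300&srs=EPSG:4326&format=image/png"
echo "$BASE?$QUERY"
# Time a cold request right after restart, then a warm one, and compare:
# time curl -s -o first.png  "$BASE?$QUERY"
# time curl -s -o second.png "$BASE?$QUERY"
```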
It would be nice to have the dataset and the SLD so that I can
throw everything into a profiler and really see what's going on.
Cheers
Andrea