[Geoserver-users] Geoserver taking so much time to complete a request to create a layer

Hi,
we are using GeoServer 2.8.1.
Some requests sent to GeoServer take more than 5 seconds to complete, so our
client times out and we get a SocketTimeoutException.
In the attached log file extract, you can see that GeoServer logs nothing
between "2017-02-14 14:37:37,441" and "2017-02-14 14:37:48,049", except for
"[diskquota.CacheCleanerTask] - Global cache quota enforcement task still
running, avoiding issueing a new one...".
What is causing GeoServer to take so long before it finally outputs "POST
coverage IMAGE_3664_1,IMAGE_3664_1" (which I assume marks the end of the
request processing)?
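
For reference, the failing request is issued by our client roughly like the
sketch below. This is only an illustration in Python (our real client is
Java, which is where the SocketTimeoutException comes from); the host,
credentials and XML body are placeholders, and only the endpoint and the
5 second limit match our setup.

# Hypothetical sketch of the call that times out. Host, credentials and the
# coverage XML are placeholders; the endpoint is the one visible in the log
# below and the 5 second timeout mirrors our client's setting.
import requests

coverage_xml = "<coverage><name>IMAGE_3664_1</name></coverage>"

try:
    resp = requests.post(
        "http://localhost:8080/geoserver/rest"
        "/workspaces/myApp/coveragestores/image_3664_1/coverages",
        data=coverage_xml,
        headers={"Content-Type": "text/xml"},
        auth=("admin", "geoserver"),
        timeout=5,  # the client gives up after 5 seconds
    )
    resp.raise_for_status()
except requests.exceptions.Timeout:
    # this is the failure we see (SocketTimeoutException on the Java side)
    print("coverage creation did not complete within 5 seconds")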

Thanks,
Jessica

2017-02-14 14:37:30,180 DEBUG
[security.IncludeQueryStringAntPathRequestMatcher] - Checking match of
request : 'Path:
/rest/workspaces/myApp/coveragestores/image_3664_1/coverages, QueryString:
null'; against '/web/**'
...
2017-02-14 14:37:30,205 DEBUG [geoserver.config] - Persisting coverage
IMAGE_3664_1
2017-02-14 14:37:30,206 DEBUG [geoserver.config] - Persisted
com.sun.proxy.$Proxy10 to
workspaces/myApp/IMAGE_3664_1/IMAGE_3664_1/coverage.xml
2017-02-14 14:37:30,209 DEBUG [geoserver.config] - Persisted
com.sun.proxy.$Proxy18 to global.xml
2017-02-14 14:37:30,210 DEBUG [catalog.impl] - Warning, some code is setting
the LayerInfo name, but that will be ignored
2017-02-14 14:37:30,210 DEBUG [catalog.impl] - Warning, some code is setting
the LayerInfo enabled flag, but that will be ignored
2017-02-14 14:37:30,211 DEBUG [geoserver.config] - Persisting layer
IMAGE_3664_1
2017-02-14 14:37:30,211 DEBUG [geoserver.config] - Persisted
com.sun.proxy.$Proxy12 to
workspaces/myApp/IMAGE_3664_1/IMAGE_3664_1/layer.xml
2017-02-14 14:37:30,211 DEBUG [gwc.layer] - Handling add event:
LayerInfoImpl[IMAGE_3664_1, resource:CoverageInfoImpl[IMAGE_3664_1]]
2017-02-14 14:37:30,212 DEBUG [gwc.layer] - Depersisting
GeoServerTileLayerInfo from
/rt/geoserver/data_dir/gwc-layers/LayerInfoImpl--31e45cbf_15a015a8466_-2f99.xml.tmp
2017-02-14 14:37:32,144 DEBUG [diskquota.QueuedQuotaUpdatesConsumer] -
Committing TileSet[IMAGE_3621#EPSG:4326#image/png]1 aggregated updates, 1
different pages, accum quota diff: -155,648.0 B, created 2000ms ago] to
quota store due to max wait time reached
2017-02-14 14:37:35,184 DEBUG
[security.IncludeQueryStringAntPathRequestMatcher] - Checking match of
request : 'Path: /rest/layergroups/image_3664.xml, QueryString: null';
against '/web/**'
...
2017-02-14 14:37:35,185 DEBUG [org.geoserver] - Thread 18 locking in mode
READ
2017-02-14 14:37:37,441 DEBUG [diskquota.CacheCleanerTask] - Global cache
quota enforcement task still running, avoiding issueing a new one...
2017-02-14 14:37:47,441 DEBUG [diskquota.CacheCleanerTask] - Global cache
quota enforcement task still running, avoiding issueing a new one...
2017-02-14 14:37:48,049 INFO [jdbc.JDBCQuotaStore] - Applying quota diff
-155648 on tileset TileSet[IMAGE_3621#EPSG:4326#image/png]
2017-02-14 14:37:48,050 INFO [jdbc.JDBCQuotaStore] - Applying page stats
payload PageStatsPayload [numTiles=-1, lastAccessTime=0, numHits=0,
page=TilePage[IMAGE_3621#EPSG:4326#image/png@anonymised.com,3,12], tileSet=null]
2017-02-14 14:37:48,050 DEBUG [diskquota.CacheCleaner] - Expiring tile page
TilePage[IMAGE_1123#EPSG:4326#image/png@anonymised.com,5,13] based on the global LFU
expiration policy
2017-02-14 14:37:48,050 DEBUG [gwc.layer] - Depersisting
GeoServerTileLayerInfo from
/rt/geoserver/data_dir/gwc-layers/LayerInfoImpl--31e45cbf_15a015a8466_-2f99.xml
2017-02-14 14:37:48,053 INFO [file.FileBlobStore] - Truncated 1 tiles
2017-02-14 14:37:48,053 DEBUG [seed.TruncateTask] - Completed truncate
request.
2017-02-14 14:37:48,053 INFO [seed.GWCTask] - Thread group finished TRUNCATE
task after 0.003 seconds
2017-02-14 14:37:48,053 INFO [jdbc.JDBCQuotaStore] - Truncating page
TilePage[IMAGE_1123#EPSG:4326#image/png@anonymised.com,5,13]
2017-02-14 14:37:48,053 INFO [jdbc.JDBCQuotaStore] - Setting page
TilePage[IMAGE_1123#EPSG:4326#image/png@anonymised.com,5,13] fill factor to 0.0
2017-02-14 14:37:48,067 DEBUG [jdbc.JDBCQuotaStore] - Creating tileset
TileSet[myApp:IMAGE_3664_1#EPSG:4326#image/png]
2017-02-14 14:37:48,068 DEBUG [jdbc.JDBCQuotaStore] - Creating tileset
TileSet[myApp:IMAGE_3664_1#EPSG:900913#image/png]
2017-02-14 14:37:48,068 DEBUG [jdbc.JDBCQuotaStore] - Creating tileset
TileSet[myApp:IMAGE_3664_1#EPSG:900913#image/jpeg]
2017-02-14 14:37:48,068 DEBUG [jdbc.JDBCQuotaStore] - Creating tileset
TileSet[myApp:IMAGE_3664_1#EPSG:4326#image/jpeg]
2017-02-14 14:37:48,068 DEBUG [geoserver.config] - Persisted
com.sun.proxy.$Proxy18 to global.xml
2017-02-14 14:37:48,068 INFO [catalog.rest] - POST coverage
IMAGE_3664_1,IMAGE_3664_1


There is more than one reason why it can be a lot of work.

Without anything to go on, I'd guess a vector source that has
difficult-to-calculate bounds.

It would be a lot easier to offer a reasonable guess if you said:

- what the data source is (specific format, or at least vector vs raster)
- what your layer definition is
- exactly how you are doing the upload
- whether you've tried turning off GWC
- what rasteriser you are using
- whether you've tried other debug options (and if so, what they showed)

and generally described the environment.

You could also try a newer version - 2.8.1 is EOL and probably has some
other undesirable characteristics.

Brad

Thanks for your reply.
Here are the best answers I can provide:

- format: raster image
- before creating the layer, we first create a coverage store passing the
path to the image, then we create the layer, and we normally finish by
creating a layer group that uses the layer (a rough sketch of these calls is
shown after this list)
- turning off GWC, or trying other options: this is production, so we cannot
do that easily
- the problem only occurs occasionally (we don't know what triggers it).
Most of the time the delay is seen at layer creation, but it can also happen
at layer group creation. The strange thing is that when the delay hits the
creation, the request to get the not-yet-existing layer group also ends in a
SocketTimeoutException (i.e. no answer within the 5 seconds).
- we have GeoServer 2.8.1 running at other sites, and we never experience
this delay there.

- do you think this has to do with the images?
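
For clarity, here is roughly what that creation sequence looks like (a rough
Python sketch; the host, credentials, store type and image path are
placeholders, and only the endpoints match what our application actually
calls):

# Sketch of our layer-creation sequence against the GeoServer REST API.
# Host, credentials, the GeoTIFF type and the image path are placeholders;
# only the endpoints correspond to the requests visible in the log above.
import requests

BASE = "http://localhost:8080/geoserver/rest"
AUTH = ("admin", "geoserver")
XML = {"Content-Type": "text/xml"}

# 1. create the coverage store, pointing it at the image on disk
store_xml = (
    "<coverageStore><name>image_3664_1</name><type>GeoTIFF</type>"
    "<enabled>true</enabled><workspace>myApp</workspace>"
    "<url>file:data/image_3664_1.tif</url></coverageStore>"
)
requests.post(BASE + "/workspaces/myApp/coveragestores",
              data=store_xml, headers=XML, auth=AUTH, timeout=5)

# 2. create the coverage/layer (the request that sometimes exceeds 5 seconds)
coverage_xml = "<coverage><name>IMAGE_3664_1</name></coverage>"
requests.post(BASE + "/workspaces/myApp/coveragestores/image_3664_1/coverages",
              data=coverage_xml, headers=XML, auth=AUTH, timeout=5)

# 3. create a layer group that uses the new layer
group_xml = ("<layerGroup><name>image_3664</name>"
             "<layers><layer>IMAGE_3664_1</layer></layers></layerGroup>")
requests.post(BASE + "/layergroups",
              data=group_xml, headers=XML, auth=AUTH, timeout=5)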


I am also trying to reproduce the problem locally. When I compare my local
GeoServer log files to the ones from the server where the problem occurs, I
notice that the following message

2017-02-14 14:37:37,441 DEBUG [diskquota.CacheCleanerTask] - Global cache
quota enforcement task still running, avoiding issueing a new one...

never appears in my local (healthy) GeoServer.

I played with the (local) disk quota parameters, and all I get is:

2017-03-22 09:54:20,516 DEBUG [diskquota.CacheCleanerTask] - Submitting
global cache quota enforcement task

which is what I expect. So why do I not see that "Submitting" message in the
unhealthy GeoServer's log file?

Thanks


Hello jtour,

     I will make a random suggestion (pun intended):

     Try installing haveged to see if it has any effect on your layer creation times.

     In my case, the entropy pool (the O.S. randomness source) was low, which meant GeoServer took a long time to start because some Tomcat tasks consumed bytes from the pool. Tomcat simply halted and waited for the pool to replenish and the random bytes to become available. Haveged refills the pool really fast, presumably with very good entropy, so GeoServer no longer halts for that reason.
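
     If you want to check whether the pool is actually starved before
installing anything, on Linux you can read the kernel's estimate directly.
A small sketch (Linux only; the /proc path is standard there but does not
exist on other systems):

# Print the kernel's current entropy estimate (Linux only). Values that stay
# very low while requests hang would point at a starved entropy pool; after
# installing haveged the number should recover quickly.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("available entropy:", f.read().strip())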

     Let me know how that goes.

Best,

Daniel

