[Geoserver-users] EofException on WFS requests after extended use

Hi,

After extended use of GeoServer 1.7.1 I get the exception pasted below
on each WFS request.

When this happens the GeoServer java process also gets stuck at 100%
CPU and WMS requests slow to a crawl.

The server is a 32-bit quad-core Windows 2003 machine and I'm using
jetty-6.1.11 as the container (MaxPermSize of 25mb, min/max heap of 1Gb).
The data source is PostgreSQL at all times.

The number of requests per day is >10000 and most of them are WFS. The
problem occurs approximately once a week.

Does anyone have ideas about the cause of this problem?

Regards,

Björn Harrtell

INFO | jvm 2 | 2009/03/10 08:40:35 | 2009-03-10 08:40:35.891::WARN: /geoserver/wfs
INFO | jvm 2 | 2009/03/10 08:40:35 | java.lang.RuntimeException: org.mortbay.jetty.EofException
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.wfs.response.WfsExceptionHandler.handle1_0(WfsExceptionHandler.java:102)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.wfs.response.WfsExceptionHandler.handleServiceException(WfsExceptionHandler.java:48)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.ows.Dispatcher.handleServiceException(Dispatcher.java:1293)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.ows.Dispatcher.exception(Dispatcher.java:1248)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.ows.Dispatcher.handleRequestInternal(Dispatcher.java:223)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:153)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:809)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)
INFO | jvm 2 | 2009/03/10 08:40:35 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
INFO | jvm 2 | 2009/03/10 08:40:35 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1097)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.vfny.geoserver.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:108)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1088)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:264)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:107)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:72)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.ui.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:110)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.providers.anonymous.AnonymousProcessingFilter.doFilter(AnonymousProcessingFilter.java:125)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.ui.basicauth.BasicProcessingFilter.doFilter(BasicProcessingFilter.java:178)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:229)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterChainProxy.doFilter(FilterChainProxy.java:148)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.acegisecurity.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:98)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1088)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.filters.LoggingFilter.doFilter(LoggingFilter.java:73)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1088)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.filters.ReverseProxyFilter.doFilter(ReverseProxyFilter.java:163)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1088)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.geoserver.filters.GZIPFilter.doFilter(GZIPFilter.java:47)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1088)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:206)
INFO | jvm 2 | 2009/03/10 08:40:35 | at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.Server.handle(Server.java:324)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
INFO | jvm 2 | 2009/03/10 08:40:39 | Caused by: org.mortbay.jetty.EofException
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.AbstractGenerator$Output.blockForOutput(AbstractGenerator.java:548)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:634)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:587)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.geoserver.wfs.response.WfsExceptionHandler.handle1_0(WfsExceptionHandler.java:99)
INFO | jvm 2 | 2009/03/10 08:40:39 | ... 53 more
INFO | jvm 2 | 2009/03/10 08:40:39 | Caused by: java.io.IOException: An established connection was aborted by the software in your host machine
INFO | jvm 2 | 2009/03/10 08:40:39 | at sun.nio.ch.SocketDispatcher.write0(Native Method)
INFO | jvm 2 | 2009/03/10 08:40:39 | at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:33)
INFO | jvm 2 | 2009/03/10 08:40:39 | at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
INFO | jvm 2 | 2009/03/10 08:40:39 | at sun.nio.ch.IOUtil.write(IOUtil.java:60)
INFO | jvm 2 | 2009/03/10 08:40:39 | at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:160)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:207)
INFO | jvm 2 | 2009/03/10 08:40:39 | at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:693)
INFO | jvm 2 | 2009/03/10 08:40:39 | ... 57 more

Björn Harrtell wrote:

...

In a stack trace, always look at the most nested cause of the issue
(the innermost "Caused by"):

INFO | jvm 2 | 2009/03/10 08:40:39 | Caused by:
java.io.IOException: An established connection was aborted by the
software in your host machine

Googling for this error turns up a number of hits:

http://www.google.it/search?q=java.io.IOException%3A+An+established+connection+was+aborted+by+the+software+in+your+host+machine&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:it-IT:official&client=firefox-a

The explanation given is that the client has abruptly closed the connection, and the server is merely reporting that.
Not much you can do in this case; I guess you cannot control your clients, no?
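That "innermost cause" walk is just a loop over Throwable.getCause(). A minimal sketch (a hypothetical helper, not GeoServer code):

```java
import java.io.IOException;

// Hypothetical helper: walk a Throwable's cause chain to the innermost cause,
// i.e. the last "Caused by" you would see in a printed stack trace.
public class RootCause {

    public static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        // getCause() returns null at the end of the chain; also guard against
        // a self-referential cause to avoid an infinite loop.
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }

    public static void main(String[] args) {
        // Mimic the nesting seen in the log:
        // RuntimeException -> (EofException stand-in) -> IOException
        Throwable chain = new RuntimeException("wfs request failed",
                new IllegalStateException("EofException stand-in",
                        new IOException("An established connection was aborted")));
        // prints "An established connection was aborted"
        System.out.println(rootCause(chain).getMessage());
    }
}
```

The same idea applies when reading any Java log by hand: skip past the wrapper exceptions and judge the failure by the deepest cause.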

If you find any better explanation let me know. Also, given that
most of the stack trace is in container code, you could
try out a different container, such as Tomcat, and see if it behaves
any better... though, as I said, from what I can see in
the stack trace, the most likely cause is the clients
giving up on the request.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

The stack trace I posted was from the container log. I have investigated
the GeoServer log too, and I get exceptions there at the same time:

2009-03-10 08:43:42,843 ERROR [geoserver.ows] -
org.geoserver.platform.ServiceException: error:Translator error

  at org.geoserver.wfs.xml.GML2OutputFormat.encode(GML2OutputFormat.java:254)
...
(the most nested cause follows:)
Caused by: org.geoserver.ows.ClientStreamAbortedException

  at org.apache.xml.serializer.ToStream.endElement(ToStream.java:2018)

So yes, it looks like the client aborts. But when the problem occurs,
all WFS requests from that point forward fail, and after some
time (not sure how long, but not very) any other request to GeoServer
(WMS or otherwise) fails as well.

Right before the error above, the GeoServer log shows the following error:

2009-03-10 08:38:59,812 ERROR [geotools.rendering] -
javax.imageio.IIOException: codecLib error
java.lang.RuntimeException: javax.imageio.IIOException: codecLib error

at com.sun.media.jai.imageioimpl.ImageReadOpImage.computeTile(ImageReadOpImage.java:706)

...
(the most nested cause follows:)
Caused by: java.lang.OutOfMemoryError: Java heap space

So I suspect that the heap is almost exhausted. Before upgrading from 1.6.x
we had memory leaks causing problems every day; we now seem to have
fewer of those problems, but perhaps the cause is still a memory
leak? We are considering automatic restarts as a workaround.

/Björn

On Tue, Mar 10, 2009 at 09:23, Andrea Aime <aaime@anonymised.com> wrote:
...

Björn Harrtell wrote:
...

. (here is the most nested)
Caused by: java.lang.OutOfMemoryError: Java heap space

So I suspect that the heap is almost gone. Before upgrading from 1.6.x
we had memory leak problems causing problems every day, now we seem to
have less of those problems but perhaps the cause is still memory
leaks? We are considering using automatic restarts as a solution.

Aaah, here we're onto something.
Restarts can be a workaround.
To find a solution I need to look at a heap dump, which the VM
can generate on its own if you add the following option to
its startup params:
-XX:+HeapDumpOnOutOfMemoryError
(more information here:
  http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp;
  note that you can also pass -XX:HeapDumpPath to force the location
  in which the xxxx.hprof file will be generated)

That will generate a huge file containing a full dump of the
heap at that moment. Zip the file and make it available on
some http/ftp server so that we can analyze its contents and
hopefully pinpoint where the leak is.
To get the report sooner, and in a smaller file, you could also
make the VM use less memory by passing -Xmx256m or
something like that.
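Putting the flags together, a startup line might look like the sketch below (the jar name and dump path are examples only; adjust them to your jetty-6.1.11 install, or add the same options to your service wrapper configuration):

```shell
# Sketch only: lower the heap so the OOM reproduces sooner, dump the heap
# when it happens, and force the dump into a known directory.
# start.jar and C:\heapdumps are hypothetical placeholders.
java -Xmx256m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=C:\heapdumps \
     -jar start.jar
```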

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.