[Geoserver-devel] WMS rendering eats all exceptions

Hi Andrea, all,

wrt WMS GetMap requests, I'm wondering if we should really eat each and every exception that occurs while rendering.
I know the intent is to render as much of the feature dataset as possible, even if some features fail for some reason.
But my case is a bit different. I've been setting up a JMeter test plan that reproduces the requests of a user running a tiling client, to stress test ArcSDE raster support.
One of the failure points for this kind of setup is when a GetMap request cannot be fulfilled because the connection pool is exhausted. This situation, instead of making the request fail, produces a blank image.
I think that is not desired, because it is not as if a single feature can't be drawn due to an invalid geometry or such. It is more like an irrecoverable error. But no matter whether the causing exception is an IOException or a RuntimeException, it gets eaten.
It is problematic for my JMeter test because I wanted to set an assertion based on the response headers (i.e., containing "Content-Type: image/jpeg" for success and "Content-Type: application/vnd.ogc.se_xml" for failure), but the request just never fails.

So, I think we could better define when to keep going and when to fail (speaking of DefaultRasterMapProducer.produceMap()) by using a RenderListener. But the question is which kinds of exceptions to treat as unrecoverable and which not. I would say both IOException and RuntimeException should be assumed unrecoverable and make the request fail, as the former should indicate some sort of broken pipe (IO error, etc.) and the latter an unexpected situation that implies the program can't know how to proceed after it.
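Something along these lines is what I have in mind. Just a sketch mirroring the shape of the renderer's error callback; the FailureAwareListener class and its method names are invented for illustration, not existing API:

```java
import java.io.IOException;

// Sketch only: mirrors the shape of the renderer's error callback,
// but this class and its names are invented for illustration.
public class FailureAwareListener {
    private Exception fatal; // first unrecoverable error seen, if any

    // Called by the renderer whenever an exception occurs while rendering.
    public void errorOccurred(Exception e) {
        // Treat IO problems and unexpected runtime failures as fatal;
        // anything else is a per-feature glitch we can tolerate.
        if (fatal == null && (e instanceof IOException || e instanceof RuntimeException)) {
            fatal = e;
        }
    }

    public boolean failed() {
        return fatal != null;
    }

    public Exception getFatalError() {
        return fatal;
    }
}
```

produceMap() would then check failed() after rendering and turn the stored exception into a service exception instead of returning the half-painted image.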

What do you think? Does it make sense? Any other options?

Cheers,
Gabriel
--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Tough problem. I think behavior depending on the type of exception would be hard to pull off, simply because the types of exceptions thrown by datastores, etc. are sort of all over the place. There have never really been clear guidelines as to which exceptions to throw in which cases.

What about simply a configuration option or a format option that controls this behavior? Something like "format_options=lenient:false" would cause the renderer to halt when an exception occurs? Not sure if that works or not, but it would be my first idea.
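For illustration, parsing such a flag could look like the sketch below; the FormatOptions class, the option name and its default are all hypothetical, defaulting to today's lenient behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parsing a hypothetical "lenient" flag out of a WMS
// format_options string such as "lenient:false;dpi:90".
// Class, option name and default are illustrative, not existing API.
public class FormatOptions {
    public static Map<String, String> parse(String formatOptions) {
        Map<String, String> opts = new HashMap<String, String>();
        if (formatOptions == null || formatOptions.isEmpty()) {
            return opts;
        }
        for (String pair : formatOptions.split(";")) {
            String[] kv = pair.split(":", 2);
            // A bare key (no value) is treated as a boolean "true"
            opts.put(kv[0].trim().toLowerCase(), kv.length > 1 ? kv[1].trim() : "true");
        }
        return opts;
    }

    // Default to lenient (today's behavior) unless explicitly disabled.
    public static boolean isLenient(Map<String, String> opts) {
        return !"false".equalsIgnoreCase(opts.get("lenient"));
    }
}
```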

My 2c,

-Justin


--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Justin Deoliveira wrote:

Tough problem. I think behavior depending on the type of exception would be hard to pull off, simply because the types of exceptions thrown by datastores, etc. are sort of all over the place. There have never really been clear guidelines as to which exceptions to throw in which cases.

What about simply a configuration option or a format option that controls this behavior? Something like "format_options=lenient:false" would cause the renderer to halt when an exception occurs? Not sure if that works or not, but it would be my first idea.

The code is sort of already there, but it's not a per request setting.
If you look into DefaultRasterMapProducer it attaches a MaxErrorEnforcer
to the renderer, which in the end is a listener.
If the max error count is set to 1, the rendering will stop after the
first error and the error will be returned in the service exception.

The enforcer has been introduced as part of the "wms limits" work
and can be configured as documented here:
http://docs.geoserver.org/1.7.x/user/services/wms/configuration.html#request-limits

I guess that, if needed, we could add format_options configs
that allow further restricting the server default limits (but not
enlarging them; that would go against the very purpose of setting up
the limits, that is, allowing the admin to assert control over the
stability of his server).
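The enforcer pattern boils down to something like this (a hand-written stand-in, not the actual MaxErrorEnforcer code):

```java
// Sketch of the "max errors" pattern described above: a listener that
// counts rendering errors and signals that rendering should stop once a
// server-wide limit is hit. Stand-in for illustration, not MaxErrorEnforcer.
public class MaxErrorCounter {
    private final int maxErrors;
    private int errors;
    private Exception lastError;

    public MaxErrorCounter(int maxErrors) {
        this.maxErrors = maxErrors;
    }

    // Returns true when rendering should be aborted.
    // With maxErrors == 1 the very first error stops the rendering,
    // and lastError can be reported back in the service exception.
    public boolean errorOccurred(Exception e) {
        errors++;
        lastError = e;
        return errors >= maxErrors;
    }

    public boolean exceedsMax() {
        return errors >= maxErrors;
    }

    public Exception getLastError() {
        return lastError;
    }
}
```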

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Hi Andrea and Justin,

it's good that there's already a way to handle this. Thanks for pointing it out.

Still wondering, though, whether each and every exception should be ignored.

The maxRenderingErrors setting docs say "...Usually GetMap skips over faulty features, reprojection errors and the like in an attempt to serve the results anyways. This makes for a best effort rendering...", so it is intended to tolerate some "rendering" errors (faulty features, reprojection errors), which is good. But what about actual IO errors (i.e., the source data can't be read at all)?

In the end I keep thinking that a request that cannot be fulfilled by the underlying data source shouldn't give a false positive, GetMap-wise.

By looking at it I realize how difficult it would be, given the state of affairs with the API contracts and the actual implementations, and this may open some sort of "exception handling best practices" vs "pragmatism and adaptation to what we have, given we don't have the resources to improve it" debate.

So far so good. It's not my intention to get involved in that kind of high-heat debate, but I would like to hear your opinions anyway.
As a first approach, and sticking to what we do have right now instead of ranting about the usage of checked vs unchecked exceptions, I would say:

During a rendering phase, these are the most probable exceptions that could be caught:

1- IOException: from FeatureReader.hasNext(), FeatureReader.next(), FeatureSource.getFeatures/getBounds/getCount, GridCoverageReader.read(). All the declared places in the API indicate it should be thrown if there are errors fulfilling the request. Quite vague, and hence we all ended up throwing IOException everywhere, whether the user is expected to take some recovery/cleanup action after it or not.

2- IllegalAttributeException: unchecked, a data error.
3- NoSuchElementException: calling next() when hasNext() == false. A programming error, hence unchecked. Cool.
4- IllegalArgumentException: contract precondition violations. Hence unchecked, not declared. Cool.

From the above, it seems to me like an IOException should stop the renderer and GetMap should fail. The nasty thing is that TransformException, FactoryException, etc. are usually wrapped in an IOException instead of in a RuntimeException, which IMHO would be the way to go. And that's the reason why GetMap eats IOExceptions too. Now Andrea, stop me if I'm wrong, this is getting long already :)

So in the end, by eating them all we're not taking control of which errors it makes sense for the renderer to keep going after, but instead are potentially hiding programming errors and returning "false positive" responses.

My proposal is then that, given an exception event thrown by the renderer, the GetMap operation walks down the exception cause chain and keeps going only if the cause is one that we actually want to ignore (IllegalAttributeException/TransformException/any other?).
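In code, the walk could look like this (a sketch; RenderErrorPolicy is an invented name, and since the GeoTools classes aren't on this snippet's classpath the test uses NumberFormatException as a stand-in for the real ignorable exception types):

```java
import java.util.Set;

// Sketch of the proposed policy: walk the exception cause chain and keep
// rendering only if some cause is in a whitelist of tolerable failures.
// In GeoServer the whitelist would hold IllegalAttributeException,
// TransformException, etc.; the class name is invented for illustration.
public class RenderErrorPolicy {
    private final Set<Class<? extends Throwable>> ignorable;

    public RenderErrorPolicy(Set<Class<? extends Throwable>> ignorable) {
        this.ignorable = ignorable;
    }

    // Keep going only if the exception, or any cause down its chain,
    // is one of the failure kinds we explicitly tolerate.
    public boolean canIgnore(Throwable t) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            for (Class<? extends Throwable> kind : ignorable) {
                if (kind.isInstance(cause)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```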

Looking forward to your insights.

Cheers,
Gabriel

PS: the javadoc for FeatureReader.next() and its declaration are inconsistent; we should fix this (note it is documented to throw IllegalAttributeException whilst it actually declares IllegalArgumentException):
/**
  * Reads the next Feature in the FeatureReader.
  * @return The next feature in the reader.
  * @throws IOException If an error occurs reading the Feature.
  * @throws IllegalAttributeException If the attributes read do not
  * comply with the FeatureType.
  * @throws NoSuchElementException If there are no more Features in
  * the Reader.
  */
F next() throws IOException, IllegalArgumentException, NoSuchElementException;


--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.


My insight is that the idea is good, but it won't work unless you
double check what all datastores are actually doing when building
IOExceptions.

Secondly, this is the renderer, it affects also uDig and the like,
so the discussion is to be carried on in geotools-devel giving
Jesse and Jody an occasion to clarify why the renderer was
done this way and provide insight on the exception classifications.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Andrea Aime wrote:


My insight is that the idea is good, but it won't work unless you
double check what all datastores are actually doing when building
IOExceptions.

I can do that. Actually the above classification is driven by that kind of inspection, just not for _all_ the implementations.

Secondly, this is the renderer, it affects also uDig and the like,
so the discussion is to be carried on in geotools-devel giving
Jesse and Jody an occasion to clarify why the renderer was
done this way and provide insight on the exception classifications.

Again, we can do that, but since the decision on what to do when an exception occurs while rendering can be deferred to a RenderListener, and since GeoServer's GetMap is what I'm worried about, what I'm talking about here is what to do wrt GetMap. Other applications may opt differently.
I mean, the renderer seems to clearly be designed to report the exceptions back to the listeners and render as much as it can. Obviously, upon an IOException the renderer is not going to keep on calling FeatureReader.next() etc.; it will stop and render as much as it could before the exception. As what to do after that is up to the application, my proposal is for GeoServer's GetMap to fail if the cause of an exception is not one that we know we can ignore. uDig, for instance, may mark the layer with a "render failed" icon or set a message on the status bar (which I guess it does).

Can I go through as many datastore/coverage reader implementations as possible and see what they do, so we can define exactly which exceptions are of interest?

Cheers,
Gabriel


--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Gabriel Roldan wrote:

My insight is that the idea is good, but it won't work unless you
double check what all datastores are actually doing when building
IOExceptions.

I can do that. Actually the above classification is driven by that kind of inspection, just not for _all_ the implementations.

Ok, it is an educated guess and, as I said, it's reasonable.
I'm OK with testing it out on trunk, but a bit worried if you're
thinking of using it on 1.7.x as well (at the very least it
should be optional and turned off by default).

Secondly, this is the renderer, it affects also uDig and the like,
so the discussion is to be carried on in geotools-devel giving
Jesse and Jody an occasion to clarify why the renderer was
done this way and provide insight on the exception classifications.

Again, we can do that, but since the decision on what to do when an exception occurs while rendering can be deferred to a RenderListener, and since GeoServer's GetMap is what I'm worried about, what I'm talking about here is what to do wrt GetMap. Other applications may opt differently.
I mean, the renderer seems to clearly be designed to report the exceptions back to the listeners and render as much as it can. Obviously, upon an IOException the renderer is not going to keep on calling FeatureReader.next() etc.; it will stop and render as much as it could before the exception. As what to do after that is up to the application, my proposal is for GeoServer's GetMap to fail if the cause of an exception is not one that we know we can ignore. uDig, for instance, may mark the layer with a "render failed" icon or set a message on the status bar (which I guess it does).

One thing that I noticed in the past is that attaching a listener
to the renderer is expensive. Lately I've made some modifications
that should have sped up the process, but I did not benchmark it so
I don't know what the overhead of having a listener around is.
The main issue is that an event is triggered for each feature
rendered, whether you like it or not, and that's the heavy part.

If you look into the MaxErrorEnforcer you'll see that it attaches a
listener only if the error count check is actually activated, to establish
a "pay on demand" approach.
Maybe I've been overly cautious, but I'd suggest you do some checks
with a fast data source (shapefiles) before adding a resident, always-on
listener to the mix.

As an alternative, I guess we could pass the renderer an "exception
classifier" that tells fatal and non-fatal exceptions apart. This would
be used only when an exception actually occurs, so it would not add any
overhead in the clean case.
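A rough sketch of that classifier idea (the names are invented, not an existing GeoTools interface, and the default below is just one conservative reading of this thread):

```java
// Sketch of the "exception classifier" alternative: the renderer consults
// it only when an error actually occurs, so the clean path pays nothing.
// Interface and class names are invented for illustration.
public class ExceptionClassification {
    public interface Classifier {
        boolean isFatal(Throwable t);
    }

    // A conservative default matching the discussion so far: Errors (such
    // as OutOfMemoryError) and IOExceptions are fatal; anything else is
    // treated as a tolerable per-feature failure.
    public static final Classifier DEFAULT = new Classifier() {
        public boolean isFatal(Throwable t) {
            return t instanceof Error || t instanceof java.io.IOException;
        }
    };
}
```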

Can I go through as many datastore/coverage reader implementations as possible and see what they do, so we can define exactly which exceptions are of interest?

Your classification is pretty much spot on already:

1- IOException: quite vague; may or may not require some recovery/cleanup action.
2- IllegalAttributeException: unchecked, a data error.
3- NoSuchElementException: calling next() when hasNext() == false. A programming error, hence unchecked. Cool.
4- IllegalArgumentException: contract precondition violations. Hence unchecked, not declared. Cool.

To this I would add Error instances, such as the not-so-uncommon
OutOfMemoryError; those should block rendering right away.
Reprojection is done entirely in the renderer, so those exceptions will
always come from within the renderer itself.

IOException remains the Pandora's box: anything can be inside of it
and there are no rules. To err on the safe side I would just consider
the exception non-recoverable and move on; it's simpler and more
future proof: in the end, an analysis of today's situation would do you
little good if tomorrow someone hooks up a store that plays with
a different mindset. The rules say that any kind of IOException
can be thrown; OK, so be it, we just stop rendering.
I've never been a big fan of "best effort" operations myself, with
reprojection errors being the only notable exception that I'm happy
to overlook.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Andrea Aime wrote:

IOException remains the Pandora's box: anything can be inside of it
and there are no rules. To err on the safe side I would just consider
the exception non-recoverable and move on,

That is, stop rendering and report a service exception.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Hi Andrea,

thanks for thinking about it. First off, I completely agree.

So to sum up, the ones we seem to be interested in _ignoring_ are the following:
- IllegalAttributeException
- TransformException: can't transform a geometry
- FactoryException: same thing, but for the geom
- java.awt.geom.NoninvertibleTransformException

Anything else, including Error, is non-recoverable and should stop GetMap.

So, I shared your performance concern and ran the profiler with the attached test app.
The listener added to the renderer checks whether the cause is one of those exceptions.
One million exceptions are thrown during the rendering loop.

The listener code adds 14ms out of a total of 30857ms that renderer.paint takes for that million render attempts.

On the other side, avoiding the call to CopyOnWriteArrayList.iterator() saves 754ms out of the 769ms that fireErrorEvent takes.

If we don't use the iterator, the total time consumed by fireErrorEvent is reduced to 15ms (including the 14ms used by the listener).
That is:
  private void fireErrorEvent(Exception e) {
      if (renderListeners.size() > 0) {
          RenderListener listener;
          for (int i = 0; i < renderListeners.size(); i++) {
              listener = renderListeners.get(i);
              listener.errorOccurred(e);
          }
      }
  }

instead of
  private void fireErrorEvent(Exception e) {
      for (RenderListener listener : renderListeners) {
          listener.errorOccurred(e);
      }
  }

For fireFeatureRenderedEvent the same applies, only the difference is more noticeable because throwing 1M exceptions takes time, hence the relative benefit is different. If you actually render the 1M features instead of having them all fail, there is a performance gain of about 10% even with our error listener.

So I would say we can safely add that listener, provided we speed up fireFeatureRenderedEvent/fireErrorEvent by avoiding the use of listenerList.iterator(): that alone makes the method 750 times faster, and even with our listener added it is still 50 times faster.
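For reference, the two firing styles side by side (a self-contained sketch; FireEventStyles and its minimal Listener interface are stand-ins for the renderer and its RenderListener):

```java
import java.util.List;

// Sketch comparing the two event-firing styles on a listener list.
public class FireEventStyles {
    public interface Listener {
        void errorOccurred(Exception e);
    }

    // Iterator-based firing: on a CopyOnWriteArrayList every call
    // allocates a fresh snapshot iterator, which is where the profiler
    // showed the time going.
    public static void fireWithIterator(List<Listener> listeners, Exception e) {
        for (Listener listener : listeners) {
            listener.errorOccurred(e);
        }
    }

    // Indexed firing: no iterator allocation; get(i) is O(1) on
    // CopyOnWriteArrayList since it is array backed.
    public static void fireIndexed(List<Listener> listeners, Exception e) {
        for (int i = 0; i < listeners.size(); i++) {
            listeners.get(i).errorOccurred(e);
        }
    }
}
```

One behavioral difference worth noting: the iterator over a CopyOnWriteArrayList works on a snapshot, while the indexed loop sees listeners added or removed mid-fire; for our use case, where listeners are registered once before paint, that shouldn't matter.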

Cheers,
Gabriel

PS: to run the test app with the profiler I had to turn off logging on the renderer, so it does not eat all the cycles and our methods show up.


--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

(attachments)

RenderEventPerfTest.java (7.35 KB)

Gabriel Roldan wrote:

Hi Andrea,

thanks for thinking about it. First off, I completely agree.

So to sum up, the ones we seem to be interested in _ignoring_ are the following ones:
- IllegalAttributeException
- TransformException: can't transform a geometry
- FactoryException: same thing, but for the geom
- java.awt.geom.NoninvertibleTransformException

Anything else, including Error, is non-recoverable and should stop GetMap.

So, I shared your performance concern and ran the profiler with the attached test app.
The listener added to the renderer checks whether the cause is one of those exceptions.
One million exceptions are thrown during the rendering loop.

The listener code adds 14ms out of a total of 30857ms that renderer.paint takes for that million render attempts.

Ah... this is not the case I'm worried about. It's the case where no
exception is thrown: how much overhead is there in that case?
The one where everything goes smoothly, which is also the one that
is used for benchmarks such as the yearly WMS shootout.

When exceptions are thrown, performance goes belly up anyway: building
the stack traces is an expensive operation.

On the other side, avoiding the call to CopyOnWriteArrayList.iterator() saves 754ms out of the 769ms that fireErrorEvent takes.

If we don't use the iterator, the total time consumed by fireErrorEvent is reduced to 15ms (including the 14ms used by the listener).
That is:
    private void fireErrorEvent(Exception e) {
        if(renderListeners.size() > 0){
            RenderListener listener;
            for(int i =0; i < renderListeners.size();i++){
                listener = renderListeners.get(i);
                listener.errorOccurred(e);
            }
        }
    }

instead of
    private void fireErrorEvent(Exception e) {
        for(RenderListener listener : renderListeners) {
            listener.errorOccurred(e);
        }
    }

Aah, interesting, I did not know that. But it makes sense: it's an
array list, so the fastest access is the direct one.

For fireFeatureRenderedEvent the same applies, only the difference is more noticeable because throwing 1M exceptions takes time, hence the relative benefit is different. If you actually render the 1M features instead of having them all fail, there is a performance gain of about 10% even with our error listener.

So I would say we can safely add such a listener, provided we speed up fireFeatureRenderedEvent/fireErrorEvent by avoiding listenerList.iterator(): that alone makes the method 750 times faster, and even with our listener added it is still 50 times faster.

Mind, the case we need to compare against is the rendering without
exceptions and without listeners attached.
To sum up, the test case must be rendering 1M features without any exceptions. The two configurations to be tested are:
- the renderer as it is today, _without_ any listener attached
- the renderer as you modified it, with a single listener attached
   (please do modify the methods that fire the feature rendered event)
If the two cases are comparable in performance then there are no
problems.
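A minimal harness for that comparison might look like this (illustrative only; Listener is a stand-in for the GeoTools RenderListener, and a real benchmark should add warmup and multiple runs before trusting the numbers):

```java
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative micro-benchmark comparing the two fire-event loop styles
// discussed above, with a single listener attached and 1M events fired.
public class FireEventBench {

    interface Listener {
        void featureRendered(Object feature);
    }

    static final CopyOnWriteArrayList<Listener> listeners = new CopyOnWriteArrayList<>();

    // Current style: for-each allocates a snapshot iterator on every call.
    static void fireWithIterator(Object feature) {
        for (Listener l : listeners) {
            l.featureRendered(feature);
        }
    }

    // Proposed style: indexed access, no per-call allocation.
    static void fireIndexed(Object feature) {
        for (int i = 0; i < listeners.size(); i++) {
            listeners.get(i).featureRendered(feature);
        }
    }

    public static void main(String[] args) {
        final long[] count = {0};
        listeners.add(feature -> count[0]++);
        int reps = 1_000_000; // one event per "rendered feature"

        long t0 = System.nanoTime();
        for (int i = 0; i < reps; i++) fireWithIterator(null);
        long iteratorNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < reps; i++) fireIndexed(null);
        long indexedNanos = System.nanoTime() - t0;

        System.out.printf("iterator: %dms, indexed: %dms, events fired: %d%n",
                iteratorNanos / 1_000_000, indexedNanos / 1_000_000, count[0]);
    }
}
```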

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Hi Andrea,

it seems I didn't explain it well, sorry.
For the all-goes-smoothly case, modifying the fire method not to use the iterator saves about 10% of the total rendering time; the fire method then takes less than 1ms and hence doesn't even show up in the profiler.
So we can add the listener and still save about 10% of rendering time(*).

Cheers,
Gabriel

(*) Note it is ~10% of the time the renderer itself consumes. When there's IO involved the benefit will be smaller, because the total time will be much higher. But the conclusion is that adding the listener this way carries no performance penalty, just a small benefit.


Ah, ok (good!). Go ahead then.
Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.