The project I work on has an ImageMosaic layer consisting of PNG images in the EPSG:4326 SRS, using the flat merge strategy. We are running GeoServer 2.25.3. We have been seeing some odd rendering behavior, so we are trying to understand the process GeoServer/GeoTools uses to transform our input images into what is displayed to the user.
For an easy-to-reproduce example of what we are seeing, assume the data store has a single 360x180 pixel image that covers the whole world. The image is a checkerboard pattern in which each pixel is either black or white.
If we perform a GetMap request with a bounding box of the whole world and with the width and height equal to the original image (360x180), we get back an image whose checkerboard cells are much larger than the original single pixels (see the attached screenshot).
If we change the width to 720 and the height to 360 on the request, we get back a finer grid, but the cells still do not match the single pixels of the original image.
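For reference, the requests are roughly of this form (host, workspace and layer name below are placeholders, not our real ones):

```
http://localhost:8080/geoserver/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap
  &LAYERS=ws:checkerboard&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90
  &WIDTH=360&HEIGHT=180&FORMAT=image/png
```

and then the same request again with WIDTH=720&HEIGHT=360.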
Can someone explain the process used to convert the input image into the output returned to the user? Is there a way we can make our output match the input more closely?
For 1:1 access to the original raster imagery you would use the WCS GetCoverage operation.
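For example, a GetCoverage request with no subsetting returns the coverage at its native resolution (host and coverage id below are placeholders):

```
http://localhost:8080/geoserver/wcs?SERVICE=WCS&VERSION=2.0.1&REQUEST=GetCoverage
  &COVERAGEID=ws__checkerboard&FORMAT=image/tiff
```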
WMS is intended to do drawing, so you can look at the style, interpolation settings, and reprojection, all of which are involved in drawing a map based on your imagery.
I could not tell from your screenshot exactly what had changed. If it is just the boundary between the squares, have a look at the interpolation settings (nearest neighbour, bicubic, etc.).
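If interpolation turns out to be the culprit, it can be chosen per request via the interpolations vendor parameter (one value per requested layer), for example (layer name is a placeholder):

```
...&LAYERS=ws:checkerboard&interpolations=nearest%20neighbor&...
```

The method can also be set globally on the WMS settings page or per layer on its publishing page.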
Sorry for the confusion. Since I'm new I was only able to post a single picture, so I should have added more description to my question when I removed the other images.
In the raw image pulled by ImageMosaic, each cell of the checkerboard pattern is 1x1 pixel, for a total of 64,800 cells. When we request a 360x180 image from GetMap, we get back the image posted, which has 72x60 cells, 15 in total. If we then request a 720x360 image, we get back an image with 90x72 cells, 40 in total. This is all for the same geographic area. It looks like some combination of scaling operations is occurring on the data during rendering. However, these operations result in an image that does not reflect the underlying data accurately enough for our purposes, so we are trying to determine how to limit this effect.
Our style does a simple colorization based on value.
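It is roughly of this shape; a minimal sketch rather than our exact SLD, with placeholder values and colors:

```xml
<RasterSymbolizer>
  <ColorMap type="values">
    <ColorMapEntry quantity="0" color="#000000"/>
    <ColorMapEntry quantity="1" color="#FFFFFF"/>
  </ColorMap>
</RasterSymbolizer>
```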
Our images are stored in the db as EPSG:4326 and the request is also for EPSG:4326.
Thanks for the suggestion; we have tried enabling and disabling anti-aliasing and see no impact on the returned result.
We have looked into WCS, but using it would prevent many of our clients from accessing the data, since they can only add WMS layers to their tools.
Is the algorithm that takes the raw input and transforms it into the output we see documented somewhere, or do we need to go through the source code to determine how it works? If we have to accept the loss of fidelity in the output, we need to understand what is going on so we can explain it to our clients.
The algorithm is a rendering engine, in this case StreamingRenderer. It is responsible for drawing content onto a map, which is then encoded in the requested output image format (for example PNG).
However, in the specific case of a single image with no projection change, there is a fast path that reads pixels directly from your file and encodes them in the requested output format. This direct raster path uses a different rendering engine, DirectRasterRenderer.
Try starting up with -Dorg.geoserver.render.raster.direct.disable=true and see if your results change.
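On the binary distribution that would be something like the following, assuming the standard startup script picks up JAVA_OPTS:

```
export JAVA_OPTS="$JAVA_OPTS -Dorg.geoserver.render.raster.direct.disable=true"
bin/startup.sh
```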
It's not really a transform so much as it is a painting process. WMS takes the raw data and styles it into a picture of the data. It is fundamentally not designed for what you are trying to do, whereas WCS is designed for this. Your clients need to upgrade their client programs (WCS is not a new standard, so it should be supported if people need the raw data).
There is no such documentation. The overall process changes based on the data source, so you have to read the relevant source code. The high-level portion of the process asks the raster reader to produce something similar to what was requested, and that is a best-effort request. The second portion receives the output of the reader and reprojects, clips and rescales it to match the GetMap request. The system is overall designed to generate output at any scale, projection and potentially irregular scale factor; generating an image 1:1 with the input has never been a requirement.

Can it be met? With a fix to the source reader, funding to make the code changes, plus tests to make them last, I believe it is doable.
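To make that two-stage flow concrete, here is a rough sketch against the GeoTools coverage API. This is illustrative only, not GeoServer's actual code path; the mosaic location is hypothetical, and package names vary between GeoTools versions (org.opengis.* before GeoTools 30, org.geotools.api.* after):

```java
import java.io.File;
import org.geotools.api.parameter.GeneralParameterValue;
import org.geotools.api.parameter.ParameterValue;
import org.geotools.coverage.grid.GridCoverage2D;
import org.geotools.coverage.grid.GridEnvelope2D;
import org.geotools.coverage.grid.GridGeometry2D;
import org.geotools.coverage.grid.io.AbstractGridFormat;
import org.geotools.coverage.grid.io.GridCoverage2DReader;
import org.geotools.gce.imagemosaic.ImageMosaicReader;
import org.geotools.geometry.jts.ReferencedEnvelope;
import org.geotools.referencing.CRS;

public class MosaicReadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical mosaic location
        GridCoverage2DReader reader =
                new ImageMosaicReader(new File("/data/checkerboard-mosaic"));

        // Stage 1: ask the reader for a grid matching the GetMap request
        // (whole world at 360x180). This is a hint, not a contract.
        ReferencedEnvelope bbox =
                new ReferencedEnvelope(-180, 180, -90, 90, CRS.decode("EPSG:4326", true));
        ParameterValue<GridGeometry2D> gg =
                AbstractGridFormat.READ_GRIDGEOMETRY2D.createValue();
        gg.setValue(new GridGeometry2D(new GridEnvelope2D(0, 0, 360, 180), bbox));
        GridCoverage2D coverage = reader.read(new GeneralParameterValue[] { gg });

        // The reader answers "best effort": the returned grid may differ from
        // the request. Stage 2 (the renderer, not shown here) then reprojects,
        // crops and resamples this coverage to match the GetMap output grid.
        System.out.println("Requested 360x180, got: "
                + coverage.getGridGeometry().getGridRange2D());
    }
}
```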