I am trying to run the Mapbuilder WFS-T demo on my local GeoServer 1.6.3.
The demo generally works when I insert new features, but when I try to delete a feature, GeoServer throws an exception:
Service Exception:
java.io.IOException: Unable to delete original file: file:/PATH_TO_GEOSERVER/data/data/taz_shapes/tasmania_water_bodies.shx Unable to delete original file: file:/PATH_TO_GEOSERVER/data/data/taz_shapes/tasmania_water_bodies.shx
After this exception, I can't do anything with this layer. For example, when I try to add a new feature again, the following exception is thrown:
Service Exception:
java.io.IOException: Dbf has extra record Dbf has extra record
When I try to open the shapefile I was working on with a program like ArcMap, it tells me that the shapefile is corrupted. So it seems the transaction makes the file unusable.
The interesting thing is that the problem doesn't occur when I use GeoServer 1.5.3.
Does anybody know about this problem, and how to handle it?
My guess would be permission issues. The way shapefile editing works is that the entire file is copied when a transaction occurs, and the original is deleted. So if one of the files were read-only or something, corruption could indeed occur.
I am surprised, however, that the datastore allows the transaction to occur in the first place.
Andrea: how will this case be handled?
-Justin
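One quick way to rule the permission theory in or out is to check, before attempting a WFS-T edit, that every sidecar file of the shapefile is writable. A minimal sketch in plain Java — the extension list and path handling are illustrative assumptions, not something GeoServer does for you:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

/**
 * Sanity-check that the sidecar files of a shapefile are writable before
 * editing it. A read-only .shx or .dbf can leave the shapefile set
 * half-rewritten, which matches the corruption described above.
 * (Illustrative sketch; not part of GeoServer.)
 */
public class ShapefileWriteCheck {

    // The usual sidecar extensions a shapefile datastore rewrites on commit.
    private static final String[] SIDECARS = { ".shp", ".shx", ".dbf", ".prj" };

    /** Returns the sidecar paths that exist but are not writable. */
    public static List<String> unwritableSidecars(String basePathNoExt) {
        List<String> bad = new ArrayList<>();
        for (String ext : SIDECARS) {
            File f = new File(basePathNoExt + ext);
            if (f.exists() && !f.canWrite()) {
                bad.add(f.getPath());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // e.g. java ShapefileWriteCheck /data/taz_shapes/tasmania_water_bodies
        for (String base : args) {
            List<String> bad = unwritableSidecars(base);
            System.out.println(bad.isEmpty()
                    ? base + ": all sidecars writable"
                    : base + ": NOT writable: " + bad);
        }
    }
}
```

Run it against the base path of the layer (without extension); any path it prints as not writable is a candidate cause for the failed delete.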
Hmmm... I'm not sure about this. I know there was a pretty serious locking issue in the shapefile datastore that made it impossible to edit shapefiles on Windows, but I thought it was solved in 1.6.x (and should have been there in 1.5.x instead).
Jody and Jesse (cc'ed) made the locking patch, but I'm not sure if it was applied only on GeoTools trunk or also on GeoTools 2.4.x (the series used by GeoServer 1.6.x).
Btw, can you provide us with the full stack trace for this error message? You should find it in the $GEOSERVER_DATA_DIR/logs/geoserver.log file.
It has only been applied to trunk; due to differences in feature model it was too expensive to apply this patch onto a version of GeoTools we were not using. You may also consider this "patch" to be somewhere between a code review and a complete rewrite. Jesse took years of experience hacking ShapefileDataStore and sat down and cleaned up the parts that made it difficult to maintain, correcting lots of little mistakes in the process.
Please treat the shapefile datastore implementation on trunk as a new implementation with respect to testing, as for 2.4.x we should disable the ability to edit shapefiles from GeoServer.
Jody
Ok, got it, thanks for the update.
Matthias, long story short, I suggest you use PostGIS for any WFS-T activity. It's the datastore we use to pass the OGC CITE tests that certify WFS-T compliance, so it should work fine (provided you remember to give your tables a primary key, that is).
Cheers
Andrea
PS: Jody, as for disabling shapefile editing... do you have any brilliant idea on how to do so? One way would be to wrap the shapefile datastore into a ReadOnlyDataStore object (which would have to be written, of course), but the result of that wrapping would interact with the shapefile renderer, disabling it (the shapefile renderer uses an instanceof check to decide whether to use the fast code path or to delegate to the streaming renderer).
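For what it's worth, the wrapping idea can be sketched with the classic delegation pattern. The interfaces below are simplified stand-ins invented for illustration (the real GeoTools DataStore API is much larger); the point is that reads pass through, writes are refused, and the wrapper indeed no longer satisfies an instanceof check against the concrete class — which is exactly the renderer interaction mentioned above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of the ReadOnlyDataStore idea using hypothetical stand-in
 * interfaces, NOT the real GeoTools API.
 */
public class ReadOnlyWrapperSketch {

    /** Minimal stand-in for a feature store (invented for illustration). */
    interface SimpleStore {
        List<String> readFeatures();
        void writeFeature(String feature);
    }

    /** Wraps any SimpleStore, passing reads through and refusing writes. */
    static class ReadOnlyStore implements SimpleStore {
        private final SimpleStore delegate;

        ReadOnlyStore(SimpleStore delegate) { this.delegate = delegate; }

        @Override public List<String> readFeatures() {
            return delegate.readFeatures();
        }

        @Override public void writeFeature(String feature) {
            throw new UnsupportedOperationException("datastore is read-only");
        }
    }

    /** A toy backing store standing in for a shapefile datastore. */
    static class ToyStore implements SimpleStore {
        private final List<String> features =
                new ArrayList<>(Arrays.asList("lake-1", "lake-2"));
        @Override public List<String> readFeatures() { return features; }
        @Override public void writeFeature(String f) { features.add(f); }
    }

    public static void main(String[] args) {
        SimpleStore store = new ReadOnlyStore(new ToyStore());
        System.out.println(store.readFeatures());      // reads still work
        // An instanceof check against the concrete class no longer matches,
        // so a fast code path keyed on it would silently be skipped:
        System.out.println(store instanceof ToyStore); // false
        try {
            store.writeFeature("lake-3");              // writes are rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("write rejected: " + e.getMessage());
        }
    }
}
```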
-----Original Message-----
From: Andrea Aime [mailto:aaime@anonymised.com]
Sent: Wednesday, 30 April 2008 08:36
To: Jody Garnett
Cc: Justin Deoliveira; Matthias Drews; geoserver-users@anonymised.comt; Jesse Eichar
Subject: Re: [Geoserver-users] (no subject)
I'm trying to query the GeoServer WFS service for all point records
within some distance of a given point. My CQL_FILTER clause ends up
looking something like:
&CQL_FILTER=name LIKE 'foo' and DWithin(the_geom, POINT(-84.375
26.4312280645), 10, kilometers)
So I presume that the units in this case (EPSG:4326) are decimal degrees. But I'm still getting very strange query results (huge numbers of results in some cases, and none in others), so I'm wondering if there's a preferred way to write this query in GeoServer...
PS: Jody, as for disabling shapefiles edit... do you have any brilliant
idea on how to do so? One way would be to wrap the shapefile
datastore into a ReadOnlyDataStore object (that has to be written,
of course), but the result of that wrapping would interact with
the shapefilerenderer, disabling it (shapefilerenderer is using
an instanceof check to decide whether to use the fast code path
or to delegate to the streaming renderer)
Quickest way is to make a "view" out of the shapefile; there was a configuration option where you could define a filter for the FeatureTypeInfo. Doing so (even with Filter.NONE) will result in a wrapper being created (a read-only wrapper).
So how to fake it? Add a couple of lines: during creation/loading, if the connection parameters are for a shapefile (check using ShapefileDataStoreFactory), add in Filter.NONE for the filter field.
The short of it is that distance only really works in projected coordinate systems where things are flat.
-Justin
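To see why the results look so strange, it helps to put numbers on the degrees-versus-kilometers mismatch. If the `10` in the filter above ends up interpreted in the layer's native units (degrees, for EPSG:4326) rather than in kilometers, the search radius is off by roughly two orders of magnitude. A plain-Java haversine sketch — the coordinates are the point from the filter above; everything else is an illustrative assumption:

```java
/**
 * Great-circle (haversine) distance, used here to show how far apart
 * "10 degrees" and "10 kilometers" are at a given latitude.
 * Illustrative sketch only, not GeoServer code.
 */
public class DegreesVsKilometers {

    static final double EARTH_RADIUS_KM = 6371.0; // mean Earth radius

    /** Great-circle distance between two lon/lat points, in kilometers. */
    public static double haversineKm(double lon1, double lat1,
                                     double lon2, double lat2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double lon = -84.375, lat = 26.4312280645; // point from the filter
        // If "10" is read as 10 degrees of longitude instead of 10 km,
        // the search radius balloons to roughly a thousand kilometers:
        System.out.printf("10 deg of longitude here ~ %.0f km%n",
                haversineKm(lon, lat, lon + 10, lat));
        // whereas about 0.1 degrees corresponds to the intended ~10 km:
        System.out.printf("0.1 deg of longitude here ~ %.0f km%n",
                haversineKm(lon, lat, lon + 0.1, lat));
    }
}
```

That scale gap matches the observed symptoms: huge result sets when the distance is taken as degrees, and empty ones when a tiny degree value is taken as kilometers. Reprojecting the data (or the query) into a flat, projected CRS sidesteps the ambiguity.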