Hello all,
So I started down a simple path. I was getting responses to my WFS queries against a GeoServer SVN-HEAD build backed by ArcSDE that looked like this:
<gml:featureMember>
<gml:GISDATA.TOWNS_POLY fid="GISDATA.TOWNS_POLY.3">
<gml:OBJECTID>3</gml:OBJECTID>
<gml:TOWNS_ID>3</gml:TOWNS_ID>
<gml:SHAPE>
<gml:MultiPolygon srsName="http://www.opengis.net/gml/srs/epsg.xml#26986">
...
</gml:MultiPolygon>
</gml:SHAPE>
</gml:GISDATA.TOWNS_POLY>
</gml:featureMember>
I thought this was a bit odd, as my GISDATA.TOWNS_POLY featureType isn't in the gml namespace at all, and isn't defined as such in any part of my configuration.
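For contrast, here's roughly what I expected the response to look like (the `sde` prefix below is hypothetical, just a stand-in for whatever prefix my catalog config actually assigns to that namespace):

```xml
<gml:featureMember>
  <sde:GISDATA.TOWNS_POLY fid="GISDATA.TOWNS_POLY.3">
    <sde:OBJECTID>3</sde:OBJECTID>
    <sde:TOWNS_ID>3</sde:TOWNS_ID>
    <sde:SHAPE>
      <gml:MultiPolygon srsName="http://www.opengis.net/gml/srs/epsg.xml#26986">
      ...
      </gml:MultiPolygon>
    </sde:SHAPE>
  </sde:GISDATA.TOWNS_POLY>
</gml:featureMember>
```

That is, the GML structural elements stay in the gml namespace, but my featureType and its properties carry their own configured prefix.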
So I dug in a bit, and here's a brief recap of what I found.
Inside FeatureResponse.execute(), the real meat of the action happens. The running GeoServer catalog is interrogated for all of the featureType information for the queried featureType (in my case, GISDATA.TOWNS_POLY). See line 246 or so for this sequence:
Data catalog = request.getWFS().getData();
...
meta = catalog.getFeatureTypeInfo(query.getTypeName());
After lots of debugging, this part looks gorgeous. No problems at all: my featureType is correctly decoded from the catalog, with the right namespace and all.
Then we actually execute the query. See line 325 or so:
FeatureResults features = source.getFeatures(query.toDataQuery(maxFeatures));
The metadata is fused to the results:
results.addFeatures(meta, features);
All's good so far.
We go and prepare our delegate (I happen to know it's a GML2FeatureResponseDelegate). The delegate sets up all of our namespaces just fine (using that very same metadata pulled from the catalog earlier).
See around line 135 of GML2FeatureResponseDelegate.java:
ftNames.declareNamespace(features.getSchema(), namespace.getPrefix(), uri);
Still running smoothly.
Finally, we go and actually writeTo our response. FeatureResponse.writeTo() invokes our GML2FeatureResponseDelegate.encode(), which in turn invokes our well set up FeatureTransformer.FeatureTranslator.
Well, I think it does... that part is a bit occluded by the javax.xml.transform machinery that's in the way.
So on we go, and our FeatureTransformer.FeatureTranslator goes to actually handle a Feature. On line 592 of FeatureTransformer.java we try to figure out the currentPrefix for the given namespace:
currentPrefix = getNamespaceSupport().getPrefix(f.getFeatureType().getNamespace().toString());
Unfortunately, this doesn't return the correct prefix for this feature. In fact, as far as the SDE DataStore is concerned, it will always return the GML namespace for every feature.
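To make the failure mode concrete, here's a minimal, self-contained sketch of the lookup using plain org.xml.sax.helpers.NamespaceSupport (which is what getNamespaceSupport() hands back). The "topp" prefix and URI are stand-ins for whatever your catalog config declares, not anything from my actual setup:

```java
import org.xml.sax.helpers.NamespaceSupport;

public class PrefixLookupDemo {

    /** Looks up the prefix the transformer would emit for a feature's namespace URI. */
    static String prefixFor(String featureNamespace) {
        NamespaceSupport ns = new NamespaceSupport();
        ns.pushContext();
        // The delegate declares both the catalog's namespace (stand-in
        // prefix "topp" here) and the standard GML namespace:
        ns.declarePrefix("topp", "http://www.openplans.org/topp");
        ns.declarePrefix("gml", "http://www.opengis.net/gml");
        return ns.getPrefix(featureNamespace);
    }

    public static void main(String[] args) {
        // A FeatureType carrying its real catalog namespace resolves correctly...
        System.out.println(prefixFor("http://www.openplans.org/topp")); // topp
        // ...but the ArcSDE-regenerated type reports the GML namespace URI,
        // so every one of its features comes back gml-prefixed:
        System.out.println(prefixFor("http://www.opengis.net/gml"));    // gml
    }
}
```

So the prefix lookup itself is doing exactly what it's told; it's the namespace attached to the FeatureType that's wrong.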
This is because the SDE DataStore code regenerates the FeatureType for each featureType it runs across, starting on line 540 of ArcSDEDataStore.java.
The newly generated FeatureType is very similar to the one pulled out of GeoServer's catalog, but it's missing a couple of things, notably the namespace (which defaults to GML).
All of this is a long run-up to the questions I have:
1) Does this affect other datastores besides the ArcSDEDataStore? If not, how is the GeoServer catalog "passed" or "communicated" to the geotools-specific FeatureReader generation code, so that each feature can have its correct namespace set?
2) It seems odd to me that there's a great infrastructure for looking up all these various namespaces, and that it's just the last teeny-tiny part of the featureReader that's screwy... has anyone else run across this?
3) I notice that three or four lines later there's fallback code:
if (currentPrefix == null) {
    currentPrefix = types.findPrefix(f.getFeatureType());
}
This code SHOULD fix exactly my problem. So: is this a bug in the ArcSDE DataStore, such that when ArcSDEDataStore.getFeatureReader discovers types, it should set their namespaces to null rather than to the default GML namespace?
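Here's a small sketch of why the fallback never fires today. It mirrors the lookup-then-fallback sequence from FeatureTransformer, with "topp" standing in for whatever types.findPrefix() would actually return from the catalog (both the prefix and the helper behavior are assumptions for illustration):

```java
import org.xml.sax.helpers.NamespaceSupport;

public class FallbackDemo {

    /** Mirrors FeatureTransformer's lookup followed by the null-check fallback. */
    static String resolvePrefix(String featureNamespace) {
        NamespaceSupport ns = new NamespaceSupport();
        ns.pushContext();
        ns.declarePrefix("gml", "http://www.opengis.net/gml");

        String currentPrefix = ns.getPrefix(featureNamespace);
        if (currentPrefix == null) {
            // stand-in for types.findPrefix(f.getFeatureType()),
            // which would recover the catalog-configured prefix
            currentPrefix = "topp";
        }
        return currentPrefix;
    }

    public static void main(String[] args) {
        // The regenerated ArcSDE type reports the GML URI, so getPrefix()
        // succeeds (wrongly) and the fallback is never reached:
        System.out.println(resolvePrefix("http://www.opengis.net/gml")); // gml

        // Only an undeclared (or absent) namespace would trigger the fallback:
        System.out.println(resolvePrefix("urn:undeclared"));             // topp
    }
}
```

Which is why leaving the namespace unset in the regenerated type, instead of defaulting it to GML, would let the existing fallback do its job.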
I have to think it's probably #3...
Anyone seen this before?
Sorry to be so long-winded.
thanks,
--saul