Hi all,
Recently I have been asked to estimate what a wfs 2.0 implementation for geoserver would take. In doing so I started thinking about how to handle the long-standing issue of how the object model for a particular ows spec evolves between versions.
Currently the general architecture we have in place works as follows (a rough sketch of what this looks like follows the list):
1) From the xml schema for the spec use EMF to generate an object model
2) Instrument the model to make any needed customizations
3) Write xml bindings to encode/decode the object model
4) Implement the operations for the service using the generated object model
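Just to make the above concrete, here is roughly what steps 1 and 3 amount to in practice. Names and signatures are illustrative and written from memory, not the exact generated code:

    import javax.xml.namespace.QName;

    import org.eclipse.emf.common.util.EList;
    import org.eclipse.emf.ecore.EObject;
    import org.geotools.xml.AbstractComplexEMFBinding;

    // step 1: EMF generates a typed interface per complex type in wfs.xsd,
    // something along these lines for GetFeature
    interface GetFeatureType extends EObject {
        String getHandle();
        void setHandle(String value);
        EList getQuery();   // list of QueryType objects
    }

    // step 3: a binding ties the xml type to the generated class; with the
    // dynamic EMF bindings most of the parse/encode logic happens reflectively
    class GetFeatureTypeBinding extends AbstractComplexEMFBinding {
        public QName getTarget() {
            return new QName("http://www.opengis.net/wfs", "GetFeatureType");
        }
        public Class getType() {
            return GetFeatureType.class;
        }
    }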
This is currently what is used for wfs 1.0, 1.1 and wcs 1.0, 1.1. However the two services differ quite a bit in how their spec models evolved between versions: wfs did not change very much at all, while wcs changed drastically.
When implementing a new version of a service, one has two options. The first option (the one used for wfs) is to use a single object model and have both versions of the service share it, which is only really possible if the spec versions are relatively similar. The nice thing about this is that you only implement the operations once for the multiple versions.
The alternative (the one used for wcs) is to generate a different object model for the new version of the spec, and have a completely separate service that implements the operations. The downside here is that we have to implement an entirely new service; the upside is that the existing service remains untouched.
Getting back to wfs 2.0, it is sort of a middle ground. While the xml schema has changed a lot, the core operations remain largely the same. This means I don't think the three spec versions can share an object model, but at the same time the thought of reimplementing all the wfs operations on top of a new request model does not make sense either.
So... how do we proceed? I can think of a couple of different options.
The first would be to abandon the current architecture of autogenerating the object model from xml schema and come up with a central, non generated set of objects. Basically going back to the old way of doing things, the way wms does it (a rough sketch of what I mean follows the pros/cons below).
The pros of such an approach that I can see are:
* Flexibility. Often things from the xml schema don't translate across very well, or we want to model something slightly differently than the xml schema does.
* Stability. The service operations always get implemented in terms of a stable object model.
* Simplicity. This approach is much simpler than the gtxml/emf setup, which, through my own fault, has been over-architected in a lot of places.
The cons:
* Parsing/encoding work. One of the nice things about using EMF as the object model is that we can use dynamic bindings to do most of the parsing and encoding work, which is otherwise a time consuming task.
* Maintenance. There is still the burden of manually updating the internal object model to support changes in the spec. Depending on the changes this could be a significant task, since different versions of a spec can sometimes conflict.
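To illustrate what I mean by a central non generated model, something along these lines. This is purely hypothetical; the actual classes and field lists would need real thought:

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.List;

    import org.opengis.filter.Filter;

    // a single hand maintained request object shared by all spec versions
    public class GetFeatureRequest {

        // hypothetical hand written query object
        public static class Query {
            String typeName;
            Filter filter;
        }

        String version;              // "1.0.0", "1.1.0" or "2.0.0"
        String handle;
        List<Query> queries = new ArrayList<Query>();
        BigInteger maxFeatures;      // called "count" in 2.0, mapped by the parser

        // getters/setters omitted for brevity
    }

The maintenance con shows up in fields like maxFeatures/count, where the spec versions disagree and the mapping has to live in the kvp/xml reading code.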
The second option would be to stick with xsd/emf and generate a new model for the new spec. To reuse the operations across the different object models we would use emf reflection to access the model, so as not to depend on any specific version of it.
Pros:
* Time. We still get the time savings on parsing/encoding work since we are using EMF.
* Separation. It is nice to have the different object models separated instead of trying to merge them into one beast.
Cons:
* Reflection. Doing all access via reflection is a painful way to code (a rough sketch of what that would look like is below).
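For what it's worth, the reflective access would look roughly like this (property names are just examples):

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;

    public class RequestProperties {
        // look up a property on whichever version of the request model we
        // were handed, without compiling against either generated package
        public static Object get(EObject request, String property) {
            EStructuralFeature f =
                request.eClass().getEStructuralFeature(property);
            return f != null ? request.eGet(f) : null;
        }
    }

Every operation would then be written against calls like get(request, "handle") rather than request.getHandle(), which works across model versions but loses all compile time checking.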
Anyways, interested in hearing what people think, as there are surely more pros/cons that should be considered, and possibly other alternatives for how to proceed.
-Justin
--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.