Having had some experience with aggregating WFS instances, I have a few perspectives, and it would be interesting to hear your plans for the following...
There are three bases for aggregation:
1) aggregate multiple identical services - same query, same response (a sketch of this case follows the list)
2) aggregate multiple "similar" services - think of Interfaces and polymorphism - same Filter, query wrapped in different FeatureType
3) aggregate disparate services, with all the attendant schema translation and ontology mapping required.
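To make case 1 concrete, here is a minimal sketch of a fan-out aggregator. Feature and FeatureService are invented stand-ins for the real GeoTools DataStore/FeatureSource machinery, so treat it as an illustration of the idea rather than working GeoServer code:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stand-ins for a remote WFS endpoint and the features it returns;
    // the real GeoTools DataStore/FeatureSource API looks different.
    interface Feature { /* opaque feature record */ }

    interface FeatureService {
        List<Feature> getFeatures(String typeName, String filter) throws Exception;
    }

    /** Case 1: identical services - same query, same schema; responses are simply concatenated. */
    class IdenticalServiceAggregator implements FeatureService {

        private final List<FeatureService> delegates;

        IdenticalServiceAggregator(List<FeatureService> delegates) {
            this.delegates = delegates;
        }

        public List<Feature> getFeatures(String typeName, String filter) throws Exception {
            List<Feature> result = new ArrayList<Feature>();
            for (FeatureService delegate : delegates) {
                // every delegate understands the same type name and filter
                result.addAll(delegate.getFeatures(typeName, filter));
            }
            return result;
        }
    }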
I know several people looking at the third problem, which is characterised by geoserver's current capability (a Feature Type is bound to the persistence layer - different database, different query). The problem is a massive one: it involves complex governance rules for publishing the ad-hoc schemas from WFS, plus the ontologies and schema translation rules required to support them. If you venture this way, good luck! I think we want to do it one day, but not to aggregate individual instances - rather to bridge common WFS from one community to multiple clients in another community (i.e. the publication of the content standards is already solved; communities just need to agree on a mechanism to describe the mappings and ontologies).
The much simpler approach (I can hear the groans already) is to make geoserver capable of delivering standard data against standard queries. This capability is in progress; spike solutions exist as unsupported community modules (community-schema-ds).
The best approach is #2, which extends #1 but requires an additional mechanism for polymorphism. This is called a Feature Type Catalogue - talk to Jim Groffen at Lisasoft for more info.
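A minimal sketch of what #2 could look like, reusing the Feature/FeatureService stand-ins from the sketch above; the in-memory map only stands in for the role a Feature Type Catalogue would play and is not the actual catalogue design:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    /** Case 2: "similar" services - the same Filter, but each delegate publishes its own concrete type. */
    class PolymorphicAggregator implements FeatureService {

        // a toy "feature type catalogue": common type name -> (delegate -> concrete type name)
        private final Map<String, Map<FeatureService, String>> catalogue;

        PolymorphicAggregator(Map<String, Map<FeatureService, String>> catalogue) {
            this.catalogue = catalogue;
        }

        public List<Feature> getFeatures(String commonTypeName, String filter) throws Exception {
            List<Feature> result = new ArrayList<Feature>();
            // look up which concrete type each delegate uses to realise the common type
            for (Map.Entry<FeatureService, String> binding : catalogue.get(commonTypeName).entrySet()) {
                // the Filter is shared; only the feature type name is rebound per delegate
                result.addAll(binding.getKey().getFeatures(binding.getValue(), filter));
            }
            return result;
        }
    }

The point being that the Filter is shared across the community, and only the binding from the common type to each server's concrete type needs cataloguing.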
The next issue is one of error propagation:
How do you intend to make the process robust in the case of permanent or intermittent failure, performance issues, changes in compliance levels etc. of a single source? My thoughts were running to some concept similar to connection pooling, possibly with failover handling. In any event, robust error reporting would be critical.
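For illustration only, and again assuming the Feature/FeatureService stand-ins from the first sketch, one possible shape for that kind of failover handling: try every source, blacklist a source that fails for a cool-down period, and report partial failures rather than swallowing them.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Fans out to all delegates, skips a source for a while after it fails,
        and reports partial failures instead of failing the whole request. */
    class FailoverAggregator implements FeatureService {

        private static final long COOL_DOWN_MS = 60000L; // ignore a failing source for one minute

        private final List<FeatureService> delegates;
        private final Map<FeatureService, Long> blacklistedUntil = new HashMap<FeatureService, Long>();

        FailoverAggregator(List<FeatureService> delegates) {
            this.delegates = delegates;
        }

        public List<Feature> getFeatures(String typeName, String filter) throws Exception {
            List<Feature> result = new ArrayList<Feature>();
            List<String> errors = new ArrayList<String>();
            long now = System.currentTimeMillis();
            for (FeatureService delegate : delegates) {
                Long until = blacklistedUntil.get(delegate);
                if (until != null && until > now) {
                    continue; // this source failed recently, skip it until the cool-down expires
                }
                try {
                    result.addAll(delegate.getFeatures(typeName, filter));
                } catch (Exception e) {
                    blacklistedUntil.put(delegate, now + COOL_DOWN_MS);
                    errors.add(delegate + " failed: " + e.getMessage());
                }
            }
            if (!errors.isEmpty()) {
                // robust error reporting: surface partial failures rather than hiding them
                System.err.println("aggregation completed with failures: " + errors);
            }
            return result;
        }
    }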
Rob Atkinson
Stefan Hansen wrote:
Hi!
We are currently developing a data store that forwards a request to multiple other data stores and aggregates their responses. Since this data store is supposed to aggregate mainly WFS servers (among them, but not only, GeoServers), I was already thinking about how to solve the problems of the WFS data store cascading a GeoServer when I read your mails discussing the issue. I hadn't come up with my own solution, so I simply took one of yours (Thanks!).
For our purposes the easiest option was to implement a wrapper data store; I simply had to make some changes in my aggregation data store...
However, we now have a data store running that solves the issues with the WFS data store (by replacing the prefix). It's rather simple: it only supports read-only access (which is all we need in our project) and it has not really been tested yet, but so far it seems to work. I'm not too familiar with the whole WFS side of things yet, so I was wondering if you can think of any special cases which might be problematic for our solution and which I should consider.
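To give a rough idea of its shape (using invented interfaces and made-up prefixes here, not the real GeoTools DataStore API our code is built on): the wrapper essentially translates the locally published type names back to the remote ones and forwards everything else unchanged.

    import java.util.List;

    /** Read-only wrapper that republishes the cascaded server's types under a local prefix. */
    class PrefixRewritingStore implements FeatureService {

        private final FeatureService cascaded;  // the remote WFS being cascaded
        private final String remotePrefix;      // prefix the remote server uses, e.g. "topp"
        private final String localPrefix;       // prefix we publish locally, e.g. "cascaded"

        PrefixRewritingStore(FeatureService cascaded, String remotePrefix, String localPrefix) {
            this.cascaded = cascaded;
            this.remotePrefix = remotePrefix;
            this.localPrefix = localPrefix;
        }

        public List<Feature> getFeatures(String typeName, String filter) throws Exception {
            // translate the locally published type name back to the remote one before forwarding
            String remoteName = typeName.startsWith(localPrefix + ":")
                    ? remotePrefix + typeName.substring(localPrefix.length())
                    : typeName;
            return cascaded.getFeatures(remoteName, filter);
        }

        // no write/transaction methods on purpose: the wrapper is read-only
    }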
I'm not sure how much interest this is to you, since our implementation is very limited so far (read-only) and we're not planning to add write access at the moment. But it is working for GeoServer 1.6, and if someone doesn't want to wait for version 1.7 to cascade a WFS server, we are happy to commit our data store.
cheers,
stefan
Andrea Aime wrote:
Justin Deoliveira wrote:
David Zwiers wrote:
I remember thinking about this issue a fair bit. At the time I was in favour of altering the FeatureType.Name to be a QName rather than a string ... this would certainly fix GS's issues as you should only ever publish one dataset per namespace.
There are a number of arguments against QName (for example, how to handle database instances), but in each case the issue could be handled by a default datastore namespace, or a hint passed in at creation.
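Just to illustrate the idea with a toy sketch (not the actual change to FeatureType.Name): with a QName the namespace becomes part of the identity, and a default namespace supplied at creation covers the database case.

    import javax.xml.namespace.QName;

    class QualifiedNaming {

        static QName nameFor(String namespace, String localPart, String defaultNamespace) {
            // fall back to the default namespace hint when the store has no namespace of its own
            String ns = (namespace == null || namespace.length() == 0) ? defaultNamespace : namespace;
            return new QName(ns, localPart);
        }

        public static void main(String[] args) {
            QName a = nameFor("http://serverA.example.org", "roads", "http://default.example.org");
            QName b = nameFor("http://serverB.example.org", "roads", "http://default.example.org");
            QName c = nameFor(null, "roads", "http://default.example.org"); // e.g. a database table
            System.out.println(a.equals(b)); // false - same local name, different namespaces
            System.out.println(c);           // {http://default.example.org}roads
        }
    }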
Yeah... this would probably be the cleanest and best solution to the problem... and in the geotools feature model there is now a class which is identical to the xml QName; it's used in the new feature model.
Yeah, but that would delay the usability of the WFS datastore up to Geoserver 1.7.something... I think the plan was to have it running in 1.6.0.
Cheers
Andrea