On Sat, May 21, 2011 at 5:46 PM, Gis <lixiao_gis@anonymised.com> wrote:
Hi,
I am adding a process to the GeoServer wps-core module. It takes a SimpleFeatureCollection as input, and I want to query it with SQL, because the CQL filter does not seem to support property names with Chinese characters. Could someone show me a way to run a SQL query against a SimpleFeatureCollection? Or, alternatively, a way to use a CQL filter to search for features whose properties have Chinese names?
I also have another question: I wrote a PostGIS function that can be run with SQL. Can I use GeoTools, or some other means, to call it from a WPS process?
In addition, I would like to know how to use GeoTools to implement a query like SQL's GROUP BY.
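As a side note on the CQL question: ECQL (the extended CQL parser in GeoTools) accepts double-quoted attribute names, which might be enough to reference Chinese property names without resorting to SQL. A minimal sketch, with a made-up attribute name and value, assuming GeoTools on the classpath:

```java
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.filter.text.ecql.ECQL;
import org.opengis.filter.Filter;

public class EcqlQuotedName {

    // Filters a collection in memory using an ECQL expression whose
    // attribute name is double-quoted; "名称" and '上海' are made up.
    static SimpleFeatureCollection filterByName(SimpleFeatureCollection fc)
            throws Exception {
        Filter filter = ECQL.toFilter("\"名称\" = '上海'");
        // subCollection applies the filter to the existing collection
        return fc.subCollection(filter);
    }
}
```

Whether the parser accepts every non-ASCII name is worth verifying against your GeoTools version, but it is a cheaper experiment than moving the query into the database.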
Currently GeoServer WPS addresses the needs of those who want to build processes using GeoTools.
Integration with external systems is indeed nonexistent; some work is needed to make it happen.
A set of processes based on PostGIS would leverage the native abilities of the database, but to do so
they’d have to first get the data into PostGIS itself.
Vector data in WPS might be coming from any source: over the network or from local disk
as GeoJSON, GML or zipped shapefiles; generated by a previous process call in the chain;
coming from another database (e.g., an Oracle install, or SDE); or, finally, coming from
a PostGIS instance, but filtered or manipulated in other ways.
A PostGIS based set of processes should always be able to take the incoming feature collection, which
can be coming from… anywhere really, and store it first into the database to allow SQL to act on it.
At the same time, if the data is already in PostGIS we don’t want to read it out just to store it back,
so a set of markers should somehow be attached to the feature collections to notify these processes
that the data is already coming from the database (thus it does not need to be copied, but it has to
be dealt with differently, as it might already be the result of a query).
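The marker could be as simple as a small interface attached to (or wrapped around) the collection. This is purely a sketch of the proposal, not an existing GeoServer API; all names are made up:

```java
// Hypothetical marker sketch: record the provenance of a collection so
// downstream processes know the data already lives in PostGIS and does
// not need to be copied into the database again.
public class DatabaseBackedExample {

    /** Marker carrying the origin of a collection's data. */
    interface DatabaseBacked {
        String tableName();
    }

    /** Minimal holder implementing the marker for PostGIS-backed data. */
    static class PostgisBacked implements DatabaseBacked {
        private final String table;

        PostgisBacked(String table) {
            this.table = table;
        }

        public String tableName() {
            return table;
        }
    }

    public static void main(String[] args) {
        // A process receiving this collection could check for the marker
        // and skip the import step entirely.
        DatabaseBacked marker = new PostgisBacked("parcels");
        System.out.println(marker.tableName());
    }
}
```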
Also, when generating output in tabular form, which would again be a collection, we’d have to attach
the notion of where the data comes from, so that other chained processes can leverage that
information.
A similar approach would have to be taken to develop processes based on external software
like gdal_translate and ogr2ogr, which would be interested in knowing if the data comes
from a file on the file system, for the very same reason as above (to avoid re-dumping to disk
a file that’s already there).
All of this is doable, yet it’s not a trivial amount of work.
A simpler solution for your case is to make your process take table names and
query bits as inputs, have it connect to the database by itself, do its thing, and generate the results
(or, if you need to get data from the outside, have the process copy the data
into the database before starting).
It would have less reuse potential and lower efficiency than the articulated solution
I described above, but it should be significantly easier to implement.
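A minimal sketch of that simpler route, using plain JDBC, which would also cover calling your own PostGIS function and doing GROUP BY directly in SQL. The table, column, function name and connection details are all placeholders:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class DirectPostgisQuery {

    // Build the query separately so it can be inspected before running.
    // ST_Union is a real PostGIS aggregate; my_function stands in for the
    // user-defined PostGIS function mentioned in the question.
    static String buildGroupBySql(String table, String keyColumn) {
        return "SELECT " + keyColumn + ", ST_Union(geom) AS geom, my_function("
                + keyColumn + ") FROM " + table + " GROUP BY " + keyColumn;
    }

    /** Executes the query against a connection the caller has opened. */
    static void run(Connection conn, String sql) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // Read the grouping key; the process would turn rows
                // back into a feature collection for the WPS output.
                System.out.println(rs.getString(1));
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(buildGroupBySql("parcels", "region"));
    }
}
```

The process would open the connection itself (e.g. via the PostgreSQL JDBC driver or a GeoTools PostGIS datastore), which is exactly what makes this approach simpler, and also what ties it to one specific database.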
Cheers
Andrea
–
Ing. Andrea Aime
GeoSolutions S.A.S.
Tech lead
Via Poggio alle Viti 1187
55054 Massarosa (LU)
Italy
phone: +39 0584 962313
fax: +39 0584 962313
http://www.geo-solutions.it
http://geo-solutions.blogspot.com/
http://www.youtube.com/user/GeoSolutionsIT
http://www.linkedin.com/in/andreaaime
http://twitter.com/geowolf