[Geoserver-devel] some restconfig improvements

Hi all,

Recently I have been working on some improvements to restconfig that I would like to run by everyone. The changes have to do with the operations that allow the uploading of a shapefile.

Currently you only really have one option. You upload a shapefile (or reference it externally) and a datastore (and feature type) gets created for that shapefile. That works, but I think it would be useful to have some flexibility here, mainly the ability to target a datastore other than a shapefile datastore. For instance maybe you want that shapefile to be stored in an existing PostGIS database. Or maybe you want a different type of datastore to be created automatically (thinking H2 here mostly).

So currently the API looks mostly like this:

PUT [zipped shapefile] /rest/workspaces/<ws>/datastores/<ds>/file.shp

Well nothing really has to change. If you want to use an existing datastore you just PUT to that datastore. For instance:

PUT [zipped shapefile] /rest/workspaces/<ws>/datastores/foo_pg/file.shp

This would, under the covers, take the shapefile and create a feature type / table for it (via DataStore.createSchema()), then copy the contents of the shapefile into that new type.
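To make that concrete, here is a rough, untested sketch of what the handler could do with the GeoTools API (the class and method names are mine, and the store lookup, error handling and name clashes are all glossed over):

import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.data.simple.SimpleFeatureStore;
import org.opengis.feature.simple.SimpleFeatureType;

public class ShapefileImportSketch {

    /** Copies an uploaded (already unzipped) shapefile into an existing datastore. */
    public static void importIntoStore(File shp, DataStore target) throws Exception {
        // open the shapefile as a datastore
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("url", shp.toURI().toURL());
        DataStore source = DataStoreFinder.getDataStore(params);

        String typeName = source.getTypeNames()[0];
        SimpleFeatureType schema = source.getSchema(typeName);

        // create the feature type / table in the target store
        target.createSchema(schema);

        // copy the shapefile contents into the new type
        SimpleFeatureSource features = source.getFeatureSource(typeName);
        SimpleFeatureStore featureStore = (SimpleFeatureStore) target.getFeatureSource(typeName);
        featureStore.addFeatures(features.getFeatures());

        source.dispose();
    }
}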

I was also thinking of doing things like this:

PUT [zipped shapefile] /rest/workspaces/<ws>/datastores/foo_h2/file.shp?target=h2

In this case the datastore does not exist, but the user specifies the type of datastore they want created via the “target” parameter. An H2 database and datastore would then be created automatically, and the new type added to it. Now obviously this could not be supported for all types of datastores; for instance we can’t really magically create a new PostGIS datastore. But for many, like H2 and other file based datastores, it should be possible.
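For the target=h2 case the sketch would be along these lines (again untested, and the directory layout is made up), after which the shapefile contents would be copied over exactly as above:

import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;

public class H2TargetSketch {

    /** Creates a brand new H2 backed datastore on disk for the upload to land in. */
    public static DataStore createH2Store(File storeDir, String name) throws Exception {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("dbtype", "h2");
        // H2 creates the database files on first use if they do not exist yet
        params.put("database", new File(storeDir, name).getAbsolutePath());
        return DataStoreFinder.getDataStore(params);
    }
}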

So… what do you all think? The existing behaviour would be completely maintained. These would really just be additions.

-Justin


Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.


Works for me. You may want to have a look at the WPS process that does
the same thing:
http://svn.codehaus.org/geoserver/trunk/src/community/wps/src/main/java/org/geoserver/wps/gs/ImportProcess.java

I guess some of the code can be shared.

Cheers
Andrea

-----------------------------------------------------
Ing. Andrea Aime
Senior Software Engineer

GeoSolutions S.A.S.
Via Poggio alle Viti 1187
55054 Massarosa (LU)
Italy

phone: +39 0584962313
fax: +39 0584962313

http://www.geo-solutions.it
http://geo-solutions.blogspot.com/
http://www.linkedin.com/in/andreaaime
http://twitter.com/geowolf

-----------------------------------------------------

Right, I thought I remembered a process that could do something like that. I agree it would be nice to share the code. What would be nicer perhaps is to have a general pattern for doing so. For instance there is lots of code duplication between the UI and restconfig when it comes to interacting with the catalog. If we could factor that out it would be nice.

I guess we could use a utility class… but I sort of detest utility classes as you may know :). But I guess it would work. Something I keep thinking about is having a package of command classes, each of which performs a specific task. For instance this one would be a command that takes a source file, or a source datastore instance, and goes about importing it / creating the underlying type, etc…
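Something along these lines is what I have in mind. To be clear, this is purely hypothetical, none of it exists today and all of the names are made up:

import org.geoserver.catalog.Catalog;

/**
 * Hypothetical command interface: one self contained catalog task that the
 * UI, restconfig and WPS could all share instead of duplicating the logic.
 */
public interface CatalogCommand<T> {

    T execute(Catalog catalog) throws Exception;
}

// e.g. an ImportVectorDataCommand implementing CatalogCommand<FeatureTypeInfo>
// would take a source file (or source datastore), create the underlying type
// in the target store, copy the data over, and register the resulting
// FeatureTypeInfo with the catalog.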

Thoughts? Sorry, sort of a tangent.




Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

We want that for GeoNode too (somebody asked me about it yesterday in
fact). So having GeoServer's restconfig do it automagically would be
awesome.

While you are at it, is it too hard to allow passing a URL (that points to a
zipped shapefile) and having GeoServer download it, like you can with
GeoTIFFs? #ponyrequest

Ariel.



It seems to me that creating a new datastore from an uploaded dataset, and importing a dataset into an existing store, should be distinct requests with distinct URLs.

Following the REST convention that PUT should be used to update an existing resource and POST should be used to insert into a collection, I would expect something like:
POST /workspaces/ws/datastores/collection/featuretypes/ <body is shapefile or whatever, Content-Type header reflects… content type>
assuming there’s an existing datastore at:
/workspaces/ws/datastores/collection

I wouldn’t like to see a typoed or mangled store name resulting in the creation of new stores or modification of existing stores, rather than an error message.

As for creating a new datastore, shouldn’t that be accomplished by just providing the connection parameters as can be done now? I haven’t tried it, but I imagine this works for creating H2 stores already. Does initializing a new database really need to be tied to uploading a dataset?
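i.e. something along the lines of (I haven't checked the exact representation):

POST [datastore definition with connection parameters] /rest/workspaces/<ws>/datastores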

I’d also suggest using a form with named parameters instead of the current ds/{url,file,external}.zip, but maybe that’s an idea for a REST API v2.


David Winslow
OpenGeo - http://opengeo.org/




Hi Ariel,

On Fri, Nov 12, 2010 at 12:49 PM, Ariel Nunez <ingenieroariel@anonymised.com> wrote:

> We want that for GeoNode too (somebody asked me about it yesterday in
> fact). So having GeoServer’s restconfig do it automagically would be
> awesome.
>
> While you are at it is it too hard to allow passing an url (that has a
> zipped shapefile) and having GeoServer download it like you can with
> GeoTiffs ? #ponyrequest

You can actually do this now, it is just poorly documented. Basically just change file.shp to url.shp, and the content/body should be the URL of the shapefile to download.
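So something like this (the URL is just an example) should already work:

PUT [body is the URL, e.g. http://example.com/data/roads.zip] /rest/workspaces/<ws>/datastores/<ds>/url.shp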





Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

On Fri, Nov 12, 2010 at 12:51 PM, David Winslow <dwinslow@anonymised.com> wrote:

> It seems to me that creating a new datastore from an uploaded dataset, and importing a dataset into an existing store, should be distinct requests with distinct URLs.
>
> Following the REST convention that PUT should be used to update an existing resource and POST should be used to insert into a collection I would expect something like:
> POST /workspaces/ws/datastores/collection/featuretypes/ <body is shapefile or whatever, Content-Type header reflects… content type>
> assuming there’s an existing datastore at:
> /workspaces/ws/datastores/collection

I agree with you from a REST perspective, but this operation already violates good RESTful practices so I decided to continue with it. The whole file.shp endpoint is something I consider a bit different from the rest of the API because it does diverge from this convention. It is there for “convenience”, to allow people to do this with a single request. It was the best compromise we could make when designing the API.

But the existing endpoint already allows you to PUT to file.shp and have a new datastore created, so I guess I am being bad by continuing this violation.

> I wouldn’t like to see a typoed or mangled store name resulting in the creation of new stores or modification of existing stores, rather than an error message.

> As for creating a new datastore, shouldn’t that be accomplished by just providing the connection parameters as can be done now? I haven’t tried it, but I imagine this works for creating H2 stores already. Does initializing a new database really need to be tied to uploading a dataset?

It does, but again the idea here is to support this with a single operation.

> I’d also suggest using a form with named parameters instead of the current ds/{url,file,external}.zip, but maybe that’s an idea for a REST API v2.

That would be good, but I agree it is something more suitable for a v2 of the API. With that we could support better PUT vs POST semantics.







Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.