[Geoserver-devel] xstream persistence and metadata map

Hi all,

A few issues have popped up recently related to how maps (in particular the metadata map for catalog objects) are persisted and depersisted. The latest issue revolves around someone trying to configure a feature type via REST.

The issue is that, in an attempt to make xstream less verbose, I made it so that someone uploading map data via REST can do this:

<map>
   <key1>value1</key1>
   <key2>value2</key2>
    ...
</map>

As opposed to the xstream default:

<map>
   <entry>
     <string>key1</string>
     <string>value1</string>
   </entry>
   <entry>
     <string>key2</string>
     <string>value2</string>
   </entry>
</map>

However, this has caused some issues. The main one is that type information is lost. This pops up in a couple of places:

* when a feature type is persisted in GeoServer and then read back in, the metadata values always come back as strings.

* when a feature type is uploaded via REST, the values in the metadata map are of type String.

The end result is the same: a ClassCastException for code that assumes type information about the metadata map.

The quick solution for now has been to replace this:

(Integer) featureType.getMetadata().get( "foo" );

with:

Converters.convert( featureType.getMetadata().get( "foo" ), Integer.class );

The longer term solution is to amend the xstream persistence a bit so that type information is not lost. This is, however, a bit tricky, and probably won't fix every REST case.

So I am wondering: what do people think about making it standard practice that a converter be used whenever accessing data from a metadata map? The downside, of course, is that we have to update a bunch of code, and it's one more thing to remember. The upside is that things are a bit more flexible and lenient, which might be a big benefit in the future if the metadata map is used to store more and different kinds of data.
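For illustration, here is a self-contained sketch of the lenient access pattern. The convert method below is a minimal stand-in for GeoTools' org.geotools.util.Converters (which supports far more conversions); only the access pattern itself is the point:

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataAccess {

    // Minimal stand-in for org.geotools.util.Converters.convert(Object, Class):
    // returns the value converted to the target type, or null if it can't.
    @SuppressWarnings("unchecked")
    static <T> T convert(Object value, Class<T> target) {
        if (value == null) {
            return null;
        }
        if (target.isInstance(value)) {
            return (T) value; // already the right type, no conversion needed
        }
        String s = value.toString().trim();
        try {
            if (target == Integer.class) return (T) Integer.valueOf(s);
            if (target == Double.class) return (T) Double.valueOf(s);
            if (target == Boolean.class) return (T) Boolean.valueOf(s);
            if (target == String.class) return (T) s;
        } catch (NumberFormatException e) {
            // fall through: conversion failed
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Object> metadata = new HashMap<>();
        // after a REST upload or a config reload the value comes back as a String...
        metadata.put("foo", "42");

        // ...so a plain cast would throw ClassCastException:
        // Integer broken = (Integer) metadata.get("foo");

        // ...while converter-based access works for String and Integer values alike:
        Integer foo = convert(metadata.get("foo"), Integer.class);
        System.out.println(foo); // prints 42
    }
}
```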

Anyway, curious to hear people's thoughts on this.

-Justin

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Justin Deoliveira wrote:

So I am wondering: what do people think about making it standard practice that a converter be used whenever accessing data from a metadata map? [...]

Would this be an opportunity to trim whitespace from strings in datastore.xml connectionParameters/entry elements?

--
Ben Caradoc-Davies <Ben.Caradoc-Davies@anonymised.com>
Software Engineer, CSIRO Exploration and Mining
Australian Resources Research Centre
26 Dick Perry Ave, Kensington WA 6151, Australia

Justin Deoliveira wrote:

So I am wondering: what do people think about making it standard practice that a converter be used whenever accessing data from a metadata map?

I would be OK with using the converters everywhere. But we should make it pretty explicit that the user map is made of strings, Map<String,String>. This way the need for some kind of conversion is obvious (teaching people that they can use Converters instead of Integer.parseInt(xxx) is another story).

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Andrea Aime wrote:

I would be OK with using the converters everywhere. But we should make it pretty explicit that the user map is made of strings, Map<String,String>. This way the need for some kind of conversion is obvious (teaching people that they can use Converters instead of Integer.parseInt(xxx) is another story).

It looks to me like MetadataMap deserves to be a class of its own rather than a plain java.util.Map:

MetadataMap {
  private Map<String, Serializable> internalMap;
  ...
  boolean contains(Serializable key);
  <T extends Serializable> T get(Serializable key, Class<T> type);
  // same for put
}

then isolate the use of converters inside it.
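To make that concrete, here is a rough, self-contained sketch of what such a wrapper might look like (the names are illustrative, and the private convert method is a tiny stand-in for GeoTools' Converters, which would do the real work):

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed MetadataMap wrapper: all typed access goes through
// one conversion point instead of casts scattered around the code base.
public class MetadataMap implements Serializable {

    private final Map<String, Serializable> internalMap = new HashMap<>();

    public boolean contains(String key) {
        return internalMap.containsKey(key);
    }

    public void put(String key, Serializable value) {
        internalMap.put(key, value);
    }

    public <T extends Serializable> T get(String key, Class<T> type) {
        return convert(internalMap.get(key), type);
    }

    // Stand-in for Converters.convert(value, type): handles just enough
    // targets to show where the real conversion would be isolated.
    @SuppressWarnings("unchecked")
    private static <T> T convert(Object value, Class<T> type) {
        if (value == null) {
            return null;
        }
        if (type.isInstance(value)) {
            return (T) value;
        }
        String s = value.toString().trim();
        try {
            if (type == Integer.class) return (T) Integer.valueOf(s);
            if (type == String.class) return (T) s;
        } catch (NumberFormatException e) {
            // fall through: conversion failed
        }
        return null; // no conversion available
    }
}
```

Client code then becomes map.get("foo", Integer.class) and works whether the stored value is an Integer or the String that came back from xstream.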

my 2c


--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Gabriel Roldan wrote:

It looks to me like MetadataMap deserves to be a class of its own rather than a plain java.util.Map:

MetadataMap {
  private Map<String, Serializable> internalMap;
  ...
  boolean contains(Serializable key);
  <T extends Serializable> T get(Serializable key, Class<T> type);
  // same for put
}

then isolate the use of converters inside it.

Hum.... uh?
I looked in CatalogInfo and found:

Map<String,Serializable> getMetadata();

in ServiceInfo:

Map<String,Serializable> getMetadata();

Or else... you mean we should create a MetadataMap class and replace those Map<String,Serializable> references? Sounds like a good idea to me.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Andrea Aime wrote:

Hum.... uh?
I looked in CatalogInfo and found:

Map<String,Serializable> getMetadata();

in ServiceInfo:

Map<String,Serializable> getMetadata();

Or else... you mean we should create a MetadataMap class and replace those Map<String,Serializable> references? Sounds like a good idea to me.

yup


--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Thanks for your input, guys. I think this is a really good idea; I will come up with a patch soon for review.

-Justin

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Ben Caradoc-Davies wrote:

Would this be an opportunity to trim whitespace from strings in datastore.xml connectionParameters/entry elements?

Hmmm... I am still not sure this is such a great idea. I still stand by "whatever you put into the config, you should get out exactly that". If for some reason a datastore relied on whitespace, it would be broken.

That said, I don't have an example of where this would be problematic, so I don't have a strong case, just a gut feeling. Can you jog my memory as to why the entity creating the entries can't do the trim? Is it because the value comes from formatted XML that looks like this:

<a>
   the value
</a>

?

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Justin Deoliveira wrote:

Hmmm... I am still not sure this is such a great idea. I still stand by "whatever you put into the config, you should get out exactly that". If for some reason a datastore relied on whitespace, it would be broken.
That said, I don't have an example of where this would be problematic, so I don't have a strong case, just a gut feeling. Can you jog my memory as to why the entity creating the entries can't do the trim? Is it because the value comes from formatted XML that looks like this:
<a>
   the value
</a>

Yes, hand editing of datastore.xml in an XML-aware editor, while configuring an app-schema data store. I can fix it for app-schema, but any other datastore with a long path may be vulnerable.

From the last time we discussed this:

******

The end user is editing the XML and formats the file before saving,
introducing whitespace.

(Note: this will be a bit mangled by email:)

This works:

<dataStore>
  <id>gsml_MappedFeature_datastore</id>
  <name>gsml_MappedFeature</name>
  <enabled>true</enabled>
  <workspace>
    <id>gsml_workspace</id>
  </workspace>
  <connectionParameters>
    <entry key="namespace">urn:cgi:xmlns:CGI:GeoSciML:2.0
    </entry>
    <entry key="url">file:workspaces/gsml/gsml_MappedFeature/gsml_MappedFeature.xml</entry>
    <entry key="dbtype">app-schema</entry>
  </connectionParameters>
</dataStore>

This does not:

<dataStore>
  <id>gsml_MappedFeature_datastore</id>
  <name>gsml_MappedFeature</name>
  <enabled>true</enabled>
  <workspace>
    <id>gsml_workspace</id>
  </workspace>
  <connectionParameters>
    <entry key="namespace">urn:cgi:xmlns:CGI:GeoSciML:2.0
    </entry>
    <entry key="url">
      file:workspaces/gsml/gsml_MappedFeature/gsml_MappedFeature.xml
    </entry>
    <entry key="dbtype">app-schema</entry>
  </connectionParameters>
</dataStore>

******

--
Ben Caradoc-Davies <Ben.Caradoc-Davies@anonymised.com>
Software Engineer, CSIRO Exploration and Mining
Australian Resources Research Centre
26 Dick Perry Ave, Kensington WA 6151, Australia

The BreifMapConverter is a good thing (note the typo, Breif... instead of Brief...); losing type information may not be.

From my perspective, since a config file schema is out of the question, it would still be good if BreifMapConverter could hold some type information where available. You both seem to have valid arguments one way and the other, so I think something like the following may help:

<patch>
### Eclipse Workspace Patch 1.0
#P main
Index: src/main/java/org/geoserver/config/util/XStreamPersister.java

--- src/main/java/org/geoserver/config/util/XStreamPersister.java (revision 12631)
+++ src/main/java/org/geoserver/config/util/XStreamPersister.java (working copy)
@@ -69,6 +69,7 @@
  import org.geotools.referencing.crs.DefaultProjectedCRS;
  import org.geotools.referencing.operation.DefaultMathTransformFactory;
  import org.geotools.referencing.operation.matrix.GeneralMatrix;
+import org.geotools.util.Converters;
  import org.geotools.util.NumberRange;
  import org.geotools.util.logging.Logging;
  import org.opengis.coverage.grid.GridGeometry;
@@ -481,6 +482,7 @@
                  writer.startNode("entry");
                  writer.addAttribute( "key", entry.getKey().toString());
                  if ( entry.getValue() != null ) {
+                     writer.addAttribute("type", entry.getValue().getClass().getName());
                      writer.setValue(entry.getValue().toString());
                  }

@@ -510,6 +512,14 @@
                          //this is case 3
                          key = reader.getAttribute( "key" );
                          value = reader.getValue();
+                         String type = reader.getAttribute("type");
+                         if (type != null) {
+                             try {
+                                 value = Converters.convert(value, Class.forName(type));
+                             } catch (Exception e) {
+                                 // can't convert, leave the value as is
+                             }
+                         }
                      }
                      else if ( reader.hasMoreChildren() ){
                          //this is case 4
</patch>

That is, augment the <entry> node with a "type" attribute, then use Converters to try to restore the value to its original type.

The good news is that org.geotools.util.URConverterFactory will already handle the whitespace trimming with no modification. Try this:

URL url = new URL(" \n\t http://localhost:8080/geoserver \n\t ");
System.out.println( "'" + url.toExternalForm() + "'" );

the output is: 'http://localhost:8080/geoserver'

This should be supported by the fact that DataAccess.Param has a binding class, hence the UI _should_ (I can't confirm it does right now) set the datastore connection parameters to the proper Param.binding type.
I don't think there's any magic in doing this, it's just a matter of being consistent at both ends.
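A sketch of that idea: drive the conversion from the parameter's binding class when reading connectionParameters back in. Param and convert below are simplified stand-ins for GeoTools' DataAccessFactory.Param and Converters; only the be-consistent-at-both-ends pattern is the point:

```java
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class ParamBindingSketch {

    // Simplified stand-in for DataAccessFactory.Param: a key plus the Java
    // type (binding) the parameter value is expected to have.
    static class Param {
        final String key;
        final Class<?> binding;

        Param(String key, Class<?> binding) {
            this.key = key;
            this.binding = binding;
        }
    }

    // Convert a raw string from datastore.xml to the binding type; the
    // trim here is what makes hand-formatted XML safe.
    static Object convert(String raw, Class<?> binding) throws Exception {
        String s = raw.trim();
        if (binding == URL.class) {
            return new URL(s);
        }
        if (binding == Boolean.class) {
            return Boolean.valueOf(s);
        }
        return s;
    }

    public static void main(String[] args) throws Exception {
        Param[] params = {
            new Param("url", URL.class),
            new Param("dbtype", String.class),
        };

        // entry values as they might come out of a pretty-printed datastore.xml
        Map<String, String> raw = new HashMap<>();
        raw.put("url", "\n      http://localhost:8080/geoserver\n    ");
        raw.put("dbtype", "app-schema");

        Map<String, Object> connectionParams = new HashMap<>();
        for (Param p : params) {
            connectionParams.put(p.key, convert(raw.get(p.key), p.binding));
        }
        System.out.println(connectionParams.get("url")); // prints http://localhost:8080/geoserver
    }
}
```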

My 2c.-

Gabriel

--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Btw, I kind of share Justin's concern about bloating the file: adding the "type" attribute to every entry when marshaling a java.util.Map may be unwanted. Still, check this line:

xs.registerLocalConverter(StoreInfoImpl.class, "connectionParameters", new BreifMapConverter() );

We are using a special converter for the datastore connection parameters, so the change can be limited to that: use a BriefMapConverter that does value = Converters.convert(...) as in the sample patch?

Cheers,
Gabriel

My first thought for saving type information was to add a brief=true|false flag to XStreamPersister itself, and when set to false simply disable BriefMapConverter, although this would work too.

And now that we support a brief syntax, simply disabling it might lead to backwards compatibility issues.

That said, I agree with Gabriel that restoring the type information and creating the right type of object seems like a good idea and fixes this problem.

Gabriel Roldan wrote:

Btw, I kind of share Justin's concern about bloating the file: adding the "type" attribute to every entry when marshaling a java.util.Map may be unwanted. Still, check this line:
xs.registerLocalConverter(StoreInfoImpl.class, "connectionParameters", new BreifMapConverter() );
We are using a special converter for the datastore connection parameters, so the change can be limited to that: use a BriefMapConverter that does value = Converters.convert(...) as in the sample patch?

Cheers,
Gabriel

Gabriel Roldan wrote:

The BreifMapConverter is a good thing (note the typo, Breif... instead of Brief...), loosing type may not be.

From my perspective, since config file schema is out of question, it still would be good if BreifMapConverter can hold some type information where available, and you both guys seem to have valid arguments one way and the other, I think something like the following may be of help:

<patch>
### Eclipse Workspace Patch 1.0
#P main
Index: src/main/java/org/geoserver/config/util/XStreamPersister.java

--- src/main/java/org/geoserver/config/util/XStreamPersister.java (revision 12631)
+++ src/main/java/org/geoserver/config/util/XStreamPersister.java (working copy)
@@ -69,6 +69,7 @@
  import org.geotools.referencing.crs.DefaultProjectedCRS;
  import org.geotools.referencing.operation.DefaultMathTransformFactory;
  import org.geotools.referencing.operation.matrix.GeneralMatrix;
+import org.geotools.util.Converters;
  import org.geotools.util.NumberRange;
  import org.geotools.util.logging.Logging;
  import org.opengis.coverage.grid.GridGeometry;
@@ -481,6 +482,7 @@
                  writer.startNode("entry");
                  writer.addAttribute( "key", entry.getKey().toString());
                  if ( entry.getValue() != null ) {
+ writer.addAttribute("type", entry.getValue().getClass().getName());
                      writer.setValue(entry.getValue().toString());
                  }

@@ -510,6 +512,14 @@
                          //this is case 3
                          key = reader.getAttribute( "key" );
                          value = reader.getValue();
+ String type = reader.getAttribute("type");
+ if(type != null){
+ try{
+ value = Converters.convert(value, Class.forName(type));
+ }catch(Exception e){
+ //can't convert, leave it as is
+ }
+ }
                      }
                      else if ( reader.hasMoreChildren() ){
                          //this is case 4
</patch>

That is, augment the <entry> node with a "type" attribute, then try to use Converters to restore it to it's original type.

The good news is that org.geotools.util.URConverterFactory will already handle the whitespace trimming with no modification. Try this:

URL url = new URL(" \n\t http://localhost:8080/geoserver \n\t ");
System.out.println( "'" + url.toExternalForm() + "'" );

the output is: 'http://localhost:8080/geoserver'

This should be supported by the fact that DataAccessFactory.Param has a binding class, hence the UI _should_ (I can't confirm it does right now) set the datastore connection parameters to the proper Param.binding type.
I don't think there's any magic in doing this, but just being consistent at both ends.

My 2c.-

Gabriel

Ben Caradoc-Davies wrote:

Justin Deoliveira wrote:

Ben Caradoc-Davies wrote:

Would this be an opportunity to trim whitespace from strings in datastore.xml connectionParameters/entry elements?

Hmmm... I am still not sure I think this is such a great idea. I still stand by "whatever you put into the config you should get out exactly that". If for some reason a datastore relied on whitespace it would be screwed.
That said, I don't have an example of where this would be problematic, so I don't have a strong case, just a gut feeling. Can you jog my memory as to why the entity creating the entries can't do the trim? Is it because it comes from formatted XML which looks like this:
<a>
   the value
</a>

Yes, hand editing of datastore.xml in an XML-aware editor, while configuring an app-schema data store. I can fix it for app-schema, but any other datastore with a long path may be vulnerable.

From the last time we discussed this:

******

The end user is editing the XML and formats the file before saving,
introducing whitespace.

(Note: this will be a bit mangled by email:)

This works:

<dataStore>
    <id>gsml_MappedFeature_datastore</id>
    <name>gsml_MappedFeature</name>
    <enabled>true</enabled>
    <workspace>
        <id>gsml_workspace</id>
    </workspace>
    <connectionParameters>
        <entry key="namespace">urn:cgi:xmlns:CGI:GeoSciML:2.0
        </entry>
        <entry
key="url">file:workspaces/gsml/gsml_MappedFeature/gsml_MappedFeature.xml</entry>

        <entry key="dbtype">app-schema</entry>
    </connectionParameters>
</dataStore>

This does not:

<dataStore>
    <id>gsml_MappedFeature_datastore</id>
    <name>gsml_MappedFeature</name>
    <enabled>true</enabled>
    <workspace>
        <id>gsml_workspace</id>
    </workspace>
    <connectionParameters>
        <entry key="namespace">urn:cgi:xmlns:CGI:GeoSciML:2.0
        </entry>
        <entry key="url">
            file:workspaces/gsml/gsml_MappedFeature/gsml_MappedFeature.xml
        </entry>
        <entry key="dbtype">app-schema</entry>
    </connectionParameters>
</dataStore>

******

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

I have gone ahead and implemented the map based on this thread. I would love a review:

http://jira.codehaus.org/browse/GEOS-3164

-Justin

Gabriel Roldan wrote:

Andrea Aime wrote:

Gabriel Roldan wrote:

Andrea Aime wrote:

Justin Deoliveira wrote:

Hi all,

A few issues have popped up recently related to how maps (in particular the metadata map for catalog objects) are persisted/depersisted. The latest issue revolves around someone trying to configure a feature type via rest.

The issue is that, in an attempt to make xstream less verbose, I made it so that someone trying to upload map data with rest can do this:

<map>
   <key1>value1</key1>
   <key2>value2</key2>
    ...
</map>

As opposed to the xstream default:

<map>
   <entry>
     <string>key1</string>
     <string>value1</string>
   </entry>
   <entry>
     <string>key2</string>
     <string>value2</string>
   </entry>
</map>

However this has caused some issues. The main issue is that type information is lost. This pops up in a couple of places:

* when a feature is persisted in geoserver, and then read back in again it is always read back in as a string.

* when a feature type is uploaded via rest, the values in the metadata map are of type string.

The end result is the same: a ClassCastException for code that assumes type information about the metadata map.

The quick solution for now has been to replace this:

(Integer) featureType.getMetadata().get( "foo" );

with:

Converters.convert( featureType.getMetadata().get( "foo" ), Integer.class );

The longer term solution is to amend the xstream persistence a bit so that type information is not lost. This is a bit tricky, however, and probably won't fix every REST case.

So I am wondering: what do people think about making it practice that, whenever accessing data from a metadata map, a converter be used?

I would be ok using the converters everywhere. But we should make it pretty explicit that the user map is made of strings, Map<String,String>. This way the need for some kind of conversion is obvious (teaching people that they can use Converters instead of Integer.parseInt(xxx) is another story).

It looks to me like MetadataMap deserves to be a class of its own rather than a java.util.Map:

MetadataMap {
    private Map internalMap;
    ...
    boolean contains(Serializable key);
    <T extends Serializable> T get(Serializable key, Class<T> target);
    // same for put
}

then isolate the use of converters inside it.
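Fleshed out as a compilable, stdlib-only sketch, that outline could look roughly like this. Keys are String here to match the Map<String,Serializable> getMetadata() signatures, and the reflective `valueOf` lookup is a stand-in for the GeoTools Converters call a real implementation would make:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// MetadataMap as its own class wrapping an internal Map, with the
// converter use isolated inside the typed get.
public class MetadataMap implements Serializable {

    private final Map<String, Serializable> internalMap = new HashMap<>();

    public void put(String key, Serializable value) {
        internalMap.put(key, value);
    }

    public boolean contains(String key) {
        return internalMap.containsKey(key);
    }

    // Typed access: callers never cast, so values depersisted as Strings
    // don't cause ClassCastExceptions.
    public <T extends Serializable> T get(String key, Class<T> target) {
        Serializable value = internalMap.get(key);
        if (value == null || target.isInstance(value)) {
            return target.cast(value);
        }
        try {
            // stand-in for Converters.convert(value, target)
            return target.cast(target.getMethod("valueOf", String.class)
                    .invoke(null, value.toString()));
        } catch (Exception e) {
            return null; // can't convert
        }
    }
}
```

The point of the design is that conversion policy lives in one place; callers just write `metadata.get("foo", Integer.class)` and never see the storage type.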

Hum.... uh?
I looked in CatalogInfo and found:

Map<String,Serializable> getMetadata();

in ServiceInfo:

Map<String,Serializable> getMetadata();

Or else... you mean we should create the MetadataMap
class and replace those Map<String,Serializable>
references? Sounds like a good idea to me.

yup

Cheers
Andrea

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Justin Deoliveira wrote:

I have gone and implemented the map as based on this thread. I would love a review:

I've posted my feedback on the issue. Basically it looks good, I just keep on thinking there's no need for MetadataMap to implement java.util.Map. The proposed change is that either it just doesn't, and uses an internal Map as its data structure, getting rid of all the Map methods that are not going to be used, or it's abstracted out as an interface and MetadataMapImpl is free to extend HashMap<String, Serializable>.

My 2c.-

Gabriel

http://jira.codehaus.org/browse/GEOS-3164

-Justin

--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Thanks Gabriel. I replied to the feedback.

I don't have too strong a preference here about making the MetadataMap a map or not. I would lean toward keeping it a map, but I could be persuaded the other way if your preference is a strong one.

Anyways, maybe Andrea can be a tie breaker.

--
Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Justin Deoliveira wrote:

Thanks Gabriel. I replied to the feedback.

I don't have too strong a preference here about making the MetadataMap a map or not. I would lean toward keeping it a map, but I could be persuaded the other way if your preference is a strong one.

Anyways, maybe Andrea can be a tie breaker.

Slight preference for a Map since that is a well known interface
that can be used in other contexts, and does not require writing
XStream adapters.

Cheers
Andrea

--
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Fair enough, good for me then.

Gabriel

--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.