[Geoserver-devel] Configuration API

Hello all,

In working on the RESTful Configuration API project (see
http://docs.codehaus.org/display/GEOSDOC/RESTful+Configuration+API) I've come
across some difficulty with the existing (Java) API for dealing with
configured data. Initially, I had been basing the code on the
org.vfny.geoserver.config.* package, but I've been informed that the 'core'
API I should be using is in org.vfny.geoserver.global.* (the previous package
being essentially a one-off done for the current configuration interface).
However, when trying to convert my code I've found that the
org.vfny.geoserver.global.* classes are mostly immutable. As a temporary
solution I've started to use the DTO objects in
org.vfny.geoserver.config.dto.*, which do provide setters, and then creating
the corresponding immutable object from the populated DTO. Is there a better
way? Should I modify/subclass the existing configuration objects to provide
setters? And as long as I'm bothering the dev list, how do I save the
configuration to disk?
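
For concreteness, the round trip I'm doing looks roughly like this (a
minimal sketch; the class names and constructor are illustrative, not the
exact API):

    // Mutate a DTO, then build the immutable configuration object from it.
    // All names here are illustrative stand-ins.
    DataStoreInfoDTO dto = new DataStoreInfoDTO();
    dto.setId("topp");          // DTOs expose setters...
    dto.setEnabled(true);

    // ...the corresponding (immutable) configuration object is then
    // constructed from the populated DTO
    DataStoreInfo info = new DataStoreInfo(dto);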

Thanks,
David Winslow

Hi David,

I understand the confusion. For every "persisted object" there are
currently three objects: global (core), config, and DTO. Quite messy and
hard to maintain, as you are finding out.

The reason that the global config objects are immutable is that they
are not intended to be modified directly. So I don't think simply adding
setters will do what you need, since anything you set will get
overwritten the next time the system is updated.
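
To illustrate the point (a hedged sketch with made-up names, not the
actual code): the global layer rebuilds its objects from DTO state on every
load, so a value set directly on a global object is discarded at the next
rebuild.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: why setters added to the global objects wouldn't stick.
    public class GlobalLayer {
        private Map<String, DataStoreInfo> dataStores =
            new HashMap<String, DataStoreInfo>();

        // called on every configuration (re)load
        public void load(DataDTO dto) {
            Map<String, DataStoreInfo> rebuilt = new HashMap<String, DataStoreInfo>();
            for (DataStoreInfoDTO storeDto : dto.getDataStores()) {
                rebuilt.put(storeDto.getId(), new DataStoreInfo(storeDto));
            }
            // anything set directly on the old instances is lost here
            this.dataStores = rebuilt;
        }
    }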

This is just my opinion (others may feel differently), but I would think
that using your current method is the best way to proceed. I don't think
hacking up the config objects is a good approach, especially since
redoing the config objects is something that has been waiting in the wings.

Perhaps we can use this as a use case to actually do the work!!

Which brings up another point for the rest of the list. I have been
thinking lately that it might be possible to do the config work without
disrupting all of GeoServer. We have the new model more or less
implemented on the configuration spike I was playing with quite some
time ago. I do not believe that it would be hard to wrap the new objects
in the old ones.

This would give us the best of both worlds. It would serve people who
need a nice API for programmatically working with the config objects,
and it would also prevent a huge mass update to the rest of the code base.
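
Concretely, the wrapping might look something like this (a hedged sketch;
names are illustrative and the real interfaces are of course much bigger):

    // Old config object as a thin facade over a new model object.
    public class DataStoreConfig {             // old API, source compatible
        private final DataStoreInfo delegate;  // new model object underneath

        public DataStoreConfig(DataStoreInfo delegate) {
            this.delegate = delegate;
        }

        public String getId() {
            return delegate.getId();           // reads go through to the new model
        }

        public void setEnabled(boolean enabled) {
            delegate.setEnabled(enabled);      // writes too, so state never diverges
        }
    }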

Just a thought.

-Justin


I don't think I saw this email before I left, or at least didn't process it, but recently I was thinking a decent bit about how to pull off the move to the new config. A friend said that one of his co-workers was really good at encapsulating bad code and APIs behind newer APIs, so I got to thinking about whether we might be able to do such a thing - which I think is what you're suggesting here?

You say 'wrap the new objects in the old ones' - what exactly do you mean by that? My thought was that we'd have the new interfaces serve as a facade over the old way. So if you called setXXX then the object that implements the interface would modify the DTO and have the global object load it.
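
In other words, something like this (a hedged sketch; the interface and DTO
names are made up for illustration):

    // New interface as a facade over the old DTO + global machinery.
    public class FeatureTypeFacade implements FeatureTypeInfo { // new interface
        private final FeatureTypeConfigDTO dto;                 // old mutable DTO
        private final GlobalConfig global;                      // old global layer

        public FeatureTypeFacade(FeatureTypeConfigDTO dto, GlobalConfig global) {
            this.dto = dto;
            this.global = global;
        }

        public void setTitle(String title) {
            dto.setTitle(title); // the change goes into the old DTO...
            global.load(dto);    // ...and the global object is rebuilt from it
        }
    }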

My worry is that doing it that way could be inefficient, creating new objects whenever you want to change a parameter. But the config isn't changed all that often, so hopefully it would be ok.

My other worry is persisting - the REST API would change things directly, but we have to be sure that the changes also get saved. Does the REST API have a 'save' method, or do changes always get written to disk? If the latter, how often do we write it out? A shutdown hook would be good, but wouldn't cover crashes...
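
For reference, the two obvious shapes for this look something like the
following (a hedged sketch; all the names are illustrative):

    // Option 1: write-through -- persist after every successful change,
    // so even a crash loses at most the change in flight.
    public void save(LayerInfo layer) throws IOException {
        catalog.apply(layer);
        persister.write(catalog);
    }

    // Option 2: a JVM shutdown hook -- covers orderly shutdown only,
    // which is exactly the crash gap mentioned above.
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try {
                persister.write(catalog);
            } catch (IOException e) {
                // not much to do this late in shutdown
            }
        }
    });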

Anyways, my general point is that I think it may make sense to start experimenting with the new proposal, perhaps as another community module that the REST config stuff would depend upon?

I'll work on figuring out resources for this; I am going to try to make an effort in the next few months to have us invest time in the core changes that we've wanted for a while in GeoServer. Everyone has done an amazing job in the last year of adding lots of awesome new features for users, so I'm hoping this winter is a good time to take a pause from some of that (while we work on documenting, promoting, and building front ends for the great new stuff) to focus on the more developer-focused improvements that we've all been wanting for a while. I just want to be sure we're smart about it and do it in a risk-averse way that doesn't lead to a quagmire.

best regards,

Chris


Actually what I am talking about is a bit different - basically the
opposite: encapsulate the new API inside the old one.

Without going into too much technical detail, what I mean is taking the
old config/DTO objects and making them wrappers around our shiny new
model objects. This way the UI and services can continue to happily work
against the old APIs and not have to worry about changing over right
away, but under the hood the actual work and persistence is being
done by the new model.

The wins to this approach (in my mind) are:

* not a massive update to the code base, updates can be done gradually
* by wrapping the new model up in the old we ensure that it maintains
the same behaviour
* new "clients" and "services" we implement can work directly against
the new model

So the new REST stuff can work directly against the new model. And
because the old model API is basically just a call through to the new
model APIs, everything stays in sync.
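
To make the "stays in sync" point concrete (a hedged sketch, illustrative
names): old wrapper and new client hold the same model instance, so there is
no second copy of the state to reconcile.

    DataStoreInfo model = catalog.getDataStore("topp");  // new model object

    DataStoreConfig oldApi = new DataStoreConfig(model); // old API wrapping it
    oldApi.setEnabled(false);                            // call-through mutates the model

    // a new-style client (e.g. the REST layer) sees the change immediately:
    // model.isEnabled() is now false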

The approach you mention is kind of the opposite. Basically we could
take the old objects, and either wrap them up in adapters that implement
the new interfaces, or just have them implement the new interfaces
directly.

That is a valid approach as well. However, I don't think it is as clean
as the approach I suggest, since we are just dressing the old "core" up a
bit instead of replacing it with a new one. It might be a better
approach just in terms of overall impact... however, it still ties us to
the limitations of the old core.

Anyways, enough ranting from me. Not sure if I made myself any clearer.

-Justin



Agreed, I like this option better too. It also allows us to develop a new user interface while keeping the old one around for quick checks.

Cheers
Andrea

I'm with Justin on this; making the current configuration objects into a
wrapper around the new stuff seems like the easy way to go (easy as
in, "makes it possible to do this incrementally," not as in quick-and-dirty).

As far as the REST API goes, it seems like the behavior most in keeping
with the REST philosophy would be to have it auto-save every change.
Perhaps we could have /api/stage/ for temporary, non-persisted changes
and /api/persistent/ for settings that should be saved (and have the entire
API available under both). It would be a little weird, though, since you
would want to be able to have temporary layers and such using info from
persisted datastores, etc., so clients would have to be aware of both
hierarchies at the same time.
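
A hedged sketch of what the dual hierarchy could mean at the dispatch level
(the paths are the ones proposed above; everything else is made up):

    // Same resource tree under both prefixes; only persistence differs.
    public void handleWrite(String path, Object resource) throws IOException {
        boolean persist = path.startsWith("/api/persistent/");
        String name = path.replaceFirst("^/api/(stage|persistent)/", "");

        catalog.update(name, resource);  // apply to the running configuration
        if (persist) {
            persister.write(catalog);    // /api/persistent/ changes hit disk
        }                                // /api/stage/ changes stay in memory
    }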

-David


Cool, either approach works fine for me. I see advantages both ways, but I'm not the one who's going to be working with the code directly, so I defer to you guys. I think I may have some questions on specifics, but how about you go ahead and make a GSIP, Justin? Can it have a couple of sample classes to show it?

And I guess you're saying that this will involve a new persistence mechanism as well? That will be the big change, no? Or are you going to have it use the old reader/writer too? Did you ever test how xstream handles backwards compatibility? For example, if we add a bunch of new options, will it be able to read an older file that doesn't have all of them? I imagine we should write some good unit tests to support this, since that's an area where small changes creep in. But yeah, let's figure out a scope of work and I can figure out what resources to make available. The one other thing we want is to be sure that this makes sense as the first step in transitioning the core config and UI. I'm pretty sure it does; I just want to be sure that everyone is in agreement, as we don't have unlimited resources to make the change happen, so we need to do it in the right way.

best regards,

Chris


David Winslow wrote:

> As far as the REST API goes, it seems like the behavior that would be most in keeping with the REST philosophy would be to have it auto-save every change. Perhaps we could have /api/stage/ for temporary, non-persisted changes and /api/persistent/ for settings that should be saved (and have the entire api available under both). It would be a little weird though, since you would want to be able to have temporary layers and such using info from persisted datastores, etc, so clients would have to be aware of both hierarchies at the same time.

So we need to be sure that our new persistence mechanism will work with auto-saving on every change. I agree that auto-saving makes more sense. I think two hierarchies is probably overkill; we should just think about handling use cases like temporary layers.

I believe the apply/save/load thing isn't so much used for temporary layers - I think the way it's useful is as a sort of 'save point': if you screw up the configuration you can hit 'load' and bring back the last good version. I don't know how difficult it would be, but one concept we could introduce is a way to point at different configuration locations. MapServer does this with different mapfiles, and people like it a lot. Then you could try out a new config in a different location, and if you didn't like it you could just put your old one back in. Though I suppose that could get complicated with the REST API, and indeed it's just another complication at a time when we should have as few moving parts as possible. I suppose the best approach would be to start with auto-save and no save-point concept, and add that in later.

Chris


Chris Holmes wrote:

> Cool, either approach works fine for me. I see advantages both ways,
> but I'm not the one who's going to be working with the code directly, so
> I defer to you guys. I think I may have some questions on specifics,
> but how about you go ahead and make a GSIP Justin? Can have a couple
> sample classes to show it?

There is already a full blown GSIP for the work here.

http://docs.codehaus.org/display/GEOS/Configuration+Proposal

There are links to code and a maven module you can build, which includes
the model itself as well as an xstream and hibernate persistence layer.

I can add a section on our integration strategy. However, it would be
nice if the rest of the proposal got another review, especially from Andrea.

> And I guess you're saying that this will involve a new persistence
> mechanism as well? That will be the big change, no?

That depends on whether we feel like breaking backwards compatibility with
our current XML configuration file format. If we say OK, let's break it,
then yes, I would propose moving to a new persistence mechanism (xstream),
which is a lot simpler (about 3 lines of code to persist, 3 to read).

In terms of backwards and forwards compatibility... xstream is pretty
good at being backwards compatible. I.e., if you add some fields to a
class and load from a previous configuration, it will happily create the
object, leaving those fields null. Out of the box it is a little less
forgiving when you remove fields... but there are options and an API for
working around that.
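
For the record, the usage is roughly this (a sketch against the stock
XStream API; GeoServerInfo is just a stand-in for whatever model class we
end up persisting):

    import com.thoughtworks.xstream.XStream;
    import java.io.FileReader;
    import java.io.FileWriter;

    public class XStreamSketch {
        public static void main(String[] args) throws Exception {
            GeoServerInfo config = new GeoServerInfo(); // illustrative model class
            XStream xstream = new XStream();

            // persist -- roughly the "3 lines" mentioned above
            FileWriter out = new FileWriter("catalog.xml");
            xstream.toXML(config, out);
            out.close();

            // read back; fields added to GeoServerInfo after the file was
            // written are simply left at their defaults
            GeoServerInfo loaded =
                (GeoServerInfo) xstream.fromXML(new FileReader("catalog.xml"));
        }
    }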

However, to be clear, the format xstream saves its output in is not really
intended to be human readable. It's basically just a dump of Java objects
in XML.

Also keep in mind that part of the proposal above was the development of a
tool to load the new model objects from our current
catalog.xml/services.xml format.

-Justin


Forgot to address this issue in my first reply:

> My worry is that doing it that way could be inefficient, creating new
> objects whenever you want to change a parameter. But the config isn't
> changed all that often, so hopefully it would be ok.

I don't think this will be a big issue... it's not much more inefficient
than what we do today; there is already a lot of object recreation going on.

> My other worry is persisting - the rest api would change things
> directly, but we have to be sure that the changes also get saved. Does
> the rest api have a 'save' method? Or do changes always get written to
> disk? If so then how often do we write it out? A shutdown hook would
> be good, but wouldn't cover crashes...

I have always worried about this. Making our current persistence format
"API" ties us to it, and for something like xstream this is not a nice API
to hand people for hacking their config. Dealing with changes inside of
GeoServer when the format changes is one thing... clients having to deal
with those same changes strikes me as a bad idea.

I have always been more in favor of a well-thought-out XML schema which
serves as an "exchange format" between the model running in GeoServer and
remote clients. When the user does a REST POST, for instance, something on
the GeoServer side reads it, applies it to the model, and persists it to
disk. Maybe this exchange format looks like what our current catalog.xml
elements look like.
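
The flow would be roughly this (a hedged sketch; every name is illustrative,
and the exchange reader is exactly the piece that would need to be written):

    // Handling a REST POST with a dedicated exchange format: the wire
    // format is decoupled from whatever we persist internally.
    public void handlePost(InputStream body) throws Exception {
        FeatureTypeInfo info = exchangeReader.read(body); // 1. parse the schema-defined document
        catalog.add(info);                                // 2. apply it to the running model
        persister.write(catalog);                         // 3. persist in the internal format
    }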


--
Justin Deoliveira
The Open Planning Project
http://topp.openplans.org

It is worth noting that our current system has a few benefits:

Concurrency:

The biggest one is concurrency. Because you are working with "copied"
objects that get applied in one big lump when you click Apply, you
don't have to worry about concurrency; everything gets updated at the
same time.

If we provide direct access to model objects this is no longer the
case... and we have to make sure we follow strict data access rules,
like always saving through a data access object, etc.

Synchronization:

Again, one of the benefits of doing one big apply is that you don't have
to worry about keeping things in sync, because they get updated every
time. With a direct access model we will need to come up with a good
event model for keeping things synchronized (which indeed was included in
the original proposal).

Not show stoppers... but things to keep in mind as we shift paradigms.
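
For the event model, something along these lines is roughly what I have
in mind (a minimal sketch, all names made up):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Notified whenever a config object is added, changed, or removed. */
interface ConfigListener {
    void changed(Object configObject);
}

/** Central registry the model fires events through after every mutation. */
class ConfigEvents {

    private final List<ConfigListener> listeners =
            new CopyOnWriteArrayList<ConfigListener>();

    void addListener(ConfigListener l) {
        listeners.add(l);
    }

    /** Called by the model once a change has been applied. */
    void fireChanged(Object configObject) {
        for (ConfigListener l : listeners) {
            // e.g. a service drops its cached state for this object
            l.changed(configObject);
        }
    }
}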

-Justin

Chris Holmes wrote:

As far as the REST API goes, it seems like the behavior that would be
most in keeping with the REST philosophy would be to have it auto-save
every change. Perhaps we could have /api/stage/ for temporary,
non-persisted changes and /api/persistent/ for settings that should be
saved (and have the entire api available under both). It would be a
little weird though, since you would want to be able to have temporary
layers and such using info from persisted datastores, etc, so clients
would have to be aware of both hierarchies at the same time.

So we need to be sure that our new persistence mechanism will work with
auto-saving every time. I agree that auto-saving makes more sense. I
think two hierarchies are probably overkill; we should just think about
handling use cases like temporary layers.

I believe the apply/save/load thing isn't so much used for temporary
layers - I think the way it is useful is as a sort of 'save point'.
If you screw up the configuration you can hit 'load' and bring up the
last good version. I don't know how difficult it would be, but one
concept we could introduce would be a way to point at different
configuration locations. MapServer does this, with different mapfiles,
and people like it a lot. Then you could try out a new config in a
different location, and if you didn't like it you could just put your
old one back. Though I suppose that could get complicated with the rest
api. And indeed it's just another complication at a time when we should
have as few moving parts as possible. I suppose the best would be to just
start with auto-save and no save-point concept, and add that in later.

Chris

-David

On Tuesday 01 January 2008 17:45:03 Justin Deoliveira wrote:

Actually what I am talking about is a bit different. Basically the
opposite: wrap the old api around the new one.

Without going into too much technical detail, what I mean is taking the
old config/dto objects and making them wrappers around our shiny new
model objects. This way the ui and services can continue to happily work
against the old apis and not have to worry about changing over right
away. But under the hood, the actual work and persistence is being
done by the new model.

The wins of this approach (in my mind) are:

* not a massive update to the code base, updates can be done gradually
* by wrapping the new model up in the old we ensure that it maintains
the same behaviour
* new "clients" and "services" we implement can work directly against
the new model

So the new rest stuff can work directly against the new model. And
because the old model api is basically just a call through to the new
model apis, everything stays in sync.
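
Concretely, I am picturing something like this (the class names here are
illustrative stand-ins, not the actual ones):

/** Shiny new model object (illustrative). */
class FeatureTypeInfo {
    private String title;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

/**
 * The old config object, gutted into a thin wrapper: the old api stays
 * intact, but every call goes straight through to the new model, so the
 * two can never drift out of sync.
 */
class FeatureTypeConfig {

    private final FeatureTypeInfo info;

    FeatureTypeConfig(FeatureTypeInfo info) {
        this.info = info;
    }

    // old signatures, new implementation
    public String getTitle() { return info.getTitle(); }
    public void setTitle(String title) { info.setTitle(title); }
}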

The approach you mention is kind of the opposite. Basically we could
take the old objects, and either wrap them up in adapters that implement
the new interfaces, or just have them implement the new interfaces
directly.

Which is a valid approach as well. However, I don't think it is as clean
as the approach I suggest, since we are just dressing the old "core" up a
bit instead of replacing it with a new one. It might be a better
approach just in terms of overall impact... however it still ties us to
the limitations of the old core.

Anyways, enough ranting from me. Not sure if I made myself any more
clear.

-Justin

Chris Holmes wrote:

I don't think I saw this email before I left, or at least didn't process
it, but recently I was thinking a decent bit about how to pull off the
move to the new config. A friend said that one of his co-workers was
really good at encapsulating bad code and APIs behind newer apis, so I
got to thinking about whether we might be able to do such a thing, which
I think is what you're suggesting here?

You say 'wrap the new objects in the old ones' - what exactly do you
mean by that? My thought was that we'd have the new interfaces serve as
a facade to the old way. So if you called setXXX then the object that
implements the interface would modify the DTO and have the global object
load it.
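
In other words, something like this sketch (all names invented for
illustration):

/** Old mutable dto (stub for illustration). */
class FeatureTypeDTO {
    private String title;

    String getTitle() { return title; }
    void setTitle(String title) { this.title = title; }
}

/** Old 'global' object, rebuilt wholesale from a dto (stub). */
class GlobalFeatureType {
    private String title;

    void load(FeatureTypeDTO dto) { this.title = dto.getTitle(); }
}

/** New-style api implemented as a facade over the old dto/global pair. */
class FeatureTypeFacade {

    private final FeatureTypeDTO dto = new FeatureTypeDTO();
    private final GlobalFeatureType global = new GlobalFeatureType();

    public void setTitle(String title) {
        dto.setTitle(title); // mutate the dto...
        global.load(dto);    // ...then reload the global object from it
    }
}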

My worry is that doing it that way could be inefficient, creating new
objects whenever you want to change a parameter. But the config isn't
changed all that often, so hopefully it would be ok.

My other worry is persisting - the rest api would change things
directly, but we have to be sure that the changes also get saved. Does
the rest api have a 'save' method? Or do changes always get written to
disk? If so then how often do we write it out? A shutdown hook would
be good, but wouldn't cover crashes...


+1 for allowing load/save from different locations.

Autosaving to a new space also means that if you do snarfle things up you can keep a record of what went wrong, and still go back to the last working copy.

My own experience is that I want to do regression testing against half a dozen different configs, and I'm not even playing with coverages, so this would be a useful thing to do regardless.

It would also avoid having to manually edit env variables or web.xml - a single env variable could point to the menu of configs to offer, and the welcome page could throw up an initial config load option.

RA


Justin Deoliveira wrote:

Chris Holmes wrote:

Cool, either approach works fine for me. I see advantages both ways,
but I'm not the one who's going to be working with the code directly, so
I defer to you guys. I think I may have some questions on specifics,
but how about you go ahead and make a GSIP, Justin? Can it have a couple
of sample classes to show it?

There is already a full blown GSIP for the work here.

http://docs.codehaus.org/display/GEOS/Configuration+Proposal

There are links to code and a maven module you can build which includes
the model itself, as well as an xstream and hibernate persistence layer.

I can add a section on our integration strategy. However, it would be
nice if the rest of the proposal got another review, especially from Andrea.

I sure will.

And I guess you're saying that this will involve a new persistence
mechanism as well? That will be the big change, no?

Depends on if we feel like breaking backwards compatibility with our
current xml configuration file format. If we say ok, let's break it, then
yes, I would propose to move to a new persistence mechanism (xstream),
which is a lot simpler (like 3 lines of code to persist, 3 lines to read).
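
i.e. something like this (a sketch; WorkspaceInfo is a made-up stand-in
for one of our model classes):

import java.io.FileReader;
import java.io.FileWriter;

import com.thoughtworks.xstream.XStream;

public class XStreamSketch {

    /** Made-up model class. */
    public static class WorkspaceInfo {
        public String name;
    }

    public static void main(String[] args) throws Exception {
        XStream xstream = new XStream();
        // alias gives friendlier element names than the full class name
        xstream.alias("workspace", WorkspaceInfo.class);

        WorkspaceInfo ws = new WorkspaceInfo();
        ws.name = "topp";

        // persist: one call
        xstream.toXML(ws, new FileWriter("workspace.xml"));

        // read back: one call
        WorkspaceInfo loaded =
                (WorkspaceInfo) xstream.fromXML(new FileReader("workspace.xml"));
        System.out.println(loaded.name);
    }
}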

I'm for breaking compatibility and calling it GeoServer 2 if it needs
to be. The main point of changing the config subsystem is to kill a system
that makes us avoid any config change. It must be easy to add to and alter
the current configuration. Trying to keep backwards compatibility would
make that hard.

Oh, one thing. The natural output of XStream is a single big xml file,
not a set of files like now. When object A has a reference to object B,
xstream follows it and serializes B too, and so on. This has obvious
scalability issues (no more, no less than the current system, which
writes multiple files in a single shot, without any way to persist a
partial change).

There is a way to create multiple files in XStream, but I'm not
sure it's what we want; see http://jira.codehaus.org/browse/XSTR-52

If we want GeoServer to scale config-wise, a lazy load/lazy save
mechanism is required, and that probably needs a database backend
(see the Hibernate side of the proposal). But... are we going to maintain
it if it's not the primary persistence mechanism?

One more thing. One of the major drawbacks of the current config system
is that it reloads all datastores and all feature types each time one
does apply or load. This is quite a problem when you have large numbers
of configured feature types, or slow datastores (think remote wfs).
The new config system should be careful to reload only the datastores and
feature types whose configuration changed (event driven; I kind of remember
the new config system is, but I'm not sure).

However, to be clear, the format xstream saves its output in is not really
intended to be human readable. It's basically just a dump of java objects
in xml.

There are ways to alias the class names so that the xml looks better,
but yes, one should not think of the files as an xml-schema-based
information model, just as a way to dump a net of java objects into
a text file whose format happens to be xml. Btw, XStream supports JSON
persistence too in the latest releases. Maybe this could make it even
clearer that the configuration is not meant to be subject to a schema
and so on?

I believe if the REST api is simple and general enough to use, we can
just discourage people from fiddling with our persisted configuration files.

Cheers
Andrea

Justin Deoliveira wrote:

The biggest one is concurrency. [...]

Just a quick note: with the current configuration system nothing
prevents two users from altering the in-memory config at the same time,
maybe stepping on each other's toes.

Some persistence mechanisms have ways to avoid this. For example,
Hibernate has an optimistic locking mechanism that makes sure
you're not saving over another user's changes: basically each
row has a version stamp; when you save, you also check that the
db still has the same version stamp you loaded. If not,
Hibernate will throw an exception and the application will have
to deal with it, usually by telling the user someone got in the
middle and that they have to review the changed objects and make
the edits again.
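
In annotation terms it's just a version field (a sketch; FeatureTypeRecord
is an invented entity name, not one of our real classes):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class FeatureTypeRecord {

    @Id @GeneratedValue
    private Long id;

    /** Version stamp: checked and incremented by hibernate on every update. */
    @Version
    private int version;

    private String title;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

If another admin saved in the meantime, committing raises a
StaleObjectStateException (an OptimisticLockException under JPA) and the
application gets its chance to tell the user.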

This is another side of scaling: scaling up with concurrent
administrators. With a file-based approach it's hard; with a db
backend it comes more naturally, since you can put quite a few
integrity checks in the db itself, the db manages concurrent
access by its very nature, and there are libraries handling
optimistic locking.

Cheers
Andrea

So I've been thinking and researching this stuff a decent bit recently, since frankly change scares me :)

The XStream stuff looks pretty great. I like that you can alias objects for more readable xml names, as that's fairly important to me. I don't need incredibly readable xml files, but I think it's nice if one can play around with them and figure things out. Another promising sign to me is that a number of the rest java plugins make use of xstream - I think both the grails one and the struts 2 one do.

I'm also thinking that it may make sense to use hibernate for our default persistence mechanism. The lazy loading stuff seems really nice, and hibernate really has become the standard way to persist objects. One question I have is whether we should consider using the Java Persistence API, which I believe hibernate supports, or if we should just use hibernate's api directly.
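
Code-wise the two don't look very different; a sketch (the "geoserver"
persistence unit name and the FeatureTypeRecord entity are made up):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class PersistenceSketch {

    /** Via the standard JPA api, with hibernate as the provider underneath. */
    static void saveWithJpa(Object record) {
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("geoserver");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        em.persist(record);
        em.getTransaction().commit();
        em.close();
    }

    /** Via hibernate's own api. */
    static void saveWithHibernate(Object record) {
        SessionFactory sf = new Configuration().configure().buildSessionFactory();
        Session session = sf.openSession();
        Transaction tx = session.beginTransaction();
        session.save(record);
        tx.commit();
        session.close();
    }
}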

I agree that if the rest api is easy enough to use then it could decrease the need for people to mess with the xml files directly. But I also hope we have a way to maintain the portability of configs. I think we'd need a rest api endpoint that would take a zip of config files, or else one big config file. One should be able to interact with smaller endpoints, but sometimes you may just want to upload a whole configuration from another place. Hopefully we can use xstream in a smart way, to communicate both through smaller rest api endpoints and as a whole.
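
Server side that endpoint could be fairly dumb - a sketch (the apply()
dispatch and entry handling are invented, only to show the shape of it):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

/** Unpacks a posted zip of config files, entry by entry. */
public class ConfigUpload {

    public void handleZipPost(InputStream body) throws IOException {
        ZipInputStream zip = new ZipInputStream(body);
        ZipEntry entry;
        while ((entry = zip.getNextEntry()) != null) {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = zip.read(buf)) != -1) {
                bytes.write(buf, 0, n);
            }
            // hand each file (catalog.xml, styles/*.sld, templates, ...)
            // to whatever loader understands that kind of file
            apply(entry.getName(), bytes.toByteArray());
        }
    }

    void apply(String path, byte[] contents) {
        // hypothetical dispatch into the config loaders
    }
}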

I am curious how things like our SLD files might play into this, though. And there's also stuff like our templates, which aren't xml. It makes sense to expose these through a rest api for sure. But I guess it speaks to the fact that we'd have to do some zip-the-data-dir type thing if we want to pass around full configs.

Chris
