[Geoserver-devel] WFS xforming

Chris,

You recently mentioned (in the context of wiki-like editing of spatial
data) having the WFS keep track of feature updates and deletes (i.e.
keep some sort of historical information) so that changes could either
be 'rolled back' or you could ask for a view at a particular point in time.

I was thinking this could be done with a simple web service that would
take an incoming WFS <transaction> and transform it into a more complex
set of sub-actions.

For example, an update could move the current database row to another
table (with a timestamp), then update the actual row. These operations
can easily be expressed in terms of other WFS requests. In the above
case, the update request gets translated into a GetFeature (from the
main table), an Insert (to back up that row in another table), then an
Update (of the main table). If you were really paranoid, you could also
throw in some locks!
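To make the splitting concrete, here is a rough sketch of how one
incoming Update could be expanded into the three sub-actions. This is
not GeoServer code; the names (Action, expand_update, the detail
fields) are all made up for illustration:

```python
# Sketch of the "transaction transform" idea: expand one incoming WFS
# Update into the sub-actions a history-keeping wrapper would issue.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Action:
    op: str          # "GetFeature", "Insert", "Update", ...
    type_name: str   # target feature type / table
    detail: dict

def expand_update(update: Action, history_type: str) -> list:
    """Rewrite one Update into GetFeature + Insert(history) + Update."""
    stamp = datetime.now(timezone.utc).isoformat()
    return [
        # 1. fetch the current row(s) the update would touch
        Action("GetFeature", update.type_name,
               {"filter": update.detail["filter"]}),
        # 2. back up the fetched row(s) into the history table, timestamped
        Action("Insert", history_type,
               {"from": update.type_name, "timestamp": stamp}),
        # 3. finally apply the original update to the main table
        update,
    ]

upd = Action("Update", "roads",
             {"filter": "fid=roads.42", "set": {"name": "Main St"}})
subs = expand_update(upd, "roads_history")
print([a.op for a in subs])  # ['GetFeature', 'Insert', 'Update']
```

The same pattern would apply to deletes (GetFeature + Insert into
history + Delete), with inserts needing no backup step at all.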

There's all sorts of different ways of keeping historical information.

To the outside world it's just a normal WFS-T, so clients don't even
have to know that something fancy is going on, which is super sweet for
application developers.

I also think this (having a meta-service wrap a more basic service in a
chain) is a good way of doing the current validation, instead of where
it's currently performed (but I could be convinced otherwise on this).

The above request transforms could be done with XSLT, so it could
actually be extremely simple to put together. The main problem is that
you might need some type of two-phase commit to handle the case in
which one of the sub-steps fails, because I don't think you can have a
single transaction on a set of featuretypes (and the backup table and
main table would almost certainly be two featuretypes). This is a
solvable problem.
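As a sketch of what the all-or-nothing handling might look like if a
real two-phase commit isn't available: run the sub-actions in order
and, if any one fails, undo the ones already applied. Here apply/undo
are stand-ins for issuing and reversing the WFS sub-requests; all names
are hypothetical:

```python
def run_transaction(actions, apply, undo):
    """Apply sub-actions in order; on any failure, undo in reverse."""
    done = []
    try:
        for a in actions:
            apply(a)
            done.append(a)
    except Exception:
        for a in reversed(done):   # compensate in reverse order
            undo(a)
        return False
    return True

log = []

def flaky_apply(a):
    # Pretend the second sub-step (the main-table update) fails.
    if a == "update-main":
        raise RuntimeError("sub-step failed")
    log.append(("do", a))

ok = run_transaction(["backup-row", "update-main"], flaky_apply,
                     lambda a: log.append(("undo", a)))
print(ok)   # False
print(log)  # [('do', 'backup-row'), ('undo', 'backup-row')]
```

Compensating actions like this are weaker than a true transaction (a
reader could observe the intermediate state), which is exactly why
locks, or a single transaction spanning both featuretypes, would be
nicer.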

What do you think?

dave

----------------------------------------------------------
This mail sent through IMP: https://webmail.limegroup.com/

Very much agree; I was thinking many of the same thoughts when I wrote
this: http://lists.eogeo.org/pipermail/opensdi/2005-April/000076.html
but it ended up as more of a brain dump.

I think expressing the insert stuff as another set of WFS-T operations
is definitely key. I like your thought of doing it as a normal WFS-T
transaction and then just performing the sub-actions. Indeed, the nice
thing is that the second table with the history stuff is _also_ available
as WFS, and could be used by clients to return the revision history.

I very much agree with you re: doing validation there, instead of where
it's lumped in now. Indeed, I think eventually it would be great if a
user could 'compose' their insert chain. You could introduce other
things into the workflow, like peer review: before the real commit,
the change goes to another temp table where someone else has to review
it and sign it off before it modifies the main table. Validation,
attribution, peer review, revision history, and more would just be
options that a user chooses depending on how much process they want in
their spatial data maintenance. And yes, for application developers all
that should be exposed is WFS-T.

The one thing we really need, though, that is currently missing, is
attribution. As a hacky approach I was thinking of doing a first run
with a _slightly_ different WFS-T request that had a field for
'username'. Eventually (or perhaps sooner) we could do authentication,
which would be nice for security reasons anyway, even if we allowed
everyone to sign up for an account...

The one other thing that I feel would be nice to have is a 'comment'
field, like in cvs commits, so you can see why someone made a given
change, what their source for the change was, etc.

Though I suppose you could make it a special attribute in the Features
inserted. But that's hacky. Or do we have the possibility of
vendor-specific params in WFS-T?

I'm not 100% sure, but I think that our current transaction architecture
does not commit until all inserts are successful, I believe even across
datastores. If your second or third sub-transaction fails, then they
all roll back. So as long as they're in the same end transaction
request (which can hold mixed inserts and updates), it should be
fine.

Chris


Maybe a little late, but I have to say that the things described below could be very valuable features. Keeping track of changes (versioning, history) would be very powerful, not just for managing the geodata itself but also for integration of geodata into BI systems, I think.

Some more thoughts:
- How to store the changes/requests that have been made? Do developers need to create the tables themselves (think of the rights one has on a database)? Will GeoServer need rights to create these tables? Or can some kind of internal database be used for this (as long as it won't get too big...)? But maybe this is _just_ a practical issue...

- What might be more difficult: how to deal with the featureId? The OGC's definition of a FeatureId implies that it is one unique value for identifying a feature, so one can't make a featureId composed of more than one property (e.g. featureId + timestamp). For the core data (i.e. current features) I don't think this will be an issue, but as soon as one wants to serve the historical data via WFS, how do you manage multiple versions of a feature? You can't use the same id value to refer to a feature's history (filters on a single featureId could return more than one result), because there could be numerous changes to the same feature, think of updates that are performed. Maybe this could be solved by storing the original featureId in the history tables as an attribute and using an internal (meaningless) featureId in the history tables. GetFeature requests on the historical data would then return two featureIds: one that identifies the feature in the history tables (i.e. the id) and one (a _regular_ attribute) that identifies the feature in the current tables.

Maybe I'm on the wrong track. What do you think?
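For what it's worth, here is a toy model of the two-id scheme above
(an internal history id plus the original featureId kept as a regular
attribute). The names (archive, hid, orig_fid) are made up for
illustration, and the "table" is just a Python list:

```python
import itertools

_ids = itertools.count(1)
history = []   # rows in the hypothetical history "table"

def archive(orig_fid, data):
    """Store one more version of a feature in the history table."""
    versions = [r for r in history if r["orig_fid"] == orig_fid]
    row = {
        "hid": "hist.%d" % next(_ids),  # internal, meaningless featureId
        "orig_fid": orig_fid,           # current-table featureId, kept
                                        # as a regular attribute
        "version": len(versions) + 1,
        "data": data,
    }
    history.append(row)
    return row

archive("roads.42", {"name": "High St"})
archive("roads.42", {"name": "Main St"})

# A filter on the original featureId returns every version, while each
# history row still has its own unique id:
matches = [r["hid"] for r in history if r["orig_fid"] == "roads.42"]
print(matches)  # ['hist.1', 'hist.2']
```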

- Another thought/fantasy: keeping track of history would allow for nice tools for managing (versions of) the data. Extending the admin tool with some reports would be nice, e.g. on how many changes have been made, which versions are available, who did what, etc. Using GetFeature requests and XSLT, this would be relatively easy, I think.

regards,
Thijs

At 09:33 23-5-2005, Chris Holmes wrote:


_______________________________________________
Geoserver-devel mailing list
Geoserver-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/geoserver-devel

Quoting Thijs Brentjens <thijs.brentjens@anonymised.com>:

> Maybe a little late, but I have to say that the things described below
> could be very valuable features. Keeping track of changes (versioning,
> history) would be very powerful. Not just for managing the geodata
> itself, but also for integration of geodata in BI systems, I think.

Not late at all. It's still very much on our radar, probably the next
'big' thing we will attempt. And we're definitely open to anyone who
wants to help out.

> Some more thoughts:
> - How to store the changes/requests that have been made? Do developers
> need to create the tables themselves (think of the rights one has on a
> database)? Will GeoServer need rights to create these tables? Or can
> some kind of internal database be used for this (as long as it won't
> get too big...)? But maybe this is _just_ a practical issue...

I think giving GeoServer the rights to create the tables would be best,
as then it could make use of foreign keys and whatnot. But you could
also have scripts that an admin adds; after creation of the tables,
GeoServer only needs to be able to add rows. I think we'd eventually
want to go for a well-known table structure, a GML application schema
for the meta versioning table.

> - What might be more difficult: how to deal with the featureId? The
> OGC's definition of a FeatureId implies that it is one unique value
> for identifying a feature, so one can't make a featureId composed of
> more than one property (e.g. featureId + timestamp). For the core
> data (i.e. current features) I don't think this will be an issue, but
> as soon as one wants to serve the historical data via WFS, how do you
> manage multiple versions of a feature? You can't use the same id
> value to refer to a feature's history (filters on a single featureId
> could return more than one result), because there could be numerous
> changes to the same feature. Maybe this could be solved by storing
> the original featureId in the history tables as an attribute and
> using an internal (meaningless) featureId in the history tables.
> GetFeature requests on the historical data would then return two
> featureIds: one that identifies the feature in the history tables
> (i.e. the id) and one (a _regular_ attribute) that identifies the
> feature in the current tables.
>
> Maybe I'm on the wrong track. What do you think?

Actually, the WFS spec already deals with this; I just think that no one
implements it. One of the elements of a Query is 'featureVersion'. So
I believe you can just use the same featureId and specify the
featureVersion. From the spec:

'The featureVersion attribute is included in order to accommodate
systems that support feature versioning. A value of ALL indicates that
all versions of a feature should be fetched. Otherwise, an integer, n,
can be specified to return the nth version of a feature. The version
numbers start at 1, which is the oldest version. If a version value
larger than the largest version number is specified, then the latest
version is returned. The default action shall be for the query to
return the latest version. Systems that do not support versioning can
ignore the parameter and return the only version that they have.'

I'm not super sure how well thought out this is (it seems almost a bit
strict to me); I'd prefer an svn-style global revision number, and to be
able to return all the features at that revision number. The spec seems
to imply that the nth version of _each_ feature in your result set
should be returned.
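To illustrate the difference between the two models, here is a toy
comparison. The history rows and field names (fid, version, rev) are
invented; this is just the selection logic, not an implementation:

```python
# Hypothetical history rows: a per-feature version number (what the WFS
# spec's featureVersion indexes) plus an svn-style global revision.
history = [
    {"fid": "roads.42", "version": 1, "rev": 3, "name": "High St"},
    {"fid": "roads.42", "version": 2, "rev": 7, "name": "Main St"},
    {"fid": "roads.7",  "version": 1, "rev": 5, "name": "Elm St"},
]

def by_feature_version(fid, n):
    """Spec-style: the nth version of one feature (latest if n is too
    large, per the quoted spec text)."""
    versions = sorted((r for r in history if r["fid"] == fid),
                      key=lambda r: r["version"])
    return versions[min(n, len(versions)) - 1]

def at_revision(rev):
    """svn-style: the latest state of every feature as of global
    revision 'rev'."""
    latest = {}
    for r in sorted(history, key=lambda r: r["rev"]):
        if r["rev"] <= rev:
            latest[r["fid"]] = r
    return list(latest.values())

print(by_feature_version("roads.42", 99)["name"])  # Main St
print(sorted(r["fid"] for r in at_revision(5)))    # ['roads.42', 'roads.7']
```

Note how the svn-style query gives one consistent snapshot of the whole
dataset, whereas featureVersion=n picks the nth version of each feature
independently, which may never have coexisted.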

> - Another thought/fantasy: keeping track of history would allow for
> nice tools for managing (versions of) the data. Extending the admin
> tool with some reports would be nice, e.g. on how many changes have
> been made, which versions are available, who did what, etc. Using
> GetFeature requests and XSLT, this would be relatively easy, I think.

Yes, this would be nice. Thanks for the ideas. Hopefully we'll be able
to get started on some of this stuff soon. It would be great to have it
driven by multiple user requirements, and especially to be able to
compare with the ESRI feature versioning and all.

Chris


At 18:20 7-6-2005, Chris Holmes wrote:


> Not late at all. It's still very much on our radar, probably the next
> 'big' thing we will attempt. And we're definitely open to anyone who
> wants to help out.

For a new project this could be very relevant, so maybe I'll get some time to help a little (but I can't make any promises).

> Actually, the WFS spec already deals with this; I just think that no
> one implements it. One of the elements of a Query is 'featureVersion'.
> So I believe you can just use the same featureId and specify the
> featureVersion. From the spec:
>
> 'The featureVersion attribute is included in order to accommodate
> systems that support feature versioning. A value of ALL indicates that
> all versions of a feature should be fetched. Otherwise, an integer, n,
> can be specified to return the nth version of a feature. The version
> numbers start at 1, which is the oldest version. If a version value
> larger than the largest version number is specified, then the latest
> version is returned. The default action shall be for the query to
> return the latest version. Systems that do not support versioning can
> ignore the parameter and return the only version that they have.'
>
> I'm not super sure how well thought out this is (it seems almost a bit
> strict to me); I'd prefer an svn-style global revision number, and to
> be able to return all the features at that revision number. The spec
> seems to imply that the nth version of _each_ feature in your result
> set should be returned.

You're right; sometimes one has to stop thinking and start reading first... The mechanism looks OK to me, but I'm not a specialist on versioning :).
