Hi,
I’m looking into a GeoServer clustered setup in which the application uses
WFS locking to prevent multiple users from editing the same feature at the same
time.
The issue with the above approach is that WFS locking is controlled by the
InProcessLockingManager, which of course does not work in a cluster.
To do distributed locking I guess we’d need something more:
- the JDBC stores should be able to get the locking manager to use as a store parameter
- we’d need a distributed locking manager that could store the various feature locks “in the cloud” somehow
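To make the second point a bit more concrete, here is roughly the contract I have in mind; the names are invented for illustration and do not match the existing GeoTools LockingManager API:

// Sketch of a cluster-aware locking manager. Lock state is keyed by feature id and
// authorization token, must be visible to every node, and must survive the request
// that created it (WFS "long transaction" semantics).
public interface DistributedLockingManager {

    /** Tries to lock one feature on behalf of an authorization token; false if another token holds it. */
    boolean lockFeature(String typeName, String featureId, String authorization, long durationMillis);

    /** Extends the expiry of every lock held by the given authorization token. */
    void refresh(String authorization);

    /** Releases every lock held by the given authorization token. */
    void release(String authorization);

    /** True if the feature is currently locked by a different authorization token. */
    boolean isLockedByOther(String typeName, String featureId, String authorization);
}

The JDBC store factories would then have to accept an implementation of something like this as a creation parameter, rather than always falling back on the in-process one.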
Another approach would be to use a centralized DBMS owning all the lock objects, though the blocking behavior would have to be simulated somehow.
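To sketch how the DBMS idea could avoid actual blocking (table and column names are made up), lock acquisition can be reduced to an atomic insert against a primary key:

import java.sql.*;

/** Sketch: feature locks held as rows in a shared database rather than in process memory. */
class DbmsFeatureLocks {
    // Assumed table (illustrative):
    //   CREATE TABLE feature_locks (
    //     type_name VARCHAR(255) NOT NULL,
    //     fid       VARCHAR(255) NOT NULL,
    //     auth      VARCHAR(255) NOT NULL,
    //     expires   TIMESTAMP    NOT NULL,
    //     PRIMARY KEY (type_name, fid));

    /** Tries to acquire a lock; the primary key makes the insert atomic, so nobody ever blocks. */
    boolean tryLock(Connection cx, String typeName, String fid, String auth, long durationMillis)
            throws SQLException {
        // First drop an expired lock on the same feature, so stale tokens cannot block forever
        try (PreparedStatement ps = cx.prepareStatement(
                "DELETE FROM feature_locks WHERE type_name = ? AND fid = ? AND expires < CURRENT_TIMESTAMP")) {
            ps.setString(1, typeName);
            ps.setString(2, fid);
            ps.executeUpdate();
        }
        try (PreparedStatement ps = cx.prepareStatement(
                "INSERT INTO feature_locks (type_name, fid, auth, expires) VALUES (?, ?, ?, ?)")) {
            ps.setString(1, typeName);
            ps.setString(2, fid);
            ps.setString(3, auth);
            ps.setTimestamp(4, new Timestamp(System.currentTimeMillis() + durationMillis));
            ps.executeUpdate();
            return true;
        } catch (SQLException duplicateKey) {
            return false; // another authorization already owns this feature
        }
    }

    /** Releases every lock held by an authorization token. */
    void release(Connection cx, String auth) throws SQLException {
        try (PreparedStatement ps = cx.prepareStatement("DELETE FROM feature_locks WHERE auth = ?")) {
            ps.setString(1, auth);
            ps.executeUpdate();
        }
    }
}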
Opinions, ideas?
So I like that: really nail down the LockingManager API and inject it into the DataStores (the same solution for both of the above).
Another approach is to wrap FeatureStore instances before letting them out to the rest of the application, and perform lock checks in the wrapper.
This has the advantage (if you can get it to work) of being enforced outside of the individual DataStore implementations.
The WFS Lock is cross-DataStore, hence the attraction of wrapping.
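A rough sketch of that wrapping, against a deliberately trimmed-down stand-in for the write side of FeatureStore (the real GeoTools interface is larger, and the locking manager used here is the hypothetical one sketched above):

import java.io.IOException;
import java.util.Map;
import java.util.Set;

/** Minimal stand-in for the write side of a FeatureStore (illustrative, not the GeoTools interface). */
interface WritableFeatures {
    void modifyFeatures(Set<String> featureIds, Map<String, Object> newValues) throws IOException;
    void removeFeatures(Set<String> featureIds) throws IOException;
}

/** Decorator that refuses writes against features locked by someone else. */
class LockCheckingFeatureStore implements WritableFeatures {
    private final WritableFeatures delegate;
    private final DistributedLockingManager locks; // the hypothetical manager sketched above
    private final String typeName;
    private final String authorization; // lock token(s) presented by the current request

    LockCheckingFeatureStore(WritableFeatures delegate, DistributedLockingManager locks,
            String typeName, String authorization) {
        this.delegate = delegate;
        this.locks = locks;
        this.typeName = typeName;
        this.authorization = authorization;
    }

    @Override
    public void modifyFeatures(Set<String> featureIds, Map<String, Object> newValues) throws IOException {
        checkLocks(featureIds);
        delegate.modifyFeatures(featureIds, newValues);
    }

    @Override
    public void removeFeatures(Set<String> featureIds) throws IOException {
        checkLocks(featureIds);
        delegate.removeFeatures(featureIds);
    }

    private void checkLocks(Set<String> featureIds) throws IOException {
        for (String fid : featureIds) {
            if (locks.isLockedByOther(typeName, fid, authorization)) {
                throw new IOException("Feature " + fid + " is locked by another client");
            }
        }
    }
}

The check then happens once, outside of each DataStore implementation, so a store that knows nothing about locking still cannot bypass it.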
If you are going against a clustered database, it may provide its own distributed row locking technique?
Hmm… WFS locking means “long transactions”, whilst databases typically provide locks only for as long as you keep a dedicated JDBC connection holding them. That is, not suitable for this case.
Oracle has its own Workspace Manager to handle long transactions, but that is basically implementing long transactions via versioning, which is far, far beyond what I’m looking for in terms of effort.
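To spell out the mismatch: a row lock such as SELECT ... FOR UPDATE lives only as long as the transaction on that one JDBC connection, so it is gone when the request that took it ends, whereas a WFS lock has to survive across requests and nodes. A tiny example (table name invented):

import java.sql.*;

class RowLockLifetime {
    // The row lock taken here disappears as soon as commit()/close() runs, i.e. at the end
    // of the HTTP request that executed it - exactly what a WFS "long transaction" cannot use.
    static void lockRowForThisConnectionOnly(Connection cx, String fid) throws SQLException {
        cx.setAutoCommit(false);
        try (PreparedStatement ps = cx.prepareStatement(
                "SELECT fid FROM roads WHERE fid = ? FOR UPDATE")) {
            ps.setString(1, fid);
            try (ResultSet rs = ps.executeQuery()) {
                // ... edit the row while the lock is held ...
            }
        }
        cx.commit(); // lock released here; a later request can no longer rely on it
    }
}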
That’s an idea: wrapping is something that we could implement at the GeoServer level, which already wraps the feature source/feature store objects anyway.
I would vote for Hazelcast. I think it is not a good idea to have an individual solution for each upcoming cluster problem; why not integrate an existing cluster infrastructure?
As an example, I would like to have a distributed cache for authentication tokens, to avoid executing the whole authentication procedure for stateless services in a clustered environment.
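For what it is worth, feature locks would map quite naturally onto a Hazelcast IMap with a time-to-live; a minimal sketch (map name and key layout are just illustrative):

import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

class HazelcastFeatureLocks {
    private final IMap<String, String> locks;

    HazelcastFeatureLocks(HazelcastInstance hz) {
        // Shared cluster map: key = typeName + "/" + featureId, value = authorization token
        this.locks = hz.getMap("wfs-feature-locks");
    }

    /** Acquires the lock if free; the entry TTL plays the role of the WFS lock expiry. */
    boolean tryLock(String typeName, String fid, String authorization, long durationMillis) {
        String previous = locks.putIfAbsent(typeName + "/" + fid, authorization,
                durationMillis, TimeUnit.MILLISECONDS);
        return previous == null || previous.equals(authorization);
    }

    /** Releases the lock only if it is still held by the given authorization token. */
    void unlock(String typeName, String fid, String authorization) {
        locks.remove(typeName + "/" + fid, authorization);
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        HazelcastFeatureLocks locks = new HazelcastFeatureLocks(hz);
        System.out.println(locks.tryLock("roads", "roads.42", "auth-1", 60000)); // true
        System.out.println(locks.tryLock("roads", "roads.42", "auth-2", 60000)); // false
        Hazelcast.shutdownAll();
    }
}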
Ah right, good point.
To be more specific, the keys to the door are cross-DataStore; the LockingManager can still be defined DataStore by DataStore, but you need the Transaction operation to kick everyone at the same time.
It would be nice to tighten this up with a double handshake on commit/rollback (so a DataStore could do a pre-flight check, such as checking locks).
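Something like this is what I read into the double handshake; the interface is invented for the sake of the example, it is not an existing GeoTools contract:

import java.io.IOException;
import java.util.List;

/** Hypothetical two-step commit contract so every store gets a pre-flight check. */
interface TwoPhaseState {
    /** Step 1: verify locks, constraints, connectivity; throw to veto the whole commit. */
    void prepareCommit() throws IOException;

    /** Step 2: only called once every participating store's prepareCommit() succeeded. */
    void commit() throws IOException;

    void rollback() throws IOException;
}

class TwoPhaseTransaction {
    private final List<TwoPhaseState> participants;

    TwoPhaseTransaction(List<TwoPhaseState> participants) {
        this.participants = participants;
    }

    /** Kicks every DataStore at the same time: first all pre-flights, then all commits. */
    void commit() throws IOException {
        for (TwoPhaseState state : participants) {
            state.prepareCommit(); // e.g. check that no required feature lock has been lost
        }
        try {
            for (TwoPhaseState state : participants) {
                state.commit(); // also the point where held lock tokens get released
            }
        } catch (IOException e) {
            // Best effort: a failure after some participants committed is exactly the
            // window that real two-phase commit protocols try to shrink.
            for (TwoPhaseState state : participants) {
                state.rollback();
            }
            throw e;
        }
    }
}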
Uh, that looks like two-phase commit. That’s too far away from the requirements I have; I may go as far as making locks work in a distributed fashion without dealing with transactions as well (and maybe not even that, as we’re also discussing implementing optimistic locking on the client side).
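For reference, the client-side optimistic locking being discussed boils down to a conditional update on a version counter (table and columns invented for the example):

import java.sql.*;

class OptimisticUpdate {
    /** Returns false if somebody else modified the feature since this client read it. */
    static boolean updateIfUnchanged(Connection cx, String fid, String newName, long versionSeen)
            throws SQLException {
        try (PreparedStatement ps = cx.prepareStatement(
                "UPDATE roads SET name = ?, version = version + 1 WHERE fid = ? AND version = ?")) {
            ps.setString(1, newName);
            ps.setString(2, fid);
            ps.setLong(3, versionSeen);
            return ps.executeUpdate() == 1; // 0 rows -> conflict, the client must re-read and retry
        }
    }
}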
Fair enough; if we are reviewing the LockingManager, let us keep in mind how locks are released during transaction.commit().
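Concretely, the behavior to preserve is that commit() also frees whatever lock tokens the transaction was authorized with; a toy sketch reusing the hypothetical manager from above (not the actual GeoTools signatures):

import java.io.IOException;
import java.util.Set;

/** Sketch: commit() is also where lock tokens held by the transaction are freed. */
class LockReleasingTransaction {
    private final Set<String> authorizations; // tokens added to the transaction by the client
    private final DistributedLockingManager locks;

    LockReleasingTransaction(Set<String> authorizations, DistributedLockingManager locks) {
        this.authorizations = authorizations;
        this.locks = locks;
    }

    void commit() throws IOException {
        // ... flush pending changes to the underlying stores first ...
        for (String auth : authorizations) {
            locks.release(auth); // WFS releaseAction=ALL semantics; SOME would be more selective
        }
    }
}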