However, this recipe (Feature Type definitions first, then create datastore) seems to assume that the data-store can be configured to match the feature-type definition.
The more common case is that we need to map a public feature-type to one or more potentially different existing private table schemas, where for external reasons the table schemas cannot be changed.
Yes, the public feature-type should be king, but there must be a mapping layer to the private schema(s).
Or maybe I'm reading too much in.
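
For illustration only, a minimal sketch of what such a mapping layer could look like (the interface and its method names are invented, not an existing GeoTools API):

    import org.geotools.feature.Feature;
    import org.geotools.feature.FeatureType;

    /**
     * Hypothetical mapping layer: the public feature-type is fixed, and each
     * existing private table schema gets its own mapping onto it, because the
     * private tables cannot be changed.
     */
    public interface FeatureTypeMapping {

        /** The public, agreed-upon feature-type. */
        FeatureType getPublicType();

        /** Name of the private table (or view) this mapping reads from. */
        String getPrivateTypeName();

        /** Private column backing the given public attribute. */
        String getPrivateColumn(String publicAttributeName);

        /** Rebuild a public feature from a feature read off the private schema. */
        Feature toPublicFeature(Feature privateFeature);
    }
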
Simon
-----Original Message-----
From: opensdi-bounces@anonymised.com
[mailto:opensdi-bounces@anonymised.com] On Behalf Of P.Rizzi
Ag.Mobilità Ambiente
Sent: Friday, 22 July 2005 4:48 PM
To: 'Rob Atkinson '; 'Jody Garnett '
Cc: 'geoserver-devel@lists.sourceforge.net '; 'OpenSDI
(E-mail) '; 'geotools-devel '
Subject: RE: [OpenSDI] Re: [Geoserver-devel] Geoserver 2.0 (web app framework)

> It's worth understanding the role that feature types have across
> different OGC specs - they are poorly integrated in the current round
> of OGC specs but this may improve.
>
> The image shows how a common database serves a common feature expressed
> through several interfaces and operations: WFS, SLD and GetFeatureInfo.
>
> So, following from this reality, as well as the fact that FeatureTypes
> will increasingly be externally defined, IMHO the GeoTools
> architecture needs refactoring to allow common configuration around
> the FeatureTypes, not the database connections.
I completely agree with this!!! For me there are really two levels of
configuration: one at the DataStore level, that is only the
connection params or whatever is needed to reach the data. This
info may be configured by whatever existing mechanism (like the
GeoServer or uDig ones, or using the Spring framework).
Then there's the really important info, the one about
FeatureTypes. That info should be in a catalog implemented
inside GeoTools and usable by any project using GeoTools
(like GeoServer and uDig). The way this info is persisted is
irrelevant (even if we don't see any good reason why it
shouldn't be accessible through DataStores); it's its
semantics that need to be shared and agreed upon.
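
For illustration, a hypothetical sketch (names invented, nothing like this exists yet) of what such a shared, GeoTools-level catalog would need to answer for each FeatureType, independently of how it is persisted:

    import org.geotools.feature.FeatureType;

    /**
     * Hypothetical sketch of a GeoTools-level FeatureType catalog.
     * The point is the semantics: structure, validation and security
     * are known per FeatureType, regardless of where they are persisted.
     */
    public interface FeatureTypeCatalog {

        /** All public FeatureType names known to the catalog. */
        String[] getTypeNames();

        /** The structure of a FeatureType (may exist before any DataStore does). */
        FeatureType getSchema(String typeName);

        /** Validation rules attached to the FeatureType (placeholder type). */
        Object getValidation(String typeName);

        /** Security/access constraints for the FeatureType (placeholder type). */
        Object getSecurity(String typeName);
    }
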
For each FeatureType we must be able to know such things as
structure, validation and security (at a minimum), and this
BEFORE connecting to any DataStore. That is, the structure of
a FeatureType must exist even before its implementation
inside a DataStore, otherwise how would you call createSchema()???
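
A minimal sketch of that order of operations, assuming the GeoTools 2.x API of the time (DataUtilities.createType and DataStore.createSchema); the connection parameters are only illustrative:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.geotools.data.DataStore;
    import org.geotools.data.DataStoreFinder;
    import org.geotools.data.DataUtilities;
    import org.geotools.feature.FeatureType;
    import org.geotools.feature.SchemaException;

    public class CreateSchemaExample {
        public static void main(String[] args) throws IOException, SchemaException {
            // The FeatureType exists first, independently of any DataStore.
            FeatureType roadType = DataUtilities.createType("roads",
                "geom:LineString,name:String,lanes:Integer");

            // Only later is it materialized inside a concrete DataStore.
            Map params = new HashMap();
            params.put("dbtype", "postgis");
            params.put("host", "localhost");
            params.put("database", "gis");
            params.put("user", "geo");
            params.put("passwd", "secret");
            DataStore store = DataStoreFinder.getDataStore(params);
            if (store == null) {
                throw new IOException("No DataStore found for the given params");
            }
            store.createSchema(roadType);
        }
    }
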
Also a FeatureType may be abstract, so that it's not actually
implemented by any DataStore, but there may be other concrete
FeatureTypes inheriting from it, so that you could make a
query upon the abstract one and get results from all its descendants.
Or you can have a concrete FeatureType that's implemented by
several different DataStores, and you may choose one of them
in a mirroring style, or you can make the union of them all.

> Gabriel Roldan is doing some design work around these issues at the
> moment, so if there is going to be some work done elsewhere can we
> try to keep the activities in sync so we can schedule a single
> refactoring.
>
> Rob Atkinson
We're not actually refactoring anything, but we created a
simple Meta engine that lets one define things about
FeatureTypes. You can find it here:
http://www.geotools.org/Meta+Information+Infrastructure
I'm sorry it's not updated, but the core of it has not
changed. I'm also sorry because it lacks a good demo, but
there's a pretty extensive description. Also you may look at the
comments inside the core class:
it.ama_mi.sis.framework.meta.MetaSpace

Bye
Paolo Rizzi

Jody Garnett wrote:
> Including a lot of your email, Dave, because it is subject matter for
> the OpenSDI mailing list.
>
>> I haven't put a lot of thought into the new 2.0 "Geo Application
>> Framework" so I don't really know where - exactly - it's going. But
>> I thought I'd throw a few ideas out so people can put their 2 cents
>> in too...
>>
>> There's been a bit of a trend to not put components in Geotools,
>> which I hope is turning around. At least everyone is talking about
>> re-merging.
>>
>> Catalog is something that really needs to be stuck back in geotools,
>> as almost everyone doing non-trivial projects needs one. Paolo
>> already mentioned some ideas on this, and I know that Jody, Chris
>> and I have also mentioned it.
>>
>
> First of all, a word of encouragement:
> - Setting up a Framework is important - it is the one thing uDig
> development has taught us. A good framework makes all the difference.
> - You are not as far away from 2.0 as you think, GeoServer is already
> broken into chunks. The WFS and WMS talk to an interface called Data.
> They don't have to see the same Data implementation. BTW Data ==
> Catalog == Configuration of data connections.
> - Catalog (or Data) implementations: GeoServer GlobalData, Geotools
> Repository, uDig ICatalog - these are all the same class/idea.
>
> Basically GeoServer is already organized in the right manner; what is
> lacking is a consistent framework to drop the components into,
> and to provide persistence, configuration, a web UI for configuration,
> channeling of requests (regardless of XML/KVP) etc...
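
A rough sketch of the shared idea behind GlobalData / Repository / ICatalog (method names are invented, purely to make the "these are all the same class" point concrete):

    import java.io.IOException;
    import org.geotools.data.DataStore;
    import org.geotools.data.FeatureSource;

    /**
     * Hypothetical distillation of GeoServer GlobalData, Geotools Repository
     * and uDig ICatalog: a registry of configured data connections that the
     * WFS/WMS talk to instead of holding DataStores directly.
     */
    public interface Catalog {

        /** Register a configured connection under an id. */
        void register(String id, DataStore dataStore);

        /** Look up a connection by id. */
        DataStore getDataStore(String id);

        /** Resolve a published type name to its source of features. */
        FeatureSource getFeatureSource(String dataStoreId, String typeName) throws IOException;
    }
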
>
> If I was doing this I would:
> - scrounge up the framework chunks (based on some research, such as
> occurs on the OpenSDI list)
> - implement the easiest service possible, the Feature Portrayal
> Service, because it does not have any data
> - release for public abuse of the framework components
> - port the uDig Catalog (and merge with GeoServer Data)
> - port the GeoServer WFS
> - release for public abuse (and assistance in passing CITE tests)
> - port the GeoServer WMS, WCS and so on
> - release for public abuse as a beta
>
> Note this allows the framework to have a logical progression of
> capabilities for the request/response subsystem:
> - GET (w/ KVP) and Image (non-XML) response for FPS
> - POST (w/ XML request) and XML response first (for WFS)
> - SOAP / WSDL later (for service chaining)
>
> Note I would expect the system to be *easy*. Drop a module in, with
> required request/response schemas, and a discoverable class for
> processing of the same.
> Indeed you can probably handle the KVP GET requests just using
> reflection as long as the parameters are sufficiently well known.
> public Image WMS.getMap( Envelope bbox, List<Layer> layers,
>                          int width, int height ) for example
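
A minimal sketch of that reflection-based KVP dispatch (class and method names are hypothetical; the actual parsing of bbox, layers, width and height is deliberately stubbed out):

    import java.lang.reflect.Method;
    import java.util.Map;

    /**
     * Hypothetical sketch: route a KVP GET request to a service method by
     * name using reflection. A real dispatcher would convert the well-known
     * KVP keys (BBOX, LAYERS, WIDTH, HEIGHT, ...) into the method's
     * argument types before invoking it.
     */
    public class KvpDispatcher {

        public Object dispatch(Object service, String request, Map kvp) throws Exception {
            for (Method method : service.getClass().getMethods()) {
                if (method.getName().equalsIgnoreCase(request)) {
                    Object[] args = new Object[method.getParameterTypes().length];
                    // TODO: fill args by matching parameter types against the KVP values.
                    return method.invoke(service, args);
                }
            }
            throw new IllegalArgumentException("Unknown request: " + request);
        }
    }
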
>
>> In generic terms, I think the new architecture will look something
>> like this (Jody and Chris have already talked about this):
>>
>> 1. The "Core GeoServer" will be quite small, and basically serve as
>> the container for other services and handle tasks such as:
>>      * taking requests and routing them to the correct plugin
>>        service (see the sketch after this list)
>>      * managing the Dataset catalog
>>      * managing configuration (often being passed off to the service
>>        plugins)
>>      * anything else we think is a "common good" that will
>>        make building and managing services easier
>>
>> 2. WFS/WMS/WCS will be example services that people could either
>> download and plug in, or that would come as separate downloads.
>> Each of the plugins would be responsible for handling the actual
>> requests, and any service-specific configuration. The plugins would
>> use the framework core to help implement the actual services. They
>> would also be responsible for all their configuration (including the
>> web-app to actually do the config).
>>
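
A minimal sketch of what such a plugin service contract could look like (interface and method names are hypothetical, not an existing GeoServer API):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;

    /**
     * Hypothetical plugin contract: the small GeoServer core discovers
     * implementations of this interface, routes incoming requests to the
     * matching service, and leaves request handling and service-specific
     * configuration to the plugin itself.
     */
    public interface ServicePlugin {

        /** Service identifier used for routing, e.g. "WFS", "WMS", "WCS". */
        String getServiceId();

        /** Handle one request (from KVP or XML) and write the response. */
        void handle(Map kvp, InputStream body, OutputStream response) throws Exception;

        /** Service-specific configuration, managed by the plugin (placeholder type). */
        Object getConfiguration();
    }
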
>> This differs from the "GeoCollaborator" system, which isn't really
>> supposed to be smart and build new services up from scratch - it's
>> supposed to enhance existing services. Geoserver 2.0 is supposed to
>> make it easier to develop and deploy new services.
>>
>> I'm not sure exactly what this will look like - this is something we
>> really need to spend time on and hammer out. There are lots of
>> pros and cons of putting different services (or service-helpers) in
>> the core or making the sub-components completely responsible for
>> them. An example would be security and user authentication: where
>> (exactly) should it be managed - there are lots of different places
>> to put it.
>>
>> (Feel free to reply with better suggestions and comments)
>>
>> Personally, I think this is a pretty ambitious task that's going to
>> take quite a bit of time to actually pull off. This is especially so
>> if we try to actually implement these things inside geotools instead
>> of having complete control inside geoserver. The unfortunate
>> situation is that, certainly from a web-configuration perspective,
>> geoserver may be at variable stages of "unusable" during the lengthy
>> transition.
>>
>> (As an aside, I believe Jody and his team spent a lot of time on just
>> the Struts web app for config - see Jody's message on this.)
>>
>> I think you can see that the two biggest things to change will be
>> (1) the catalog stuff and (2) the config stuff.
>>
>> I keep repeating myself, but people doing things like the ingestion
>> engine just need to be warned that things will be changing in the
>> future, so make sure that you make the program easy to change. I
>> don't think there's going to be any radical changes (we're still
>> going to have datastores and feature-types, etc...), but how it's
>> organized and the actual content could change significantly. I
>> expect you folks to be involved in all these decisions so it's not
>> something that's going to sneak up on you.
>>
>>
>> --------------------------
>>
>> There's been a bunch of good suggestions floating around from
>> everyone in this conversation.
>>
>> I did like Paolo's idea of using Features to represent configuration
>> information (like the catalog and security) so you can leverage
>> existing geotools infrastructure like datastore (persistence) and
>> filter (searching).
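
A tiny sketch of that idea (the type definition only; attribute names are invented for illustration): a catalog entry modeled as an ordinary FeatureType, so any DataStore can persist it and the usual Filter machinery can search it.

    import org.geotools.data.DataUtilities;
    import org.geotools.feature.FeatureType;
    import org.geotools.feature.SchemaException;

    public class ConfigAsFeaturesSketch {

        /** Each catalog entry (a published layer) becomes a plain Feature. */
        public static FeatureType catalogEntryType() throws SchemaException {
            return DataUtilities.createType("catalogEntry",
                "name:String,title:String,dataStoreId:String,srs:String,bounds:Geometry");
        }
    }
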
>>
>> Alessio also had a bunch of good ideas in his email messages.
>>
>>
> I would steer away from this; there are existing technologies for
> configuration and security that we should be using. Let's use them
> and save our energies for the actual work we need to do in making an
> OWS framework.
>
>> -------------------------------------------------
>>
>> In a more practical manner, I don't really know the timeframe in
>> which these major changes will take place. I think it will really
>> depend on what you folks want to do.
>>
>> I'm a bit risk-averse on the issue since I want to ensure that
>> Geoserver is always getting better - easier to configure and more
>> stable. This might require a 1.3 branch and a radically different
>> 2.0 branch, which can be a maintenance problem.
>>
>> I talked to Jody about moving some of the uDig work out of uDig and
>> into geotools, and I'm not sure what his plan is for it. I don't
>> think it was something planned for the short term, and I've heard
>> estimates from 2 weeks to 1 year.
>>
>>
> It is more that unless there is a project (or developers) that needs
> this functionality in geotools, there is no advantage to backporting.
> The estimate is based more on when another project will be interested
> in working together than on any technical difficulty.
>
> But if there is interest it can be done; there is not that much code
> and it is of high quality. Right now there is more interest in
> backporting a lot of the uDig rendering goodness...
>
>> Dave
>>
>>
> Jody
>
>
> _______________________________________________
> Geoserver-devel mailing list
> Geoserver-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/geoserver-devel

<<WMSWFSrelationship.gif>> <<ATT07151.txt>>
_______________________________________________
OpenSDI mailing list
OpenSDI@anonymised.com
http://lists.eogeo.org/mailman/listinfo/opensdi