Andrea Aime wrote:
<snip>
I'm actually a bit skeptical about the idea of having the feature
type and the test parameters being configured independently. Different
feature types will have at least different attributes, so in order
to make up a generic "in between dates" test you'll have to at
least provide feature type name and attribute name in the "header" file.
I'm also wondering if having these tests repeatable over different
types is going to provide extra value. I mean, once you've determined
that the server is capable of running that kind of filter,
what good do you get to run the same test on another feature type?
The only thing that comes to my mind is to make sure the proper
indexing is set up in the database so that interval date extraction
can be performed in an efficient way. Any other ideas?
</snip>
The underlying logic here is that services implement a "profile" of a FeatureType - i.e. they choose what to implement with regard to the content rules, optional elements, cardinality, type restrictions and extensions allowed by the FeatureType.
I'm pushing this to a wider audience because it's critical to any concept of interoperability and, AFAICT, very poorly understood.
Thus, test parameters should be bound to the "profile", not the FeatureType. This goes one step further than Andrea's view on coupling, which is otherwise essentially correct.
The implications, however, bring us back to the equally correct assumption that these tests are repeated, in slightly modified form, by many FeatureTypes (implemented profiles).
This holds because real feature types inherit from more general ones: implementation profiles therefore inherit as well, and the test configurations should be bound to profiles and inherited along with them.
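A minimal sketch of what such profile-bound, inherited test configurations could look like (the class and attribute names here are hypothetical illustrations, not part of any existing test harness; the attribute values are likewise only illustrative):

```python
# Hypothetical sketch: test configurations bound to implementation
# profiles, inherited down the profile hierarchy and overridable
# by more specialised profiles.

from dataclasses import dataclass, field


@dataclass
class Profile:
    """An implementation profile of a FeatureType."""
    name: str
    parent: "Profile | None" = None
    # test pattern name -> parameters (e.g. which attribute to exercise)
    test_configs: dict = field(default_factory=dict)

    def resolved_tests(self) -> dict:
        """Tests inherited from ancestor profiles, overridden locally."""
        inherited = self.parent.resolved_tests() if self.parent else {}
        return {**inherited, **self.test_configs}


# A generic Observation profile carries the "in between dates" test;
# a TimeSeriesObservation profile inherits it and only rebinds the
# attribute name (attribute names below are illustrative).
observation = Profile(
    "om:Observation",
    test_configs={"time-period-selection": {"attribute": "om:samplingTime"}},
)
timeseries = Profile(
    "omx:TimeSeriesObservation",
    parent=observation,
    test_configs={"time-period-selection": {"attribute": "omx:timeSeries"}},
)

print(timeseries.resolved_tests())
```

The point of the sketch is only the inheritance relationship: the specialised profile does not redefine the test pattern, it merely rebinds its parameters.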
Thus, this type of time-period selection test pattern should be directly applicable to any feature type deriving from (or containing as a complex property) an omx:TimeSeriesObservation (which is a specialisation of om:Observation) - where om: is the OGC Observations and Measurements schema, and omx: is the extension providing implementable specialisations.
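For concreteness, the "header" idea Andrea describes - a feature type name and attribute name parameterising a generic "in between dates" test - might be rendered as a filter roughly like this (a sketch loosely following the OGC Filter encoding; the specific boundary values are made up):

```python
# Hypothetical sketch: a generic "in between dates" test, parameterised
# by the feature type name and time-valued attribute taken from the
# profile's test configuration.

def between_dates_filter(type_name: str, attribute: str,
                         begin: str, end: str) -> str:
    """Render an ogc:PropertyIsBetween filter for a WFS GetFeature query."""
    return (
        f'<wfs:Query typeName="{type_name}">'
        f"<ogc:Filter><ogc:PropertyIsBetween>"
        f"<ogc:PropertyName>{attribute}</ogc:PropertyName>"
        f"<ogc:LowerBoundary><ogc:Literal>{begin}</ogc:Literal></ogc:LowerBoundary>"
        f"<ogc:UpperBoundary><ogc:Literal>{end}</ogc:Literal></ogc:UpperBoundary>"
        f"</ogc:PropertyIsBetween></ogc:Filter>"
        f"</wfs:Query>"
    )


# The same test pattern, rebound to a specialised profile:
print(between_dates_filter("omx:TimeSeriesObservation", "om:samplingTime",
                           "2007-01-01T00:00:00Z", "2007-02-01T00:00:00Z"))
```

The test logic stays fixed; only the two profile-supplied parameters change when the pattern is reused by another feature type.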
(I expect some of you will already be thinking about how this applies to the other aspects of the tests here. The content of tests is bound to the domain of the data; that is a separate concern which follows a parallel logic, and how to model the interaction of data domain and behavioural patterns is still a work in progress.)
Rob Atkinson