[Geoserver-devel] [duckhawk-dev] Re: Configuration layout for AWDIP

Andrea Aime wrote:
<snip>
I'm actually a bit skeptical about the idea of having the feature
type and the test parameters being configured independently. Different
feature types will have at least different attributes, so in order
to make up a generic "in between dates" test you'll have to at
least provide feature type name and attribute name in the "header" file.

I'm also wondering if having these tests repeatable over different
types is going to provide extra value. I mean, once you've determined
that the server is capable of running that kind of filter,
what do you gain by running the same test on another feature type?
The only thing that comes to my mind is making sure the proper
indexing is set up in the database so that date-interval extraction
can be performed efficiently. Any other ideas?
</snip>
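The generic "in between dates" test Andrea describes could be sketched roughly as follows. This is a minimal Python sketch, not the DuckHawk API; the helper function, feature-type names, and attribute names are all hypothetical:

```python
# Sketch of a reusable "between dates" test pattern: the filter template
# is generic, while the feature type and date attribute come from a
# per-type "header". All names here are illustrative, not real AWDIP config.

def between_dates_filter(feature_type, date_attribute, start, end):
    """Build an OGC PropertyIsBetween filter body for a WFS query
    against the given feature type's date attribute."""
    return (
        '<ogc:Filter xmlns:ogc="http://www.opengis.net/ogc">'
        "<ogc:PropertyIsBetween>"
        f"<ogc:PropertyName>{date_attribute}</ogc:PropertyName>"
        f"<ogc:LowerBoundary><ogc:Literal>{start}</ogc:Literal></ogc:LowerBoundary>"
        f"<ogc:UpperBoundary><ogc:Literal>{end}</ogc:Literal></ogc:UpperBoundary>"
        "</ogc:PropertyIsBetween>"
        "</ogc:Filter>"
    )

# Per-feature-type "header" information (made-up names):
headers = {
    "aw:SingleSitePhenomTimeSeries": "aw:samplingTime",
    "aw:SiteSamplingStatistics": "aw:observationDate",
}

for type_name, attr in headers.items():
    print(type_name, "->", between_dates_filter(type_name, attr,
                                                "2007-01-01", "2007-12-31"))
```

The reusable part is the filter pattern; only the feature type and date attribute vary per type, which is exactly what the "header" file would have to supply.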

The underlying logic here is that services implement a "profile" of a FeatureType - i.e. they choose what to implement with regard to content rules, optional elements, cardinality, and the type restrictions and extensions allowed by the FeatureType.

I'm pushing this to a wider audience because it's critical to any concept of interoperability and, AFAICT, very poorly understood.

Thus, parameters should be bound to the "profile" - not the FeatureType - which is one step further than Andrea's view on coupling, which is otherwise essentially correct.

The implications, however, bring us back to the equally correct assumptions made - that these tests are repeated in slightly modified form by many FeatureTypes (implemented profiles).

This is also correct, because real FeatureTypes inherit from more general ones. Thus, implementation profiles inherit, and the test configurations should be bound to profiles and inherited.

Thus, this type of time-period selection test pattern should be directly applicable to any feature type deriving from (or containing as a complex property) an omx:TimeSeriesObservation (which is a specialisation of an om:Observation) - where om: is the OGC Observations and Measurements schema and omx: the extensions providing implementable specialisations.
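A minimal sketch of what "test configurations bound to profiles, and inherited" could look like - assuming hypothetical class and attribute names, not any existing DuckHawk API:

```python
# Sketch of inherited test configuration: a profile for om:Observation
# declares a time-period test once; profiles derived from it (e.g. for
# omx:TimeSeriesObservation) inherit that test and only override the
# type-specific bindings. All names are hypothetical.

class ObservationProfile:
    feature_type = "om:Observation"
    time_attribute = "om:samplingTime"

    def tests(self):
        # Tests are defined once at the most general level...
        return [("time-period-selection", self.feature_type, self.time_attribute)]

class TimeSeriesObservationProfile(ObservationProfile):
    # ...and apply unchanged to the specialisation.
    feature_type = "omx:TimeSeriesObservation"

class AwdipSiteProfile(TimeSeriesObservationProfile):
    # A concrete AWDIP feature type re-binds only what differs.
    feature_type = "aw:SingleSitePhenomTimeSeries"
    time_attribute = "aw:sampleTime"

print(AwdipSiteProfile().tests())
```

The derived profile reuses the inherited test pattern with its own feature type and attribute, without re-declaring the test.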

(Some of you will already be thinking about how this applies to the other aspects of the tests here, I expect. The content of tests is bound to the domain of the data; this is a separate concern that follows a parallel logic, and how to model the interaction of data domain and behavioural patterns is still a work in progress.)

Rob Atkinson

Hi Rob,

so in the case of AWDIP, would these "profiles" be time and location? Some featureTypes implement only one of them - SingleSitePhenomTimeSeries, for example, implements only time - while others, like SiteSamplingStatistics, implement both.

There would be separate tests for time and location, which could be run against the corresponding featureTypes.
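That mapping could be sketched like this - a minimal illustration with made-up profile and test names:

```python
# Sketch: each feature type declares which profiles it implements, and
# its test suite is the union of the tests those profiles define.
# Profile names, test names, and feature types are all illustrative.

PROFILE_TESTS = {
    "time": ["between-dates", "latest-value"],
    "location": ["bbox-query"],
}

FEATURE_TYPE_PROFILES = {
    "aw:SingleSitePhenomTimeSeries": ["time"],
    "aw:SiteSamplingStatistics": ["time", "location"],
}

def suite_for(feature_type):
    """Assemble the applicable tests from the profiles a type implements."""
    return [test
            for profile in FEATURE_TYPE_PROFILES[feature_type]
            for test in PROFILE_TESTS[profile]]

print(suite_for("aw:SiteSamplingStatistics"))
# → ['between-dates', 'latest-value', 'bbox-query']
```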

Cheers,
   Volker


We need to distinguish the 'profile pattern' from the idea of a profile
artefact.

A TimeSeriesObservation is a feature type that restricts an Observation
by specifying the type of the "result" property.

Thus it is a 'profile' (in the sense of ISO 19106 profiles), formalised as
a model (and GML schema) restriction.

We are also looking right now at a formalism for the "service profile" -
i.e. what makes an AWDI service useful? In the meantime, the schema,
feature types, sample queries and responses represent the best
description available. My understanding is that the testing framework
should allow us to feed in such restrictions, so that conformance,
robustness, and performance tests can then be carried out.

We would expect to feed the test suite from the formalised service
profiles in the future. So it's wise to have this discussion about the
way aspects of those tests are packaged - it's logical for them to be
packaged around a concept of service profiles that inherit from more
general ones, allowing you to develop test suites and apply them to
specific feature types without manually configuring each test suite for
each feature type. (Thus bridging, by inheritance, the two viewpoints:
reusability of parameters vs. binding each feature type to the tests
that make sense for it.)
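As a rough illustration of feeding the test suite from a formalised restriction - the record layout and names here are assumptions, not an existing format:

```python
# Sketch: a service-profile record lists a restriction it formalises
# (here, the restricted "result" property of an Observation), and the
# framework expands it into conformance, performance, and robustness
# variants of the same request pattern. All names are hypothetical.

RESTRICTION = {
    "profile": "omx:TimeSeriesObservation",
    "restricts": "om:Observation",
    "result_type": "omx:TimeSeries",   # the restricted "result" property
}

def expand(restriction):
    """Derive one test id per testing mode from a single restriction."""
    base = f"result-is-{restriction['result_type']}"
    return {mode: f"{mode}:{base}"
            for mode in ("conformance", "performance", "robustness")}

print(expand(RESTRICTION))
```

One formalised restriction thus drives all three kinds of test, rather than each being configured by hand.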

Rob

-----Original Message-----
From: Volker Mische [mailto:Volker.Mische@anonymised.com]
Sent: Monday, 2 June 2008 11:00 AM
To: Atkinson, Rob (CLW, Lucas Heights)
Cc: dev@anonymised.com; geoserver-devel@lists.sourceforge.net
Subject: Re: [duckhawk-dev] Re: Configuration layout for AWDIP

