[Geoserver-devel] time precision with WMS

Hi all,

Recently a client reported an issue with how GeoServer interprets the TIME parameter in a WMS request when a full-precision timestamp is not specified. Referencing the WMS spec:

D.2.1 Basic syntax

The basic time format uses ISO 8601:2000 “extended” format: up to 14 digits specifying century, year, month, day, hour, minute, and seconds, and optionally a decimal point followed by zero or more digits for fractional seconds, with non-numeric characters to separate each piece:

ccyy-mm-ddThh:mm:ss.sssZ

The precision may be reduced by omitting least-significant digits, as in the examples below. ISO 8601:2000 prefers a decimal comma before fractional seconds but allows a decimal period as in this International Standard. The century digits shall be included with the year; two-digit years shall not be used.

A time zone suffix is mandatory if the hours field appears in the time string. All times should be expressed in Coordinated Universal Time (UTC), indicated by the suffix Z (for “zulu”). When a local time applies, a numeric time zone suffix as defined by ISO 8601:2004, 5.3.4.1 shall be used. The absence of any suffix at all means local time in an undefined zone, which shall not be used in the global network of map servers enabled by this International Standard.

EXAMPLE 1   ccyy             Year only
EXAMPLE 2   ccyy-mm          Year and month
EXAMPLE 3   ccyy-mm-dd       Year, month and day
EXAMPLE 4   ccyy-mm-ddThhZ   Year, month, day and hour in UTC

The part about omitting digits isn't all that clear to me. I think it means that when one supplies a less precise timestamp we should actually interpret it as a date range, matching anything that lies between the specified timestamp and the next timestamp that results from "rolling" the time up to the next closest value. If that is indeed what is implied, we don't implement it that way as far as I can tell.
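(For illustration only, a rough sketch of that reading; this is not what GeoServer currently does, java.time is used for brevity, and only a few of the allowed precisions are handled.)

```java
import java.time.*;

/** Sketch: expand a reduced-precision WMS TIME value into a [start, end)
 *  interval by "rolling" the omitted fields up to the next value. */
public class TimeRollingSketch {

    static Instant[] expand(String time) {
        ZonedDateTime start;
        ZonedDateTime end;
        if (time.length() == 4) {            // ccyy
            start = Year.parse(time).atDay(1).atStartOfDay(ZoneOffset.UTC);
            end = start.plusYears(1);
        } else if (time.length() == 7) {     // ccyy-mm
            start = YearMonth.parse(time).atDay(1).atStartOfDay(ZoneOffset.UTC);
            end = start.plusMonths(1);
        } else if (time.length() == 10) {    // ccyy-mm-dd
            start = LocalDate.parse(time).atStartOfDay(ZoneOffset.UTC);
            end = start.plusDays(1);
        } else {                             // full precision: a single instant
            start = ZonedDateTime.parse(time);
            end = start;
        }
        return new Instant[] { start.toInstant(), end.toInstant() };
    }

    public static void main(String[] args) {
        Instant[] r = expand("2012-06");
        System.out.println(r[0] + " -> " + r[1]); // 2012-06-01T00:00:00Z -> 2012-07-01T00:00:00Z
    }
}
```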

What do others think?

-Justin


Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Here’s a couple of data points for this discussion:

http://mapserver.org/ogc/wms_time.html

MapServer has addressed this issue (see the section Interpreting Time Values), but they seem to do different things depending on the back-end data source. For PostGIS at least they use the "time as time window" approach (by virtue of using the DB date_trunc function, which amounts to the same thing). One takeaway from this: I would suggest we want a data-source-independent strategy.

http://augusttown.blogspot.ca/2010/03/two-ambiguities-about-time-in-ogc-wms.html

This blog post discusses the issue, and weighs in on the side of treating all time values as time windows (with duration determined by precision). This seems to make sense to me. Although, what then is meant by a TIME of "2012-06-01/2012-06-02"? Is that the window including exactly June 1, or does it include June 1-2?

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.

I agree the time window approach makes sense, and, yes, the current
implementation switches to an interval only if you explicitly ask for one;
otherwise the missing time elements are assumed to be zero (from memory, at
least).

It may be an easy fix inside the time KVP parser: if the time is not fully
specified, turn it into an interval? We already handle t1/t2 as an extension
to the WMS spec (which would ask for a third parameter, the period).

However... how do we handle a list of values that are not at full precision?
Turn it into a list of intervals? Now, this we don't handle...
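(A purely hypothetical sketch of that idea; the names below are illustrative, not the actual GeoServer time KVP parser. Each item in the list is normalized to a [start, end) interval, and the optional third part of a range, the period, is ignored.)

```java
import java.time.*;
import java.util.*;

/** Hypothetical sketch: a WMS TIME value may be a comma-separated list, and
 *  each item either a single (possibly reduced-precision) value or a
 *  t1/t2[/period] range; everything becomes a [start, end) interval. */
public class TimeKvpSketch {

    // Reduced-precision expansion as in the earlier sketch; only the
    // "ccyy-mm-dd" and full-precision cases are shown here for brevity.
    static Instant[] expand(String v) {
        if (v.length() == 10) {
            ZonedDateTime d = LocalDate.parse(v).atStartOfDay(ZoneOffset.UTC);
            return new Instant[] { d.toInstant(), d.plusDays(1).toInstant() };
        }
        Instant i = ZonedDateTime.parse(v).toInstant();
        return new Instant[] { i, i };
    }

    static List<Instant[]> parse(String time) {
        List<Instant[]> intervals = new ArrayList<>();
        for (String item : time.split(",")) {
            String[] parts = item.split("/"); // any third part (the period) is ignored here
            if (parts.length >= 2) {
                // one possible reading of a range: start of the first item's
                // interval through the end of the second item's interval
                intervals.add(new Instant[] { expand(parts[0])[0], expand(parts[1])[1] });
            } else {
                intervals.add(expand(item));
            }
        }
        return intervals;
    }

    public static void main(String[] args) {
        // e.g. TIME=2012-06-01,2012-06-03/2012-06-04
        for (Instant[] r : parse("2012-06-01,2012-06-03/2012-06-04")) {
            System.out.println(r[0] + " -> " + r[1]);
        }
    }
}
```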

Cheers
Andrea

--
Ing. Andrea Aime
GeoSolutions S.A.S.
Tech lead

Via Poggio alle Viti 1187
55054 Massarosa (LU)
Italy

phone: +39 0584 962313
fax: +39 0584 962313
mob: +39 339 8844549

http://www.geo-solutions.it
http://geo-solutions.blogspot.com/
http://www.youtube.com/user/GeoSolutionsIT
http://www.linkedin.com/in/andreaaime
http://twitter.com/geowolf

On Tue, Jun 5, 2012 at 8:33 AM, Andrea Aime <andrea.aime@anonymised.com> wrote:

> I agree the time window approach makes sense, and, yes, the current
> implementation switches to an interval only if you explicitly ask for one;
> otherwise the missing time elements are assumed to be zero (from memory, at
> least).

Yup, it expands out by appending the zeros.

> It may be an easy fix inside the time KVP parser: if the time is not fully
> specified, turn it into an interval? We already handle t1/t2 as an extension
> to the WMS spec (which would ask for a third parameter, the period).

Agreed, it looks like it's relatively straightforward and constrained to the KVP parser. On my todo list to come up with a patch.

> However... how do we handle a list of values that are not at full precision?
> Turn it into a list of intervals? Now, this we don't handle...

Yeah, good question. I will hunt through the spec to see if it says anything about that.

Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Matt and I were just having a discussion regarding this topic today.

While I agree there is a compelling argument for implicitly interpreting a
non-precise instant request (2012, for example) as a range (2012-2013), I
believe it should be avoided in favor of explicit requests. I agree the WMS
specs (1.1.1 and 1.3) are somewhat unclear on this, but I believe clarity can
be derived from the annex sections C.5.2 and C.4.3 (1.1.1 and 1.3,
respectively) that discuss this:

"""A WMS may declare that it will choose the closest available value for a
dimension if an exact value is not specified. This allows, for example,
hourly data whose actual recording time is precise to the millisecond to be
requested simply by stating the desired date and hour. The nearestValue
attribute of the <Dimension> element, if present and nonzero, indicates that
this behaviour is enabled."""

While this applies more to the 'closest' (snapping) value than to implicit
conversion of an instant to an extent, I think the motivation is similar: be
explicit, and when other behavior occurs, let the client know.

Additionally, while it may make sense for certain instants to be interpreted
this way, e.g. 2012 means 2012-2013, it most certainly does not for others,
e.g. -200000 (most likely the data is imprecise beyond a year). While the
implied period approach works well for simple, multiplier-of-1 ranges, many
domains use other ranges (10 year, 3 month, 6 hour, 30 minute, etc.) and may
expect a similar behavior (e.g. 2012-01-01T12:15:00 means 2012-01-01T12:00:00
to 2012-01-01T12:30:00, since the interval of that specific data is 30
minutes).

So, I would argue that according to the spec:
1) 2012 means 2012-01-01T00:00:00Z
2) 2012/2013 means 2012-01-01T00:00:00Z to 2013-01-01T00:00:00Z
   (inclusive on both sides as per the spec)

Since most clients of a time-aware mapping service will not require the user
to type out full-precision values or ranges, and will perform the requests
for the user, these implicit instant-to-extent "short-cuts" can be taken
there but, IMHO, don't belong on the server.

Cheers,

--
Ian Schneider
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Interesting take, and a good argument. On the one hand the spec does seem to require the expansion into an interval, and other servers such as MapServer have implemented it this way. On the other hand you poke some pretty good holes in that heuristic. Maybe this is something worth splitting the difference on by making it a configurable option?

Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

The problem with making this configurable is that then nobody will know how any given GeoServer instance works. I would say that providing both options should be a last resort if it really turns out to be impossible to decide on a reasonable interpretation of the spec.

Is it possible to ask OGC what the correct interpretation should be? Are there CITE tests which clarify this?

And perhaps it’s possible to see how other implementations have interpreted this? CubeWerx for instance?

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.

A couple of other data points for this discussion:

  1. The WMS spec for temporal says that it is an extension of ISO 8601. References for that spec are:

http://en.wikipedia.org/wiki/ISO_8601
http://dotat.at/tmp/ISO_8601-2004_E.pdf

ISO 8601 says that it allows for the concept of “reduced accuracy” by omitting lower-order time elements. (Sec 3.1 of the standard). The WMS spec refers to this as “reduced precision” (Sec D.2.1). To my mind this has to be interpreted to mean that a time string with omitted components refers to a time interval of the appropriate size, rather than a high-precision time instant. Otherwise, the term “reduced precision/accuracy” has no meaning.

  2. This is more of a side comment, but it does have some relevance. While the WMS spec says that it "extends" ISO 8601, it appears more that it alters it. ISO 8601 allows time intervals to be specified using the formats "start/end", "start/duration" and "duration/end". WMS appears to allow only the syntax "start/end/duration". This forces the user to always specify the end time. Since the end time is inclusive, this is problematic for specifying things like "all records from 2012". Using the obvious 2012/2013/P1Y or even 2012-01-01T00:00:00Z/2013-01-01T00:00:00Z/P1Y would potentially include some records from 2013 as well.

(It seems like the WMS spec should allow for intervals specified as start/end as well, but I can't see this in the spec.)

The relevance of this is that if WMS really does only allow "start/end/duration", then in the absence of the ability to utilize "reduced precision", clients are forced to specify a full-precision end time, which can be problematic to determine (think the classic February in leap years).

The ISO way of allowing a time instant and a duration doesn't have this problem, since the duration can be sized appropriately to the time specified.
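(A quick, hedged illustration of that point using java.time; the start/duration style shown here is the ISO 8601 form, not something the WMS TIME syntax itself accepts per the discussion above.)

```java
import java.time.*;

/** Sketch: an ISO 8601 "start/duration" style interval such as "2012/P1Y".
 *  The duration sizes the window, so no full-precision (inclusive) end time
 *  has to be computed by hand. */
public class StartDurationSketch {
    public static void main(String[] args) {
        ZonedDateTime start = Year.parse("2012").atDay(1).atStartOfDay(ZoneOffset.UTC);
        Period duration = Period.parse("P1Y");      // also handles P1M, P10Y, ...
        ZonedDateTime endExclusive = start.plus(duration);

        // "all records from 2012": start <= t < endExclusive
        System.out.println(start.toInstant() + " inclusive to "
                + endExclusive.toInstant() + " exclusive");

        // February in a leap year needs no special calendar math:
        ZonedDateTime feb = YearMonth.parse("2012-02").atDay(1).atStartOfDay(ZoneOffset.UTC);
        System.out.println(feb.plus(Period.parse("P1M"))); // 2012-03-01T00:00Z
    }
}
```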

I realize this verges on hair-splitting standards wonkery, but in the absence of clearer language and examples there’s not much else to go on.

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.

Thanks Martin, this helps a lot, having someone go through the specs. I can read them for about 5 minutes before my eyes start to bleed :) I will see what I can get out of the CITE tests. I know there are tests for time, but we don't ever run them, and I thought I remembered Andrea trying and having serious issues with them.

Does anyone know of a wms-dev list, like there is a wfs-dev list? I find you can usually get these types of answers there.

Justin Deoliveira
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.

Two more thoughts in favour of “reduced precision = interval”:

  1. Expressiveness - If reduced-precision time strings are taken to mean a precise instant, then there is no simple way of expressing a standard interval. Whereas if reduced-precision is interpreted as an interval, it’s still possible to express an instant via a full-precision string.

  2. Common Usage: When someone says “2012” they typically mean the entire year 2012, not the instant at the start of the year. Likewise for other reduced-precision expressions of time. (I realize that standards often don’t let reality intrude, but it might be taken into consideration in absence of other clear direction)

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.

My eyes were pretty red as well after going through both standards!

It would be nice if the CITE tests shed some light on this point. I was a bit disappointed at how vague both the WMS and ISO standards were, though. You’d think for something as critical and well-understood as time they’d be able to produce a complete, unambiguous spec!

Good idea about trying to find an official WMS list.

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.

On Tue, Jul 17, 2012 at 6:18 PM, Martin Davis <mdavis@anonymised.com> wrote:

> Two more thoughts in favour of "reduced precision = interval":
>
> 1. Expressiveness - If reduced-precision time strings are taken to mean a
> precise instant, then there is no simple way of expressing a standard
> interval. Whereas if reduced-precision is interpreted as an interval, it's
> still possible to express an instant via a full-precision string.
>
> 2. Common Usage: When someone says "2012" they typically mean the entire
> year 2012, not the instant at the start of the year. Likewise for other
> reduced-precision expressions of time. (I realize that standards often
> don't let reality intrude, but it might be taken into consideration in
> absence of other clear direction)

Agreed, this is a sensible approach. Hopefully Justin will be able to find
some more guidance about this with OGC.

The CITE tests are a bit funky, but as far as I remember one has to be able
to configure precise and non-precise handling on a per-layer basis to pass
them, so hopefully there is some test checking both ways.

Cheers
Andrea

--
Our support, Your Success! Visit http://opensdi.geo-solutions.it for more information.

Ing. Andrea Aime
@geowolf
Technical Lead

GeoSolutions S.A.S.
Via Poggio alle Viti 1187
55054 Massarosa (LU)
Italy
phone: +39 0584 962313
fax: +39 0584 962313
mob: +39 339 8844549

http://www.geo-solutions.it
http://twitter.com/geosolutions_it


Martin,

Thanks for all your thoughts on this matter. I'm not trying to be a stubborn
thorn in the side of reason, more a devil's advocate. So, acknowledging the
risk of beating this near-dead horse further...

On Tue, Jul 17, 2012 at 10:45 AM, Andrea Aime <andrea.aime@anonymised.com> wrote:

> On Tue, Jul 17, 2012 at 6:18 PM, Martin Davis <mdavis@anonymised.com> wrote:
>
>> Two more thoughts in favour of "reduced precision = interval":
>>
>> 1. Expressiveness - If reduced-precision time strings are taken to mean a
>> precise instant, then there is no simple way of expressing a standard
>> interval. Whereas if reduced-precision is interpreted as an interval, it's
>> still possible to express an instant via a full-precision string.

Agreed. My counterpoint would be that by being explicit, both are supported.
Since software is going to be making the request, the 'expressiveness'
argument seems less convincing - the client can support being expressive
whilst still adhering to the protocol. While making manual WMS requests is a
nice option when debugging, how many folks beyond developers are manually
editing WMS parameters to achieve success in their endeavors? To address the
concern about February in leap years, most clients would have access to
calendar math and could formulate the appropriate request.

>> 2. Common Usage: When someone says "2012" they typically mean the entire
>> year 2012, not the instant at the start of the year. Likewise for other
>> reduced-precision expressions of time. (I realize that standards often
>> don't let reality intrude, but it might be taken into consideration in
>> absence of other clear direction)

Totally and completely agree. The problem as I see it is that to
achieve both 'Common Usage' _and_ adhere to the spec, we introduce a
difference in behavior - if 2012 means all of 2012 excluding
2013-01-01T00:00:00Z but 2012/2013 means _including_ that first day of
the new year (as per the spec), then this would be bad™. I again submit
the evidence that in many domains, single increments on a precision
element (e.g. 1 year, 1 month, etc.) are _not_ standard or common
usage (think census data, seasonal weather deviations from norm,
stream gauge-data, weather forecasts, etc.).

hair-splitting standards wonkery(© mdavis), indeed...

Cheers®,
-Ian



--
Ian Schneider
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.
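To make Ian's point about calendar math concrete, here is a minimal client-side sketch (illustrative Java using java.time; the class and method names are made up, not part of any existing client): rather than relying on the server to interpret a reduced-precision value, the client expands "February 2012" into an explicit, full-precision TIME range itself, leap day included.

```java
import java.time.YearMonth;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class ExplicitTimeRange {

    // Build an explicit TIME=start/end value covering a whole month, in UTC.
    static String timeRangeForMonth(int year, int month) {
        YearMonth ym = YearMonth.of(year, month);
        DateTimeFormatter fmt = DateTimeFormatter.ISO_INSTANT;
        String start = fmt.format(ym.atDay(1).atStartOfDay(ZoneOffset.UTC));
        String end = fmt.format(ym.atEndOfMonth().atTime(23, 59, 59, 999_000_000).atZone(ZoneOffset.UTC));
        return start + "/" + end;
    }

    public static void main(String[] args) {
        // Prints: TIME=2012-02-01T00:00:00Z/2012-02-29T23:59:59.999Z
        System.out.println("TIME=" + timeRangeForMonth(2012, 2));
    }
}
```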

Totally and completely agree. The problem as I see it is that to
achieve both ‘Common Usage’ and adhere to the spec, we introduce a
difference in behavior - if 2012 means all of 2012 excluding
2013-01-01T00:00:00Z but 2012/2013 means including that first day of
the new year (as per the spec), then this would be bad™. I again submit
the evidence that in many domains, single increments on a precision
element (e.g. 1 year, 1 month, etc.) are not standard or common
usage (think census data, seasonal weather deviations from norm,
stream gauge-data, weather forecasts, etc.).

My reading of ISO 8601 is that lists (,) and ranges (/) extend the basic time syntax, which can reduce precision by dropping fields from right to left. With that reading, 2012 and 2013 on their own always represent the entire year, and so 2012/2013 is the entirety of both years (which to me is a great, easy-to-express syntax).

The big issue here is that, to be “correct”, I believe any range must have the same precision on both the from and to sides; 2012/2013-01-01 should not be valid.

With regard to clients never manually using ISO 8601 strings: while they may not be putting them in the URL by hand, having a text box in a client for entering ISO 8601 strings, including ranges and lists, is really quite handy. I have an application that does this, and it makes it much quicker (particularly with the shortcut of reduced precision) to do things like look at the first day of every month, or to pull up January from each of the last three years in one step.

David
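For reference, a minimal sketch of what the “reduced precision = interval” reading could look like (illustrative Java only, not GeoServer code; the expand method and its name are assumptions): the least-significant field supplied determines the length of a half-open interval [start, end).

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class ReducedPrecisionInterval {

    // Expand a reduced-precision value (year, month, or day precision) into
    // a half-open UTC interval [start, end). Finer precisions are omitted here.
    static Instant[] expand(String time) {
        String[] parts = time.split("-");
        ZonedDateTime start = ZonedDateTime.of(
                Integer.parseInt(parts[0]),
                parts.length > 1 ? Integer.parseInt(parts[1]) : 1,
                parts.length > 2 ? Integer.parseInt(parts[2]) : 1,
                0, 0, 0, 0, ZoneOffset.UTC);
        ZonedDateTime end;
        switch (parts.length) {
            case 1:  end = start.plusYears(1);  break; // ccyy
            case 2:  end = start.plusMonths(1); break; // ccyy-mm
            case 3:  end = start.plusDays(1);   break; // ccyy-mm-dd
            default: throw new IllegalArgumentException("unsupported in this sketch: " + time);
        }
        return new Instant[] { start.toInstant(), end.toInstant() };
    }

    public static void main(String[] args) {
        // 2012       -> [2012-01-01T00:00:00Z, 2013-01-01T00:00:00Z)
        // 2012-06    -> [2012-06-01T00:00:00Z, 2012-07-01T00:00:00Z)
        // 2012-06-01 -> [2012-06-01T00:00:00Z, 2012-06-02T00:00:00Z)
        for (String t : new String[] { "2012", "2012-06", "2012-06-01" }) {
            Instant[] iv = expand(t);
            System.out.println(t + " -> [" + iv[0] + ", " + iv[1] + ")");
        }
    }
}
```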

A few more pokes at the moribund nag below… (although I would suggest that it’s actually far from expiring, and we still need to figure out how to corral this cayuse!)

On Wed, Jul 18, 2012 at 5:04 PM, Ian Schneider <ischneider@anonymised.com> wrote:

Martin,

Thanks for all your thoughts in this matter. I’m not trying to be a
stubborn thorn in the side of reason, more a devil’s advocate. So,
acknowledging the risk of beating this near dead horse further…

It’s good to have lots of eyes on this issue - it’s quite tricky (as temporal processing always is…)

On Tue, Jul 17, 2012 at 10:45 AM, Andrea Aime
<andrea.aime@anonymised.com> wrote:

On Tue, Jul 17, 2012 at 6:18 PM, Martin Davis <mdavis@anonymised.com> wrote:

Two more thoughts in favour of “reduced precision = interval”:

  1. Expressiveness - If reduced-precision time strings are taken to mean a
    precise instant, then there is no simple way of expressing a standard
    interval. Whereas if reduced-precision is interpreted as an interval, it’s
    still possible to express an instant via a full-precision string.

Agreed. My counterpoint would be that by being explicit, both are
supported.

I don’t see how the “reduced-precision-as-instant” approach supports expressing intervals in a convenient way. Wouldn’t it mean that expressing a range for a single calendar year requires the range “2011/2011-12-31T23:59:59.999Z”? That seems pretty onerous and unreadable. Potentially inaccurate too, due to the truncation at 3 decimal places. What if a dataset allowed more precision for time and contained a value at 2011-12-31T23:59:59.999999?

Since software is going to be making the request, the
‘expressiveness’ argument seems less convincing - the client can
support being expressive whilst still adhering to the protocol. While
making manual WMS requests is a nice option when debugging, how many
folks beyond developers are manually editing WMS parameters to achieve
success in their endeavors?

I agree that automated clients have different requirements for “ease of use” than human ones, and they are probably of primary importance. However, DWB provides a nice counterexample! Also, I think the real issue is what is the correct interpretation of the standard. (And it may well be that the standard doesn’t actually provide the most ideal protocol for automated use - wouldn’t be the first time this has happened).

To address the concern about February
in a leap year, most clients would have access to calendar math
and could formulate the appropriate request.

  2. Common Usage: When someone says “2012” they typically mean the entire
    year 2012, not the instant at the start of the year. Likewise for other
    reduced-precision expressions of time. (I realize that standards often
    don’t let reality intrude, but it might be taken into consideration in
    absence of other clear direction)

Totally and completely agree. The problem as I see it is that to
achieve both ‘Common Usage’ and adhere to the spec, we introduce a
difference in behavior - if 2012 means all of 2012 excluding
2013-01-01T00:00:00Z but 2012/2013 means including that first day of
the new year (as per the spec), then this would be bad™.

My interpretation is that 2012/2013 means “all of both 2012 and 2013”. To express just “the year 2012” it would be either “2012” or “2012/2012”. Agreed the latter range looks funny to human eyes, but it might be simpler for an automated client to generate.

I again submit
the evidence that in many domains, single increments on a precision
element (e.g. 1 year, 1 month, etc.) are not standard or common
usage (think census data, seasonal weather deviations from norm,
stream gauge-data, weather forecasts, etc.).

Yes, agreed. And in that case the fully-specified time range format will be required. This is not prevented by the “reduced-precision-as-interval” interpretation. And as you point out this should not be a problem for automated clients to generate.

My take on the standard is that it’s just trying to allow a more flexible usage pattern where it can be useful and does not impact correctness and consistency. But it can’t provide a shorthand for every possible temporal use case.

Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.
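Following Martin's reading of ranges, a short self-contained sketch (again illustrative Java, with made-up names) of how 2012/2013 would cover all of both years as a half-open interval, and why a half-open end also sidesteps the millisecond-truncation problem he raises.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class YearRangeExpansion {

    // Start of a calendar year in UTC.
    static Instant startOfYear(int year) {
        return ZonedDateTime.of(year, 1, 1, 0, 0, 0, 0, ZoneOffset.UTC).toInstant();
    }

    // "2012/2013" -> [2012-01-01T00:00:00Z, 2014-01-01T00:00:00Z), i.e. all of both years.
    static Instant[] expandYearRange(String range) {
        String[] ends = range.split("/");
        return new Instant[] {
            startOfYear(Integer.parseInt(ends[0])),
            startOfYear(Integer.parseInt(ends[1]) + 1)
        };
    }

    public static void main(String[] args) {
        Instant[] iv = expandYearRange("2012/2013");
        System.out.println("[" + iv[0] + ", " + iv[1] + ")");

        // The truncation pitfall: a sub-millisecond timestamp near the end of 2011
        // is excluded by an explicit end point of 23:59:59.999Z, but included by
        // the half-open interval [2011, 2012).
        Instant sample = Instant.parse("2011-12-31T23:59:59.999999Z");
        System.out.println("inside 2011/2011-12-31T23:59:59.999Z ? "
                + (!sample.isAfter(Instant.parse("2011-12-31T23:59:59.999Z")))); // false
        System.out.println("inside half-open [2011, 2012)        ? "
                + sample.isBefore(startOfYear(2012)));                           // true
    }
}
```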

Hi all,

I’ve got some time to work on this so it would be good to verify that we’ve reached a conclusion :-) I think we have, since it’s been almost a month since anyone has posted to this thread.

It looks to me like the reduced-precision-as-range argument has pretty well won this discussion (and I’m going to go ahead and start looking into implementing that behavior.) But please speak up if this is really unacceptable!

Also, has this issue been filed in JIRA?


David Winslow
OpenGeo - http://opengeo.org/


The moribund nag would only request that this work include: (1)
a test case containing all variants of the reduced-precision semantics,
which drives the development and makes easier (2) documentation of the
expected functionality. (A sketch of such a test case follows this message.)

The documentation should also include a note that:

To preserve interoperability, a client implementation should prefer
_not_ to use these shortcuts, as other implementations may not comply.
A client that does use the shortcuts against a service that does not
make these assumptions will not fail fast, but will instead receive
unexpected results.

I don't mean to sound disgruntled, but I want to reiterate that this
choice is being made to satisfy a desire to provide ease of use for a
human when using a machine protocol - the effort, in my mind, should
instead go into creating an intuitive client (library, UI, or even
documentation of proper use) for that human.

--
Ian Schneider
OpenGeo - http://opengeo.org
Enterprise support for open source geospatial.
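A sketch of the kind of test case Ian asks for (JUnit 4 style, purely illustrative; the expand() helper is the hypothetical parser from the earlier sketch, standing in for whatever GeoServer's TIME parsing ends up doing): each reduced-precision variant sits next to the interval it is expected to map to, which both drives the implementation and doubles as documentation of the semantics.

```java
import static org.junit.Assert.assertEquals;

import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;
import org.junit.Test;

public class ReducedPrecisionSemanticsTest {

    @Test
    public void reducedPrecisionValuesBecomeIntervals() {
        // Each TIME value maps to the half-open interval [start, end) it should select.
        Map<String, String[]> cases = new LinkedHashMap<>();
        cases.put("2012",       new String[] { "2012-01-01T00:00:00Z", "2013-01-01T00:00:00Z" });
        cases.put("2012-02",    new String[] { "2012-02-01T00:00:00Z", "2012-03-01T00:00:00Z" });
        cases.put("2012-06-01", new String[] { "2012-06-01T00:00:00Z", "2012-06-02T00:00:00Z" });

        for (Map.Entry<String, String[]> c : cases.entrySet()) {
            Instant[] interval = ReducedPrecisionInterval.expand(c.getKey()); // hypothetical parser
            assertEquals(c.getKey(), Instant.parse(c.getValue()[0]), interval[0]);
            assertEquals(c.getKey(), Instant.parse(c.getValue()[1]), interval[1]);
        }
    }
}
```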

On Tue, Aug 14, 2012 at 9:44 PM, Ian Schneider <ischneider@anonymised.com> wrote:

I don’t mean to sound disgruntled, but I want to reiterate that this
choice is being made to satisfy a desire to provide ease of use for a
human when using a machine protocol - the effort, in my mind, should
instead go into creating an intuitive client (library, UI, or even
documentation of proper use) for that human.

I don’t know if this is in scope, but the WMS CITE tests for dimensions
demand that we are able to do a number of things, including being able
to set up the “nearest” behavior on a per-layer basis, see:
http://cite.opengeospatial.org/test_engine/wms/1.3.0/

The tests for dimensions also make a set of assertions regarding the treatment
of nearest, see here:
https://svn.opengeospatial.org/ogc-projects/cite/scripts/wms/1.3.0/tags/r3/ctl/dimensions.xml

I’m not saying that your work must respect the above… but it would be really nice if it did
not impede a compliant implementation.

Cheers
Andrea

==
Our support, Your Success! Visit http://opensdi.geo-solutions.it for more information.

Ing. Andrea Aime
@geowolf
Technical Lead

GeoSolutions S.A.S.
Via Poggio alle Viti 1187
55054 Massarosa (LU)
Italy
phone: +39 0584 962313
fax: +39 0584 962313
mob: +39 339 8844549

http://www.geo-solutions.it
http://twitter.com/geosolutions_it


Hmm, that ‘nearest values’ behavior is interesting:

This allows, for example, hourly data whose actual recording time is precise to the millisecond to be
requested simply by stating the desired date and hour.

That sounds an awful lot like the range behavior we’ve been discussing - making this behavior optional (on a per-layer basis) seems to imply that the default should be something else (probably what Ian is suggesting).

Then it follows up with

If a request includes an imprecise dimensional value, and nearest value behaviour has been declared, then the
server shall compute and send the nearest available value. The value shall be rounded, not merely truncated. If
nearestValue selection is not supported, then the server shall throw an Exception (code=“InvalidDimensionValue”) to indicate that a value was required.

I don’t understand what the rounding vs. truncation distinction has to do with this situation (since it would seem that we are going from a less-precise to a more-precise value), but the requirement to “throw an Exception” suggests that we should simply require the precision in the request to at least match the precision in the data when this nearest-value behavior is not enabled. (Implementation-wise there’s a difficulty here: we don’t track precision after the request is initially parsed, making it hard for us to detect the value’s precision.)

Since the TIME parameter does not support per-layer values this would seem to make multi-layer requests with mixed time resolutions require a time range.


David Winslow
OpenGeo - http://opengeo.org/
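As a rough illustration of the “nearest value” behaviour David quotes (an editorial sketch, not GeoServer code; the class and method names are made up): when nearestValue is declared, an imprecise request resolves to the closest available time, which may well lie before the requested instant, and when it is not declared the server refuses with something like the spec’s InvalidDimensionValue exception.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class NearestTimeValue {

    static Instant resolve(Instant requested, List<Instant> available, boolean nearestEnabled) {
        if (available.contains(requested)) {
            return requested;
        }
        if (!nearestEnabled) {
            // Stand-in for a WMS service exception with code InvalidDimensionValue.
            throw new IllegalArgumentException("InvalidDimensionValue: " + requested);
        }
        // Pick the available time with the smallest absolute distance to the request.
        Instant best = available.get(0);
        for (Instant candidate : available) {
            if (Duration.between(requested, candidate).abs()
                    .compareTo(Duration.between(requested, best).abs()) < 0) {
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Instant> domain = List.of(
                Instant.parse("2012-06-01T11:58:30Z"),
                Instant.parse("2012-06-01T12:02:10Z"));
        // A request at hour precision, taken here as the instant 12:00:00Z.
        Instant requested = Instant.parse("2012-06-01T12:00:00Z");
        // Prints 2012-06-01T11:58:30Z - the nearest value precedes the requested instant.
        System.out.println(resolve(requested, domain, true));
    }
}
```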



On Tue, Aug 14, 2012 at 1:52 PM, David Winslow <dwinslow@anonymised.com> wrote:

Hmm, that ‘nearest values’ behavior is interesting:

This allows, for example, hourly data whose actual recording time is precise to the millisecond to be
requested simply by stating the desired date and hour.

That sounds an awful lot like the range behavior we’ve been discussing - making this behavior optional (on a per-layer basis) seems to imply that the default should be something else (probably what Ian is suggesting).

Then it follows up with

If a request includes an imprecise dimensional value, and nearest value behaviour has been declared, then the
server shall compute and send the nearest available value. The value shall be rounded, not merely truncated. If
nearestValue selection is not supported, then the server shall throw an Exception (code=“InvalidDimensionValue”) to indicate that a value was required.

I don’t understand what the rounding vs. truncation distinction has to do with this situation (since it would seem that we are going from a less-precise to more-precise value)

Could it mean that in order to determine candidate data values for the “nearest value”, the data values should be rounded rather than truncated? This would mean that the nearest data value might actually be less than the request value. (This is different behaviour than the “interval” semantics under discussion.)

but the requirement to “throw an Exception” suggests that we should just be requiring the precision in the request to at least match the precision in the data when this nearest-value behavior is not enabled. (Implementation-wise there’s a difficulty here as we don’t track precision after the request is parsed initially, making it hard for us to detect the value.)

And we also don’t know the underlying precision in the data, right? So this would effectively say that every request time value requires full precision.

It’s hard to see clients relying on exceptions being thrown if they fail to heed the absence of declared nearest-value behaviour. Pretty complex semantics for that. So maybe this doesn’t stop a server from being more lenient in what it accepts.





Martin Davis
OpenGeo - http://opengeo.org
Expert service straight from the developers.
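To close, a small sketch of Martin’s reading of “rounded, not merely truncated” (an assumption about intent, illustrative Java only): when matching a request given at hour precision, a data value is rounded to the nearest hour rather than truncated to its containing hour, so a value shortly before the requested hour can still be the nearest match.

```java
import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class RoundVsTruncate {

    // Round an instant to the nearest hour (ties round up).
    static Instant roundToHour(Instant t) {
        Instant floor = t.truncatedTo(ChronoUnit.HOURS);
        Duration intoHour = Duration.between(floor, t);
        return intoHour.compareTo(Duration.ofMinutes(30)) >= 0
                ? floor.plus(1, ChronoUnit.HOURS)
                : floor;
    }

    public static void main(String[] args) {
        Instant requested = Instant.parse("2012-06-01T12:00:00Z"); // request "...T12Z"
        Instant dataValue = Instant.parse("2012-06-01T11:40:00Z");

        System.out.println("truncated: " + dataValue.truncatedTo(ChronoUnit.HOURS)); // 11:00:00Z - no match
        System.out.println("rounded:   " + roundToHour(dataValue));                  // 12:00:00Z - matches
    }
}
```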