On Sat, Sep 12, 2015 at 12:42 PM, Sören Gebbert
<soerengebbert@googlemail.com> wrote:
Hi,
From 10:30 to 13:30 is 180 minutes. This is the smallest gap size between the time instances.
And it is the greatest common divisor of all gaps in the time series.
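A minimal sketch of that computation in plain Python (the function name is hypothetical; GRASS derives this internally when registering maps): the granularity of an irregular series is the greatest common divisor of the gaps between consecutive time instances.

```python
from datetime import datetime
from functools import reduce
from math import gcd

def granularity_minutes(timestamps):
    """GCD of all gaps (in minutes) between consecutive time instances."""
    ts = sorted(timestamps)
    gaps = [int((b - a).total_seconds() // 60) for a, b in zip(ts, ts[1:])]
    return reduce(gcd, gaps)

stamps = [
    datetime(2015, 9, 12, 10, 30),
    datetime(2015, 9, 12, 13, 30),  # gap: 180 minutes
    datetime(2015, 9, 12, 19, 30),  # gap: 360 minutes
]
print(granularity_minutes(stamps))  # → 180
```

Here gcd(180, 360) = 180, so the smallest gap is also the granularity, matching the example above.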
Sure but in the end that's not of great relevance when dealing with
irregular, absolute data, right?
So I can basically ignore it here? Just to be sure.
I am not sure what you expect from the granularity. In case it is of no relevance for you, then you can simply ignore it. If you don't need this kind of temporal information in an aggregation process, ignore it.
However, you can use it as an indicator of the quality of the import procedure and of the time stamps of your dataset. In this case the granularity indicates correctly aligned time stamps.
On Sat, Sep 12, 2015 at 8:08 PM, Sören Gebbert
<soerengebbert@googlemail.com> wrote:
Hi Markus,
On 12.09.2015 at 14:43, "Markus Neteler" <neteler@osgeo.org> wrote:
On Sat, Sep 12, 2015 at 12:42 PM, Sören Gebbert
<soerengebbert@googlemail.com> wrote:
>
> Hi,
> From 10:30 to 13:30 is 180 minutes. This is the smallest gap size
> between the time instances.
> And it is the greatest common divisor of all gaps in the time
> series.
Sure but in the end that's not of great relevance when dealing with
irregular, absolute data, right?
So I can basically ignore it here? Just to be sure.
I am not sure what you expect from the granularity. In case it is of no
relevance for you, then you can simply ignore it. If you don't need this kind
of temporal information in an aggregation process, ignore it.
Not sure myself: I have four irregular overpasses (i.e. maps) per day,
from which I want to generate e.g. weekly averages.
I suppose that it is not relevant then?
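In GRASS itself such a weekly aggregation would typically be done with t.rast.aggregate (e.g. granularity="1 weeks", method=average); the idea can be sketched in plain Python, with hypothetical sample values, to show that the irregular granularity of the inputs does not enter the computation at all:

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical (timestamp, value) pairs from irregular overpasses.
samples = [
    (datetime(2015, 9, 7, 10, 30), 1.0),
    (datetime(2015, 9, 8, 13, 30), 3.0),
    (datetime(2015, 9, 14, 9, 0), 5.0),
]

# Group by ISO week and average; only the week each map falls into matters.
weekly = defaultdict(list)
for ts, val in samples:
    year, week, _ = ts.isocalendar()
    weekly[(year, week)].append(val)

averages = {wk: sum(v) / len(v) for wk, v in weekly.items()}
print(averages)  # → {(2015, 37): 2.0, (2015, 38): 5.0}
```

The aggregation windows come from the requested output granularity (one week), not from the GCD-derived granularity of the input series, which is why the latter can be ignored here.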
However, you can use it as an indicator of the quality of the import procedure
and of the time stamps of your dataset. In this case the granularity indicates
correctly aligned time stamps.
ok, thanks.
(I added a granularity note in r66182 to the manual)