[pgrouting-dev] Implementation of core Time Dependent Dijkstra function

Hi.

As already discussed in the proposal for Time Dependent Shortest Path (TDSP), the core part will be an extension of the basic Dijkstra algorithm to work with time-dependent edge weights.

This will involve a major change from the usual pgRouting approach, in the sense that no Boost algorithm for time-dependent edge weights exists yet. So, we will have to code the core algorithm ourselves.

I was looking at the Boost implementation of the Dijkstra algorithm here: http://www.boost.org/doc/libs/1_46_1/boost/graph/dijkstra_shortest_paths.hpp I figured we can reuse much of the code and ideas.

Before I look deeper into the code and see how we can modify it to make it time dependent, I wanted to know: if we reuse the static Boost Dijkstra code (and some other code that would be required with it), will there be any licensing issues?


Regards,
-Jay Mahadeokar

On 4/29/2011 5:03 PM, Jay Mahadeokar wrote:

Hi.

As already discussed in the proposal for Time Dependent Shortest Path
(TDSP) basically core part would be the extension of basic dijkstra
algorithm to work for time dependent edge weights.

This will involve a major change in the usual approach of pgRouting in
the sense that no boost algorithm for time dependent edge weights is
already implemented. So, we have to code the core algorithm ourselves.

I was looking at the boost implementation of dijkstra algorithm here:
http://www.boost.org/doc/libs/1_46_1/boost/graph/dijkstra_shortest_paths.hpp
I figured we can reuse much of the code and ideas.

Before I look deeper into the code, and see how we can modify it to make
it time dependent, I wanted to know: if we reuse the static boost
dijkstra code (and some other code that would be required with it), will
there be any licencing issues?

Boost has a very liberal license, and since we are already including Boost code in pgRouting, I do not think it should be a problem.

I assume the "some other code" you reference would be code that you write for the project? If so, then this should not be a problem.

-Steve

Hi Jay,

I think the Boost License won't cause any trouble if you use it:
http://www.boost.org/users/license.html#FAQ

It's written that some libraries within Boost use their own licenses (http://www.boost.org/users/license.html#Introduction), so it's probably a good idea to quickly reconfirm each time you want to make use of one.

Daniel



Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

Hi,

I studied the Boost implementation of Dijkstra: http://www.boost.org/doc/libs/1_46_1/boost/graph/dijkstra_shortest_paths.hpp

At its core, it uses the DijkstraVisitor concept, which is an extension of the BFSVisitor concept (http://www.boost.org/doc/libs/1_46_1/boost/graph/breadth_first_search.hpp), to perform the Dijkstra search.

As expected, it assumes a static WeightMap which gives the static weights of the edges. During the BFS search, a priority queue is used. When the minimum vertex is removed, its outbound edges are examined and the following functions are called:

  1. grey_target() - called when the target of the edge currently being considered has already been discovered. It also calls dijkstra_queue_update if we discover a better shortest path to the target vertex.
  2. tree_edge() - called when the target of the edge currently being considered is seen for the first time.

Both functions call the relax function (http://www.boost.org/doc/libs/1_46_1/boost/graph/relax.hpp), which uses the static WeightMap to relax the edge being considered.

Now, we need to supply dynamic weights instead of the static weight map. So, an alternative implementation of the WeightMap will be required.

A few thoughts regarding implementation:

The most important detail that needs to be finalised is the implementation of the dynamic weight map.

  1. Now, since the weight is a function of time, the weight map would look like:

Key: (Edge, Time) - since edge and time will form a unique key.
Value: weight

Is this interpretation reasonable? (A rough sketch of this idea follows the list below.)

  2. When do we populate the weightMap data structure?
  • One approach would be to scan all the (edge, time) values available in the database for all edges under consideration and pass them to the core function. This increases the space requirement, and it grows as the data precision increases (i.e. as data is available at finer time intervals).
  • Another approach would be to query the database each time we need the value for an (edge, time) key. This will lead to a considerable increase in time complexity.
  • A better approach would be to query only those (edge, time) values whose time is within a particular range from the start_time. We can also make this range a parameter which the user can supply (according to the estimated distance to the target and the available space / time constraints). This would provide functionality like the bounding box in the plain pgRouting Dijkstra implementation. It can be initialised before calling the core algorithm. If during the search the entries are not available in the map, we can query the database again and retrieve more entries.
  3. Many things related to design will need to be considered, such as:
  • Do we create our own timeDependentDijkstraConcept, TimeDependentDijkstraVisitor etc. on the lines of Boost's dijkstra_visitor, or do we simply extend Boost's DijkstraVisitor and override the required functions?
  • The relax() function will also need to be altered. Boost keeps it in a separate file (mainly because it is required by other functions as well). Should we follow the same approach?
  • It would be nice to keep the function which retrieves the dynamic weights independent of the core algorithm. In static Dijkstra, we simply populate the whole distanceMap at the start. Here, the edge weight query might happen during the Dijkstra search. How do we design the dynamic weight query interface so that the above abstraction is achieved?
  • Other design issues?
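To illustrate point 1, here is a minimal sketch of how such a time-dependent weight map could look (purely illustrative; the class name, key layout and return convention are my assumptions, not a settled interface):

#include <map>
#include <utility>

// Sketch of a time-dependent weight map keyed on (edge id, time).
// An entry stored at time t is taken to be valid from t until the next
// entry for the same edge.
class TimeDependentWeightMap
{
public:
    void insert(int edge_id, double time, double travel_time)
    {
        weights[std::make_pair(edge_id, time)] = travel_time;
    }

    // Travel time in effect on edge_id at query_time, or -1 if the map
    // holds no entry at or before query_time for that edge.
    double get_travel_time(int edge_id, double query_time) const
    {
        std::map<std::pair<int, double>, double>::const_iterator it =
            weights.upper_bound(std::make_pair(edge_id, query_time));
        if (it == weights.begin())
            return -1.0;
        --it;  // last entry with key <= (edge_id, query_time)
        if (it->first.first != edge_id)
            return -1.0;  // belongs to a different edge
        return it->second;
    }

private:
    std::map<std::pair<int, double>, double> weights;
};

The idea is that an entry is taken to be valid from its time value until the next entry for the same edge.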

I have no prior experience with designing such functionality. So, I will wait for other opinions. Any reference books that might be helpful here?


Regards,
-Jay Mahadeokar

On 5/12/2011 5:02 AM, Jay Mahadeokar wrote:

Hi,

I studied the boost implementation of dijkstra.
http://www.boost.org/doc/libs/1_46_1/boost/graph/dijkstra_shortest_paths.hpp

At its core, it uses dijkstraVisitorConcept which is extension of
bfsVisitorConcept
(http://www.boost.org/doc/libs/1_46_1/boost/graph/breadth_first_search.hpp
) so as to perform dijkstra search.

As expected it assumes a static WeightMap which gives the static weights
of the edges. During bfs search, priority queue is used. When minimum
vertex is removed, the outbound edges are examined and following
functions are called:
1. grey_target() - is called when target of the edge currently being
considered was already discovered
                          It also calls dijkstra_queue_update, if we
discover better shortest path to target vertex.
2. tree_edge() - is called when the target of edge currently considered
is seen for first time

Both functions call the relax
function(http://www.boost.org/doc/libs/1_46_1/boost/graph/relax.hpp)
which uses the static WeightMap to relax the edge being considered.

Now, we need to supply dynamic weights instead of the static weight map.
So, an alternative implementation of the WeightMap will be required.

*Few thoughts regarding implementation:*

Most important detail which needs to be finalised is the implementation
of dynamic weight map.

1. Now, since the weight is function of time, the weight map is now like:

Key: (Edge , Time) - Since edge and time will form a unique key.
Value: weight

Is this interpretation reasonable?

This seems reasonable. The only extension to this that would be valuable would be to add a "class" to the key to allow filtering by traveler class. So something like Key: (Class, Edge, Time), where the default class is "Any". Hmmm, on second thought, it might be a better abstraction to use the Boost Filtered Graph to do this and keep your code simple, because it is working in a tighter loop.

2. When do we populate the weightMap data structure?

    * One approach would be to scan all the (edge,time) values in
      database available for all edges under consideration and pass this
      to the core function. This will increase the space complexity
      which will be greater, as the data precision increases (i.e data
      is available between finer time intervals).
    * Another approach would be to query the database each time to get
      the value for (edge,time) key. This will lead to considerable
      increase in the time complexity.
    * A better approach would be to query only few (edge,time) values
      such that time is in particular range from the start_time. We can
      also have this as a parameter which user can supply (according to
      the estimated distance to target, available space / time
      constraints). This will provide functionality like bounding box in
      simple pgRouting dijkstra implementation. This can be initialised
      before calling the core algorithm. If during search, the entries
      are not available in the map then we can again query the database
      and retrieve more entries.

I like your idea of a sliding window. You query for weights between Time and Time+Delta; as you approach Time+Delta, you update the window and load new data. All entries older than the current time can be purged or overwritten, like a circular buffer.
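A rough sketch of that sliding window, just to make the idea concrete (the loader callback, window size and purge policy here are assumptions for illustration, not a proposed interface):

#include <map>
#include <utility>
#include <vector>

// One time-dependent cost row as it might come back from the database.
struct cost_entry
{
    int edge_id;
    double time;         // offset from the query start time
    double travel_time;
};

// Loader callback standing in for the SQL query that fetches all entries
// with window_start <= time < window_end.
typedef std::vector<cost_entry> (*load_window_fn)(double window_start,
                                                  double window_end);

class sliding_window_weights
{
public:
    sliding_window_weights(load_window_fn loader, double delta)
        : loader(loader), delta(delta), window_end(0.0) {}

    // Ensure entries around current_time are loaded; drop ones we have passed.
    void ensure_window(double current_time)
    {
        if (current_time + 0.5 * delta < window_end)
            return;  // still comfortably inside the current window

        std::vector<cost_entry> rows = loader(window_end, current_time + delta);
        for (std::size_t i = 0; i < rows.size(); ++i)
            weights[std::make_pair(rows[i].edge_id, rows[i].time)] =
                rows[i].travel_time;
        window_end = current_time + delta;

        // Simplified purge: drop entries strictly older than current_time.
        // (A real version would keep the entry still in effect for each edge.)
        std::map<std::pair<int, double>, double>::iterator it = weights.begin();
        while (it != weights.end()) {
            if (it->first.second < current_time)
                weights.erase(it++);
            else
                ++it;
        }
    }

private:
    load_window_fn loader;
    double delta;
    double window_end;
    std::map<std::pair<int, double>, double> weights;
};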

3. Many things related to design will need to be considered like:

    * Do, we create our own timeDependentDijkstraConcept,
      TimeDependentDijkstraVisitor etc on lines of boost's
      dijkstra_visitor, or we simply extend the boosts dijkstraVisitor
      and override the required functions?
    * The relax() function will also need to be altered. Boost has kept
      it in separate file(mainly because it is required by other
      functions also) Should we follow same approach?
    * It would be nice to keep the function which retrieves the dynamic
      weights independent of the core algorithm. In static dijkstra, we
      simply populate whole distanceMap at start. Here, the edgeWeight
      query might happen during dijkstra search. How do we design the
      dynamic weight query interface so that above abstraction is achieved?
    * Other design issues/ ??

These are really good questions. I think that the best place to ask these Boost Graph questions is on the Boost mailing list.

http://lists.boost.org/mailman/listinfo.cgi/boost-users

Jeremiah Willcock and others are very helpful when people ask questions.

I have no prior experience with designing such functionality. So, I will
wait for other opinions. Any reference books that might be helpful here?

I am a C programmer. I understand some of the basic C++ concepts, but that is about it, so again, I think the Boost mailing list is a good place to start.

-Steve

On Thu, May 12, 2011 at 8:08 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

Few thoughts regarding implementation:

Most important detail which needs to be finalised is the implementation
of dynamic weight map.

  1. Now, since the weight is function of time, the weight map is now like:

Key: (Edge , Time) - Since edge and time will form a unique key.
Value: weight

Is this interpretation reasonable?

This seems reasonable. The only extension to this that would be valuable would to add a “class” to the key to allow filtering by traveler class. So something like Key: (Class, Edge, Time) where the default class is “Any”. Hmmm, on second thought, it might be a better abstraction to use the Boost Filtered Graph to do this and keep your code simple because it is working in a tighter loop.

Can you please explain what exactly you mean by traveler class? Is it something like bus, 2-wheeler, etc. (they might have different costs at the same time)?

I am assuming that the user will query only those (Edge, Time) entries that are useful. So, if they are querying for a 2-wheeler, they should specify that in the query, so that only entries related to that class are passed to the Boost graph and the rest are already filtered out.


Regards,
-Jay Mahadeokar

On 5/13/2011 2:03 AM, Jay Mahadeokar wrote:

On Thu, May 12, 2011 at 8:08 PM, Stephen Woodbridge
<woodbri@swoodbridge.com <mailto:woodbri@swoodbridge.com>> wrote:

        *Few thoughts regarding implementation:*

        Most important detail which needs to be finalised is the
        implementation
        of dynamic weight map.

        1. Now, since the weight is function of time, the weight map is
        now like:

        Key: (Edge , Time) - Since edge and time will form a unique key.
        Value: weight

        Is this interpretation reasonable?

    This seems reasonable. The only extension to this that would be
    valuable would to add a "class" to the key to allow filtering by
    traveler class. So something like Key: (Class, Edge, Time) where the
    default class is "Any". Hmmm, on second thought, it might be a
    better abstraction to use the Boost Filtered Graph to do this and
    keep your code simple because it is working in a tighter loop.

Can you please explain what exactly do you mean by traveler class? Is it
like - bus, 2-wheeler etc (they might have different costs at same time) .

I am assuming that user will query only those entries(Edge,Time) that
are useful. So, if he is querying for 2-wheeler, he should specify that
in query, so that only entries related to that class are passed to boost
graph and rest are filtered out already.

Yes, you are correct, this is the appropriate way to do this. Sorry for the confusion.

-Steve

Hi,

I have outlined a possible design and implementation of the time-dependent algorithm on the basis of the feedback received so far:

We assume we have the following database tables:

  1. Ways: (On lines of the one in pgrouting-workshop)
  • gid
  • source
  • target
  • name
  • the_geom
  • static_length (In case time dependent routing is not used)
  • any other attributes…
  2. TimeDepCosts
  • gid
  • day_of_week
  • time_of_day
  • travel_time / arrival_time
  3. TurnRestrictions (Time dependent)
  • node/source
  • time_from
  • time_to
  • node_to
  • ancestors

If only a static shortest path query is made, then only table 1 is required.
If a time-dependent shortest path query is made, tables 1 and 2 will be used.
If a time-dependent shortest path query with turn restrictions is made, all 3 tables will be used.

SQL Structures and Queries

//Same as normal dijkstra - Return type

CREATE TYPE path_result AS (vertex_id integer, edge_id integer, cost float8);

-----------------------------------------------------------------------
-- Core function for time dependent shortest_path computation
-- See README for description
-----------------------------------------------------------------------
CREATE OR REPLACE FUNCTION time_dependent_shortest_path(sql text, source_id integer,
        target_id integer, directed boolean, has_reverse_cost boolean,
        day_of_week integer, time_of_day double precision)
        RETURNS SETOF path_result
        AS '$libdir/librouting'
        LANGUAGE 'C' IMMUTABLE STRICT;

TODO: Decide what wrapper functions will be required, including bounding box and sliding window functionality.

Code Design

Additional DS to be used besides the normal Dijkstra structures:

typedef struct Time_Dependent_Costs
{
    int edgeId;
    int day_of_week;
    double time_of_day;
    double travel_time;
} time_dep_cost_t;

Time_dependent_dijkstra.c

This will contain the main PG_FUNCTION, time_dependent_shortest_path, which will call compute_time_dependent_shortest_path. The compute function can be defined as follows:

static int compute_time_dependent_shortest_path(char* sql, int start_vertex,
        int end_vertex, bool directed, bool has_reverse_cost,
        path_element_t **path, int *path_count,
        int day_of_week, double time_of_day);

It will first query the ways table as per the SQL passed in, and populate edge_t structures with the edges.
Then it will query the TimeDepCosts table and generate the time_dep_cost_t entries.

Next it will call boost_time_dep_dijkstra() from time_dep_boost_wrapper.cpp.

time_dep_boost_wrapper.cpp

Main function:

int boost_time_dep_dijkstra(edge_t *edges, unsigned int edge_count,
        time_dep_cost_t *time_dep_costs, unsigned int time_dep_cost_count,
        int start_vertex, int end_vertex,
        bool directed, bool has_reverse_cost,
        path_element_t **path, int *path_count, char **err_msg);

It will generate the Boost graph using the information in edge_t.
It will generate the WeightMap using the information in time_dep_cost_t.

WeightMap:
It will be a kind of Boost property map.
The key will be a boost::tuple<unsigned int, int, double> representing <edgeId, day_of_week, time_of_day>.
The value will be the cost (double).

Then it will call dijkstra_time_dep_shortest_path(), defined in time_dep_dijkstra_shortest_path.hpp.
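As a sketch of that tuple-keyed WeightMap (illustrative only; whether it ends up as a plain std::map or an actual Boost property map adaptor is still open):

#include <map>
#include <boost/tuple/tuple.hpp>
#include <boost/tuple/tuple_comparison.hpp>  // gives operator< so tuples can key a std::map

// Key: <edgeId, day_of_week, time_of_day>, value: cost.
typedef boost::tuple<unsigned int, int, double> weight_key_t;
typedef std::map<weight_key_t, double> weight_map_t;

// Fill the map from the time_dep_cost_t array handed over by the C layer
// (time_dep_cost_t as defined above).
static void fill_weight_map(const time_dep_cost_t *costs, unsigned int count,
                            weight_map_t &wmap)
{
    for (unsigned int i = 0; i < count; ++i)
        wmap[boost::make_tuple((unsigned int) costs[i].edgeId,
                               costs[i].day_of_week,
                               costs[i].time_of_day)] = costs[i].travel_time;
}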

time_dep_dijkstra_shortest_path.hpp

This will be coded on the lines of dijkstra_shortest_path.hpp. Instead of the default static weight map, the newly defined weight map will be used.

TODO: dijkstra_shortest_path uses a very neat and robust design, including named parameters and other design patterns.
Examine whether these are necessary, or whether the same thing can be done in a simpler way.
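The essential change to the relax step can be pictured like this (a sketch only, not Boost's code; the stand-in lookup function is an assumption here):

#include <limits>
#include <vector>

// Stand-in for the real weight lookup (e.g. the WeightMap described above):
// travel time on edge_id when departing at departure_time, or -1 for no data.
static double get_travel_time(int /*edge_id*/, double /*departure_time*/)
{
    return 1.0;  // dummy value so the sketch is self-contained
}

// Time-dependent relax: dist[] holds the best known arrival time (offset from
// the query start) at each vertex, and the edge weight is looked up at the
// moment we would leave u, i.e. at dist[u].
static bool relax_time_dependent(int u, int v, int edge_id,
                                 std::vector<double> &dist,
                                 std::vector<int> &predecessor)
{
    const double inf = std::numeric_limits<double>::infinity();
    if (dist[u] == inf)
        return false;
    double w = get_travel_time(edge_id, dist[u]);
    if (w < 0.0)
        return false;  // no cost data: treat the edge as unusable at this time
    if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        predecessor[v] = u;
        return true;
    }
    return false;
}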

To get started, I will assume that we query all the time-dependent costs at once and generate the WeightMap. Once this is done, we can go for the sliding window idea discussed earlier, which may save space.

The support for time-dependent turn restrictions can also be implemented later. (If not during GSoC, then maybe after that.)

Eagerly awaiting any thoughts / suggestions. I will try and design the internal time_dependent_dijkstra code asap, and update the list.


Regards,
-Jay Mahadeokar

Jay,

This is an excellent job! I found it to be very clear and detailed, and you have captured all the input and presented it in a nice clean design.

Would it be possible to support turn restrictions in the static Dijkstra also? I'm thinking we just use all the same structures but ignore the time components to keep things simple. So if the turn restrictions table is present we use it, otherwise we assume no restrictions. If doing a static shortest path with turn restrictions we ignore the time components, otherwise we use them. And it is up to the user to make sure the turn restriction table is valid for the analysis being requested.

For the time definitions in your tables, I think you probably need to define some common structure for handling a time entry.

For example, time_from and time_to might need to be defined as a structure that includes day_of_week. day_of_week might take values like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1..7 - Sun,...,Sat

or making this a bit field. Holidays are trickier, and imply another table that defines holidays local to the region. I think commenting on holidays in the design is adequate for now; it can be left to a future effort if it is needed.

time_from - when to apply this
time_to - implies a duration from the start,
             but might be equal to time_from or null to imply an instant

So for:
   schedules:
       day_of_week: 0, time_from = time_to = HH:MM
   turn_restrictions:
       day_of_week: -1, time_from: 07:00, time_to: 10:00
   speed_restriction:
       day_of_week: -1, time_from: 00:01, time_to: 06:59, 55mph
       day_of_week: -1, time_from: 07:00, time_to: 10:00, 30mph
       day_of_week: -1, time_from: 10:01, time_to: 16:30, 55mph
       day_of_week: -1, time_from: 16:31, time_to: 19:30, 30mph
       day_of_week: -1, time_from: 19:31, time_to: 24:00, 55mph
       day_of_week: -2, time_from: 00:01, time_to: 24:00, 55mph
       day_of_week: -3, time_from: 00:01, time_to: 24:00, 55mph

Anyway I think this gets the idea across. Time is a tricky thing to represent. You should model what you feel is reasonable for your effort to be successful and we can document the rest for a future project if needed.
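For what it's worth, such a time entry could be captured by something as simple as the struct below (a sketch only; the field names and units are placeholders for discussion):

// A common "time entry", following the day_of_week convention above
// (-3 holidays, -2 weekend days, -1 week days, 0 all days, 1..7 Sun..Sat).
struct time_entry
{
    int day_of_week;    // -3..0 special values, or 1..7
    double time_from;   // seconds from midnight when the entry starts to apply
    double time_to;     // seconds from midnight; equal to time_from for an instant
    double value;       // e.g. a speed limit or travel time for this period
};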

You have made solid progress and I like what I am seeing.

Best regards,
   -Steve


Hi Jay and Steve,

This looks really nice, but let me give some comments regarding how to model time, because this is really tricky in my opinion, especially when thinking about an abstract network that isn't a road network.

Would it be possible to support turn restrictions in the static Dijkstra also? I’m thinking just use all the same structures but ignore the the time components to keep things simple. So if the the turn restrictions table is present we use it, otherwise assume no restrictions. If doing static shortest path with turn restrictions then ignore the time components otherwise we use them. And it is up to the user to make sure the turn restriction table is valid for the analysis being requested.

Currently in pgRouting, Dijkstra and A* don't support turn restriction modelling. What I actually like about Shooting Star is that it routes from edge to edge instead of node to node. So it allows modelling relations between edges rather than nodes, which I think is closer to how humans would think of this.
Dijkstra can only visit a node once (as Shooting Star only visits an edge once). Well, there can be turn restriction cases where a node is passed twice, and those can't be handled correctly with Dijkstra as far as I know.

In the beginning I wouldn't think about the turn restrictions too much. Let's take it as an extra if we see we still have enough time.
Of course, if you have a good idea to implement it all at once with only a little extra work, then that's fine.

For time definitions in your tables I think you need to probably define some common structure for handling a time entry.

Another problem might be time zones. A plain day field + time field might not be able to allow driving from one time zone to another (or just think about a flight network). I have never gone from one time zone to another by car or train, but Steve and Anton, you might have this experience. How does car navigation software handle this when you cross the time zone border? ;-)

So taking "timestamp with time zone" for those fields might be a good idea to be able to support such functionality.

For example time_from and time_to might need to be defined as a structure that includes day_of_week. day_of week might take values like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1…7 - Sun,…,Sat

I think the problem here is how to model recurring time-dependent costs, right?
If you think about weekdays, then you can't, for example, model monthly or yearly events.

I'm not really sure this is useful information here, but I once saw how the "iCal" format models recurring calendar events:
http://ext.ensible.com/deploy/dev/examples/calendar/recurrence-widget.html

Maybe we can learn something from there. The example would probably need to be extended with some duration. But looking at calendar formats might be a good idea for modelling holidays etc.

Or another (stupid) example would be cron job syntax. Something I always need to google for, as I can't remember it ;-)

All this time-dependent stuff, events and schedules is also an issue in the DARP solver.
And it is probably also important for the multi-modal routing, so if we can find some clever way to model this and share it between algorithms, that would be great.

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

Hi Daniel,

These are good points.

I agree that turn restrictions should be thought about but only implemented if there is time. I think we should take the discussion of turn restrictions into a separate thread. I'm interested in what you think is a limitation that can't be modeled in Dijkstra or A*.

Regarding the time modeling, my point was just that we needed more thought there and a richer model developed. The crontab model is a good idea. I'm not opposed to modeling monthly or yearly events, but they rarely exist other than holidays, which I tried to capture. I suppose you could model something like the Boston Marathon with all the road closures and restrictions, but that seems to be a lot of work for a one-day event, though I can see city governments deploying the resources to model something like this.

Regarding time zones: times need to be entered with zones for models that cross zones, but all the data should be converted to UTC internally so the solver runs in a consistent time model. Time zones are for input and output; the solver should be ignorant of them, in my opinion.

I have carried my GPS from the Eastern to the Central time zone, and it knew the local time when I powered it up. So my guess is that it would auto-correct when crossing the time zone.

-Steve


Hi,

I think what we need is a function along the following lines:

Given a query_start_time, it should query the TimeDepCosts table and return the set of entries whose start time falls within query_start_time + delta. This delta might be x hours, depending on the upper bound of the expected journey time.

We need the following things handled in our model:

If query_start_time is 11PM Sunday and delta is 10 hours, we will need to query entries with a start time between 11PM Sunday and 9AM Monday. So, basically, the function needs to be periodic in terms of time_of_day and day_of_week.

As Steve suggested, we can maintain conventions for day_of_week like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1-7 Mon - Sun.

If we just keep entries for -3, -2, -1, 0 and drop the individual entries for Sun - Sat, that would reduce the space required, assuming that the entries for all weekdays are the same. If the times for different weekdays differ, then we would have separate entries for each day.

So, the query should properly examine the query_day_of_week and select the proper entries. For example, for the above query, if it is Sunday, then after 12AM the next entries will be queried with day_of_week as -1, but if it was Saturday, the next entries will be queried with day_of_week as -2.

We can have a boolean parameter like isHoliday in the query itself, which will tell whether a given day (maybe a weekday) is a holiday or not. Then the query will have to be modified accordingly (query for -3). This will take care of queries for local holidays etc. and we do not have to worry about calendars. The user will have to worry about that… :-)

There can be time_from and time_to entries as well. But, if we just have entries like:

day_of_week: -1, time_of_day: 00:01, 55mph
day_of_week: -1, time_of_day: 07:00, 30mph
day_of_week: -1, time_of_day: 10:01, 55mph
day_of_week: -1, time_of_day: 16:31, 30mph

we can assume that the entry with time_of_day 00:01 is valid up to 07:00. And if query_start_time is 02:00 and delta is 10 hours, we can query entries whose time_of_day < query_start_time + delta (taking care of the change of day).

Is this assumption reasonable?
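Under that assumption, the per-edge lookup reduces to something like this (a sketch; entries are assumed sorted by time_of_day, and the last entry of the day wraps around to cover the early hours):

#include <vector>

// One (time_of_day, speed) entry for a single edge and day class,
// following the example above.
struct speed_entry
{
    double time_of_day;   // e.g. hours or seconds from midnight
    double speed;         // e.g. mph
};

// Entries must be sorted by time_of_day. Each entry is taken to be valid
// until the next one; the last entry wraps around to the start of the day.
static double speed_at(const std::vector<speed_entry> &entries, double time_of_day)
{
    if (entries.empty())
        return -1.0;                       // no data
    double result = entries.back().speed;  // value carried over from "yesterday"
    for (std::size_t i = 0; i < entries.size(); ++i) {
        if (entries[i].time_of_day <= time_of_day)
            result = entries[i].speed;
        else
            break;
    }
    return result;
}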

weightMap Abstraction

I think the design would be more modular if we keep the weightMap independent of the underlying time conventions, since, as Daniel pointed out, this algorithm can also be used for non-road networks.

So, whatever the underlying time conventions, we can assume that in the end the query should return pairs like:

< <edgeId, offset_time> , travel_time/speed >

We will assume that query_start_time is 0, i.e. we offset everything by query_start_time.
The offset_time will be as follows:

As in the above example,
suppose start_time is 02:00, day_of_week is -1, and delta is 10 hours.

Then, edge entries for edgeId 1 will be:
< <1, 05:00 > , 30 mph>
< <1, 08:01 > , 55 mph>
< <1, 14:31 > , 30 mph>

Basically, the offset_time will abstract away any internal details of weekdays, change of days etc., and will just contain the offset from start_time.
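A tiny sketch of that offsetting (working in hours here only to mirror the example above; the real unit is still to be decided):

// Convert an entry's absolute time_of_day into an offset from the query start.
// Day handling (which day_of_week rows to pick) stays outside this function,
// as discussed above; days_ahead is 0 for the query day, 1 for the next day, etc.
static double to_offset(double time_of_day, double query_start_time, int days_ahead)
{
    const double hours_per_day = 24.0;
    return time_of_day + days_ahead * hours_per_day - query_start_time;
}

// With start_time 02:00, the weekday entry at 07:00 becomes
//   to_offset(7.0, 2.0, 0) == 5.0, i.e. the < <1, 05:00>, 30 mph > entry above.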

This will greatly simplify the core time-dependent Dijkstra implementation.

Thoughts?



Regards,
-Jay Mahadeokar

On 5/18/2011 8:08 AM, Jay Mahadeokar wrote:

Hi,

I think what we need is a function on following lines:

Given a query_start_time, it should query the TimeDepCosts table, and
return the set of entries which have their start_time within
query_start_time + delta. This delta might be x hours, depending on the
upper bound of the expected journey time.

We need following things handled in our model:

If query_start_time is 11PM Sunday and delta is 10 hours, we will need
to query entries with start time between 11PM Sunday and 9AM Monday. So,
basically the function needs to be periodic in terms of time_of_day and
day_of_week.

As Steve suggested, we can maintain conventions for day_of_week like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1-7 Mon - Sun.

If we just assume entries for -3,-2,-1,0 and ignore each entry for Sun -
Sat, that would reduce the space required assuming that the entries for
weekdays is same. If times for different weekdays is different then we
would have separate entries for each day.

So, the query should properly examine the query_day_of_week and select
the proper entries. Ex: For above query, if it is sunday, then after
12AM, next entries will be queried with time_of_day as -1, but if it was
Saturday, next entries will be queried with time_of_day as -2.

We can have a boolean parameter like *isHoliday* in query itself, which
will tell if a given day (may be weekday) is holiday or not.Then again
the query will have to be modified accordingly (query for -3). This will
take care of query for local holiday etc and we do not have to worry
about calenders. The user will have to worry about that.. :slight_smile:

This (isHoliday) might work for a single day, but will not work if the query crosses into the next day(s). Holidays are an input convenience to describe recurring events. So, say we have a convenient way to input that data and store it into tables as we have described; then the query for costs would need to decide whether a given day is a holiday or not, and then select the correct entries to return based on that. For ease of implementation, we could just start with a stub function that returns false, and later implement a table lookup based on the date or day of year that determines isHoliday=t/f for any given start_time+offset.

Then when querying schedules, for example, we would select holiday schedules if they exist and it's a holiday, otherwise search the regular tables.
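The stub Steve describes could literally start out as small as this (a placeholder; the eventual table lookup would replace the body):

// Holiday check stub: always "not a holiday" for now. Later this would be a
// lookup against a holidays table keyed by date or day of year.
static bool is_holiday(int /*year*/, int /*day_of_year*/)
{
    return false;
}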

There can be time_from and time_to entries as well. But, if we just have
entries like:

  day_of_week: -1, time_of_day: 00:01, 55mph
  day_of_week: -1, time_of_day: 07:00, 30mph
  day_of_week: -1, time_of_day: 10:01, 55mph
  day_of_week: -1, time_of_day: 16:31, 30mph

we can assume that the entry with time_of_day 00:01 is valid upto
07:00. And if query_start_time is 02:00 and delta is 10 hours, we can
query entries which have time_of_day < query_start_time + delta (taking
care of change of day).

Is this assumption reasonable?

This sounds reasonable if the response is in units of time (offset) from query_start_time. Assuming we use a resolution of seconds, the response would be in seconds from the start time.

*weightMap Abstraction*

I think the design would be more modular if we keep the weightMap
independent of the underlying time conventions. Since as Daniel pointed
out, this algorithm can also be used for non-road networks.

So, whatever the underlying time conventions, we can assume that in the
end the query should return pairs like:

< <edgeId, offset_time> , travel_time/speed >

We will assume that query_start_time is 0, i.e we offset every thing by
query_start_time.
The offset_time will be as follows:

As in above example,
If start_tme is 02:00 and day_of_week is -1, and delta as 10 hours.

Then, edge entries for edgeId 1 will be:
< <1, 05:00 > , 30 mph>
< <1, 08:01 > , 55 mph>
< <1, 14:31 > , 30 mph>

Basically, the offset_time will abstract out any internal details of
weekdays, change of days etc and it will just contain the offset from
start_time.

I suggested seconds above, but if someone is modeling events in, say, glacier flow, geological time or the big bang, they might need other units of time. I would say that because we are talking about time-based schedules and need to deal with days, hours and minutes, we should probably not worry about these other timeframes; since the solver will be offset based, it will work with any units, and then only the wrappers for extracting the costs from the schedules would need to change to deal with other timeframes. So let's not get hung up on this; I think this is a sound plan.

-Steve


Hi,

As initially discussed, I was trying to reuse the Boost Dijkstra code to write the core time-dependent Dijkstra algorithm.
Since Boost makes extensive use of generic programming, and I have no prior experience with such deep generic programming concepts, I was wondering if we really need all the features that Boost provides.

Another option would be to write the time-dependent Dijkstra on our own.

This would be lightweight code without extensive use of generic programming like Boost's.
So, I was thinking of using the following:

1. boost::adjacency_list - for storing the graph.
2. std::map - for storing the distanceMap and predecessorMap.

3. For weightMap:

#include <utility>
#include <vector>

// One (edge, start_time) -> travel_time entry of the weight map.
typedef struct weight_map_element
{
    std::pair<int, double> edge;   // (edge id, time the entry applies from)
    double travel_time;
}
weight_map_element_t;

class weight_map
{
    std::vector<weight_map_element_t> weight_map_set;

public:
    void insert(weight_map_element_t element);
    // Travel time on edge_id when departing at start_time.
    double get_travel_time(int edge_id, double start_time);
};

4. std::priority_queue for the priority queue that will be used for the Dijkstra search.

Will this be sufficient for our implementation? If yes, I will come up with the detailed internal function prototypes soon.
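For reference, a rough sketch of how these pieces could fit together in the main loop (illustrative only: path reconstruction, reverse costs and error handling are left out, and get_travel_time() stands in for the weight_map class above):

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Stand-in for weight_map::get_travel_time(): travel time on edge_id when
// departing at start_time, or -1 if no data is available.
static double get_travel_time(int /*edge_id*/, double /*start_time*/)
{
    return 1.0;
}

// adj[u] lists the outgoing edges of u as (target vertex, edge id) pairs.
// Returns the earliest arrival time (offset from the query start) per vertex.
static std::vector<double> time_dependent_dijkstra(
    const std::vector<std::vector<std::pair<int, int> > > &adj, int source)
{
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), inf);
    dist[source] = 0.0;

    typedef std::pair<double, int> queue_entry;  // (arrival time, vertex)
    std::priority_queue<queue_entry, std::vector<queue_entry>,
                        std::greater<queue_entry> > pq;
    pq.push(std::make_pair(0.0, source));

    while (!pq.empty()) {
        double d = pq.top().first;
        int u = pq.top().second;
        pq.pop();
        if (d > dist[u])
            continue;  // stale queue entry, a better arrival at u is known
        for (std::size_t i = 0; i < adj[u].size(); ++i) {
            int v = adj[u][i].first;
            int edge_id = adj[u][i].second;
            // The only real difference from static Dijkstra: the weight is
            // looked up at the time we depart from u.
            double w = get_travel_time(edge_id, d);
            if (w >= 0.0 && d + w < dist[v]) {
                dist[v] = d + w;
                pq.push(std::make_pair(dist[v], v));
            }
        }
    }
    return dist;
}

Again, this is only a sketch to anchor the discussion of the prototypes.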

On Wed, May 18, 2011 at 8:09 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

On 5/18/2011 8:08 AM, Jay Mahadeokar wrote:

Hi,

I think what we need is a function on following lines:

Given a query_start_time, it should query the TimeDepCosts table, and
return the set of entries which have their start_time within
query_start_time + delta. This delta might be x hours, depending on the
upper bound of the expected journey time.

We need following things handled in our model:

If query_start_time is 11PM Sunday and delta is 10 hours, we will need
to query entries with start time between 11PM Sunday and 9AM Monday. So,
basically the function needs to be periodic in terms of time_of_day and
day_of_week.

As Steve suggested, we can maintain conventions for day_of_week like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1-7 Mon - Sun.

If we just assume entries for -3,-2,-1,0 and ignore each entry for Sun -
Sat, that would reduce the space required assuming that the entries for
weekdays is same. If times for different weekdays is different then we
would have separate entries for each day.

So, the query should properly examine the query_day_of_week and select
the proper entries. Ex: For above query, if it is sunday, then after
12AM, next entries will be queried with time_of_day as -1, but if it was
Saturday, next entries will be queried with time_of_day as -2.

We can have a boolean parameter like isHoliday in query itself, which
will tell if a given day (may be weekday) is holiday or not.Then again
the query will have to be modified accordingly (query for -3). This will
take care of query for local holiday etc and we do not have to worry
about calenders. The user will have to worry about that… :slight_smile:

This (isHoliday) might work for a single day, but will not work if the query crosses into the next day(s). Holidays are an input convenience to describe recurring events. So say we have a convenient way to input that data and store it into tables as we have described, then the query for costs would need to decide is a given day is a holiday or not and then select the correct entries to return based on that. For ease of implementation, we could just start with a stub function the returns false and later implement a table lookup based on the date or day of year that determines if isHoliday=t/f for any given start_time+offset.

Then when querying schedules, for example, we would select holiday schedules if they exist and it's a holiday, otherwise search the regular tables.
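For what it's worth, the stub described here could start out as simply as this (the signature is invented for illustration; the body would later become a lookup against a holiday table keyed on the date derived from start_time + offset):

// Placeholder as suggested: report "not a holiday" for now; replace the body
// with a table lookup on date / day-of-year once the holiday table exists.
bool is_holiday(double query_start_time, double offset_seconds)
{
    (void)query_start_time;   // unused until the real lookup is implemented
    (void)offset_seconds;
    return false;
}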

There can be time_from and time_to entries as well. But, if we just have
entries like:

day_of_week: -1, time_of_day: 00:01, 55mph
day_of_week: -1, time_of_day: 07:00, 30mph
day_of_week: -1, time_of_day: 10:01, 55mph
day_of_week: -1, time_of_day: 16:31, 30mph

we can assume that the entry with time_of_day 00:01 is valid up to
07:00. And if query_start_time is 02:00 and delta is 10 hours, we can
query entries which have time_of_day < query_start_time + delta (taking
care of the change of day).

Is this assumption reasonable?

This sounds reasonable if the response is in units of time (offset) from query_start_time. Assuming we use a resolution of seconds, then the response would be in seconds from start time.

weightMap Abstraction

I think the design would be more modular if we keep the weightMap
independent of the underlying time conventions since, as Daniel pointed
out, this algorithm can also be used for non-road networks.

So, whatever the underlying time conventions, we can assume that in the
end the query should return pairs like:

< <edgeId, offset_time> , travel_time/speed >

We will assume that query_start_time is 0, i.e. we offset everything by
query_start_time.
The offset_time will be as follows:

As in the above example,
if start_time is 02:00, day_of_week is -1, and delta is 10 hours.

Then, edge entries for edgeId 1 will be:
< <1, 05:00 > , 30 mph>
< <1, 08:01 > , 55 mph>
< <1, 14:31 > , 30 mph>

Basically, the offset_time will abstract out any internal details of
weekdays, change of days etc and it will just contain the offset from
start_time.

I suggested seconds above, but if someone is modeling events in, say, glacier flow, geological times or the big bang, they might need other units of time. I would say that because we are talking about time-based schedules and need to deal with days, hours and minutes, we should probably not worry about these other timeframes: since the solver will be offset based, it will work with any units, and then only the wrappers for extracting the costs from the schedules would need to change to deal with other timeframes. So let's not get hung up on this, I think this is a sound plan.

-Steve

This will greatly simplify the core time-dependent Dijkstra implementation.

Thoughts?
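As a small sketch of that offsetting step (the helper name and the wrap-to-next-day rule are assumptions; times are handled in seconds as suggested above):

// Hypothetical conversion: express a schedule entry's time_of_day as a seconds
// offset from query_start_time, wrapping to the next day when it lies earlier.
double to_offset_seconds(double entry_time_of_day_s, double query_start_time_s)
{
    const double DAY = 24.0 * 3600.0;
    double offset = entry_time_of_day_s - query_start_time_s;
    if (offset < 0.0)
        offset += DAY;        // entry applies "tomorrow" relative to the start
    return offset;
}

// From the example above, with query_start_time 02:00:
//   to_offset_seconds(7 * 3600.0, 2 * 3600.0) == 18000 seconds, i.e. the 05:00
//   offset in the < <1, 05:00>, 30 mph> entry.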

On Wed, May 18, 2011 at 10:46 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

Hi Daniel,

These are good points.

I agree that turn restrictions should be thought about but only
implemented if there is time. I think we should take the discussion
of turn restriction into a separate thread. I’m interested in what
you think is a limitation that can’t be modeled in Dijkstra or A*.

Regarding the time modeling, my point was just that we needed more
thought there and a richer model developed. The crontab model is a
good idea. I'm not opposed to modeling monthly or yearly events, but
they rarely exist other than the holidays, which I tried to capture. I
suppose you could model something like the Boston Marathon with all
its road closures and restrictions, but that seems to be a lot of
work for a one-day event, though I can see city governments deploying
the resources to model something like this.

Regarding timezones: Times need to be entered with zones for models
that cross zones but all the data should be converted to UTC
internally so it runs in a consistent time model. Timezones are for
input and output, but the solver should be ignorant of them, in my
opinion.

I have carried my GPS from the Eastern to the Central time zone, and it
knew the local time when I powered it up. So my guess is that it
would auto-correct when crossing the time zone.

-Steve

On 5/17/2011 10:42 PM, Daniel Kastl wrote:

Hi Jay and Steve,

This looks really nice, but let me give some comments regarding how to
model time, because this is really tricky in my opinion, especially when
thinking about an abstract network that isn't a road network.

Would it be possible to support turn restrictions in the static
Dijkstra also? I'm thinking just use all the same structures but
ignore the time components to keep things simple. So if the
turn restrictions table is present we use it, otherwise assume no
restrictions. If doing static shortest path with turn restrictions
then ignore the time components, otherwise we use them. And it is up
to the user to make sure the turn restriction table is valid for the
analysis being requested.

Currently in pgRouting, Dijkstra and A* don't support turn restriction
modelling. What I actually like about Shooting Star is that it routes from
edge to edge instead of node to node. So it allows modelling relations
between edges rather than nodes, which I think is closer to how
humans would think of this.
Dijkstra can only visit a node once (as Shooting Star only visits
an edge once). Well, there can be turn restriction cases where a node is
passed twice, which can't be handled correctly with Dijkstra as far as I
know.

In the beginning I wouldn't think about the turn restrictions too much.
Let's take it as an extra when we see we still have enough time.
Of course, if you have a good idea to implement it all at once with only
a little extra work, then that's fine.

For time definitions in your tables I think you need to probably
define some common structure for handling a time entry.

Another problem might be time zones. A plain day field + time field might
not allow driving from one time zone to another (or just
think about a flight network). I never went from one timezone to another
by car or train, but Steve and Anton, you might have this experience.
How does car navigation software handle this when you cross the timezone
border? ;-)

So taking "timestamp with timezone" for those fields might be a good
idea to be able to support such functionality.

For example, time_from and time_to might need to be defined as a
structure that includes day_of_week. day_of_week might take
values like:

-3 - holidays
-2 - weekend days
-1 - week days
0 - all days
1…7 - Sun,…,Sat

I think the problem here is how to model recurring time dependent costs,
right?
If you think about weekdays, then you can't, for example, model monthly or
yearly events.

I'm not really sure this is useful information here, but I once saw
how the "iCal" format models recurring calendar events:
http://ext.ensible.com/deploy/dev/examples/calendar/recurrence-widget.html

Maybe we can learn something from there. The example would probably need
to be extended with some duration. But looking at calendar formats
might be a good idea to model holidays etc.

Or another (stupid) example would be cron job syntax. Something I always
need to google for, as I can't remember it ;-)

All this time dependent stuff, events and schedules is also an issue in
the DARP solver.
And it is probably also important for the multi-modal routing, so if we
can find some clever way to model this and can share it between
algorithms, that would be great.

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de




Regards,
-Jay Mahadeokar




Regards,
-Jay Mahadeokar

I'm not a C++ programmer, but from what I have read on the Boost list, it seems that the path to writing generic templates is to write the code first in regular C++ so it works, and then refactor it to be more generic. So step 1 is to do as you proposed.

Also, since this project is to implement a "time dependent dijkstra algorithm" in pgRouting, the generic templates seem to be out of scope. It would be fine to do if you had the time and skill, but I think your approach makes sense: use the tools that make your job easier and allow you to achieve success.
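To illustrate that path with a toy example (nothing project-specific, just the concrete-first / generic-later idea, using the proposed weight_map type from above):

// Step 1: plain C++, hard-wired to the proposed weight_map type.
double edge_cost(weight_map& wm, int edge_id, double start_time)
{
    return wm.get_travel_time(edge_id, start_time);
}

// Possible later refactoring: identical logic, generic over the map type.
template <class WeightMap>
double edge_cost_generic(WeightMap& wm, int edge_id, double start_time)
{
    return wm.get_travel_time(edge_id, start_time);
}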

Any contrary opinions to this?

-Steve


On Mon, May 23, 2011 at 8:03 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

I’m not a C++ programmer, but from what I have read on the boost list, it seems that the path to writing generic templates is to write the code first in regular C++ so it works and then refactor it to be more generic. So step 1 is to do as you proposed.

Also since this project is to implement a “time dependent dijkstra algorithm” in pgRouting, the generic templates seems to be out of scope. It would be fine to do if you had the time and skill, but I think your approach makes sense, use the tools that make your job easier and allow you to achieve success.

Any contrary opinions to this?

-Steve

On 5/23/2011 10:20 AM, Jay Mahadeokar wrote:

Hi,

As initially discussed, I was trying to reuse the boost’s dijkstra code
to write the core time dependent dijkstra algorithm.
Since Boost makes extensive use of generic programming, and I have no
prior experience with such deep generic programming concepts, I was
wondering if we really need all these features that boost provides.

Another option would be to write the time-dependent dijkstra on our own.

This will be a light-weight code without extensive use of generic
programming like boost.
So, i was thinking of using following:

*1. *Boost::AdjecencyList - for storing graph.
*2. *Std::Map - for storing distanceMap, predecessorMap.

*3. *For weightMap:

typedef struct weight_map_element
{
pair<int,double> edge;
double travel_time;
}
weight_map_element_t;

class weight_map
{

vector<weight_map_element_t> weight_map_set;

public:
void insert(weight_map_element_t element);
double get_travel_time(int edge_id, double start_time);

};

*4. *std::priority_queue for the priority queue that will be used for
dijkstra search.

Well,
The std::priority_queue does not support the decrease_key operation, which will be needed for the Dijkstra search. I don't think the STL provides a heap data structure with decrease_key, delete_min and insert operations.

The make_heap and sort_heap algorithms would be too costly to maintain for Dijkstra.

Do you know of an open-source library that provides a heap data structure with the above functionality?

I am thinking of implementing a binary heap myself to support the required functionality.
Thoughts?
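One common workaround, offered here only as a sketch and not as a decision, is to avoid decrease_key altogether: push a new (time, node) pair whenever a node's arrival time improves, and discard stale pairs when they are popped.

#include <functional>
#include <map>
#include <queue>
#include <utility>
#include <vector>

typedef std::pair<double, int> pq_entry;   // (arrival_time, node_id)
typedef std::priority_queue<pq_entry, std::vector<pq_entry>,
                            std::greater<pq_entry> > min_queue;

// "Lazy deletion" relaxation: instead of decrease_key, push a fresh entry and
// let outdated ones be skipped later.
void relax(min_queue& pq, std::map<int, double>& dist, int node, double new_time)
{
    std::map<int, double>::iterator it = dist.find(node);
    if (it == dist.end() || new_time < it->second)
    {
        dist[node] = new_time;
        pq.push(std::make_pair(new_time, node));
    }
}

// In the main loop, stale entries are detected and ignored:
//   pq_entry e = pq.top(); pq.pop();
//   if (e.first > dist[e.second]) continue;   // superseded by a better entry

The queue then holds some duplicate entries at the cost of extra memory, but no decrease_key is needed; a hand-written binary heap with decrease_key would of course work just as well.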

Will this be sufficient for our implementation? If yes, I will come up
with the detailed internal function prototypes soon.



Regards,
-Jay Mahadeokar

On Wed, May 25, 2011 at 4:36 AM, Jay Mahadeokar <jai.mahadeokar@gmail.com> wrote:


Well,
The std::priority_queue does not support the decrease_key operation, which will be needed for the Dijkstra search. I don't think the STL provides a heap data structure with decrease_key, delete_min and insert operations.

The make_heap and sort_heap algorithms would be too costly to maintain for Dijkstra.

Forgot to mention that make_heap() and sort_heap() are provided by the STL algorithms. But as mentioned, it would be very costly and inefficient to use sort_heap() each time.

Do you know of an open-source library that provides a heap data structure with the above functionality?

I am thinking of implementing a binary heap myself to support the required functionality.
Thoughts?

Regards,
-Jay Mahadeokar



On 5/24/2011 7:06 PM, Jay Mahadeokar wrote:


Well,
The std::priority_queue does not support the decrease_key operation, which
will be needed for the Dijkstra search. I don't think the STL provides a heap
data structure with decrease_key, delete_min and insert operations.

The make_heap and sort_heap algorithms would be too costly to maintain for Dijkstra.

Do you know of an open-source library that provides a heap data structure
with the above functionality?

I am thinking of implementing a binary heap myself to support the required
functionality.
Thoughts?

Jay,

What about looking at the code in Boost Graph that does this? There is probably some generic template code, but maybe you can just reuse the algorithms and recode your own implementation specific to this problem.

Ashraf,

You implemented your own Dijkstra algorithm in OpenGraphRouter; can you point Jay to your code there so that he can assess reusing some of that for this project? I think the following might do the trick:

http://sourceforge.net/projects/opengraphrouter/

Look here at ShortestPathDikjstra.cpp; it looks like Ashraf is using a Boost mutable queue:
http://opengraphrouter.svn.sourceforge.net/viewvc/opengraphrouter/tags/Newstructure/tags/gsoc2009/src/?pathrev=111
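For reference, whichever queue ends up being reused or written, the operations the time-dependent Dijkstra needs from it are roughly these (a sketch of the required interface only, not Boost's or OpenGraphRouter's actual API):

// Operations the search needs; Boost's mutable queue, OpenGraphRouter's code,
// or a hand-written binary heap would each have to provide the equivalent.
class dijkstra_queue
{
public:
    void push(int node, double cost);              // insert
    int  top() const;                              // node with the smallest cost
    void pop();                                    // delete_min
    void decrease_key(int node, double new_cost);  // lower an existing cost
    bool empty() const;
};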

-Steve

        Will this be sufficient for our implementation? If yes, I will
        come up
        with the detailed internal function prototypes soon.

        On Wed, May 18, 2011 at 8:09 PM, Stephen Woodbridge
        <woodbri@swoodbridge.com <mailto:woodbri@swoodbridge.com>
        <mailto:woodbri@swoodbridge.com
        <mailto:woodbri@swoodbridge.com>>> wrote:

            On 5/18/2011 8:08 AM, Jay Mahadeokar wrote:

                Hi,

                I think what we need is a function on following lines:

                Given a query_start_time, it should query the TimeDepCosts
                table, and
                return the set of entries which have their start_time within
                query_start_time + delta. This delta might be x hours,
        depending
                on the
                upper bound of the expected journey time.

                We need following things handled in our model:

                If query_start_time is 11PM Sunday and delta is 10 hours, we
                will need
                to query entries with start time between 11PM Sunday and 9AM
                Monday. So,
                basically the function needs to be periodic in terms of
                time_of_day and
                day_of_week.

                As Steve suggested, we can maintain conventions for day_of_week
                like (a small helper sketch follows this list):

                -3 - holidays
                -2 - weekend days
                -1 - week days
                0 - all days
                1-7 Mon - Sun.
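
As an illustration only, a hypothetical helper (not project code) showing how such codes could be derived for a concrete day, using the 1 = Mon .. 7 = Sun numbering from the list above and the isHoliday idea discussed below:

// Hypothetical helper: map a concrete weekday to the generic day_of_week
// codes listed above (1 = Mon .. 7 = Sun). Specific-day entries, if present,
// would still override these generic codes.
int day_code(int weekday, bool is_holiday)
{
    if (is_holiday)
        return -3;                      // holidays
    if (weekday == 6 || weekday == 7)
        return -2;                      // weekend days (Sat, Sun)
    return -1;                          // week days
}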

                If we just assume entries for -3, -2, -1, 0 and ignore each entry
                for Sun - Sat, that would reduce the space required, assuming that
                the entries for all weekdays are the same. If the times for
                different weekdays differ, then we would have separate entries for
                each day.

                So, the query should properly examine the query_day_of_week and
                select the proper entries. E.g. for the above query, if it is
                Sunday, then after 12AM the next entries will be queried with
                day_of_week as -1, but if it was Saturday, the next entries will
                be queried with day_of_week as -2.

                We can have a boolean parameter like *isHoliday* in the query
                itself, which will tell if a given day (maybe a weekday) is a
                holiday or not. Then again the query will have to be modified
                accordingly (query for -3). This will take care of queries for
                local holidays etc. and we do not have to worry about calendars.
                The user will have to worry about that.. :-)

            This (isHoliday) might work for a single day, but will not work if
            the query crosses into the next day(s). Holidays are an input
            convenience to describe recurring events. So say we have a
            convenient way to input that data and store it into tables as we
            have described; then the query for costs would need to decide if a
            given day is a holiday or not and then select the correct entries
            to return based on that. For ease of implementation, we could just
            start with a stub function that returns false and later implement a
            table lookup based on the date or day of year that determines
            isHoliday=t/f for any given start_time+offset.

            Then when querying schedules, for example, we would select holiday
            schedules if they exist and it's a holiday, otherwise search the
            regular tables.

                There can be time_from and time_to entries as well. But,
        if we
                just have
                entries like:

                  day_of_week: -1, time_of_day: 00:01, 55mph
                  day_of_week: -1, time_of_day: 07:00, 30mph
                  day_of_week: -1, time_of_day: 10:01, 55mph
                  day_of_week: -1, time_of_day: 16:31, 30mph

                we can assume that the entry with time_of_day 00:01 is valid up to
                07:00. And if query_start_time is 02:00 and delta is 10
        hours,
                we can
                query entries which have time_of_day < query_start_time
        + delta
                (taking
                care of change of day).

                Is this assumption reasonable?

            This sounds reasonable if the response is in units of time
        (offset)
            from query_start_time. Assuming we use a resolution of
        seconds, then
            the response would be in seconds from start time.

                *weightMap Abstraction*

                I think the design would be more modular if we keep the
        weightMap
                independent of the underlying time conventions. Since as
        Daniel
                pointed
                out, this algorithm can also be used for non-road networks.

                So, whatever the underlying time conventions, we can
        assume that
                in the
                end the query should return pairs like:

        < <edgeId, offset_time> , travel_time/speed >

                We will assume that query_start_time is 0, i.e. we offset
                everything by query_start_time.
                The offset_time will be as follows:

                As in the above example,
                if start_time is 02:00, day_of_week is -1, and delta is 10 hours.

                Then, edge entries for edgeId 1 will be:
        < <1, 05:00 > , 30 mph>
        < <1, 08:01 > , 55 mph>
        < <1, 14:31 > , 30 mph>

                Basically, the offset_time will abstract out any
        internal details of
                weekdays, change of days etc and it will just contain
        the offset
                from
                start_time.

            I suggested seconds above, but if someone is modeling events
        in say
            glacier flow, geological times or the big bang, they might need
            other units of time. I would say that because we are talking about
            time based schedules and need to deal with days, hours and minutes,
            we should probably not worry about these other timeframes. Since
            the solver will be offset based it will work with any units, and
            then only the wrappers for extracting the costs from the schedules
            would need to change to deal with other timeframes. So let's not
            get hung up on this; I think this is a sound plan.

            -Steve

                This will greatly simplify the core time dependent dijkstra
                implementation.

                Thoughts?

                On Wed, May 18, 2011 at 10:46 AM, Stephen Woodbridge
                <woodbri@swoodbridge.com> wrote:

                    Hi Daniel,

                    These are good points.

                    I agree that turn restrictions should be thought
        about but only
                    implemented if there is time. I think we should take the
                discussion
                    of turn restriction into a separate thread. I'm
        interested
                in what
                    you think is a limitation that can't be modeled in
        Dijkstra
                or A*.

                    Regarding the time modeling, my point was just that we
                needed more
                    thought there and a richer model developed. The crontab
                model is a
                    good idea. I'm not opposed to modeling monthly or yearly
                events, but
                    they rarely exist other than holidays which I tried to
                capture. I
                    suppose you could model something like the Boston
        Marathon
                and model
                    all the road closures and restrictions, but it seems
        to be a
                lot of
                    work for a one-day event, but I can see city governments
                    deploying the resources to model something like this.

                    Regarding timezones: Times need to be entered with
        zones for
                models
                    that cross zones but all the data should be
        converted to UTC
                    internally so it runs in a consistent time model.
        Timezones
                are for
                    input and output, but the solver should be ignorant
        of them,
                in my
                    opinion.

                    I have carried my GPS from the Eastern to the Central time
                    zone, and it knew the local time when I powered it up. So my
                    guess is that it would auto-correct when crossing the time
                    zone.

                    -Steve

                    On 5/17/2011 10:42 PM, Daniel Kastl wrote:

                        Hi Jay and Steve,

                        This looks really nice, but let me give some
        comments
                regarding
                        how to
                        model time, because this is really tricky in my
        opinion.
                        Especially when
                        thinking about an abstract network that isn't a road
                network.

                            Would it be possible to support turn
        restrictions in
                the static
                            Dijkstra also? I'm thinking just use all the same
                            structures but ignore the time components to keep
                            things simple. So if the turn restrictions table is
                            present we use it, otherwise assume no
                            restrictions. If doing static shortest path
        with turn
                        restrictions
                            then ignore the time components otherwise we use
                them. And
                        it is up
                            to the user to make sure the turn
        restriction table
                is valid
                        for the
                            analysis being requested.

                        Currently in pgRouting Dijkstra and A* don't support turn
                        restrictions modelling. What I actually like about Shooting
                        Star is that it routes from edge to edge instead of node to
                        node. So it allows modelling relations between edges rather
                        than nodes, which I think is closer to how humans would
                        think of this.
                        Dijkstra can only visit each node once (just as Shooting
                        Star only visits each edge once). Well, there can be turn
                        restriction cases where a node is passed twice, and these
                        can't be handled correctly with Dijkstra as far as I know.

                        In the beginning I wouldn't think about the turn
                restrictions
                        too much.
                        Let's take it as an extra when we see we still have
                enough time.
                        Of course if you have a good idea to implement
        it all at
                once
                        with only
                        little extra work, then that's fine.

                            For time definitions in your tables I think
        you need
                to probably
                            define some common structure for handling a
        time entry.

                        Another problem might be time zones. A plain day field +
                        time field might not be able to handle driving from one
                        time zone to another (or just think about a flight
                        network). I never went from one
                timezone to
                        another
                        by car or train, but Steve and Anton, you might
        have this
                        experience.
                        How does car navigation software handle this
        when you
                cross the
                        timezone
                        border? ;-)

                        So taking "timestamp with timezone" for those fields
                might be a good
                        idea to be able to support such a functionality.

                            For example time_from and time_to might need
        to be
                defined as a
                            structure that includes day_of_week. day_of_week
                            might take values like:

                            -3 - holidays
                            -2 - weekend days
                            -1 - week days
                            0 - all days
                            1..7 - Sun,...,Sat

                        I think the problem here is how to model
        recurring time
                        dependent costs,
                        right?
                        If you think about weekdays, then you can't for
        example
                model
                        monthly or
                        yearly events.

                        I'm not really sure this is useful information here, but
                        I once saw how the "iCal" format models recurring
                        calendar events:
        http://ext.ensible.com/deploy/dev/examples/calendar/recurrence-widget.html

                        Maybe we can learn something from there. The example
                        would probably need to be extended with some duration.
                        But looking at calendar formats might be a good idea to
                        model holidays etc.

                        Or another (stupid) example would be cron job syntax.
                        Something I always need to google for as I can't
                        remember it ;-)

                        All this time dependent stuff, events and schedules is
                        also an issue in the DARP solver.
                        And it probably is important also for multi-modal
                        routing, so if we can find some clever way to model this
                        and can share it between algorithms, that would be great.

                        Daniel

On Wed, May 25, 2011 at 8:47 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

On 5/24/2011 7:06 PM, Jay Mahadeokar wrote:

On Mon, May 23, 2011 at 8:03 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:
I’m not a C++ programmer, but from what I have read on the boost
list, it seems that the path to writing generic templates is to
write the code first in regular C++ so it works and then refactor it
to be more generic. So step 1 is to do as you proposed.

Also, since this project is to implement a "time dependent dijkstra
algorithm" in pgRouting, the generic templates seem to be out of
scope. It would be fine to do if you had the time and skill, but I
think your approach makes sense: use the tools that make your job
easier and allow you to achieve success.

Any contrary opinions to this?

-Steve

Thanks Steve for the suggestion. I actually implemented a BinaryHeap class for our exact requirements, and it seems to serve the purpose.

The basic functions in class are:

class binary_heap
{
    vector<Vertex> heap;

    void heapify(int index);

public:

    void insert(Vertex v);          // inserts v into the heap and rebalances it so that the heap property is maintained
    double decrease_key(int index, double delta); // decreases the key of the vertex with id = index by delta
    Vertex delete_min();            // deletes the min-key vertex and rebalances the heap

};

Note that in a binary heap all of the above operations take O(log n) time.
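
For illustration, a minimal sketch of such a heap (not the actual class above): it keys vertices on their tentative arrival time and keeps a vertex-to-position index so decrease_key can locate a vertex in O(1) before sifting it up in O(log n). Unlike the declaration above, this variant takes the new key directly rather than a delta:

#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct heap_node { int vertex; double key; };

class binary_heap_sketch
{
    std::vector<heap_node> heap;        // 0-based array representation
    std::map<int, std::size_t> pos;     // vertex id -> index in heap

    void swap_nodes(std::size_t a, std::size_t b)
    {
        std::swap(heap[a], heap[b]);
        pos[heap[a].vertex] = a;
        pos[heap[b].vertex] = b;
    }

    void sift_up(std::size_t i)
    {
        while (i > 0 && heap[(i - 1) / 2].key > heap[i].key) {
            swap_nodes(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    void sift_down(std::size_t i)
    {
        for (;;) {
            std::size_t l = 2 * i + 1, r = 2 * i + 2, smallest = i;
            if (l < heap.size() && heap[l].key < heap[smallest].key) smallest = l;
            if (r < heap.size() && heap[r].key < heap[smallest].key) smallest = r;
            if (smallest == i) break;
            swap_nodes(i, smallest);
            i = smallest;
        }
    }

public:
    bool empty() const { return heap.empty(); }

    void insert(int vertex, double key)
    {
        heap_node n = { vertex, key };
        heap.push_back(n);
        pos[vertex] = heap.size() - 1;
        sift_up(heap.size() - 1);
    }

    // Lower the key of a vertex that is already in the heap; the heap
    // property is restored by sifting the node up towards the root.
    void decrease_key(int vertex, double new_key)
    {
        std::size_t i = pos[vertex];
        if (new_key < heap[i].key) {
            heap[i].key = new_key;
            sift_up(i);
        }
    }

    // Remove and return the vertex with the smallest key (heap must be non-empty).
    std::pair<int, double> delete_min()
    {
        heap_node top = heap.front();
        swap_nodes(0, heap.size() - 1);
        heap.pop_back();
        pos.erase(top.vertex);
        if (!heap.empty()) sift_down(0);
        return std::make_pair(top.vertex, top.key);
    }
};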

I will complete the initial draft of the algorithm soon and update it on GitHub in the next few days.
We can see what else is needed then.

Regards,
-Jay Mahadeokar

Hi,

I have coded an initial draft of the core TDSP algorithm. I have implemented a binary heap for doing delete_min() and decrease_key() to guide the dijkstra search.

Also, I used several data structures:
weight_map - for storing the time dependent edge traversal times
edge_wrapper - for mapping edge ids to their source and target vertices
For the graph I have used a Boost adjacency list, since it allows fast lookup of the out-edges of a node during the dijkstra search.
The distance map and predecessor map are std::map instances.
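
For clarity, the step that distinguishes the time dependent search from static dijkstra is the relaxation: the cost of an edge depends on the arrival time at its source vertex. Below is a self-contained sketch of that step; the weight_map and binary_heap declarations are stand-ins for the interfaces discussed in this thread, and the FIFO property (leaving later never means arriving earlier) is assumed:

#include <map>

// Stand-ins for the interfaces discussed earlier in this thread; only the
// members used below are declared.
class weight_map {
public:
    // travel time of edge_id when entering it at start_time (< 0 if unknown)
    double get_travel_time(int edge_id, double start_time);
};

class binary_heap {
public:
    void insert(int vertex, double key);
    void decrease_key(int vertex, double new_key);
};

// Relax edge (u -> v): the edge cost depends on *when* we leave u, i.e. on
// the earliest arrival time found for u so far (assumed already set).
void relax(int u, int v, int edge_id,
           std::map<int, double> &arrival,     // earliest known arrival times
           std::map<int, int> &predecessor,    // predecessor map
           weight_map &weights,                // time dependent edge costs
           binary_heap &queue)
{
    double depart = arrival[u];
    double cost = weights.get_travel_time(edge_id, depart);
    if (cost < 0)
        return;                                // no entry for this departure time

    double new_arrival = depart + cost;
    std::map<int, double>::iterator it = arrival.find(v);
    if (it == arrival.end()) {
        arrival[v] = new_arrival;
        predecessor[v] = u;
        queue.insert(v, new_arrival);
    } else if (new_arrival < it->second) {
        it->second = new_arrival;
        predecessor[v] = u;
        queue.decrease_key(v, new_arrival);
    }
}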

I have tested it on a small dummy graph. The graph is currently read from a file with the following format:

Graph input convention:
Line 1 contains 3 integers: V E R, where V is the number of vertices, E the number of edges, and R the discrete range of time values per edge.
The next V lines have the following format: each starts with an integer e denoting the number of edges going out from that vertex, and the next e values on the same line give the target vertex of each edge.
Next we have E*R lines, each containing 3 integers: eid, start_time, travel_time. These are used to generate the weight map.
The last line contains the source id.

The distance_map and predecessor_map are printed.

You can find the code in the /extra/tdsp/ folder. The test graph is in the /extra/tdsp/test_data folder.

To test, you can go to the tdsp folder and do:
g++ -Wno-deprecated tdsp_core.cpp
./a.out < ./test_data/graph1.dat

TODO:
- There will be many ways to optimize this code. I will list the possible improvements soon.
- Organise the code into header files etc.
- Test for larger input graphs. Create test cases that cover all scenarios.

Thoughts / suggestions?



Regards,
-Jay Mahadeokar