[pgrouting-dev] Network Layering support

Hi.

I am currently doing an MTech in Computer Science and Engineering at the Indian Institute of Technology Kanpur, India. I have used pgRouting while working with PostgreSQL in the past. I have a special interest in routing algorithms and would like to contribute in some way.

I was looking at the ideas posted here: http://wiki.osgeo.org/wiki/OpenRouter_2010_SOC_Ideas which mentions that network layering support is desirable, as it would allow the routing algorithm to change from more to less dense networks to enable long-distance routing. I have worked with sparse spanner algorithms and was wondering whether they can be applied here. A 3-spanner algorithm lets you extract a sparse subgraph containing O(n^(3/2)) edges from a dense graph containing O(n^2) edges in linear time, O(m), where m is the number of edges.
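For what it's worth, here is a small sketch (Python, purely illustrative, not pgRouting code) of the classic greedy t-spanner construction: process edges by increasing weight and keep an edge only if the spanner built so far cannot already connect its endpoints within t times the edge weight. With t = 3 this yields a 3-spanner; note the greedy variant shown here is not the linear-time construction mentioned above, it just illustrates the sparsification idea.

```python
import heapq

def spanner_dist(adj, src, dst, limit):
    # Dijkstra distance from src to dst in the partial spanner,
    # abandoning the search once distances exceed `limit`.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > limit or d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(edges, t=3):
    # Keep an edge (u, v, w) only if u and v are not already
    # connected within t * w in the spanner built so far.
    adj, kept = {}, []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if spanner_dist(adj, u, v, t * w) > t * w:
            kept.append((u, v, w))
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return kept

# Complete graph on 5 nodes, unit weights: the 3-spanner keeps only a
# spanning star (4 edges) instead of all 10 edges.
edges = [(u, v, 1) for u in range(5) for v in range(u + 1, 5)]
spanner = greedy_spanner(edges)
print(len(spanner))
```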

It would be great if I could know more details about the requirement. For example, while constructing a spanner for a road network, do we want to consider the types of roads and prioritize them accordingly?

PS - I am a complete newbie when it comes to contributing to open-source projects of this scale. Please excuse me if I ask the wrong questions. I would also like to work on other ideas that may be on your TODO list. It would be great if you could direct me to some links that may help me get started properly.


Regards,
-Jay Mahadeokar

Hi all,

Excuse me for bringing up an old topic again! Quoting the idea posted on the GSoC 2010 ideas page:

Network Layering Support:
This is quite a challenging task. Network layering would allow the routing algorithm to change from more to less dense networks to enable long-distance routing.

Has this idea still got priority? If yes, I would like to have more info on it.

On Sat, Oct 16, 2010 at 10:48 AM, Jay Mahadeokar <jai.mahadeokar@gmail.com> wrote:


Regards,
-Jay Mahadeokar

Hi Jay,

If you are up for tackling another algorithm, I think this is the one that has really high value for pgRouting:

http://algo2.iti.kit.edu/english/routeplanning.php

The contraction hierarchies routing is awesome stuff. There is a lot of pre-analysis of the network, which takes a lot of compute time, but once that is done and stored, the resulting structures let you compute routes of continental distances in milliseconds.
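To make the pre-analysis concrete, here is a toy Python sketch of node contraction (purely illustrative, not the KIT or pgRouting code; real implementations order nodes by sophisticated priority heuristics and handle directed edges): nodes are contracted one by one, and a shortcut u-w via v is added only when a "witness search" finds no equally short path that avoids v.

```python
import heapq
from itertools import combinations

def dist_avoiding(adj, src, dst, banned, limit):
    # Shortest src-dst distance ignoring `banned` nodes; gives up past `limit`.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > limit or d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if v not in banned and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def contract(adj, order):
    # Contract nodes in `order`; return shortcut edges (u, w, weight, via).
    shortcuts, done = [], set()
    for v in order:
        for u, w in combinations(list(adj[v]), 2):
            if u in done or w in done:
                continue
            through = adj[v][u] + adj[v][w]
            # Witness search: is there a path avoiding v that is as short?
            if dist_avoiding(adj, u, w, done | {v}, through) > through:
                adj[u][w] = min(adj[u].get(w, float("inf")), through)
                adj[w][u] = adj[u][w]
                shortcuts.append((u, w, through, v))
        done.add(v)
    return shortcuts

# Path graph 0-1-2-3 with unit weights; contracting the interior nodes
# first forces shortcuts that "jump over" them.
adj = {0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1}}
shortcuts = contract(adj, [1, 2, 0, 3])
print(shortcuts)
```

At query time a bidirectional search that only relaxes edges leading upward in the contraction order runs over the original edges plus these stored shortcuts, which is where the millisecond query times come from.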

This should be a very high priority for us, as it is already in at least two separate tools that work with OSM data. The value add for pgRouting is that we could then use this algorithm for any dataset that meets the minimum requirements for contraction hierarchies.

Daniel, Anton, do you agree that this is a priority?

I hope you will give this one serious consideration.

Best regards,
   -Steve

On 1/28/2011 8:22 AM, Jay Mahadeokar wrote:

_______________________________________________
pgrouting-dev mailing list
pgrouting-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/pgrouting-dev

Hi,

It seems there has been a lot of research on speed-up techniques for routing. Engineering Route Planning Algorithms gives a nice overview of all the techniques invented up to 2008.

It also provides comparisons of the pre-processing time and space required for each of the techniques. (I have attached the table for quick reference.)

Interesting point made in the paper:
Advanced algorithms for routing in road networks require thousands of lines of well written code and hence require considerable programming skill. In particular, it is not trivial to make the codes work for large networks. Here is an incomplete list of problems and complications that we have seen in routing projects: Graphs have to be glued together from several files. Tools for reading files crash for large graphs. Algorithm library code cannot handle large graphs at all. The code slows down several times when switching from a custom graph representation to an algorithm library. 32-bit code will not work. Libraries do not work with 64-bit code. Our conclusion from these experiences was to design our own graph data structures adapted to the problem at hand. We use C++ with encapsulated abstract data types. Templates and inline functions make this possible without performance penalties.

Source Code (Quote from their web-page)
The source code of our implementation of contraction hierarchies (CHs) is now available under the terms of the AGPL. If you are interested in getting the source code write an email to Robert Geisberger.

They have also come up with another paper in 2010, “A comparison for high-level approaches for speed-up in path finding algorithms”, but the full text does not seem to be available. (I think this might be useful too.)

The GSoC page lists Network Layering (Contraction Hierarchies) and Highway Hierarchies as two separate problems. But the Contraction Hierarchies technique seems to outperform HH in almost all categories, and it is comparatively less complex.

I think it will require a lot of brainstorming before we can narrow down a particular technique to implement, and also analyse the feasibility and difficulty of implementing it. Hoping to hear all your opinions on this.

PS:
I have started work on my Masters thesis, and I plan to explore this area further and maybe extend the work. (I need to do 8+16+16 thesis credits over this semester and the next two semesters; 4 credits is equivalent to one subject.) A GSoC project in summer would be an added incentive!

On Fri, Jan 28, 2011 at 11:45 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:



Regards,
-Jay Mahadeokar

SpeedUpComparison.png

2011/1/29 Stephen Woodbridge <woodbri@swoodbridge.com>


Daniel, Anton, do you agree that this is a priority?

Hi Steve,

Yes, I agree that contraction hierarchies would be much more valuable than layering.
When you want to provide some layering, you don't really know how to define those layers for a general use case.

So we should probably update the GSoC ideas page a bit. GSoC 2011 is going to start soon, I think.

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

Hi Jay,

Source Code (Quote from their web-page)
The source code of our implementation of contraction hierarchies (CHs) is now available under the terms of the AGPL. If you are interested in getting the source code write an email to Robert Geisberger.

I think I saw it once somewhere already, but couldn't find it anymore.
Instead I found these two pages:

They have also come up with another paper in 2010: “A comparison for high-level approaches for speed-up in path finding algorithms” but the full-text does not seem to be available. (I think this might be useful too)

The GSoC page lists Network Layering (Contraction Hierarchies) and Highway Hierarchies as two separate problems. But the Contraction Hierarchies technique seems to outperform HH in almost all categories, and it is comparatively less complex.

Needs to be updated for 2011. It was just a collection of ideas, but not a requirement for students to follow.

Daniel



Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

On 1/28/2011 6:15 PM, Jay Mahadeokar wrote:

Hi,

It seems there has been a lot of research on speed-up techniques for routing. Engineering Route Planning Algorithms <http://algo2.iti.kit.edu/english/1575.php> gives a nice overview of all the techniques invented up to 2008.

It also provides comparisons of the pre-processing time and space required for each of the techniques. (I have attached the table for quick reference.)

Yes, I think the transit node variations are similar to contraction hierarchies, and either of these would be a good addition and better than the bi-directional hierarchical levels algorithm.

No doubt that any major undertaking along these lines will require a significant investment of time and energy.

Interesting point made in the paper:
Advanced algorithms for routing in road networks require thousands of lines of well written code and hence require considerable programming skill. In particular, it is not trivial to make the codes work for large networks. Here is an incomplete list of problems and complications that we have seen in routing projects: Graphs have to be glued together from several files. Tools for reading files crash for large graphs. Algorithm library code cannot handle large graphs at all. The code slows down several times when switching from a custom graph representation to an algorithm library. 32-bit code will not work. Libraries do not work with 64-bit code. Our conclusion from these experiences was to design our own graph data structures adapted to the problem at hand. We use C++ with encapsulated abstract data types. Templates and inline functions make this possible without performance penalties.

Source Code (Quote from their web-page)
The source code of our implementation of contraction hierarchies (CHs) is now available under the terms of the AGPL. If you are interested in getting the source code write an email to Robert Geisberger <http://algo2.iti.kit.edu/english/geisberger.php>.

Right, but there are also a few other implementations of contraction hierarchies in the wild that might not be based on their code, like the MoNav implementation, which is released under GPLv3, and at least one person was looking at it in conjunction with Boost (google: contraction hierarchies boost).


Hi,

On Sat, Jan 29, 2011 at 6:13 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:


I requested the source code of CH from Robert Geisberger and went through it (grateful to him for making the source available under the AGPL). It takes .ddsg files as input and produces contraction hierarchies in the .ch file format.

.DDSG File Format:

  • a text file, whitespace-separated;
  • starts with the single letter ‘d’;
  • followed by the number of nodes n and the number of edges m;
  • for each of the m edges, we have:
    • source node ID s, an unsigned 32-bit integer, 0 <= s < n;
    • target node ID t, an unsigned 32-bit integer, 0 <= t < n;
    • edge weight w, an unsigned 32-bit integer; note that the length of the longest shortest path must fit into a 32-bit integer;
    • the direction d:
      • 0 = open in both directions
      • 1 = open only in forward direction (from s to t)
      • 2 = open only in backward direction (from t to s)
      • 3 = closed
    • Note that specifying an edge (s,t,w,1) is equivalent to specifying an edge (t,s,w,2). It does not matter which form is used. In the current implementation, 3 is interpreted as 0, i.e., closed roads are used in both directions. If you really want to completely close a road, just leave it out.
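Since the layout is just whitespace-separated integers, exporting a graph (e.g. pulled out of a "ways"-style table) to .ddsg takes only a few lines. A minimal Python sketch (the file name demo.ddsg is only for illustration):

```python
def write_ddsg(path, n, edges):
    # edges: list of (source, target, weight, direction) tuples,
    # matching the .ddsg layout described above.
    with open(path, "w") as f:
        f.write("d\n")
        f.write(f"{n} {len(edges)}\n")
        for s, t, w, d in edges:
            f.write(f"{s} {t} {w} {d}\n")

def read_ddsg(path):
    tokens = open(path).read().split()
    assert tokens[0] == "d"
    n, m = int(tokens[1]), int(tokens[2])
    vals = list(map(int, tokens[3:3 + 4 * m]))
    return n, [tuple(vals[i:i + 4]) for i in range(0, 4 * m, 4)]

# A two-way edge 0-1 (weight 5) and a one-way edge 1->2 (weight 7):
write_ddsg("demo.ddsg", 3, [(0, 1, 5, 0), (1, 2, 7, 1)])
print(read_ddsg("demo.ddsg"))
```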

CH File Format:

  • a binary file, 32-bit-integer organized
  • layout:
    • “CH\r\n” (0x43 0x48 0x0d 0x0a)
    • unsigned int: version (currently “1”, should be compared with ==)
    • unsigned int: number of nodes (= n)
    • unsigned int: number of original edges (= m1)
    • unsigned int: number of shortcut edges (= m2)
    • n times, for each node 0…(n-1):
      • unsigned int: level
    • m1 times, original edges:
      • unsigned int: source node
      • unsigned int: target node
      • unsigned int: weight
      • unsigned int: flags
    • m2 times, shortcut edges:
      • unsigned int: source node
      • unsigned int: target node
      • unsigned int: weight
      • unsigned int: flags
      • unsigned int: shortcut middle node
    • unsigned int: 0x12345678 as terminator
  • possible (bit) flags are:
    • 1 = forward edge
    • 2 = backward edge
    • 4 = shortcut edge
    • Note that not all edges of the original graph are listed as original edges. Edges that are not on any shortest path may be removed or replaced by shortcuts.
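The .ch layout above is straightforward to decode with Python's struct module. A sketch (byte order is not stated in the quoted description, so little-endian is an assumption here; the magic bytes correspond to the string "CH\r\n"):

```python
import struct

def parse_ch(data):
    # Decode the .ch layout: header, per-node levels, original edges,
    # shortcut edges, terminator. Assumes little-endian 32-bit ints.
    assert data[:4] == b"CH\r\n"
    off = 4
    def u32():
        nonlocal off
        (v,) = struct.unpack_from("<I", data, off)
        off += 4
        return v
    version, n, m1, m2 = u32(), u32(), u32(), u32()
    assert version == 1
    levels = [u32() for _ in range(n)]
    original = [(u32(), u32(), u32(), u32()) for _ in range(m1)]
    shortcuts = [(u32(), u32(), u32(), u32(), u32()) for _ in range(m2)]
    assert u32() == 0x12345678  # terminator
    return levels, original, shortcuts

# Round-trip a tiny synthetic buffer: 2 nodes, 1 forward edge, no shortcuts.
buf = (b"CH\r\n"
       + struct.pack("<4I", 1, 2, 1, 0)   # version, n, m1, m2
       + struct.pack("<2I", 0, 1)         # node levels
       + struct.pack("<4I", 0, 1, 5, 1)   # edge 0->1, weight 5, flags=forward
       + struct.pack("<I", 0x12345678))   # terminator
print(parse_ch(buf))
```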

Few queries:

  1. If pgRouting has to support routing using contraction hierarchies, what exactly is the idea? A postgres database table like “ways” will be taken as input, and the output would be another postgres table that includes the data in the form of contraction hierarchies?

  2. Will the queries be the same, like shortest_path(), just that at the backend the pre-processed contraction hierarchies data will be used?

  3. What should be the exact format for representing the data for contraction hierarchies?



Regards,
-Jay Mahadeokar

On 2/16/2011 10:17 PM, Jay Mahadeokar wrote:

Hi,

On Sat, Jan 29, 2011 at 6:13 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

    Source Code (Quote from their web-page)
    The source code of our implementation of contraction hierarchies (CHs) is now available under the terms of the AGPL. If you are interested in getting the source code write an email to Robert Geisberger <http://algo2.iti.kit.edu/english/geisberger.php>.

I had requested Robert Geisberger for the source code of CH, and I went through it (grateful to him for making the source available under the AGPL). It takes .ddsg files as input and produces contraction hierarchies in the .ch file format.

.DDSG File Format:

  • a text file, whitespace-separated;
  • starts with the single letter ‘d’;
  • followed by the number of nodes n and the number of edges m;
  • for each of the m edges, we have:
    • source node ID s, an unsigned 32-bit integer, 0 <= s < n;
    • target node ID t, an unsigned 32-bit integer, 0 <= t < n;
    • edge weight w, an unsigned 32-bit integer; note that the length of the longest shortest path must fit into a 32-bit integer;
    • the direction d:
      • 0 = open in both directions
      • 1 = open only in forward direction (from s to t)
      • 2 = open only in backward direction (from t to s)
      • 3 = closed
    • Note that specifying an edge (s,t,w,1) is equivalent to specifying an edge (t,s,w,2). It does not matter which form is used. In the current implementation, 3 is interpreted as 0, i.e., closed roads are used in both directions. If you really want to completely close a road, just leave it out.

So this seems to define the minimal requirements of the "ways" table, in that we need to be able to easily extract this data and pass it to the pre-processor. And maybe the results need to be able to fetch some of this data.

CH File Format:

  • a binary file, 32-bit-integer organized
  • layout:
    • “CH\r\n” (0x43 0x48 0x0d 0x0a)
    • unsigned int: version (currently “1”, should be compared with ==)
    • unsigned int: number of nodes (= n)
    • unsigned int: number of original edges (= m1)
    • unsigned int: number of shortcut edges (= m2)
    • n times, for each node 0..(n-1):
      • unsigned int: level
    • m1 times, original edges:
      • unsigned int: source node
      • unsigned int: target node
      • unsigned int: weight
      • unsigned int: flags
    • m2 times, shortcut edges:
      • unsigned int: source node
      • unsigned int: target node
      • unsigned int: weight
      • unsigned int: flags
      • unsigned int: shortcut middle node
    • unsigned int: 0x12345678 as terminator
  • possible (bit) flags are:
    • 1 = forward edge
    • 2 = backward edge
    • 4 = shortcut edge
  • Note that not all edges of the original graph are listed as original edges. Edges that are not on any shortest path may be removed or replaced by shortcuts.

Seems like this data could be put in tables or blob(s) depending on how much there is. Obviously, getting data in/out of the database instead of these files will have an impact on the code, but hopefully that is manageable.
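For a sense of scale, the layout above can be parsed in a few lines. A minimal sketch (the function name parse_ch is made up, and little-endian byte order is an assumption the spec does not state):

```python
import struct

# Parse the .ch layout described above: magic, version, counts, per-node
# levels, original edges (4 uints), shortcut edges (5 uints, the last being
# the shortcut middle node), and a fixed terminator word.
def parse_ch(buf):
    assert buf[:4] == b"CH\r\n"          # magic bytes
    off = 4
    def u32():
        nonlocal off
        (v,) = struct.unpack_from("<I", buf, off)
        off += 4
        return v
    version, n, m1, m2 = u32(), u32(), u32(), u32()
    assert version == 1
    levels = [u32() for _ in range(n)]
    original = [(u32(), u32(), u32(), u32()) for _ in range(m1)]
    shortcuts = [(u32(), u32(), u32(), u32(), u32()) for _ in range(m2)]
    assert u32() == 0x12345678           # terminator
    return levels, original, shortcuts
```

Loading this straight into tables (nodes with levels, edges with flags, shortcuts with middle node) would be the obvious mapping.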

*Few queries:*

1. If pgRouting has to support the routing using contraction
hierarchies, what exactly is the idea? The postgres database table like
"ways" will be taken as input and the output would be another postgres
database table that will include data in form of contraction hierarchies?

I would envision that we take a table like "ways" as input. I could see additional columns or tables that might contain other data like turn restrictions, or whatever might be needed for input. But to keep it simple, something similar to the current table, which has edges, geometry, costs, etc. It might or might not have nodes assigned, if your process did that.

Today we prep the "ways" table by creating the "source" and "target" node number columns and running the assign_node_id() process to create nodes for each edge. I see some kind of process that reads this data and creates the contraction hierarchies data, which then gets stored back into the database however you deem appropriate.

2. The queries will be same like shortest_path() just that at backend
the contraction hierarchies pre-processed data be used?

Correct. The input might change because the requirements for contraction hierarchies might be different, but a route request would be started by a call to a stored procedure. The result should be a record set where each record represents traversing a single segment in the original "ways" table. For example, the results might be simply a list of gid in the order of traversal, or something more. Some example results might be:

Each of these represents a set of records:

1. gid
2. gid, cost
3. gid, reversed, cost
etc

gid - unique edge id from the "ways" table
cost - cumulative cost to get to the end of this segment along the path
reversed - Y|N flag to indicate if traversed backwards or forwards

3. What should be the exact format for representing data for contraction
hierarchies?

I'm not sure what you are asking here. If this is the internal contraction hierarchies data created by the preprocessing step, then I think this is up to you. You might want to store it as a blob; if the data can be spatially organized so you can fetch it as needed, then some brainstorming on how to do that might be appropriate.

To some extent, performance of this process in the database will probably be limited by how efficiently you can access the data needed to process the request.

In the current process, we give a bounding box for the data we need, because we have to read that data, build a boost graph structure, and then solve that graph. Since the data is preprocessed, I'm not sure that is needed for this, so we might just have something like:

select start_node from CH_pnt_to_node(start_point);
select end_node from CH_pnt_to_node(end_point);
select * from CH_solve("table", start_node, end_node);

or something like that. Basically, if there are helper functions needed, then that is appropriate.

Best regards,
   -Steve

    Right, but there are also a few other implementations of the
    contraction hierarchies in the wild that might not be based on their
    code, like the MONAV implementation, which is released on GPLv3, and
    at least one person was looking at it in conjunction with Boost
    (google: contraction hierarchies boost)

--
Regards,
-Jay Mahadeokar

_______________________________________________
pgrouting-dev mailing list
pgrouting-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/pgrouting-dev

Hi Jay,

Thank you for taking a look at the source code!
I think Anton can answer your questions better, so I actually just want to send you two additional links I just found right now:

    * http://routingdemo.geofabrik.de/
    * http://sourceforge.net/projects/routed/

It uses the contraction hierarchies algorithm.

Regarding the license of CH (AGPL) and the one of pgRouting (GPL), we probably need to make sure that CH becomes an optional component.

From my level of understanding, I agree with Steve that CH would just provide another way of pre-processing the data than what we do now. Whether this is possible, and how it works best within a database, is probably the big challenge.

@Anton … did you catch this conversation?

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

Hi Anton,

On Fri, Feb 18, 2011 at 6:27 PM, Daniel Kastl <daniel@georepublic.de> wrote:

Hi Jay,

Thank you for taking a look at the source code!
I think Anton can answer your questions better, so I actually just want to send you two additional links I just found right now:

    * http://routingdemo.geofabrik.de/
    * http://sourceforge.net/projects/routed/

It uses the contraction hierarchies algorithm.

Thanks for the links. Are you certain that the above project uses the contraction hierarchies algorithm by Robert Geisberger? I went thru the source, and they are using contraction, but I did not see where they have used the node ordering step proposed in the CH paper. I did not find contraction hierarchies mentioned in their readme either: http://sourceforge.net/apps/trac/routed/wiki/ReadMe

Anyway, they use an OSM file as input, and the output consists of two files, 'file.osrm' and 'file.osrm.names', which are again preprocessed to generate 'file.osrm.hsgr' and 'file.osrm.nodes', which are then used for answering queries.

Again, a method to keep the data pre-processed in order to speed up Dijkstra.

Regarding the license (AGPL) and the one of pgRouting (GPL) we probably need to make sure, that CH will become an optional component.

From my level of understanding I agree with Steve, that CH would just provide another way of pre-processing the data than we do now. If this is possible and how it works best within a database, that’s probably the big challenge.

I agree. So, basically, if we want to add CH to pgRouting, we will need the following components:

*Preprocessing*
A tool like osmtopgrouting_CH that would preprocess the OSM files into postgres tables.
If we are using the code already made available thru AGPL (I don't understand the intricacies of different licences) by Robert Geisberger, then we can first have the following:
A. A module that converts OSM data to .ddsg files.
B. Use the above code to convert the .ddsg files into .ch (contraction hierarchy) files.
C. Convert the .ch file into a postgres_ch database table.

*Querying*

If you look at the CH algorithm, the main idea is to add shortcut edges, using a node ordering, such that the new edges added justify the extra space used in terms of the speed-up gained.

The .ch file produced retains the original edges, but also contains the added shortcut edges. Now, suppose we have converted this info into a postgres table. According to my intuition, the original shortest path query should readily work on this table data and provide considerable speed-up due to the shortcuts.

The CH paper also talks about various refinements to the shortest path algorithm which will enhance the speed even more. Techniques include the concept of searching on upward and downward graphs, pruning the search space using the stall-on-demand technique, etc. (I have just had an overview and not understood the techniques in complete detail for now.)

So, this can be a completely different module:
Shortest path query on the CH preprocessed table using the speed-up techniques mentioned in the CH paper.

I tried to put forward my understanding of the problem. I need to take a deeper look into the source code as well as the paper before the above points can be confirmed.
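To make the upward/downward search idea concrete, here is a very rough sketch of the bidirectional query on a contracted graph (no stall-on-demand or other pruning from the paper; the adjacency inputs and the name ch_query are hypothetical):

```python
import heapq

# Bidirectional CH query sketch: run Dijkstra from the source over edges
# leading to higher-ranked nodes only ("upward"), run Dijkstra from the
# target over reversed edges leading to higher-ranked nodes only, and take
# the best meeting node in the two search spaces.
def ch_query(up, down_rev, s, t):
    def upward_dijkstra(adj, src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, v = heapq.heappop(pq)
            if d > dist[v]:
                continue                      # stale queue entry
            for w, wt in adj.get(v, []):
                nd = d + wt
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(pq, (nd, w))
        return dist
    df = upward_dijkstra(up, s)        # forward search space
    db = upward_dijkstra(down_rev, t)  # backward search space
    # shortest path length = min over all nodes settled by both searches
    return min((df[v] + db[v] for v in df if v in db), default=None)
```

Because shortcuts guarantee a shortest path exists that goes only "up" and then only "down" in the node ordering, both searches stay tiny compared to plain Dijkstra.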


Regards,
-Jay Mahadeokar

On 2/18/2011 3:53 PM, Jay Mahadeokar wrote:

Hi Anton,

On Fri, Feb 18, 2011 at 6:27 PM, Daniel Kastl <daniel@georepublic.de> wrote:

    Hi Jay,

    Thank you for taking a look at the source code!
    I think Anton can answer better to your questions, so I actually
    just want to send you two additional links I just found about now:

        * http://routingdemo.geofabrik.de/
        * http://sourceforge.net/projects/routed/

    It uses contraction hierarchies algorithm.

Thanks for the links. Are you certain that the above project uses
contraction hierarchies algorithm by Robert Geisberger? I went thru the
source, and they are using contraction, but I did not see where they
have used the node ordering step as proposed in the CH paper. I did not
find contraction hierarchies mentioned in their readme too:
http://sourceforge.net/apps/trac/routed/wiki/ReadMe

It is possible that they read the paper and made their own implementation to avoid the AGPL license. The paper describes the algorithms in great detail, so I'm sure this is possible.

Anyways, they are using osm file as input and the output consists of two
files 'file.osrm' and 'file.osrm.names'. Which is again preprocessed to
generate file.osrm.hsgr' 'file.osrm.nodes' which are then used for
answering queries.

Again, a method to keep the data pre-processed in order to speed up
Dijkstra.

    Regarding the license (AGPL) and the one of pgRouting (GPL) we
    probably need to make sure, that CH will become an optional component.

     From my level of understanding I agree with Steve, that CH would
    just provide another way of pre-processing the data than we do now.
    If this is possible and how it works best within a database, that's
    probably the big challenge.

I agree. So, basically if we want to add CH to pgRouting we will need
following components:
*Preprocessing*
Tool like osmtopgrouting_CH that would preprocess the osm files to
postgres tables.

I think we need a general tool that can extract data from postgis tables into the CH preprocessor. There are a lot of people that are not using OSM data.

     If we are using the code already made available thru AGPL (I dont
understand the intricacies of different licences) by Robert

I think we need to read and understand the AGPL license and how it might impact the current license. These are tricky issues, and it might prohibit us from including their code directly.

If we wanted to do that, we could still use the AGPL code to build a test model that would allow us to validate that our code was getting reasonable results.

Geisberger, then we can first have following:
    A. Module that converts osm data to ddsg files.
    B. Use the above code to convert ddsg files into .ch(contraction
hierarchy) files
    C. Convert the .ch file into postgres_ch database table.

This seems like a reasonable approach to start with. We might eventually want to replace step A with a stored procedure that does this step in the database, but that would be more complicated and would require re-factoring the Geisberger code, or designing and coding a new module that does the same thing, which might get around the licensing issues.

*Querying*

If you look at the CH algorithm, the main idea is to add shortcut edges
using node ordering such that the overall new edges added justify the
extra space used in terms of speed-up gained.

The .ch file produced retains original edges, but also contains the
added short-cut edges. Now, suppose we have converted this info into
postgres table. According to my intuition, the original shortest path
query should readily work on this table data and provide considerable
speed up due to shortcuts.

Obviously, you need to be able to reconstruct the original path that the short-cut edge represented.
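That reconstruction falls out of the "shortcut middle node" field in the .ch format: a shortcut (u, v) with middle node m stands for the pair (u, m) and (m, v), each of which may itself be a shortcut. A hypothetical sketch (the map name `middle` and function `unpack` are made up):

```python
# Recursively expand a path edge into original edges. `middle` maps a
# shortcut edge (u, v) to its middle node; original edges are absent
# from the map and are emitted as-is.
def unpack(u, v, middle):
    if (u, v) not in middle:
        return [(u, v)]                 # original edge, nothing to expand
    m = middle[(u, v)]
    return unpack(u, m, middle) + unpack(m, v, middle)
```

Stored in a table, `middle` would just be the shortcut rows, so unpacking a route is a handful of lookups per shortcut.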

The CH paper also talks about various refinements in the shortest path
algo which will enhance the speed even more. Techniques include the
concept of searching on Upward and Downward graphs, pruning search space
using stall-on-demand technique etc. (I have just had an overview and
not understood the techniques in complete detail for now)

Sounds like more good stuff that needs to be understood. I read the paper about a year ago, so it is not fresh in my mind; I guess I need to make some time to re-read it.

So, this can be a completely different module :
Shortest path query on the CH preprocessed table using the speed-up
techniques mentioned in CH paper.

I tried to put forward my understanding of the problem. Need to take a
deeper look into source code as well as paper before the above points
can be confirmed.

Sounds like you have made a great start already, keep up the good work on this.

Best regards,
   -Steve

    @Anton ... did you catch this conversation?

    Daniel

    2011/2/17 Stephen Woodbridge <woodbri@swoodbridge.com
    <mailto:woodbri@swoodbridge.com>>

        On 2/16/2011 10:17 PM, Jay Mahadeokar wrote:

            Hi,

            On Sat, Jan 29, 2011 at 6:13 AM, Stephen Woodbridge
            <woodbri@swoodbridge.com <mailto:woodbri@swoodbridge.com>
            <mailto:woodbri@swoodbridge.com
            <mailto:woodbri@swoodbridge.com>>> wrote:

                *Source Code *(Quote from their web-page)

                    /The source code of our implementation of contraction
                    hierarchies (CHs)
                    is now available under the terms of the AGPL. If
            you are
                    interested in
                    getting the source code write an email to Robert
            Geisberger
            <http://algo2.iti.kit.edu/english/geisberger.php&gt;\.
                    /

            I had requested Robert Geisberger for the source code of CH,
            and I went
            thru it (Grateful to him for making the source available
            under AGPL). It
            takes as input .ddsg files and produces contraction
            hierarchies in .ch
            file format.

            *.DDSG* *File Format:*

                * a /text/-file, whitespace-separated;
                * starts with single letter 'd';
                * followed by the number of nodes /n/ and the number of
            edges /m/
                * for each of the /m/ edges, we have
                      o source node ID /s/, an unsigned 32-bit integer,
            0 <= /s/ < /n/;
                      o target node ID /t/, an unsigned 32-bit integer,
            0 <= /t/ < /n/;Quering
                      o edge weight /w/, an unsigned 32-bit integer;
            note that the

                        length of the longest shortest path must fit
            into a 32-bit
                        integer
                      o the direction /d/: //

                            + //0 = open in both directions //
                            + //1 = open only in forward direction (from
            /s/ to /t/) //
                            + //2 = open only in backward direction
            (from /t/ to /s/) //
                            + //3 = closed //
                        //Note that specifying an edge (s,t,w,1) is
            equivalent to
                        specifying an edge (t,s,w,2). It does not matter
            which form
                        is used. In the current implementation, 3 is
            interpreted as
                        0, i.e., closed roads are used in both
            directions. If you
                        really want to completely close a road, just
            leave it away. //

        So this seems to define the minial requirements of the "ways"
        table in that we need to easily be able to extract this data and
        pass it to the pre-processor. And maybe the results need to be
        able to fetch some of this data.

            *CH File Format:*

                * a /binary/ file, 32-bit-interger organized
                * layout:
                      o "|CH\r\n|" (|0x32 0x48 0x0d 0x0a|)
                      o unsigned int: version (currently "|1|", shold be
            |==| compared)
                      o unsigned int: number of nodes (= n)
                      o unsigned int: number of original edges (= m_1 )
                      o unsigned int: number of shortcut edges (= m_2 )
                      o n times, for each node 0..(n-1):
                            + unsigned int: level
                      o m_1 times, original edges:

                            + unsigned int: source node
                            + unsigned int: target node
                            + unsigned int: weight
                            + unsigned int: flags
                      o m_2 times, shortcut edges:

                            + unsigned int: source node
                            + unsigned int: target node
                            + unsigned int: weight
                            + unsigned int: flags
                            + unsigned int: shortcut middle node
                      o unsigned int: |0x12345678| as terminator
                * possible (bit) flags are:
                      o |1| = forward edge
                      o |2| = backward edge
                      o |4| = shortcut edge
                      o Note that not all edges of the original graph
            are listed as

                        original edges. Edges that are not on any
            shortest path may
                        be removed or replaced by shortcuts.

        Seem like this data could be put in tables or blob(s) depending
        on how much there. Obviously get data in/out of the database
        instead of these files will have an impact on the code, but
        hopefully that is manageable.

            *Few queries:*

            1. If pgRouting has to support the routing using contraction
            hierarchies, what exactly is the idea? The postgres database
            table like
            "ways" will be taken as input and the output would be
            another postgres
            database table that will include data in form of contraction
            hierarchies?

        I would envision that we take a table like "ways" as input. I
        could see additional columns or tables that might contain other
        data like turn restrictions, or whatever might be needed for
        input. But to keep it simple something similar to the current
        table which has edges, geometry, costs, etc. I might have nodes
        assigned or not if you process did that.

        Today we prep the "ways" table by creating the "source" and
        "target" node number columns and run the assign_node_id()
        process to create nodes for each edge. I see some kind of
        process that reads this data and creates the contraction
        hierarchies data, which then gets stored back into the
        database however you deem appropriate.

            2. The queries will be the same as shortest_path(), just
            that at the backend the pre-processed contraction
            hierarchies data will be used?

        Correct. The input might change because the requirements for
        contraction hierarchies might be different, but a route request
        would be started by a call to a stored procedure. The result
        should be a record set where each record represents traversing
        a single segment in the original "ways" table. For example,
        the results might simply be a list of gids in the order of
        traversal, or something more. Some example results might be:

        Each of these represents a possible record format for the
        result set:

        1. gid
        2. gid, cost
        3. gid, reversed, cost
        etc

        gid - unique edge id from the "ways" table
        cost - cumulative cost to get to the end of this segment
        along the path
        reversed - Y|N flag to indicate if traversed backwards or forwards

            3. What should be the exact format for representing data for
            contraction
            hierarchies?

        I'm not sure what you are asking here. If this is the internal
        contraction hierarchies data created by the preprocessing step,
        then I think this is up to you. You might want to store it as a
        blob, if data can be spatially organized so you can fetch it as
        needed, then some brainstorming on how to do that might be
        appropriate.

        To some extent, performance of this process in the database
        will probably be limited by how efficiently you can access
        the data needed to process the request.

        In the current process, we give a bounding box for the data
        we need, because we have to read that data, build a boost
        graph structure, and then solve that graph. Since the data is
        preprocessed, I'm not sure that is needed for this, so we
        might just have something like:

        select start_node from CH_pnt_to_node(start_point);
        select end_node from CH_pnt_to_node(end_point);
        select * from CH_solve("table", start_node, end_node);

        or something like that. Basically, if helper functions are
        needed, then that is appropriate.

        Best regards,
          -Steve

                Right, but there are also a few other implementations
                of the contraction hierarchies in the wild that might
                not be based on their code, like the MONAV
                implementation, which is released under GPLv3, and at
                least one person was looking at it in conjunction
                with Boost (google: contraction hierarchies boost)

            --
            Regards,
            -Jay Mahadeokar

            _______________________________________________
            pgrouting-dev mailing list
            pgrouting-dev@lists.osgeo.org
            http://lists.osgeo.org/mailman/listinfo/pgrouting-dev


    --
    Georepublic UG & Georepublic Japan
    eMail: daniel.kastl@georepublic.de
    Web: http://georepublic.de


--
Regards,
-Jay Mahadeokar


Hi Jay and Steve,

This email is getting very long, so I will shorten it a bit.

It uses contraction hierarchies algorithm.

Thanks for the links. Are you certain that the above project uses the
contraction hierarchies algorithm by Robert Geisberger? I went through
the source, and they are using contraction, but I did not see where
they have used the node ordering step as proposed in the CH paper. I
did not find contraction hierarchies mentioned in their readme either:
http://sourceforge.net/apps/trac/routed/wiki/ReadMe
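(For readers following along: the contraction step being discussed, separate from the node-ordering heuristics in question, amounts to removing a node and adding a shortcut wherever no "witness" path of equal or shorter length survives. A minimal sketch, using an illustrative dict-of-dicts graph and a plain Dijkstra witness search, might look like this:)

```python
import heapq

def dijkstra(adj, src, dst, limit):
    """Shortest src->dst distance, abandoning the search beyond limit."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u == dst:
            return d
        if d > limit:
            return float("inf")
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def contract(adj, v):
    """Remove node v from adj (a dict of dicts: node -> {neighbor: weight})
    and add a shortcut (u, w) whenever no witness path avoiding v is as
    short as going through v. Returns (u, w, weight, middle) tuples."""
    neighbors = dict(adj.get(v, {}))
    for u in neighbors:                 # detach v from the graph
        adj.get(u, {}).pop(v, None)
    adj.pop(v, None)
    shortcuts = []
    for u in neighbors:
        for w in neighbors:
            if u == w:
                continue
            via = neighbors[u] + neighbors[w]
            if dijkstra(adj, u, w, via) > via:    # no witness path found
                adj.setdefault(u, {})[w] = via
                shortcuts.append((u, w, via, v))  # v is the middle node
    return shortcuts
```

The recorded middle node corresponds to the "shortcut middle node" field in the file format discussed earlier; the node-ordering heuristics decide in which order contract() is applied.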

It is possible that they read the paper and made their own implementation to avoid the AGPL license. The paper describes the algorithms in great detail, so I’m sure this is possible.

The notice I read through Twitter: http://twitter.com/#!/geofabrik/status/38579590834159616
According to this, it uses CH. And the license is AGPL as well.

If we are using the code already made available thru AGPL (I don't
understand the intricacies of different licences) by Robert

I think we need to read and understand the AGPL license and how it might impact on the current license. These are tricky issues and it might prohibit us from including their code directly.

If we wanted to do that, we could still use the AGPL code to build a test model that would allow us to validate that our code was getting reasonable results.

As far as I understand AGPL is more strict than GPL in terms of services delivered with this software. So if you offer a routing service with this software, you need to also give access to the source code to the users of this service (GPL licensed software only cares about the software you pass to someone).
This is probably something that not every pgRouting user would be happy with.

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

I may have written this already, but in case I didn’t yet:
OpenTripPlanner also seems to have an implementation of CH:
http://opentripplanner.org/browser/trunk/opentripplanner-routing/src/main/java/org/opentripplanner/routing/contraction

Though their license is LGPL. How does this work?
Because it’s their own implementation?
http://opentripplanner.org/browser/trunk/opentripplanner-routing/src/main/java/org/opentripplanner/routing/contraction/ContractionHierarchy.java#L66

Daniel

2011/2/19 Daniel Kastl <daniel@georepublic.de>


The Robert Geisberger code can be licensed under a few different licenses, but that would need to be negotiated with them. At least it was my understanding that they also offer a commercial license for $$.

The algorithm is well documented in the various papers detailing it, and it would not be hard to reconstruct code from the papers that would not be burdened by any license agreement. It would obviously take some time and effort, and someone with the right technical background to understand the algorithm in enough detail to code it.

I think if you or the PSC approached them, we might be able to get them to relicense the code under terms favorable to the PostgreSQL license, since this would get embedded in it, depending on what terms we were requesting.

It doesn't hurt to ask!

-Steve

On 2/18/2011 8:57 PM, Daniel Kastl wrote:


Daniel and PSC,

So the opengraphrouter project that I started, and which was home to a couple of Google Summer of Code projects, has stalled out. But the concept of building a library that could be used standalone or integrated into pgRouting, or maybe some other databases or projects, still has appeal if not downright desirability.

I wonder if this might not be a good place to implement the CH code. I would be all for moving the project management and ownership under the pgRouting PSC.

I think this would give us more opportunity to be more inclusive of other projects and to get other developers to contribute to a common, multi-use routing project. Then pgRouting would just work on wrapping the libraries and embedding them in postgresql.

We would need to discuss how to modularize the components for reuse, but just off the top of my head I'm thinking something like:

o routing engine solvers
   - dijkstra solution
   - A* solver
   - shooting star solver
   - CH solver
   - etc
o specific data readers
   - shapefile reader (simple generic, vendor specific, ie Navteq)
   - OSM data reader
   - SQL table data loaders
   - etc
o data storage managers
   - PostGIS
   - SpatiaLite
   - SQLite
   - Our data file
   - etc
o solution post processors
   - drivetime contours
   - TSP
   - Driving Directions
   - etc

Anyway, this needs some thought, but you get the idea. Each of these could be a library with an API that makes it easy to mix and match the components.
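To make the solver layer concrete, here is a hypothetical sketch of such an API (the names Solver and DijkstraSolver are invented for illustration, not an existing pgRouting interface); each solver consumes the same generic edge list and returns traversed edge ids, matching the result-set discussion earlier in the thread:

```python
import heapq
from abc import ABC, abstractmethod

class Solver(ABC):
    """Hypothetical common interface for routing-engine solvers."""

    @abstractmethod
    def solve(self, edges, source, target):
        """edges: list of (gid, source_node, target_node, cost).
        Returns the gids of traversed edges, in order."""

class DijkstraSolver(Solver):
    def solve(self, edges, source, target):
        # Build an adjacency list from the generic edge list.
        adj = {}
        for gid, s, t, cost in edges:
            adj.setdefault(s, []).append((t, cost, gid))
        dist, prev = {source: 0.0}, {}
        pq = [(0.0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            if u == target:
                break
            for t, cost, gid in adj.get(u, []):
                nd = d + cost
                if nd < dist.get(t, float("inf")):
                    dist[t], prev[t] = nd, (u, gid)
                    heapq.heappush(pq, (nd, t))
        # Walk predecessors back to the source (raises KeyError if
        # the target is unreachable -- error handling omitted).
        path, node = [], target
        while node != source:
            node, gid = prev[node]
            path.append(gid)
        return list(reversed(path))
```

An A* or CH solver would then be a drop-in alternative implementing the same interface, while the data readers and storage managers supply the edge list from shapefiles, OSM data, or SQL tables.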

I would be interested in your thoughts on this.

Best regards,
   -Steve


Hi Steve,

Thank you for sharing your ideas!
I think that your proposal of a modular approach is very similar to what Anton and I had discussed already in the past and what we would like to have, too. Though it’s difficult to estimate how much effort and work would be required to achieve some usable result. It would probably be too much for a GSoC project.
Honestly I couldn’t tell right now how to get started best. I’m concerned about starting something and getting stuck in the middle without some usable result. It’s probably hard to work on such a change without someone volunteering or without some funding.
It’s difficult to keep an open source project active, and I would like to know for example if Ashraf is still engaged in the opengraphrouter project. Is he?

Daniel

2011/3/3 Stephen Woodbridge <woodbri@swoodbridge.com>



Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de

On 3/4/2011 12:40 AM, Daniel Kastl wrote:

Hi Steve,

Thank you for sharing your ideas!
I think that your proposal of a modular approach is very similar to what
Anton and I had discussed already in the past and what we would like to
have, too. Though it's difficult to estimate how much effort and work
would be required to achieve some usable result. It would probably be
too much for a GSoC project.
Honestly I couldn't tell right now how to get started best. I'm
concerned about starting something and getting stuck in the middle
without some usable result. It's probably hard to work on such a
change without someone volunteering or without some funding.

I think it takes a few things to do this.

1. declare this is our goal
2. start a discussion on a design and interfaces
3. break out some smaller incremental transformations that move in that direction
4. iterate on the design and interfaces as needed

Mini-projects that would help with this:

1. can we repackage our solvers to run outside of postgres?
Why? This would help define what is common between the environments and what is not. In turn this helps define the interface and common facilities that need to be created to support the code in two separate environments.

2. Have Jay get his project working both standalone and in pgRouting. The code he has already works standalone, but it will need to morph into code that will work in pgRouting. Again this is similar to 1.

It's difficult to keep an open source project active, and I would like
to know for example if Ashraf is still engaged in the opengraphrouter
project. Is he?

I talked to him recently and he still has interest, but I have dropped the ball on keeping him engaged because I have been insanely busy. As I related to you, I will probably not be a primary mentor this year.

-Steve


On Fri, Mar 4, 2011 at 11:37 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

I think it takes a few things to do this.

  1. declare this is our goal
  2. start a discussion on a design and interfaces
  3. break out some smaller incremental transformations that move in that direction
  4. iterate on the design and interfaces as needed

Looks like a nice idea. I am completely unaware of opengraphrouter and its background. Can we take this discussion to another new thread?

@Stephen and Daniel, can you list out the people who will be willing to contribute / give the necessary time to realise the above idea?

Are we planning to divide it into mini-projects and float them out for GSoC 2011? This would surely encourage students to contribute and come up with the required modules, provided the mentors are available for guidance.

Mini-projects that would help with this:

  1. can we repackage our solvers to run outside of postgres?
    Why? This would help define what is common between the environments and what is not. In turn this helps define the interface and common facilities that need to be created to support the code in two separate environments.

  2. Have Jay get his project working both standalone and in pgRouting. The code he has already works standalone, but it will need to morph into code that will work in pgRouting. Again this is similar to 1.

Are you talking about the APSP code, or the CH code of Geisberger? APSP would work with pgRouting readily I guess. It is working as of now, but still needs some validation checks etc.

It’s difficult to keep an open source project active, and I would like
to know for example if Ashraf is still engaged in the opengraphrouter
project. Is he?

I talked to him recently and he still has interest, but I have dropped the ball on keeping him engaged because I have been insanely busy. As I related to you, I will probably not be a primary mentor this year.

-Steve


Regards,
-Jay Mahadeokar

2011/3/4 Jay Mahadeokar <jai.mahadeokar@gmail.com>

On Fri, Mar 4, 2011 at 11:37 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:

I think it takes a few things to do this.

  1. declare this is our goal
  2. start a discussion on a design and interfaces
  3. break out some smaller incremental transformations that move in that direction
  4. iterate on the design and interfaces as needed

Looks like a nice idea. I am completely unaware of opengraphrouter and its background. Can we take this discussion to another new thread?

Now? Later? … well, next time if we get into more details.

@Stephen and Daniel, can you list out the people who will be willing to contribute / give desirable time to realise the above idea?

I’m afraid that Georepublic (Anton and me) can’t invest much time in the coming weeks due to other obligations.
This doesn’t mean that there shouldn’t be anything done then. Maybe someone is reading this and wants to join? Then it’s time to say “hello” now ;-)

Are planning to divide it into mini projects and float it out for GSoc 2011? This would surely encourage students to contribute and come up with required modules provided the mentors are available for guidance.

Splitting into small tasks is always a good idea. Though I assume that GSoC 2011 won’t have many more scholarships than in the years before. And because OSGeo tries to distribute their assigned slots equally among their projects, there might not be more than one or two projects assigned to pgRouting. Also, Anton and I couldn’t mentor more, I think.

Mini-projects that would help with this:

  1. can we repackage our solvers to run outside of postgres?
    Why? This would help define what is common between the environments and what is not. In turn this helps define the interface and common facilities that need to be created to support the code in two separate environments.

  2. Have Jay get his project working both standalone and in pgRouting. The code he has already works standalone, but it will need to morph into code that will work in pgRouting. Again this is similar to 1.

Are you talking about the APSP code, or the CH code of Geisberger? APSP would work with pgRouting readily I guess. It is working as of now, but still needs some validation checks etc.

Jay, sorry that we have no time these days to try it out.
As you say it’s actually ready. What about …

  • write a documentation page with a quick example and add it here: http://www.pgrouting.org/docs/1.x/index.html
    (We could make a separate section such as “In development” and also add DARP there together with APSP)
  • make an announcement on the mailing list
  • wait for comments and feedback

I added you to the committers for the website repository (https://github.com/pgRouting/website) and when you add content to the repository, the website should be automatically updated every hour or so.

Daniel


Georepublic UG & Georepublic Japan
eMail: daniel.kastl@georepublic.de
Web: http://georepublic.de