Well, probably not. Sorry, that is what happens when I'm too tired and try to be cute. <sigh>
Yes, I think approach 2 is the better model, because the application developer should not be required to recompile anything.
PL/pgSQL is very straightforward, and I think we can all help with the learning curve, or even code some of it if we have time. If you assume a set-returning function, then you have inputs and a record structure for your results. The function takes the inputs and returns a set of records. So for example:
CREATE OR REPLACE FUNCTION pgr_get_time_dep_costs(schedule_tab text, start_date_time timestamp, max_time_duration_sec int, ...)
RETURNS SETOF pgr_time_dep_costs AS
$BODY$
DECLARE
    rec pgr_time_dep_costs;  -- note: "out" is a poor variable name, it shadows a keyword
BEGIN
    -- fill rec for each schedule row and RETURN NEXT rec, then RETURN
END;
$BODY$
LANGUAGE plpgsql;
This is pretty basic, but it should help get you started.
For this project we need a concrete implementation that allows someone to create a table of time-based restrictions of some kind on edges, and reasonably easily say "generate a route starting at X using these time-dependent restrictions." I do not think it needs to deal with every single possible case, but it should handle a reasonable set of cases. We can always expand on this after GSoC. I think it would be good if, in the documentation, we add some notes about things we thought of but did not support, and a short paragraph of our thoughts on how to support them in the future.
You need to size this to the time you have available so you can be successful. I will let Daniel be the final arbiter on this, as I will be out of town again later this week and my time here seems to be on overload lately.
The key idea here is layers of tasks, and structure. We need to bridge the gap between low-level generic code and higher-level application-specific presentation, of which there may be many depending on the application in discussion.
Best regards,
-Steve
On 7/18/2011 11:03 PM, Jay Mahadeokar wrote:
On Mon, Jul 18, 2011 at 11:21 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:
Hi Jay,
I think there are a few important points to this problem:
1. The time-based adjustments need to be stored in real human terms, i.e. similar to a crontab time specification, or similar to the Travel Time Analysis you found.

2. The query to pass the appropriate time-dependent info can easily be normalized to the route start time by subtracting the route start time from the feature start and end times, except when it gets complicated, like rolling over from a weekday schedule to a weekend schedule on a Friday-to-Saturday transition. Likewise for holiday and event schedules.
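To make the normalization in point 2 concrete, here is a minimal Python sketch (illustrative only; the function name, the weekly day-of-week/time-of-day schedule representation, and the roll-forward handling are my assumptions, not project code):

```python
from datetime import datetime, timedelta

def window_offsets(route_start, dow, start_tod, end_tod):
    """Offsets (seconds) of a weekly restriction window from the route start.

    dow: 0=Monday .. 6=Sunday; start_tod/end_tod: timedelta since midnight.
    If the window already ended before departure, roll it to next week's
    occurrence (the "schedule rollover" case mentioned above).
    """
    # midnight of the Monday of the route start's week
    week_anchor = datetime(route_start.year, route_start.month,
                           route_start.day) - timedelta(days=route_start.weekday())
    begin = week_anchor + timedelta(days=dow) + start_tod
    end = week_anchor + timedelta(days=dow) + end_tod
    if end <= route_start:  # already past: use next week's instance
        begin += timedelta(days=7)
        end += timedelta(days=7)
    return ((begin - route_start).total_seconds(),
            (end - route_start).total_seconds())
```

For a route starting Monday 8AM, a Monday 9AM-10AM window normalizes to (3600.0, 7200.0) seconds from departure. Holiday and event schedules would need extra calendar logic on top of this.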
This seems to be the hard part if there is a complicated schedule. So what we need is to define what the inputs and outputs of this query are, something like:
INPUTS:
Start date:
Start time:
Max duration time:
Start location of route:
Ending location of route:
OUTPUTS:
Segment id:
time offset from:
time offset to:
avg speed:
transit time:
restriction:
Hi Steve,
Well, this was our initial plan, but we had not kept the query like this since we wanted to keep the query general. I think you or Daniel had pointed out that we should not restrict our query to time units like date, hours etc., since in some applications the time interval could be milliseconds, and in others it could be years. So I think if we leave retrieval from the time_dep_costs table as an SQL query parameter, that would be more flexible, as we had concluded earlier.
3. I assume, because your code only looks at an interval from the start, that it is not limited to a restricted time window like 12 hr, 24 hr, etc.
more below ...
On 7/18/2011 7:49 AM, Jay Mahadeokar wrote:
Hi all,
Hi, I am working on creating wrapper functions for tdsp, corresponding to the wrappers with and without bounding box as in dijkstra_shortest_paths.

As discussed many times, the TDSP algorithm expects the start_time as 0 and all times offset from start time. For the data I had created, I had taken time from 0 to 24, corresponding to 24 hours, and assigned different travel_time for different time periods as per my intuition.
If start time is, say, 23:30, our query for retrieving entries from time_dep_costs should properly do the calculation such that travel times are taken cyclically.
What do you mean by cyclically? I think the times should be linear and repeating if they are cyclical. Is this what you mean?
Yes.
For example, if a route starts Monday at 8AM, a segment has a 9AM-10AM restriction, and the route max duration is 48hrs, then we should see two records:

ID, From, To, MPH, ...
27, 0900, 1000, 25, ...
27, 3300, 3400, 25, ...
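The expansion of one cyclic window into linear repeats could be sketched like this in Python (a hypothetical helper, not project code; it assumes minute units, a daily cycle, and a window that does not cross midnight):

```python
def expand_daily_restriction(start_min, win_from, win_to, horizon_min, day=1440):
    """Linear offsets (minutes from departure) of a daily window within a horizon.

    start_min: departure time as minutes after midnight.
    win_from/win_to: window bounds as minutes after midnight (win_from < win_to).
    The window repeats every `day` minutes up to horizon_min.
    """
    offsets = []
    # first occurrence relative to departure (may be later today or tomorrow)
    first = (win_from - start_min) % day
    length = win_to - win_from
    t = first
    while t < horizon_min:
        offsets.append((t, t + length))
        t += day
    return offsets

# departure Mon 8AM (480), window 9-10AM (540-600), 48 hr horizon (2880)
expand_daily_restriction(480, 540, 600, 2880)  # → [(60, 120), (1500, 1560)]
```

Those two tuples correspond to the two records in the example above: the restriction hits 1 hour and 25 hours after departure.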
Instead of keeping this as two records, we can add a parameter called is_cyclic in the query. If that is true, then once we run out of the range, we can go back to zero. That would need many modifications in the current code, but I guess it is manageable. Would that be ok?
I'm not even sure how I would compute this; I think this puts a lot of burden on the user to figure this out and get it correct. The obvious flip side of this would be for us to define a rigid format for entering schedule information, where we target support for something like 80+% of the common schedule definitions and then figure this out once for everyone. I'm not sure if you feel this will fit in your schedule or not?
We can certainly provide functionality for some common formats. I will try to do that within the GSoC timeline if it works out, or we can work on that later too. What I'm not clear about is: how do we package this in the overall pgRouting project?
Thoughts?
-Steve
Now, this query should depend on the format of the time-dependent data and its units. I think it is best that we keep this task with the user, so that he writes a proper SQL query such that the values are retrieved as required by the function. Do you see any specific cases or data formats where we can provide the wrapper functions for the above task?
On Sun, Jun 12, 2011 at 8:26 PM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:
On 6/12/2011 12:33 AM, Jay Mahadeokar wrote:
On Sun, Jun 12, 2011 at 1:07 AM, Stephen Woodbridge <woodbri@swoodbridge.com> wrote:
On 6/11/2011 12:58 PM, Jay Mahadeokar wrote:
On Thu, Jun 9, 2011 at 9:16 AM, Jay Mahadeokar <jai.mahadeokar@gmail.com> wrote:
Hi,
Just a recap of our planned tdsp query model:
I agree. So, can we now assume we have the following database tables, as per things discussed till now:
1. Ways: (on the lines of the one in pgrouting-workshop)
   * gid
   * source
   * target
   * name
   * the_geom
   * static_length (in case time-dependent routing is not used)
   * any other attributes
2. TimeVaryingCosts (any other suitable name?)
   * gid
   * start_time_and_date (may divide this into two columns: day_of_week and time_of_day?)
   * travel_time / arrival_time
3. TurnRestrictions (time-dependent)
   * node/source
   * time_from
   * time_to
   * node_to
   * ancestors
If only a static shortest path query is made, then only table 1 is required. If a time-dependent shortest path query is made, tables 1 and 2 will be used. If a time-dependent shortest path query with turn restrictions is made, all 3 tables will be used.
So, for a general time-dependent query, we will fetch edge information from the ways table and the time-dependent cost information from the TimeDepCosts table. (See previous msg above.)
The query on the ways table can be supplied as usual, as in a normal dijkstra shortest_path() query. How do we form the query on the TimeDepCosts table? I see two options:
1. We generate the query according to the gids obtained from the query on the ways table. We will have to query all the time slots available, or form a query so that the time window is appropriate and we have sufficient travel_time info to reach the destination. Here we can form a query like:

    Select * from TimeDepCosts where gid in (list of gids queried from ways table);

We can allow the user more filtering parameters according to start_time, date?
But this will make our query specific to the database design. As already discussed, we can't predict the time units / requirements of applications, so it would be better if we keep the query independent of the database design. That's the reason our weight_map assumes the start time from source as 0, and all travel times offset from that start time. A function that will generate such a weight map is yet to be written.
2. We allow the user to pass the query for the TimeDepCosts table too. So, we will have our pgsql function as:

    CREATE OR REPLACE FUNCTION shortest_path(
            ways_sql text,
            time_dep_cost_sql text,
            source_id integer,
            target_id integer,
            directed boolean,
            has_reverse_cost boolean)
        RETURNS SETOF path_result
This way, the user can have more control over the amount of data being queried. We will only consider the time-dependent windows that fit the query. An example of the time_dep_cost_sql can be:

    Select * from TimeDepCosts where "USER_SUPPLIED_CONSTRAINTS";
The user can now supply his constraints according to the time window he needs, start times, holidays etc., according to the database table design.
The problem with this is that we cannot filter out edges according to bounding box etc., since the the_geom field is not in this table. This is because that information would be redundant.

Thoughts? Any idea that combines the merits of both approaches?
Hi all,
I am writing some background code that will be needed for the above functionality. Any input on the above-mentioned point? I suppose I should not start coding for any specific approach until we agree upon a viable query strategy.
Hi Jay,
I always find it easiest to answer these questions by thinking about the problem in terms of use cases, so:
A web app, where someone enters a start and destination and some constraints like departure time or arrival time, and maybe restrictions on modes of travel.
Are there other ones we are trying to support? These should probably be documented as part of the project description.
I agree, we need to document the use-cases. A separate thread for this discussion?
Anyway, assuming the above, then the application, which presumably has some basic knowledge of the time constraints, could do something like:
Get the straight-line distance between start and destination, double it, and apply some average transportation speed to approximate the time window. The numbers can be changed to whatever is acceptable to the application, as long as it overestimates the duration of travel. It can then specify the CONSTRAINTS in terms of a start and end time. We do something similar with our spatial bounding box used to collect the edges needed to build the graph.
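That estimate could be sketched as follows (an illustrative Python helper; the haversine distance, the default speed, and the doubling factor are my assumptions about how an application might do it):

```python
from math import radians, sin, cos, asin, sqrt

def estimate_time_window_sec(lat1, lon1, lat2, lon2,
                             avg_speed_kmph=50.0, slack_factor=2.0):
    """Overestimate travel duration: doubled straight-line distance / avg speed."""
    # haversine great-circle distance in km
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    dist_km = 2 * 6371.0 * asin(sqrt(a))
    # doubled distance at the assumed average speed, in seconds
    return slack_factor * dist_km / avg_speed_kmph * 3600.0
```

The application would then ask for time-dependent cost rows only between the departure time and departure plus this window, analogous to the spatial bounding box.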
Now, as far as collecting the constraints and making them accessible for the Boost dijkstra, that is another problem. This problem boils down to the following:
Just a correction: in this case we will not be using boost_dijkstra, but instead the core_tdsp() function which we have built ourselves.
1. Is there an acceptable abstraction that we can use internally to represent various data sets that we might want to load? Or can we define a schema that will work for some set of constraints we wish to implement initially?
Well, we had agreed upon the following schema for testing purposes:
Yes, you are correct; sorry, I was not paying attention when I wrote that.
1. Ways: (on the lines of the one in pgrouting-workshop)
   * gid
   * source
   * target
   * name
   * the_geom
   * static_length (in case time-dependent routing is not used)
   * any other attributes
2. TimeDepCosts
   * gid
   * day_of_week
   * time_of_day
   * travel_time / arrival_time
The time_dep_costs table schema can vary according to the application and its needs, and the nature of the data available.
2. Do we want to move this data into some static memory structures for the analysis, like we do for the graph, and then dispose of them after the analysis?
We are doing exactly that! The Weight_Map data structure that is supplied to core_tdsp() is designed specifically to fulfil the above functionality. It assumes the start time as 0, and all other times offset from start time. So, there will be schema-specific wrapper functions that will help generate the weight_map data structure.
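The shape of such a weight map could be sketched like this (an illustrative Python model only; the real Weight_Map supplied to core_tdsp() is an internal structure, and these function names, units, and the linear-scan lookup are my assumptions):

```python
def build_weight_map(rows):
    """rows: (edge_id, offset_from, offset_to, travel_time) tuples, with
    offsets relative to the route start (start time = 0, as core_tdsp assumes).
    Returns {edge_id: [(offset_from, offset_to, travel_time), ...]} sorted."""
    wmap = {}
    for edge_id, t_from, t_to, cost in rows:
        wmap.setdefault(edge_id, []).append((t_from, t_to, cost))
    for windows in wmap.values():
        windows.sort()
    return wmap

def edge_cost(wmap, edge_id, t, default):
    """Travel time of edge_id when entered at offset t; default if no window
    covers t (e.g. the segment has no time-dependent data)."""
    for t_from, t_to, cost in wmap.get(edge_id, ()):
        if t_from <= t < t_to:
            return cost
    return default
```

A schema-specific wrapper would fill `rows` from the user's time_dep_cost_sql after normalizing times to offsets from the route start.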
Which brings me back to my original question (a couple of messages back): what is the best way to generate the weight_map, i.e. retrieve data from the time-dependent cost table?
Well, I think for now I am going to go with approach #1, i.e. the user will supply his query. This might retrieve unwanted data, since we can't
Yes, I think this is the way to go. It is up to the user, who presumably knows his data, to formulate an efficient query to retrieve the needed data.
filter by the_geom in table #2. But I think we can refine this by adding constraints like: where gid IN (list_of_edgeids_retrieved_from_ways). A query something like:

    select a.*
    from table2 a, table1 b
    where b.the_geom && <BBOX_OF_REGION>
      and b.gid = a.gid
      and <TIME_CONSTRAINTS>;
This would allow the planner to pick the best index.
So, I guess this approach combines the merits of both approaches discussed before. We can provide flexibility to the application as well as constrain the number of rows retrieved.
Thoughts?
Yes, I think this is a good plan.
I think that the bulk of the roads in the network will probably not have time-dependent data associated with them. If you look at what Navteq and the Traffic people (whoever they are) are doing, you will notice that they mostly only track major highways in/out of major metropolitan areas. They do not track that information on minor roads or in rural areas. So I think that the number of segments likely to have time-dependent data for any given request is going to be around 5% of the total segments. Even if someone were to collect, say, GPS cellphone average speeds over a much wider range of segments, I still think it is manageable. Anyway, we just have to solve the problem; they have to represent that data in a way we can handle quickly, or this will not be a solution for them.
-Steve
3. Or do we want to construct a method that can dynamically request constraint information from the database when it is needed by a visitor? (2 is probably much more efficient and the better way to go from a complexity and stability point of view.)
Don't wait on me for additional input, as I will be gone through June 21st. Keep it simple and modular and we can change or add to it in the future, and I'm sure whatever you come up with will be fine. After all, it is code and it can be changed if we need to. :-)

Project seems to be moving along fine.
-Steve
_______________________________________________
pgrouting-dev mailing list
pgrouting-dev@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/pgrouting-dev
--
Regards,
-Jay Mahadeokar