this bug's URL: http://intevation.de/rt/webrt?serial_num=2327
-------------------------------------------------------------------------
Subject: r.cost: too much hard disk access with big regions
Platform: GNU/Linux/i386
grass obtained from: Mirror of Trento site
grass binary for platform: Compiled from Sources
GRASS Version: 5.3 cvs feb 2004
Hi,
When using r.cost on a 3130 x 4400 cell region, r.cost is very slow. It seems to be spending all its time reading and writing to disk -- processor use is usually low (under 50%) while it waits. Four temporary files are created for this example region: two of 122 MB each [in_file, out_file], and two others at the end which are both pretty small. Memory use for this example is ~126 MB. I've got a ~70% MASK in place (CELL map); I don't know how much that is helping me here.
It would be great if it could load the temp files into memory instead (perhaps via an option flag) to speed up processing for those with lots of RAM (here >512 MB) on their systems.
I don't think support for a 5000x5000 map size is too much to ask for.
I don't know enough memory voodoo to implement this properly myself.
thanks,
Hamish
-------------------------------------------- Managed by Request Tracker
Request Tracker wrote:
It would be great if it could load the temp files into memory instead
(perhaps by an option flag) to speed up processing for those with lots
of RAM (here >512mb) on their systems.
r.cost uses the segment library; changing that would probably involve
substantially rewriting r.cost. It would probably also put a ceiling
on the size of maps which it could handle (unless you provide both
segment-based and memory-based implementations of the algorithms).
However, increasing the segments_in_memory variable may help; maybe
this should be controlled by a command-line option.
--
Glynn Clements <glynn.clements@virgin.net>