> Subject: r.cost: too much hard disk access with big regions
> When using r.cost on a 3130 x 4400 cell region, r.cost is very, very
> slow. This seems to be because it spends all its time reading and
> writing to the disk -- processor use is usually pretty low (sub-50%)
> while it waits. Four temporary files are created for this example
> region: two of 122 MB each [in_file, out_file] and two others at the
> end which are both pretty small. Memory use for this example is
> ~126 MB. I've got a ~70% MASK in place; I don't know how much that is
> helping me here. (CELL map)
>
> It would be great if it could load the temp files into memory instead
> (perhaps via an option flag) to speed up processing for those with
> lots of RAM (here >512 MB) on their systems.

r.cost uses the segment library; changing that would probably involve
substantially re-writing r.cost. It would probably also put a ceiling
on the size of maps which it could handle (unless you provide both
segment-based and memory-based implementations of the algorithms).

However, increasing the segments_in_memory variable may help; maybe
this should be controlled by a command-line option.
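For context, a minimal sketch of how the segment library is typically set
up, assuming the GRASS 5-era API (segment_format()/segment_init()). The
64x64 segment size, the temp-file handling, and the helper name
open_cost_segment() are illustrative rather than r.cost's actual code,
but the third argument to segment_init() is the segments-in-memory count
being discussed here:

    #include <fcntl.h>
    #include <unistd.h>
    #include "gis.h"
    #include "segment.h"

    static SEGMENT in_seg;

    /* Sketch only: carve the region into disk-backed segments and
     * keep segs_in_mem of them cached in RAM.  Raising segs_in_mem
     * trades disk I/O for memory use. */
    void open_cost_segment(char *tmpname, int nrows, int ncols,
                           int segs_in_mem)
    {
        int fd = creat(tmpname, 0666);

        /* format the temp file as 64x64-cell segments of CELL data */
        segment_format(fd, nrows, ncols, 64, 64, sizeof(CELL));
        close(fd);

        fd = open(tmpname, 2);            /* reopen read/write */
        segment_init(&in_seg, fd, segs_in_mem);
    }

A command-line option (a hypothetical percent_memory= parameter, say)
could compute segs_in_mem from available RAM instead of a hard-coded
constant.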
Increasing segments_in_memory shaves a few seconds off, but doesn't have
any great effect.
I think a related (perception) problem may be that the G_percent() output
during the "Finding cost path" step isn't correct: it isn't linear, and
the calculation finishes well before reaching 100%.
src/raster/r.cost/cmd/main.c line 658:
G_percent (++n_processed, total_cells, 1);
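One plausible explanation (an assumption, not verified against the
source): with a ~70% MASK, null cells are never visited, so n_processed
never reaches total_cells and the readout stops short of 100%. A hedged
sketch of a fix, counting only the cells the algorithm will actually
process; count_live_cells() is an illustrative helper, not existing
r.cost code:

    #include "gis.h"

    /* Sketch only: count the non-null cells so the G_percent()
     * denominator matches the cells that will actually be visited. */
    long count_live_cells(int fd, int nrows, int ncols)
    {
        CELL *cell = G_allocate_cell_buf();
        long live = 0;
        int row, col;

        for (row = 0; row < nrows; row++) {
            G_get_c_raster_row(fd, cell, row);
            for (col = 0; col < ncols; col++)
                if (!G_is_c_null_value(&cell[col]))
                    live++;
        }
        G_free(cell);
        return live;
    }

    /* line 658 would then become:
     *     G_percent (++n_processed, live_cells, 1);
     */

This wouldn't fix the non-linearity (cells finalize at an uneven rate in
a Dijkstra-style search), but it should let the counter reach 100%.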
I should have mentioned I'm using a new serial-ATA hard drive. Although I
haven't spent any time tuning it, it's bloody fast.
Hamish