Hi All,

GRASS GIS on a 64-bit OS is good news, but is the progress only in the
64-bit GRASS version? What are the differences (advantages) of shifting
to a 64-bit OS? I do not have ready access to any 64-bit system. Is the
latest 32-bit GRASS (if there is one) not equivalent to the 64-bit GRASS?

Cheers,
Ravi Kumar
Glynn Clements replied:
The only differences between running GRASS on a 32- or 64-bit OS are
those provided by the OS, e.g. the ability to use more than 4GiB of
memory for a single process, or the ability to read files larger than
2GiB using ANSI stdio functions.
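For a concrete feel for where those limits come from, here is a minimal C
sketch (not GRASS code; exact sizes depend on the compiler and ABI) that
prints the widths of the types involved. On a typical 32-bit build all
three are 4 bytes, while on 64-bit Linux they are all 8 (64-bit Windows
keeps long at 4 bytes).

/* sizes.c -- print the type widths behind the 2GiB/4GiB limits.
 * Build with e.g. "gcc -o sizes sizes.c"; on a 32-bit glibc system,
 * adding -D_FILE_OFFSET_BITS=64 makes off_t grow to 8 bytes.
 */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    printf("sizeof(long)   = %u\n", (unsigned) sizeof(long));   /* fseek/ftell offsets */
    printf("sizeof(off_t)  = %u\n", (unsigned) sizeof(off_t));  /* lseek/fseeko offsets */
    printf("sizeof(void *) = %u\n", (unsigned) sizeof(void *)); /* address space */
    return 0;
}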
On the AMD64, 64-bit code uses the extra internal registers, and some
math operations run faster, though this has only been benchmarked under
x64, not Linux.

I have never had a GRASS run take more than about 2 GB of DRAM. Isn't
there a hard limit on the memory used by GRASS? There is an option during
the compilation process for large files, so I assume the memory
allocation isn't completely dynamic.
Glynn Clements replied:
> I have never had a GRASS run take more than about 2 GB of DRAM. Isn't
> there a hard limit on the memory used by GRASS?
Only that imposed by the OS.
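If you want to see what your OS actually allows, something like the
getrlimit() call below reports the per-process address-space limit (a
minimal sketch, not GRASS code; RLIMIT_AS semantics vary slightly between
systems).

/* addrlimit.c -- query the OS-imposed limit on a process's address space. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("address space: unlimited (bounded only by the 32/64-bit address space)\n");
    else
        printf("address space limit: %llu bytes\n",
               (unsigned long long) rl.rlim_cur);

    return 0;
}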
Most modules try to avoid using excessive amounts of memory. Wherever
possible, modules process data row-by-row, only keeping as much in
memory as is strictly necessary. Modules which need to perform
non-linear I/O normally have mechanisms to avoid having to read the
entire map into memory (e.g. tile/row cache, multiple passes).
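The pattern looks roughly like the sketch below (plain C, not the actual
GRASS raster API; the cell type, the "map.raw" file and process_row() are
invented for the example). The point is that memory use is proportional
to one row, not to rows x cols.

/* rowproc.c -- row-by-row processing: only one row of cells is held in
 * memory at a time.  Plain-C sketch of the pattern, not GRASS code.
 */
#include <stdio.h>
#include <stdlib.h>

typedef int cell_t;                     /* hypothetical cell type */

static void process_row(cell_t *row, size_t cols)
{
    (void) row;                         /* per-row work would go here */
    (void) cols;
}

int main(void)
{
    const size_t rows = 10000, cols = 10000;    /* example map size */
    cell_t *buf = malloc(cols * sizeof *buf);   /* ONE row: ~40 KB here */
    FILE *fp = fopen("map.raw", "rb");          /* hypothetical raw map */
    size_t r;

    if (!buf || !fp) {
        perror("setup");
        return 1;
    }

    for (r = 0; r < rows; r++) {
        if (fread(buf, sizeof *buf, cols, fp) != cols) {
            fprintf(stderr, "short read at row %lu\n", (unsigned long) r);
            break;
        }
        process_row(buf, cols);         /* earlier rows are never kept */
    }

    fclose(fp);
    free(buf);
    return 0;
}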
r.proj used to read the entire area of interest into memory, but the
version in 6.3-CVS uses a tile cache (it estimates the amount of
memory required, but this can be overridden with the memory= option).
> There is an option during the compilation process for large files, so I
> assume the memory allocation isn't completely dynamic.
LFS (large file support) is a consequence of the historical Unix API
using "long" for file offsets, which limits you to 2GiB on a 32-bit
system.
Although recent standards define a type "off_t" which can be larger
than a long, legacy code may store offsets in a "long". To prevent
such code from corrupting data, files whose size cannot fit into a
"long" will only be opened if the caller specifically allows it.
The --enable-largefile configure option causes specific libraries and
modules to indicate that large files may be opened.
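On glibc, "specifically allowing it" usually just means building with
-D_FILE_OFFSET_BITS=64, which is roughly what an --enable-largefile style
option arranges for the sources it covers; the sketch below spells out the
explicit form with O_LARGEFILE so the opt-in is visible ("big.dat" is a
placeholder name).

/* lfs_open.c -- how a caller "specifically allows" a large file on a
 * 32-bit glibc system.  A sketch; the usual approach is simply to
 * compile with -D_FILE_OFFSET_BITS=64 rather than pass O_LARGEFILE by hand.
 */
#define _GNU_SOURCE             /* exposes O_LARGEFILE on glibc */
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "big.dat";

    /* Without the opt-in, opening a >2GiB file on a 32-bit build is
     * refused (glibc reports EOVERFLOW), so code that stores offsets in
     * a plain "long" cannot silently corrupt the file. */
    int fd = open(path, O_RDONLY);
    if (fd < 0 && errno == EOVERFLOW) {
        fprintf(stderr, "plain open() refused: %s\n", strerror(errno));
        fd = open(path, O_RDONLY | O_LARGEFILE);   /* explicit opt-in */
    }

    if (fd < 0) {
        perror("open");
        return 1;
    }

    printf("opened %s\n", path);
    close(fd);
    return 0;
}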
The ANSI stdio functions (fseek, ftell) use "long" for file offsets,
so they cannot handle files >2GiB on a 32-bit system.
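Code that needs to seek within large files therefore uses the off_t-based
fseeko()/ftello() instead, together with the LFS compile flags. A minimal
sketch ("big.dat" again being a placeholder):

/* lfs_seek.c -- the stdio side of the limit: fseek()/ftell() take and
 * return "long", so on a 32-bit build they stop at 2GiB.  fseeko() and
 * ftello() use off_t, which -D_FILE_OFFSET_BITS=64 widens to 64 bits.
 */
#define _FILE_OFFSET_BITS 64    /* make off_t 64-bit even on 32-bit systems */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *fp = fopen("big.dat", "rb");
    off_t size;

    if (!fp) {
        perror("fopen");
        return 1;
    }

    /* Seek to the end and report the size; with fseek()/ftell() the
     * result would not fit in a 32-bit long once the file passes 2GiB. */
    if (fseeko(fp, (off_t) 0, SEEK_END) != 0) {
        perror("fseeko");
        fclose(fp);
        return 1;
    }
    size = ftello(fp);
    printf("file size: %lld bytes\n", (long long) size);

    fclose(fp);
    return 0;
}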