While importing a big SHAPE file, it appeared that most of the
operation's memory was shifted to swap space (Linux). I had to reboot
the machine (no remote login, not even CapsLock worked).
Question: Can we add some test to avoid more than xx percent of the
memory being used? There won't be any hot-plugging of extra RAM, so
v.in.ogr should exit with a memory allocation error. I just wonder why
the kernel doesn't take care of this.
> While importing a big SHAPE file, it appeared that most of the
> operation's memory was shifted to swap space (Linux). I had to reboot
> the machine (no remote login, not even CapsLock worked).
Usually you can get away with remote login if it is X that locked up;
with a frozen kernel you have fewer options. Ctrl-Alt-F1 sometimes?
> Question: Can we add some test to avoid more than xx percent of the
> memory being used? There won't be any hot-plugging of extra RAM, so
> v.in.ogr should exit with a memory allocation error.
Attempt a G_malloc()/G_free() of the estimated size before processing.
Back in the mailing list archives somewhere I figured out the current
bytes needed per vector point (with valgrind) and suggested this.

I don't know how to query available memory in a cross-platform way.
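
A minimal sketch of that pre-flight test, for illustration only:
BYTES_PER_POINT is a placeholder rather than the measured value from
the archives, and preflight_memory_check() is a hypothetical helper.
If I read lib/gis correctly, G_malloc() already bails out via
G_fatal_error() when the allocation is refused.

    #include <grass/gis.h>

    #define BYTES_PER_POINT 100  /* hypothetical; measure with valgrind */

    void preflight_memory_check(long n_points)
    {
        size_t need = (size_t)n_points * BYTES_PER_POINT;

        /* G_malloc() calls G_fatal_error() itself when the allocation
         * fails, so reaching the next line means the request was
         * granted -- on paper at least; see the caveat later in this
         * thread. */
        void *probe = G_malloc(need);

        G_free(probe);
    }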
On Thu, 18 May 2006 18:29:18 +0200 Markus Neteler <neteler@itc.it>
wrote:
> Hi,
>
> While importing a big SHAPE file, it appeared that most of the
> operation's memory was shifted to swap space (Linux). I had to reboot
> the machine (no remote login, not even CapsLock worked).
Second that.

Had that with a large OCI dataset all the time and was wondering why
this works on one machine (1GB RAM) and not on another one (256MB
RAM)...
> Question: Can we add some test to avoid more than xx percent of the
> memory being used? There won't be any hot-plugging of extra RAM, so
> v.in.ogr should exit with a memory allocation error. I just wonder why
> the kernel doesn't take care of this.
I would also like to see such a feature. I have run that through
valgrind; perhaps someone more knowledgeable could have a look at it?
I have provided it here:
> > While importing a big SHAPE file, it appeared that most of the
> > operation's memory was shifted to swap space (Linux). I had to
> > reboot the machine (no remote login, not even CapsLock worked).
>
> Usually you can get away with remote login if it is X that locked up;
> with a frozen kernel you have fewer options. Ctrl-Alt-F1 sometimes?
The ssh daemon was no longer responding (I waited only some minutes,
though), and the keyboard was also dead (no CapsLock light, no
Ctrl-Alt-F1, nothing).
> > Question: Can we add some test to avoid more than xx percent of
> > the memory being used? There won't be any hot-plugging of extra
> > RAM, so v.in.ogr should exit with a memory allocation error.
>
> Attempt a G_malloc()/G_free() of the estimated size before
> processing. Back in the mailing list archives somewhere I figured out
> the current bytes needed per vector point (with valgrind) and
> suggested this.
>
> I don't know how to query available memory in a cross-platform way.
Hamish wrote:
> > Question: Can we add some test to avoid more than xx percent of
> > the memory being used? There won't be any hot-plugging of extra
> > RAM, so v.in.ogr should exit with a memory allocation error.
>
> Attempt a G_malloc()/G_free() of the estimated size before processing.
That won't necessarily help.
The underlying brk/sbrk (or mmap(MAP_ANONYMOUS)) call will succeed so
long as there is sufficient virtual memory and you don't exceed any
usage limits which are in force.
That doesn't mean that reading/writing the allocated memory won't
cause the system to go into a swapping frenzy.
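
A toy demonstration of that distinction, assuming Linux with default
overcommit settings (the 2 GB figure is arbitrary): the malloc() itself
usually succeeds, and it is only the memset() that forces the kernel
to back the pages.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t n = (size_t)2 * 1024 * 1024 * 1024;  /* arbitrary large request */
        char *p = malloc(n);

        if (p == NULL) {
            fprintf(stderr, "refused up front -- some limit was in force\n");
            return 1;
        }
        puts("malloc() succeeded; nothing is resident yet");

        memset(p, 1, n);  /* pages must now be backed: thrashing starts here */

        free(p);
        return 0;
    }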
> Back in the mailing list archives somewhere I figured out the current
> bytes needed per vector point (with valgrind) and suggested this.
>
> I don't know how to query available memory in a cross-platform way.
The "free" command will give you some global memory statistics.
However, that information is practically meaningless to an
application, as there is no way to figure out how much of that memory
you can reasonably expect to use.
On a lightly-loaded system, the application can expect to be able to
use all of the memory which is currently being used by the buffer
cache (which is usually most of the total memory). On a heavily-loaded
system, it may only be able to use a small fraction of it.
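
For what it's worth, the closest thing to a programmatic "free" I know
of is sketched below; _SC_PHYS_PAGES and _SC_AVPHYS_PAGES are glibc
extensions rather than standard POSIX, and per the above the numbers
still don't tell you how much you may safely claim.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page  = sysconf(_SC_PAGESIZE);
        long total = sysconf(_SC_PHYS_PAGES);    /* all physical pages */
        long avail = sysconf(_SC_AVPHYS_PAGES);  /* "free" pages; excludes
                                                    the buffer cache */

        printf("total: %lld MB, nominally free: %lld MB\n",
               (long long)total * page / (1024 * 1024),
               (long long)avail * page / (1024 * 1024));
        return 0;
    }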
On Fri, 19 May 2006 16:20:47 +0100 Glynn Clements
<glynn@gclements.plus.com> wrote:
> Hamish wrote:
> > > Question: Can we add some test to avoid more than xx percent of
> > > the memory being used? There won't be any hot-plugging of extra
> > > RAM, so v.in.ogr should exit with a memory allocation error.
> >
> > Attempt a G_malloc()/G_free() of the estimated size before
> > processing.
> That won't necessarily help.
>
> The underlying brk/sbrk (or mmap(MAP_ANONYMOUS)) call will succeed so
> long as there is sufficient virtual memory and you don't exceed any
> usage limits which are in force.
>
> That doesn't mean that reading/writing the allocated memory won't
> cause the system to go into a swapping frenzy.
>
> > Back in the mailing list archives somewhere I figured out the
> > current bytes needed per vector point (with valgrind) and suggested
> > this.
> >
> > I don't know how to query available memory in a cross-platform way.
>
> The "free" command will give you some global memory statistics.
> However, that information is practically meaningless to an
> application, as there is no way to figure out how much of that memory
> you can reasonably expect to use.
>
> On a lightly-loaded system, the application can expect to be able to
> use all of the memory which is currently being used by the buffer
> cache (which is usually most of the total memory). On a
> heavily-loaded system, it may only be able to use a small fraction of
> it.
To keep this problem in mind I have added a bug in the bugtracker[1]
so that we do not forget it. If anybody feels responsible for fixing
this annoying problem, please go ahead.

It seems that building topology needs a lot of memory, though.
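
Until someone does, one stopgap in the spirit of the "usage limits"
mentioned above might be to cap the process address space early in the
module, so that allocations fail cleanly instead of dragging the box
into swap. A sketch only; cap_address_space() is a hypothetical helper
and the cap value is the caller's guess.

    #include <stdio.h>
    #include <sys/resource.h>

    /* Lower both limits; a hard limit can be reduced without
     * privileges, only raising it requires them. */
    int cap_address_space(rlim_t max_bytes)
    {
        struct rlimit rl;

        rl.rlim_cur = max_bytes;
        rl.rlim_max = max_bytes;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return -1;
        }
        return 0;
    }

Called early in main() with, say, 512 MB, any malloc()/G_malloc()
beyond the cap then fails immediately rather than swapping.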
Is it possible to read multiple tables from PostGIS (PostgreSQL) into
GRASS with v.in.ogr? I seem to be able to read in only the table that
contains the geometry data. I tried creating a PostgreSQL view that
contains all the columns (from multiple tables) that I need; however,
v.in.ogr does not seem to see the view.