Should string variables allocate memory in powers of 2, e.g. char[128] or
char[1024]? Lots of places in GRASS you see char[100] or char[1000] for
buffers. Is there a big advantage to fitting exactly in 1024 bytes for
something like a G_getl() string input buffer?
I remember Glynn recommending that the RGBA_Color struct fit into this
sort of power-of-2 space, but I didn't pick up why this was important.
I.e., for a heavily re-used buffer, is it a good idea (faster) to use
1024 instead of 1000? Is it only important if you are passing it to
another function? Is it the speed of malloc() and free()?
thanks,
Hamish
(sorry, only had one fairly useless Comp.Sci. Pascal course in Univ)
> Should string variables allocate memory in powers of 2, e.g. char[128] or
> char[1024]? Lots of places in GRASS you see char[100] or char[1000] for
> buffers. Is there a big advantage to fitting exactly in 1024 bytes for
> something like a G_getl() string input buffer?
In my dark and distant past when I was a developer, we always peered
deeply into the implementation of malloc to try to figure out the best
and fastest way to allocate memory. One particularly popular
implementation of malloc at the time (this is the 1980s) was an
allocator that maintained buckets of memory that were powers of 2 in
length, making it very quick (and sometimes inefficient) to allocate and
deallocate memory. malloc() itself tacked on some overhead, so
allocations of exactly 1024 bytes were actually terrible: the request
would overflow into the 2048-byte bucket, leaving 1020 bytes of wasted
space, which in turn would hurt swapping performance on 4 MB machines
if you did that 1,000 times. But if you allocated the correct amount (a
power of two minus the overhead size), you were OK.
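
For example, the sizing trick looked roughly like this (BLOCK_OVERHEAD
here is a made-up stand-in for the allocator's per-block header, which
was implementation specific):

    #include <stdlib.h>

    /* Stand-in for the allocator's per-block header size; the real
     * value was implementation specific. */
    #define BLOCK_OVERHEAD 8

    int main(void)
    {
        /* malloc(1024) needs 1024 + BLOCK_OVERHEAD bytes, spilling
         * into the 2048-byte bucket and wasting roughly 1 KB.
         * Requesting 1024 - BLOCK_OVERHEAD keeps the whole block
         * inside the 1024-byte bucket. */
        char *buf = malloc(1024 - BLOCK_OVERHEAD);

        if (buf == NULL)
            return 1;
        /* ... use buf ... */
        free(buf);
        return 0;
    }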
In today's world, where 512 MB is normal for laptops, where 1+ GHz
processors are standard (and 2+ GHz dual and quad cores are around the
corner for desktops), and where GNU malloc is now relatively efficient
for all sizes of allocation, the best thing is to allocate what you
actually need and let the computer figure out how to give it to you.
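
In code that just means sizing the request from the data, not from a
round number, e.g.:

    #include <stdlib.h>
    #include <string.h>

    /* Allocate exactly what the data requires and let the allocator
     * worry about how to satisfy it. */
    char *copy_string(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);  /* +1 for the trailing NUL */

        if (copy != NULL)
            strcpy(copy, s);
        return copy;
    }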
> Should string variables allocate memory in powers of 2, e.g. char[128] or
> char[1024]? Lots of places in GRASS you see char[100] or char[1000] for
> buffers. Is there a big advantage to fitting exactly in 1024 bytes for
> something like a G_getl() string input buffer?
If there's any "ideal" size for malloc()'d buffers, it's probably a
number slightly smaller than a power of 2, to allow for malloc()
overhead (GNU malloc puts a header at the beginning of each block;
IIRC, BSD malloc keeps separate descriptor blocks).
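
Note that none of this applies to a fixed stack buffer like the one in
your question; a minimal sketch, assuming the usual
G_getl(char *buf, int n, FILE *fd) signature from grass/gis.h:

    #include <stdio.h>
    #include <grass/gis.h>  /* assumed location of the G_getl() declaration */

    /* The buffer lives on the stack, so malloc() bucket sizes never
     * come into play; char buf[1000] would cost exactly the same. */
    void read_lines(FILE *fp)
    {
        char buf[1024];

        while (G_getl(buf, sizeof(buf), fp)) {
            /* ... process one line in buf ... */
        }
    }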
> I remember Glynn recommending that the RGBA_Color struct fit into this
> sort of power-of-2 space, but I didn't pick up why this was important.
The issues are different for very small structures, which will
typically get padded up to a multiple of their widest member's
alignment (e.g. a 4-byte structure takes 4 bytes, but a 5-byte
structure containing an int takes 8 bytes), even when they're on the
stack.
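
You can see the rounding with sizeof; the structs below are
illustrative (not the actual RGBA_Color), and the exact padding is
compiler and platform dependent:

    #include <stdio.h>

    struct rgba   { unsigned char r, g, b, a; };     /* 4 bytes of data */
    struct tagged { int value; unsigned char tag; }; /* 5 bytes of data */

    int main(void)
    {
        /* Four 1-byte members, 1-byte alignment: no padding needed. */
        printf("sizeof(struct rgba)   = %zu\n", sizeof(struct rgba));   /* 4 */

        /* The int forces 4-byte alignment, so the 5 data bytes get
         * padded up to 8 on most platforms. */
        printf("sizeof(struct tagged) = %zu\n", sizeof(struct tagged)); /* usually 8 */

        return 0;
    }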
> I.e., for a heavily re-used buffer, is it a good idea (faster) to use
> 1024 instead of 1000? Is it only important if you are passing it to
> another function? Is it the speed of malloc() and free()?
In general, I don't think that it really matters unless you are
allocating large numbers of such buffers.