Moritz Lennert wrote:
> So, where should we go from here? Is it still wiser to implement
> readn/writen as you suggest above?
This is where it gets awkward.
GRASS uses XDR/RPC for two purposes: libgis uses it for reading and
writing FP raster maps, and DBMI uses it for communication between the
client and driver.
Changing the XDR library to use read/write will probably kill
performance for raster I/O, as you lose the buffering.
DBMI disables buffering on the streams, so the lack of buffering won't
matter for DBMI.
OTOH, the raster I/O only uses XDR on files, so the problems with
fread/fwrite on pipes don't apply there.
My preferred approach would be to change lib/db/dbmi_base to simply
not use XDR (that isn't anywhere near as much work as it might sound).
As the driver and client always run on the same system, it doesn't
matter if the protocol is platform-dependent (I have no idea what
Radim was thinking when he decided to use XDR for the DBMI
communication).
The dbmi_base library uses the following functions from XDR:
xdr_char
xdr_double
xdr_float
xdr_int
xdr_short
xdr_string
xdrstdio_create
The first five all simply read/write the specified value in a fixed
(non-platform-dependent) format: convert integers to big-endian byte
order, and convert FP values to IEEE format. For DBMI, we can just use
the host's native format, so the first five all amount to calling
read/write on the value.
xdr_string is slightly more complex: read/write the length (including
the terminating NUL) as an unsigned int, followed by the bytes.
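As a rough sketch of what an xdr_string replacement could look like
(the db__send_string/db__recv_string names and the fd-based
db__send/db__recv helpers are illustrative, not existing DBMI code):

```c
#include <string.h>
#include <unistd.h>

/* Illustrative stand-ins for the fd-based helpers sketched later in
 * this message; in real code these would wrap writen()/readn(). */
static int send_fd, recv_fd;

static int db__send(const void *buf, size_t size)
{
    return write(send_fd, buf, size) == (ssize_t) size;
}

static int db__recv(void *buf, size_t size)
{
    return read(recv_fd, buf, size) == (ssize_t) size;
}

/* Wire format as described above: an unsigned int length (including
 * the terminating NUL), followed by the bytes, in host byte order. */
int db__send_string(const char *s)
{
    unsigned int len = strlen(s) + 1;   /* include the NUL */

    return db__send(&len, sizeof(len)) && db__send(s, len);
}

int db__recv_string(char *buf, size_t bufsize)
{
    unsigned int len;

    if (!db__recv(&len, sizeof(len)) || len > bufsize)
        return 0;
    return db__recv(buf, len);
}
```

Because both ends run on the same host, no byte-swapping or IEEE
conversion is needed anywhere in this path.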
xdrstdio_create just sets everything up so that the read/write
operations go through fread/fwrite (as opposed to e.g. xdrmem_create
which sets up for reading/writing from memory).
IOW, the actual implementation of an XDR replacement is trivial. So
trivial that you would just inline most of it into the
db__{send,recv}_* functions.
It's changing the dbmi_base library to use it which will be most of
the work. You would probably want separate put/get functions rather
than the XDR mechanism of setting the "direction" when creating the
XDR object and having a single function for both read and write.
E.g. db__send_int() would change from:
int
db__send_int(int n)
{
    XDR xdrs;
    int stat;

    stat = DB_OK;

    xdr_begin_send(&xdrs);
    if (!xdr_int(&xdrs, &n))
        stat = DB_PROTOCOL_ERR;
    xdr_end_send(&xdrs);

    if (stat == DB_PROTOCOL_ERR)
        db_protocol_error();

    return stat;
}
to something like:
int
db__send_int(int n)
{
    int stat = DB_OK;

    if (!db__send(&n, sizeof(n)))
        stat = DB_PROTOCOL_ERR;

    if (stat == DB_PROTOCOL_ERR)
        db_protocol_error();

    return stat;
}
with db__send() defined as:
int
db__send(void *buf, size_t size)
{
    return writen(_send_fd, buf, size) == size;
}
[Actually, you would probably inline writen() here, as this is the
only place it would be used.]
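For reference, the simple (non-"real") writen() being discussed could
be sketched like this: loop until all bytes are written, retrying on
short writes and EINTR. This is an assumption about the intended shape,
not existing GRASS code:

```c
#include <errno.h>
#include <unistd.h>

/* Write exactly "size" bytes to fd, handling partial writes and
 * interruption by signals. Returns size on success, -1 on error. */
static ssize_t writen(int fd, const void *buf, size_t size)
{
    const char *p = buf;
    size_t left = size;

    while (left > 0) {
        ssize_t n = write(fd, p, left);

        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted: retry */
            return -1;          /* real error */
        }
        p += n;
        left -= n;
    }
    return size;
}
```

A readn() would be the mirror image, with the extra case that read()
returning 0 means EOF and should abort the loop.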
> If yes, where and how do I see if non-blocking I/O is enabled or not?
There is no need; the descriptors for the DBMI pipes will never be
non-blocking. I was just commenting that a "real" readn/writen
implementation (as is found on some Unices) is a bit more complex, but
we don't need that for our purposes.
> (on MSDN it says: "In multithreaded
> programs, no locking is performed. The file descriptors returned are
> newly opened and should not be referenced by any thread until after the
> _pipe call is complete." - Is this what you mean?)
No, that's something else, and isn't relevant here.
--
Glynn Clements <glynn@gclements.plus.com>