Trevor Wiens wrote:
> > Are there any mechanisms in place such that core modules like this can be
> > tested on a regular basis for correct results?
> I don't know what would be ideal, but one possibility would be to
> start building a testing map set for the Spearfish data set and some
> simple scripts to go with it.
>
> In the case of r.neighbors, for example, I would think a simple
> script would be ideal. Something like:
>
>     begin loop through neighbors operations
>        r.neighbors over limited area
>        r.what to extract values from specific location
I'd suggest using r.out.ascii rather than r.what; a rough sketch along
those lines follows the quoted loop below.
>        results match
>           report OK
>        results don't match
>           report problem
>     end loop
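Something along these lines, perhaps (untested, just a sketch; the
elevation.dem map is from the spearfish sample data, the reference/
files and test map names are placeholders which would have to be
generated once and checked by hand, the option values are arbitrary,
and --overwrite assumes the GRASS 6 parser):

  #!/bin/sh
  # Run inside a GRASS session set to the spearfish location.
  # Loop over a few r.neighbors methods, dump each result with
  # r.out.ascii and compare it against a stored, hand-checked
  # reference file.

  g.region rast=elevation.dem

  for method in average median maximum ; do
      r.neighbors input=elevation.dem output=test.neighbors \
          method=$method size=3 --overwrite
      r.out.ascii input=test.neighbors output=test.neighbors.txt

      if cmp -s test.neighbors.txt reference/neighbors.$method.txt
      then
          echo "r.neighbors method=$method: OK"
      else
          echo "r.neighbors method=$method: PROBLEM"
      fi
  done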
> I would think that if this were slowly built up for each core module,
> then future developers could simply use this as a testing mechanism,
> which should speed development.
>
> Also, prior to release, a master script could run each of these in
> turn against the RC versions to make sure that the core
> functionality was still working.
>
> I realize this is labour intensive, but I have no suggestions as to a
> quicker but still reliable method.
You could reduce the effort a bit by adopting a common protocol for
test scripts. E.g. a sequence of commands, including some commands
which dump information to text files (e.g. r.out.ascii, v.out.ascii,
r.info > textfile, etc.), where the text files have standardised names
(e.g. test-output-<num>.txt).
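For r.neighbors, such a script might look roughly like this (untested;
the spearfish map name, the test map names, the script name and the
option values are only examples, and --overwrite assumes the GRASS 6
parser):

  #!/bin/sh
  # test-r.neighbors.sh (name is just an example) -- run a couple of
  # "sensible" r.neighbors invocations and dump the results into
  # text files with standardised names.

  g.region rast=elevation.dem

  r.neighbors input=elevation.dem output=test.rn.1 \
      method=average size=3 --overwrite
  r.out.ascii input=test.rn.1 output=test-output-1.txt
  r.info map=test.rn.1 > test-output-2.txt

  r.neighbors input=elevation.dem output=test.rn.2 \
      method=median size=5 --overwrite
  r.out.ascii input=test.rn.2 output=test-output-3.txt
  r.info map=test.rn.2 > test-output-4.txt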
Testing changes to a module would involve running the script on the
previous version of the module, moving the output files to a
directory, running the script on the new version, moving the output
files to a different directory, running "diff -r" on the directories,
and reporting any differences.
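With the hypothetical test-r.neighbors.sh above, that amounts to
something like:

  # with the previous version of the module installed
  sh test-r.neighbors.sh
  mkdir old
  mv test-output-*.txt old/

  # install the new version of the module, then
  sh test-r.neighbors.sh
  mkdir new
  mv test-output-*.txt new/

  diff -r old new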
There's still a fair amount of work involved in constructing useful
commands (i.e. selecting combinations of options, selecting input maps
which are appropriate for the command). This part probably can't
realistically be automated, as it requires some knowledge of
"sensible" option values and combinations.
Unfortunately, getting people to work for free on tedious jobs like
this is a lot harder than getting them to work for free on more
interesting tasks.
--
Glynn Clements <glynn@gclements.plus.com>