Hi,
thank you for your response.
On Friday 10 February 2006 22:23, Daniel Calvelo wrote:
> Guessing: those are FCELL or DCELL, right? It doesn't happen for
> integer rasters, I reckon.
Yes, you are correct: fcell and dcell.
And the vector output.
> It smells like some math library or libc precision difference.
Indeed, and I think there is no way to avoid this.
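
As a small illustration (not part of the test suite, just to show the
effect): even a single libm call carries enough precision that the last
digits can vary between libc/compiler combinations, and any such
difference shows up in the checksum of the binary map files.

  # print sin(0.5) with full double precision; the last one or two digits
  # of this value may differ between libm builds and optimization settings
  awk 'BEGIN { printf "%.17g\n", sin(0.5) }'
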
> Try to output to ascii and use diff to see if that's the case. If it
> is, then the test suite should fall back to comparing some form of
> truncated versions for floating-point data, if this kind of
> cross-platform testing is sought (which I think it is).
That will hopefully be an acceptable solution. For now the generated output is exported with
r.out.ascii and r3.out.ascii, and the floating-point output is truncated (dp=5). The r.out.ascii
output of an AMD and a P4 system with different libc and compiler versions already differs if I
use the option dp=7!
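
To make that concrete, this is roughly what I do (the map name test_sin
is only a placeholder for a map generated by the test suite):

  # export the floating-point raster to ASCII with 7 and with 5 decimal places
  r.out.ascii input=test_sin dp=7 output=test_sin_dp7.txt
  r.out.ascii input=test_sin dp=5 output=test_sin_dp5.txt

  # after copying the exports from the other machine, diff shows that only
  # the dp=7 version differs, and only in the last decimal places
  diff test_sin_dp7.txt other_machine_dp7.txt
  diff test_sin_dp5.txt other_machine_dp5.txt
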
At first I was thinking about a module which compares two maps (raster/raster3d/vector) within an
error range, for example eps=0.000001. But then I realized that truncation combined with MD5
checksums will do the same, and is more practical, because the checksum can be stored.
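
The check in the test suite would then look roughly like this (the map
name and the reference checksum are only placeholders):

  # export the map with truncated precision and compute the MD5 sum of the ASCII output
  current=`r.out.ascii input=test_sin dp=5 output=- | md5sum | cut -d' ' -f1`

  # compare against the checksum stored with the test case
  reference="0123456789abcdef0123456789abcdef"   # placeholder value
  if [ "$current" = "$reference" ] ; then
      echo "output validation ok"
  else
      echo "output validation failed"
  fi
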
To provide output validation for vector maps, I have to patch v.out.ascii, because the output of
v.buffer in the default Spearfish location differs too. The coordinate
calculation is the "problem", so I have to truncate this floating-point output as well:
Normal output:
590529.01701912|4914624.99952804|1
590843.01701912|4918543.99952804|1
590943.01701912|4914228.99952804|1
Truncated output:
590529.01|4914624.99|1
590843.01|4918543.99|1
590943.01|4914228.99|1
I hope 1 cm is enough precision for UTM locations?
So I will add a dp option to v.out.ascii.
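
With that option the vector validation could work exactly like the raster
case (only a sketch; the dp option does not exist in v.out.ascii yet, and
test_buffer is a placeholder name):

  # export the v.buffer result with coordinates truncated to two decimal
  # places and compute the checksum of the ASCII representation
  v.out.ascii input=test_buffer dp=2 | md5sum
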
But there are new problems:
1.) The test suite's output validation depends on three GRASS modules: r.out.ascii, r3.out.ascii and v.out.ascii.
2.) Validation of the exact (floating-point) calculations is not provided.
3.) To validate the output, you have to export it first.
But I hope this will provide cross-platform MD5 checksum validation,
and I think this is the only solution that is doable for me.
Any suggestions or comments are welcome.
Best regards
Soeren
Daniel.
On 2/10/06, Sören Gebbert <soerengebbert@gmx.de> wrote:
> Dear list,
> while developing the new GRASS test suite, I noticed some unexpected behavior.
>
> I try to validate the generated output of GRASS modules with MD5 checksum tests.
> I check the binary files in the GRASS location (cell, fcell, topo, coor and other files).
> I was expecting the output of GRASS (raster, raster3d and vector) to be identical on x86 machines,
> but after some testing I noticed that the output differs.
>
> If I create raster maps with r.mapcalc or r3.mapcalc and use sin() and cos(), the MD5
> checksums differ between a Linux AMD gcc-3.4.4 and a Linux P4 gcc-4.0.3 system.
> The same happens with v.buffer. But the command "r.mapcalc test=10" produces identical output.
>
> How can this happen? Do the compiler, the libraries (math?) or other software influence the
> data generation, or is this a feature of the gislib?
> Maybe the optimization options of gcc (-On, SSE, MMX, 3DNow! and so on) produce this behavior?
> I don't know, but I need help. If there is no way to avoid this behavior,
> the MD5 checksum test (an important feature) will not work.
> A work-around may be to check the output exported with the *.out.ascii modules?
>
> What to do? Any help or suggestions?
>
> Best regards
> Soeren
>
> _______________________________________________
> grass5 mailing list
> grass5@grass.itc.it
> http://grass.itc.it/mailman/listinfo/grass5
>
--
-- Daniel Calvelo Aros