#2105: Missing guidelines for testing
----------------------------+-----------------------------------------------
Reporter: wenzeslaus | Owner: grass-dev@…
Type: task | Status: new
Priority: normal | Milestone: 7.0.0
Component: Tests | Version: svn-trunk
Keywords: testing, tests | Platform: Unspecified
Cpu: Unspecified |
----------------------------+-----------------------------------------------
Comment(by huhabla):
Replying to [comment:3 wenzeslaus]:
> Replying to [comment:1 huhabla]:
> > Replying to [ticket:2105 wenzeslaus]:
> > >
> > > There is a page about the test framework we wish to have:
> > > * http://grasswiki.osgeo.org/wiki/Test_Suite
> > > But since it does not exist, the page just creates confusion.
> >
> > This page describes in great detail how the test suite can be
> > implemented for modules and libraries. Python doctests and how to
> > implement automated testing in the make system are not covered. It
> > includes a guideline on how to implement tests for modules based on
> > shell scripts. Hence it is unclear to me how this leads to confusion.
> >
> > However, I am all for a completely new test suite design. I would
> > suggest implementing all module and Python library tests in Python,
> > so that these tests can be executed independently of the OS and the
> > command line interpreter used, with or without the make system.
> >
>
> It seems to me that it is not worth building our own test system and
> bash-like language interpreter (as proposed on the wiki page). I vote
> for writing module tests directly in Python. In my experience, writing
> module tests in bash is easier at the beginning, but as soon as you
> want to do something more than calling a module and checking its
> return code, a bash-like language is not sufficient. Moreover, we want
> cross-platform tests, and I don't think we want to implement a parser
> and interpreter for bash-like syntax (variables, if-statements, for
> loops) in Python, although it sounds interesting. When writing
> directly in Python, the tests depend on GRASS or Python
> process-calling capabilities, but they would depend on those anyway if
> the interpreter were built. The only advantage of writing in bash-like
> syntax is that it can run as a bash/sh script, but a Python script is
> great too, isn't it?
It is. Writing module tests directly in Python will give us much more
flexibility. But we have to implement a GRASS-specific Python test
framework to test and validate GRASS modules. This framework should
provide classes and functions to test different module configurations and
to check their output against reference data automatically. The framework
suggested in [1] is still valid in several aspects (location creation,
cleaning, data checks). I would suggest using the PyUnit framework for
module testing. The PyGRASS Module interface should be used to configure
module runs. Configured modules should be run by specific test functions
that check the module output and stdout/stderr against reference data
located in the module's test directory. It should also be possible to
configure the test suite so that all modules are executed under a tool
such as valgrind for memory leak checks.
Here is a PyUnit example of an r.info test:
{{{
#!/usr/bin/python

import unittest
import grass.pygrass.modules as pymod
import grass.test_suite as test_suite


class r_info_test(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        """! Set up the test mapset and create test data for all tests
        """
        # Create the temporary test mapset and set up the test environment
        test_suite.init()
        # Create test data for each test
        m = pymod.Module("r.mapcalc", expression="test = 1", run_=False)
        # We simply run this module; if the module run fails,
        # the whole test will be aborted
        test_suite.run_module(module=m)

    def test_flag_g(self):
        """! Test to validate the output of r.info using flag "g"
        """
        # Configure an r.info test
        m = pymod.Module("r.info", map="test", flags="g", run_=False)
        # Run the test using the test suite functionality and check
        # the output on stdout against reference data that is located
        # in the r.info test directory.
        # This function will raise an exception if the module run fails
        # or if the output does not match the reference data.
        test_suite.test_module(module=m, check="stdout",
                               reference="r_info_g.ref")

    def test_flag_e(self):
        """! Test to validate the output of r.info using flag "e"
        """
        # Configure an r.info test
        m = pymod.Module("r.info", map="test", flags="e", run_=False)
        # Run the test using the test suite functionality and check
        # the output on stdout against reference data that is located
        # in the r.info test directory.
        # This function will raise an exception if the module run fails
        # or if the output does not match the reference data.
        test_suite.test_module(module=m, check="stdout",
                               reference="r_info_e.ref")

    @classmethod
    def tearDownClass(cls):
        """! Remove the temporary mapset """
        test_suite.clean_up()


if __name__ == '__main__':
    unittest.main()
}}}
[1] http://grasswiki.osgeo.org/wiki/Test_Suite
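To make the proposal a bit more concrete, here is a rough, purely
illustrative sketch of what the hypothetical grass.test_suite helpers used
above could look like. Nothing of this exists yet: the helper names and
the "check"/"reference" parameters are assumptions taken from the example,
and the stdout handling assumes the documented PyGRASS Module behaviour
(stdout_ set to a pipe, captured output available as outputs.stdout).
{{{
# Illustrative sketch only - the grass.test_suite module proposed above
# does not exist yet. Helper names and parameters are assumptions.
import subprocess


def init():
    """Create a temporary test mapset and switch the session to it.

    Placeholder: a real implementation would create a uniquely named
    mapset in the test location and enter it.
    """
    pass


def clean_up():
    """Remove the temporary test mapset (placeholder)."""
    pass


def run_module(module):
    """Run a configured PyGRASS Module and fail hard if it does not succeed."""
    module.run()
    if module.popen.returncode != 0:
        raise RuntimeError("Module run failed: %s" % module.get_bash())


def test_module(module, check="stdout", reference=None):
    """Run a module and compare its captured stdout with a reference file."""
    # Assumes PyGRASS captures stdout into module.outputs.stdout
    # when stdout_ is set to a pipe before run() is called.
    module.stdout_ = subprocess.PIPE
    run_module(module)
    if check == "stdout" and reference:
        with open(reference) as ref_file:
            if module.outputs.stdout.strip() != ref_file.read().strip():
                raise AssertionError("Output of %s differs from reference %s"
                                     % (module.name, reference))
}}}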
>
> For testing the library code it is clear that it must be done in
> Python, as you say. I suppose that it would be tested using ctypes, am
> I right? The problem I see is that I would also like to test `static`
> functions, and I guess this is not possible.
I see no problem in using doctests [1] and PyUnit tests [2] for Python
library and C-library testing. There are already C-library tests for the
gmath, gpde and raster3d libraries [3, 4, 5]. These library tests compile
dedicated GRASS modules that can be executed in a GRASS environment, for
example from a PyUnit test. All C-library functions can be tested in this
way.
PyUnit tests for GRASS library functions can also be implemented directly
if the C-libraries are accessible through the ctypes bindings. See [6]
for a PyUnit test example that calls C++ library functions from the
vtk-grass-bridge project.
Doctests are already widely used in the temporal framework, the temporal
algebra and PyGRASS; see [7] for an example.
[1] http://docs.python.org/2/library/doctest.html
[2] http://docs.python.org/2/library/unittest.html
[3] http://trac.osgeo.org/grass/browser/grass/trunk/lib/gmath/test
[4] http://trac.osgeo.org/grass/browser/grass/trunk/lib/gpde/test
[5] http://trac.osgeo.org/grass/browser/grass/trunk/lib/raster3d/test
[6] https://code.google.com/p/vtk-grass-bridge/source/browse/trunk/Raster/Testing/Python/GRASSRasterMapReaderWriterTest.py
[7] http://trac.osgeo.org/grass/browser/grass/trunk/lib/python/temporal/space_time_datasets.py
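For illustration, a minimal doctest for a Python library function could
look like this (the function below is an invented example, not existing
GRASS code):
{{{
# Invented example function - only meant to show the doctest style.
def full_map_name(name, mapset):
    """Return the fully qualified map name "name@mapset".

    >>> full_map_name("elevation", "PERMANENT")
    'elevation@PERMANENT'
    >>> full_map_name("test", "user1")
    'test@user1'
    """
    return "%s@%s" % (name, mapset)


if __name__ == "__main__":
    import doctest
    doctest.testmod()
}}}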
The make system must be extended to allow automated test runs, see
http://grasswiki.osgeo.org/wiki/Test_Suite#Make_system
This example should run all library tests:
{{{
cd grass_trunk/lib
make test
}}}
In the case of C-libraries, the library tests are PyUnit tests that call
ctypes functions or dedicated library test modules like
"test.raster3d.lib". In the case of Python libraries like the temporal
framework or PyGRASS, doctests should be implemented.
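A hedged sketch of such a C-library test written as a PyUnit test on top
of the ctypes bindings could look like the following. It assumes a
running GRASS session, that grass.lib.gis (the generated libgis wrapper)
exposes G_gisinit() and G_legal_filename(), and that the usual libgis
return conventions apply:
{{{
# Sketch only: PyUnit test of a libgis function through the ctypes bindings.
# Assumes a running GRASS session so that grass.lib.gis can be imported.
import unittest

from grass.lib.gis import G_gisinit, G_legal_filename


class LibGisLegalFilenameTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Initialize the GIS library for this test process
        G_gisinit("libgis_test")

    def test_legal_filename(self):
        # A plain map name should be accepted (return value 1)
        self.assertEqual(G_legal_filename("test_map"), 1)

    def test_illegal_filename(self):
        # A name containing a slash should be rejected (negative return value)
        self.assertTrue(G_legal_filename("a/b") < 0)


if __name__ == "__main__":
    unittest.main()
}}}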
This example should run only the raster module tests:
{{{
cd grass_trunk/raster
make test
}}}
The make system should use a dedicated test location in which all globally
needed test-specific data is located in the PERMANENT mapset (a minimized
NC location?). It should run all tests within a GRASS session using this
location. Each test should create its own temporary mapset in the test
location by calling the "test_suite.init()" function in the class setup.
This hopefully allows the parallel execution of tests, so that
"make -j4 test" would test 4 modules in parallel. Each test will create
its own HTML output.
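As an illustration of the temporary-mapset idea, test_suite.init() could
do something along these lines (a sketch, assuming a running GRASS session
and using "g.mapset -c" to create and switch to a uniquely named mapset):
{{{
# Sketch: create a uniquely named temporary mapset in the current test
# location so that several tests can run in parallel without clashing.
import uuid

import grass.script as gscript


def create_temporary_mapset():
    """Create a unique mapset in the current location and switch to it."""
    mapset = "test_%s" % uuid.uuid4().hex[:8]
    # g.mapset -c creates the mapset if it does not exist and makes it
    # the current mapset of the running GRASS session
    gscript.run_command("g.mapset", flags="c", mapset=mapset)
    return mapset
}}}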
The result of the tests should be a compiled HTML document that shows
every test run in detail. In addition, the valgrind output about memory
leaks should be available for each module and library test. Such HTML
documents can be hosted on a test server that performs automatic test
runs.
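One possible way to collect that valgrind output, sketched here with plain
subprocess calls (the wrapper function and log file name are made up for
illustration):
{{{
# Sketch: run a GRASS module under valgrind and keep the memcheck log.
# Assumes valgrind is installed and the command runs inside a GRASS session.
import subprocess


def run_under_valgrind(module_cmd, logfile):
    """Run a module command list under valgrind memcheck."""
    cmd = ["valgrind", "--tool=memcheck", "--leak-check=full",
           "--log-file=%s" % logfile] + module_cmd
    return subprocess.call(cmd)


if __name__ == "__main__":
    # Example: check r.info for memory leaks
    run_under_valgrind(["r.info", "map=test", "-g"], "r_info_valgrind.log")
}}}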
The make system will remove all remaining temporary mapsets from the test
location after all tests are finished.
What a nice GSoC 2014 project.
--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2105#comment:4>
GRASS GIS <http://grass.osgeo.org>