[GRASS-dev] Code quality proposal

Dear developers,
I would like to make a proposal about the code quality of GRASS.

GRASS is a huge project with more than 850,000 lines of code (QA report).
And in my opinion some modules have poor code quality, because
most researchers who contributed code are not programmers.
(like me, I'm not a programmer)

So I think we have to create a process to ensure good code quality before
integrating new code into GRASS. Otherwise we will not be able to ensure the
stability of GRASS in the future. (in my opinion)

I think this process should include the following steps:

1.) The code must fulfil the coding standards written in the SUBMITTING
files (can this be checked automatically? Otherwise the devs have to check
the code by hand ... I volunteer)
2.) The code must pass the code quality assistant system without Monster and
Clone warnings
3.) The developer must provide tests for the GRASS test suite, so his modules
can be automatically tested by others

To ensure this, the QA system should provide a way to upload single
or multiple C files for quality checking. The generated report will then be sent
to the developer's email address.

The test suite provides documentation on how to use it and how to write tests,
and I'm happy to answer any questions related to the test suite.

I'm looking forward to hearing your suggestions. :-)

Thank you and
best regards
Soeren

Soeren Gebbert wrote:

> 2.) The code must pass the code quality assistant system without
> Monster and Clone warnings

For Monsters we should split analysis of library and module code, as
library code side skews the statistical def'n of a monster towards
something a few lines long.

A module will often do a task in a long sequential way; breaking that
up into smaller functions "just because" doesn't always improve the
quality or readability of the code IMO.

Automated QA systems (or gcc warnings) are very useful for spotting
potential problems in your code. I do support new code being tested with
it, I do not support a somewhat arbitrary quantification of a
qualitative problem having the final say over the inclusion of code or
not. In the end I think it will always take experienced human eyes to be
the final judge. CVS write access is given to a range of programming
talent; I know I rely on the "real" programmers to double check my work
sometimes. Committers should check the CVS-commit mailing list every now
and then & review one another's work:
  http://grass.itc.it/pipermail/grass-commit/
and of course no one should commit code for someone else that they
haven't closely reviewed themselves.

(but QA & testing suites are still wonderfully useful tools)

2c,
Hamish

Hamish wrote:

> 2.) The code must pass the code quality assistant system without
> Monster and Clone warnings

> For Monsters we should split analysis of library and module code, as
> library code side skews the statistical def'n of a monster towards
> something a few lines long.
>
> A module will often do a task in a long sequential way; breaking that
> up into smaller functions "just because" doesn't always improve the
> quality or readability of the code IMO.
>
> Automated QA systems (or gcc warnings) are very useful for spotting
> potential problems in your code. I do support new code being tested with
> it, I do not support a somewhat arbitrary quantification of a
> qualitative problem having the final say over the inclusion of code or
> not. In the end I think it will always take experienced human eyes to be
> the final judge. CVS write access is given to a range of programming
> talent; I know I rely on the "real" programmers to double check my work
> sometimes. Committers should check the CVS-commit mailing list every now
> and then & review one another's work:
>   http://grass.itc.it/pipermail/grass-commit/
> and of course no one should commit code for someone else that they
> haven't closely reviewed themselves.
>
> (but QA & testing suites are still wonderfully useful tools)

I don't have time to review everything that goes into GRASS (and I
don't subscribe to the commit list for this reason). An automated tool
might be useful if it can highlight specific commits as worthwhile
subjects for investigation.

--
Glynn Clements <glynn@gclements.plus.com>

On Wed, Aug 23, 2006 at 10:17:59AM +0100, Glynn Clements wrote:
...

> I don't have time to review everything that goes into GRASS (and I
> don't subscribe to the commit list for this reason). An automated tool
> might be useful if it can highlight specific commits as worthwhile
> subjects for investigation.

New submissions should have at least the GRASS header in
main.c stating author, purpose and GPL.
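One common shape for such a header is a block comment at the top of main.c. The module name, author, and year below are placeholders, not from a real submission; the exact wording of the real convention is in the SUBMITTING file.

```c
/****************************************************************
 *
 * MODULE:       r.example   (hypothetical module name)
 *
 * AUTHOR(S):    Jane Doe <jane@example.org>   (placeholder)
 *
 * PURPOSE:      One-line statement of what the module does.
 *
 * COPYRIGHT:    (C) 2006 by the GRASS Development Team
 *
 *               This program is free software under the GNU
 *               General Public License (>=v2). Read the file
 *               COPYING that comes with GRASS for details.
 *
 ****************************************************************/
```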

Markus

Dear Hamish,

Hamish wrote:

> Soeren Gebbert wrote:
>
> > 2.) The code must pass the code quality assistant system without
> > Monster and Clone warnings
>
> For Monsters we should split analysis of library and module code, as
> library code side skews the statistical def'n of a monster towards
> something a few lines long.

I agree.

> A module will often do a task in a long sequential way; breaking that
> up into smaller functions "just because" doesn't always improve the
> quality or readability of the code IMO.

Yes, not always, but in my opinion very often. If a long sequence is
split into shorter, logical and meaningful functions, the code will be much more readable and understandable.
Of course this requires long programming practice and a good understanding of the code and of how to organize it.

But I think we should base the coding of GRASS on such principles.
Of course there will be cases where a long code sequence is needed,
but this should be the exception, not the rule.
This effort will pay off in the long term:
if the code of the modules is more readable and easier to understand,
bugfixes and improvements are much easier to implement,
and new developers will have more good examples of how to code in GRASS.

And we can start to enforce these principles by checking new
module contributions and encouraging the developers to follow those
principles before the code is submitted to CVS.
This process could start when new modules for GRASS
are discussed on the list, and the devs (hopefully including me) will have a look
at the code. In addition, the QA system will help to give an orientation about the code quality.

I think that can be handled, because there are not that many contributions of new modules. :-)

The bigger problem is bringing the existing code up to such standards.

> Automated QA systems (or gcc warnings) are very useful for spotting
> potential problems in your code. I do support new code being tested with
> it, I do not support a somewhat arbitrary quantification of a
> qualitative problem having the final say over the inclusion of code or

I think it's not that arbitrary.

> not. In the end I think it will always take experienced human eyes to be
> the final judge. CVS write access is given to a range of programming
> talent; I know I rely on the "real" programmers to double check my work
> sometimes. Committers should check the CVS-commit mailing list every now
> and then & review one another's work:
>   http://grass.itc.it/pipermail/grass-commit/
> and of course no one should commit code for someone else that they
> haven't closely reviewed themselves.
>
> (but QA & testing suites are still wonderfully useful tools)

Indeed.

Best regards
Soeren
