Write tests!

A test suite is a sister project: a project in its own right, sharing a special relationship with the core project.

Do you know of any programmer who thinks of a new feature, implements it, and immediately releases it? I don't know of any, with one exception: students, who sometimes manage to deliver code that doesn't even compile.

All programmers exercise their code while they implement it. Very young programmers, first-year students (first-month students, actually), spend hours typing the same set of values to check that their implementation of quicksort works properly. Slightly older programmers (second-year students, or students repeating a year) quickly learn to write input files and use shell redirections to save themselves the effort. But they usually throw these test cases away, and as the project grows, they suddenly observe failures in features that were working properly months ago. Older programmers keep these test cases. Experienced programmers comment, document, and write tests while they implement (see Designing a Test Suite, and Literate Programming). Some authors recommend that developers spend 25-50% of their time maintaining tests (see http://www.xprogramming.com/testfram.htm).
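
For instance, a saved input file and a couple of shell redirections are enough to turn such manual checking into a repeatable test. A minimal sketch, with hypothetical file and program names:

     #! /bin/sh
     # Run the program on a saved input and compare the result against
     # a saved expected output; any difference makes the script fail.
     ./quicksort <tests/random.in >random.out || exit 1
     cmp tests/random.exp random.out || exit 1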

Don't be bitten three times by the same dog! Write regression tests.

While most bugs probably do not need to be tracked by dedicated tests, they at least demonstrate that some high-level test is missing or incomplete. For instance, a bug was once found in Bison: some C comments were improperly output, like //* this. */. A dedicated test was written. This was overkill: it merely demonstrated that the high-level tests, exercising the full chain down to the generated executable, needed to include C comments. Not only was it overkill, it was also quite useless: this bug is extremely unlikely to reappear as is, while it is extremely likely that comments are also output incorrectly at other places. The test suite was therefore adjusted so that the sources are populated with comments in all sorts of different places.
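
A rough sketch of such a high-level test, assuming a hypothetical grammar file deliberately sprinkled with C comments:

     #! /bin/sh
     # Exercise the full chain: generate the parser, compile it, and
     # run it; a failure at any step fails the test.
     bison -o parser.c comments.y || exit 1
     cc -o parser parser.c || exit 1
     ./parser <tests/input.txt || exit 1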

When you have spent a significant amount of time tracking the failure of a feature down to some primitive problem, immediately write a dedicated test for the latter. And do not underestimate the importance of sanity checks within the application itself: it does not really matter whether the application diagnoses its own failure or whether the test suite does. What is essential is that the test suite exercise the application in such a way that the possible failure is tickled.
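
A minimal sketch of this idea, with hypothetical names, where an input crafted to tickle the primitive problem lets the application's own sanity checks report the failure through its exit status:

     #! /bin/sh
     # Feed the application an input crafted to tickle the primitive
     # problem; its internal sanity checks turn the failure into a
     # nonzero exit status, which is all the test suite observes.
     ./application <tests/tickle.in >/dev/null || exit 1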

You should always write the test before fixing the actual bug, to be sure that your test is correct. This usually means having two copies of the source tree at hand: one in which the test suite fails, demonstrating the bug, and another in which the same test suite succeeds, demonstrating the fix.
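
A sketch of this double check, with hypothetical directory and script names:

     # In the unfixed tree, the new test must fail...
     (cd unfixed && tests/new-bug-test) && echo "warning: the test is too weak"
     # ...while in the fixed tree, it must succeed.
     (cd fixed && tests/new-bug-test) || echo "warning: the fix is incomplete"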

If you track several bugs down to the same origin, write a test especially for that origin.

Of course, in both cases, more primitive tests should be run beforehand.

Test generation, or rather test extraction(1), is a valuable approach because it saves effort and guarantees some form of up-to-dateness. It amounts to fetching test cases from the documentation (as is done in GNU M4 for instance), from comments in the code, or from the code itself (as is done by Autoconf).
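
A crude sketch of such extraction, assuming the test cases live in @example blocks of a Texinfo manual (the actual mechanisms used by GNU M4 and Autoconf differ):

     #! /bin/sh
     # Collect everything between @example and @end example, strip the
     # Texinfo markers, and save the result as raw test input.
     sed -n '/^@example$/,/^@end example$/p' doc/manual.texi |
       sed '/^@/d' >extracted-tests.input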

More generally, look for means to improve the maintainability of your test suites.


Footnotes

  (1) Automatic test generation usually refers to the generation of tests from formal specifications of a program, and possibly from the program itself. Formal specifications are written in a mathematical language, typically set theory, and describe very precisely the behavior of a program.