[Biopython-dev] Rethinking Biopython's testing framework
Peter
biopython at maubp.freeserve.co.uk
Fri Nov 28 06:09:30 EST 2008
Hello all,
Sorry for not replying earlier - I've been travelling and didn't get
to check my email as often as I had hoped. I'm going to reply to
several points in this one email...
Marco wrote:
> I was also proposing to use the doctest framework for some of the
> modules, and for enhancing documentation.
> http://bugzilla.open-bio.org/show_bug.cgi?id=2640
As Marco points out, there is also the option of using doctest,
which we're already doing in some of the unit tests (e.g.
test_wise.py). I like the idea of using doctest where we want to
include examples in the docstrings anyway. Marco wasn't suggesting
this, but just to be clear, I don't think we should use JUST doctest
for all our unit tests. Many test cases would make misleading
documentation, and having lots and lots of doctest examples would
hide the important parts of the documentation. Additionally,
doctests using input files are not straightforward due to path
issues.
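For anyone who hasn't used it, here is a minimal sketch of the
doctest idea (the function and values are invented for
illustration) - the example in the docstring doubles as both
documentation and a test:

    def gc_fraction(sequence):
        """Return the fraction of G and C letters in the sequence.

        >>> gc_fraction("ACGT")
        0.5
        >>> gc_fraction("AAAA")
        0.0
        """
        gc = sum(1 for letter in sequence if letter in "GC")
        return gc / float(len(sequence))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

Running the module directly checks the docstring examples; note
there is no input file here, which sidesteps the path problem just
mentioned.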
Brad wrote:
> Agreed with the distinction between the unit tests and the "dump
> lots of text and compare" approach. I've written both and do think
> the unit testing/assertion model is more robust since you can go
> back and actually get some insight into what someone was thinking
> when they wrote an assertion.
I have probably written more of the "dump lots of text and compare"
style tests. I think these have a number of advantages:
(1) It is easier for beginners to write a test - you can take
almost any example script and use it, without having to learn the
unit test framework.
(2) Debugging a failing test in IDLE is much easier - with unit
tests there is all that framework between you and the local scope
where the error happens.
(3) For many broad tests, manually setting up the expected output
for an assert is extremely tedious (e.g. parsing sequences and
checking their checksums) - see the sketch below.
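As an illustration of point (3), a print-and-compare test can just
print things like checksums, with the captured output stored as the
expected output file. Bio.SeqIO and Bio.SeqUtils.CheckSum are real,
but the file names here are only placeholders for the sketch:

    from Bio import SeqIO
    from Bio.SeqUtils.CheckSum import seguid

    # Print one line per record; run_tests.py would compare this
    # output against a file like output/test_Example (hypothetical).
    handle = open("Fasta/example.fasta")
    for record in SeqIO.parse(handle, "fasta"):
        print "%s %s" % (record.id, seguid(record.seq))
    handle.close()

Writing the matching assert-based test would mean working out every
checksum by hand first.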
We could discuss a modification to run_tests.py so that if there is no
expected output file output/test_XXX for test_XXX.py we just run
test_XXX.py and check its return value (I think Michiel had previously
suggested something like this). Perhaps for more robustness, capture
the output and compare it to a predefined list of regular expressions
covering the typical outputs. For example, looking at
output/test_Cluster, the first line is the test name, but the rest
follows the pattern "test_... ok". I imagine only a few output
styles exist. With such a change, half the unit tests (e.g.
test_Cluster.py) wouldn't need their output file in CVS
(output/test_Cluster).
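To make that concrete, here is a rough sketch of the idea (my own
illustration, not existing run_tests.py code); the pattern is an
assumption about what the typical output looks like:

    import re
    import subprocess

    def output_looks_ok(script):
        """Run a test script and sanity-check its captured output.

        Assumes the first line is the test name and the remaining
        lines follow the "test_... ok" style (a sketch only).
        """
        proc = subprocess.Popen(["python", script],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output = proc.communicate()[0]
        if proc.returncode != 0:
            return False  # the script itself reported failure
        pattern = re.compile(r"test_\S+.* ok$")
        for line in output.splitlines()[1:]:
            if line.strip() and not pattern.match(line.strip()):
                return False
        return True

This would cover the common case without needing an expected output
file for every test in CVS.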
Michiel de Hoon wrote:
> If one of the sub-tests fails, Python's unit testing framework will tell us so,
> though (perhaps) not exactly which sub-test fails. However, that is easy to
> figure out just by running the individual test script by itself.
That won't always work. Consider intermittent network problems, or
tests using random data - in general it really is worthwhile having
run_tests.py report a little more than just which test_XXX.py module
failed.
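For example (a sketch of one possible approach, not actual
run_tests.py code), unittest's TestResult object already records
which sub-tests failed, so run_tests.py could name them in its
report:

    import unittest

    def run_and_report(module_name):
        """Run a test module and name any failing sub-tests."""
        suite = unittest.TestLoader().loadTestsFromName(module_name)
        result = unittest.TestResult()
        suite.run(result)
        for test, traceback_text in result.failures + result.errors:
            # Naming the exact sub-test matters for intermittent
            # network or random-data failures, which may not
            # reproduce when the script is re-run by hand.
            print "FAIL: %s" % test

(result.failures and result.errors are lists of (test, traceback)
pairs, so the failing sub-test names come for free.)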
Peter