[Bioperl-l] Policy on tests
Chris Fields
cjfields at illinois.edu
Mon Sep 28 17:28:29 UTC 2009
All,
This is a bit of a rant related to the spate of alphas I've had to
release over the last few weeks. We have a fairly loose policy on
testing; for instance, most CPAN installations should not run network-
or DB-dependent tests or other developer-oriented tests (POD
formatting, for instance) by default, and tests for a 'recommended'
module should be skipped when that module isn't installed. That is
currently in place.
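For concreteness, the gating typically looks something like the sketch
below (plain Test::More, not the actual BioPerl harness code; the
RUN_NETWORK_TESTS variable is made up purely for illustration):

    use strict;
    use warnings;
    use Test::More;

    # Skip the whole file unless the user opted in to network tests.
    # The variable name is illustrative; BioPerl gates these tests
    # through its own Build.PL / Bio::Root::Test machinery.
    plan skip_all => 'network tests not requested'
        unless $ENV{RUN_NETWORK_TESTS};

    plan tests => 1;
    ok( 1, 'would talk to a remote server here' );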
However, I do think all skipped tests need to be reported somehow, and
optional tests should NOT be skipped when they are off by default but
have been specifically requested. This is not currently the behavior,
and so far I have been bitten twice by it.
The last instance was with the latest alpha, where OBDA-related tests
were mistakenly skipped when BerkeleyDB wasn't installed. As it turns
out, BerkeleyDB isn't required, but (according to standard test
harness output) t/LocalDB/Registry.t passed without reporting any
problems, when in reality it silently skipped over 90% of its tests
(this is visible only with --verbose output). In the past I have also
run into network tests silently passing when the remote server was no
longer in service (IIRC this was with the XEMBL modules, which are no
longer in the distribution).
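To illustrate the failure mode with a toy example (this is not the
actual Registry.t code), a SKIP block that swallows a missing
prerequisite still lets the file show up as passing, and the skip
reasons only appear in verbose harness output:

    use strict;
    use warnings;
    use Test::More tests => 10;

    my $have_bdb = eval { require BerkeleyDB; 1 };

    SKIP: {
        # If the prerequisite is missing, all 10 tests are skipped,
        # yet the harness summary still reports the file as OK.
        skip 'BerkeleyDB not installed', 10 unless $have_bdb;

        ok( 1, "registry test $_" ) for 1 .. 10;
    }

Run normally, the file reads as a clean pass; you have to look at the
verbose output to see the '# skip BerkeleyDB not installed' lines.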
From my point of view, speaking as both a user and a developer, I need
to know when these tests are skipped or fail. In instances where I
specifically request a set of tests to be run and a test fails, it
*should* fail quite loudly and catastrophically (e.g. if there is a
server-side issue, a problem with a DB connection, etc.). Tests
shouldn't be skipped over when a problem arises; otherwise, if it is a
legitimate bug, it silently passes. If it is something I haven't set
up correctly (a DB connection, for instance), I would like to know
about it via the test failures.
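One way to get that behavior (again just a sketch, with the made-up
RUN_NETWORK_TESTS variable and an arbitrary example host): skip only
when the optional tests were NOT requested, and once they are
requested, let any connection problem register as a real failure:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use IO::Socket::INET;

    SKIP: {
        skip 'network tests not requested', 2
            unless $ENV{RUN_NETWORK_TESTS};

        # The tests were explicitly requested, so a dead server or a
        # broken local setup should fail, not turn into another skip.
        my $sock = IO::Socket::INET->new(
            PeerAddr => 'www.ebi.ac.uk',
            PeerPort => 80,
            Timeout  => 10,
        );
        ok( $sock, 'remote server is reachable' )
            or diag('server unreachable: fix the setup or retire the test');

        ok( 1, 'queries against the remote service would go here' );
    }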
Am I the only one thinking along these lines? Should we come up with
a simple policy on how we're setting up and running tests?
chris