[BioPython] Re: UnitTests

Cayte katel@worldpath.net
Sun, 16 Apr 2000 13:50:12 -0700


----- Original Message -----
From: Andrew Dalke <dalke@acm.org>
To: Cayte <katel@worldpath.net>
Cc: <biopython@biopython.org>
Sent: Sunday, April 16, 2000 3:09 AM
Subject: [BioPython] Re: UnitTests


> I'll start off by emphasizing that Cayte's approach and mine
> are, as far as I can tell, the same.  I think I can even make
> isomorphic mappings between the two, using the following untested
> code.
>
...
> Still, they are equivalent.
>
   It's a question of which is easier to do.
> Some people reacted by making a lot of short functions, to the
> detriment of performance.  Others made every function about 40
> lines long but arbitrarily chopped up code into the functions
> along with a slew of input parameters.
>
> The real answer was to get a good idea of when to partition code
> into functions and remove the fixed maximum limit.  I suspect
> the same is true here.
>
   This requires coding skill on the part of testers, but coding and testing
skill do not always go together.  When I put my testing hat on, I like the
coding decisions to be as automatic as possible.  Defaults allow me to focus
on test design and not switch between code design and test design.  I find
that this way I can make tests more quickly.  It's something like having an
HTML editor instead of having to insert HTML codes yourself, although both
are functionally equivalent.  Even people who know HTML prefer editors.  So
with PyUnit, I don't have to think about how to break up the code, because
it's already decided, or at least suggested, by the tool.
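   For instance, a minimal PyUnit-style sketch (the class, method, and data
names here are made up just for illustration) shows how the tool imposes the
structure for you: each test_ method is its own unit, and the framework
collects, runs, and reports on them without any extra driver code:

    import unittest

    class TranslateTest(unittest.TestCase):
        # Each test_ method is one unit; the framework finds and runs them.

        def setUp(self):
            # Fixture code; runs before every test_ method.
            self.seq = "ATG"

        def test_length(self):
            self.assertEqual(len(self.seq), 3)

        def test_start_codon(self):
            self.assertEqual(self.seq, "ATG")

    if __name__ == "__main__":
        unittest.main()   # the tool supplies the driver and the reporting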

> > 2.  This isolation also makes it easier for someone reading the code.
> > You may remember the sequence but s/he doesn't.
>
>
> That's where code comments and printed output comments are useful.
> If the test code isn't easy to follow, then it shouldn't have passed
> the (putative, alas) code review.
>
    On the XTreme Programming web pages, a lot of posters feel you should be
leery of comments.  They don't get maintained (still less in test code :) ),
and the need for them may pinpoint a weakness in the code.  Also, I like
having the output reporting automated.

> > 3. I can create suites of just a few test cases.  If only three tests
> > fail, I don't have to rerun everything.
>
>
> > I think a list of passed and failed test cases provides a
> > useful summary and if mnemonic names are used, give you an idea
> > of what was covered.
>
>
> Agreed, but I'm only really interested in them at the UnitTest
> level.  In the regression suites we had at Bioreason, if one of
> those failed, we then ran the test directly and diff'ed the outputs.
> Because of their design, that gave a lot more information on
> what was going on and what went wrong.
>
   PyUnit only claims to be a unit testing tool.
> Your test_ methods can also be tested independently, but it calls for
> making either a new driver or new wrapper class.
   No.  I can add a short routine like build_suite to UnitTestSuite that
takes a list of routines as a parameter.  I thought I'd already added it,
but I guess that was PerlUnit.
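   Roughly, such a routine could look like the sketch below.  This is only an
illustration written against the standard PyUnit TestSuite/TestCase interface
rather than UnitTestSuite itself, and build_suite and its argument names are
my own:

    import unittest

    def build_suite(case_class, method_names):
        # Collect only the named test_ methods into one suite, so that
        # just the cases that failed last time can be rerun.
        suite = unittest.TestSuite()
        for name in method_names:
            suite.addTest(case_class(name))   # one TestCase instance per name
        return suite

    # For example, rerun only two tests from the earlier sketch:
    #   unittest.TextTestRunner().run(
    #       build_suite(TranslateTest, ["test_length", "test_start_codon"]))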

> My code is
> harder to break up into smaller units, though I think they should
> be designed such that there is no need to break them up.
>
   Again, this requires coding skill.  I've seen some excellent testers who
could find the most obscure bugs but couldn't code their way out of a paper
bag.  Also, if you have a suite of 40 tests, you don't know which ones will
break, even if you are a skilled coder.  Why run all 40?

   With PyUnit it's easier for me, mostly because it automates things, but I
have no problems with others using a different approach.


                                                              Cayte