[Biopython-dev] Rethinking Biopython's testing framework

Giovanni Marco Dall'Olio dalloliogm at gmail.com
Thu Dec 25 19:22:04 UTC 2008


On Wed, Dec 24, 2008 at 11:52 AM, Michiel de Hoon <mjldehoon at yahoo.com> wrote:
> Hi everybody,
>
> How about the following for Biopython tests:
>
> For Python's unittest-style test modules, Python's unittest documentation recommends defining a function in each test module that returns the test suite. Most Biopython tests that use the unittest framework already do this (the function is called "testing_suite").

Merry Christmas!
Some people have suggested the nose Python framework to me:
- http://somethingaboutorange.com/mrl/projects/nose/

It is used by many other open source projects, such as SQLAlchemy and Elixir.
I haven't tried it yet, but I think it does more or less everything you
described automatically; we could try adopting it.
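
For example (a rough, untested sketch -- the module name below is made up),
nose picks up any module or function whose name matches its test pattern,
with no boilerplate and no suite function needed:

    # test_example.py -- hypothetical module; nose finds it by the "test" prefix
    def test_reverse():
        # a plain assert is enough; nose collects and reports each failure
        assert "ACGT"[::-1] == "TGCA"

Running "nosetests" in the Tests directory would then discover and run all
such functions, reporting each failing test by name.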


>
> We could now do the following in run_tests.py:
>
> 1) import the testing module and save its output
> 2) try to call module.testing_suite
> 3) if it exists, then we're using Python's unittest framework. So we run the tests in the testing suite.
> 4) if it does not exist, then we're using the print-and-compare approach. So we compare the saved output from the test to the correct output.
>
> I think that this can be set up such that it looks like nothing has changed for the user, while the files containing the correct output are no longer needed for the unittest-based tests.
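
Just to make the idea concrete, here is a rough, untested sketch of what that
check could look like in run_tests.py (the "testing_suite" name is the one
from your message; the expected-output handling and file argument are only
guesses on my part):

    import sys
    import unittest
    from StringIO import StringIO

    def run_test_module(name, expected_output_file):
        # Capture anything the module prints while it is being imported.
        saved_stdout = sys.stdout
        sys.stdout = captured = StringIO()
        try:
            module = __import__(name)
        finally:
            sys.stdout = saved_stdout
        suite_factory = getattr(module, "testing_suite", None)
        if suite_factory is not None:
            # unittest-style module: run its suite and report per-test results.
            result = unittest.TextTestRunner(verbosity=2).run(suite_factory())
            return result.wasSuccessful()
        # Print-and-compare module: importing it already produced the output,
        # so compare it against the stored expected output.
        expected = open(expected_output_file).read()
        return captured.getvalue() == expected

With something like that in place, the unittest-based modules would not need
output files at all, exactly as you describe.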
>
> Questions, comments, objections, anybody?
>
> --Michiel.
>
>
> --- On Thu, 12/4/08, Michiel de Hoon <mjldehoon at yahoo.com> wrote:
>
>> From: Michiel de Hoon <mjldehoon at yahoo.com>
>> Subject: Re: [Biopython-dev] Rethinking Biopython's testing framework
>> To: "Brad Chapman" <chapmanb at 50mail.com>, "Peter" <biopython at maubp.freeserve.co.uk>
>> Cc: biopython-dev at lists.open-bio.org
>> Date: Thursday, December 4, 2008, 7:32 AM
>> > Michiel de Hoon wrote:
>> > > If one of the sub-tests fails, Python's unit
>> > > testing framework will tell us so, though (perhaps)
>> > > not exactly which sub-test fails. However, that is
>> > > easy to figure out just by running the individual
>> > > test script by itself.
>> >
>> > That won't always work.  Consider intermittent network
>> > problems, or tests using random data - in general it
>> > really is worthwhile having run_tests.py report a little
>> > more than just which test_XXX.py module failed.
>> >
>> I wonder if Python's unit testing framework allows us
>> to capture exactly which sub-test fails. I'll look into
>> that. Ideally, it should be possible to have regular Python
>> unit tests and Biopython-style print-and-compare tests side
>> by side, and get information about failing sub-tests for
>> both.
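
On this older point: a plain unittest TestResult already records which test
method failed, so run_tests.py could report the sub-test name itself rather
than just the module. A rough, untested sketch, reusing the testing_suite()
idea from above:

    import unittest

    result = unittest.TestResult()
    testing_suite().run(result)
    for failed_test, traceback_text in result.failures + result.errors:
        # failed_test identifies the failing sub-test, e.g. test_parse (ParserTest)
        print "FAILED: %s" % failed_test
        print traceback_text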
>>
>> --Michiel.
>>



-- 

My blog on bioinformatics (now in English): http://bioinfoblog.it


