[Biopython-dev] Online tools to track our test coverage & code quality

Peter Cock p.j.a.cock at googlemail.com
Thu Nov 26 16:01:31 UTC 2015


Hi all,

I've just been looking at https://codecov.io/ for tracking unit test coverage,
which can be used as an add-on on top of GitHub and TravisCI with just
a few small changes to the .travis.yml file:

https://github.com/peterjc/biopython/tree/codecov
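
For reference, the change is essentially just installing the coverage and
codecov tools and running the test suite under coverage - roughly along
these lines (an illustrative sketch of the relevant .travis.yml entries,
not the exact diff, so please check the branch above for the real thing):

    install:
      - pip install coverage codecov
    script:
      - cd Tests && coverage run run_tests.py --offline
    after_success:
      - codecov

The codecov command at the end just uploads the .coverage data file
which the coverage tool writes out.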

You can see the output from my test branch here - I like how you can
drill down to individual files and see via the colouring which lines were
never executed during the test run:

https://codecov.io/github/peterjc/biopython?branch=codecov

Other languages are supported too, but for Python at least this captures
unit test coverage via the coverage tool:

https://pypi.python.org/pypi/coverage
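
You can also try this locally before anything reaches codecov.io, e.g.
something like this (illustrative, run from the Tests directory):

    pip install coverage
    cd Tests
    coverage run run_tests.py --offline
    coverage report -m
    coverage html

where coverage html writes a set of HTML pages with per-line colouring
much like the codecov.io view.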

Overall CodeCov.io seems minimally invasive and is provided free for open
source projects, so unless anyone objects I would like to enable this on
the main Biopython repository.

---

On a related note, some of you may be aware of https://landscape.io/,
which Tiago set up for us in December last year. It does a related job of
assessing Python code quality using flake8 and other tools:

https://landscape.io/github/biopython/biopython
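
As an aside, you can get much the same warnings locally without waiting
for Landscape to re-scan the repository, e.g. something along these lines
(the options here are just a guess, not necessarily what Landscape uses):

    pip install flake8
    flake8 --max-line-length=100 Bio/ BioSQL/ Tests/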

We really ought to look at those Landscape reports more often - even just
skimming the errors earlier today I found several little things in areas of
the code I look after which were easy to fix, like duplicate imports etc.
See this commit and its parents:

https://github.com/biopython/biopython/commit/a46eaf3d615d97f541a779901b4cb2e736afc12d

In general this still highlights lots of places in our code which use
mutable objects like lists and dictionaries as default arguments, which
is unsafe.
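
To illustrate the problem with a made-up example (not a real function
from our code base) - the default list is created just once when the
function is defined, so every call which relies on the default ends up
sharing and modifying the same list:

    # Risky: all calls without an explicit argument share one list
    def add_feature(feature, features=[]):
        features.append(feature)
        return features

    # Safer idiom: default to None and create a fresh list per call
    def add_feature(feature, features=None):
        if features is None:
            features = []
        features.append(feature)
        return features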

--

Would people be in favour of using badge icons in the main README file
to show the master branch's current TravisCI, coverage, etc. status?
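
For example, assuming we were happy to have the README in
reStructuredText (or Markdown), a TravisCI badge for the master branch
would just be an image link along these lines:

    .. image:: https://travis-ci.org/biopython/biopython.svg?branch=master
       :target: https://travis-ci.org/biopython/biopython

with similar snippets available from codecov.io and landscape.io.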

Making this information more prominent might help encourage us
to get the test coverage and code metrics up ;)

Regards,

Peter
