[BioPython] Precompute database information

Fernando fennan at gmail.com
Wed Oct 17 11:12:36 UTC 2007


> Asking users to use MySQL to do updates might be a bit much.  Could this
> be done from the .obo files?

I think that's probably the best solution... Is there any Python module for
working with the OBO / OWL formats? I've been searching, but people seem to
use BioPerl for this.

On 10/16/07, Sean Davis <sdavis2 at mail.nih.gov> wrote:
>
> Fernando wrote:
> > Hi Peter,
> >
> > >> How big would your pre-computed data be?  If it's some sort of table or
> > >> other simple data you could perhaps use a simple text file; another idea
> > >> for complicated objects is to use Python's pickle module.
> >
> > It would be big... I am dealing with pairwise term comparisons, and I want
> > to consider different species as well.
> >
> > >> How often would the pre-computed data need to be updated?  Every time
> > >> there is a new Gene Ontology release?  It might be better to have the
> > >> module download and cache the latest version on request (rather than
> > >> shipping an out-of-date dataset with Biopython).
> >
> > Yes, I could do that... Would it be OK in Biopython to use MySQL? If so,
> > the module could download the latest GO version on request, install it,
> > and work with that version until the user decides to update it.
>
> Asking users to use MySQL to do updates might be a bit much.  Could this
> be done from the .obo files?
>
> Sean
>
