From bugzilla-daemon at localhost.localdomain Tue Jun 3 13:53:31 2003 From: bugzilla-daemon at localhost.localdomain (bugzilla-daemon@localhost.localdomain) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] [Bug 1447] New: Testing new bugzilla installation Message-ID: <200306031753.h53HrVUd009344@localhost.localdomain> http://bugzilla.bioperl.org/show_bug.cgi?id=1447 Summary: Testing new bugzilla installation Product: Biopython Version: 1.10 Platform: Macintosh OS/Version: MacOS X Status: NEW Severity: normal Priority: P2 Component: Main Distribution AssignedTo: biopython-dev@biopython.org ReportedBy: jchang@biopython.org just ported it over from cvs.open-bio.org to portal ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee. From MKC at Stowers-Institute.org Fri Jun 6 16:53:46 2003 From: MKC at Stowers-Institute.org (Coleman, Michael) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] RE: blastpgp parsing buglet Message-ID: Hi, It looks like a further change is required on this. The problem is that when blank lines following 'CONVERGED' (and perhaps in other cases) are not consumed, _scan_alignments will see them and its tests will not work properly. Mike --- NCBIStandalone.py~ 2003-05-08 13:36:06.000000000 -0500 +++ NCBIStandalone.py 2003-06-06 15:38:31.000000000 -0500 @@ -247,6 +247,7 @@ read_and_call_while(uhandle, consumer.noevent, blank=1) attempt_read_and_call(uhandle, consumer.converged, start='CONVERGED') + read_and_call_while(uhandle, consumer.noevent, blank=1) consumer.end_descriptions() > -----Original Message----- > From: Coleman, Michael > Sent: Thursday, May 08, 2003 1:45 PM > To: biopython-dev@biopython.org > Subject: blastpgp parsing buglet > > > Parsing by NCBIStandalone.py fails for BLASTP 2.2.5 output. > This is the partial output that trips the problem: > > gi|23099742|ref|NP_693208.1| ornithine aminotransferase > [Oceanob... 430 e-119 > gi|16081241|ref|NP_393547.1| L-2, > 4-diaminobutyrate:2-ketoglutar... 430 e-119 > > Sequences not found previously or not previously below threshold: > > >gi|23466947|gb|ZP_00122533.1| hypothetical protein > [Haemophilus somnus 129PT] > Length = 432 > > Score = 591 bits (1524), Expect = e-167 > Identities = 191/420 (45%), Positives = 291/420 (69%), Gaps > = 7/420 (1%) > > The code expects to see a 'CONVERGED' but none is given here. > One possible fix would be to also look for a line beginning > with '>', like so > > # Read the descriptions and the following blank lines. > read_and_call_while(uhandle, consumer.noevent, blank=1) > l = safe_peekline(uhandle) > if l[:9] != 'CONVERGED' and l[:1] != '>': > read_and_call_until(uhandle, > consumer.description, blank=1) > read_and_call_while(uhandle, > consumer.noevent, blank=1) > > Mike > > Mike Coleman, Scientific Programmer, +1 816 926 4419 > Stowers Institute for Biomedical Research > 1000 E. 50th St., Kansas City, MO 64110 > From jchang at jeffchang.com Sun Jun 8 22:11:35 2003 From: jchang at jeffchang.com (Jeffrey Chang) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] RE: blastpgp parsing buglet In-Reply-To: Message-ID: Thanks very much for the patch. I've committed it to the CVS tree. Jeff On Friday, June 6, 2003, at 01:53 PM, Coleman, Michael wrote: > Hi, > > It looks like a further change is required on this. The problem is > that when blank lines following 'CONVERGED' (and perhaps in other > cases) are not consumed, _scan_alignments will see them and its tests > will not work properly. 
> > Mike > > > > > > > > > --- NCBIStandalone.py~ 2003-05-08 13:36:06.000000000 -0500 > +++ NCBIStandalone.py 2003-06-06 15:38:31.000000000 -0500 > @@ -247,6 +247,7 @@ > read_and_call_while(uhandle, consumer.noevent, > blank=1) > > attempt_read_and_call(uhandle, consumer.converged, > start='CONVERGED') > + read_and_call_while(uhandle, consumer.noevent, blank=1) > > consumer.end_descriptions() > >> -----Original Message----- >> From: Coleman, Michael >> Sent: Thursday, May 08, 2003 1:45 PM >> To: biopython-dev@biopython.org >> Subject: blastpgp parsing buglet >> >> >> Parsing by NCBIStandalone.py fails for BLASTP 2.2.5 output. >> This is the partial output that trips the problem: >> >> gi|23099742|ref|NP_693208.1| ornithine aminotransferase >> [Oceanob... 430 e-119 >> gi|16081241|ref|NP_393547.1| L-2, >> 4-diaminobutyrate:2-ketoglutar... 430 e-119 >> >> Sequences not found previously or not previously below threshold: >> >>> gi|23466947|gb|ZP_00122533.1| hypothetical protein >> [Haemophilus somnus 129PT] >> Length = 432 >> >> Score = 591 bits (1524), Expect = e-167 >> Identities = 191/420 (45%), Positives = 291/420 (69%), Gaps >> = 7/420 (1%) >> >> The code expects to see a 'CONVERGED' but none is given here. >> One possible fix would be to also look for a line beginning >> with '>', like so >> >> # Read the descriptions and the following blank lines. >> read_and_call_while(uhandle, consumer.noevent, blank=1) >> l = safe_peekline(uhandle) >> if l[:9] != 'CONVERGED' and l[:1] != '>': >> read_and_call_until(uhandle, >> consumer.description, blank=1) >> read_and_call_while(uhandle, >> consumer.noevent, blank=1) >> >> Mike >> >> Mike Coleman, Scientific Programmer, +1 816 926 4419 >> Stowers Institute for Biomedical Research >> 1000 E. 50th St., Kansas City, MO 64110 >> > > _______________________________________________ > Biopython-dev mailing list > Biopython-dev@biopython.org > http://biopython.org/mailman/listinfo/biopython-dev From chapmanb at uga.edu Tue Jun 10 18:09:12 2003 From: chapmanb at uga.edu (Brad Chapman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: [BioPython] performance problem in ParserSupport.EventGenerator._get_set_flags In-Reply-To: <1055243518.8849.19.camel@sulawesi> References: <1055243518.8849.19.camel@sulawesi> Message-ID: <20030610220912.GB67357@evostick.agtec.uga.edu> [moving this to the dev list since it's got attachments 'n things] Hi Andreas; I want to preface this by saying that I know nearly next to nothing about profiling. That doesn't mean I'm not interested in helping speed things up, just that I don't have much experience in doing it. > it seems that when I parse a large GenBank file, most of the time is > spend in this function. Can anybody think of a more efficient way to do > this? I don't fully understand the algorithm here. What values can be > stored in this flags? Only 0 and 1? Is there a difference between value > 0 and not having the key in the dict? I don't know about the efficiency issues, but I can help at least explain what I'm trying to do here. Basically, this code is meant to sort of transition between the way Martel does things (with XML and SAX events) and the way that Biopython does things (with Consumers and Event generators). So, saying that, it's going to be inherently not optimized since I am basically jamming two concepts together with this EventGenerator glue. Also, I suck at optimization (I try to focus more on code clarity), so there's that as well. 
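For readers who have not opened ParserSupport.py: the "glue" being described is essentially a SAX content handler that buffers character data for the tag it is currently inside and then hands the finished string to a consumer method named after that tag. The sketch below is only an illustration of that idea -- the class and method names are invented, and it is not the real EventGenerator:

    from xml.sax import handler

    class TagToConsumer(handler.ContentHandler):
        """Sketch: buffer character data per tag, then call consumer.<tag>(text)."""
        def __init__(self, consumer, interesting_tags):
            handler.ContentHandler.__init__(self)
            self._consumer = consumer
            self._tags = interesting_tags
            self._current = None      # tag we are currently inside, if any
            self._buffer = []         # pieces of character data for that tag

        def startElement(self, name, attrs):
            if name in self._tags:
                self._current = name
                self._buffer = []

        def characters(self, content):
            if self._current is not None:
                self._buffer.append(content)   # cheap append; join once at the end

        def endElement(self, name):
            if name == self._current:
                text = "".join(self._buffer)
                getattr(self._consumer, name)(text)   # e.g. consumer.organism("...")
                self._current = None

The append-then-join pattern at the end is the same idiom the patch below moves to.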
The idea is that the flags keep track of where you are at in the XML and collect up that information before passing it on to the appropriate event. Looking at my code, it looks like a couple of things are especially bad, but the biggest culprit is probably the characters function: def characters(self, content): """Extract the information. Using the flags that are set, put the character information in the appropriate place. """ set_flags = self._get_set_flags() # deal with each flag in the set flags for flag in set_flags: # collect up the content for all of the characters self._cur_content += content I have zero idea why the heck I am adding the content on to the current content multiple times. In fact, this junk code probably only works because there is only one item for set_flags every time. Just staring at the code, it looks like we could make self._cur_content a list, and then simply do: def characters(self, content): """Extract the information. Using the flags that are set, put the character information in the appropriate place. """ self._cur_content.append(content) Then we could change the appropriate part of endElement to: # add all of the information collected inside this tag self.info[name].append("".join(self._cur_content)) self._cur_content = [] This should save a bunch of calls to _get_set_flags and won't mess anything up, I don't believe. Do you have time to try this and see if it speeds it up and doesn't break anything? I'm happy to commit it but don't have time at the moment to properly test it to make sure it doesn't break anything. I could probably look at it over the weekend. Let me know if this made any sense or helped. I've attached a patch which does what I suggested above (but is completely untested). I'm very happy to clean my code up to make it faster and do appreciate you looking at it. Brad -------------- next part -------------- *** ParserSupport.py.orig Tue Jun 10 18:03:54 2003 --- ParserSupport.py Tue Jun 10 18:06:59 2003 *************** *** 216,222 **** self._previous_tag = '' # the current character information for a tag ! self._cur_content = '' def _get_set_flags(self): """Return a listing of all of the flags which are set as positive. --- 216,222 ---- self._previous_tag = '' # the current character information for a tag ! self._cur_content = [] def _get_set_flags(self): """Return a listing of all of the flags which are set as positive. *************** *** 248,259 **** Using the flags that are set, put the character information in the appropriate place. """ ! set_flags = self._get_set_flags() ! ! # deal with each flag in the set flags ! for flag in set_flags: ! # collect up the content for all of the characters ! self._cur_content += content def endElement(self, name): """Send the information to the consumer. --- 248,254 ---- Using the flags that are set, put the character information in the appropriate place. """ ! self._cur_content.append(content) def endElement(self, name): """Send the information to the consumer. *************** *** 269,276 **** # interested in and potentially have information for if name in self._get_set_flags(): # add all of the information collected inside this tag ! self.info[name].append(self._cur_content) ! self._cur_content = '' # if we are at a new tag, pass on the info from the last tag if self._previous_tag and self._previous_tag != name: --- 264,271 ---- # interested in and potentially have information for if name in self._get_set_flags(): # add all of the information collected inside this tag ! 
self.info[name].append("".join(self._cur_content)) ! self._cur_content = [] # if we are at a new tag, pass on the info from the last tag if self._previous_tag and self._previous_tag != name: From mdehoon at ims.u-tokyo.ac.jp Thu Jun 12 00:44:38 2003 From: mdehoon at ims.u-tokyo.ac.jp (Michiel Jan Laurens de Hoon) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Bio.Cluster Message-ID: <3EE80536.5010808@ims.u-tokyo.ac.jp> Dear biopython developers, I have added Bio.Cluster to the Biopython CVS. Bio.Cluster contains clustering techniques for gene expression data (hierarchical, k-means, and SOMs); most routines are written in C with a Python wrapper. This package also exists separately as Pycluster. The Python and C source code is in Bio/Cluster; I have also added Bio.Cluster to setup.py. In case you want to try this package, there is a manual at http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/cluster.pdf (replace "from Pycluster import *" by "from Bio.Cluster import *") and a sample data set at http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/demo.txt. Please let me know if you find any problems with this package. --Michiel. -- Michiel de Hoon, Assistant Professor University of Tokyo, Institute of Medical Science Human Genome Center 4-6-1 Shirokane-dai, Minato-ku Tokyo 108-8639 Japan http://bonsai.ims.u-tokyo.ac.jp/~mdehoon From jefftc at stanford.edu Fri Jun 13 01:11:56 2003 From: jefftc at stanford.edu (Jeffrey Chang) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release Message-ID: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> Hey developers, It feels like time to make a new release. There's been a bunch of new code and fixes, and the BLAST parser is sufficiently out of date. :) Plus, Michiel is anxious to get his new clustering software available! Hmmm... I just noticed that Andrew just checked in EUtils as well. So, everybody working on the code in the CVS repository, please let me know where you are. Specifically, I need to know whether the code is not ready to be released yet, and when it will be. All the tests should pass. On my system now, test_FSSP is failing. It looks minor -- Iddo? Jeff From dalke at dalkescientific.com Fri Jun 13 01:19:09 2003 From: dalke at dalkescientific.com (Andrew Dalke) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release In-Reply-To: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> Message-ID: <8FBBA150-9D5E-11D7-875B-000393C92466@dalkescientific.com> Yup, added EUtils so Brad can boast about it at BOSC. I'm done until the end of this month (at a client's site then at EuroPython). Brad will make any extra changes needed to get EUtils working in Biopython. There are also a few changes to make w.r.t updating EUtils for the latest EUtils spec, but that'll have to wait. So unless you want to wait another month, go ahead with a new release. Andrew From chapmanb at uga.edu Fri Jun 13 12:12:35 2003 From: chapmanb at uga.edu (Brad Chapman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release In-Reply-To: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> Message-ID: <20030613161235.GN2981@evostick.agtec.uga.edu> Hey Jeff; > It feels like time to make a new release. There's been a bunch of new > code and fixes, and the BLAST parser is sufficiently out of date. :) > Plus, Michiel is anxious to get his new clustering software available! > Hmmm... I just noticed that Andrew just checked in EUtils as well. 
Definitely agree with making a release. I think a few more people other then me should hit on EUtils to make sure I fixed everything after Andrew checked it in. Just so I'm not soley responsible :-). > So, everybody working on the code in the CVS repository, please let me > know where you are. Specifically, I need to know whether the code is > not ready to be released yet, and when it will be. The only thing I have (besides double checking EUtils) is the speed-up stuff in GenBank. I hope that should be finished by the end of the weekend. I also want to update (at least) the install documentation so links and things point to the new website. That should also be done by the end of the weekend. > All the tests should pass. On my system now, test_FSSP is failing. It > looks minor -- Iddo? KDTree and LocusLink are commented out in setup.py (which gives import warnings in the test). Is there are a reason for this or should we try to get those working as well? test_GFF is also failing for me: 12:05pm Tests> python run_tests.py test_GFF.py test_GFF ... FAIL ====================================================================== FAIL: test_GFF ---------------------------------------------------------------------- Traceback (most recent call last): File "run_tests.py", line 131, in runTest self.runSafeTest() File "run_tests.py", line 168, in runSafeTest expected_handle) File "run_tests.py", line 268, in compare_output assert expected_line == output_line, \ AssertionError: Output : '*****************************************************************\n' Expected: 'Bio.GFF doctests complete.\n' ---------------------------------------------------------------------- Ran 1 tests in 0.451s FAILED (failures=1) Aaaah, releases, Brad From katel at worldpath.net Fri Jun 13 15:09:25 2003 From: katel at worldpath.net (katel@worldpath.net) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release Message-ID: <410-22003651319925464@M2W026.mail2web.com> Original Message: ----------------- From: Brad Chapman chapmanb@uga.edu Date: Fri, 13 Jun 2003 12:12:35 -0400 To: biopython-dev@biopython.org Subject: Re: [Biopython-dev] making new release Hey Jeff; Since the last release I uploaded a humungous Locus test file. That may explain the problem? What error does it give when the comments are removed? KDTree and LocusLink are commented out in setup.py (which gives import warnings in the test). Is there are a reason for this or should we try to get those working as well? test_GFF is also failing for me: 12:05pm Tests> python run_tests.py test_GFF.py test_GFF ... FAIL ====================================================================== FAIL: test_GFF -------------------------------------------------------------------- mail2web - Check your email from the web at http://mail2web.com/ . From idoerg at burnham.org Fri Jun 13 18:10:11 2003 From: idoerg at burnham.org (Iddo Friedberg) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Bio.Cluster In-Reply-To: <3EE80536.5010808@ims.u-tokyo.ac.jp> References: <3EE80536.5010808@ims.u-tokyo.ac.jp> Message-ID: <3EEA4BC3.2080906@burnham.org> Dear Michiel, I just looked at the manual for Bio.Cluster (very well written, BTW). Is there a way to do a k-means clustering (or other) based on a distance matrix, rather than on the gene expression vector data? The data i am trying to cluster teh structural similarity of protein structure fragments, and as such already appears in the matrix form. 
Thanks, ./I Michiel Jan Laurens de Hoon wrote: > Dear biopython developers, > > I have added Bio.Cluster to the Biopython CVS. Bio.Cluster contains > clustering techniques for gene expression data (hierarchical, k-means, > and SOMs); most routines are written in C with a Python wrapper. This > package also exists separately as Pycluster. > > The Python and C source code is in Bio/Cluster; I have also added > Bio.Cluster to setup.py. > > In case you want to try this package, there is a manual at > http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/cluster.pdf > (replace "from Pycluster import *" by "from Bio.Cluster import *") and a > sample data set at > http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/demo.txt. > Please let me know if you find any problems with this package. > > --Michiel. > -- Iddo Friedberg, Ph.D. The Burnham Institute 10901 N. Torrey Pines Rd. La Jolla, CA 92037 USA Tel: +1 (858) 646 3100 x3516 Fax: +1 (858) 646 3171 http://bioinformatics.ljcrf.edu/~iddo From grouse at mail.utexas.edu Fri Jun 13 18:11:30 2003 From: grouse at mail.utexas.edu (Michael Hoffman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: making new release In-Reply-To: <20030613161235.GN2981@evostick.agtec.uga.edu> References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> <20030613161235.GN2981@evostick.agtec.uga.edu> Message-ID: On Fri, 13 Jun 2003, Brad Chapman wrote: > test_GFF is also failing for me: Do you have MySQLdb installed? Even if it's not installed I get: grouse@indy ~/biopython/Tests $ python run_tests.py test_GFF.py test_GFF ... Skipping test because of import error: No module named MySQLdb ok Otherwise, works for me... Would you please cvs update again and if you still can't get a pass I will take more extreme measures... Thanks, -- Michael Hoffman The University of Texas at Austin From mdehoon at ims.u-tokyo.ac.jp Sat Jun 14 01:11:20 2003 From: mdehoon at ims.u-tokyo.ac.jp (Michiel Jan Laurens de Hoon) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Bio.Cluster References: <3EE80536.5010808@ims.u-tokyo.ac.jp> <3EEA4BC3.2080906@burnham.org> Message-ID: <3EEAAE78.9070904@ims.u-tokyo.ac.jp> > Is there a way to do a k-means clustering (or other) based on a > distance matrix, rather than on the gene expression vector data? For k-means clustering, this is in general not possible, as you need to recalculate the cluster centroids in order to get the distances between clusters. Ditto for hierarchical clustering using pairwise centroid-linkage. However, it is possible if the Euclidean distance is used as a measure of similarity (instead of e.g. the Pearson correlation), but I haven't implemented that. For hierarchical clustering using pairwise single-, maximum-, or average-linkage, the distance matrix is sufficient no matter which distance measure is used. The hierarchical clustering routine in the underlying C library actually allows you to pass in the distance matrix without the original gene expression data. The reason that I haven't made that available in the Python interface is the fact that these matrices get quite large (e.g. for the Bacillus subtilis genome, the > 4000 genes would lead to a matrix with > 16000000 elements). This matrix is symmetric, so actually we need to store only half of that, which can be done easily in C using a ragged array but not so easily in Python. I assume that your protein data are smaller than that, or maybe you don't care so much about the memory requirements. How do you store the protein similarity data in Python? 
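For what it is worth, one plain-Python way to keep only the lower triangle of a symmetric matrix is a "ragged" list of lists, where row i holds the i distances from item i to items 0..i-1. This is just a sketch of the storage idea, not an input format Bio.Cluster currently accepts:

    def make_lower_triangle(n):
        # row i has i entries: distances from item i to items 0..i-1
        return [[0.0] * i for i in range(n)]

    def get_distance(matrix, i, j):
        if i == j:
            return 0.0
        if i < j:
            i, j = j, i
        return matrix[i][j]

    def set_distance(matrix, i, j, value):
        if i != j:
            if i < j:
                i, j = j, i
            matrix[i][j] = value

    # For ~4000 genes this holds roughly 8 million floats instead of 16 million.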
If it doesn't matter that the matrix is stored inefficiently in Python, I can modify the Python/C interface to let you pass in the distance matrix directly to the pairwise single/complete/average routines. --Michiel. Iddo Friedberg wrote: > Dear Michiel, > > I just looked at the manual for Bio.Cluster (very well written, BTW). Is > there a way to do a k-means clustering (or other) based on a distance > matrix, rather than on the gene expression vector data? The data i am > trying to cluster teh structural similarity of protein structure > fragments, and as such already appears in the matrix form. > > Thanks, > > ./I > > > > Michiel Jan Laurens de Hoon wrote: > >> Dear biopython developers, >> >> I have added Bio.Cluster to the Biopython CVS. Bio.Cluster contains >> clustering techniques for gene expression data (hierarchical, k-means, >> and SOMs); most routines are written in C with a Python wrapper. This >> package also exists separately as Pycluster. >> >> The Python and C source code is in Bio/Cluster; I have also added >> Bio.Cluster to setup.py. >> >> In case you want to try this package, there is a manual at >> http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/cluster.pdf >> (replace "from Pycluster import *" by "from Bio.Cluster import *") and >> a sample data set at >> http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/demo.txt. >> Please let me know if you find any problems with this package. >> >> --Michiel. >> > -- Michiel de Hoon, Assistant Professor University of Tokyo, Institute of Medical Science Human Genome Center 4-6-1 Shirokane-dai, Minato-ku Tokyo 108-8639 Japan http://bonsai.ims.u-tokyo.ac.jp/~mdehoon From chapmanb at uga.edu Sat Jun 14 13:50:02 2003 From: chapmanb at uga.edu (Brad Chapman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: making new release In-Reply-To: References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> <20030613161235.GN2981@evostick.agtec.uga.edu> Message-ID: <20030614175002.GA35289@evostick.agtec.uga.edu> Hey Michael; > > test_GFF is also failing for me: > > Do you have MySQLdb installed? Even if it's not installed I get: > > grouse@indy ~/biopython/Tests > $ python run_tests.py test_GFF.py > test_GFF ... Skipping test because of import error: No module named MySQLdb > ok > > Otherwise, works for me... > > Would you please cvs update again and if you still can't get a pass I > will take more extreme measures... Yeah, I'm a moron. I should have looked at this more closely. I do have MySQL installed, but I don't have everything set up to do the test that is failing. I checked out the code, and it looks like I'd have to have the environmental variable MYSQLPASS set and the wormbase GFF stuff, which I don't have. This is cool, but what do you think about the attached diff to test_GFF.py? It raises an import error if things aren't set up for the test, which seems better then getting a fail. If it's cool with you I can check it in, or we can think of some other plan. Sorry, I should have looked at this more carefully before sending out that off-the-cuff message earlier. Brad -------------- next part -------------- *** test_GFF.py.orig Sat Jun 14 13:38:41 2003 --- test_GFF.py Sat Jun 14 13:42:59 2003 *************** *** 1,7 **** #!/usr/bin/env python """Test the Bio.GFF module and dependencies """ ! import MySQLdb import Bio.GFF import Bio.GFF.GenericTools --- 1,7 ---- #!/usr/bin/env python """Test the Bio.GFF module and dependencies """ ! 
import os import MySQLdb import Bio.GFF import Bio.GFF.GenericTools *************** *** 16,20 **** print "Bio.GFF.easy doctests complete." print "Running Bio.GFF doctests..." ! Bio.GFF._test() print "Bio.GFF doctests complete." --- 16,25 ---- print "Bio.GFF.easy doctests complete." print "Running Bio.GFF doctests..." ! # only do the test if we are set up to do it. We need to have MYSQLPASS ! # set and have a GFF wormbase installed (see the code in Bio/GFF/__init_.py ! if os.environ.has_key("MYSQLPASS"): ! Bio.GFF._test() ! else: ! raise ImportError("Environment not configured for GFF test") print "Bio.GFF doctests complete." From grouse at mail.utexas.edu Sat Jun 14 15:14:37 2003 From: grouse at mail.utexas.edu (Michael Hoffman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: Re: making new release In-Reply-To: <20030614175002.GA35289@evostick.agtec.uga.edu> References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> <20030613161235.GN2981@evostick.agtec.uga.edu> <20030614175002.GA35289@evostick.agtec.uga.edu> Message-ID: On Sat, 14 Jun 2003, Brad Chapman wrote: > Yeah, I'm a moron. I would blame my poor documentation instead. :-) > This is cool, but what do you think about the attached diff to > test_GFF.py? I think that is the best plan. It's OK by me if you check this in. Thanks! See you at ISMB (but not unfortunately at BOSC). -- Michael Hoffman The University of Texas at Austin From chapmanb at uga.edu Sat Jun 14 15:33:09 2003 From: chapmanb at uga.edu (Brad Chapman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: Re: making new release In-Reply-To: References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> <20030613161235.GN2981@evostick.agtec.uga.edu> <20030614175002.GA35289@evostick.agtec.uga.edu> Message-ID: <20030614193309.GB35289@evostick.agtec.uga.edu> > > This is cool, but what do you think about the attached diff to > > test_GFF.py? > > I think that is the best plan. It's OK by me if you check this > in. Thanks! No problem. Glad it works for you. All checked in. All tests pass for me right now with the exception of some the import problems I mentioned before (KDTree and LocusLink). Cool. > See you at ISMB (but not unfortunately at BOSC). Great, glad to see at least a few people are going. I'll definitely be there the whole time -- if you have trouble finding me just check the nearest bar :-). Brad From thamelry at vub.ac.be Sat Jun 14 16:59:32 2003 From: thamelry at vub.ac.be (Thomas Hamelryck) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Re: Re: making new release In-Reply-To: <20030614193309.GB35289@evostick.agtec.uga.edu> References: <8DF1411E-9D5D-11D7-805E-000A956845CE@stanford.edu> <20030614193309.GB35289@evostick.agtec.uga.edu> Message-ID: <200306142048.h5EKmmV4008800@nebula.skynet.be> On Saturday 14 June 2003 09:33 pm, Brad Chapman wrote: > No problem. Glad it works for you. All checked in. All tests pass > for me right now with the exception of some the import problems I > mentioned before (KDTree and LocusLink). Cool. KDTree works fine. But: it needs a working C++ compiler, and a complete installation of Numpy (including header files) to compile. It seems that on Solaris it does not compile due to a bug in Distutils, which is not really coping well with C++ on some platforms (ie. missing flags, compiling with gcc instead of g++ etc.). 
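As a rough illustration of what is involved -- the module name, source file and Numeric header path below are placeholders, not the actual setup.py entries -- declaring a C++ extension for distutils looks something like this; the compilation problem described above is that some distutils releases do not reliably switch to the C++ compiler and link flags for such sources on every platform:

    import os, sys
    from distutils.core import Extension

    # Placeholder paths and names, for illustration only.
    numeric_include = os.path.join(sys.prefix, "include",
                                   "python%d.%d" % sys.version_info[:2],
                                   "Numeric")

    kdtree_ext = Extension("Bio.KDTree._CKDTree",              # placeholder name
                           sources=["Bio/KDTree/KDTree.cpp"],  # placeholder source
                           include_dirs=[numeric_include])

    # setup(..., ext_modules=[kdtree_ext], ...)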
Regards, -Thomas
From jchang at smi.stanford.edu Sat Jun 14 20:02:07 2003 From: jchang at smi.stanford.edu (Jeffrey Chang) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release In-Reply-To: <20030613161235.GN2981@evostick.agtec.uga.edu> Message-ID: <9A8979F0-9EC4-11D7-A2E8-000A956845CE@smi.stanford.edu> On Friday, June 13, 2003, at 09:12 AM, Brad Chapman wrote: > KDTree and LocusLink are commented out in setup.py (which gives > import warnings in the test). Is there are a reason for this or > should we try to get those working as well? The LocusLink tests were failing before the last release, so I backed it out before release. Cayte has made some changes, so we can try it again. Distutils was causing some problems compiling KDTree on Solaris. Has anyone been following the the development of distutils and know whether this has been addressed yet? KDTree will probably be an optional module until we get a fix or work-around for this issue. Jeff
From chapmanb at uga.edu Sun Jun 15 13:12:09 2003 From: chapmanb at uga.edu (Brad Chapman) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] making new release In-Reply-To: <9A8979F0-9EC4-11D7-A2E8-000A956845CE@smi.stanford.edu> References: <20030613161235.GN2981@evostick.agtec.uga.edu> <9A8979F0-9EC4-11D7-A2E8-000A956845CE@smi.stanford.edu> Message-ID: <20030615171209.GB41015@evostick.agtec.uga.edu> Hey Jeff, Thomas; Me: > >KDTree and LocusLink are commented out in setup.py (which gives > >import warnings in the test). Is there are a reason for this or > >should we try to get those working as well? Jeff: > The LocusLink tests were failing before the last release, so I backed > it out before release. Cayte has made some changes, so we can try it > again. Okay, I just tried it out and tests seem to pass happily now, so I modified setup.py to install it by default. Woo. Thomas: > KDTree works fine. But: it needs a working C++ compiler, and > a complete installation of Numpy (including header files) to compile. > It seems that on Solaris it does not compile due to a bug in Distutils, which > is not really coping well with C++ on some platforms (ie. missing flags, > compiling with gcc instead of g++ etc.). Jeff: > Distutils was causing some problems compiling KDTree on Solaris. Has > anyone been following the the development of distutils and know whether > this has been addressed yet? KDTree will probably be an optional > module until we get a fix or work-around for this issue. Okee dokee. Leaving it out is completely fine with me if there are compilation problems on some platforms. I was just curious. All tests are now passing for me (python 2.2.3) and everything seems happy. Good stuff.
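One way a setup.py can treat an extension like KDTree as optional is to subclass the build_ext command so that a failed compile produces a warning instead of aborting the whole install. The sketch below only illustrates the pattern; it is not how Biopython's setup.py actually handles it:

    from distutils.command.build_ext import build_ext
    from distutils.errors import CCompilerError, DistutilsExecError

    class optional_build_ext(build_ext):
        """Build C/C++ extensions, but only warn when one of them fails."""
        def build_extension(self, ext):
            try:
                build_ext.build_extension(self, ext)
            except (CCompilerError, DistutilsExecError), why:
                self.warn("Skipping optional extension %s: %s" % (ext.name, why))

    # setup(..., cmdclass={'build_ext': optional_build_ext}, ...)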
Brad From mshonfeld at clarku.edu Tue Jun 17 12:16:53 2003 From: mshonfeld at clarku.edu (Shonfeld, Moran) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] biopython Message-ID: <49284A6B6C57F842A072AD63FC86E3500859E656@muse.clarku.edu> hello, I was wondering if you know of a document which has a complete listing of all of biopython's available methods. I am just starting out with python and biopython and a document such as this would be very helpful for me. also, i need to download from GenBank in FASTA format but am not sure how to do that. I was able to download in the regular genBank format but i can't figure out how to use this information further without writing my own parser for it. if you could help me, i'd be very grateful. thank you very much, --moran shonfeld From idoerg at burnham.org Tue Jun 17 13:03:42 2003 From: idoerg at burnham.org (Iddo Friedberg) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] biopython Message-ID: <3EEF49EE.30209@burnham.org> Hi Moran, Yup, you hit upon a shortcoming of biopython -- the docs are way behind the code. Actually, Brad Chapman has worked very hard to correct the previous situtation, when docs were way, way, waaaaaaaaaaaaaaay behind the code. My suggestion: look in the manual first. In case that does not help, "look under the hood" for the stuff you need, and post to this list. See below for an answer to your specific question. Shonfeld, Moran wrote: > hello, > > I was wondering if you know of a document which has a complete listing of > all of biopython's available methods. I am just starting out with python > and biopython and a document such as this would be very helpful for me. > > also, i need to download from GenBank in FASTA format but am not sure how to > do that. I was able to download in the regular genBank format but i can't > figure out how to use this information further without writing my own parser > for it. if you could help me, i'd be very grateful. > You'll need to use both the GenBank module (for reading a GenBank file from NCBI), and SeqIO, to write in FASTA format. based on the examples in: http://biopython.org/docs/tutorial/Tutorial004.html#toc13 from Bio import genBank from Bio.SeqIO import FASTA record_parser = GenBank.FeatureParser() ncbi_dict = GenBank.NCBIDictionary(parser = record_parser) # the following line obtains a genbank record from NCBI using its ID. # # Look to chapter 3.4 for other options. gb_record = ncbi_dict['6273291'] # 'foo.fasta' has your fasta-formatted sequence. FASTA.FastaWriter(open('foo.fasta','w')).write(gb_record) Best, Iddo -- Iddo Friedberg, Ph.D. The Burnham Institute 10901 N. Torrey Pines Rd. La Jolla, CA 92037 USA Tel: +1 (858) 646 3100 x3516 Fax: +1 (858) 646 3171 http://bioinformatics.ljcrf.edu/~iddo -- Iddo Friedberg, Ph.D. The Burnham Institute 10901 N. Torrey Pines Rd. La Jolla, CA 92037 USA Tel: +1 (858) 646 3100 x3516 Fax: +1 (858) 646 3171 http://bioinformatics.ljcrf.edu/~iddo From gebauer-jung at ice.mpg.de Thu Jun 19 09:38:09 2003 From: gebauer-jung at ice.mpg.de (Steffi Gebauer-Jung) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Problems with Bio.Application.generic_run() Message-ID: <3EF1BCC1.40507@ice.mpg.de> Hello, using the Bio.Application package, I got some trouble. When using generic_run() to run blast, which in this case produced a *very* large output and several warnings, the blast process fell asleep. Running this blast as standalone, there were no problems. Looking around in the Python Reference etc. 
I found that there is possibly a dead lock of the parent and child processes. For now I solved this problem using a recipe from the Python Cookbook as follows: ------------------------------------------------------------------------------------------- def generic_run(commandline): """Run an application with the given commandline. This expects a pre-built commandline that derives from AbstractCommandline, and returns a ApplicationResult object to get results from a program, along with handles of the standard output and standard error. This is a deadlock save version. It was derived from http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52296 Comment: Differnet Approach: Using Tempfiles, Tobias Polzin, 2002/09/03 """ outfile = tempfile.mktemp() errfile = tempfile.mktemp() errorlevel = os.system("( %s ) > %s 2> %s" % (str(commandline),outfile,errfile)) >> 8 r_out = open(outfile,"r").read() os.remove(outfile) e_out = open(errfile,"r").read() os.remove(errfile) return ApplicationResult(commandline), \ File.UndoHandle(StringIO.StringIO(r_out)), \ File.UndoHandle(StringIO.StringIO(e_out)) ------------------------------------------------------------------------------------------- Please could you have a look at this quite central method or tell me a more save one to use instead? Thanks in advance, Steffi Gebauer-Jung From jchang at smi.stanford.edu Fri Jun 20 00:31:29 2003 From: jchang at smi.stanford.edu (Jeffrey Chang) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Fwd: "Go on Safari"--O'Reily UG Program Message-ID: <0FAC053D-A2D8-11D7-B02A-000A956845CE@smi.stanford.edu> Anyone interested in doing this? Please let me know. Jeff Begin forwarded message: > From: Marsee Henon > Date: Thu Jun 19, 2003 3:16:23 PM US/Pacific > To: jchang@SMI.Stanford.EDU > Subject: "Go on Safari"--O'Reily UG Program > > Hello User Group Leader, > > Due to the overwhelming success of our recent "Go On Safari" program, > we have decided to offer all of you User Group leaders another chance > to participate in this cool promotion for the O'Reilly Network Safari > Bookshelf. And once again, Tim O'Reilly will be the grand prize. > > Here's how it works to "Go On Safari": > > 1-Post a Safari announcement > Post a "Go On Safari" banner ad on your user group web site, and/or run > an announcement for your members in a print or email newsletter or on > your email discussion list. The banner ads are available at the Safari > User Group Resource page at http://ug.oreilly.com/banners/safari (After > you have completed this step, please send an email with the URL or copy > of the Safari announcement to marsee@oreilly.com.) > > 2-Pick a reviewer > A member of your user group (you or someone you designate) reviews > Safari. That person will get a free one-year subscription if they > publish a review of Safari within 60 days of opening their account. (It > can be published in your newsletter, on your website or email list, or > in another publication altogether; please send reviews to me.) > > Once you've chosen your Safari reviewer, please send me an email > (marsee@oreilly.com), with the subject heading "Safari Subscription," > listing your reviewer's name, address, email, and user group. If > someone other than your reviewer should receive the banner ad or > announcement text, please include that in your email. Your reviewer > will receive a Safari Welcome email containing their user name and > password within a week. 
> > 3-Have your members send in tips and tricks > We also have an introductory program just for user group members > http://www.oreilly.com/safari/ug. To enter, any of your members who > sign up for our Safari 14-day free trial email (including the official > reviewer) send comments on their experiences, or tips and tricks for > how they used Safari (it only needs to be 2 sentences long, but it may > be longer) to safari_talk@oreilly.com. > ***Submitters should include your UG name in their email.*** > > 4-Winners are picked weekly > Every week someone will be chosen from the tips or comments submitted > to receive fun stuff from O'Reilly (T-shirts, book bags, other > surprises). If a member of your user group is selected, your group > receives free gifts, too. Whatever the individual member receives, your > UG will get one, too, to give away at your next meeting, or use however > you see fit. Recipients--and their comments--will be announced in the > User Group Newsletter. > > 5-Win a visit from Tim O'Reilly > One lucky group will be selected for an on-site speaking visit from our > fearless leader, Tim O'Reilly (http://tim.oreilly.com/). > > About the O'Reilly Network Safari Bookshelf: > If you're not yet familiar with the O'Reilly Network Safari Bookshelf, > it's worth a look. With Safari, you can access over 1,000 technical > books from the top technical book publishers--O'Reilly (of course), > Pearson, and Microsoft Press. There is an extremely cool search > capability that allows you to search through all 1,000+ books for the > answer you need--or even code samples--in minutes. Check it out at: > http://safari.oreilly.com/ > > > We're looking forward to hearing what you think of the O'Reilly Network > Safari Bookshelf > > > Thanks, > > Marsee From ik3 at mail.inf.tu-dresden.de Mon Jun 23 06:30:58 2003 From: ik3 at mail.inf.tu-dresden.de (ik3@mail.inf.tu-dresden.de) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] private methods/vars Message-ID: <2595.149.155.96.1.1056364258.squirrel@www.inf.tu-dresden.de> Hello, at the moment i try to become familiar with biopython. So my question is, why you are using only one underscore for private methods and variables in a class ? Or aren't they private ? Because if you use only one, then it is not really private. Or i'm wrong? Please help. Greetings, Ingo From Yves.Bastide at irisa.fr Mon Jun 23 10:41:54 2003 From: Yves.Bastide at irisa.fr (Yves Bastide) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] private methods/vars In-Reply-To: <2595.149.155.96.1.1056364258.squirrel@www.inf.tu-dresden.de> References: <2595.149.155.96.1.1056364258.squirrel@www.inf.tu-dresden.de> Message-ID: <3EF711B2.5010909@irisa.fr> ik3@mail.inf.tu-dresden.de wrote: > Hello, > > at the moment i try to become familiar with biopython. > > So my question is, why you are using only one underscore for private > methods and variables in a class ? Or aren't they private ? > > Because if you use only one, then it is not really private. It's the python convention (not only for classes, for modules also). __variables are not private either, only mangled with the class' name. > > Or i'm wrong? Please help. 
> > Greetings, > Ingo yves From idoerg at burnham.org Mon Jun 23 12:32:11 2003 From: idoerg at burnham.org (Iddo Friedberg) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] private methods/vars In-Reply-To: <2595.149.155.96.1.1056364258.squirrel@www.inf.tu-dresden.de> References: <2595.149.155.96.1.1056364258.squirrel@www.inf.tu-dresden.de> Message-ID: <3EF72B8B.6030804@burnham.org> It's a Python convention. There are no "real" private attributes in Python. (Attribute: anything that comes after a ".", can be a method or a class variable). Attributes which start with a "_" will not get imported globaly when using the ``from *'' statement, although they may be imported using a specific ``from''. An attribute which starts & ends with a __ (double undescore) is conventionally an overloaded attribute in a inherited class. ik3@mail.inf.tu-dresden.de wrote: > Hello, > > at the moment i try to become familiar with biopython. > > So my question is, why you are using only one underscore for private > methods and variables in a class ? Or aren't they private ? > > Because if you use only one, then it is not really private. > > Or i'm wrong? Please help. > > Greetings, > Ingo > > > > > _______________________________________________ > Biopython-dev mailing list > Biopython-dev@biopython.org > http://biopython.org/mailman/listinfo/biopython-dev > > -- Iddo Friedberg, Ph.D. The Burnham Institute 10901 N. Torrey Pines Rd. La Jolla, CA 92037 USA Tel: +1 (858) 646 3100 x3516 Fax: +1 (858) 646 3171 http://ffas.ljcrf.edu/~iddo From ik3 at mail.inf.tu-dresden.de Mon Jun 23 13:48:42 2003 From: ik3 at mail.inf.tu-dresden.de (ik3@mail.inf.tu-dresden.de) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] private methods/vars Message-ID: <2748.149.155.96.1.1056390522.squirrel@www.inf.tu-dresden.de> Thanks for your feedback! But i'm a little bit confused at the moment. The book that i have about python said that the doubel underscore before a class method means it won't be visible for other classes (thats my definition of private (isn't it ?)). If i tried following things: class TestClass: def __init__(self): return None def __hello1(self): print 'hello1' def _hello2(self): print 'hello2' test = TestClass() then i got the difference between test.__hello1() that i can't call even in the same module and test._hello() that i can call from other classes in the module. Right or Wrong ? And in the language reference there is written that __* means 'Class-private name mangling'. Because i'm german and my english isn't the best it could be that i don't really understand these concept. So if i'm wrong what exactly is the meaning of __* ? For mangling i couldn't find a proper translation. Sorry i fear the thread is a little bit off topic. But eventually you have a answer for me. Greetings Ingo From adalke at mindspring.com Mon Jun 23 10:26:32 2003 From: adalke at mindspring.com (adalke@mindspring.com) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] private methods/vars Message-ID: <358083.1056396352495.JavaMail.nobody@wamui02.slb.atl.earthlink.net> [Ingo, asking about Python's "private" name mangling.] There are three types of privacy in Python, and all are consensual, meaning they are not strictly enforced by the language. The first is "public". That is a variable meant for anyone to use as part of the API. These are stored in variables without a leading underscore. 
[Note: in some cases, like the UserDict thread recently, data which is meant to be private is still stored in a variable without a leading underscore. Often it is hard to tell what is really meant to be public and what is meant to be private.] The second level is variables where the name starts with a single leading underscore. These are meant to be part of the private API and in nearly all cases should not be referenced from external code. These are usually helper methods. [Note: the exception is that a few APIs, most notably the win32 ones from Mark Hammond, use both a single leading and a single trailing underscore (eg, "_reg_clsctx_") to indicate an architecture-specific special variable.] The third level is a Python-supported level of obsfucation. Variables starting with two leading underscores and not ending with two leading underscores undergo a name change. The new variable name is the same as the old variable name but with "_" + class name added to the front of the variable. For example, if the class "Spam" has a variable or method named "__private_var" then the obsfucated name is "_Spam__private_var". This form is not truely private in that other code can easily access it by using the obsfucated name, and indeed can even introspect it using functions like dir() or looking at the class's __dict__. Hence, it is not like what C++ or Java programs might term "private." What it's most useful for is for base classes which need an internal method or flag and want to make it hard for derived classes to accidentally replace that term. Even then, I've rarely needed it. For example, suppose you have a UserDict-like replacement which counts the number of get or __getitem__ calls. It could look like class CountGetDict(UserDict): # or derive from dict now-adays def __init__(self, data = None): UserDict.__init__(self, data) self.__counter = 0 def getCounter(self): return self.__getCounter def get(self, name, default = None): self.__count += 1 return UserDict.get(self, name, default) def __getitem__(self, name): self.__count += 1 return UserDict.__getitem__(self, name) In this example, the __count becomes "_UserDict__count", and it's unlikely that a derived class would use that name by accident. (More worrisome is that a highly derived class might use the same class name and then have a "__count" member, but deep trees like that are rare, so this is mostly theoretical.) In some sense there is one more level of naming. Names which both start and end with two underscores are reserved for Python. You can use them, but you shouldn't. To show this at work, >>> class Spam: ... public_var = 1 ... _semiprivate_var = 2 ... __private_var = 3 ... __python_reserved__ = 4 ... def __init__(self): ... self.instance_var = 9 ... self._instance_semiprivate = 8 ... self.__instance_private = 7 ... 
>>> spam = Spam() >>> dir(spam) ['_Spam__instance_private', '_Spam__private_var', '__doc__', '__init__', '__module__', '__python_reserved__', '_instance_private', '_instance_semiprivate', '_semiprivate_var', 'instance_var', 'public_var'] >>> dir(Spam) ['_Spam__private_var', '__doc__', '__init__', '__module__', '__python_reserved__', '_semiprivate_var', 'public_var'] >>> >>> spam.__dict__ {'instance_var': 9, '_Spam__instance_private': 7, '_instance_semiprivate': 8, '_instance_private': 8} >>> Spam.__dict__ {'_semiprivate_var': 2, '__module__': '__main__', 'public_var': 1, '_Spam__private_var': 3, '__python_reserved__': 4, '__doc__': None, '__init__': } >>> >>> spam._Spam__private_var 3 >>> Andrew dalke@dalkescientific.com From ik3 at mail.inf.tu-dresden.de Tue Jun 24 05:45:50 2003 From: ik3 at mail.inf.tu-dresden.de (ik3@mail.inf.tu-dresden.de) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] Parser Message-ID: <1284.149.155.96.1.1056447950.squirrel@www.inf.tu-dresden.de> Hi, thanks a lot for the explanation about underscore fighting. At the moment i want to write a parser for the blast (text) output from www.arabidopsis.org/Blast. 1. Did anyone know an existing parser for this ? (Would save workingtime :-)) 2. Is there a good introduction about biopythons parsing engine ? I read the sections about parsing in the biopython tutorial/cookbook but now i want to no more. The parser is for my little WebToolViewer and this should be under GPL. Have i problems with using the biopython package ? Because i have seen that you have a special bio-licence but i didn't want to read a licence ;-). So eventually someone has a short answer. Greetings Ingo From bugzilla-daemon at portal.open-bio.org Mon Jun 30 16:36:47 2003 From: bugzilla-daemon at portal.open-bio.org (bugzilla-daemon@portal.open-bio.org) Date: Sat Mar 5 14:43:23 2005 Subject: [Biopython-dev] [Bug 1462] New: building Bio.trie fails Message-ID: <200306302036.h5UKalGn012616@localhost.localdomain> http://bugzilla.bioperl.org/show_bug.cgi?id=1462 Summary: building Bio.trie fails Product: Biopython Version: 1.10 Platform: PC OS/Version: Linux Status: NEW Severity: blocker Priority: P2 Component: Main Distribution AssignedTo: biopython-dev@biopython.org ReportedBy: malex@purdue.edu building 'Bio.trie' extension gcc -DNDEBUG -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fno-strict-aliasing -fPIC -IBio -I/usr/include/python2.2 -c Bio/triemodule.c -o build/temp.linux-i686-2.2/triemodule.o In file included from Bio/triemodule.c:3: Bio/trie.h:12: warning: function declaration isn't a prototype Bio/triemodule.c:606:1: missing terminating " character Bio/triemodule.c:607: error: syntax error before "This" Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:607: error: stray '\' in program Bio/triemodule.c:616:1: missing terminating " character Bio/triemodule.c:595: warning: `trie_methods' defined but not used Bio/triemodule.c:605: warning: `trie__doc__' defined but not used error: command 'gcc' failed with exit status 1 In order to resolve it one simply has to join the lines 606 and 607 because line 607 is not recognised as a string otherwise. When that is done the build goes fine. 
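After rejoining the split string literal, a quick interactive check that the rebuilt module imports and behaves -- assuming Bio.trie's dict-like interface; adjust if the API differs -- might look like:

    >>> from Bio import trie
    >>> t = trie.trie()
    >>> t["hello"] = 5
    >>> t["hello"]
    5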
Regards, Alex. ------- You are receiving this mail because: ------- You are the assignee for the bug, or are watching the assignee.