From chapmanb at uga.edu  Tue May 6 13:22:08 2003
From: chapmanb at uga.edu (Brad Chapman)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] BOSC abstract for Biopython
Message-ID: <20030506172208.GA6728@evostick.agtec.uga.edu>

Hey all;
Since many Biopython developers are facing the monetary difficulties of making it to Australia for this year's Bioinformatics Open Source Conference (BOSC), I've been honored again this year with the privilege of giving a Biopython talk. As always, I am but a humble servant of the Biopython community and want to represent you fine folks as best I can (so I'll try not to show up reeking of cheap gin (although, no promises :-)).

I've put together an abstract for my talk, which I've pasted below. Basically, I'm planning to take the style of stepping through a problem and showing solutions to it using Biopython, which seems to have been a decent technique in talks I've given on Biopython in the past (something useful for people, plus the basics of Biopython thrown in throughout).

I'd appreciate any feedback, good or bad, on it before I submit it to the fine BOSC organizers. I'm already behind on this (my writtens are coming up in two weeks -- man, does my life suck right now) so I'm hoping to get it in as soon as I can. Thanks in advance for any comments!

Brad

Using Biopython for Laboratory Analysis Pipelines

The Biopython project is a distributed collaborative effort to develop Python libraries that address the needs of researchers doing bioinformatics work. Python is an interpreted, object-oriented programming language that we feel is well suited to both beginning and advanced computational researchers. Biopython has been around since 1999, and has a number of active contributors and users who continue its regular development.

One major problem in bioinformatics work is developing analysis pipelines that combine data from a number of different sources. Advanced scientific questions require information from many disparate sources such as web pages, flat text files and relational databases. Additionally, these sources of information are often found in different, incompatible formats. The challenge for many researchers and software developers is to organize this information so that it can be readily queried and examined. This problem is made even more difficult by the varied and rapidly changing interests of the scientists who want to ask questions of the data.

Rather than trying to build specific applications to address these data manipulation problems, Biopython has focused on developing library functionality for manipulating the various data sources. This frees a researcher from having to deal with the low-level details of parsing and data acquisition, helping to abstract the process of data conversion. Additionally, since the lower-level data manipulation code is shared amongst multiple researchers, data format changes or problems with the code are more readily identified and fixed.

This talk will focus on using the Biopython libraries to develop analysis pipelines for scientific research. In addition to demonstrating the uses of Biopython, it will highlight some areas where Biopython offers unique solutions to data manipulation problems. We will identify some of the common challenges the libraries have to deal with, such as attempts to standardize output from multiple programs that perform a similar function, and describe our attempts to deal with these difficulties.
This will provide a foundation for both understanding the Biopython libraries and the development process underlying them.
From MKC at Stowers-Institute.org  Thu May 8 12:28:59 2003
From: MKC at Stowers-Institute.org (Coleman, Michael)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] buglet in BLASTX parsing
Message-ID: 

Parsing by NCBIStandalone.py fails for BLASTX 2.2.5 output, due to a missing 'S2' line. The current code seems to assume that either an S2 line or an immediate EOF will be hit. The 2.2.5 output cleverly defeats this assumption by including an extra newline before the EOF. :-)

One solution is to change the read_and_call to an attempt, like so:

    attempt_read_and_call(uhandle, consumer.blast_cutoff, start='S2')

I'm not sure whether that's the best solution, though.

Mike

Mike Coleman, Scientific Programmer, +1 816 926 4419
Stowers Institute for Biomedical Research
1000 E. 50th St., Kansas City, MO 64110
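For anyone following along without NCBIStandalone.py open, here is a rough, self-contained sketch of the difference between the two helpers being discussed. The ToyHandle class and both function bodies below are simplified stand-ins rather than the real Bio.ParserSupport code, but they show why a strict read_and_call chokes on the extra blank line while the attempt variant just reports a non-match and moves on:

    # Simplified stand-ins (NOT the real Bio.ParserSupport implementations).
    class ToyHandle:
        def __init__(self, lines):
            self._lines = list(lines)
        def peekline(self):
            return self._lines[0] if self._lines else ""
        def readline(self):
            return self._lines.pop(0) if self._lines else ""

    def read_and_call(handle, callback, start=None):
        # Strict: the next line MUST match, otherwise parsing fails.
        line = handle.readline()
        if start is not None and not line.startswith(start):
            raise SyntaxError("expected a %r line, got %r" % (start, line))
        callback(line)

    def attempt_read_and_call(handle, callback, start=None):
        # Tolerant: peek first, consume the line only if it matches.
        if start is not None and not handle.peekline().startswith(start):
            return 0
        callback(handle.readline())
        return 1

    tail = ToyHandle(["\n"])           # BLASTX 2.2.5: trailing blank line, no S2
    try:
        read_and_call(tail, print, start="S2")
    except SyntaxError as err:
        print("strict helper:", err)    # essentially the reported failure

    tail = ToyHandle(["\n"])
    print("attempt helper returned", attempt_read_and_call(tail, print, start="S2"))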
From jchang at jeffchang.com  Thu May 8 13:01:14 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] buglet in BLASTX parsing
In-Reply-To: 
Message-ID: 

Hey Mike,

Thanks for the report. I have submitted a fix to NCBIStandalone.py. The fix I used is to check if the line is blank first. If it is, then the parser does not attempt to parse the S2 line.

    # not in blastx 2.2.1
    # first we make sure we have additional lines to work with, if
    # not then the file is done and we don't have a final S2
    if not is_blank_line(uhandle.peekline(), allow_spaces=1):
        read_and_call(uhandle, consumer.blast_cutoff, start='S2')

Please wait a few hours for the fix to propagate to the anonymous CVS server at cvs.biopython.org. Let me know if there continue to be problems!

Jeff

On Thursday, May 8, 2003, at 09:28 AM, Coleman, Michael wrote:

> Parsing by NCBIStandalone.py fails for BLASTX 2.2.5 output, due to a missing 'S2' line. The current code seems to assume that either an S2 line or an immediate EOF will be hit. The 2.2.5 output cleverly defeats this assumption by including an extra newline before the EOF. :-)
>
> One solution is to change the read_and_call to an attempt, like so:
>
>     attempt_read_and_call(uhandle, consumer.blast_cutoff, start='S2')
>
> I'm not sure whether that's the best solution, though.
>
> Mike
>
> Mike Coleman, Scientific Programmer, +1 816 926 4419
> Stowers Institute for Biomedical Research
> 1000 E. 50th St., Kansas City, MO 64110
>
> _______________________________________________
> Biopython-dev mailing list
> Biopython-dev@biopython.org
> http://biopython.org/mailman/listinfo/biopython-dev

From dag at sonsorol.org  Thu May 8 13:08:51 2003
From: dag at sonsorol.org (Chris Dagdigian)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] will the biopython project be willing to be test case for relocating our webserver and mailing lists?
Message-ID: <3EBA8F23.5000902@sonsorol.org>

Hi folks,

We need to retire "open-bio.org", which is our ancient 500MHz Alpha box running Redhat 6.2. It's old, has just 128M of memory, small disks and is hard to keep secure.

It's also our most important box, as it does all of our mailing lists and a majority of our high-traffic websites.

We have a new machine online at va1220.open-bio.org that is meant to replace the alphaserver. It has Redhat 8.0, lots of memory, a pair of 80gig disks etc.

Once the machine gets renamed to "portal.open-bio.org" and I can generate some SSL certificates, I want to start moving projects onto the new box. I've spent a ton of time hand-tuning sendmail and building in anti-spam and anti-virus scanning.

This is complicated because the Mailman listserv is so tightly tied to the web that we pretty much have to move email, mailman and the website all at once to avoid disrupting things.

I'd like to use biopython as the first test case because you folks have a simple website (looks like static HTML plus an old wiki) and three low-traffic email lists. Plus Brad/Andrew/JeffC all know their way around our servers and can quickly notice problems if any occur.

Advantages to the move:

o website will be able to be edited via WebDAV, which opens up opportunities to do stuff with DreamWeaver etc. It should be far easier for you to work on your website after the move
o Lots of anti-spam stuff embedded within sendmail, including RBL blackhole checks, Vipul's Razor, SpamAssassin and MIMEDefang
o Virus scanning on all inbound emails for the first time via the McAfee virus scanner
o SSL access for mailing lists and admin tasks
o SSL/TLS encryption for sendmail and email
o Faster machine, Linux on Intel, lots of disk space
o Possibility of IMAP mailboxes for developers

This _DOES NOT_ affect your CVS or source code -- just the machine that hosts your mailing lists and website.

You guys up for this? Let me know.

-Chris
Open-Bio.org

--
Chris Dagdigian, BioTeam Inc.
- Independent Bio-IT & Informatics consulting
Office: 617-666-6454, Mobile: 617-877-5498, Fax: 425-699-0193
PGP KeyID: 83D4310E  Yahoo IM: craffi  Web: http://bioteam.net

From jchang at jeffchang.com  Thu May 8 13:24:47 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] will the biopython project be willing to be test case for relocating our webserver and mailing lists?
In-Reply-To: <3EBA8F23.5000902@sonsorol.org>
Message-ID: 

No problem with me.
Brad has been working on a new version of the website that is generally easier to manage and better integrated with the blog. Is this a good time to switch to that? Or do you want to see if we can get a straight migration first?

Jeff

On Thursday, May 8, 2003, at 10:08 AM, Chris Dagdigian wrote:

>
> Hi folks,
>
> We need to retire "open-bio.org", which is our ancient 500MHz Alpha box running Redhat 6.2. It's old, has just 128M of memory, small disks and is hard to keep secure.
>
> It's also our most important box, as it does all of our mailing lists and a majority of our high-traffic websites.
>
> We have a new machine online at va1220.open-bio.org that is meant to replace the alphaserver. It has Redhat 8.0, lots of memory, a pair of 80gig disks etc.
>
> Once the machine gets renamed to "portal.open-bio.org" and I can generate some SSL certificates, I want to start moving projects onto the new box. I've spent a ton of time hand-tuning sendmail and building in anti-spam and anti-virus scanning.
>
> This is complicated because the Mailman listserv is so tightly tied to the web that we pretty much have to move email, mailman and the website all at once to avoid disrupting things.
>
> I'd like to use biopython as the first test case because you folks have a simple website (looks like static HTML plus an old wiki) and three low-traffic email lists. Plus Brad/Andrew/JeffC all know their way around our servers and can quickly notice problems if any occur.
>
> Advantages to the move:
>
> o website will be able to be edited via WebDAV, which opens up opportunities to do stuff with DreamWeaver etc. It should be far easier for you to work on your website after the move
>
> o Lots of anti-spam stuff embedded within sendmail, including RBL blackhole checks, Vipul's Razor, SpamAssassin and MIMEDefang
>
> o Virus scanning on all inbound emails for the first time via the McAfee virus scanner
>
> o SSL access for mailing lists and admin tasks
>
> o SSL/TLS encryption for sendmail and email
>
> o Faster machine, Linux on Intel, lots of disk space
>
> o Possibility of IMAP mailboxes for developers
>
> This _DOES NOT_ affect your CVS or source code -- just the machine that hosts your mailing lists and website.
>
> You guys up for this? Let me know.
>
> -Chris
> Open-Bio.org
>
> --
> Chris Dagdigian, BioTeam Inc. - Independent Bio-IT & Informatics consulting
> Office: 617-666-6454, Mobile: 617-877-5498, Fax: 425-699-0193
> PGP KeyID: 83D4310E  Yahoo IM: craffi  Web: http://bioteam.net
>
> _______________________________________________
> Biopython-dev mailing list
> Biopython-dev@biopython.org
> http://biopython.org/mailman/listinfo/biopython-dev

From dalke at dalkescientific.com  Thu May 8 13:32:46 2003
From: dalke at dalkescientific.com (Andrew Dalke)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] will the biopython project be willing to be test case for relocating our webserver and mailing lists?
In-Reply-To: 
Message-ID: <14DAB70E-817B-11D7-B891-000393C92466@dalkescientific.com>

Jeff:
> No problem with me.

Ditto.

Andrew

From chapmanb at uga.edu  Thu May 8 13:37:52 2003
From: chapmanb at uga.edu (Brad Chapman)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] will the biopython project be willing to be test case for relocating our webserver and mailing lists?
In-Reply-To: 
References: <3EBA8F23.5000902@sonsorol.org>
Message-ID: <20030508173752.GA55801@evostick.agtec.uga.edu>

Chris:
> > I'd like to use biopython as the first test case because you folks have a simple website (looks like static HTML plus an old wiki) and three low-traffic email lists. Plus Brad/Andrew/JeffC all know their way around our servers and can quickly notice problems if any occur.

Jeff:
> Brad has been working on a new version of the website that is generally easier to manage and better integrated with the blog. Is this a good time to switch to that? Or do you want to see if we can get a straight migration first?

This is also cool with me. I am willing to do the website switching this weekend if that works with everyone (Sunday evening would work well for me). The new page is a little more complicated since it's not straight HTML, but it also doesn't need the wiki (I don't really think there is much reason to keep that around), so it's easier in some ways. Plus I'm willing to do the work of getting it up and running, provided I can get sudo access or whatever. So this makes it lots easier for ya'll.

So, yeah: +1 for moving to the new server and +1 for using the new website.

Brad

From MKC at Stowers-Institute.org  Thu May 8 14:45:27 2003
From: MKC at Stowers-Institute.org (Coleman, Michael)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] blastpgp parsing buglet
Message-ID: 

Parsing by NCBIStandalone.py fails for BLASTP 2.2.5 output. This is the partial output that trips the problem:

    gi|23099742|ref|NP_693208.1| ornithine aminotransferase [Oceanob...   430   e-119
    gi|16081241|ref|NP_393547.1| L-2, 4-diaminobutyrate:2-ketoglutar...   430   e-119

    Sequences not found previously or not previously below threshold:

    >gi|23466947|gb|ZP_00122533.1| hypothetical protein [Haemophilus somnus 129PT]
              Length = 432

     Score =  591 bits (1524), Expect = e-167
     Identities = 191/420 (45%), Positives = 291/420 (69%), Gaps = 7/420 (1%)

The code expects to see a 'CONVERGED' but none is given here. One possible fix would be to also look for a line beginning with '>', like so:

    # Read the descriptions and the following blank lines.
    read_and_call_while(uhandle, consumer.noevent, blank=1)
    l = safe_peekline(uhandle)
    if l[:9] != 'CONVERGED' and l[:1] != '>':
        read_and_call_until(uhandle, consumer.description, blank=1)
        read_and_call_while(uhandle, consumer.noevent, blank=1)

Mike

Mike Coleman, Scientific Programmer, +1 816 926 4419
Stowers Institute for Biomedical Research
1000 E. 50th St., Kansas City, MO 64110
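As an aside for readers without the parser source handy, the heart of the proposed fix is just a peek-and-branch test on the next line. Here is a tiny, self-contained illustration; the helper name below is invented for this example, and only the two string tests come from the patch:

    # Only the slicing tests come from the patch; the function name is made up.
    def more_descriptions_follow(next_line):
        """True if the peeked line is neither a CONVERGED marker nor an alignment."""
        return next_line[:9] != 'CONVERGED' and next_line[:1] != '>'

    print(more_descriptions_follow('gi|23099742|ref|NP_693208.1| ornithine ...'))   # True
    print(more_descriptions_follow('CONVERGED!'))                                   # False
    print(more_descriptions_follow('>gi|23466947|gb|ZP_00122533.1| ...'))           # False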
From jchang at jeffchang.com  Fri May 9 02:02:48 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] blastpgp parsing buglet
In-Reply-To: 
Message-ID: 

Great patch! I've committed it.

Thanks,
Jeff

On Thursday, May 8, 2003, at 11:45 AM, Coleman, Michael wrote:

> Parsing by NCBIStandalone.py fails for BLASTP 2.2.5 output. This is the partial output that trips the problem:
>
>     gi|23099742|ref|NP_693208.1| ornithine aminotransferase [Oceanob...   430   e-119
>     gi|16081241|ref|NP_393547.1| L-2, 4-diaminobutyrate:2-ketoglutar...   430   e-119
>
>     Sequences not found previously or not previously below threshold:
>
>     >gi|23466947|gb|ZP_00122533.1| hypothetical protein [Haemophilus somnus 129PT]
>               Length = 432
>
>      Score =  591 bits (1524), Expect = e-167
>      Identities = 191/420 (45%), Positives = 291/420 (69%), Gaps = 7/420 (1%)
>
> The code expects to see a 'CONVERGED' but none is given here. One possible fix would be to also look for a line beginning with '>', like so:
>
>     # Read the descriptions and the following blank lines.
>     read_and_call_while(uhandle, consumer.noevent, blank=1)
>     l = safe_peekline(uhandle)
>     if l[:9] != 'CONVERGED' and l[:1] != '>':
>         read_and_call_until(uhandle, consumer.description, blank=1)
>         read_and_call_while(uhandle, consumer.noevent, blank=1)
>
> Mike
>
> Mike Coleman, Scientific Programmer, +1 816 926 4419
> Stowers Institute for Biomedical Research
> 1000 E. 50th St., Kansas City, MO 64110
>
> _______________________________________________
> Biopython-dev mailing list
> Biopython-dev@biopython.org
> http://biopython.org/mailman/listinfo/biopython-dev

From jchang at smi.stanford.edu  Fri May 9 20:50:50 2003
From: jchang at smi.stanford.edu (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Fwd: User Groups Can Win a Pass to an O'Reilly Conference
Message-ID: <72375DBE-8281-11D7-BFF4-000A956845CE@smi.stanford.edu>

Anyone interested in going to this conference? Please let me know if you would be interested in trying for a pass.

Jeff

Begin forwarded message:

> From: Marsee Henon
> Date: Fri May 9, 2003 3:30:22 PM US/Pacific
> To: jchang@SMI.Stanford.EDU
> Subject: User Groups Can Win a Pass to an O'Reilly Conference
>
> Enter your group to win one pass to:
> O'Reilly Open Source Convention
> Portland Marriott Downtown, Portland, OR
> July 7-11, 2003
> http://conferences.oreilly.com/oscon/
>
> The lucky winning group will be given a "conference sessions only" pass to attend the conference, valued at $1095.00. O'Reilly assumes that if your group is the lucky winner, as the user group representative, you will distribute the winning pass to a group member, using a method appropriate for your group (drawing, raffle, etc.)
>
> The winning pass* includes:
> - Access to all conference sessions July 9, 10, 11 including keynotes*
> - Admission to Exhibit Hall
> - Admission to all on-site evening events
> - All conference handouts (excluding tutorial materials)
>
> *Pass does not include tutorial fees, lodging, food, and transportation.
>
> Please email your entry to Marsee Henon at marsee@oreilly.com. Deadline for entries is May 21, 2003. In the subject line of your email, please say "Raffle Entry." The winning group leader will be contacted on May 22, 2003 by email--unless a phone call is requested (phone number must be provided).
>
> Only one attendee per pass will be allowed. Two people cannot share the same pass to attend the conference on separate days.
>
> For more information about the O'Reilly Open Source Convention, please visit our web site:
> http://conferences.oreilly.com/oscon/
>
> Early bird registration ends May 23, 2003. O'Reilly User Group Program members receive 20% off conference session and tutorial fees. Register before May 23, and receive 20% off already discounted "Early Bird" pricing. After May 23, the 20% discount will be applied to standard pricing. When registering online, please enter the discount code: DSUG, where it says: "If you received a discount code, please enter it here." If registering by phone, please give the customer service representative the DSUG discount code.
>
> If the winner of the pass has already registered for the conference, the winner will be reimbursed for conference session fees paid.
>
> If you would like brochures for your members, I'd be happy to ship them.
>
> Now for the Rules and Regulations--I apologize for the length, they're really quite simple:
>
> ***These rules constitute the official rules of this raffle. By participating in the raffle, entrants agree to be bound by the official rules and the decision of the judges, which are final and binding in all respects.***
>
> 1. Entry:
> No purchase is necessary to enter the raffle. To enter the raffle, please email Marsee Henon at marsee@oreilly.com and tell her to enter your group for the raffle. Entries must be received at O'Reilly & Associates by May 21, 2003. Limit one entry per group, per email address. O'Reilly & Associates and its agents are not responsible for lost, late, misdirected, incomplete, illegible or damaged email that results from any source. By entering, entrant agrees to abide by and be bound by the Official Rules. O'Reilly & Associates reserves the right to cancel the raffle if it becomes technically corrupted.
>
> 2. Eligibility:
> The O'Reilly Raffle is open to all who are 18 years of age or older, and reside in the U.S. or Canada, except employees of O'Reilly & Associates. Anyone else directly involved in this raffle is ineligible to participate. Raffle void where prohibited by law. All federal, state, and local laws apply.
>
> 3. Selection and Notification:
> The winners will be chosen at random from all eligible entries submitted by May 21, 2003. O'Reilly will notify winners by email or phone. A prize not claimed by June 30, 2003 will not be awarded. The odds of winning depend on the number of eligible entries received.
>
> 4. Other Rules:
> a) The prize is nontransferable and non-endorsable; no cash or other substitutions will be offered. All federal, state, and local taxes and delivery charges are the sole responsibility of the winner.
>
> b) The winner consents to the use of his/her name and/or likeness for publicity, advertising, and commercial purposes, in perpetuity, without further compensation unless prohibited by law. O'Reilly & Associates and its agents are not responsible for lost entries, or for and availability of information or for Internet, for whatever reason. Entries will be disqualified if O'Reilly & Associates determines, at its sole discretion, that entrants have attempted to circumvent the terms and conditions of these rules. All decisions by O'Reilly & Associates are final.
>
> c) By participating in this raffle, entrants agree to release and hold O'Reilly & Associates (and their employees, agents, representatives, or affiliated companies) harmless from any and all losses, damages, rights, claims, and actions of any kind in connection with the prize, including, without limitation, personal injuries, death or property damage, and claims based on publicity rights, defamation, or invasion of privacy.
>
> d) Entrant also agrees that in no event shall O'Reilly & Associates or its agents be liable to entrant or any other person for any damage, injuries or losses arising out of the introduction of any virus, bug or software malfunction resulting from participation in this raffle, or for any damage, injuries or losses arising in connection with the prize.
>
> e) O'Reilly & Associates reserves the right to modify the rules of the raffle in any way or at any time, as long as reasonable notice is given.
>
> f) To receive the name of the winner, or a copy of the Official Rules, send a self-addressed, stamped envelope to:
> O'Reilly Open Source Convention UG Raffle,
> c/o O'Reilly & Associates
> 1005 Gravenstein Highway North, Sebastopol, CA 95472,
> Attn: Marsee Henon
> Post-marked prior to the close of the raffle (May 21, 2003).

From dag at sonsorol.org  Sun May 11 09:57:41 2003
From: dag at sonsorol.org (chris dagdigian)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] test of biopython mailing list after server and DNS changes...
Message-ID: <3EBE56D5.4090508@sonsorol.org>

Will this get to the new machine for delivery? Will everything work? How exciting!

-Chris

From andreas.kuntzagk at mdc-berlin.de  Thu May 15 04:57:00 2003
From: andreas.kuntzagk at mdc-berlin.de (Andreas Kuntzagk)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] BOSC abstract for Biopython
In-Reply-To: <20030506172208.GA6728@evostick.agtec.uga.edu>
References: <20030506172208.GA6728@evostick.agtec.uga.edu>
Message-ID: <1052988877.18982.3.camel@sulawesi>

Hi,

On Tue, 2003-05-06 at 19:22, Brad Chapman wrote:

> Hey all;
> Since many Biopython developers are facing the monetary difficulties of making it to Australia for this year's Bioinformatics Open Source Conference (BOSC), I've been honored again this year with the privilege of giving a Biopython talk. As always, I am but a humble servant of the Biopython community and want to represent you fine folks as best I can (so I'll try not to show up reeking of cheap gin (although, no promises :-)).

Will there be anything else at BOSC relating to Biopython/Python?

Andreas

From jchang at smi.stanford.edu  Mon May 19 00:44:25 2003
From: jchang at smi.stanford.edu (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] fiddling with the biopython URL
Message-ID: <91516B70-89B4-11D7-A551-000A956845CE@smi.stanford.edu>

I've been messing around with the Apache rewrite rules to prettify the URL. Currently, http://www.biopython.org gets rewritten as http://www.biopython.org/biopython, which gets rewritten to the Quixote source. I do not like the extra level of indirection, because it makes the URLs harder to remember and type. Thus, I have changed the rewrite rule so that the website is served from http://www.biopython.org .

However, this leads to one bad thing, which is that all pages in our website are now directed to Quixote. Since it would be convenient to put static content up occasionally, I have made a rule so that http://www.biopython.org/static/ does not get rewritten to Quixote. Therefore, we can drop static content in:

    /home/websites/biopython.org/websites/static

Brad, did you set up the rewrite rules originally? Please let me know if this is a bad thing to do, if it will break something in the website, and if I should change it back.

Jeff

From grouse at mail.utexas.edu  Mon May 19 19:09:24 2003
From: grouse at mail.utexas.edu (Michael Hoffman)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Developer CVS access
Message-ID: 

Can someone tell me what the current method is to access the developer CVS? In all the server movements and stuff I seem to have gotten confused. ssh just blocks when connecting to dev.open-bio.org.
Thank you,
--
Michael Hoffman
The University of Texas at Austin

From Alainboule at aol.com  Tue May 20 09:42:37 2003
From: Alainboule at aol.com (Alainboule@aol.com)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Contribution
Message-ID: <29532C12.6BD7CCCB.0AA7FE78@aol.com>

Hi,

I would like to contribute to Biopython development, but I don't know where to start. I am willing to do this in my spare time in order to gain some experience in bioinformatics. I have no previous experience in that field, but I have been developing software for industry for many years. I know several languages and OSes, such as C/C++, Java, VB, UNIX, Windows, RTOS, ... . Of course, I am ready to teach myself other languages if necessary. Could somebody give me some advice on getting started?

Thanks in advance

Alain Boule

From grouse at mail.utexas.edu  Wed May 21 16:39:29 2003
From: grouse at mail.utexas.edu (Michael Hoffman)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Bio.PubMed find_related additional keyword options
Message-ID: 

The following will allow extra keyword options to find_related() as shown in the new docstring example. I also added a unit test for this functionality (not shown).

OK to check in?
--
Michael Hoffman
The University of Texas at Austin

Index: PubMed.py
===================================================================
RCS file: /home/repository/biopython/biopython/Bio/PubMed.py,v
retrieving revision 1.3
diff -u -r1.3 PubMed.py
--- PubMed.py	24 Sep 2002 06:34:27 -0000	1.3
+++ PubMed.py	21 May 2003 20:38:21 -0000
@@ -199,12 +199,20 @@
             break
     return ids
 
-def find_related(pmid):
-    """find_related(pmid) -> ids
+def find_related(pmid, **keywds):
+    """find_related(pmid, **keywds) -> ids
 
     Search PubMed for a list of citations related to pmid.  pmid can
     be a PubMed ID, a MEDLINE UID, or a list of those.
 
+    keywds - additional keywords to be passed to ELink
+
+    for example:
+
+    >>> find_related("11812492", mindate="2003", datetype="pdat")
+
+    will find citations related to pmid 11835276 published in 2003
+    or later
     """
     class ResultParser(sgmllib.SGMLParser):
         # Parse the ID's out of the HTML-formatted page that PubMed
@@ -246,7 +254,7 @@
     parser = ResultParser()
     if type(pmid) is type([]):
         pmid = string.join(pmid, ',')
-    h = NCBI.elink(dbfrom='pubmed', id=pmid)
+    h = NCBI.elink(dbfrom='pubmed', id=pmid, **keywds)
     parser.feed(h.read())
     return parser.ids
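Seen from the caller's side, this is roughly how the extended function would be used once the change is checked in. It assumes the 2003-era Bio.PubMed module and a live connection to NCBI's ELink service, and the date arguments are simply the ones from the new docstring:

    # Hypothetical caller of the patched find_related(); needs network access
    # to NCBI and the Bio.PubMed module of this era.
    from Bio import PubMed

    # Extra keyword arguments such as mindate/datetype now pass straight
    # through to NCBI.elink() instead of being rejected.
    related = PubMed.find_related("11812492", mindate="2003", datetype="pdat")
    print(len(related), "related citations")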
From jchang at jeffchang.com  Wed May 21 16:58:06 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Bio.PubMed find_related additional keyword options
In-Reply-To: 
Message-ID: 

Can you restrict the keywords that can be passed to the function? e.g.

    def find_related(pmid, mindate=None, datetype=None, XXX):

Once Andrew's EUtils.py matures, we can change the underlying code to use that package, which should be more robust than the current code. If the keywords are explicitly defined, it will be easier to make that change, because we will know what the user is passing in.

Also, having explicit keywords will insulate find_related from changes in EUtils. If the EUtils parameters change, we can handle the changes within the find_related function, rather than in every function that calls it.

Jeff

On Wednesday, May 21, 2003, at 01:39 PM, Michael Hoffman wrote:

> The following will allow extra keyword options to find_related() as shown in the new docstring example. I also added a unit test for this functionality (not shown).
>
> OK to check in?
> --
> Michael Hoffman
> The University of Texas at Austin
>
> Index: PubMed.py
> ===================================================================
> RCS file: /home/repository/biopython/biopython/Bio/PubMed.py,v
> retrieving revision 1.3
> diff -u -r1.3 PubMed.py
> --- PubMed.py	24 Sep 2002 06:34:27 -0000	1.3
> +++ PubMed.py	21 May 2003 20:38:21 -0000
> @@ -199,12 +199,20 @@
>              break
>      return ids
> 
> -def find_related(pmid):
> -    """find_related(pmid) -> ids
> +def find_related(pmid, **keywds):
> +    """find_related(pmid, **keywds) -> ids
> 
>      Search PubMed for a list of citations related to pmid.  pmid can
>      be a PubMed ID, a MEDLINE UID, or a list of those.
> 
> +    keywds - additional keywords to be passed to ELink
> +
> +    for example:
> +
> +    >>> find_related("11812492", mindate="2003", datetype="pdat")
> +
> +    will find citations related to pmid 11835276 published in 2003
> +    or later
>      """
>      class ResultParser(sgmllib.SGMLParser):
>          # Parse the ID's out of the HTML-formatted page that PubMed
> @@ -246,7 +254,7 @@
>      parser = ResultParser()
>      if type(pmid) is type([]):
>          pmid = string.join(pmid, ',')
> -    h = NCBI.elink(dbfrom='pubmed', id=pmid)
> +    h = NCBI.elink(dbfrom='pubmed', id=pmid, **keywds)
>      parser.feed(h.read())
>      return parser.ids
>
> _______________________________________________
> Biopython-dev mailing list
> Biopython-dev@biopython.org
> http://biopython.org/mailman/listinfo/biopython-dev

From dalke at dalkescientific.com  Wed May 21 16:57:38 2003
From: dalke at dalkescientific.com (Andrew Dalke)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] Bio.PubMed find_related additional keyword options
In-Reply-To: 
Message-ID: 

Michael Hoffman wrote:

> The following will allow extra keyword options to find_related() as shown in the new docstring example. I also added a unit test for this functionality (not shown).

Also, the EUtils package I wrote can handle this, using NCBI's new web interface. It's on my dalkescientific.com pages. Brad has promised to mention EUtils in the BOSC Biopython talk if I integrate it with Biopython, in order to encourage me to move it over. Just gotta get the time to fix up a few things .... :)

Andrew

From chapmanb at uga.edu  Wed May 21 17:16:15 2003
From: chapmanb at uga.edu (Brad Chapman)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] fiddling with the biopython URL
In-Reply-To: <91516B70-89B4-11D7-A551-000A956845CE@smi.stanford.edu>
References: <91516B70-89B4-11D7-A551-000A956845CE@smi.stanford.edu>
Message-ID: <20030521211615.GA27461@evostick.agtec.uga.edu>

Hey Jeff;
Sorry to be so slow. In the middle of writtens this week. Gack. Wednesday being finished means more than halfway done though, so I've got a little life in me.

> I've been messing around with the Apache rewrite rules to prettify the URL. Currently, http://www.biopython.org gets rewritten as http://www.biopython.org/biopython, which gets rewritten to the quixote source. I do not like the extra level of indirection, because it makes the URLs harder to remember and type.

Agreed. I haven't come up with a good solution for this yet, although I was thinking about it. I'm really glad you came up with something.

> Since it would be convenient to put static content up occasionally, I have made a rule so that http://www.biopython.org/static/ does not get rewritten to quixote. Therefore, we can drop static content in:
> /home/websites/biopython.org/websites/static
Cool. The one problem I thought would happen was that mailman links would get messed up as well, but it looks like you fixed that too. Awesome. Thanks.

By the way, I have the static directories files and docs serving out information through Quixote's mechanism, so you can also put stuff in /home/websites/biopython.org/files/ and it'll get mapped to http://www.biopython.org/files. Right now that's where I keep the download stuff (and documentation is in docs), but there's no reason other things couldn't go in there as well. Just another option.

> Brad, did you set up the rewrite rules originally? Please let me know if this is a bad thing to do, if it will break something in the website, and if I should change it back.

No, it's a very good thing. Everything looks smoov. Thanks again for fixing that!

Brad

From jchang at jeffchang.com  Wed May 21 20:49:56 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:22 2005
Subject: [Biopython-dev] fiddling with the biopython URL
In-Reply-To: <20030521211615.GA27461@evostick.agtec.uga.edu>
Message-ID: <4EA36105-8BEF-11D7-861D-000A956845CE@jeffchang.com>

>> Since it would be convenient to put static content up occasionally, I have made a rule so that http://www.biopython.org/static/ does not get rewritten to quixote. Therefore, we can drop static content in:
>> /home/websites/biopython.org/websites/static
>
> Cool. The one problem I thought would happen was that mailman links would get messed up as well, but it looks like you fixed that too. Awesome. Thanks.
>
> By the way, I have the static directories files and docs serving out information through Quixote's mechanism, so you can also put stuff in /home/websites/biopython.org/files/ and it'll get mapped to http://www.biopython.org/files. Right now that's where I keep the download stuff (and documentation is in docs), but there's no reason other things couldn't go in there as well. Just another option.

Ah, yes, that's right. I had forgotten about those and didn't handle them in my Apache rules. However, those directories magically work. Now I see that they're handled by Quixote. I need to spend some time trying to figure out how that works.

Jeff

From jchang at jeffchang.com  Wed May 21 23:05:55 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Contribution
In-Reply-To: <29532C12.6BD7CCCB.0AA7FE78@aol.com>
Message-ID: <4D9EE0DB-8C02-11D7-9E08-000A956845CE@jeffchang.com>

Since this comes up occasionally, I have put together a webpage describing possible projects and contributions:

    http://www.biopython.org/docs/developer/contrib.html

If anyone else has stuff to add to the Projects list, please let me know!

Jeff

On Tuesday, May 20, 2003, at 06:42 AM, Alainboule@aol.com wrote:

> Hi,
> I would like to contribute to Biopython development, but I don't know where to start. I am willing to do this in my spare time in order to gain some experience in bioinformatics. I have no previous experience in that field, but I have been developing software for industry for many years. I know several languages and OSes, such as C/C++, Java, VB, UNIX, Windows, RTOS, ... . Of course, I am ready to teach myself other languages if necessary. Could somebody give me some advice on getting started?
>
> Thanks in advance
>
> Alain Boule
> _______________________________________________
> Biopython-dev mailing list
> Biopython-dev@biopython.org
> http://biopython.org/mailman/listinfo/biopython-dev

From grouse at mail.utexas.edu  Wed May 21 23:51:58 2003
From: grouse at mail.utexas.edu (Michael Hoffman)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Re: Bio.PubMed find_related additional keyword options
In-Reply-To: 
Message-ID: 

On Wed, 21 May 2003, Jeffrey Chang wrote:

> Can you restrict the keywords that can be passed to the function? e.g.
> def find_related(pmid, mindate=None, datetype=None, XXX):
>
> Once Andrew's EUtils.py matures, we can change the underlying code to use that package, which should be more robust than the current code. If the keywords are explicitly defined, it will be easier to make that change, because we will know what the user is passing in.

Naaah, I'll just use EUtils instead... Thanks for the pointer.

However, it is useful to have the ResultsParser from Bio.PubMed.find_related exposed, since that functionality is not yet in EUtils.
--
Michael Hoffman
The University of Texas at Austin

From dalke at dalkescientific.com  Thu May 22 02:54:17 2003
From: dalke at dalkescientific.com (Andrew Dalke)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Re: Bio.PubMed find_related additional keyword options
In-Reply-To: 
Message-ID: <34C0682E-8C22-11D7-A8E6-000393C92466@dalkescientific.com>

Michael Hoffman:
> Naaah, I'll just use EUtils instead... Thanks for the pointer.

Let me know what improvements are needed for a final release.

> However, it is useful to have the ResultsParser from Bio.PubMed.find_related exposed since that functionality is not yet in EUtils.

Like that one, thanks!

(There are a few other things, like I learned today that EUtils now has its own machine name - that was announced on NCBI's EUtils mailing list. These need to be changed in the EUtils code ... Someone want to pay for that? :)

Andrew

From chapmanb at uga.edu  Thu May 22 02:22:32 2003
From: chapmanb at uga.edu (Brad Chapman)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] fiddling with the biopython URL
In-Reply-To: <4EA36105-8BEF-11D7-861D-000A956845CE@jeffchang.com>
References: <20030521211615.GA27461@evostick.agtec.uga.edu> <4EA36105-8BEF-11D7-861D-000A956845CE@jeffchang.com>
Message-ID: <20030522062232.GA590@66-190-91-31.charterga.net>

[docs and files directories]

> Ah, yes, that's right. I had forgotten about those and didn't handle them in my Apache rules. However, those directories magically work. Now I see that they're handled by Quixote. I need to spend some time trying to figure out how that works.

It's pretty easy to add static directories using Quixote. The documentation for it is at:

    http://www.mems-exchange.org/software/quixote/doc/static-files.html

and in the website the code is in BioWebsite/__init__.py:

    docs = StaticDirectory("/home/websites/biopython.org/docs")
    files = StaticDirectory("/home/websites/biopython.org/files")

I also specify the stylesheet here. It would be pretty easy to add other directories here to your liking (although mailman directories didn't seem to work when I did that -- there is probably too much magic going on for our own good; the rewrite rules seem to be the way to go for that).

You-know-it-must-be-exam-week-when-I'm-up-this-early-ly yr's,
Brad
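For anyone curious how that BioWebsite/__init__.py snippet fits together, here is a minimal sketch based on the Quixote static-files documentation linked above. The import path and the _q_exports list are assumptions about the Quixote 1.x API of the time, not a copy of the real site code; the two directory paths are the ones from Brad's message:

    # Minimal sketch of a Quixote 1.x package __init__.py serving static trees.
    # quixote.util.StaticDirectory and _q_exports are assumed from that era's API.
    from quixote.util import StaticDirectory

    _q_exports = ["docs", "files"]   # names the publisher may traverse to

    docs = StaticDirectory("/home/websites/biopython.org/docs")
    files = StaticDirectory("/home/websites/biopython.org/files")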
From grouse at mail.utexas.edu  Thu May 22 14:54:46 2003
From: grouse at mail.utexas.edu (Michael Hoffman)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Re: Re: Bio.PubMed find_related additional keyword options
In-Reply-To: <34C0682E-8C22-11D7-A8E6-000393C92466@dalkescientific.com>
References: <34C0682E-8C22-11D7-A8E6-000393C92466@dalkescientific.com>
Message-ID: 

On Thu, 22 May 2003, Andrew Dalke wrote:

> Michael Hoffman:
> > Naaah, I'll just use EUtils instead... Thanks for the pointer.
>
> Let me know what improvements are needed for a final release.

Some suggestions:

1) Print debug info (DUMP_URL etc.) to stderr, not stdout. I can supply a patch if you want.
2) Include the pubmed DTD and necessary includes with the EUtils distribution.
3) Use Bio.WWW.RequestLimiter. I'll volunteer a patch.
4) Documentation. ;-) I unfortunately can't afford to pay for this, but if the README explained briefly what each of the modules did and said that the bulk of the documentation was in their docstrings, that would be a great start.

> > However, it is useful to have the ResultsParser from Bio.PubMed.find_related exposed since that functionality is not yet in EUtils.
>
> Like that one, thanks!

I can supply a quick patch to BioPython to do this. Jeff?

> (There are a few other things, like I learned today that EUtils now has its own machine name - that was announced on NCBI's EUtils mailing list. These need to be changed in the EUtils code ... Someone want to pay for that? :)

I'd like to but can't, but I will do it for free if you want. :-)
--
Michael Hoffman
The University of Texas at Austin

From jchang at jeffchang.com  Thu May 22 15:05:32 2003
From: jchang at jeffchang.com (Jeffrey Chang)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Re: Re: Bio.PubMed find_related additional keyword options
In-Reply-To: 
Message-ID: <5C4C390C-8C88-11D7-9F69-000A956845CE@jeffchang.com>

On Thursday, May 22, 2003, at 11:54 AM, Michael Hoffman wrote:

> On Thu, 22 May 2003, Andrew Dalke wrote:
>
> > Michael Hoffman:
> > > However, it is useful to have the ResultsParser from Bio.PubMed.find_related exposed since that functionality is not yet in EUtils.
> >
> > Like that one, thanks!
>
> I can supply a quick patch to BioPython to do this. Jeff?

Sounds good to me!

Jeff

From grouse at mail.utexas.edu  Fri May 23 12:57:41 2003
From: grouse at mail.utexas.edu (Michael Hoffman)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Re: Bio.PubMed find_related additional keyword options
In-Reply-To: <5C4C390C-8C88-11D7-9F69-000A956845CE@jeffchang.com>
Message-ID: 

> On Thursday, May 22, 2003, at 11:54 AM, Michael Hoffman wrote:
>
> > > > However, it is useful to have the ResultsParser from Bio.PubMed.find_related exposed since that functionality is not yet in EUtils.

Actually, ELink seems to have stopped working today... Even the test examples on the web no longer function. So it will have to wait until this works so I can test.
--
Michael Hoffman
The University of Texas at Austin

From felipeflipp at yahoo.com.br  Wed May 28 09:30:54 2003
From: felipeflipp at yahoo.com.br (Felipe Faco)
Date: Sat Mar 5 14:43:23 2005
Subject: [Biopython-dev] Unsubscription
Message-ID: <20030528133054.61145.qmail@web40506.mail.yahoo.com>

Administrator,

I would like to be unsubscribed from this list.

Thank you.