[Biopython-dev] Benchmarking PDBParser
João Rodrigues
anaryin at gmail.com
Fri May 6 08:24:04 UTC 2011
>
> Memory bloat is bad - it sounds like a garbage collection problem.
> Are you recreating the parser object each time?
>
No. I'm just calling get_structure at each step of the for loop. The usage is also a bit
irregular: sometimes it drops from 1 GB to 300 MB, stays stable for a
while, and then spikes again. My guess is that the data structures
holding the parsed structures consume quite a lot of memory, and the
garbage collector probably doesn't clear the previous structure in time, so memory accumulates.
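For reference, the loop is roughly the sketch below (the benchmark directory and
file list are placeholders, not the real set-up); forcing a collection after each
structure would be one way to test that hypothesis:

import gc
import glob
from Bio.PDB import PDBParser

pdb_files = glob.glob("benchmark_set/*.pdb")  # placeholder path for the benchmark set
parser = PDBParser()

for path in pdb_files:
    structure = parser.get_structure("bench", path)
    # ... timing / benchmarking work on the structure goes here ...
    del structure
    gc.collect()  # force a collection so the previous structure is freed promptly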
Is there any way I can profile the script to see what is holding the most
memory throughout the run?
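(A rough sketch of the kind of accounting I have in mind, using the
standard-library tracemalloc module to attribute allocations to source lines;
the benchmark directory is again just a placeholder:)

import glob
import tracemalloc
from Bio.PDB import PDBParser

tracemalloc.start()
parser = PDBParser()

for path in glob.glob("benchmark_set/*.pdb"):  # placeholder path
    parser.get_structure("bench", path)

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)  # ten largest allocation sites by retained memory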
>
> > I'll post the results today, shall I put it up on the wiki? This could be an
> > interesting thing to post for both users and future developments.
>
> I'd like to see the script and the results, so maybe the wiki is better.
>
Will do.
João