Python: Debugging Memory Leaks

I wrote a simple tool that could take Web logs and replay them against a server in "real time". I was performance testing my Web app over the course of a day by hitting it with many days' worth of Web logs at the same time.

By monitoring top, I found out that it was leaking memory. I was excited to try out Guppy, but it didn't help. Neither did playing around with the gc module. I had too many objects coming and going to make sense of it all.
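For reference, here's roughly what that gc spelunking looks like; the collector only reports objects caught in reference cycles, which is part of why it came up empty against a C-level leak (the function and class names here are mine, for illustration):

```python
import gc

def dump_unreachable():
    """Collect, but save everything unreachable in gc.garbage for inspection."""
    gc.set_debug(gc.DEBUG_SAVEALL)
    found = gc.collect()
    for obj in gc.garbage:
        print(type(obj).__name__, repr(obj)[:60])
    del gc.garbage[:]   # let the saved objects actually die
    gc.set_debug(0)
    return found

# Build a reference cycle that plain refcounting can't reclaim.
class Node(object):
    pass

a = Node()
b = Node()
a.other = b
b.other = a
del a, b

count = dump_unreachable()
```

With DEBUG_SAVEALL set, everything the collector would have freed lands in gc.garbage instead, so you can poke at it. Memory held by a C extension never shows up here.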

Hence, I fell back to a simple process of elimination. Divide-and-conquer! I would make a change to the code, then I would exercise the code in a loop and monitor the output from top for ever-increasing memory usage.
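That loop-and-watch approach can be automated with the stdlib resource module instead of eyeballing top. This is just a sketch, and the function names are mine:

```python
import resource

def rss_kb():
    """Peak resident set size so far (kilobytes on Linux, bytes on macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def watch(fn, iterations=5, batch=10000):
    """Call fn batch times per iteration and record memory after each batch.

    ru_maxrss only ever grows, so if the samples keep climbing batch
    after batch, fn is probably leaking.
    """
    samples = []
    for i in range(iterations):
        for _ in range(batch):
            fn()
        samples.append(rss_kb())
        print("batch %d: ru_maxrss=%s" % (i + 1, samples[-1]))
    return samples

# A deliberate "leak": every call pins another object in a global list.
hoard = []
samples = watch(lambda: hoard.append(object()))
```

It's crude, but for a leak this aggressive, crude is enough.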

Several hours later, I was able to nail it down to this simple repro:
# This program leaks memory rather quickly.  Removing the charset
# parameter fixes it.

import MySQLdb

while True:
    connection = MySQLdb.connect(user='user', passwd='password',
                                 host='localhost', db='development',
                                 charset='utf8')
    cursor = connection.cursor()
    cursor.execute('select * from mytable where false')

It makes sense that if the memory leak is at the C level, I might not be able to find it with Python-level tools. I'll go hunting tomorrow to see if the MySQLdb team has already fixed it, and if not, I'll submit a bug.


Anonymous said…
That's why you access mysql via the protocol, not the 47MB client library. 8^)
Anonymous said…
Just a quick question - does cursor.execute() return something that needs cleaning up like connection in your example?
Anonymous said…
I can confirm memory leaks in Python programs can be hard to find and fix. When I was doing memory leak hunting in M2Crypto, I got some help from pje (how to use weakref). I also used the gc to catch some bugs, but many bugs were in the C layer for which I needed Valgrind. Even with all the instructions on how to suppress some errors, Valgrind produced a lot of false positives, and it isn't the easiest of things to use.

When I was debugging Chandler, I also tried to use Heapy, PySizer, and a nice frontend I built just for Chandler which took information from gc etc. I also found a commercial tool called Python Memory Validator which actually helped me find one bug in Chandler. I found one bug in the Python Memory Validator and the company was quick to fix it. They provide free evaluation if you want to try it out.

And just now as I was doing some searches I found muppy. Don't know anything more about it.

Still the bottom line is that none of the tools I have used were easy to use. There is definitely room for a good, simple memory debugger for Python.
jjinux said…
> That's why you access mysql via the protocol, not the 47MB client library. 8^)

Ah, if only MySQL had a clean, well documented interface, eh? It'd be great to have a pure-Python, asynchronous driver.
jjinux said…
> Just a quick question - does cursor.execute() return something that needs cleaning up like connection in your example?

You can call cursor.fetchall(), but you shouldn't need to.
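For the record, the usual DB-API cleanup pattern looks like this; I'm using the stdlib sqlite3 as a stand-in for MySQLdb (both implement DB-API 2.0) so the sketch is runnable on its own:

```python
import sqlite3

# Close the cursor, then the connection, even if execute() blows up.
connection = sqlite3.connect(":memory:")
try:
    cursor = connection.cursor()
    try:
        cursor.execute("select 1 where 1 = 0")   # like 'where false'
        rows = cursor.fetchall()                 # optional: drains the result
    finally:
        cursor.close()
finally:
    connection.close()

print(rows)  # []
```

Draining or closing the cursor releases any pending result set, but it doesn't touch a leak that lives inside the C client library.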
jjinux said…
Thanks for the tips, Heikki.
jjinux said…
I can repro the bug with MySQLdb from svn. I just filed a bug:
jjinux said…
Duh, it looks like this has already been fixed in trunk. I was running it wrong earlier. It definitely leaks in 1.2.2, but it does not appear to leak in trunk. Closing and marking as invalid.
jjinux said…
If anyone cares, here's how I updated my box:

apt-get install libmysqlclient15-dev libmysql++-dev python-dev
svn co mysql-python
cd mysql-python/MySQLdb
python setup.py build
python setup.py bdist_egg
sudo easy_install --always-unzip dist/MySQL_python-1.3.0-py2.5-*.egg
cd ../..
rm -rf mysql-python/
jjinux said…
Ugh, the version on trunk doesn't actually know how to return unicode objects if I set charset='utf8'. It's like it's ignoring that. If I leave that out, the old version doesn't leak either! ;)
Anonymous said…
I'm getting a similar leak, and using charset UTF8. You seem to be saying that neither solution (svn, or leaving out charset) supports unicode... is this correct? I need unicode in my app - do you know of any way of fixing the leak and using unicode?

mike bayer said…
I think it's important to note that the memory leak here goes away if the use_unicode=0 flag is set. This also causes MySQLdb to return plain bytestrings instead of Python unicode objects, but a SQL abstraction layer such as SQLAlchemy handles the conversion of bytestring to unicode object in a more finely-controllable way. So the very common setting of charset=utf8&use_unicode=0 in conjunction with an abstraction layer which handles the unicode conversion is the way to go.
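A tiny sketch of that conversion layer, decoding bytestrings as they come out of the driver (the function name and row shape are made up for illustration):

```python
def decode_row(row, encoding="utf-8"):
    """Decode any bytestring columns in a DB-API row to unicode text."""
    return tuple(
        col.decode(encoding) if isinstance(col, bytes) else col
        for col in row
    )

# What a charset=utf8, use_unicode=0 driver might hand back:
raw = (1, b"caf\xc3\xa9")
print(decode_row(raw))
```

Doing it in one place like this also gives you a single spot to control error handling for malformed data.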
Anonymous said…
I do not know, I do anything to avoid using MySQL if I can, in favour of better ACID-compliant DBMSs.

I enjoyed reading your blog.
jjinux said…
Joe, I need unicode. I can't find any way around this problem. If you want Unicode, then you're stuck with a memory leak. My approach is to use cron to restart the app once or twice a day :( Eventually, I might switch to managing encodings manually.

Mike, hello! I prefer SQLAlchemy when I have a little bit of really complex data, but I prefer straight SQL when I have a ton of really simple data. Cheers!

anonymous, thanks for reading!