
Python: Memory Conservation Tip: intern()

I'm working with a lot of data, and running out of memory is a problem. When I read a line of data, I've often seen the same data before. Rather than have two pointers that point to two separate copies of "foo", I'd prefer to have two pointers that point to the same copy of "foo". This makes a lot of sense in Python since strings are immutable anyway.

I knew that this was called the flyweight design pattern, but I didn't know if it was already implemented somewhere in Python. (Strictly speaking, I thought it was called the "flywheel" design pattern, and my buddy Drew Perttula corrected me.)

My first attempt was to write code like:
>>> s1 = "foo"
>>> s2 = ''.join(['f', 'o', 'o'])
>>> s1 == s2
True
>>> s1 is s2
False
>>> identity_cache = {}
>>> s1 = identity_cache.setdefault(s1, s1)
>>> s2 = identity_cache.setdefault(s2, s2)
>>> s1 == 'foo'
True
>>> s1 == s2
True
>>> s1 is s2
True
This code looks up the string "foo" by value and returns the same instance every time. As the final is check shows, it works: s1 and s2 now point to the same object.
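The setdefault trick generalizes into a tiny helper. Here's a minimal sketch (the dedup function name is my own, not from the post):

```python
_identity_cache = {}

def dedup(s):
    """Return a canonical instance of s, caching the first one seen."""
    # setdefault stores s on first sight and returns the stored copy thereafter
    return _identity_cache.setdefault(s, s)

a = "foo"
b = "".join(["f", "o", "o"])
print(a is b)                 # typically False: join builds a new object
print(dedup(a) is dedup(b))   # True: both map to the cached instance
```

Every equal string fed through dedup() comes back as the same object, so duplicate copies become garbage and can be collected.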

However, Monte Davidoff pointed out that this is what the intern builtin is for. From the docs:
Enter string in the table of ``interned'' strings and return the interned string - which is string itself or a copy. Interning strings is useful to gain a little performance on dictionary lookup - if the keys in a dictionary are interned, and the lookup key is interned, the key comparisons (after hashing) can be done by a pointer compare instead of a string compare. Normally, the names used in Python programs are automatically interned, and the dictionaries used to hold module, class or instance attributes have interned keys. Changed in version 2.3: Interned strings are not immortal (like they used to be in Python 2.2 and before); you must keep a reference to the return value of intern() around to benefit from it.
Here it is in action:
>>> s1 = "foo"
>>> s2 = ''.join(['f', 'o', 'o'])
>>> s1 == s2
True
>>> s1 is s2
False
>>> s1 = intern(s1)
>>> s2 = intern(s2)
>>> s1 == 'foo'
True
>>> s1 == s2
True
>>> s1 is s2
True
Well, did it work? My program still functions, but I didn't get a tremendous savings in memory. It turns out I don't have enough duplicates, and that's not where I'm spending most of my memory anyway. Oh well, at least I learned about the intern() function.
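A note for anyone on a newer Python: the intern() builtin was moved to the sys module in Python 3, so the same experiment looks like this (a minimal sketch):

```python
import sys

s1 = "foo"
s2 = "".join(["f", "o", "o"])
print(s1 is s2)   # typically False: join builds a fresh string object

# sys.intern returns the single canonical copy from the intern table
s1 = sys.intern(s1)
s2 = sys.intern(s2)
print(s1 is s2)   # True: both now reference the interned copy
```

The semantics are the same as in Python 2; only the location of the function changed.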

Comments

Alec said…
This is awesome, jj - I didn't know about intern, but I've really wanted it lately... I hope it's reasonably effective on Jython, because I'm running into the same out-of-memory issue in our Hadoop/Jython cluster.
Varikin said…
I don't have much deep python experience, I am coming from a Java background, but I really want to learn more about python. Do you know if it has any garbage collection (gc) logging like Java? For example, in Java, I can enable gc logging, take thread dumps, and then look for memory leaks or high consumption. I have done a fair bit of debugging/performance tuning in Java this way, but I am wondering if Python offers these options.
Doug Napoleone said…
@varikin

There is a garbage collection module (called 'gc'), and gc.collect() can force (or at least strongly hint at) a collection, which also handles circular references.

From experience, this usually only postpones the problems a little.

For logging, the best approach is to just use the hotshot profiler.

If you have large arrays of primitive types I would recommend using the array package from SciPy (www.enthought.com). We were able to reduce the memory by over an order of magnitude just by having 5 arrays (3 int, 1 float, and 1 string) instead of a class with 5 attributes. 3Gig went to 250Meg.

There are some other tricks if you have many, many small objects. Read up on __slots__. (There are some nasty side effects however...)
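Doug's __slots__ tip can be sketched like this; the class names are my own, and the exact per-object savings depend on the Python version:

```python
import sys

class Plain:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Slotted:
    __slots__ = ("x", "y")  # no per-instance __dict__ is allocated
    def __init__(self, x, y):
        self.x = x
        self.y = y

p, s = Plain(1, 2), Slotted(1, 2)
print(hasattr(p, "__dict__"))       # True: each Plain carries a dict
print(hasattr(s, "__dict__"))       # False: slots replace the dict
print(sys.getsizeof(p.__dict__))    # the dict alone costs extra bytes per object
```

One of the nasty side effects Doug alludes to: a slotted instance can't grow new attributes at runtime, and slots interact badly with multiple inheritance.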
Anonymous said…
This is a useful and nice trick. Thanks so much :D, I think I will be using it.
Anonymous said…
So what's next for your program? I want to hear about cool compression schemes that let you work on your data while it's still compressed (like finding and factoring out string prefixes).
jjinux said…
I have one more that you might like, but it involves a longer blog post ;)
jjinux said…
Nice tips. Thanks, Doug.
