Wednesday, August 06, 2008

Python: sort | uniq -c via the subprocess module

Here is "sort | uniq -c" pieced together using the subprocess module:
from subprocess import Popen, PIPE

# Feed sort all of its input, then close its stdin so it sees EOF.
p1 = Popen(["sort"], stdin=PIPE, stdout=PIPE)
p1.stdin.write('FOO\nBAR\nBAR\n')
p1.stdin.close()

# Chain sort's output into uniq's stdin and read the final result.
p2 = Popen(["uniq", "-c"], stdin=p1.stdout, stdout=PIPE)
for line in p2.stdout:
    print line.rstrip()
Note that I'm not bothering to check the exit statuses; see my previous post for how to do that.

Now, here's the question: why does the program freeze if I put the two Popen lines together? I don't understand why I can't set up the pipeline, then feed it data, then close its stdin, and then read the result.
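For clarity, here's a minimal sketch of the reordered variant I'm asking about, i.e. the one that freezes:

from subprocess import Popen, PIPE

# Both Popen calls together up front; feeding data afterward hangs.
p1 = Popen(["sort"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["uniq", "-c"], stdin=p1.stdout, stdout=PIPE)
p1.stdin.write('FOO\nBAR\nBAR\n')
p1.stdin.close()
for line in p2.stdout:
    print line.rstrip()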

12 comments:

Benjamin said...

I don't know why putting the Popens together causes it to lock up, but this:

p1 = Popen(["sort"], stdin=PIPE, stdout=PIPE)
p1.stdin.write('FOO\nBAR\nBAR\n')
p1.stdin.close()

will only work for large writes because sort happens to be a "sponge", i.e., it sucks up all its input before emitting any output.

If it were a filter it could block waiting for a reader, which would block you (the writer) once the pipe's buffer filled.

I usually spawn a thread for the input end of a pipeline for this reason.
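A minimal sketch of that thread-per-writer idea, applied to this post's example (the feed_stdin helper is my own name for it, and it also leans on the close_fds trick mentioned further down so uniq doesn't inherit sort's stdin):

from subprocess import Popen, PIPE
from threading import Thread

def feed_stdin(pipe, data):
    # Writing from a separate thread means a blocked write can't
    # deadlock the thread that's reading the pipeline's output.
    pipe.write(data)
    pipe.close()

p1 = Popen(["sort"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["uniq", "-c"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
writer = Thread(target=feed_stdin, args=(p1.stdin, 'FOO\nBAR\nBAR\n'))
writer.start()
for line in p2.stdout:
    print line.rstrip()
writer.join()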

Jean-Paul Calderone said...

The reason it hangs if you put the Popen calls together is that doing so changes the relative ordering of the second Popen and the p1.stdin.close() call. In the working version you posted, p1.stdin is closed, then the uniq process is launched. Remember, processes inherit file descriptors from their parents. If you launch uniq before you close p1.stdin, then p1.stdin is open in the uniq process and you won't be able to close it. So the hang is due to sort continuing to try to read from stdin, waiting for an EOF that never comes.

_Mark_ said...

Confusion like this is why Python needs a "pipeline" class on top of subprocess... as Benjamin points out, if sort weren't special, neither of your approaches would work. Unix pipes get you a page (4k) of buffer. You instead want to use select (or poll) to write when you can, to read when you can, and to notice EOFs (and then you want to look at *all* of the exit statuses).

Shannon -jj Behrens said...

> If you launch uniq before you close p1.stdin, then p1.stdin is open in the uniq process and you won't be able to close it.

That must be it. Thanks!

Shannon -jj Behrens said...

Very helpful comments, everyone. It took me like three times reading them all, but they all made sense. Thanks!

max said...

Looks like it's resolved already.

ltrace/strace usually helps in cases like this, since it makes it clear which I/O call is blocking.

Anonymous said...

Further to Jean-Paul's comment: Popen takes a 'close_fds' keyword argument, and if you set it to True, then the parent's file descriptors are closed in the child process, so the block is avoided.

kmh
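For reference, here's a sketch of the combined ordering from the question with that flag applied; with close_fds=True it no longer hangs:

from subprocess import Popen, PIPE

p1 = Popen(["sort"], stdin=PIPE, stdout=PIPE)
# close_fds=True keeps uniq from inheriting p1.stdin, so sort still
# sees EOF when the parent closes it below.
p2 = Popen(["uniq", "-c"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write('FOO\nBAR\nBAR\n')
p1.stdin.close()
for line in p2.stdout:
    print line.rstrip()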

Shannon -jj Behrens said...

> Popen takes a 'close_fds' keyword argument

That's a great tip. Thanks!

Shannon -jj Behrens said...

Yep, it worked.

bimone said...

(Jean-Paul) There is such a pipeline class.

from pipes import Template

t = Template()
t.append('p4 print -q "$IN"', 'f-')

# iconv from src encoding to UTF-8
t.append('iconv -f %s -t UTF-8' % (r.p4encoding), '--')

# Turn on debugging
t.debug(1)

# start it
t.copy('//Work/foo/bar', '/tmp/foo')
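Tying that back to the post's example, here's a minimal, self-contained sketch of sort | uniq -c using pipes.Template (the input file /tmp/words.txt is hypothetical):

from pipes import Template

t = Template()
t.append('sort', '--')     # '--': the command reads stdin, writes stdout
t.append('uniq -c', '--')

# Read the file's contents through the pipeline.
f = t.open('/tmp/words.txt', 'r')
for line in f:
    print line.rstrip()
f.close()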

bimone said...

(That was Mark who wanted a pipeline module ...)

_Mark_ said...

Thanks for the pointer (I hadn't noticed pipes.py before) but it gets two big things wrong:
* it takes strings, not lists, so it's doomed to quoting horror
* unlike pretty much the entire rest of python, it ignores errors (it doesn't even look *possible* to get the exit statuses out.)

(Sadly both of those are based on the interface, not the implementation, so it's not just a matter of fixing bugs.)