PyCon: Interactive Parallel and Distributed Computing with IPython

This was my favorite talk, aside from the OLPC keynote.
  • CPUs are getting more parallel these days; they aren't necessarily getting faster.
  • The free ride of relying on Moore's law to solve your performance problems isn't working anymore.
  • We need to embrace parallelism, but it's hard.
  • The physicists behind IPython have problems that simply can't be solved in a reasonable amount of time on a single CPU.
  • They wanted an interactive, Mathematica-like environment to do parallel computing.
  • They have been able to implement multiple paradigms on their foundation, including MPI, tuple spaces, MapReduce, etc.
  • They're using Twisted.
  • It's like coding in multiple Python shells on a bunch of computers at the same time.
  • Each machine returns the value of the current statement, and they are aggregated into a list.
  • You can also specify which machines should execute the current statement.
  • You can move data to and from the other machines.
  • They implemented something like deferreds (aka "promises") so that you can immediately start using a value in an expression even while it's busy being calculated in the background.
  • They've tested their system on 128 machines.
  • The system can automatically distribute a script that you want to execute.
  • The system knows how to automatically partition and distribute the data to work on using the "scatter" and "gather" commands.
  • It knows how to do load balancing and task farming.
  • Their overhead is very low. It's appropriate for tasks that take as little as 0.1 seconds. (This is a stark contrast to Hadoop.)
  • You can "talk to" currently running, distributed tasks.
  • It also works non-interactively.
  • All of this stuff is currently on the "saw" branch of IPython.
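IPython's actual deferred API isn't reproduced here, but the "start using a value while it's still being calculated" idea maps closely onto the futures/promises pattern. Here's a minimal standard-library sketch using `concurrent.futures` (the `slow_square` function is a made-up stand-in for a long-running remote computation):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for a long-running computation on a remote engine.
    return x * x

with ThreadPoolExecutor() as pool:
    # submit() returns a future immediately; the computation runs
    # in the background, much like IPython's deferreds/promises.
    fut = pool.submit(slow_square, 7)
    # The future can be passed around before it resolves;
    # result() blocks only at the point the value is needed.
    doubled = fut.result() * 2

print(doubled)  # 98
```

The payoff is the same as in the talk: you can build up expressions over pending results and only pay the wait once, when a concrete value is finally required.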
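To make the "scatter" and "gather" commands concrete: scatter partitions a dataset across the engines, each engine works on its slice, and gather reassembles the results. This is not IPython's code; it's a rough, single-process illustration of that partitioning pattern, with a thread pool standing in for the remote engines and the `scatter`/`gather`/`square_chunk` helpers invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def scatter(data, n):
    # Partition data into n contiguous, roughly equal chunks.
    k, r = divmod(len(data), n)
    chunks, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)
        chunks.append(data[start:start + size])
        start += size
    return chunks

def gather(chunks):
    # Flatten per-engine results back into one list, in order.
    return [x for chunk in chunks for x in chunk]

def square_chunk(chunk):
    # Each "engine" squares its slice of the data.
    return [x * x for x in chunk]

data = list(range(10))
chunks = scatter(data, 4)            # e.g. [[0,1,2], [3,4,5], [6,7], [8,9]]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square_chunk, chunks))
squares = gather(results)
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In the real system the chunks live on separate machines, and the load balancer decides which engine gets which piece of work; the scatter/gather bookkeeping shown here is what the user is spared from writing.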

Comments

Anonymous said…
Wow, this sounds really slick. I wrote some custom hacks to run parallel jobs with Python in the past; I had a Makefile that would scp data to a randomly picked machine, run a job, and copy the results back, and "make -j23" would do the right thing.

I'd be much happier not having to have done that myself, though! It wasn't the most flexible thing in the world....
