
PyCon: Make Sure Your Programs Crash

See the website.

This talk was given by Moshe Zadka from VMware.

Think about how your program will crash, and then how it will recover from that crash.

If your application recovers quickly enough, things can crash and no one will notice.

Even Python code occasionally crashes due to C bugs, untrapped exceptions, infinite loops, blocking calls, thread deadlocks, inconsistent resident state, etc. These things happen!

Recovery is important.

A system failure can usually be considered to be the result of two program errors. The second error is in the recovery routine.

When a program crashes, it leaves whatever data it was writing in an arbitrary, possibly inconsistent state.

Avoid storage: caches are better than master copies.

Databases are good at transactions and at recovering from crashes.
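For instance, here's a sketch using Python's built-in sqlite3 module, where the connection's context manager rolls back the transaction if anything goes wrong partway through (the table and values are made up for illustration):

```python
import sqlite3

# A crash between the two updates must not lose or create money.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transaction")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except RuntimeError:
    pass

# The debit was rolled back, so the data is still consistent.
print(conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0])  # prints 100
```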

File rename is an atomic operation in modern OSs.
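That property is what makes the classic write-temp-then-rename idiom safe: readers see either the old file or the new one, never a half-written file. A minimal sketch (the function name and file contents are mine, not from the talk):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path so a crash can never leave a half-written file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes actually hit the disk
        os.replace(tmp, path)  # the atomic step: rename over the target
    except BaseException:
        os.unlink(tmp)  # clean up the temp file if anything failed
        raise

atomic_write("settings.conf", "timeout = 30\n")
```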

Think of efficient caches and reliable masters. Mark caches as potentially inconsistent so they can be rebuilt from the master.

He seems to be skeptical of the ACID nature of MySQL and PostgreSQL. I'm not sure why.

Don't write proper shutdown code. Always crash, so that your recovery code gets tested constantly. Your data should always be consistent.

Availability: if the data is consistent, just restart.

To get into the high 9s, recover very quickly. Limit impact, detect the crash quickly, and startup quickly.

Vertical splitting: different execution paths, different processes. Apache can have a child process die with no impact on availability.

Horizontal splitting: different code bases, different processes.

Watchdog: monitor -> flag -> remediate.

Watchdog principle: keep it simple, keep it safe.

A process can touch a file every 30 seconds. The watchdog sees whether the file has been touched.
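A minimal sketch of that heartbeat scheme (the file path, timeout, and function names are illustrative assumptions, not from the talk):

```python
import os
import time

HEARTBEAT = "/tmp/myapp.heartbeat"  # assumed path
TIMEOUT = 90  # i.e., three missed 30-second touches

def beat():
    """Called by the worker every 30 seconds: touch the heartbeat file."""
    with open(HEARTBEAT, "a"):
        pass
    os.utime(HEARTBEAT, None)  # bump the mtime to now

def is_alive():
    """Called by the watchdog: has the file been touched recently?"""
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT)
    except FileNotFoundError:
        return False
    return age < TIMEOUT
```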

The watchdog and the process restarter should not be in the same process, because the watchdog should be simple. Remember: separation of concerns.

Mark problems. Check solutions. See if restarting worked.
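The remediation side can be sketched as a deliberately dumb restarter, kept separate from the watchdog. This is an illustration under my own assumptions, not the speaker's code; the command you pass is whatever starts your service:

```python
import subprocess
import time

def supervise(cmd, backoff=1.0, max_restarts=None):
    """Start cmd, wait for it to die, and start it again.

    max_restarts exists mainly for testing; a real restarter runs forever.
    """
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()  # block until the service dies, for any reason
        print(f"exited with code {proc.returncode}; restarting in {backoff}s")
        restarts += 1
        time.sleep(backoff)
    return restarts

# Example (hypothetical service name):
# supervise(["python", "myservice.py"], backoff=2.0)
```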

Everything crashes: plan for it.

Linux has a watchdog daemon. Use that to watch your watchdog.


Anonymous said…
It's this kind of bullshit that makes me want to move on from python to something like Scala. Compare:

Q. How do we make software reliable?

Python A: make sure your code recovers quickly after crashing

Scala A: use Software Transactional Memory, to mark a block as a single transaction. Simply undo the current transaction if anything goes wrong, then continue from that point.

Q. How do we write programs that scale to high performance on multiple cores?

Python A: Well, we have the GIL which prevents proper multithreading, and actually makes threads SLOWER on multicore machines, but removing the GIL is hard, even though Jython did it just fine, so we're not going to bother. In short, we have lots of libraries for green threads etc., but none of them work because we won't fix the fundamental problems underlying it all.

Scala A: We solve the problem on two fronts. First, we use the best-of-breed Actor model of concurrency, unifying it into the language core, with two main keywords allowing you to choose between heavyweight OS threads and lightweight "green" threads, allowing you to code in the same style no matter which system you need for a particular block of code. Secondly, we provide parallel collections, which allow you to, for instance, iterate over a list, running a block of code on it in parallel, just as you would with non-parallel code.

Python is being quickly left behind.
jjinux said…
Anonymous, please don't use the word "bullshit" on my blog.

If you're interested in integrating STM into Python, see this.

I don't think that having code to recover from crashes quickly is at odds with other approaches to reliable software. For instance, Erlang has an actor-based concurrency model, and Erlang OTP is all about trees of actors that monitor each other and recover from failures quickly.
jj, I think you've got an extra a in Moshe's name.
jjinux said…
Fixed! Thanks!
