
Software Engineering: Code Reviews vs. Pair Programming

I've been thinking lately about the benefits of code reviews versus pair programming. In general, I think most companies that really care about code quality use one or the other, and various companies are famous for each approach. That leads me to wonder: which approach is better under which circumstances?

Pivotal Labs is famous for doing full-time pair programming, and it's well known that they do a good job writing software. However, I wonder if full-time pairing might be too expensive for mundane code. I also wonder if it makes sense to work as a pair when someone needs to spend a few days reading, learning, and researching.

In contrast, Google is famous for code reviewing every commit before checkin. That certainly frees engineers to get more work done, since they can spend most of their time working in parallel on separate tasks. However, I wonder if a code reviewer can really drive the same level of architectural improvement as a pairing partner. A reviewer can catch style mistakes, but it's much harder to tell someone their entire approach is wrong (for instance, that they should have written asynchronous code instead of threaded code).
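To make that contrast concrete, here's a toy sketch of the same tiny task written both ways in Python. The simulated "work" is hypothetical, and real threaded vs. asynchronous designs differ far more deeply than this; the point is just that the two approaches shape the whole structure of the code, which is hard to unwind in review.

```python
import asyncio
import threading
import time

results = []

# Threaded version: each task blocks in its own thread.
def blocking_work(name):
    time.sleep(0.01)  # stand-in for blocking I/O
    results.append(name)

threads = [threading.Thread(target=blocking_work, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Asynchronous version: one thread, tasks yield at await points.
async def async_work(name):
    await asyncio.sleep(0.01)  # stand-in for non-blocking I/O
    return name

async def main():
    return await asyncio.gather(async_work("a"), async_work("b"))

async_results = asyncio.run(main())
print(sorted(results), async_results)
```

A reviewer looking at a diff of either version can easily flag naming or style, but telling the author to move from one column to the other means rewriting nearly every line.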

Furthermore, I wonder if code reviewers can really catch all the little assumptions that get built into code. It reminds me of a story my boss once told me. A few decades ago, he was working on satellite control software. One piece of code made it through three levels of code review even though it contained a bug: a missing minus sign in some navigation equations. When the satellite was launched, it started spinning because it kept "thinking" it was upside down, and it wasted half of its fuel before they could get the problem under control. My boss said that for satellites, the lifespan of a project is directly tied to the amount of fuel onboard. Hence, this came to be known as the three million dollar minus sign.
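I've never seen the real navigation code, but here's a hypothetical sketch of why a single flipped sign in a control equation is so destructive: it turns negative feedback (corrections shrink the error) into positive feedback (every "correction" pushes the system further off course), and the bug is invisible unless you actually work through the math.

```python
def simulate(gain, steps=50):
    """Toy attitude-control loop: each step applies a correction
    proportional to the current attitude error."""
    error = 1.0  # initial attitude error (arbitrary units)
    for _ in range(steps):
        correction = gain * error
        error -= correction  # thruster nudges attitude toward zero error
    return abs(error)

# Correct sign: negative feedback, error decays toward zero.
good = simulate(gain=0.5)

# Missing minus sign (equivalent to a flipped gain): positive
# feedback, the error grows every step and the craft spins up.
bad = simulate(gain=-0.5)

print(good, bad)
```

A style-focused reviewer would see nothing wrong with either version; only someone re-deriving the equation alongside the author would notice the sign.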

If this code had been pair programmed rather than code reviewed (or in addition to being code reviewed), would the partner have spotted the problem as the equation was being worked out? This taught me a valuable lesson about code review: it's far too easy to get hung up on less critical issues that are easy to spot, like style, instead of focusing on the more important issues that take real brain power to understand.

In the book Professional Software Development: Shorter Schedules, Higher Quality Products, More Successful Projects, Enhanced Careers (see my blog post), Steve McConnell said that NASA found that the single most effective way to cut down on defects was to always have a second pair of eyes present. They were talking about building the shuttle, but I think the same thing applies to software.

Despite their pervasive use of code reviews or pair programming, Google and Pivotal Labs have both had their fair share of bugs, and even NASA makes mistakes. Clearly, neither code reviews nor pair programming can banish all defects. Given how fallible human beings are, it seems to me that the best way to keep people from dying because of mistakes is to avoid situations where mistakes are fatal. Driving is a fairly dangerous activity, and tens of thousands of people die each year in the United States because of driving errors; yet far more drivers make mistakes and survive, thanks to various safety precautions.

So even though I think code reviews and/or pair programming are important factors in writing high quality software, I think it's even better to avoid situations where software defects can cause serious damage. The more mission critical a piece of software is, the smaller and stupider it should be.


Eddy Mulyono said…
The hard part is to arrive at a "smaller and stupider" approach to all those not-so-small and not-so-stupid problems.

Ahh, the beauty of software development. :)
jjinux said…
> The hard part is to arrive at a "smaller and stupider" approach to all those not-so-small and not-so-stupid problems.

I talked about this with respect to air traffic control systems in a blog post. :)
