
Walking Skeletons and TODO Outlines

These days, applications are so complicated and contain so many layers that it's difficult to know where to start. Should you work bottom up or top down? How much should you work on one layer before starting to work on the next layer? How can you ensure that the layers work properly together? Building a walking skeleton and managing a TODO document in outline format are two techniques that work well together to conquer complex problems, even ones involving multiple layers. Best of all, you don’t have to worry about trying to get things right the first time or getting lost along the way.
A Walking Skeleton is a tiny implementation of the system that performs a small end-to-end function. It need not use the final architecture, but it should link together the main architectural components. The architecture and the functionality can then evolve in parallel. -- Alistair Cockburn
Building a walking skeleton is a great way to handle the complexity of dealing with multiple layers. Start with the simplest possible feature, and implement it top down. Ideally, the feature should force you to work your way all the way down the stack. The goal is to make sure the layers work together.
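To make this concrete, here is a minimal sketch of what a walking skeleton might look like for a hypothetical note-taking app with three layers (a request handler, a service, and storage). Everything here is invented for illustration; the point is not the feature itself but that one trivial request travels through every layer and back:

```python
# A walking skeleton for a hypothetical note-taking app: one trivial
# feature ("save a note, read it back") wired through all three layers.
# Each layer is deliberately minimal; the goal is to prove they connect.

class Storage:
    """Persistence layer (an in-memory dict standing in for a database)."""
    def __init__(self):
        self._notes = {}

    def put(self, note_id, text):
        self._notes[note_id] = text

    def get(self, note_id):
        return self._notes[note_id]


class NoteService:
    """Business-logic layer; for now it only delegates to storage."""
    def __init__(self, storage):
        self._storage = storage

    def save_note(self, note_id, text):
        self._storage.put(note_id, text)

    def read_note(self, note_id):
        return self._storage.get(note_id)


def handle_request(service, verb, note_id, text=None):
    """Presentation layer (a stand-in for an HTTP handler)."""
    if verb == "POST":
        service.save_note(note_id, text)
        return "201 Created"
    if verb == "GET":
        return service.read_note(note_id)
    return "405 Method Not Allowed"


# The end-to-end walk: one request down through every layer and back up.
service = NoteService(Storage())
print(handle_request(service, "POST", "1", "hello"))  # 201 Created
print(handle_request(service, "GET", "1"))            # hello
```

Once this runs end to end, each layer can be fleshed out (a real database, real routing, real validation) without ever losing a working whole.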

As you’re building the walking skeleton, you may think of many things that you need to add, test, or in general worry about. It's helpful to maintain a TODO document in outline format so that you can organize and plan your attack, especially when you’re working with multiple layers at the same time. Eventually, each TODO item can be transferred into a test, a piece of code, an issue in the bug tracking system, or perhaps just an email to someone else.
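As a sketch of what such an outline might look like, here is a made-up TODO document for a hypothetical login feature. The nesting mirrors the layers, and each leaf is small enough to become a test, a commit, a tracked issue, or an email:

```
TODO
- Login feature
  - [ ] End-to-end test: valid user can log in
  - [ ] Handler: parse login form, call service
  - [ ] Service: verify password hash
    - [ ] Decide on hashing library -> ask Alice
  - [ ] Storage: look up user by name
- Later / maybe
  - [ ] Rate-limit failed logins (file as issue?)
  - [ ] Password reset email
```

The exact format matters less than the habit: capture each worry the moment it occurs to you, keep it nested under the feature it belongs to, and cross items off as they turn into something real.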

Once you’ve built a walking skeleton, should you go back to developing one layer at a time? For most applications where the cost of change is low, probably not. Actively building one layer at a time is frequently very inefficient. A more efficient approach is to focus on one feature at a time. Sketch out the feature using a set of TODOs and build it top-down, managing the TODOs as you go. If you focus on one feature at a time instead of one layer at a time, you won’t end up building a lot of code in different layers that never actually gets used. The time saved by only building what you need and only building it when you have all the information you need more than compensates for the refactoring time.

Thanks go to Chris Lopez for his help with this post.


Sam Rushing said…
I use that technique for pretty much any large project, but it works especially well with compilers... usually you start with a really small, toy version of the language, then put just enough meat on it to follow all the way through to the end. Not only does it give you a place to start hacking, and a framework, but more importantly it lets you find and fix any misunderstandings you may have earlier rather than later.
jjinux said…
Yep. I've done a couple small interpreters, and I can't imagine doing it any other way.
killy said…
I've known that technique from the gaming industry, where it's called a "vertical slice". The name is taken from a piece of cake - when you get a slice of cake it contains a little bit of every layer for you to taste :)
