Software Engineering: Agile Programming and IDEs Considered Harmful

Put away your IDE, code using a pencil, and do a big design up front. Get it right the first time. It can be done. Naturally, I'm trolling a bit, but that link is a very short, very fun read.

It reminds me of an interview with Donald Knuth that I've been thinking about a lot lately. Knuth said:
With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.
Somehow André Bensoussan and Donald Knuth are both capable of designing everything up front, and just getting it right. Since I've always been an agile coder, I find such stories amazing. I wonder if Bensoussan would still work that way these days. The systems are larger, the libraries are less stable, the timelines are shorter, and there are more people you have to integrate with.


SDC said…
The man Edsger Dijkstra recommended teaching 'a language not implemented on any computers at the University', so students wouldn't be tempted to test their programs before they finished them. This was in an essay called 'On The Cruelty Of Really Teaching Computer Science'. That one also contained the gem: 'Software Engineering takes as its charter: How To Program If You Cannot'.
Anonymous said…
"Get it right the first time. It can be done."

Indeed, it can...provided that your requirements don't change after you've begun coding. Good luck making *that* happen. :)
jjinux said…
> Indeed, it can...provided that your requirements don't change after you've begun coding.

So true.

> The man Edsger Dijkstra recommended teaching 'a language not implemented on any computers at the University'

Bill Karwin said…
Methodologies like agile can be used for good or evil purposes. I've worked at a couple of companies that claimed to be "agile" but that was just a euphemism for, "we don't like to write down requirements."

It doesn't mean agile is harmful, because you can certainly have the same irresponsible habits in a waterfall project or while using any other methodology.

I would even argue that if one does no engineering documentation, then one is not using a methodology. At that point, it can't be called engineering, or science, or even art. It can only be called "play," or at best, "labor."

If anything, agile techniques help mitigate the damage when you're stuck doing software development in a mode without project planning. At least agile principles like continuous integration and frequent interaction with the users can correct an errant course of action sooner.

The role of IDE tools is to reduce the cost of doing repetitive work. They are not a substitute for analysis and design, which are non-repetitive. But too many developers start any task by opening their IDE, instead of getting up at a whiteboard or using a pencil.
jjinux said…
> Bill Karwin said...

Anonymous said…
The error is assuming that one set of practices works for all situations. If you are designing a tight algorithm like shortest path or sorting, thinking hard about the solution will get you further along than just starting to code. On the other hand, if you have to demo a user interface or check that business rules have been completely specified, the best thing is to just implement it and have the user test it against real data.

And while good thinking and design make things like quicksort possible, even Knuth is aware that careful design is no guarantee of perfection. After all, didn't he once say: "Beware of bugs in the above code; I have only proved it correct, not tried it."?
Unknown said…
I remember a great, short lecture on Agile, Scrum, and the like. The crux of the lecture was that these were all methods for correctly specifying the level of quality and depth required for differing features of a product. They worked only in areas where immediate client feedback was possible and features were separable into discrete work units. The result was a product, rapidly developed, with high quality in the features actually cared about by the customer.

This, to me, is the crux of hitting the constantly evolving requirements target.
jjinux said…
> The error is assuming that one set of practices works for all situations...

Excellent comment.
Anonymous said…
That's funny. In the early 80's I would write most of my programs with paper and pencil. And a lot of the time they worked first try, too. (I'm sure they were not as complicated as Mr. Bensoussan's were.)

This was a necessity: coding during high school classes, it would have been too conspicuous to set up my Apple ][ on my desk and start typing. But few teachers would interrupt a student who was busily writing.
Unknown said…
We may be coming back to a day of pencil and paper, heralded by the acceptance of terse languages like Python.

In college, I would often write exact code on paper. Then I found myself getting used to idioms, and only writing difficult sections of code. Writing on paper took too long.

I might as well type "RM_Patient *pat; RM_Record *rec; for (pat = RM_FirstPatient(); pat != NULL; pat = RM_NextPatient(pat)) { ...". Or I might write pseudocode: "for pat in patients:". I would often find myself typing in the pseudocode, looking it over, and then expanding it line by line.

Perhaps with terse languages, it might become worthwhile to debug on paper again.
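To make the comparison above concrete, here's a minimal sketch of how that verbose C-style traversal collapses into one idiomatic Python line. The names (Patient, patients) are illustrative stand-ins, not part of any real API:

```python
# A hypothetical patient record, standing in for the C RM_Patient struct.
class Patient:
    def __init__(self, name):
        self.name = name

patients = [Patient("Ada"), Patient("Grace")]

# The entire C for-loop with explicit pointer stepping becomes:
names = []
for pat in patients:
    names.append(pat.name)
```

The Python version is nearly as short as the pseudocode itself, which is the point: when the notation on screen matches the notation you'd scribble on paper, there's less translation work in either direction.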
jjinux said…
> But few teachers would interrupt a student who was busily writing.

Very nice ;)
jjinux said…
> Perhaps with terse languages, it might become worthwhile to debug on paper again.

I think one attraction of pencil and paper is that it's very flexible, but it forces you to slow down and think.
