
Books: JavaScript: The Good Parts (Part 2)

I just finished reading JavaScript: The Good Parts. The book weighed in at a mere 145 pages and was written by one of the premier experts in the field of JavaScript. To put it simply, if you code JavaScript and if you read books, you should read this one. Not since K & R's C Programming Language have I seen a book that was so slim and yet so useful.

Previously, I blogged about a lot of the things I learned in the book. In this post, I'm going to concentrate on some specific questions, some disagreements, and some high-level discussion. Let me make it very clear that while I may occasionally disagree on a minor point, I really enjoyed reading this book!
My first question has to do with "augmenting basic types" (p. 33). I was a bit surprised to see Crockford adding a "trim" method to the "String" type. I thought that a lot of JavaScript communities considered this bad practice. He shows how to add a method conditionally, but I am suspicious of that too.

If two different "applications" on a page both try to augment the same type with the same method name but with different signatures or semantics, won't this lead to difficult-to-diagnose bugs? Won't adding the method conditionally (as shown on p. 33) merely delay discovery of the bug? Sooner or later, one of the applications is going to call the wrong method, and things are going to break in a subtle way. If you're going to do something fairly "global" like augmenting a built-in type, I wonder if it might be better to use the shameful trick of adding a prefix to your method name, for instance "String.yui_trim".
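To make the two options concrete, here is a sketch of both: the conditional augmentation in the style of p. 33 (written directly against String.prototype rather than with the book's "method" helper), and the prefixed alternative suggested above. The name "yui_trim" is purely illustrative.

```javascript
// Conditional augmentation: only add trim if no other code on the
// page has already defined it. This avoids clobbering, but as noted
// above, it only delays discovery if two libraries disagree on what
// "trim" should do.
if (!String.prototype.trim) {
    String.prototype.trim = function () {
        return this.replace(/^\s+|\s+$/g, '');
    };
}

// The "shameful" prefixed alternative: a name like yui_trim
// (hypothetical) can't collide with another library's trim.
String.prototype.yui_trim = function () {
    return this.replace(/^\s+|\s+$/g, '');
};

console.log('  hello  '.yui_trim()); // "hello"
```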

My next question has to do with inheritance. Crockford shows how to make use of prototypal inheritance using his "beget" method, and he also shows how to use "differential inheritance" without the use of prototypes (p. 52). When he explains the functional approach to modules, he says:
There are lots of ways to make an object. It can make an object literal, or it can call a constructor function with the new prefix, or it can use the Object.beget method to make a new instance from an existing object, or it can call any function that returns an object.
I like Crockford's functional approach to modules. I like his "beget" method. I also like the differential inheritance approach, where you let the superclass actually create the object and then you add to the object directly, without using the prototype system.

However, I didn't feel like he explained why and when you would prefer prototypal inheritance over differential inheritance. The most I can say is that if you have a ton of "small" objects with a lot of methods, differential inheritance is probably more expensive since each object must duplicate the references to each function.
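To illustrate the trade-off, here is a sketch of both styles. I'm using ES5's Object.create, which is essentially what Crockford's Object.beget helper does; the animal names are hypothetical, not from the book.

```javascript
// Prototypal inheritance: methods live on a shared prototype, so each
// instance holds no copies of them.
var protoMammal = {
    speak: function () { return 'generic noise'; }
};
var protoCat = Object.create(protoMammal);  // beget-style
protoCat.speak = function () { return 'meow'; };

// Differential inheritance without prototypes: the "superclass"
// builds the object, and the "subclass" adds to it directly. Every
// instance carries its own reference to each method.
function mammal() {
    return {
        speak: function () { return 'generic noise'; }
    };
}
function cat() {
    var that = mammal();
    that.speak = function () { return 'meow'; };
    return that;
}

console.log(Object.create(protoCat).speak()); // "meow"
console.log(cat().speak());                   // "meow"
```

Note that every call to cat() creates fresh function objects, which is exactly the per-instance cost mentioned above; prototypal instances all share one speak.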

I have another question regarding his module pattern. On pp. 52-53, he explains how to use a "my" object as a container for secrets that are shared by the constructors in the inheritance chain. I understand the concept. However, is this really necessary if you're using differential inheritance (i.e., one object instead of a prototype chain)? Is this really effective for "secrets"? After all, when you call the constructor, you can pass in your own "my" object and harvest any secrets that get put into it.

My next question has to do with regular expressions. Crockford explains positive lookahead groups ("(?=") as well as negative lookahead groups ("(?!"), but then he says, "This is not a good part" (p. 75). Why is that? Do they suffer from implementation variances or performance problems?
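For reference, both kinds of lookahead match a position without consuming any text; the behavior below is standard ECMAScript, whatever Crockford's objection is.

```javascript
// Positive lookahead: "Java" only when followed by "Script".
var positive = /Java(?=Script)/;
// Negative lookahead: "Java" only when NOT followed by "Script".
var negative = /Java(?!Script)/;

console.log(positive.test('JavaScript')); // true
console.log(positive.test('JavaBeans'));  // false
console.log(negative.test('JavaBeans'));  // true
console.log(negative.test('JavaScript')); // false
```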

On p. 140, Crockford says:
There is another danger in the interaction between external data and innerHTML. A common Ajax pattern is for the server to send an HTML text fragment that gets assigned to the innerHTML property of an HTML element. This is a very bad practice. If the HTML text contains a script tag or its equivalent, then an evil script will run. This again can be due to server incompetence.
Any website that asks for data from a user and then later shows that data back must compensate for the risk of XSS (cross-site scripting) vulnerabilities. This is true regardless of whether or not you're using JavaScript. Does innerHTML somehow make the problem worse? I thought that innerHTML was more of an "unofficial" standard and a lot less likely to suffer from incompatibility problems than making heavy use of DOM manipulation to add content to a page. Is that not true?
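Whichever side of the innerHTML question you land on, untrusted text has to be escaped before it is concatenated into markup. Here is a minimal, hypothetical escaping helper, just a sketch; in real code, prefer a vetted library or assign to a text property instead of building HTML strings.

```javascript
// Escape the characters that are significant in HTML so that
// user-supplied text cannot introduce tags or attributes.
function escapeHtml(s) {
    return String(s)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
}

var userInput = '<img src=x onerror="alert(1)">';
console.log(escapeHtml(userInput));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```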
Crockford says, "I reserve block comments for formal documentation and for commenting out." (p. 96) However, on p. 6, Crockford points out that you can't use block comments to comment out code if it contains a regex such as /a*/, because the "*/" inside the regex literal terminates the comment early. It's probably better to stick to line comments for commenting out code.

Crockford says that you should never fall through a case in a switch statement. I agree that the syntax for switch statements is tragically flawed. However, I agree with the BSD style guide. If you are going to fall through a case in a switch statement, use a "/* FALLTHROUGH */" comment. For JavaScript, I'd recommend "// FALLTHROUGH". JSLint could easily forbid code that falls through unless it contains such a comment.
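Here is what the BSD-style convention looks like in JavaScript; the function and case labels are made up for illustration. The comment lets a reader (or a lint rule) distinguish an intentional fall-through from a forgotten break.

```javascript
function describe(n) {
    var result = '';
    switch (n) {
    case 0:
        result += 'zero-ish ';
        // FALLTHROUGH
    case 1:
        result += 'small';
        break;
    default:
        result += 'big';
    }
    return result;
}

console.log(describe(0)); // "zero-ish small"
console.log(describe(1)); // "small"
console.log(describe(2)); // "big"
```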

Crockford calls the ++ and -- operators bad parts on p. 112. I agree that overuse of ++ and -- can lead to cryptically terse code. If you have to think about pre-increment vs. post-increment, you may be favoring conciseness over readability. However, I see nothing wrong with using ++ or -- on a line by itself or in the last part of a for statement. Consider the following:
for (i = 0; i < list.length; i++) {
    if (something) {
        // ...
    }
}
I don't think that code is unreadable. In fact, I think it's far more readable than the somewhat evil use of the ternary operator shown on p. 145:
return typeof reviver === 'function' ? function walk(holder, key) {
    var k, v, value = holder[key];
    if (value && typeof value === 'object') {
        for (k in value) {
            if (Object.hasOwnProperty.call(value, k)) {
                v = walk(value, k);
                if (v !== undefined) {
                    value[k] = v;
                } else {
                    delete value[k];
                }
            }
        }
    }
    return, key, value);
}({'': result}, '') : result;
Defining a multi-line function in the middle of a ternary operator is just wrong ;)
Final Thoughts
As I said earlier, I really liked this book. My complaints are minor at best.

There's something that Crockford said that stands out in my mind:
We all find the good parts in the products that we use. We value simplicity, and when simplicity isn't offered to us, we make it ourselves...We cope with complexity of feature-driven design by finding and sticking with the good parts. It would be nice if products and programming languages were designed to have only good parts. [p. 100]
That's a nice quote. My question is, who gets to pick the subset? If you're coding in Perl or C++, you almost have to pick a subset, but every company picks a different one. Clearly in JavaScript, I'm happy to let Crockford pick the subset, but what about other languages? Perhaps we need more "The Good Parts" books.


