Friday, December 30, 2011

Python: Shrapnel

After many years of hopeful expectation, IronPort (now part of Cisco) finally open sourced a bunch of stuff, most importantly Shrapnel, our proprietary version of stackless Python. Back when I worked at IronPort (2004-2006), I was dying for them to open source this stuff. The existing open source version of stackless Python at the time never had quite the same flavor or emphasis as Shrapnel, but these days gevent is very similar.

Anyway, congratulations to Sam Rushing, Mark Peek, etc., and thank you Cisco! By the way, there's a reference to me in the code ;)

Friday, December 16, 2011

Dart at g|egypt 2.0

I gave a talk on Dart at g|egypt 2.0. All I can say is, wow!

First of all, I wasn't even on the agenda. In fact, the room I was using wasn't really even labelled on the map--it took me several minutes to find it. It was a small room. 2 minutes before my talk was supposed to start, I only had 3 people in the audience. However, 10 minutes later, every seat was taken, and people were standing at the back and along the sides. The room was packed!

Since Dart is so new, we haven't spoken about it at very many places. I told the Egyptians that I was giving them a chance to become the best Dart programmers in the world because they were seeing the talk before most of the rest of the world had seen it. I almost got a standing ovation ;)

People were very excited about Dart and asked a ton of excellent questions. Even after my talk, I had to stand around for 2.5 hours answering more questions.

The comments on Google+ were very supportive:

Hady Allam said, "your DART session is one of the best sessions i have ever attended. Thank You +Shannon Behrens and hope to see you again in Egypt."

Abdurrahman Alraies said, "The dart session was the best surprise of the day. Thank you very much +Shannon Behrens and +Sebastian Trzcinski-Clément It says much about how important is Egypt to Google."

Saied Attala said, "really i have opened the comment box for 30 minutes and can't write every thing for you. i loved your Dart sessions and your sense of humor. i enjoyed talking with you and having pics with you."

Those comments were very touching to me. To all the Egyptians out there who went to my Dart talk, thank you for making me feel so special! I hope I get to come back to Egypt again!

After the trip, I accidentally ran into Lars Bak, the creator of Dart, in Zurich. That was very exciting for me!

If you're interested, here are the slides.

Thanks also to Saied Attala and Ayman Reda Bedair for the pictures and Seth Ladd for most of the slides.

Monday, December 12, 2011

YouTube for Your Business at g|egypt 2.0

I gave a talk called YouTube for Your Business at g|egypt 2.0.

The talk went very well. There were about 200 people in the audience (and almost 900 in total at the event), which is a great turnout for a YouTube API talk. People asked me questions for about an hour afterwards. I even created a video for the talk:

I was really worried that I would lose my voice during the talk. It's very crackly right now.

Since this is the first time I've visited the Middle East, I'm suffering a little from culture shock. I forgot to remove a beer joke from one of my slides. It took me a few seconds to realize why people weren't laughing. What's really funny is that I don't even drink!

I was very amazed by a few things. There were a lot of women at the event. There are far more female programmers in Egypt than there are in the US, as far as I can tell. The next thing that surprised me was that almost everyone here uses Windows--even on the server side. That's certainly not the case in the San Francisco Bay Area. At all the companies I've worked at recently, people tended to use Apple laptops and Linux on the servers. The last thing that surprised me was how many people wanted to have their picture taken with me. For about two hours, I felt like a total rock star ;)

I have a talk on Dart tomorrow and a talk on Python the next day. Let's hope my voice holds up ;)

Thanks go to Mohamed Naguib for taking the picture.

Tuesday, November 29, 2011

Computer Science: The Travelling Salesman Problem

I was thinking about the Travelling Salesman problem this morning. I came up with an algorithm that permits a few nice optimizations. My guess is that Knuth probably already came up with this algorithm, formally analyzed it, and then came up with 15 others that were much better. Nonetheless, I figured I'd jot it down.

Warning: This is a rough draft of an algorithm. You shouldn't bother reading this if you want real rigor. I'm just writing it down to get it off my chest.

Rather than picking the first segment of the journey and working my way to the end, I want to pick a middle segment of the journey and work my way outwards recursively.

Given a list of distances (or times, or costs, or whatever) between cities, sort the list from shortest distance to longest distance regardless of which cities are involved. Now, loop over this list and use each element of the list as a middle segment of your journey. This gives us a list of "first steps" (or rather, first middle steps). Looping over the list from shortest distance to longest distance is an important optimization because it increases the likelihood that we'll find an optimal path early, allowing us to skip over a lot of potential paths later.

Also sort the list of distances so that for each city, we have a list of other cities in ascending distance order. By sorting all of the city pairs in order of (first city, distance), you can use one sort for all of the pairs.

Now, here comes the recursion. We have a bunch of middle parts of the journey. Recursively extend each one by adding a city to either the beginning or the end of the journey. Keep recursing until you have a complete path, or until you have a partial path that is already longer than the best complete path seen so far. When extending, try the shorter distances first; there's an implicit merge sort going on in this algorithm. This, again, is an optimization that allows you to skip work later.

While recursing, we have a function that takes two things: a middle chunk of the journey and the set of cities not yet visited. Apply memoization so that anytime the function receives the same parameters, it returns the same answer (by using a cache, of course). This is an important optimization.

Last of all, using the above algorithm, we'll quickly come up with the first complete path that has a decently minimal distance. Keep track of this as the best complete path seen so far. Anytime we are recursing and we come up with a partial path that is longer than the best complete path seen so far, we can stop recursing, give up, and try something else. This is an important optimization to save work "at the leaves of the tree".
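Putting those pieces together, here's a rough Python sketch of the idea. To keep it short, I've left out the memoization step, I compute an open path rather than a round trip, and the function name and the dict-of-dicts distance table are just my own conventions:

```python
import itertools

def tsp_middle_out(dist, cities):
    """Branch-and-bound search that grows the tour outward from a middle
    segment, trying shorter edges first and pruning any partial path
    that is already longer than the best complete path seen so far."""
    best = {"cost": float("inf"), "path": None}

    # Every city pair, sorted by distance, so we try the most
    # promising middle segments first.
    edges = sorted(
        (dist[a][b], a, b) for a, b in itertools.combinations(cities, 2))

    # For each city, the other cities in ascending distance order.
    nearest = {
        c: sorted((o for o in cities if o != c), key=lambda o, c=c: dist[c][o])
        for c in cities}

    def extend(path, cost, remaining):
        if cost >= best["cost"]:  # prune: this branch can't win
            return
        if not remaining:
            best["cost"], best["path"] = cost, path
            return
        # Extend at either the beginning or the end, shorter hops first.
        for at_front, anchor in ((True, path[0]), (False, path[-1])):
            for city in nearest[anchor]:
                if city in remaining:
                    new_path = [city] + path if at_front else path + [city]
                    extend(new_path, cost + dist[anchor][city],
                           remaining - {city})

    for d, a, b in edges:
        extend([a, b], d, frozenset(cities) - {a, b})
    return best["cost"], best["path"]

# A tiny symmetric example:
dist = {
    "A": {"B": 1, "C": 4, "D": 3},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"A": 3, "B": 5, "C": 1},
}
cost, path = tsp_middle_out(dist, ["A", "B", "C", "D"])
print(cost)  # → 4
```

Adding the memoization would mean caching on (frozenset of endpoints + interior cities, remaining set), which is where the algorithm starts to resemble Held-Karp-style dynamic programming.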

I can't even wrap my head around the big O of this algorithm. I suspect it's one of those cases where you use words like "amortized cost".

This algorithm does have a weakness. If every distance between cities is exactly the same, it'll try every possibility. Similarly, if every distance between cities is exactly the same except one pair of cities which has a longer distance, it'll still try every possibility. I'm not sure of the degree to which the memoization fixes this problem. It'd be good to extend the algorithm somehow to recognize this situation and short circuit, i.e. automatically throw out paths of duplicate length.

Math: Factoring Numbers

I was thinking this morning about factoring numbers. I wonder if it might sometimes be helpful to use real numbers to gain an interesting perspective in order to solve certain problems involving integer numbers (i.e. number theory problems). For instance, I was thinking about factoring large numbers.

For every natural number, C, (that isn't equal to 0 or 1), there are an infinite number of pairs of positive, real numbers A and B for which A * B = C. For instance, 6 = 1.0 * 6.0 = 2.4 * 2.5 = 2.0 * 3.0 = ... I wonder if playing around with pairs of real numbers like (2.4, 2.5) can lead you to pairs of integer numbers like (2, 3).

Imagine all the pairs (A, B) for which A * B = C. Let's create a way to graph all such pairs in a funny sort of way. Let's pick a bunch of A's going from 0 to C. For each different A, we can calculate B via C / A. Let's consider the parts of A and B that are to the right of the decimal point to see if one pair, (A, B) can lead us to another pair (A', B') which are integers (i.e. have only zeros to the right of the decimal point). In fact, let's see if we can come up with a numerical analysis approach where we use estimations to hunt down (A', B').

To do this, let's create a funny sort of three dimensional graph. Here's the pseudo code (assuming we're trying to factor some number, c):
from math import floor

for step in range(1, LARGE_NUM_OF_STEPS + 1):
    a = (float(step) / LARGE_NUM_OF_STEPS) * c
    b = c / a
    x = a - int(a)  # x and y will always be in the range [0, 1).
    y = b - int(b)
    z = floor(a)
    draw_point(x, y, z)
If you use a very large number for LARGE_NUM_OF_STEPS, you'll create a funny looking 3D graph. Any place where x = 0 and y = 0, you'll have a pair of integers (a, b) that multiply to equal c.

Naturally, this is an extremely expensive way to factor numbers. However, I'll bet you'd learn a lot about factoring numbers by looking at this graph. In fact, I'm guessing that looking at this graph will lead you to a numerical-analysis-style approximation algorithm for homing in on valid integer pairs (a, b) where a * b = c.
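As a rough, runnable illustration of that hunt (the function and its name are just mine), you can scan values of a and keep the spots where both fractional parts vanish -- exactly the (x, y) = (0, 0) points on the graph:

```python
def integer_factor_pairs(c, steps=100000, tol=1e-9):
    """Scan a over (0, c]; wherever both a and b = c / a have fractional
    parts of (nearly) zero, we've landed on an integer factor pair --
    an (x, y) = (0, 0) point on the graph described above."""
    pairs = set()
    for step in range(1, steps + 1):
        a = step / steps * c
        b = c / a
        if abs(a - round(a)) < tol and abs(b - round(b)) < tol:
            ia, ib = int(round(a)), int(round(b))
            if ia >= 1 and ia * ib == c:
                pairs.add((ia, ib))
    return sorted(pairs)

print(integer_factor_pairs(6))  # → [(3, 2), (6, 1)]
```

Note that the uniform sampling only lands exactly on some of the pairs; a real version would refine each near-miss numerically, which is the approximation algorithm the post is guessing at.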

Humans are fairly good at visualizing things in 3D, but it'd be really cool to extend this graph to four dimensions (perhaps using time as the fourth dimension). The fourth dimension would be used for various values of C, from 0 to infinity. I really think that looking at such a graph would help with fast number factoring.

Updated: Fixed a couple of errors pointed out by BMeph.

Friday, November 25, 2011

Computer Science: NP-complete Problems are Really NP-complete

First of all, let me apologize for my sloppy terminology. I've always been a better software engineer than a computer scientist.

This is how Wikipedia describes NP-complete problems:
In computational complexity theory, the complexity class NP-complete (abbreviated NP-C or NPC) is a class of decision problems. A decision problem L is NP-complete if it is in the set of NP problems so that any given solution to the decision problem can be verified in polynomial-time, and also in the set of NP-hard problems so that any NP problem can be converted into L by a transformation of the inputs in polynomial-time.
Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known. That is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a result, the time required to solve even moderately large versions of many of these problems easily reaches into the billions or trillions of years, using any amount of computing power available today. As a consequence, determining whether or not it is possible to solve these problems quickly, called the P versus NP problem, is one of the principal unsolved problems in computer science today.

While a method for computing the solutions to NP-complete problems using a reasonable amount of time remains undiscovered, computer scientists and programmers still frequently encounter NP-complete problems. NP-complete problems are often addressed by using approximation algorithms.
Note that it says, "As a consequence, determining whether or not it is possible to solve these problems quickly, called the P versus NP problem, is one of the principal unsolved problems in computer science today." I haven't seen the proof for it, but I've also heard that if you could prove that one NP-complete problem does not have a polynomial-time solution to it, then that would prove that none of them have a polynomial-time solution to them. The theory is a little over my head, but I'm going to take a shot.

I'd like to propose a problem that is NP-complete and show that it does not have a polynomial-time solution. Imagine I pick an n-tuple of random 32-bit, unsigned integers. Your job is to guess what the n-tuple is. Hence, where n is 2, I might pick (376, 792), and your job is to guess that tuple. If you guess a certain tuple, I can tell you whether you're right or not. Hence, I can verify a solution in polynomial time (it takes me O(n) int comparisons to verify a solution). However, to try to solve such a problem, you either have to get really lucky or you have to brute force it. If you can find a method other than brute forcing every possible solution, then obviously there's something wrong with my random number generator. Using brute force to guess the solution requires O(A^n) time for some constant A. Since it's "to the n", it's not polynomial time (3nA^3 is polynomial since the exponent is fixed; A^n is not since the exponent grows with n).
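A toy version of this guessing game is easy to write down (the function names are mine, and I've shrunk the key space to 2 bits per slot so the exhaustive search actually finishes):

```python
import itertools
import random

def make_secret(n, bits=32, seed=None):
    """The problem instance: an n-tuple of random unsigned ints."""
    rng = random.Random(seed)
    return tuple(rng.randrange(2 ** bits) for _ in range(n))

def verify(secret, guess):
    """Verification is polynomial: O(n) integer comparisons."""
    return len(secret) == len(guess) and all(
        s == g for s, g in zip(secret, guess))

def brute_force(n, bits, is_correct):
    """Exhaustive search tries up to (2 ** bits) ** n candidates,
    which is exponential in n."""
    for guess in itertools.product(range(2 ** bits), repeat=n):
        if is_correct(guess):
            return guess
    return None

secret = (3, 1, 2)  # pretend these were random 2-bit picks
print(brute_force(3, 2, lambda g: verify(secret, g)))  # → (3, 1, 2)
```

With the real parameters (bits=32), the search space is (2^32)^n candidates, which is the exponential blowup the paragraph above is describing.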

Now I know that some of you reading this understand the computer science a lot better than I do. Have I actually shown (at least informally) that NP-complete problems are really NP-complete? If not, can you explain, in layman's terms, what I'm misunderstanding? Thanks!

Friday, November 18, 2011

Tetris in Dart

For my first Dart program, I decided to implement Tetris. I was inspired by Alexei Kourbatov's site, which contains an implementation of Tetris he wrote way back in 1999!

You can play my version here. The source code is here.

Overall, I like Dart. The Dart version of my code is about the same length as the JavaScript version of my code (the Dart version is slightly longer because I switched from using innerHTML to using the DOM). The combination of the optional static typing and Dart Editor helped me avoid many, many bugs. I'm still a Python guy at heart, but my experience with Dart was a very pleasant one.

I jotted down some lessons I learned. Warning: This is my first Dart program, and I haven't even read the tutorial yet, so I could be missing the boat on some of these points!

Don't try to play with DOM elements before the DOM is loaded. The easiest way to achieve this is to import your Dart-compiled JavaScript at the bottom of the page.

Dart has generics. This is a little unfamiliar for a scripting guy like me.

I had a hard time finding methods like JavaScript's Math.abs and Math.floor. They're now members of num.

This is how you add to the DOM:
HTMLDocument doc = window.document;
HTMLParagraphElement p = doc.createElement('p');
p.textContent = message;
Using innerHTML works too, but I suspect it's frowned upon.

DOM manipulation isn't as easy as using innerHTML. It's not quite as horrible as I feared, but it's still a pain in the butt.

Sometimes you need to use a temporary variable to help out the type system. If you write:
doc.getElementById("s-" + i + "-" + j).src = 'images/s0.png';
it'll say "'src' is not a member of Element." However, you can easily fix this by writing:
HTMLImageElement img = doc.getElementById("s-" + i + "-" + j);
img.src = 'images/s0.png';
Translating from JavaScript to well-typed Dart is a linearly time-consuming, but relatively straightforward process.

The intelligent code completion in Dart Editor was really helpful since I don't really know the language or its APIs. I can edit text a lot faster in Vim, but code completion helps me know what to write.

Trying to figure out the proper way to handle keyboard events in Dart is hard because of the DOM. From what I can see, there are two ways to do things: the deprecated way, and the way that isn't implemented yet.

Command-g is the usual Apple shortcut to find the next occurrence of something, but it doesn't work in Dart Editor.

They're working on a new DOM API. I've decided not to use it until they say it's stable.

The way main() works in the Sunflower Example doesn't match the way main() works in sample applications generated by Dart Editor. I'm not sure why.

Wednesday, November 02, 2011

Eclipse vs. Intellij for Android, PlayN, Google App Engine, and Python Development

I can't figure out whether I should use Eclipse or IntelliJ.

Eclipse is good in that it's open source and free. Furthermore, Google has released several plugins for it such as the ADT (Android Developer Tools) plugin and the Google Plugin for Eclipse. However, Eclipse generally leaves me feeling confused, overwhelmed, and out of control. I spent two days reading a bunch of tutorials, but I still feel like I can't do what I want. I installed Aptana Studio 3 (which includes PyDev) in order to play around with my Python Google App Engine project. However, I couldn't figure out how to do two things: 1) update the PYTHONPATH for my project to include a few project-specific directories 2) use two space indents in Python code just for this project (which is the standard at Google).

On the other hand, there's IntelliJ. I've never used IntelliJ for Java development, but I've used PyCharm and RubyMine for Python and Ruby development. The downside is that it's fairly expensive. The upside is that it's really good. It doesn't leave me feeling confused, overwhelmed, and out of control. In general, I'm able to get it to do what I want. Furthermore, the IDEAVim keybindings are pretty good (not as good as the JVi plugin for NetBeans, but still pretty good).

I'm hoping to start with a toy project that uses the PlayN framework (i.e. Java), Android, etc. It'd be nice if I could use the same IDE for Java, Python, web stuff, etc. I don't really know all the various Google APIs such as Google App Engine for Java and Android, so an IDE that can guide me along would be helpful. It seems like the only solution is to use Eclipse for the Google stuff and IntelliJ for Python, Ruby, and web stuff. I haven't purchased a license yet, but I did win a license for RubyMine which might be applicable.

I just read Android Development: Eclipse vs. IntelliJ IDEA which said a) IntelliJ is way better b) all of the Android tools are still accessible outside of the IDE anyway. I also noticed that IntelliJ has come out with its own Android plugin, and it's even open source. I'm leaning toward IntelliJ, but I hate being the only one around using a certain tool. Any advice you guys have would be welcome.

(By the way, I'm sure someone is going to come along and plug Vim, Emacs, or TextMate. I'm personally a Vim diehard. However, I've come to appreciate the benefit of using Vim keybindings within a larger IDE. YMMV.)

Thursday, September 22, 2011

YouTube: Magisto

It's tough work being a developer advocate at YouTube. I often have to do things like blog, talk tech with other programmers, and invite people to lunch. It even requires a higher standard of personal hygiene. For instance, the other day, I had to buy a new Google T-shirt because I had a large toothpaste stain on the shirt I was wearing, and I had a business lunch with a potential partner.

Well after months of effort, my work has finally come to fruition. TechCrunch just announced that Magisto is now part of

Hmm, probably a good time to go to the gym. All those business lunches are starting to show ;)

Tuesday, August 30, 2011

Humor: Millions of Microprocessors

Today I ran "fortune", and I got:
I went to my first computer conference at the New York Hilton about 20 years ago. When somebody there predicted the market for microprocessors would eventually be in the millions, someone else said, "Where are they all going to go? It's not like you need a computer in every doorknob!"

Years later, I went back to the same hotel. I noticed the room keys had been replaced by electronic cards you slide into slots in the doors.

There was a computer in every doorknob.
-- Danny Hillis
Aside from the fact that that's an awesome quote, I'm pretty excited about this because Danny Hillis was one of the founders of a company I worked for (Metaweb).

Tuesday, August 23, 2011

Linux: Vim Freezing

I had a problem with Vim freezing, and I finally figured out why. If you log into a Linux machine with X11 forwarding turned on, but without an X server, vim hangs when you try to start it. So do various other commands.

I recently switched from an Ubuntu laptop to a MacBook Pro running OS X. In my .ssh/config, I had:
Host my-server-name
    ForwardX11 yes
When I sshed in from my Linux box, things were fine. However, when I sshed in from my Mac, things were broken. There are three workarounds:
  1. Don't enable X11 forwarding. X11 forwarding is enabled for ssh by either the -Y or -X flag or by the ForwardX11 configuration variable.
  2. If you do enable X11 forwarding, make sure you start an X11 server first and then log in with a fresh xterm or terminal window so that the DISPLAY environmental variable is properly set.
  3. Pass -X to vim so that it doesn't try to connect to the X server. Vim will not be able to alter the window title or the clipboard. It was surprising to me that vim will try to connect to the X server even if you're not using gvim. This approach won't fix other applications that may rely on X in unknown ways.

Thursday, August 11, 2011

CSS: Fading Clipped Text

CSS has a feature, "text-overflow: ellipsis;", that works really well when you want to clip a piece of text and replace it with "...". However, it only works for a single line of text. It doesn't work when you have a paragraph of text, and you want to clip all the text below a certain point.

This is a problem I've struggled with a couple of times. I've always wanted to figure out how to have the text fade out at the bottom of the box. Before now, I've always just clipped the text at the bottom of the box; that looks ugly since, most of the time, the text is vertically clipped in the middle of a sentence.

During a recent HTML5 Hackathon at Google, I learned about gradients and masks. After about 45 minutes' worth of futzing around, I finally achieved the effect I was looking for. Here's the code. It basically puts a mask over the text with a gradient that makes the text fade out.
<!DOCTYPE html>

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Fading Text</title>
<style type="text/css">
.box {
  list-style: none;
  padding: 5px;
  background-color: #F5F5F5;
  border: 1px solid #DCDCDC;
}

.video-list {
  padding-left: 0;
}

.video-list {
  width: 400px;
  height: 125px;
  margin-bottom: 5px;
  overflow: hidden;
  position: relative;
}

.video-title {
  font-weight: bold;
}

.video-duration {
  padding-bottom: 1em;
  font-style: italic;
  font-size: 75%;
}

.video-description {
  font-size: 75%;
}

.video-description-fade {
  position: absolute;
  bottom: 0;
  left: 5px;
  height: 2em;
  background-color: #F5F5F5;
  -webkit-mask-image: -webkit-gradient(
      linear, left top, left bottom,
      from(rgba(0,0,0,0)), to(rgba(0,0,0,1)));
}

.video-description-fade {
  width: 272px;
}
</style>

<ul class="video-list">
<li class="box playlist-item">
<div class="video-title">Google I/O 2011: HTML5 Showcase for Web Developers: The Wow and the How</div>
<div class="video-duration">60:23</div>
<div class="video-description">
Eric Bidelman, Arne Roomann-Kurrik

We'll share the strengths and extents of HTML5, showing magnificent
demos of bleeding-edge features in Google Chrome. Digging into
high-fidelity graphics, performance, and system integration, we'll
break each demo down on the big screen to show how it was
constructed. Then we'll show you how to use Chrome to its full
potential in your own projects.
<div class="video-description-fade"></div>
</div>
</li>

<li class="box playlist-item">
<div class="video-title">Chicken Monkey Duck</div>
<div class="video-duration">1:20</div>
<div class="video-description">
Get it on iTunes:


Monkey chicken chicken,
Monkey Chicken, duck duck,
Chicken monkey monkey, Chicken Monkey,
chicken chicken monkey duck.
Monkey duck,
Chicken duck,
Monkey monkey duck duck,
Chicken Monkey, chicken chicken monkey,
"Chicken Monkey Duck."

Chicken chicken monkey duck,
Chicken Monkey, duck duck,
Chicken chicken monkey,
Chicken monkey,
Chicken duck.
Chicken duck duck,
Chicken monkey monkey duck,
"Chicken Monkey Duck?"
Chicken duck.
Monkey duck duck:
Chicken chicken, monkey,
Chicken, monkey monkey, Chicken Monkey.
Chicken chicken monkey?
Chicken, monkey monkey,
"Chicken Monkey Duck."

Chicken chicken, monkey,
"Chicken Monkey Duck."
Chicken chicken, duck.
Chicken Monkey, monkey,
Chicken Monkey—chicken duck.

Duck, Chicken Monkey
Chicken chicken monkey, chicken.
Duck duck,
Chicken chicken, duck.
Chicken, monkey monkey...

Duck duck, Chicken Monkey,
Chicken chicken, monkey.
Chicken Chicken (monkey monkey),
Chicken monkey,
"Chicken Monkey Duck."

Chicken chicken chicken,
Monkey monkey,
Chicken Monkey:
Duck duck duck duck duck duck duck duck,

The Album:
iTunes —

Bandcamp —
<div class="video-description-fade"></div>
</div>
</li>
</ul>

Python: Newbie Nugget: Lambda Expressions

I gave a talk for the SF Python Meetup on lambda expressions:

(If you can't see the slides above, you can see them here.)
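For anyone who couldn't make it, the newbie nugget itself fits in a few lines: a lambda expression is just an anonymous function, handy for short callbacks like sort keys.

```python
# These two definitions are equivalent:
add = lambda x, y: x + y

def add_def(x, y):
    return x + y

# The typical use case: a throwaway key function for sorting.
words = ["banana", "fig", "apple"]
print(sorted(words, key=lambda w: len(w)))  # → ['fig', 'apple', 'banana']
```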

Wednesday, August 03, 2011

Linux: VNC Stopped Working

I have been using VNC to connect from my Ubuntu box to an OS X box. On the Ubuntu side, I was using vinagre, which is the standard "Remote Desktop Viewer" application. On the Mac side, I was using standard desktop sharing.

Things were working for months, but they stopped working for me the other day. I could connect just fine, but I could only see a black screen. Apparently, this is a known issue.

Switching to Remmina (sudo apt-get install remmina) fixed the problem.

Saturday, July 23, 2011

Go: A Second Impression

My initial impressions of Go have always been somewhat negative because I was comparing it to things like Scala, Haskell, and Python. However, having read a good part of the tutorial, I've changed my opinion. I really like Go!

Go doesn't try to be an extension of C to make it more like Smalltalk--that's Objective-C. Go doesn't try to be the end-all-be-all of object-oriented languages derived from C--that's Scala.

Since it neither maintains backwards compatibility with C nor adds a ton of features, I originally had a hard time getting excited about Go. Now I see that Go is a modern language that tries to follow what I think of as C's philosophy. It's simple, elegant, small, and native.

There's an old saying:
Do not seek to follow in the footsteps of the wise men of old. Seek what they sought.

- Matsuo Munefusa (”Basho”)
I think that perfectly describes Go.

Saturday, July 16, 2011

Humor: Proving Programs

I've always been wary of programming proofs.

For instance, I can mathematically prove that 4195835 / 3145727 > 1.3338. However, I know of certain Pentium processors that would disagree with me. If I try to prove that the following bit of C code prints out "Hello World":
if (4195835.0 / 3145727.0 > 1.3338)
    printf("Hello World\n");
else
    system("rm -rf /");
I might be a bit surprised when it deletes my hard drive instead ;)

1 bunny + 1 bunny = 2 bunnies, right? Well it depends on their sexes. It's possible that in a given time period, 1 bunny + 1 bunny might equal 5 bunnies. As I joked in a previous blog post, "All models are wrong. Some models are useful."

I really think this same thing applies to proving programs. Donald Knuth famously said, "Beware of bugs in the above code; I have only proved it correct, not tried it."

Books: Masterminds of Programming

I just finished reading Masterminds of Programming: Conversations with the Creators of Major Programming Languages.
Masterminds of Programming features exclusive interviews with the creators of several historic and highly influential programming languages. In this unique collection, you'll learn about the processes that led to specific design decisions, including the goals they had in mind, the trade-offs they had to make, and how their experiences have left an impact on programming today.
In short, I really enjoyed it. Here's an extremely abbreviated and opinionated summary:

Adin D. Falkoff (APL) made programming as mathematical as possible.

Thomas E. Kurtz (BASIC) was generally a nice guy who wanted to bring programming to the masses.

Charles H. Moore (FORTH) frustrated the heck out of me. He stated that operating systems are the software industry's biggest con job. I disagree. Operating systems protect me to some degree from bad and malicious code. They also let me run multiple programs at the same time and allow me to keep running even when one of the programs crashes. He also said that a piece of code written in any other programming language will be 10 times as large (in number of lines of code) as the same code written in Forth. I'd like to see him try that trick with Python!

Robin Milner (ML) was completely fascinated with programming models and proving the correctness of code. That reminds me of the quote, "All models are wrong. Some models are useful."

Donald D. Chamberlin (SQL) showed me some of the history of SQL. I didn't know IBM research was such an interesting place.

Alfred Aho, Peter Weinberger, and Brian Kernighan (AWK) were as good as I expected.

Charles Geschke and John Warnock (PostScript) talked about Adobe and the history of PostScript. I just don't like that Charles guy, and I don't like Adobe. However, they're smart guys.

Bjarne Stroustrup (C++) was as frustrating as I expected.

Bertrand Meyer (Eiffel) was really interesting. He wrote a book in French that has had a profound impact on French programmers. If he had translated that book into English, it's likely he'd be as famous as, say, Richard Stevens (the author of "UNIX Network Programming").

Brad Cox and Tom Love (Objective-C) showed me that Objective-C's goal was to enhance C in the smallest way possible to make it a bit more like Smalltalk.

Larry Wall (Perl) was awesome, as usual.

Simon Peyton Jones, Paul Hudak, Philip Wadler, and John Hughes (Haskell) were fascinating, as usual.

Guido van Rossum (Python) was practical and interesting, as usual.

Luiz Henrique de Figueiredo and Roberto Ierusalimschy (Lua) were okay. (I'm a Python guy, so it's a bit hard for me to get excited about Lua.)

James Gosling (Java) appears to suffer from premature optimization.

Grady Booch, Ivar Jacobson, and James Rumbaugh (UML) left me even less interested in learning UML.

Anders Hejlsberg (Turbo Pascal, Delphi, C#) was awesome. I knew I liked this guy from previous books, but this interview made me like him even more.

Overall, I think this book was a bit drier than, say, Coders at Work: Reflections on the Craft of Programming, so you should read that one first. However, if you're a guy like me who loves programming languages, this book is a must read.

Friday, July 15, 2011

Books: Python 3 Web Development Beginner’s Guide

Packt Publishing asked me to review Python 3 Web Development Beginner's Guide. I have to admit, it's a bit of an odd duck. A better (albeit overly verbose) title might have been "An Introduction to Rich Internet Application Development Using jQuery UI, a Very Modern Version of Python, a Relatively Old Python Web Application Framework Named CherryPy, and an Ancient Version of HTML Written by a Guy Who Uses Windows".

The first tipoff that this book was a bit strange was that the author uses Windows and some combination of Firefox and IE. It seems like most web developers use OS X (or occasionally Linux), and they prefer Chrome over IE.

The next tipoff was the use of jQuery UI. jQuery UI is a very modern technology which is often used to build rich internet applications. RIAs really aren't the sort of thing that I would expect to see in a book for beginners. What happened to the old days when beginning web applications focused on the server dynamically generating HTML? If I took the time to count the number of lines of code, I wouldn't be surprised if this book had more JavaScript than Python.

The title of this book mentions Python 3, but if you search for "Python 3" in the book, there are extremely few mentions of it. This book really isn't about Python 3 per se (as compared to Python 2); it has a lot more to do with jQuery UI.

Whereas Python 3 and jQuery UI are very modern technologies, the book's use of HTML 4 and CherryPy stands in stark contrast. HTML 4 is an *ancient* version of HTML. I would expect anyone using jQuery UI to use either XHTML or HTML5. At the very least, I would have expected one of the transitional DTDs. Similarly, he uses CherryPy. Although I agree that CherryPy is solid code, it's also fairly old; it predates all of the modern Python frameworks.

This book claims to teach web development "without having to learn another web framework" [p. 1]. That's simply not true. It makes heavy use of CherryPy. The home page for CherryPy calls it an "HTTP framework" and says that it has "everything you would expect from a decent web framework." It's not as full-featured as, say, Django, but parts in the example code such as "@cherrypy.expose" [p. 36] are certainly framework features. In fact, "@cherrypy.expose" is part of CherryPy's object publishing system, which it uses as a replacement for regex-based URL routing.
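To illustrate what object publishing means, here's a toy sketch of the idea (not CherryPy's actual implementation): URL path segments map to attributes of a page object, and only methods explicitly marked as exposed are reachable from the web. The expose decorator and dispatch function below are hypothetical stand-ins for what CherryPy does internally.

```python
def expose(func):
    """Mark a method as web-accessible, like @cherrypy.expose."""
    func.exposed = True
    return func


class Root(object):
    @expose
    def index(self):
        return "home page"

    @expose
    def hello(self):
        return "hello, world"

    def secret(self):  # Not exposed, so not reachable via a URL.
        return "hidden"


def dispatch(root, path):
    """Resolve a URL path like '/hello' against the object tree."""
    name = path.strip("/") or "index"
    handler = getattr(root, name, None)
    if handler is None or not getattr(handler, "exposed", False):
        return "404 Not Found"
    return handler()
```

Under this scheme, visiting /hello calls Root.hello(), while unmarked methods like secret() stay unreachable. This is the gatekeeping role that "@cherrypy.expose" plays, and it's why it's fair to call CherryPy a framework: the URL routing is done for you.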

Another thing that's a bit strange about this book is that the author doesn't use a client-side or a server-side templating language. In JavaScript, he tends to use string concatenation, which is weird because there is a templating plugin for jQuery. On the server, he embeds HTML directly in the Python code, which is pretty ugly (as he mentions on p. 229).
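For contrast, here's a hedged sketch (with made-up task-list markup) of the difference between string concatenation and even a minimal template, such as Python's built-in string.Template:

```python
from string import Template


# String concatenation, roughly the book's approach (markup is made up):
def task_row_concat(title, done):
    return ("<tr><td>" + title + "</td><td>" +
            ("yes" if done else "no") + "</td></tr>")


# The same markup via a template, which separates structure from data:
ROW = Template("<tr><td>$title</td><td>$done</td></tr>")


def task_row_template(title, done):
    return ROW.substitute(title=title, done="yes" if done else "no")
```

Both produce identical output, but the template version keeps the HTML in one readable piece instead of scattering it across string fragments, which matters more and more as the markup grows.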

Furthermore, the code is extremely sloppy. It does not follow Python's style guide for whitespace (PEP-8) (see, for example, p. 145), even though PEP-8 is the standard in the Python community. I don't know of anyone who puts a space before the colon in expressions such as "if not isinstance(name,str) :" [p. 146]. Nor is the code even self-consistent. The indentation in the JavaScript is not only non-standard and inconsistent, it's occasionally completely wrong [p. 118] (i.e. the indentation disagrees with the braces).
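For readers unfamiliar with PEP-8, here's the kind of fix I mean, using a hypothetical check_name helper modeled on the book's example:

```python
def check_name(name):
    # The book's style: if not isinstance(name,str) :
    # PEP-8 style: a space after the comma, no space before the colon.
    if not isinstance(name, str):
        raise TypeError("name must be a string")
    return name
```

It's a small thing, but consistent whitespace is so ingrained in the Python community that deviations like this immediately mark code as unidiomatic.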

Aside from bad style, I'm a little concerned about various coding practices. For instance, the JavaScript at the bottom of p. 40 has variables that don't use var. This means they're effectively global. This is extremely bad practice. Fortunately, he does use var in other places in the book.

On the subject of security, there are several standard security vulnerabilities that web applications must protect against: XSS (cross-site scripting vulnerabilities), SQL injection attacks, XSRF (cross-site request forgeries), and session fixation (or session hijacking) attacks. Every book on web development should cover these.

The book mentions XSS, but I fear its approach may not be thorough enough. It does not mention the term "SQL injection" attack, but the ORM shown in the book does look to be somewhat safe. It mentions XSRF but says that it's out of scope. It doesn't mention "session fixation" or "session hijacking" at all. In general, I don't think the book is careful enough about escaping things properly. For instance, on p. 293 the author creates a URL in JavaScript using values from a form, but he doesn't take care to URL encode the parameters.
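For what it's worth, URL encoding parameters is a one-liner in Python's standard library. Here's a sketch using Python 3's urllib.parse with a made-up query value:

```python
from urllib.parse import urlencode

# Made-up form value; the "&" and the space would corrupt a naively
# concatenated query string.
params = {"q": "rock & roll"}
url = "https://example.com/search?" + urlencode(params)
# urlencode produces "q=rock+%26+roll", so the query survives intact.
```

The same discipline applies in JavaScript via encodeURIComponent; either way, anything user-supplied that ends up in a URL needs to be encoded.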

Despite all of the above, I can say this about the book. The author does a good job explaining the web to beginners. Modern web applications are fairly complicated beasts. There's the client, the web server, and the database server, and they each require their own syntaxes. The author does a decent job explaining what runs where. It can be difficult for an expert web developer, such as myself, to remember that newbies might not know all these things.

In summary, will this book help you become a competent, professional web developer? Absolutely not. Is it as well written as, say, Agile Web Development with Rails? No. However, might it be a good way for a beginner to dip his toes into web development with Python and jQuery UI? Maybe.

(Disclaimers: Packt gave me a free electronic copy of this book in trade for my review. I have not read the whole thing. I did read the first 50 pages and skimmed various key sections.)

Thursday, July 07, 2011

Google I/O 2011

I went to Google I/O May 10-12, 2011. Note that the first day of Google I/O was my second day as a Google employee. These are my personal notes. If you're interested in any of these talks, you can find them on YouTube.


One of the slides during the keynote showed a picture of an android eating an apple.

There were 5000 people in the room for the keynote.

There were 110 cities watching at viewer parties.

Android has come a long way: 100 million activations, 450,000 developers, 215 carriers, 310 devices, 112 countries, 400,000 activations per day, 200,000 apps in the Android market, 4.5 billion apps installed.

Android 3.1 is Honeycomb.

Android can now act as a USB host. Hence, you can now get a keyboard and a mouse for your tablet.

You can use an X-Box controller with it.

Android 3.1 will be used for Google TV.

Android market will be available on Google TV.

Ice Cream Sandwich will be the new Android release.

The hope is for one Android OS everywhere.

It will be open source.

They have Android software that recognizes where you're looking and can recognize your head movement too.

There are books on Android market.

There are movies on Android market. They're not tied to any specific device. They're just tied to your Google account.

You can "pin" a movie in order to download it to your device.

Verizon seems to have the coolest new Android stuff.

Google has a new music service in beta.

If you add music to your account, you can listen to that music from any device.

There are Windows and Mac clients.

There's a web-based music manager.

It's completely synced on all devices.

There's an "instant mix" feature based on machine learning.

The music is kept in the cloud. There's no need for syncing.

You can cache music recently played.

You can make stuff available instantly.

The Music application is initially by invitation only.

It's free while in beta.

You can upload 20,000 songs.

You need Android 2.2 or higher for the Android app.

There's going to be an alliance to determine how quickly Android devices should be updated and how long they should be maintained.

Sprint is in the Alliance. So is Verizon. I'm a little peeved at Sprint because they won't upgrade my Samsung Moment beyond Android 2.1, and I need at least 2.2 for some of the apps I'm interested in.

Android open-accessory is a standard accessory platform. It has hardware and software components.

There was a demo of an exercise bike plugging into a phone.

The accessory development kit (ADK) is based on Arduino.

There was a demo of a full-sized (i.e. man-sized) labyrinth board controlled by tilting a tablet.

The ADK doesn't have any NDAs or fees. It's completely open.

Android @ Home is an OS for your home.

There's an Android @ Home framework. It has its own wireless protocol. It has very low cost connectivity.

He said to envision a real world Farmville ;)

He showed a demo of the lights being connected to a game of Quake.

LightingScience has prototype lightbulbs.

Project Tungsten is a hub for your home. It runs Android @ Home.

They showed a prototype where you can just touch a CD to the hub and it starts streaming music from the cloud.


They gave away free tablets!!!

There wasn't any mention of GAE. It was mostly about Android.

Life of a Google API Developer

They were showing off Google API explorer, which looks pretty neat.

There's a list of APIs. Try them.

There are lots of APIs.


The API explorer is much nicer than using curl.

They use OAuth2 with delegated auth.

The Google API Discovery Service is an API to serve docs about APIs.

There are generic client libraries for the Google stuff for Java, Python, PHP, .NET, Ruby, etc.

OAuth2 is complicated, but they have nice helper libraries.

They're using Mercurial on

Daryl Spitzer said Go is amazingly succinct.

Passwords are like underwear. Keep them secret. Change them often.

The APIs have quotas. However, they're reasonable for moderate apps.

Google is introducing some APIs that you have to pay for.

5000 tablets were given to users for free a month before the actual release of the tablets.

The talks for Google I/O are on YouTube.

Go for Web Apps

Rob Pike (of UNIX fame) gave part of the talk!

Go is fun, efficient, and open source.

Go has a web server.

It's natively compiled.

It has gdb support.

It simplifies some stuff.

It was originally intended for systems code like building web servers, but it's now more general.

It has 1st class functions.

It has low level types, like float32.

I don't understand its replacement for exceptions. [Several weeks later, I read an article on Defer, Panic, and Recover.]

goinstall is its package manager.

It has closures.

It is statically typed.

Google API clients are coming for Go.

Go now has support for GAE! The apps are compiled in the cloud.

Go is the first native language on GAE.

Go is not theoretically interesting. It's just really useful.

Go's library support is improving quickly.

It has garbage collection and full control over how memory is laid out.

Programming Well With Others

The format of this talk was really entertaining. In fact, it was my favorite talk. They pretended to do a radio show called "The Ben and Fitz Show".

There are people who are friendly but steal too much time and attention.

Perfectionists suck energy out of a project.

"You are not your code."

Python at Google

This talk was by Wesley Chun.

io2011 is the tag to use for Google I/O this year.

Python just turned 20.

C++ is Google's primary language.

Python was used at Google before Google really existed. It was used in the original crawler.

Java came later.

Google has a Python training class.

YouTube gets 2 billion views/day.

There are 35 hours of video uploaded to YouTube every minute.

Welcome to Computer Programming For Kids (CP4K) looks like a fun book.

HTML5 Showcase

This talk was amazing.

Here's the video.

Here's some code.

HTML5 can handle binary data.

There's a file uploader that can upload multiple files or even a whole directory. You can drag and drop images to upload them.

The talk was overwhelmingly awesome!

There's 3D support in the browser.

There's WebGL 3D. It runs on the GPU.

They showed a 3D file browser tied to an in-browser command line.

They showed real-time audio processing in JavaScript.

They showed CSS that produces 3D effects declaratively.

The three parts of this talk were about files, graphics, and audio.

YouTube APIs iframe Player

Using the iframe player simplifies things.

HTML5 doesn't have fine-tuned video support yet: there's a content protection problem; there's no full-screen HD (except WebKit); there's no microphone and camera access; there's a format problem.

However: HTML5 has better accessibility; there's mobile support; it has faster start times; it's more reliable.

HTML5 doesn't support all the features used by YouTube.

Custom AS3 players won't work on iOS, of course.

There's a YT object you can play with on the Chrome console.

A guy named Greg wrote the HTML5 player.

WebM is a better encoding.

The performance of HTML5 video on older hardware is not necessarily better than Flash. (That disappoints me because I know Flash can be a hog. I know QuickTime ran pretty well on older Mac hardware that can't keep up with watching modern movies via Flash.)

Fireside for Developer Relations

Mike Wenton leads developer relations.

There are lots of GTUGs (Google technology users groups).

The public forums need more Google attention, especially for GAE.

Google needs to reach out to unconverted fans at business conventions.

Android's scale of growth has been mind blowing.

There are a bunch of programmers who are nuts about Google.

There were lots of developer advocates in the crowd.

Closure Talk

The speaker was the one who wrote the Closure book. He worked on Google Calendar. He's no longer at Google.

He defined a large project to be 30k+ lines of code, 4+ people, and 6+ months.

How should you code a RIA like Gmail and Calendar? Use GWT? Write a ton of JS? Calendar, Gmail, Maps, Blogger, Docs, and Reader were all pure JavaScript.

Closure has many parts. There is the Closure library, Closure templates, and the Closure compiler (which also does static checking). They're all independent. It even makes sense to use the Closure compiler with jQuery.

Code written using the Prototype library is not easy for people to read unless they already know Prototype.

This code always returns undefined because of ";" insertion:
function f() {
  return
  {
    answer: 42
  };
}
Too many script includes in the head lead to slowness.

Create a special version for mobile. However, this can lead to ugly forking.

The Closure compiler can compile templates + your JavaScript + the Closure library itself into optimized JavaScript.

The compiler is a whole program optimizing compiler.

The templating language escapes HTML by default.

The Closure compiler can compress an amazing amount. In advanced mode, it's incredible.

YUI compressor can compress JavaScript to 27% of original size. However, the Closure compiler running in advanced mode can compile JavaScript to 0.5% of its original size.

It does things like strip functions that are never called, which is useful if you're using a library like jQuery.

Don't create a mobile version of your JavaScript library. Use 1 version, and let the Closure compiler strip stuff not needed by defining which user agent you can assume. Hence, you can compile multiple different versions of the same library.

The compiler can statically enforce type hints that you give in docstring comments.

Closure tools help manage and optimize large JavaScript apps.

Wednesday, July 06, 2011

Software Engineering: Code Reviews vs. Peer Programming

I've been thinking lately about the benefits of code reviews vs. peer programming. In general, I think most companies that really care about code quality use either code reviews or peer programming. Various companies are famous for using one approach or the other, which leads me to wonder: which approach is better under which circumstances?

Pivotal Labs is famous for doing full-time peer programming. It's well known that they do a good job writing software. However, I wonder if full-time peer programming might be too expensive for mundane code. I also wonder if it makes sense to work as a pair when someone needs to spend a few days reading, learning, and researching.

In contrast, Google is famous for code reviewing all commits before checkin. Certainly this frees up engineers to get more work done since they can spend a high percentage of their time working in parallel on separate tasks. However, I wonder if a code reviewer really has the ability to make the same level of architectural improvements as a peer programmer. Certainly a code reviewer can catch style mistakes, but it's much harder to tell someone their entire approach is wrong (for instance, threaded code vs. asynchronous code).

Furthermore, I wonder if code reviewers in general can catch all the little assumptions that get built into code. It reminds me of a story my boss once told me. A few decades ago, he was working on satellite control software. There was a piece of code that made it through three levels of code review even though it contained a bug. It was missing a minus sign in some equations having to do with navigation. When the satellite was launched, it started spinning because it kept "thinking" it was upside down. It wasted half of its fuel before they could get the problem under control. My boss said that for satellites, the lifespan of a project is directly connected to the amount of fuel onboard. Hence, this came to be known as the three million dollar minus sign.

If this code had been peer programmed rather than code reviewed (or in addition to code review), would the peer have spotted the problem as the equation was being worked out? Certainly this taught me a valuable lesson about code review. It's far too easy to get hung up on less critical issues that are easy to spot, like style, instead of focusing on more important issues that require more brain power to understand.

In the book Professional Software Development: Shorter Schedules, Higher Quality Products, More Successful Projects, Enhanced Careers (see my blog post), Steve McConnell said that NASA found that the single most effective way to cut down on defects was to always have a second pair of eyes present. They were talking about building the shuttle, but I think the same thing applies to software.

Despite their pervasive use of code reviews or peer programming, Google and Pivotal Labs have both had their fair share of bugs. Even NASA makes mistakes. Hence, it's easy to see that neither code reviews nor peer programming can banish all defects. Given how fallible human beings are, it seems to me that the best way to keep people from dying because of mistakes is to avoid situations where mistakes are fatal. Driving is a fairly dangerous activity, and tens of thousands of people die each year in the United States because of driving errors. However, many more people make driving mistakes and yet survive because of various safety precautions.

So even though I think code reviews and/or peer programming are important factors in writing high quality software, I think it's even better to avoid situations where software defects can cause serious damage. I certainly think that the more mission critical a piece of software is, the smaller and stupider it should be.

Tuesday, July 05, 2011

Amazon Web Services Summit 2011: Navigate the Cloud

On June 21, 2011 I went to an Amazon Web Services conference in San Francisco. These are my notes.

In general, the conference was informative, but a bit subdued. It was about a quarter the size of Google I/O, it only lasted one day, the crowd was much smaller, and they weren't giving away anything huge for free. However, that makes sense since they hold multiple of these conferences per year. Nonetheless, it was interesting, and I learned some stuff.


The first part of the keynote was given by Dr. Werner Vogels, the CTO.

AWS doesn't lock you into an OS or language.

There are 339 billion objects in S3, and S3 handles 200,000+ storage transactions per second.

On a daily basis, they add as much capacity as they had in total in

Spot pricing allows you to spin up an instance when the price is right.

AWS CloudFormation deploys a stack of software.

AWS Elastic Beanstalk is easy to begin and impossible to outgrow.

CloudFormation and Elastic Beanstalk don't cost anything extra.

They now have Simple Email Service.

They're building features by working from the customer backwards. They start by writing the press release, then the FAQ, next the documentation, and then finally the code.

They mentioned that Amazon RDS (Relational Database Service) is not as scalable as S3.

AWS is now a PCI level 1 service provider. Amazon.com (i.e. the retail business) is moving to AWS.

They mentioned Amazon VPC (Virtual Private Cloud).

There have been more than 2 million downloads of Sketchbook Mobile.

Homestyler is the fast, easy, free way to design your dream home.

Autodesk uses EC2, S3, EBS, and CloudFront.

Homestyler can now do photo realistic renderings of your home.

They mentioned project Nitrous. It provides a rich, in-browser experience. It's like Google Docs but for your 3D modeling documents.

They're doing stuff where they're rendering in the cloud and then streaming to an iPad.

They handle customer data in real-time (i.e. with 100ms latencies) on AWS.

They have a primarily ex-Google team.

They provide customized ads based on user behavior.

They have some serious latency requirements. They had to hit 120ms end-to-end. They worked closely with AWS to meet their needs.

Their ad performance is incredible. They have a 7.5% click through rate.

This part of the keynote was given by Mitch Nelson, Director, Managed Systems.

LiveCycle is a managed service.

They talked about the Flash Media Server.

Adobe Connect is a managed service. It's an online meeting product. It's based on Flash.

Cloud Computing at NASA

This part of the keynote was about the JPL (Jet Propulsion Laboratory).

They wanted to augment availability in multiple geographic regions.

Some things can be safer in the cloud.

They're using the VPC (Virtual Private Cloud). They use IPSec.

They've saved a lot of money by adopting cloud computing.

Buying infrastructure way ahead of time is expensive for NASA. Cloud computing saves NASA a lot of money by allowing them to scale to meet their needs.

He talked a lot about the Mars Rover.

They spin up a bunch of instances to handle bursty traffic, processing data from the Mars Rover, which sends data once a day (I think).

The speaker said that the "missions" are his "customers".

These projects are using cloud computing: Mars Rovers, Deep Space Network, Lunar mapping product, and airborne missions. None of them look like they involve astronauts.

SmugMug is profitable. They have no debt. They're a top 250 website. It's a private company. They had $10M+ in yearly revenues in '07.

They provide unlimited storage for storing photos and unlimited bandwidth for uploading and downloading photos. Their motto is "More, better, faster" (in comparison to other photo sites).

They can handle photos up to 48 megapixels in size.

They can handle 1920x1080p video.

They use a hybrid of AWS and their own datacenter. They have 5 datacenters. They're moving completely to AWS. They have many petabytes of data stored in S3.

Amazon really listens to their customers, trying to solve their needs.

One time, SmugMug's RubberBand project tried to spin up 1920 cores in a single API call. Amazon spun them all up as requested. SmugMug renamed that project SkyNet.

SmugMug lets customers tie their Amazon and SmugMug accounts together so that SmugMug can pass the cost of storing raw photos and videos on to their customers (since they're so expensive to store).

SmugMug is designed for breakage, so there was minimal impact during the "Amazonpocalypse". They made use of multiple availability zones.

EBS is just like a hard disk. If you want local redundancy, use RAID. Want GEO redundancy? Replicate. EBS can/does fail just like HDDs. EBS does solve lots of problems, but use it in the appropriate way.
Customer Panel

Use Akamai for dynamic generation of content, and use CloudFront for more static content.

Dealing with Akamai is a pain in the butt.

The customers on the panel made it through the Amazonpocalypse fairly well. However, they had to architect for it.

The elasticity of Amazon's services is what saves the most money. Customers only pay for resources when they use them. The customers on the panel were pretty good about spinning up and spinning down instances. Also, the customers said that they can't build their own data centers fast enough to meet their own needs.

TellApart uses Spot instances to run Hadoop jobs. That means they only spin up instances when the price is low enough.

Without AWS, a lot of the customers would never have been able to attempt certain projects.

Akamai hasn't broken for SmugMug. HAProxy also hasn't broken for them.

AutoDesk is using Scala.

TellApart uses GAE (Google App Engine) for its own dashboards.

The NASA guy uses CloudFormation to setup templates for their clusters.

Amazon often builds infrastructure that their customers have already built so that all the other new customers don't have to rebuild that same infrastructure.

RDS has very unpredictable latency. None of the panel customers were using it.

Security and Compliance Overview

This talk was given by Matt Tavis, Principal Solutions Architect.

Here's a link to the AWS Security Center.

Security on AWS is based on a shared responsibility model.

AWS is responsible for facilities, physical security, network infrastructure, etc.

The customer is responsible for the OS, application, security groups, firewalls, network configuration, and accounts.

AWS has sophisticated features for controlling who has access to which AWS APIs when you have multiple people at the same company.

AWS favors replication over backup.

I wonder if you can mess around with EC2 or EBS and try to read the raw disk to see if there is stuff left over from a previous user.

By default, there's a mandatory inbound firewall, which defaults to deny.

VPC is a networking feature.

High Availability in the Cloud: Architectural Best Practices

This talk was by Josh Fraser, the VP of business development at RightScale.

RightScale is the world's #1 cloud management system.

RightScale serves 40,000 users and 2.5 million servers.

RightScale is a SaaS.

RightScale is the layer between the app and the cloud provider.

You need to design for failure.

Backup and replication need to be taken seriously.

A cloud is a physical datacenter entity behind an API endpoint.

He says that Amazon has 5 clouds. AWS is a cloud provider.

RightScale has ServerTemplates (which act like recipes). They don't like cloud images (because they're too fixed).

They have an integrated approach that puts together all the parts needed to build a single server or set of servers.

ServerTemplates are a server's DNA.

DR is disaster recovery. HA is high availability.

Implementing HA best practices is always about balancing cost, complexity, and risk.

You need to automate your infrastructure.

Always place at least one of each component (load balancers, app servers, databases) in at least two AZs (availability zones).

You need alerting and monitoring.

Use stateless apps.

It's critical to be able to programmatically switch DNS.

Use HAProxy, Zeus, etc. for load balancing.

Scale up and down conservatively. Don't bring up 1000 instances because of 1 minute's demand.

Snapshot your EBS volumes.

Consider a NoSQL solution.

They use Cassandra in multiple regions, and it works well for them.

EBS snapshots can't cross regions.

There is no one size fits all solution.

The most common approaches they see involve multi-AZ configurations with a solid DR plan.

Amazon.com's Journey to the Cloud

Retail is everything at Amazon that isn't AWS. Retail is a customer of AWS.

This talk summarizes the history of Amazon retail from 1995 to 2011. I think the switch to AWS was in 2011.

They initially ran on a single box. They housed the server in their main corporate offices. There was a "water event" that caused them to move to a datacenter.

In 2001, they started switching from Digital Tru64 to Linux.

In 2004, they had 50 million lines of C++ code.

AWS started in 2006. S3 came first, and then came EC2. AWS actually wasn't built to solve problems for the retail side. It was always built to be general purpose, and the retail side was responsible for figuring out how to use it.

IMDb is an Amazon owned subsidiary. It's very separate from the rest of Amazon.

Amazon has strict runtime latency and scale requirements. If your widget can't meet them, your widget won't get shown.

Retail uses VPC. VPC makes AWS look just like your own datacenter.

November 10, 2010 is when they turned off the last frontend server not using AWS. All traffic for Amazon.com is now served from AWS.

They've had billions of orders over the lifetime of Amazon.com.

Amazon has taken really old orders out of the database and moved them into S3.

They use Oracle.

Be sure to consider compliance issues.

When moving to the cloud, start with simple applications (or simple parts of larger applications).

Iterate toward your desired end-state.

The cloud can't cover up sloppy engineering.


I just wrote my first blog post for the YouTube API blog. It introduces a platform for third-party applications that enable users to create videos. The idea is simple: the third-party application runs in an HTML iframe on YouTube, the user creates a video with the application, and then the application uploads the video to YouTube for the user to watch and share.

To read more, check out the story on the YouTube API blog.

Wednesday, June 29, 2011

Books: Python 3 Web Development Beginner’s Guide

I've been asked by Packt to review a new book called Python 3 Web Development Beginner’s Guide. Here's the overview:
  • Build your own Python web applications from scratch
  • Follow the examples to create a number of different Python-based web applications, including a task list, book database, and wiki application
  • Have the freedom to make your site your own without having to learn another framework
I looked through the table of contents, and I scanned the first 30 pages. It looks like a fun book! In fact, it's exactly the sort of book I was hoping someone would write.

Wednesday, June 22, 2011

Python: Increasing the Timeouts for urlfetch in Google App Engine

Google App Engine provides a function, google.appengine.api.urlfetch.fetch, for fetching URLs. I do believe all the other HTTP client libraries are monkey patched to make use of that function, which is written to take advantage of various Google infrastructure. The fetch function has a default timeout of 5 seconds. You can set a higher timeout by passing a deadline parameter, but the maximum is 10 seconds. Unfortunately, passing a deadline keyword parameter is often difficult if it's a third-party library that is making the call to fetch, for instance if you're using the GData client library.

I looked for a way to set the deadline parameter in a more global way, but I couldn't find one by mere inspection of the code. I came up with the following HACK in order to work around this problem:
# HACK: Monkeypatch google.appengine.api.urlfetch.fetch to increase the
# deadline. This is used by the various client libraries.
def _fetch(*args, **kargs):
    from google.appengine.api.urlfetch import _orig_fetch  # Import late.
    kargs["deadline"] = 10
    return _orig_fetch(*args, **kargs)
_fetch.this_is_the_wrapper = True

# HACK: Because of the way the dev app server works when reloading code,
# things are a little tricky here.
from google.appengine.api import urlfetch
if not hasattr(urlfetch.fetch, "this_is_the_wrapper"):
    urlfetch._orig_fetch = urlfetch.fetch
    urlfetch.fetch = _fetch
assert hasattr(urlfetch, "_orig_fetch")

Thursday, June 09, 2011

Google I/O 2011

I went to Google I/O a few weeks ago. Although I took notes, rather than blogging all my notes, I thought I'd just link to my favorite two talks:

Programming Well with Others: Social Skills for Geeks
Are languages, compilers, debuggers, and algorithms all you need to be a successful software engineer? In a perfect world, those who produce the best code should be the most successful. Unfortunately, we live in a world of imperfect people, and collaborating with others is at least as important as having great technical skills.
HTML5 Showcase for Web Developers: The Wow and the How
We'll share the strengths and extents of HTML5, showing magnificent demos of bleeding-edge features in Google Chrome. Digging into high-fidelity graphics, performance, and system integration, we'll break each demo down on the big screen to show how it was constructed. Then we'll show you how to use Chrome to its full potential in your own projects.

Friday, June 03, 2011

Python: Getting a Fair Flip out of an Unfair Coin

If you have an unfair coin (i.e. one that favors heads or tails), how do you generate a fair flip (i.e. one that doesn't favor heads or tails)? My buddy Hy Carrinski and I came up with the following algorithm:
"""Get a fair flip out of an unfair coin."""

from collections import defaultdict
from random import random


def flip_unfair_coin():
"""Flip an unfair coin. Return a boolean."""
return random() > FLIP_RATIO

def create_fair_flip():
"""Generate a fair flip. Return a boolean."""
while True:
flip = flip_unfair_coin()
if flip != flip_unfair_coin():
return flip

# Demonstrate that it works.

if __name__ == '__main__':
results = defaultdict(int)
for i in xrange(1000000):
results[create_fair_flip()] += 1
percentage = (results[False] / float(results[False] + results[True])) * 100
print "Percentage heads:", percentage

Saturday, May 28, 2011

Personal: I Work at YouTube!

I got a job at Google working as a developer advocate for YouTube APIs. It's nice because it's a bit more of a social position than I usually have, but it still involves some work on open source code. Unfortunately, my desk is in Mountain View, so my commute is three hours a day; I probably won't be able to keep that up indefinitely. Anyway, I'm still very excited :)

I started three weeks ago, and I must admit that Google is about twice as awesome as I was even hoping for :)

Tuesday, May 17, 2011

Personal: Name Dropping

Ken Thompson (initial author of Unix) was awarded the Japan Prize today at Google. I just got to meet and talk to him, Vint Cerf (one of the "fathers of the Internet"), and Rob Pike (legendary Unix hacker, famous author, and co-author of Go). Since I shook each of their hands, I've decided to never wash my hand again ;)

Tuesday, April 19, 2011

Call Me Crazy: Calling Conventions

It's been said that the most significant contribution to computer science was the invention of the subroutine. Every major programming language has its own variation on how subroutines work. For instance, there are procedures, functions, methods, coroutines, etc. When implementing a compiled language or a virtual machine, you must decide how to implement subroutines at the assembly language level. For instance, you must decide how to pass arguments, what goes on the stack, what goes on the heap, what goes in registers, what goes in statically allocated memory, etc. These conventions are called calling conventions. You might not think of calling conventions very often, but they're an interesting topic.

First of all, what does a subroutine provide over a simple goto? A subroutine provides a way to pass arguments, execute a chunk of code, and return to where you were before you called the subroutine. You can implement each of these yourself in assembly, but standardizing calling conventions in a language makes the process more stable, repeatable, and less prone to error.

The way calling conventions work in a language like C is, at least conceptually, something like the following. You push your arguments onto the stack; you push your current location in the code (i.e. the IP register) onto the stack; and then you jump to the subroutine. The subroutine executes its code, puts its return value in a register, and then jumps to the return-to address you put on the stack. The caller is responsible for popping the arguments off the stack (this is necessary to support varargs). In practice, the compiler may optimize the code by putting values into registers rather than on the stack.
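To make that concrete, here's a toy Python model of a caller-cleanup convention. The stack, the registers, and the subroutine are all fakes of my own invention; real calling conventions live at the assembly level, but the bookkeeping is the same:

```python
# A toy model of a caller-cleanup calling convention. The "machine"
# is just a list used as a stack plus a dict of "registers".
stack = []
registers = {}

def call(subroutine, *args):
    # The caller pushes the arguments and then the return-to address
    # (faked here, since Python handles real control flow for us).
    for arg in args:
        stack.append(arg)
    stack.append("return-to address")
    subroutine()
    stack.pop()              # Discard the return-to address.
    for _ in args:
        stack.pop()          # Caller cleanup: pop the arguments.
    return registers["eax"]  # The return value lives in a register.

def add_subroutine():
    # The callee peeks past the return-to address to find its arguments.
    a, b = stack[-3], stack[-2]
    registers["eax"] = a + b

print(call(add_subroutine, 2, 3))  # Prints 5; the stack is empty again.
```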

What's the difference between subroutines, procedures, functions, and methods? I think of subroutines as a low-level, catch-all term. In Pascal as well as some other languages, a procedure is expected to have side effects but not return a value, whereas a function is not supposed to have side effects but is supposed to return a value. Of course, not having side effects is a convention, not something enforced by the compiler. A method is a function call on an object. Typically, the object is implicitly passed as the first argument, although that may not be explicit in the syntax for the language.
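Python makes the implicit first argument easy to see: calling the method through the class forces you to pass the object explicitly. (The class here is my own throwaway example.)

```python
class Greeter(object):
    def greet(self, name):
        return "Hi, %s!" % name

g = Greeter()
print(g.greet("JJ"))           # The usual method-call syntax: Hi, JJ!
print(Greeter.greet(g, "JJ"))  # Same thing, object passed explicitly.
```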

Coroutines are a twist on subroutines. When you call a subroutine, it'll eventually return to the caller. If A calls B, then eventually B will return to A. Coroutines don't have to behave in this manner--i.e. they don't have to return to the caller. A may call B which may call C which may call A in a circular manner, without maintaining a stack of return-to addresses.
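Python's generators give a taste of this: each yield hands control back without unwinding a stack of return-to addresses. Here's a tiny round-robin scheduler of my own devising:

```python
def player(name, count):
    # A coroutine that yields control (and a value) several times.
    for i in range(count):
        yield "%s %d" % (name, i)

def round_robin(*coroutines):
    # A tiny scheduler: keep handing control to each coroutine in
    # turn until they're all exhausted.
    queue = list(coroutines)
    results = []
    while queue:
        coroutine = queue.pop(0)
        try:
            results.append(next(coroutine))
            queue.append(coroutine)
        except StopIteration:
            pass
    return results

print(round_robin(player("ping", 2), player("pong", 2)))
# ['ping 0', 'pong 0', 'ping 1', 'pong 1']
```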

Recursion is a technique in which a function can call itself. Some languages like Lisp provide recursion as a low-level operation, and all looping constructs are built on top of recursion. There are some algorithms that are more elegantly implemented using recursion than looping such as the Ackermann function.
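Here's the Ackermann function in Python; it's famously total but not primitive recursive, so it can't be written with simple bounded loops:

```python
def ackermann(m, n):
    # Ackermann's function grows explosively -- even ackermann(4, 2)
    # has 19,729 digits -- so stick to tiny inputs.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```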

Recursion implies that you have some sort of stack (whether or not you use a C-level stack). That's because if a function takes one argument and recurses five times, you need places for five arguments. Languages like COBOL put arguments in statically allocated memory instead of on a stack. Hence, COBOL lacks recursion.

Tail call optimization is an interesting technique that reduces the use of the stack. If function A calls function B, and then B's last step is to call function C, rather than having a return-to address for A and a return-to address for B, you can just have C return directly to A since there's nothing else in function B that needs to be done.

This works really well when a function calls itself as its last step. If F calls itself as its last step, rather than building up a large stack of return-to addresses for F, you can optimize the calls away. In this way, what would normally be a recursive algorithm can be translated into simple iteration. This is a critical technique in languages like Lisp that rely on recursion for looping; however, it's not present in languages like Python. (Guido van Rossum has argued that it's harder to debug if parts of the stack have been optimized away. It's possible to end up in a function with no way of knowing how you got there.)
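One workaround in Python is a trampoline: instead of making the tail call, the function returns a thunk, and a driver loop calls thunks until it gets a real value. A sketch (the names are mine):

```python
def trampoline(thunk):
    # Keep calling thunks until we get a non-callable value. Each
    # "tail call" returns to this loop instead of growing the stack.
    while callable(thunk):
        thunk = thunk()
    return thunk

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)  # A thunk standing in for a tail call.

# Far deeper than Python's default recursion limit (usually 1000).
print(trampoline(countdown(100000)))  # done
```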

Stacklessness is a technique whereby the language's stack does not rely on an assembly- or C-level stack, but rather lives on the heap. Because C stacks require contiguous space, it's difficult to allocate space for thousands of them at the same time. In a stackless interpreter, because stack frames are allocated dynamically from the heap, you can allocate a far greater number of stacks. This technique is used by stackless interpreters such as Lua as well as the greenlet library in Python in order to implement something like lightweight multithreading.

Some languages provide more flexibility than others in the way arguments are passed. C has the ability to pass a fixed number of arguments, but there's also a facility to pass an arbitrary number of arguments called varargs. Some languages like Perl and Ruby allow you to pass additional keyword arguments in the form of a hash to the function (e.g. "f(1, 2, :arg => 1)"). Some languages like Common Lisp and Python have advanced support for keyword arguments, allowing you to call a function that looks like "def f(a, b, c=1, d=2, e=3)" as "f('a', 'b', e=3, d=3)". Notice that you can leave out keyword arguments or rearrange their order. Python allows you to pass a variable number of positional arguments as well as a variable number of keyword arguments, and it will even raise an exception if you pass a keyword argument that isn't expected. Common Lisp was one of the first languages with extremely sophisticated parameter passing, including sophisticated support for keyword arguments.
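Here's all of that in Python; note the similarity to the signature above, plus *args and **kwargs for the variable positional and keyword arguments:

```python
def f(a, b, c=1, d=2, e=3, *args, **kwargs):
    return (a, b, c, d, e, args, kwargs)

# Keyword arguments can be left out or reordered freely.
print(f('a', 'b', e=5, d=4))     # ('a', 'b', 1, 4, 5, (), {})

# Extra positional and keyword arguments get scooped up too.
print(f(1, 2, 3, 4, 5, 6, x=7))  # (1, 2, 3, 4, 5, (6,), {'x': 7})

# Unexpected keyword arguments raise an exception.
try:
    f('a', 'b', bogus=1)
except TypeError:
    print("unexpected keyword argument")
```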

Currying is a technique whereby if a function accepts two parameters, A and B, then what you really have is a function that takes an argument A and then returns another function that takes an argument B. In this way, all functions can be reduced to a list of functions that accept 0 or 1 arguments. When a language supports currying, you can pass a value for A and then later pass a value for B. Haskell supports currying.
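Python doesn't curry for you the way Haskell does, but you can write the curried form by hand. A sketch (curried_add is my own name):

```python
def curried_add(a):
    # A two-argument function rephrased as a chain of one-argument
    # functions, which is how Haskell treats every function.
    def add_b(b):
        return a + b
    return add_b

add_five = curried_add(5)  # Supply A now...
print(add_five(3))         # ...and B later. Prints 8.
print(curried_add(1)(2))   # Or supply both at once. Prints 3.
```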

Partial application is a related technique that lets you pass an arbitrary number of arguments, perhaps in an arbitrary order if keyword arguments are supported, and then actually call the function later. If you have a function that takes one parameter, then you can partially apply it to one argument, and you are left with a function that takes no parameters that you can call at a later time. Partial application is more general than currying. It's supported in Python. However, in languages like Python, Haskell, Lisp, etc., it's always possible to create a new function that takes no parameters and simply executes another function, passing a particular value along. Hence, it's trivial to implement partial application manually.
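In Python, this is spelled functools.partial:

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# Fix the exponent now; supply the base later.
square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(4))  # 16
print(cube(2))    # 8
```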

Lazy evaluation is a technique whereby the arguments to a function are not evaluated until they are used within the function. If you call f(1/0) in Python, you will always get a ZeroDivisionError. However, in languages such as Haskell, if you call f (1/0), as long as f never evaluates its argument, you won't get an error. Languages that do not have lazy evaluation are said to be "strict". Hence, Haskell is not strict.
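You can fake laziness in a strict language by wrapping each argument in a thunk (a zero-argument function) and only calling it on demand. A sketch in Python (the function names are mine):

```python
def ignores_arg(thunk):
    # This function never forces its argument, so the division by
    # zero below never actually happens.
    return "never evaluated"

def forces_arg(thunk):
    # Forcing the thunk evaluates 1 / 0 and raises ZeroDivisionError.
    return thunk()

print(ignores_arg(lambda: 1 / 0))  # never evaluated

try:
    forces_arg(lambda: 1 / 0)
except ZeroDivisionError:
    print("strict after all")
```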

It's error-prone for a language to be non-strict (i.e. to support lazy evaluation) while also supporting side effects. For instance, consider the expression f(print(1), print(2)) in a language that wasn't strict but permitted side effects. Depending on whether or not f evaluated its parameters, 1 and 2 might or might not be printed, and you couldn't be sure what order they would be printed in. Hence, functions in Haskell are both lazy and side-effect free.

Inlining is an optimization technique that helps avoid the overhead of a function call. If a function A calls a very small function B, an optimizing compiler might take a copy of the compiled code for B and put it "inline" in the compiled code for A. This reduces the function call overhead but results in larger binary sizes.

In some situations, compilers can aggressively inline all functions. Alternatively, the compiler may be able to put all the code for all the functions side by side and simply jump between the different parts of the code without using the stack. Some programmers use this technique manually in order to write small but highly efficient programs. In general, the way you think calling conventions work in a language may be a simplified, idealized version of how they actually work. In the case of inlining, the function call completely disappears.

Continuation passing style is a technique whereby all function calls can be transformed in a way that makes tail call optimization possible. Consider the following function:
def sum(n):
    if n == 0:
        return 0
    return n + sum(n - 1)
This function cannot natively take advantage of tail call optimization because after it recurses, the return value must be added to n. It's easy to change this to a function that takes advantage of tail call optimization manually:
def sum(n, _accum=0):
    if n == 0:
        return _accum
    return sum(n - 1, _accum + n)
In this case, an accumulator is passed explicitly. (Remember, however, that even though I'm using Python syntax, Python does not support tail call optimization.)

Rather than passing an accumulator that contains a value, you can pass an accumulator that contains what other code needs to execute after the recursive call. Here is an example of that:
def sum(n, _callback=lambda x: x):
    if n == 0:
        return _callback(0)
    return sum(n - 1, lambda x: _callback(x + n))
By the way, if you find that code difficult to understand, don't feel bad! It took me more than half an hour to write it, despite the fact that I had seen the technique before! That goes to show you that switching to continuation passing style isn't easy. However, it can be accomplished via a mechanical (i.e. "just follow the steps") process, and there are languages that can do it for you automatically. Scala has a compiler flag that turns on continuation passing style so that the JVM behaves differently (conceptually it uses a lot of anonymous functions that exist on the heap rather than return-to addresses that exist on the stack).

Returning to C, you might think that the calling conventions of C were decided upon years ago and that the only variations would have to do with compiler optimizations. Surprisingly, that is not the case. For instance, AMD helped define the new calling conventions for System V on AMD64. In fact, the calling conventions are neither C- nor AMD64-specific; they're entirely defined by the operating system's application binary interface (ABI) standard. Hence, the calling conventions for AMD64 systems are slightly different under Windows (specifically in which registers are used to pass arguments). See here and here for more information.

So if you're thinking about implementing a new language, and you wonder what's in a subroutine, you make the call!

Perl: Text-based Address Books

Over the years, I've stored my addresses in a variety of formats. I used to have a Palm Pilot. These days, I have an Android phone, and Google keeps track of my addresses. However, I've always had a second copy of my addresses in a text file. The format looks something like this:
A & R Stock to Performance:
    Address: 2849 Willow Pass Rd. #A, Concord, CA 94519
    Work Phone: (925) 689-1846
There are many advantages to using a text-based format. For instance, since I'm an expert Vim user, I can search and edit the file extremely quickly. Best of all, the format works on every operating system and never goes obsolete. The only downside is that I have to input each address twice: once for Google and once for my own notes.

Long ago, I wrote a Perl script to convert my notes file into a Palm Pilot database. Even though I don't have a Palm Pilot anymore, I keep the script around. I can easily alter it to output the addresses in different formats.

If you like the format I used above, here's the source code so that you can hack it to do whatever you want:
#!/usr/bin/perl -w

# Author: Shannon -jj Behrens
# Email:
# This program converts the addresses stored in my notes file into a Palm
# database file (a pdb), which I can then import into my Handspring Visor.
# See usage for more details.
# Global variables:
# $_0 - This is $0, but without the path.
# $pdb:: - This is a handle to the Palm address database being edited.

# %phone_mappings - Mappings from things such as "Cell" to integers. These
# values were deduced from the @Palm::Address::phoneLabels array as well
# as usage in my notes.txt file.
%phone_mappings:: = (
    Work  => 0,
    Home  => 1,
    Fax   => 2,
    Other => 3,
    Email => 4,
    Cell  => 7
);

# $MAX_PHONES - The maximum amount of "Phone" type fields.
$MAX_PHONES:: = 8;

use strict;
use Palm::PDB;
use Palm::StdAppInfo;
use Palm::Address;

# Output usage information to the user and exit with value 64 (see man
# sysexits on a FreeBSD machine).
# By the way, here's how I use this program:
# mkdir palm
# pilot-xfer -b palm
# ./notes_to_palm ~/notes.txt palm/AddressDB.pdb
# pilot-xfer -r palm
# rm -r palm
sub usage {
    print "usage: $_0:: notes.txt AddressDB.pdb\n";
    exit 64;
}

# There's an ugly "bug" that causes Perl 5.6 to complain about unnamed
# categories. Hence, I have to do a little bit of initializing or else I get a
# whole bunch of uninitialized warnings.
sub init_category_names {
    for (my $i = 0; $i < Palm::StdAppInfo::numCategories; $i++) {
        if (!defined($pdb::->{appinfo}{categories}[$i]{name})) {
            $pdb::->{appinfo}{categories}[$i]{name} = "";
            $pdb::->{appinfo}{dirtyFields} = 1;
        }
    }
}

# Return a new record. I have to do a little bit of initializing or else I get
# a whole bunch of uninitialized warnings. I wonder why the libraries don't do
# this automatically since they initialize everything to undef anyway. Also,
# I'll insert a phoneIndex variable into the $record so that I can use the
# phone slots on a first come first serve basis.
sub new_record {
    my $record = $pdb::->new_Record;
    foreach my $field (qw( name firstName company phone1 phone2 phone3
                           phone4 phone5 phone6 phone7 phone8 address
                           city state zipCode country title custom1
                           custom2 custom3 custom4 note )) {
        $record->{fields}{$field} = "";
    }
    $record->{phoneLabel}{reserved} = 0;
    $record->{fields}{phoneIndex} = 1;
    return $record;
}

# Process a name line, such as "Shannon -jj Behrens (author)".
sub handle_name {
    my ($fullname) = @_;

    # Check for note.
    if ($fullname =~ /^(.*)\((.*)\)/) {
        $fullname = $1;
        $record::->{fields}{note} = $2;
    }

    # Assume last word is last name, everything else is first name.
    my @name = split / /, $fullname;
    $record::->{fields}{name} = pop @name;
    $record::->{fields}{firstName} = join " ", @name;
}

# Process a phone type (e.g. Cell) and a value (e.g. "(925) 209-6439"). Please
# read p5-palm's module to see the interesting way Palm phone
# numbers work. When possible, use the email address as the primary "Phone"
# field.
sub handle_phonelike_field {
    my ($type, $value) = @_;

    my $phone_index = $record::->{fields}{phoneIndex};
    return if ($phone_index > $MAX_PHONES::);
    my $phone_field = "phone$phone_index";
    $record::->{fields}{$phone_field} = $value;
    $record::->{phoneLabel}{$phone_field} = $phone_mappings::{$type};
    $record::->{phoneLabel}{display} = $phone_index - 1
        if ($type eq "Email");
    $record::->{fields}{phoneIndex} = $phone_index + 1;
}

# Process an address line, such as "125 Gilger Ave., Martinez, CA 94553".
sub handle_address {
    my ($fulladdress) = @_;

    my @address = split /, /, $fulladdress;
    $record::->{fields}{address} = $address[0];
    $record::->{fields}{city} = $address[1];
    my @state_zip = split / /, $address[2];
    $record::->{fields}{state} = $state_zip[0];
    $record::->{fields}{zipCode} = $state_zip[1];
}

# Handle one line, $_, from the notes.txt file. Remember, there must be at
# least one blank line after every address record in the notes.txt file in
# order to flush it to disk.
# Creates globals: $section::, $record::.
sub handle_line {
    # Ignore anything not in the Contacts section.
    if ($_ =~ /-{5,} ([^\-]+) -{5,}/) {
        $section:: = $1;
        return;
    }
    return if (!defined($section::) or
               $section:: ne "Contacts");

    # A blank line is a signal to flush the current record, if any. Otherwise,
    # it can just be ignored.
    if ($_ =~ /^[ \t]*$/) {
        if (defined($record::)) {
            $pdb::->append_Record($record::);
            $record:: = undef;
        }
        return;
    }

    # If the line doesn't start with a space, this is a new record's name.
    # Start a new record, and then handle the name.
    if ($_ =~ /^([^ ].*):/) {
        my $fullname = $1;
        $record:: = new_record;
        handle_name $fullname;
        return;
    }

    # Check for phone numbers and email addresses.
    foreach my $type (keys %phone_mappings::) {
        if ((($type eq "Email") and ($_ =~ / Email: (.*)/)) or
            (($type ne "Email") and ($_ =~ / $type Phone: (.*)/))) {
            handle_phonelike_field $type, $1;
            return;
        }
    }

    # Check for address.
    if ($_ =~ / Address: (.*)/) {
        handle_address $1;
        return;
    }

    # Check for company.
    if ($_ =~ / Company: (.*)/) {
        $record::->{fields}{company} = $1;
        return;
    }

    # Anything else can be appended to the additional notes section.
    $record::->{fields}{note} .= $_;
    print STDERR "Additional notes: $_";
}

my @pieces = split /\//, $0;
$_0:: = $pieces[$#pieces];
usage if (scalar(@ARGV) != 2);
my $notes = shift;
my $addresses = shift;
open(NOTES, $notes) || die "$_0::: $notes: $!\n";
$pdb:: = new Palm::Address;
init_category_names;
handle_line while (<NOTES>);
close NOTES;
$pdb::->Write($addresses);

Of Neuroscience and Jazz

I am neither a neuroscientist nor a very accomplished musician, but I'd like to talk about the intersection of neuroscience and music. I have a theory that there is a neuroscience basis for why it takes a mature musical palate to enjoy jazz.

First, let me say a little something about neuroscience (based on the limited understanding I've gained by watching a bunch of talks). One of the things your brain is particularly good at is recognizing patterns and predicting patterns. At the lowest level, if two nerves are close to each other, and they both fire, it's counted as a pattern--i.e. those two things are connected. Similarly, if a nerve fires and then a short while later it fires again, that's a pattern as well. Hence, if both of my fingers feel something, there's a pattern, or if I feel a tapping on a single finger, that's a pattern as well.

However, the brain is not limited to low level patterns. Rather, it can respond to a hierarchy of patterns. Paraphrasing Pavlov, if a dog hears a can opener and then smells food and then gets fed, we all know that the dog will soon recognize the pattern and start salivating as soon as he merely hears the can opener. My point is that a sophisticated pattern is composed of lower level patterns recursively, all the way down to the level of individual neurons firing.

Next, let me talk about sine waves and chords. Let's start with a single note. A single note might look something like y = sin(x). This wave is very simple and somewhat boring.

(By the way, I'm using WolframAlpha to create these graphs. WolframAlpha is really interesting. It's a computational knowledge engine, and graphing things is just one of a ton of things it can do.)

Now, let's look at a chord consisting of a single note as well as the note an octave above it. Here's y = sin(x) + sin(2 * x). This curve is somewhat more sophisticated. However, you can still recognize the pattern by the time x = 10.

Now, let's look at what I think is a fifth, which is still a nice-sounding chord. Here's y = sin(x) + sin(1.5 * x). This pattern is slightly more sophisticated, but it's still pleasingly recognizable.

Now, let me show you something that is not pleasing to the ear. This curve represents what might happen if you hit two keys that are right next to each other on the keyboard. Here's y = sin(x) + sin(1.1 * x). This curve pulses in an ugly way. You have to get all the way out to x = 120 or so to even see the pattern. In a manner of speaking, the pattern is only there by "brute force".

So what's my point? Simple patterns are easy to recognize, and they can be recognized in less time (i.e. for smaller variations of x, which I probably should have called t).

Here's something that I think is more of a jazz chord: y = sin(x) + sin(1.25 * x) + sin(1.5 * x) + sin(2 * x). This curve is really interesting. You can recognize what's going on by the time you get to x = 50, but the "texture" of the curve is a lot more interesting. Whenever I play this sort of chord, it sounds deep, rich, and interesting. If the first chord reminds me of "Mary Had a Little Lamb", this last chord reminds me of the forbidden love between Lancelot and Guinevere and the pain it caused King Arthur.
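If you want to play with these curves yourself, the sampling is easy to do in Python (WolframAlpha or a plotting library can do the graphing; the helper below is mine). In just intonation, a ratio of 1.25 is a major third, 1.5 is a perfect fifth, and 2 is an octave:

```python
import math

def jazz_chord(x):
    # The last chord above: root + major third + perfect fifth + octave.
    return (math.sin(x) + math.sin(1.25 * x) +
            math.sin(1.5 * x) + math.sin(2 * x))

# Sample the waveform from x = 0 to x = 50.
samples = [jazz_chord(x / 10.0) for x in range(500)]
print(round(samples[0], 3))  # 0.0 -- all four sines start in phase.
print(round(max(samples), 3))
```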

So what's my point? You have to go higher up the pattern recognition pyramid in order to recognize more sophisticated patterns. A more sophisticated musical palate is able to recognize patterns that a simpler musical palate may not recognize. That's why jazz requires a sophisticated palate--it uses chords that require more effort to recognize.

Of course, there are many dimensions to music, and individual chords are just one dimension. There are also chord progressions and beats. The same sort of thing applies to these other dimensions.

Here's the circle of fifths (thanks Wikipedia!). Pick any three chords in a row on the circle of fifths, such as C, G, and D, and you have the basis for a simple, feel-good song. If you just bounce around between the different chords on a guitar using different strumming patterns, you'll immediately recognize a song you already know or create a new song that sounds pleasing. Throw in a few 7th chords and a few minor chords, and the song starts taking on additional richness because the pattern becomes less simple.

If you try to use more than three chords from the circle of fifths, your brain might be left thinking, "Wait a minute. What key am I supposed to be in?" It may be more difficult to recognize the pattern. However, this is exactly the sort of thing that happens in jazz.

The same thing applies to beats. Here's a simple waltz: "Um pa pa. Um pa pa." Here's a typical rap beat: "Bum chbum bum chbumbum ch bum bum ch." More sophisticated beats require more sophisticated pattern recognition.

Every song is composed of some pattern repetition and some pattern violation. Simple pop songs that appeal to youthful palates involve a lot of pattern repetition and less pattern violation. Sophisticated musical palates enjoy songs with less pattern repetition and more pattern violation. Furthermore, sophisticated musical palates get bored of songs more quickly. They recognize the patterns quickly, and they're ready to move on.

Furthermore, popular songs are played more often on the radio so that your brain has a better chance to become more familiar with the patterns. A listener will usually enjoy the song more after listening to it 10 times than if he's only listened to it once. If a song has so much pattern repetition that you enjoy it the first time, you'll often grow bored of it very quickly. Conversely, it's very difficult to enjoy sophisticated classical pieces the first time you hear them. All the popular classical pieces have been driven into our heads since we were kids.

Why is this? I think this can be explained by neuroscience as well. The brain works hard to find patterns, and when it does, the simple recognition of a pattern is somewhat calming. It says something like, "Hmm, I've seen that pattern before. Coooooool." However, if a pattern is repeated too often, the brain starts to filter it out; it becomes boring. It says, "Nothing new here. Pay attention to something else." Pattern violations catch your attention. Your brain says, "Hey, wait a second. I didn't expect that! Pay attention, because something new is happening!" Hence, composers must always straddle the line between calming pattern repetitions and exciting pattern violations. How far you go in either direction dictates how sophisticated a musical palate is required to enjoy the song, and thus, who will enjoy it.

Ok, so that's my theory of neuroscience and music. I don't have any MRIs to back it up, nor do I have the educational background to claim real expertise on the subject. Nonetheless, I think there's some truth to it.