
Clustering: Hadoop

Google published a paper called MapReduce: Simplified Data Processing on Large Clusters. It describes a simple way to write software that runs on a cluster of computers. Google also published a paper on the Google File System, the distributed storage layer that MapReduce builds on.
Hadoop is a framework for running applications on large clusters of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named map/reduce, where the application is divided into many small fragments of work, each of which may be executed or reexecuted on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework.
Put simply, Hadoop is an open-source implementation of Google's map/reduce and distributed file system written in Java.
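
To make the model concrete before bringing Hadoop into it, here's a tiny, purely local sketch of the map/shuffle/reduce flow in plain Python (my own illustration; no Hadoop, no cluster, and the function names are made up):

def map_fn(record):
    # The map step emits (key, value) pairs; here the key is the number's parity.
    yield (int(record) % 2, int(record))

def reduce_fn(key, values):
    # The reduce step sees every value that shares a key and aggregates them.
    return (key, len(values), sum(values))

def run(records):
    # The "shuffle": group every emitted value by its key.
    groups = {}
    for record in records:
        for key, value in map_fn(record):
            groups.setdefault(key, []).append(value)
    # Each group can be reduced independently; that's what a cluster parallelizes.
    return [reduce_fn(key, values) for key, values in sorted(groups.items())]

print(run(["1", "2", "3", "4", "5"]))
# prints [(0, 2, 6), (1, 3, 9)]: evens have count 2 and sum 6, odds have count 3 and sum 9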

I needed something like that, so I decided to give it a whirl. I prefer to code in Python, so it's fortunate that Hadoop streaming can "shell out" to a Python script on each of the remote machines. It only shells out once per task, so the overhead is negligible.

You'll need to read the MapReduce paper to fully understand map/reduce, but let's look at some code. First, here's my input; it's just a file of numbers:
1
2
3
...
999
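For what it's worth, a file like that is trivial to generate. Something like this throwaway script (the file name and exact range here are mine, not from the original setup) produces one integer per line:

#!/usr/bin/env python

"""Generate a test input file: one integer per line."""

for i in range(1000):  # adjust the range to taste
    print(i)

Run it as python gen_input.py > input.txt.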
Now, here's my mapper:
#!/usr/bin/env python

"""Figure out whether each number is even or odd."""

import sys


for line in sys.stdin:
    # Hadoop streaming may hand the mapper "number<TAB>extra"; keep just the number.
    num = line.strip().split("\t")[0]
    is_odd = int(num) % 2
    print "%s\t%s" % (is_odd, num)
Here's my reducer:
#!/usr/bin/env python

"""Count and sum the even and odd numbers."""

import sys


counts = {0: 0, 1: 0}
sums = counts.copy()
for line in sys.stdin:
    is_odd, num = map(int, line[:-1].split("\t"))
    counts[is_odd] += 1
    sums[is_odd] += num
for i in range(2):
    name = {0: "even", 1: "odd"}[i]
    print "%s\tcount:%s sum:%s" % (name, counts[i], sums[i])
This resulted in a single file:
even count:500 sum:249500
odd count:500 sum:250000
Once Hadoop is installed, executing this job is done at the shell via:
hadoop jar /usr/local/hadoop-install/hadoop/build/hadoop-streaming.jar \
    -mapper mapper.py -reducer reducer.py -input input.txt -output out-dir
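Before running it on a real cluster, the two scripts can also be sanity-checked locally, since each one just reads stdin and writes stdout. Here's a rough harness (my own sketch; it assumes mapper.py, reducer.py, and input.txt sit in the current directory and that plain "python" on your PATH can run them):

#!/usr/bin/env python

"""Pipe input.txt through the mapper, a sort, and the reducer, with no Hadoop involved."""

import subprocess

infile = open("input.txt")
mapper = subprocess.Popen(["python", "mapper.py"], stdin=infile,
                          stdout=subprocess.PIPE)
# sort stands in for Hadoop's shuffle/sort between the map and reduce phases.
sorter = subprocess.Popen(["sort"], stdin=mapper.stdout,
                          stdout=subprocess.PIPE)
reducer = subprocess.Popen(["python", "reducer.py"], stdin=sorter.stdout)
reducer.wait()

Strictly speaking, this particular reducer doesn't need the sort, since it aggregates into a dict, but keeping it makes the harness mirror what Hadoop actually does.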
This was the first time I had ever written software for a cluster, and all in all, it was pretty easy. Too bad I didn't actually have a couple thousand machines to run this on ;)

(By the way, during installation I ran into a couple of issues, which I was able to work around easily. I won't repeat them here; you can find my workarounds on the mailing list. You may need to wait for the archive to be updated, since I only posted them earlier today.)

Comments

jjinux said…
I was pleased overall with Hadoop. My biggest comment / complaint was that it's built for massive data crunching, whereas I need something for lightning quick responses. They're really different use cases. For instance, I think I need to have the available slave instances already connected on the other end of a TCP/IP socket, with the code and data already loaded and ready to go. Hadoop makes more sense as a backend for a spider--which is what it was designed for ;)
Doug Cutting said…
Too bad I didn't actually have a couple thousand machines to run this on ;)

You can rent a cluster by the hour from Amazon.

http://wiki.apache.org/lucene-hadoop/AmazonEC2
