Google wrote a white paper called MapReduce: Simplified Data Processing on Large Clusters. It's a simple way to write software that works on a cluster of computers. Google also wrote a white paper on The Google File System.
Hadoop is a framework for running applications on large clusters of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named map/reduce, where the application is divided into many small fragments of work, each of which may be executed or reexecuted on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework.

Put simply, Hadoop is an open-source implementation of Google's map/reduce and distributed file system, written in Java.
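If that description sounds abstract, here's a toy, single-process sketch of the map/shuffle/reduce flow using word counting (the same example the Google paper walks through). This isn't Hadoop code, just the shape of the computation; Hadoop's contribution is running the map and reduce steps on many machines at once and handling the grouping, scheduling, and failures for you.

def mapper(record):
    # Emit (key, value) pairs; for word counting, (word, 1).
    for word in record.split():
        yield word, 1

def reducer(key, values):
    # Collapse all the values that share a key.
    return key, sum(values)

records = ["the quick brown fox", "the lazy dog", "the fox"]

# Map phase: each record is independent, so this loop is what gets spread
# across machines in the real thing.
pairs = []
for record in records:
    pairs.extend(mapper(record))

# Shuffle: group values by key. The framework does this between the phases.
grouped = {}
for key, value in pairs:
    grouped.setdefault(key, []).append(value)

# Reduce phase: each key's group can also be processed independently.
for key in sorted(grouped):
    print "%s\t%s" % reducer(key, grouped[key])

That's the pattern Hadoop packages up and spreads across a cluster.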
I needed something like that, so I decided to give it a whirl. I prefer to code in Python, so it's fortunate that Hadoop can "shell out" to Python on each of the remote systems. Shelling out once per system has negligible overhead, so that's fine.
You'll need to read the MapReduce paper to fully understand map/reduce, but let's get to some code. First, here's my input. It's a file:
1
2
3
...
999
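A file like that takes seconds to whip up: any run of integers, one per line, will do. For illustration, a throwaway snippet along these lines generates one (the range here is arbitrary):

# Throwaway helper, for illustration only: write one integer per line to input.txt.
f = open("input.txt", "w")
for i in range(1000):
    f.write("%d\n" % i)
f.close()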
Now, here's my mapper:

#!/usr/bin/env python
"""Figure out whether each number is even or odd."""
import sys
for line in sys.stdin:
    num, _ignored = line[:-1].split("\t")
    is_odd = int(num) % 2
    print "%s\t%s" % (is_odd, num)
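That's the whole contract between the two halves of the job: each line the mapper writes is the key (is_odd), a tab, and the number. Here's a quick standalone illustration of that intermediate format (not part of the job itself):

# Illustration only: the same even/odd tagging the mapper performs, applied
# to a few sample values. Each printed line is "is_odd<TAB>number".
for num in ["3", "4", "7", "10"]:
    is_odd = int(num) % 2
    print "%s\t%s" % (is_odd, num)

# Prints:
#   1	3
#   0	4
#   1	7
#   0	10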
Here's my reducer:

#!/usr/bin/env python
"""Count and sum the even and odd numbers."""
import sys
counts = {0: 0, 1: 0}
sums = counts.copy()
for line in sys.stdin:
    is_odd, num = map(int, line[:-1].split("\t"))
    counts[is_odd] += 1
    sums[is_odd] += num
for i in range(2):
    name = {0: "even", 1: "odd"}[i]
    print "%s\tcount:%s sum:%s" % (name, counts[i], sums[i])
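Since both scripts just read stdin and write stdout, you can dry-run the whole job on a single machine before touching a cluster. Here's a rough sketch of that (scaffolding for illustration only; say, a local_test.py saved next to the other two scripts): it fabricates tab-delimited input lines of the shape the mapper expects, uses a plain sort in place of Hadoop's shuffle, and assumes mapper.py and reducer.py are in the current directory.

#!/usr/bin/env python

"""Rough local dry run: mapper | sort | reducer, with no Hadoop involved."""

import subprocess

# Fabricate tab-delimited stdin lines of the shape the mapper expects
# (the number in the first field); the range is arbitrary.
fake_input = "".join("%d\t\n" % i for i in range(1000))

# Run the mapper over the fake input.
mapper = subprocess.Popen(["python", "mapper.py"],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
mapped, _ = mapper.communicate(fake_input)

# A plain sort stands in for Hadoop's shuffle/sort between the phases.
sorter = subprocess.Popen(["sort"],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
shuffled, _ = sorter.communicate(mapped)

# Run the reducer; its summary goes straight to the terminal.
reducer = subprocess.Popen(["python", "reducer.py"], stdin=subprocess.PIPE)
reducer.communicate(shuffled)

Run from the directory holding the two scripts, it prints an even/odd summary in the same format as the job's real output.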
Once Hadoop is installed, executing this job is done at the shell via:

hadoop jar /usr/local/hadoop-install/hadoop/build/hadoop-streaming.jar \
    -mapper mapper.py -reducer reducer.py -input input.txt -output out-dir

This resulted in a single file:

even count:500 sum:249500
odd count:500 sum:250000

This was the first time I had ever written software for a cluster, and all in all, it was pretty easy. Too bad I didn't actually have a couple thousand machines to run this on ;)
(By the way, during installation, I ran into a couple issues which I was able to work around easily. I won't bother repeating them here. You can find my workarounds on the mailing list. You may need to wait for the archive to be updated since I just posted them earlier today.)
Comments
You can rent a cluster by the hour from Amazon.
http://wiki.apache.org/lucene-hadoop/AmazonEC2