
Software Engineering: Facebook Thrift

I read Facebook's Thrift Whitepaper. Thrift is, basically, Facebook's open source replacement for CORBA. In short, it looks fine.

The whitepaper mentions CORBA only very briefly: "Relatively comprehensive, debatably overdesigned and heavyweight. Comparably cumbersome software installation."

Thrift feels very C++-ish. For instance, they use Boost, and the Thrift IDL compiler is written in C++ with Lex and Yacc. They also implemented their own C++ thread primitives, including Mutex, Condition, and Monitor. The whitepaper also discusses their threading and memory management issues in C++.

They have an interesting scheme for versioning APIs such as struct fields and function parameters. They put numbers on the fields so that if a client and server aren't currently running the same version of the IDL, things can continue working. This is important for rolling upgrades. Here's an example:
struct Example {
  1:i32 number=10,
  2:i64 bigNumber,
  3:double decimals,
  4:string name="thrifty"
}

That was, perhaps, the most surprising thing to me. I was having BASIC flashbacks. I assume that if you need to delete a field, you just delete the whole line and leave a gap in the numbers.

However, an interesting case arises. What happens if you delete field 4 in the above example, and someone else unknowingly adds a new field, reusing the same id? It's probably important to leave a comment in the code saying which is the highest id that has been used so far.
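Here's a self-contained sketch of how that bites, again using a hypothetical tag-value format rather than Thrift's real protocol: an old message written when id 4 meant `string name` gets misread by a new schema that reused id 4 for an integer field.

```python
# Hypothetical tag-value wire format: fields travel as (field_id, value)
# pairs, and readers interpret values purely by field id.

# Old schema: field 4 was `string name`. A message from an old client:
old_message = [(1, 10), (4, "thrifty")]

# Field 4 was deleted, then someone unknowingly reused id 4 for an i32.
new_schema_types = {1: int, 4: int}

# The new reader trusts the id, so the old string payload collides
# with the expected integer type.
mismatches = [
    field_id
    for field_id, value in old_message
    if not isinstance(value, new_schema_types[field_id])
]
print(mismatches)  # [4] -- the reused id is silently incompatible
```

Nothing in the wire format itself flags the reuse; the only defense is never handing out the same id twice.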

I don't think I would have used numbers. Instead, I would have used a "deleted" keyword. Of course, numbers could still be used in the implementation, behind the scenes:
struct Example {
  i32 number=10,
  deleted i64 bigNumber,
  double decimals,
  deleted string name="thrifty"
}
Perhaps they thought of this too, but there's some reason they dismissed it. For instance, it is sort of ugly to have those "deleted" lines, and you can only add new fields at the end. Heh, I guess this is one time where numbers actually enhance readability ;)
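If an IDL compiler took the "deleted" route, it could still assign stable numeric ids behind the scenes by numbering every declaration in order, deleted or not. A hypothetical sketch of that assignment pass:

```python
# Hypothetical compiler pass: derive stable numeric ids from declaration
# order, counting `deleted` placeholders so live fields keep their numbers.

fields = [
    ("i32 number", False),     # (declaration, is_deleted)
    ("i64 bigNumber", True),
    ("double decimals", False),
    ("string name", True),
]

assigned = {}
for field_id, (decl, is_deleted) in enumerate(fields, start=1):
    if not is_deleted:
        assigned[decl] = field_id

print(assigned)  # {'i32 number': 1, 'double decimals': 3}
```

The deleted lines act as tombstones that keep later fields from sliding down onto a reused id.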

Anyway, I didn't actually try it out, but I'm sure it's fine.


Leon Atkinson said…
It's cool that Facebook is throwing off a bunch of open source. Did you see Facebook's Cassandra? It's something like BigTable.
Take a peek at Google's Protocol Buffers to see what Facebook based their design off of.

From what I've read Facebook was hiring Google engineers with Protocol Buffers experience when they were building Thrift.
jjinux said…
> It's cool that Facebook is throwing off a bunch of open source. Did you see Facebook's Cassandra? It's something like BigTable.


> Take a peek at Google's Protocol Buffers to see what Facebook based their design off of.

Oh, interesting!
Anonymous said…
jeez, not sure where this meme started, but thrift was based on pillar, a cross-language rpc framework from our former cto, written in ml. for all we know, google based protocol buffers on pillar.
jjinux said…
> jeez, not sure where this meme started

CORBA? What was popular before CORBA?

By the way, +1 for mentioning ML ;)
