
Python: Getting a Fair Flip out of an Unfair Coin

If you have an unfair coin (i.e. one that favors heads or tails), how do you generate a fair flip (i.e. one that doesn't favor heads or tails)? My buddy Hy Carrinski and I came up with the following algorithm:
"""Get a fair flip out of an unfair coin."""

from collections import defaultdict
from random import random


def flip_unfair_coin():
"""Flip an unfair coin. Return a boolean."""
return random() > FLIP_RATIO

def create_fair_flip():
"""Generate a fair flip. Return a boolean."""
while True:
flip = flip_unfair_coin()
if flip != flip_unfair_coin():
return flip

# Demonstrate that it works.

if __name__ == '__main__':
results = defaultdict(int)
for i in xrange(1000000):
results[create_fair_flip()] += 1
percentage = (results[False] / float(results[False] + results[True])) * 100
print "Percentage heads:", percentage


Dirk Bergstrom said…
It works, but I don't really understand why (that's what happens when you don't learn statistics). An explanation would be nice...

Jean-Paul said…
While we're at it, there's a typo in the script: you write "Prove" but you merely demonstrate.
It works because even on an unfair coin, the chance of heads followed by tails is the same as the chance of tails followed by heads. This algorithm only returns a value when one result is followed by the other. Looking at the possible outcomes makes this pretty clear. If heads is .7 and tails is .3, then the four possible outcome probabilities are:

heads/heads = 0.49
heads/tails = 0.21
tails/heads = 0.21
tails/tails = 0.09

The algorithm throws away heads/heads and tails/tails, and returns heads for heads/tails and tails for tails/heads (it returns the first flip of the pair) - each with a probability of 0.21 per attempt (with transparent re-tries for the rejected cases).
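The enumeration above is easy to check with a few lines of Python. This is just a sketch of the arithmetic, using the .7/.3 bias from the comment as an example (the algorithm itself doesn't need to know the bias):

```python
# Check the outcome table above, assuming P(heads) = 0.7.
p_heads = 0.7
p_tails = 1 - p_heads

pairs = {
    ('heads', 'heads'): p_heads * p_heads,  # 0.49, rejected
    ('heads', 'tails'): p_heads * p_tails,  # 0.21, returns heads
    ('tails', 'heads'): p_tails * p_heads,  # 0.21, returns tails
    ('tails', 'tails'): p_tails * p_tails,  # 0.09, rejected
}

# Conditioned on a pair being kept, the two outcomes are equally likely.
p_kept = pairs[('heads', 'tails')] + pairs[('tails', 'heads')]
print(pairs[('heads', 'tails')] / p_kept)  # 0.5

# Each pair of flips is kept with probability 2*p*q, so on average you
# spend 1 / (2*p*q) pairs, i.e. 1 / (p*q) raw flips, per fair flip.
print(2 / p_kept)  # ~4.76 raw flips per fair flip here
```

Note that the cost grows quickly as the coin gets more lopsided: at a .99/.01 bias you'd average about 100 raw flips per fair one.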
Luke Plant said…
Very neat!

@Dirk: it works because even biased coin flips are still independent. (if they are not e.g. if a person is controlling the flips, or the coin has been set to do a certain sequence, this method will fail).

Essentially, you are flipping the coin twice, and then only looking at the times when the two results are different. The first time you get that scenario, pick the first result (or last, it doesn't matter). The probability of getting Heads then Tails is the same as the probability of getting Tails then Heads (due to independence), so you get odds of 50% for the overall result being Heads or Tails.
Jeff Epler said…
You've reinvented the von Neumann Extractor, and Jean-Paul is right about the reasons it works for a biased but non-autocorrelated source.
jjinux said…
> While we're at it, there's a typo in the script: you write "Prove" but you merely demonstrate.

Updated. Thanks.
jjinux said…
> It works because...

Wow, great explanation!
jjinux said…
> You've reinvented the von Neumann Extractor, and Jean-Paul is right about the reasons it works for a biased but non-autocorrelated source.

Nice job providing the reference. I'm 100% okay with the fact that I came up with the same thing as Von Neumann ;)
Paddy3118 said…
We have a task on this very topic, so you can see it solved in Python and other languages.
