The **Banach-Tarski paradox** shows how f―ed up the real numbers are. Logical peculiarities confuse our intuitions about “length”, “density”, “volume”, etc. within the continuum (ℝ) of nonterminating decimals — which is why Measure Theory is a graduate-level mathematics course. These peculiarities were noticed around the turn of the 20th century and perhaps never satisfactorily resolved. (Hence I disagree with the use of real numbers in economic theory: they aren’t what you think they are.)

**Axiom of Choice → Garbage**

The paradox states that **if you assumed the axiom of choice** (or Zorn’s Lemma or the well-ordering of ℝ or the trichotomy law), **then you could take one ball and make two balls out of it**. It follows that you could make seven balls or thirty-seven out of just one. That doesn’t sound like real matter (it’s not; it’s the infinitely infinite mathematical continuum).

I can’t think of anything in real life that that *does* sound like. Conservation-of-mass-type constraints hold in economics (finite budget), probability (∑pᵢ=1), text mining, and in all the phase and state spaces I can think of as well. Generally you don’t make something out of nothing.
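The probability case is the easiest one to make concrete: total probability is conserved at 1, so inflating one outcome forces the others to shrink. A tiny, hypothetical illustration (the numbers are made up):

```python
# Conservation-style constraint: probabilities must sum to 1.
# Doubling one outcome's weight doesn't create probability from
# nothing -- renormalizing shrinks everything else to compensate.
p = [0.2, 0.3, 0.5]
assert abs(sum(p) - 1.0) < 1e-12

p[0] *= 2                      # try to "double the ball"
total = sum(p)                 # now 1.2 -- the books don't balance
p = [x / total for x in p]     # renormalize: conservation restored
print(round(sum(p), 10))       # 1.0
```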

**If it’s broke, throw it out.**

The logical rule-of-inference *Modus Tollens* says that **if A→B and ¬B, then ¬A**. For example, if leaving the fridge open overnight leads to rotten food, and the food is not rotten, I conclude that the fridge was not open overnight. Let A = Axiom of Choice and B = Banach-Tarski Paradox. The Axiom of Choice leads to the Banach-Tarski paradox; said paradox is false; so why don’t we reject the Axiom of Choice? I have never gotten a satisfactory answer to that. ℝ is still used as a base corpus in dynamical systems, economics, fuzzy logic, finance, fluid dynamics, and, as far as I can tell, everywhere.
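Since A and B each range over only two truth values, modus tollens can be checked exhaustively — a minimal sketch:

```python
# Modus tollens: from (A -> B) and (not B), conclude (not A).
# Material implication: A -> B is (not A) or B.
def implies(a, b):
    return (not a) or b

# Check every truth assignment: whenever the premises hold,
# the conclusion must hold too.
for A in (False, True):
    for B in (False, True):
        if implies(A, B) and not B:
            assert not A
print("modus tollens holds in all cases")
```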

**How does the proof of the paradox work?**

The proof gives instructions for how to:

- Partition a solid ball into five unmeasurable disjoint subsets.
- Move them around (rigidly, without adding mass).
- Get a new solid ball, whilst leaving the first ball intact.

The internet has several readable, detailed explanations of the above. You’ll end up reading about Fuchsian groups, Henri Lebesgue’s measure, and hyperbolic geometry (& the Poincaré disk) along the way.
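To get a taste of the group theory without the full machinery: the engine of the proof is a paradoxical decomposition of the free group on two generators, which sits inside the rotation group of the sphere. Here is a rough, purely illustrative sketch — abstract letters stand in for the two rotations — that checks the key identity (all of F₂ rebuilt from just two of its pieces) on words up to a fixed length:

```python
# Paradoxical decomposition of the free group F2 on generators a, b.
# Write A = a^{-1}, B = b^{-1}. S(x) = reduced words starting with x.
# Key identity: a * S(a^{-1}) = everything NOT starting with a,
# so S(a) and a*S(a^{-1}) together rebuild all of F2.
LETTERS = "aAbB"
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All reduced words in F2 (no letter next to its inverse), up to max_len."""
    words, frontier = [""], [""]
    for _ in range(max_len):
        nxt = [w + x for w in frontier for x in LETTERS
               if not w or w[-1] != INV[x]]
        words += nxt
        frontier = nxt
    return set(words)

def mul(x, w):
    """Left-multiply reduced word w by generator x, cancelling if needed."""
    return w[1:] if w and w[0] == INV[x] else x + w

N = 6
F2 = reduced_words(N)
S = {x: {w for w in F2 if w.startswith(x)} for x in LETTERS}

# Multiplying can shorten words, so compare against words of length <= N-1.
lhs = {mul("a", w) for w in S["A"]}
rhs = {w for w in reduced_words(N - 1) if not w.startswith("a")}
print(lhs == rhs)  # True: one piece, shifted, covers "everything else"
```

The same trick with b and b⁻¹ gives a second copy, and the actual proof transports this group-level doubling onto points of the ball — that is where the Axiom of Choice and the five non-measurable pieces come in.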

Stan Wagon has also written a Mathematica script to display the subsets in a hyperbolic geometry (whence these pictures come). Thanks, Stan!

**Added, three years later:** Most software (`bc -l` excepted) is written with double-precision floats, which, as @sqrtnegative1 points out, form a finite set (something like `2^64` values). So even though programmers, in a misguided attempt to showcase their erudition, sometimes call floats “reals”, computer stuff doesn’t suffer from these classic ℝ problems. A single real number would take an infinite amount of time to compute (yes, even in parallel) and an infinite amount of space to store.
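A quick illustration of the gap between floats and reals (standard Python floats are 64-bit IEEE 754 doubles):

```python
import struct

# Doubles are 64 bits, so there are at most 2**64 of them: a finite set,
# nothing like the continuum. Most decimals aren't exactly representable.
x = 0.1 + 0.2
print(x == 0.3)      # False: 0.1, 0.2, 0.3 all get rounded to nearby doubles
print(x)             # 0.30000000000000004

# Every float is just 8 bytes -- here are the bytes behind 1.0.
print(struct.pack(">d", 1.0).hex())  # 3ff0000000000000
```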

So we can obviously limit the problem to theory. Can I tell that the object you claim is a good model for X can’t really make sense for X? For example, you could know that ℤ/10^10^10^10^10 shouldn’t model amounts of matter in the universe, even though it would count things correctly for probably every empirical example I could come up with. But *it just doesn’t make sense* to say that the counting resets after 10^10^10^10^10. It’s intuitively wrong, stupid, or at least not what I’m trying to say. Not because there is some empirical example proving it’s wrong. Just because ℤ/n is circular and *I mean a line*.
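A toy version of that wraparound, with a small modulus standing in for 10^10^10^10^10:

```python
# Counting in Z/n "resets": the circle forgets how many laps you've done.
# I mean a line; Z/n is a circle.
n = 12
count = 0
for _ in range(14):          # count 14 things...
    count = (count + 1) % n
print(count)                 # 2 -- not 14: twelve of them vanished
```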

So that’s the kind of error the Banach-Tarski paradox presents us with for ℝ. I wanted a line, I wanted it smooth, liquidy, continuous. I *didn’t* want a thing that violates conservation of mass. Same type of problem as above.

But mathematicians who are aware of measure theory are able to, and typically do, excise the risk-prone part of the “portfolio”. Whenever you hear someone begin with Borel-measurable sets or Lebesgue-measurable space, really any invocation of measure, they’ve cut out the theoretically problematic part of the ℝ’s. You won’t violate `size(a+b) > size(a) + size(b)` because by assumption `size(a+b) = size(a) + size(b)`. I think *that’s* why ℝ doesn’t pose a problem: because people really have moved on.
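The additivity assumption itself is almost trivial to state: for disjoint pieces, sizes just add. A toy illustration with interval lengths (the one-dimensional case of Lebesgue measure):

```python
# Finite additivity for disjoint intervals: size(a ∪ b) = size(a) + size(b).
# This is exactly the bookkeeping rule that Banach-Tarski's
# non-measurable pieces escape -- measurable sets can't double.
a = (0.0, 1.0)     # interval [0, 1], length 1.0
b = (2.0, 3.5)     # interval [2, 3.5], length 1.5, disjoint from a

def length(iv):
    return iv[1] - iv[0]

print(length(a) + length(b))  # 2.5: the union's total length, no more, no less
```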