Math helps us plan trips, design trains, play Tetris, and much more. And there's lots of mathematics that hasn't been applied yet, but could be. But let's not kid ourselves: a lot of math is useless.
Applying math usually means processing some input values to get an output value. That process can be moving abacus beads up and down, writing numbers on paper, or entering them into a calculator. Anything will do, as long as the actions you take correspond to the steps of your theorem's proof.

Some logical axioms used by mathematicians don't really correspond to any processes we can carry out. Their semantics is physically dubious. Theorems proven using these axioms are "true", but cannot be used for practical purposes.
In this post I'll describe two axioms with questionable semantics, and some of the havoc they can wreak. These axioms have been used for centuries, drawing criticism from only a few mathematicians. Only recently has there been real controversy over them; it's no wonder most of us are still taught math in school as if both held.
The Law of the Excluded Middle
"Taking the Principle of the Excluded Middle from the mathematician...is the same as...prohibiting the boxer the use of his fists" - David Hilbert
If someone says it's not not raining, they usually mean it's raining.
Turning a double negative into a positive is known as applying the Law of the Excluded Middle. For centuries, mathematicians used this law without a second thought; it's what lets us do proofs by contradiction. It can be stated in several equivalent ways. Here are a few:
- All propositions are either true or false
- Anything that's "not false" is true
- P or (not P), for any proposition P
- (not (not P)) implies P (we'll poke at this form in code, just below)
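To see why that last form is computationally suspect, here's a minimal sketch in Haskell, using the Curry-Howard reading where a proposition is a type and a proof is a program (the names here are mine, not a standard library's). One direction of double negation is easy to program; the other has no possible implementation:

```haskell
import Data.Void (Void)

-- Under Curry-Howard, "not P" is a function taking a proof of P
-- to the empty type Void, which has no values.
type Not p = p -> Void

-- Constructively fine: a proof of p refutes any refutation of p.
doubleNegIntro :: p -> Not (Not p)
doubleNegIntro proof refute = refute proof

-- The Law of the Excluded Middle in disguise. No definition is
-- possible: there's no way to conjure an actual p out of a function
-- that merely consumes refutations of p.
-- doubleNegElim :: Not (Not p) -> p
```

That asymmetry is exactly the law's dubious semantics: it asserts that an answer exists without giving you any way to produce it.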
To immediately see a problem with this law, consider the statement "This sentence is false". Clearly that sentence is neither true nor false. But the law insists it must be one or the other, and either choice forces its opposite; the sentence ends up both true and false, and we have a contradiction.
So the Law of the Excluded Middle causes contradictions when used freely in English. Formal logics are a little more rigid; there the law generally doesn't cause contradictions, but it causes a different problem. Theorems proven using the law can't be used to compute whatever result they claim to have found. Not with an abacus, not with pencil and paper, not with any calculator.
Consider this proof by contradiction [link] from Khan Academy, that the square root of 2 is an irrational number. (Basically it says "The square root of 2 can not-not-not be rational. Therefore, it is not rational.")
If you need to know the actual value of the square root of 2, as you can probably tell if you watch the video, that proof does you no good. All it shows is that the square root of 2 is "an irrational number" - nothing about which irrational number. Is sqrt(2) greater or less than 1.4? Is it greater or less than 10,000? The proof doesn't say.

Logic without the Law of the Excluded Middle is sometimes called intuitionistic logic or constructive logic (as opposed to classical logic). In intuitionistic logic, the only way to show sqrt(2) is irrational is to construct an irrational number, and then show that its square is 2. And as you may know, a number has been constructed whose decimal expansion begins with 1.4142135 and whose square is indeed 2.
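Here's what such a construction can look like in practice: a short Haskell sketch (the names are mine, and Newton's iteration is just one convenient choice) that produces rational approximations whose squares land as close to 2 as you ask.

```haskell
import Data.Ratio ((%))

-- Newton's iteration for the square root of 2, over exact rationals.
-- Each step roughly doubles the number of correct digits.
sqrt2Approximations :: [Rational]
sqrt2Approximations = iterate (\x -> (x + 2 / x) / 2) (3 % 2)

-- The first approximation whose square is within eps of 2.
sqrt2Within :: Rational -> Rational
sqrt2Within eps = head [x | x <- sqrt2Approximations, abs (x * x - 2) < eps]

main :: IO ()
main = print (fromRational (sqrt2Within (1 % 10 ^ 15)) :: Double)
-- prints 1.4142135623730951
```

That stream of approximations is the constructive content the proof by contradiction lacks: a rule for getting as many correct digits as you want.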
Intuitionistic logic is sometimes said to be logic in which people matter. It's not enough to know that something "exists" or is "true". Show me one; I have to know how it's true. The proof of the pudding is in the eating of it.
The Axiom of Choice
"Of what use is your beautiful investigation of pi? Why study such problems when irrational numbers do not exist?" - Leopold Kronecker, 1882
There's an absolute shit ton of real numbers. You may recall that a real number is an infinite sequence of digits that needn't ever repeat. Put your finger on the number line, and you will be pointing at a real number. If you drag your finger along the line, you drag across a smooth continuum of real numbers.
Contrary to what Kronecker said, pi does exist in some sense. It exists as a definition: "the circumference of a circle divided by its diameter". But as a completed infinite decimal, pi indeed does not exist, and neither does any other irrational number; only their approximations exist.
Pi is fine, but the truth is that almost all real numbers are total decoys. There's no finite definition for them like there is for pi - most of them are purely random decimals, constructible only by literally rolling a 10-sided die infinitely many times. (There are only countably many finite definitions, but uncountably many reals, so nearly every real number escapes description entirely.) Most real numbers are 100% theoretical, impossible to actually construct.
[Figure: the real number line]

Real numbers, in their full uncountable generality, only exist in mathematics thanks to the Axiom of Choice. Here are a few ways this axiom can be stated:
- You can roll infinitely many dice.
- Given a (possibly infinite) collection of sets, each containing at least one object, it is possible to select exactly one object from each set.
- The Cartesian product of a (possibly infinite) collection of non-empty sets is non-empty.
No one can roll infinitely many dice (not as far as I know); the Axiom of Choice is false as stated. The axiom is used to construct real numbers by implicitly making infinitely many "decisions" about which digit comes next. Using this axiom, we end up with a mathematical theory of Real Numbers, including algebra and calculus. But when it comes time to apply this math, we always fall back to iterative, numerical methods.
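Here's the sort of thing "iterative, numerical methods" means, as a minimal Haskell sketch (the function name and tolerance are mine): bisection, which pins down a solution of cos x = x by repeatedly halving an interval, no completed infinity required.

```haskell
-- Bisection: the kind of iterative, numerical method applied math
-- actually runs on. Assumes f changes sign between lo and hi.
bisect :: (Double -> Double) -> Double -> Double -> Double -> Double
bisect f lo hi eps
  | hi - lo < eps     = mid
  | f lo * f mid <= 0 = bisect f lo mid eps -- sign change in left half
  | otherwise         = bisect f mid hi eps -- sign change in right half
  where mid = (lo + hi) / 2

main :: IO ()
main = print (bisect (\x -> cos x - x) 0 1 1e-12) -- roughly 0.7390851332
```

Classical analysis proves a solution "exists" via the intermediate value theorem; the bisection loop is what actually hands you its digits.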
The Banach-Tarski theorem (which states that you can take a sphere apart into finitely many pieces, and then reassemble them into two spheres identical to the first one) is another bit of havoc wreaked by the Axiom of Choice. See this xkcd comic:

Intuitionistic Logic
Numbers like pi and e have decimal expansions that can be computed reliably, digit by digit. The numbers that can be defined and computed like this are known as the Computable Numbers.
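Here's one common way to make "defined and computed" precise, sketched in Haskell (the encoding and names are my own, not a standard library's): a computable number is a program that, handed a precision, returns a rational approximation good to that precision.

```haskell
import Data.Ratio ((%))

-- A computable number: a rule that, given n, returns a rational
-- within 10^(-n) of the true value.
type Computable = Integer -> Rational

-- e as a computable number: sum 1/k! until the leftover tail
-- (which is smaller than 2/k!) drops below the requested precision.
e :: Computable
e n = go 0 1 0
  where
    eps = 1 % 10 ^ n
    go k fact acc -- fact is k!, acc is the sum of 1/j! for j < k
      | 2 % fact < eps = acc
      | otherwise      = go (k + 1) (fact * (k + 1)) (acc + 1 % fact)

main :: IO ()
main = print (fromRational (e 15) :: Double) -- 2.718281828459045
```

Run it and you can have as many digits of e as you care to ask for.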
If the real numbers are a flat continuum, the computable numbers are more like a zoo of points. The integers we may write out literally, as "1" or "2", or as "-3" using the '-' symbol. Fractions we write using the '/' symbol, pronounced "divided by" or "over". And we have still other notations for roots and radicals. Numbers like pi we define in roundabout ways, such as "the ratio of a circle's circumference to its diameter".
We use all these constructions to combine and re-combine numbers, adding them to our ever-growing pile. We can still define a < relation to put this zoo of numbers in order from least to greatest, but it's a collection of discrete points, not a continuum. The shape of the computable numbers is the shape of the definitions we use.
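Combining stays computable, too. In the precision-style encoding sketched above (again, my own names), addition just asks each operand for a little more precision than the caller wanted:

```haskell
-- To get within 10^(-n) of x + y, approximate each operand to within
-- 10^(-(n+1)); the two errors then sum to less than 10^(-n).
addC :: Computable -> Computable -> Computable
addC x y n = x (n + 1) + y (n + 1)
```

Subtraction works the same way, and multiplication is similar, with a bit more bookkeeping for its error bounds.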
Next post, I'll show you some actual constructions of numbers, and some ways to program with them.