Cantor sought to distinguish between "countably infinite" and "uncountably infinite" sets. His proof that the real numbers are uncountably infinite is a classic of set theory, regurgitated onto exams by first-year graduate students everywhere. The cardinality of the real numbers is denoted "c" (often written in fraktur, 𝔠), for "continuum"(*). Older mathematics books often refer to c as "the power of the continuum," a moniker that was probably discarded because it has a pompous, science-fictionesque quality that causes said first-year graduate students to burst into fits of uncontrollable giggles as they stay up all night eating cheap pizza and memorizing Cantor's proof.

The rational numbers are classified as "countable," but for the sake of simplicity, let us examine the natural numbers as an example of a countably infinite set. The natural numbers are the positive whole numbers and are a subset of the rational numbers. While there is no "last" natural number, it is easy to cut the natural numbers into sets having a smallest and largest element. If you define set A as "all the natural numbers greater than one and less than twenty-one," then I know that the largest number in set A is twenty, and the smallest number in set A is two. I also know that there are nineteen elements in set A. I can count them.
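Since we are literally counting here, the claim is easy to check by brute force. A throwaway Python sketch (the upper range bound of 100 is arbitrary; any bound past twenty-one works):

```python
# Set A: all natural numbers greater than one and less than twenty-one.
# (The search bound of 100 is arbitrary; anything past 21 will do.)
A = [n for n in range(1, 100) if 1 < n < 21]

print(min(A))  # -> 2
print(max(A))  # -> 20
print(len(A))  # -> 19
```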

In fact, the cardinality of the natural numbers is the basis for the definition of "countable". A set is countable if it can be put into a one-to-one correspondence with the natural numbers. Intuitively, this makes sense! When we "count" objects like jelly beans, cards, or books, we assign them to a corresponding set of natural numbers, "One, two, three" and so on.
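As a toy illustration (the book titles are made up), counting in Python is exactly this pairing, and the same idea, written as a rule, is how an infinite set like the even numbers gets paired with the naturals:

```python
# Counting finite objects: pair each one with a natural number.
books = ["Moby-Dick", "Dubliners", "Emma"]
for n, title in enumerate(books, start=1):
    print(n, title)  # 1 Moby-Dick, 2 Dubliners, 3 Emma

# The same idea shows an infinite set is countable: the even
# numbers correspond to the naturals via the rule n -> 2n.
def nth_even(n):
    return 2 * n

print([nth_even(n) for n in range(1, 6)])  # -> [2, 4, 6, 8, 10]
```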

The real numbers are quite different. If you were to define set B as all the real numbers between one and twenty-one (not including one or twenty-one), I couldn't tell you what the smallest number in set B is. One is not included in set B, so that's out of the picture. Two? No, of course that's not the smallest, because 1.5 is in the set. Okay, 1.25 then? Nope, 1.00000001 is in the set, and it's smaller than 1.25.
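This game can be played forever, and a small Python sketch makes the losing strategy explicit (the function name `smaller_member` is just for illustration): hand it any candidate for "smallest member of set B," and it returns the midpoint between one and that candidate, which is also in set B and is smaller.

```python
# Given any candidate smallest element x of the open interval (1, 21),
# the midpoint between 1 and x is also in the interval and is smaller.
def smaller_member(x):
    assert 1 < x < 21
    return (1 + x) / 2

x = 2.0
for _ in range(5):
    x = smaller_member(x)
    print(x)  # 1.5, 1.25, 1.125, 1.0625, 1.03125
```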

The same discussion could be had regarding the largest number in set B, but it would be equally unproductive. Clearly, the real numbers have a fundamental dissimilarity to the natural numbers. In fact, they are "uncountably infinite," meaning there is no way to put them in one-to-one correspondence with the natural numbers. Not only is the entire real line uncountable, but so is any interval on the real line. Cantor's classic proof focuses on the interval (0,1), the so-called "unit interval"(#). The proof proceeds by the tried and true method of contradiction.

Suppose you could assign to each number in (0,1) a unique natural number. In that case, you could list them as a set of pairs, each element of (0,1) written as a decimal expansion and paired with a natural number. The "first" element of (0,1) corresponds to 1, the "second" to 2, and so on. Remember, we are doing a proof by contradiction here, so even though there is no "first" element, we are assuming that there is just to show how silly that would be. While we're at it, let's assume that those decimal numbers are arranged in increasing order, so the first is the smallest, the second the next smallest, and so on. This pairing of natural numbers and decimals is hard to do without proper formatting, so see the link listed at the end of this article for an illustration.(*)
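For a rough in-text version of that illustration (the digits below are invented, since of course no such list actually exists), the assumed pairing looks like this:

```python
# The assumed (and fictitious) pairing: each natural number
# matched with one decimal expansion in (0, 1).
pairing = {
    1: "0.3141592...",
    2: "0.2718281...",
    3: "0.5000000...",
    4: "0.1234567...",
}
for n, x in pairing.items():
    print(n, "<->", x)
```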

Now, decimal expansions go out to infinitely many places, so let's pick a number in (0,1) that differs from the "first" number in the second decimal place, the "second" number in the third decimal place, and so on. Also, let's make the first digit after our number's decimal point a one (that choice is somewhat arbitrary). To be safe, we should never choose a 0 or a 9 as one of our digits, so that we can't accidentally produce a second decimal expansion of a number already on the list (the way 0.0999... equals 0.1000...). Now, this number is different from every number in our set, since for each of them there is at least one digit that doesn't match. It is in (0,1), since it is greater than zero and less than one. And yet it matches none of the numbers in our "list".
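Here is that construction as a Python sketch on a hypothetical finite list (real decimal expansions are infinite and these digits are invented, but the rule is the same):

```python
# The author's diagonal rule on a hypothetical list. Each string holds
# the digits after the decimal point. The new number starts with digit 1,
# and its (k+1)th digit differs from the kth listed number's (k+1)th digit.
listing = ["3141592", "2718281", "5000000", "1234567"]

digits = ["1"]  # first digit after the decimal point is fixed at 1
for k, s in enumerate(listing):
    d = int(s[k + 1])  # the digit we must differ from
    # Choosing 5 or 6 always differs from d, and also avoids 0 and 9,
    # dodging the 0.0999... = 0.1000... ambiguity.
    digits.append("5" if d != 5 else "6")

new_number = "0." + "".join(digits)
print(new_number)  # -> 0.15556

# Verify it differs from every listed number in at least one place.
for k, s in enumerate(listing):
    assert digits[k + 1] != s[k + 1]
```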

Where do we put this little decimal demon? It cannot go between any of our numbers, since they are arranged from first (smallest) to second (next smallest) and so on. It also can't go last, since its first digit is one, and clearly 0.9-anything is greater than 0.1-something. Nor can it go first, since any number in (0,1) with a zero immediately following the decimal point will be smaller than our number. It has no home in our little countable set, and yet it is a member of (0,1).

This logical contradiction proves that our assumption, "the interval (0,1) is countable," leads to absurdity and hence must be false. The interval (0,1) is uncountable, and since the real numbers contain it, they too are uncountable.

Note that the link I gave for the visual does not really address the first/last issue. So I altered my proof to make it a bit more complete. Otherwise, the argument is identical.

*In finite mathematics, the cardinality of a set can be thought of as the number of objects in the set. For other areas of mathematics, cardinality becomes more abstract, but the core idea of expressing the "size" of a set remains.

#The notation (a,b) means "the interval containing all numbers between, but not including, a and b," and is unfortunately the same as the notation for the point (a,b) in the Cartesian plane. The meaning of the notation is thus context dependent.

* http://scidiv.bcc.ctc.edu/Math/diag.html