# Undefinable numbers

1. Aug 11, 2014

### Warp

This is, perhaps, more a question of philosophy of math rather than math itself.

While it may be trivial to most people fluent in math, it was a bit surprising for me to learn recently that the set of algebraic numbers is countable. However, I quickly realized why that is: Each algebraic number can be expressed with a finite polynomial, and the set of all possible finite polynomials (and, more generally, the set of all possible finite strings) is countable. Therefore the set of all algebraic numbers is countable.
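To make that countability concrete, here is an illustrative sketch (the "height" enumeration scheme below is just one arbitrary choice I'm making for the example, not a standard library routine): list every integer-coefficient polynomial by the finite quantity degree + sum of |coefficients|. Each height class is finite, so the union over all heights lists every polynomial, and each polynomial has finitely many roots.

```python
from itertools import product

def polynomials_of_height(h):
    """All integer-coefficient polynomials (as coefficient tuples,
    constant term first) whose degree plus sum of |coefficients|
    equals h, with a nonzero leading coefficient."""
    polys = []
    for degree in range(h + 1):
        budget = h - degree
        for coeffs in product(range(-budget, budget + 1), repeat=degree + 1):
            if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) == budget:
                polys.append(coeffs)
    return polys

# Each height class is finite, so running h = 1, 2, 3, ... lists every
# integer polynomial exactly once: a countable set.  Since each polynomial
# has finitely many roots, the algebraic numbers are a countable union of
# finite sets, hence countable.  E.g. x^2 - 2 (root: sqrt(2)) has height 5.
for h in range(1, 5):
    print(h, len(polynomials_of_height(h)))
```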

Then I realized that this can be further extended. The set of all numbers that can be defined with a finite closed-form expression is likewise also countable. The same is true for all numbers that can be defined with a finite analytical expression.

In fact, we can generalize this the most by saying: Let's define a number as "expressible" if it can be expressed (unambiguously) with a finite amount of information. (It doesn't matter if you need a special formalism to formulate the expression, as long as you can define said formalism with a finite amount of information. Basically, the formalism becomes part of the definition of the number.) The set of all "expressible" numbers is countable.

Now, I understand perfectly that this set is most probably ill-defined. However, let's assume for the moment that it is well-defined.

This means that there are real numbers that can not be expressed in any way with a finite amount of information. (Because the set of all "expressible" numbers is countable, that means that there are real numbers that do not belong to that set.)

The (mostly philosophical) question becomes: Are these "non-expressible" numbers useful in any way?

They cannot be the answer to any problem (because a number that's the answer to a problem belongs to the "expressible" set: It can be defined by the problem in question.) These numbers cannot be used in any way, at least not individually, because there's no way you can define them. At most you can define them in (uncountably) infinite sets, but never individually.

If they can't be the answer to any problem, are they useful?

2. Aug 11, 2014

### Staff: Mentor

Wouldn't pi be considered a non-expressible number? It's certainly useful. Or e? Another useful number.

3. Aug 11, 2014

### D H

Staff Emeritus
What Warp has discovered is the concept of the computable numbers. Wiki link: http://en.wikipedia.org/wiki/Computable_number. (Or possibly he's discussing the concept of the definable numbers, but that's on a bit flimsier ground than is the concept of computable numbers.) Both pi and e are computable numbers. For example, $e=\sum_{n=0}^{\infty}\frac1{n!}$. That expression is only 31 LaTeX characters long, so obviously finite.
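To make "computable" concrete, here is a small sketch (my own, nothing standard) that evaluates that series with exact rational arithmetic; the tail after $N$ terms is below $2/(N+1)!$, so any desired accuracy is reached in finitely many steps:

```python
from fractions import Fraction
from math import factorial

def e_approx(n_terms):
    """Partial sum of e = sum 1/n! as an exact rational.
    The tail after N terms is below 2/(N+1)!, so the error
    of the partial sum is explicitly bounded."""
    return sum(Fraction(1, factorial(n)) for n in range(n_terms))

# 20 terms already pin e down far beyond double precision.
print(float(e_approx(20)))
```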

Warp is correct that the set of computable numbers is countable. That means almost all of the reals are uncomputable! For an example of a non-computable number, see http://en.wikipedia.org/wiki/Chaitin's_constant.

4. Aug 11, 2014

### disregardthat

You are basically facing Skolem's paradox here. Succinctly, Skolem's paradox is that there exists a countable model of set theory. This means that although there are uncountable sets, they may not be uncountable from "outside". From "within" set theory, there may be no surjection from the set of natural numbers onto some other set (meaning that set is uncountable), but it is perfectly possible for a model to contain two countable sets between which there exists no bijection from "within" (none is definable from the axioms of set theory). You basically run into issues when defining sets in a metalanguage: you are no longer working in set theory, you are looking at axiomatic set theory from outside.

And no, you can't really define the set of definable real numbers. In fact, the very notion of a definable number is not expressible in set theory. Although, looking from the "outside", there are only countably many such numbers, this doesn't yield a sensible concept of countably many such numbers within set theory.

Last edited: Aug 11, 2014
5. Aug 11, 2014

### Staff: Mentor

Interesting, so while pi can't be known, it can be computed to any level of accuracy with a finite number of calculations... I was thinking the OP was excluding numbers computed by infinite series, like pi.

6. Aug 11, 2014

### Staff: Mentor

What do you mean by "pi can't be known"? Sure we know pi.
We do not know all of its digits in a decimal representation (and we never will, because there are infinitely many of them), but that is a completely different thing.
For any digit in this representation, it is possible to calculate it with finite effort.
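As a concrete sketch of that claim (my own toy implementation, with an ad hoc guard-digit count), Machin's formula $\pi = 16\arctan(1/5) - 4\arctan(1/239)$ lets plain integer arithmetic produce any prefix of pi's decimal expansion in finitely many steps:

```python
def pi_digits(n):
    """Return floor(pi * 10**n) via Machin's formula
    pi = 16*atan(1/5) - 4*atan(1/239), evaluated in fixed-point
    integer arithmetic with extra guard digits."""
    def atan_inv(x, one):
        # arctan(1/x) scaled by `one`, summing the alternating
        # series 1/x - 1/(3 x^3) + 1/(5 x^5) - ... until terms vanish
        total, term, k = 0, one // x, 0
        while term != 0:
            total += term if k % 2 == 0 else -term
            k += 1
            term = one // (x ** (2 * k + 1) * (2 * k + 1))
        return total
    guard = 10  # extra digits to absorb truncation error
    one = 10 ** (n + guard)
    pi = 16 * atan_inv(5, one) - 4 * atan_inv(239, one)
    return pi // 10 ** guard

print(pi_digits(30))  # the first 31 digits of pi as one integer
```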

7. Aug 11, 2014

### HallsofIvy

$\pi$ certainly is "definable": it can be defined as "the ratio of the circumference of a circle to its diameter", or as "the smallest positive value of x such that sin(x) = 0", or as the sum of any number of convergent series. Similarly, $e$ can be defined as $\lim_{n\to\infty} \left(1+ \frac{1}{n}\right)^n$.

Perhaps you are not clear on what is meant by a number being "definable" or "undefinable".

8. Aug 11, 2014

### Staff: Mentor

My apologies, I didn't intend to create any confusion here. I thought the OP's discussion of finite definitions for expressible numbers excluded pi and e, but have since discovered otherwise. Thanks for the clarification. I realize that pi is definable even though we can never write out its full decimal expansion with a finite number of calculations, and that's what I was keying on in the OP's post.

Last edited: Aug 11, 2014
9. Aug 11, 2014

### Warp

Please read my original post carefully. I started with the realization that the set of algebraic numbers is countable. Algebraic numbers include things like the square root of 2. While the decimal expansion of that number is infinite, the set of all algebraic numbers is still countable, and that's because you can define each algebraic number with a finite polynomial.

Then I realized that I could expand this further and include things like the set of numbers that are definable by closed-form expressions (this set contains a lot of numbers that are not algebraic, including pi.)

The ultimate conclusion is that the set of all numbers that can be defined (with a finite definition) is countable for the exact same reason, and this means that the rest of the real numbers cannot be defined with any expression (no matter what formalism you use). Only these "definable" numbers are "useful" in the sense that only they can be the answer to any problem you could ever posit.

(Of course, technically speaking it may be impossible to define the exact set of "definable numbers", but it's undeniable that there are uncountably many real numbers that cannot be defined with a finite amount of information, and thus cannot be the answer to any problem.)

10. Aug 11, 2014

### Warp

This is a very interesting point, as it raises (at least in my mind) the question of whether that constant would fall into my hypothetical set of "definable numbers" or not.

As I see it, the answer to "can you give me an example of a non-definable number?" ought to be "no" for obvious reasons: If you could, then you would have just defined the number in question, and thus it belongs to the "definable" set.

However, if I ask you "can you give me an example of an uncomputable number", you can answer "yes, for example Chaitin's constant".

But I suppose it comes down to what is meant by "define a number". Clearly Chaitin's constant has been "defined" by a finite amount of information, but on the other hand that definition doesn't help you know what that number actually is.

I suppose I'll have to settle with the set of all computable numbers, which might be the "largest" countable subset of real numbers (where each individual number can be expressed with a finite amount of information) that can be well-defined (although since I'm not a mathematician, I have no idea if this is correct.) I am assuming that the set of all computable numbers is countable (assuming that "computable" means "can be computed with a finite algorithm".)

So perhaps a more well-defined version of my original question: Are uncomputable numbers "useful"?

11. Aug 11, 2014

### mathman

Be careful: algebraic numbers are roots of polynomials with integer (or, equivalently, rational) coefficients.

12. Aug 11, 2014

### disregardthat

There are many numbers which can be defined "by a finite amount of information" which are not computable. Restricting to the set of computable reals, several important theorems of analysis fail, for example the intermediate value theorem. Although you can define a sequence recursively which converges to the number in question, that doesn't mean the sequence is computable. There are, however, analogous results for computable numbers and computable functions in constructive analysis.

13. Aug 11, 2014

### haruspex

No, I think you were right to observe that the set of definable numbers is countable:
The set of putative definitions of numbers (in a finite alphabet) is countable,
thus the set of definable numbers is countable (however that's defined),
as is the (smaller) set of computable numbers.
An aside: what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words?

14. Aug 11, 2014

### Staff: Mentor

121,121,121,121 ?

one hundred twenty one billion, one hundred twenty one million, one hundred twenty one thousand, and one hundred twenty one.

Another possible answer is 1 if read as

Eleven minus one minus one minus one...minus one

And so a related question would be what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words where no words are repeated?

Last edited: Aug 11, 2014
15. Aug 12, 2014

### haruspex

No, you misunderstand. It's not whether it can be written out in the usual way in twenty words, but whether there is any phrase that will completely define it in twenty words. Your first number above could be "eleven squared, times a thousand and one, times a million and one". Some other number might be defined as "the ninety-third prime".

16. Aug 12, 2014

### disregardthat

One has to be careful here. A number being "defined by a finite number of words" and the like is a potentially ill-conceived notion, depending on what you mean by it precisely. The reason is that one may fall into a Skolem-like paradox by saying "let x be the least integer which cannot be described in 100 characters or less". Then we have just defined x in 100 characters or less. And since most integers cannot be expressed in this way (there is a limit to how much information 100 characters can contain), x is well-defined by induction (naively). Note that I'm using "character" instead of "word", because a word can potentially be an arbitrarily long concatenation of characters depending on what meaning you give it (if we let the decimal expansion of an integer count as a "word", this trivially makes all integers expressible in words). So x is an integer which cannot be defined in 100 characters or less despite its definition, a contradiction.

We can similarly find an example of a contradiction when considering the set of reals definable by words (naively). This set must be countable due to the countable nature of finite strings of words, but by a Cantor diagonalization argument we may (in a finite number of words) define a real number not contained in this set, a contradiction.
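As a toy sketch of that diagonalization step (on a finite list, purely to illustrate the rule "make digit n differ from the n-th number's digit n"; the choice of replacement digits 5 and 6 is my own convention):

```python
def diagonal(digit_lists):
    """Given the n-th listed number's decimal digits as digit_lists[n],
    return digits of a number differing from every listed number:
    digit n is chosen unequal to digit n of the n-th number
    (using only 5 and 6 dodges the 0.999... = 1.000... ambiguity)."""
    return [5 if digits[n] != 5 else 6 for n, digits in enumerate(digit_lists)]

# A toy "enumeration" of three reals in [0, 1), digits after the point:
listed = [
    [1, 4, 1, 5, 9],   # 0.14159...
    [5, 0, 0, 0, 0],   # 0.50000...
    [3, 3, 3, 3, 3],   # 0.33333...
]
print(diagonal(listed))  # differs from the n-th row in position n
```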

This shows again the problems that arise when we transcend the boundary of our formal axiomatic system into a metalanguage (allowing integers to be defined by words and characters) and then refer to this metalanguage inside our axiomatic system. This is analogous to the reason why we can't have a set of definable numbers. So while in the metalanguage the definable numbers may form a countable set, and thus provide us with a countable model for the real numbers, this is a property which cannot be expressed in set theory (where the reals are necessarily uncountable).

Last edited: Aug 12, 2014
17. Aug 12, 2014

### Matterwave

Considering you found the smallest such number: probably, if this problem gets famous enough, one would give that number some name, and then it would fail its own test! E.g. googolplex, a very big number, but expressible as one English word; or Graham's number, a very large number indeed.

@OP: I don't really know much about this field, but I did find this wiki article on "definable real numbers" http://en.wikipedia.org/wiki/Definable_number

18. Aug 12, 2014

### haruspex

You don't need to name it. How many words did I take to define it?

19. Aug 12, 2014

### Matterwave

Indeed, disregardthat seems to have picked up on it. :)

20. Aug 12, 2014

### Staff: Mentor

ONE is the loneliest number...

21. Aug 12, 2014

### Boorglar

I think a more precise definition of "definable" could be to actually have infinitely many "orders" of "definability" (see http://en.wikipedia.org/wiki/Berry_paradox#Resolution).

A first-order definition would be a definition in terms of only the formal symbols in the system. For example, 2 could be defined as 1 + 1. Or pi could be defined as $2 \int_{-1}^{1}\sqrt{1-x^2}\ dx$. A definition such as "The smallest number that can be defined in less than 100 English words" is neither precise enough (which order of definability are we talking about?) nor a first-order definition.

A second-order definition would be a definition in terms of the formal symbols and in terms of the collection of first-order definable numbers. That is, a second-order definition can invoke the set of all first-order definitions in its own definitions, which would allow an expression like "the smallest number that cannot be defined with a first-order definition in less than 100 words". That would be a second-order definition, and thus it does not lead to a contradiction since, although this definition clearly has less than 100 words, it is not a first-order definition.

And so on, we could say an nth-order definition can use the set of all (n-1)th-order definitions in its own statement. Then the original problem could be reformulated as saying that the set of all first-order definable numbers is countable but the reals are uncountable, and thus some real numbers are not first-order definable. Those numbers will never be the answer to any equation or formula that can be written with "math symbols", and yet they can still be defined (using a higher-order definition).

Last edited: Aug 12, 2014
22. Aug 12, 2014

### Staff: Mentor

Perhaps we could work out a theorem relating the words to numbers with some rules like:

0) N is the number of English words (or some other language of choice)
1) number can't be named unless name words = N
2) number is a positive integer
3) minimum of N words in English to describe it
4) may use (or be limited to) any named math expressions and math operators like plus, minus, multiplied by, divided by, square root of, sine / cosine / ... of, x to the power of, natural log of, base ten log of, up-arrow, ...
5) ...

23. Aug 12, 2014

### disregardthat

While it may seem like a valid idea, I fail to see any practical use. We still can't say anything about formulas; to do that we would require a whole new axiom schema reserved for first-order formulas (and so on). Thus we have to add layer upon layer of new axioms each time we want to transcend to a higher degree of abstraction. And what would these axioms be? Even to do analysis on numbers defined in order 2 would require some very strong (possibly inconsistent) axioms.

As a side note, we would not (in the metalanguage for formulas of order n) exceed countability.

Last edited: Aug 12, 2014
24. Aug 12, 2014

### gopher_p

It looks like you're describing an informal version of what model theorists call the "definable closure" of a structure. It's basically an abstraction of the algebraic closure from algebra.

25. Aug 12, 2014

### Boorglar

I suppose this is true. But for the purposes of the question, we can limit ourselves to two levels of axioms/symbols. The first level would be the set of usual symbols of arithmetic, analysis and set theory together with the axioms of set theory. The second level would be a meta-language, where the first-order symbols are treated as objects. The second-order axioms would be rules for how each symbol can be combined with the others to form meaningful formulas (a "grammar" in a sense). So using the second-order system, we can talk about every possible formula expressible in the first-order system. This would include every mathematical expression in arithmetic, analysis and set theory.