This is, perhaps, more a question of the philosophy of math than of math itself. While it may be trivial to most people fluent in math, it was a bit surprising for me to learn recently that the set of algebraic numbers is countable. However, I quickly realized why that is: each algebraic number can be expressed with a finite polynomial, and the set of all possible finite polynomials (and, more generally, the set of all possible finite strings) is countable. Therefore the set of all algebraic numbers is countable.

Then I realized that this can be extended further. The set of all numbers that can be defined with a finite closed-form expression is likewise countable, and the same is true for all numbers that can be defined with a finite analytical expression. In fact, we can generalize this the most by saying: let's define a number as "expressible" if it can be expressed (unambiguously) with a finite amount of information. (It doesn't matter if you need a special formalism to formulate the expression, as long as you can define said formalism with a finite amount of information; basically, the formalism becomes part of the definition of the number.) The set of all "expressible" numbers is countable.

Now, I understand perfectly that this set is most probably ill-defined. However, let's assume for the moment that it is well-defined. This means that there are real numbers that cannot be expressed in any way with a finite amount of information: because the set of all "expressible" numbers is countable, there are real numbers that do not belong to that set.

The (mostly philosophical) question becomes: are these "non-expressible" numbers useful in any way? They cannot be the answer to any problem, because a number that's the answer to a problem belongs to the "expressible" set: it can be defined by the problem in question. These numbers cannot be used in any way, at least not individually, because there's no way you can define them.
At most you can define them in (uncountably) infinite sets, but never individually. If they can't be the answer to any problem, are they useful?
Wouldn't pi be considered a non-expressible number? It's certainly useful. Or e? Another useful number.
What Warp has discovered is the concept of the computable numbers. Wiki link: http://en.wikipedia.org/wiki/Computable_number. (Or possibly he's discussing the concept of the definable numbers, but that's on a bit flimsier ground than is the concept of computable numbers.) Both pi and e are computable numbers. For example, [itex]e=\sum_{n=0}^{\infty}\frac1{n!}[/itex]. That expression is only 31 LaTeX characters long, so obviously finite. Warp is correct that the set of computable numbers is countable. That means almost all of the reals are uncomputable! For an example of a non-computable number, see http://en.wikipedia.org/wiki/Chaitin's_constant.
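To make the "computable" point concrete, here is a minimal sketch (my own code, not from any of the posts; the function name is mine) that sums the first few terms of that series for e using exact rational arithmetic. A finite algorithm gets arbitrarily close to e, which is exactly what makes e computable:

```python
from fractions import Fraction

def e_partial_sum(terms):
    """Sum the first `terms` terms of e = sum_{n=0}^inf 1/n!, exactly."""
    total = Fraction(0)
    factorial = 1
    for n in range(terms):
        if n > 0:
            factorial *= n          # factorial == n! at this point
        total += Fraction(1, factorial)
    return total

# The series converges very fast: 20 terms already pin down e
# to about 18 correct decimal digits.
print(float(e_partial_sum(20)))  # 2.718281828459045
```

The error after `terms` terms is below 1/terms!, so each extra term buys roughly another digit or more.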
You are basically facing Skolem's paradox here. Succinctly, Skolem's paradox is that there exists a countable model of set theory. This means that although there are uncountable sets, they may not be uncountable from the "outside". From "within" set theory there may be no surjection from the set of natural numbers onto some other set (meaning it is uncountable), but it is perfectly possible for a model to contain two sets, each countable from the outside, for which there exists no bijection from "within" (you can't define one from the axioms of set theory). You basically run into issues when defining sets in a meta-language: you are not really working in set theory, you are looking at axiomatic set theory from the outside. And no, you can't really define the set of definable real numbers. In fact, the very notion of a definable number is not expressible in set theory. Although, looking from the "outside", there are only countably many such numbers, this does not yield a sensible concept of countably many such numbers inside set theory.
Interesting, so while pi can't be known in full, it can be computed to any level of accuracy with a finite number of calculations... I was thinking the OP was excluding numbers computed by infinite series, like pi.
What do you mean by "pi can't be known"? Sure we know pi. We do not know all of its digits in a decimal representation (and we never will, because there are infinitely many of them), but that is a completely different thing. Any digit in this representation can be calculated with finite effort.
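That "finite effort per digit" claim can be illustrated with a sketch (my own code; I'm assuming Machin's formula, pi = 16 arctan(1/5) - 4 arctan(1/239), and the function names are mine). It computes any requested number of decimal digits of pi using only exact integer arithmetic:

```python
def pi_digits(n):
    """Return pi truncated to n decimal digits, as a string, via
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = 10 ** (n + 10)              # 10 guard digits absorb truncation error

    def atan_inv(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., in scaled integers
        power = scale // x
        total = power
        k = 1
        while power:
            power //= x * x
            k += 2
            if k % 4 == 1:
                total += power // k
            else:
                total -= power // k
        return total

    pi = 16 * atan_inv(5) - 4 * atan_inv(239)
    s = str(pi // 10 ** 10)             # drop the guard digits
    return s[0] + "." + s[1:]

print(pi_digits(30))  # 3.141592653589793238462643383279
```

The loop terminates after finitely many steps for any n, so every digit of pi really is reachable with finite effort; only the full infinite expansion is out of reach.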
[itex]\pi[/itex] certainly is "definable": it can be defined as "the ratio of the circumference of a circle to its diameter", or as "the smallest positive value of x such that sin(x) = 0", or as the sum of any of a number of convergent series. Similarly, [itex]e[/itex] can be defined as [itex]\lim_{n\to\infty} \left(1+ \frac{1}{n}\right)^n[/itex]. Perhaps you are not clear on what is meant by a number being "definable" or "undefinable".
My apologies, I didn't intend to create any confusion here. I thought the OP's discussion of finite calculations for expressible numbers excluded pi and e, but I have since discovered otherwise. Thanks for the clarification. I realize that pi is definable and that we can never represent it with a finite set of calculations, and that's what I was keying on in the OP's post.
Please read my original post carefully. I started with the realization that the set of algebraic numbers is countable. Algebraic numbers include things like the square root of 2. While the decimal expansion of that number is infinite, the set of all algebraic numbers is still countable, because you can define each algebraic number with a finite polynomial.

Then I realized that I could expand this further and include things like the set of numbers that are definable by closed-form expressions (this set contains many numbers that are not algebraic, including pi). The ultimate conclusion is that the set of all numbers that can be defined (with a finite definition) is countable for the exact same reason, and this means that the rest of the real numbers cannot be defined by any expression, no matter what formalism you use.

Only these "definable" numbers are "useful" in the sense that only they can be the answer to any problem you could ever posit. (Of course, technically speaking it may be impossible to define the exact set of "definable numbers", but it's undeniable that there are uncountably many real numbers that cannot be defined with a finite amount of information, and thus cannot be the answer to any problem.)
This is a very interesting point, as it raises (at least in my mind) the question of whether that constant would fall into my hypothetical set of "definable numbers" or not. As I see it, the answer to "can you give me an example of a non-definable number?" ought to be "no" for obvious reasons: if you could, then you would have just defined the number in question, and thus it would belong to the "definable" set. However, if I ask "can you give me an example of an uncomputable number?", you can answer "yes, for example Chaitin's constant".

But I suppose it comes down to what is meant by "defining a number". Clearly Chaitin's constant has been "defined" by a finite amount of information, but on the other hand that definition doesn't help you know what the number actually is.

I suppose I'll have to settle for the set of all computable numbers, which might be the "largest" well-defined countable subset of the real numbers in which each individual number can be expressed with a finite amount of information (although, since I'm not a mathematician, I have no idea if this is correct). I am assuming that the set of all computable numbers is countable (taking "computable" to mean "can be computed with a finite algorithm").

So perhaps a more well-defined version of my original question: are uncomputable numbers "useful"?
Be careful: Algebraic numbers are roots of polynomials with integer (or rational - same thing) coefficients.
There are many numbers which can be defined "by a finite amount of information" but which are not computable. If we restrict ourselves to the set of computable reals, several important theorems of analysis fail; for example, the intermediate value theorem. Even though you can recursively define a sequence which converges to a particular number, that doesn't mean the sequence is computable. You do have analogous results for computable numbers and computable functions in constructive analysis, however.
No, I think you were right to observe that the set of definable numbers is countable: the set of putative definitions of numbers (in a finite alphabet) is countable, thus the set of definable numbers is countable (however "definable" is made precise), as is the (smaller) set of computable numbers. An aside: what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words?
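The first step of that argument, that the putative definitions form a countable set, can be made concrete with a small sketch (my own code, names are mine): enumerate every finite string over a finite alphabet in shortlex order. Every candidate definition then appears at some finite position, which is exactly a bijection with an initial segment of the naturals:

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet` in shortlex order.
    Each string appears at a finite position, so the set is countable."""
    for length in count(0):                      # lengths 0, 1, 2, ...
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings("ab")
print([next(gen) for _ in range(7)])  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

The same enumeration works for any finite alphabet (ASCII, LaTeX symbols, etc.), which is why the argument does not depend on the particular formalism used to write the definitions.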
121,121,121,121? Read as "one hundred twenty-one billion, one hundred twenty-one million, one hundred twenty-one thousand, one hundred twenty-one". Another possible answer is 1, if read as "eleven minus one minus one minus one...minus one". So a related question would be: what is the smallest positive integer which cannot be defined in English in a phrase of fewer than twenty words where no words are repeated?
No, you misunderstand. It's not whether it can be written out in the usual way in twenty words, but whether there is any phrase that will completely define it in twenty words. Your first number above could be "eleven squared, times a thousand and one, times a million and one". Some other number might be defined as "the ninety-third prime".
One has to be careful here. A number being "defined by a finite number of words" and the like is a potentially ill-conceived notion, depending on what you mean by it precisely. The reason is that one may fall into a Skolem-like paradox by saying "let x be the least integer which cannot be described in 100 characters or less". We have then just defined x in 100 characters or less. And since most integers cannot be expressed this way (there is a limit to how much information 100 characters can contain), x is (naively) well-defined by induction. Note that I'm using "character" instead of "word", because a word can potentially be an arbitrarily long concatenation of characters depending on what meaning you give it (if we let the decimal expansion of an integer count as a "word", this trivially makes all integers expressible in words). So x is an integer which cannot be defined in 100 characters or less, despite its definition: a contradiction.

We can similarly find a contradiction when (naively) considering the set of reals definable by words. This set must be countable, due to the countable nature of finite collections of words, but by a Cantor diagonalization argument we may (in a finite number of words) define a real number not contained in this set: a contradiction.

This shows again the problems that arise when we transcend the boundary of our formal axiomatic system into a metalanguage (allowing integers to be defined by words and characters) and then refer to this metalanguage inside our axiomatic system. It is analogous to the reason why we can't have a set of definable numbers. So while in the metalanguage the definable numbers may form a countable set, and thus provide us with a countable model of the real numbers, this is a property which cannot be expressed in set theory (where the reals are necessarily uncountable).
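The counting step here ("there is a limit to how much information 100 characters can contain") can be checked directly. A small sketch (my own, assuming for illustration a 128-symbol alphabet):

```python
# Count the strings of length <= 100 over a 128-symbol alphabet:
# a huge but finite number. Since there are infinitely many integers,
# by pigeonhole almost every integer has no description that short.
descriptions = sum(128 ** k for k in range(101))
print(len(str(descriptions)))  # the count itself has only 211 decimal digits
```

So there are fewer than 10^211 candidate descriptions, and every integer at or beyond that bound (and most below it) is left without one; that is what makes the "least undescribable integer" definition paradoxical rather than vacuous.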
Considering you found the smallest such number: probably, if this problem became famous enough, someone would give that number a name, and then it would fail its own test! E.g. "googolplex" is a very big number, but expressible as one English word; or Graham's number, a very large number indeed. @OP: I don't really know much about this field, but I did find this wiki article on "definable real numbers": http://en.wikipedia.org/wiki/Definable_number