Number Nine said:
Pi has a precise value. Why do you think it doesn't?
What is that value? Write it out in its entirety.
Number Nine said:
Or do you mean its representation should be unique (i.e. that it should be represented by a unique symbol)? This is clearly untrue: 0.333... and 1/3 represent the same number, for instance.
Reread my post; I conceded at the end that, by my logic, real numbers with predictably repeating decimals like 1/3 may also be regarded as functions.
It really wasn't much of a concession for me to make at all. I agree with it wholeheartedly.
Number Nine said:
"Number" has no definition in mathematics; it's a colloquialism.
Then this whole discussion is rather pointless, isn't it?
Number Nine said:
The ratio of a circle's circumference to its diameter.
Well, for this circle that ratio seems to me to be 2.4.
Or is this not a circle?
What's the difference between a circle and this? Is it a quantitative difference or a qualitative one?
They're both enclosed, at least: a circle and my contraption above.
They're also both convex.
What other qualitative differences can there be?
Number Nine said:
Of course they are. I can construct a sequence of rational numbers converging to any irrational number you like (this is, in fact, where irrational numbers "come from" when constructing the reals).
I can't give an irrational number. Not by value, at least.
How do you pass a function by value?
You don't; you pass it by description, by outlining the algorithm contained therein.
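To illustrate what I mean, here is a minimal Python sketch (the names and code are entirely my own, just for illustration): the function is handed over as a description of an algorithm, and a value only appears once an input is supplied.

```python
# A function is passed as a description of an algorithm (a callable object),
# not as any particular value; a value only appears when an input is applied.
def square(x):
    return x * x

def apply_at(recipe, point):
    # 'recipe' is the algorithm itself, not a result of running it
    return recipe(point)

print(apply_at(square, 3))    # 9
print(apply_at(square, 1.5))  # 2.25
```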
Number Nine said:
You don't seem to understand what a function is either. A function ##f:X \rightarrow Y## is a subset ##f \subset X \times Y## such that, for every ##x \in X##, there exists exactly one ##y \in Y## such that ##(x,y) \in f##.
Now tell me, how is an irrational number a function?
An irrational number is not a value but a computation algorithm. A function. A method of computing something whose value depends on the input provided.
Computation is meaningless when no input is provided or the input never changes.
Except as a way to save on memory usage at the cost of processing time, or to reduce the likelihood of value corruption (by recomputing the result each time it's needed, to the degree of precision required). However, the memory storing the algorithm itself may also be subject to corruption.
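As a rough sketch of this (using Python's decimal module purely as an example), the "number" is kept as an algorithm and recomputed to whatever precision is asked for, rather than stored as a digit string:

```python
# Sketch: an irrational "number" kept as an algorithm rather than a stored value.
# The input is the precision required; the output is recomputed on demand.
from decimal import Decimal, getcontext

def sqrt_two(digits):
    """Approximate sqrt(2) to 'digits' significant digits."""
    getcontext().prec = digits
    return Decimal(2).sqrt()

print(sqrt_two(10))  # 1.414213562
print(sqrt_two(30))  # 1.41421356237309504880168872421
```

Trading the other way, memory for processing time, would just mean caching the result, e.g. with functools.lru_cache.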
I must say it really does seem to me that you're intent on ridiculing me or my views instead of earnestly discussing them, just in case they're not self-evidently wrong but merely different.
mfb said:
This could be true in physics (we don't know).
It is certainly wrong in mathematics.
I think many of the things we colloquially call numbers, as Number Nine put it, are really functions.
I don't think it's wrong to call a function a function. I mean, it may be wrong by fiat.
I think when we're computing or manipulating expressions we're actually engaging in a sort of fundamental programming.
We are writing an algorithm for computing values depending on values received as inputs, not computing a value outright.
I think analytical mathematics is about this: writing algorithms to compute stuff, or rather simplifying and combining existing algorithms.
I think a mathematical function or expression is basically an algorithm. The results of an algorithm vary; they are not unique, and if they were, the algorithm would be pointless. That's why you don't refer to an algorithm by its value: it has no particular value.
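As a rough illustration of that view (using SymPy purely as an example, not as a claim about what mathematics "is"): an expression is kept as a symbolic object, it can be simplified and combined with other expressions, and it only yields particular values once inputs are supplied.

```python
# Sketch: an expression treated as an algorithm that is simplified,
# combined with other expressions, and only evaluated when given an input.
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x)**2 + sp.cos(x)**2 + x  # combine two "algorithms"
expr = sp.simplify(expr)                # sin^2 + cos^2 simplifies to 1
print(expr)                             # x + 1

f = sp.lambdify(x, expr)                # turn the expression into a callable
print(f(2), f(10))                      # 3 11
```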
This becomes clearer if and when you try to write a parser for mathematical language: you realize mathematics is actually metaprogramming.
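For what it's worth, here is a toy Python sketch of that idea (entirely my own illustration): a formula arrives as text, is parsed into a syntax tree, and comes out the other end as a program that can be run on different inputs.

```python
# Toy sketch: parsing a piece of "mathematical language" into a program.
# The string becomes a syntax tree, then bytecode, then a callable that is
# only evaluated when an input is supplied. (Toy only: never eval untrusted text.)
import ast

def make_function(expression, variable="x"):
    tree = ast.parse(expression, mode="eval")    # text -> syntax tree
    code = compile(tree, "<expr>", mode="eval")  # syntax tree -> bytecode
    return lambda value: eval(code, {"__builtins__": {}}, {variable: value})

f = make_function("x**2 + 1")
print(f(3))    # 10
print(f(0.5))  # 1.25
```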
I think there is no real reason why it would be wrong to view irrational numbers and real numbers with repeating decimals as functions.
Except for inferred dogma.