Quote by Hurkyl
I wouldn't be so sure; remember that, there, the author was talking about constructive analysis, which doesn't have many of the nice properties we're used to having. There, a real number is usually defined in a way similar to this:
A real number is a (computable) function f that takes an integer n and returns a fraction r, satisfying the property that:
|f(n) - f(m)| < 2^{-n} + 2^{-m}
(To connect with the "usual" model of the reals, the sequence f(1), f(2), f(3), ... is a Cauchy sequence converging to the real in question)
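As a concrete sketch of such a real (in Python; the choice of sqrt(2) is mine, and only n >= 0 is handled), here is a function taking an integer n and returning a fraction within 2^{-n} of sqrt(2), which is enough to force the condition above for any n, m:

```python
from fractions import Fraction
from math import isqrt

def sqrt2(n):
    """A constructive real for sqrt(2): returns a Fraction r with
    |r - sqrt(2)| < 2^-n (n >= 0 assumed).

    Since each sqrt2(n) is within 2^-n of the same target,
    |sqrt2(n) - sqrt2(m)| < 2^-n + 2^-m, as the definition requires.
    """
    # isqrt(2 * 4^n) = floor(sqrt(2) * 2^n), so the error is below 2^-n.
    return Fraction(isqrt(2 * 4**n), 2**n)
```

Note that the function never produces "the real number" itself, only rational approximations on demand; that is the whole point of the constructive definition.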
We could define a "decimal number" in a similar fashion: it takes an integer n and returns a decimal digit... and satisfies the property that there exists a bound M such that n > M implies f(n) = 0. (To connect with the "usual" model, f(n) would be the nth place in a decimal numeral)
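Reading "the nth place" as the digit in the 10^n place (so the bound M just says there are only finitely many digits to the left of the decimal point — my reading of the definition, not spelled out above), a number like 12.5 becomes:

```python
from fractions import Fraction

def twelve_point_five(n):
    """12.5 as a 'decimal number': f(n) is the digit in the 10^n place.
    Nonzero only for n in {1, 0, -1}, so M = 1 witnesses the bound:
    n > 1 implies f(n) = 0."""
    return {1: 1, 0: 2, -1: 5}.get(n, 0)

# Recover the value as the (finite) sum of f(n) * 10^n.
value = sum(twelve_point_five(n) * Fraction(10)**n for n in range(-1, 2))
```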
And I believe that, in fact, there does not exist a (computable) function that takes a real number as input and returns a decimal number as output.

So you're saying that:
1. There's no computable function that converts arbitrary binary expansions to decimal expansions
2. Turing proved this
3. The article, when writing that Turing proved that no computable function returns the decimal expansion of a real, meant (1) and (2)
Alright, perhaps. Let me think about this.
Given a binary expansion* summing to N so far, where the last term's (binary) exponent is n, the number lies in [N, N + 2^n] if repeating 1s are allowed. For no computable conversion to decimal form to exist, there must, for almost all natural numbers k, be real numbers x_k and y_k that differ in the kth decimal place, such that no computable f(k) exists for which the first f(k) binary places suffice to distinguish x_k from y_k. Right?
I'll need more time to think about this.
* In which every digit is either 0 or 1; otherwise the number could change arbitrarily with each additional bit.