gonegahgah said:
Hi Integral
The only reason we are disallowed from using 0.0...1 notation - even though it is easy to work out what is meant from the notation - is that this would destroy every proof that 1 = 0.9...
There is no other reason to disallow this notation.
But let's look at 1/3, since this is one of the simpler proofs used (although all proofs, no matter how cool they look, are ultimately just re-representations of this simplest one).
Let's work it out:
1. 3 ) 1 (can't do)
2. 3 ) 1.0 = 0.3 r 0.1 (as 0.3 x 3 = 0.9)
3. 3 ) 1.00 = 0.33 r 0.01 (as 0.33 x 3 = 0.99)
4. 3 ) 1.000 = 0.333 r 0.001 (as 0.333 x 3 = 0.999)
5. 3 ) 1.0000 = 0.3333 r 0.0001 (as 0.3333 x 3 = 0.9999)
and we could keep going forever.
And each of those can be written:
2. 1/3 = 0.3 + 0.1/3
3. 1/3 = 0.33 + 0.01/3
4. 1/3 = 0.333 + 0.001/3
5. 1/3 = 0.3333 + 0.0001/3
Now what is the limit of that fraction as the number of 0s goes to infinity?
And in all respects each of these answers is correct on its own.
i.e. 1/3 = 0.3 r 0.1 = 0.33 r 0.01 = 0.333 r 0.001, etc.
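A quick way to see the pattern mechanically (my own sketch in Python, not part of the original post): carry out the long division digit by digit and print the quotient and remainder at each step.

# Long division of 1 by 3: each step appends a 3 to the quotient
# and shrinks the remainder by a factor of 10.
leftover = 1
digits = ""
for n in range(1, 6):
    leftover *= 10                         # bring down a zero
    digit, leftover = divmod(leftover, 3)  # next quotient digit
    digits += str(digit)
    print(f"1/3 = 0.{digits} r 0.{'0' * (n - 1)}{leftover}")

This prints exactly the lines in steps 2-5 above (0.3 r 0.1, 0.33 r 0.01, and so on); the remainder never vanishes at any finite step, which is precisely why a limit is needed.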
Do you understand what decimal notation MEANS? The basic decimal notation 0.a_1a_2a_3\ldots MEANS the limit of the sequence 0.a_1, 0.a_1a_2, 0.a_1a_2a_3, \ldots (the limit of the sequence, not the sequence itself. If you disagree with that, you disagree with the basic definition of a 'base 10 numeration system'!). We don't need to "attempt to keep going for ever"; anyone who has taken enough "precalculus" to have seen geometric series should know exactly what the result is.
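In symbols, the definition just stated is
0.a_1a_2a_3\ldots = \lim_{n\to\infty} s_n, \qquad s_n = \sum_{k=1}^{n} \frac{a_k}{10^k}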
A geometric series: the sum of a finite number of terms of the form a + ar + ar^2 + \cdots + ar^n can be shown to be equal to
\frac{a(1 - r^{n+1})}{1 - r}
and, in the case that -1 < r < 1, that sum, as n goes to infinity, has limit
\frac{a}{1 - r}
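(For completeness, the standard derivation: set S_n = a + ar + \cdots + ar^n, multiply by r, and subtract, so that
S_n - rS_n = a - ar^{n+1}, \qquad S_n = \frac{a(1 - r^{n+1})}{1 - r}
and, since r^{n+1} \to 0 when -1 < r < 1, S_n \to \frac{a}{1 - r}.)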
While very few decimal sequences correspond to geometric series, the ones here do. 0.333... is DEFINED as the limit of the sequence 0.3, 0.33 = 0.3 + 0.03, 0.333 = 0.3 + 0.03 + 0.003, 0.3333 = 0.3 + 0.03 + 0.003 + 0.0003, ..., a geometric series with a = 0.3 and r = 0.1 < 1. The limit of that sequence is
\frac{0.3}{1 - 0.1} = \frac{0.3}{0.9} = \frac{1}{3}
Similarly, 0.999... is DEFINED as the limit of the sequence 0.9, 0.99 = 0.9 + 0.09, 0.999 = 0.9 + 0.09 + 0.009, 0.9999 = 0.9 + 0.09 + 0.009 + 0.0009, ..., a geometric series with a = 0.9 and r = 0.1 < 1. The limit of that sequence is
\frac{0.9}{1 - 0.1} = \frac{0.9}{0.9} = 1
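If you want to check those limits with exact arithmetic rather than floating point, here is a small sketch (my own, using Python's standard fractions module) showing that the n-th partial sums fall short of 1/3 and of 1 by exactly (1/3)*10^(-n) and 10^(-n), both of which go to 0:

from fractions import Fraction

for n in range(1, 6):
    thirds = sum(Fraction(3, 10**k) for k in range(1, n + 1))  # 0.3, 0.33, ...
    nines = sum(Fraction(9, 10**k) for k in range(1, n + 1))   # 0.9, 0.99, ...
    print(n, Fraction(1, 3) - thirds, 1 - nines)

The output (1/30 and 1/10, then 1/300 and 1/100, ...) shrinks by a factor of 10 per digit, in agreement with the geometric-series limits above.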
gonegahgah said:
The one thing that we can clearly see from the above is that we always have a remainder. The disappointing thing is that the maintainers of the 1 = 0.9... "proof" want us to simply let that remainder drop off the edge of the universe and to just forget about it.
The remainder "drops off the edge" only if you do not understand that the number represented by a decimal sequence is the limit of that sequence and not the sequence itself!
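To spell that out: step n of the division above says exactly
\frac{1}{3} = \underbrace{0.33\ldots3}_{n\text{ digits}} + \frac{10^{-n}}{3}
and \lim_{n\to\infty} \frac{10^{-n}}{3} = 0, so taking the limit of both sides gives 1/3 = 0.333... The remainder is not dropped or forgotten; its limit is 0, and the limit definition accounts for it exactly.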
gonegahgah said:
I know I will never be able to convince you, but I do have a question that perhaps you can help me with:
Why is it so important to mathematics that 1 = 0.9...?
What is at stake?
Thanks
It is as important as 1 + 1 = 2 or any other equation! There is nothing "at stake" except for the people who make such mistakes. Unfortunately, mathematicians have this compulsion to keep trying to correct mistakes about mathematics!