Seeking a reasonable mathematical explanation to a simple mathematical conundrum

AI Thread Summary
The discussion revolves around the well-known conundrum of the equality between 0.999... and 1. The original poster is confused that multiplying by 10 and subtracting leads to the conclusion that 0.999... equals 1, despite apparently starting with a different value. Participants clarify that 0.999... denotes the value of an infinite series, a completed infinite decimal rather than a quantity "tending towards" 1, and that the dilemma stems from treating the expansion as if it were finite. The conversation highlights the subtleties of decimal expansions and the nature of infinite series in mathematics.
cheenusj
Hi everyone,

I came upon a simple mathematical conundrum a few decades ago that has irked me for a while now and for which I don't have a reasonable mathematical explanation.

The only reasonable "explanation" I currently have is that 'certain mysterious things happen when tending towards infinity', whether infinitely big or infinitesimally small, say for example triangle angles summing to more than 180 degrees in infinitely large non-Euclidean triangles.

The problem is just one equation and I am hoping you have either heard of this "problem" or have a reasonable or better explanation than "mysterious things happen towards infinity".

Say, x = 0.9... (the nines repeating to infinity, i.e. I don't have the "bar" to go on top of the "9" for mathematical notation).

Multiply both sides of the equation by 10 and you obtain 10x = 9.9... (tending towards infinity).

Subtract the former from the latter and, since the ".9..." repeating to infinity knocks itself out, you have 9x = 9 (10x - x = 9.9... - 0.9...), therefore x = 1.
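For reference, the same cancellation can be written out with the geometric-series formula (a standard justification, added here alongside the post's argument):

```latex
x = \sum_{n=1}^{\infty} \frac{9}{10^{n}}, \qquad
10x = 9 + \sum_{n=1}^{\infty} \frac{9}{10^{n}} = 9 + x
\;\Longrightarrow\; 9x = 9 \;\Longrightarrow\; x = 1.
```

Equivalently, the closed form $\frac{9/10}{1 - 1/10} = 1$ gives the sum directly.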

However, "x" started off at x=.9 tending towards infinity, not x = 1.

Since multiplication is a series of additions, and subtraction is a negative addition, the process of multiplying something out, then doing a subtraction and then simplifying should always leave everything as it first started, without changing anything, i.e. you should end up with what you started with.

Taking a set of 'simpler' examples, this is easily observable and the above example seems "idiotic" (pointless), because the mathematical operators used will always result in ending with the same x-value that one started off with.

I also seem to find a "pedestrian" explanation that seems to illustrate the head-scratcher that this is for me.
One-ninth is 0.1 tending towards infinity, i.e. 1s going on forever.
Two-ninths is 2 times one-ninth, therefore I can easily derive that each "1 digit" behind the decimal point can be multiplied by 2. Therefore, two-ninths is 0.2 tending towards infinity.
I can keep incrementing and this is always correct until eight-ninths which is once again 0.8... (tending towards infinity). But then what is nine-ninths? Obviously it's "one", but if I used the logic above, one could easily argue that each 1-digit after the decimal is multiplied by 9, resulting in 0.9 tending towards infinity, which is wrong.
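The ninths pattern above can be checked with exact rational arithmetic. This is a small sketch (not part of the original post) using Python's `fractions.Fraction`: long division of k/9 produces the repeating digit k for k = 1..8, while 9/9 comes out as 1.000..., never 0.999...

```python
from fractions import Fraction

def decimal_digits(f, n):
    """First n digits after the decimal point of a non-negative Fraction,
    computed by long division."""
    digits = []
    r = f - int(f)            # fractional part
    for _ in range(n):
        r *= 10
        d = int(r)            # next digit by long division
        digits.append(str(d))
        r -= d
    return "".join(digits)

for k in range(1, 10):
    f = Fraction(k, 9)
    print(f"{k}/9 = {int(f)}.{decimal_digits(f, 5)}...")
```

The long-division algorithm simply never emits the 0.999... representation for 9/9; both decimals name the same number, but the algorithm always picks the terminating one.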

Since this "issue" has been bugging me for a while and I don't have any other contacts in academia or the field of mathematics, I was wondering what the explanation is and what "mistake" is hidden in the logic of the above "equation" and operations?

Many thanks for your thoughts.

Kind regards,

Cheenu
London, U.K.
 
9 * 1/9 = 1. If we have 0.999..., the number 0.000...1 can be added to 0.999... to obtain 1. What happens to 0.000...1 as the number of places between the 1 and the decimal point increases without bound? One could say that decimal expansions of rational numbers sometimes leave something to be desired. :o
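The shrinking of that leftover "0.000...1" can be made concrete with exact rationals (a hypothetical illustration, not from the thread): the gap between 1 and the n-nines truncation 0.99...9 is exactly 10^-n, which drops below any positive bound as n grows.

```python
from fractions import Fraction

def gap(n):
    """1 minus the n-nines truncation 0.99...9, as an exact Fraction."""
    truncation = Fraction(10**n - 1, 10**n)   # 0.99...9 with n nines
    return 1 - truncation

for n in (1, 5, 20):
    print(n, gap(n))          # gap(n) == Fraction(1, 10**n)
```

Since every finite truncation falls short of 1 by exactly 10^-n, the only number that all the truncations approach, and that 0.999... can denote, is 1 itself.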
 
cheenusj said:
However, "x" started off at x=.9 tending towards infinity, not x = 1.
Yes, but isn't that what you were trying to prove? You wouldn't have achieved anything if you ended up with $x= 0. \overline{9} $.

Since multiplication is a series of additions, and subtraction is a negative addition, the process of multiplying something out, then doing a subtraction and then simplifying should always leave everything as it first started, without changing anything, i.e. you should end up with what you started with.
I don't understand this logic. If I start with $x = \cos(0)$ is it wrong to end up with $x=1$?

Or let's say if I start with $x= \sqrt[3]{7+5\sqrt{2}}+ \sqrt[3]{7-5\sqrt{2}}$ is it wrong to end up with $x = 2$?

As long as you follow the rules of whatever field you're working with, you can end up with different forms of the same thing.
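Both rewrites above can be checked numerically; here is a quick sketch (my own addition, with a hypothetical `cbrt` helper for real cube roots of negative numbers, which `**(1/3)` alone does not handle):

```python
import math

# cos(0) really is 1.
assert math.cos(0) == 1.0

def cbrt(x):
    """Real cube root, valid for negative inputs too."""
    return math.copysign(abs(x) ** (1 / 3), x)

# 7 ± 5*sqrt(2) = (1 ± sqrt(2))**3, so the cube roots sum to 2.
x = cbrt(7 + 5 * math.sqrt(2)) + cbrt(7 - 5 * math.sqrt(2))
print(x)
assert abs(x - 2) < 1e-9
```

The point stands: a legal chain of operations is allowed, and expected, to land on a simpler form of the same value.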
 
cheenusj said:
Say, x = 0.9... (the nines repeating to infinity, i.e. I don't have the "bar" to go on top of the "9" for mathematical notation).

0.999... does not mean "tending towards" anything. It means there ARE infinitely many nines. It NEVER quits. Your entire dilemma stems from the assumption that it is somehow finite. It isn't.

I always find it useful to ask: if 0.9999... is NOT equal to 1, then how far from 1 is it? You cannot answer this question. Whatever distance you pick, someone can show that the difference is smaller than that.
 
