Something I picked up in my early years of college: using calculus, 1 + 1 is not really 2 (in terms of accuracy), but rounding it to the nearest integer gives 2. Can someone comment on this? If it is true, is there any way we can add 1 and 1 to get an exact 2 (of course, not by arithmetic) in terms of calculus or any other higher math principles? And can you show me the solution?
Congrats on your first post here on the PF, arzie. You can get some great help here. But, you need to be much more explicit in your question in this post. It is too general and vague for us to be able to figure out which aspect of math and the calculus you are asking about. Can you please offer us a web pointer or two, or post an equation or two to show us what you are asking about? Thanks!
Sorry for the typos (what a shame!), but I guess you knew what I was saying. Thanks. I'll be looking for my old textbooks that explain my question; I'll be back, maybe tomorrow.
Oh, hahah, my apologies, I thought you were trying to be weird or something. Anyway, I cannot think of any circumstances where an exact one plus an exact one does not equal an exact two in any number system. Perhaps you are referring to the accuracy of measurements, where "1" is anything in the interval [0.5, 1.5)?
Yup! What I mean is the accuracy. If I'm not mistaken, I think it's something like 1.9999999999999... and so on.
arzie2000,

You're talking about limit processes. These are basically sums where you keep adding smaller and smaller numbers for an infinite period of time. Many people have an intuitive feeling that limit processes are somehow "approximations" to whole numbers. In the real world, we certainly cannot continue to add numbers for an infinite period of time. However, mathematics is not "the real world"; it's a logical system that exists entirely in our heads. It is certainly acceptable (and even commonplace) to determine what the answer would be if we were capable of adding terms for an infinite period of time. There are no restrictions on dealing with infinity in mathematics like there are in the physical world.

This is the crux of the almost agonizing debate about whether 0.999... = 1. Those without much mathematical sophistication will always take the side that they are two different values, supported by various hand-waving arguments like "we can't actually tack nines on forever." The truth is that the two are nothing more than different representations of the same number, and 0.999... = 1. There really is no debate about the subject once one has acquired the mathematical sophistication to understand it.

- Warren
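A minimal sketch of the limit process described above, using Python's exact `fractions.Fraction` arithmetic (the helper name `partial_sum` is just for illustration). Each finite partial sum of 0.9 + 0.09 + 0.009 + ... falls short of 1 by exactly 10^-n, and that gap shrinks toward 0; the notation 0.999... denotes the limit, which is 1:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact sum of the first n terms: 9/10 + 9/100 + ... + 9/10**n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    gap = 1 - partial_sum(n)   # gap is exactly 1/10**n
    print(n, gap)

# No finite partial sum equals 1, but the gap can be made smaller
# than any positive number, so the limit of the sequence is 1.
```

This is the precise sense in which "we keep adding smaller and smaller numbers": no finite stage reaches 1, yet the limit is exactly 1.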
THAT'S EXACTLY what I wanted to hear. Thanks! Now I am wondering: suppose you have a "WORLDLY" number, say with precision to 0.0001 (which we may be capable of measuring in "REALITY"), of whatever unit. And let's say you multiply it by a nearly infinitely small number (say, the mass of a particle). Would the result be as ACCURATE? Since you said that 0.9999... equals 1, then what if you add 0.999999999999... to 0.000000000000000001? Would that also be 1? Because the result would be 1.00000000000000000999999...
Mathematically, 0.999... (repeating) = 1, so 0.999... + 0.000000000000000001 would be 1.000000000000000001. As soon as you bring "real-world" quantities into the discussion -- numbers that have been measured in the real world -- the discussion becomes one of significant figures; it has no absolute answer. There are conventions that scientists use to decide how many digits in numbers like 1.000000000000000001 are significant, but they are only conventions. Mathematically, 1.000000000000000001 is 1.000000000000000001. - Warren
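Warren's distinction between exact mathematics and measured precision can be illustrated with Python's `decimal` module (a sketch; the precision setting of 30 is an arbitrary choice for the example). Ordinary 64-bit floating point silently drops a quantity as small as 10^-18 added to 1, while exact decimal arithmetic keeps every digit:

```python
from decimal import Decimal, getcontext

# In 64-bit floating point, 1e-18 is below the precision available
# next to 1.0, so the sum collapses back to exactly 1.0:
print(1.0 + 1e-18 == 1.0)   # floating point loses the tiny term

# With exact decimal arithmetic at sufficient precision, every
# digit of the sum survives:
getcontext().prec = 30
total = Decimal(1) + Decimal("1e-18")
print(total)
```

This mirrors the point in the post: mathematically 1.000000000000000001 is just 1.000000000000000001; it is only measurement conventions (significant figures) or finite machine precision that would round it away.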
Is that really true? I believed it was, but then... so 0.9999999... is not in the open interval (0.9, 1)?
bel, I will not allow this thread to devolve into a debate about 0.999... vs. 1, even if it practically started as one. 0.999... equals one, and there is absolutely no room for any argument otherwise. You can find many threads here on this subject that should hopefully enlighten you. - Warren
I know this topic may be insignificant (even to me). It's just that a few days ago, I had the strangest dream... (or a revelation). The main idea here is "Man cannot go beyond what he cannot reach." The way I see it, it's true literally, physically, emotionally... Now I'm asking: is it also true MATHEMATICALLY? I don't think this topic is suitable for this thread anymore; it probably belongs in Quantum Physics/General Relativity. Anyway, thanks a lot, guys. I have a clearer idea now.
There is a search engine. Look harder. Or use Ctrl-F to search. The real numbers can be constructed using Cauchy sequences. That is, any Cauchy sequence converging to 1 can be taken as a representative of the number 1, and 0.999999... corresponds to a Cauchy sequence converging to 1. So with our limit theorems and this idea of Cauchy sequences, we get this magical field called the real numbers. This is probably where you get your idea from.
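A small sketch of the Cauchy-sequence idea mentioned above, again with exact `fractions.Fraction` arithmetic (the function name `a` and the cutoff `N = 6` are illustrative choices). The decimal 0.999... corresponds to the sequence 0.9, 0.99, 0.999, ..., i.e. a_n = 1 - 10^-n, whose terms eventually stay arbitrarily close to one another:

```python
from fractions import Fraction

# The decimal 0.999... corresponds to the sequence
# a_1 = 0.9, a_2 = 0.99, a_3 = 0.999, ... , i.e. a_n = 1 - 10**-n.
def a(n):
    return 1 - Fraction(1, 10**n)

# Cauchy property (spot-checked here): for all m, n >= N,
# |a(m) - a(n)| < 10**-N, and N can be taken as large as we like.
N = 6
assert all(abs(a(m) - a(n)) < Fraction(1, 10**N)
           for m in range(N, N + 5) for n in range(N, N + 5))

# In the Cauchy-sequence construction of the reals, the number this
# sequence represents is its limit: exactly 1.
```

The point of the construction is that a real number *is* (an equivalence class of) such a sequence, so "0.999..." and "1" name the same real number.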
> The truth is that ... 0.999... = 1.

This is true when dealing with the real number system. If you are dealing with the hyperreals, then the two numbers are different, and in fact their exact difference can be computed and expressed in terms of infinitesimals. Taking H = 1/ε nines:

1 - 0.999...9 (H nines)
  = 1 - 0.9 · Σ_{k=0}^{H-1} (1/10)^k
  = 1 - 0.9 · (1 - 1/10^H) / (1 - 1/10)
  = 1 - (1 - 1/10^H)
  = 0.1^H = 0.1^{1/ε}

It's really all a matter of perspective and whichever axioms you want to subscribe to.
That is incorrect. The hyperreal 0.999... is, in fact, equal to the hyperreal 1. What you are doing is choosing some transfinite H, and then considering the terminating decimal number that consists of H 9's. This number is less than 1, but infinitesimally close to it. However, 0.999... is not a terminating decimal, and it is equal to 1.