Using calculus in simple arithmetic

In summary: 0.999... (repeating) and 1 are nothing more than different representations of the same number, so 0.999... = 1; and mathematically, 0.999... + 0.000000000000000001 = 1.000000000000000001.
  • #1
arzie2000
I learned this in my early years in college: using calculus, 1 + 1 is not really 2 (in terms of accuracy), but rounding it to the nearest integer gives 2. Can someone comment on this? If it is true, is there any way we can add 1 and 1 to get an exact 2 (of course, not by arithmetic), in terms of calculus or any other higher math principles? And can you show me the solution?
 
  • #2
One plus one is always two in the base ten numbering system, regardless of methods.
 
  • #3
arzie2000 said:
I've learned from early years in college. Using calculus, 1 + 1 is not really 1! but rounding it to the nearest integer is 1. Can someone comment on this? if it is true is there any way we can we can add 1 and 1 to an exact 1, (of course not by arithmetic) in terms of calculus or any other higher math principles. and can you show me the solution?

Congrats on your first post here on the PF, arzie. You can get some great help here. But, you need to be much more explicit in your question in this post. It is too general and vague for us to be able to figure out which aspect of math and the calculus you are asking about. Can you please offer us a web pointer or two, or post an equation or two to show us what you are asking about? Thanks!
 
  • #4
Sorry for the typo!

bel said:
One plus one is always two in the base ten numbering system, regardless of methods.

Sorry for the typo (what a shame!), but I guess you knew what I was saying, thanks... I'll be looking for my old textbooks that explain my question; I'll be back maybe tomorrow.
 
  • #5
Oh, hahah, my apologies, I thought you were trying to be weird or something. Anyway, I cannot think of any circumstances where an exact one plus an exact one does not equal an exact two in any numbering system. Perhaps you are referring to accuracy of measurements, where "1" is anything in the interval [.5, 1.5)?
 
  • #6
bel said:
Oh, hahah, my apologies, I thought you were trying to be weird or something. Anyway, I cannot think of any circumstances where an exact one plus an exact one does not equal an exact two in any numbering system. Perhaps you are referring to accuracy of measurements, where "1" is anything in the interval [.5, 1.5)?

Yup! What I mean is the accuracy; if I'm not mistaken, I think it's somewhat like 1.9999999999999... and so on...
 
  • #7
arzie2000,

You're talking about limit processes. These are basically sums where you keep adding smaller and smaller numbers for an infinite period of time.

Many people have an intuitive feeling that limit processes are somehow "approximations" to whole numbers. In the real world, we certainly cannot continue to add numbers for an infinite period of time. However, mathematics is not "the real world," it's a logical system that exists entirely in our heads. It is certainly acceptable (and even commonplace) to determine what the answer would be if we were capable of adding terms for an infinite period of time. There are no restrictions on dealing with infinity in mathematics like there are in the physical world.

This is the crux of the almost agonizing debate about whether 0.999... = 1. Those without much mathematical sophistication will always take the side that they are two different values, supported by various hand-waving arguments like "we can't actually tack nines on forever." The truth is that the two are nothing more than different representations of the same number, and 0.999... = 1. There really is no debate about the subject once one has acquired the mathematical sophistication to understand it.
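The limit process described above can be watched directly. Here is a minimal sketch (not part of the original post) using Python's exact `fractions.Fraction` so no floating-point rounding interferes:

```python
from fractions import Fraction

# Partial sums of 0.9 + 0.09 + 0.009 + ...: each step adds a smaller term.
partial = Fraction(0)
for k in range(1, 8):
    partial += Fraction(9, 10**k)
    print(k, partial, float(1 - partial))

# After n terms the gap to 1 is exactly 1/10**n; it shrinks toward 0,
# so the limit of the process -- the value of 0.999... -- is exactly 1.
```

Every partial sum falls short of 1 by exactly 1/10^n, and that shortfall is precisely what vanishes in the limit.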

- Warren
 
  • #8
chroot said:
..mathematics is not "the real world," it's a logical system that exists entirely in our heads...
- Warren

THAT'S EXACTLY what I want to hear. Thanks!

Now I am wondering: suppose you have a "WORLD-LY" number, say with precision to 0.0001 (which we may be capable of measuring in "REALITY"), in whatever unit, and you multiply it by a nearly infinitely small number (say, the mass of a particle). Would the result be as ACCURATE? Since you said that 0.9999... is equal to 1, then what if you add 0.999999999999... to 0.000000000000000001? Would that also be 1? Because the result would be 1.00000000000000000999999...
 
  • #9
Mathematically, 0.999... (repeating) = 1, so 0.999... + 0.000000000000000001 would be 1.000000000000000001.

As soon as you bring "real-world" quantities into the discussion -- numbers that have been measured in the real world -- the discussion becomes one of significant figures; it has no absolute answer. There are conventions that scientists use to decide how many digits in numbers like 1.000000000000000001 are significant, but they are only conventions. Mathematically, 1.000000000000000001 is 1.000000000000000001.
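As a sketch of that distinction (the values are the ones from the thread, run through Python's `decimal` module so no binary floating-point noise creeps in): the sum itself is exact; how many digits you keep is the convention.

```python
from decimal import Decimal, ROUND_HALF_EVEN

a = Decimal("1")                     # 0.999... (repeating) equals exactly 1
b = Decimal("0.000000000000000001")  # the added quantity, 10**-18
exact = a + b
print(exact)                         # 1.000000000000000001, mathematically exact

# Keeping only a few significant figures is a scientific convention, not math:
rounded = exact.quantize(Decimal("1.000"), rounding=ROUND_HALF_EVEN)
print(rounded)                       # 1.000
```

Nothing about the arithmetic changes between the two printed values; only the reporting convention does.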

- Warren
 
  • #10
chroot said:
The truth is that the two are nothing more than different representations of the same number, and 0.999... = 1. There really is no debate about the subject once one has acquired the mathematical sophistication to understand it.

Is that really true? I believed it was, but then... so 0.9999999... is not in the open interval (0.9, 1)?
 
  • #11
bel,

I will not allow this thread to devolve into a debate about 0.999... vs. 1, even if it practically started as one. 0.999... equals one, and there is absolutely no room for any argument otherwise. You can find many threads here on this subject that should hopefully enlighten you.

- Warren
 
  • #12
chroot said:
bel,

I will not allow this thread to devolve into a debate about 0.999... vs. 1, even if it practically started as one. 0.999... equals one, and there is absolutely no room for any argument otherwise. You can find many threads here on this subject that should hopefully enlighten you.

- Warren

I know this topic may be insignificant (even to me). It's just that a few days ago, I had the strangest dream... (or a revelation). The main idea is "Man cannot go beyond what they cannot reach." The way I see it, it's true literally, physically, emotionally... Now I'm asking: is it also true MATHEMATICALLY?

I don't think this topic is suitable for this thread anymore; it probably belongs in Quantum Physics/General Relativity.

Anyway, thanks a lot, guys, I get a clearer idea now.
 
  • #13
Also, there is no search engine on this site, so how can I read other discussions on this topic?
 
  • #14
There is a search engine; look harder, or use Ctrl-F to search within a page.

Real numbers can be constructed using Cauchy sequences. A real number is an equivalence class of Cauchy sequences of rationals, so any sequence converging to 1 can be thought of as a representative of 1. And 0.999... (that is, the sequence 0.9, 0.99, 0.999, ...) is a Cauchy sequence converging to 1. So with our limit theorems and this idea of Cauchy sequences we get this magical field called the real numbers. This is probably where you get your idea from.
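A rough illustration of that construction (a sketch, not a formal proof): the truncations 0.9, 0.99, 0.999, ... form a Cauchy sequence, and its distance from the constant sequence 1, 1, 1, ... tends to zero, so under the equivalence used to build the reals the two sequences name the same number.

```python
from fractions import Fraction

# Truncations of 0.999...: a_n = 1 - 10**(-n), computed exactly.
a = [1 - Fraction(1, 10**n) for n in range(1, 12)]

# Cauchy: late terms get arbitrarily close to each other.
assert all(abs(a[i] - a[j]) < Fraction(1, 10**5)
           for i in range(5, 11) for j in range(5, 11))

# The gap to the constant sequence 1, 1, 1, ... shrinks to 0, so the two
# sequences are equivalent: they define the same real number.
assert all(1 - a[n] == Fraction(1, 10**(n + 1)) for n in range(11))
print("both checks pass")
```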
 
  • #15
>The truth is that ... 0.999... = 1.

This is true when dealing with the real number system.

If you are dealing with the hyperreals, then the two numbers are different, and in fact their exact difference can be computed and expressed in terms of infinitesimals:

1 - 0.999... = 1 - 0.9 · sum( k=0 to H-1, (1/10)^k ),  with H = 1/epsilon
             = 1 - 0.9 · (1 - 1/10^H) / (1 - 1/10)
             = 1 - (1 - 1/10^H)
             = (1/10)^H = 0.1^(1/epsilon)
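For any finite count H of nines, the same telescoping can be checked with exact rational arithmetic (a sketch; the hyperreal argument just lets H be an infinite hypernatural instead of a finite integer):

```python
from fractions import Fraction

# 0.99...9 with H nines, built as the geometric sum 0.9 * sum_{k=0}^{H-1} (1/10)**k
for H in (1, 5, 20):
    nines = Fraction(9, 10) * sum(Fraction(1, 10**k) for k in range(H))
    # The gap to 1 is exactly 0.1**H, matching the telescoping identity.
    assert 1 - nines == Fraction(1, 10**H)
    print(H, float(1 - nines))
```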

It's really all a matter of perspective and whichever axioms you want to subscribe to.
 
  • #16
Blouge said:
>The truth is that ... 0.999... = 1.

This is true when dealing with the real number system.

If you are dealing with the hyperreals, then the two numbers are different
That is incorrect. The hyperreal 0.999... is, in fact, equal to the hyperreal 1.

What you are doing is choosing some transfinite H, and then considering the terminating decimal that consists of H 9's. This number is less than 1, but infinitesimally close to it.

However, 0.999... is not a terminating decimal, and it is equal to 1.
 

What is calculus and how is it related to simple arithmetic?

Calculus is a branch of mathematics that deals with the study of continuous change. It involves the use of mathematical models and techniques to solve problems involving rates of change and accumulation. Simple arithmetic, on the other hand, deals with basic operations like addition, subtraction, multiplication, and division. Calculus, through the concept of a limit, gives precise meaning to expressions such as the infinite decimal 0.999..., which arithmetic notation alone leaves ambiguous.

Why is it important to use calculus in simple arithmetic?

Calculus allows us to analyze and solve problems involving continuously changing quantities, such as rates of change and accumulation. By using calculus, we can find precise and accurate solutions to problems that would be difficult or impossible to solve with basic arithmetic alone. It also provides a foundation for understanding more advanced mathematical concepts.

What are some real-life applications of using calculus in simple arithmetic?

Calculus is used in a wide range of fields, including physics, engineering, economics, and medicine. In physics, calculus is used to understand and predict the motion of objects and the behavior of systems. In economics, calculus is used to analyze supply and demand curves and optimize production. In medicine, calculus is used to model the spread of diseases and determine optimal dosages for medications.

What are the basic principles of calculus that are used in simple arithmetic?

The two main principles of calculus are differentiation and integration. Differentiation is used to find the rate of change of a function, while integration is used to find the accumulation of a function over a given interval. These principles allow us to find precise solutions to arithmetic problems involving changing quantities.
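A minimal numerical sketch of those two principles for f(x) = x², using only the Python standard library (the function and sample points are illustrative choices, not from the text):

```python
def f(x):
    return x * x

# Differentiation: the rate of change at x = 3, via a shrinking difference quotient.
h = 1e-6
slope = (f(3 + h) - f(3)) / h                        # tends to the exact derivative 6

# Integration: accumulation of f over [0, 1], via a midpoint Riemann sum.
n = 100_000
area = sum(f((i + 0.5) / n) for i in range(n)) / n   # tends to the exact value 1/3

print(slope, area)
```

Shrinking h and growing n are themselves limit processes, the same machinery that assigns 0.999... its exact value of 1.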

Is it necessary to have a strong understanding of calculus to use it in simple arithmetic?

While a strong understanding of calculus can certainly be helpful, it is not always necessary for simple arithmetic. Basic knowledge of calculus principles and techniques, such as differentiation and integration, is sufficient for solving many arithmetic problems. A deeper understanding, however, yields more efficient and accurate solutions to more complex problems.
