# Will we get an infinitesimal x when we neglect ##x^2## in ##x+x^2##?

The relation between ##\Delta x## and ##dx## is that ##\Delta x## represents a finite change in the value of the variable, while ##dx## represents an infinitesimal change.

#### Mike_bb

Hello.

Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero, we can neglect ##\Delta x^2## and we get ##2x\,dx##.
Now let's assume that we have ##x + x^2##. When ##x## tends to zero, we can neglect ##x^2##. Will we get an infinitesimal ##x##, just like ##dx##?

Thanks.

$$d(x+x^2)=dx+d(x^2)=(1+2x)dx$$
At ##x=0## the ##2x## term vanishes, leaving ##dx##.

Sorry, I forgot to say that ##x+x^2## is just an expression in which ##x## is a variable, not a differential.

It seems you would like to have
$$\lim_{x\rightarrow 0} \frac{x+x^2}{x}=1$$
But that is no different from the derivative at any ##x##,
$$\frac{d(x+x^2)}{dx}=1+2x$$
evaluated at ##x=0##.

Ok. We have two similar expressions: 1. ##2x \Delta x + \Delta x^2##, where ##\Delta x## is the variable, and 2. ##x+x^2##, where ##x## is the variable.
In case 1 we get ##2x\,dx## when ##\Delta x## tends to zero.
In case 2 we get ##x## when ##x## tends to zero.
But in the first case we end up with the infinitesimal ##dx##, while in the second case we end up with just ##x##, not an infinitesimal. Why is that?
Thanks.

The defining formula of the derivative is
$$f^{'}(x):=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$$
When ##x=0##,
$$f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h}$$
Further, when ##f(0)=0##,
$$f^{'}(0)=\lim_{h\rightarrow 0}\frac{f(0+h)}{h}$$
We can replace ##h## with any letter we like. If we happen to choose '##x##' instead of ##h## (though it is confusing and even clashes with '##x=0##'), we get
$$f^{'}(0)=\lim_{x\rightarrow 0}\frac{f(0+x)}{x}=\lim_{x\rightarrow 0}\frac{f(x)}{x}$$
This corresponds to the first formula in my post, with ##f(x) = x+x^2##. I wrote it imagining what you have in mind, but I do not recommend it, because using ##x=0## and ##x\rightarrow 0## at the same time causes confusion.
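A quick numerical sketch of the last quotient (Python; the function name `f` is my own choice) shows it approaching ##f'(0)=1## for ##f(x)=x+x^2##:

```python
# Check numerically that f(x)/x -> 1 as x -> 0 for f(x) = x + x^2,
# i.e. that the quotient in the limit above approaches f'(0) = 1.
def f(x):
    return x + x**2

for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, f(h) / h)  # second column approaches 1
```

The quotient equals ##1+h## exactly, so the deviation from 1 shrinks linearly with ##h##.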

Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero, we can neglect ##\Delta x^2## and we get ##2x\,dx##.
If "##\Delta x## tends to zero" then the whole expression ##2x \Delta x + \Delta x^2## "tends to" zero. In more precise terms, ##\lim_{\Delta x \to 0} (2x \Delta x + \Delta x^2) = 0##. However, ##\Delta x^2## approaches zero more rapidly than does ##\Delta x##, but that's not really relevant to what you're asking.

However, ##\Delta x^2## approaches zero more rapidly than does ##\Delta x##
I didn't fully understand that. What does it mean? What is it used for?

I didn't fully understand that. What does it mean?
Let ##\Delta x = 0.1##; then ##\Delta x^2 = 0.01##.
Make ##\Delta x## ten times smaller, ##\Delta x = 0.01##; then ##\Delta x^2 = 0.0001##.
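The same comparison as a small script (a sketch; the starting value and number of steps are arbitrary):

```python
# Each time dx shrinks by a factor of 10, dx**2 shrinks by a factor of 100,
# so dx**2 becomes negligible compared with dx itself.
dx = 0.1
for _ in range(4):
    print(f"dx = {dx:.4g}   dx^2 = {dx**2:.4g}   dx^2/dx = {dx**2/dx:.4g}")
    dx /= 10
```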

I didn't fully understand that. What does it mean? What is it used for?
The expression ##\Delta x## is usually meant to represent a small (close to zero) number, but not an infinitesimal, which is what dx represents in some contexts.

If we start with ##\Delta x = 0.1## then ##(\Delta x)^2 = 0.01##. If we decrease ##\Delta x## to 0.01, then ##(\Delta x)^2 = 0.0001##.

As to what it's used for, you can often ignore very small numbers raised to high powers to get a decent approximation.
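For example, dropping the ##(\Delta x)^2## term linearizes a square (a minimal sketch; the numbers are mine):

```python
# Approximating (x + dx)^2 by x^2 + 2*x*dx, i.e. dropping the dx^2 term.
x, dx = 1.0, 0.001
exact = (x + dx) ** 2       # full expansion, includes dx^2
approx = x**2 + 2 * x * dx  # dx^2 term dropped
print(exact, approx)        # the two differ only by dx^2
```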

The expression ##\Delta x## is usually meant to represent a small (close to zero) number, but not an infinitesimal, which is what dx represents in some contexts.

If we start with ##\Delta x = 0.1## then ##(\Delta x)^2 = 0.01##. If we decrease ##\Delta x## to 0.01, then ##(\Delta x)^2 = 0.0001##.

As to what it's used for, you can often ignore very small numbers raised to high powers to get a decent approximation.
What is the relation between ##\Delta x## and ##dx##? If we want to get the term with ##dx##, then we discard ##\Delta x^2## and other higher-order terms in the expression. Is that true?

What is the relation between ##\Delta x## and ##dx##?
There is none. Don't let the guys fool you. ##dx## is not defined at the level of this thread. We have
$$\dfrac{dx}{dt}=\lim_{\Delta t \to 0}\dfrac{\Delta x}{\Delta t} \stackrel{!}{\neq}\dfrac{\lim_{\Delta t \to 0}\Delta x}{\lim_{\Delta t \to 0}\Delta t}$$ but not ##dx## on its own. Read post #8.
This talk of infinitesimals doesn't serve you well. Something "infinitely small" would have to be zero, but ##dx## isn't zero. So take ##\frac{dx}{dt}## as just written, or learn differential forms.

If we want to get the term with ##dx##, then we discard ##\Delta x^2## and other higher-order terms in the expression. Is that true?
When you discard something, you make an error. Whether that is allowed depends on whether we are only chatting on the internet or you are building bridges.

Hello.

Let's assume that we have ##2x \Delta x + \Delta x^2##. When ##\Delta x## tends to zero, we can neglect ##\Delta x^2## and we get ##2x\,dx##.
Now let's assume that we have ##x + x^2##. When ##x## tends to zero, we can neglect ##x^2##. Will we get an infinitesimal ##x##, just like ##dx##?

Thanks.
It seems to me you are missing the basics of (standard) real analysis. In modern terminology "infinitesimal" is part of non-standard analysis.

If you are using an archaic textbook, then it may be difficult for us to help.

It seems to me you are missing the basics of (standard) real analysis. In modern terminology "infinitesimal" is part of non-standard analysis.

If you are using an archaic textbook, then it may be difficult for us to help.
I read in this source: http://www.bndhep.net/Lab/Math/Calculus.htm

The fact that x^2 becomes insignificant compared to x for very small values of x is a fundamental principle of infinitesimal calculus. We say x is infinitesimal when we allow its value to approach zero, but never actually reach zero, and we write x→0. To express the behavior of x + x^2 as x→0 we say, "The limit of x + x^2 as x→0 is x."

Well, that source is wrong! Assuming it says what you say it says. The link you posted is broken.

There is no such thing as an infinitesimal in standard real analysis. Although the term is often thrown around loosely or erroneously.

In Loomis & Sternberg's Advanced Calculus an infinitesimal is defined to be a function ##f## such that ##f(0)=0## and ##f## is continuous at zero (so its limit at zero is zero as well). I think I've encountered this (or a similar) definition in some other textbooks (in Russian). Two further types of infinitesimal defined in the book are big ##O## and little ##o## (respectively, an infinitesimal that is Lipschitz continuous at zero, and an infinitesimal that goes to zero faster than the argument). I really liked this notation and how it's used to define the differential and derive its properties.
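A numerical sketch of the little-##o## condition described above (this is my own illustration, not code from the book): ##g## is little ##o## at zero when ##g(h)/h \to 0##.

```python
# g is "little o" at 0 if g(h)/h -> 0 as h -> 0.
# h**2 qualifies; 3*h is merely "big O" (the ratio stays bounded, here at 3).
for h in [0.1, 0.01, 0.001]:
    print((h**2) / h,   # tends to 0: h^2 is little o
          (3 * h) / h)  # stays 3:    3h is big O but not little o
```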

The reason why, in many derivations involving infinitesimals, we drop higher powers of ##dx## (or ##\Delta x##) is that somewhere in the derivation we divide by ##dx## (though this is not always explicitly stated) and then take the limit of each term as ##dx \to 0##. The higher-order powers then yield 0, because, for example, ##\frac{(dx)^2}{dx}=dx\to 0## and ##\frac{(dx)^3}{dx}=(dx)^2\to 0## as ##dx\to 0##. But we can't ignore a term with a single ##dx##, such as ##2x\,dx##, because after division by ##dx## it becomes ##2x##, and ##2x\to 2x## as ##dx\to 0##, while ##2x(dx)^2## after division by ##dx## becomes ##2x\,dx##, which tends to ##0## as ##dx\to 0##.
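Numerically, the point about dividing by ##dx## looks like this (a sketch, with ##h## playing the role of ##dx## and an arbitrary ##x##):

```python
# After dividing by h, the term 2*x*h leaves the finite value 2*x behind,
# while the h**2 and h**3 terms still vanish as h -> 0.
x = 3.0
for h in [0.1, 0.01, 0.001]:
    print((2 * x * h) / h,  # stays 2x = 6
          (h**2) / h,       # equals h,    -> 0
          (h**3) / h)       # equals h^2,  -> 0
```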

I think the question turns on a subtle point: the two expressions ##2x\Delta x +\Delta x^2## for ##\Delta x \rightarrow 0## and ##x+x^2## for ##x\rightarrow 0## are conceptually different, because they are two different objects. The first, approximated to first order, gives ##2x\,dx##; the second gives ##x##. One is a differential, the other is a function ...
Ssnow

The only way to resolve the 'paradoxes' of calculus is with the rigorous arguments found in real analysis. But at an intuitive level, Δx, Δf(x) = f(x + Δx) - f(x), etc., are simply small changes in x or f(x), while dx and df(x) are infinitesimal changes in x and f(x). The difference between small and infinitesimal is simply a 'for all practical purposes' thing. Δx is small but different from 0. dx is also small, but for all practical purposes it can be considered zero when you want it to be, even though it isn't. Exactly how small depends on the problem being considered, but it is assumed in an intuitive treatment of calculus that such a quantity can always be found. The idea is to get around the divide-by-zero problem: if dy and dx were 0, dy/dx would have no meaning, but if they are infinitesimal, everything is OK. Consider y = x^2. Then dy/dx = ((x + dx)^2 - x^2)/dx = (x^2 + 2x dx + dx^2 - x^2)/dx = 2x + dx. This is where we assume that for all practical purposes dx = 0 and get dy/dx = 2x. If we had used Δx instead, we would get Δy/Δx = 2x + Δx: close to, but not for all practical purposes the same as, 2x. In this way of doing calculus, when we say limit x→c f(x) = z, we mean that when x is infinitesimally close to c (but not exactly c), f(x) is infinitesimally close to z, even though f(x) may not even be defined at c.

Why not do calculus using real analysis from the start? Real analysis requires some familiarity with rigorous formal proof. For its use in engineering, economics, etc., you don't need to study this (though for a thinking student it does resolve Zeno's paradox, for example; start a new thread if interested), so the intuitive approach is done first, and the rigorous approach later for those who need to know it or are interested in knowing it. It's needed for advanced topics like Rigged Hilbert Spaces, which those interested in mathematical physics often want in order to make rigorous sense of things like the Dirac delta function. Still, it is surprising how far you can go with the intuitive approach; even that can be given an intuitive treatment, and it usually is. Very few people study Rigged Hilbert Spaces. Just nuts like me.

Thanks
Bill

In my understanding, dy is the linear change that approximates the actual change of the function. So if ##y=x^2##, then ##dy = 2x\,dx## is the linear approximation to the local change of ##f(x)=x^2##. I guess we can also formulate it in terms of the exterior derivative of a differential form.
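That reading can be checked numerically (a sketch; the point ##x## and the step sizes are arbitrary): for ##f(x)=x^2##, the actual change minus the linear part ##f'(x)\,dx## is exactly ##dx^2##.

```python
# dy = f'(x)*dx as the linear approximation to the actual change of f(x) = x^2.
def f(x):
    return x**2

x = 2.0
for dx in [0.1, 0.01, 0.001]:
    actual = f(x + dx) - f(x)  # true change of f
    linear = 2 * x * dx        # dy = f'(x) dx
    print(actual - linear)     # residual is dx**2, shrinks quadratically
```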


Yes, that is another way of looking at it. Take the graph of any smooth, differentiable function. Pick a point and then zoom in closer and closer. Eventually, again for all practical purposes, it will be a straight line that passes through that point. You can look at dy and dx as the sides of a right-angled triangle with that line as the hypotenuse. Intuitive calculus, I think, can be presented in many ways. But its rigorous underpinning is real analysis.

Thanks
Bill
