Neglecting higher powers of small quantities in calculations

Hi
If x(t) is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of x(t) can be considered small, with its powers greater than 2 neglected?
Thanks

PeroK
Homework Helper
Gold Member
2020 Award
Hi
If x(t) is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of x(t) can be considered small, with its powers greater than 2 neglected?
Thanks
In terms of mathematics, no. Take the function ##f(x) = x\sin(\frac 1 x)##. The function is bounded for small ##x##, but its derivative ##f'(x) = \sin(\frac 1 x) - \frac 1 x \cos (\frac 1 x)## is unbounded for small ##x##.
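To make the contrast concrete, here is a quick numerical sketch (the sample points ##x_n = 1/(2\pi n)## are chosen so that ##\cos(1/x_n) = 1##, making the size of ##f'## easy to read off):

```python
import math

def f(x):
    # f(x) = x*sin(1/x): bounded by |x|, so small for small x
    return x * math.sin(1.0 / x)

def fprime(x):
    # f'(x) = sin(1/x) - (1/x)*cos(1/x): unbounded as x -> 0
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

# At x_n = 1/(2*pi*n) we have sin(1/x_n) ~ 0 and cos(1/x_n) ~ 1,
# so f(x_n) is tiny while f'(x_n) ~ -2*pi*n blows up.
for n in (10, 1000, 100000):
    x = 1.0 / (2.0 * math.pi * n)
    print(f"x = {x:.3e}   |f(x)| <= {abs(x):.3e}   f'(x) = {fprime(x):+.3e}")
```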

In terms of physics, the physical constraints of the system may exclude this type of badly behaved function.

The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?

Another example I have seen is where a Lagrangian includes a term ##\dot{\alpha}^2##, a term ##\theta^2##, and a term ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta)##.
When the simplification that the angles ##\alpha## and ##\theta## are both small is made, the ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta)## term is neglected. Why is this term neglected?
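One way to see it: if ##\alpha, \theta, \dot{\alpha}, \dot{\theta}## all scale with a small parameter ##\varepsilon##, then ##\dot{\alpha}^2## and ##\theta^2## are ##O(\varepsilon^2)##, but ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta) \approx \dot{\alpha}\dot{\theta}(\alpha - \theta)## is ##O(\varepsilon^3)##, one order smaller. A rough numerical sketch (the O(1) sample values are arbitrary):

```python
import math

# O(1) reference values for the angles and angular velocities (arbitrary).
alpha, theta, alpha_dot, theta_dot = 0.3, 0.2, 0.5, 0.4

for eps in (0.1, 0.01, 0.001):
    a, t = eps * alpha, eps * theta
    ad, td = eps * alpha_dot, eps * theta_dot
    quadratic = ad**2 + t**2               # kept: shrinks like eps**2
    cross = ad * td * math.sin(a - t)      # dropped: shrinks like eps**3
    print(f"eps = {eps:g}   quadratic = {quadratic:.3e}   "
          f"cross = {cross:.3e}   ratio = {cross / quadratic:.3e}")
```

Each time ##\varepsilon## shrinks by a factor of 10, the cross term shrinks by a factor of 10 relative to the quadratic terms.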

Staff Emeritus
In terms of mathematics, no.
This.

If position is small, does that mean velocity is also small?

PhDeezNutz and S.G. Janssens
This.

If position is small, does that mean velocity is also small?
No, velocity could be large even for a small position.

But why are those terms neglected in posts #3 and #4?

Office_Shredder
Staff Emeritus
Gold Member
If the object is starting at rest, then the velocity is zero when the position is zero. Maybe that's being used/assumed? I think we would need to see more of the description of these examples to draw a definitive conclusion.

pasmith
Homework Helper
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?

If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in ##r## and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case ##\dot r \to 0##, so it is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).

Last edited:
ergospherical
ergospherical
Gold Member
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?
In small oscillations problems, after determining the exact equations of motion using the Euler-Lagrange equation, one can obtain the so-called linearised equations by neglecting all terms which are not linear in the generalised coordinates and their time derivatives.

In your example, that turns out to be equivalent to simply dropping the middle term ##-a^2 \dot{r}^2 r## from the Lagrangian.

(This approach will give you the same results as if you were to instead use the "approximate" quadratic forms ##T = \dfrac{1}{2} a_{ij}(\boldsymbol{q}_0) \dot{q}_i \dot{q}_j## and ##U = \dfrac{1}{2} \partial_i \partial_j U \bigg{|}_{\boldsymbol{q}_0} q_i q_j## to derive the equations of motion, with ##\boldsymbol{q}_0## the equilibrium point).
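To spell out the computation for this particular Lagrangian (a sketch, assuming ##r## is the only dynamical coordinate and ##A##, ##a##, ##g## are constants, which may not match the full problem): writing ##L = (A^2 - 2ga)r - a^2 \dot{r}^2 r##, we have ##\partial L / \partial \dot{r} = -2a^2 \dot{r} r## and ##\partial L / \partial r = (A^2 - 2ga) - a^2 \dot{r}^2##, so the Euler-Lagrange equation reads
\begin{align*}
\frac{d}{dt}\frac{\partial L}{\partial \dot{r}} - \frac{\partial L}{\partial r} = -2a^2(\ddot{r} r + \dot{r}^2) - (A^2 - 2ga) + a^2 \dot{r}^2 = 0.
\end{align*}
Every term carrying ##a^2## is at least quadratic in ##r, \dot{r}, \ddot{r}##, so linearising drops them all, leaving exactly the equation you would get from the simplified Lagrangian ##(A^2 - 2ga)r## alone.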

If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in ##r## and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case ##\dot r \to 0##, so it is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).
If I have a simple pendulum with zero friction, then it would oscillate forever. Is that an equilibrium state? How does that imply that ##\dot{r}## is small?
I have come across the following in some lecture notes: "for small amplitude oscillations about ##\theta = 0##, ##\dot{\theta}## is also small". But why? Can a harmonic oscillator not oscillate at high speed?

Office_Shredder
Staff Emeritus
Gold Member
Well, if it's a pendulum, then I would think not. The velocity at the bottom is determined by the potential energy at the top of the swing, so if the potential energy is small because the pendulum doesn't rise very high, then its velocity is small.

dyn
That is a good point. What about a mass oscillating on a spring: if the spring has a large ##k## value, could that not oscillate with a large speed?

Office_Shredder
Staff Emeritus
Gold Member
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
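A tiny sketch of that statement (illustrative values; ##v_{\max} = \sqrt{k/m}\, x_{\max}## follows from energy conservation, ##\frac{1}{2} m v_{\max}^2 = \frac{1}{2} k x_{\max}^2##):

```python
import math

def v_max(x_max, k=100.0, m=1.0):
    # Peak speed of a mass-spring oscillator of amplitude x_max,
    # from energy conservation: (1/2) m v_max^2 = (1/2) k x_max^2.
    return math.sqrt(k / m) * x_max

# Even for a stiff spring (large fixed k), v_max -> 0 as x_max -> 0.
for x in (1.0, 0.1, 0.01):
    print(f"x_max = {x:g}   v_max = {v_max(x):g}")
```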

ergospherical
Gold Member
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(|\mathbf{x} - \mathbf{x}_0|^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
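This statement can be checked numerically for the pendulum. The sketch below (pure Python RK4; the time span and step count are arbitrary choices) integrates the exact equation ##\ddot{\theta} = -\sin\theta## and its linearisation ##\ddot{\theta} = -\theta## from ##\theta(0) = \delta##, ##\dot{\theta}(0) = 0##, and shows that the worst-case error divided by ##\delta## shrinks as ##\delta## shrinks, in line with the ##\varepsilon\delta## bound:

```python
import math

def rk4_step(theta, v, h, accel):
    # One RK4 step for theta' = v, v' = accel(theta).
    k1t, k1v = v, accel(theta)
    k2t, k2v = v + 0.5 * h * k1v, accel(theta + 0.5 * h * k1t)
    k3t, k3v = v + 0.5 * h * k2v, accel(theta + 0.5 * h * k2t)
    k4t, k4v = v + h * k3v, accel(theta + h * k3t)
    return (theta + (h / 6) * (k1t + 2 * k2t + 2 * k3t + k4t),
            v + (h / 6) * (k1v + 2 * k2v + 2 * k3v + k4v))

def max_linearisation_error(delta, t_end=10.0, n=4000):
    # Integrate the exact pendulum and its linearisation side by side
    # and return the largest |theta_exact - theta_linear| observed.
    h = t_end / n
    tE, vE = delta, 0.0  # exact:      theta'' = -sin(theta)
    tL, vL = delta, 0.0  # linearised: theta'' = -theta
    err = 0.0
    for _ in range(n):
        tE, vE = rk4_step(tE, vE, h, lambda th: -math.sin(th))
        tL, vL = rk4_step(tL, vL, h, lambda th: -th)
        err = max(err, abs(tE - tL))
    return err

for delta in (0.5, 0.05, 0.005):
    e = max_linearisation_error(delta)
    print(f"delta = {delta:g}   max error = {e:.3e}   error/delta = {e / delta:.3e}")
```

The ratio error/##\delta## drops by roughly a factor of 100 each time ##\delta## shrinks by a factor of 10, consistent with the neglected term ##\sin\theta - \theta = O(\theta^3)##.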

PhDeezNutz, dyn and hutchphd
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
I thought maximum velocity occurs at zero displacement?

Office_Shredder
Staff Emeritus
Gold Member
The velocity at zero displacement is a function of the maximum displacement that occurs during the harmonic oscillation.

dyn
PeroK
Homework Helper
Gold Member
2020 Award
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(|\mathbf{x} - \mathbf{x}_0|^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.

S.G. Janssens
S.G. Janssens
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.
You are correct, and the statement from post #14 that you quoted is indeed incorrect, and not only because of missing smoothness conditions on ##F##.

(It is not that I think you need my confirmation, I just want to stress the point.)

ergospherical
Gold Member
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?

PeroK
Homework Helper
Gold Member
2020 Award
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?
Does Arnold say anything about the properties of ##F## he's assuming? There must be conditions on ##F## for his analysis to hold. E.g. putting a maximum bound on all derivatives of ##F## clearly does the trick.

ergospherical
Gold Member
Not that I can see, but this is an applied text, and the section I referenced discusses the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on ##\boldsymbol{F}## are, but as a physics student I don't particularly need to worry about it :)

PeroK
Homework Helper
Gold Member
2020 Award
Not that I can see, but this is an applied text, and the section I referenced discusses the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on ##\boldsymbol{F}## are, but as a physics student I don't particularly need to worry about it :)
I found a PDF of Arnold's Mechanics. I don't believe that theorem on page 100. It doesn't look right at all. What it says is:

For any duration ##T## (no matter how large) and for any precision ##\epsilon## (no matter how small), simply by choosing a small enough initial offset from equilibrium (##\delta##), the exact solution and the linearised solution remain within ##\epsilon \delta## of each other. That can't be right.

The problem with the theorem as stated is that the smaller you make your initial offset ##\delta##, the smaller you make the allowable error ##\epsilon \delta##.

The loophole is that the sum of the higher-order terms in the Taylor series, such as ##\frac{f''(x_0)x^2}{2} + \dots##, needn't shrink like ##x^2## for small ##x##. That's why you need the derivatives to be bounded, not simply the Taylor series to converge.

PS And, of course, when you write:
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(|\mathbf{x} - \mathbf{x}_0|^2)
\end{align*}
then bounded derivatives are exactly what you are assuming.
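For reference, the Lagrange form of the remainder makes the role of the bounded second derivative explicit: if ##|f''(\xi)| \le M## for all ##\xi## between ##x_0## and ##x##, then
\begin{align*}
\left| f(x) - f(x_0) - f'(x_0)(x - x_0) \right| \le \frac{M}{2}(x - x_0)^2.
\end{align*}
Without such a bound, the error of the linear approximation need not shrink like ##(x - x_0)^2## (the earlier example ##x\sin(1/x)## is not even differentiable at ##0##).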

Last edited: