Neglecting higher powers of small quantities in calculations

  • #1
dyn
Hi
If ##x(t)## is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of ##x(t)## can be considered small, with its powers greater than 2 neglected as well?
Thanks
 

Answers and Replies

  • #2
PeroK
Hi
If ##x(t)## is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of ##x(t)## can be considered small, with its powers greater than 2 neglected as well?
Thanks
In terms of mathematics, no. Take the function ##f(x) = x\sin(\frac 1 x)##. The function is bounded for small ##x##, but ##f'(x) = \sin(\frac 1 x) - \frac 1 x \cos (\frac 1 x)## which is unbounded for small ##x##.

In terms of physics, the physical constraints of the system may exclude this type of badly behaved function.
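The blow-up of the derivative can be seen concretely in a quick numerical sketch (Python; the sample points ##x_n = 1/(2\pi n)## are chosen purely for illustration, since there ##\sin(1/x_n) = 0## and ##\cos(1/x_n) = 1##):

```python
import math

# f(x) = x*sin(1/x) is bounded by |x| near 0, but its derivative
# f'(x) = sin(1/x) - (1/x)*cos(1/x) is unbounded as x -> 0.
def f(x):
    return x * math.sin(1.0 / x)

def fprime(x):
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

# At x_n = 1/(2*pi*n): sin(1/x_n) = 0 and cos(1/x_n) = 1, so
# f(x_n) = 0 while f'(x_n) = -2*pi*n, which blows up as x_n -> 0.
for n in [1, 100, 10000]:
    x = 1.0 / (2 * math.pi * n)
    print(f"x={x:.2e}  f(x)={f(x):+.2e}  f'(x)={fprime(x):+.2e}")
```

So the function values shrink with ##x## while the derivative values grow without bound.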
 
  • #3
dyn
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?
 
  • #4
dyn
Another example I have seen is a Lagrangian that includes an ##\dot{\alpha}^2## term, a ##\theta^2## term and an ##\dot{\alpha}\dot{\theta}\sin(\alpha-\theta)## term.
When the simplification is made that the angles ##\alpha## and ##\theta## are both small, the ##\dot{\alpha}\dot{\theta}\sin(\alpha-\theta)## term is neglected. Why is this term neglected?
 
  • #5
Vanadium 50
In terms of mathematics, no.
This.

If position is small does that mean velocity is also small?
 
  • #6
dyn
This.

If position is small does that mean velocity is also small?
No, velocity could be large even for a small position.

But why are those terms neglected in #3 and #4?
 
  • #7
Office_Shredder
If the object is starting at rest, then the velocity is zero when the position is zero. Maybe that's being used/assumed? I think we would need to see more of the description of these examples to draw a definitive conclusion.
 
  • #8
pasmith
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?

If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in [itex]r[/itex] and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case [itex]\dot r \to 0[/itex], so [itex]\dot r[/itex] is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).
 
  • #9
ergospherical
The specific problem involves a Lagrangian of the form ##(A^2 - a^2\dot{r}^2 - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##.
For small displacements ##r \ll 1## the Lagrangian is then simplified to ##(A^2 - 2ga)r##.
Why does the term involving the square of the time derivative drop out?
In small oscillations problems, after determining the exact equations of motion using the Euler-Lagrange equation, one can obtain the so-called linearised equations by neglecting all terms which are not linear in the generalised coordinates and their time derivatives.

In your example, that turns out to be equivalent to simply dropping the middle term ##-a^2 \dot{r}^2 r## from the Lagrangian.

(This approach will give you the same results as if you were to instead use the "approximate" quadratic forms ##T = \dfrac{1}{2} a_{ij}(\boldsymbol{q}_0) \dot{q}_i \dot{q}_j## and ##U = \dfrac{1}{2} \partial_i \partial_j U \bigg{|}_{\boldsymbol{q}_0} q_i q_j## to derive the equations of motion, with ##\boldsymbol{q}_0## the equilibrium point).
 
  • #10
dyn
If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in [itex]r[/itex] and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case [itex]\dot r \to 0[/itex], so [itex]\dot r[/itex] is eventually small.
(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).
If I have a simple pendulum with zero friction then it would oscillate forever. Is that an equilibrium state? How does that imply that ##\dot{r}## is small?
I have come across the following in some lecture notes: "for small amplitude oscillations about ##\theta = 0##, ##\dot{\theta}## is also small". But why? Can a harmonic oscillator not oscillate with a fast speed?
 
  • #11
Office_Shredder
Well if it's a pendulum, then I would think not. The velocity at the bottom is a function of the potential energy, so if the potential energy is small because the pendulum doesn't swing up very high, then the velocity of the pendulum is small.
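That energy argument can be sketched numerically (Python; the values of ##g##, ##L## and the release angles are illustrative choices):

```python
import math

# Frictionless pendulum of length L released from rest at angle theta0.
# Energy conservation: (1/2)*v^2 = g*L*(1 - cos(theta0)), so the speed
# at the bottom is v = sqrt(2*g*L*(1 - cos(theta0))) ~ theta0*sqrt(g*L)
# for small theta0: a small amplitude forces a small maximum speed.
g, L = 9.81, 1.0
for theta0 in [0.5, 0.1, 0.01]:
    v = math.sqrt(2 * g * L * (1 - math.cos(theta0)))
    approx = theta0 * math.sqrt(g * L)
    print(f"theta0={theta0:5.2f} rad  v_bottom={v:.5f}  theta0*sqrt(gL)={approx:.5f}")
```

The maximum speed shrinks in proportion to the amplitude, which is exactly why ##\dot\theta## is small whenever ##\theta## stays small.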
 
  • #12
dyn
That is a good point. What about a mass oscillating on a spring: if the spring has a large ##k## value, could it not oscillate with a large speed?
 
  • #13
Office_Shredder
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
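In symbols: for ##x(t) = x_{\max}\sin\omega t## with ##\omega = \sqrt{k/m}##, the maximum speed is ##v_{\max} = \omega x_{\max}##. A minimal sketch (Python; the stiff spring constant is an illustrative choice):

```python
import math

# Mass on a spring: x(t) = x_max*sin(w*t) with w = sqrt(k/m).
# The speed at x = 0 is v_max = w*x_max.  With k (hence w) fixed,
# v_max -> 0 as x_max -> 0, no matter how stiff the spring.
k, m = 400.0, 1.0
w = math.sqrt(k / m)          # 20 rad/s for these values
for x_max in [1.0, 0.1, 0.001]:
    print(f"x_max={x_max:6.3f}  v_max={w * x_max:7.3f}")
```

A large ##k## only sets the proportionality constant; the limit as the displacement shrinks is still zero.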
 
  • #14
ergospherical
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
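Arnold's estimate can be checked numerically. The sketch below (Python; the hand-rolled RK4 integrator, step size and the time horizon ##t_{\mathrm{end}} = 10## are illustrative choices) compares the exact pendulum ##\ddot\theta = -\sin\theta## with its linearisation ##\ddot\theta = -\theta##, both started from rest at ##\theta(0) = \delta##:

```python
import math

def rk4(f, y, t_end, h=0.001):
    """Integrate y' = f(y) for a 2-component state with classical RK4."""
    out = [y]
    for _ in range(round(t_end / h)):
        k1 = f(y)
        k2 = f([y[i] + 0.5 * h * k1[i] for i in (0, 1)])
        k3 = f([y[i] + 0.5 * h * k2[i] for i in (0, 1)])
        k4 = f([y[i] + h * k3[i] for i in (0, 1)])
        y = [y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in (0, 1)]
        out.append(y)
    return out

exact = lambda y: [y[1], -math.sin(y[0])]   # pendulum, units with g/L = 1
linear = lambda y: [y[1], -y[0]]            # linearised equation

for delta in [0.5, 0.1, 0.02]:
    ye = rk4(exact, [delta, 0.0], 10.0)
    yl = rk4(linear, [delta, 0.0], 10.0)
    err = max(abs(a[0] - b[0]) for a, b in zip(ye, yl))
    # err/delta plays the role of epsilon: it shrinks as delta does.
    print(f"delta={delta:4.2f}  max error={err:.2e}  error/delta={err / delta:.2e}")
```

The error divided by ##\delta## shrinks (roughly like ##\delta^2## here), consistent with the theorem: over a fixed time window, small enough ##\delta## keeps the exact solution within ##\varepsilon\delta## of the linearised one.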
 
  • #15
dyn
Sure, but if you fix ##k##, then consider the maximum velocity as a function of the maximum displacement, you still get that the velocity goes to zero as the displacement goes to zero.
I thought the maximum velocity occurs at zero displacement?
 
  • #16
Office_Shredder
The velocity at zero displacement is a function of the maximum displacement that occurs during the harmonic oscillation.
 
  • #17
PeroK
It is already explained in Arnold's book on mechanics. If you have a differential equation ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x})## with an equilibrium position ##\mathbf{x}_0##, then to first order
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.
 
  • #18
S.G. Janssens
That still assumes that ##F(x)## is "well-behaved". That can't possibly be true without some constraints on ##F## - having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.
You are correct, and the statement from post #14 that you quoted is indeed incorrect, and not only because of missing smoothness conditions on ##F##.

(It is not that I think you need my confirmation, I just want to stress the point.)
 
  • #19
ergospherical
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?
 
  • #20
PeroK
What is the mistake? Assuming that ##\mathbf{x}_L## and ##\mathbf{x}_E## have the same initial conditions and that ##\boldsymbol{F}## is well-behaved?
Does Arnold say anything about the properties of ##F## he's assuming? There must be conditions on ##F## for his analysis to hold. E.g. putting a maximum bound on all derivatives of ##F## clearly does the trick.
 
  • #21
ergospherical
Not that I can see, but this is an applied text and the bit I referenced was discussing the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on ##\boldsymbol{F}## are, but then again I don't particularly need to worry about them as a physics student :)
 
  • #22
PeroK
Not that I can see, but then again this is an applied text and the bit I referenced was discussing the linearisation of systems like ##\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\boldsymbol{q}}} = \dfrac{\partial L}{\partial\boldsymbol{q}}## near equilibrium positions. I have no idea what the conditions on F are, but then again I don't particularly need to worry about it as a physics student :)
I found a PDF of Arnold's Mechanics. I don't believe that theorem on page 100. It doesn't look right at all. What it says is:

For any duration ##T## (no matter how large) and for any precision ##\epsilon## (no matter how small), simply by choosing a small enough initial offset ##\delta## from equilibrium, the exact solution and the linearised solution remain within ##\delta \epsilon## of each other. That can't be right.

The problem with the theorem as stated is that the smaller you make your initial offset ##\delta##, the smaller you make the allowable error ##\delta \epsilon##.

The loophole is that the sum of terms in the Taylor series such as ##\frac{f''(x_0)x^2}{2} \dots## needn't converge as ##x^2## for small ##x##. That's why you need the derivatives to be bounded, not simply the Taylor series to converge.

PS And, of course, when you write:
\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}
then bounded derivatives are exactly what you are assuming.
 
