Differential Equation of a Pendulum

  • #1
Tygra
Homework Statement
How to compute the angle of displacement
Relevant Equations
In Post
Hi everyone,

I am going through the book called "Numerical Methods for Engineers" and I am trying to solve for the angle of displacement of a pendulum. I am using MATLAB to do this. However, I am getting some strange results.

In the book the differential equation is:

[tex]\frac{d^2\theta}{dt^2} = -\frac g l \sin\theta[/tex]


Because this is a second order differential equation, I have converted it to two first order equations so that I can solve it using Euler's method.

So I get:

[tex]\frac{d\theta}{dt} = z[/tex]

[tex]\frac{dz}{dt} = -\frac g l \sin\theta[/tex]


Where:

g = gravitational acceleration
l = Pendulum length

Code:
g = 9.81;
l = 0.2;

t = 0:0.1:50;
dt = t(2) - t(1);

z(1) = 0;
theta(1) = 90;

for i = 1:length(t)-1
    theta(i+1) = z(i)*dt + theta(i);
    z(i+1) = -(g/l)*sind(theta(i))*dt + z(i);
end

This is the graph

[Attachment: pendulum_graph.png — theta vs. time; the oscillation grows until it runs away]



This does not look correct to me. The amplitude of the graph should decrease as time goes on, correct?

Can someone help please?
 
  • #2
Any attempt to model physical processes on the basis of forces and accelerations suffers from cumulative errors in the integration. A steady increase in the system's energy is not unusual.
Reducing the time step will improve it but not cure it.
Tygra said:
The amplitude of the graph should decrease as time goes on, correct?
No, because you have not modelled any losses. The true result would be constant amplitude.

The eventual runaway in your chart is where the angle reaches 180°, so the pendulum goes over the top, spins right round, and goes faster and faster.

A crude fix is to assess the total energy at each step and tweak the velocity or angle to keep it constant. That's not really satisfactory because you could not use that if modelling friction, for example.

To understand the details, consider the downward trajectory. You calculate the acceleration for the next interval, but the acceleration is reducing, so the acceleration you use is greater than the average acceleration will be.

In your code, you calculate the new position then calculate the new velocity (z) based on the acceleration at the old position.

Edit: See post #8
You would do better to base it on some weighted average of the old and new positions. Could be a simple average, or there might be a better weighting.
Whether it is better to average the positions then take the sines or take the average of the sines I'm not sure without more detailed analysis.
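A rough sketch of that averaging idea, in Python rather than MATLAB purely for illustration (the constants are those from post #1; the unweighted average of the sines is just one possible weighting, and all names are mine):

```python
import math

g, l = 9.81, 0.2          # same constants as post #1, radians throughout
dt, steps = 1e-4, 50_000  # 5 s of simulated time
theta0 = math.radians(90)

def euler_max_angle(average):
    # Integrate theta'' = -(g/l) sin(theta) and record the peak |theta|.
    theta, z = theta0, 0.0
    peak = abs(theta)
    for _ in range(steps):
        theta_new = theta + z * dt
        if average:
            # average of the sines at the old and provisional new angles
            accel = -(g / l) * 0.5 * (math.sin(theta) + math.sin(theta_new))
        else:
            accel = -(g / l) * math.sin(theta)  # plain explicit Euler
        z += accel * dt
        theta = theta_new
        peak = max(peak, abs(theta))
    return peak

plain = euler_max_angle(False)
avgd = euler_max_angle(True)
# Both peaks drift above the true pi/2, but the averaged variant drifts less.
```

Both variants still gain energy, but a quick linearised analysis suggests the averaged scheme's peak angle should drift away from ##\pi/2## at roughly half the rate of plain Euler, so it reduces the error without eliminating it.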
 
  • #3
One problem is that angles are measured in radians in math. In your numerical solution, you're using degrees. You'll need to put in a conversion factor, or more simply, use radians in the numerical solution.

Ideally, the amplitude of the oscillation shouldn't decrease because you haven't included any friction.
 
  • #4
vela said:
One problem is that angles are measured in radians in math. In your numerical solution, you're using degrees. You'll need to put in a conversion factor, or more simply, use radians in the numerical solution.
I assume "sind" means the argument is in degrees. Other than the error accumulation the results look right.
vela said:
Ideally, the amplitude of the oscillation shouldn't decrease because you haven't included any friction.
But it shouldn’t increase either.
 
  • #5
haruspex said:
I assume "sind" means the argument is in degrees. Other than the error accumulation the results look right.
But ##g/l## in the differential equation isn't in (degrees/s)^2. Given the values the OP used, the period should be about 0.9 s.

haruspex said:
But it shouldn’t increase either.
I was addressing the OP's belief the amplitude should decrease with time. I figured you already talked about why it increased in the numerical solution.
 
  • #6
vela said:
But ##g/l## in the differential equation isn't in (degrees/s)^2. Given the values the OP used, the period should be about 0.9 s.
True, so that means the effective time units being used are not seconds.
 
  • #7
Thank you guys for your response. Considering what has been said, how would I amend the equation to include friction/drag so that the model behaves realistically?
 
  • #8
Tygra said:
Thank you guys for your response. Considering what has been said, how would I amend the equation to include friction/drag so that the model behaves realistically?
I've thought of a better (and more natural) way. Just use more terms of the Taylor series.
##\omega(t+\Delta t)\approx\omega(t)+\alpha(t)\Delta t+\frac 12\dot\alpha(t)##. Similarly for ##\theta##.
That generalises to the lossy case provided the nonconservative force is differentiable.

For the lossless case you can do even better by using the above for computing the new angle but then using energy conservation for the new angular velocity.
 
  • #9
haruspex said:
I've thought of a better (and more natural) way. Just use more terms of the Taylor series.
##\omega(t+\Delta t)\approx\omega(t)+\alpha(t)\Delta t+\frac 12\dot\alpha(t)##. Similarly for ##\theta##.
Should it not be
##\omega(t+\Delta t)\approx\omega(t)+\alpha(t)\Delta t+\frac 12\dot\alpha(t)(\Delta t)^2~##?
 
  • #10
kuruman said:
Should it not be
##\omega(t+\Delta t)\approx\omega(t)+\alpha(t)\Delta t+\frac 12\dot\alpha(t)(\Delta t)^2~##?
Thanks for spotting it! Didn’t make it from brain to fingers.
 
  • #11
Tygra said:
Thank you guys for your response. Considering what has been said, how would I amend the equation to include friction/drag so that the model behaves realistically?
It would be instructive for you to first think about what direction the drag force acts in relative to the pendulum's motion. It is usually assumed to be proportional to the velocity, in this case ##-blz## in your notation, where ##b## is a constant. You would construct that term and add it to your original equation. Note that you should start from the original derivation, before the mass cancels out. Try it and see what you get.
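To make that concrete, here is a hedged Python sketch (not MATLAB, and the values of ##b## and ##m## are arbitrary illustrative choices) of Euler's method with the linear drag term included, i.e. ##\ddot\theta = -(g/l)\sin\theta - (b/m)\dot\theta## after dividing through by ##ml##:

```python
import math

# A hedged sketch, not the only way: explicit Euler with a linear drag term.
g, l = 9.81, 0.2           # gravitational acceleration, pendulum length (post #1)
b, m = 1.0, 1.0            # drag coefficient and mass: illustrative values only
dt, steps = 1e-4, 100_000  # 10 s of simulated time

theta, z = math.radians(90), 0.0
for _ in range(steps):
    # theta'' = -(g/l) sin(theta) - (b/m) theta'   (after dividing by m*l)
    accel = -(g / l) * math.sin(theta) - (b / m) * z
    theta_new = theta + z * dt
    z += accel * dt
    theta = theta_new
# The oscillation now decays: theta ends up close to zero.
```

With these values the decay envelope goes roughly like ##e^{-bt/2m}##, so after 10 s the angle has shrunk to a small fraction of its initial ##\pi/2##.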
 
  • #12
I was looking at Jeffrey Chasnov's videos on YouTube, and he provided this equation:

[tex]ml\frac{d^2\theta}{dt^2} + cl\frac{d\theta}{dt}+mg\sin\theta = 0[/tex]


I tried it in MATLAB and got this behaviour:

[Attachment: angle.jpg — theta vs. time, now a decaying oscillation]


I have to be careful, however, with my damping value (c). If it's too high, the pendulum does not oscillate.

I might be wrong, but if the damping is higher than the damping ratio, is this when the system will not oscillate?
 
  • #13
Tygra said:
I have to be careful, however, with my damping value (c). If it's too high, the pendulum does not oscillate.

I might be wrong, but if the damping is higher than the damping ratio, is this when the system will not oscillate?
Here is a nice site which discusses the levels of damping with simulations. It answers your question better than I can.

https://www.physicsandstuffiguess.com/pendulums/pendulumFriction.html
 
  • #14
Thanks for that, bob012345.

I was experimenting with my damping value and got underdamped, critically damped and overdamped responses.
 
  • #15
Tygra said:
I am going through the book called "Numerical Methods for Engineers" and I am trying to solve for the angle of displacement of a pendulum. I am using MATLAB to do this. However, I am getting some strange results.

In the book the differential equation is:

[tex]\frac{d^2\theta}{dt^2} = -\frac g l \sin\theta[/tex]
That is the correct expression if [itex]\theta[/itex] is expressed in radians. It is not the correct expression if [itex]\theta[/itex] is expressed in degrees. This is your first mistake.

Note that I changed your image insert into a MathJax (LaTeX) equation. This is far preferable to images.
Tygra said:
Because this is a second order differential equation, I have converted it to two first order equations so that I can solve it using Euler's method.
This is your second mistake, and it is a huge one. When you convert a second order ODE to a pair of first order ODEs you are throwing key aspects of geometry under the bus. Euler's method is the worst numerical integration technique. Combine these two factors (throwing out geometry and using Euler's method for a second order ODE) and you will inevitably get the behavior shown in your graph. The explicit Euler method tends to gain energy, even when energy should be conserved. That artificial energy gain due to a lousy choice of numerical integration technique is what happened here.

There are many numerical integration techniques, some very complicated, but some as simple as the explicit Euler method you used, that do a far better job than the explicit Euler method does. A very simple approach is to swap lines 11 and 12 in your code: calculate z(i+1) before calculating theta(i+1), and then use z(i+1) instead of z(i) to calculate theta(i+1).

This simple change results in the symplectic Euler method and this comes very close to conserving energy.
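In Python rather than MATLAB (purely for illustration; radians throughout, with a smaller step than post #1, and all names mine), the swapped update looks like this:

```python
import math

g, l = 9.81, 0.2
dt, steps = 0.01, 5_000  # 50 s, like post #1, but with a smaller step and radians

def energy(theta, z):
    # specific energy: (1/2) z^2 + (g/l)(1 - cos theta)
    return 0.5 * z**2 + (g / l) * (1.0 - math.cos(theta))

theta, z = math.radians(90), 0.0
E0 = energy(theta, z)
peak = abs(theta)
for _ in range(steps):
    z = z - (g / l) * math.sin(theta) * dt  # velocity update FIRST...
    theta = theta + z * dt                  # ...then position, using the NEW z
    peak = max(peak, abs(theta))
rel_err = abs(energy(theta, z) - E0) / E0
# peak hovers near pi/2 and the energy error stays bounded: no runaway
```

The same two-line swap in the MATLAB loop gives the symplectic Euler behaviour described above: the energy oscillates a little about its true value instead of growing without bound.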

haruspex said:
But it shouldn’t increase either.
It will when using the technique the OP used. An artificial energy gain is very typical when using the explicit Euler method.

vela said:
But ##g/l## in the differential equation isn't in (degrees/s)^2. Given the values the OP used, the period should be about 0.9 s.
Or longer. That 0.9 seconds assumes small oscillations. Large oscillations will result in larger periods.

vela said:
I was addressing the OP's belief the amplitude should decrease with time. I figured you already talked about why it increased in the numerical solution.
In my opinion, the OP should work on getting the equations of motion that conserve energy integrated correctly (or close to correctly) before working on adding perturbations that lose energy.
 
  • #16
Tygra said:
I was looking at Jeffrey Chasnov's videos on YouTube, and he provided this equation:
[tex]ml\frac{d^2\theta}{dt^2} + cl\frac{d\theta}{dt}+mg\sin\theta = 0\tag{1}[/tex]
Note again that I changed your inserted image to a LaTeX expression. Learn it! You will be congratulated when you do so.

The above can be re-expressed as
[tex]\frac{d^2\theta}{dt^2} + 2\zeta\omega_n \frac{d\theta}{dt} + \omega_n^2 \sin\theta = 0\tag{2}[/tex]
where
  • [itex]\omega_n = \sqrt{\frac g l}[/itex] is the natural (undamped) frequency of this non-linear oscillator, and
  • [itex]\zeta = \frac12 \frac c {m \omega_n}[/itex] is the unitless damping factor.
The advantage of writing the equation in this form is that the value of [itex]\zeta[/itex] is perhaps easier to understand. If [itex]\zeta=0[/itex] you have an undamped oscillator. If [itex]\zeta=1[/itex], you have a critically damped oscillator. An underdamped oscillator results if [itex]0<\zeta<1[/itex] while an overdamped oscillator results if [itex]\zeta>1[/itex].

Note that if [itex]\theta[/itex] is small, the small angle approximation [itex]\sin\theta \approx \theta[/itex] can be used, simplifying the above expression to
[tex]\frac{d^2\theta}{dt^2} + 2\zeta\omega_n \frac{d\theta}{dt} + \omega_n^2 \theta = 0\tag{3}[/tex]

Assuming that the solution [itex]\theta(t)[/itex] is exponential; i.e., [itex]\theta(t)=A\exp(st)[/itex], results in
[tex]s^2+2 \zeta\omega_n s + {\omega_n}^2 = 0\tag{4}[/tex]
This has two solutions for [itex]s[/itex]: [itex]s = -\omega_n\left(\zeta \pm \sqrt{\zeta^2-1}\right)[/itex].
This is where using the damping ratio [itex]\zeta[/itex] comes into play.
  • If [itex]\zeta = 0[/itex] the solution is a pair of purely imaginary exponentials; in other words a sinusoid.
  • If [itex]0<\zeta<1[/itex], the solution is a pair of damped sinusoids with a frequency of [itex]\omega = \sqrt{1-\zeta^2}\omega_n[/itex].
  • If [itex]\zeta=1[/itex], the two solutions become one (a double solution) and the damping is an exponential decay (no oscillations).
  • If [itex]\zeta>1[/itex], the two solutions are real exponential decay (once again, no oscillations), with one of the decays faster than that of critical damping and the other slower. It's the slow damping rate that dominates given enough time.
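As a quick numerical check of those four regimes, here is a Python sketch (illustrative only) that evaluates the roots ##s = -\omega_n\left(\zeta \pm \sqrt{\zeta^2-1}\right)## using the ##\omega_n## implied by the OP's ##g## and ##l##:

```python
import cmath
import math

omega_n = math.sqrt(9.81 / 0.2)  # natural frequency for the OP's g and l

def char_roots(zeta):
    # roots of s^2 + 2*zeta*omega_n*s + omega_n^2 = 0
    d = cmath.sqrt(zeta**2 - 1)
    return -omega_n * (zeta + d), -omega_n * (zeta - d)

s_a, s_b = char_roots(0.0)  # undamped: purely imaginary pair +/- i*omega_n
s_c, s_d = char_roots(0.5)  # underdamped: complex pair, negative real part
s_e, s_f = char_roots(1.0)  # critical: double real root at -omega_n
s_g, s_h = char_roots(2.0)  # overdamped: two distinct negative real roots
```

The computed roots land where the bullet list says: an imaginary pair, a complex pair with oscillation frequency ##\sqrt{1-\zeta^2}\,\omega_n##, a double root at ##-\omega_n##, and two distinct negative real roots.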
 
  • #17
D H said:
This is your second mistake, and it is a huge one. When you convert a second order ODE to a pair of first order ODEs you are throwing key aspects of geometry under the bus. Euler's method is the worst numerical integration technique.

Really? I practised with a couple of differential equations and found Euler's method to be accurate if the step size is small enough. I used second order equations and converted them to two first order equations.

I chose:

$$ 4y'' + 2y' = 0 $$

and

$$ 4y'' + 2y' + 3y = 0 $$


The analytical solution for these respectively are:

$$ y = 9 - 4e^{-t/2} $$

Initial conditions $$ y(0) = 5, \quad \frac{dy}{dt}(0) = 2 $$

and

$$ y = e^{-t/4}\left(\cos\frac{\sqrt{11}}{4}t + 2.7135\sin\frac{\sqrt{11}}{4}t\right) $$

initial conditions $$ y(0) = 1, \quad \frac{dy}{dt}(0) = 2 $$

Here is the MATLAB Code:

Code:
t = 0:0.1:20; % time vector (not shown in the original post)
dt = t(2) - t(1);

dy = 9 - 4*exp(-t/2); % analytical solution of 4y'' + 2y' = 0

y(1) = 5;
z(1) = 2;

for i = 1:length(t)-1
    z(i+1) = (-1/2*z(i))*dt + z(i);
    y(i+1) = z(i)*dt + y(i);
end

nexttile
plot(t,dy,t,y)
grid on

sol = [1 2.7135]; % coefficients fixed by the initial conditions
P = exp(-t/4).*(sol(1)*cos(sqrt(11)/4*t) + sol(2)*sin(sqrt(11)/4*t)); % analytical solution of 4y'' + 2y' + 3y = 0

y(1) = 1;
z(1) = 2;

for i = 1:length(t)-1
    z(i+1) = (1/4*(-2*z(i) - 3*y(i)))*dt + z(i);
    y(i+1) = z(i)*dt + y(i);
end

nexttile
plot(t,P,t,y)
grid on

Here are the results respectively:

[Attachment: diff1.png — analytical vs. Euler solution of 4y'' + 2y' = 0]


[Attachment: diff2.png — analytical vs. Euler solution of 4y'' + 2y' + 3y = 0]


As you can see the results from the analytical solution and the numerical solution are identical. The red covers the blue perfectly.
 
  • #18
Were you able to get an accurate numerical solution to your original undamped system after fixing the problem with the units?
 
  • #19
Tygra said:
As you can see the results from the analytical solution and the numerical solution are identical. The red covers the blue perfectly.
How subject the Euler method is to error accumulation depends on the equation. I outlined in post #2 why it is a problem in SHM. In other equations the errors might tend to cancel.
 
  • #20
vela said:
Were you able to get an accurate numerical solution to your original undamped system after fixing the problem with the units?
Unfortunately, Vela, no. It is still behaving like haruspex said.

[Attachment: shm.png — theta vs. time, amplitude still growing]
 
  • #21
haruspex said:
How subject the Euler method is to error accumulation depends on the equation. I outlined in post #2 why it is a problem in SHM. In other equations the errors might tend to cancel.
Ah I see. It would be interesting if we could compare the results I posted in post #12 (I don't know how to solve that equation analytically). The behaviour from that looks like it could be okay.
 
  • #22
Tygra said:
Unfortunately, Vela, no. It is still behaving like haruspex said.
Did you try the tweak D H suggested?
D H said:
A very simple approach is to swap lines 11 and 12 in your code: calculate z(i+1) before calculating theta(i+1), and then use z(i+1) instead of z(i) to calculate theta(i+1).
 
  • #23
Tygra said:
Ah I see. It would be interesting if we could compare the results I posted in post #12 (I don't know how to solve that equation analytically). The behaviour from that looks like it could be okay.
It will look saner merely because the drag term in the equation prevents the runaway behaviour, but it is not likely to make the discrepancy go away. Modelling it with Euler will still have it not decaying fast enough.

Btw, the equation in post #12 has a linear drag term. That's ok for small oscillations of a real pendulum (in which case, might as well replace ##\sin(\theta)## with ##\theta##), but for larger oscillations the drag will probably be closer to quadratic through the bottom of the arc. It gets messy.
 
  • #24
@ haruspex
What about using the Runge Kutta Technique? Would that be better?
 
  • #25
D H said:
Assuming that the solution [itex]\theta(t)[/itex] is exponential; i.e., [itex]\theta(t)=A\exp(st)[/itex], results in
[tex]s^2+4 \zeta\omega_n s + {\omega_n}^2 = 0\tag{4}[/tex]
The numerical factor of second term should be 2.

D H said:
This has two solutions for [itex]s[/itex]: [itex]s = -\omega_n\left(1 \pm \sqrt{\zeta^2-1}\right)[/itex].
The term before the square root should be ##\zeta##. If not you would not have purely imaginary roots for ##\zeta = 0##.
 
  • #26
Tygra said:
@ haruspex
What about using the Runge Kutta Technique? Would that be better?
As far as I am aware, "Runge Kutta" is a range of methods parameterised by the number of levels, and level 1 is the same as Euler. I would have thought they were of similar efficiency to using that number of terms (+1) of the Taylor expansion, but I am no expert on this. If I get time, I'll try to do a proper analytic comparison.

Btw, when you use "@username" you should avoid putting a space after the @. It didn’t produce an alert for me.
 
  • #27
Is your timestep 1 second?
 
  • #28
Tygra said:
Unfortunately, Vela, no. It is still behaving like haruspex said.

View attachment 355961
Please show your latest equations and setup. Your period still seems too large. Also, just for a benchmark, you might try putting your differential equation directly into a solver such as Wolfram Alpha and letting it solve it. In other words, don't use any numerical technique.
 
  • #29
D H said:
A very simple approach is to swap lines 11 and 12 in your code: calculate z(i+1) before calculating theta(i+1), and then use z(i+1) instead of z(i) to calculate theta(i+1).
I do not see how that is inherently better in general. It merely approximates the average value of the angular velocity over the interval as being the value at the end of the interval instead of that at the start of the interval. A more convincing tweak would be to use the average of those two values, even though that might happen to give an inferior result in the pendulum case, as shown below.

The reason the change as you suggest does produce a big improvement in the pendulum case is quite subtle.
It happens that for a pendulum ##\theta, \alpha## always have opposite signs and ##\omega, \dot\alpha## likewise. As a result, each step of the post #1 algorithm overestimates gains in both KE and GPE and underestimates losses. Using the angular velocity at the end of the current step to estimate the angle change switches that around for GPE, so now the two errors tend to cancel.
We should not expect that benefit to carry over to differential equation modelling in general.
 
  • #30
haruspex said:
I do not see how that is inherently better in general. It merely approximates the average value of the angular velocity over the interval as being the value at the end of the interval instead of that at the start of the interval. A more convincing tweak would be to use the average of those two values, even though that might happen to give an inferior result in the pendulum case, as shown below.
The mean value theorem says that for every smooth function ##f(x)## and every interval ##[a,b]## over which ##f(x)## is defined, there exists some ##c \in [a,b]## such that ##f(b)= f(a) + (b-a)\,f'(c)##. In other words, there exists some magical intermediate point such that the derivative at that point can be used to exactly compute the next value in the sequence ##\{(t_0, f(t_0)),\, (t_1,f(t_1)),\,\cdots\}##. The trick is to find that magical point, or equivalently to find the magical derivative value.

haruspex said:
The reason the change as you suggest does produce a big improvement in the pendulum case is quite subtle.
It happens that for a pendulum ##\theta, \alpha## always have opposite signs and ##\omega, \dot\alpha## likewise. As a result, each step of the post #1 algorithm overestimates gains in both KE and GPE and underestimates losses. Using the angular velocity at the end of the current step to estimate the angle change switches that around for GPE, so now the two errors tend to cancel.
We should not expect that benefit to carry over to differential equation modelling in general.
It's not subtle because of this behavior. It is even more subtle than you think, and to address your final line, it most definitely carries over to many other places where numerical integration of a differential equation is computed. The underlying ODE in this case is a second order ODE. Rewriting a system of ##N## second order ODEs (e.g., one per axis) to a system of ##2N## first order ODEs is possible, but doing so throws out geometry. That a plane pendulum is a system of second order ODEs is one aspect of "geometry". Another aspect of "geometry" is that an undamped plane pendulum can be expressed as a Lagrangian or Hamiltonian. Symplectic Euler is the simplest symplectic integrator that does not throw out geometry with the bath water. A symplectic integrator conserves a very close analog to energy in situations where energy is conserved.

An even better approach is the leapfrog technique, which can be written as
[tex]\begin{aligned}
a_i &= f(t_i, x_i, v_i) \\
v_{i+1/2} &= v_i + \frac12 a_i \Delta t \\
x_{i+1} &= x_i + v_{i+1/2} \Delta t \\
t_{i+1} &= t_i + \Delta t \\
a_{i+1} &= f(t_{i+1}, x_{i+1}, v_{i+1}) \\
v_{i+1} &= v_{i+1/2} + \frac12 a_{i+1} \Delta t
\end{aligned}[/tex] where ##a_i## is the acceleration at time ##t_i## calculated by some derivative function ##f(t_i, x_i, v_i)##.

Note that
  • If the derivative function ##f(t,x,v)## does not depend on velocity then it only needs to be computed once at initialisation and then only once per step, since the penultimate line of the prior step already computed ##a_{i+1}##. This is the case for the drag-free pendulum.
  • If the derivative function does not depend on velocity or time then energy is a conserved quantity. This is where symplectic integrators shine because their derivations explicitly conserve a quantity that is a very close analog of energy. Even when this assumption is only approximately true (e.g., small perturbations based on time and/or velocity), a symplectic integrator will still tend to perform much better than a comparable non-symplectic integrator. There are plenty of places in physics where energy is close to a conserved quantity.
  • If, on the other hand, the derivative function does depend on velocity, the penultimate step (##a_{i+1} = f(t_{i+1}, x_{i+1}, v_{i+1})##) cannot be computed without an estimate for ##v_{i+1}##. This makes the last two steps implicit. There are many ways to deal with this.
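For concreteness, here is a minimal Python sketch of leapfrog in its kick-drift-kick form for the drag-free pendulum (step size and duration are illustrative choices; the energy function is the specific energy ##\frac12\dot\theta^2 + (g/l)(1-\cos\theta)##):

```python
import math

g, l = 9.81, 0.2

def accel(theta):
    # drag-free pendulum: f depends on position only
    return -(g / l) * math.sin(theta)

def energy(theta, z):
    # specific energy: kinetic plus gravitational parts
    return 0.5 * z**2 + (g / l) * (1.0 - math.cos(theta))

dt, steps = 1e-3, 10_000  # 10 s of simulated time
theta, z = math.radians(90), 0.0
a = accel(theta)
E0 = energy(theta, z)
max_drift = 0.0
for _ in range(steps):
    z_half = z + 0.5 * a * dt  # half kick
    theta += z_half * dt       # drift
    a = accel(theta)           # only one new acceleration per step
    z = z_half + 0.5 * a * dt  # half kick
    max_drift = max(max_drift, abs(energy(theta, z) - E0))
# max_drift stays a tiny fraction of E0 rather than growing without bound
```

Because the acceleration does not depend on velocity here, the value computed at the end of one step is reused at the start of the next, so the derivative function really is evaluated only once per step.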
 
  • #31
Orodruin said:
The numerical factor of second term should be 2.


The term before the square root should be ##\zeta##. If not you would not have purely imaginary roots for ##\zeta = 0##.
I fixed those two errors in my post. Thank you very much!
 
  • #32
Tygra said:
Unfortunately, Vela, no. It is still behaving like haruspex said.

View attachment 355961
This is the problem you should be attacking prior to adding things like drag. If one cannot get even the simplest case to behave more or less correctly it is time to change techniques. In the case of an undamped pendulum, the peak amplitude during each cycle should remain more or less constant.

In the case of Euler's method applied to an undamped pendulum, this technique inevitably accumulates energy and eventually swings over the top. How long it takes this to happen depends on the step size. Decrease the step size and it will still happen, at least up to a point. At some point, you'll reach the problem that 1 + 1e-16 is exactly 1 on most computers. With too small a step size, any numerical integrator will eventually produce a non-moving pendulum, but it will take a long, long time to achieve that nonsense result.

In contrast, the symplectic Euler method tends to have total energy (and hence peak amplitude) oscillate a bit about the true value from step to step. There are much better symplectic integrators than symplectic Euler. You'll be able to take much bigger steps compared to symplectic Euler with those better techniques and still maintain good accuracy.

I mentioned leapfrog in a previous reply. Leapfrog is symplectic and is still simple to implement, but it is much better than symplectic Euler. Leapfrog lets one double the step size at no cost to accuracy, or it lets one keep the same step size with significantly better accuracy than symplectic Euler. Using bigger steps is important because the number of calls to the derivative function is a key driver of the computational cost. A technique that lets one double the step size at no cost to accuracy halves the number of derivative function calls. People have developed much higher order symplectic integrators than leapfrog.

If you take a class that covers numerically integrating an initial value problem (an ODE with given initial values), one of the first things the instructor will teach is almost inevitably Euler's method. The next thing you will be taught is to never use Euler's method, with examples. Euler's method is fine for teaching because it introduces the subject. In addition, several better integrators use Euler-like steps as intermediate steps. It's important to learn Euler's method. It's also important to learn not to use it.
 
  • #33
Tygra said:
What about using the Runge Kutta Technique? Would that be better?

If by "the Runge-Kutta Technique" you mean the classical RK4 algorithm, then yes, that would be significantly better (but see the caveat below). Do note, however, that as @haruspex noted, there is no one Runge-Kutta technique. Runge-Kutta is instead a family of techniques, conceptually infinite in size.

The caveat is that the classical RK4 algorithm does not conserve energy. No technique that rewrites a system of ##N## second order ODEs as a system of ##2N## first order ODEs is going to conserve energy. IIRC, classical RK4 tends to lose energy.

The upside of classical RK4 versus explicit Euler is that far, far fewer steps are typically needed for classical RK4 compared to explicit Euler to achieve the same accuracy. Classical RK4 makes four derivative calls per integration step. If RK4 enables the step size to be increased by more than a factor of four without harm compared to explicit Euler, then using classical RK4 is a win. Typically, switching from explicit Euler to classical RK4 enables increasing the step size by orders of magnitude (not just a factor of four or so). This typically makes RK4 a big win without being monstrously complex.

That said, there are some regimes where switching from explicit Euler to classical RK4 is a big loss, and that is when the very small step size is mandated not only by explicit Euler but also for physical reasons. For example, molecule-level simulations need very small time steps as they occasionally exhibit very fast dynamics during near collisions. In such cases, it is almost always symplectic Euler or some other simple symplectic technique such as leapfrog that is used instead of explicit Euler.
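A sketch of classical RK4 for the pendulum, in Python for illustration (the step size of 0.01 s and the 50 s span are my choices, not from the thread; the system is the usual pair ##\dot\theta = z##, ##\dot z = -(g/l)\sin\theta##):

```python
import math

g, l = 9.81, 0.2

def deriv(state):
    # first-order system: theta' = z, z' = -(g/l) sin(theta)
    theta, z = state
    return (z, -(g / l) * math.sin(theta))

def rk4_step(state, dt):
    def nudge(s, k, h):  # s + h*k, componentwise
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(theta, z):
    return 0.5 * z**2 + (g / l) * (1.0 - math.cos(theta))

state = (math.radians(90), 0.0)
E0 = energy(*state)
dt = 0.01
for _ in range(5_000):  # 50 s, the OP's full time span
    state = rk4_step(state, dt)
rel_drift = abs(energy(*state) - E0) / E0
# rel_drift stays a tiny fraction of the initial energy over the full run
```

Four derivative calls per step, as noted above, but the energy drift after 50 s is a tiny fraction of the initial energy, which is exactly the trade the post describes.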
 