Why do we get oscillations in Euler's method of integration, and what is their period?

AI Thread Summary
Oscillations in Euler's method of integration arise when the step size is too large: each step overshoots the analytic solution, making the scheme unstable. The numerical solution then oscillates around the true solution with an amplitude that grows over time. The error alternates in sign at every step, so the oscillation period is two steps. These oscillations are distinct from random fluctuations because they follow a predictable pattern determined by the numerical method itself rather than by the noise. The discussion also touches on whether the oscillations can be expressed in sine form.
ErezAgh
When applying Euler's method of integration to a stochastic differential equation:

For example, given
$$\frac{dv}{dt} = -\gamma v + \sqrt{\epsilon}\,\Gamma(t),$$
we loop over
$$v[n+1] = v[n] - \gamma v[n]\,\Delta t + \sqrt{\epsilon\,\Delta t}\,\Gamma_n,$$
where $-\gamma v[n]$ is the force term (it could be any force) and $\Gamma_n$ is a Gaussian-distributed random variable.

Then if Δt is not small enough, over long runs (many repeated iterations) the solution becomes "unstable, and oscillations appear around the analytic solution, with amplitude becoming larger and larger with time" (paraphrased from several sources I found online that mention the problem but don't discuss it in depth).

Why are these actual OSCILLATIONS, and not simply random fluctuations? What is the period of these oscillations?
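The blow-up described above is easy to reproduce. Below is a minimal sketch (the parameter values $\gamma = 1$, $\epsilon = 0.01$, the step sizes, and the function name are my own illustrative choices, not from the thread) that runs the update rule once with $\gamma\,\Delta t$ well below the stability threshold and once well above it:

```python
import random

def euler_maruyama(gamma, eps, dt, n_steps, v0=1.0, seed=0):
    """Integrate dv/dt = -gamma*v + sqrt(eps)*Gamma(t) with the explicit
    Euler (Euler-Maruyama) scheme: v[n+1] = v[n]*(1 - gamma*dt) + sqrt(eps*dt)*G_n."""
    rng = random.Random(seed)
    v = v0
    traj = [v]
    for _ in range(n_steps):
        v = v * (1.0 - gamma * dt) + (eps * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        traj.append(v)
    return traj

gamma, eps = 1.0, 0.01
stable   = euler_maruyama(gamma, eps, dt=0.1, n_steps=200)  # gamma*dt = 0.1: decays
unstable = euler_maruyama(gamma, eps, dt=3.0, n_steps=200)  # gamma*dt = 3 > 2: blows up

print(max(abs(v) for v in stable))   # stays bounded (decays from v0 toward the noise floor)
print(abs(unstable[-1]))             # grows roughly like |1 - gamma*dt|**n = 2**200
print(unstable[-1] * unstable[-2])   # negative: consecutive values alternate in sign
```

In the unstable run the deterministic multiplier $1 - \gamma\Delta t = -2$ quickly dominates the noise, so the trajectory flips sign on every step while its magnitude doubles, which is exactly the growing oscillation the quoted sources describe.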
 
ErezAgh said:
Why are these actual OSCILLATIONS, and not simply random fluctuations?

For the same reason single-step numerical integration is unstable for non-stochastic problems when the step size is large enough so that each step 'overshoots' the analytic solution.

ErezAgh said:
What is the period of these oscillations?

Er, two steps: the error flips sign at every step, so the pattern repeats with period $2\,\Delta t$.
 
ErezAgh said:
Why are these actual OSCILLATIONS, and not simply random fluctuations?

Consider the non-stochastic ODE $y' = -ky$ with $k > 0$ and $y(0) = 1$. This can be solved analytically: $y(t) = e^{-kt}$; note that $y \to 0$ as $t \to \infty$. Applying Euler's method with step size $h > 0$ we obtain
$$y_{n+1} = y_n - hky_n = y_n(1 - hk),$$
which again can be solved analytically:
$$y_n = (1 - hk)^n.$$
Thus the error at time $t = nh$ is given by
$$\epsilon_n = y(nh) - y_n = e^{-nhk} - (1 - hk)^n.$$
If $hk > 2$ then $1 - hk < -1$, so that
$$\epsilon_n = e^{-nhk} - (-1)^n|1 - hk|^n.$$
Thus, since $|1 - hk| > 1$ and $e^{-nhk} < 1$, we see that $|\epsilon_n| \to \infty$ and that $\epsilon_n$ is alternately positive and negative (which is what "oscillates" means in this context).
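This deterministic example can be checked numerically. The sketch below (the choices $k = 1$, $h = 2.5$ are illustrative, not from the post) iterates $y_{n+1} = y_n(1 - hk)$ and compares against the exact solution $e^{-nhk}$:

```python
import math

def euler_decay(k, h, n_steps, y0=1.0):
    """Explicit Euler on y' = -k*y: y[n+1] = y[n]*(1 - h*k), so y[n] = (1 - h*k)**n."""
    y = y0
    out = [y]
    for _ in range(n_steps):
        y = y * (1.0 - h * k)
        out.append(y)
    return out

k, h = 1.0, 2.5                                  # h*k = 2.5 > 2, so 1 - h*k = -1.5
ys = euler_decay(k, h, n_steps=10)
exact = [math.exp(-n * h * k) for n in range(11)]
errors = [e - y for e, y in zip(exact, ys)]

# From n = 1 on, the error alternates in sign and its magnitude grows like 1.5**n:
print([round(e, 3) for e in errors])
```

Since the exact solution decays to zero almost immediately, the printed errors are essentially $-(-1.5)^n$: alternating in sign, growing geometrically, exactly as the analysis above predicts.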
 
pasmith said:
... $\epsilon_n$ is alternately positive and negative (which is what "oscillates" means in this context).

Thanks a lot both!

1) I see why it oscillates, flipping sign with each step via $(-1)^n$; does this mean you are effectively measuring time in units of the step, i.e. taking $h = 1$?
2) Silly question perhaps, but can this then be written in the form of a sine()?
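A note on question 2, sketched under the assumptions of pasmith's example above: the alternating sign can indeed be written trigonometrically, since $(-1)^n = \cos(\pi n)$. At time $t = nh$ the oscillating part of the numerical solution is
$$(1 - hk)^n = |1 - hk|^n \cos(\pi n) = e^{(t/h)\ln|1 - hk|}\,\cos\!\left(\frac{\pi t}{h}\right),$$
i.e. a cosine of period $2h$ (two steps) under an exponentially growing envelope. The sine/cosine form exists, but it is only meaningful at the grid points $t = nh$; between grid points the numerical solution is simply not defined.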
 