Why do we get oscillations in Euler's method of integration and what is their period?

ErezAgh
When using Euler's method of integration applied to a stochastic differential equation:

For example, given
$$\frac{dv}{dt} = -\gamma v + \sqrt{\epsilon}\,\Gamma(t)$$
we loop over
$$v_{n+1} = v_n - \gamma v_n\,\Delta t + \sqrt{\epsilon\,\Delta t}\,\Gamma_n$$
(where ##-\gamma v_n## is a force term, which could be any force, and ##\Gamma_n## is a Gaussian-distributed random variable).

Then if we choose ##\Delta t## too large, we eventually find (over long runs, meaning many repeated iterations) that the solution becomes unstable: "oscillations appear around the analytic solution, with amplitude becoming larger and larger with time" (a description collected from several sources online that mention the problem but don't discuss it in depth).
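
For concreteness, here is a minimal Python sketch of this loop (the values of ##\gamma##, ##\epsilon## and ##\Delta t## below are my own illustrative choices, not from any particular source):

Code:
import numpy as np

# Minimal Euler-Maruyama sketch of the scheme above.
# gamma, eps, dt and n_steps are illustrative choices.
rng = np.random.default_rng(0)

gamma = 1.0        # friction coefficient
eps = 0.1          # noise strength
dt = 2.5 / gamma   # deliberately large: gamma*dt > 2, the unstable regime
n_steps = 50

v = np.empty(n_steps + 1)
v[0] = 1.0
for n in range(n_steps):
    Gamma_n = rng.standard_normal()  # unit Gaussian random variable
    v[n + 1] = v[n] - gamma * v[n] * dt + np.sqrt(eps * dt) * Gamma_n

# With gamma*dt > 2 the factor (1 - gamma*dt) is below -1, so |v[n]| grows
# and the sign flips on every step -- the growing "oscillation" in question.
print(v[:10])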

Why are these actual OSCILLATIONS, and not simply random fluctuations? What is the period of these oscillations?
 
ErezAgh said:
Why are these actual OSCILLATIONS, and not simply random fluctuations?

For the same reason single-step numerical integration is unstable for non-stochastic problems when the step size is large enough so that each step 'overshoots' the analytic solution.

ErezAgh said:
What is the period of these oscillations?

Er, twice the step size: the sign flips on every step, so one full cycle takes two steps.
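
A concrete illustration (the value ##hk = 2.5## is an arbitrary choice): with ##y_{n+1} = (1 - hk)\,y_n## and ##y_0 = 1##, the iterates are
$$1,\; -1.5,\; 2.25,\; -3.375,\; 5.0625,\;\ldots$$
Each step overshoots zero and lands farther away on the other side, so the sign flips on every step and one full cycle takes two steps.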
 
ErezAgh said:
Why are these actual OSCILLATIONS, and not simply random fluctuations?

Consider the non-stochastic ODE $$y' = -ky$$ with ##k > 0## and ##y(0) = 1##. This can be solved analytically: ##y(t) = e^{-kt}##. Note that ##y \to 0## as ##t \to \infty##. Applying Euler's method with step size ##h > 0## we obtain $$y_{n+1} = y_n - hky_n = y_n(1 - hk),$$ which again can be solved analytically:
$$y_n = (1 - hk)^n.$$ Thus the error at time ##t = nh## is given by $$\epsilon_n = y(nh) - y_n = e^{-nhk} - (1 - hk)^n.$$ If ##hk > 2## then ##1 - hk < -1##, so that $$\epsilon_n = e^{-nhk} - (-1)^n|1 - hk|^n.$$ Thus, since ##|1 - hk| > 1## and ##e^{-nhk} < 1##, we see that ##|\epsilon_n| \to \infty## and that ##\epsilon_n## is alternately positive and negative (which is what "oscillates" means in this context).
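
To see this numerically, here is a small Python sketch of the same deterministic example (the values ##k = 1## and ##h = 2.5## are arbitrary illustrative choices with ##hk > 2##):

Code:
import math

# Deterministic example above: y' = -k*y, y(0) = 1,
# integrated with Euler steps y_{n+1} = y_n * (1 - h*k).
k = 1.0
h = 2.5  # h*k = 2.5 > 2, so 1 - h*k = -1.5 < -1 (unstable)

y_euler = 1.0
for n in range(1, 9):
    y_euler *= 1.0 - h * k          # Euler iterate y_n = (1 - hk)^n
    y_exact = math.exp(-n * h * k)  # analytic solution at t = n*h
    eps_n = y_exact - y_euler       # error epsilon_n: alternates in sign, grows
    print(f"n={n}  y_n={y_euler:+10.4f}  eps_n={eps_n:+10.4f}")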
 
pasmith said:
Thus, since ##|1 - hk| > 1## and ##e^{-nhk} < 1##, we see that ##|\epsilon_n| \to \infty## and that ##\epsilon_n## is alternately positive and negative (which is what "oscillates" means in this context).

Thanks a lot, both!

1) I see why it oscillates, flipping sign on every step via the ##(-1)^n## factor. Does this mean you normalized the step size to ##h = 1##?
2) Silly question perhaps, but can this then be written in the form of a sine function?
 