Stability of an ODE and Euler's method

Master J
I have been thinking about numerical methods for ODEs, and the whole notion of stability confuses me.

Take Euler's method for solving an ODE:

U_{n+1} = U_n + h A U_n

where U_n is the numerical approximation to U(t_n), A is the Jacobian, and h is the step size.

Rearrange:

U_{n+1} = ( I + hA ) U_n

This method is only stable if |1 + hλ| < 1 for every eigenvalue λ of A. But what does this mean!?? Every value of my function that I am numerically getting is less than the previous value. This seems rather useless; I don't get it. It appears to me that this method can only be used on functions that are strictly decreasing for all increasing t?
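To see concretely what the stability condition means, here is a minimal Python sketch (my own illustration, not from the thread) of forward Euler applied to the scalar test equation y' = λy. Each step multiplies the iterate by the amplification factor (1 + hλ), so the numerical solution decays only when |1 + hλ| < 1:

```python
# Forward Euler on the scalar test equation y' = lam*y, y(0) = 1.
# Each step multiplies the iterate by the amplification factor (1 + h*lam),
# so the iterates stay bounded only when |1 + h*lam| <= 1.
def euler(lam, h, steps):
    y = 1.0
    for _ in range(steps):
        y = y + h * lam * y   # y_{n+1} = (1 + h*lam) * y_n
    return y

lam = -10.0                    # true solution e^(lam*t) decays to 0

print(euler(lam, 0.1, 50))     # |1 + 0.1*(-10)| = 0 -> stable, stays bounded
print(euler(lam, 0.3, 50))     # |1 + 0.3*(-10)| = 2 -> unstable, blows up
```

The point is that both runs approximate the same decaying solution e^(-10t); only the larger step size makes the numerical iterates oscillate and grow without bound.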
 
Master J said:
This seems rather useless

Yup. Euler's (forward difference) method IS "rather useless". In fact, compared with almost any other numerical integration method, it's not so much "rather useless" as "completely useless".

But it's a nice example of something that "obviously" looks like a good idea, but turns out not to be.
 
Well, I'm still confused.

Say I have an ODE whose solution family y(t) is unstable. That is, for increasing t, the solution curves diverge from each other. In this case, J = ∂f(y, t)/∂y > 0.

So does this mean that ANY numerical method I use to solve this ODE will be unstable? With reference to http://courses.engr.illinois.edu/cs450/sp2010/odestability.pdf, there is a condition for all the methods, even the trapezoid rule etc., to be stable. And each of these seems to imply that the numerical value at each successive step is less than the previous one, i.e. y(t+h) < y(t).

So, in essence, what I gather here is that unless an ODE has the property that the magnitude of each value of the function y is LESS than the previous value, it CANNOT be solved accurately with a numerical method?
Or, put another way, errors will always grow when solving an unstable ODE?

All this seems rather strange to me. We cannot solve an ODE accurately unless the function is monotonically decreasing? What a tiny area of applicability that would be!
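One point that may resolve the confusion (again a sketch of my own, not from the thread): the stability condition is about how the method amplifies errors on a decaying model problem, not a requirement that the true solution decrease. Forward Euler handles a growing solution like y' = y perfectly well; the computed values grow like e^t, and the relative error shrinks as h does:

```python
import math

# Forward Euler on the GROWING problem y' = y, y(0) = 1 (exact solution e^t).
# The iterates grow like the true solution; halving-type refinement of h
# reduces the relative error, so the method is usable here even though
# nothing is "monotonically decreasing".
def euler_growth(h, t_end):
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        y += h * y            # y_{n+1} = (1 + h) * y_n
        t += h
    return y

exact = math.exp(1.0)
for h in (0.1, 0.01, 0.001):
    approx = euler_growth(h, 1.0)
    print(h, abs(approx - exact) / exact)   # relative error shrinks with h
```

What IS true for an unstable ODE is that the exact flow itself amplifies any perturbation (roundoff, truncation error), so nearby trajectories separate; but that is a property of the problem, shared by every method, not a special defect of the discretization.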
 