Stability of an ODE and Euler's method

SUMMARY

The discussion centers on the stability of numerical solutions of Ordinary Differential Equations (ODEs) when using Euler's method, specifically the update U_{n+1} = (1 + hA)U_n. It is noted that Euler's method is stable only if every eigenvalue λ of the Jacobian matrix A satisfies |1 + hλ| < 1. The original poster reads this condition as requiring the computed values to decrease monotonically, and worries that this makes numerical methods useless for anything but monotonically decreasing solutions. A reply observes that forward Euler compares poorly with almost any other numerical integration method, and the thread explores this apparent limitation.

PREREQUISITES
  • Understanding of Ordinary Differential Equations (ODEs)
  • Familiarity with numerical methods, specifically Euler's method
  • Knowledge of Jacobian matrices and eigenvalues
  • Concept of stability in numerical analysis
NEXT STEPS
  • Study the stability criteria for numerical methods beyond Euler's method
  • Learn about the trapezoidal rule and its stability conditions
  • Explore advanced numerical integration techniques such as Runge-Kutta methods
  • Investigate the implications of stability in the context of nonlinear ODEs
USEFUL FOR

Mathematicians, numerical analysts, and engineers involved in solving Ordinary Differential Equations, particularly those interested in the stability and accuracy of numerical methods.

Master J
I have been thinking about numerical methods for ODEs, and the whole notion of stability confuses me.

Take Euler's method for solving an ODE:

U_{n+1} = U_n + h A U_n

where U_n ≈ U(t_n) is the numerical solution at step n, A is the Jacobian, and h is the step size.

Rearrange:

U_{n+1} = (1 + hA) U_n

This method is only stable if |1 + hλ| < 1 for every eigenvalue λ of A. But what does this mean? Every value of my function that I am numerically getting is less than the previous value. This seems rather useless; I don't get it. It appears to me that this method can only be used on functions that are strictly decreasing for all increasing t?
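The stability bound can be checked numerically. Here is a minimal Python sketch for the scalar test problem y' = λy (the value λ = −10 and the step sizes below are illustrative choices, not from the thread):

```python
# Forward Euler on y' = lam * y, y(0) = 1.  The update is
# y_{n+1} = (1 + h*lam) * y_n, so after n steps y_n = (1 + h*lam)**n.
# Stability requires |1 + h*lam| <= 1.

def euler(lam, h, steps):
    y = 1.0
    for _ in range(steps):
        y = y + h * lam * y   # one forward Euler step
    return y

lam = -10.0  # true solution e^(lam*t) decays to 0

stable = euler(lam, 0.1, 50)     # |1 + 0.1*(-10)| = 0   -> decays
unstable = euler(lam, 0.25, 50)  # |1 + 0.25*(-10)| = 1.5 -> blows up

print(stable, abs(unstable))
```

With h = 0.25 the iterates oscillate and grow without bound even though the exact solution decays, which is exactly the instability the eigenvalue condition rules out.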
 
Master J said:
This seems rather useless

Yup. Euler's (forward-difference) method IS "rather useless". In fact, compared with almost any other numerical integration method, it's not so much "rather useless" as "completely useless".

But it's a nice example of something that "obviously" looks like a good idea, but turns out not to be.
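One concrete reason forward Euler compares poorly is that it is only first-order accurate: halving the step size only halves the error. A small Python sketch (the test problem y' = −y and the step counts are illustrative choices):

```python
import math

# Forward Euler for y' = -y, y(0) = 1, integrated to t = 1 with n steps.
# The result is (1 - 1/n)**n; the exact answer is e^(-1).
def euler_solve(n):
    h = 1.0 / n
    y = 1.0
    for _ in range(n):
        y = y + h * (-y)
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler_solve(10) - exact)
err_fine = abs(euler_solve(20) - exact)
print(err_coarse / err_fine)  # ratio close to 2: first-order convergence
```

Second-order methods such as the trapezoidal rule would show that ratio near 4 instead, which is why they displace Euler in practice.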
 
Well, I'm still confused.

Say I have an ODE whose solution family y(t) is unstable. That is, for increasing t, the solution curves diverge from each other. In this case, J = df(y, t)/dy > 0.

So does this mean that ANY numerical method I use to solve this ODE will be unstable? With reference to http://courses.engr.illinois.edu/cs450/sp2010/odestability.pdf, there is a stability condition for all the methods, even the trapezoid rule etc. And in each case it implies that each successive numerical value of y(t) is smaller than the previous one, i.e. y(t+h) < y(t).

So, in essence, what I gather here is that unless an ODE has the property that the magnitude of each value of the function y is less than the previous value, it CANNOT be solved accurately with a numerical method?
Or, put another way, errors will always grow when solving an unstable ODE?

All this seems rather strange to me. We cannot solve an ODE accurately unless the function is monotonically decreasing? What a tiny area of applicability that would be!
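One way to probe this worry in code: apply forward Euler to the growing problem y' = +y. Nothing forces y_{n+1} < y_n there; with a small step the iterates increase and track e^t. A minimal sketch (the step count is an illustrative choice):

```python
import math

# Forward Euler on the *growing* problem y' = +y, y(0) = 1, up to t = 1.
# Each step multiplies by (1 + h) > 1, so the iterates increase; the
# result (1 + 1/n)**n approaches e as n grows.
def euler_growth(n):
    h = 1.0 / n
    y = 1.0
    for _ in range(n):
        y = y + h * y   # y_{n+1} = (1 + h) * y_n, an increasing sequence
    return y

print(euler_growth(1000), math.e)  # close agreement
```

So the eigenvalue condition is about how *errors* are amplified for a decaying test problem, not a requirement that the computed solution itself shrink.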
 
