Wiki says that Runge-Kutta is iterative?

SUMMARY

The discussion clarifies that Runge-Kutta methods, both implicit and explicit, are indeed iterative methods for approximating solutions to ordinary differential equations. The iterative nature shows up in two ways: the solver steps from one time point to the next, and within each step the later stages of the RK formula depend on the earlier ones. The conversation also distinguishes the fixed number of stages in a Runge-Kutta step from adaptive integration techniques, such as the RK45 integrator, which adjust the step size based on how well two estimates of the solution agree. Implicit integration methods, like implicit Euler, further illustrate the concept of iteration by requiring the state at the end of the interval to converge.

PREREQUISITES
  • Understanding of ordinary differential equations (ODEs)
  • Familiarity with numerical analysis concepts
  • Knowledge of explicit and implicit integration methods
  • Basic understanding of adaptive integration techniques
NEXT STEPS
  • Study the implementation of Runge-Kutta methods in Python using libraries like SciPy
  • Explore the RK45 adaptive integrator and its applications in numerical analysis
  • Learn about implicit integration techniques and their stability characteristics
  • Investigate convergence criteria in numerical methods for ODEs
USEFUL FOR

Mathematicians, numerical analysts, and software developers working on simulations or solving ordinary differential equations will benefit from this discussion.

nonequilibrium
Hi,

As you can see here, according to Wikipedia, the Runge-Kutta methods are iterative, but I don't see what is iterative about it. As I see it, starting from a certain value of $x$ where you know $y$, you calculate $y$ at $x+h$, then at $x+2h$, etc. Where is the iteration?

first line of wiki page said:
In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations.
 
You do calculate $y_{n+1}$ and $t_{n+1}$ given $y_n$ and $t_n$? That's iteration.
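
As a minimal sketch of that stepping, here is a fixed-step explicit Euler loop in Python (my own illustration, not from the thread; the names `f`, `y0`, `t0`, `h`, and `n_steps` are illustrative). Each pass through the loop produces $y_{n+1}$ from $y_n$, which is the iteration the Wikipedia sentence is referring to.

```python
def euler_solve(f, y0, t0, h, n_steps):
    """Advance y' = f(t, y) with the explicit Euler method.

    Each loop pass builds (t_{n+1}, y_{n+1}) from (t_n, y_n);
    that repeated stepping is the 'iteration' in the Wikipedia sense.
    """
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # y_{n+1} = y_n + h * f(t_n, y_n)
        t = t + h             # t_{n+1} = t_n + h
        ts.append(t)
        ys.append(y)
    return ts, ys
```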
 
That's iteration? Oh... I thought iteration meant that at each step you correct the previous value (by using the previous value somehow); such a method would also, for example, contain a cut-off criterion (when one has converged well enough to a certain solution).
 
mr. vodka said:
Oh... I thought iteration meant that each step you correct the previous value (by using the previous value somehow);

In a sense, that's what R-K methods do within each time step. You start by doing a forward-difference Euler step, then you make some corrections to it. Finally you take a weighted average of all the iterations as the "best" estimate of the solution.

For an R-K method of a given order, you do a fixed number of iterations rather than using a convergence criterion to decide when to stop.

FWIW, there are other numerical methods which you would probably think were more "natural" iterative methods, where it is simpler just to do a fixed number of iterations that will guarantee convergence (from a mathematical analysis of the method) rather than trying to numerically estimate when it has converged. For example, if you can prove that 3 iterations is always enough, it may not be worth the extra cost of checking if 2 iterations is enough, but almost always finding that it isn't.
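
To make the "Euler step, then corrections, then a weighted average" description concrete, here is a sketch of one classic fourth-order Runge-Kutta step in Python (the function and variable names are mine, not from the thread). The first slope is a plain forward-Euler estimate; each later slope re-evaluates the derivative using the previous one; the update is a fixed weighted average of all four, with no convergence check.

```python
def rk4_step(f, t, y, h):
    """One classic fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)                       # forward-Euler slope at the start
    k2 = f(t + h / 2, y + h / 2 * k1)  # midpoint slope, corrected using k1
    k3 = f(t + h / 2, y + h / 2 * k2)  # midpoint slope, corrected using k2
    k4 = f(t + h, y + h * k3)          # end-of-step slope, using k3
    # Fixed weighted average of the four stage slopes.
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```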
 
mr. vodka said:
Oh... I thought iteration meant that each step you correct the previous value (by using the previous value somehow); such a method would also for example contain a cut-off criterion (when one has converged well enough to a certain solution).
What the wikipedia article was referring to is that the 2nd stage of an RK integration step depends on the 1st stage, the 3rd stage depends on the first two stages, and so on.

You appear to be thinking of adaptive integration techniques here. Runge-Kutta techniques can be used for this (as can many others). One way to do this is to run two integrators side by side, decreasing the step size if the two techniques disagree by too much and increasing it if they don't disagree enough. The widely used adaptive RK45 integrator does just this.
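
A minimal sketch of that step-size logic (my own illustration, using step doubling with the scalar Euler method rather than the actual embedded RK45 pair): take one full step and two half steps, compare them, and shrink or grow the step size accordingly. The tolerance and growth rule are arbitrary choices for the sketch.

```python
def adaptive_euler_step(f, t, y, h, tol=1e-6):
    """One adaptive step for scalar y' = f(t, y) using step doubling.

    Compares a full Euler step against two half steps; if they disagree
    too much, halve h and retry; if they agree very well, suggest a
    larger h for the next step.
    """
    while True:
        full = y + h * f(t, y)                        # one step of size h
        mid = y + (h / 2) * f(t, y)                   # two steps of size h/2
        half = mid + (h / 2) * f(t + h / 2, mid)
        err = abs(full - half)
        if err > tol:
            h = h / 2                  # too much disagreement: shrink the step
            continue
        h_next = 2 * h if err < tol / 10 else h       # grow h if agreement is very good
        return t + h, half, h_next
```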

A different type of iteration is used in implicit integration. I'll illustrate with Euler's method, the simplest of all RK techniques. Here integration is performed via $x(t+\Delta t) = x(t) + \Delta t\,\dot x(x(t),t)$. This is a very lousy technique for a number of reasons, one of which is that it is incredibly unstable. Implicit Euler uses the derivative at the end of the interval instead of at the start: $x(t+\Delta t) = x(t) + \Delta t\,\dot x(x(t+\Delta t),\,t+\Delta t)$. There's a problem here: the derivative at the end of the interval depends on the (unknown) state at the end of the interval. Implicit techniques typically iterate until the state at the end of the interval converges to within some limit.
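
A hedged sketch of that implicit-Euler iteration in Python for a scalar state (the names, tolerance, and iteration cap are my own, not from the thread): start from the explicit-Euler guess and repeat the backward-Euler update until successive estimates of $x(t+\Delta t)$ agree to within a tolerance.

```python
def implicit_euler_step(xdot, x, t, dt, tol=1e-10, max_iter=50):
    """One implicit (backward) Euler step for scalar x' = xdot(x, t).

    Fixed-point iteration: refine the guess for x(t + dt) until it
    stops changing, i.e. until it satisfies
        x_new = x + dt * xdot(x_new, t + dt).
    """
    x_new = x + dt * xdot(x, t)           # explicit Euler as the first guess
    for _ in range(max_iter):
        x_next = x + dt * xdot(x_new, t + dt)
        if abs(x_next - x_new) < tol:     # converged to within the limit
            return x_next
        x_new = x_next
    return x_new                          # best estimate if not converged
```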
 
