# Runge-Kutta Stability Regions

Based on this link, in particular Figure 1, what is the exact meaning of the plot?

To my understanding, it implies that for a given differential equation:

$$\frac {dy}{dt} = \lambda y$$

the value ##\lambda \Delta t## must lie within the complex region shown in Figure 1 corresponding to the order of Runge-Kutta used. To ensure I am understanding this plot correctly, must ##\lambda \Delta t < 0## if both ##\lambda## and ##\Delta t## are purely real? If ##\Delta t## is purely positive and real, wouldn't this restrict you to only considering equations where ##\lambda < 0##?
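For real ##\lambda \Delta t##, the classical RK4 region intersects the real axis in an interval of roughly ##(-2.785, 0)##. A small sketch (not from the linked page; it assumes the standard RK4 stability polynomial ##R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24##) locates the left endpoint by bisection:

```python
# Locate where |R(z)| crosses 1 on the negative real axis for classical RK4.
# z = lambda * dt; one RK4 step on y' = lambda*y multiplies y by R(z).

def R(z):
    """Stability function of the classical fourth-order Runge-Kutta method."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def real_axis_boundary(lo=-3.0, hi=-2.0, tol=1e-10):
    """Bisect for the boundary point: |R(lo)| > 1 and |R(hi)| < 1 bracket it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(R(mid)) > 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(real_axis_boundary())  # approximately -2.7853
```

So on the real axis, stability requires roughly ##-2.785 < \lambda \Delta t < 0##.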

If we now consider a case where ##\lambda## is complex-valued and ##\Delta t## purely positive and real, what exactly is the intuition behind the value ##\lambda \Delta t = 0.01 + 2i## being stable yet ##\lambda \Delta t = 0.01## not being stable?
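One way to see this concretely (again assuming the plot is for classical RK4, with stability function ##R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24##): evaluate ##|R(\lambda \Delta t)|##, the factor by which one step multiplies the solution. The point ##0.01 + 2i## sits inside the small lobe of the RK4 region that crosses into the right half-plane near the imaginary axis, while ##0.01## does not:

```python
# Per-step amplification factor of classical RK4 on y' = lambda*y.
def R(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

for z in (0.01 + 2j, 0.01 + 0j):
    amp = abs(R(z))
    verdict = "stable" if amp <= 1 else "unstable"
    print(f"z = {z}: |R(z)| = {amp:.4f} ({verdict})")
# z = 0.01+2i gives |R(z)| < 1 (stable); z = 0.01 gives |R(z)| > 1 (unstable)
```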

## Answers and Replies

**DrClaude** (Mentor):
> To ensure I am understanding this plot correctly, must ##\lambda \Delta t < 0## if both ##\lambda## and ##\Delta t## are purely real? If ##\Delta t## is purely positive and real, wouldn't this restrict you to only considering equations where ##\lambda < 0##?

Yes, because for ##\lambda > 0##, the actual solution goes to infinity. Have a look at https://en.wikipedia.org/wiki/Stiff_equation#A-stability

> If we now consider a case where ##\lambda## is complex-valued and ##\Delta t## purely positive and real, what exactly is the intuition behind the value ##\lambda \Delta t = 0.01 + 2i## being stable yet ##\lambda \Delta t = 0.01## not being stable?

Good question.

> Yes, because for ##\lambda > 0##, the actual solution goes to infinity. Have a look at https://en.wikipedia.org/wiki/Stiff_equation#A-stability

> Good question.

So stability is dependent on the domain considered, correct? Since the solution would certainly go to infinity for ##\lambda > 0##. But if considering a small time domain (e.g. ##0 \leq t \leq 1##) where the solution, ##y##, is well-defined, do we still describe the Runge-Kutta method applied to this equation as unstable on this smaller domain?

> So stability is dependent on the domain considered, correct? Since the solution would certainly go to infinity for ##\lambda > 0##. But if considering a small time domain (e.g. ##0 \leq t \leq 1##) where the solution, ##y##, is well-defined, do we still describe the Runge-Kutta method applied to this equation as unstable on this smaller domain?

No, the stability does not depend on the time domain. However, we need to distinguish between two situations. The first is where the physical model itself is unstable. In that case the analysis of the numerical method is much more subtle: ideally we want a numerical method that most accurately reproduces the unstable growth rate, but often there are many unstable modes and we can't accurately reproduce the growth rate of all of them. The second situation is where the physical model is stable. In that case we need to verify that the numerical method is stable: if it is not, the unstable numerical modes will quickly grow up out of the noise and dominate the simulation. Beyond that point your results are going to be noisy garbage.
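The reason the time domain never enters: for the linear test equation, one RK4 step is exactly multiplication by the fixed factor ##R(\lambda \Delta t)##, independent of ##t##. A quick check of that identity (a sketch, assuming the classical RK4 coefficients; ##\lambda## and ##\Delta t## values are illustrative):

```python
# One classical RK4 step on y' = lam*y equals multiplying y by
# R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24 with z = lam*dt.

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

lam = 0.3 + 1.7j          # arbitrary complex eigenvalue (illustrative)
dt = 0.05
z = lam * dt
R = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

y0 = 1.0 + 0.0j
y1 = rk4_step(lambda t, y: lam * y, 0.0, y0, dt)
print(abs(y1 - R * y0))   # ~0: the step is exactly y1 = R(z)*y0
```

After ##n## steps, ##y_n = R(z)^n y_0##, so the method is stable precisely when ##|R(z)| \leq 1##, regardless of how long you integrate.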

> No, the stability does not depend on the time domain. However, we need to distinguish between two situations. The first is where the physical model itself is unstable. In that case the analysis of the numerical method is much more subtle: ideally we want a numerical method that most accurately reproduces the unstable growth rate, but often there are many unstable modes and we can't accurately reproduce the growth rate of all of them. The second situation is where the physical model is stable. In that case we need to verify that the numerical method is stable: if it is not, the unstable numerical modes will quickly grow up out of the noise and dominate the simulation. Beyond that point your results are going to be noisy garbage.

So to clarify, is RK4 numerically unstable when solving the above test case for ##\lambda > 1## on any time domain, even if the solution does not go to infinity? When you say "unstable numerical modes will quickly grow up out of the noise and dominate the simulation," how exactly can the growth rate of these errors be quantified? Since I would assume if ##\lambda = 1.000001## then the numerical errors will not be so significant on a time domain of ##0 \leq t \leq 0.000000001##. Also, if you have any ideas on the last question I stated in the above post, that would be greatly appreciated.

The stability of a numerical method does not depend on the time domain.

In my opinion, the question "when can I get away with using an unstable numerical method?" is the wrong question, and relying on one is a bad computational practice that you want to avoid. You can quantify an unstable method by the growth rate of its most unstable mode, and this gives you an estimate of how long it will take for that mode to grow up out of the noise. In some cases, if you're barely unstable and you run the simulation for a short time, then yes, you can get away with using a bad numerical method. But be warned: you're playing with fire, and this is bad practice.

It's much better to ask how you can stabilize the method, or whether there is an alternative stable method you can use. For instance, if you only need to run for a short time and you're using RK4, why not reduce the time step a little? The computational savings you gain from a slightly larger time step aren't worth the loss of confidence if the method is unstable.
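The growth rate asked about above can be made concrete: for the test equation, each step multiplies any perturbation by ##|R(\lambda \Delta t)|##, so an error at round-off level, ##\sim 10^{-16}##, needs roughly ##\ln(10^{16}) / \ln|R|## steps to reach order one. A sketch, assuming classical RK4 and using the ##\lambda \Delta t = 0.01## value from the original post:

```python
import math

# Classical RK4 stability function; z = lambda * dt.
def R(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# z slightly outside the stability region on the positive real axis.
z = 0.01
growth = abs(R(z))                      # per-step amplification, just above 1
# Steps for a perturbation of size 1e-16 to grow to order one.
steps_to_order_one = math.log(1e16) / math.log(growth)
print(f"|R(z)| = {growth:.6f}")
print(f"steps for 1e-16 noise to reach O(1): ~{steps_to_order_one:.0f}")
```

This is the sense in which a "barely unstable" run can look fine for a while: the amplification per step is tiny, but it compounds geometrically.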

> The stability of a numerical method does not depend on the time domain.
>
> In my opinion, the question "when can I get away with using an unstable numerical method?" is the wrong question, and relying on one is a bad computational practice that you want to avoid. You can quantify an unstable method by the growth rate of its most unstable mode, and this gives you an estimate of how long it will take for that mode to grow up out of the noise. In some cases, if you're barely unstable and you run the simulation for a short time, then yes, you can get away with using a bad numerical method. But be warned: you're playing with fire, and this is bad practice.
>
> It's much better to ask how you can stabilize the method, or whether there is an alternative stable method you can use. For instance, if you only need to run for a short time and you're using RK4, why not reduce the time step a little? The computational savings you gain from a slightly larger time step aren't worth the loss of confidence if the method is unstable.

That's understandable, and I agree. But I am nonetheless curious to understand how instability in a numerical method propagates errors. It appears the eigenvalue's magnitude is important, along with the time step taken, and the rate at which this instability grows was my main motivation for asking this question.

Just re-read the answer. The question is largely focused on those cases where simply decreasing the time step will not bring ##\lambda \Delta t## within the region of stability (e.g. ##\lambda > 0## and real). I agree there are stable methods available as alternatives, but I am interested in understanding the behaviour of this unstable method in the limit where ##\lambda \Delta t## is nearly in the region of stability.
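In that limit the growth is as slow as it can be: near the boundary ##|R(z)| = 1 + \epsilon##, so the e-folding time of the instability is about ##1/\epsilon## steps and diverges as ##z \to 0^+## along the real axis. A sketch, assuming the classical RK4 stability function and illustrative values of ##z##:

```python
import math

# Classical RK4 stability function; z = lambda * dt.
def R(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# As z -> 0+ on the real axis, |R(z)| -> 1 from above, so the e-folding
# time of the numerical growth (in steps) diverges like 1/z.
for z in (0.01, 0.001, 0.0001):
    efold = 1.0 / math.log(abs(R(z)))
    print(f"z = {z}: e-folding time ~ {efold:.0f} steps")
```

Note that for small real ##z > 0##, ##R(z) \approx e^z##, so this numerical growth closely tracks the genuine growth of the exact solution ##e^{\lambda t}##; near the boundary the "instability" is dominated by the physical growth rather than by spurious numerical modes.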