How to improve the stability of numerical solutions of PDEs

Kurret
This is a fairly general question, but I am working with a system of partial differential equations in two variables: one time direction t and one spatial direction z, with the numerical method formulated by stepping forward in time. The problem is that I get instabilities, either at the endpoints or sometimes even in the interior, depending on which method I use.

I have tried approximating the spatial derivatives with finite differences of various orders, Runge-Kutta and forward/backward Euler time stepping, and pseudospectral methods based on Chebyshev polynomials. I have also tried putting the different functions on different grids, different ways of stepping forward in time, and countless ways of rewriting the equations or changing the order in which the derivatives are evaluated. Even though there are some improvements, there is always some instability left, and I am becoming desperate.

Is there any expert out there who could just list all possible ideas one can try to make time evolution of PDEs stable? I don't think there is any point in writing out the equations here, since they are quite long and complex and would just be intimidating, BUT I can say that they are non-linear, involve both first and second order derivatives in both z and t (though they can be brought into the standard form ∂_t f=… by suitable redefinitions), and the system involves five different functions that together constitute the solution.
 
Are you using a tool like Maple/Mathematica/Mathcad/MATLAB, or are you programming this yourself or using other libraries?
 
If they're first order in time, backward Euler usually does the trick. There are also automatic integrators out there that use the C. W. Gear method, or modifications thereof, based on higher-order versions of backward Euler. That would be my first choice.
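To make that concrete, here is a minimal, hedged sketch (not code from this thread) of the usual workflow: discretize in z so the system becomes a stiff set of ODEs in t, then hand it to a BDF integrator, i.e. the Gear-type, higher-order relative of backward Euler. The diffusion-type right-hand side, grid size, and tolerances below are all placeholder assumptions.

```python
# Minimal method-of-lines sketch: discretize in z, then integrate the stiff
# ODE system with a BDF (Gear-type) method. The equation below (simple
# diffusion with fixed endpoints) is only a stand-in for the real system.
import numpy as np
from scipy.integrate import solve_ivp

N = 201                                   # grid points (assumed)
z = np.linspace(0.0, 1.0, N)
dz = z[1] - z[0]
f0 = np.exp(-100.0 * (z - 0.5) ** 2)      # placeholder initial condition

def rhs(t, f):
    # second-order central difference for f_zz; endpoints held fixed (Dirichlet)
    dfdt = np.zeros_like(f)
    dfdt[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dz ** 2
    return dfdt

# method='BDF' uses backward-differentiation formulas: higher-order
# generalizations of backward Euler in the spirit of Gear's method
sol = solve_ivp(rhs, (0.0, 0.1), f0, method='BDF', rtol=1e-6, atol=1e-9)
print("final time:", sol.t[-1], "max value:", sol.y[:, -1].max())
```

For a coupled system of five functions the same pattern applies: stack all five discretized functions into one long state vector and let the implicit integrator handle the coupling.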

Chet
 
Kurret said:
Is there any expert out there who could just list all possible ideas one can try to make time evolution of PDEs stable?

Um... Do you really want to ask it that way? "All possible ideas" would include a great many ideas, only a very few of which are likely to be in any way useful.

The first thing you should try to do is check analytically if the system is in fact subject to wild variation. What do I mean? Does the system involve large changes in outcome from minute changes in input? Are there regions where minor variations will have a strong tendency to grow? Are the various derivatives rapidly changing in the regime you are working in?
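One hedged way to make that check concrete (an illustration, not something from this thread): linearize the semi-discrete system df/dt = F(f) about a representative state and inspect the eigenvalues of the Jacobian. Eigenvalues with positive real part mean small perturbations grow no matter how you step in time, while a large spread in magnitudes signals stiffness and calls for an implicit method. The tiny right-hand side below is a made-up stand-in for the actual discretized equations.

```python
# Hedged sketch: probe whether small perturbations grow by looking at the
# eigenvalues of the Jacobian of the semi-discrete system df/dt = F(f),
# linearized about a representative state.
import numpy as np

def numerical_jacobian(F, f, eps=1e-7):
    """Finite-difference Jacobian of F at the state f."""
    n = f.size
    J = np.zeros((n, n))
    F0 = F(f)
    for i in range(n):
        fp = f.copy()
        fp[i] += eps
        J[:, i] = (F(fp) - F0) / eps
    return J

# Placeholder RHS standing in for the real discretized equations (assumed)
def F(f):
    return np.array([-f[0] + 10.0 * f[1] ** 2,
                     -1000.0 * f[1]])

f_state = np.array([1.0, 0.1])             # a representative state (assumed)
lam = np.linalg.eigvals(numerical_jacobian(F, f_state))
print("max Re(lambda):", lam.real.max())   # positive => perturbations grow
print("stiffness ratio:", abs(lam).max() / abs(lam).min())
```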

Examples of such things: If you were near the onset of turbulence in a fluids problem, you need to use very different methods from the case when you are very far from turbulence. If you are near the point where a material is about to fail due to cracking, likewise. If you are near a change of state such as boiling, likewise. If you had elastic materials with widely different restorative forces, likewise. Or, in my little field, if you are close to criticality in a nuclear reactor, you have to use some special methods. Any time a material property changes rapidly or abruptly with changes in inputs, you need special methods.

Without knowing those kinds of details, it would be just whistling in the dark to guess what methods to use. Generically, one class of such methods deals with "stiffness." Another with "cliff edges." Another with changes of state. Another with explosions. And so on. The generic feature is, they do something different near an abrupt or large change.
 