# Selecting a numerical method

1. Jan 10, 2016

### observer1

Good Day

Let's say I have developed a new method to extract, more efficiently (yes, "more efficiently" is ill-defined, but bear with me), the differential equations that describe a specific phenomenon (please just assume it).

So now I have a system of coupled second-order differential equations with non-constant coefficients.

I have a few tasks now: 1) teach this method to students; 2) demonstrate that it works (it = my method for extracting the equations, not solving them); 3) optimize the solution method.

With regard to 1 and 2, I must now create a test case. And I am confronted with all sorts of information about integration schemes: implicit/explicit, speed, cost, memory, and so many other issues.

Now, back in the day (30 years ago), when CPUs were slow and memory expensive, there were so many papers on finite element methods: reduced integration, Crout reduction and memory storage, hourglassing, etc. They all seem like a fart in a hurricane in today's world of fast CPUs and cheap memory.

Can the same thing be suggested about numerical methods (yes, I agree that, to a large extent, the finite element method itself is a glorified interpolation scheme -- but let's not go there) and their concomitant culture of research papers, each purporting to reveal an ever-faster method?

My question is this: assuming I have a stiff system, all the time in the world to take really tiny time steps, and a lot of memory, does the choice of integration scheme matter (Runge-Kutta, Newton-Raphson, Newmark-beta, central difference, etc.)?

In reality, I know this should be an implicit method, but I don't have the time to implement one, as speed is not my focus right now.

In time, IF I pass the first two hurdles, I can return to optimizing the integration scheme and seek collaborators. But should I be concerned about such issues at the start? Can one really pick a method so bad that it cannot be quickly "fixed" by making the time step even smaller?
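To make the question concrete, here is a toy sketch (the scalar test equation y' = -1000y, not my actual system, and not my extraction method) of how an explicit method's stability limit can indeed be bought off with a smaller step:

```python
import math

def euler(lam, h, n_steps, y0=1.0):
    """Forward (explicit) Euler for the stiff test equation y' = -lam*y."""
    y = y0
    for _ in range(n_steps):
        y += h * (-lam * y)
    return y

lam = 1000.0
# Step too large: h = 0.01 > 2/lam = 0.002, so the iteration blows up.
print(abs(euler(lam, 0.01, 100)) > 1e6)                       # True
# Tiny step h = 1e-5: stable and accurate; compare with exp(-lam*t) at t = 0.01.
print(abs(euler(lam, 1e-5, 1000) - math.exp(-10.0)) < 1e-3)   # True
```

The price is the step count: the stable run needs 1000 steps to reach t = 0.01, where an implicit method could take far fewer.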

Last edited: Jan 10, 2016
2. Jan 10, 2016

### Krylov

I'm puzzled by what you mean by "extract". Do you mean you have found a more efficient way to derive the equations that describe a certain phenomenon (by pencil and paper or using a CAS, say), or are these equations themselves the result of some other numerical approximation method that you have managed to improve?

No, I don't think so. Sure, hardware has become incredibly fast and inexpensive, but this just means that the problems to be attacked can also be a lot more numerically demanding. For example, the kind of complex, multiscale phenomena analyzed nowadays by engineers using the finite element method were completely out of reach 30 years ago.
It's hard to say without knowing the problem and its scale of complexity, so I can only offer some generalities. Simple stiff problems can often be solved, inefficiently, using non-stiff integrators, but if you reduce the time step too far you also incur more round-off error. This can become an issue particularly when solving the ODE is part of the solution of a bigger numerical problem.

I would say: If your primary concern is not with writing an appropriate integrator yourself, why don't you use a canned one from the start that is suitable for stiff problems? There is enough choice in MATLAB (paid), Octave (free) or, if you require a compiled solution, for free on Netlib. You may also look at Ernst Hairer's page, which has some codes.
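As a sketch of what I mean (with a placeholder stiff linear system, not your equations), SciPy's solve_ivp can be handed the implicit Radau method in a couple of lines:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Placeholder stiff linear system: a fast mode (-1000) and a slow mode (-1).
    return [-1000.0 * y[0], -1.0 * y[1]]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 1.0], method="Radau",
                rtol=1e-8, atol=1e-10)
print(sol.success)                               # True
print(abs(sol.y[1, -1] - np.exp(-5.0)) < 1e-5)   # slow mode tracked accurately
```

Swapping method="Radau" for method="BDF" gives the other classic stiff family; either way, the step-size control is the solver's problem, not yours.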

3. Jan 10, 2016

### Krylov

A short P.S.
Permit me to go there anyway, albeit briefly. I think that a substantial part of numerical analysis is a "glorified interpolation scheme", but that doesn't make it any less useful or ingenious. For a classical account of this, see https://www.amazon.com/Interpolation-Approximation-Dover-Books-Mathematics/dp/0486624951.

Last edited by a moderator: May 7, 2017
4. Jan 10, 2016

### Svein

The trouble is that there exist differential equations that are very unstable: a small difference between a calculated value and the theoretically correct value near the starting point leads to ever greater divergence at each step.

The differential equations shown in textbooks are invariably of the "nice" type: they tend to converge to a "good" solution as you refine the steps or the method. Real life does not always follow the textbooks.

5. Jan 10, 2016

### bigfooted

Finite element methods gained popularity because they can efficiently deal with very complicated 2D and 3D geometries, and they are suited to structural analysis as well as fluid dynamics problems, which is why the method is used so widely in industry. If you have a system of second-order ODEs in one dimension, there is certainly no need for a finite element method; in the simplest cases the final system of equations to be solved is the same as the one obtained from a finite difference method.

If your problem is stiff, you really need to look into stiff integration schemes, and at least understand what they do. You have to understand at least what is on the wiki page:
https://en.wikipedia.org/wiki/Stiff_equation
It might be that for your particular problem there is a specific range of time steps, cell sizes and equation coefficients for which you will get a solution. But the scheme might also give you a solution that looks stable and correct yet is in fact wrong.
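As a toy illustration of that last point (mine, not specific to your problem): forward Euler on y' = -50y with a step just inside the stability limit decays, so it looks stable, but the iterate flips sign every step, which the true solution exp(-50t) never does:

```python
lam, h = 50.0, 0.039      # forward Euler stability limit: h < 2/lam = 0.04
amp = 1.0 - lam * h       # amplification factor per step, about -0.95
y, flips = 1.0, True
for _ in range(20):
    y_new = amp * y
    flips = flips and (y_new * y < 0)   # the sign alternates every step
    y = y_new
print(flips)          # True: the iterate oscillates...
print(abs(y) < 1.0)   # True: ...yet decays, so it "looks stable"
```

A plot of such a run can easily be mistaken for a converged answer if you only check that nothing blows up.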

6. Jan 17, 2016

### Krylov

So, @observer1, any thoughts on what other people responded?

7. Jan 23, 2016

### observer1

Sorry, I have been away.

I am going to go with an explicit scheme and very small time steps. My goal is not really the solution but setting up the equations: links, joints (revolute, ball, translational), etc.

The problem is the control of a crane on a ship, and it is just a demo. If it all works, I'll come back to it after the student graduates.