Adaptive step size algorithm in C++ for Runge-Kutta

SUMMARY

The discussion focuses on implementing an adaptive step size algorithm in a 4th order Runge-Kutta integrator for simulating basic orbits in C++. Key issues identified include the failure of the step size to adjust as expected and discrepancies in the orbit's arc length compared to Mathematica's solution. Recommendations include avoiding separation of x and y steps during integration, utilizing embedded Runge-Kutta methods for error control, and considering the Bogacki-Shampine method for improved accuracy.

PREREQUISITES
  • Understanding of 4th order Runge-Kutta integration
  • Familiarity with C++ programming
  • Knowledge of numerical methods for ordinary differential equations (ODEs)
  • Concept of error control in numerical integration
NEXT STEPS
  • Research embedded Runge-Kutta methods for adaptive step size control
  • Learn about the Bogacki-Shampine method for ODE integration
  • Explore predictor-corrector methods for improved accuracy in numerical solutions
  • Study examples of stiff equations to test integration methods
USEFUL FOR

Developers and researchers working on numerical simulations, particularly those implementing ODE solvers in C++, as well as anyone interested in enhancing the accuracy and efficiency of integration algorithms.

FunkyDwarf
Hey guys,

Attempting to write an adaptive step size function into a 4th order Runge-Kutta integrator for basic orbits.

The problems (so far) are as follows:
1.) The step size does not change as one would expect, which suggests the recursive definition of the StepSize function is not working properly.

2.) When compared with the Mathematica solution, it would seem the orbit doesn't extend as far as it should in time, i.e. the arc length is shorter. This appears to be because the coordinates remain constant after a certain point.

Code can be found here:
http://members.iinet.net.au/~housewrk/programtest.cpp

Apologies in advance for stupid errors and crappy coding :)

Cheers
-G
 
It looks like you're integrating slightly wrong. I wouldn't separate the Runge-Kutta integration into x and y steps. Since the components are coupled, each slope should be evaluated from both: compute k1x = fx(t, x, y) and k1y = fy(t, x, y) together, then calculate k2x and k2y from both intermediate values, and so forth.

The idea is that you want to make a complete approximation going forward. You're stepping the components forward separately, but since they depend on each other you're introducing errors. For small enough step sizes it might not matter, but it's still a bad idea.
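To make the point concrete, here is a minimal sketch of a single RK4 step that advances both components together. The derivative functions `fx` and `fy` and the stepper `rk4_step` are hypothetical names (not from the thread's code); as a stand-in system I use dx/dt = y, dy/dt = -x, whose exact solution is circular motion.

```cpp
#include <cmath>

// Hypothetical coupled system: dx/dt = fx(t, x, y), dy/dt = fy(t, x, y).
// Example used here: dx/dt = y, dy/dt = -x (uniform circular motion).
double fx(double t, double x, double y) { return y; }
double fy(double t, double x, double y) { return -x; }

// One RK4 step that advances both components together: every slope
// is evaluated from the *same* intermediate (x, y) pair.
void rk4_step(double &t, double &x, double &y, double h)
{
    double k1x = fx(t, x, y);
    double k1y = fy(t, x, y);

    double k2x = fx(t + 0.5 * h, x + 0.5 * h * k1x, y + 0.5 * h * k1y);
    double k2y = fy(t + 0.5 * h, x + 0.5 * h * k1x, y + 0.5 * h * k1y);

    double k3x = fx(t + 0.5 * h, x + 0.5 * h * k2x, y + 0.5 * h * k2y);
    double k3y = fy(t + 0.5 * h, x + 0.5 * h * k2x, y + 0.5 * h * k2y);

    double k4x = fx(t + h, x + h * k3x, y + h * k3y);
    double k4y = fy(t + h, x + h * k3x, y + h * k3y);

    x += h / 6.0 * (k1x + 2.0 * k2x + 2.0 * k3x + k4x);
    y += h / 6.0 * (k1y + 2.0 * k2y + 2.0 * k3y + k4y);
    t += h;
}
```

The key detail is that k2x and k2y are both computed from the same half-step state (x + h/2·k1x, y + h/2·k1y), rather than fully stepping x before touching y.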

Otherwise, for what you're doing, using a full step vs. two half steps for error control is reasonable, but I'd recommend taking a look at embedded Runge-Kutta methods, since they make error control very easy to implement.

I wrote an ODE integrator before, and it's fairly easy to do this:
Code:
// Bogacki-Shampine method
#include <algorithm> // std::max
#include <cmath>     // fabs, pow
using std::max;

void integrate(double &t, double &y, double &h)
{
	double k1, k2, k3, k4, y1, y2;

	// f(t, y) is our derivative
	k1 = f(t, y);
	k2 = f(t + h * 0.50, y + h * 0.50 * k1);
	k3 = f(t + h * 0.75, y + h * 0.75 * k2);

	// 3rd order accurate solution
	y1 = y + h * (2.0/9.0 * k1 + 1.0/3.0 * k2 + 4.0/9.0 * k3);

	k4 = f(t + h, y1);

	// 2nd order accurate solution
	y2 = y + h * (7.0/24.0 * k1 + 1.0/4.0 * k2 + 1.0/3.0 * k3 + 1.0/8.0 * k4);

	y = y2;
	t += h;

	// Define atol (absolute tolerance) and rtol (relative tolerance) somewhere.
	// The difference between the two solutions approximates the local error,
	// scaled by our desired absolute and relative tolerances.
	// Both atol and rtol should be non-zero.

	double scale = atol + max(fabs(y1), fabs(y2)) * rtol;
	double error = max(fabs(y2 - y1), 2E-10) / scale;
	h = h * pow(error, -1.0/3.0);
}

Using something like an embedded method may be easier. You'll still want to control the integration and make sure a step doesn't introduce too large an error — the above code adjusts h for the next step but never rejects the current one.
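To illustrate that missing accept/reject logic, here is a hedged sketch of a step controller wrapped around the same Bogacki-Shampine pair. The names (`bs_attempt`, `adaptive_step`) and the test ODE dy/dt = -y are my own illustrative choices, not part of the thread's code; the safety factor 0.9 and the growth/shrink limits are conventional but tunable.

```cpp
#include <algorithm>
#include <cmath>

// Assumed test ODE: dy/dt = -y, exact solution y(t) = y0 * exp(-t).
double f(double t, double y) { return -y; }

// Attempt one Bogacki-Shampine step; writes the 3rd-order result into
// ynew and returns the scaled error estimate (<= 1 means acceptable).
double bs_attempt(double t, double y, double h, double &ynew,
                  double atol, double rtol)
{
    double k1 = f(t, y);
    double k2 = f(t + 0.50 * h, y + 0.50 * h * k1);
    double k3 = f(t + 0.75 * h, y + 0.75 * h * k2);
    double y3 = y + h * (2.0/9.0 * k1 + 1.0/3.0 * k2 + 4.0/9.0 * k3);
    double k4 = f(t + h, y3);
    double y2 = y + h * (7.0/24.0 * k1 + 1.0/4.0 * k2
                       + 1.0/3.0 * k3 + 1.0/8.0 * k4);
    double scale = atol + std::max(std::fabs(y3), std::fabs(y2)) * rtol;
    ynew = y3;
    return std::fabs(y3 - y2) / scale;
}

// Accept/reject loop: retry with a smaller h until the scaled error
// is <= 1, then advance and let h grow for the next step.
void adaptive_step(double &t, double &y, double &h,
                   double atol = 1e-8, double rtol = 1e-8)
{
    for (;;) {
        double ynew;
        double err = std::max(bs_attempt(t, y, h, ynew, atol, rtol), 1e-12);
        double factor = 0.9 * std::pow(err, -1.0 / 3.0); // safety factor 0.9
        factor = std::min(5.0, std::max(0.2, factor));   // limit h changes
        if (err <= 1.0) {   // accept the step
            t += h;
            y = ynew;
            h *= factor;
            return;
        }
        h *= factor;        // reject: retry with a smaller h
    }
}
```

The rejection branch is what keeps any single step from introducing more error than the tolerances allow; without it, one bad step silently corrupts the whole trajectory.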

I'll have to look through your code some more and pick it apart to figure out what's wrong, though.
 
Even if you use an iterative method, such as predictor-corrector:

http://en.wikipedia.org/wiki/Predictor-corrector_method#Euler_Trapezoidal_Example

The algorithm may converge to a specific value for each step, but in that wiki example it's a linear approximation per step, and the overall error will still increase with step size regardless of convergence at each step. The method you show splits each step into 4 stages, but it's still a linear weighted average over those 4 stages. You could just reduce the step size by a factor of 4 and use an iterative method for more accuracy, but it would take more time. On a modern PC, I doubt CPU time is an issue for most projects.
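For reference, the Euler-trapezoidal predictor-corrector from that wiki page can be sketched as below. The function names (`pece_step`, `f`) and the test ODE dy/dt = -y are illustrative assumptions; the corrector iterates until successive values agree, but as noted above the step remains only 2nd-order accurate however many corrections are applied.

```cpp
#include <cmath>

// Assumed test ODE: dy/dt = -y, exact solution y(t) = y0 * exp(-t).
double f(double t, double y) { return -y; }

// Euler predictor + iterated trapezoidal corrector (PECE scheme).
void pece_step(double &t, double &y, double h, double tol = 1e-12)
{
    double fp = f(t, y);
    double yp = y + h * fp;                 // predict: explicit Euler
    for (int i = 0; i < 50; ++i) {          // correct: trapezoidal rule
        double yc = y + 0.5 * h * (fp + f(t + h, yp));
        if (std::fabs(yc - yp) < tol) { yp = yc; break; }
        yp = yc;                            // iterate until converged
    }
    y = yp;
    t += h;
}
```

Convergence of the inner loop only means the implicit trapezoidal equation is solved exactly; the O(h^2) truncation error of the trapezoidal rule itself is untouched.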

If you want to vary the step size, it should be some inverse function of the magnitude of the function(s) you're trying to integrate, but this could lead to errors if the magnitude increases rapidly within a step period.

Wiki includes a good example of a "stiff equation" you can use to test your method, since it's directly solvable:

http://en.wikipedia.org/wiki/Stiff_equation
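A quick way to run that test is below. The problem dy/dt = -15y, y(0) = 1, with exact solution exp(-15t), appears on that wiki page; the stepper name `rk4` is my own. One caveat worth knowing: explicit RK4 is only stable on this problem for roughly 15·h < 2.8, i.e. h below about 0.18 — a step size any larger makes the numerical solution blow up even though the true solution decays.

```cpp
#include <cmath>

// Stiff test problem from the Wikipedia article:
// dy/dt = -15 y, y(0) = 1, exact solution y(t) = exp(-15 t).
double f(double t, double y) { return -15.0 * y; }

// Plain fixed-step RK4 for comparison against the exact solution.
void rk4(double &t, double &y, double h)
{
    double k1 = f(t, y);
    double k2 = f(t + 0.5 * h, y + 0.5 * h * k1);
    double k3 = f(t + 0.5 * h, y + 0.5 * h * k2);
    double k4 = f(t + h, y + h * k3);
    y += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
    t += h;
}
```

Comparing the numerical result against exp(-15t) at several step sizes (say h = 0.05 vs. h = 0.25) makes the stability limit of an explicit method very visible.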
 
