Runge-Kutta 4 with some sugar on top: How to do error approximation?

  • #61
pbuk said:
Yes it is wrong - that is an absolute error.
Aha, thank you! So a relative error of ##0.5\cdot 10^{-n}## implies ##n+1## safe digits? According to
https://math.stackexchange.com/ques...-relative-error-give-number-of-correct-digits . (that's good - or at least better - news!)
pbuk said:
This depends on context - I would be inclined to do both, stating the assumption that must be made in order to use the second approach.
My lecturer uses ##\sum \left|\Delta \right|## when using the method I described, i.e. not the propagation suggested by BvU. I assume that the course wants me to use that one, even if it is pessimistic.
 
  • #62
bremenfallturm said:
Aha, thank you! So a relative error of ##0.5\cdot 10^{-n}## implies ##n+1## safe digits? According to
https://math.stackexchange.com/ques...-relative-error-give-number-of-correct-digits . (that's good - or at least better - news!)
Hmmm, there is the danger of confusion here. Let's make sure we are all clear on the definitions:

If the true value of a quantity is ## x ## and the measured value is ## x_0 ## then the absolute error is ## x_0 - x ## and the relative error is ## \frac{x_0 - x}{x} = \frac{x_0}{x} - 1 ##.

https://mathworld.wolfram.com/AbsoluteError.html
https://mathworld.wolfram.com/RelativeError.html

So ##0.5\times10^{-n}## could be either a relative error or an absolute error.
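
To make the distinction concrete, here is a minimal sketch in Python (the numbers are made up for illustration) of both definitions and of the safe-digits rule quoted above:

```python
import math

x = 3.14159265   # "true" value (made-up for illustration)
x0 = 3.1416      # measured / computed value

abs_err = x0 - x          # absolute error: x0 - x
rel_err = (x0 - x) / x    # relative error: (x0 - x) / x

# Rule quoted above: |relative error| <= 0.5e-n  =>  about n+1
# correct significant digits.
n = math.floor(-math.log10(2 * abs(rel_err)))
print(f"abs: {abs_err:.3e}, rel: {rel_err:.3e}, safe digits: {n + 1}")
```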

bremenfallturm said:
My lecturer uses ##\sum \left|\Delta \right|## when using the method I described, i.e. not the propagation suggested by BvU. I assume that the course wants me to use that one, even if it is pessimistic.
I agree with your lecturer, and it is not pessimistic: in the absence of other information it is the right method to use. The method suggested by BvU is optimistic because it assumes that the errors are independent random variables and in this problem there is no reason to indicate that that is true - which is why I suggested that if you did use it you should state that assumption.
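
For reference, a small sketch of the two combination rules being contrasted here (the per-source error values are made up):

```python
import math

deltas = [1.2e-7, 3.4e-8, 8.9e-8]   # per-source absolute errors (made up)

# Worst case (the lecturer's sum of |Delta|): all errors may pull
# in the same direction.
worst_case = sum(abs(d) for d in deltas)

# Quadrature (the propagation BvU suggested): valid only if the
# errors are independent random variables.
quadrature = math.sqrt(sum(d * d for d in deltas))

print(worst_case, quadrature)   # quadrature is never larger
```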
 
  • #63
Hello, sorry for not responding here sooner; I was waiting for this project to be reviewed. Unfortunately, I have some adjustments to make - not because of the errors (so I guess we're all good on our definitions).

But unfortunately, my current algorithm for detecting zero crossings is "too inefficient, do interpolation instead", to quote the feedback I got. Here is the current algorithm:
Remember, the problem is that I need to flip the sign of the ##y## derivative at ##y=0##.
  1. Run Runge-Kutta with a "coarse" step length (0.5e-2) until the y solution crosses 0.
  2. Use the bisection method to find a step length that makes the solution land at exactly y=0, to within machine epsilon (a sketch follows this list).
  3. Reset the step length, change the sign of the derivative, and continue iterating.
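A minimal sketch of step 2, assuming a hypothetical `rk4_step(state, h)` that returns the state `[x, y, xdot, ydot]` after one RK4 step of length `h`:

```python
def step_length_to_floor(rk4_step, state, h_coarse, tol=1e-15):
    """Bisect on the step length h so that a single RK4 step from
    `state` (which has y > 0) lands on y = 0 to within `tol`."""
    lo, hi = 0.0, h_coarse   # y is still positive at lo, negative by hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rk4_step(state, mid)[1] > 0.0:   # index 1 is y
            lo = mid    # still above the floor: step further
        else:
            hi = mid    # overshot: shorten the step
    return 0.5 * (lo + hi)
```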
So, now I'm trying to figure out how to "do interpolation instead" to find the zero crossings.

It doesn't really make sense to me.

Let's say I find the two t (time) points whose solutions bracket y=0 and perform linear interpolation between them. I can then interpolate the time t at which y equals 0, but the Runge-Kutta solution vector depends on several components: ##x, y, \dot x, \dot y##.
I fail to see what to do after finding this t. Do I reset to the previous solution, flip the derivative, and then recalculate with the interpolated t value? If so, I do not get good precision at all and the graph looks really wonky:
[Attached plot: the resulting trajectory]
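
For reference, linear interpolation through two bracketing points ##(t_1, y_1)## and ##(t_2, y_2)## with ##y_1 > 0 > y_2## would put the crossing at ##t_0 = t_1 - y_1 \frac{t_2 - t_1}{y_2 - y_1}##; the open question is what to do with the state once ##t_0## is found.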

What is meant by "do interpolation instead"?
Should I do interpolation over each of these: ##x, y, \dot x, \dot y##?
 
  • #64
Edit: I see what is meant: I interpolated over everything that the Runge-Kutta 4 solution vector depends on. Results look promising. I am tired and it's late; I will elaborate further tomorrow.
 
  • #65
Hello, time for an update I guess!
I hope I can get some last help in this topic; after that I'm hopefully done with the project.
I decided to do interpolation to find when y=0 the following way:
  1. When y<0 is first detected, solve for one more iteration so that there are 2 "grid points" below y=0.
  2. Take those 2 grid points and the last 2 grid points above y=0. Create an interpolation polynomial (of degree 3) through these 4 points and interpolate ##y, \dot y, x, \dot x## as functions of ##t##.
  3. Solve ##y_{interpolated}(t)=0## using the bisection method.
  4. Using the value obtained in (3), let's call it ##t_0##, plug it into all the interpolation polynomials (##y_{interpolated}(t_0), \dot y_{interpolated}(t_0), x_{interpolated}(t_0), \dot x_{interpolated}(t_0)##). ##\dot y## changes sign according to the assignment.
  5. The values obtained in (4) are set as the last solution. All solutions below y=0 are scrapped, and Runge-Kutta solves for the next values using ##t_0## and the interpolated values as the last true solution (a sketch follows this list).
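A minimal sketch of steps 1-5 in Python (names like `ts` and `states` are made up; with four points, `numpy.polyfit` of degree 3 is exact interpolation):

```python
import numpy as np

def interpolate_crossing(ts, states, tol=1e-15):
    """ts: the 4 bracketing grid times (2 above, then 2 below y = 0).
    states: 4x4 array, one row [x, y, xdot, ydot] per grid time.
    Returns (t0, state at t0) with the sign of ydot flipped."""
    ts = np.asarray(ts, dtype=float)
    states = np.asarray(states, dtype=float)

    # One degree-3 interpolation polynomial per component, in t.
    polys = [np.polyfit(ts, states[:, k], 3) for k in range(4)]
    y_of_t = lambda t: np.polyval(polys[1], t)   # index 1 is y

    # Bisection for y_interpolated(t0) = 0; by construction the
    # crossing lies between the 2nd and 3rd grid times.
    lo, hi = ts[1], ts[2]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if y_of_t(lo) * y_of_t(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    t0 = 0.5 * (lo + hi)

    state0 = np.array([np.polyval(p, t0) for p in polys])
    state0[3] = -state0[3]   # flip ydot, per the assignment
    return t0, state0
```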
This does work, so I guess it's fine! However, I do have some new questions which are also (sigh) related to error calculation:
1. I am asked to have ##6##-##8## safe *digits*, i.e. relative error, as we concluded earlier. With the interpolation around ##y=0## comes a new error, but it is not clear to me how to calculate the relative error. When approximating the interpolation error, I compare my result with a fourth-degree polynomial evaluated at the same point.
Here is an example with the two values of ##y_{interpolated}(t_0)## (see above for the definition of ##t_0##):
Degree 3: -6.938893903907228e-18
Degree 4: 2.081668171172169e-17
So the absolute error in this case is ##\left| -6.938893903907228\cdot 10^{-18} - 2.081668171172169 \cdot 10^{-17} \right| \approx 2.8\cdot 10^{-17}##.
But if I want to calculate the relative error, I take the absolute error divided by ##6.938893903907228\cdot 10^{-18}##, which will turn out huge.
In some other cases, the value of the interpolation polynomial at degree 3 will turn out as ##0##, which is good because I want to find when ##y=0##. However, if I try to calculate the relative error in that case, I will get division by zero.
Since I use relative errors for everything else, I want to figure out how to calculate it for the interpolation polynomials.
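As a made-up numerical illustration of why this breaks down: if the degree-3 value were ##10^{-18}## and the degree-4 value ##2\cdot 10^{-17}##, the absolute error would be ##1.9\cdot 10^{-17}## (tiny), yet the relative error would be ##19##, i.e. ##1900\,\%##, even though both values are zero to machine precision. A relative error is simply not a meaningful measure for a quantity whose target value is ##0##.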
2. Bonus question: The interpolation around ##y=0## involves ##t## values that differ very little when a fine step size is used. For example, to solve the assignment (which asks for a speed) to the wanted accuracy, I have to interpolate between values that differ quite little in magnitude; here is an example:
2.685156250000135e-01 2.685937500000135e-01 2.686718750000135e-01 2.687500000000135e-01
The method does provide the same result as the old algorithm, but I'm still a little worried about the interpolation accuracy: if I use these data points to build the coefficient matrix for the interpolation polynomial, won't that matrix have quite a bad condition number?
I guess if it works it works, but is there any way I can improve the condition number? (It has only been mentioned briefly in my course.)
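One standard remedy (a general numerical trick, not something specific to this thread) is to shift and scale the abscissae before building the Vandermonde matrix, i.e. fit in ##s = (t - \bar t)/(t_{\max} - t_{\min})## instead of raw ##t##. A sketch using the four ##t## values quoted above:

```python
import numpy as np

ts = np.array([2.685156250000135e-01, 2.685937500000135e-01,
               2.686718750000135e-01, 2.687500000000135e-01])

V_raw = np.vander(ts, 4)   # columns t^3, t^2, t, 1

# Centered and scaled abscissae, roughly in [-0.5, 0.5]
s = (ts - ts.mean()) / (ts.max() - ts.min())
V_scaled = np.vander(s, 4)

print(np.linalg.cond(V_raw))     # enormous for clustered raw t
print(np.linalg.cond(V_scaled))  # modest after shifting and scaling
```

The fitted coefficients then refer to ##s##, so any ##t_0## found later has to be mapped through the same transform before the polynomials are evaluated.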

I hope this is clear and you can follow.
 
  • #66
bremenfallturm said:
In some other cases, the value of the interpolation polynomial at degree 3 will turn out as ##0##, which is good because I want to find when ##y=0##.
Yes: you want to find when ## y = 0 ##, so you are looking for 8 digits of accuracy in ## t ##, not ## y ##!

bremenfallturm said:
2. I have to interpolate between values that differ quite little in magnitude: here is an example:
2.685156250000135e-01 2.685937500000135e-01 2.686718750000135e-01 2.687500000000135e-01
Those numbers differ by a factor of approximately 1 + 2.9e-4; machine epsilon is 1.1e-16, so you have about 16 - 4 = 12 digits of accuracy, which is plenty.
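(Checking that arithmetic: the spacing is ##(2.6859375 - 2.6851563)\cdot 10^{-1} \approx 7.8\cdot 10^{-5}##, which relative to ##2.685\cdot 10^{-1}## is ##\approx 2.9\cdot 10^{-4}##; and ##\log_{10}(2.9\cdot 10^{-4} / 1.1\cdot 10^{-16}) \approx 12.4##, hence roughly 12 digits.)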
 
