How Does Reducing Time Intervals Affect Accuracy in Numerical Integration?

  • Context: Undergrad
  • Thread starter: walking
  • Tags: Accuracy

Discussion Overview

The discussion revolves around the effects of reducing time intervals on the accuracy of numerical integration, specifically in the context of Euler's method as presented in a textbook. Participants explore the implications of choosing different time intervals and how they relate to estimating accuracy in numerical simulations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants suggest that the choice of the second time interval (0.25s) is arbitrary and serves as an example, arguing that any smaller interval could suffice for comparison.
  • Others propose that the purpose of using a smaller interval is to approximate what the values would be if the interval approached zero, thus allowing for an estimation of accuracy against the "true" values.
  • One participant questions whether a 'zero' interval would yield perfect accuracy and emphasizes the importance of understanding how accuracy improves as intervals decrease.
  • Another participant notes that if the difference in results between two intervals is negligible, it indicates that the step size is sufficiently small, while significant differences would necessitate trying even smaller intervals.
  • Some participants mention that in practical scenarios, factors like nonlinearities and uncertainties in coefficients may introduce more error than the step size itself.
  • One participant highlights that repeating the integration with progressively smaller intervals can help estimate error until the differences stabilize within an acceptable range.

Areas of Agreement / Disagreement

Participants generally agree that the second interval is likely chosen arbitrarily and that smaller intervals can provide better accuracy estimates. However, there is no consensus on the implications of using a 'zero' interval or the best strategies for determining acceptable error margins.

Contextual Notes

Participants acknowledge that the accuracy of numerical methods can vary significantly based on the chosen step sizes and that real-world factors may complicate the relationship between step size and accuracy.

walking
In the Tipler & Mosca textbook we are shown how numerical integration (the Euler's method species of it) is done using a spreadsheet program. The authors then go on to say on page 138 that the accuracy of this program can be estimated by first calculating the values for a time interval of 0.5s, then repeating this for a time interval of 0.25s and comparing the values obtained for the two intervals.
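The comparison the authors describe can be sketched in a few lines of Python. This is only an illustration, not the textbook's actual spreadsheet: the drag model, coefficients, and end time below are made up for the example.

```python
def euler_fall(dt, t_end=10.0, g=9.81, b_over_m=0.1):
    """Euler's method for a fall with quadratic drag: dv/dt = g - (b/m) v^2."""
    x, v = 0.0, 0.0
    for _ in range(round(t_end / dt)):
        x += v * dt                       # advance position with current velocity
        v += (g - b_over_m * v * v) * dt  # advance velocity with current acceleration
    return x, v

x_coarse, v_coarse = euler_fall(0.5)    # first run, time interval 0.5 s
x_fine, v_fine = euler_fall(0.25)       # second run, time interval 0.25 s
print(abs(x_fine - x_coarse) / x_fine)  # fractional disagreement ~ accuracy estimate
```

If the two runs agree closely, the coarser interval was already adequate; if not, the comparison itself tells you the step must shrink further.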

I believe that the second interval is arbitrary and the authors simply chose it as an example. It seems that any interval which is small compared to the first one would be sufficient. I think that the purpose of the second run of the program using the "small" interval is basically to mimic what the values would look like if the interval tends to zero. Thus by comparing the original interval to this "zero" interval, we can estimate the accuracy of the original interval against the "actual" values of the variables.
Am I interpreting this correctly?

Edit: The passage in question is the second paragraph on the second page here (the one starting "but how accurate..."):

https://www.dropbox.com/s/ltkcj6wcayd6m7v/tipler mosc accuracy.pdf?dl=0
 
walking said:
I believe that the second interval is arbitrary and the authors simply chose it as an example. It seems that any interval which is small compared to the first one would be sufficient. I think that the purpose of the second run of the program using the "small" interval is basically to mimic what the values would look like if the interval tends to zero. Thus by comparing the original interval to this "zero" interval, we can estimate the accuracy of the original interval against the "actual" values of the variables.

Hmmm. Wouldn't the 'zero' interval be perfectly accurate for any valid method used? Even if not, I think the point of the 0.25 s interval is to show how accurate the method gets as you decrease the interval. Since we can't actually do a 'zero' interval, the accuracy of different methods is extremely important. If your accuracy goes up by a factor of two every time you halve the interval, while another method's accuracy goes up by a factor of three, then the second method can be run with a longer time interval for the desired accuracy and simulations and calculations performed using it will run faster.

That's my thoughts, but I'm not an expert in this matter.
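The point about how fast accuracy improves per halving can be made concrete. As a sketch (the test problem dy/dt = -y and the choice of methods are mine, not from the thread): plain Euler's error typically shrinks by a factor of about 2 per halving (first order), while a second-order method such as the midpoint rule shrinks by about 4.

```python
import math

def euler_err(h):
    """Integrate dy/dt = -y, y(0) = 1 to t = 1 with Euler; return error vs exp(-1)."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)
    return abs(y - math.exp(-1.0))

def midpoint_err(h):
    """Same problem with the second-order midpoint (RK2) method."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        k = y + 0.5 * h * (-y)  # half-step estimate of y
        y += h * (-k)           # full step using the half-step slope
    return abs(y - math.exp(-1.0))

for h in (0.1, 0.05, 0.025):
    print(h, euler_err(h), midpoint_err(h))
```

Because the midpoint errors shrink faster, that method reaches a given accuracy with a longer step, which is exactly the speed argument made above.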
 
Drakkith said:
Hmmm. Wouldn't the 'zero' interval be perfectly accurate for any valid method used? Even if not, I think the point of the 0.25 s interval is to show how accurate the method gets as you decrease the interval. Since we can't actually do a 'zero' interval, the accuracy of different methods is extremely important. If your accuracy goes up by a factor of two every time you halve the interval, while another method's accuracy goes up by a factor of three, then the second method can be run with a longer time interval for the desired accuracy and simulations and calculations performed using it will run faster.

That's my thoughts, but I'm not an expert in this matter.
I have attached the relevant passage and some context. Does it help?
 
If the difference in the results between 0.25 s and 0.5 s is negligible, then you know that the step size is small enough.

If the difference is large enough to be significant, try a step smaller than 0.25 s and then do the comparison again.

You want the largest step size you can get away with (it makes the program run faster) such that the error is acceptable. Then use 1/10th of that step size to give yourself a safety margin. You have to decide for yourself how much inaccuracy is acceptable.

There are strategies that can minimize the number of trials you have to make, but that may not be important to you.

Be aware that in real life it is common for nonlinearities and uncertainty in coefficient values to contribute more error than the step size.

But for academic purposes, comparison of results with any two step sizes proves the point.
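The recipe above (halve the step until two runs agree, then take a tenth of that step as a safety margin) can be sketched as follows. The model being integrated, dy/dt = -y, is just a stand-in for whatever system you are actually simulating.

```python
def simulate(dt, t_end=5.0):
    """Euler integration of dy/dt = -y from y(0) = 1 (stand-in model)."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)
    return y

def working_step(dt0=0.5, tol=0.01):
    """Halve the step until two successive runs agree within tol (fractional),
    then return 1/10th of that step as the working step size (safety margin)."""
    prev = simulate(dt0)
    dt = dt0
    while True:
        dt /= 2.0
        cur = simulate(dt)
        if abs(cur - prev) <= tol * abs(cur):
            return dt / 10.0
        prev = cur

print(working_step())
```

A tolerance of 1% is an arbitrary choice here; as the post says, how much inaccuracy is acceptable is up to you.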
 
walking said:
I have attached the relevant passage and some context. Does it help?

Yes. I believe you are correct in that the 0.25 s time interval was chosen on a semi-arbitrary basis. Halving the time interval (doubling the number of steps) changed the result by only 0.4% in position and 0.05% in velocity. This likely indicates that the original time interval gave a final value close to the 'true' value.
 
walking said:
I believe that the second interval is arbitrary and the authors simply chose it as an example. It seems that any interval which is small compared to the first one would be sufficient. I think that the purpose of the second run of the program using the "small" interval is basically to mimic what the values would look like if the interval tends to zero.
Yes - it's arbitrary, and possibly the result of experience with the particular type of data being used.
To get a good estimate of the error, repeat the numerical integration with smaller and smaller intervals until the differences settle down within some arbitrary range (the limit as Δx → 0). This is equivalent to comparing the numerical method with the integral-calculus result (for which there may not be a simple analytical answer, of course).
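Watching the differences settle down looks like this in practice. The model (dy/dt = -y with known exact answer exp(-t)) is chosen here only so the limit can be checked; the pattern of shrinking changes is what matters.

```python
import math

def integrate(dt, t_end=1.0):
    """Euler for dy/dt = -y, y(0) = 1; the exact answer is exp(-t_end)."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)
    return y

prev = None
for k in range(6):  # dt = 0.5, 0.25, ..., 0.015625
    dt = 0.5 / 2 ** k
    y = integrate(dt)
    change = "" if prev is None else f"  change = {y - prev:+.6f}"
    print(f"dt = {dt:.6f}  y = {y:.6f}{change}")
    prev = y
print(f"exact: {math.exp(-1.0):.6f}")
# The successive changes shrink by roughly a factor of 2 per halving (a
# first-order method), so once they settle inside your tolerance the
# result has effectively converged to the dt -> 0 limit.
```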
 
