In the Tipler & Mosca textbook we are shown how numerical integration (specifically, Euler's method) can be carried out in a spreadsheet program. The authors then go on to say on page 138 that the accuracy of this computation can be estimated by first calculating the values with a time interval of 0.5 s, then repeating the calculation with a time interval of 0.25 s and comparing the values obtained for the two intervals.

I believe that the second interval is arbitrary and the authors simply chose it as an example; any interval that is small compared with the first one would do. I think the purpose of the second run of the program with the "small" interval is essentially to mimic what the values would look like as the interval tends to zero. Thus, by comparing the results from the original interval with this near-"zero" interval, we can estimate how close the original interval comes to the "actual" values of the variables.
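To make this concrete, here is a minimal sketch of the step-halving check. It assumes a falling object with quadratic drag, dv/dt = g − (b/m)v², integrated with Euler's method; the equation and parameter values are illustrative, not necessarily the textbook's exact example.

```python
def euler_velocity(dt, t_end=5.0, g=9.81, b_over_m=0.1):
    """Integrate dv/dt = g - (b/m)*v^2 from v(0) = 0 using Euler's method."""
    v, t = 0.0, 0.0
    while t < t_end - 1e-12:
        v += (g - b_over_m * v * v) * dt  # Euler step: v_{n+1} = v_n + a(v_n)*dt
        t += dt
    return v

v_coarse = euler_velocity(dt=0.5)   # first run, interval 0.5 s
v_fine   = euler_velocity(dt=0.25)  # second run, interval 0.25 s

# The difference between the two runs estimates the error of the
# coarse run: if they agree closely, the 0.5 s interval is adequate.
print(v_coarse, v_fine, abs(v_coarse - v_fine))
```

In this spirit, the fine run stands in for the (unreachable) limit of an infinitesimal interval, and the disagreement between the two runs serves as the accuracy estimate for the coarse one.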

Am I interpreting this correctly?

Edit: The passage in question is the second paragraph on the second page here (the one starting "but how accurate..."):

https://www.dropbox.com/s/ltkcj6wcayd6m7v/tipler mosc accuracy.pdf?dl=0
