Is Finite Precision Arithmetic Affecting Your Numerical Integration Accuracy?

AI Thread Summary
The discussion centers on the challenges of numerical integration using methods like Simpson's and the trapezium rule, particularly when calculating non-analytical functions. A key point raised is that while reducing the step size (dx) generally improves accuracy, it can paradoxically lead to increased error due to floating point rounding and finite precision arithmetic, especially in 32-bit systems. The conversation highlights that the inherent error of numerical methods decreases with more strips, but finite precision can introduce significant inaccuracies. It is emphasized that using higher-order methods and 64-bit double precision instead of 32-bit floats can mitigate these issues. Additionally, strategies like sorting numbers before addition are suggested to enhance accuracy, particularly in cases with alternating signs. Overall, understanding the impact of finite precision on numerical integration is crucial for achieving reliable results.
dichotomy
We have a basic numerical integration assignment: using Simpson's/trapezium rule to calculate some non-analytical function.

Logic suggests that you use as many "strips" as possible (i.e. as small a value of dx as possible) to estimate the value of the integral. However, I saw a graph some time ago showing how the error can actually start to increase with smaller dx, because of floating-point rounding and the inherent error of the methods. Has anyone seen/got anything like this (for a 32-bit computer)?
 
Using finite arithmetic, making the step size too small does indeed lead to loss of accuracy. The attached plots show how step size affects integration accuracy for a variety of integrators. The problems investigated are a torque-free symmetric top and a circular orbit in a spherically symmetric gravity field.
 

Attachments

  • symmetric_top_integ_error.jpg (15 KB)
  • circular_orbit_integ_error.jpg (15.3 KB)
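The same shape, error falling as the step shrinks and then turning back up once rounding dominates, can be reproduced with a short script. This is my own illustrative sketch (not from the attached plots): it integrates x^2 over [0, 1] with the trapezium rule, once in ordinary 64-bit doubles and once with every partial sum rounded to 32-bit single precision via `struct`.

```python
import struct

def f32(x):
    """Round a double to the nearest IEEE 754 single-precision (32-bit) value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def trapezium(f, a, b, n, round32=False):
    """Composite trapezium rule; optionally round every partial sum to 32 bits."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
        if round32:
            s = f32(s)          # simulate accumulating in a 32-bit float
    return f32(s * h) if round32 else s * h

f = lambda x: x * x             # exact integral over [0, 1] is 1/3
exact = 1.0 / 3.0

for n in (10, 100, 1000, 100000):
    e64 = abs(trapezium(f, 0.0, 1.0, n) - exact)
    e32 = abs(trapezium(f, 0.0, 1.0, n, round32=True) - exact)
    print(f"n = {n:>6}:  error (64-bit) = {e64:.1e}   error (32-bit sums) = {e32:.1e}")
```

With 64-bit sums the error keeps shrinking like h^2; with 32-bit sums it stops improving around n in the thousands and then worsens, because each addition rounds the running sum to only ~7 decimal digits.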
Yep, it's a well-known fact of life.

Often higher-order methods are a good workaround, as DH's graphs illustrate.

Another good rule of thumb is: never do ANY "serious" computing in 32-bit precision.
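To see why higher order helps: a higher-order rule reaches a given accuracy with far fewer strips, so there are fewer rounding-prone additions. A minimal comparison (my own sketch, integrating x^4 over [0, 1] with ten strips):

```python
def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # interior even nodes
    return s * h / 3

f = lambda x: x ** 4                        # exact integral over [0, 1] is 1/5
print(abs(trapezium(f, 0, 1, 10) - 0.2))   # O(h^2) error, roughly 3e-3
print(abs(simpson(f, 0, 1, 10) - 0.2))     # O(h^4) error, roughly 1e-5
```

Same ten strips, but Simpson's rule is already a couple of orders of magnitude closer, so you never need to push n into the regime where rounding takes over.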
 
All of my integration work was done in IEEE double precision: 64-bit doubles. I would never do anything but graphics using floats (32-bit single precision).
 
The OP referred to 32-bit.

I guessed DH used 64-bit, otherwise the errors of 1e-10 would be hard to explain :wink:
 
dichotomy said:
we've a basic numerical integration assignment, using simpsons/trapezium rule to calculate some non analytical function.
It's irrelevant here, but I bet your function is analytic.


Logic suggests that you use as many "strips" as possible (i.e. as small a value of dx as possible) to estimate the value of the integral. However, I saw a graph some time ago showing how the error can actually start to increase with smaller dx, because of floating-point rounding and the inherent error of the methods. Has anyone seen/got anything like this (for a 32-bit computer)?
The increase is not due to the inherent error of the methods -- the inherent error goes to zero as the number of strips increases.

One thing you could try is to sort your numbers before adding them: you usually get more accurate results if you always add the two smallest numbers in your list of things to be added.
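A tiny illustration of why the order matters (my own example, with deliberately extreme magnitudes):

```python
# One large term and ten tiny ones; the true sum is 1 + 1e-15.
terms = [1.0] + [1e-16] * 10

largest_first = 0.0
for t in sorted(terms, reverse=True):
    largest_first += t    # each 1e-16 is below half an ulp of 1.0, so it is lost

smallest_first = 0.0
for t in sorted(terms):
    smallest_first += t   # the tiny terms first combine to ~1e-15, which survives

print(largest_first)      # exactly 1.0 -- the small terms vanished
print(smallest_first)     # slightly above 1.0, as it should be
```

Summing smallest-first lets the small contributions build up to something large enough to register against the big term.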
 
Hurkyl said:
The increase is not due to the inherent error of the methods -- the inherent error goes to zero as the number of strips increases.

This is not true when one uses finite arithmetic, for example the 32- and 64-bit floating point representations (floats and doubles) available in most computer languages. Nasty things happen when (1 + 10^-16) - 1 = 0.
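That failure takes one line to reproduce (the 32-bit rounding helper below is my own, using `struct`):

```python
import struct

# 64-bit doubles: 1e-16 is below half an ulp of 1.0 and vanishes on addition.
print((1.0 + 1e-16) - 1.0)            # 0.0

# 32-bit floats hit the same wall at a much coarser scale, around 1e-8.
f32 = lambda x: struct.unpack('f', struct.pack('f', x))[0]
print(f32(1.0 + 1e-8) - 1.0)          # 0.0
```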
 
Yes, but that's a fault of the floating point arithmetic used, not the approximation method used.
 
Hurkyl said:
Yes, but that's a fault of the floating point arithmetic used, not the approximation method used.

I agree that the intrinsic (infinite precision) error for any numerical integration technique (quadrature or differential equations) must go to zero as the step size goes to zero. This condition is a sine qua non for numerical integration; failing this condition means you don't have an integrator.

However, different numerical quadrature and numerical differential equation integration techniques display different susceptibility to finite precision arithmetic. It is important to know how the use of finite precision arithmetic impacts the accuracy of the results of a numerical integration.

Hurkyl said:
One thing you could try is to sort your numbers before adding them: you usually get more accurate results if you always add the smallest 2 numbers in your list of things to be added.

Sometimes it's the other way around. If the list of terms sorted by magnitude has alternating signs, you usually get more accurate results if you sum the largest terms first.
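A contrived but deterministic example of the alternating-sign case (again my own illustration):

```python
# Alternating signs: the two big terms cancel exactly, leaving a tiny residue.
terms = [1.0, -1.0, 1e-20]            # true sum is 1e-20

largest_first = 0.0
for t in sorted(terms, key=abs, reverse=True):
    largest_first += t                # 1.0 - 1.0 = 0.0 exactly, then + 1e-20

smallest_first = 0.0
for t in sorted(terms, key=abs):
    smallest_first += t               # 1e-20 is absorbed by the first big term

print(largest_first)                  # 1e-20, the correct answer
print(smallest_first)                 # 0.0 -- the residue was lost
```

Largest-first lets the big terms cancel before the tiny residue is added, so nothing is swallowed.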
 