Is Finite Precision Arithmetic Affecting Your Numerical Integration Accuracy?


Discussion Overview

The discussion revolves around the impact of finite precision arithmetic on the accuracy of numerical integration methods, specifically using Simpson's and trapezium rules for non-analytical functions. Participants explore how varying step sizes (dx) affect integration accuracy and the role of floating point representation in these calculations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants suggest that using smaller step sizes (dx) in numerical integration can lead to increased error due to floating point rounding and the limitations of finite precision arithmetic.
  • Others argue that while the inherent error of numerical methods decreases with more strips, finite precision issues can cause unexpected increases in error at very small step sizes.
  • A participant mentions that higher order methods may mitigate some of the accuracy issues associated with finite precision.
  • There is a discussion about the preference for using 64-bit doubles over 32-bit floats for serious computations, as the latter can lead to significant errors.
  • Some participants propose techniques such as sorting numbers before addition to improve accuracy in numerical results.
  • It is noted that different numerical integration techniques may have varying susceptibility to finite precision errors, emphasizing the importance of understanding these impacts.

Areas of Agreement / Disagreement

Participants express differing views on the relationship between step size, inherent error, and finite precision arithmetic. While some agree on the general principles of numerical integration, there is no consensus on the specifics of how finite precision affects accuracy or the best practices to mitigate these effects.

Contextual Notes

Participants mention specific examples and conditions under which finite precision affects integration accuracy, but the discussion does not resolve the complexities or assumptions involved in these scenarios.

dichotomy said:
We've a basic numerical integration assignment, using Simpson's/trapezium rule to calculate some non-analytical function.

Logic suggests that you use as many "strips" as possible (i.e. as small a value of dx as possible) to estimate the value of the integral. However, I've seen a graph some time ago showing how the error can actually start to increase with smaller dx's, because of floating-point rounding and the inherent error of the methods. Anyone seen/got anything like this (for a 32-bit computer)?
 
Using finite arithmetic, making the step size too small does indeed lead to loss of accuracy. The attached plots show how step size affects integration accuracy for a variety of integrators. The problems investigated are a torque-free symmetric top and a circular orbit in a spherically symmetric gravity field.
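The effect is easy to reproduce. Below is a minimal sketch (my own, not the code behind the attached plots; it assumes NumPy is available) that applies the composite trapezium rule to sin(x) on [0, π], whose exact integral is 2, accumulating the strip sum naively left to right so rounding behaves as it would in a simple loop:

```python
import numpy as np

def trapezium(n, dtype):
    """Composite trapezium rule for sin(x) on [0, pi], accumulating
    the strip sum naively (left to right) in the given precision."""
    h = dtype(np.pi) / dtype(n)
    x = np.arange(1, n, dtype=dtype) * h           # interior nodes
    fx = np.sin(x)                                 # stays in `dtype`
    interior = np.cumsum(fx)[-1]                   # sequential running sum
    ends = dtype(0.5) * (np.sin(dtype(0)) + np.sin(dtype(np.pi)))
    return float((ends + interior) * h)            # exact answer is 2

for n in (10**3, 10**7):
    e32 = abs(trapezium(n, np.float32) - 2.0)
    e64 = abs(trapezium(n, np.float64) - 2.0)
    print(f"n={n:>8}  float32 error={e32:.2e}  float64 error={e64:.2e}")
```

In double precision the error keeps shrinking as n grows, while in single precision the accumulated rounding eventually dominates the shrinking truncation error and the total error climbs back up.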
 

Attachments

  • symmetric_top_integ_error.jpg (15 KB)
  • circular_orbit_integ_error.jpg (15.3 KB)
Yep, it's a well-known fact of life.

Often higher-order methods are a good workaround, as DH's graphs illustrate.

Another good rule of thumb is: never do ANY "serious" computing in 32-bit precision.
 
All of my integration work was done in IEEE double precision: 64-bit doubles. I would never do anything but graphics using floats (32-bit).
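The gap between the two formats is easy to quantify via machine epsilon, the spacing between 1.0 and the next representable number (a quick check, assuming NumPy):

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable value
print(np.finfo(np.float32).eps)   # about 1.19e-07 (~7 decimal digits)
print(np.finfo(np.float64).eps)   # about 2.22e-16 (~16 decimal digits)
```

float32 carries roughly 7 significant decimal digits against float64's roughly 16, which is why accumulated rounding shows up so much sooner in single precision.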
 
The OP referred to 32-bit.

I guessed DH used 64-bit, otherwise the errors of 1e-10 would be hard to explain :wink:
 
dichotomy said:
We've a basic numerical integration assignment, using Simpson's/trapezium rule to calculate some non-analytical function.
It's irrelevant here, but I bet your function is analytic.


Logic suggests that you use as many "strips" as possible (i.e. as small a value of dx as possible) to estimate the value of the integral. However, I've seen a graph some time ago showing how the error can actually start to increase with smaller dx's, because of floating-point rounding, and the inherent error of the methods. Anyone seen/got anything like this (for a 32-bit computer)?
The increase is not due to the inherent error of the methods -- the inherent error goes to zero as the number of strips increases.

One thing you could try is to sort your numbers before adding them: you usually get more accurate results if you always add the smallest 2 numbers in your list of things to be added.
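A cheap stand-in for the full smallest-two-first scheme is simply to sum in ascending order of magnitude. A sketch of the difference (my own, assuming NumPy), using the harmonic series 1 + 1/2 + ... + 1/N in float32, where small terms get swallowed whole by a large running total when summed largest-first:

```python
import numpy as np

N = 10**7
terms = 1.0 / np.arange(1, N + 1, dtype=np.float32)   # 1, 1/2, ..., 1/N
# double-precision reference value for comparison
reference = float(np.sum(1.0 / np.arange(1, N + 1, dtype=np.float64)))

descending = float(np.cumsum(terms)[-1])        # largest terms first
ascending = float(np.cumsum(terms[::-1])[-1])   # smallest terms first

print(f"reference  = {reference:.6f}")
print(f"descending = {descending:.6f}  (error {abs(descending - reference):.1e})")
print(f"ascending  = {ascending:.6f}  (error {abs(ascending - reference):.1e})")
```

Summed largest-first, the running total quickly grows so large that the tail terms fall below half a unit in the last place and stop contributing at all; summed smallest-first, the tiny terms get a chance to accumulate before the big ones arrive.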
 
Hurkyl said:
The increase is not due to the inherent error of the methods -- the inherent error goes to zero as the number of strips increases.

This is not true when one uses finite arithmetic, for example the 32- and 64-bit floating point representations (floats and doubles) available in most computer languages. Nasty things happen when [itex](1+10^{-16})-1 = 0[/itex].
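In double precision this is a one-liner to check:

```python
# 1e-16 is below double's unit roundoff (~1.1e-16), so 1.0 + 1e-16
# rounds back to exactly 1.0 and the subtraction yields 0.0
print((1.0 + 1e-16) - 1.0)   # 0.0
# 1e-15 survives the addition, though not exactly
print((1.0 + 1e-15) - 1.0)   # ~1.1e-15, not 1e-15
```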
 
Yes, but that's a fault of the floating point arithmetic used, not the approximation method used.
 
Hurkyl said:
Yes, but that's a fault of the floating point arithmetic used, not the approximation method used.

I agree that the intrinsic (infinite precision) error for any numerical integration technique (quadrature or differential equations) must go to zero as the step size goes to zero. This condition is a sine qua non for numerical integration; failing this condition means you don't have an integrator.

However, different numerical quadrature and numerical differential equation integration techniques display different susceptibility to finite precision arithmetic. It is important to know how the use of finite precision arithmetic impacts the accuracy of the results of a numerical integration.

Hurkyl said:
One thing you could try is to sort your numbers before adding them: you usually get more accurate results if you always add the smallest 2 numbers in your list of things to be added.

Sometimes it's the other way around. If the list of terms sorted by magnitude has alternating signs, you usually get more accurate results if you sum the largest terms first.
 
