# Numerical integration error

1. Jan 20, 2007

### dichotomy

We have a basic numerical integration assignment, using Simpson's/trapezium rule to integrate some non-analytical function.

Logic suggests that you use as many "strips" as possible (i.e. as small a value of dx as possible) to estimate the value of the integral. However, I saw a graph some time ago showing how the error can actually start to increase at smaller dx, because of floating-point rounding and the inherent error of the methods. Has anyone seen/got anything like this (for a 32-bit computer)?
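A minimal sketch of the effect (assuming Python with NumPy; the thread doesn't specify a language): the composite trapezium rule applied to sin(x) on [0, π] (exact value 2), with the accumulation done naively left-to-right so rounding error can grow with the strip count. At small n the truncation error dominates and shrinks like 1/n²; in float32, at large n the rounding error takes over and the float32 result stops improving while the float64 one keeps getting better:

```python
import numpy as np

def trapezium(f, a, b, n, dtype):
    """Composite trapezium rule with all arithmetic kept in the given precision."""
    h = dtype((b - a) / n)
    x = dtype(a) + h * np.arange(n + 1, dtype=dtype)
    y = f(x).astype(dtype)
    # Naive left-to-right accumulation, so rounding error accumulates with n.
    total = dtype(0.5) * (y[0] + y[-1])
    for yi in y[1:-1]:
        total = dtype(total + yi)
    return dtype(total * h)

exact = 2.0  # integral of sin over [0, pi]
for n in (10, 100, 1000, 100_000):
    err32 = abs(float(trapezium(np.sin, 0.0, np.pi, n, np.float32)) - exact)
    err64 = abs(float(trapezium(np.sin, 0.0, np.pi, n, np.float64)) - exact)
    print(f"n={n:7d}  float32 err={err32:.2e}  float64 err={err64:.2e}")
```

Running this shows the float32 error bottoming out well above the float64 error once n is large, which is exactly the "smaller dx eventually hurts" behaviour being asked about.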

2. Jan 20, 2007

### D H

Staff Emeritus
Using finite arithmetic, making the step size too small does indeed lead to loss of accuracy. The attached plots show how step size affects integration accuracy for a variety of integrators. The problems investigated are a torque-free symmetric top and a circular orbit in a spherically symmetric gravity field.

#### Attached Files:

• (attachment, 16.7 KB)
• circular_orbit_integ_error.jpg (16.8 KB)
3. Jan 20, 2007

### AlephZero

Yep, it's a well-known fact of life.

Often higher-order methods are a good workaround, as DH's graphs illustrate.

Another good rule of thumb is: never do ANY "serious" computing in 32-bit precision.

4. Jan 20, 2007

### D H

Staff Emeritus
All of my integration work was done in IEEE double precision: 64-bit doubles. I would never do anything but graphics using floats (32-bit single precision).

5. Jan 20, 2007

### AlephZero

The OP referred to 32-bit.

I guessed DH used 64-bit; otherwise the errors of 1e-10 would be hard to explain.

6. Jan 20, 2007

### Hurkyl

Staff Emeritus
It's irrelevant here, but I bet your function is analytic.

The increase is not due to the inherent error of the methods -- the inherent error goes to zero as the number of strips increases.

One thing you could try is to sort your numbers before adding them: you usually get more accurate results if you always add the smallest two numbers in your list of things to be added.
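As an illustration of why ordering matters (a hypothetical example, not from the thread): in IEEE double precision, adding a million terms of 1e-16 to 1.0 one at a time loses every one of them, because each individual addition rounds straight back to 1.0. Summing the small terms first lets them accumulate into something big enough to register:

```python
# One large value plus many values individually too small to register against it.
data = [1.0] + [1e-16] * 1_000_000   # true sum: 1.0 + 1e-10

def sum_in_order(values):
    total = 0.0
    for v in values:
        total += v
    return total

naive = sum_in_order(data)                   # largest first: every 1e-16 rounds away
smallest_first = sum_in_order(sorted(data))  # small terms accumulate before meeting 1.0
print(naive)           # 1.0 exactly
print(smallest_first)  # ~1.0000000001
```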

7. Jan 20, 2007

### D H

Staff Emeritus
This is not true when one uses finite arithmetic, for example the 32- and 64-bit floating-point representations (floats and doubles) available in most computer languages. Nasty things happen when $(1+10^{-16})-1 = 0$.
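DH's example can be checked directly (Python shown; the thread doesn't name a language). Machine epsilon for IEEE doubles is about 2.2e-16, so 1e-16 vanishes entirely when added to 1, while for 32-bit floats the threshold is far coarser, around 1.2e-7:

```python
import numpy as np

# 64-bit double: 1e-16 is below half an ulp of 1.0, so it is rounded away.
print((1.0 + 1e-16) - 1.0)   # 0.0
print((1.0 + 1e-15) - 1.0)   # a value near 1e-15 survives

# 32-bit float: the same loss already happens at 1e-8.
one = np.float32(1.0)
print((one + np.float32(1e-8)) - one)   # 0.0
```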

8. Jan 20, 2007

### Hurkyl

Staff Emeritus
Yes, but that's a fault of the floating point arithmetic used, not the approximation method used.

9. Jan 22, 2007

### D H

Staff Emeritus
I agree that the intrinsic (infinite precision) error for any numerical integration technique (quadrature or differential equations) must go to zero as the step size goes to zero. This condition is a sine qua non for numerical integration; failing this condition means you don't have an integrator.

However, different numerical quadrature and numerical differential equation integration techniques display different susceptibility to finite precision arithmetic. It is important to know how the use of finite precision arithmetic impacts the accuracy of the results of a numerical integration.

Sometimes it's the other way around. If the list of terms sorted by magnitude has alternating signs, you usually get more accurate results if you sum the largest terms first.
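A contrived example of that situation (hypothetical, not from DH's runs): two huge terms of opposite sign plus many small ones. Summed smallest-magnitude first, the small terms' contribution is wiped out the moment the first huge term arrives; summed largest first, the huge pair cancels exactly and the small terms survive:

```python
# True sum is 100.0: the two huge terms cancel exactly.
data = [1e20, -1e20] + [1.0] * 100

def sum_in_order(values):
    total = 0.0
    for v in values:
        total += v
    return total

# Smallest magnitude first: 100.0 is absorbed into 1e20 and then annihilated.
print(sum_in_order(sorted(data, key=abs)))                 # 0.0
# Largest magnitude first: 1e20 - 1e20 = 0 exactly, then the ones add up cleanly.
print(sum_in_order(sorted(data, key=abs, reverse=True)))   # 100.0
```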