Truncation Errors Explanation?

1. May 5, 2017

Ethan Singer

So I just began a course on Linear Algebra, and was curious about how we can estimate derivatives using centered differences. After a few minutes of research, I found the proof involving something called a truncation error, which led me to the conclusion that when estimating derivatives, the rate of change may determine how accurate the estimate is... so my question is: why?

That is to say, within the mentioned proof, they say it's best to avoid very low values of "h" when estimating derivatives, because if the derivative doesn't change rapidly, the computed difference may be too close to zero... So in summation:

Why is it important to avoid near-zero values in a calculation? (In the sense that when estimating derivatives, if a particular value is too small, errors may ensue.)

And what characterizes a function that changes "too dramatically"?

2. May 5, 2017

BvU

Hi,

Bit hard to answer in general: a concrete example would be easier to comment on.

First comment on 'a few minutes of research' : read on in your text.

Generally this is about computing. In a computer, numbers are represented only up to a certain relative precision, e.g. 10^-6 or 10^-15. For a derivative you need the difference between two numbers that are almost equal (depending on the step size). That means the relative uncertainty in the difference can become intolerably high.
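This trade-off is easy to see numerically. A minimal Python sketch (my own illustration, not from the thread): estimate d/dx sin(x) at x = 1 with a centered difference. The error shrinks as h shrinks — until the two nearly equal function values cancel and rounding error takes over, at which point making h smaller makes the estimate worse.

```python
import math

def centered_diff(f, x, h):
    # Centered-difference estimate of f'(x): (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# Exact answer for comparison: d/dx sin(x) at x = 1 is cos(1).
exact = math.cos(1.0)

# Moderate h: truncation error dominates.  Tiny h: cancellation dominates.
errors = {h: abs(centered_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-5, 1e-13)}
for h, err in errors.items():
    print(f"h = {h:.0e}   error = {err:.2e}")
```

With double precision (~16 significant digits), h around 1e-5 beats both the large step and the tiny one — there is a sweet spot, not a "smaller is always better" rule.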

Functions that change too dramatically have large derivatives. E.g. for a unit step function with its jump inside your [x, x+h] interval, you get 1/h as the estimated derivative.
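A quick sketch of that 1/h behaviour (my own example): place a one-sided difference quotient across the jump of a unit step and shrink h — the estimate is 1/h every time, so it blows up rather than converging.

```python
def step(x):
    # Unit step: 0 for x < 0, 1 for x >= 0
    return 1.0 if x >= 0 else 0.0

def forward_diff(f, x, h):
    # One-sided difference quotient (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

# Center the interval [x, x+h] on the jump at 0: the quotient is (1 - 0)/h = 1/h,
# which grows without bound as h shrinks instead of settling on a derivative.
for h in (0.1, 0.01, 0.001):
    print(h, forward_diff(step, -h / 2, h))
```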

Last edited: May 5, 2017
3. May 5, 2017

Stephen Tashi

Consider the function $f(x) = 0.5x$. If you had a computer that kept numbers to only 3 decimal places, then the approximation for $f'(1)$ using $h = 0.001$ would be $(f(1.000 + 0.001) - f(1.000))/0.001 = 0.000$, because the truncation error makes $(0.5)(1.000 + 0.001)$ indistinguishable from $0.500$.
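This toy machine can be simulated directly. A sketch (my own, assuming truncation to 3 decimal places as described): the two function values truncate to the same number, so the difference quotient collapses to zero.

```python
import math

def trunc3(x):
    # Simulate a machine that keeps only 3 decimal places (truncating)
    return math.floor(x * 1000) / 1000

def f(x):
    # f(x) = 0.5 * x, but the machine only stores 3 decimals of the result
    return trunc3(0.5 * x)

h = 0.001
# f(1.001) = trunc3(0.5005) = 0.500, identical to f(1.000) = 0.500,
# so the difference quotient is zero.
estimate = (f(1.000 + h) - f(1.000)) / h
print(estimate)  # 0.0
```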