# Taylor approximation

1. Dec 16, 2012

### aaaa202

Often you use Taylor series to approximate differential equations to make them easier to solve. An example is the small-angle approximation to the pendulum. My question is: is there a mathematical tool for calculating the error such an approximation makes as time goes on? Because I could imagine it gets bigger and bigger with time.

2. Dec 17, 2012

### HallsofIvy

Staff Emeritus
Of course: for a Taylor series truncated at the nth term $(f^{(n)}(x_0)/n!) (x- x_0)^n$, Lagrange's remainder term is $(f^{(n+1)}(c)/(n+1)!) (x- x_0)^{n+1}$, where $c$ is some number between $x_0$ and $x$. It is given in any calculus text, as well as at this website: http://www.millersville.edu/~bikenaga/calculus/tayerr/tayerr.html. It can be used to find the maximum possible error: replace $f^{(n+1)}(c)$ with an upper bound on the $(n+1)$st derivative between $x_0$ and $x$ and take the absolute value.
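As a quick sketch of how that bound works in practice (my own example, not from the posts above): approximate $\sin x$ by its degree-3 Taylor polynomial $x - x^3/6$ about $x_0 = 0$. Since every derivative of $\sin$ is bounded by 1, the Lagrange remainder gives $|R_3(x)| \le |x|^4/4!$, and we can check the actual error against it:

```python
import math

def taylor_sin3(x):
    # Degree-3 Taylor polynomial of sin about x0 = 0
    return x - x**3 / 6

def lagrange_bound(x, n=3, deriv_max=1.0):
    # |R_n(x)| <= M * |x - x0|^(n+1) / (n+1)!  where M bounds |f^(n+1)| on the interval
    return deriv_max * abs(x) ** (n + 1) / math.factorial(n + 1)

x = 0.5
actual_error = abs(math.sin(x) - taylor_sin3(x))
bound = lagrange_bound(x)
print(actual_error, bound)  # the actual error stays below the bound
```

Note this bounds the error of the *function* approximation at a fixed $x$; it says nothing yet about how error accumulates when the truncated function is fed into a differential equation, which is the harder question raised below.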

Last edited by a moderator: May 6, 2017
3. Dec 17, 2012

### Ray Vickson

His original question has a more complicated answer, if there is one at all: he wants to know how the error behaves in the long run if he replaces, say, the DE $dy/dt = f(y,t)$ with a simpler DE $dy/dt = f_0(y,t)$, where $f_0$ is obtained from $f$ by truncating a Taylor expansion.

I don't know the answer to his question, but I suspect lots of work has been done on problems of that type.

4. Dec 17, 2012

### pasmith

If $\dot x = f(t)$ and one truncates $f(t)$, then there is almost certainly a suitable remainder expression which one can integrate to get a bound on the error.

Truncating $\dot x = f(x)$ is much more difficult.

Aside from anything else, truncating a polynomial means that one loses roots, which means one potentially loses fixed points. That alters the topology of the phase space, so that solutions starting from the same point may display radically different long-term behaviour.

One can see this in the case of
$$\dot x = x(1-x)$$
subject to $x(0) = 3$ where $x(t) \to 1$ as $t \to \infty$ and
$$\dot x = x$$
where $|x(t)| \to \infty$ for any $x(0) \neq 0$.
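The divergence between the full and truncated systems can be seen numerically; here is a minimal sketch using a fixed-step forward Euler integrator (my own illustration of the example above, with step size chosen small enough for this problem):

```python
def euler(f, x0, t_end, dt=1e-3):
    # Fixed-step forward Euler for the autonomous ODE dx/dt = f(x)
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x)
        t += dt
    return x

# Full system: dx/dt = x(1 - x), x(0) = 3 -> solution decays to the fixed point 1
full = euler(lambda x: x * (1 - x), x0=3.0, t_end=10.0)

# Truncated system: dx/dt = x, same initial condition -> solution blows up like 3*e^t
truncated = euler(lambda x: x, x0=3.0, t_end=10.0)

print(full, truncated)  # full is near 1; truncated is enormous
```

No pointwise remainder bound on $f$ can rescue this: dropping the $-x^2$ term removed the fixed point at $x = 1$, so the two trajectories separate without bound.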

There are results such as the Hartman-Grobman theorem, which gives conditions under which a system behaves like its linearization near a fixed point, but it only applies locally (i.e. sufficiently close to the fixed point).
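To illustrate that local agreement (a sketch of my own, reusing the logistic example from above, for which the exact solution is known): near the fixed point $x^* = 1$ of $\dot x = x(1-x)$ we have $f'(1) = -1$, so the linearization is $\dot u = -u$ with $u = x - 1$. Starting close to the fixed point, the two solutions track each other:

```python
import math

def nonlinear(x0, t):
    # Exact solution of dx/dt = x(1 - x): x(t) = x0 / (x0 + (1 - x0) * e^{-t})
    return x0 / (x0 + (1 - x0) * math.exp(-t))

def linearized(x0, t):
    # Linearization about x* = 1: u' = -u, so x(t) ~ 1 + (x0 - 1) * e^{-t}
    return 1 + (x0 - 1) * math.exp(-t)

for t in (0.0, 1.0, 2.0, 5.0):
    print(t, nonlinear(1.05, t), linearized(1.05, t))  # nearly identical near x* = 1
```

Start instead at $x_0 = 3$, far from the fixed point, and the linearized formula is badly wrong at $t = 0$ already: the approximation is strictly local, exactly as the theorem warns.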