Taylor Approximation: Error Calculation Tool?


Homework Help Overview

The discussion concerns the use of Taylor series to approximate differential equations, as in the small-angle approximation to the pendulum, and the error such approximations introduce. The original poster asks whether there is a mathematical tool for assessing how that error evolves as time progresses.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • Some participants point to Lagrange's remainder term as a way to bound the maximum possible error of a Taylor series approximation. Others explore the implications of truncating the right-hand side of a differential equation and how this affects long-term behaviour and error bounds.

Discussion Status

The conversation includes various interpretations of the original question, with participants suggesting that the behavior of error over time may be complex. There is acknowledgment of existing mathematical tools, but also uncertainty about their applicability to the specific long-term behavior of approximations.

Contextual Notes

Participants note that truncating functions can lead to significant changes in the dynamics of the system, potentially losing critical information such as roots and fixed points. The discussion highlights the complexity of analyzing error in the context of differential equations.

aaaa202
Often you use Taylor series to approximate differential equations for easier solving. An example is the small-angle approximation to the pendulum. My question is: is there a mathematical tool for calculating the error you make, as time goes on, with such an approximation? Because I could imagine it gets bigger and bigger with time.
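The small-angle pendulum makes the question concrete. Below is a rough numerical experiment (pure Python, with an arbitrary initial angle and step size of my choosing, not anything prescribed in the thread) comparing the full pendulum [itex]\ddot\theta = -\sin\theta[/itex] with its small-angle truncation [itex]\ddot\theta = -\theta[/itex]:

```python
import math

def rk4_step(f, t, y, h):
    # one classical Runge-Kutta step for y' = f(t, y), with y a list
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pendulum(t, y):     # full equation: theta'' = -sin(theta)
    return [y[1], -math.sin(y[0])]

def small_angle(t, y):  # truncated equation: theta'' = -theta
    return [y[1], -y[0]]

theta0, h, steps = 0.5, 0.01, 5000    # arbitrary choices; t runs to 50
y_full, y_lin = [theta0, 0.0], [theta0, 0.0]
errors = []   # |theta_full - theta_lin| sampled every 5 time units
for n in range(steps):
    t = n * h
    y_full = rk4_step(pendulum, t, y_full, h)
    y_lin = rk4_step(small_angle, t, y_lin, h)
    if n % 500 == 0:
        errors.append(abs(y_full[0] - y_lin[0]))
```

With these numbers the discrepancy between the two solutions is tiny after the first step but of order 10⁻¹ by t ≈ 45, so the error does grow, although for an oscillator it eventually saturates: the two solutions drift out of phase (their periods differ) rather than diverging without bound.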
 
Of course: for a Taylor series truncated at the nth term, [itex](f^{(n)}(x_0)/n!) (x- x_0)^n[/itex], Lagrange's remainder term is [itex](f^{(n+1)}(c)/(n+1)!) (x- x_0)^{n+1}[/itex], where c is a number between [itex]x[/itex] and [itex]x_0[/itex]. It is given in any calculus text, as well as at this website, http://www.millersville.edu/~bikenaga/calculus/tayerr/tayerr.html , and can be used to find the maximum possible error by replacing [itex]f^{(n+1)}(c)[/itex] with an upper bound on the (n+1)st derivative between [itex]x_0[/itex] and [itex]x[/itex] and taking the absolute value.
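As a quick sanity check of that bound (my own example, not from the thread), take the small-angle truncation sin x ≈ x − x³/6. Since the x⁴ Taylor coefficient of sin vanishes, the degree-3 and degree-4 partial sums coincide, so the remainder can be bounded using the fifth derivative, whose absolute value is at most 1:

```python
import math

x = 0.3                                # arbitrary small angle
approx = x - x**3 / 6                  # sin truncated after the cubic term
actual_err = abs(math.sin(x) - approx)
# Lagrange bound: |f^(5)(c)| <= 1 for sin, so |R| <= |x|^5 / 5!
# (valid here because the x^4 coefficient of sin is zero)
bound = abs(x)**5 / math.factorial(5)
```

The actual error is guaranteed to sit below the bound, and for the alternating sine series it sits only just below it.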
 
HallsofIvy said:
Of course: for a Taylor series truncated at the nth term, [itex](f^{(n)}(x_0)/n!) (x- x_0)^n[/itex], Lagrange's remainder term is [itex](f^{(n+1)}(c)/(n+1)!) (x- x_0)^{n+1}[/itex], where c is a number between [itex]x[/itex] and [itex]x_0[/itex]. It is given in any calculus text, as well as at this website, http://www.millersville.edu/~bikenaga/calculus/tayerr/tayerr.html , and can be used to find the maximum possible error by replacing [itex]f^{(n+1)}(c)[/itex] with an upper bound on the (n+1)st derivative between [itex]x_0[/itex] and [itex]x[/itex] and taking the absolute value.

His original question has a more complicated answer, if it has one at all: he wants to know how the error behaves in the long run if he replaces, say, the DE dy/dt = f(y,t) by a simpler DE dy/dt = f0(y,t), where f0 is obtained from f by truncating a Taylor expansion.

I don't know the answer to his question, but I suspect lots of work has been done on problems of that type.
 
Ray Vickson said:
His original question has a more complicated answer, if it has one at all: he wants to know how the error behaves in the long run if he replaces, say, the DE dy/dt = f(y,t) by a simpler DE dy/dt = f0(y,t), where f0 is obtained from f by truncating a Taylor expansion.

I don't know the answer to his question, but I suspect lots of work has been done on problems of that type.

If [itex]\dot x = f(t)[/itex] and one truncates [itex]f(t)[/itex], then there is almost certainly a suitable remainder expression which one can integrate to get a bound on the error.
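For instance (a made-up example, not from the thread): take [itex]\dot x = \sin t[/itex] with x(0) = 0, and truncate the right-hand side to t − t³/6. Integrating the Lagrange remainder bound |R(t)| ≤ t⁵/5! from 0 to T gives an error bound of T⁶/720 on x(T):

```python
import math

T = 1.0                            # arbitrary final time
exact = 1 - math.cos(T)            # x(T) for x' = sin(t), x(0) = 0
approx = T**2/2 - T**4/24          # x(T) for the truncation x' = t - t**3/6
err = abs(exact - approx)
bound = T**6 / 720                 # integral of the remainder bound t**5/120
```

Here the actual error (about 1.36e-3) sits just under the integrated bound (about 1.39e-3), as it must.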

Truncating [itex]\dot x = f(x)[/itex] is much more difficult.

Aside from anything else, truncating a polynomial means that one loses roots, which means one potentially loses fixed points. That alters the topology of the phase space, so that solutions starting from the same point may display radically different long-term behaviour.

One can see this in the case of
[tex]\dot x = x(1-x)[/tex]
subject to [itex]x(0) = 3[/itex] where [itex]x(t) \to 1[/itex] as [itex]t \to \infty[/itex] and
[tex]\dot x = x[/tex]
where [itex]|x(t)| \to \infty[/itex] for any [itex]x(0) \neq 0[/itex].
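Both equations here have closed-form solutions, so the divergence is easy to check directly (a small script of my own, not part of the original posts):

```python
import math

def logistic(t, x0=3.0):
    # exact solution of x' = x(1 - x): x(t) = x0 e^t / (1 - x0 + x0 e^t)
    return x0 * math.exp(t) / (1 - x0 + x0 * math.exp(t))

def truncated(t, x0=3.0):
    # exact solution of the truncation x' = x: x(t) = x0 e^t
    return x0 * math.exp(t)

# the full system settles onto the fixed point x = 1,
# while the truncated system blows up exponentially
settled = logistic(10.0)
blown_up = truncated(10.0)
```

By t = 10 the full system is within about 3e-5 of the fixed point x = 1, while the truncated system has already exceeded 6e4, so no pointwise error bound relating the two can hold for all time.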

There are results such as the Hartman-Grobman theorem, which gives conditions under which a system behaves like its linearization near a fixed point, but it applies only locally (i.e. sufficiently close to the fixed point).
 
