
Taylor Series to Approximate Functions

  1. Jul 7, 2012 #1
I get the many proofs behind it and all of the mechanics of how to use it. What I don't get is why it works.
    What was the thought process of Brook Taylor when he devised it? I get that each new term is added to the previous ones to better approximate the y value of the original function. But why is it that the coefficient in front of x^n is the nth derivative of the function divided by n factorial? How does that make sense? I know it works, but it seems like magic to me, and I can't help but hate that.
     
  3. Jul 7, 2012 #2
My calculus professor said the same thing; he thought it was amazing that local information, like all the derivatives at a single point, could give all the global information about the function.

Here's how you might see why the coefficients should be the derivatives.

    Take [itex]y = f(x) = ax^{2} + bx + c[/itex]. Then [itex]y'(x) = 2ax + b[/itex] and [itex]y''(x) = 2a[/itex], so
    [itex]y(0) = c[/itex], [itex]y'(0) = b[/itex], [itex]y''(0) = 2a[/itex]. Solving for the coefficients: [itex]c = y(0)[/itex], [itex]b = y'(0)[/itex], and [itex]a = y''(0)/2[/itex], which is exactly [itex]f^{(n)}(0)/n![/itex] for n = 0, 1, 2.

    So if a function can be represented as a power series (an infinite polynomial), then its derivatives match up with the coefficients in just this way.
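
    If you want to check this mechanically, here is a minimal Python sketch (assuming the sympy library is available; the setup is my own, not from the post above):

    [code]
# Minimal sympy check: the coefficient of x^n in a polynomial
# is recovered by f^(n)(0)/n!
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
f = a*x**2 + b*x + c

for n in range(3):
    coeff = sp.diff(f, x, n).subs(x, 0) / sp.factorial(n)
    print(f"coefficient of x^{n}: {coeff}")
# prints c, b, a: the original coefficients, in order
    [/code]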

That's the "algebraic" thinking. Here's some "geometric" thinking. The linear approximation gives the straight line that best fits f near the point, that is,

    [itex]f(x) \approx L(x) = f(0) + f'(0)(x - 0).[/itex]

    A closer fit than a line is a second-degree polynomial, that is, a parabola; call it [itex]T_2[/itex]:

    [itex]f(x) \approx T_2(x) = f(0) + f'(0)(x - 0) + \frac{f''(0)}{2}(x - 0)^{2}.[/itex]

    A cubic polynomial gives a closer fit still, and so on.

    Now let the degree n go to infinity. If f is "analytic", then the Taylor polynomials converge to f. Most elementary functions (trig, exp, etc.) are analytic.
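
    To see the convergence concretely, here is a small numerical sketch (my own illustration, plain Python): the partial sums of the Taylor series of e^x at 0, evaluated at x = 1, home in on e.

    [code]
# Partial sums of the Taylor series of e^x, evaluated at x = 1;
# every derivative of e^x at 0 is 1, so the nth term is x^n / n!
import math

x = 1.0
partial_sum = 0.0
for n in range(11):
    partial_sum += x**n / math.factorial(n)
    print(f"T_{n}(1) = {partial_sum:.10f}")
print(f"e       = {math.e:.10f}")
    [/code]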

In fact, most functions are not analytic (for instance, discontinuous functions), even if we restrict attention to infinitely differentiable functions. Take [itex]f(x) = e^{-1/x^{2}}[/itex] (if we define f(0) = 0, it becomes infinitely differentiable). It turns out that all of its derivatives at the origin are zero. So if it were analytic, we would have [itex]f(x) = f(0) + f'(0)(x - 0) + \cdots = 0[/itex], a contradiction.
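
    You can see numerically how badly the Taylor polynomials fail here (a small sketch of my own): every Taylor polynomial of this f at 0 is the zero polynomial, yet f itself is visibly nonzero away from 0.

    [code]
# f(x) = e^(-1/x^2), patched with f(0) = 0; all of its Taylor
# polynomials at 0 are identically zero, so they miss f entirely
import math

def f(x):
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

for x in [0.5, 0.2, 0.1]:
    print(f"x = {x}: f(x) = {f(x):.3e}, Taylor polynomial = 0")
    [/code]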

Analytic functions are studied in more detail in a subject called complex analysis, which is a very bizarre subject with a very strange collection of facts; I used to think of it as the black magic subject. I think the wildness (or lack thereof) can be understood a little intuitively: it is the study of a very small collection of functions between two planes which are conformal, that is, right angles are mapped to right angles (well, almost, except where the derivative is zero, but even there the angles are still mapped in a fairly restricted way).
     
  4. Jul 8, 2012 #3
In addition to the previous poster (who did a very good job), here is how my own understanding of Taylor polynomials developed:

I actually love the idea of Taylor polynomials because in a way they are set up like "it would make sense that this would approximate that function, so let's see if it does" (of course the actual development was much more rigorous).

The idea is that if you have an equation that is equal to the function at a given point, and whose derivatives are equal to every derivative of that function at the same point, then this equation should be able to predict the function with good accuracy in the region around the given point.

Imagine you have a function f(x) that has the value a at 0, and f'(x) has the value b at 0.
    You start by defining a polynomial p(x) such that [itex]p(x) = f(0)[/itex], or more generally [itex]p(x) = a[/itex].

    Then you say you want p'(x) to equal f'(x) at 0.
    So setting up that equation: [itex]p'(x) = b[/itex]
    Then you antidifferentiate, choosing the constant of integration so that p(0) = f(0), to get [itex]p(x) = bx + f(0)[/itex] (just basic antidifferentiation).
    Remembering that you can switch f(0) with a, you get the first-degree Taylor polynomial. Repeating the trick one level deeper, starting from [itex]p''(x) = f''(0)[/itex] and antidifferentiating twice, gives [itex]p(x) = \frac{f''(0)}{2}x^{2} + bx + a[/itex], the second-degree Taylor polynomial. The 2 (and in general the n!) appears because each antidifferentiation of [itex]x^{n}[/itex] divides by n + 1.

    Do you see why assuming the arbitrary derivative and antidifferentiating backwards gives you the Taylor polynomial?
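
    Here is that construction carried out mechanically (a sketch under my own naming, assuming sympy): start from the constant top derivative and antidifferentiate down, fixing each constant of integration so that p and f agree through the nth derivative at 0.

    [code]
# Build the degree-n Taylor polynomial of f at 0 by repeated
# antidifferentiation: start with p^(n)(x) = f^(n)(0), then each
# integration step picks the constant so that p^(k)(0) = f^(k)(0)
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)
n = 4

p = sp.diff(f, x, n).subs(x, 0)            # the constant f^(n)(0)
for k in range(n - 1, -1, -1):
    p = sp.integrate(p, x) + sp.diff(f, x, k).subs(x, 0)

print(sp.expand(p))   # 1 - x**2/2 + x**4/24, the 4th-degree Taylor polynomial of cos
    [/code]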
     
  5. Jul 8, 2012 #4

HallsofIvy (Staff Emeritus, Science Advisor)

I'm not sure what you mean by "why it works". Works for what? I suspect that what you are marveling at simply is not true. The set of all functions that can be approximated arbitrarily well by their Taylor polynomials, that is, the functions that are equal to their Taylor series, is a very, very small part of the set of all possible functions.

For one thing, the set of all continuous functions is comparatively a very small set. The set of all differentiable functions, the set of all twice-differentiable functions, ..., the set of all infinitely differentiable functions, are each far smaller than the previous set.

And even for infinitely differentiable functions it is NOT true that they can be approximated arbitrarily closely by their Taylor polynomials. For example, the function [itex]f(x) = e^{-1/x^{2}}[/itex] if x is not 0, with f(0) = 0, is infinitely differentiable, but all of its derivatives are 0 at x = 0. That is, all of its Taylor polynomials are identically 0 and so cannot approximate f, no matter how large an n you use.

What is true is that the functions that can be approximated arbitrarily closely by Taylor polynomials (they are called "analytic functions") are very useful, and so we work with them much more than with other functions.
     