
A question about Taylor and Maclaurin series

  1. Aug 31, 2012 #1

    I have a kind of general question. So I understand that the goal of a Taylor polynomial is to approximate a transcendental function using a polynomial. This makes things easier to deal with sometimes. I understand that this works by choosing a polynomial P that seems to behave like the transcendental function f does, at least in a localized area. For example f(c)=P(c), where f is our transcendental function and P is our polynomial. As we add additional constraints, like f'(c)=P'(c) and f''(c)=P''(c), the approximation becomes better and better, until eventually we can come up with a power series that equals the function exactly.

    My question is, WHY is it that as the higher order derivatives match, the function becomes a better and better approximation? I mean, intuitively it makes sense: there are more things that the two functions have in common so it seems natural that a function that has more in common would be a better approximation than one that has less in common. I just don't understand what it is that is making it a better approximation. Why are the higher order derivatives so important?

    I'm not sure if this makes sense. I feel like I understand how it works, I just don't understand why.
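    To make the question concrete, here is a small numerical sketch (plain Python, not from the thread; sin and the particular degrees are chosen purely as an example) showing the approximation error shrink as more derivative-matching terms are included:

```python
import math

def taylor_sin(x, n):
    """Maclaurin polynomial of sin(x), including terms up to degree n."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

# The error at x = 0.5 drops as the degree (number of matched derivatives) grows.
for n in (1, 3, 5, 7):
    print(n, abs(taylor_sin(0.5, n) - math.sin(0.5)))
```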
  3. Aug 31, 2012 #2
    The way I look at it is just the intuition. Wouldn't it make sense that if the Taylor polynomial were as similar as possible to the function at that one point, then it would be a good approximation in the surrounding area?

    Being equal at higher derivatives just increases the level of similarity.
  4. Aug 31, 2012 #3


    Science Advisor

    Nice Question:

    I don't have any actual theorem/result to cite, but I would say that, since f is differentiable at x = c, the change in f is locally linear near c, and f'(c) describes with some precision the line that captures this change. The second derivative gives a more precise description of how f'(x) changes, i.e., how the linear approximation changes, and so on.
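    A tiny numerical illustration of this idea (Python; e^x at c = 0 is chosen purely as an example, since all its derivatives there equal 1): adding the second-derivative term tightens the tangent-line estimate.

```python
import math

c, x = 0.0, 0.3
f0 = f1 = f2 = math.exp(c)               # e^x: every derivative at c equals e^c

linear    = f0 + f1 * (x - c)            # tangent-line approximation
quadratic = linear + f2 * (x - c)**2 / 2 # add the curvature correction

print(abs(linear - math.exp(x)), abs(quadratic - math.exp(x)))
```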
  5. Aug 31, 2012 #4
    Thanks for the responses guys! I think I am seeing that my problem was a fundamental lack of understanding about what exactly higher order derivatives represent. I guess it's been easy for me to go through life just knowing that a derivative measures the rate of change, and not thinking much about or visualizing what a second derivative, or third derivative, and so on, tells us. Looking into it now I can see that, for example, on a 2D graph the second derivative tells us, like you said, how f'(x) changes, i.e., the curvature of the graph at that point. So it makes sense now why a higher order Taylor polynomial approximates a function better: as we keep adding terms, the graph conforms (locally) more and more to the function we are trying to approximate.

    I guess my problem now is that I can visualize a derivative and what it means, and a second derivative and what it means, but is it possible to visualize higher order derivatives? I imagine we get to a point where, instead of thinking "the first derivative means this, the second derivative means that," a general pattern emerges in terms of visualizing things.

    Thanks for the responses it really got me thinking! Also Taylor polynomials are cool!!
  6. Aug 31, 2012 #5


    Science Advisor

    I don't know if this is 100% an explanation, but there is also a result called the Weierstrass approximation theorem, which says that every continuous function on a closed interval can be approximated uniformly by polynomials; in other words, the polynomials are dense in the continuous functions on that interval.
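    One classical constructive proof of that theorem uses Bernstein polynomials. A short Python sketch (the target function |x - 1/2|, which is continuous but not differentiable, and the degrees are just illustrative choices):

```python
import math

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x."""
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)   # continuous, but has a corner at x = 0.5
for n in (4, 16, 64):
    # Worst-case error over a grid of sample points shrinks as n grows.
    err = max(abs(bernstein(f, n, i / 100) - f(i / 100)) for i in range(101))
    print(n, err)
```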
  7. Sep 1, 2012 #6
    A derivation of the Taylor coefficients can be presented as follows. Assume that a function [itex]f(x)[/itex] has a power series in a given interval; in other words, assume it is analytic there. Then we can write
    [tex]f(x)=\sum_{k=0}^{\infty}c_k (x-a)^k[/tex]
    where the [itex]c_k[/itex] are coefficients and a is a constant. Differentiating once, term by term, gives
    [tex]f'(x)=\sum_{k=1}^{\infty}c_k k(x-a)^{k-1}[/tex]
    It can be shown by induction that differentiating n times gives
    [tex]f^{(n)}(x)=\sum_{k=n}^{\infty}c_k \frac{k!}{(k-n)!} (x-a)^{k-n}=c_n n!+\sum_{k=n+1}^{\infty}c_k \frac{k!}{(k-n)!} (x-a)^{k-n}[/tex]
    Setting [itex]x=a[/itex] in the above formula gives [itex]\displaystyle c_n=\frac{f^{(n)}(a)}{n!}[/itex]. Note that we assumed the power series exists, so this derivation applies only on a region in which f is analytic and actually has such a series.
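    The identity c_n = f^(n)(a)/n! can be checked mechanically for a polynomial, i.e., a finite power series. A small Python sketch, with arbitrary made-up coefficients:

```python
import math

def deriv(coeffs):
    """Differentiate a polynomial given as [c_0, c_1, ...] in powers of (x - a)."""
    return [k * c for k, c in enumerate(coeffs)][1:]

coeffs = [3.0, -1.0, 4.0, 1.5, -2.0]   # arbitrary c_k
for n in range(len(coeffs)):
    d = coeffs
    for _ in range(n):
        d = deriv(d)
    # The value of the n-th derivative at x = a is its constant term,
    # and it should equal n! * c_n.
    assert d[0] == math.factorial(n) * coeffs[n]
print("ok")
```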
  8. Sep 1, 2012 #7
    For example, consider the function e^x. All of its derivatives equal 1 at x = 0. When you know that a function's derivative is 1 at a certain point, you can estimate the function by a straight line; say the line is reasonably accurate on an interval h around 0. Now suppose you know the tenth derivative is 1 at zero. Then the 9th derivative, reconstructed from its value at that single point, can be considered reasonably accurate on the interval h. But just as the value at one point was accurate over some interval, the values near the endpoints of h are themselves reasonably accurate within some further interval, so you would expect the 8th derivative to be accurate within a larger interval, and so on.

    Basically, you can consider a single point to be an accurate enough approximation of its antiderivative on an interval; then, since the antiderivative is accurate on that interval, you can construct an accurate antiderivative of the antiderivative on that interval, plus expect accuracy on an additional interval around the endpoints, just as with the first antiderivative, and so on.
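    In the same spirit, a quick check (Python; the degrees 3 and 10 are arbitrary choices) that keeping more terms of the e^x series keeps the approximation accurate over a wider range of x:

```python
import math

def exp_partial(x, n):
    """Sum of the first n+1 terms of the Maclaurin series of e^x."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The degree-10 partial sum stays close to e^x well past where degree 3 degrades.
for x in (0.5, 2.0, 4.0):
    print(x, abs(exp_partial(x, 3) - math.exp(x)),
             abs(exp_partial(x, 10) - math.exp(x)))
```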
  9. Sep 1, 2012 #8

    Stephen Tashi

    Science Advisor

    Maybe it doesn't.

    You could probably invent a smooth function that has sharp curvature at the point x = x0 and, a little further to the right, a graph that becomes asymptotic to the line tangent to the function at x = x0. That would create places where the linear approximation is a better approximation than a higher degree polynomial approximation.

    You did indicate that your focus was on points close to x = x0, so perhaps that's an "unfair" example. But it shows that you'd have to make your question more precise; not for the sake of a formal proof, I mean you have to make it precise just to get a correct intuitive answer.
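    A concrete instance of that phenomenon (using arctan at c = 0 rather than the hypothetical function sketched above): far from the expansion point, the degree-3 Taylor polynomial of arctan does worse than the plain tangent line.

```python
import math

x = 2.0
t1 = x                  # degree-1 Taylor polynomial of arctan at 0
t3 = x - x**3 / 3       # degree-3 Taylor polynomial of arctan at 0

# At x = 2, the cubic term overshoots badly, so t1 is the better estimate.
print(abs(t1 - math.atan(x)), abs(t3 - math.atan(x)))
```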
  10. Sep 1, 2012 #9


    Science Advisor

    Is it possible for a smooth function to have sharp curvature?
  11. Sep 2, 2012 #10

    Stephen Tashi

    Science Advisor

    It can't have a point of non-differentiability, but for this example it just needs to curve away from the tangent line and then curve back to it.

    Some various ways of making the question of the original post precise:

    Let f(x) be a function (from the reals to the reals) that is infinitely differentiable in some open interval O that contains the point x = c.
    Let T[n,x] be the Taylor polynomial of degree n formed from the Taylor series for f(x) expanded about the point x = c.

    Version 1: ("uniformly" pointwise better) There exists an open interval S containing c such that for each pair of non-negative integers M and N with M > N and for each point p in S, |T[M,p] - f(p)| <= |T[N,p] - f(p)|.

    Version 2: (pointwise better) For each pair of non-negative integers M and N with M > N there exists an interval S containing c such that for each point p in S, |T[M,p] - f(p)| <= |T[N,p] - f(p)|.

    Version 3: ("uniformly" least squares better) There exists a closed interval S containing c such that for each pair of non-negative integers M and N with M > N, the integral of (T[M,p] - f(p))^2 dp over S is less than or equal to the integral of (T[N,p] - f(p))^2 dp over S.

    Version 4: (least squares better) For each pair of non-negative integers M and N with M > N there exists a closed interval S containing c such that the integral of (T[M,p] - f(p))^2 dp over S is less than or equal to the integral of (T[N,p] - f(p))^2 dp over S.
    Last edited: Sep 2, 2012
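    Version 2 can be spot-checked numerically for one specific case (Python; f = sin, c = 0, M = 3, N = 1, and S = (-1, 1) are just one instance, not a proof):

```python
import math

def T(n, x):
    """Degree-n Taylor polynomial of sin at c = 0."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

# Check |T[3,p] - sin(p)| <= |T[1,p] - sin(p)| on a grid covering (-1, 1).
ok = all(abs(T(3, p) - math.sin(p)) <= abs(T(1, p) - math.sin(p))
         for p in [i / 100 for i in range(-99, 100)])
print(ok)
```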