Why Taylor Series works so well for some functions and not for others


by s0ft
Tags: approximation, taylor polynomials
s0ft
#1
Mar20-13, 05:04 AM
About a week ago I learned about linear approximation from a great YouTube lecture by Adrian Banner; I believe the lecture series is based on his book The Calculus Lifesaver. I thought it was a beautiful and powerful concept. Shortly afterwards I also got to know the Taylor series and the general technique of matching the derivatives of a function with those of an approximating polynomial around a point. I played with it a little and was amazed by how well it predicts functions like sines, cosines and exponentials: for these functions the polynomial approximation is accurate for any x. But for other functions I found this not to be the case. So why is it that certain functions like e^x and the trigonometric functions have such closely fitting Taylor approximations, while others do not? Does it have to do with the convergence of the approximating polynomial, or is there more to it than just that?
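Just to show the sort of quick check I was doing, here is a minimal Python sketch (the 20-term cutoff is an arbitrary choice) comparing partial sums of the Maclaurin series of sin(x) with the real thing, even well away from x = 0:

[code]
import math

def sin_taylor(x, terms=20):
    # Partial sum of the Maclaurin series of sin:
    # sum over k of (-1)^k * x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Even far from the expansion point x = 0, the partial sums track
# sin(x) closely once enough terms are included.
for x in [0.5, 2.0, 5.0, 10.0]:
    print(f"x = {x:>4}: sin(x) = {math.sin(x):+.8f}, "
          f"20-term partial sum = {sin_taylor(x):+.8f}")
[/code]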
HallsofIvy
#2
Mar20-13, 08:16 AM
No, even if a function's Taylor series, [itex]\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x- a)^n[/itex], converges for all x, it does not necessarily converge to the function itself (except at a). For example, the function [itex]f(x)= e^{-1/x^2}[/itex] for [itex]x\ne 0[/itex], with f(0)= 0, is infinitely differentiable for all x, and all of its derivatives at 0 are equal to 0. So its Taylor series about a= 0 is identically 0, while f(x) itself is 0 only at x= 0.
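To make this concrete, here is a small numerical sketch (Python, purely for illustration) comparing that function with its identically-zero Taylor series about 0:

[code]
import numpy as np

# f(x) = exp(-1/x^2) for x != 0, with f(0) = 0.
# f is infinitely differentiable everywhere, yet every derivative at 0
# is 0, so its Taylor (Maclaurin) series about 0 is identically zero.
def f(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nonzero = x != 0
    out[nonzero] = np.exp(-1.0 / x[nonzero] ** 2)
    return out

# Away from 0 the function is clearly positive, but the Taylor series
# about 0 predicts 0 no matter how many terms are kept.
xs = np.array([0.1, 0.5, 1.0, 2.0])
for xv, fv in zip(xs, f(xs)):
    print(f"x = {xv}: f(x) = {fv:.6g}, Taylor series value = 0")
[/code]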

Whether or not a function's Taylor series actually converges to the function itself is a very complicated question. Functions for which it does are called "analytic" (sometimes "real analytic" to distinguish them from the same concept for functions of a complex variable, where the definition is the same but the consequences are much deeper).
Marioeden
#3
Mar20-13, 02:05 PM
The main idea here is the concept of "poles". A pole is basically a point where your function ends up dividing by zero; the function isn't defined there, and your Taylor series breaks down once you reach it.

As an example, consider 1/(1+x).
This has a pole at x = -1, which is where the function blows up, but for |x| < 1 the Taylor series about zero works fine. You can check this by looking at the Taylor series itself, which happens to be the sum of (-x)^n from n = 0 to infinity.
You might think: the pole is at x = -1, but what about x = 1? There's no pole there and the function equals 1/2. However, your Taylor series becomes 1 - 1 + 1 - 1 + 1 - 1 ... forever, which really doesn't make any sense. Similarly it fails for any |x| > 1.
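Here is a quick numerical illustration (just a Python sketch; the 50-term cutoff is arbitrary) of how the partial sums behave inside and outside |x| < 1:

[code]
import numpy as np

def geometric_partial_sum(x, terms=50):
    # Partial sum of the Maclaurin series of 1/(1+x): sum of (-x)^n
    n = np.arange(terms)
    return np.sum((-x) ** n)

# Inside |x| < 1 the partial sum is close to 1/(1+x);
# at x = 1 it oscillates, and for |x| > 1 it blows up.
for x in [0.5, -0.5, 0.9, 1.0, 2.0]:
    exact = 1.0 / (1.0 + x)
    approx = geometric_partial_sum(x)
    print(f"x = {x:>4}: 1/(1+x) = {exact:.6f}, 50-term partial sum = {approx:.4g}")
[/code]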

That was just a simple example to illustrate the idea, which basically revolves around where an expansion is valid: in general, your expansion will be valid up until you hit a pole in the complex plane. For nice functions like polynomials, e^x, sines and cosines you're all good, as there aren't any poles, but if you try this for something like tan(x) you'll hit a problem at |x| = pi/2.
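A rough check of the tan(x) case (a Python/SymPy sketch; degree 15 is an arbitrary choice):

[code]
import sympy as sp

x = sp.symbols('x')
# Degree-15 Maclaurin polynomial of tan(x).
poly = sp.series(sp.tan(x), x, 0, 16).removeO()

# Inside |x| < pi/2 the polynomial converges to tan(x) (more slowly
# near the edge); past the singularity at pi/2 it fails completely.
for val in [0.5, 1.0, 1.4, 1.6, 2.0]:
    exact = float(sp.tan(val))
    approx = float(poly.subs(x, val))
    print(f"x = {val}: tan(x) = {exact:+.4f}, degree-15 polynomial = {approx:+.4f}")
[/code]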

I hope that helps; it's just a heuristic without going into too many details. Try to think of things in the complex plane: you can draw a circle around the point you're expanding about, and as long as there are no poles inside that circle you can Taylor expand up to (but possibly not including) the circle. (For 1/(1+z), you have a circle of radius 1 inside which the Taylor series, the sum of (-z)^n, is valid.) For expansions about poles there is something called a Laurent expansion, which you could look into if you're interested :)

s0ft
#4
Mar21-13, 12:40 AM

Thanks.
So there is no deeper logic by which this apparently exact convergence for "very well-behaved" functions like e^x can be explained? And is there no other function that the series fits similarly well?
Is it just empirical that, for functions other than exp and sines and cosines (within the domain where an inverse exists, of course), the approximating curve starts to deviate from the actual function beyond a certain interval?

