Why Taylor Series works so well for some functions and not for others

s0ft
About a week ago, I learned about linear approximation from a great YouTube video by Adrian Banner; I think the lecture series was based on his book The Calculus Lifesaver. I thought it was a beautiful and powerful concept. Shortly afterwards I also learned about the Taylor series and the general technique of matching the derivatives of a function with those of an approximating polynomial around a point. I played with it a little and was amazed by its success in predicting functions like sines, cosines, and exponentials: for these functions, the polynomial approximation holds for any x. But for others, I found that it doesn't. So why do certain functions like e^x and the trigonometric functions have such closely fitting Taylor approximations, while others don't? Does it have to do with the convergence of the approximating polynomial? Or is there more to it than that?
 
No, even if a function's Taylor series, $\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$, converges for all x, it does not necessarily converge to the function itself (except at a). For example, the function defined by $f(x) = e^{-1/x^2}$ for $x \ne 0$ and $f(0) = 0$ is infinitely differentiable at every x, and all of its derivatives at 0 are equal to 0. So its Taylor series about $a = 0$ is identically 0, while f(x) itself is 0 only at x = 0.
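To see this concretely, here is a minimal Python sketch (the thread has no code, so the language choice is mine) comparing the function against its Taylor series about 0, which is identically zero:

```python
import math

def f(x):
    """The flat function: e^(-1/x^2) for x != 0, defined as 0 at x = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# Every derivative of f at 0 is 0, so the Taylor series about a = 0
# collapses to the zero function -- yet f(x) > 0 for every x != 0.
for x in (0.1, 0.5, 1.0):
    print(f"x = {x}: f(x) = {f(x):.6g}, Taylor series about 0 gives 0")
```

No matter how many terms you take, the "approximation" stays at 0 while the function itself does not.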

Whether or not a function's Taylor series actually converges to the function itself is a genuinely subtle question. Functions for which it does are called "analytic" (sometimes "real analytic," to distinguish them from the same concept for functions of a complex variable, where the definition is the same but the consequences are much stronger).
 
The main idea here is the concept of "poles". A pole is, roughly speaking, a point where your function ends up dividing by zero. The function isn't defined at such a point, and the Taylor series diverges there.

As an example, consider 1/(1+x).
This has a pole at x = -1, where the function diverges, but for |x| < 1 the Taylor series about zero works fine. You can check this by looking at the Taylor series itself, which happens to be the sum of (-x)^n from n = 0 to infinity.
You might think: my pole is at x = -1, but what about x = 1? There's no pole there, and the function equals 1/2. However, the Taylor series becomes 1 - 1 + 1 - 1 + 1 - 1 ... forever, which really doesn't make any sense. Similarly it fails for any |x| > 1.
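Here's a quick Python sketch (my own illustration, not from the thread) of exactly this behaviour, comparing partial sums of the series against 1/(1+x):

```python
def geometric_partial_sum(x, terms):
    # Partial sum of sum_{n=0}^{terms-1} (-x)^n, the Taylor series
    # of 1/(1+x) about 0.
    return sum((-x) ** n for n in range(terms))

for x in (0.5, 0.9, 1.0, 1.5):
    exact = 1.0 / (1.0 + x)
    approx = geometric_partial_sum(x, 50)
    print(f"x = {x}: 1/(1+x) = {exact:.6f}, 50-term partial sum = {approx:.6g}")
```

Inside |x| < 1 the partial sums settle down to 1/(1+x); at x = 1 they just oscillate between 1 and 0, and for x = 1.5 they blow up, even though 1/(1+x) itself is perfectly well behaved there.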

That was just a simple example to illustrate the idea which basically revolves around where an expansion is valid, and in general, your expansion will be valid up until you hit a pole in the complex plane. For nice functions like polynomials, e^x, sines and cosines, you're all good as there aren't any poles, but if you try this for something like tan(x), you'll hit a problem at |x|=pi/2.
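You can watch this happen numerically. Below is a short sketch that uses sympy to build a degree-15 Taylor polynomial of tan(x) about 0 (using sympy is my choice; any computer algebra system would do):

```python
import math
import sympy as sp

x = sp.symbols("x")
# Degree-15 Taylor polynomial of tan(x) about 0.
taylor_poly = sp.series(sp.tan(x), x, 0, 16).removeO()

# pi/2 is about 1.5708. Inside that, the polynomial tracks tan(x);
# past it, the polynomial has nothing to do with the function.
for val in (0.5, 1.0, 1.4, 1.6, 2.0):
    approx = float(taylor_poly.subs(x, val))
    print(f"x = {val}: tan(x) = {math.tan(val):10.4f}, Taylor poly = {approx:10.4f}")
```

For x past pi/2 the polynomial keeps growing smoothly while tan(x) has already jumped to large negative values: the expansion is only valid up to the first pole.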

I hope that helps; it's just a heuristic without going into too many details. Try to think of things in the complex plane: draw a circle around the point you're expanding about, and as long as there are no poles inside that circle, you can Taylor expand up to (but possibly not including) the circle. (For 1/(1+z), you have a circle of radius 1, inside which the Taylor series $\sum (-z)^n$ is valid.) For expansions about poles, there is something called a Laurent expansion, which you could look into if you're interested :)
 
Thanks.
So there is no deeper logic that explains this apparently exact convergence for "very well-behaved" functions like e^x? And is there no other function that the series fits similarly well?
Is it just empirical that for functions other than exp, sine, and cosine (within the domain where the inverse exists, of course), the approximation curve starts to deviate from the actual function beyond a certain interval?
 