Why Taylor Series works so well for some functions and not for others

SUMMARY

The discussion focuses on why Taylor Series approximate certain functions, particularly e^x, sine, and cosine, so well for every x, while failing for others such as f(x) = e^(-1/x^2). It establishes that a function's Taylor series may converge everywhere yet not equal the function itself; functions whose Taylor series do converge to them are called analytic. The conversation concludes that the radius within which a Taylor series converges to the function is determined by the distance from the expansion point to the nearest pole in the complex plane.

PREREQUISITES
  • Understanding of Taylor Series and polynomial approximation
  • Familiarity with the concept of analytic functions
  • Knowledge of poles in complex analysis
  • Basic calculus, including differentiation and convergence concepts
NEXT STEPS
  • Study the properties of analytic functions in detail
  • Learn about poles and their implications in complex analysis
  • Explore Laurent series and their applications in function approximation
  • Investigate the convergence of Taylor Series for various types of functions
USEFUL FOR

Mathematicians, students of calculus and complex analysis, and anyone interested in understanding the limitations and applications of Taylor Series in function approximation.

s0ft:
About a week ago, I learned about linear approximation from a great YouTube video by Adrian Banner; I think the lecture series was based on his book Calculus LifeSaver. I thought it was a truly beautiful and powerful concept. Shortly afterwards, I also got to know the Taylor Series and the general technique of matching the derivatives of a function with those of an approximating polynomial around a point. I played with it a little and was amazed by its success in predicting functions like sines, cosines, and exponentials. For these functions, the polynomial approximation holds for any x. But for other functions, I found this not to be the case. So why do certain functions like e^x and the trigonometric functions have such closely fitting Taylor approximations, while others do not? Does it have to do with the convergence of the approximating polynomial? Or is there more to it than just that?
 
No, even if a function's Taylor series, \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n, converges for all x, it does not necessarily converge to the function itself (except at a). For example, the function f(x) = e^{-1/x^2} for x \ne 0, f(0) = 0, is infinitely differentiable at all x, and all of its derivatives at 0 are equal to 0. So its Taylor series about a = 0 is identically 0, while f(x) is 0 only at x = 0.
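A quick numerical check makes the mismatch concrete (a minimal Python sketch, not from the original reply): away from 0 the function is strictly positive, while every Taylor partial sum about 0 is identically 0.

```python
import math

def f(x):
    # f(x) = e^(-1/x^2) for x != 0, extended by f(0) = 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# All derivatives of f vanish at 0, so the Taylor series about 0 is
# the zero function. Compare it with f itself away from 0:
for x in [0.1, 0.5, 1.0]:
    print(f"x={x}: f(x) = {f(x):.6g}, Taylor series about 0 gives 0")
```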

Whether or not a function's Taylor series actually converges to the function itself is a very complicated question. Such functions are technically called "analytic" functions (sometimes "real analytic", to distinguish them from the same concept for functions of a complex variable, where the definition is the same but the consequences are much more complicated).
 
The main idea here is the concept of "poles". A pole is basically a point where your function ends up dividing by zero. At such a point the function isn't defined, and no Taylor series can converge to the function there.

As an example, consider 1/(1+x).
This has a pole at x = -1, which is where the function diverges, but for |x| < 1 your Taylor series about zero works fine. You can check this by looking at the Taylor series itself, which happens to be the sum of (-x)^n from n = 0 to infinity.
You might think: the pole is at x = -1, but what about x = 1? There isn't a pole there, and the function equals 1/2. However, your Taylor series becomes 1 - 1 + 1 - 1 + 1 - 1... forever, which really doesn't make any sense. Similarly, it fails for any |x| > 1.
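To see this numerically, here is a minimal Python sketch (not from the original reply) comparing partial sums of \sum (-x)^n with 1/(1+x) at points inside and outside the interval of convergence:

```python
def partial_sum(x, terms=50):
    # Partial sum of the Taylor series of 1/(1+x) about 0: sum of (-x)^n.
    return sum((-x) ** n for n in range(terms))

for x in [0.5, 0.9, 1.0, 1.5]:
    print(f"x={x}: partial sum = {partial_sum(x):.6f}, 1/(1+x) = {1/(1+x):.6f}")
```

For x = 0.5 and x = 0.9 the partial sums settle down to 1/(1+x); at x = 1.0 they oscillate between 0 and 1, and at x = 1.5 they blow up.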

That was just a simple example to illustrate the idea which basically revolves around where an expansion is valid, and in general, your expansion will be valid up until you hit a pole in the complex plane. For nice functions like polynomials, e^x, sines and cosines, you're all good as there aren't any poles, but if you try this for something like tan(x), you'll hit a problem at |x|=pi/2.
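As a sanity check, here is a sympy-based sketch (not part of the original reply; the degree 25 is an arbitrary choice) showing that a Taylor polynomial of tan(x) about 0 tracks the function inside |x| < pi/2 but goes badly wrong beyond it:

```python
import sympy as sp

x = sp.symbols('x')
# Taylor polynomial of tan(x) about 0, with terms up to x^25.
poly = sp.series(sp.tan(x), x, 0, 26).removeO()

for pt in [1.0, 1.5, 2.0]:  # pi/2 is roughly 1.5708
    approx = float(poly.subs(x, pt))
    exact = float(sp.tan(pt))
    print(f"x = {pt}: Taylor polynomial = {approx:.4f}, tan(x) = {exact:.4f}")
```

At x = 1.0 the agreement is excellent; at x = 1.5, just inside the radius, convergence is already slow; and at x = 2.0, past the pole at pi/2, the polynomial bears no relation to tan(2) at all.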

I hope that helps; it's just a heuristic without going into too many details. Just try to think of things in the complex plane: you can draw a circle around the point you're expanding about, and as long as there are no poles inside that circle you can Taylor expand up to (but possibly not including) the circle. (For 1/(1+z), you have a circle of radius 1, inside which your Taylor series \sum (-z)^n is valid.) For expansions about poles, there is something called a Laurent expansion which you could look into if you're interested :)
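The circle picture can be checked directly; in this minimal Python sketch (not from the original reply), both sample points are far from the pole at z = -1, but only the one inside the unit circle works:

```python
def geometric_partial(z, terms=200):
    # Partial sum of the Taylor series of 1/(1+z) about 0: sum of (-z)^n.
    return sum((-z) ** n for n in range(terms))

# 0.9j lies inside the circle of radius 1, 1.1j lies outside.
for z in [0.9j, 1.1j]:
    print(f"z = {z}: partial sum = {geometric_partial(z):.4f}, 1/(1+z) = {1/(1+z):.4f}")
```

Even though 1.1j is nowhere near the pole at z = -1, the series still fails there: convergence is governed by the whole circle through the nearest pole, not just by points close to the pole itself.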
 
Thanks.
So there is no deeper logic by which this apparently exact convergence for "very well-behaved" functions like e^x can be explained? And is there no other function that the series fits similarly well?
Is it just empirical that for functions other than exp, sine, and cosine (within the domain where the inverse exists, of course), the approximation curve starts to deviate from the actual function after a certain interval?
 
