When does a Taylor series not converge to its original function?

  • Thread starter: dawn_pingpong
  • Tags: Function, Series
SUMMARY

The discussion centers on when a Taylor series fails to converge to its original function, with \( e^{-\frac{1}{x^2}} \) as the standard example. Two failure modes come up: a series may simply diverge at points outside its radius of convergence, or it may converge everywhere yet still disagree with the function. For \( e^{-1/x^2} \), extended by the value 0 at \( x = 0 \), every derivative at 0 vanishes, so the Taylor series around \( x = 0 \) is identically zero and equals the function only at the expansion point. The conversation also emphasizes that a Taylor series should not be expected to converge to the function's value at points where the function itself is undefined.

PREREQUISITES
  • Understanding of Taylor series and their mathematical formulation
  • Knowledge of limits and continuity in calculus
  • Familiarity with the concept of radius of convergence
  • Basic differentiation and function behavior analysis
NEXT STEPS
  • Study the concept of radius of convergence for Taylor series
  • Explore examples of functions with Taylor series that do not converge to the original function
  • Learn about the properties of infinitely differentiable functions
  • Investigate the implications of undefined points in function analysis
USEFUL FOR

Students and educators in calculus, mathematicians exploring series convergence, and anyone interested in advanced mathematical analysis of functions and their approximations.

dawn_pingpong

Homework Statement

When does a Taylor series not converge to its original function? The commonly given example is
$$e^{-1/x^2}$$
But I really don't get how it works. For example, if we find the series around the point ##x = 1##, then

[broken Wolfram Alpha image of the series expansion]

Is this the series? But then the value at the point ##x = 0## is undefined, since the denominator cannot be zero. I really don't understand how this works. Thank you!
 
Maybe a simpler example is:
$$\frac{1}{1-x} = \sum_{n=0}^\infty x^n$$
If you take ##x = 1.1##, the sum doesn't converge at all, but ##f(1.1) = 1/(1 - 1.1) = -10##.

In this case the point is outside the sum's interval of convergence, which is ##(-1, 1)##; the radius of convergence is 1.
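To make this concrete, here is a quick numerical sketch (my own illustration in plain Python, not from the thread) comparing partial sums of the geometric series at a point inside and a point outside the interval of convergence:

```python
def geometric_partial_sum(x, n_terms):
    # Partial sum of the geometric series: 1 + x + x^2 + ... + x^(n_terms - 1).
    return sum(x**n for n in range(n_terms))

for x in (0.5, 1.1):
    closed_form = 1.0 / (1.0 - x)  # what 1/(1 - x) actually equals
    for n_terms in (10, 50, 100):
        s = geometric_partial_sum(x, n_terms)
        print(f"x={x}, N={n_terms}: partial sum = {s:.4f}, 1/(1-x) = {closed_form:.4f}")
```

At ##x = 0.5## the partial sums settle down to 2, matching ##1/(1-x)##; at ##x = 1.1## they blow up even though the closed form gives ##-10##.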

If you try ##f(1)##, it isn't defined. If the function isn't defined at a point, then we shouldn't expect its Taylor series to converge there. I think about it this way: if we had ##f(x) = T(x)##, where ##T## is its Taylor series, and ##T## converged at places where ##f(x)## isn't defined (such as a point involving 1/0), then we would have a way of computing 1/0.

For example: if we have the Taylor series of ##f(x) = 1/x## as ##T(x)##, and we find that ##T(0)## converges to a number, say ##L##, we would have a representation ##1/0 = L##.
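As a sanity check (again my own sketch): the Taylor series of ##1/x## around ##a = 1## is ##\sum_{n=0}^\infty (-1)^n (x - 1)^n##, and at ##x = 0## every term equals 1, so the partial sums just count upward instead of producing a value for 1/0:

```python
# Taylor series of 1/x around a = 1, evaluated at the bad point x = 0.
# Term n is (-1)^n * (x - 1)^n; at x = 0 this is (-1)^n * (-1)^n = 1,
# so the partial sums grow without bound: no value for 1/0 falls out.
x = 0.0
for n_terms in (1, 5, 10, 100):
    s = sum((-1)**n * (x - 1)**n for n in range(n_terms))
    print(f"N={n_terms}: partial sum = {s}")
```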

These are two of the things that can happen; I'm not sure what else.

Also, looking at your Taylor series, it doesn't look quite right to me. It should be something like
$$f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots$$
I don't think you should have ##1/x##, ##1/x^2##, etc.; Taylor series (at least the ones I've seen) don't use negative powers.

Also, if you're expanding around 1, you should have ##(x - 1)##, ##(x - 1)^2##, ... terms.
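If you want to see what the expansion around ##x = 1## actually looks like, here is a small sketch using sympy (assuming it is available); it builds the degree-4 Taylor polynomial directly from the definition:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)
a = 1  # expansion point

# T(x) = sum_{n=0}^{4} f^{(n)}(a)/n! * (x - a)^n
T = sum(sp.diff(f, x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
        for n in range(5))
print(T)  # a polynomial in (x - 1): no 1/x or 1/x^2 terms appear
```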
 
dawn_pingpong said:

Homework Statement

When does a Taylor series not converge to its original function? The commonly given example is
$$e^{-1/x^2}$$
But I really don't get how it works. For example, if we find the series around the point ##x = 1##, then

[broken Wolfram Alpha image of the series expansion]

Is this the series? But then the value at the point ##x = 0## is undefined, since the denominator cannot be zero. I really don't understand how this works. Thank you!

Your Wolfram Alpha link didn't work. (By the way, don't put IMG tags around web links.)

The function ##e^{-1/x^2}## is undefined at ##x = 0##, but its limit is zero as ##x \rightarrow 0##, so the correct function to use is in fact
$$g(x) = \begin{cases} e^{-1/x^2} & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$$
This function has the property that ##g^{(n)}(0) = 0## for all ##n##, so the Taylor series around ##x = 0## is identically zero. ##x = 0## is the only problematic point; if you expand the series around ##x = 1##, the series converges to ##g(x)## for ##|x - 1| < 1##, since the distance from 1 to the singularity at 0 sets the radius of convergence.
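A numerical illustration of why every derivative vanishes (my own sketch, not from the post): ##g(x)## goes to 0 faster than any power of ##x##, which is what squeezes every difference quotient to 0:

```python
import math

def g(x):
    # The piecewise function from the post: e^{-1/x^2}, patched with g(0) = 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# g(x) / x^n -> 0 as x -> 0 for every fixed n, which forces g^(n)(0) = 0.
for n in (1, 2, 5, 10):
    for x in (0.5, 0.2, 0.1):
        print(f"n={n}, x={x}: g(x)/x^n = {g(x) / x**n:.3e}")
```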
 
Just to add a little to the already written comments. A Taylor series for a function ##f## having all derivatives about the point ##x = a##,
$$\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x - a)^n,$$
always converges when ##x = a## to the function's value ##f(a)##. The real question is what happens at ##x## away from the expansion point ##a##.

One example showed that ##f(x)## may fail to equal its series at a value of ##x## because the series fails to converge there; that happens when ##x## is farther than the radius of convergence from ##a##. But in the second example you have a function ##f(x)## with derivatives of all orders at ##x = 0## whose Taylor series not only converges to 0 for all ##x## but has partial sums that are all identically 0. Yet the Taylor series in this case equals the function only at ##x = 0##. This is interesting because here we have an infinitely smooth function whose Taylor series converges for all ##x## but doesn't equal ##f(x)## anywhere except its expansion point ##x = 0##.
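To drive that last point home, one more tiny sketch (my own illustration): the Maclaurin series of ##g## is identically 0, so it converges at every ##x##, yet it matches ##g## only at the expansion point:

```python
import math

def g(x):
    # e^{-1/x^2} extended by g(0) = 0, as in the posts above.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Every partial sum of the Maclaurin series is the zero polynomial, so the
# "series value" is 0 at every x; it agrees with g only at x = 0.
for x in (0.0, 0.1, 0.5, 1.0, 2.0):
    print(f"x={x}: g(x) = {g(x):.6g}, Maclaurin series value = 0")
```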
 
