
When does a Taylor series not converge to its original function?

  • #1

Homework Statement



When does a Taylor series not converge to its original function? The commonly given example is
[tex]e^{-1/x^2}[/tex]
But I really don't understand how it works. For example, if we find the series around the point x = 1, then

http://www4a.wolframalpha.com/Calculate/MSP/MSP75061a390i474cg906c800004hhd9d3497i4fa6h?MSPStoreType=image/gif&s=58&w=865&h=29 [Broken]

Is this the series? But then the value at the point x = 0 is undefined, since the denominator x cannot be zero. I really don't understand how this works. Thank you!
 

Answers and Replies

  • #2
Maybe a simpler example is:
[itex]\frac{1}{1-x} = \sum_{n=0}^{\infty} x^{n}[/itex]

If you take x = 1.1, the sum doesn't converge at all, yet f(1.1) = 1/(1 - 1.1) = -10.

In this case, the point lies outside the sum's radius of convergence, which is the interval (-1, 1).
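
To see this concretely, here is a minimal Python sketch (my own illustration, not from the original reply): it tracks the partial sums of [itex]\sum x^{n}[/itex] at x = 1.1 and compares them with f(1.1) = -10.

[code]
# Partial sums of the geometric series sum_{n>=0} x^n at x = 1.1,
# a point outside the radius of convergence (-1, 1).
x = 1.1
partial_sum = 0.0
for n in range(51):
    partial_sum += x**n
    if n % 10 == 0:
        print(f"N = {n:2d}: partial sum = {partial_sum:10.2f}")

# The value the series is "supposed" to represent:
print("f(1.1) = 1/(1 - 1.1) =", 1 / (1 - 1.1))
[/code]

The partial sums grow without bound while the function value sits at -10, so the series simply says nothing about f there.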

If you try f(1), f(1) isn't defined. If the function isn't defined at a point, we shouldn't expect its Taylor series to converge there. I think about it this way: suppose f(x) = T(x), where T is its Taylor series, and suppose T converged at a place where f(x) isn't defined (somewhere producing 1/0); then we would have a way of assigning a value to 1/0.

For example, take the Taylor series T(x) of f(x) = 1/x. If T(0) converged to a number, say L, we would have a representation 1/0 = L.

Those are two of the things that can happen; I'm not sure what else.

Also, your Taylor series doesn't look quite right to me.

It should be something like [itex]f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots[/itex]

I don't think you should have 1/x, 1/x^2, etc.; the Taylor series I've seen don't use such terms.

Also, if you're expanding around 1, you should have (x-1), (x-1)^2, ... terms, as in the sketch below.
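
Here is a small sympy sketch (my own check, not from the thread) that builds the Taylor polynomial of [itex]e^{-1/x^2}[/itex] about a = 1 by direct differentiation; the terms come out in powers of (x - 1), with no 1/x anywhere.

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)
a = 1
N = 3  # degree of the Taylor polynomial

# T_N(x) = sum_{n=0}^{N} f^(n)(a) / n! * (x - a)^n
taylor_poly = sum(
    sp.diff(f, x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
    for n in range(N + 1)
)
print(taylor_poly)  # every term carries a power of (x - 1)
[/code]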
 
  • #3
jbunniii

Your Wolfram Alpha link didn't work. (By the way, don't put IMG tags around web links.)

The function [itex]e^{-1/x^2}[/itex] is undefined at [itex]x = 0[/itex], but its limit is zero as [itex]x \rightarrow 0[/itex], so the correct function to use is in fact
[tex]
g(x) = \begin{cases}
e^{-1/x^2} & \text{if }x \neq 0 \\
0 & \text{if }x = 0
\end{cases}
[/tex]
This function has the property that [itex]g^{(n)}(0) = 0[/itex] for all [itex]n[/itex], so the Taylor series around [itex]x = 0[/itex] is identically zero. [itex]x = 0[/itex] is the only problematic point; if you expand the series around [itex]x = 1[/itex], the series converges to [itex]g(x)[/itex] for [itex]|x - 1| < 1[/itex], the radius of convergence being the distance from 1 to the bad point 0.
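
As a sanity check (a sketch of my own, not part of the reply), sympy confirms that the first few derivatives of [itex]e^{-1/x^2}[/itex] vanish in the limit as [itex]x \rightarrow 0[/itex], consistent with [itex]g^{(n)}(0) = 0[/itex]:

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for n in range(1, 5):
    # Each derivative is (a polynomial in 1/x) times exp(-1/x^2);
    # the exponential decay dominates, so each limit prints 0.
    print(f"limit of g^({n}) at 0:", sp.limit(sp.diff(f, x, n), x, 0))
[/code]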
 
Last edited by a moderator:
  • #4
LCKurtz
Science Advisor
Homework Helper
Insights Author
Gold Member
9,535
751
Just to add a little to the comments already written. A Taylor series for a function ##f## having derivatives of all orders at the point ##x=a##,
$$\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n,$$
always converges at ##x=a## to the function's value ##f(a)##. The real question is what happens at ##x## away from the expansion point ##a##. The first example showed that ##f(x)## may fail to equal its series at a value of ##x## because the series fails to converge there; that happens when ##x## is farther from ##a## than the radius of convergence. But in the second example you have a function with derivatives of all orders at ##x=0## whose Taylor series not only converges to 0 for all ##x##, but whose partial sums are all identically 0. Yet the Taylor series in this case equals the function only at ##x=0##. This is interesting because here we have an infinitely smooth function whose Taylor series converges for all ##x## but doesn't equal ##f(x)## anywhere except ##x=0##, its expansion point.
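
A short numeric illustration of that last point (my own sketch, using jbunniii's patched function g from post #3): every partial sum of the Maclaurin series is the zero polynomial, yet g(x) > 0 at every x ≠ 0, so the series represents the function only at the expansion point.

[code]
import math

def g(x):
    """The flat function: exp(-1/x^2) away from 0, patched to 0 at 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in (0.0, 0.5, 1.0, 2.0):
    series_value = 0.0  # every partial sum T_N(x) is identically 0
    print(f"x = {x:3.1f}: g(x) = {g(x):.6f}, series value = {series_value}")
[/code]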
 
