Taylor series, an intuitive view

al_famky

Homework Statement


I read that the Taylor series is a way to approximate a function ##f(x)## graphically, by addition and subtraction.
So say I have ##\frac{1}{1-x}=1+x+x^{2}+x^{3}+\dots+x^{n}+\dots##
Suppose ##x=3##; then the left and right sides of the equation can't possibly equal the same thing, unless I misunderstood again.
I understand that graphically, the curves of the two functions get closer and closer as ##n## approaches infinity... but I can't seem to see that tendency algebraically.
I wonder where I went wrong, or what I'm misunderstanding here...
 
al_famky said:

Homework Statement


I read that the Taylor series is a way to approximate a function ##f(x)## graphically, by addition and subtraction.
So say I have ##\frac{1}{1-x}=1+x+x^{2}+x^{3}+\dots+x^{n}+\dots##
Suppose ##x=3##; then the left and right sides of the equation can't possibly equal the same thing, unless I misunderstood again.
I understand that graphically, the curves of the two functions get closer and closer as ##n## approaches infinity... but I can't seem to see that tendency algebraically.
I wonder where I went wrong, or what I'm misunderstanding here...

What you have there is a geometric series with ratio ##x##. You only have convergence of the series if ##|x|<1##. Otherwise the series diverges and you don't get anything.
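To see that numerically, here is a minimal sketch in plain Python (the name `partial_sum` is just for illustration) tabulating the partial sums at one point inside and one point outside ##|x|<1##:

```python
# Partial sums s_n = 1 + x + x^2 + ... + x^n of the geometric series.
def partial_sum(x, n):
    return sum(x**k for k in range(n + 1))

# Inside the region of convergence (|x| < 1): the sums settle toward
# 1/(1 - x); at x = 0.5 that limit is 2.
for n in (5, 10, 20):
    print(f"x = 0.5, n = {n:2d}: {partial_sum(0.5, n):.6f}")

# Outside (|x| > 1): at x = 3 the sums just keep growing; they never
# approach 1/(1 - 3) = -1/2, which is why the identity fails there.
for n in (5, 10, 20):
    print(f"x = 3,   n = {n:2d}: {partial_sum(3, n)}")
```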
 
To elaborate a bit more: that is the Taylor series about ##x=0## for ##1/(1-x)##. Every Taylor series comes with a radius of convergence (in this case, 1). If you have a Taylor series about ##x=a## with a radius of convergence ##R##, then the approximation gets better and better if ##|x-a|<R##, but fails to give useful results if ##|x-a|>R##.
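A quick numerical sketch of that statement (Python; `taylor_poly` here is just the ##n##-th partial sum of the series above):

```python
# Compare the error |f(x) - P_n(x)| for f(x) = 1/(1 - x) and its Taylor
# polynomials P_n about x = 0, at a point inside and outside R = 1.
def f(x):
    return 1 / (1 - x)

def taylor_poly(x, n):
    return sum(x**k for k in range(n + 1))

for x in (0.5, 1.5):  # |x - 0| < R versus |x - 0| > R
    errs = ", ".join(f"{abs(f(x) - taylor_poly(x, n)):.3g}"
                     for n in (2, 5, 10, 20))
    print(f"x = {x}: errors for n = 2, 5, 10, 20 -> {errs}")
# Inside the radius the errors shrink toward 0; outside they blow up.
```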
 
LCKurtz said:
What you have there is a geometric series with ratio ##x##. You only have convergence of the series if ##|x|<1##. Otherwise the series diverges and you don't get anything.
Of course, I just overlooked the convergence condition... thank you.
 
Office_Shredder said:
To elaborate a bit more: that is the Taylor series about ##x=0## for ##1/(1-x)##. Every Taylor series comes with a radius of convergence (in this case, 1). If you have a Taylor series about ##x=a## with a radius of convergence ##R##, then the approximation gets better and better if ##|x-a|<R##, but fails to give useful results if ##|x-a|>R##.
About the radius of convergence... when I'm calculating, I understand that I can get a solution algebraically, but sometimes I wonder what it would look like on a graph, because I just can't seem to imagine how there would be a boundary to convergence.
Would I be right to assume it behaves like a ##y=x^{4}## graph, where the ends diverge but the middle converges?
 
al_famky said:
Would I be right to assume it behaves like a ##y=x^{4}## graph, where the ends diverge but the middle converges?

I don't really understand your example (how is it diverging at the ends and converging in the middle when it's just a single function?), but the description is correct.

http://www.wolframalpha.com/input/?i=graph+1/(1-x),+1+x,+1+x+x^2+x^3+x^4+x^5+x^6+x^7+on+[-2,2]

I graphed 1/(1-x) and a couple of approximations with Taylor polynomials. Notice that on the interval [-1,1] the high degree polynomial is a much better approximation than the linear graph. But on [1,2] the high order polynomial has gone off the top of the graph and is never coming back - the linear graph is a better approximation (although it's not a very good one). And again on the left side past x=-1 the high degree polynomial shoots off past the bottom of the graph, and for x<-1 the linear approximation is actually better (but again, not a good approximation).
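In case the link ever rots, here is a sketch that draws roughly the same picture (assuming numpy and matplotlib are installed):

```python
# f(x) = 1/(1 - x) against its degree-1 and degree-7 Taylor polynomials
# about x = 0, on [-2, 2], mimicking the Wolfram Alpha plot above.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 1000)
f = 1 / (1 - x)
f[np.abs(1 - x) < 0.05] = np.nan       # mask near the pole at x = 1

p1 = 1 + x                             # linear approximation
p7 = sum(x**k for k in range(8))       # 1 + x + ... + x^7

plt.plot(x, f, label="1/(1-x)")
plt.plot(x, p1, label="1 + x")
plt.plot(x, p7, label="degree-7 Taylor polynomial")
plt.ylim(-10, 10)                      # keep the blow-up near x = 1 in frame
plt.legend()
plt.show()
```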

Basically, what's happening intuitively is that if you have a high degree polynomial, the highest degree term is going to dominate when x is really big. If the coefficients of your Taylor series shrink fast enough (for example, like 1/n!), then the region where the highest degree term dominates keeps getting pushed out farther and farther, so the region where the Taylor polynomials are good approximations grows as the degree gets higher.

If the coefficients don't shrink fast enough, then you still have a region where the polynomials are good approximations, but the region where the highest degree term dominates stays the same, so outside of your radius of convergence the high degree polynomials all shoot off to ##\pm\infty## really fast, regardless of what the function does.
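To make the "1/n!" remark concrete, here's a sketch taking ##e^x## as the standard series whose coefficients are ##1/n!## (the choice of function is mine, not the post's):

```python
# Partial sums at x = 3 for two series: e^x (coefficients 1/n!, infinite
# radius of convergence) versus the geometric series (coefficients all 1,
# radius 1). Only the first one settles down.
import math

def exp_partial(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def geom_partial(x, n):
    return sum(x**k for k in range(n + 1))

x = 3.0
print(f"target: e^3 = {math.exp(3):.6f}")
for n in (5, 10, 20, 40):
    print(f"n = {n:2d}: e^x partial = {exp_partial(x, n):12.6f}, "
          f"geometric partial = {geom_partial(x, n):.3g}")
```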
 
Office_Shredder said:
I don't really understand your example (how is it diverging at the ends and converging in the middle when it's just a single function?), but the description is correct.

http://www.wolframalpha.com/input/?i=graph+1/(1-x),+1+x,+1+x+x^2+x^3+x^4+x^5+x^6+x^7+on+[-2,2]
Oh... I get it, thanks for the graph. I completely misunderstood the definitions of diverge and converge. I thought that converging meant the function approached a finite limit as ##x## approached infinity, because when learning about improper integrals, we also talked about convergence and divergence...
But... does this mean that convergence and divergence only apply when there are two or more functions?
Then how would an improper integral diverge? When I think about convergence, I imagine a curve like ##y=\frac{1}{x}##, whose tail approaches ##y=0##, so we should be able to calculate the area underneath it. But... I just realized that ##\int_1^{\infty}\frac{1}{x}\,dx## diverges.
OK, I don't think I understand the meaning of converge and diverge beyond the formula rules. Is there a way to explain it graphically or intuitively?
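On the integral point, a tiny sketch of why ##\int_1^{\infty}\frac{1}{x}\,dx## diverges even though the curve flattens toward ##y=0## (the lower limit 1 is my choice; the post leaves the bounds implicit). The area accumulated up to ##T## is ##\ln T##, which never stops growing:

```python
# Running area A(T) = integral from 1 to T of dx/x = ln(T): the curve
# y = 1/x approaches 0, but the area under it grows without bound.
import math

for T in (10, 100, 10_000, 10**8):
    print(f"T = {T:>9}: area from 1 to T = {math.log(T):.4f}")
```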
 