Studying Taylor and Maclaurin Series

kreil
So I'm studying Taylor series (I work ahead of my calc class so that when we cover topics I already know them and they are easier to study) and tonight I found a formula for Taylor series and Maclaurin series, and I used them to prove Euler's identity. However, I don't really know much about Taylor or Maclaurin series. I have a couple of questions...

(i) How was the formula T(x)=\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n+R_n derived?

(ii) Is there a definite result for the remainder term R_n? (I have seen Cauchy's and Lagrange's; I am just wondering if they are different forms of the same thing or if it's still open.)

(iii) What exactly is a Taylor series (what is it used for)? I only know that the formula produces a series of terms which, when summed, reconstruct the original function.

(iv) If anyone could offer up some interesting but difficult problems relating to Taylor series, Maclaurin series, or power series, I would appreciate it.

Josh
 
kreil said:
So I'm studying Taylor series (I work ahead of my calc class so that when we cover topics I already know them and they are easier to study) and tonight I found a formula for Taylor series and Maclaurin series, and I used them to prove Euler's identity. However, I don't really know much about Taylor or Maclaurin series. I have a couple of questions...
(i) How was the formula T(x)=\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n+R_n derived?
One approach is this: the constant function (i.e. simplest polynomial) equal to f(x) at a is, of course, f(a). The linear function (i.e. simplest polynomial) equal to f(x) and with the same slope as f at a is f'(a)(x-a)+ f(a). The quadratic function (i.e. simplest polynomial) equal to f(x) and with the same first and second derivatives at a is \frac{f''(a)}{2}(x-a)^2+ f'(a)(x-a)+ f(a). In other words, the nth Taylor polynomial of f at x= a is the simplest polynomial having the same value and first n derivatives as f at a. The Taylor series is the power series having value and all derivatives at x= a equal to those of f. Since the derivatives of a function incorporate all "local" information about f, that is essentially saying that the Taylor series for f, at x= a, is "locally" the same as f.

kreil said:
(ii) Is there a definite result for the remainder term R_n? (I have seen Cauchy's and Lagrange's; I am just wondering if they are different forms of the same thing or if it's still open.)
I'm not sure what you are asking here! The Cauchy and Lagrange forms are two different but equivalent expressions for the same remainder term; nothing about it is open.

kreil said:
(iii) What exactly is a Taylor series (what is it used for)? I only know that the formula produces a series of terms which, when summed, reconstruct the original function.
It is an extension of the idea of approximating a function by polynomials. Polynomials, and hence Taylor series, are much easier to work with than more general functions.
By the way, it is NOT always the case that Taylor series "construct the original function". There exist functions whose Taylor series converge for all x but NOT to the original function (except at x= a). Functions for which the Taylor series converges to the original function (in some neighborhood of a) are called "analytic" functions. Almost all the functions we use are analytic precisely because they are "nice".

kreil said:
(iv) If anyone could offer up some interesting but difficult problems relating to Taylor series, Maclaurin series, or power series, I would appreciate it.
Josh
I'm not sure what you would consider "interesting but difficult".
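This matching-of-derivatives construction is easy to check numerically. A minimal sketch in Python (my own illustration, not from the thread; the helper name `taylor_poly` is made up): build the degree-n Taylor polynomial of e^x about a = 0 from the derivative values f^(k)(0) = 1 and watch the error at x = 1 shrink as n grows.

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate sum_k derivs[k]/k! * (x-a)^k, the Taylor polynomial
    built from the derivative values derivs[k] = f^(k)(a)."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs))

# For f(x) = e^x, every derivative at a = 0 equals 1.
for n in (2, 5, 10):
    approx = taylor_poly([1.0] * (n + 1), 0.0, 1.0)
    print(n, abs(approx - math.e))  # error shrinks rapidly with n
```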
 
Well, how exactly would I go about constructing a graph of sin(x) using Taylor series polynomials? (I know sin(x) is analytic.) If I expand sin(x) using a Taylor series, am I allowed to change the a for each term? I.e. at x=0 sin(x) would be approximated by a first degree polynomial, at x=1 a 3rd degree polynomial, at x=2 a 5th degree polynomial, etc...

I'm just not sure if that's how it is done...
 
Standard interesting, not entirely trivial exercise using Taylor series:

Compute the Taylor series for e^x, use it to give an infinite series formula for e, and use that to show that e is irrational.
 
mathwonk said:
Standard interesting, not entirely trivial exercise using Taylor series:

Compute the Taylor series for e^x, use it to give an infinite series formula for e, and use that to show that e is irrational.
e=\sum_{n=0}^{\infty}\frac{1}{n!}

Hmm... How is this used to show e is irrational?
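As a side check of the series itself (my own sketch, not from the thread): the partial sums of \sum 1/n! converge to e extremely fast, which is exactly what makes the tail estimates in the irrationality argument work.

```python
import math

# Partial sums of e = sum_{n>=0} 1/n! converge very quickly.
partial = 0.0
for n in range(15):
    partial += 1.0 / math.factorial(n)

# The degree-14 partial sum already agrees with math.e to ~12 digits.
print(partial, math.e)
```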
 
apmcavoy said:
Hmm... How is this used to show e is irrational?

It's an exercise :smile: ...start by assuming that e is rational, e=p/q and try to come up with a contradiction using the series.
 
Show, for all n, that n!e is not an integer.
 
kreil said:
However, I don't really know much about taylor or maclaurin series. I have a couple questions...

(i) How was the formula T(x)=\sum_{n=0}^\infty \frac{f^n(a)}{n!}(x-a)^n+R_n derived?
Here's an article I wrote a while back on the derivation of Taylor series. It doesn't have anything about errors, but it'll nevertheless show you where that odd-looking formula for Taylor series came from.
http://planetmath.org/?op=getobj&from=collab&id=69
 
For mathwonk's question, here it goes:
n!=n\cdot (n-1)!
\forall n\in\mathbb{N}: n! is another natural number.
\forall n\in\mathbb{N}: n!\cdot e is not a natural number, since the product of a natural number and a transcendental number cannot be a natural number. I know this proof is very, very weak; in fact I don't think I did anything at all...
 
kreil said:
Well, how exactly would I go about constructing a graph of sin(x) using Taylor series polynomials? (I know sin(x) is analytic.) If I expand sin(x) using a Taylor series, am I allowed to change the a for each term? I.e. at x=0 sin(x) would be approximated by a first degree polynomial, at x=1 a 3rd degree polynomial, at x=2 a 5th degree polynomial, etc...

I'm just not sure if that's how it is done...

If y= sin(x) then y'= cos(x), y''= -sin(x), etc. In particular, taking a= 0 (No, you cannot change a for each term: the Taylor series is calculated at a particular value of a. Often that value is a= 0, in which case it is also called the Maclaurin series. You could, for example, find the second degree Taylor polynomial for sin(x) at a= 0, then again at a= some small positive value, then again at a= some slightly larger value, and patch them together to get a good "piecewise quadratic" approximation to sin(x).) we have y(0)= 0, y'(0)= 1, y''(0)= 0, y'''(0)= -1, etc. We note that all even derivatives are 0 at 0, and all odd ones are plus or minus 1 (in particular the (2n+1)st derivative at 0 is (-1)^n).
y= x is the "best" linear approximation to y= sin(x) near x= 0, y= x- (1/3!)x^3 is the best cubic approximation to y= sin(x) near x= 0, y(x)= x- (1/3!)x^3+ (1/5!)x^5 is the best 5th degree approximation, etc.
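Those successively better approximations are easy to see numerically. A sketch (my own, not from the thread; `sin_taylor` is a made-up helper name): evaluate the degree-1, 3, and 5 Maclaurin polynomials of sin at x = 1 and compare with math.sin.

```python
import math

def sin_taylor(x, degree):
    """Maclaurin polynomial of sin: x - x^3/3! + x^5/5! - ... up to `degree`."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(degree // 2 + 1))

x = 1.0
for deg in (1, 3, 5):
    # The error drops sharply with each added odd-degree term.
    print(deg, abs(sin_taylor(x, deg) - math.sin(x)))
```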

Here, by the way, is a way to solve the initial value problem
\frac{d^2y}{dx^2}+ x\frac{dy}{dx}+ y^2= 0
with y(0)= 1, y'(0)= 2.
We know from the initial conditions that y(0)= 1, y'(0)= 2. Then
\frac{d^2y}{dx^2}(0)+ 0(2)+ 1= 0
so y"(0)= -1. Differentiating the equation with respect to x,
\frac{d^3y}{dx^3}+ x\frac{d^2y}{dx^2}+ \frac{dy}{dx}+ 2y\frac{dy}{dx}= 0
and, setting x= 0,
\frac{d^3y}{dx^3}(0)+ 0(-1)+ 2+ 2(1)(2)= 0
so y'''(0)= -6.

Continuing, we can calculate all derivatives to any desired order and so find the Taylor polynomials to any desired order (and, theoretically, the Taylor series). From the simple calculations above,
y(x)= 1+ 2x- (1/2!)x^2- 6(1/3!)x^3= 1+ 2x- x^2/2- x^3
is an approximation to third degree.
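The cubic approximation above can be sanity-checked against a direct numerical integration of the same initial value problem. A sketch (my own; the RK4 stepper is the standard textbook scheme, not anything from the thread): integrate y'' = -x y' - y^2 with y(0)=1, y'(0)=2 and compare with 1 + 2x - x^2/2 - x^3 at a small x.

```python
# Classical RK4 on the first-order system u' = v, v' = -x*v - u^2,
# i.e. the equation y'' + x*y' + y^2 = 0 with u = y, v = y'.
def rhs(x, u, v):
    return v, -x * v - u * u

def rk4(x_end, steps):
    x, u, v = 0.0, 1.0, 2.0          # initial conditions y(0)=1, y'(0)=2
    h = x_end / steps
    for _ in range(steps):
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = rhs(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = rhs(x + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return u

x = 0.1
cubic = 1 + 2*x - x**2/2 - x**3      # the third-degree Taylor polynomial
print(rk4(x, 100), cubic)            # agree closely near x = 0
```

The discrepancy is of order x^4, as expected for a third-degree Taylor polynomial.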
 
Could anyone tell me how I would go about solving mathwonk's problem?
 
It's basically the proof of the irrationality of e. Suppose n!e is an integer. Then write n!e = (n!/0! + n!/1! + n!/2! + ... + n!/n!) + (n!/(n+1)! + n!/(n+2)! + ...). As the expression in the first parentheses is clearly an integer, we can conclude that the rest of the expression is an integer as well, i.e. that
1/(n+1) + 1/(n+1)(n+2) + 1/(n+1)(n+2)(n+3) + ...
is an integer.

On the other hand, that expression is bounded above by
1/(n+1) + 1/(n+1)^2 + 1/(n+1)^3 + ... = 1/(n+1) * 1/(1 - 1/(n+1)) = 1/n

But n is a positive integer, so 1/n ≤ 1, which means our expression above is less than 1. Since it's positive as well, we have our contradiction (in particular, we would have an integer lying strictly between 0 and 1).
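Both key facts in that argument, that n! times the head of the series is an integer and that n! times the tail lies strictly between 0 and 1/n, can be checked with exact rational arithmetic. A sketch (my own, using Python's `fractions` module; the truncation at k = n + 60 stands in for the infinite tail):

```python
from fractions import Fraction
from math import factorial

n = 5

# n! times the head of the series sum_{k=0}^{n} 1/k!: an integer, as claimed.
head = sum(Fraction(factorial(n), factorial(k)) for k in range(n + 1))
assert head.denominator == 1

# n! times a long truncation of the tail: positive, but below 1/n.
tail = sum(Fraction(factorial(n), factorial(k)) for k in range(n + 1, n + 60))
print(head, float(tail), 1 / n)
```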
 
