# Taylor Series


## Main Question or Discussion Point

So I'm studying Taylor series (I work ahead of my calc class so that when we cover topics I already know them and they are easier to study), and tonight I found formulas for the Taylor and Maclaurin series, which I used to prove Euler's identity. However, I don't really know much about Taylor or Maclaurin series. I have a couple of questions...

(i) How was the formula $$T(x)=\sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k+R_n$$ derived?

(ii) Is there a definite result for the remainder term $R_n$? (I have seen Cauchy's and Lagrange's forms; I'm just wondering if they are different forms of the same thing or if it's still open.)

(iii) What exactly is a Taylor series, and what is it used for? I only know that the formula produces a series of terms which, summed, reconstruct the original function.

(iv) If anyone could offer up some interesting but difficult problems relating to Taylor series, Maclaurin series, or power series, I would appreciate it.

Josh


HallsofIvy
Homework Helper
kreil said:
So I'm studying Taylor series (I work ahead of my calc class so that when we cover topics I already know them and they are easier to study), and tonight I found formulas for the Taylor and Maclaurin series, which I used to prove Euler's identity. However, I don't really know much about Taylor or Maclaurin series. I have a couple of questions...
(i) How was the formula $$T(x)=\sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k+R_n$$ derived?
One approach is this: the constant function (i.e., the simplest polynomial) equal to f at a is, of course, f(a). The linear function equal to f at a and with the same slope there is f(a) + f'(a)(x-a). The quadratic function with the same value, first derivative, and second derivative at a is
$$f(a)+ f'(a)(x-a)+ \frac{f''(a)}{2}(x-a)^2.$$
In general, the nth Taylor polynomial of f at x = a is the simplest polynomial having the same value and first n derivatives as f at a, and the Taylor series is the power series whose value and all derivatives at x = a equal those of f. Since the derivatives of a function incorporate all "local" information about f, that is essentially saying that the Taylor series for f at x = a is "locally" the same as f.

kreil said:
(ii) Is there a definite result for the remainder term $R_n$?

I'm not sure what you are asking here! The Cauchy and Lagrange forms are two expressions for the same remainder; nothing about it is open.

kreil said:
(iii) What exactly is a Taylor series (what is it used for)?

It is an extension of the idea of approximating a function by polynomials. Polynomials are much easier to work with than more general functions, and Taylor series are typically easier to work with as well. By the way, it is NOT always the case that the Taylor series "constructs the original function". There exist functions whose Taylor series converges for all x but NOT to the original function (except at x = a); the standard example is $f(x)= e^{-1/x^2}$ with f(0) = 0, whose Maclaurin series is identically zero. Functions for which the Taylor series converges to the original function in some neighborhood of a are called "analytic" functions. Almost all the functions we use are analytic, precisely because they are "nice".
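The "same value and same derivatives" construction is easy to see numerically. A minimal sketch in Python, using $e^x$ at a = 0 (convenient because every derivative of $e^x$ is $e^x$, so $f^{(k)}(0)= 1$ for all k):

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x about a = 0: sum of x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Raising the degree shrinks the error near the expansion point.
x = 1.5
for n in (2, 5, 10):
    print(n, abs(taylor_exp(x, n) - math.exp(x)))
```

Each successive polynomial agrees with $e^x$ to one more derivative at 0, so the error at a fixed nearby point drops rapidly with the degree.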
kreil said:
(iv) If anyone could offer up some interesting but difficult problems relating to Taylor series, Maclaurin series, or power series, I would appreciate it.

I'm not sure what you would consider "interesting but difficult".

kreil
Gold Member
Well, how exactly would I go about constructing a graph of sin(x) using Taylor polynomials? (I know sin(x) is analytic.) If I expand sin(x) as a Taylor series, am I allowed to change the a for each term? I.e., at x = 0, sin(x) would be approximated by a first-degree polynomial, at x = 1 by a 3rd-degree polynomial, at x = 2 by a 5th-degree polynomial, etc. I'm just not sure if that's how it is done...

mathwonk
Science Advisor
Homework Helper
A standard interesting, not entirely trivial exercise using Taylor series: compute the Taylor series for e^x, use it to give an infinite series formula for e, and use that to show that e is irrational.

apmcavoy
mathwonk said:
A standard interesting, not entirely trivial exercise using Taylor series: compute the Taylor series for e^x, use it to give an infinite series formula for e, and use that to show that e is irrational.

$$e=\sum_{n=0}^{\infty}\frac{1}{n!}$$
Hmm... How is this used to show e is irrational?

shmoe
Science Advisor
Homework Helper
apmcavoy said:
Hmm... How is this used to show e is irrational?

It's an exercise... Start by assuming that e is rational, e = p/q, and try to come up with a contradiction using the series.

mathwonk
Science Advisor
Homework Helper
Show, for all n, that n!e is not an integer.

kreil said:
However, I don't really know much about Taylor or Maclaurin series. I have a couple of questions...
(i) How was the formula derived?

Here is an article I wrote a while back on the derivation of the Taylor series. It doesn't cover the error term, but it will show you where that odd-looking formula came from.
http://planetmath.org/?op=getobj&from=collab&id=69

Gib Z
Homework Helper
For mathwonk's question, here it goes: $$n!=n\cdot (n-1)!$$ For every natural number n, n! is another natural number, and $n!\cdot e$ is not a natural number, since the product of a natural number and a transcendental number cannot be a natural number. I know this proof is very, very weak; in fact I don't think I did anything at all...

HallsofIvy
Science Advisor
Homework Helper
kreil said:
Well, how exactly would I go about constructing a graph of sin(x) using Taylor polynomials? (I know sin(x) is analytic.) If I expand sin(x) as a Taylor series, am I allowed to change the a for each term? I'm just not sure if that's how it is done...

If y = sin(x), then y' = cos(x), y'' = -sin(x), etc. In particular, taking a = 0 (no, you cannot change a for each term: a Taylor series is calculated at one particular value of a. Often that value is a = 0, in which case it is also called the Maclaurin series. You could, for example, find the second-degree Taylor polynomial for sin(x) at a = 0, then again at some small positive a, then again at a slightly larger a, and patch them together to get a good "piecewise quadratic" approximation to sin(x).) we have y(0) = 0, y'(0) = 1, y''(0) = 0, y'''(0) = -1, etc. All even-order derivatives are 0 at 0, and the (2n+1)st derivative there is $(-1)^n$. So y = x is the "best" linear approximation to y = sin(x) near x = 0, $y= x- \frac{1}{3!}x^3$ is the best cubic approximation, and $y= x- \frac{1}{3!}x^3+ \frac{1}{5!}x^5$ is the best 5th-degree approximation, etc.

Here, by the way, is a way to solve the initial value problem
$$\frac{d^2y}{dx^2}+ x\frac{dy}{dx}+ y^2= 0,\qquad y(0)= 1,\quad y'(0)= 2.$$
Setting x = 0 in the equation,
$$\frac{d^2y}{dx^2}(0)+ 0\cdot 2+ 1^2= 0,$$
so y''(0) = -1. Differentiating the equation with respect to x,
$$\frac{d^3y}{dx^3}+ x\frac{d^2y}{dx^2}+ \frac{dy}{dx}+ 2y\frac{dy}{dx}= 0,$$
and setting x = 0,
$$\frac{d^3y}{dx^3}(0)+ 0\cdot(-1)+ 2+ 2(1)(2)= 0,$$
so y'''(0) = -6. Continuing, we can calculate the derivatives to any desired order and so find the Taylor polynomial of any desired degree (and, in principle, the whole Taylor series). From the simple calculations above,
$$y(x)= 1+ 2x- \frac{1}{2!}x^2- \frac{6}{3!}x^3= 1+ 2x- \frac{x^2}{2}- x^3$$
is an approximation to third degree.
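The hand computation above is easy to double-check with exact rational arithmetic: substituting the cubic $1+ 2x- \frac{x^2}{2}- x^3$ into $y''+ xy'+ y^2$ should leave a residual with no constant or linear term, since the cubic matches the true solution's derivatives through third order at 0. A minimal sketch in plain Python, representing polynomials as coefficient lists (lowest degree first):

```python
from fractions import Fraction as F

def deriv(p):
    """Derivative of a polynomial given as [c0, c1, c2, ...]."""
    return [F(k) * c for k, c in enumerate(p)][1:]

def mul(p, q):
    """Product of two coefficient-list polynomials."""
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(*ps):
    n = max(map(len, ps))
    return [sum((p[k] if k < len(p) else F(0)) for p in ps) for k in range(n)]

p = [F(1), F(2), F(-1, 2), F(-1)]  # 1 + 2x - x^2/2 - x^3
# residual = p'' + x*p' + p^2; constant and linear terms cancel exactly,
# and the first surviving error term is quadratic.
residual = add(deriv(deriv(p)), mul([F(0), F(1)], deriv(p)), mul(p, p))
print(residual[:3])
```

Running this shows the residual's constant and linear coefficients are both zero, confirming y''(0) = -1 and y'''(0) = -6 above.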

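HallsofIvy's sin(x) approximations above are easy to tabulate numerically. A small sketch of the Maclaurin polynomials of increasing degree, showing the error at x = 2 shrink:

```python
import math

def taylor_sin(x, degree):
    """Maclaurin polynomial of sin(x) up to the given odd degree:
    x - x^3/3! + x^5/5! - ... (only odd powers, alternating signs)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(degree // 2 + 1))

# Higher degree -> a wider interval of good agreement around a = 0.
for d in (1, 3, 5, 9):
    print(d, abs(taylor_sin(2.0, d) - math.sin(2.0)))
```

This is the single-expansion-point approach: one value of a (here a = 0), with accuracy improved by raising the degree rather than by moving a.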
Gib Z
Homework Helper
Could anyone tell me how I would go about solving mathwonk's problem?

morphism
Homework Helper
It's basically the proof of the irrationality of e. Suppose n!e is an integer. Then write n!e = (n!/0! + n!/1! + n!/2! + ... + n!/n!) + (n!/(n+1)! + n!/(n+2)! + ...). Since the expression in the first parentheses is clearly an integer, the rest of the expression must be as well, i.e.
1/(n+1) + 1/(n+1)(n+2) + 1/(n+1)(n+2)(n+3) + ...
is an integer.

On the other hand, that expression is bounded above by
1/(n+1) + 1/(n+1)^2 + 1/(n+1)^3 + ... = 1/(n+1) * 1/(1 - 1/(n+1)) = 1/n

But for n ≥ 1 that bound gives 1/n ≤ 1, so our expression above is strictly less than 1. Since it is positive as well, we have our contradiction: an integer lying strictly between 0 and 1.
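morphism's bound can be illustrated numerically: truncating the tail $\sum_{k>n} n!/k!$ after many terms (so the computed value slightly undershoots the true tail) gives a quantity strictly between 0 and 1/n, exactly as the argument requires. A sketch with Python's exact fractions; the truncated comparison is illustrative, not a proof:

```python
from fractions import Fraction
import math

def tail(n, terms=60):
    """Partial sum of n!/k! for k = n+1 .. n+terms (undershoots the full tail)."""
    nf = math.factorial(n)
    return sum(Fraction(nf, math.factorial(k)) for k in range(n + 1, n + 1 + terms))

for n in (1, 2, 5, 10):
    t = tail(n)
    print(n, float(t), 1 / n)       # the tail stays below the 1/n bound
    assert Fraction(0) < t < Fraction(1, n)
```

For n = 1 the (full) tail is e - 2 ≈ 0.718 < 1, and the margin below 1/n only widens as n grows.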