Why the Taylor Series has a Factorial Factor


Discussion Overview

The discussion revolves around the presence of factorial factors in the Taylor series expansion of functions. Participants explore the mathematical reasoning behind the inclusion of the \( \frac{1}{n!} \) term in the series, examining its implications for function reconstruction and convergence properties.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants propose that the factorial terms arise from the derivation of the Taylor series, particularly noting that the \( n! \) comes from the nth derivative of \( x^n \).
  • One participant describes a method involving integration by parts to illustrate how the factorials emerge in the Taylor series derivation.
  • Another participant discusses the conditions under which a function can be reconstructed from its derivatives, suggesting that if the nth derivative is bounded by M (and the lower-order derivatives and the function value vanish at the point), the function's growth is limited by \( \frac{M}{n!} t^n \).
  • Some participants argue that while the Taylor series can represent a function, it does not necessarily equal the function in all cases, citing examples like \( e^{-\frac{1}{x^2}} \), whose Taylor series about 0 is identically zero and therefore agrees with the function only at that point.
  • A later reply emphasizes the importance of using symbols other than equality to express the relationship between a function and its Taylor series, suggesting a more nuanced equivalence.
  • Another participant highlights that even if a function is infinitely differentiable, it does not guarantee that the Taylor series converges to the function itself, except at the point of expansion.

Areas of Agreement / Disagreement

Participants express multiple competing views regarding the relationship between a function and its Taylor series. While some agree on the derivation and implications of the factorial term, others emphasize the limitations and exceptions where the Taylor series does not equal the function.

Contextual Notes

Limitations include the dependence on the behavior of functions at specific points and the conditions under which the Taylor series converges to the function. The discussion also highlights the need for careful consideration of the definitions and assumptions involved in the analysis.

matematikuvol
Why does the Taylor series have a factorial ##!## factor?
[tex]f(x)=f(0)+xf'(0)+\frac{x^2}{2!}f''(0)+...[/tex]
Why do we have that ##\frac{1}{n!}## factor?
 
matematikuvol said:
Why does the Taylor series have a factorial ##!## factor?
[tex]f(x)=f(0)+xf'(0)+\frac{x^2}{2!}f''(0)+...[/tex]
Why do we have that ##\frac{1}{n!}## factor?

Just take the nth derivative of the Taylor series expression and evaluate it at 0. The constant term of that derivative is the nth derivative of f at 0. The ##n!## comes from the nth derivative of ##x^n##, which is the constant ##n!##.
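
To spell that out, write ##f(x)=\sum_k a_k x^k## and differentiate n times: every term below ##x^n## vanishes, every term above keeps a factor of x, so at x = 0 only one term survives:

[tex]\left.\frac{d^n}{dx^n}\left(a_n x^n\right)\right|_{x=0} = n!\,a_n = f^{(n)}(0) \quad\Rightarrow\quad a_n = \frac{f^{(n)}(0)}{n!}[/tex]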
 
matematikuvol said:
Why does the Taylor series have a factorial ##!## factor?
[tex]f(x)=f(0)+xf'(0)+\frac{x^2}{2!}f''(0)+...[/tex]
Why do we have that ##\frac{1}{n!}## factor?

The factorial terms appear in the derivation of the Taylor series.

For instance, a derivation of the Taylor series around zero starts with the equation

[tex]f(x) - f(0) = \int^{1}_{0}f'((1-t)x)\,x\,dt[/tex]

Integrating by parts, this integral also equals

[tex]f'((1-t)x)\,x\,t\,\Big|^{1}_{0} + \int^{1}_{0}t\,f''((1-t)x)\,x^{2}\,dt[/tex]

where the first term equals ##f'(0)x##.

Integrating by parts again, the integral term becomes

[tex]f''((1-t)x)\,x^{2}\,\frac{t^2}{2!}\,\Big|^{1}_{0} + \int^{1}_{0}\frac{t^2}{2!}f'''((1-t)x)\,x^{3}\,dt[/tex]

Simplifying again, the first term becomes

[tex]\frac{x^2}{2!}f''(0)[/tex]

So far one has

[tex]f(x) - f(0) = xf'(0) + \frac{x^2}{2!}f''(0) + \int^{1}_{0}\frac{t^2}{2!}f'''((1-t)x)\,x^{3}\,dt[/tex]

Continuing to integrate by parts gives the first n terms of the Taylor series plus a remainder integral; the series converges if this remainder shrinks to zero as n goes to infinity. The factorials fall out inductively, and they arise because the antiderivative of [tex]\frac{t^n}{n!}[/tex] is [tex]\frac{t^{n+1}}{(n+1)!}[/tex]
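
As a quick sanity check on this derivation, here is a sketch (not from the thread; the choices f = exp, x = 0.7, and n = 3 are mine, picked because every derivative of ##e^x## is ##e^x##): the partial sum plus the remainder integral should reproduce f(x) exactly.

[code]
import numpy as np
from math import factorial
from scipy.integrate import quad

# Identity after n integrations by parts:
#   f(x) = sum_{k=0}^{n} x^k/k! * f^(k)(0)
#        + integral_0^1 t^n/n! * f^(n+1)((1-t)x) * x^(n+1) dt
f = np.exp                  # f^(k)(0) = 1 for every k
x, n = 0.7, 3

partial_sum = sum(x**k / factorial(k) for k in range(n + 1))
remainder, _ = quad(lambda t: t**n / factorial(n) * f((1 - t) * x) * x**(n + 1), 0, 1)

print(partial_sum + remainder)  # ~2.01375...
print(f(x))                     # ~2.01375..., the same value
[/code]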
 
The problem is to reconstruct a function, given its value and derivatives at a single point, under nice conditions.

To give a baby example, suppose you know the velocity of an object is bounded by M. Then in t seconds, you know that it couldn't get farther than tM from the starting point.

To give another example, suppose the acceleration is bounded by M and the initial velocity is zero. The velocity can't be more than tM in absolute value at time t, so the position can't be more than ##\frac{1}{2}Mt^2##.

If M bounds the absolute value of the nth derivative, and all the previous derivatives and the function value vanish, you can play the same game, and you find that the function can grow no faster than ##\frac{M}{n!}t^n##. So your bound M doesn't have to be that great, because it's getting divided by n!.

In the general case, you can subtract off a polynomial to reduce to the case above. What polynomial do you subtract from the function, so that all the derivatives up to the nth one are zero at the given point? The answer is the Taylor polynomials. You just want to come up with a polynomial whose derivatives are the same as the function, just at the given point. From there, you can refer to mathman's answer.

Under favorable conditions, for example, if ALL the derivatives are uniformly bounded by the SAME number, M, the n! will make the estimates get better and better as you pass to higher order polynomials and the Taylor series will converge to the function.
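
To see the n! doing its work numerically, here is a sketch (my own illustration; f = sin is an assumption, chosen because every one of its derivatives is bounded by M = 1):

[code]
from math import factorial, sin

# With every derivative bounded by M = 1, the error of the degree-n
# Maclaurin polynomial at x is at most x^(n+1) / (n+1)!.
x = 2.0
for m in range(1, 7):
    n = 2 * m + 1                          # degree of the polynomial
    taylor = sum((-1)**k * x**(2 * k + 1) / factorial(2 * k + 1)
                 for k in range(m + 1))    # x - x^3/3! + x^5/5! - ...
    bound = x**(n + 1) / factorial(n + 1)
    print(n, abs(sin(x) - taylor), bound)  # error stays under the bound
[/code]

The printed errors collapse toward zero as the degree grows, precisely because the fixed bound M is divided by an ever-larger factorial.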
 
because

[tex]\left(\dfrac{d}{dx}\right)^n \frac{x^n}{n!}=1[/tex]
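
A one-line check of that identity (sympy and n = 5 are my illustrative choices):

[code]
import sympy as sp

x = sp.symbols('x')
n = 5                                          # any fixed n works
print(sp.diff(x**n / sp.factorial(n), x, n))   # prints 1
[/code]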
 
matematikuvol said:
Why does the Taylor series have a factorial ##!## factor?
[tex]f(x)=f(0)+xf'(0)+\frac{x^2}{2!}f''(0)+...[/tex]

Not necessarily true; there are plenty of functions for which the Taylor series doesn't equal the function, even nice, non-piecewise ones like [itex]e^{-\frac1{x^2}}[/itex] (a famous example: its Taylor series around x = 0 is identically 0).

Anyway, here's the way I've always thought about it. Start differentiating it multiple times at x = 0. You'll notice that both f and its Taylor series have the same nth derivative at 0. Same derivatives "implies" (for most really simple, well-behaved functions) the same function.

Again, that was sort of ... not right, but it should give you some idea of why that factor exists.
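
That matching of derivatives is easy to confirm symbolically; here is a sketch (the function ##\cos(x)e^{x}## and degree 6 are arbitrary choices of mine):

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x) * sp.exp(x)                # arbitrary smooth example
T = sp.series(f, x, 0, 7).removeO()      # degree-6 Maclaurin polynomial

# f and its Taylor polynomial share all derivatives of order 0..6 at x = 0
for k in range(7):
    assert sp.diff(f, x, k).subs(x, 0) == sp.diff(T, x, k).subs(x, 0)
print("derivatives 0..6 agree at x = 0")
[/code]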
 
^Sometimes it is better to use a symbol other than =
[tex]\mathrm{f}(x) \sim \sum^\infty_{k=0} \frac{x^k}{k!} \left.\dfrac{d^k \mathrm{f}}{dx^k}\right|_{x=0}[/tex]

to emphasize the point that an equivalence can be defined by the relation "has the same Taylor series as".
 
lurflurf's point is a good one. Even if a function is infinitely differentiable, so that we can form its Taylor series, and even if that series converges for all x, the function is not necessarily equal to it, except, of course, at the "initial point". The classic example is [itex]f(x)= e^{-1/x^2}[/itex] if x is not 0, with f(0) = 0. It can be shown that this function is infinitely differentiable for all x, particularly at x = 0, and that all derivatives have the value 0 at x = 0. That is, its Taylor series about x = 0 (Maclaurin series) is identically equal to 0 for all x. But clearly f(x) is not identically 0. In fact, it is 0 only at x = 0, so that is the only point where the function is equal to its Taylor series.
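
One can watch that flatness happen with a computer algebra system; a sketch (mine, checking only the first few derivatives):

[code]
import sympy as sp

# The classic flat function: exp(-1/x^2) for x != 0, extended by f(0) = 0.
# Each derivative tends to 0 as x -> 0, so every Maclaurin coefficient is 0.
x = sp.symbols('x')
f = sp.exp(-1 / x**2)
for k in range(4):
    print(k, sp.limit(sp.diff(f, x, k), x, 0))   # each limit is 0
[/code]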
 
