Proving the Infinite Sum of e: A Mathematical Journey

In summary: The function e^x is just as "elementary" as the logarithm, but its Taylor series around x = 0 has infinite radius of convergence, and the same is true of sin(x); by contrast, the series for ln(x) around x = 1 converges only for |x - 1| < 1. As for "where do error terms come from?": they come from Taylor's theorem. A sufficiently differentiable function equals its Taylor polynomial of degree n plus an "error term" (the remainder) that depends on the value of x. For the Taylor polynomial of degree n about 0, the remainder is given by [tex]R_n(x)= \frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}[/tex] for some c between 0 and x.
  • #1
soandos
The common derivation of e is pretty straightforward, [tex]\lim_{x\rightarrow \infty}(1+1/x)^x[/tex], but how does one prove that the infinite sum [tex]\sum_{j=0}^{\infty}\frac{1}{j!}[/tex] is equal to e?

P.S. How does one use LaTeX for the superscripts and subscripts in a summation?
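For what it's worth, here is a quick numerical check in Python (a sketch of my own, an illustration rather than a proof) that the two expressions head to the same number:

[code]
import math

def limit_def(n):
    # (1 + 1/n)^n -- the limit definition of e
    return (1 + 1/n) ** n

def series_def(n):
    # partial sum 1/0! + 1/1! + ... + 1/n!
    return sum(1 / math.factorial(j) for j in range(n + 1))

for n in (1, 10, 100, 1000):
    print(n, limit_def(n), series_def(n))
print("math.e =", math.e)
[/code]

Note how much faster the factorial series closes in on e than the (1+1/n)^n sequence does.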
 
  • #2
Use Taylor's theorem to show that e^x equals the sum of its Taylor series, in particular for x = 1.
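Spelled out (this is the standard statement, not specific to this thread), Taylor's theorem with the Lagrange remainder gives, for each n,

[tex]e^x=\sum_{k=0}^{n}\frac{x^k}{k!}+R_n(x),\qquad R_n(x)=\frac{e^c}{(n+1)!}x^{n+1}[/tex]

for some c between 0 and x. Since [itex]R_n(x)\rightarrow 0[/itex] as [itex]n\rightarrow\infty[/itex] for every fixed x, the series converges to e^x.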
 
  • #3
And use your "e" definition to prove
IF f(x) = e**x THEN f'(x) = f(x)

(your Taylor's expansion needs this result)
 
  • #4
It can be done more simply.
Specialize (1+1/x)^x
to a_n = (1+1/n)^n (n a positive integer)
and expand with the binomial theorem.
Next show that if
b_n is the n-th partial sum of Σ 1/k!,
then lim sup b_n <= e <= lim inf b_n;
thus the limit of the sum is e.
This is done in most calculus books.
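As a quick illustration of that binomial expansion (my own Python sketch; the thread itself doesn't use code), each term C(n,k)/n^k of (1+1/n)^n approaches the corresponding 1/k!:

[code]
import math

# Each binomial term C(n,k) * (1/n)^k = n!/(k!(n-k)! n^k) should
# approach 1/k! as n grows (math.comb requires Python 3.8+).
def binomial_term(n, k):
    return math.comb(n, k) / n ** k

n = 1000
for k in range(6):
    print(k, binomial_term(n, k), 1 / math.factorial(k))
[/code]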
 
  • #5
Sorry to all:
I do not really understand what you are saying. In response to:

It can be done more simply: specialize (1+1/x)^x to a_n = (1+1/n)^n (n a positive integer) and expand with the binomial theorem.

That much I understand.
However, I do not get the next part.
Could someone please explain it?
 
  • #6
Does this make sense:
take the Taylor series for e^x at x = 1, and you get the sum?
Since I unfortunately have no experience with Taylor series, what is the proof for e^x?
Sorry if I seem like I want to prove everything that anyone says; it is just that I do not have a background in the area.
 
  • #7
If y = e^x, then e^{x+h} - e^x = e^x(e^h - 1). The derivative of e^x is
[tex]e^x\lim_{h\rightarrow 0}\frac{e^h-1}{h}[/tex]
a constant times e^x. What is that constant?

Given that [itex]\lim_{x\rightarrow \infty} (1+ 1/x)^x= e[/itex], let h = 1/x. Then h goes to 0 as x goes to infinity and [itex]\lim_{h\rightarrow 0} (1+h)^{1/h}= e[/itex]. That means that, for h very close to 0, [itex](1+h)^{1/h}[/itex] is very close to e, and so 1 + h is very close to e^h. Therefore e^h - 1 is close to h and (e^h - 1)/h is close to 1: in the limit, [tex]\lim_{h\rightarrow 0}\frac{e^h-1}{h}= 1[/tex] and so the derivative of e^x is, again, e^x. It follows that all derivatives of e^x are e^x and, at x = 0, equal 1. From that it follows that the Maclaurin series for e^x is
[tex]\sum_{n=0}^\infty \frac{1}{n!} x^n[/tex]
and, finally, taking x= 1 that
[tex]\sum_{n=0}^\infty \frac{1}{n!}= e^1= e[/tex]
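A numerical illustration of the key limit above (a quick sketch, not part of the argument):

[code]
import math

# The difference quotient (e^h - 1)/h should approach 1 as h -> 0;
# that is the constant that makes d/dx e^x = e^x.
for h in (1.0, 0.1, 0.01, 0.001, 1e-6):
    print(h, (math.exp(h) - 1) / h)
[/code]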

I should point out that it is perfectly valid to define exp(x) to be the function satisfying "dy/dx= y with y(0)= 1" and get the derivative immediately. We could also define "ln(x)" to be
[tex]\int_0^x \frac{1}{t}dt[/tex]
and then define exp(x) as its inverse function (after proving, of course, that it has an inverse function).
 
  • #8
soandos said:
Does this make sense:
take the Taylor series for e^x at x = 1, and you get the sum?
Since I unfortunately have no experience with Taylor series, what is the proof for e^x?
Sorry if I seem like I want to prove everything that anyone says; it is just that I do not have a background in the area.

Taylor Series are pretty simple.

Suppose that you can write some function f(x) as a polynomial of infinite degree.

[tex]f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_k x^k + ...[/tex]

How do you find all the coefficients [tex]a_i[/tex]? Plug in 0, and you get [tex]a_0[/tex]:

[tex]f(0) = a_0 + a_1 (0) + a_2 (0)^2 + a_3 (0)^3 + ... = a_0[/tex]

What about the rest? Well, take the derivative of f:

[tex]f'(x) = a_1 + 2 a_2 x + 3 a_3 x^2 + ... + k a_k x^{k-1} + ...[/tex]

Notice that [tex]a_0[/tex] drops out. You can plug in 0 to f' and extract [tex]a_1[/tex].

[tex]f'(0) = a_1 + 2 a_2 (0) + 3 a_3 (0)^2 + ... + k a_k (0)^{k-1} + ... = a_1[/tex]

To find [tex]a_2[/tex], you take the derivative again and find f''(0). But be careful this time. Taking the derivative has popped a '2' from the exponent on x and thrown it into your equation.

[tex]f''(x) = 2 a_2 + 6 a_3 x + ... + k (k-1) a_k x^{k-2} + ...[/tex]

[tex]f''(0) = 2 a_2 + 6 a_3 (0) + ... + k (k-1) a_k (0)^{k-2} + ... = 2 a_2[/tex]

So [tex]a_2 = \frac{f''(0)}{2}[/tex].

Continue this process, and you find that [tex]a_k = \frac{ f^{(k)}(0) }{k!}[/tex] (where [tex]f^{(k)}[/tex] is the k-th derivative and k! is k factorial).

So for any "well behaved" function f, we have

[tex]f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!}x^k[/tex].

Now, applying this to [tex]f(x) = e^x[/tex], what do we get? Well, [tex]e^x[/tex] is magical, because no matter how many times you take the derivative, it stays the same. That is, [tex]f^{(k)}(x) = f(x) = e^x[/tex]. And knowing that [tex]e^0 = 1[/tex], we know that [tex]f^{(k)}(0) = f(0) = e^0 = 1[/tex]. So finally, for our grand finale, we have:

[tex]e^x = \sum_{k=0}^\infty \frac{1}{k!} x^k[/tex].

So if we want to know what the number "e" itself is equal to, we just set x = 1:

[tex]e = e^1 = \sum_{k=0}^\infty \frac{1}{k!}[/tex].
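If you want to watch the coefficient-extraction argument happen mechanically, here is a sketch using sympy (my choice of tool; the derivation above doesn't depend on it):

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)

# a_k = f^(k)(0) / k!  -- differentiate k times, evaluate at 0, divide.
for k in range(6):
    a_k = sp.diff(f, x, k).subs(x, 0) / sp.factorial(k)
    print(f"a_{k} =", a_k)  # prints 1, 1, 1/2, 1/6, 1/24, 1/120
[/code]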
 
  • #9
As a side note, there's usually an error term for Taylor expansions, but elementary functions like the exponential, sine and cosine have an infinite radius of convergence so this is disregarded in those cases.

The LaTeX is \sum_{n = 1}^{\infty} a_{n}, which renders as [itex]\sum_{n = 1}^{\infty} a_{n}[/itex]; you might find tutorials on here or elsewhere!
 
  • #10
HallsofIvy said:
I should point out that it is perfectly valid to define exp(x) to be the function satisfying "dy/dx= y with y(0)= 1"

This, at least to me, seems the easiest way. Then all we must do is consider the series

[tex]\sum_{n=0}^{\infty} \frac{x^n}{n!}[/tex]

and we see immediately that it converges for all values of x by the ratio test, and that it fulfills the definition's requirements (differentiating term by term gives back the same series, and its value at x = 0 is 1).
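For completeness, the ratio test step is a one-liner: the ratio of consecutive terms is

[tex]\left|\frac{x^{n+1}/(n+1)!}{x^n/n!}\right|=\frac{|x|}{n+1}\rightarrow 0[/tex]

as [itex]n\rightarrow\infty[/itex] for every fixed x, so the series converges everywhere.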
 
  • #11
yasiru89 said:
As a side note, there's usually an error term for Taylor expansions, but elementary functions like the exponential, sine and cosine have an infinite radius of convergence so this is disregarded in those cases.
No. Error terms have nothing to do with "radius of convergence". Error terms apply only to Taylor "polynomials" where we cut the Taylor series off after a fixed power "n".

And it has nothing to do with "elementary" functions. The function ln(x) is as "elementary" as the exponential, but its Taylor series around x = 1 has radius of convergence 1.
 
  • #12
soandos said:
Sorry to all: I do not really understand what you are saying. In response to ... that much I understand. However, I do not get the next part. Could someone please explain it?

So you have (with n large)
(1+1/n)^n.
By the binomial theorem we have
(1+1/n)^n = Σ n!/(k!(n-k)!n^k),
and 1/k! is what we have in the other sum, so a two-part proof would be
lim (1+1/n)^n = lim Σ n!/(k!(n-k)!n^k) = Σ lim n!/(k!(n-k)!n^k) = Σ 1/k!.
The exchange
lim Σ n!/(k!(n-k)!n^k) = Σ lim n!/(k!(n-k)!n^k)
is the hard part, since
lim n!/((n-k)!n^k) = 1 (for each fixed k) is obvious.
We should first show both limits exist. Next we use a common calculus method: to show two numbers are equal, it is often easier to show that they are not unequal. For example, if we wish to show x = y, we might show
x <= y
y <= x

Now let
{s_n} be the sequence of partial sums of Σ 1/k!,
{t_n} be the sequence of values of (1+1/n)^n,
with s and t their respective limits (both equal to e in the end).
It can be seen that
t_n < s_n <= s,
because when t_n is written as a sum by the binomial theorem, its terms are the terms of s_n multiplied by
n!/((n-k)!n^k), which are all less than or equal to 1,
and {s_n} is a sequence of partial sums of positive numbers, hence increasing. That is,
{t_n} = {(1+1/n)^n} = {Σ n!/(k!(n-k)!n^k)} = {Σ [n!/((n-k)!n^k)][1/k!]},
so
t <= s.
The harder step is
s <= t.
Suppose we make two approximations, one with N large and one with n > M > N. The idea is that when n is very large, the early terms of t_n look like the terms of s_N, and while the later terms do not, they do not matter (they are small and positive). Given eps > 0, first choose N so that s - eps < s_N (the partial sums converge to s), then choose M so that s_N - eps < t_n whenever n > M (the first N+1 terms of t_n approach the corresponding terms of s_N). Combining,
s - 2eps < s_N - eps < t_n <= t whenever n > M > N,
hence
s - 2eps <= t.
Since eps was arbitrary,
s <= t,
hence
s = t.
QED
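To make the ordering t_n < s_n concrete, here is a quick numerical check (a Python sketch of my own, not part of the proof):

[code]
import math

# t_n = (1 + 1/n)^n should sit strictly below the partial sum
# s_n = 1/0! + 1/1! + ... + 1/n!, as the squeeze argument above uses.
for n in (5, 50, 100):
    t_n = (1 + 1/n) ** n
    s_n = sum(1 / math.factorial(k) for k in range(n + 1))
    print(n, t_n, s_n, t_n < s_n)
[/code]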
 
  • #13
HallsofIvy said:
No. Error terms have nothing to do with "radius of convergence". Error terms apply only to Taylor "polynomials" where we cut the Taylor series off after a fixed power "n".

And it has nothing to do with "elementary" functions. The function ln(x) is as "elementary" as the exponential, but its Taylor series around x = 1 has radius of convergence 1.
Sorry, run-ins with asymptotic truncations have given me a bit of a twisted vocabulary. In my defence, the 'error term' was meant as the difference between the value of the function and the limiting sum of the series, whenever (and however) the series might be summed (this difference being zero is not always the case, but it often is), and not as the 'remainder' of a Taylor polynomial approximation (which is the difference between the approximation and the function). Given convergence throughout the plane and coinciding values for these functions, it can be disregarded in this case. Even then, it is worth keeping in mind that a Taylor series is simply the limiting case of a Taylor polynomial, and thus the behaviour of that very same remainder can't be so easily discounted.

I wouldn't consider all elementary functions, of course, only those under consideration, which are entire, with the Taylor series coinciding with the function at every point.
 

1. How do you define an infinite sum?

An infinite sum (a series) is defined as the limit of its partial sums: add the first n terms, then let n grow without bound. If the partial sums approach a fixed number, the series converges and that number is its value.

2. What is e in mathematics?

e is a mathematical constant approximately equal to 2.71828. It is a fundamental number in calculus and is often called the "natural base" or "Euler's number".

3. How is the infinite sum of e derived?

It can be derived from the Taylor series expansion of the function e^x, evaluated at x = 1: the partial sums 1/0! + 1/1! + ... + 1/n! approach the value of e as the number of terms increases.

4. What is the significance of proving the infinite sum of e?

Proving the infinite sum of e is significant because it provides a deeper understanding of the mathematical properties of the constant and its applications in various fields, such as physics, finance, and engineering.

5. Is the infinite sum of e a finite value?

Yes. Although the sum has infinitely many terms, its value is finite: the partial sums increase and converge to e ≈ 2.71828. "Infinite" refers to the number of terms, not to the value of the sum.
