
Derivation of e

  1. Nov 3, 2008 #1
    the common definition of e is pretty straightforward, lim x->oo (1+1/x)^x, but how does one prove that the infinite sum [tex]\sum_{j=0}^{\infty}\frac{1}{j!}[/tex] is equal to e?

    p.s. how does one use LaTeX for the super- and subscripts in a summation?
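    A quick numerical sanity check (Python, added here; not part of the original post) that the two expressions in the question really do approach the same number:

    ```python
    import math

    # Compare the limit definition of e with the factorial series side by side.
    for n in [10, 100, 1000, 10000]:
        limit_form = (1 + 1 / n) ** n
        series_form = sum(1 / math.factorial(k) for k in range(n + 1))
        print(n, limit_form, series_form)

    # The series converges far faster: its first 11 terms already match
    # math.e to about seven decimal places, while (1 + 1/n)^n still
    # differs from e by roughly e/(2n).
    ```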
  3. Nov 3, 2008 #2


    Science Advisor
    Homework Helper

    use Taylor's theorem to show that e^x equals the limit of its Taylor series, in particular for x = 1.
  4. Nov 3, 2008 #3
    And use your "e" definition to prove
    IF f(x) = e^x THEN f'(x) = f(x)

    (your Taylor expansion needs this result)
  5. Nov 4, 2008 #4


    Homework Helper

    It can be done more simply.
    Specialize (1+1/x)^x
    to a_n = (1+1/n)^n (n a positive integer)
    and expand with the binomial theorem.
    Next, writing b_n for the nth partial sum of Σ 1/k!, show that
    lim sup b_n <= e <= lim inf b_n;
    thus the limit of the sum is e.
    This is done in most calculus books.
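    A quick Python check (my addition, not in the thread) of the ordering this sketch relies on:

    ```python
    import math

    # a_n = (1 + 1/n)^n and b_n = sum_{k=0}^n 1/k!.  Expanding a_n with the
    # binomial theorem shows every term of a_n is a term of b_n scaled by a
    # factor <= 1, so a_n <= b_n, while both sequences approach e.
    for n in [5, 50, 500]:
        a_n = (1 + 1 / n) ** n
        b_n = sum(1 / math.factorial(k) for k in range(n + 1))
        print(n, a_n, b_n)
    ```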
  6. Nov 4, 2008 #5
    sorry to all:
    i do not really understand what you are saying. the first part, the binomial expansion, i understand;
    however, i do not get the next part.
    could someone please explain it?
  7. Nov 4, 2008 #6
    does this make sense:
    take the Taylor series for e^x, evaluate it at x = 1, and you get the sum?
    since i unfortunately have no experience with Taylor series, what is the proof for the series of e^x?
    sorry if i seem like i want to prove everything that anyone says; it is just that i do not have a background in the area.
  8. Nov 4, 2008 #7


    Science Advisor

    If y = e^x, then e^{x+h} - e^x = e^x(e^h - 1). The derivative of e^x is
    [tex]e^x\lim_{h\rightarrow 0}\frac{e^h-1}{h}[/tex]
    a constant times e^x. What is that constant?

    Given that [itex]\lim_{x\rightarrow \infty} (1+ 1/x)^x= e[/itex], let h = 1/x. Then h goes to 0 as x goes to infinity and [itex]\lim_{h\rightarrow 0} (1+h)^{1/h}= e[/itex]. That means that, for h very close to 0, [itex](1+h)^{1/h}[/itex] is very close to e, and so 1 + h is very close to e^h. Therefore, e^h - 1 is close to h and (e^h - 1)/h is close to 1: in the limit, [tex]\lim_{h\rightarrow 0}\frac{e^h-1}{h}= 1[/tex] and so the derivative of e^x is, again, e^x. It follows that all derivatives of e^x are e^x and so, at x = 0, equal 1. From that it follows that the MacLaurin series for e^x is
    [tex]\sum_{n=0}^\infty \frac{1}{n!} x^n[/tex]
    and, finally, taking x = 1, that
    [tex]\sum_{n=0}^\infty \frac{1}{n!}= e^1= e[/tex]

    I should point out that it is perfectly valid to define exp(x) to be the function satisfying "dy/dx= y with y(0)= 1" and get the derivative immediately. We could also define "ln(x)" to be
    [tex]\int_0^x \frac{1}{t}dt[/tex]
    and then define exp(x) as its inverse function (after proving, of course, that it has an inverse function).
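    The key limit in this post is easy to watch numerically (Python, my addition, not from the thread):

    ```python
    import math

    # (e^h - 1)/h should approach 1 as h -> 0; that constant being 1 is
    # exactly what makes the derivative of e^x equal e^x itself.
    for h in [1e-1, 1e-3, 1e-5, 1e-7]:
        ratio = (math.e ** h - 1) / h
        print(h, ratio)
    ```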
    Last edited by a moderator: Nov 5, 2008
  9. Nov 4, 2008 #8
    Taylor Series are pretty simple.

    Suppose that you can write some function f(x) as a polynomial of infinite degree.

    [tex]f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_k x^k + ...[/tex]

    How do you find all the coefficients [tex]a_i[/tex]? Plug in 0, and you get [tex]a_0[/tex]:

    [tex]f(0) = a_0 + a_1 (0) + a_2 (0)^2 + a_3 (0)^3 + ... = a_0[/tex]

    What about the rest? Well, take the derivative of f:

    [tex]f'(x) = a_1 + 2 a_2 x + 3 a_3 x^2 + ... + k a_k x^{k-1} + ...[/tex]

    Notice that [tex]a_0[/tex] drops out. You can plug in 0 to f' and extract [tex]a_1[/tex].

    [tex]f'(0) = a_1 + 2 a_2 (0) + 3 a_3 (0)^2 + ... + k a_k (0)^{k-1} + ... = a_1[/tex]

    To find [tex]a_2[/tex], you take the derivative again and find f''(0). But be careful this time. Taking the derivative has popped a '2' from the exponent on x and thrown it into your equation.

    [tex]f''(x) = 2 a_2 + 6 a_3 x + ... + k (k-1) a_k x^{k-2} + ...[/tex]

    [tex]f''(0) = 2 a_2 + 6 a_3 (0) + ... + k (k-1) a_k (0)^{k-2} + ... = 2 a_2[/tex]

    So [tex]a_2 = \frac{f''(0)}{2}[/tex].

    Continue this process, and you find that [tex]a_k = \frac{ f^{(k)}(0) }{k!}[/tex] (where [tex]f^{(k)}[/tex] is the k-th derivative and k! is k factorial).

    So for any "well behaved" function f, we have

    [tex]f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(0)} {k!}x^k[/tex].

    Now, applying this to [tex]f(x) = e^x[/tex], what do we get? Well, [tex]e^x[/tex] is magical, because no matter how many times you take the derivative, it stays the same. That is, [tex]f^{(k)}(x) = f(x) = e^x[/tex]. And knowing that [tex]e^0 = 1[/tex], we know that [tex]f^{(k)}(0) = f(0) = e^0 = 1[/tex]. So finally, for our grand finale, we have:

    [tex]e^x = \sum_{k=0}^\infty \frac{1} {k!} x^k[/tex].

    So if we want to know what the number "e" itself is equal to, we just set x = 1:

    [tex]e = e^1 = \sum_{k=0}^\infty \frac{1} {k!}[/tex].
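    The same series can be checked against the library exponential for a few inputs (Python sketch, my addition; `exp_series` is a name invented here, not from the thread):

    ```python
    import math

    def exp_series(x, terms=30):
        """Partial sum of sum_k x^k / k!, truncated after `terms` terms."""
        return sum(x ** k / math.factorial(k) for k in range(terms))

    # Compare the truncated Taylor series with math.exp at a few points.
    for x in [0.0, 1.0, 2.5, -1.0]:
        print(x, exp_series(x), math.exp(x))
    ```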
  10. Nov 4, 2008 #9
    As a side note, there's usually an error term for Taylor expansions, but elementary functions like the exponential, sine and cosine have an infinite radius of convergence so this is disregarded in those cases.

    The LaTeX goes \sum_{n = 1}^{\infty} a_{n} to get [itex]\sum_{n = 1}^{\infty} a_{n}[/itex]; you might find tutorials somewhere on here or elsewhere!
  11. Nov 5, 2008 #10

    Gib Z

    Homework Helper

    This (defining exp as the solution of dy/dx = y with y(0) = 1), at least to me, seems the easiest way. Then all we must do is consider the series

    [tex]\sum_{n=0}^{\infty} \frac{x^n}{n!}[/tex]

    and we see immediately that it converges for all values of x by the ratio test, and that it fulfills the definition's requirements.
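    The ratio-test step can be made concrete (Python, my addition): consecutive terms of Σ x^n/n! have ratio x/(n+1), which eventually falls below 1 for any fixed x.

    ```python
    # For sum_n x^n/n!, term(n+1)/term(n) = x/(n+1): large at first when
    # x is big, but -> 0 as n grows, so the series converges for every x.
    x = 10.0
    ratios = [x / (n + 1) for n in range(25)]
    print(ratios[0], ratios[-1])
    ```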
  12. Nov 5, 2008 #11


    Science Advisor

    No. Error terms have nothing to do with "radius of convergence". Error terms apply only to Taylor "polynomials" where we cut the Taylor series off after a fixed power "n".

    And it has nothing to do with "elementary" functions. The function ln(x) is as "elementary" as exponential but its Taylor series, around x=1, has radius of convergence 1.
  13. Nov 5, 2008 #12


    Homework Helper

    so you have, with n large and by the binomial theorem,
    (1+1/n)^n = Σ n!/k!/(n-k)!/n^k (k from 0 to n),
    and 1/k! is what we have in the alternate sum.
    so a two-part proof would be
    lim (1+1/n)^n = lim Σ n!/k!/(n-k)!/n^k = Σ lim n!/k!/(n-k)!/n^k = Σ 1/k!
    where the interchange
    lim Σ n!/k!/(n-k)!/n^k = Σ lim n!/k!/(n-k)!/n^k
    is the hard part, since
    lim n!/(n-k)!/n^k = 1 (for fixed k) is obvious.
    we should first show both limits exist.
    next we use a common calculus method: to show two numbers are equal, it is often easier to show that they are not unequal. for example, if we wish to show x = y, we might show |x - y| < eps for every eps > 0.

    now let
    {s_i} be the sequence of partial sums of Σ 1/k!
    {t_i} be the sequence of values of (1+1/n)^n
    with s and t their respective limits (both equal to e in the end).
    it can be seen that t_n <= s_n,
    because the terms of t_n, when considered as a sum by the binomial theorem,
    are the terms of s_n multiplied by
    n!/(n-k)!/n^k (all less than or equal to 1);
    {s_n} is a sequence of partial sums of positive numbers, hence increasing,
    so t <= s.
    the harder step is showing s <= t.
    suppose we make two approximations,
    one where n > N,
    one where n > M > N.
    the idea is that when n is very large, the early terms of t_n look like the terms of s_N, and while the later terms do not, they do not matter (they are small).
    s - 2eps < s_N - eps < t_n <= t whenever n > M > N
    so we first estimate t_n > s_N - eps, then estimate s_N > s - eps, and combine them to get s <= t.
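    The "early terms" idea can be illustrated numerically (Python, my addition, not from the thread): fix N and watch the first N+1 binomial terms of t_n climb toward s_N as n grows.

    ```python
    import math

    # For fixed N, C(n, k)/n^k -> 1/k! as n -> infinity, so the first N+1
    # terms of the binomial expansion of (1 + 1/n)^n approach s_N from below.
    N = 5
    s_N = sum(1 / math.factorial(k) for k in range(N + 1))
    for n in [10, 1000, 100000]:
        head = sum(math.comb(n, k) / n ** k for k in range(N + 1))
        print(n, head, s_N)
    ```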
    Last edited: Nov 5, 2008
  14. Nov 5, 2008 #13
    Sorry, run-ins with asymptotic truncations have given me a bit of a twisted vocabulary. In my defence, the 'error term' was meant as the difference between the value of the function and the limiting sum of the series whenever (and however) they might be summed (this being zero is not always the case, but often is), not as the 'remainder' of a Taylor polynomial approximation (which is the difference between the approximation and the function); given convergence throughout the plane and coinciding results for these functions, it can be disregarded in this case. Even then, it is worth keeping in mind that a Taylor series is simply the limiting case of a Taylor polynomial, and thus the behaviour of that very same remainder can't be so easily discounted.

    I wouldn't consider all elementary functions, of course, only those under consideration, which are entire, with the Taylor series coinciding with the function at every point.
    Last edited: Nov 5, 2008