
Euler's solution to zeta(2)

  1. Jun 28, 2005 #1

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    I've been reviewing Euler's proof for [itex]\zeta(2)[/itex] and thought some of you might find it interesting too. We wish to find:

    [tex]\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}[/tex]

    First a lemma:

    If a polynomial P(x), has non-zero roots [itex]r_i[/itex], and P(0)=1, then:

    [tex]P(x)=\left(1-\frac{x}{r_1}\right) \left(1-\frac{x}{r_2}\right) \left(1-\frac{x}{r_3}\right)...\left(1-\frac{x}{r_n}\right)[/tex]

    I found that interesting to prove and will leave it for the reader if they wish to do so.
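    A quick numerical sanity check of the lemma (a sketch in Python; the sample polynomial is my own choice, not from the thread):

```python
import math

# Check the lemma on a sample polynomial with roots r1 = 1, r2 = 2:
# P(x) = (1 - x)(1 - x/2) = 1 - (3/2)x + (1/2)x^2, and indeed P(0) = 1.
roots = [1.0, 2.0]

def p_from_coeffs(x):
    # P written out in coefficient form
    return 1.0 - 1.5 * x + 0.5 * x**2

def p_from_roots(x):
    # P written as the product (1 - x/r1)(1 - x/r2) from the lemma
    prod = 1.0
    for r in roots:
        prod *= (1.0 - x / r)
    return prod

for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert math.isclose(p_from_coeffs(x), p_from_roots(x), abs_tol=1e-12)
print("lemma holds for the sample polynomial")
```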

    Now consider the polynomial:

    [tex]P(x)=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .[/tex]

    Note that P(0)=1 but we don't know anything about its roots yet.

    Also, consider the power series for Sin(x):

    [tex]Sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+...[/tex]

    Note that:

    [tex]xP(x)=Sin(x)[/tex]
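    This identity is easy to confirm numerically (a quick check of my own, not part of Euler's argument): truncating the series for x·P(x) reproduces Sin(x) to machine precision.

```python
import math

# x * P(x) = x - x^3/3! + x^5/5! - ... should agree with sin(x).
def x_times_p(x, terms=20):
    # Partial sum of the series x*P(x) with the given number of terms
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for x in [0.1, 1.0, 2.5, math.pi]:
    assert math.isclose(x_times_p(x), math.sin(x), abs_tol=1e-12)
print("x*P(x) matches sin(x) on the test points")
```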

    Now, since Sin(x) has roots at 0, [itex]\pm \pi, \pm 2\pi, \ldots[/itex], and the 'x' accounts for the zero root on the left, we are left with P(x) containing the remaining roots. Thus P(x) has only non-zero roots, so we can use the lemma above and state:

    [tex]
    \begin{align*}
    1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .&=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)...\\
    &=\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{4\pi^2}\right)\left(1-\frac{x^2}{9\pi^2}\right)\left(1-\frac{x^2}{16\pi^2}\right)...
    \end{align*}
    [/tex]

    Now I tried multiplying four of those together by hand and with extreme difficulty was able to do so in some manner of order. Apparently Euler was able to do many, many more since he calculated by hand [itex]\zeta(26)[/itex]!

    Expanding this product and equating the coefficients to those of P(x) is the key to solving this problem . . .
     
  3. Jun 28, 2005 #2

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    That's not a polynomial. :tongue2:


    In general, the product representation also has terms that look like e^f(z) for some analytic function f... but we happen to luck out in this case! (But I can't prove why)


    I also can't justify the process of multiplying it out, but anyways...


    The trick, I'd imagine, is to think of it combinatorially rather than algebraically. When multiplying n binomials, one can naturally group the terms into (n+1) categories -- a term in the i-th category is the product of (n-i) left terms and i right terms.


    If we naively extend this to infinite products, ζ(2) is simple -- the x^2 term of P(x) is simply the sum of all the terms in category 1 -- those products formed from exactly one right term. An example of such a product is 1·1·1·… · (-(x/nπ)^2), i.e. every left factor together with a single right term.

    The x^4 term isn't so bad -- it's the sum of all products involving only two of the right terms. In other words:

    [tex]
    \frac{x^4}{5!} = \sum_{1 \leq m < n} \frac{x^4}{m^2 n^2 \pi^4}
    [/tex]
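    This identity can be checked numerically too (my own check, not in the thread). Rather than a slow double loop, the symmetrization Σ_{m<n} 1/(m²n²) = ((Σ 1/n²)² - Σ 1/n⁴)/2 gives it quickly; the total should be 1/5! = 1/120 once the π⁴ is divided out.

```python
import math

# Verify: sum over 1 <= m < n of 1/(m^2 n^2 pi^4) equals 1/5! = 1/120.
# Use the symmetrization identity to avoid an O(N^2) double loop:
#   sum_{m<n} 1/(m^2 n^2) = ((sum 1/n^2)^2 - sum 1/n^4) / 2
N = 1_000_000
s2 = sum(1.0 / n**2 for n in range(1, N + 1))
s4 = sum(1.0 / n**4 for n in range(1, N + 1))
pair_sum = (s2 * s2 - s4) / 2.0
assert abs(pair_sum / math.pi**4 - 1.0 / 120.0) < 1e-5
print(pair_sum / math.pi**4)  # close to 1/120 = 0.00833...
```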

    Do you need a hint on where to go from there, to get ζ(4)?
     
    Last edited: Jun 28, 2005
  4. Jun 28, 2005 #3

    dextercioby

    User Avatar
    Science Advisor
    Homework Helper

    Incidentally, the infinite product representation of [itex] \sin x [/itex] is due to... that's right, Leonhard Euler.

    Daniel.
     
  5. Jun 28, 2005 #4

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    Alright, he does make an extrapolation from a finite polynomial to an infinite one. Are you saying there are some infinite representations such as these which are not considered polynomials, or do I need to check the definition of a polynomial to find out it is only defined for a finite number of monomials? I don't know and would like to. :smile:
     
  6. Jun 28, 2005 #5

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Yes, a polynomial, by definition, has only finitely many terms. When you have infinitely many terms, it's called a power series. (Actually, finitely many terms is a power series too! So, polynomials are a special case of a power series)
     
  7. Jun 28, 2005 #6

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    What, are you kidding me? I spent most of the day figuring out [itex]\zeta(2).[/itex] I never said I was quick at any of this. :yuck: I've got time to work on it however, and I'm patient. :smile:
     
  8. Jun 28, 2005 #7

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    Thanks . . . I appreciate that clarification.
     
  9. Jun 28, 2005 #8

    shmoe

    User Avatar
    Science Advisor
    Homework Helper

    I've seen snippets of a translated version of Euler's work and it included something like "what holds for polynomials holds in general" to 'justify' leaping into that infinite product for sine (look up Hadamard products if you want a full justification).

    as to why no e^f(z) term: sine is entire of order 1, so Hadamard tells us f(z)=a+bz for some constants a, b. sin(z)/z as z->0 gives a, sine being odd gives b.
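    Spelling that step out in symbols (my paraphrase of the standard argument, not shmoe's wording): Hadamard's factorization for an order-1 entire function gives

```latex
\sin z = e^{a+bz}\, z \prod_{n\neq 0}\left(1-\frac{z}{n\pi}\right)e^{z/n\pi}
% Letting z -> 0: sin(z)/z -> 1 and the product -> 1, so e^a = 1, i.e. a = 0.
% Both sin z and z * (product) are odd functions of z, so e^{bz} must be
% even, which forces b = 0.
```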
     
  10. Jun 28, 2005 #9

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    Thanks Shmoe. I'll look into Hadamard products as I wish to better understand the suitability of Euler's proof. I was unaware of such considerations regarding infinite series and the product Euler uses. :approve:
     
  11. Jun 28, 2005 #10

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    If you want an example of where being naive fails, consider the gamma function. If I remember correctly, it is nowhere zero, and has poles of order 1 at all nonpositive integers.

    Thus, its reciprocal is entire, and has simple zeroes at all nonpositive integers. But, if you naively try to write its infinite product, you get something that doesn't converge!
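    A small numerical illustration of this point at z = 1 (my own sketch, using only the standard Weierstrass product for 1/Γ): the naive product over the zeros telescopes to N + 1 and blows up, while the e^{-z/n} convergence factors tame it.

```python
import math

def naive(N):
    # prod (1 + 1/n) telescopes to (N+1)/1 -- diverges as N grows
    prod = 1.0
    for n in range(1, N + 1):
        prod *= (1.0 + 1.0 / n)
    return prod

def regularized(N):
    # Weierstrass convergence factors e^{-1/n} make the product settle
    prod = 1.0
    for n in range(1, N + 1):
        prod *= (1.0 + 1.0 / n) * math.exp(-1.0 / n)
    return prod

assert abs(naive(1000) - 1001.0) < 1e-6  # grows without bound
# The regularized product converges (in fact to e^{-gamma}, by the
# product formula for 1/Gamma evaluated at z = 1):
assert abs(regularized(4000) - math.exp(-0.5772156649)) < 1e-3
```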
     
  12. Jun 28, 2005 #11

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    It wasn't just an exp(f(z)) term out front -- some infinite products require exp(f_n(z)) terms inside the product too (such as for the reciprocal of the gamma function). Is the fact we don't have any of those for sine covered by the same theorem?

    By the way, I never heard of being entire to a particular order before, and I can't seem to find info about it, or Hadamard products on Wikipedia. What's the basic idea behind it?
     
  13. Jun 28, 2005 #12

    shmoe

    User Avatar
    Science Advisor
    Homework Helper

    An entire function f(z) is of finite order a>=0 if [tex]f(z)=O(e^{|z|^a})[/tex] as |z|->infinity. This makes the zeros behave nicely, namely [tex]\sum_{\rho} |\rho|^{-a-\epsilon}[/tex] converges for any epsilon greater than zero, where the sum is taken over all the zeros [itex]\rho[/itex] of f in order of increasing magnitude (and including multiplicity).

    The "inner" exponential in the infinite product is there to ensure convergence of the product over the zeros of f (like you mentioned with Gamma). The higher the order, the weaker the guarantee on the growth of the zeros, so the more we have to compensate. In general you'll have an exponential of a polynomial (take some logs to see why this will work; the convergence of the reciprocals of the zeros is used here).

    For sine the general theory tells us we actually get (well after you kill the outer exponential as i mentioned in my last post):

    [tex]\sin(z)=z\prod_{n\neq 0}\left(1-\frac{z}{n\pi}\right)e^{z/n\pi}[/tex]

    When we combine positive and negative zeros the exponentials cancel.
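    Checking the paired product numerically (my own check): once the exponentials cancel, sin(z) = z·Π(1 - z²/(n²π²)), and a long truncation of the product already matches sin well.

```python
import math

def sine_product(z, N=100_000):
    # Truncation of z * prod_{n=1}^{N} (1 - z^2 / (n^2 pi^2))
    prod = 1.0
    for n in range(1, N + 1):
        prod *= 1.0 - (z * z) / (n * n * math.pi**2)
    return z * prod

for z in [0.5, 1.0, 2.0]:
    assert abs(sine_product(z) - math.sin(z)) < 1e-4
print("truncated product matches sin on the test points")
```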
     
  14. Jun 29, 2005 #13

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    I wish to clear up some confusion I had about polynomials above. Here, I'll put it in LaTeX as punishment:

    For finite n, and given constants [itex]a_0,a_1,a_2,...a_n[/itex] in some field with [itex]a_n[/itex] non-zero, a polynomial function of degree n is a function of the form:

    [tex]f(x)=a_0+a_1x+a_2x^2+...+a_{n-1}x^{n-1}+a_nx^n[/tex]


    Continuing with my review of Euler's proof:

    Assuming for the moment that the infinite series used by Euler is equivalent to the infinite product he constructed, I calculated by hand the first four terms of the infinite product and equated coefficients to the expression for P(x):

    [tex]
    \begin{align*}
    P(x)&=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+... \\
    &=1-x^2\left[\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}\right] \\
    &+ x^4\left[\frac{1}{4\pi^4}+\frac{1}{9\pi^4}+\frac{1}{16\pi^4}+\frac{1}{36\pi^4}+\frac{1}{64\pi^4}+\frac{1}{144\pi^4}\right] \\
    &- x^6\left[\frac{1}{36\pi^6}+\frac{1}{64\pi^6}+\frac{1}{144\pi^6}+\frac{1}{576\pi^6}\right] \\
    &+x^8\left[\frac{1}{576\pi^8}\right]
    \end{align*}
    [/tex]

    Considering the trend developing for the [itex]x^2[/itex] term and equating coefficients we suspect:

    [tex]\frac{1}{3!}=\left[\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}+...\right][/tex]

    or:

    [tex]\frac{\pi^2}{6}=\left(1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+...\right)[/tex]

    Thus we suspect:

    [tex]\frac{\pi^2}{6}=\sum_{n=1}^{\infty}\frac{1}{n^2}[/tex]

    This turns out to be the case.
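    A partial-sum check of the conclusion (mine, for the skeptical reader): the tail of the series past N is about 1/N, so adding that as a crude correction makes the agreement with π²/6 very tight.

```python
import math

# Partial sum of sum 1/n^2, which should approach pi^2/6.
N = 1_000_000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
# The tail beyond N is approximately 1/N, so correct for it:
assert abs(partial + 1.0 / N - math.pi**2 / 6.0) < 1e-8
print(partial)  # close to pi^2/6 = 1.6449...
```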

    Using the same reasoning, one can then equate other coefficients and in theory, determine:

    [tex]\sum_{n=1}^{\infty}\frac{1}{n^{2k}}[/tex]

    for k a natural number.

    Now, I'd really like to understand the theory being presented by Hurkyl and Shmoe because that's just not happening for me up there. I suppose I just need to take a course on Complex Analysis . . .
     
  15. Jun 29, 2005 #14

    shmoe

    User Avatar
    Science Advisor
    Homework Helper

    Some complex analysis would be helpful to understand this infinite product business, though you can probably look up the relevant theorem and understand its application here without too much trouble.

    Have you met the Bernoulli numbers and their generating function before? It's possible to use the product form of sine to answer the question you've left hanging, evaluating zeta(2k), in one fell swoop if you are happy with an answer in terms of the Bernoulli numbers.
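    For the curious, the "one fell swoop" result shmoe alludes to is the classical formula ζ(2k) = (-1)^(k+1) B_{2k} (2π)^(2k) / (2·(2k)!), a standard fact though not derived in this thread. A sketch using the known values B_2 = 1/6 and B_4 = -1/30 (the dictionary and helper name are my own):

```python
import math
from fractions import Fraction

# Known Bernoulli numbers (standard values, not computed here)
bernoulli = {2: Fraction(1, 6), 4: Fraction(-1, 30)}

def zeta_even(two_k):
    # zeta(2k) = (-1)^(k+1) * B_{2k} * (2 pi)^(2k) / (2 * (2k)!)
    k = two_k // 2
    b = bernoulli[two_k]
    coeff = Fraction((-1)**(k + 1)) * b * Fraction(2**two_k, 2 * math.factorial(two_k))
    return float(coeff) * math.pi**two_k

assert math.isclose(zeta_even(2), math.pi**2 / 6)   # zeta(2) = pi^2/6
assert math.isclose(zeta_even(4), math.pi**4 / 90)  # zeta(4) = pi^4/90
```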
     
  16. Jun 29, 2005 #15

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    Hello Shmoe. Would you kindly tell me the relevant theorem to look up?

    I briefly searched Hadamard products on the net and it was not clear to me at all.
     
  17. Jun 29, 2005 #16

    shmoe

    User Avatar
    Science Advisor
    Homework Helper

  18. Jun 29, 2005 #17

    saltydog

    User Avatar
    Science Advisor
    Homework Helper

    Thanks a bunch Shmoe. I'm looking at the references and related links. Very interesting. :smile:

    What exactly is the issue here? It's so sad I have to ask that but it's true. I mean, where "exactly" in the Euler proof am I being naive? Is it this part:

    [tex]
    \begin{align*}
    1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .&=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)...\\
    &=\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{4\pi^2}\right)\left(1-\frac{x^2}{9\pi^2}\right)\left(1-\frac{x^2}{16\pi^2}\right)...
    \end{align*}
    [/tex]

    That is, when can a power series be represented by an infinite product? Or when can a function such as Sin(x) be represented by an infinite product of the kind presented in the proof? Also, I don't understand why you and Hurkyl refer to complex functions and Complex Analysis in general with regard to this problem.

    I appreciate yours and Hurkyl's help. Any additional help you or others can give me, I do also. :smile:
     
    Last edited: Jun 29, 2005
  19. Jun 29, 2005 #18

    Hurkyl

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    The complexes are the natural setting for talking about power series. In it, all the mysteries of life are explained!

    Have you ever wondered why the Taylor series of a function like 1/(1+x^2) has a finite interval of convergence, but that of a function like e^x converges everywhere?

    Or why the interval happens to be as large as it is? For example, the Taylor series for 1/(1+x^2) about 0 converges for |x| < 1, and its Taylor series about 1 converges for |x - 1| < √2.

    The answer is that power series like to converge on disks in the complex plane. (And diverge outside that disk. The boundary is a tougher question)

    A Taylor series for a complex function will converge on the largest possible disk on which it is defined!

    Since e^z is defined everywhere, its Taylor series about any point converges everywhere.

    However, 1/(1+z^2) has singularities at +i and -i. Thus, any Taylor series will converge on the largest disk not containing +i or -i. For the Taylor series about z=1, that is a disk of radius √2.
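    That radius can even be estimated numerically without any complex analysis (my own sketch): substituting w = z - 1 turns (1+z²)f = 1 into (w² + 2w + 2)f = 1, which gives a recurrence for the Taylor coefficients about z = 1, and the root test then recovers a radius near √2.

```python
import math

# Coefficients a_k of 1/(1+z^2) about z = 1, from (w^2 + 2w + 2) f = 1:
#   a_0 = 1/2,  a_1 = -1/2,  a_k = -a_{k-1} - a_{k-2}/2  for k >= 2.
a = [0.5, -0.5]
for k in range(2, 200):
    a.append(-a[-1] - a[-2] / 2.0)

# Root test: radius = 1 / limsup |a_k|^(1/k); estimate limsup by taking
# the max over a window of large k (skipping coefficients that are zero).
est = max(abs(a[k]) ** (1.0 / k) for k in range(100, 200) if a[k] != 0.0)
radius = 1.0 / est
assert abs(radius - math.sqrt(2)) < 0.05
print(radius)  # an estimate near sqrt(2)
```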
     
  20. Jun 29, 2005 #19

    saltydog

    User Avatar
    Science Advisor
    Homework Helper


    Very interesting Hurkyl. I need a book on Complex Analysis and then start doing the problems but then I'd have to stay away from PF because you guys always bring up such interesting problems and I'd get distracted. :smile:
     
  21. Jun 29, 2005 #20
    Speaking of which, what is a good complex analysis book?
     