Euler's Proof for the Zeta Function at 2

  • Thread starter saltydog
  • #1
saltydog
Science Advisor
Homework Helper
I've been reviewing Euler's proof for [itex]\zeta(2)[/itex] and thought some of you might find it interesting too. We wish to find:

[tex]\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}[/tex]

First a lemma:

If a polynomial P(x), has non-zero roots [itex]r_i[/itex], and P(0)=1, then:

[tex]P(x)=\left(1-\frac{x}{r_1}\right) \left(1-\frac{x}{r_2}\right) \left(1-\frac{x}{r_3}\right)...\left(1-\frac{x}{r_n}\right)[/tex]

I found that interesting to prove and will leave it for the reader if they wish to do so.
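
As a quick sanity check of the lemma, here is a minimal sketch (assuming Python with NumPy; the particular polynomial and its roots are just an illustrative choice of mine) that expands the product over the roots of a quadratic with P(0)=1 and roots 2 and 3, and compares it with the polynomial written out directly:

[code]
import numpy as np

# Illustrative check of the lemma: a polynomial with P(0) = 1 and
# non-zero roots 2 and 3 should equal (1 - x/2)(1 - x/3).
roots = [2, 3]

# Expand the product of (1 - x/r) over the roots; coefficient arrays
# are in increasing powers of x.
coeffs = np.array([1.0])
for r in roots:
    coeffs = np.polynomial.polynomial.polymul(coeffs, [1.0, -1.0 / r])

# Direct form: P(x) = 1 - (5/6)x + (1/6)x^2 has P(0) = 1 and P(2) = P(3) = 0.
direct = np.array([1.0, -5.0 / 6.0, 1.0 / 6.0])

print(coeffs)                        # [ 1.         -0.83333333  0.16666667]
print(np.allclose(coeffs, direct))   # True
[/code]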

Now consider the polynomial:

[tex]P(x)=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .[/tex]

Note that P(0)=1, but we don't know anything about its roots yet.

Also, consider the power series for Sin(x):

[tex]Sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+...[/tex]

Note that:

[tex]xP(x)=Sin(x)[/tex]

Now, since Sin(x) has roots at 0, [itex]\pm \pi, \pm 2\pi, \pm 3\pi, \ldots[/itex], and the 'x' accounts for the zero root on the left, we are left with P(x) containing the remaining roots. Thus P(x) has only non-zero roots, and we can use the lemma above to state:

[tex]
\begin{align*}
1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .&=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)...\\
&=\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{4\pi^2}\right)\left(1-\frac{x^2}{9\pi^2}\right)\left(1-\frac{x^2}{16\pi^2}\right)...
\end{align*}
[/tex]

Now I tried multiplying four of those factors together by hand and, with extreme difficulty, was able to do so in some manner of order. Apparently Euler was able to do many, many more, since he calculated [itex]\zeta(26)[/itex] by hand!

Expanding this product and equating the coefficients to those of P(x) is the key to solving this problem . . .
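
A machine can also do the multiplying for us. The sketch below (a minimal illustration, assuming Python; the function name and the truncation values N are my own choices) expands the first N factors [itex]\left(1-\frac{x^2}{n^2\pi^2}\right)[/itex] numerically and shows the coefficients of [itex]x^2, x^4, x^6[/itex] approaching [itex]-\frac{1}{3!}, \frac{1}{5!}, -\frac{1}{7!}[/itex] as N grows:

[code]
import math

# Expand prod_{n=1}^{N} (1 - x^2/(n^2 pi^2)), keeping terms up to x^6.
# The coefficient of x^(2k) should approach (-1)^k / (2k+1)! as N grows.
def truncated_product_coeffs(N, kmax=3):
    coeffs = [1.0] + [0.0] * kmax          # coefficients of x^0, x^2, ..., x^(2*kmax)
    for n in range(1, N + 1):
        factor = 1.0 / (n * n * math.pi ** 2)
        for k in range(kmax, 0, -1):       # multiply by (1 - x^2/(n^2 pi^2))
            coeffs[k] -= coeffs[k - 1] * factor
    return coeffs

targets = [1.0] + [(-1) ** k / math.factorial(2 * k + 1) for k in range(1, 4)]
for N in (4, 100, 100000):
    print(N, ["%.7f" % c for c in truncated_product_coeffs(N)])
print("sine series:", ["%.7f" % t for t in targets])
[/code]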
 
  • #2
Now consider the polynomial:

[tex]P(x)=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .[/tex]

That's not a polynomial. :tongue2:


In general, the product representation also has terms that look like e^f(z) for some analytic function f... but we happen to luck out in this case! (But I can't prove why)


I also can't justify the process of multiplying it out, but anyways...


The trick, I'd imagine, is to think of it combinatorially rather than algebraically. When multiplying n binomials, one can naturally group the terms into (n+1) categories -- a term in the i-th category is the product of (n-i) left terms and i right terms.


If we naively extend this to infinite products, ζ(2) is simple -- the x^2 term of P(x) is simply the sum of all the terms in category (1) -- those products formed from exactly one right term. An example of such a product is 1 * 1 * 1 * ⋯ * (-(x/nπ)^2), with every factor but one contributing its left term.

The x^4 term isn't so bad -- it's the sum of all products involving only two of the right terms. In other words:

[tex]
\frac{x^4}{5!} = \sum_{1 \leq m < n} \frac{x^4}{m^2 n^2 \pi^4}
[/tex]
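
(A quick numerical check of this identity, as a minimal sketch assuming Python with an arbitrary cutoff M on the double sum, gives a value close to [itex]1/5!\approx 0.008333[/itex].)

[code]
import math

# Truncated version of the sum over 1 <= m < n of 1/(m^2 n^2 pi^4);
# the full sum equals 1/5! = 1/120.
M = 2000   # arbitrary cutoff
total = sum(1.0 / (m * m * n * n * math.pi ** 4)
            for n in range(2, M + 1)
            for m in range(1, n))

print(total)                    # ~0.008325, approaching 1/120 as M grows
print(1 / math.factorial(5))    # 0.008333...
[/code]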

Do you need a hint on where to go from there, to get ζ(4)?
 
Last edited:
  • #3
Incidentally, the infinite product representation of [itex] \sin x [/itex] is due to ... that's right, Leonhard Euler.

Daniel.
 
  • #4
Hurkyl said:
That's not a polynomial. :tongue2:


In general, the product representation also has terms that look like e^f(z) for some analytic function f... but we happen to luck out in this case! (But I can't prove why)

Alright, he does make an extrapolation from a finite polynomial to an infinite one. Are you saying there are some infinite representations such as these which are not considered polynomials, or do I need to check the definition of a polynomial to find that it is only defined for a finite number of monomials? I don't know and would like to. :smile:
 
  • #5
Yes, a polynomial, by definition, has only finitely many terms. When you have infinitely many terms, it's called a power series. (Actually, a series with finitely many terms is a power series too! So polynomials are a special case of power series.)
 
  • #6
Hurkyl said:
Do you need a hint on where to go from there, to get ζ(4)?

What, are you kidding me? I spent most of the day figuring out [itex]\zeta(2).[/itex] I never said I was quick at any of this. :yuck: I've got time to work on it, however, and I'm patient. :smile:
 
  • #7
Hurkyl said:
Yes, a polynomial, by definition, has only finitely many terms. When you have infinitely many terms, it's called a power series. (Actually, a series with finitely many terms is a power series too! So polynomials are a special case of power series.)

Thanks . . . I appreciate that clarification.
 
  • #8
I've seen snippets of a translated version of Euler's work and it included something like "what holds for polynomials holds in general" to 'justify' leaping into that infinite product for sine (look up Hadamard products if you want a full justification).

As to why there is no e^f(z) term: sine is entire of order 1, so Hadamard tells us f(z)=a+bz for some constants a, b. Looking at sin(z)/z as z->0 gives a, and sine being odd gives b (both turn out to be zero).
 
  • #9
shmoe said:
I've seen snippets of a translated version of Euler's work and it included something like "what holds for polynomials holds in general" to 'justify' leaping into that infinite product for sine (look up Hadamard products if you want a full justification).

As to why there is no e^f(z) term: sine is entire of order 1, so Hadamard tells us f(z)=a+bz for some constants a, b. Looking at sin(z)/z as z->0 gives a, and sine being odd gives b (both turn out to be zero).

Thanks Shmoe. I'll look into Hadamard products, as I wish to better understand the validity of Euler's argument. I was unaware of such considerations regarding infinite series and the product Euler uses. :approve:
 
  • #10
If you want an example of where being naive fails, consider the gamma function. If I remember correctly, it is nowhere zero, and has poles of order 1 at all nonpositive integers.

Thus, its reciprocal is entire, and has simple zeroes at all nonpositive integers. But, if you naively try to write its infinite product, you get something that doesn't converge!
 
  • #11
It wasn't just an exp(f(z)) term out front -- some infinite products require exp(f_n(z)) terms inside the product too (such as for the reciprocal of the gamma function). Is the fact that we don't have any of those for sine covered by the same theorem?

By the way, I've never heard of an entire function being of a particular order before, and I can't seem to find info about it, or Hadamard products, on Wikipedia. What's the basic idea behind it?
 
  • #12
An entire function f(z) is of finite order a>=0 if [tex]f(z)=O(e^{|z|^a})[/tex] as |z|->infinity. This makes the zeros behave nicely, namely [tex]\sum_n |z_n|^{-a-\epsilon}[/tex] converges for any epsilon greater than zero, where the sum is taken over all the zeros [itex]z_n[/itex] of f in order of increasing magnitude (and including multiplicity).
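
As a concrete illustration (a minimal numerical sketch assuming Python, with arbitrary sample sizes of my own choosing), one can check that sine fits this definition with a = 1: the maximum of |sin z| over the circle |z| = R grows roughly like e^R, so the logarithm of that maximum divided by R tends to 1.

[code]
import cmath, math

# Estimate M(R) = max over |z| = R of |sin z| by sampling the circle,
# and look at log M(R) / R, which should tend to 1 (sine has order 1).
def max_on_circle(R, samples=2000):
    return max(abs(cmath.sin(R * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

for R in (5, 10, 20, 40):
    print(R, math.log(max_on_circle(R)) / R)   # 0.86, 0.93, 0.97, 0.98...
[/code]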

The "inner" exponential in the infinite product is there to ensure convergence of the product over the zeros of f (like you mentioned with Gamma). The higher the order the slower the guarantee on the growth of the zeros so the more we have to compensate. In general you'll have an exponential of a polynomial (take some logs to see why this will work, the convergence of the recipricals of the zeros is used here).

For sine the general theory tells us we actually get (well after you kill the outer exponential as i mentioned in my last post):

[tex]\sin(z)=z\prod_{n\neq 0}\left(1-\frac{z}{n\pi}\right)e^{z/n\pi}[/tex]

When we combine positive and negative zeros the exponentials cancel.
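
Here is a small numerical sketch of that last point (assuming Python; the test point and the truncation N are arbitrary choices of mine): with the [itex]e^{z/n\pi}[/itex] factors included the truncated product approaches sin(z), and pairing the n and -n factors cancels the exponentials, leaving Euler's form [itex]z\prod\left(1-\frac{z^2}{n^2\pi^2}\right)[/itex].

[code]
import cmath, math

z = 1.3 + 0.7j   # arbitrary test point
N = 5000         # truncation of the infinite product

# Hadamard form: z * prod over n != 0 of (1 - z/(n pi)) * exp(z/(n pi))
hadamard = z
for n in range(1, N + 1):
    for m in (n, -n):
        hadamard *= (1 - z / (m * math.pi)) * cmath.exp(z / (m * math.pi))

# Euler's form after pairing n with -n: the exponentials cancel exactly
euler = z
for n in range(1, N + 1):
    euler *= 1 - z * z / (n * n * math.pi ** 2)

print(cmath.sin(z))   # all three values agree to about four digits
print(hadamard)
print(euler)
[/code]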
 
  • #13
I wish to clear up some confusion I had about polynomials above. Here, I'll put it in LaTeX as punishment work:

For finite n, and given constants [itex]a_0,a_1,a_2,...a_n[/itex] in some field with [itex]a_n[/itex] non-zero, a polynomial function of degree n is a function of the form:

[tex]f(x)=a_0+a_1x+a_2x^2+...+a_{n-1}x^{n-1}+a_nx^n[/tex]


Continuing with my review of Euler's proof:

Assuming for the moment that the infinite series used by Euler is equivalent to the infinite product he constructed, I multiplied out the first four factors of the infinite product by hand and compared coefficients with the expression for P(x) (the brackets below contain only the partial sums coming from those four factors):

[tex]
\begin{align*}
P(x)&=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+... \\
&=1-x^2\left[\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}\right] \\
&+ x^4\left[\frac{1}{4\pi^4}+\frac{1}{9\pi^4}+\frac{1}{16\pi^4}+\frac{1}{36\pi^4}+\frac{1}{64\pi^4}+\frac{1}{144\pi^4}\right] \\
&- x^6\left[\frac{1}{36\pi^6}+\frac{1}{64\pi^6}+\frac{1}{144\pi^6}+\frac{1}{576\pi^6}\right] \\
&+x^8\left[\frac{1}{576\pi^8}\right]
\end{align*}
[/tex]
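
The hand expansion above can be double-checked exactly with a short sketch (assuming Python and its standard fractions module; I have factored [itex]\pi[/itex] out of the coefficients by working with [itex]y=x^2/\pi^2[/itex]):

[code]
from fractions import Fraction

# Multiply (1 - y)(1 - y/4)(1 - y/9)(1 - y/16) exactly, where y = x^2/pi^2.
# The coefficients should match the bracketed sums above:
# 1 + 1/4 + 1/9 + 1/16 = 205/144, and so on.
coeffs = [Fraction(1)] + [Fraction(0)] * 4   # coefficients of y^0 .. y^4
for n2 in (1, 4, 9, 16):
    new = list(coeffs)
    for k in range(4, 0, -1):
        new[k] -= coeffs[k - 1] / n2
    coeffs = new

print(coeffs)
# [Fraction(1, 1), Fraction(-205, 144), Fraction(91, 192),
#  Fraction(-5, 96), Fraction(1, 576)]
[/code]

(Here 91/192 = 1/4 + 1/9 + 1/16 + 1/36 + 1/64 + 1/144 and 5/96 = 1/36 + 1/64 + 1/144 + 1/576, so the signs and sums agree with the expansion above.)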

Considering the trend developing for the [itex]x^2[/itex] term and equating coefficients we suspect:

[tex]\frac{1}{3!}=\left[\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}+...\right][/tex]

or:

[tex]\frac{\pi^2}{6}=\left(1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+...\right)[/tex]

Thus we suspect:

[tex]\frac{\pi^2}{6}=\sum_{n=1}^{\infty}\frac{1}{n^2}[/tex]

This turns out to be the case.

Using the same reasoning, one can then equate other coefficients and in theory, determine:

[tex]\sum_{n=1}^{\infty}\frac{1}{n^{2k}}[/tex]

for k a natural number.
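
For what it's worth, here is a quick numerical sketch (assuming Python; the cutoff N is arbitrary) comparing partial sums of [itex]\zeta(2)[/itex] and [itex]\zeta(4)[/itex] with the closed forms [itex]\pi^2/6[/itex] and [itex]\pi^4/90[/itex], the latter being the standard value one obtains from the [itex]x^4[/itex] coefficient:

[code]
import math

N = 1_000_000   # arbitrary cutoff for the partial sums
zeta2 = sum(1.0 / n ** 2 for n in range(1, N + 1))
zeta4 = sum(1.0 / n ** 4 for n in range(1, N + 1))

print(zeta2, math.pi ** 2 / 6)    # 1.6449331..., 1.6449341...
print(zeta4, math.pi ** 4 / 90)   # 1.0823232..., 1.0823232...
[/code]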

Now, I'd really like to understand the theory being presented by Hurkyl and Shmoe, because that's just not happening for me up there. I suppose I just need to take a course on Complex Analysis . . .
 
  • #14
Some complex analysis would be helpful to understand this infinite product business, though you can probably look up the relevant theorem and understand its application here without too much trouble.

Have you met the Bernoulli numbers and their generating function before? It's possible to use the product form of sine to answer the question you've left hanging, evaluating zeta(2k), in one fell swoop if you are happy with an answer in terms of the Bernoulli numbers.
 
  • #15
shmoe said:
Some complex analysis would be helpful to understand this infinite product business, though you can probably look up the relevant theorem and understand its application here without too much trouble.

Hello Shmoe. Would you kindly tell me the relevant theorem to look up?

I briefly searched Hadamard products on the net and it was not clear to me at all.
 
  • #16
I should also have mentioned looking up the Weierstrass product theorem, http://planetmath.org/encyclopedia/WeierstrassProductTheorem.html [Broken], but my terminology is tainted by the usage in number theory texts, where the usual applications are to functions of finite order.

Also relevant, part of how finite order affects this product:

http://mathworld.wolfram.com/HadamardFactorizationTheorem.html

and a bit on infinite products in general:

http://mathworld.wolfram.com/InfiniteProduct.html
http://en.wikipedia.org/wiki/Infinite_product
 
Last edited by a moderator:
  • #17
Thanks a bunch Shmoe. I'm looking at the references and related links. Very interesting. :smile:

What exactly is the issue here? It's so sad I have to ask that but it's true. I mean, where "exactly" in the Euler proof am I being naive? Is it this part:

[tex]
\begin{align*}
1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+ . . .&=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)...\\
&=\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{4\pi^2}\right)\left(1-\frac{x^2}{9\pi^2}\right)\left(1-\frac{x^2}{16\pi^2}\right)...
\end{align*}
[/tex]

That is, when can a power series be represented by an infinite product? Or when can a function such as Sin(x) be represented by an infinite product of the kind presented in the proof? Also, I don't understand why you and Hurkyl refer to complex functions and Complex Analysis in general with regard to this problem.

I appreciate yours and Hurkyl's help. Any additional help you or others can give me, I do also. :smile:
 
Last edited:
  • #18
The complexes are the natural setting for talking about power series. In it, all the mysteries of life are explained!

Have you ever wondered why the Taylor series of a function like 1/(1+x^2) has a finite interval of convergence, but the Taylor series of a function like e^x converges everywhere?

Or why the interval happens to be as large as it is? For example the Taylor series for 1/(1+x^2) converges for |x| < 1, and its Taylor series about 1 converges for |x - 1| < √2.

The answer is that power series like to converge on disks in the complex plane. (And diverge outside that disk. The boundary is a tougher question)

A Taylor series for a complex function will converge on the largest possible disk on which it is defined!

Since e^z is defined everywhere, its Taylor series about any point converges everywhere.

However, 1/(1+z^2) has singularities at +i and -i. Thus, any Taylor series will converge on the largest disk not containing +i or -i. For the Taylor series about z=1, that is a disk of radius √2.
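
One can watch this radius emerge numerically (a minimal sketch assuming Python; getting the Taylor coefficients about z = 1 from partial fractions is just one convenient route, and the cutoff N is arbitrary): a root-test estimate built from the coefficients of 1/(1+z^2) about z = 1 settles near √2.

[code]
import math

# Taylor coefficients of 1/(1+z^2) about z = 1, via partial fractions:
# a_n = (-1)^n * (1/(2i)) * (1/(1-i)^(n+1) - 1/(1+i)^(n+1)), which is real.
def a(n):
    return ((-1) ** n / 2j * (1 / (1 - 1j) ** (n + 1)
                              - 1 / (1 + 1j) ** (n + 1))).real

# Root-test estimate of the radius: 1 / limsup |a_n|^(1/n).
# Some coefficients vanish, so take the max over a block of indices.
N = 200
limsup_est = max(abs(a(n)) ** (1.0 / n) for n in range(N // 2, N))
print(1 / limsup_est, math.sqrt(2))   # ~1.42 vs 1.41421...
[/code]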
 
  • #19
Hurkyl said:
The complexes are the natural setting for talking about power series. In it, all the mysteries of life are explained!

Have you ever wondered why the Taylor series of a function like 1/(1+x^2) has a finite interval of convergence, but the Taylor series of a function like e^x converges everywhere?

Or why the interval happens to be as large as it is? For example the Taylor series for 1/(1+x^2) converges for |x| < 1, and its Taylor series about 1 converges for |x - 1| < √2.

The answer is that power series like to converge on disks in the complex plane. (And diverge outside that disk. The boundary is a tougher question)

A Taylor series for a complex function will converge on the largest possible disk on which it is defined!

Since e^z is defined everywhere, its Taylor series about any point converges everywhere.

However, 1/(1+z^2) has singularities at +i and -i. Thus, any Taylor series will converge on the largest disk not containing +i or -i. For the Taylor series about z=1, that is a disk of radius √2.


Very interesting Hurkyl. I need a book on Complex Analysis and then start doing the problems but then I'd have to stay away from PF because you guys always bring up such interesting problems and I'd get distracted. :smile:
 
  • #20
saltydog said:
Very interesting Hurkyl. I need a book on Complex Analysis and then start doing the problems...

Speaking of which, what is a good complex analysis book?
 
  • #21
Well, we can give exercises too. :smile:

For any complex power series:

[tex]
\sum_{n = 0}^{\infty} a_n z^n
[/tex]

prove that it has a radius of convergence r such that the series converges for |z| < r and diverges for |z| > r.
 
  • #22
Hurkyl said:
Well, we can give exercises too. :smile:

For any complex power series:

[tex]
\sum_{n = 0}^{\infty} a_n z^n
[/tex]

prove that it has a radius of convergence r such that the series converges for |z| < r and diverges for |z| > r.

Yes, I'll work on that. I have Kreyszig and it has a nice section on Complex Analysis, but it's easy to get distracted in here. I mean, I was kinda working on uniqueness criteria for ODEs and look what happened? Oh, and don't forget Steven. I like Real Analysis too. :smile:
 
  • #23
Hurkyl said:
Well, we can give exercises too. :smile:

For any complex power series:

[tex]
\sum_{n = 0}^{\infty} a_n z^n
[/tex]

prove that it has a radius of convergence r such that the series converges for |z| < r and diverges for |z| > r.

Based on the Ratio Test for complex series, the power series:

[tex]\sum_{n=0}^{\infty}a_nz^n[/tex]

converges if:

[tex]\mathop\lim\limits_{n\to\infty}\left|\frac{a_{n+1}z^{n+1}}{a_nz^n}\right|<1[/tex]

That is if:

[tex]\mathop\lim\limits_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right||z|<1[/tex]

Consider:

[tex]L=\mathop\lim\limits_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|[/tex]

If L=0 then the ratio test gives convergence for all finite z.

If [itex]L\ne 0[/itex], then L>0 and we have convergence if:

[tex]L|z|<1[/tex]

or:

[tex]|z|<\frac{1}{L}[/tex]

And if [itex]L\to\infty[/itex] then by the ratio test, the series diverges for all [itex]z\ne 0[/itex]

These facts are embodied in the following formula (using L defined above), valid whenever the limit exists:

[tex]R=\frac{1}{L}=\mathop\lim\limits_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right|[/tex]

where R is the radius of convergence, i.e. the series converges for:

[tex]|z|<R[/tex]

(The general Cauchy-Hadamard formula, [itex]1/R=\limsup_{n\to\infty}|a_n|^{1/n}[/itex], covers the case where this ratio limit does not exist.)

In the complex plane, the region of convergence is the open disk of radius R centered at the origin. No general conclusion can be made about the convergence of the power series on the boundary circle; the series may converge at some, all, or none of those points.
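
As a concrete sketch (assuming Python; the example series is my own), here is the ratio estimate [itex]R=\lim|a_n/a_{n+1}|[/itex] for the series [itex]\sum_{n}\frac{n}{2^n}z^n[/itex], whose radius of convergence is 2:

[code]
# Ratio estimate of the radius of convergence for sum_n a_n z^n
# with a_n = n / 2^n; the ratio a_n / a_(n+1) = 2n/(n+1) tends to R = 2.
def a(n):
    return n / 2.0 ** n

for n in (10, 100, 1000):
    print(n, a(n) / a(n + 1))   # 1.818..., 1.980..., 1.998...
[/code]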

You know, a few people reading this may think, "yea, make him do a plot too!". :smile:
 
  • #24
PhilG said:
Speaking of which, what is a good complex analysis book?
I don't know of any that are all things to all people.
A few to consider are Churchill and, at a slightly higher level, Lang. Most introductory books on applied math, and many on calculus (i.e., advanced calculus), also cover the basics of complex analysis. Of course this is ignoring the whole complex analysis vs. complex variables issue.
 
  • #25
Thanks for the suggestions.

lurflurf said:
Of course this is ignoring the whole complex analysis vs. complex variables issue.

What's the difference?
 
  • #26
PhilG said:
What's the difference?
Complex variables and complex analysis can mostly be used interchangeably. "Complex variables" has a connotation of being from a more applied and less rigorous perspective; complex variables can be thought of as calculus with i. At many colleges, courses with both titles are offered, either as a two-course sequence or as overlapping courses. Math snobs look down on complex variables like they do calculus.
 
  • #27
Based on the Ratio Test for complex series, the power series:

Yep, looks good! And, incidentally, the proof of the ratio test for complexes is also identical to the proof over the reals, I believe.
 

1. What is Euler's solution to zeta(2)?

Euler's solution to zeta(2), also known as the Basel problem, is a mathematical proof that the infinite series 1 + 1/4 + 1/9 + 1/16 + ... converges to π²/6.

2. Why is Euler's solution to zeta(2) significant?

Euler's solution to zeta(2) is significant because it gave the exact value of the infinite series 1 + 1/4 + 1/9 + 1/16 + ..., a sum whose closed form had been sought for decades without success. It also helped to pave the way for further developments in the study of infinite series and their convergence.

3. How did Euler come up with his solution to zeta(2)?

Euler compared the Taylor series expansion of sin(x) with an infinite product taken over its roots. He made use of the fact that sin(x) has infinitely many roots, at x = nπ for integer n, treating the power series as if it were a polynomial that factors over those roots.

4. Can Euler's solution to zeta(2) be extended to other values of zeta?

Yes, Euler's solution to zeta(2) can be extended to other values of zeta, such as zeta(4), zeta(6), and so on. However, the bookkeeping becomes more involved, since it requires equating higher-order coefficients of the product expansion; the closed forms for zeta(2k) can be written in terms of the Bernoulli numbers.

5. What are some real-life applications of Euler's solution to zeta(2)?

Euler's solution to zeta(2) has various applications in mathematics and physics. Values of the Riemann zeta function such as zeta(2) show up in Fourier analysis and in statistical mechanics. In number theory, zeta(2) is connected to the distribution of prime numbers through the Euler product for the zeta function; a concrete consequence is that the probability that two randomly chosen integers are coprime is 6/π².
