Ramanujan Summation & Riemann Zeta Function: Negative Values

  • #1
Gib Z
I was wondering if anyone could tell me more about the Riemann zeta function, especially at negative values, and in particular the statement [tex] \sum_{n=1}^{\infty}n= \frac{-1}{12} R[/tex] where R is the Ramanujan summation operator. Could anyone post a proof?
 
  • #2
Actually... I have thought of a proof for that, but without the "R". I hope this helps...

First consider the negative of the summation (from i = 1 to infinity) of i(-r)^i. Let this sum be S. Then,

S = r - 2r^2 + 3r^3 - 4r^4 + ... - ...
Sr = 0 + r^2 - 2r^3 + 3r^4 - ... + ...

Adding the two lines termwise,

S + Sr = r - r^2 + r^3 - r^4 + ... - ... = r/(1+r)   (a geometric series with ratio -r, valid for |r| < 1)
S(1+r) = r/(1+r)
S = r/(1+r)^2

When you take the limit of both sides of the equation as r -> 1^- (this is Abel summation: the series itself diverges at r = 1, but the closed form has a limit), we have:

lim (r -> 1^-) S = lim (r -> 1^-) r/(1+r)^2

or, in the Abel sense,

1 - 2 + 3 - 4 + ... - ... = 1/4 (remember this)
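The Abel limit above is easy to watch numerically. Here is a small sketch of my own (the function name is mine, not from the thread): partial sums of S(r) for r just below 1 track the closed form r/(1+r)^2 and drift toward 1/4.

```python
# Numerically check that S(r) = r - 2r^2 + 3r^3 - ... approaches 1/4 as r -> 1^-.
# For |r| < 1 the series converges absolutely, so truncation is safe.

def S(r, terms=100_000):
    """Partial sum of sum_{n>=1} (-1)**(n-1) * n * r**n."""
    return sum((-1) ** (n - 1) * n * r ** n for n in range(1, terms + 1))

for r in (0.9, 0.99, 0.999):
    print(r, S(r), r / (1 + r) ** 2)  # both columns approach 0.25
```

This is only a check of the |r| < 1 identity plus the one-sided limit; nothing here touches r = 1 itself.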

Now, if I let T = 1 + 2 + 3 + ... then

4T = 4(1 + 2 + 3 + ...) = 2(2 + 4 + 6 + ...) (remember this too)

Therefore, what is 1/4 + 4T? Look: add the two series marked "remember",

1 - 2 + 3 - 4 + ... - ...
+
2(2) + 2(4) + 2(6) + 2(8) + ...

As you can see, the odd terms are left as is, but each even term combines with a term of the second series: 2(2) - 2 = 2, 2(4) - 4 = 4, and so on, leaving 1 + 2 + 3 + 4 + ... = T.

Thus, 1/4 + 4T = T.
Solving for T, you get T = -1/12.

So,

[tex] \sum_{n=1}^{\infty}n= \frac{-1}{12} [/tex]
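A physicist's way of seeing the same finite part (a sketch of my own, not from the thread): damp the series with a factor e^(-n*eps). The regulated sum equals e^(-eps)/(1 - e^(-eps))^2, which expands as 1/eps^2 - 1/12 + O(eps^2); subtracting the divergent 1/eps^2 piece leaves approximately -1/12.

```python
import math

def regulated_sum(eps, terms=10_000):
    """sum_{n>=1} n * exp(-n * eps): a damped version of 1 + 2 + 3 + ..."""
    return sum(n * math.exp(-n * eps) for n in range(1, terms + 1))

# The finite part after removing the 1/eps^2 divergence hovers near -1/12.
for eps in (0.1, 0.05, 0.01):
    print(eps, regulated_sum(eps) - 1 / eps ** 2)
```

The smaller eps gets, the closer the printed finite part sits to -1/12 ~ -0.08333, with error of order eps^2.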
 
  • #3
That is excellent, thanks. The R merely indicates that the expression has to be derived by a special summation method, as you did, rather than by the conventional definition under which the series diverges. The Riemann zeta function [tex] \zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}[/tex] is zero at negative even s, i.e. s = -2, -4, -6, ...; can anyone prove that?
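For what it's worth, here is one standard route to those trivial zeros, which I'm adding as a sketch since the thread never states it explicitly: it uses Riemann's functional equation.

```latex
% Riemann's functional equation:
\zeta(s) \;=\; 2^{s}\,\pi^{s-1}\,\sin\!\Big(\tfrac{\pi s}{2}\Big)\,\Gamma(1-s)\,\zeta(1-s).
% At s = -2k (k = 1, 2, 3, \dots) the factor \sin(\pi s/2) = \sin(-k\pi) = 0,
% while \Gamma(1-s) = \Gamma(1+2k) and \zeta(1-s) = \zeta(1+2k) are finite and
% nonzero, so \zeta(-2k) = 0.
% (At s = 0 the zero of the sine is cancelled by the pole of \zeta(1-s) at 1,
% which is why \zeta(0) = -1/2 rather than 0.)
```

Of course this presupposes the functional equation itself; post #11 below reaches the same zeros by a more elementary formal route.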
 
  • #5
irony of truth said:
1 - 2 + 3 - 4 + .. - ... = 1/4 (remember this)
...
As you can see, the odd numbers are left as is, but the even numbers are combined,
say 2(2) - 2 and 2(4) - 4.

Thus, 1/4 + 4T = T.
Solving for T, you got -1/12.
I don't think this is logically correct. One cannot rearrange the terms of an infinite series unless it is absolutely convergent, and the series 1 - 2 + 3 - 4 + ... is not absolutely convergent.
You will get fallacies this way: what you have derived is 1 + 2 + 3 + ... ad inf = -1/12, a sum of positive numbers that is negative!
Example:

ln(2) = 1 - 1/2 + 1/3 - 1/4 + ... ad inf
= (1 + 1/2 + 1/3 + 1/4 + ...) - 2(1/2 + 1/4 + 1/6 + ...)
= (1 + 1/2 + 1/3 + 1/4 + ...) - (1 + 1/2 + 1/3 + 1/4 + ...)
= 0
= ln(1).
Therefore ln(2) = ln(1), that is, 2 = 1. The fallacy is in the second line, which rearranges the series and splits it into two divergent pieces. Since 1 - 1/2 + 1/3 - 1/4 + ... ad inf is not absolutely convergent, its terms cannot be rearranged.
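ssd's warning can be seen numerically. The following sketch of my own is the classic Riemann-rearrangement example: taking two positive terms of 1 - 1/2 + 1/3 - ... for every negative term changes the limit from ln 2 to (3/2) ln 2, using exactly the same terms.

```python
import math

def alternating_harmonic(terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..., which converges to ln 2."""
    return sum((-1) ** (n - 1) / n for n in range(1, terms + 1))

def rearranged(blocks):
    """Same terms reordered: two positive (odd) terms, then one negative (even) term."""
    total = 0.0
    for k in range(blocks):
        total += 1 / (4 * k + 1) + 1 / (4 * k + 3) - 1 / (2 * k + 2)
    return total

print(alternating_harmonic(100_000))  # near ln 2  ~ 0.6931
print(rearranged(100_000))            # near 1.5 * ln 2 ~ 1.0397
```

Conditional convergence means the ordering of terms is part of the data; that is exactly why the manipulations in post #2 need a generalized summation framework to be legitimate.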
 
  • #6
True, maybe this proof wasn't the most rigorous, but it nonetheless reaches the desired result. We know beforehand that the answer is correct, even though, umm... illogical, lol. The result that the series equals -1/12 is commonly used in string theory, if you check the link above.
 
  • #7
Gib Z said:
True, maybe this proof wasn't the most rigorous, but it nonetheless reaches the desired result. We know beforehand that the answer is correct, even though, umm... illogical, lol. The result that the series equals -1/12 is commonly used in string theory, if you check the link above.

LOL, I am not a man of Physics, unable to understand the physical interpretation of String theory facts.
 
  • #8
lol, I don't know strings so much either, just the theory and the most basic math.
I know enough, though, to understand the link. There are 2 proofs for the question; look there if you like.
 
  • #9
irony of truth said:
S = r - 2r^2 + 3r^3 - 4r^4 + ... - ...
Sr = 0 + r^2 - 2r^3 + 3r^4 - ... + ...

S + Sr = r - r^2 + r^3 - r^4 + ... - ...
S(1+ r) = r / [1+r]
S = r/(1+r)^2

When you take the limit of both sides of the equation as r -> 1, we have:

Lim S (r-> 1) = Lim r/(1+r)^2 (r-> 1)

1 - 2(1)^2 + 3(1)^3 - 4(1)^4 + ... - ... = 1/4
(remember how I defined my S awhile ago? )


'Gib Z' I don't know what to say...
My knowledge goes this way:
r - r^2 + r^3 - r^4 + ... ad inf
= r/(1+r) if and only if -1 < r < 1 holds, the inequalities being strict.
When one writes the limit r -> 1, that ordinarily means calculating both the limit r -> 1^- and the limit r -> 1^+ and finding them equal. But by taking r -> 1^+ one violates the necessary and sufficient condition |r| < 1 for this summation; only the one-sided limit r -> 1^- makes sense here.
 
  • #10
Well yes, as I said it's not a very rigorous proof, but it's sufficient for showing a high school class, and that's what I originally intended it for. The link has 2 rigorous proofs; look there. I understand your disbelief; it's not logical.
 
  • #11
A more formal derivation!

Forgive the lack of LaTeX, but the notation is self-explanatory! The following, combined with the analytic continuation theorems of complex analysis, provides suitable Euler-Abel sums for negative values of the Riemann zeta function.

Note that <B*_k> stands for the k-th Bernoulli number in the old convention and <B_k> for the modern convention (I explain at the end).

1 + e^(iy) + e^(2iy) + e^(3iy) + ... = (1 - e^(iy))^(-1) = 1/2 + (i/2) cot(y/2)
(understood in the Abel sense, since |e^(iy)| = 1 and the series does not converge classically).
Equating real and imaginary parts we have

cos y + cos 2y + cos 3y + ... = -1/2 --------(1)
and
sin y + sin 2y + sin 3y + ... = (1/2) cot(y/2) ------------(2)

Replace y by (y + pi) in (1) and we have
cos y - cos 2y + cos 3y - ... = 1/2

Differentiate 2k times; the constant on the right gives 0, so
Sum [n=1 to infinity] (-1)^(n-1) n^(2k) cos ny = 0,
and setting y = 0,
Sum [n=1 to infinity] (-1)^(n-1) n^(2k) = 0.
Differentiating 2k-1 times instead gives
Sum [n=1 to infinity] (-1)^(n-1) n^(2k-1) sin ny = 0.

For (2), the same replacement y -> y + pi gives
sin y - sin 2y + sin 3y - ... = (1/2) tan(y/2).
Since the Taylor expansion of (1/2) tan(y/2) is
Sum [k=1 to infinity] {(2^2k - 1) <B*_k> y^(2k-1)} / (2k)!,

differentiate 2k-1 times, put y = 0, and equate to get

Sum [n=1 to infinity] (-1)^(n-1) n^(2k-1) = [(-1)^(k-1) (2^2k - 1) <B*_k>] / 2k


Now recall Z(1- 2k) = 1^(2k-1) + 2^(2k-1) + 3^(2k-1) + ...
where Z is Riemann's Zeta function, we see that the function we handled,
N(1-2k) = Sum [n=1 to infinity] (-1)^(n-1) n^(2k-1) = 1^(2k-1) - 2^(2k-1) + 3^(2k-1) - ...

satisfies,
(1 - 2^2k) Z(1-2k) = N(1-2k)
Then,
Z(1-2k) = (-1)^k <B*_k> / 2k

But with the modern notation for the Bernoulli numbers,
<B_2n> = (-1)^(n+1) <B*_n>

therefore,
Z(1-2k) = (-1)^(2k+1) <B_2k> / 2k = - <B_2k> / 2k

Quite generally,
Z(1-n) = (-1)^(n-1) <B_n> / n

for every positive integer n (even zero in a sense, more below). This defines the zeta function at the odd negative integers; the case we considered first shows that the function vanishes at the negative even integers. Those are its trivial zeros; the location of the remaining (nontrivial) zeros is the subject of the Riemann Hypothesis, one of the great open problems of mathematics.
Now the fun bit: put k = 1 and we have the magical
Z(-1) = 1 + 2 + 3 + 4 + ... = -1/12, since <B_2> = 1/6
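The formula Z(1-n) = (-1)^(n-1) <B_n> / n can be checked exactly for small n. The sketch below is my own addition: it generates modern-convention Bernoulli numbers (B_1 = -1/2) from the standard recurrence and evaluates Z(-1) and Z(-3).

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """First m+1 Bernoulli numbers (modern convention, B_1 = -1/2), exact.

    Uses the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1.
    """
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
    return B

B = bernoulli_numbers(4)
zeta_minus_1 = -B[2] / 2   # Z(-1) = -B_2 / 2
zeta_minus_3 = -B[4] / 4   # Z(-3) = -B_4 / 4
print(B[2], B[4], zeta_minus_1, zeta_minus_3)  # 1/6, -1/30, -1/12, 1/120
```

Z(-3) = 1/120 agrees with the k = 2 case used in the Casimir effect mentioned below.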

This value actually holds a meaning (though it is hard to explain briefly; roughly, it has to do with what may be termed a 'measure' of infinities, and thankfully in math definition is all that matters!). It is used extensively, along with the case k = 2, in theoretical physics (namely the Casimir effect and perturbation theory on quantum fields), but applications are always of secondary importance!

Rigorous enough, eh? Yes, but only if we consider it under a special summation definition (the extended A definition due to Euler, though the Ramanujan sum coincides too) and note the analytic continuation of the zeta function to the whole plane.

Actually this is most awe-inspiring, since it fits perfectly with the (seemingly independent) analytic continuation, down to how k = 0 gives the simple pole!

The 'trickery', per se, is that |e^(iy)| = 1 for all real y, but since we avoid the actual pole there is no damage done!
 
  • #12
So I came across a connection to Taylor's series; imagine my surprise when I learned that one can think of $f(t + n)$ as the result of the operator $e^{n\delta}$ (where $\delta$ denotes d/dt) applied to $f(t)$.
Then, evaluating at t = 0, we have
$\sum_{n = 0}^{\infty} f(n) = \sum_{n = 0}^{\infty} e^{n\delta} f(0)$

We can sum the exponential series formally as a geometric progression (this works even for the operator), and
$\sum_{n = 0}^{\infty} f(n) = (1 - e^{\delta})^{-1} f(0)$

since
$\frac{\delta}{e^{\delta} - 1} = \sum_{n = 0}^{\infty} \frac{B_{n}}{n!} {\delta}^{n}$
and $(1 - e^{\delta})^{-1} = -\frac{1}{\delta} \cdot \frac{\delta}{e^{\delta} - 1}$, we substitute and we have

$\sum_{n = 0}^{\infty} f(n) = - \sum_{n = 0}^{\infty} \frac {B_{n}}{n!} {\delta}^{n - 1}f(0)$

(the operations on the operator can be justified)

Put f(t) = t: only the n = 2 term survives ($\delta^{-1} f$ has zero constant term and $f(0) = 0$), giving 1 + 2 + 3 + 4 + ... = $-B_2/2$ = -1/12,
etc.
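yasiru89's operator formula can be carried out exactly for f(t) = t. The sketch below is my own: polynomials are coefficient lists, delta is differentiation, and delta^(-1) is the antiderivative with zero constant term, so every step is finite and exact.

```python
from fractions import Fraction
from math import factorial

# f(t) = t as a coefficient list: poly[i] is the coefficient of t^i.
f = [Fraction(0), Fraction(1)]

def deriv(p):
    return [Fraction(i) * p[i] for i in range(1, len(p))] or [Fraction(0)]

def integ(p):
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

def at_zero(p):
    return p[0]

# Bernoulli numbers in the delta/(e^delta - 1) convention, so B_1 = -1/2.
B = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0)]

# sum_{n>=0} f(n) "=" -sum_{n>=0} (B_n / n!) * delta^(n-1) f, evaluated at 0.
total = -B[0] * at_zero(integ(f))   # n = 0 term: delta^(-1) f = t^2/2 vanishes at 0
p = f
for n in range(1, 4):               # terms with n >= 4 vanish since delta^2 f = 0
    total -= B[n] / factorial(n) * at_zero(p)
    p = deriv(p)

print(total)  # -1/12: only the B_2 term contributes
```

The whole regularized value comes from the single B_2/2! coefficient, matching Z(-1) = -B_2/2 from post #11.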
 
  • #13
That is some interesting working, yasiru89; I will need to look at it in more detail. By the way, on these forums, instead of $ for tex, at the start of the code write [ tex ] without the spaces between the brackets, and [ /tex ] without spaces when the code is finished.
 
  • #14
And [ itex ] for stuff inlined with ordinary text.
 
  • #15
Yeah, thanks for pointing that out. I'm mostly a pen-and-paper guy, and my tex is restricted to posts on math forums! (my first post isn't tex)

The beauty of mathematics is that logic can at times have very little bearing and all that matters is definition!(if you can follow it up that is)
 
  • #16
yasiru89 said:
(if you can follow it up that is)
Of course, that's where you need logic. :tongue:
 
  • #17
I meant that our logic is bound by geometries and the freedom of definition absolves us. Yet we look the other way!
 
  • #18
wait a second, does this mean that the stuff I just reviewed about infinite series is in fact WRONG?! If there exist rigorous proofs that 1 + 2 + 3 + ... = -1/12, then does calculus live inside an inconsistent theory??

please can someone clarify whether or not this is a FACT?!
 
  • #19
If there do exist rigorous proofs of the fact that 1 + 2 + 3 + ... = -1/12
The common meaning of the symbols you wrote is equivalent to "the infinite summation operation you learned in calculus". And in that meaning, there does not exist a proof of that statement.

Only when you consider different kinds of 'summation' operations can statements like that be true.
 
  • #20
SiddharthM said:
wait a second, does this mean that the stuff I just reviewed about infinite series is in fact WRONG?! If there do exist rigorous proofs of the fact that 1 + 2 + 3 + ... = -1/12 then calculus lives inside of an inconsistent theory??

please can someone clarify whether or not this is a FACT?!

I didn't put an R in that equation for no reason you know =]
 
  • #21
But you must understand (forgive my apparent reduction to metaphysics, but that's not what it is) that whether it be the R, A, E, B, C, etc. definitions, it does not make the relations any less true. Some, for example, are as valid as the algebraic operation of long division. Hence my comment (last on the previous page)!
Note the connection between the Bernoulli/differential-operator derivation I gave second, the Euler-Maclaurin sum formula, and the Ramanujan definition of a sum (which is perhaps the most unorthodox, being, very bluntly put, the difference between the partial sum and the corresponding integral).
 
  • #23
yasiru89 said:
But you must understand(forgive my apparent reduction to metaphysics, but that's not what it is) that whether it be the R, A, E, B, C, etc. definitions it does not make the relations any less true.
Of course it does. The sum
1 + 2 + 3 + ... = -1/12​
is an explicit example of this -- this relation is patently false for the infinite summation operator you learned in elementary calculus (the sum is [itex]+\infty[/itex]), and it is also false for summations of generalized functions (the sum doesn't exist), however this relation is true for Ramanujan summation.
 
  • #24
Gee, I must've missed that bit...
NOT!
It IS valid for many sorts of generalized summation methods, as is apparent from my second post (though that's not explicitly Ramanujan summation; for more, consider the Euler-Maclaurin formula).
These new summation operators are almost all legitimate; they can be made rigorous and consistent, and they are, most inconveniently for those who curl up in their comfy math beds (or dare I say couches), true.
It is all in the definition, as Hurkyl apparently concedes, and does not invade upon the rigours of pure mathematics (which is indeed my main area of interest).
 
  • #25
Oh and I noticed you seem to think of the summation operator as uniquely defined, not so mate, the one in your 'absolute truth' calculus courses is just one instance and balances itself hazardously on a range of convergence criteria -
which is all very well and appreciated, but we mustn't overlook other possibilities. I'd imagine you'd be more open to this sort of thing on 'physicsforums' since almost textbook results from this neck of the woods are required in perturbation theory sometimes as I understand!
 
  • #26
yasiru89 said:
Gee I must've missed that bit..
NOT!
It IS valid for many sorts of generalized functions as is apparent from my 2nd post (though that's not explicitly Ramanujan summation, for more consider the Euler MacLaurin formula)
Wrong. But I'll chalk that up to not knowing what I meant by "generalized function". If the functions [itex]f_n[/itex] are defined by

[tex]f_n(x) := \begin{cases} \frac{1}{2n} & x \in [-n, n] \\ 0 & x \notin [-n, n] \end{cases}[/tex]

Then the limit [itex]\lim_{n \rightarrow 0} f_n[/itex] doesn't exist as an ordinary function. But if we instead work in the space of, for example, Schwartz distributions, then the limit exists and is equal to the delta function.
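Hurkyl's f_n can be watched acting on a test function numerically (a sketch of my own, with names I chose): the integral of f_n times a smooth g tends to g(0) as n -> 0, even though f_n itself has no pointwise limit there.

```python
import math

def smear(g, n, steps=10_001):
    """Midpoint-rule approximation of integral of f_n * g, with f_n = 1/(2n) on [-n, n].

    Since f_n is constant on its support, this is just the average of g over [-n, n].
    """
    h = 2 * n / steps
    return sum(g(-n + (i + 0.5) * h) for i in range(steps)) * h / (2 * n)

# As n -> 0 the smeared value approaches g(0): f_n -> delta distributionally.
for n in (1.0, 0.1, 0.001):
    print(n, smear(math.cos, n))  # tends to cos(0) = 1
```

For g = cos the exact value is sin(n)/n, which indeed goes to 1; convergence of <f_n, g> for every test function g is precisely convergence in the distributional sense.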

The general procedure here is to expand your universe of discourse so that (ordinary!) limits and sums that failed to converge in the old universe really do converge in the new universe.

This is an entirely different method than what you've been considering. You retain the ordinary definition of "limit", "sum", "integral", et cetera, but you expand the class of objects you're working with.



These new summation operators are most all legitimate, can be made rigorous, consistent
Nobody said otherwise. The point is that they are NEW summation operators, and the class of valid relations involving these NEW summation will be different than the class of valid relations involving the OLD summation operator.


and are most inconveniently for those who curl up in their comfy math beds(or dare I say Couches) - true.
It is all in the definition as Hurkyl apparently concedes and does not invade upon the rigours of pure mathematics(as that is indeed my main area of interest)
Can the attitude right now.



yasiru89 said:
Oh and I noticed you seem to think of the summation operator as uniquely defined
Well, duh. The summation operator from elementary calculus has a precise definition that distinguishes it from all other possible operators. The other (rigorous) generalizations of the summation operator that I know also have precise definitions distinguishing them from one another.
 
  • #27
The million-dollar physics question is: given a naively divergent sum appearing in some calculation, which summation device do you use, or indeed which are you even allowed to use?

Normally we appeal to some overarching symmetry principle (for instance, the 1 + 2 + 3 + ... series appears in a functional-integral calculation of the Casimir energy, so we are more or less allowed to use dimensional regularization, which also yields -1/12). However, I have absolutely no idea why zeta regularization is permitted.
 
  • #28
You might find
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1103900982

a helpful link. I for one brushed over some of the physics, but Dr Hawking's argument is straightforward and builds a strong case for zeta regularization over the dimensional method in certain instances.

And Hurkyl, you're right about the generalizations; I didn't realize you wanted to expand the space you were working on. Indeed, that sort of thing is more akin to dimensional antics, in all sincerity.

Paul Dirac fell into disapproval with von Neumann when he first used the Dirac delta function (introduced by Heaviside, if I am not mistaken), but Schwartz's distribution theory expanded the scope of a function and made it legitimate. Similarly, the 'spaces' we work with are simply the generalizations of our devices. The best example, perhaps, is that the representation 1 + z + z^2 + z^3 + ... = (1-z)^(-1) is valid for all z (except the pole at unity) under the analytic uniqueness afforded by complex function theory: a bigger space, yes, and more powerful tools too, but most definitions remain the same, the sum at least.

In effect, the new sum operators are themselves 'suitable generalizations' of a 'classical' (though NOT absolute) operator. You may choose to consider them completely different entities and proceed as you say, but that forces an ugly disconnectedness: even though definitions must stand faultless, our notion of sum has not faltered (except perhaps for a feeling of impossibility, which I attribute to something more psychological than mathematical, and which has prompted this, dare I say, heated discussion).

Note that I haven't used any sum that isn't consistent, and hence the relations I have employed (regardless of whether they agree with the traditional operator) hold true in their given respects, while our notion of sum has not varied (except for the leap it demands to be understood in a proper sense). This sense, hence the definition, is all-important in avoiding misunderstanding. My 'sum' may, given my connotation, be the constant of the Euler-Maclaurin formula (e.g. for [tex]{\zeta}(1)[/tex]) or an extended limit (e.g. in Euler-Abel summation).
 
  • #29


If anyone's interested, I'm taking a look at divergent series and resummations at http://mathrants.blogspot.com
 

What is Ramanujan Summation?

Ramanujan Summation is a mathematical technique used to assign a finite value to certain infinite sums. It was developed by the Indian mathematician Srinivasa Ramanujan and is often used in the study of divergent series.

What is the Riemann Zeta Function?

The Riemann Zeta Function is a mathematical function that was defined by the German mathematician Bernhard Riemann. It is used to study the distribution of prime numbers and has applications in number theory, physics, and engineering.

Can the Riemann Zeta Function handle negative values?

Yes. Via analytic continuation, the Riemann Zeta Function is defined for all complex numbers except s = 1, where it has a simple pole.

What is the significance of negative values in the Riemann Zeta Function?

Negative values of the Riemann Zeta Function are important because they allow for the evaluation of certain integrals and infinite sums that cannot be evaluated using traditional methods. They also have connections to other areas of mathematics and physics, such as the Riemann Hypothesis and the Casimir effect.

How are negative values of the Riemann Zeta Function calculated using Ramanujan Summation?

Values of the Riemann Zeta Function at negative arguments are obtained through analytic continuation: the function, initially defined by the series only for Re(s) > 1, is extended to the rest of the complex plane, and Ramanujan Summation assigns the same values to the corresponding divergent series.
