What is convergence and 1+2+3+4... = -1/12

  • Thread starter: bhobba
  • Tags: Convergence

Summary
The discussion explores the concept of convergence in relation to divergent series, specifically the series 1-2+3-4+5..., which is divergent since its terms do not tend to zero. It introduces Borel summation as a method to transform divergent series into convergent forms, allowing analytic continuation to obtain values like -1/12 for the sum of all positive integers. The conversation highlights the importance of rewriting series in a more natural form to avoid unnecessary restrictions on their convergence. Participants also discuss the uniqueness of analytic continuation and the potential for multiple functions to yield the same result upon evaluation. The thread emphasizes the significance of understanding these mathematical concepts, despite their esoteric nature.
  • #31
It's simple. Let's spell it out again. This time I will use the Zeta function in a different form, C(k), defined as ∑n^k (the sum is from n = 1, not zero). Now 2*2^k*C(k) = 2*2^k + 2*4^k + 2*6^k + ..., so

C(k) - 2*2^k*C(k) = 1 + (1-2)*2^k + 3^k + (1-2)*4^k + ... = 1 - 2^k + 3^k - 4^k + ...

which I will call E(k). So we have (1 - 2*2^k)*C(k) = E(k), or C(k) = E(k)/(1 - 2*2^k).

Now we will show that for k = 0 and k = 1, E(k) can be summed by a linear and stable method, using what's called generic summation. Hardy took linearity and stability as the defining axioms of a series summation, as pointed out in your posted article; if these axioms assign a value to a series, that value obviously obeys the axioms.

Start with k = 0, so E(0) = 1 - 1 + 1 - 1 + ... (this is called Grandi's series). There are a number of ways of summing it, but here simply applying the axioms is easiest: E(0) = 1 - E(0), so 2*E(0) = 1 ⇒ E(0) = 1/2. Then C(0) = E(0)/(1 - 2*2^0) = -1/2, i.e. C(0) = 1 + 1 + 1 + 1 + ... = -1/2.

Similarly E(1) = 1 - 2 + 3 - 4 + ... = 1 - (1+1) + (1+2) - (1+3) + ... = (1 - 1 + 1 - 1 + ...) - (1 - 2 + 3 - ...) = 1/2 - E(1) ⇒ E(1) = 1/4. And we get C(1) = 1 + 2 + 3 + 4 + ... = E(1)/(1 - 2*2^1) = -1/12.
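As a quick numerical sanity check of the relation C(k) = E(k)/(1 - 2*2^k), here is a small Python sketch (an editorial addition, not part of the original argument - the function name is made up). It Abel-sums E(0) and E(1) by evaluating ∑ (-1)^(n+1) n^k x^n at x slightly below 1, then applies the relation:

```python
# Numerical check of C(k) = E(k) / (1 - 2*2^k) via Abel summation.
# E(k) = 1 - 2^k + 3^k - 4^k + ... is Abel summable: evaluate the
# power series sum_{n>=1} (-1)^(n+1) n^k x^n for x slightly below 1.

def abel_eta(k, x=0.999, terms=100_000):
    """Abel-sum approximation of E(k) at the given x < 1."""
    total = 0.0
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * n ** k * x ** n
    return total

E0 = abel_eta(0)              # Grandi's series, tends to 1/2
E1 = abel_eta(1)              # 1 - 2 + 3 - 4 + ..., tends to 1/4
C0 = E0 / (1 - 2 * 2 ** 0)    # should approximate zeta(0)  = -1/2
C1 = E1 / (1 - 2 * 2 ** 1)    # should approximate zeta(-1) = -1/12
print(E0, E1, C0, C1)
```

With x = 0.999 the results land within about 10^-3 of E(0) = 1/2, E(1) = 1/4, C(0) = -1/2 and C(1) = -1/12.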

You can do the rest by using linear and stable summation techniques like Borel exponential summation.

How did it evade the theorems in your paper? By transforming the problem into one where they do not apply, as can be seen from E(k) being summable by a linear and stable method. The n^k form of ∑n^k ensures you can't perform manipulations like taking the first term out. For example, the proof says: "That the method Y is not totally regular is immediate since otherwise it should assign the value +∞ to the proposed expression. By (C), Y({1,1,1,…}) has the same value as 0 + 1 + 1 + 1 + …." However, in the form ∑n^k you can't put a zero in front of it - the sum is from n = 1. You will find similar issues with other parts of the proof.

Thanks
Bill
 
  • #32
Thanks for your answer, but I think we are using different definitions of stability. In fact, you say

The n in ∑n^k ensures you can't perform things like taking the first term out etc.

But the definition of stability is (at least from what I read):

Y(a1;a2;a3;a4;...)=a1+Y(a2;a3;a4;...)


So, if the Zeta Function Sum method "forbids" taking the first term out, then it does not guarantee this equation for every sequence in the domain of Y and, as a consequence, it is not stable under this definition of stability.

PS: It must be that we are using different definitions of stability, because I can't see how this method could be stable given that, as mentioned in the paper, there is no consistent, linear, stable and regular method that assigns a value to (1; 2; 3; 4; ...)
 
  • #33
the_pulp said:
Thanks for your answer, but I think we are using different definitions of stability.

It's the same.

What I am doing is taking a certain class of sums - note it is not a general method of summation - and transforming it into another sum. That sum is of the form ∑ (-1)^(n+1) * n^k / (1 - 2*2^k), with the divisor as derived before. That new sum is stable and can be summed by stable summation methods such as Borel exponential summation.

The theorems are about general summation methods, not specific examples - note again, this is not a general summation method but a way to sum a class of sums, e.g. those of the Zeta function. That's why the theorems fail. Rigorously, they fail because ∑ (-1)^(n+1) * n^k is stable.

So to discuss stability you need to discuss the stability of ∑ (-1)^(n+1) * n^k. That is easily seen to be stable because its Borel sum exists and e^(-t) times its Borel transform tends to zero as t → ∞, and there is a theorem that says if that is the case then the Borel sum is the same as the Borel exponential sum, which is a stable method.
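To illustrate with the k = 1 case (an editorial sketch, with made-up function names, not part of the original post): the series 1 - 2 + 3 - 4 + ... has terms a_n = (-1)^n (n+1) for n ≥ 0, its Borel transform ∑ a_n t^n/n! sums to (1 - t)e^(-t), and the Borel sum ∫₀^∞ e^(-t) (1 - t)e^(-t) dt = 1/4. The sketch below builds the transform term by term and does the integral numerically:

```python
# Borel sum of 1 - 2 + 3 - 4 + ...  (a_n = (-1)^n (n+1), n >= 0).
# Borel transform B(t) = sum_n a_n t^n / n!  (analytically (1 - t) e^{-t});
# Borel sum = integral_0^inf e^{-t} B(t) dt, approximated with Simpson's rule.
import math

def borel_transform_eta1(t, nmax=200):
    """B(t) = sum_{n>=0} (n+1) * (-t)^n / n!, truncated at nmax terms."""
    total, c = 0.0, 1.0          # c tracks (-t)^n / n!
    for n in range(nmax):
        total += (n + 1) * c
        c *= -t / (n + 1)
    return total

def borel_sum_eta1(upper=30.0, steps=2000):
    """Composite Simpson's rule for integral_0^upper e^{-t} B(t) dt."""
    h = upper / steps            # steps must be even for Simpson's rule
    s = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * math.exp(-t) * borel_transform_eta1(t)
    return s * h / 3

print(borel_sum_eta1())          # close to 1/4
```

The result agrees with E(1) = 1/4 to well within 10^-4, which is all this sketch is meant to show.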

The quote you gave before depends on the details in that book he quotes.
"... In Nesterenko & Pirozhenko (1997) we encounter an attempt to justify the use of the Riemann’s Zeta function. The authors refer to Hardy’s book for the actual method. They use axioms A and B and the zeta function to write equality between ∑ n=1 ∞ n and -1∕12 (see their eq. (2.20)). The conclusion is evident: the method does not comply with Hardy’s axioms. Furthermore, the result is false since to reach their conclusion the authors disregard a divergent contribution. Hence, the equal sign does not relate identical quantities as it should..."
The fact that he says he is dropping an infinity suggests he is using Ramanujan summation, which I will explain later - have to head off to lunch now.

Thanks
Bill
 
  • #34
I have been dealing with this problem for two years and proposed my solution in the article Y.N. Zayko, The Geometric Interpretation of Some Mathematical Expressions Containing the Riemann zeta-Function, Mathematics Letters, 2016; 2 (6): 42-46.
The following are excerpts from this (and other) articles on this topic.

Usually, when we mention the Riemann zeta-function, the famous Riemann hypothesis (RH) comes to mind, which says that the real parts of the nontrivial zeros of the zeta-function are 1/2. Mathematicians have not yet been able to find a proof (or refutation). The result is so important (it is related to the distribution of prime numbers) that the Clay Mathematics Institute has included RH among the most important problems of the millennium.

However, in physical applications the Riemann zeta-function appears much more often without any mention of RH. As an example, perhaps not the best, we mention the problem of regularization of divergent expressions in field theory - the so-called zeta-regularization of S. Hawking. Mathematicians have long been accustomed to the fact that if the final result of a theory is represented as an expression containing the zeta-function, one need not worry that, written in another form, it may contain a divergence, i.e. be meaningless. This is due to one surprising feature of the zeta-function - its ability to "absorb" infinity, i.e. to ascribe finite values to expressions that at first sight are divergent. For example

zeta(0) = 1 + 1 + 1 + 1 + ... = -1/2
zeta(-1) = 1 + 2 + 3 + 4 + 5 + ... = -1/12

However, this fact, which did not surprise mathematicians, did surprise non-specialists [1].
An attempt to make sense of the above results was undertaken in [2]. The idea of that paper is to represent the calculation of the zeta function as the operation of a certain Turing machine (TM), in which the role of the tape is played by the numerical axis and the role of the head by a physical particle moving according to equations of motion determined by the divergent expressions on the right-hand sides of the formulas above. Since the partial sums in the second formula give the path traveled by a particle at constant acceleration, one must introduce into the equations of motion a source of this acceleration - or of gravity, by Einstein's equivalence principle. In other words, the equations of motion of the particle should be chosen consistent with Einstein's general theory of relativity with a suitable source. Solving them defines a metric on the numerical axis in which the motion of the particle is no longer surprising, because the total path the particle covers in infinite time is finite. The final value (-1/12) for the path is obtained by taking into account the curvature of the metric of the numerical axis according to the solution of the Einstein equations. In fairness, it should be noted that the result in that paper differs from the exact one by about 3%, because the nonrelativistic expression for the acceleration of the particle was used instead of the relativistic one (otherwise the equations could not be solved).

From the point of view of the theory of the Turing machine, the result obtained means including infinity among the admissible values of the counting time. Previously, infinite counting time meant non-computability of the problem. That is as it should be, since summation of a divergent series on an ordinary Turing machine is a non-computable problem. The TM described in the paper is therefore a so-called relativistic TM [3].
In development of these ideas, the calculation of the zeta function of a complex argument was performed [4] and the RH was proved [5]. In addition, the idea was expressed that computation, like motion, can change the geometry of the numerical continuum, and moreover that the accepted system of Euclidean postulates should be changed to conform with the above formulas [6].
References
1. D. Berman, M. Freiberger, "Infinity or -1/12?", +plus magazine, Feb. 18, 2014, http://plus.maths.org/content/infinity-or-just-112.
2. Y. N. Zayko, "The Geometric Interpretation of Some Mathematical Expressions Containing the Riemann zeta-Function", Mathematics Letters, 2016; 2(6): 42-46.
3. I. Nemeti, G. David, "Relativistic Computers and the Turing Barrier", Applied Mathematics and Computation, 178, 118-142, 2006.
4. Y. N. Zayko, "Calculation of the Riemann Zeta-function on a Relativistic Computer", Mathematics and Computer Science, 2017; 2(2): 20-26.
5. Y. N. Zayko, "The Proof of the Riemann Hypothesis on a Relativistic Turing Machine", International Journal of Theoretical and Applied Mathematics, 2017; 3(6): 219-224, http://www.sciencepublishinggroup.com/j/ijtam, doi:10.11648/j.ijtam.20170306.17.
6. Y. N. Zayko, "The Second Postulate of Euclid and the Hyperbolic Geometry", International Journal of Scientific and Innovative Mathematical Research (IJSIMR), Volume 6, Issue 4, 2018, pp. 16-20, http://dx.doi.org/10.20431/2347-3142.0604003; arXiv:1706.08378 [math.GM].
 
  • #35
The theorems are about general summation methods not specific examples - note again - it is not a general summation method but how to sum a class of sums eg those of the Zeta function. That's why the theorems fail. Rigorously they fail because ∑ (-1)^(n+1) * n^k is stable. So to discuss stability you need to discuss the stability of ∑ (-1)^(n+1) * n^k.

Reference https://www.physicsforums.com/threa...e-and-1-2-3-4-1-12.959557/page-2#post-6087836

What does it mean that a sequence is stable? What I've read is that a method can be stable or not, but not a sequence. I guess what you are saying is that a particular sequence-method pair is stable, but I'm not sure.

I'm having trouble understanding you. Do you have a reference, so I can look at it directly from the source and not bother you anymore (at least until I read it)?

Thanks again anyway
 
  • #36
Yuriy Zayko said:
I have been dealing with this problem for two years and proposed my solution in the article Y.N. Zayko, The Geometric Interpretation of Some Mathematical Expressions Containing the Riemann zeta-Function, Mathematics Letters, 2016; 2 (6): 42-46.

Very, very interesting stuff. It emboldened me to explain, as promised, Ramanujan summation - which I do below. The conclusion I have reached is that it simply rewrites the series, in a reasonable way, so it can be summed.

It's based on the Euler-Maclaurin formula; I will now give a short, though not what I would call rigorous, derivation. Define the linear operators Ef(x) = f(x+1) and Df(x) = f'(x). The Taylor expansion f(x+1) = f(x) + f'(x) + f''(x)/2! + f'''(x)/3! + ... gives Ef(x) = f(x) + Df(x) + D^2f(x)/2! + D^3f(x)/3! + ... = e^D f(x). We also need the Bernoulli numbers, defined by the expansion x/(e^x - 1) = ∑ B(k)*x^k/k! - you can look up the values with a simple internet search.

f(0) + f(1) + f(2) + ... + f(n-1) = (1 + E + E^2 + ... + E^(n-1)) f(0) = ((E^n - 1)/(E - 1)) f(0) = ((e^(nD) - 1)/(e^D - 1)) f(0) = D^(-1) * (D/(e^D - 1)) f(x) evaluated from 0 to n. Expanding D/(e^D - 1) with the Bernoulli numbers, this is D^(-1)f(x)|0 to n + ∑ B(k)*D^(k-1)f(x)/k! |0 to n = ∫(0 to n) f(x) dx + ∑ B(k)*D^(k-1)f(n)/k! - ∑ B(k)*D^(k-1)f(0)/k!, where the k-sums run from k = 1. Now let n → ∞. Most of the time ∑ B(k)*D^(k-1)f(n)/k! → 0, so we will assume that - it is certainly the case for convergent series, since their terms → 0. You end up with f(0) + f(1) + f(2) + f(3) + ... = ∫(0 to ∞) f(x) dx - ∑ B(k)*D^(k-1)f(0)/k!. Notice that -∑ B(k)*D^(k-1)f(0)/k! does not depend on n; it is called the Ramanujan sum. We would like Ramanujan summation of a convergent series to agree with its ordinary sum. This is done by defining the value of the series as ∫f(x) dx + C, where C is the Ramanujan sum, when the integral is finite; when the integral is infinite, the value is defined as C alone.
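To make the last step concrete, here is a short exact computation (an editorial sketch; the function names are made up) of the Ramanujan sum C = -∑ B(k)*D^(k-1)f(0)/k! for polynomial f, with the Bernoulli numbers taken from x/(e^x - 1) as above. For f(x) = ∑ c(j)*x^j, the term B(k)*f^(k-1)(0)/k! reduces to B(k)*c(k-1)/k:

```python
# Ramanujan sum C = -sum_{k>=1} B_k f^(k-1)(0)/k! for a polynomial f,
# computed exactly in rationals. Bernoulli numbers are taken from
# x/(e^x - 1), i.e. B_1 = -1/2.
from fractions import Fraction
from math import comb

def bernoulli(m):
    """B_0..B_m via the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1)
    return B

def ramanujan_sum(coeffs):
    """C for f(x) = sum_j coeffs[j] x^j; B_k f^(k-1)(0)/k! = B_k c_{k-1}/k."""
    B = bernoulli(len(coeffs))
    return -sum(B[k] * Fraction(coeffs[k - 1], k)
                for k in range(1, len(coeffs) + 1))

print(ramanujan_sum([0, 1]))        # f(x) = x   -> -1/12  (zeta(-1))
print(ramanujan_sum([0, 0, 1]))     # f(x) = x^2 -> 0      (zeta(-2))
print(ramanujan_sum([0, 0, 0, 1]))  # f(x) = x^3 -> 1/120  (zeta(-3))
```

For f(x) = x the integral ∫ x dx is infinite, so the value assigned to 1 + 2 + 3 + ... is C alone, and the code reproduces the good old -1/12; x^2 and x^3 match zeta(-2) = 0 and zeta(-3) = 1/120.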

I won't go any further into it - you can look it up and see what various series give in various cases - it gives the good old -1/12 for 1+2+3+4... but with an interesting twist.

Thanks
Bill
 
  • #37
the_pulp said:
What does it mean that a sequence is Stable?

Take S = 1+1+1+1... . It is not stable: stability would require S = 1 + S, which is inconsistent. That is inherent in the series. The interesting thing is that by transforming it to the Eta function you get S = (1 - 1 + 1 - 1 ...)/(-1), and in that form it is stable: G = 1 - 1 + 1 - 1 ... (Grandi's series) is stable. One way to show a series is stable is to sum it by a method known to work only on stable series - one such is Borel exponential summation. Ordinary Borel summation (which can sum some series that are not stable, so it is not itself a stable method) is usually easier to use in practice, but to help here there is a theorem that Borel summation gives the same answer as Borel exponential summation (hence the series must be stable) if lim t → ∞ of e^(-t) * ∑ a_n*t^n/n! is zero. Applying that to Grandi's series you see the condition holds, so if Borel summation gives an answer the series is stable: ∫ e^(-t) * ∑ (-t)^n/n! dt = ∫ e^(-2t) dt = 1/2. Also, for stable series one can apply generic summation, which uses stability to work out the sum; for Grandi's series this gives G = 1 - G, so G = 1/2, and hence S = G/(-1) = -1/2. So 1+1+1+1... = -1/2 despite that series not being stable. How this happens is that you make reasonable assumptions in converting it to Grandi's series. If instead you applied stability to S directly, you could equally show 1 + S = 1 - 1/2 = 1/2 ≠ S - that's why it's not stable; it's inconsistent. To be clear: if a series is not stable, you can't use stability when summing it, i.e. you can't use S = a0 + S'. So you don't - you transform it into a stable series like Grandi's first; unless you do something like that you can't sum it.
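Here is the Grandi computation done numerically (an editorial sketch, with made-up function names): build the Borel transform B(t) = ∑ (-t)^n/n! term by term (analytically it is e^(-t)) and integrate e^(-t)*B(t) with Simpson's rule:

```python
# Borel sum of Grandi's series 1 - 1 + 1 - 1 + ...  (a_n = (-1)^n).
# Borel transform B(t) = sum_n (-t)^n / n!  (analytically e^{-t});
# Borel sum = integral_0^inf e^{-t} B(t) dt = integral e^{-2t} dt = 1/2.
import math

def grandi_transform(t, nmax=200):
    """B(t) = sum_{n>=0} (-t)^n / n!, truncated at nmax terms."""
    total, term = 0.0, 1.0       # term tracks (-t)^n / n!
    for n in range(nmax):
        total += term
        term *= -t / (n + 1)
    return total

def borel_sum_grandi(upper=30.0, steps=2000):
    """Composite Simpson's rule for integral_0^upper e^{-t} B(t) dt."""
    h = upper / steps            # steps must be even for Simpson's rule
    s = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * math.exp(-t) * grandi_transform(t)
    return s * h / 3

print(borel_sum_grandi())        # close to 1/2
```

The numerical value agrees with 1/2 to well within 10^-4, matching the generic-summation result G = 1/2.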

One other thing I want to mention: where does the infinity in 1+1+1+1... etc. go in the summation? Well, I showed C(k)*(1 - 2*2^k) = E(k), where E(k) is stable and summable to a finite number. It's that (1 - 2*2^k) factor - the infinities in C(k) are canceled by the infinities in 2*2^k*C(k) to give finite answers. I have done, in a smart way of course, that dreaded shuffling of infinities found in non-Wilsonian renormalization that I so detest - and so did Wilson. He vowed and declared to sort it out - and did, for which he got a Nobel. The real answer in physics, of course, is that you must have a cutoff so there is nothing infinite to begin with - that's the basis of Wilsonian renormalization: every theory must come with a natural cutoff to avoid infinities if they can occur:
http://scipp.ucsc.edu/~dine/ph295/wilsonian_renormalization.pdf

See equation 1.

The thing is this. A renormalization technique called zeta function renormalization exists - it's a bit of a generalization of dimensional renormalization and has some nice properties, such as needing no counter-terms. I at first thought this might be a clue that Wilson was not right - on investigation, bummer, he was. Well, he was a Putnam Fellow twice at about 19/20 (he entered Harvard at about 16) - I can't compete with that.

The reason analytic continuation works is now simple from converting to the Eta function - that form is always finitely summable by, say, Borel summation. If C(k) agrees with that form for k < 0, it must be the same on the entire plane - except k = -1 of course, which is a non-removable singularity. This is more easily seen using Ramanujan summation, which I will post a bit later - have to go to lunch now (I eat a lot of lunches, don't I).

Thanks
Bill
 
