# Is String Theory Built On Funny Math?

Mentor
In asking a question about analysis textbooks there was a bit of a chat about things like S = 1 - 1 + 1 - 1 ... = 1 - (1 - 1 + 1 - 1 ...) = 1 - S, so 2S = 1, i.e. S = 1/2. I will say straight away what's going on: infinite sums are simply definitions - and believe it or not there are a number of them, but students usually only learn one. However, they all share some general properties, and those properties often allow us to calculate the sum without even knowing which definition we are using.

Well, we got onto the good old Ramanujan sum 1 + 2 + 3 + 4 ... = -1/12. Believe it or not, as the YouTube videos linked below show, it's used in string theory and elsewhere in physics. I will give my proof, which is a little different from the usual ones.

Define C(s) = 1^s + 2^s + 3^s + ... We get the sum we want when s = 1. It turns out, using standard methods of complex analysis (analytic continuation and whatnot), that a finite value can be assigned to it even at s = 1. So how to calculate it:

2*2^s*C(s) = 2*2^s + 2*4^s + 2*6^s + ...

Subtracting this from C(s) gives:

(1 - 2*2^s)*C(s) = 1^s - 2^s + 3^s - 4^s + ...

Let s = 1, so you have -3*C(1) = 1 - 2 + 3 - 4 + ...

Now 1 - 2 + 3 - 4 + ... = 1 - (1 - 1 + 1 - 1 + ...) - (1 - 2 + 3 - 4 + ...)

So -3*C(1) = 1 - 1/2 + 3*C(1), i.e. -6*C(1) = 1/2, giving C(1) = 1 + 2 + 3 + 4 + ... = -1/12

Wow - it should be infinite - or should it? What's going on? Answer: it depends on your definition of an infinite sum.
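Here's a quick numerical sketch of one way to see where the -1/12 comes from, along the lines of the smoothed-sums viewpoint in Terry Tao's blog post linked below (this is my own illustrative Python code, not from any of the sources): replace 1 + 2 + 3 + ... by the exponentially cut-off sum ∑ n e^(-n/N). One can show exactly that this equals N^2 - 1/12 + O(1/N^2), so the divergence is isolated in the N^2 term and the finite part left over is -1/12.

```python
import math

# Smoothed version of 1 + 2 + 3 + ...: each term n is damped by exp(-n/N).
# Exactly, sum_{n>=1} n*exp(-n/N) = 1/(2*sinh(1/(2N)))^2 = N^2 - 1/12 + O(1/N^2),
# so subtracting the divergent N^2 piece leaves approximately -1/12.
def smoothed_sum(N, cutoff_mult=60):
    total = 0.0
    for n in range(1, cutoff_mult * N):  # terms beyond ~60N are negligible
        total += n * math.exp(-n / N)
    return total

for N in (10, 100, 1000):
    print(N, smoothed_sum(N) - N * N)  # approaches -1/12 = -0.0833...
```

Different smooth cutoffs change the divergent piece but (for nice cutoff functions) not the -1/12, which is roughly why the value is canonical - see Tao's post for the precise statement.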

Now have a look at the following videos:

And if you want more heavy math see the blog by Terry Tao the guy that started me thinking about this:
https://terrytao.wordpress.com/2010...tion-and-real-variable-analytic-continuation/

Now the 64 million dollar question is this: look in video 1 - he opens a string theory text and, lo and behold, the -1/12 result is used. But this is not the only area - see:
https://en.wikipedia.org/wiki/Zeta_function_regularization

What's going on here? Sure, in physics you want finite answers - but why this funny definition of summation?

Thanks
Bill


stevendaryl
Staff Emeritus
What's going on here? Sure, in physics you want finite answers - but why this funny definition of summation?

I don't consider it a "definition" of summation. Summation already has a definition for convergent series. The way I think of zeta-function regularization (and also of a less powerful technique that I sort of thought of myself, but which turns out to be known as Abel summation) is that it's just a way to map formal series to reals that happens to agree with ordinary summation for convergent series.

A question I have is whether there are multiple summation techniques that lead to the same answer for convergent series but different answers for divergent series. Do Abel summation and Riemann summation both give the same answer (when they both give an answer)?

The idea behind Abel summation (I think I'm giving the right definition) is this: you want to compute ##\sum_i a_i##. You make it into a function of ##z## by defining ##f(z) = \sum_i a_i z^i##. If ##f(z)## is an analytic function, then you take the limit as ##z \rightarrow 1## from below.

For example, ##1-1+1-1...##. Make it into a function: ##f(z) = 1 - z + z^2 - z^3 ... = \frac{1}{1+z}##. Then you take the limit as ##z \rightarrow 1##: ##f(1) = 1/2##. Zeta regularization is more powerful, but I'm not sure that it always gives the same answer as Abel summation.
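That limit is easy to watch numerically. A small Python sketch (my own, just to illustrate the definition above): compute the partial sums of ##\sum_i (-1)^i z^i## for ##z## just below 1 and compare with the closed form ##1/(1+z)##.

```python
# Abel summation of the Grandi series 1 - 1 + 1 - 1 + ...:
# evaluate f(z) = sum_i (-1)^i z^i for z slightly below 1.
# The closed form is f(z) = 1/(1+z), so the Abel value is f(1) = 1/2.
def abel_partial(z, n_terms=100000):
    total, power = 0.0, 1.0
    for i in range(n_terms):
        total += ((-1) ** i) * power
        power *= z
    return total

for z in (0.9, 0.99, 0.999):
    print(z, abel_partial(z), 1.0 / (1.0 + z))
```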

stevendaryl
Staff Emeritus
Abel summation is not powerful enough to compute ##1 + 2 + 3 + ...##. But it can compute the related summation ##1 - 2 + 3 - 4 ...##:

##f(x) = 1 - 2x + 3x^2 - 4x^3 + ...##

You can see that ##f(x) = - \frac{d}{dx} (1 - x + x^2 - x^3 ...) = - \frac{d}{dx} \frac{1}{1+x} = \frac{1}{(1+x)^2}##. So ##f(1) = \frac{1}{4}##.
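The same kind of numerical sketch works here (again my own illustrative code, not from the thread): the coefficients are now ##(-1)^n (n+1)##, and the partial sums at ##x## just below 1 track ##1/(1+x)^2##, heading toward 1/4.

```python
# Abel summation of 1 - 2 + 3 - 4 + ...:
# f(x) = sum_n (-1)^n (n+1) x^n = 1/(1+x)^2, so the Abel value is f(1) = 1/4.
def abel_alt_integers(x, n_terms=200000):
    total, power = 0.0, 1.0
    for n in range(n_terms):
        total += ((-1) ** n) * (n + 1) * power
        power *= x
    return total

for x in (0.9, 0.99, 0.999):
    print(x, abel_alt_integers(x), 1.0 / (1.0 + x) ** 2)
```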

Mentor
A question I have is whether there are multiple summation techniques that lead to the same answer for convergent series but different answers for divergent series. Do Abel summation and Riemann summation both give the same answer (when they both give an answer)?

They lead to the same answer when they both work - but some work for more series than others; as far as I know Ramanujan summation is the most powerful. There is also an issue of what's called stability. Normally infinite sums have properties like ∑(ai + bi) = ∑ai + ∑bi and c*∑ai = ∑c*ai. But for some methods the property ∑ai = a0 + ∑ai fails, where the first sum runs from 0 and the second from 1. Such methods are called unstable. Only some methods, such as analytic continuation of a Dirichlet series or Ramanujan summation, can handle that. 1 + 2 + 3 + ... is unstable - so only the more powerful methods work.
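A quick numerical illustration of both points (my own Python sketch, not from the thread): Abel summation is stable - prepending a zero to the Grandi series leaves its Abel value at 1/2 - but it cannot handle 1 + 2 + 3 + ... at all, since the generating function blows up as z → 1.

```python
# Abel-evaluate sum_i term(i) * z^i by brute-force partial sums.
def abel(term, z, n_terms=100000):
    total, power = 0.0, 1.0
    for i in range(n_terms):
        total += term(i) * power
        power *= z
    return total

z = 0.999
grandi = abel(lambda i: (-1) ** i, z)                          # 1 - 1 + 1 - ...
shifted = abel(lambda i: 0 if i == 0 else (-1) ** (i - 1), z)  # 0 + 1 - 1 + ...
print(grandi, shifted)  # both near 1/2: prepending a zero changes nothing

# For 1 + 2 + 3 + ... the Abel sums are z/(1-z)^2, which diverges as z -> 1:
for zz in (0.9, 0.99, 0.999):
    print(zz, abel(lambda i: i, zz))
```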

To see why, suppose we could shift stably:

1 + 2 + 3 + ... = -1/12
0 + 1 + 2 + 3 + ... = -1/12

Subtract them term by term and you get 1 + 1 + 1 + 1 + ... = 0

Now (1 + 1 + 1 + 1 + ...) - (1 - 1 + 1 - 1 + ...) = 0 + 2 + 0 + 2 + ... = 2 + 2 + 2 + ... (dropping the zeros) = 2*(1 + 1 + 1 + ...)

So, writing S = 1 + 1 + 1 + ..., we get S - 1/2 = 2*S, or S = -1/2.

Danger Will Robinson - Danger.

Care required - great care.
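For comparison (my own sketch again, using the same exponential cutoff idea as for 1 + 2 + 3 + ...): a careful regularization of 1 + 1 + 1 + ... picks out -1/2, which is ζ(0) and matches the second of the contradictory answers above - but it is the regularization, not naive term-shuffling, that tells you that.

```python
import math

# Exponential cutoff for 1 + 1 + 1 + ...:
# sum_{n>=1} exp(-n/N) = 1/(exp(1/N) - 1) = N - 1/2 + O(1/N),
# so the divergence sits in the N term and the finite part is -1/2 (= zeta(0)).
def cutoff_ones(N, cutoff_mult=60):
    return sum(math.exp(-n / N) for n in range(1, cutoff_mult * N))

for N in (10, 100, 1000):
    print(N, cutoff_ones(N) - N)  # approaches -1/2
```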

Maybe I should get Hardy's Book recommended by Demystifier in the analysis thread:
https://www.amazon.com/dp/0821826492/?tag=pfamazon01-20

Not what I usually read these days, but like Rigged Hilbert Spaces, sometimes a man's gotta do what a man's gotta do - said in my best John Wayne voice. To make matters worse, you can add (1 - 1 + 1 - 1 ...) instead of subtracting it and get S = 1/2 rather than -1/2. This business of inserting zeros into infinite sums seems a real issue.

Thanks
Bill

Carl Bender gives an excellent lecture series about summing divergent series and the conditions the different methods require. It is a long series of lectures, but I found it very useful.

Cheers

• bhobba
haushofer
I'm not sure why you specifically pick out string theory. Zeta-function regularization is also used in other quantum field theories, e.g. in the phenomenon of the Casimir effect. It basically uses the philosophy that, since the analytic continuation of a function is unique (here: the zeta function), one can try to analytically continue divergent expressions and hope for the best. Of course, this should be compared with other ways of regularization to check that one obtains the same answers. For string theory it does (as e.g. the notes of Tong show).

But for "ordinary quantum field theories" a similar thing happens with Wick-rotations. Osterwalder and Schrader guarantee (under certain assumptions) that Schwinger functions can be analytically continued to imaginary time such that one can regulate. This is just as "funny" as zeta-function regularization (I don't see a difference, but maybe I'm missing something).

• bhobba and Demystifier
Mentor
I'm not sure why you specifically pick out string theory.

In the title it's simply related to the first video, where they opened a string theory textbook and showed it used there. It's used in a lot of other areas of course, e.g. the Casimir calculation posted by atyy. The question still remains, however - beyond the obvious fact that you want finite answers - why it works. I am starting to form the view that this has some relation to renormalization group theory, which I want to investigate further. At this stage I don't see the difference from normal regularization either - it's the same issue with the same solution - but I want to investigate and think about it a bit further.

Thanks
Bill

haushofer
Ah, ok, I'll check the video then. Yes, the "why" question is not easily answered. I think a lot of textbooks do a poor job of explaining exactly why these kinds of regularisations work, why we're allowed to use them, and what the philosophy behind them is. It has puzzled me anyway. It fits well with a "shut up and calculate" mentality.

Mentor
Ah, ok, I'll check the video then. Yes, the "why" question is not easily answered. I think a lot of textbooks do a poor job of explaining exactly why these kinds of regularisations work, why we're allowed to use them, and what the philosophy behind them is. It has puzzled me anyway. It fits well with a "shut up and calculate" mentality.

As I said, my suspicion is that it's related to renormalization group theory and an actual physically relevant cutoff - but I want to think about it more. Terry Tao tends to look at it in cutoff-function terms as well.

Thanks
Bill

haushofer
By the way, I find that numberphile video confusing. These manipulations with infinite sums are really hocus-pocus. What would truly be interesting is to discuss why Euler's hocus pocus coincides with the full machinery of modern analytic continuation. I don't have the answer to that one.

Mentor
By the way, I find that numberphile video confusing. These manipulations with infinite sums are really hocus-pocus. What would truly be interesting is to discuss why Euler's hocus pocus coincides with the full machinery of modern analytic continuation. I don't have the answer to that one.

It is hocus-pocus - the second video is better, but it approaches things the wrong way IMHO. It says 1 + 2 + 3 + 4 + ... = ∞. That's correct under the usual way we think of limits - but what a limit is, is just a definition; you can define it in any reasonable way you like, with the choice usually dictated by the application.

Terry's link of course is the best of all - but a bit of a slog.

Anyway about to watch a bit of Wimbledon and return when what I am interested in is over or waiting for the match to start.

Thanks
Bill

stevendaryl
Staff Emeritus
In the title it's simply related to the first video, where they opened a string theory textbook and showed it used there. It's used in a lot of other areas of course, e.g. the Casimir calculation posted by atyy. The question still remains, however - beyond the obvious fact that you want finite answers - why it works. I am starting to form the view that this has some relation to renormalization group theory, which I want to investigate further.

In my opinion, it's incorrect to think of a divergent series as having a value, even if you know how to calculate that value. What I think is going on instead is something like this:
1. You have a quantity ##A## that is implicitly defined. Maybe it's defined as ##f(0)## where ##f## is defined as the solution to some differential equation. Or maybe it's defined as the value of some integral. But in any case, you have a definition or axioms describing ##A##.
2. You have a naive approach to computing ##A## via an infinite series. That is you come up with a formal power series ##\sum_j a_j## such that if the series converges, it should converge to ##A##.
3. Unfortunately, the series doesn't converge.
4. However, there is a related series ##\sum_j a_j(\lambda)## that is convergent, at least for some range of values of ##\lambda##. Term-by-term, ##a_j(\lambda) \rightarrow a_j## as ##\lambda \rightarrow \lambda_0##.
5. The series ##\sum_j a_j(\lambda)## converges for values of ##\lambda## within some range to a sensible (analytic?) function ##B(\lambda)##.
6. Even though the series doesn't converge when ##\lambda = \lambda_0##, we assume that ##A = B(\lambda_0)##, if ##B(\lambda)## has an analytic continuation to ##\lambda = \lambda_0##.
I don't think it makes sense in these circumstances to say that the divergent series ##\sum_j a_j## actually converges to ##B(\lambda_0)##. Rather, it's a heuristic or a guess: Whatever reason you had for believing that ##\sum_j a_j## was a way to calculate ##A##, those reasons are consistent with the hypothesis that ##A = B(\lambda_0)##.

A trivial example might be as follows: You have some physically meaningful quantity ##A## that is associated with a parameter ##\lambda## (perhaps a coupling constant of some sort). You have physical reasons for believing that the value of ##A## satisfies the self-referential implicit definition:

##A = 1 - \lambda A##

You might try to solve this with a power series in ##\lambda##, and you will find:

##A = 1 - \lambda+ \lambda^2 - \lambda^3 ...##

This series converges if ##-1< \lambda < +1##. Unfortunately, the measured value of ##\lambda## is 2, far away from the region of convergence.

You proceed as follows: if the series ##1 -\lambda + \lambda^2 - ...## converges, then it converges to the function ##B(\lambda) = \frac{1}{1+\lambda}##. But the function ##B(\lambda)## is well-defined even when ##\lambda = 2##. So you assume that ##A = B(2) = \frac{1}{3}##.
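This toy model is easy to put into code (a hedged sketch of the reasoning above, nothing more): the implicit equation pins down ##A## exactly, the naive series diverges at ##\lambda = 2##, yet the resummed value ##1/3## really does satisfy the defining equation.

```python
# Toy model: A = 1 - lam*A, with exact solution A = 1/(1 + lam).
lam = 2.0
A = 1.0 / (1.0 + lam)  # B(2) = 1/3
assert abs(A - (1.0 - lam * A)) < 1e-12  # A = 1/3 really solves A = 1 - lam*A

# The naive series solution 1 - lam + lam^2 - ... just oscillates with
# growing amplitude at lam = 2:
partials, s, term = [], 0.0, 1.0
for _ in range(8):
    s += term
    partials.append(s)
    term *= -lam
print(partials)  # [1.0, -1.0, 3.0, -5.0, 11.0, -21.0, 43.0, -85.0]
print(A)
```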

I would not say that somehow ##1 - 2 + 4 - 8 ...## converges to 1/3. Rather, I would say that the reason for thinking ##A = 1 - 2 + 4 - 8 ...## is because that was a way to solve the equation ##A = 1 - \lambda A## when ##\lambda = 2##. But the choice ##A = 1/3## solves that equation, as well.

So it's more that the divergent series is hinting about what the value of ##A## is, not that the series converges to that value.

fresh_42
Mentor
2021 Award
Rather, it's a heuristic or a guess: Whatever reason you had for believing that ##\sum_j a_j## was a way to calculate ##A##, those reasons are consistent with the hypothesis that ##A = B(\lambda_0)##.
Shouldn't it be "those reasons are inconsistent", since in the case where both are equal we don't have an issue?

So the basic question is: what does an analytic continuation tell us about an otherwise undefined value? And is this matter related to the special behavior of complex numbers? In short: is it a mathematical subject?

stevendaryl
Staff Emeritus

Shouldn't it be "those reasons are inconsistent", since in the case where both are equal we don't have an issue?

No, I meant consistent. Look at the toy example again:

1. For whatever reason, I think that the quantity ##A## satisfies ##A = 1 - \lambda A##.
2. Because of this, I'm led to the naive calculation method: ##A = 1 - \lambda + \lambda^2 ...##
3. I recognize that that's the expansion for ##B(\lambda) = 1/(1+\lambda)##.
4. So I guess that ##A = 1/(1+\lambda)##.
5. In this case, I can actually go back to step 1 and verify that this does indeed solve: ##A = 1 - \lambda A##.
In the case I'm interested in, ##\lambda = 2##, so the series in step #2 doesn't actually converge. But that doesn't matter, because the actual definition of ##A## is not the series, it's the equation in #1. And the solution ##A = 1/(1+\lambda)## satisfies equation 1, and is perfectly well-defined when ##\lambda = 2##.

So the two things that I'm saying are consistent are: (1) The equation #1, which is the starting point from which I derived the series solution. (2) The analytic solution ##B(\lambda) = 1/(1+\lambda)##. The series only served as a way to get to the analytic solution. I wouldn't say that the series is the solution, and that ##B(\lambda)## is some creative way to sum the series. I would say that ##B(\lambda)## is the solution and the series is an approach to finding ##B(\lambda)##.

So the basic question is: What does an analytic continuation tell us about an otherwise undefined value? And is this matter related to the special behavior of complex numbers? For short: Is is a mathematical subject?

I think it's a puzzle as to why physics quantities should be analytic. It's certainly nice when they are, but I don't know why we should expect them to be.

I've seen an argument (Dyson's, I believe) that the power series that occur in QED can't possibly sum to an analytic function, because the behavior for a small negative fine-structure constant cannot be smoothly related to the behavior for a small positive fine-structure constant. I don't remember exactly how that goes.

--
Daryl

Divergent series such as these, which magically produce finite results, are treated as 'asymptotic expansions' of functions. Basically this means evaluating the series expansion of a function outside its domain of convergence and still recovering the value of the function the series came from, despite the series diverging at that point. This is discussed in the Hardy book referenced above. A very nice discussion of the history, and of how an interpretation came to be attached to such expressions, is given in the short Divergent Series chapter of Kline's history of mathematics book - a really good read. Another source is the short chapter in W&W (Whittaker & Watson), with a motivating example at the beginning:

https://archive.org/stream/courseofmodernan00whit#page/150/mode/2up
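The classic example of this phenomenon - essentially Euler's - is the series ##\sum_n (-1)^n n!\, x^n##, which is the asymptotic expansion of ##F(x) = \int_0^\infty e^{-t}/(1+xt)\,dt##. A small Python sketch (my own, just to illustrate the idea) shows that truncating the divergent series near its smallest term reproduces the integral to good accuracy:

```python
import math

# Euler's example: sum_n (-1)^n n! x^n is the asymptotic expansion of
# F(x) = integral_0^inf exp(-t) / (1 + x*t) dt  (expand 1/(1+x*t) and
# integrate term by term; the resulting series has zero radius of convergence).
def F(x, t_max=60.0, steps=60000):
    # Plain trapezoidal rule; the integrand decays like exp(-t).
    h = t_max / steps
    total = 0.5 * (1.0 + math.exp(-t_max) / (1.0 + x * t_max))
    for k in range(1, steps):
        t = k * h
        total += math.exp(-t) / (1.0 + x * t)
    return total * h

def partial_sum(x, n_max):
    # Partial sums of the divergent series, truncated at n = n_max.
    s, term = 0.0, 1.0  # term = (-1)^n n! x^n, starting at n = 0
    for n in range(n_max + 1):
        s += term
        term *= -(n + 1) * x
    return s

x = 0.1
print(F(x))               # ~0.9156
print(partial_sum(x, 9))  # truncating near the smallest term (n ~ 1/x)
```

The error at the optimal truncation point is roughly of order ##e^{-1/x}##, which is why the divergent expansion is numerically useful for small ##x##.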

king vitamin