# 1-2+3-4 divergent series confusion

1. Jul 16, 2009

### icosane

How did Euler come up with a value of 1/4 for the series 1-2+3-4...? There is a wikipedia page explaining it,
http://en.wikipedia.org/wiki/1_−_2_+_3_−_4_+_·_·_·
but I'm not quite following the manipulation of the numbers. How is a 1 pulled out while everything else goes to 0? It just doesn't make any sense to me.

Also, how can a divergent series equal a single number? Isn't that the definition of convergence, to approach a single number? What is up with this? Thanks a lot!

2. Jul 16, 2009

### ice109

It converges in a different sense: it converges in the sense of summability. The article mentions it: Abel summation.

Abel summation gives you things like 1 - 1 + 1 - 1 + ... = 1/2
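Abel summation can be sketched numerically (a minimal illustration, not a rigorous definition): evaluate the power series with coefficients a_n inside its radius of convergence and let x approach 1 from below.

```python
# Abel summation sketch: evaluate sum a_n * x^n for |x| < 1,
# then let x approach 1 from below.
def abel_sum(coeffs_fn, x, n_terms):
    """Partial sum of sum_{n>=0} a_n * x**n with a_n = coeffs_fn(n)."""
    return sum(coeffs_fn(n) * x**n for n in range(n_terms))

# Grandi's series 1 - 1 + 1 - 1 + ... has a_n = (-1)^n.
for x in (0.9, 0.99, 0.999):
    print(x, abel_sum(lambda n: (-1)**n, x, 100_000))
# The values approach 1/2 as x -> 1-.
```

Inside the radius of convergence the sum equals 1/(1+x), which tends to 1/2 as x goes to 1; the Abel sum is that limit.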

3. Jul 16, 2009

### Count Iblis

Borel resummation also gives 1/4 for the series:

Write the nth term as (a_n/n!)·n!, then replace that last n! by the integral from x = 0 to infinity of x^n exp(-x) dx, interchange integration and summation, sum the series inside the integral, and then compute the integral.
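This recipe can be checked numerically. For 1 - 2 + 3 - 4 + ... the terms are a_n = (-1)^(n-1) n, the Borel transform works out to B(t) = t exp(-t), and the Borel sum is the integral of exp(-t) B(t); a sketch:

```python
# Borel resummation sketch for 1 - 2 + 3 - 4 + ...
# Terms: a_n = (-1)^(n-1) * n for n >= 1.  The Borel transform
#   B(t) = sum a_n t^n / n! = sum (-1)^(n-1) t^n / (n-1)! = t * exp(-t)
# converges everywhere, and the Borel sum is integral_0^inf exp(-t) B(t) dt.
import math

def borel_sum(n_points=400_000, t_max=40.0):
    # trapezoidal rule for integral_0^inf t * exp(-2t) dt
    h = t_max / n_points
    total = 0.0
    for i in range(n_points + 1):
        t = i * h
        f = t * math.exp(-2.0 * t)
        total += f if 0 < i < n_points else f / 2
    return total * h

print(borel_sum())  # close to 0.25
```

The integral of t·exp(-2t) from 0 to infinity is exactly 1/4, matching Euler's value.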

4. Jul 16, 2009

### arildno

It is important to realize that these other types of summation represent OTHER operations on a set of infinitely many elements than what we call "ordinary" summation.

Although all of these summation principles yield the same predictions/answers on finite sets, they may diverge from one another when applied to infinite sets.

5. Jul 16, 2009

### Count Iblis

6. Jul 16, 2009

### ice109

7. Jul 16, 2009

One method that used to be employed was to identify the individual terms of an infinite series of numbers with the coefficients of a carefully selected power series, and then, by assigning a specific value to $$x$$, on the one hand obtain the original series and on the other find a "sum" by evaluating the function represented. To the original poster: the value 1/4 was found something like this:

$$\frac 1 {1-x} = 1 + x + x^2 + x^3 + x^4 + \cdots$$

Differentiate both sides.

$$\frac 1 {(1-x)^2} = 1 + 2x + 3x^2 + 4x^3 + \cdots$$

Now set $$x = -1$$

$$\frac 1 4 = 1 -2 + 3 - 4 + \cdots$$

One major drawback of this procedure was that, by careful selection of the initial function, a single series of numbers could be made to 'correspond' to different algebraic expressions, and so competing calculations could show different sums for a single series: this clearly was not good.
Of course, all of this was done before the intricacies of infinite series, limit processes, and functions were completely worked out. However, the ideas relating to summability were, and are, quite useful.
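The derivation above can be seen numerically: just inside the radius of convergence, near x = -1, the partial sums of 1 + 2x + 3x^2 + ... and the closed form 1/(1-x)^2 agree, and both sit near 1/4. A quick sketch:

```python
# The series 1 + 2x + 3x^2 + ... = sum (n+1) x^n equals 1/(1-x)^2 for |x| < 1.
# Evaluating both sides just inside the radius of convergence, near x = -1,
# shows where the "sum" 1/4 comes from.
def series(x, n_terms=100_000):
    return sum((n + 1) * x**n for n in range(n_terms))

for x in (-0.9, -0.99, -0.999):
    print(x, series(x), 1 / (1 - x)**2)
# Both columns approach 1/4 as x -> -1.
```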

8. Jul 16, 2009

### Bob_for_short

In physics (and in mathematics too), series like 1-2+3-4... arise from the expansion of some function, so after a careful summation they represent that function. This is especially easy to understand if we apply a non-linear summation (the function being sought is not a polynomial, after all).
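As one concrete example of a non-linear summation (my choice of illustration, not necessarily what the poster had in mind), a Padé approximant rebuilds a rational function from the first few series coefficients:

```python
# Padé sketch: from just the first three coefficients of 1 + 2x + 3x^2 + ...
# fit p(x) = 1/(1 + q1*x + q2*x^2) by matching coefficients:
#   x^1: q1 + 2 = 0        -> q1 = -2
#   x^2: q2 + 2*q1 + 3 = 0 -> q2 = 1
# so p(x) = 1/(1 - 2x + x^2) = 1/(1-x)^2, which happens to be exact here.
def pade_02(x):
    q1, q2 = -2.0, 1.0
    return 1.0 / (1.0 + q1 * x + q2 * x * x)

print(pade_02(-1.0))  # 0.25, the value assigned to 1 - 2 + 3 - 4 + ...
```

Unlike a truncated polynomial, the rational form remains finite at x = -1, which is the sense in which the summation is non-linear.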

9. Jul 17, 2009

### icosane

Thanks for the replies, guys. Just one question: how is it that we can set x = -1 if the expansion above only applies when the absolute value of x is less than 1?

10. Jul 17, 2009

### zetafunction

the idea, icosane, is to insert $$x=-1+ \epsilon$$, with epsilon any small positive quantity, so that x lies within the radius of convergence of the series; after performing the calculations we can let epsilon go to zero and ignore the epsilon terms at the end.
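A quick numerical sketch of why the epsilon is needed: at x = -1 exactly the partial sums oscillate forever, while any epsilon > 0 puts x inside the radius of convergence and the sums settle near 1/4.

```python
# At x = -1 exactly, the partial sums of 1 + 2x + 3x^2 + ... oscillate:
# 1.0, -1.0, 2.0, -2.0, 3.0, -3.0, ...  Any epsilon > 0 moves x = -1 + epsilon
# inside the radius of convergence, and the sum settles near 1/4.
def partial_sums(x, n_terms):
    s, out = 0.0, []
    for n in range(n_terms):
        s += (n + 1) * x**n
        out.append(s)
    return out

print(partial_sums(-1.0, 8))   # [1.0, -1.0, 2.0, -2.0, 3.0, -3.0, 4.0, -4.0]
print(partial_sums(-0.99, 100_000)[-1])   # close to 1/(1.99)^2, about 0.2525
```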

11. Jul 17, 2009

Aaaaah, that's the \$64 question, isn't it? Remember, these 'calculations' were done before the behavior of infinite series and limits was completely worked out; it is better to think of them as formal manipulations rather than rigorous proofs.

One classic book on infinite series and sequences is by Konrad Knopp, "Theory and Applications of Infinite Series". In the introductory chapter on divergent series he makes this comment/observation about the type of work I've shown.

"It is true that most mathematicians of those times held themselves aloof from such results in instinctive mistrust, and recognized only those which are true in the present-day sense. But they had no clear insight into the reasons why one type of result should be admitted, and not the other."

He further states "... Euler always let these stand when they occurred naturally by expanding an analytical expression which itself possessed a definite value. This value was in every case regarded as the sum of the series."

These are from Chapter XIII, page 458, in my copy of the book.

A similar book by Bromwich also contains very good discussions of these issues: my copy is in my school office at this time, so I can't point you to a chapter.

12. Jul 17, 2009

No, not in the original "derivations" by Euler and others - see my previous post and the associated text.

13. Jul 17, 2009

### Count Iblis

You'll find a lot of information already in the first few pages. What often happens in physics problems is that you have some problem that you cannot solve exactly, and then the standard procedure is to set up a perturbation theory. The problem you want to solve may not be so far removed from a different problem that can be solved exactly.

Formally, you can write:

Model = Solvable Model + [Model - Solvable Model]

Then what you do is you consider:

Model(g) = Solvable Model + g [Model - Solvable Model]

and assume that Model(g) can be expanded in powers of g. Then you insert g = 1 in that expansion. Usually the terms in this expansion can be computed if the solvable model also yields exact expressions for correlations.

Now, what often happens is that while the sum of the first few terms gives a good approximation, the series does not converge. The series is then an asymptotic expansion, in the sense that if you keep the number of terms fixed and let g go to zero, then the error goes to zero according to the last ignored term.

The physical reason for this is often that the model has some new features that are not present in the exactly solved model, even for infinitesimally small g. The exact difference between the phenomena in the two models as a function of g will then contain non-analytic terms.

If you then write down a formal power expansion, there is no way that expansion can converge, because then the function would be analytic. On the other hand, usually no information about the model is lost in the formal manipulations that lead to the series expansion, so somehow all of the features of the original model are present in the coefficients of the power series, even if the series does not converge.

What happens if you sum the series is that at first the partial sums seem to converge, but then they start to diverge again. The smaller you choose g, the longer it takes before the series starts to diverge.

The approximation you get by summing exactly up to the point where the series starts to diverge is called the superasymptotic approximation. This is also called optimal truncation. If you do this, you capture the analytic part of the answer. The error is non-analytic, e.g. of the form exp(-a/g^2).
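Optimal truncation can be demonstrated on a classic textbook example (my choice, not one from the thread): the Stieltjes integral S(g) = integral_0^inf exp(-t)/(1+gt) dt, whose asymptotic expansion sum (-1)^n n! g^n diverges, yet truncating near the smallest term gives an exponentially small error.

```python
# Optimal truncation sketch on the Stieltjes example:
#   S(g) = integral_0^inf exp(-t)/(1 + g*t) dt  ~  sum (-1)^n * n! * g^n,
# a divergent asymptotic series; the best partial sum occurs near n ~ 1/g.
import math

def stieltjes_exact(g, n_points=200_000, t_max=50.0):
    # trapezoidal rule; the integrand is negligible beyond t_max
    h = t_max / n_points
    total = 0.0
    for i in range(n_points + 1):
        t = i * h
        f = math.exp(-t) / (1.0 + g * t)
        total += f if 0 < i < n_points else f / 2
    return total * h

def partial(g, n_max):
    return sum((-1)**n * math.factorial(n) * g**n for n in range(n_max + 1))

g = 0.1
exact = stieltjes_exact(g)
errors = [abs(partial(g, n) - exact) for n in range(30)]
best = min(range(30), key=lambda n: errors[n])
print(best, errors[best])  # optimum near n ~ 1/g = 10; adding more terms only hurts
```

Past the optimal order the factorial growth of the coefficients takes over and the partial sums blow up, which is exactly the behavior described above.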

Since the presence of such a non-analytic term is responsible for the divergence of the series, it should be possible to extract this non-analytic contribution from the divergent tail. That can be done, e.g., using Borel resummation. You then need to approximate the late terms of the series in some standard form, and then you can resum the series to various degrees of approximation. If you do this systematically, you get another series containing non-analytic terms in g, which is called a hyperasymptotic series.

Then, such a series is itself also divergent, and you can then iterate this whole procedure to get a second hyperasymptotic series, etc. etc.