ice109 said:
that's a 110-page book - too long an investment just to satisfy my curiosity. So maybe you can explain to me how one can sum things that represent physical reality in this way.
You'll find a lot of information already in the first few pages. What often happens in physics is that you have some problem you cannot solve exactly, and the standard procedure is then to set up a perturbation theory. The problem you want to solve may not be far removed from a different problem that can be solved exactly.
Formally, you can write:
Model = Solvable Model + [Model - Solvable Model]
Then what you do is you consider:
Model(g) = Solvable Model + g [Model - Solvable Model]
and assume that Model(g) can be expanded in powers of g. Then you insert g = 1 into that expansion. Usually the terms in this expansion can be computed if the solvable model also yields exact expressions for correlations.
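As a concrete toy example (my own, not from the book): take the "model" to be the integral Z(g) = (1/√(2π)) ∫ exp(-x²/2 - g x⁴) dx, with the pure Gaussian (g = 0) as the solvable model. Expanding exp(-g x⁴) in powers of g and using the Gaussian moments ⟨x^(4n)⟩ = (4n-1)!! gives the formal series Z(g) = Σ_n (-1)^n (4n-1)!!/n! gⁿ. A minimal sketch:

```python
import math

def exact_Z(g, xmax=10.0, steps=20000):
    """The 'model': Z(g) = (1/sqrt(2*pi)) * integral of exp(-x^2/2 - g*x^4),
    evaluated by a simple Riemann sum (the integrand decays very fast)."""
    dx = 2.0 * xmax / steps
    total = sum(math.exp(-0.5 * x * x - g * x**4)
                for x in (-xmax + i * dx for i in range(steps + 1)))
    return total * dx / math.sqrt(2.0 * math.pi)

def coeff(n):
    """Perturbative coefficient c_n = (-1)^n (4n-1)!!/n!, from expanding
    exp(-g*x^4) and using the Gaussian moment <x^(4n)> = (4n-1)!!."""
    double_fact = 1
    for k in range(1, 4 * n, 2):
        double_fact *= k
    return (-1)**n * double_fact / math.factorial(n)

def partial_sum(g, N):
    """Sum of the perturbation series through order g^N."""
    return sum(coeff(n) * g**n for n in range(N + 1))

# At small g the low orders approximate Z(g) well...
print(exact_Z(0.01), partial_sum(0.01, 3))
# ...but the coefficients grow factorially, so the series cannot converge:
print(abs(coeff(7) * 0.01**7), abs(coeff(20) * 0.01**20))
```

Here the low-order partial sums land close to the exact answer even though, as discussed below, the full series diverges for every g > 0.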
Now, what often happens is that while the sum of the first few terms gives a good approximation, the series does not converge. The series is then an asymptotic expansion, in the sense that if you keep the number of terms fixed and let g go to zero, the error goes to zero at the rate of the first omitted term.
The physical reason for this is often that the model has some new features that are not present in the exactly solved model, even for infinitesimally small g. The exact difference between the phenomena in the real model, as a function of g, will then contain non-analytic terms.
If you then write down a formal power expansion, there is no way that expansion can converge, because the function would then be analytic. On the other hand, usually no information about the model is lost in the formal manipulations that lead to the series expansion, so somehow all of the features of the original model are present in the coefficients of the power series, even though the series does not converge.
What happens if you sum the series is that at first the partial sums seem to converge, but then they start to diverge again. The smaller you choose g, the longer it takes before the series starts to diverge.
The approximation you get by summing exactly up to the point where the series starts to diverge is called the superasymptotic approximation. This is also called optimal truncation. If you do this, you capture the analytic part of the answer. The error is non-analytic, e.g. of the form exp(-a/g^2).
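To see optimal truncation at work, here is a sketch using a standard toy model (my example, not the book's): the quartic-perturbed Gaussian integral Z(g) = (1/√(2π)) ∫ exp(-x²/2 - g x⁴) dx, whose perturbation series has terms (-1)^n (4n-1)!!/n! gⁿ. Truncating just where the terms are smallest leaves an error roughly the size of that smallest term:

```python
import math

def exact_Z(g, xmax=10.0, steps=20000):
    # "Model": Riemann-sum evaluation of the quartic-perturbed Gaussian integral.
    dx = 2.0 * xmax / steps
    return sum(math.exp(-0.5 * (-xmax + i * dx)**2 - g * (-xmax + i * dx)**4)
               for i in range(steps + 1)) * dx / math.sqrt(2.0 * math.pi)

def term(n, g):
    # n-th term of the perturbation series: (-1)^n (4n-1)!!/n! * g^n
    double_fact = 1
    for k in range(1, 4 * n, 2):
        double_fact *= k
    return (-1)**n * double_fact / math.factorial(n) * g**n

g = 0.005
terms = [term(n, g) for n in range(40)]
# Optimal truncation: stop at the smallest term in magnitude.
n_opt = min(range(1, 40), key=lambda n: abs(terms[n]))
superasymptotic = sum(terms[:n_opt + 1])
error = abs(superasymptotic - exact_Z(g))
print(n_opt, error, abs(terms[n_opt]))
```

The optimal truncation order grows like 1/g (here around n = 12 for g = 0.005), and the leftover error is comparable to the smallest term, which is exponentially small in 1/g - the non-analytic remainder described above.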
Since the presence of such a non-analytic term is responsible for the divergence of the series, it should be possible to extract this non-analytic contribution from the divergent tail. That can be done e.g. using Borel resummation. You then need to approximate the late terms of the series in some standard form, after which you can resum the series to various degrees of approximation. If you do this systematically, you get another series containing non-analytic terms in g, which is called a hyperasymptotic series.
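Here is a sketch of Borel resummation for the same standard toy problem (again my example, not the book's): the divergent series Σ (-1)^n (4n-1)!!/n! gⁿ for the quartic-perturbed Gaussian integral. Dividing the n-th coefficient by n! gives the Borel transform, which has a finite radius of convergence; a Padé approximant continues it beyond that radius, and an integral against e^(-t) undoes the n!. This uses scipy's pade and quad:

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

def coeff(n):
    # c_n = (-1)^n (4n-1)!!/n!  (perturbation series of the toy integral)
    double_fact = 1
    for k in range(1, 4 * n, 2):
        double_fact *= k
    return (-1)**n * double_fact / math.factorial(n)

N = 12                                  # number of series coefficients used
borel_coeffs = [coeff(n) / math.factorial(n) for n in range(N + 1)]
p, q = pade(borel_coeffs, N // 2)       # Pade approximant of the Borel transform

def borel_sum(g):
    # Z(g) = integral_0^infinity e^(-t) B(g*t) dt, with B approximated by p/q
    val, _ = quad(lambda t: math.exp(-t) * p(g * t) / q(g * t), 0, np.inf)
    return val

g = 0.05   # much too large for the bare series: its terms blow up immediately
naive = sum(coeff(n) * g**n for n in range(N + 1))
exact, _ = quad(lambda x: math.exp(-0.5 * x * x - g * x**4), -np.inf, np.inf)
exact /= math.sqrt(2.0 * math.pi)
print(exact, borel_sum(g), naive)
```

At this g the naive partial sum is useless (its terms grow from the start), while the Borel-Padé sum recovers the exact answer to good accuracy from the very same coefficients - the information really was still there in the divergent series.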
Such a series is itself also divergent, and you can then iterate this whole procedure to get a second hyperasymptotic series, etc. etc.