# On the electron's mass.

1. Aug 22, 2004

### quasar987

I remember a physics teacher I had in college telling me that... (what follows is obscure, but it's what he said)... "If you add up all the Feynman diagrams for the electron, the sum, representing its mass, DIVERGES! On the other hand, if you consider only the first few terms of the sum, it matches the experimentally measured mass very well."

Can someone tell me what's true and what's not in what he said, and how one goes from Feynman diagrams to what seems to be (judging by the terms he used) an ordinary sum ($\Sigma$)?

2. Aug 23, 2004

### jtolliver

A Feynman diagram is really just a representation of a certain calculation; for details, try searching for Feynman rules.

3. Jul 29, 2005

### quasar987

I somewhat found the answer to this question (it was my second post on PF, haha). Feynman mentions it in the last of his four lectures on QED as the first main problem of the theory. Has this problem of QED been solved today?

What about the coupling constant 'c'? Has it been explained why it is what it is?

4. Jul 29, 2005

### Meir Achuz

Your teacher was probably referring to a proof by Dyson that the perturbation expansion in powers of e diverges, and is probably an asymptotic expansion. Such expansions can give accurate results for a small number of terms, but diverge if carried to an infinite number of terms.
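
This behavior is easy to see in a toy example (not the QED series itself): Euler's integral $\int_0^\infty e^{-t}/(1+xt)\,dt$ has the divergent asymptotic expansion $\sum_n (-1)^n\, n!\, x^n$. The partial sums first approach the true value, then blow up, with the best accuracy reached after roughly $1/x$ terms. A minimal Python sketch (the function names are my own):

```python
import math

def euler_integral(x, T=50.0, steps=100_000):
    # "Exact" value of the integral of e^{-t}/(1 + x t) dt from 0 to infinity,
    # via the trapezoid rule on [0, T]; the tail beyond T is ~e^{-T}, negligible.
    h = T / steps
    total = 0.5 * (1.0 + math.exp(-T) / (1.0 + x * T))
    for k in range(1, steps):
        t = k * h
        total += math.exp(-t) / (1.0 + x * t)
    return total * h

def asymptotic_partial_sums(x, count):
    # Partial sums of the divergent asymptotic series sum_n (-1)^n n! x^n.
    sums, total, term = [], 0.0, 1.0  # term holds n! x^n
    for n in range(count):
        total += (-1) ** n * term
        sums.append(total)
        term *= (n + 1) * x
    return sums

x = 0.1
exact = euler_integral(x)
errs = [abs(s - exact) for s in asymptotic_partial_sums(x, 30)]
best = min(range(len(errs)), key=lambda n: errs[n])
# The error shrinks for the first ~1/x terms, then grows without bound.
```

Truncating at the smallest term (here around n = 10) gives roughly four correct digits; summing further only makes things worse, which is exactly the practical character of an asymptotic expansion.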

5. Jul 29, 2005

### quasar987

Yes, that's what he said. Is this considered a major flaw in QED?

6. Jul 29, 2005

### Meir Achuz

No, because any expansion, even a convergent one, would be an approximation. Asymptotic expansions are a mathematically valid way of getting accurate results, although not to arbitrary accuracy. There are non-perturbative approaches to QED, but even they would ultimately require numerical approximations, probably less accurate than perturbation theory. Also, in the coupled electroweak theory, there are cancellations at high Q^2 that remove the mass divergence.

7. Jul 31, 2005

### CarlB

There's a branch of now somewhat obscure mathematics called "divergent series" that has just this sort of property. It is the subject that got Ramanujan an invitation to study mathematics under Hardy, and included the infamous result used in string theory:

$$1+2+3+4+... = -\frac{1}{12}$$

http://math.furman.edu/~dcs/courses/math15/lectures/lecture-14.pdf

Anyway, with divergent series, the partial sums over the first n terms converge rapidly, but the later terms cause the series to diverge drastically.
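
One standard way of attaching a finite value to such a series is Abel summation: evaluate $\sum a_n x^n$ for $x < 1$ and let $x \to 1^-$. The partial sums of $1 - 2 + 3 - 4 + \dots$ oscillate with growing amplitude, yet its Abel sum is $1/4$ (the $-1/12$ for $1+2+3+\dots$ needs the stronger zeta regularization, which plain Abel summation cannot reach). A small Python sketch, with names of my own choosing:

```python
def abel_value(coeffs, x):
    # Evaluate sum a_n x^n at 0 < x < 1; as x -> 1- this approaches the
    # Abel sum of the (possibly divergent) series sum a_n, when it exists.
    return sum(a * x ** n for n, a in enumerate(coeffs))

# Coefficients of 1 - 2 + 3 - 4 + ...
coeffs = [(-1) ** n * (n + 1) for n in range(2000)]

# Ordinary partial sums oscillate without settling: 1, -1, 2, -2, 3, -3, ...
partials, running = [], 0
for a in coeffs[:10]:
    running += a
    partials.append(running)

# Abel-regularized values follow 1/(1 + x)^2 and approach 1/4 as x -> 1-.
abel_vals = [abel_value(coeffs, x) for x in (0.5, 0.9, 0.99)]
```

The divergent series and its regularized value coexist: the partial sums never converge, while the Abel values march steadily toward 1/4.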

I got reminded of this recently when I added the exponential function to my Geometric algebra calculator (which is written in Java). We all know that $$e^{-\kappa}$$ goes to zero as $$\kappa$$ goes to (plus) infinity. This result can be generalized for $$\kappa$$ a matrix. For example, if

$$\psi = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)$$

then $$e^{-\kappa \psi}$$ doesn't quite go to zero as $$\kappa$$ goes to infinity: since $$\psi^2 = \psi$$, one has $$e^{-\kappa \psi} = I + (e^{-\kappa} - 1)\psi$$, which goes to $$I - \psi$$ instead.

Anyway, it was a small surprise to me when my series for the exponential diverged badly when I took the limit as $$\kappa$$ goes to infinity for something similar to the $$\psi$$ above.

Uh, the exponential function is not an example of a divergent series, but I got reminded of the subject nevertheless.
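
That numerical blow-up can be reproduced directly. Mathematically the Taylor series of the exponential converges everywhere, but in floating point the truncated series for $e^{-\kappa\psi}$ fails for large $\kappa$: the terms grow to roughly $\kappa^\kappa/\kappa!$ before the alternating signs cancel them, so round-off swamps the tiny true answer. A sketch using plain 2x2 Python lists (helper names are mine, not Carl's calculator):

```python
import math

def mat_mul(a, b):
    # 2x2 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_taylor(a, terms=300):
    # Naive truncated Taylor series exp(A) ~ sum_{n < terms} A^n / n!,
    # accumulating term_{n} = term_{n-1} * A / n to avoid overflow.
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity = n = 0 term
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, a)
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

def exp_projection(kappa, psi):
    # Closed form for idempotent psi: exp(-kappa psi) = I + (e^{-kappa} - 1) psi.
    c = math.exp(-kappa) - 1.0
    return [[(1.0 if i == j else 0.0) + c * psi[i][j] for j in range(2)]
            for i in range(2)]

psi = [[1.0, 0.0], [0.0, 0.0]]

# Moderate kappa: the Taylor series agrees with the closed form.
t5 = exp_taylor([[-5.0, 0.0], [0.0, 0.0]])

# Large kappa: the (0,0) entry should be e^{-50} ~ 2e-22, but cancellation
# among huge intermediate terms destroys it completely.
t50 = exp_taylor([[-50.0, 0.0], [0.0, 0.0]])

# The true limit as kappa -> infinity is I - psi, not zero.
limit = exp_projection(1000.0, psi)
```

So the divergence Carl saw is a floating-point cancellation artifact of the partial sums, not a genuine divergence of the exponential series, and the finite limit $I - \psi$ is what the closed form gives.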

Carl

8. Jul 31, 2005

### Hurkyl

Staff Emeritus
I'm reminded of the first time I tried to approximate the cumulative standard normal distribution function by a Taylor series.

The function I wanted to compute was essentially:

$$f(x) = \int_{-\infty}^x e^{-t^2} \, dt$$

So, I simply replaced the exponential with its Taylor series, and swapped order of sum and integral:

$$f(x) = \int_{-\infty}^x \sum_{n = 0}^{\infty} \frac{(-1)^n}{n!} t^{2n} \, dt$$
$$f(x) = \sum_{n = 0}^{\infty} \int_{-\infty}^x \frac{(-1)^n}{n!} t^{2n} \, dt$$

which is easy enough to "compute":

$$f(x) = \sum_{n = 0}^{\infty} \left[ \frac{(-1)^n}{n!} \frac{t^{2n+1}}{2n+1} \right]_{-\infty}^x$$
$$f(x) = \sum_{n = 0}^{\infty} \left(\frac{(-1)^n}{n!} \frac{x^{2n+1}}{2n+1} + (-1)^n \infty \right)$$

But, I figure all the infinities just add up to a constant, and I know what f(0) is supposed to be, so I collect them into a single term and write:

$$f(x) = f(0) + \sum_{n = 0}^{\infty} \frac{(-1)^n}{n!} \frac{x^{2n+1}}{2n+1}$$

Of course, the right way to do this is to break the integral up into the ranges (-∞, 0] and [0, x], which I eventually figured out once I got that final sum, but this struck me as being analogous to what I've heard about renormalization: collecting the infinities into something that was "known".
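
For what it's worth, the final series does work out: with $f(0) = \sqrt{\pi}/2$, the sum equals $(\sqrt{\pi}/2)\,\mathrm{erf}(x)$, so $f(x) = (\sqrt{\pi}/2)(1 + \mathrm{erf}(x))$. A quick numerical check in Python (function names are mine):

```python
import math

def f_series(x, terms=80):
    # f(0) + sum_{n>=0} (-1)^n x^{2n+1} / (n! (2n+1)), with the collected
    # "constant of infinities" fixed to f(0) = sqrt(pi)/2.
    total = math.sqrt(math.pi) / 2.0
    term = x  # holds (-1)^n x^{2n+1} / n!, starting at n = 0
    for n in range(terms):
        total += term / (2 * n + 1)
        term *= -x * x / (n + 1)
    return total

def f_exact(x):
    # f(x) = integral of e^{-t^2} from -infinity to x
    #      = (sqrt(pi)/2) * (1 + erf(x)).
    return math.sqrt(math.pi) / 2.0 * (1.0 + math.erf(x))
```

Here the series is genuinely convergent for every x (it is the Taylor series of an entire function), so, unlike the asymptotic QED expansion discussed above, adding more terms only helps.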