# Struggles with the Continuum – Freeman Dyson and QED

Last time I sketched how physicists use quantum electrodynamics, or ‘QED’, to compute answers to physics problems as power series in the fine structure constant, which is

$$ \alpha = \frac{1}{4 \pi \epsilon_0} \frac{e^2}{\hbar c} \approx \frac{1}{137.036} . $$
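
As a quick sanity check, one can evaluate this formula directly from the SI values of the constants. Here is a minimal sketch; the numerical values are the CODATA recommended values, and nothing here depends on QED itself:

```python
import math

# CODATA values in SI units
e = 1.602176634e-19      # elementary charge, C (exact in the 2019 SI)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 299792458.0          # speed of light, m/s (exact)

# fine structure constant: alpha = e^2 / (4 pi eps0 hbar c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ≈ 137.036
```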

I concluded with a famous example: the magnetic moment of the electron. With a truly heroic computation, physicists have used QED to compute this quantity up to order ##\alpha^5##. If we also take other Standard Model effects into account we get agreement to roughly one part in ##10^{12}##.

However, if we continue adding up terms in this power series, there is no guarantee that the answer converges. Indeed, in 1952 Freeman Dyson gave a heuristic argument that makes physicists expect that the series *diverges*, along with most other power series in QED!

The argument goes as follows. If these power series converged for small positive ##\alpha##, they would have a nonzero radius of convergence, so they would also converge for small negative ##\alpha##. Thus, QED would make sense for small negative values of ##\alpha##, which correspond to *imaginary* values of the electron’s charge. If the electron had an imaginary charge, electrons would attract each other electrostatically, since the usual repulsive force between them is proportional to ##e^2##. Thus, if the power series converged, we would have a theory like QED for electrons that attract rather than repel each other.

However, there is a good reason to believe that QED cannot make sense for electrons that attract: it would describe a world where the vacuum is unstable. That is, there would be states with arbitrarily large negative energy containing many electrons and positrons, so we expect that the vacuum could spontaneously turn into electrons and positrons together with photons (to conserve energy). Of course, this is not a rigorous proof that the power series in QED diverge: just an argument that it would be strange if they did not.

To see why electrons that attract could have arbitrarily large negative energy, consider a state ##\psi## with a large number ##N## of such electrons inside a ball of radius ##R##. We require that these electrons have small momenta, so that nonrelativistic quantum mechanics gives a good approximation to the situation. Since its momentum is small, the kinetic energy of each electron is a small fraction of its rest energy ##m_e c^2##. If we let ##\langle \psi, E \psi\rangle## be the expected value of the total rest energy and kinetic energy of all the electrons, it follows that ##\langle \psi, E\psi \rangle## is approximately proportional to ##N##.

The Pauli exclusion principle puts a limit on how many electrons with momentum below some bound can fit inside a ball of radius ##R##. This number is asymptotically proportional to the volume of the ball. Thus, we can assume ##N## is approximately proportional to ##R^3##. It follows that ##\langle \psi, E \psi \rangle## is approximately proportional to ##R^3##.

There is also the negative potential energy to consider. Let ##V## be the operator for potential energy. We have ##N## electrons attracting each other through a ##1/r## potential, and each of the roughly ##N^2/2## pairs contributes, so ##\langle \psi , V \psi \rangle## is approximately proportional to ##-N^2 R^{-1}##, or ##-R^5##. Since ##R^5## grows faster than ##R^3##, we can make the expected energy ##\langle \psi, (E + V) \psi \rangle## arbitrarily large and negative as ##N,R \to \infty##.
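
Schematically, dropping all constant factors, the whole estimate fits on one line:

$$ \langle \psi, (E+V)\psi \rangle \;\sim\; \underbrace{N m_e c^2}_{\sim\, R^3} \;-\; \underbrace{\frac{N^2}{R}}_{\sim\, R^5} \;\longrightarrow\; -\infty \qquad \text{as } N \sim R^3, \; R \to \infty . $$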

Note the interesting contrast between this result and some previous ones we have seen. In Newtonian mechanics, the energy of particles attracting each other with a ##1/r## potential is unbounded below. In quantum mechanics, thanks to the uncertainty principle, the energy is bounded below for any fixed number of particles. However, quantum field theory allows for the creation of particles, and this changes everything! Dyson’s disaster arises because the vacuum can turn into a state with *arbitrarily large numbers* of electrons and positrons. This disaster only occurs in an imaginary world where ##\alpha## is negative — but it may be enough to prevent the power series in QED from having a nonzero radius of convergence.

We are left with a puzzle: how can perturbative QED work so well in practice, if the power series in QED diverge?

Much is known about this puzzle. There is an extensive theory of ‘Borel summation’, which allows one to extract well-defined answers from certain divergent power series. For example, consider a particle of mass ##m## on a line in a potential

$$ V(x) = x^2 + \beta x^4 .$$

When ##\beta \ge 0## this potential is bounded below, but when ##\beta < 0## it is not: classically, it describes a particle that can shoot to infinity in a finite time. Let ##H = K + V## be the quantum Hamiltonian for this particle, where ##K## is the usual operator for the kinetic energy and ##V## is the operator for potential energy. When ##\beta \ge 0##, the Hamiltonian ##H## is essentially self-adjoint on the set of smooth wavefunctions that vanish outside a bounded interval. This means that the theory makes sense. Moreover, in this case ##H## has a ‘ground state’: a state ##\psi## whose expected energy ##\langle \psi, H \psi \rangle## is as low as possible. Call this expected energy ##E(\beta)##. One can show that ##E(\beta)## depends smoothly on ##\beta## for ##\beta \ge 0##, and one can write down a Taylor series for ##E(\beta)##.
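
To make ##E(\beta)## concrete, one can approximate the ground state energy numerically by diagonalizing a finite-difference version of ##H## on a grid. This is a rough sketch, not the method used in the rigorous work: it adopts units where ##\hbar = 2m = 1##, so that ##H = -d^2/dx^2 + x^2 + \beta x^4## and ##E(0) = 1##, and the box size and grid resolution are arbitrary choices:

```python
import numpy as np

def ground_state_energy(beta, L=10.0, n=1500):
    """Lowest eigenvalue of H = -d^2/dx^2 + x^2 + beta*x^4
    (units with hbar = 2m = 1), via finite differences on [-L, L]."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    # diagonal: second-difference kinetic term plus the potential
    main = 2.0 / h**2 + x**2 + beta * x**4
    # off-diagonal: second-difference approximation of -d^2/dx^2
    off = -np.ones(n - 1) / h**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

print(ground_state_energy(0.0))  # ≈ 1, the harmonic oscillator value
print(ground_state_energy(0.1))  # slightly larger: the quartic term raises E
```

Scanning ##\beta## over a range of small positive values shows the smooth dependence mentioned above.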

On the other hand, when ##\beta < 0## the Hamiltonian ##H## is *not* essentially self-adjoint. This means that the quantum mechanics of a particle in this potential is ill-behaved when ##\beta < 0##. Heuristically speaking, the problem is that such a particle could tunnel through the barrier given by the local maxima of ##V(x)## and shoot off to infinity in a finite time.

This situation is similar to Dyson’s disaster, since we have a theory that is well-behaved for ##\beta \ge 0## and ill-behaved for ##\beta < 0##. As before, the bad behavior seems to arise from our ability to convert an infinite amount of potential energy into other forms of energy. However, in this simpler situation one can *prove* that the Taylor series for ##E(\beta)## does not converge. Barry Simon did this around 1969. Moreover, one can prove that Borel summation, applied to this Taylor series, gives the correct value of ##E(\beta)## for ##\beta \ge 0##. The same is known to be true for certain quantum field theories. Analyzing these examples, one can see why summing the first few terms of a power series can give a good approximation to the correct answer even though the series diverges. The terms in the series get smaller and smaller for a while, but eventually they become huge.
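
This "shrink, then blow up" behavior is easy to see in a toy model. Euler's series ##\sum_n (-1)^n n! \beta^n## diverges for every ##\beta \neq 0##, yet its Borel sum ##\int_0^\infty e^{-t}/(1 + \beta t)\, dt## is perfectly finite for ##\beta > 0##. A quick numerical sketch — this is not the QED series, just an illustration of the mechanism:

```python
import math

beta = 0.1

def partial_sum(N):
    """Partial sums of the divergent series sum_n (-1)^n n! beta^n."""
    return sum((-1)**n * math.factorial(n) * beta**n for n in range(N + 1))

# Borel sum: integral_0^inf e^{-t} / (1 + beta t) dt,
# approximated by the trapezoidal rule on [0, 40] (the tail is ~e^{-40})
f = lambda t: math.exp(-t) / (1 + beta * t)
n_steps, T = 100_000, 40.0
dt = T / n_steps
borel = dt * (0.5 * f(0) + sum(f(k * dt) for k in range(1, n_steps)) + 0.5 * f(T))

for N in (2, 5, 10, 20, 40):
    print(N, partial_sum(N) - borel)
# the error shrinks until N ~ 1/beta = 10, then the terms take over and blow up
```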

Unfortunately, nobody has been able to carry out this kind of analysis for quantum electrodynamics. In fact, the current conventional wisdom is that this theory is inconsistent, due to problems at very short distance scales. In our discussion so far, we summed over Feynman diagrams with ##\le n## vertices to get the first ##n## terms of power series for answers to physical questions. However, one can also sum over all diagrams with ##\le n## loops. This more sophisticated approach to renormalization, which sums over infinitely many diagrams, may dig a bit deeper into the problems faced by quantum field theories.

If we use this alternate approach for QED we find something surprising. Recall that in renormalization we impose a momentum cutoff ##\Lambda##, essentially ignoring waves of wavelength less than ##\hbar/\Lambda##, and use this to work out a relation between the electron’s bare charge ##e_\mathrm{bare}(\Lambda)## and its renormalized charge ##e_\mathrm{ren}##. We try to choose ##e_\mathrm{bare}(\Lambda)## so that ##e_\mathrm{ren}## equals the electron’s experimentally observed charge ##e##. If we sum over Feynman diagrams with ##\le n## vertices this is always possible. But if we sum over Feynman diagrams with at most one loop, it ceases to be possible when ##\Lambda## reaches a certain very large value, namely

$$ \Lambda \; = \; \exp\left(\frac{3 \pi}{2 \alpha} + \frac{5}{6}\right) m_e c \; \approx \; e^{647} m_e c. $$

According to this one-loop calculation, the electron’s bare charge becomes *infinite* at this point! This value of ##\Lambda## is known as a ‘Landau pole’, since it was first noticed in about 1954 by Lev Landau and his colleagues.

What is the meaning of the Landau pole? We said that poetically speaking, the bare charge of the electron is the charge we would see if we could strip off the electron’s virtual particle cloud. A somewhat more precise statement is that ##e_\mathrm{bare}(\Lambda)## is the charge we would see if we collided two electrons head-on with a momentum on the order of ##\Lambda##. In this collision, there is a good chance that the electrons would come within a distance of ##\hbar/\Lambda## from each other. The larger ##\Lambda## is, the smaller this distance is, and the more we penetrate past the effects of the virtual particle cloud, whose polarization ‘shields’ the electron’s charge. Thus, the larger ##\Lambda## is, the larger ##e_\mathrm{bare}(\Lambda)## becomes.

So far, all this makes good sense: physicists have done experiments to actually measure this effect. The problem is that according to a one-loop calculation, ##e_\mathrm{bare}(\Lambda)## becomes infinite when ##\Lambda## reaches a certain huge value.

Of course, summing only over diagrams with at most one loop is not definitive. Physicists have repeated the calculation summing over diagrams with ##\le 2## loops, and again found a Landau pole. But again, this is not definitive. Nobody knows what will happen as we consider diagrams with more and more loops. Moreover, the distance ##\hbar/\Lambda## corresponding to the Landau pole is absurdly small! For the one-loop calculation quoted above, this distance is about

$$ e^{-647} \frac{\hbar}{m_e c} \; \approx \; 6 \cdot 10^{-294}\, \mathrm{meters} . $$
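
Because ##e^{-647}## underflows ordinary floating point, the easiest way to check this number is to work with base-10 logarithms. A small sketch, using CODATA values for ##\hbar##, ##m_e##, ##c## and the exponent ##3\pi/2\alpha + 5/6## from the one-loop formula:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
alpha = 1 / 137.036

# reduced Compton wavelength of the electron, hbar / (m_e c)
compton = hbar / (m_e * c)          # ≈ 3.86e-13 m

# exponent from the one-loop Landau pole formula
exponent = 3 * math.pi / (2 * alpha) + 5.0 / 6.0   # ≈ 646.6

# distance = e^{-exponent} * compton; e^{-647} underflows, so track log10
log10_distance = -exponent / math.log(10) + math.log10(compton)
print(log10_distance)  # ≈ -293.2, i.e. about 6e-294 meters
```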

This is hundreds of orders of magnitude smaller than the length scales physicists have explored so far. Currently the Large Hadron Collider can probe energies up to about 10 TeV, and thus distances down to about ##2 \cdot 10^{-20}## meters, or about 0.00002 times the radius of a proton. Quantum field theory seems to be holding up very well so far, but no reasonable physicist would be willing to extrapolate this success down to ##6 \cdot 10^{-294}## meters, and few seem upset at problems that manifest themselves only at such a short distance scale.

Indeed, attitudes on renormalization have changed significantly since 1948, when Feynman, Schwinger and Tomonaga developed it for QED. At first it seemed a bit like a trick. Later, as the success of renormalization became ever more thoroughly confirmed, it became accepted. However, some of the most thoughtful physicists remained worried. In 1975, Dirac said:

> Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small — not neglecting it just because it is infinitely great and you do not want it!

As late as 1985, Feynman wrote:

> The shell game that we play [. . .] is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.

By now renormalization is thoroughly accepted among physicists. The key move was a change of attitude emphasized by Kenneth Wilson in the 1970s. Instead of treating quantum field theory as the correct description of physics at arbitrarily large energy-momenta, we can assume it is only an approximation. For renormalizable theories, one can argue that even if quantum field theory is inaccurate at large energy-momenta, the corrections become negligible at smaller, experimentally accessible energy-momenta. If so, instead of seeking to take the ##\Lambda \to \infty## limit, we can use renormalization to relate bare quantities at some large but finite value of ##\Lambda## to experimentally observed quantities.

From this practical-minded viewpoint, the possibility of a Landau pole in QED is less important than the behavior of the Standard Model. Physicists believe that the Standard Model would suffer from a Landau pole at momenta low enough to cause serious problems if the Higgs boson were considerably more massive than it actually is. Thus, they were relieved when the Higgs was discovered at the Large Hadron Collider with a mass of about 125 GeV/##c^2##. However, the Standard Model may still suffer from a Landau pole at high momenta, as well as an instability of the vacuum.

Regardless of practicalities, for the *mathematical* physicist, the question of whether QED and the Standard Model can be made into well-defined mathematical structures that obey the axioms of quantum field theory remains an open problem of great interest. Most physicists believe that this can be done for pure Yang–Mills theory, but actually proving this is the first step towards winning $1,000,000 from the Clay Mathematics Institute.

I’m a mathematical physicist. I work at the math department at U. C. Riverside in California, and also at the Centre for Quantum Technologies in Singapore. I used to do quantum gravity and n-categories, but now I mainly work on network theory and the Azimuth Project, which is a way for scientists, engineers and mathematicians to do something about the global ecological crisis.

(Excuse in advance if this question is too newbie.) I was reading Penrose's Road to Reality, chapter 7, where he discusses how some complex functions have Taylor expansions that are only correct within a small domain. The expansion converges within that domain but diverges outside of it. This usually happens when the complex function has singularities, or is multi-sheeted, like y = log z. To get a description of the whole domain you need analytic continuation (patching together Taylor expansions around different points). The same is true for more general Riemann surfaces (chapter 8).

Anyway, my question is whether this property has anything to do with the QED problem. It sounds quite similar to me. Maybe the later iterations are somehow actually sampling from outside the domain of convergence.

"If these power series converged for small positive alpha, they would have a nonzero radius of convergence, so they would also converge for small negative alpha. Thus, QED would make sense for small negative values of alpha [which is problematic since it describes unstable vacuum]."

I don't understand the above logic. Why is it a problem that QED with negative alpha results in a physically nonsensical picture? Why would anyone care? This is not a model of our Universe.

Thanks!

Another wonderful paper – many thanks to John.

I would like to mention a series of lectures I enjoyed very much and re-watch them every now and then with equal enjoyment:

It examines perturbation theory, extracting finite answers from divergent series, and some of the things touched on in the paper. The most startling for me was the origin of quantisation which naturally comes out of this approach. Fascinating. Oh – he touches on its relation to renormalisation – not in depth though.

Thanks

Bill

Hi John,

I’m not familiar with the 2-loop calculation. Does the Landau pole appear at the same energy as for 1-loop, or does it move?

Almost everything in this Continuum series of Insights articles is way over my head. Nevertheless, I manage to learn something, and to gain a glimpse of the outstanding problems in this field when I read them. I can only attribute that to excellent writing.

Kudos and thank you John Baez.

One cannot stress this enough. These Insights are written in a way to explain the issues discussed “as simple as possible but not simpler”. It’s really great!

If we are considering imaginary values of charge, then logically we must also consider imaginary values for the EM fields. A classical plane wave with imaginary amplitude would have a (real-valued) Poynting vector opposite the direction of propagation: the wave radiates negative energy. I am guessing that the QED equivalent would be a negative-energy photon. Now if all of the electrons and positrons have only imaginary charge, I don’t see how real-valued fields could be generated. It would seem that in such a pair production, the only photons that could be produced would have negative energy, so energy conservation should rule out this scenario!

@john baez , is my question above meaningful?

In this power series, the “complex-valued independent variable” is the fine structure constant alpha. By the definition of a power series, or any function for that matter, all the terms must be evaluated at the same value of the variable. The important question is whether the actual value, about 1/137, is within the domain of convergence. Dyson argued that it must not be, because domains of convergence are symmetrical around the origin: if the series converges for alpha = 1/137, it must also converge for alpha = −1/137. That would imply that in a hypothetical universe with imaginary-charged electrons, we would have finite values for all the measurements that are finite in real life. But in such a universe the vacuum should instantly generate infinite numbers of electrons, positrons, and photons, so it seems that the series should diverge. The same argument implies that the radius of convergence should simply be zero.