Gauge invariance/Lorentz invariance of regulator in QED

In summary, cutoff regularization can run into trouble with Lorentz invariance and, in gauge theories, with gauge invariance: the regulator may break these symmetries in intermediate steps of the calculation, producing unphysical artifacts (such as a spurious photon mass) and complicating renormalization. Dimensional regularization is usually the more convenient choice, though in chiral theories care is still needed with ##\gamma^5## and anomalies.
  • #1
WannabeNewton
Science Advisor
See the passage attached below.

Consider the 1-loop vertex correction (cf. p. 2 of http://bolvan.ph.utexas.edu/~vadim/classes/2012f/vertex.pdf) and vacuum polarization diagrams in QED. A very simple UV regulator that keeps the amplitude integrals tractable is the following prescription: take an arbitrary high-energy cutoff ##\Lambda_0## for a theory ##\mathcal{L}_0##, introduce another cutoff ##\Lambda < \Lambda_0##, compute the corrections to the amplitude for ##q > \Lambda##, and absorb the corrections into the effective Lagrangian ##\mathcal{L}_{\Lambda}## as running couplings of the local field operators corresponding to the corrected diagrams.
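
Schematically, the prescription I have in mind is a Wilsonian step (writing ##\phi_<## for the modes with ##|q| < \Lambda## and ##\phi_>## for the shell ##\Lambda < |q| < \Lambda_0##; this notation is mine, not the text's):

$$e^{\,i\int d^4x\,\mathcal{L}_{\Lambda}[\phi_<]} = \int\mathcal{D}\phi_>\; e^{\,i\int d^4x\,\mathcal{L}_{0}[\phi_< + \phi_>]},$$

so the loop corrections coming from the shell are absorbed into the running couplings of local operators in ##\mathcal{L}_{\Lambda}##.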

In the case of the vertex correction diagram the ##q > \Lambda## regulator clearly breaks Lorentz invariance, and in the case of the vacuum polarization diagram it breaks gauge invariance, because in the latter case we integrate over charged fermion momenta whereas in the former case we only integrate over photon momenta.

What I don't understand is why, in the first case, the regulator's lack of Lorentz invariance poses no problem for the calculation of the amplitude (everything goes through easily with the simple Taylor-expansion method of evaluating the loop), whereas in the second case the lack of gauge invariance does cause trouble with this regulator.

As I understand it, this regulator generates a renormalized mass term for the photon, but I understand neither where it comes from nor why it poses a problem for the amplitude itself, since one can go ahead and calculate it easily using the same Taylor-expansion method as for the other diagram.

Could someone explain this? Thanks in advance!

[Attached image: gauge invariance regulator.png]
 
  • #2
Cutoff regularizations are tricky for gauge theories, because you violate Lorentz covariance of the proper vertex functions and the Ward-Takahashi identities related to gauge invariance. The vacuum polarization is no longer 4-transverse: to prove transversality for the Feynman diagram depicted in your text excerpt, you have to shift the loop momentum, and the cancellation only works if you integrate over the full 4-momentum space. It is much more convenient to use a gauge-invariant regularization scheme like dim. reg. or BPHZ. Otherwise the gauge covariant definition of counter terms becomes very cumbersome.
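
Explicitly (this is the standard statement, not specific to your excerpt): the Ward-Takahashi identity requires the photon self-energy to be 4-transverse,

$$q_\mu \Pi^{\mu\nu}(q) = 0 \quad\Longrightarrow\quad \Pi^{\mu\nu}(q) = \left(q^2 g^{\mu\nu} - q^\mu q^\nu\right)\Pi(q^2).$$

With a hard momentum cutoff the shift of the loop momentum used in the proof is no longer allowed, and the diagram picks up a non-transverse piece of the schematic form ##\delta\Pi^{\mu\nu} \sim e^2 \Lambda^2 g^{\mu\nu}##, which acts like a photon mass term.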
 
  • #3
There is an interesting discussion in http://arxiv.org/abs/hep-th/9602156v2 about when it might be OK to use a cutoff that doesn't respect gauge invariance.
 
  • #4
vanhees71 said:
Otherwise the gauge covariant definition of counter terms becomes very cumbersome.

Hi, thanks. Could you expand upon this last point if possible? Or is there an example out there that you know of that I could take a look at? We're doing dimensional regularization next, so I would like to understand the shortcomings of the above kind of regulator beforehand. Thanks again!
 
  • #5
The problem with non-gauge-covariant regulators is that in the intermediate steps of your calculation you break gauge invariance. The proper vertex functions, which are the fundamental building blocks of the Feynman diagrams used to calculate the S-matrix elements (which in turn represent the observables like cross sections and decay widths within vacuum quantum field theory), are of course generally not gauge invariant themselves but "gauge covariant", i.e., they fulfill constraints known as Ward-Takahashi (or, in the non-abelian case, Slavnov-Taylor) identities. Among them is the transversality of the gauge-boson self-energy ("polarization") in the un-Higgsed case. This transversality is violated when a non-gauge-covariant regulator is used.

This immediately shows the big disaster of such an approach: the self-energy has to be resummed into the full propagator (in the approximation of the self-energy being calculated at some loop order of perturbation theory), and this propagator describes the propagation of particles in the physical picture, with the self-energy describing the radiative corrections to the propagating particles (in the sense of the spectral properties). Now, if the self-energy is not transverse, this implies that within your model unphysical degrees of freedom become interacting. For the corresponding S-matrix elements, which as observable quantities must be gauge invariant, this implies a violation of gauge invariance, and the unphysical degrees of freedom of the field lead to a violation of unitarity and can even lead to negative probabilities and/or the violation of Poincare invariance. So it is a big disaster.
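
To make the resummation argument explicit (schematically, in Feynman gauge, and suppressing the ##q^\mu q^\nu## structures): if the regularized self-energy contains a gauge-violating piece ##\Delta(\Lambda)\,g^{\mu\nu}## on top of the transverse part, the Dyson-resummed propagator behaves like

$$D^{\mu\nu}(q) \sim \frac{-g^{\mu\nu}}{q^2\left[1-\Pi(q^2)\right] - \Delta(\Lambda)} + \ldots,$$

i.e., the pole is shifted away from ##q^2 = 0##: the regulator has generated a photon mass of order ##\Delta(\Lambda) \sim e^2\Lambda^2##, and the longitudinal and scalar modes no longer decouple from the dynamics.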

On the other hand, you want to renormalize the theory, i.e., define the bare quantities in terms of physical quantities (in the case of Dyson-renormalizable theories you only need to adjust the wave-function normalization factors, masses, and couplings already present in the Lagrangian, and no further quantities) and then take the physical limit of the regulator. To this end you must make subtractions at some given energy scale (the safest choice is space-like four-momenta and a mass-independent renormalization scheme, to avoid trouble with IR and collinear singularities). These subtractions lead to gauge-symmetry-breaking counter terms in the Lagrangian as long as the regulator is not taken to its physical value (i.e., for your cutoff [itex]\Lambda \rightarrow \infty[/itex]). To be sure that you don't have trouble with gauge-symmetry breaking in your final result for the S-matrix, you must make sure that these gauge-violating counter terms strictly vanish in the physical limit, and this is quite inconvenient.
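
A minimal example of such a term (my illustration): with a hard cutoff one is forced to introduce a photon-mass counter term

$$\delta\mathcal{L} = \tfrac{1}{2}\,\delta m_\gamma^2(\Lambda)\, A_\mu A^\mu,$$

tuned order by order such that the renormalized photon mass stays zero, and one must check that no trace of it survives in the S-matrix after the limit ##\Lambda \rightarrow \infty## is taken. With a gauge-invariant regulator such a term never appears in the first place.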

This explains why the breakthrough for gauge theories as the foundation of the Standard Model was 't Hooft's work in 1971: he used dimensional regularization, which is in many cases a very convenient choice. The only trouble arises for chiral gauge models like the electroweak sector of the Standard Model. Here one must take further care in connection with [itex]\gamma^5[/itex]. However, that's the case for any regulator anyway: you have to check the anomaly freedom of the gauge symmetry too, and control where you put an anomaly of "accidental symmetries" so that you avoid anomalously breaking the gauge symmetry. An example is QCD, where it is the global U_A(1) symmetry (in the limit of massless quarks a symmetry of the classical Lagrangian) that must be anomalously broken, and not a symmetry relevant for the gauge fields, because an anomaly there would imply the breaking of the SU(3) gauge symmetry of QCD. The U_A(1) anomaly, however, is very welcome, because it explains the decay rate of the neutral pion to two photons and also why the [itex]\eta'[/itex] meson is not a (pseudo-)Goldstone boson of the (approximate) U_A(1) symmetry of QCD in the light-quark sector. In any case, dimensional regularization keeps everything gauge invariant for the regularized theory, you can define gauge-invariant counter terms, and the Ward-Takahashi identities are fulfilled at every stage of the calculation. Finally you can take the regulator (here the space-time dimension [itex]d[/itex]) to its physical value [itex]d \rightarrow 4[/itex].
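
For comparison, the one-loop QED vacuum polarization in dimensional regularization comes out transverse automatically (standard result, up to convention-dependent factors; here ##\epsilon = 4-d##, ##\Delta = m^2 - x(1-x)q^2## and ##\mu## is the renormalization scale):

$$\Pi^{\mu\nu}(q) = \left(q^2 g^{\mu\nu} - q^\mu q^\nu\right)\Pi(q^2), \qquad \Pi(q^2) = -\frac{2\alpha}{\pi}\int_0^1 dx\, x(1-x)\left[\frac{2}{\epsilon} - \gamma_E + \ln\frac{4\pi\mu^2}{\Delta}\right],$$

so no ##\Lambda^2 g^{\mu\nu}## piece and no photon-mass counter term ever appear, and ##q_\mu\Pi^{\mu\nu} = 0## holds at every stage.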

Another way is BPHZ, where you define the renormalization conditions and do the corresponding subtractions for the proper vertex functions without any intermediate regularization. Here too, if there is trouble with anomalies, you have to take them into account and make sure that you anomalously break only those symmetries which are not important for the local gauge symmetries of the model. Only if it is possible to keep the local symmetries free of anomalies does the model make physical sense. For the Standard Model this is the case, due to the specific charge pattern of the quarks and leptons. It's one of the most marvelous consistency checks in physics I know of :-).

For details on renormalization, see my (still far from complete) QFT manuscript:

http://fias.uni-frankfurt.de/~hees/publ/lect.pdf
 
  • #6
Brilliant, thank you for the comprehensive explanation!

I understand how a non-transverse self-energy will lead to coupling of unphysical modes, but how exactly does this lead to non-unitarity of the S-matrix?
 
  • #7
Among the unphysical modes there is at least one with "negative-norm states", which are thus not part of the physical Hilbert space. If you mix them into the interacting parts of the field degrees of freedom, the corresponding transition "probabilities" of scattering processes can come out negative. Already this makes no sense. On top of that, if the scalar product of the Hilbert space is no longer positive definite, the S-matrix is not unitary anymore. So, if you use a non-gauge-invariant regulator, the regularized results may be problematic, and you must make sure that after renormalization and after putting the regulators to their physical values (e.g., letting UV cutoffs go to infinity) the S-matrix elements get rid of these unphysical contributions. It's much safer to use regulators that respect gauge invariance. Dimensional regularization is almost always very convenient. Only for chiral theories (like the electroweak Standard Model), which involve specifically four-dimensional objects like ##\gamma^5##, can there be a problem, because it is not a priori clear how to generalize such "four-dimensional objects" to arbitrary dimensions. Usually this involves a careful analysis of the corresponding anomalies; you have to invoke additional constraints from the demand that local gauge symmetries must be anomaly free, and you have to shift the anomalies to the currents which are not relevant for the gauge symmetry (e.g., the axial U(1) current must take the anomaly in QCD, which is achieved by using the 't Hooft convention for ##\gamma^5##, i.e., anticommuting with ##\gamma^0,\ldots, \gamma^3## and commuting with ##\gamma^{\mu}## for ##\mu \geq 4##).
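
To illustrate where the negative-norm states come from (standard covariant quantization in Feynman gauge with the mostly-minus metric, not specific to this discussion): the four photon polarizations are quantized with

$$[a^{\lambda}(\mathbf{k}), a^{\lambda'\dagger}(\mathbf{k}')] = -g^{\lambda\lambda'}\,\delta^{(3)}(\mathbf{k}-\mathbf{k}'),$$

so the time-like (scalar) mode ##\lambda = 0## creates states of negative norm, ##\langle 0|a^0(\mathbf{k})\, a^{0\dagger}(\mathbf{k}')|0\rangle = -\delta^{(3)}(\mathbf{k}-\mathbf{k}')##. As long as the self-energy is transverse, these contributions cancel against the longitudinal ones in S-matrix elements; a non-transverse piece spoils that cancellation, and with it unitarity on the physical Hilbert space.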

Another possibility is to use a mass-independent version of BPHZ renormalization, where you make the subtractions in the integrands (at a space-like renormalization point). There the anomaly constraints have to be worked in as a necessary part of the renormalization conditions. Anomalies appear because there is no regularization scheme respecting the corresponding symmetry, i.e., the symmetry of the classical action is not preserved by quantization. This can be shown with path-integral methods and traced back to the non-invariance of the path-integral measure under the anomalously broken symmetry (Fujikawa). If I remember right, these aspects of anomalies are treated very well in

Böhm, Denner, Joos, Gauge Theories of the Strong and Electroweak Interactions, Teubner (2001).
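
Coming back to the BPHZ subtraction, a schematic illustration (my sketch, not from the reference above): for an overall divergent one-loop integrand ##I(q;p)## with superficial degree of divergence ##D##, one subtracts the Taylor polynomial in the external momentum ##p## around a space-like renormalization point ##\bar p## directly under the integral,

$$I_{\mathrm{ren}}(q;p) = \left(1 - t^{D}_{p\to\bar p}\right) I(q;p) = I(q;p) - \sum_{n=0}^{D}\frac{1}{n!}\,(p-\bar p)^{\mu_1}\cdots(p-\bar p)^{\mu_n}\left.\frac{\partial^n I(q;p)}{\partial p^{\mu_1}\cdots\partial p^{\mu_n}}\right|_{p=\bar p},$$

so the subtracted integrand is UV finite and can be integrated without any intermediate regulator; for nested and overlapping divergences Zimmermann's forest formula organizes the corresponding subtractions for the subdiagrams.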
 
  • #8
Thank you for the explanation, it is much more lucid now. Do you happen to have more references on BPHZ (does Srednicki discuss it, for example)? Also, is it related to the Pauli-Villars regulator in any way, since PV also involves subtracting off a Lorentz-invariant divergent term from the 1-loop diagram? PV is a regularization technique I am familiar with, so if they are related it would be much easier for me to become familiar with BPHZ.
 

What is gauge invariance in QED?

Gauge invariance is a fundamental principle in quantum electrodynamics (QED) that states that the physical laws and equations describing electromagnetic interactions should be independent of the choice of gauge, or mathematical representation of the electromagnetic field. This means that different choices of gauge should yield the same physical predictions.

How is gauge invariance related to Lorentz invariance?

Lorentz invariance is the principle that the laws of physics are the same in all inertial reference frames, while gauge invariance reflects a redundancy in the mathematical description of the electromagnetic field; they are distinct symmetries. They are related in this context because a regulator (such as a hard momentum cutoff) can break either of them in intermediate steps of a calculation, and both must be recovered in the physical predictions.

What is the role of gauge invariance in QED calculations?

Gauge invariance plays a crucial role in QED calculations, as it ensures that unphysical results are not obtained due to the choice of gauge. It also allows for the cancellation of unphysical degrees of freedom in the theory, resulting in physically meaningful predictions.

Why is gauge invariance important in the renormalization of QED?

In the renormalization process of QED, gauge invariance is important because it ensures that the theory remains consistent and predictive at all energy scales. Without gauge invariance, the renormalization process would lead to unphysical results and the breakdown of the theory.

Can gauge invariance be violated in QED?

No, gauge invariance is a fundamental principle of QED, and there is currently no experimental evidence of it being violated. A regulator may break it in intermediate steps of a calculation, but it must be restored in the final, physical results; extensions of electrodynamics that genuinely violate gauge invariance (for example, a nonzero photon mass) are tightly constrained by experiment.
