Renormalisation and running of the φ^3 and other coupling constants

In summary: for φ^3 (and φ^4) one renormalises the 1PI three-point (four-point) vertex function directly at a chosen renormalisation point; if massless particles are involved, on-shell schemes run into IR trouble, so one uses off-shell (spacelike) momenta or a mass-independent scheme such as MS.
  • #1
tomdodd4598
TL;DR Summary
How is coupling renormalisation done in general?
Hey there!

I am still rather new to renormalising QFT, still using the cut-off scheme with counterterms, and have only looked at the φ^4 model to one loop order.

In that model, we renormalise with a counterterm to the one-loop four-point 1PI diagram at a certain energy scale.

Do I simply, in the case of φ^3, do roughly the same, that is renormalise with a counterterm to the one-loop three-point 1PI diagram at a certain energy scale?

The reason I ask is because I was confused by the renormalisation of the electric charge in QED, which involved looking at the photon propagator rather than the interaction vertex as I had originally expected...

This leads me to the overarching question: is there a generic way to know which diagrams/processes to look at in order to renormalise interaction coupling constants, and does the fact of whether a coupling is dimensionless or not, or whether the single-vertex tree-level interaction can actually conserve energy and momentum (not the case in φ^3 or QED) affect this in any way?

Thanks in advance!
 
  • #2
In ##\phi^4## (or ##\phi^3##) theory there's unfortunately not the simplification that you can get the coupling-constant renormalization via a self-energy diagram; you have to consider the 1PI four-point (three-point) function directly.

The reason that you can get the running of the coupling from the vacuum polarization diagrams of the gauge bosons in gauge theories is the corresponding Ward-Takahashi (or for non-Abelian theories in other than the background-field gauge the Slavnov-Taylor) identities.
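Schematically, in the standard counterterm notation (##Z_1## for the vertex, ##Z_2## for the electron wave function, ##Z_3## for the photon wave function), the Ward-Takahashi identity gives ##Z_1=Z_2##, and therefore
$$e_0=\frac{Z_1}{Z_2\sqrt{Z_3}}\,e=\frac{e}{\sqrt{Z_3}},$$
i.e., the charge renormalization is fixed entirely by the photon wave-function renormalization, which you read off from the vacuum-polarization diagram.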
 
  • #3
vanhees71 said:
In ##\phi^4## (or ##\phi^3##) theory there's unfortunately not the simplification that you can get the coupling-constant renormalization via a self-energy diagram; you have to consider the 1PI four-point (three-point) function directly.
Ok, thanks. But then what is my energy scale or renormalisation point? In the case of φ^4 I considered the 1PI piece of an on-shell ##2 \rightarrow 2## scattering process with ##s=t=u=\mu ^{ 2 }##, but there is no such on-shell scenario for the φ^3 interaction.

vanhees71 said:
The reason that you can get the running of the coupling from the vacuum polarization diagrams of the gauge bosons in gauge theories is the corresponding Ward-Takahashi (or for non-Abelian theories in other than the background-field gauge the Slavnov-Taylor) identities.
Ah, and I suppose this comes from comparing the renormalisations of the electron kinetic term and the interaction term, and demanding that they be equal (as they form the covariant derivative)?
 
  • #4
Actually, I realize the choice of ##s=t=u=\mu ^{ 2 }## in the ##{ \varphi }^{ 4 }## case does not correspond to an on-shell scattering process, but was chosen "for simplicity", whatever that may mean.

I suppose one could instead choose ##s=4\mu ^{ 2 }## and ##t=u=0##, but honestly I'm then not sure how to recreate the factor of 3 in the widely quoted ##\beta \left( \lambda \right) =3{ { \lambda }^{ 2 } }/{ { \left( 4\pi \right) }^{ 2 } }##.

Anyway, the question still stands as to which reference momenta I should be using in the ##{ \varphi }^{ 3 }## case, as the choice could affect the sign of the contribution.
 
  • #5
Okay, choosing ##{ \left( { p }_{ 1 }+{ p }_{ 2 } \right) }^{ 2 }={ \left( { p }_{ 1 }+{ p }_{ 3 } \right) }^{ 2 }={ \left( { p }_{ 2 }+{ p }_{ 3 } \right) }^{ 2 }=-{ \mu }^{ 2 }## got me a result of the form ##\beta \left( \widetilde { g } \right) =-\widetilde { g } -a{ { \widetilde { g } }^{ 3 } }+\dots ##, where ##a## is some positive real number and ##g\left( \mu \right) ≔\mu \widetilde { g } \left( \mu \right) ##, i.e. ##\widetilde { g }## is the dimensionless coupling. This seems to match the sources dotted around online.
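For what it's worth, the leading ##-\widetilde { g }## term is just the classical mass dimension of ##g## showing up: with ##\widetilde { g }=g/\mu##,
$$\beta\left(\widetilde{g}\right)\equiv\mu\frac{\mathrm{d}\widetilde{g}}{\mathrm{d}\mu}=\mu\frac{\mathrm{d}}{\mathrm{d}\mu}\left(\frac{g}{\mu}\right)=-\widetilde{g}+\frac{1}{\mu}\,\mu\frac{\mathrm{d}g}{\mathrm{d}\mu},$$
so (given that the one-loop vertex correction is of order ##g^3##) the quantum piece only enters at order ##{ \widetilde { g } }^{ 3 }##.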
 
  • #6
The renormalization scale of course always somehow enters. You only have to choose it away from any cuts of the 1PI n-point functions. If massless particles are involved you have additional trouble with on-shell renormalization schemes concerning IR problems. In such cases a safe way is to choose off-shell ("spacelike") momenta defining the renormalization scale. An elegant way is to use dimensional regularization and the MS or ##\overline{\text{MS}}## renormalization schemes, which are mass-independent schemes by construction. Here the renormalization scale enters in a somewhat more abstract way: one introduces a momentum scale so as to keep the couplings at the same mass dimension as in 4 spacetime dimensions.
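Concretely, in ##d=4-2\epsilon## dimensions one writes, for example,
$$\lambda_0=\mu^{2\epsilon} Z_{\lambda}\,\lambda(\mu), \qquad g_0=\mu^{1+\epsilon} Z_g\,\widetilde{g}(\mu),$$
so that ##\lambda## and the ##\widetilde{g}## you defined above stay dimensionless; this rescaling is essentially the only place where ##\mu## enters in the MS-type schemes.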
 
  • #7
vanhees71 said:
The renormalization scale of course always somehow enters. You only have to choose it away from any cuts of the 1PI n-point functions.
You'll have to forgive me for being a little shallow on this subject for now - what is a 'cut' of an n-point function? A branch cut perhaps? If so, where do these occur?
vanhees71 said:
If massless particles are involved you have additional trouble with on-shell renormalization schemes concerning IR problems. In such cases a safe way is to choose off-shell ("spacelike") momenta defining the renormalization scale. An elegant way is to use dimensional regularization and the MS or ##\overline{\text{MS}}## renormalization schemes, which are mass-independent schemes by construction. Here the renormalization scale enters in a somewhat more abstract way: one introduces a momentum scale so as to keep the couplings at the same mass dimension as in 4 spacetime dimensions.
Are you saying the scale enters more abstractly with dim. reg.? I very much get the feeling that I need to learn about dim. reg. at some point given all the notes I come across in this area!

Could I also ask whether my definition of the dimensionless coupling is a valid one? I pretty much came up with it while trying to force a recreation of the beta function I read I should get, so I imagine/hope that in the dim. reg. scheme you effectively get that definition 'automatically'?
 
  • #8
Yes, dim. reg. is a very convenient calculational technique to deal with the infinite loop integrals in QFT. As I said, the drawback is that it is pretty abstract, and the true meaning of renormalization is much better reflected in BPHZ renormalization, which can however become quite tricky for real-world calculations. So a mixture of a lot of techniques and doing some simple (one-loop) examples helps a lot to understand renormalization.

With "cut" I indeed meant the branch cut. To give a simple example, why I said that the on-shell renormalization scheme can become problematic as soon as massless fields are involved.

QED is often presented in the "on-shell renormalization scheme". In the counter-term formulation that means that from the very beginning you work with the physical electron mass. But what does "physical electron mass" mean in terms of the 1PI N-point functions? It just says that the electron propagator has a pole at momenta with ##p^2=m^2##, where ##m## is the physical electron mass.

At tree level the propagator of course is
$$G_0(p)=\frac{1}{\gamma_{\mu} p^{\mu} -m} =\frac{\gamma_{\mu} p^{\mu} +m}{p^2-m^2+\mathrm{i} 0^+},$$
but at the one-loop level the self-energy diagram consists of a loop with an electron and a photon propagator. The relation between the Green function and the self-energy is given by Dyson's equation,
$$G(p)=G_0 + G_0 \Sigma G \; \Rightarrow \; G(p)=\frac{1}{p_{\mu} \gamma^{\mu} -m -\Sigma(p)}.$$
Now, in order that ##m## stays the physical mass at the one-loop level, you should have ##\Sigma(p)=0## for ##p^2=m^2##.

This however makes trouble, because the one-loop self-energy has a branch cut in the ##p^2## plane along ##p^2 > (m+m_{\gamma})^2##, but ##m_{\gamma}=0##, i.e., you cannot simply subtract the infinite mass part of the self-energy at ##p^2=m^2##, because the subtraction point sits right at this singularity.
 
  • #9
vanhees71 said:
QED is often presented in the "on-shell renormalization scheme". In the counter-term formulation that means that from the very beginning you work with the physical electron mass.
...
This however makes trouble, because the one-loop self-energy has a branch cut in the ##p^2## plane along ##p^2 > (m+m_{\gamma})^2##, but ##m_{\gamma}=0##, i.e., you cannot simply subtract the infinite mass part of the self-energy at ##p^2=m^2##, because the subtraction point sits right at this singularity.
So unless I'm missing part of the point, this presentation of the renormalisation of QED is rather iffy? Why is it presented in this way if it's an invalid technique (I know this is a little beside the original question)?

I presume the definition of the dimensionless coupling is okay then? I'm now rather confused about the choice for the scale, or what it represents if not the energy of some real process. Say I measure the coupling in some experiment at some energy... is it not important that I make sure my theory predicts that coupling at that energy, and if so, surely that is what the scale is meant to represent?
 
  • #10
It's not invalid, but it is pretty subtle as far as the IR problems are concerned.

The renormalization scale is indeed determined by the experiment you want the theory to apply to. E.g., the running of the electromagnetic coupling constant has been measured at ##\mu \simeq M_{\text{Z}}##, where the corresponding fine-structure constant is about 1/128, rather than the 1/137 one gets when the renormalization scale is chosen at low energies, ##\mu \simeq 0^+##.
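As a rough numerical illustration (a toy estimate that keeps only the electron loop, i.e., the one-loop result ##\beta(e)=e^3/(12\pi^2)##; the measured value at ##M_{\text{Z}}## also contains the muon, tau, and hadronic contributions):

```python
import math

# One-loop QED running with only the electron loop:
#   1/alpha(mu) = 1/alpha(m_e) - (2/(3*pi)) * ln(mu / m_e),
# which follows from beta(e) = e^3 / (12*pi^2).
alpha_inv_low = 137.036   # 1/alpha at low energies
m_e = 0.511e-3            # electron mass [GeV]
M_Z = 91.1876             # Z-boson mass [GeV]

alpha_inv_MZ = alpha_inv_low - 2.0 / (3.0 * math.pi) * math.log(M_Z / m_e)
print(f"electron loop only: 1/alpha(M_Z) ~ {alpha_inv_MZ:.1f}")
# prints ~134.5; the remaining shift down to ~1/128 comes from the other
# charged leptons and the hadronic vacuum polarization.
```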

In the context of another thread in this forum, I also found the following lecture notes on the arXiv, which I think discuss the renormalization schemes, the running coupling, and all that, very well:

https://arxiv.org/abs/hep-ph/0508242
 
  • #11
I came across that set of notes reasonably recently, very possibly from the same thread :)
It did go through things reasonably clearly, but even the on-shell piece of those notes is tied in pretty heavily to the dim. reg. approach, so it didn't quite answer the questions I had.
vanhees71 said:
The renormalization scale is indeed determined by the experiment you want the theory to apply to. E.g., the running of the electromagnetic coupling constant has been measured at ##\mu \simeq M_{\text{Z}}##, where the corresponding fine-structure constant is about 1/128, rather than the 1/137 one gets when the renormalization scale is chosen at low energies, ##\mu \simeq 0^+##.
Ah yes, I have heard of this, so perhaps this would be good to focus on. What does ##\mu \simeq M_{\text{Z}}## actually mean here? Is this the 'scale' of some ##2\rightarrow 2## scattering experiment?

If so, what are the values of ##s##, ##t## and ##u## for my diagrams in terms of ##\mu##? Surely it has to be an on-shell choice, which ##s=t=u={ \mu }^{ 2 }## isn't? But on the other hand, if I don't use this choice, I can't, for example, recreate the widely quoted beta function for ##{ \varphi }^{ 4 }##...

Furthermore, harking back to my original question, if this is the case then surely I'm not looking at renormalising the raw three-point vertex diagram, but actually a four-point diagram of two vertices?
 
  • #12
Here is an example of how ##\alpha_{\text{em}}(t)## (##t##: Mandelstam ##t##) is measured in Bhabha scattering (elastic electron-positron scattering) at LEP:

https://arxiv.org/abs/hep-ex/0505072v3

Also I don't understand your statement concerning the elastic-scattering diagrams in ##\phi^4## theory. The choice ##s=t=u=\mu^2## is not at an on-shell point. It's just a convenient choice to renormalize the proper four-point vertex function, introducing the renormalization scale and thus defining a renormalization scheme. The ##\beta## function is evaluated within this renormalization scheme.
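Just to see why it cannot be on-shell: for elastic ##2\to 2## scattering of particles of mass ##m## the Mandelstam variables satisfy
$$s+t+u=4m^2, \qquad s\ge 4m^2, \qquad t,u\le 0,$$
so three equal positive values ##s=t=u=\mu^2## are never reached by a physical momentum configuration.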

Further, you can choose any regularization scheme you like, and dim. reg. is a very powerful one, because it preserves many symmetries, including local gauge symmetries, i.e., at any stage of the calculation the Ward-Takahashi or Slavnov-Taylor identities for the proper vertex functions hold true, which makes life much easier.

My interpretation of the historical development of renormalization theory of (non-abelian) gauge theories is that 't Hooft's breakthrough, building of course on a lot of earlier work by his thesis advisor, Veltman (and also using his computer algebra program), is due to the ingenious idea of dimensional regularization, which simplifies the calculation of loop diagrams a lot compared to other techniques (like a naive cut-off or Pauli-Villars).

That the author of the above-quoted lecture notes uses dim. reg. is thus just a matter of convenience. The final renormalized result does not depend on the specific regularization procedure but only on the choice of the renormalization scheme. The on-shell scheme can be formulated by renormalization conditions on the (divergent) proper vertex functions without any reference to a specific regularization scheme.

Of course, there are renormalization schemes which use dim. reg. to define them. These are the minimal-subtraction scheme (where you subtract only the divergent pieces) and the modified minimal-subtraction schemes, where you also subtract some inconvenient finite pieces in addition to the divergent ones. All these are mass-independent renormalization schemes, which can also be defined by renormalization conditions on the proper vertex functions. See my notes, Sects. 5.11.1 and 5.11.2 (the latter for the MS scheme and the fact that these are mass-independent renormalization schemes).

https://itp.uni-frankfurt.de/~hees/publ/matherg2.pdf
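(To make the "inconvenient finite pieces" concrete: with ##d=4-2\epsilon## the one-loop divergences typically appear in the combination
$$\frac{1}{\bar{\epsilon}}\equiv\frac{1}{\epsilon}-\gamma_{\text{E}}+\ln 4\pi,$$
and MS subtracts only the ##1/\epsilon## pole, while ##\overline{\text{MS}}## subtracts the whole ##1/\bar{\epsilon}##.)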
 
  • #13
vanhees71 said:
Also I don't understand your statement concerning the elastic-scattering diagrams in ##\phi^4## theory. The choice ##s=t=u=\mu^2## is not at an on-shell point. It's just a convenient choice to renormalize the proper four-point vertex function, introducing the renormalization scale and thus defining a renormalization scheme. The ##\beta## function is evaluated within this renormalization scheme.

Further, you can choose any regularization scheme you like, and dim. reg. is a very powerful one, because it preserves many symmetries, including local gauge symmetries, i.e., at any stage of the calculation the Ward-Takahashi or Slavnov-Taylor identities for the proper vertex functions hold true, which makes life much easier.
So the beta function does depend on the point used? If so, that would clear things up a bit, though I was previously under the assumption that there was only one, 'unique' ##\beta \left( g \right)##.
 
  • #14
The RG equations depend on the renormalization scheme. The discovery of the mass-independent schemes helped a lot to simplify the RG equations a la Gell-Mann and Low, Callan and Symanzik. This goes back to Weinberg (1973) and particularly Kugo (1977):

S. Weinberg, New approach to the renormalization group,
Phys. Rev. D 8 (1973) 3497.
https://dx.doi.org/10.1103/PhysRevD.8.3497

T. Kugo, Symmetric and mass-independent renormalization,
Prog. Theor. Phys. 57 (1977) 593.
https://dx.doi.org/10.1143/PTP.57.593
 
  • #15
vanhees71 said:
The RG equations depend on the renormalization scheme.
Interesting... so, for example, the RG equation for the charge in QED that you see in various places (to one loop),
##\beta \left( e \right) =\frac { { e }^{ 3 } }{ 12{ \pi }^{ 2 } }##,
is only the case for some particular scheme and/or choice of the scale? Because I'm sure I've seen this result in notes using either dim. reg. or a cut-off.

Sticking just to the cut-off case, does this mean I should expect that my ##\beta \left( g \right)## will depend on the choice of my point?
 
  • #17
vanhees71 said:
Have a look at this PhD thesis, which investigates the scheme dependence in gauge theories:

https://www.stonybrook.edu/commcms/grad-physics-astronomy/_theses/choi-gongjun-may-2018.pdf
To be honest, this was a little beyond me. I feel like I'm either asking the wrong questions or don't know enough about the subject yet to get the answers, because I'm still confused - thanks for your patience though :P

Perhaps to summarise: in the books I used to introduce myself to QFT renormalisation, Zee and Blundell, the cut-off scheme is used and the scale ##\mu## is introduced. You and others I've asked have said it's just down to convenience, but I really don't see how it is convenient to choose an off-shell point if it's used to define the so-called 'physical' coupling. Others have asked what it is (e.g. here and here) and the answers also seem to suggest that ##\mu## isn't really related to anything physical, but I fail to see how that is the case.

I feel like if I don't actually know what ##\mu## is, then not only am I unsure about what these textbooks actually mean, but I'm doing mathematics rather than physics, renormalising using some parameter which seems to have a rather ethereal connection to any experiment.

I know you've sent me that nice paper describing how the fine structure constant is measured to vary with ##t##, and so it makes sense to me in this case that ##\mu## is just some particular value of ##\sqrt { t }##, but I'm still a little unsure how universal this method is for two reasons:

1. QED has a nice identity which allows the running of ##\alpha## to be related to the photon propagator. This can't be done in the case of ##{ \varphi }^{ 3 }##, and I have to look at the vertex... but there is no on-shell choice for the 3-point function!

2. In the case of ##{ \varphi }^{ 4 }## at least, the 4-point function takes the form (to one loop, with cutoff ##\Lambda##, counterterm ##\delta \lambda## and some constant ##a##): ##\tilde { \Gamma } \left( { p }_{ 1 },...,{ p }_{ 4 } \right) \sim \lambda +\delta \lambda -a{ \lambda }^{ 2 }\ln { \left[ \frac { { \Lambda }^{ 6 } }{ stu } \right] }##.
The choice ##s=t=u={ \mu }^{ 2 }##, fixing the 'physical' ##\lambda ={ \tilde { \Gamma } }_{ s=t=u={ \mu }^{ 2 } }##, yields: ##\tilde { \Gamma } \left( { p }_{ 1 },...,{ p }_{ 4 } \right) \sim \lambda +a{ \lambda }^{ 2 }\ln { \left[ \frac { stu }{ { \mu }^{ 6 } } \right] }##, leading to the positive beta function written earlier.
However, if I choose something like ##s=u=0,\quad t={ \mu }^{ 2 }##, then I run into trouble with the exploding logarithm.
 
  • #18
Yes, if you have ##\phi^4## theory with ##m=0## you cannot renormalize at the off-shell point where all external momenta of the diagram are set to ##0##. That's why one chooses another space-like renormalization point which introduces the renormalization scale.
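For the massless case a standard choice (essentially the one used in Peskin & Schroeder's massless ##\phi^4## discussion, if I remember their conventions correctly) is to impose the renormalization condition at a spacelike symmetric point,
$$\Gamma^{(4)}(p_1,\dots,p_4)\Big|_{s=t=u=-\mu^2}=-\mathrm{i}\lambda(\mu),$$
which is the same kind of off-shell choice you made for ##\phi^3## in #5.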

The physics is solely given by the S-matrix elements and the cross sections derived from them. You have to use empirical data, involving some set of S-matrix elements to fit the renormalized constants (masses, coupling constants).
 
  • #19
This is a very late response to this discussion, but I felt I had to say that I realized something that makes the cut-off scheme a little easier to get my head around, at least to answer the second of the two questions in my last post. I always felt points made concerning the scale ##\mu## in the context of dimensional regularisation did not always translate entirely clearly to the cut-off scheme, but here's something simple that helped me get over the issue of renormalising at an on-shell point:

Instead of choosing ##s=t=u={ \mu }^{ 2 }## (which is off-shell), or ##\{s=4{ \mu }^{ 2 },\quad t=u=0\}## (which seems to me to involve a logarithmic divergence), choose ##\{s={ \sigma }{ \mu }^{ 2 },\quad t={ \tau }{ \mu }^{ 2 },\quad u={ \upsilon }{ \mu }^{ 2 }\}##, where ##\sigma,\tau,\upsilon## are dimensionless free parameters.

We then have $$\tilde { \Gamma } \left( { p }_{ 1 },...,{ p }_{ 4 } \right) \sim \lambda +a{ \lambda }^{ 2 }\ln { \left[ \frac { stu }{ \sigma \tau \upsilon { \mu }^{ 6 } } \right] },$$ and the beta function is then easily shown in a similar way to be ##\beta \left( \lambda \right) =3{ { \lambda }^{ 2 } }/{ { \left( 4\pi \right) }^{ 2 } }##, independent of ##\sigma,\tau,\upsilon##.
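Explicitly, demanding that ##\tilde { \Gamma }## at fixed ##s,t,u## be independent of the choice of ##\mu## gives
$$\beta\left(\lambda\right)=\mu\frac{\mathrm{d}\lambda}{\mathrm{d}\mu}=6a{\lambda}^{2}+O\left({\lambda}^{3}\right),$$
which with the standard one-loop value ##a=1/\left( 32{ \pi }^{ 2 } \right)## is indeed ##3{ { \lambda }^{ 2 } }/{ { \left( 4\pi \right) }^{ 2 } }##; the ##\sigma,\tau,\upsilon## dependence drops out because it only shifts the constant inside the logarithm.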
 
  • #20
vanhees71 said:
Yes, if you have ##\phi^4## theory with ##m=0## you cannot renormalize at the off-shell point where all external momenta of the diagram are set to ##0##. That's why one chooses another space-like renormalization point which introduces the renormalization scale.

The physics is solely given by the S-matrix elements and the cross sections derived from them. You have to use empirical data, involving some set of S-matrix elements to fit the renormalized constants (masses, coupling constants).

I was originally introduced to the concept of running couplings and the Callan-Symanzik equations by considering the variation of the mass scale ##\mu## in minimal subtraction. However, recently I've been reading Peskin & Schroeder, where the concepts are introduced by considering renormalization conditions at arbitrary momenta. I understand that these two approaches are qualitatively the same, in that they both address the issue of large logarithms and massless particles; however, I was wondering what the individual strengths and weaknesses of these methods are?
 
  • #21
The strength of dimensional regularization is its mathematical simplicity in the sense that it preserves Poincare symmetry (of course for (d+1)-dimensional Minkowski space with ##d \in \mathbb{N}##) and, even more important, local gauge symmetries. In addition it's mass-independent and thus can be used to define convenient renormalization schemes like the MS or ##\overline{\text{MS}}## schemes. Here it is important to introduce the regularization scale ##\mu## to keep the energy dimensions of the fields, masses, and couplings as they are in (1+3)-dim. Minkowski space.

The disadvantage is that its physical meaning is a bit obscure, i.e., it's not so clear what the scale ##\mu## physically means. It's of course introduced on abstract grounds to keep the energy dimension of all quantities the same as in (3+1)-dimensional Minkowski space. In some cases there's also trouble in how to extend (1+3)-dimensional mathematical objects to (1+d) dimensions, like the Levi-Civita symbol ##\epsilon^{\mu \nu \rho \sigma}## or ##\gamma^5=\mathrm{i} \gamma^0 \gamma^1 \gamma^2 \gamma^3##. There the prescription depends on the application, but it's not somehow "naturally clear" how to choose these rules. For ##\gamma^5## there's the 't Hooft-Veltman choice to define ##\gamma^5## such as to anti-commute with ##\gamma^0,\ldots,\gamma^3##, as in (1+3) dimensions, but commute with the ##\gamma^{\mu}## for ##\mu \geq 4##. This leads to the right treatment of the axial anomaly, i.e., to make the axial U(1) the anomalously broken symmetry (in QCD or QED, where the vector currents must be conserved because of local gauge invariance).
 
  • #22
vanhees71 said:
The strength of dimensional regularization is its mathematical simplicity in the sense that it preserves Poincare symmetry (of course for (d+1)-dimensional Minkowski space with ##d \in \mathbb{N}##) and, even more important, local gauge symmetries. In addition it's mass-independent and thus can be used to define convenient renormalization schemes like the MS or ##\overline{\text{MS}}## schemes. Here it is important to introduce the regularization scale ##\mu## to keep the energy dimensions of the fields, masses, and couplings as they are in (1+3)-dim. Minkowski space.

The disadvantage is that its physical meaning is a bit obscure, i.e., it's not so clear what the scale ##\mu## physically means. It's of course introduced on abstract grounds to keep the energy dimension of all quantities the same as in (3+1)-dimensional Minkowski space. In some cases there's also trouble in how to extend (1+3)-dimensional mathematical objects to (1+d) dimensions, like the Levi-Civita symbol ##\epsilon^{\mu \nu \rho \sigma}## or ##\gamma^5=\mathrm{i} \gamma^0 \gamma^1 \gamma^2 \gamma^3##. There the prescription depends on the application, but it's not somehow "naturally clear" how to choose these rules. For ##\gamma^5## there's the 't Hooft-Veltman choice to define ##\gamma^5## such as to anti-commute with ##\gamma^0,\ldots,\gamma^3##, as in (1+3) dimensions, but commute with the ##\gamma^{\mu}## for ##\mu \geq 4##. This leads to the right treatment of the axial anomaly, i.e., to make the axial U(1) the anomalously broken symmetry (in QCD or QED, where the vector currents must be conserved because of local gauge invariance).

Thanks. Is the "running" of the couplings with ##\mu## in the MS case the same as the running of the couplings with the arbitrary subtraction point ##p^2 = M^2## in off-shell renormalisation?
 
  • #23

1. What is renormalisation and why is it important in physics?

Renormalisation is a mathematical technique used in theoretical physics to remove infinities that arise in certain calculations. It is important because it allows us to make meaningful predictions and calculations in quantum field theory, which is the framework used to describe the behavior of particles at a subatomic level.

2. How does renormalisation affect the value of coupling constants?

Renormalisation affects the value of coupling constants by allowing us to redefine them at different energy scales. This is known as the running of coupling constants, and it takes into account the fact that the strength of interactions between particles can change at different energy levels.

3. What is the significance of the φ^3 coupling constant in renormalisation?

The φ^3 coupling constant is significant because it appears in the Lagrangian, which is the mathematical expression that describes the dynamics of a system in quantum field theory. By renormalising this coupling constant, we can account for the interactions between particles and accurately predict their behavior.

4. Can renormalisation be applied to other coupling constants besides φ^3?

Yes, renormalisation can be applied to other coupling constants in quantum field theory, such as the electromagnetic coupling constant and the strong interaction coupling constant. The technique is not limited to a specific type of coupling constant, but rather it is a general method for removing infinities and improving the accuracy of calculations.

5. How does the running of coupling constants impact our understanding of the behavior of particles?

The running of coupling constants allows us to better understand the behavior of particles at different energy scales. By taking into account the changes in the strength of interactions, we can make more accurate predictions about the behavior of particles and their interactions with each other. This is essential in understanding the fundamental building blocks of our universe and their behavior.
