Understanding of renormalization

In summary, renormalization is a procedure used in quantum field theory to handle the divergent terms coming from loop integrals by adding counterterms to the Lagrangian, allowing finite calculations order by order. Substituting running coupling constants into lower-order results captures only the leading logarithms; genuine higher-loop corrections require additional techniques. The basic idea is to calculate a different, well-measured process to one loop and use the measured result to solve for a "bare" charge, which is then used in the actual process to cancel the divergent terms. The procedure is not arbitrary, though it can be confusing for beginners; recommended resources include Peskin and Schroeder, Aitchison and Hey, and Srednicki.
  • #1
jdstokes
My (weak) understanding of renormalization is that following regularization, the divergent terms coming from loop integrals can be canceled by adding counterterms to the Lagrangian which are of the same form as the original terms.

What does this mean in terms of actual calculations? Does it mean that 1-loop corrections can be taken into account by substituting the running coupling constants into tree-level amplitudes? What about higher loop corrections?
 
  • #2


Sounds like a question for a theory expert (I'm a mere experimentalist).

I can answer part of your second question with some confidence, though: Accounting for higher-order corrections is a lot trickier than simply substituting the running coupling constant into the tree-level amplitude. The coupling constant does vary with probe energy, but even using this, you still have to account for higher-order corrections.

Wish I could say more - these are deep waters indeed.
 
  • #3


Thanks for your response, sideways.

Indeed, I've yet to find a single discussion of renormalization that sits well with me.

Am I correct in saying that the substitution trick will work as long as we restrict to 1-loop corrections?
 
  • #4


I think regardless of the renormalization scheme, in order to calculate the loop diagrams, you have to calculate the loop diagrams, and not the tree diagrams.
 
  • #5


jdstokes said:
My (weak) understanding of renormalization is that following regularization, the divergent terms coming from loop integrals can be canceled by adding counterterms to the Lagrangian which are of the same form as the original terms.

What does this mean in terms of actual calculations? Does it mean that 1-loop corrections can be taken into account by substituting the running coupling constants into tree-level amplitudes? What about higher loop corrections?


Well, the leading-log divergences can be taken care of by substituting the running coupling constants, but this whole issue of the renormalization group approach just confuses things, so it's better, at least when trying to understand the *basic* idea of renormalization, not to get into that.

In terms of actual calculation, here is what one does. Let's focus on the charge renormalization in QED to be more specific (let's set aside mass and wavefunction renormalization). Let's say you want to calculate a process to one loop.

First, you calculate to one loop a *different* process that is well measured experimentally, say the scattering of an electron from a heavy nucleus at low energy. You get a divergent result that you regulate using your favorite regulator. Now you *impose* that your calculation gives the measured result. This means that the charge that appears in your Lagrangian is not the physical charge but a "bare" charge. So you solve for the bare charge in terms of the physical charge (and this dependence will of course contain a parameter introduced by the regularization).

Now you go back to the process you are really interested in. You do the one loop calculation and you stick in the bare charge expressed in terms of the physical charge. Lo and behold, to first order in the loop expansion, all divergent terms cancel out and the final result is finite (and is of course expressed in terms of the physical charge measured in the other process and of the regulator).
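Schematically (just a sketch, with a generic cutoff [tex]\Lambda[/tex] standing in for whatever regulator was chosen and [tex]a_1[/tex] some finite coefficient, not the actual QED values), the matching to the measured process gives something like

[tex]e_0 = e\left[1 + a_1 e^2 \log(\Lambda/m) + \mathcal{O}(e^4)\right],[/tex]

and when this is inserted into the one-loop amplitude for the second process, the [tex]\log\Lambda[/tex] coming from its loop integral cancels against the [tex]\log\Lambda[/tex] hidden inside [tex]e_0[/tex], order by order in [tex]e^2[/tex].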

This is the bare-bones idea. Now, in the case where the coupling constant is not small and logs threaten the loop expansion, such as in QCD, a bunch of extra tricks are employed, and then one gets into the renormalization group and ways to sum up leading logs and so on. But this is a "second level" that is better discussed once the basic idea is cleared up.

Hope this helps a little.
 
  • #6


Thanks nrqed,

I have not thought about renormalization in this way before. Interesting.

Where would you suggest that a beginner (familiar with tree-level QFT) learn this stuff from?
 
  • #7


Peskin and Schroeder
 
  • #8


Haelfix said:
Peskin and Schroeder

I dunno, I didn't find P&S's discussion of renormalization particularly illuminating on first iteration. Perhaps I'll go back over it.
 
  • #9


nrqed said:
Well, the leading-log divergences can be taken care of by substituting the running coupling constants, but this whole issue of the renormalization group approach just confuses things, so it's better, at least when trying to understand the *basic* idea of renormalization, not to get into that.

In terms of actual calculation, here is what one does. Let's focus on the charge renormalization in QED to be more specific (let's set aside mass and wavefunction renormalization). Let's say you want to calculate a process to one loop.

First, you calculate to one loop a *different* process that is well measured experimentally, say the scattering of an electron from a heavy nucleus at low energy. You get a divergent result that you regulate using your favorite regulator. Now you *impose* that your calculation gives the measured result. This means that the charge that appears in your Lagrangian is not the physical charge but a "bare" charge. So you solve for the bare charge in terms of the physical charge (and this dependence will of course contain a parameter introduced by the regularization).

Now you go back to the process you are really interested in. You do the one loop calculation and you stick in the bare charge expressed in terms of the physical charge. Lo and behold, to first order in the loop expansion, all divergent terms cancel out and the final result is finite (and is of course expressed in terms of the physical charge measured in the other process and of the regulator).

This is the bare-bones idea. Now, in the case where the coupling constant is not small and logs threaten the loop expansion, such as in QCD, a bunch of extra tricks are employed, and then one gets into the renormalization group and ways to sum up leading logs and so on. But this is a "second level" that is better discussed once the basic idea is cleared up.

Hope this helps a little.
A very nice explanation.

Actually, when I first learned the renormalization procedure, I doubted it very much, because the procedure seems quite arbitrary (due to infinity + finite = infinity). The result we calculate after imposing some regularization always depends on an arbitrary mass scale; moreover, the finite result also depends on the renormalization scheme.
Hence it seems impossible to get a "number" which you can tell the experimentalists to check against data.

I think the key point is that we must sacrifice one experiment: we use a certain experiment to fix the arbitrary mass scale [tex]\mu[/tex], and only after doing so does the theory start to have predictive power. You can then predict the results of other experiments.
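As a minimal sketch of what this "sacrifice" looks like (generic notation for a [tex]\phi^4[/tex]-type coupling, not any particular textbook's convention): one imposes a renormalization condition such as

[tex]\Gamma^{(4)}\big(s=t=u=-\mu^2\big) \equiv -i\lambda_{\mathrm{phys}},[/tex]

so the measured value at the chosen point [tex]\mu[/tex] defines [tex]\lambda_{\mathrm{phys}}[/tex]; that one measurement is used up, and everything else becomes a prediction.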

But what I find very strange is that almost no book stresses this point, and almost no book states the renormalization procedure clearly. If my understanding is not correct, anyone is welcome to correct me.
 
  • #10


There are nice lecture notes about renormalization in today's arxiv:

D.I.Kazakov, "Radiative Corrections, Divergences, Regularization, Renormalization, Renormalization Group and All That in Examples in Quantum Field Theory" http://arxiv.org/abs/0901.2208
 
  • #11


Isn't there a renormalization scheme, called the "on-shell method", that's so much simpler than having running coupling constants? In the on-shell method the mass-scale parameter is canceled by counterterms so you don't even have to bother with it, and the mass in the Lagrangian is the true mass. I know the on-shell method fails for zero-mass particles, but if the particle has mass, why can't the on-shell method be used?
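(To spell out what I mean by the on-shell conditions, schematically for a massive fermion: one demands of the renormalized self-energy

[tex]\Sigma(\not{p}=m)=0, \qquad \left.\frac{d\Sigma}{d\not{p}}\right|_{\not{p}=m}=0,[/tex]

so that the [tex]m[/tex] in the Lagrangian is the physical pole mass and the pole residue is 1.)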
 
  • #12


jdstokes said:
Thanks nrqed,

I have not thought about renormalization in this way before. Interesting.

Where would you suggest that a beginner (familiar with tree-level QFT) learn this stuff from?

You are welcome.

I have never found a discussion that was completely to my satisfaction either. It is amazing, given that the basic idea is so simple. The problem is that the basic idea is simple but the actual implementation has become highly sophisticated, with many advanced tricks and techniques invented over the years (in particular, the renormalization group). And I think that many people learn the advanced tricks and lose track of the underlying meaning (I even think that some people do advanced calculations without truly understanding what they are doing). The whole business of running coupling constants, beta functions, subtraction schemes, etc., is not needed to understand renormalization. It is a second layer designed to optimize calculations when the coupling constant is not small.

Renormalization also confused me when I was an undergrad. So when I got to grad school, I picked a research project that would force me to understand it: a two-loop calculation in the context of an effective field theory. That forced me to clarify the whole thing and to see how it worked in an actual complex calculation.


One common misconception is that renormalization is forced upon us because of the divergent integrals. In fact, even if all calculations were finite, we would still have to renormalize. Renormalization is needed simply because we have a theory with a set of parameters and we need to relate those parameters to experimental values. That's all there is to it. We would need to do that even if all integrals were convergent.
And since we do calculations as a loop expansion, we need to fix the parameters of the theory again every time we increase the number of loops.

So renormalization is needed whether or not there are divergences. Regularization is what is required because of the infinities.


I would suggest that you have a look at the book Gauge theories in particle physics by Aitchison and Hey. They have a nice discussion on renormalization in volume 1.
 
  • #13


I like Srednicki's QFT book; you can get a draft copy free from his web page.
 
  • #14


ismaili said:
A very nice explanation.
Thanks
Actually, when I first learned the renormalization procedure, I doubted it very much, because the procedure seems quite arbitrary (due to infinity + finite = infinity). The result we calculate after imposing some regularization always depends on an arbitrary mass scale; moreover, the finite result also depends on the renormalization scheme.
Hence it seems impossible to get a "number" which you can tell the experimentalists to check against data.
And this is one thing that confuses beginners. It is true that the regularized result has a finite part that is pretty much arbitrary. On the other hand, the renormalized result is unambiguous. This is a source of confusion because people often focus on the regularized quantities (especially in QCD), and then one gets into the whole business of choosing a subtraction scheme and so on. And that is very confusing to students because it gives the impression that the physical results are scheme dependent. Of course they're not. I know that profs all know that, but they don't always emphasize it enough to students.

I think the key point is that we must sacrifice one experiment: we use a certain experiment to fix the arbitrary mass scale [tex]\mu[/tex], and only after doing so does the theory start to have predictive power. You can then predict the results of other experiments.

But what I find very strange is that almost no book stresses this point, and almost no book states the renormalization procedure clearly. If my understanding is not correct, anyone is welcome to correct me.

You are completely correct. I would add the following point, which I think is not always fully appreciated: this "sacrificing" of an experimental result is not unique to quantum field theory. It is also necessary in classical physics!

Let's say we want to calculate the period of oscillation of a mass attached to an ideal spring (classically). We need to "sacrifice" two experimental results: one to determine the spring constant and one to determine the mass of the particle! So this idea of using measured values in order to calculate other processes is not unique to QFT!
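Explicitly (the completely trivial classical analogue of a renormalization condition):

[tex]T = 2\pi\sqrt{\frac{m}{k}}[/tex]

contains two parameters, so two independent measurements, say a static force-displacement measurement for [tex]k[/tex] and a weighing for [tex]m[/tex], must be "used up" before the period becomes a genuine prediction.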

The difference is that, in contrast with the classical case, in QFT this "fixing" of the theory's parameters must be done order by order in a loop expansion. And of course, the calculations are divergent, so we need regularization. In addition, when we deal with a coupling constant that is not very small, as in QCD, the experiment used to fix the coupling constant of the theory must be close in energy to the process we are interested in. If the difference is too large, logs threaten the loop expansion. Then one may use the renormalization group to sum up the most severe log dependence.


Patrick
 
  • #15


Okay, I've been reading P&S again and there's something I'm just not getting.

Starting with the bare Lagrangian, we aim to express the bare coupling constants in terms of physical (but arbitrary) quantities, so that the amplitudes computed from the bare Lagrangian are meaningful.

Having chosen definitions for the physical couplings, we can separate the bare Lagrangian into the form

[itex]\mathcal{L}_0 = \mathcal{L} + \mathrm{counterterms}[/itex].

To perform a 1-loop calculation, we include the 1-loop term from [itex]\mathcal{L}[/itex] but only the tree-level term from the counterterms (see Eq. (10.21), p. 326).

I can't understand why they are ignoring the 1-loop contribution of the counterterms to the amplitude.
 
  • #16


They're ignoring the 1-loop contribution of the counterterms probably because it's higher order than the 1-loop term you're trying to calculate.
 
  • #17


jdstokes said:
...

[itex]\mathcal{L}_0 = \mathcal{L} + \mathrm{counterterms}[/itex].

To perform a 1-loop calculation, we include the 1-loop term from [itex]\mathcal{L}[/itex] but only the tree-level term from the counterterms (see Eq. (10.21), p. 326).

I can't understand why they are ignoring the 1-loop contribution of the counterterms to the amplitude.

(I see I spent too long writing this and got scooped. Here is more information elaborating on what RedX pointed out.)

The one-loop diagrams involving counterterms enter at the next order in perturbation theory -- if you check out the result for the tree-level counterterm for that example (Eqn. 10.24), you'll see it is proportional to [tex]\lambda^2[/tex], just like the one-loop term from [itex]\mathcal L[/itex].

Learning perturbation theory in non-relativistic quantum mechanics, you saw that we always need to work consistently to a given order -- every term of that order must be included. One-loop counterterm contributions would be of order [tex]\lambda^3[/tex] (or higher), like two-loop diagrams from [itex]\mathcal L[/itex], so if you want to include the former in the calculation, you have to include the latter as well.
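To make the counting explicit (schematically, for the [itex]\phi^4[/itex] example under discussion): the counterterm coupling starts at [tex]\delta_\lambda = \mathcal{O}(\lambda^2)[/tex], so a one-loop diagram containing one counterterm vertex and one ordinary vertex costs [tex]\delta_\lambda \times \lambda = \mathcal{O}(\lambda^3)[/tex], the same order as a genuine two-loop diagram built from three ordinary vertices.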

This is what nrqed pointed out -- "this "fixing" of the theory's parameters must be done order by order in a loop expansion." Peskin and Schroeder go on to do the calculation of [tex]\lambda^3[/tex] terms for this process in section 10.5, starting on page 338.

(Edit to add that number-of-loops is a somewhat awkward way to write/think about this, as what matters is powers of the parameter [tex]\lambda[/tex].)
 
Last edited:
  • #18


Thanks for your replies RedX and daschaich,

I'm learning a lot more about QFT thanks to people on this board.

You make a good point that the 1-loop counterterm can be ignored since it is presumably higher order than [itex]\lambda^2[/itex].

The only thing that troubles me is that we didn't know this in advance; we only found out by actually doing the calculation ignoring the 1-loop correction to the counterterms. It sure would have helped if P&S explained what they were doing.

To make matters even more confusing, I've read either in P&S or another QFT book that it is more important to expand in the number of loops rather than in the coupling constant.

In retrospect, however, it all makes sense.
 
  • #19


jdstokes said:
Thanks for your replies RedX and daschaich,

I'm learning a lot more about QFT thanks to people on this board.

You make a good point that the 1-loop counterterm can be ignored since it is presumably higher order than [itex]\lambda^2[/itex].

The only thing that troubles me is that we didn't know this in advance; we only found out by actually doing the calculation ignoring the 1-loop correction to the counterterms. It sure would have helped if P&S explained what they were doing.

To make matters even more confusing, I've read either in P&S or another QFT book that it is more important to expand in the number of loops rather than in the coupling constant.

In retrospect, however, it all makes sense.

Since one uses perturbation theory to calculate n-point functions, one must renormalize the theory order by order.
Usually, the tree-level counterterms are designed to cancel the divergences at the 1-loop level. One then uses the newly introduced counterterm vertices to construct counterterm loop diagrams that cancel the divergences of loop diagrams built purely from the original parameters.

As for the importance of loop expansion, let's digress from renormalization for a while.
The loop expansion is important because it is actually an expansion in the quantum corrections, i.e. an expansion in [tex]\hbar[/tex]. Consider the perturbation expansion of the generating functional,
[tex]Z[J] = \exp\left\{\frac{i}{\hbar}\mathcal{L}_{i}\left[-i\frac{\delta}{\delta J}\right]\right\}Z_0[/tex]
where the free Gaussian part can be integrated to give
[tex]Z_0 = \mathcal{N}\exp\left[\frac{1}{2i}\hbar\int{dx}dyJ(x)\Delta_F(x-y)J(y)\right][/tex]
We have inserted the [tex]\hbar[/tex] in the formula explicitly.
Now we see clearly that each vertex contributes a factor of [tex]\hbar^{-1}[/tex] and each propagator contributes a factor of [tex]\hbar[/tex]; therefore, a Feynman diagram carries a factor of [tex]\hbar^{I-V} = \hbar^{L-1}[/tex], where [tex]I[/tex] is the number of internal lines, [tex]V[/tex] is the number of vertices, and [tex]L[/tex] is the number of loops.
So the perturbative expansion in terms of loops is equivalent to an expansion in the quantum corrections!
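As a quick check of this counting (a minimal example, neglecting external-line factors as above): in [tex]\phi^4[/tex] theory the tree-level 4-point vertex has [tex]V=1[/tex], [tex]I=0[/tex], hence [tex]\hbar^{-1}[/tex], while the one-loop correction has [tex]V=2[/tex], [tex]I=2[/tex], so [tex]L=I-V+1=1[/tex] and the diagram carries [tex]\hbar^{0}[/tex]: exactly one more power of [tex]\hbar[/tex] per loop.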
 
  • #20


ismaili said:
...therefore, a Feynman diagram carries a factor of [tex]\hbar^{I-V} = \hbar^{L-1}[/tex], where [tex]I[/tex] is the number of internal lines, [tex]V[/tex] is the number of vertices, and [tex]L[/tex] is the number of loops. So the perturbative expansion in terms of loops is equivalent to an expansion in the quantum corrections!

Does this mean that the tree-level diagrams (L=0) are proportional to [tex]1/\hbar[/tex], one-loop diagrams to [tex]\hbar^0=1[/tex], and two-loop diagrams to [tex]\hbar[/tex]? Because if this is true, then shouldn't the tree-level diagrams be huge compared to the other diagrams, so why calculate the other diagrams? 1/h is huge compared to 1 or h.
 
  • #21


Okay, after revisiting P&S and Ryder I feel like I understand regularization and renormalization to my satisfaction.

I'm still utterly confused about running coupling constants and their significance.

When an amplitude is computed from a regulated and renormalized bare Lagrangian, the answer depends on the physical couplings measured at a certain energy scale M, and also on the variable momentum transfer q.

The renormalization group equation expresses how the renormalized correlation function and the field-strength renormalization vary with M; the conclusion being that a rescaling of q in the renormalized correlation function is equivalent to replacing the physical couplings by running couplings, plus an overall rescaling.

But so what? The regularized and renormalized amplitude is valid for arbitrary q, so why complicate the issue by defining unnecessary quantities?

Would I be correct in saying that the running couplings are not really of practical value for computing amplitudes and are merely introduced ``to speculate outside the domain of perturbation theory'' as Ryder writes in his book?
 
  • #22


jdstokes said:
Okay, after revisiting P&S and Ryder I feel like I understand regularization and renormalization to my satisfaction.

I'm still utterly confused about running coupling constants and their significance.

When an amplitude is computed from a regulated and renormalized bare Lagrangian, the answer depends on the physical couplings measured at a certain energy scale M, and also on the variable momentum transfer q.

The renormalization group equation expresses how the renormalized correlation function and the field-strength renormalization vary with M; the conclusion being that a rescaling of q in the renormalized correlation function is equivalent to replacing the physical couplings by running couplings, plus an overall rescaling.

But so what? The regularized and renormalized amplitude is valid for arbitrary q, so why complicate the issue by defining unnecessary quantities?

Would I be correct in saying that the running couplings are not really of practical value for computing amplitudes and are merely introduced ``to speculate outside the domain of perturbation theory'' as Ryder writes in his book?

If a coupling constant is small enough, there is never any real need to use the renormalization group approach (which leads to the running coupling constants, among other things). However, if a coupling constant is not very small compared to 1, then potential problems arise. Recall the example I mentioned about using one experiment at some energy E, say, to fix the bare coupling constant and then using this result to compute some other physical process at some other energy E'. This generates logs that typically contain the ratio E/E'. If the coupling constant "g" is not very small, then g^2 log(E/E') may be of order unity. So the whole perturbative approach falls apart and one cannot trust any result one gets.


The renormalization group allows one to sum up the leading-log divergences (or NLO logs and so on, depending on the work put in) to *all orders* in the loop expansion, which makes the perturbative expansion more reliable.
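In equations (a generic one-loop sketch; [tex]b[/tex] stands for whatever the one-loop beta-function coefficient of the given theory is, positive for an asymptotically free theory like QCD): the RG equation

[tex]\mu\frac{dg}{d\mu} = -b\,g^3 + \mathcal{O}(g^5)[/tex]

integrates to

[tex]g^2(\mu) = \frac{g^2(\mu_0)}{1 + 2b\,g^2(\mu_0)\log(\mu/\mu_0)},[/tex]

and expanding the denominator in powers of [tex]g^2(\mu_0)[/tex] reproduces the whole tower of leading logs, one loop order at a time.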
 
Last edited:
  • #23


RedX said:
Does this mean that the tree-level diagrams (L=0) are proportional to [tex]1/\hbar[/tex], one-loop diagrams to [tex]\hbar^0=1[/tex], and two-loop diagrams to [tex]\hbar[/tex]? Because if this is true, then shouldn't the tree-level diagrams be huge compared to the other diagrams, so why calculate the other diagrams? 1/h is huge compared to 1 or h.

First of all, note that [tex]L=I-V+1[/tex] is only valid when [tex]V\geqslant1[/tex]; and, in counting the powers of Planck's constant, I neglected the external lines.
Consider the 2-point function: at tree level it is a single propagator, whose power of [tex]\hbar[/tex] is 1. The tadpole diagram, which is the 1-loop diagram for the 2-point function, carries [tex]\hbar^2[/tex] (the two external lines contribute the [tex]\hbar^2[/tex]). Hence the factor of [tex]\hbar[/tex] for each diagram is actually [tex]\hbar^n[/tex] with [tex]n[/tex] positive.

Moreover, we usually work in units where [tex]\hbar=1[/tex]; hence whether the loop expansion is good or not is usually judged by the magnitude of the coupling constant.
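(For completeness, a sketch of the counting with the external lines kept: for a connected diagram with [tex]V\geqslant1[/tex] vertices and [tex]E[/tex] external legs, each external propagator adds one power of [tex]\hbar[/tex], giving [tex]\hbar^{I+E-V} = \hbar^{L-1+E}[/tex]. Since [tex]E[/tex] is fixed for a given n-point function, its diagrams still differ from one another only by [tex]\hbar^{L}[/tex].)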
 
  • #24


nrqed said:
If a coupling constant is small enough, there is never any real need to use the renormalization group approach (which leads to the running coupling constants, among other things). However, if a coupling constant is not very small compared to 1, then potential problems arise. Recall the example I mentioned about using one experiment at some energy E, say, to fix the bare coupling constant and then using this result to compute some other physical process at some other energy E'. This generates logs that typically contain the ratio E/E'. If the coupling constant "g" is not very small, then g^2 log(E/E') may be of order unity. So the whole perturbative approach falls apart and one cannot trust any result one gets.


The renormalization group allows one to sum up the leading-log divergences (or NLO logs and so on, depending on the work put in) to *all orders* in the loop expansion, which makes the perturbative expansion more reliable.

Allow me to ask a naive or perhaps stupid question. Actually, I still haven't completely understood the renormalization group and renormalization theory. :blushing:

Can I understand the running coupling constant as follows? The running of the coupling constants tells us how the renormalization procedure adjusts itself such that the unrenormalized n-point function is independent of the arbitrary scale [tex]M[/tex], which is, after all, only introduced when we perform renormalization. And, before we sacrifice one experiment at some energy scale to fix the parameter, say [tex]g[/tex], the running of this parameter is actually a function with an undetermined offset,
[tex] g = g(M) + \text{offset} [/tex]
where the offset comes from the arbitrary renormalization scheme. Then, once we compare a calculated result to the sacrificed experiment, we can fix the offset, and the running [tex]g[/tex] then tells us how the coupling constant varies at different energy scales.

Moreover, I want to confirm that, if we measure some physical quantity at two different energy scales, the numbers we get will differ, right? I mean, can we confirm experimentally that the parameters really run, or is the running of parameters just a theoretical tool, a theoretical description of the world, such that when we perform experiments we will see that, for example, the electron mass is unchanged at 10 MeV or at 10 GeV? Which one is correct?

Thank you so much for the discussion!
 
  • #25


ismaili said:
First of all, note that [tex]L=I-V+1[/tex] is only valid when [tex]V\geqslant1[/tex]; and, in counting the powers of Planck's constant, I neglected the external lines.
Consider the 2-point function: at tree level it is a single propagator, whose power of [tex]\hbar[/tex] is 1. The tadpole diagram, which is the 1-loop diagram for the 2-point function, carries [tex]\hbar^2[/tex] (the two external lines contribute the [tex]\hbar^2[/tex]). Hence the factor of [tex]\hbar[/tex] for each diagram is actually [tex]\hbar^n[/tex] with [tex]n[/tex] positive.

Moreover, we usually work in units where [tex]\hbar=1[/tex]; hence whether the loop expansion is good or not is usually judged by the magnitude of the coupling constant.

I forgot about the external lines. So each external line contributes a factor of h, and there is no overall 1/h. This is confusing because usually we set h=1, so h, h^2, h^3, ... are all the same. The fine structure constant, for example, is 1/137, which is much bigger than h, so it would seem that the effect of h overwhelms the fine structure constant.
 
  • #26


"I mean, whether, experimentally, we can confirm that the parameters really run"

The answer is yes: experimentally we see that the parameters run. For instance, the fine structure *constant* isn't really constant; it depends on E, and that has been seen and verified in experiment. You can think of it physically as a sort of dielectric shielding by electron-positron pairs. And yes, the numbers can and do change: the strength of electromagnetism changes with energy!

Typically the variation is smallish (but non-negligible) for QED, but it can be quite large for QCD. If you open an arXiv paper from experimentalists on some process, you will notice they always give a set of energy scales for each physical result. That's very important; otherwise the numbers are mumbo jumbo.
 
  • #27


nrqed said:
This generates logs that typically contain the ratio E/E'. If the coupling constant "g" is not very small, then g^2 log(E/E') may be of order unity. So the whole perturbative approach falls apart and one cannot trust any result one gets.

I'll have to think about it some more but at first sight this doesn't make much sense to me.

Isn't the perturbation expansion parameter g^2, not g^2 ln(E/E')? The physical coupling should be independent of E'.
 
  • #28


jdstokes said:
I'll have to think about it some more but at first sight this doesn't make much sense to me.

Isn't the perturbation expansion parameter g^2, not g^2 ln(E/E')? The physical coupling should be independent of E'.

Yes, it is possible to say that the coupling constant is fixed. What I was saying is that the final result for the process you calculate (after regularization and renormalization) will contain a factor g^2 log(E/E') times a bunch of stuff (constants, angular dependence, etc., which I assume all give a result of order 1). I was talking about the whole, final expression. Now, this one-loop result should be smaller (much smaller, preferably) than the tree-level result. But it contains this factor g^2 log(E/E'), which could be dangerously close to 1.

Likewise, the two-loop final result will contain a factor g^4 (log^2(E/E') + subleading logs).

So if g^2 log(E/E') is of order one, the whole perturbative approach collapses. What the renormalization group allows is, using only the one-loop result, to sum up all the leading-log contributions into a new, "running" coupling constant. So we then define a new coupling constant which contains the full effect of the

g^2 log(E/E') + c_1 g^4 log^2(E/E') + c_2 g^6 log^3(E/E') ...

expansion. It is quite neat that the one-loop result can be used to include all those higher order contributions. This is what is special and powerful about the renormalization group.

Of course, one can push the approach to include the subleading logs as well (like g^4 log(E/E') and g^6 log^2(E/E')), but in introductory textbooks usually only the leading logs are included.
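Concretely (a sketch, writing [tex]t \equiv \log(E/E')[/tex], with [tex]c[/tex] a single one-loop, theory-dependent constant that fixes the [tex]c_i[/tex] above at strict leading-log order): the running coupling obtained from the one-loop RG equation is

[tex]g^2_{\mathrm{run}} = \frac{g^2}{1 - c\,g^2 t} = g^2\left(1 + c\,g^2 t + c^2 g^4 t^2 + \cdots\right),[/tex]

so each extra loop order contributes one more power of [tex]g^2 t[/tex], and substituting [tex]g^2_{\mathrm{run}}[/tex] for [tex]g^2[/tex] in the lower-order result automatically carries the whole tower along.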

Hope this makes a bit more sense now. These are good questions!

Regards,

Patrick
 
  • #29


ismaili said:
Moreover, I want to confirm that, if we measure some physical quantity at two different energy scales, the numbers we get will differ, right? I mean, can we confirm experimentally that the parameters really run, or is the running of parameters just a theoretical tool, a theoretical description of the world,

Hi Ismaili, I will try to get back to your other question as soon as I have a minute.
Thank you for jumping in the discussion.

To add to what Haelfix said.

The physical running can be seen, yes. If one fixes the tree-level fine structure constant using low-energy scattering of an electron off a nucleus, one finds [tex] \alpha_{em} \approx 1/137 [/tex]. On the other hand, if one uses a process at the scale of the Z mass (another popular point), one finds [tex] \alpha_{em} \approx 1/128 [/tex].

Yes, it is possible to do all calculations without ever mentioning running coupling constants and the renormalization group, if the coupling constant is very small. It's really QCD that forces us to use new tricks beyond bare-bones renormalization theory.
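As a rough cross-check of those numbers (a back-of-the-envelope sketch keeping only the one-loop electron vacuum polarization; the other charged fermions and the hadronic contribution are deliberately omitted):

[tex]\alpha(Q) \approx \frac{\alpha(0)}{1 - \frac{2\alpha(0)}{3\pi}\log(Q/m_e)} \approx \frac{1}{134.5} \quad \text{at} \quad Q = M_Z,[/tex]

and it is the remaining charged fermions that push this the rest of the way down to the quoted [tex]\approx 1/128[/tex].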
 
  • #30


RedX said:
I forgot about the external lines. So each external line contributes a factor of h, and there is no overall 1/h. This is confusing because usually we set h=1, so h, h^2, h^3, ... are all the same. The fine structure constant, for example, is 1/137, which is much bigger than h, so it would seem that the effect of h overwhelms the fine structure constant.


In natural units, the fine structure constant is written as [tex] e^2/(4 \pi) [/tex]. But when we reinstate the factors of [tex] \hbar [/tex], c and [tex] 4 \pi \epsilon_0 [/tex], the fine structure constant is [tex] e^2/(4 \pi \epsilon_0 \hbar c) [/tex]. So the expansion in [tex] \alpha [/tex] generates factors of [tex] 1/\hbar [/tex] which cancel the factors of [tex] \hbar [/tex] mentioned in the previous posts.
 
  • #31


ismaili said:
Allow me to ask a naive or perhaps stupid question. Actually, I still haven't completely understood the renormalization group and renormalization theory. :blushing:

Can I understand the running coupling constant as follows? The running of the coupling constants tells us how the renormalization procedure adjusts itself such that the unrenormalized n-point function is independent of the arbitrary scale [tex]M[/tex], which is, after all, only introduced when we perform renormalization.

I just wrote a long reply and then it got erased. :mad:

I might get some time to rewrite it later.

The short answer is that it is the renormalized n-point function which is independent of the scale M, since it should not make any difference whether one chose to fix the coupling constant at M or, say, 2M. So the RG equation essentially says that

[tex] \frac{d G_{phys}}{dM} = 0 [/tex]

There are two dependences on M which cancel out (I am keeping things simple here, not worrying about wavefunction or mass renormalization). On one hand, the one-loop (say) renormalized coupling constant depends on M through the one-loop calculation that was used to fix it at energy M. On the other hand, the actual physical value measured at M also depends on M. These two dependences cancel out, leaving the result for G(M') independent of M.
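Written out (still in the simplified setting above, keeping only the coupling): since G depends on M both explicitly and through g(M), the statement [tex]dG_{phys}/dM = 0[/tex] reads

[tex]\left[M\frac{\partial}{\partial M} + \beta(g)\frac{\partial}{\partial g}\right]G = 0, \qquad \beta(g) \equiv M\frac{dg}{dM},[/tex]

and restoring the field rescaling would add the usual anomalous-dimension term, giving the full Callan-Symanzik equation.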
 
  • #32


Uhh, I think you might have made a typo here: it is the unrenormalized n-point function which is independent of M, not the renormalized one.

Quoting Ryder (p. 324)

``The renormalized 1PI function [itex]\Gamma^{(n)}_{\mathrm{r}}[/itex] depends on [itex]\mu[/itex] [the renormalization scale], as shown in equation (9.48), through the dependence of [itex]Z_\phi[/itex] on [itex]\mu[/itex]. In other words, the unrenormalized function [itex]\Gamma^{(n)}[/itex] given by (9.49) is independent of [itex]\mu[/itex]...''
 
  • #33


nrqed said:
What I was saying is that the final result for the process you calculate (after regularization and renormalization) will contain a factor g^2 log(E/E') times a bunch of stuff (constants, angular dependence, etc., which I assume all give a result of order 1). I was talking about the whole, final expression.

I see. So given that increasing orders in perturbation theory are not giving terms of decreasing magnitude, we deduce that perturbation theory is failing. Interesting; I never thought about it that way for some reason.

nrqed said:
It is quite neat that the one-loop result can be used to include all those higher order contributions. This is what is special and powerful about the renormalization group.

That is an amazing claim. What's even more amazing is that I can't find any mention of this in the usual textbooks on the subject.

What I would really like to see is a full perturbative QCD calculation, starting with analysis at one energy scale, and then using the running couplings to analyse the same process at a different energy scale. Is there somewhere I can find this in the literature?
 
  • #34


jdstokes said:
Uhh, I think you might have made a typo here: it is the unrenormalized n-point function which is independent of M, not the renormalized one.

Quoting Ryder (p. 324)

``The renormalized 1PI function [itex]\Gamma^{(n)}_{\mathrm{r}}[/itex] depends on [itex]\mu[/itex] [the renormalization scale], as shown in equation (9.48), through the dependence of [itex]Z_\phi[/itex] on [itex]\mu[/itex]. In other words, the unrenormalized function [itex]\Gamma^{(n)}[/itex] given by (9.49) is independent of [itex]\mu[/itex]...''

Ok, I was not careful enough with the exact wording. By the "renormalized" function, I meant the whole, final, renormalized amplitude, including the coupling constant. Clearly this cannot depend on the value of mu, since mu is completely arbitrary.

I will have a look at Ryder when I get home. Thanks for pointing this out.
 
  • #35


jdstokes said:
I see. So given that increasing orders in perturbation theory are not giving terms of decreasing magnitude, we deduce that perturbation theory is failing. Interesting; I never thought about it that way for some reason.
Exactly.
That is an amazing claim. What's even more amazing is that I can't find any mention of this in the usual textbooks on the subject.

What I would really like to see is a full perturbative QCD calculation, starting with analysis at one energy scale, and then using the running couplings to analyse the same process at a different energy scale. Is there somewhere I can find this in the literature?

Just to make sure what I wrote is clear: solving the RG equation using a one-loop result only sums up the *leading log* corrections to all orders. Now, if one uses the coupling constant found by solving the RG differential equation in a calculation, say to one loop, the mu dependence does not cancel out, because one is using a coupling constant that contains corrections from a higher number of loops. In that case, one has to choose a sensible value of mu. This is the case in many QCD calculations, including calculations on a lattice.

I don't have my books with me but I would say that Greiner's book on QCD is the most likely place to see a thorough discussion. Maybe Aitchison and Hey do it well too.
 
