# Understanding of renormalization

1. Jan 15, 2009

### jdstokes

My (weak) understanding of renormalization is that following regularization, the divergent terms coming from loop integrals can be canceled by adding counterterms to the Lagrangian which are of the same form as the original terms.

What does this mean in terms of actual calculations? Does it mean that 1-loop corrections can be taken into account by substituting the running coupling constants into tree-level amplitudes? What about higher loop corrections?

2. Jan 15, 2009

### Sideways

Re: Renormalization

Sounds like a question for a theory expert (I'm a mere experimentalist).

I can answer part of your second question with some confidence, though: Accounting for higher-order corrections is a lot trickier than simply substituting the running coupling constant into the tree-level amplitude. The coupling constant does vary with probe energy, but even using this, you still have to account for higher-order corrections.

Wish I could say more - these are deep waters indeed.

3. Jan 15, 2009

### jdstokes

Re: Renormalization

Indeed, I've yet to find a single discussion of renormalization that sits well with me.

Am I correct in saying that the substitution trick will work as long as we restrict to 1-loop corrections?

4. Jan 15, 2009

### RedX

Re: Renormalization

I think regardless of the renormalization scheme, in order to calculate the loop diagrams, you have to calculate the loop diagrams, and not the tree diagrams.

5. Jan 15, 2009

### nrqed

Re: Renormalization

Well, the leading log divergences can be taken care of by substituting the running coupling constants, but the whole renormalization group approach just confuses the issues, so when trying to understand the *basic* idea of renormalization it's better not to get into that.

In terms of actual calculation, here is what one does. Let's focus on the charge renormalization in QED to be more specific (let's set aside mass and wavefunction renormalization). Let's say you want to calculate a process to one loop.

First, you calculate to one loop a *different* process that is well measured experimentally, say the scattering of an electron from a heavy nucleus at low energy. You get a divergent result that you regulate using your favorite regulator. Now you *impose* that your calculation gives the measured result. This means that the charge that appears in your Lagrangian is not the physical charge but a "bare" charge. So you solve for the bare charge in terms of the physical charge (and this dependence will of course contain a parameter introduced by the regularization).

Now you go back to the process you are really interested in. You do the one loop calculation and you stick in the bare charge expressed in terms of the physical charge. Lo and behold, to first order in the loop expansion, all divergent terms cancel out and the final result is finite (and is of course expressed in terms of the physical charge measured in the other process and of the regulator).
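In case a toy example helps, here is a sketch of that cancellation. Everything in it is invented for illustration (this is not QED: the coefficient c stands in for a generic loop coefficient, 1/eps for the regulator divergence, and mu for the scale the regulator introduces), but the mechanics are the same: solve for the bare coupling from one "measured" process, insert it into another, and watch the divergence drop out at this order.

```python
import sympy as sp

# All symbols positive so sympy can manipulate the logarithms freely.
g, c, eps, mu, M, E = sp.symbols('g c epsilon mu M E', positive=True)

def amplitude(coupling, energy):
    # Toy regulated one-loop amplitude: tree + loop*(divergence + log).
    return coupling + c * coupling**2 * (1/eps + sp.log(mu/energy))

# Impose that the amplitude at the reference energy M equals the measured
# physical coupling g; solving order by order, the bare coupling is
g0 = g - c * g**2 * (1/eps + sp.log(mu/M))

# Now compute the process of interest at energy E, keeping terms through g^2:
expr = sp.expand(amplitude(g0, E))
prediction = expr.coeff(g, 1)*g + expr.coeff(g, 2)*g**2
# prediction simplifies to g + c*g**2*log(M/E): finite, with no trace of
# the regulator, expressed in terms of the measured coupling.
```

The point of the sketch is only that the 1/eps pieces cancel between the bare-coupling substitution and the new loop, leaving a finite log of the ratio of the two experimental scales.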

This is the bare-bones idea. Now, in the case where the coupling constant is not small and logs threaten the loop expansion, such as in QCD, a bunch of extra tricks are employed; one then gets into the renormalization group and ways to sum up leading logs and so on. But this is a "second level" that is better discussed once the basic idea is cleared up.

Hope this helps a little.

6. Jan 15, 2009

### jdstokes

Re: Renormalization

Thanks nrqed,

I have not thought about renormalization in this way before. Interesting.

Where would you suggest that a beginner (familiar with tree-level QFT) learn this stuff from?

7. Jan 15, 2009

### Haelfix

Re: Renormalization

Peskin and Schroeder

8. Jan 15, 2009

### jdstokes

Re: Renormalization

I dunno, I didn't find P&S's discussion of renormalization particularly illuminating on first iteration. Perhaps I'll go back over it.

9. Jan 16, 2009

### ismaili

Re: Renormalization

A very nice explanation.

Actually, when I first learned the renormalization procedure, I doubted it very much, because the procedure seems quite arbitrary (due to the infinity + finite = infinity). The result we calculate after imposing some regularization always depends on some arbitrary mass scale; moreover, the result (a finite quantity) also depends on the renormalization scheme.
Hence, it seems impossible to get a "number" that you can tell the experimentalist to check against data.

I think the key point is that we must sacrifice one experiment: we use a certain experiment to fix the parameter at the arbitrary mass scale $$\mu$$. Only after doing so does the theory acquire predictive power; you can then predict the results of other experiments.

But what I find very strange is that almost no book stresses this point, and almost no book states the renormalization procedure clearly. If my understanding is incorrect, anyone is welcome to correct me.

10. Jan 16, 2009

### meopemuk

Re: Renormalization

There are nice lecture notes about renormalization in today's arxiv:

D.I.Kazakov, "Radiative Corrections, Divergences, Regularization, Renormalization, Renormalization Group and All That in Examples in Quantum Field Theory" http://arxiv.org/abs/0901.2208

11. Jan 16, 2009

### RedX

Re: Renormalization

Isn't there a renormalization scheme, called the "on-shell method", that's so much simpler than having running coupling constants? In the on-shell method the mass-scale parameter is cancelled by counterterms so you don't even have to bother with it, and the mass in the Lagrangian is the true mass. I know the on-shell method fails for zero-mass particles, but if the particle has mass, why can't the on-shell method be used?

12. Jan 16, 2009

### nrqed

Re: Renormalization

You are welcome.

I have never found a discussion that I found completely satisfying either. It is amazing, given that the basic idea is so simple. The problem is that while the basic idea is simple, the actual implementation has become highly sophisticated, with many advanced tricks and techniques invented over the years (in particular, the renormalization group). And I think that many people learn the advanced tricks and lose track of the underlying meaning (I even think that some people do advanced calculations without truly understanding what they are doing). The whole business of running coupling constants, beta functions, subtraction schemes, etc., is not needed to understand renormalization. It is a second layer designed to optimize calculations when the coupling constant is not small.

Renormalization also confused me when I was an undergrad. So when I got to grad school, I picked a research project that would force me to understand it: a two-loop calculation in the context of an effective field theory. That forced me to clarify the whole thing and to see how it worked in an actual complex calculation.

One common misconception is that renormalization is forced upon us because of the divergent integrals. In fact, even if all calculations were finite, we would still have to renormalize. Renormalization is needed simply because we have a theory with a set of parameters and we need to relate those parameters to experimental values. That's all there is to it. We would need to do that even if all integrals were convergent.
And since we do calculations as a loop expansion, we need to fix the parameters of the theory again every time we increase the number of loops.

So renormalization is needed whether or not there are divergences. Regularization is what is required because of the infinities.

I would suggest that you have a look at the book Gauge theories in particle physics by Aitchison and Hey. They have a nice discussion on renormalization in volume 1.

13. Jan 16, 2009

### Avodyne

Re: Renormalization

I like Srednicki's QFT book; you can get a draft copy free from his web page.

14. Jan 16, 2009

### nrqed

Re: Renormalization

Thanks
And this is one thing that confuses beginners. It is true that the regularized result has a finite part that is pretty much arbitrary. On the other hand, the renormalized result is unambiguous. This is a source of confusion because people often focus on the regularized quantities (especially in QCD), and then one gets into the whole business of choosing a subtraction scheme and so on. That is very confusing to students because it gives the impression that physical results are scheme dependent. Of course they are not. I know that profs all know that, but they don't always emphasize it enough to students.

You are completely correct. I would add the following point, which I think is not always fully appreciated. This "sacrificing" of an experimental result is not reserved to quantum field theory. It is also necessary in classical physics!

Let's say we want to calculate the period of oscillation of a mass attached to an ideal spring (classically). We need to "sacrifice" two experimental results: one to determine the spring constant and one to determine the mass of the particle! So this idea of using some measured value in order to calculate some processes is not reserved to QFT!
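To make the analogy concrete, here is a tiny numerical version of it (all the numbers are made up for illustration): one static experiment fixes the spring constant, a weighing fixes the mass, and only then is the period a prediction.

```python
import math

# Classical analogue of "sacrificing" experiments, with invented numbers.
g = 9.81                     # gravitational acceleration, m/s^2

# Experiment 1: hanging a known test mass stretches the spring by x.
m_test, x = 0.50, 0.049      # kg, m
k = m_test * g / x           # spring constant, N/m (about 100 N/m here)

# Experiment 2: weigh the mass that will oscillate.
m = 0.20                     # kg

# Only now can we *predict* a third experiment: the period of oscillation.
T = 2 * math.pi * math.sqrt(m / k)   # about 0.28 s for these numbers
```

Nothing quantum is happening here; the point is only that two measurements had to be spent before T became a prediction.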

The difference is that, in contrast with the classical case, in QFT this "fixing" of the theory's parameters must be done order by order in a loop expansion. And of course the calculations are divergent, so we need regularization. In addition, when we deal with a coupling constant that is not very small, as in QCD, the experiment that is used to fix the coupling constant of the theory must be close in energy to the process we are interested in. If the difference is too large, logs threaten the loop expansion. Then one may use the renormalization group to sum up the most severe log dependence.

Patrick

15. Jan 17, 2009

### jdstokes

Re: Renormalization

Okay, I've been reading P&S again and there's something I'm just not getting.

Starting with the bare Lagrangian, we aim to express the bare coupling constants in terms of physical (but arbitrary) quantities, so that the amplitudes computed from the bare Lagrangian are meaningful.

Having chosen definitions for the physical couplings, we can separate the bare Lagrangian into the form

$\mathcal{L}_0 = \mathcal{L} + \mathrm{counterterms}$.

To perform a 1-loop calculation, we include the 1-loop term from $\mathcal{L}$ but only the tree-level term from the counterterms (see Eq. (10.21), p. 326).

I can't understand why they ignore the 1-loop contribution of the counterterms to the amplitude.

16. Jan 17, 2009

### RedX

Re: Renormalization

They're ignoring the 1-loop contribution of the counterterms probably because it's higher order than the 1-loop term you're trying to calculate.

17. Jan 17, 2009

### daschaich

Re: Renormalization

(I see I spent too long writing this and got scooped. Here is more information elaborating on what RedX pointed out.)

The one-loop diagrams involving counterterms enter at the next order in perturbation theory -- if you check out the result for the tree-level counterterm for that example (Eqn. 10.24), you'll see it is proportional to $$\lambda^2$$, just like the one-loop term from $$\mathcal{L}$$.

Learning perturbation theory in non-relativistic quantum mechanics, you saw that we always need to work consistently to a given order -- every term of that order must be included. One-loop counterterm contributions would be of order $$\lambda^3$$ (or higher), like two-loop diagrams from $$\mathcal{L}$$, so if you want to include the former in the calculation, you have to include the latter as well.

This is what nrqed pointed out -- "this "fixing" of the theory's parameters must be done order by order in a loop expansion." Peskin and Schroeder go on to do the calculation of $$\lambda^3$$ terms for this process in section 10.5, starting on page 338.

(Edit to add that number-of-loops is a somewhat awkward way to write/think about this, as what matters is powers of the parameter $$\lambda$$.)
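The bookkeeping above can be made explicit with a series truncation. This is only schematic: $$c_1$$ and $$c_2$$ below are stand-ins for the finite order-$$\lambda^2$$ and order-$$\lambda^3$$ coefficients, not the actual values from Peskin and Schroeder.

```python
import sympy as sp

lam, c1, c2 = sp.symbols('lambda c1 c2')

# Schematic amplitude: tree term, then the order-lambda^2 pieces (one loop
# plus the tree-level counterterm, lumped into the finite coefficient c1),
# then the order-lambda^3 pieces (two-loop diagrams AND one-loop counterterm
# diagrams, lumped into c2).
amp = -lam + c1*lam**2 + c2*lam**3

# Working consistently to order lambda^2 means truncating the series there;
# the one-loop counterterm diagrams live entirely in the discarded c2 term.
truncated = amp.series(lam, 0, 3).removeO()
```

Truncating leaves only the tree and $$\lambda^2$$ pieces, which is exactly why the 1-loop counterterm diagrams may be dropped at this order: keeping them without the two-loop diagrams would be inconsistent.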

Last edited: Jan 17, 2009
18. Jan 17, 2009

### jdstokes

Re: Renormalization

Thanks for your replies RedX and daschaich,

I'm learning a lot more about QFT thanks to people on this board.

You make a good point that the 1-loop counterterm can be ignored since it is presumably higher order than $$\lambda^2$$.

The only thing that troubles me is that we didn't know this in advance; we only found out by actually doing the calculation while ignoring the 1-loop correction to the counterterms. It sure would have helped if P&S had explained what they were doing.

To make matters even more confusing, I've read either in P&S or in another QFT book that it is more important to expand in the number of loops than in the coupling constant.

In retrospect, however, it all makes sense.

19. Jan 17, 2009

### ismaili

Re: Renormalization

Since one uses perturbation theory to calculate n-point functions, one must renormalize the theory order by order.
Usually, the tree-level counterterms are designed to cancel the divergences at the 1-loop level. Then one uses the newly introduced counterterm vertices to construct loop diagrams of counterterms, which cancel the divergences of loop diagrams built purely from the original parameters.

As for the importance of the loop expansion, let's digress from renormalization for a while.
The loop expansion is important because it is actually an expansion in the quantum correction, i.e. an expansion in $$\hbar$$. Consider the perturbation expansion of the generating functional,
$$Z[J] = \exp\left\{\frac{i}{\hbar}\mathcal{L}_{i}\left[-i\frac{\delta}{\delta J}\right]\right\}Z_0$$
where the free Gaussian part can be integrated to give
$$Z_0 = \mathcal{N}\exp\left[\frac{1}{2i}\hbar\int{dx}dyJ(x)\Delta_F(x-y)J(y)\right]$$
We have inserted the $$\hbar$$ in the formula explicitly.
Now we see clearly that each vertex contributes a factor of $$\hbar^{-1}$$ and each propagator contributes a factor of $$\hbar$$; therefore a Feynman diagram carries a factor of $$\hbar^{I-V} = \hbar^{L-1}$$, where $$I$$ is the number of internal lines, $$V$$ the number of vertices, and $$L$$ the number of loops.
So this means that the perturbative expansion in terms of loops is equivalent to an expansion in the quantum correction!
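This counting is easy to mechanize. A minimal sketch follows; the two $$\phi^4$$ diagrams at the end are just illustrations, and external lines are ignored here, as in the derivation above.

```python
# For a connected diagram with I internal propagators and V vertices:
# each vertex gives hbar^-1 and each propagator hbar^+1, so the diagram
# carries hbar^(I - V) = hbar^(L - 1), with L = I - V + 1 loops.

def n_loops(internal, vertices):
    return internal - vertices + 1

def hbar_power(internal, vertices):
    return internal - vertices          # equals n_loops(...) - 1

# phi^4 illustrations:
# tree-level 4-point vertex: V=1, I=0  ->  L=0, power of hbar is -1
# one-loop 4-point "fish":   V=2, I=2  ->  L=1, power of hbar is  0
```

Each extra loop costs one more power of $$\hbar$$, which is the sense in which the loop expansion is an expansion in the quantum correction.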

20. Jan 17, 2009

### RedX

Re: Renormalization

Does this mean that tree-level diagrams (L=0) are proportional to $$1/\hbar$$, one-loop diagrams to $$\hbar^0=1$$, and two-loop diagrams to $$\hbar$$? Because if this is true, shouldn't the tree-level diagrams be huge compared to the other diagrams, so why calculate the other diagrams at all? $$1/\hbar$$ is huge compared to $$1$$ or $$\hbar$$.

21. Jan 18, 2009

### jdstokes

Re: Renormalization

Okay, after revisiting P&S and Ryder I feel like I understand regularization and renormalization to my satisfaction.

I'm still utterly confused about running coupling constants and their significance.

When an amplitude is computed from a regulated and renormalized bare Lagrangian, the answer depends on the physical couplings measured at a certain energy scale M, and also on the variable momentum transfer q.

The renormalization group equation expresses how the renormalized correlation function and the field-strength renormalization vary with M; the conclusion is that rescaling q in the renormalized correlation function is equivalent to replacing the physical couplings by running couplings, plus an overall rescaling.

But so what? The regularized and renormalized amplitude is valid for arbitrary q, so why complicate the issue by defining unnecessary quantities?

Would I be correct in saying that the running couplings are not really of practical value for computing amplitudes and are merely introduced "to speculate outside the domain of perturbation theory", as Ryder writes in his book?

22. Jan 18, 2009

### nrqed

Re: Renormalization

If a coupling constant is small enough, there is never any real need to use the renormalization group approach (which leads to finding the running coupling constants among other things). However, if a coupling constant is not very small compared to 1, then potential problems arise. Recall the example I mentioned about using one experiment at some energy E, say, to fix the bare coupling constant and then using this result to compute some other physical process at some other energy E'. This generates logs that typically contain the ratio E/E'. If the coupling constant "g" is not very small, then g^2 log(E/E') may be of order unity. So the whole perturbative approach falls apart and one cannot trust any result one gets.

The renormalization group allows one to sum up the leading log divergences (or next-to-leading logs and so on, depending on the work put in) to *all orders* in the loop expansion, which makes the perturbative expansion more reliable.
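A toy numerical illustration of why the resummation matters (the coupling a, the coefficient b0, and all the numbers are invented for the example; the sign is QCD-like):

```python
import math

def a_resummed(E, E0, a0, b0):
    """RG-summed leading-log coupling: the geometric series of
    (b0*a0*log)^n summed to all orders."""
    return a0 / (1 + b0 * a0 * math.log(E / E0))

def a_one_loop(E, E0, a0, b0):
    """The same quantity truncated at first order in the log."""
    return a0 * (1 - b0 * a0 * math.log(E / E0))

a0, b0 = 0.3, 2.0   # a deliberately not-small coupling

# Small log (E close to E0): the two expressions agree well.
# Large log (b0*a0*log of order 1): the truncation fails badly; here the
# fixed-order result even goes negative, while the resummed one stays sane.
```

For instance, at log(E/E0) = 2 the expansion parameter b0*a0*log is 1.2, and no finite truncation can be trusted; the resummed form is what restores a usable answer.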

Last edited: Jan 18, 2009
23. Jan 18, 2009

### ismaili

Re: Renormalization

First of all, note that $$L=I-V+1$$ is only valid when $$V\geqslant1$$; and in the counting of powers of Planck's constant, I neglected the external lines.
Consider the 2-point function: at tree level it is a propagator, so the power of the $$\hbar$$ factor is 1. The tadpole diagram, which is the 1-loop diagram for the 2-point function, has $$\hbar$$ factor 2 (the two external lines contribute $$\hbar^2$$). Hence the factor of $$\hbar$$ for each diagram is actually $$\hbar^n$$ with $$n$$ positive.

Moreover, we usually work in units where $$\hbar=1$$, so whether the loop expansion is good is usually judged by the magnitude of the coupling constant.

24. Jan 18, 2009

### ismaili

Re: Renormalization

Allow me to ask a naive or somewhat stupid question. Actually, I still haven't completely understood the renormalization group and renormalization theory.

Can I understand the running coupling constant as follows: the running of the coupling constants tells us how the renormalization procedure adjusts itself so that the unrenormalized n-point function is independent of the arbitrary scale $$M$$, since the arbitrary scale $$M$$ is only introduced when we perform renormalization. And before we sacrifice one experiment at some energy scale to fix the parameter, say $$g$$, the running of this parameter is actually a function with an undetermined offset,
$$g = g(M) + \mathrm{offset}$$
where the offset comes from the arbitrary renormalization scheme. Then, once we compare the calculated result to a sacrificed experiment, we can fix the offset, and the running $$g$$ tells us how the coupling constant varies at different energy scales.
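This "offset" picture can be seen directly in the one-loop solution of a running-coupling equation. The sketch below uses invented, QED-like numbers (the coefficient b and the scales are illustrative, not a real computation of the fine-structure constant):

```python
import math

# The one-loop RG equation d(alpha)/d(ln M) = b*alpha^2 integrates to
#     1/alpha(M) = C - b*ln(M),
# where C is an undetermined constant of integration: the "offset".
b = 1.0 / (3.0 * math.pi)            # illustrative one-loop coefficient
M0, alpha0 = 1.0, 1.0 / 137.0        # the one "sacrificed" measurement

C = 1.0 / alpha0 + b * math.log(M0)  # offset fixed by that measurement

def alpha(M):
    """Running coupling at scale M; predictive only once C is fixed."""
    return 1.0 / (C - b * math.log(M))
```

By construction alpha(M0) reproduces the measured input, and alpha grows slowly with M (the QED-like sign); every value at another scale is then a prediction rather than a free parameter.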

Moreover, I want to confirm: if we measure some physical quantity at two different energy scales, the number we get will be different at each scale, right? I mean, can we experimentally confirm that the parameters really run, or is the running of the parameters just a theoretical tool, a theoretical description of the world, so that when we perform experiments we will see, for example, that the electron mass is unchanged whether at 10 MeV or at 10 GeV? Which one is correct?

Thank you so much for the discussion!

25. Jan 18, 2009

### RedX

Re: Renormalization

I forgot about the external lines. So each external line contributes a factor of $$\hbar$$, and there is no overall $$1/\hbar$$. This is confusing because usually we set $$\hbar=1$$, so $$\hbar, \hbar^2, \hbar^3,\dots$$ are all the same. The fine-structure constant, for example, is 1/137, which is much bigger than $$\hbar$$, so it would seem that the effect of $$\hbar$$ overwhelms the fine-structure constant.