Understanding the renormalization group

In summary, renormalization is a process in which the theory is reparametrized in order to make physical quantities finite. This is done by absorbing the divergences into the relations between the bare and measurable quantities. The renormalization group expresses the fact that the choice of renormalization point should not affect physical quantities, although this may fail to hold order by order in a perturbation expansion. The equation μ∂gR/∂μ = β(gR) expresses how the renormalized coupling must change when the renormalization point is changed; couplings defined at different renormalization points describe the same theory only if they lie on the same solution of this equation. Renormalization also allows the perturbation expansion to be carried out in a parameter, the measured coupling, that is known to be small.
  • #1
center o bass
From what I now understand of renormalization, it is really a reparametrization of the theory in terms of measurable quantities instead of the 'unobservable bare quantities' that appear in the Lagrangian; at least that is one interpretation of what is going on. The originally divergent physical quantities of the theory are thus rendered finite by absorbing the divergences into the relations between the bare and the measurable quantities.

For example in ##\phi^4##-theory one is reparametrizing the four-point vertex function ##\Gamma^{(4)}(p_i)## in terms of a renormalized coupling constant ##g_R## which is defined by 'measuring' ##\Gamma^{(4)}## at a certain energy/momentum scale (or renormalization point) ##\mu## and this together with a renormalization of the mass renders it finite (at least to one-loop corrections).

As I understand it so far, the renormalization group expresses the fact that it should not matter at which renormalization point, and thus with which value of ##g_R##, you choose to reparametrize your theory; physical quantities should not be affected by that choice. (This is called the renormalization group law.) Or at least that should be the case if we were working with the exact theory and not a perturbation expansion of it.
According to this paper (http://arxiv.org/pdf/hep-th/0212049.pdf), the parametrization you choose does have an effect on your theory at any finite order of the perturbation expansion. I therefore wonder what the interpretation and use of equations like

$$ \mu \frac{\partial g_R}{\partial \mu} = \beta(g_R) $$

or the so-called Callan-Symanzik equation is. What I think they express, so far, is this: if you have already reparametrized your theory at, say, ##\mu_0##, obtaining a renormalized coupling ##g_0##, the equation above expresses how ##g_R## must change in order for the theory to remain invariant as the renormalization point ##\mu## changes. Is this correct?
If it is, then it seems like choosing your first renormalization point ##\mu_0## generally gives you 'another theory' than if you had chosen ##\mu_0'##, and that with these two initial renormalization points one might end up with different ##\beta##-functions (for the differential equation above) expressing how the renormalized coupling must change to leave the theory corresponding to ##\mu_0## or ##\mu_0'## invariant. It seems very unsatisfactory to get different ##\phi^4## theories depending on which renormalization point you choose.

However, I suspect that I have gotten something wrong here? (In the Perimeter QFT II video lectures (http://pirsa.org/displayFlash.php?id=11110011) the equation above seems to be derived for phi-four theory by demanding that ##\Gamma(g_R(\mu_2); \mu_2) = \Gamma(g_R(\mu_1); \mu_1)##. The lecturer then goes on to say that depending on what you measure at the first renormalization point you are 'describing a different theory'; it is only the same theory if the renormalized coupling constants lie, as a function of the renormalization point ##\mu##, on the line described by the equation ##\Gamma(g_R(\mu_2); \mu_2) = \Gamma(g_R(\mu_1); \mu_1)##.)
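For concreteness, a standard one-loop sketch (my own illustration, not taken from the lectures): in ##\phi^4## theory the one-loop beta function is ##\beta(g_R) = 3g_R^2/(16\pi^2)##, and integrating ##\mu\,\partial g_R/\partial\mu = \beta(g_R)## between two renormalization points gives

$$ g_R(\mu_2) = \frac{g_R(\mu_1)}{1 - \frac{3\, g_R(\mu_1)}{16\pi^2}\ln(\mu_2/\mu_1)}, $$

which is precisely the line in ##(\mu, g_R)## space along which ##\Gamma(g_R(\mu_2); \mu_2) = \Gamma(g_R(\mu_1); \mu_1)## holds to this order.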
 
  • #2
center o bass said:
From what I now understand of renormalization, it is really a reparametrization of the theory in terms of measurable quantities instead of the 'unobservable bare quantities' that appear in the Lagrangian; at least that is one interpretation of what is going on. The originally divergent physical quantities of the theory are thus rendered finite by absorbing the divergences into the relations between the bare and the measurable quantities.

You have to think of it in two stages: first there is the (somewhat arbitrary) process of renormalization itself, a rescaling of the parameters, which defines the renormalization group. Then there are the conclusions drawn from this structure, which give you the "instead of" - effective quantities vs. bare ones.

This typically goes hand in hand with regularization, which means e.g. selecting cut-off scales to stop expressions from diverging. Seeing how the values change when you change the cut-off allows you to see ratios of divergent quantities approaching finite limits, even though in the field-theoretic formulation the quantities themselves diverge. You thus identify regularization-scale-independent physical relationships.
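A minimal numerical sketch of that idea (my own toy example, using a difference of logarithmically divergent integrals rather than a ratio, for simplicity):

[code]
import numpy as np

# Toy logarithmically divergent integral, regularized with a cutoff L:
#   I(x, L) = integral_0^L dt / (t + x) = ln((L + x) / x).
# Each I diverges as L -> infinity, but the combination
# I(x1) - I(x2) approaches the finite, cutoff-independent limit ln(x2/x1).
def I(x, cutoff):
    return np.log((cutoff + x) / x)

for cutoff in [1e2, 1e4, 1e8, 1e16]:
    diff = I(1.0, cutoff) - I(10.0, cutoff)
    print(f"cutoff = {cutoff:.0e}:  I(1) - I(10) = {diff:.6f}")

print("exact limit: ln(10) =", np.log(10.0))
[/code]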

Disclaimer: that's my understanding of the process; however, I have done very little QFT work and thus haven't carried the process through. Someone with more intimate experience may have corrections to my characterization.
[My graduate work was in trying to find non-field theoretic non-divergent formulations. Very speculative and not carried through beyond preliminary alternative foundations.]
 
  • #3
I wouldn't describe myself as having more than a rudimentary understanding either, but I have found the following paper very helpful; I have gone through it a number of times, and each time I get some new insight:
http://arxiv.org/pdf/hep-th/0212049.pdf

Now for my take. We get infinities due to a dimensional mismatch in the expansion of some function F of x, say energy, in terms of, say, a dimensionless coupling constant. To first order you get F = the coupling constant, so F is dimensionless. But if we go to second order, then F depends on the energy, so it can't be dimensionless. This is why you get infinities - they allow it to still be dimensionless. This shows we have not included a parameter y in our theory to divide x by, so that F is always dimensionless by depending on x/y instead of just x. The most obvious choice for y is the regulator, so the real cause of the problem is that the function wasn't regularized.

So we regularize it - but you still have the infinity when the regulator is taken to infinity. We know physically why it happens - a dimensional mismatch - but mathematically, why does it happen? The most obvious culprit is a bad choice of perturbation parameter - if it's not very small (much less than 1), perturbation theory is invalid. OK, so what do we do? We choose another parameter and make sure it's small. Renormalisation chooses the value of the coupling constant as actually measured, and treats the coupling constant that appears as the parameter in the theory as a function of it. We know 100% for sure the measured value is small, so if that is what we perturb about, it should work fine.

When you work through the math, as per the link I gave, you see something interesting: in order for the process to work, the infinities must depend only on the regulator, and in the terms where they appear they cancel. The result then doesn't depend on the regulator, and all your equations are just fine, nice and finite. It also shows mathematically why the problem occurred in the first place: the parameter in the original equation - now called bare - turns out to depend on the regulator and, horror of horrors, blows up to infinity when the regulator is taken to the limit - no wonder it was a bad choice to perturb around. This is the magic of renormalisation.
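To make that cancellation concrete, here is a minimal sketch in the spirit of the linked paper (the specific one-loop form is a made-up illustration, not the paper's exact example):

[code]
import numpy as np

# Toy "theory" with a log-divergent one-loop term and regulator Lambda:
#     F(x) = g0 + g0**2 * ln(Lambda/x) + O(g0**3).
# Renormalize: define the measured coupling gR := F(mu) at a chosen point mu
# and invert perturbatively, g0 = gR - gR**2 * ln(Lambda/mu) + O(gR**3).
# Substituting back gives F(x) = gR + gR**2 * ln(mu/x) + O(gR**3):
# the regulator Lambda has dropped out of the prediction, while the bare
# coupling g0 runs off to -infinity as Lambda grows -- the pattern above.

mu, gR, x = 1.0, 0.1, 0.01
for Lambda in [1e3, 1e6, 1e12, 1e24]:
    g0 = gR - gR**2 * np.log(Lambda / mu)   # bare coupling: regulator dependent
    F_ren = gR + gR**2 * np.log(mu / x)     # prediction: no Lambda anywhere
    print(f"Lambda = {Lambda:.0e}:  g0 = {g0:+.3f},  F({x}) = {F_ren:.4f}")
[/code]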

The renormalisation group allows you to relate the measured parameters you use at a certain energy level (they are given the fancy name 'renormalised', but it's probably better to call them 'physical') to what the parameters would be if measured at another energy level.

Anyway, enough of my take on it - if you carefully go through the paper, I am sure things will be a lot clearer.

Thanks
Bill
 
  • #4
bhobba said:
I wouldn't describe myself as having more than a rudimentary understanding either, but I have found the following paper very helpful; I have gone through it a number of times, and each time I get some new insight:
http://arxiv.org/pdf/hep-th/0212049.pdf
...

Hi. Actually, I have gone through most of that paper; it was the paper I linked to in my forum post. I found it very illuminating in the parts dealing with the renormalization procedure, but I did not quite get the part dealing with the renormalization group. As I understood it, in that paper the renormalization group law expresses that it would not matter how you parametrize your theory in terms of 'measurables' if the theory were exact. However, when we do perturbation theory, the choice of parametrization can make a difference. The renormalization group law can then be used to improve our perturbation results.

I also picked up something I think you also did; namely, that there are problems even with the renormalized perturbation expansion: when the energies get large, logarithms in the perturbation series blow up, invalidating the expansion. I found that Weinberg takes something like this perspective in his second volume, where he states that the group originated as a tool to make sense of the expansion at high energies.
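(Schematically, and as a standard observation rather than Weinberg's exact wording: the loop corrections come with factors like ##g_R^n \ln^n(E/\mu)##, so even for small ##g_R## the effective expansion parameter ##g_R \ln(E/\mu)## becomes of order one at large enough ##E##, and the fixed-order truncation breaks down.)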

My frustration, however, is in connecting these perspectives with others that are common in different books. Some books derive the Callan-Symanzik equation by saying that the bare n-point vertex function is independent of the renormalization point. Similarly, in the video lecture I linked to above there seems to be the understanding that the Callan-Symanzik equation (only the beta function) expresses how the coupling constant changes with scale, and that it changes in such a way as to make the theory 'invariant'.

So I'm wondering: do the group equations reflect a _demand_ that physical functions must be invariant under a change of scale, or the fact that they _are_ invariant, or are they rather a tool for making better predictions in a theory which is not actually invariant due to the perturbation expansion?
 
  • #5
center o bass said:
Hi. Actually, I have gone through most of that paper; it was the paper I linked to in my forum post. I found it very illuminating in the parts dealing with the renormalization procedure, but I did not quite get the part dealing with the renormalization group.

Sorry - didn't notice it was that paper.

center o bass said:
As I understood it, in that paper the renormalization group law expresses that it would not matter how you parametrize your theory in terms of 'measurables' if the theory were exact. However, when we do perturbation theory, the choice of parametrization can make a difference. The renormalization group law can then be used to improve our perturbation results.

Yea - it doesn't matter what you use as the renormalised parameter, but I am not sure what you mean by exact. The choice of parameter makes a difference in terms of doing the calculations. It doesn't make any difference as far as the theory being predictive, but if you use a renormalised parameter at an energy scale a lot different from the one you are doing your calculations at, that would seem to make your job harder than it should be. But I haven't personally gone through such calculations myself - I have only read textbook accounts. Like I say, my knowledge, right now anyway, is rather rudimentary.

center o bass said:
I also picked up something I think you also did; namely, that there are problems even with the renormalized perturbation expansion: when the energies get large, logarithms in the perturbation series blow up, invalidating the expansion.

Yea - you can't really trust any calculations where the bare parameter is large - perturbation theory is invalidated. But what you are doing is perturbing about a small parameter - the renormalised parameter - and the idea is that the divergence of the expansion of the bare parameter in terms of that FIXED small parameter cancels the divergence of the term at that order. But since it is held small, you do not have the problem of the bare parameter - unless you work at an energy scale where it is not small - but I think it is assumed you don't push it into regions like that, i.e. for QED you stay well below the Landau pole - which isn't hard - it's evidently far beyond even the Planck scale, which in turn is way above where QED breaks down and the electroweak theory takes over.
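For reference, the standard one-loop formula (a textbook result, not from the linked paper): with a single fermion of mass ##m## in the loop, the QED running coupling is

$$ \alpha(Q) = \frac{\alpha(m)}{1 - \frac{2\alpha(m)}{3\pi}\,\ln(Q/m)}, $$

and the denominator vanishes - the Landau pole - at ##Q = m\, e^{3\pi/2\alpha(m)}##.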

center o bass said:
My frustration, however, is in connecting these perspectives with others that are common in different books. Some books derive the Callan-Symanzik equation by saying that the bare n-point vertex function is independent of the renormalization point. Similarly, in the video lecture I linked to above there seems to be the understanding that the Callan-Symanzik equation (only the beta function) expresses how the coupling constant changes with scale, and that it changes in such a way as to make the theory 'invariant'.

Now you are getting into areas I am not familiar with. But as far as renormalisation goes in an actual field theory like the phi-4 theory, I like the BPHZ procedure. The usual one looks a bit like sleight of hand to me. I can relate it back to that paper and get a grip that way, but it seems to be making it harder than it should be. With BPHZ you express everything in terms of the renormalised parameters from the start. Of course divergences still appear, but, just as happens in the linked paper, they are cancelled by counter-terms to leave finite results. For me that's a lot easier.

center o bass said:
So I'm wondering: do the group equations reflect a _demand_ that physical functions must be invariant under a change of scale, or the fact that they _are_ invariant, or are they rather a tool for making better predictions in a theory which is not actually invariant due to the perturbation expansion?

I think the reason renormalisation works is that if terms change in a way different from the typical log terms that crop up in the linked paper, the renormalisation group flow quickly drives them to zero. It is this log form that keeps the form the same under a change of scale. But again, we are getting into areas that I am not, at least at this stage of my studies into QFT, fully up to speed on.

Thanks
Bill
 
  • #6
Maybe I'm being naive, but I think the answer to your question is rather simple (where 'simple' has been renormalised from its everyday value to include relativistic quantum field theory! :smile:).

So let's say we renormalise the theory using [tex]\Gamma^{(4)}[/tex], assuming this is a theory where there is only one coupling and this is sufficient, like [tex]\phi^4[/tex] as you mentioned.

Well then a renormalisation scheme is where we set:
[tex]i\Gamma^{(4)}(p_i = \mu) = \lambda[/tex]

Now, for different choices of [tex]\mu[/tex] and [tex]\lambda[/tex] you do indeed get different theories. However, certain choices of [tex]\mu, \lambda[/tex] result in the exact same theory. The renormalisation group equation:
[tex]\mu \frac{\partial \lambda}{\partial \mu} = \beta(\lambda)[/tex]
is derived, as is basically mentioned in the Princeton notes, by the condition:
[tex]\Gamma(\lambda(\mu_2); \mu_2) = \Gamma(\lambda(\mu_1); \mu_1)[/tex]
since this is just the statement that two choices result in the same theory. From this condition
you derive the renormalisation group equation, or in other words you find that choices of [tex]\mu, \lambda[/tex] that result in the same theory all lie along a line in [tex]\mu, \lambda[/tex] space. To phrase it another way, a single theory is represented by a line in [tex]\mu, \lambda[/tex] space. The renormalisation group equation is simply an ODE characterising this line.
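A little numerical sketch of that line (assuming the standard one-loop [tex]\phi^4[/tex] result [tex]\beta(\lambda) = 3\lambda^2/16\pi^2[/tex]; the starting values are arbitrary):

[code]
import numpy as np
from scipy.integrate import solve_ivp

# One-loop phi^4 beta function (standard result; higher orders ignored).
def beta(log_mu, lam):
    return 3.0 * lam**2 / (16.0 * np.pi**2)

# Integrate d(lambda)/d(ln mu) = beta(lambda) from the point (mu0, lambda0).
mu0, lam0 = 1.0, 0.5
sol = solve_ivp(beta, [np.log(mu0), np.log(1e3)], [lam0], dense_output=True)

# Every (mu, lambda(mu)) pair on this trajectory represents the same theory;
# a point off the line represents a different one.
for mu in [1.0, 10.0, 100.0, 1000.0]:
    print(f"mu = {mu:7.1f}   lambda(mu) = {sol.sol(np.log(mu))[0]:.5f}")
[/code]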
 

What is the renormalization group?

The renormalization group is a mathematical framework used in theoretical physics to study how physical systems behave at different length scales. It helps us understand how the properties of a system change as we zoom in or out.

Why is the renormalization group important?

The renormalization group is important because it allows us to study complex physical systems that are difficult to analyze directly. It also helps us understand how different physical phenomena are interconnected, and how they behave at different scales.

What are the key concepts of the renormalization group?

The key concepts of the renormalization group include scaling, fixed points, and universality. Scaling refers to how a system behaves at different length scales, fixed points are points in the space of couplings that the renormalization group flow leaves unchanged, and universality refers to the phenomenon where different physical systems exhibit the same behavior near critical points.
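As a concrete, standard illustration: a fixed point is a coupling ##g^*## where the beta function vanishes, ##\beta(g^*) = 0##, so the coupling stops flowing. In the textbook Wilson-Fisher example in ##d = 4 - \epsilon## dimensions, a beta function of the form ##\beta(g) = -\epsilon g + B g^2## (with ##B > 0##) has a nontrivial fixed point at ##g^* = \epsilon/B##, in addition to the free fixed point at ##g^* = 0##.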

How is the renormalization group used in research?

The renormalization group is used in research to study a wide range of physical systems, from condensed matter physics to cosmology. It also has applications in other fields, such as economics and computer science.

What are the limitations of the renormalization group?

One limitation of the renormalization group is that it is a theoretical framework and may not always accurately describe real-world systems. It also relies on certain assumptions, such as the system being in a state of equilibrium, which may not always be true. Additionally, it can be mathematically complex and difficult to apply in certain situations.
