Renormalisation unique?

  1. Jan 18, 2007 #1

    As far as I understand renormalisation, you try to find renormalised quantities R1, R2, ... which are related to the bare quantities B1, B2, ... in the following way:
    - Ri stays finite when you send the Bi to infinity
    - you can write every Lorentz-invariant amplitude of the theory in terms of the Ri instead of the Bi

    I have one concern about this: is your choice of the Ri unique (up to an additive constant)? I mean, is the relation between Ri and Bi unambiguously given? What we want are physically measurable quantities, which can all be derived from Lorentz-invariant amplitudes for processes. If there is only one possible choice for your renormalised quantity, renormalisation is a method that really makes sense; otherwise it would depend on arbitrariness.
    Lorentz-invariant amplitudes are constructed from vertex functions and propagators, so you have to make sure you can write all vertex functions and propagators in terms of unambiguously defined Ri.

    Does anybody know the textbooks of Griffiths or Ryder? Then we could discuss it explicitly...

    Best regards Martin
  3. Jan 21, 2007 #2
    Ryder, for example, has the following relation (at 1-loop order) between the renormalised mass M and the bare mass m in phi^4 theory:
    [tex] m^2 = M^2\left(1 + \frac{g}{16\pi^2\,(4-d)}\right) [/tex]

    The thing is: he dropped all the finite terms in the result of the calculation of the (only) divergent 1-loop diagram. I guess that by ignoring all the finite terms he chooses a particular "renormalisation scheme" (i.e. I guess the choice of which finite terms you drop corresponds to the choice of the parameters in the counterterm method). The finite terms depend on the bare mass! So you could include such an arbitrary finite term in the relation above (choose another renormalisation scheme), which means the relation between the bare and renormalised quantity is ambiguous.
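
    To make the ambiguity explicit (my own schematic notation, not Ryder's): a different scheme would give a relation of the form

    [tex] m^2 = M^2\left(1 + \frac{g}{16\pi^2}\left[\frac{1}{4-d} + c\right]\right) [/tex]

    where c is some finite, scheme-dependent term (c = 0 is the minimal choice above, but nothing forces that choice).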

    But the predictions for measurable quantities of your theory depend on the choice of renormalisation scheme (e.g. http://arxiv.org/abs/hep-ph/9412236).

    So there should be a canonically determined choice of renormalisation scheme (something which motivates it!) if renormalisation is to make sense; otherwise it's simply fitting your theory (and not just fitting the parameters of a theory) to the experiment. :grumpy:
  4. Jan 22, 2007 #3

    Physics Monkey


    Hi Sunset,

    You are correct, renormalization is far from unique. There are essentially two parts to what is broadly called "renormalization."

    The first part involves a choice of regulator, the regulator being your tool to tame divergent amplitudes. The most popular choice of regulator is definitely dimensional regularization.

    The second part involves the actual procedure of renormalization, and again there are many choices available. One very popular choice is the [tex] \bar{MS} [/tex] scheme. This scheme is defined by subtracting off only the divergent pieces of amplitudes while allowing finite corrections to accrue. You can contrast this with the most naive (and seemingly physical) scheme where your renormalization conditions amount to fixing certain propagator poles and residues.
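
    A schematic illustration (my conventions, with d = 4 - 2 epsilon, so take the exact constants with a grain of salt): a typical one-loop amplitude in dimensional regularization has the structure

    [tex] \mathcal{A}_{1} = \frac{A}{\epsilon} + A\left(-\gamma_E + \ln 4\pi + \ln\frac{\mu^2}{m^2}\right) + \cdots [/tex]

    Minimal subtraction removes only the pole A/epsilon, [tex] \bar{MS} [/tex] removes the combination A(1/epsilon - gamma_E + ln 4 pi), while an on-shell scheme instead fixes the position and residue of the propagator pole at the physical mass. The finite pieces left behind differ from scheme to scheme, which is exactly the ambiguity you are asking about.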

    There are any number of technical and physical reasons why the simplest "physical" scheme may not be useful, but the important point is that it doesn't matter. The physical predictions of your theory are quite independent of your renormalization scheme; they simply follow from your bare lagrangian (inserted in some path integral on a lattice, say). The renormalization scheme is simply a choice about how to break up your bare lagrangian into a renormalized piece and a counterterm piece (in the context of renormalized perturbation theory). Within a given scheme, the renormalization group refers to your ability to shuffle things back and forth between the renormalized lagrangian and the counterterm lagrangian.
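
    In the phi^4 theory you are discussing, that split looks schematically like

    [tex] \mathcal{L}_{\mathrm{bare}} = \left[\frac{1}{2}(\partial\phi)^2 - \frac{1}{2}M^2\phi^2 - \frac{G}{4!}\phi^4\right] + \left[\frac{1}{2}\delta_Z(\partial\phi)^2 - \frac{1}{2}\delta_m\phi^2 - \frac{\delta_G}{4!}\phi^4\right] [/tex]

    where the first bracket is the renormalized lagrangian, written in terms of the renormalized mass M and coupling G, and the second bracket holds the counterterms. The scheme is just your rule for what goes into the deltas; the sum is always the same bare lagrangian.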

    If you found this helpful I can try to answer more of your questions here, but may I also suggest you pick up the book "Renormalization" by John Collins. It will go a long way toward demystifying renormalization for you.

    P.S. I just saw your article link, so let me comment on that. While the predictions of your physical theory are independent of the renormalization scheme, it's important to remember that we don't really know how to get those predictions exactly. What we have is perturbation theory, and since renormalization is basically a way to do perturbation theory in a smart way, it follows that some renormalization schemes may do better than others. I think this is what your linked article refers to.
  5. Jan 23, 2007 #4
    Got a small question: can all types of divergences be written in the form

    [tex] \int_{0}^{\infty} dp\, p^{m} [/tex] with m an integer,

    since if we had

    [tex] \int_{0}^{\infty} dp\, \mathcal{F}(p) [/tex]

    we could make a Taylor expansion of F in powers of p? Is that all right??
  6. Jan 23, 2007 #5



    The choice of regularization can and does change the actual results your theory outputs; in this sense there is somewhat of an ambiguity in field theory (albeit one greatly demystified in the 70s and somewhat intuitively obvious). For instance, dimensional regularization completely misses the hierarchy problem of particle physics.

    On the other hand, the choice of renormalization scheme is more of a technical issue, in the sense that while it does change the results, it just means one of them is approaching some attractor point better than the other, as was pointed out earlier. Sometimes you can actually improve the results to even obtain some nonperturbative sectors of the theory.

    In general you can more or less match these various schemes theoretically in the appropriate regimes and can show that they are consistent at a physicist's level of rigor (although this is extremely hard to do, and the papers that do it are somewhat hard to track down and sometimes even unpublished).
  7. Jan 24, 2007 #6
    Ok, you say that if we consider enough orders in perturbation theory, predictions using different renormalisation schemes approach each other more and more. This would be satisfying.

    When I speak of predictions, I always have in mind something like the Lorentz-invariant amplitude L. Let's consider the simplest process possible: the propagation of one particle of momentum p. The perturbation series for L is: free propagator + 1-loop correction diagram (neglecting higher loop orders). We have to regularise the 1-loop correction diagram, i.e. our regularised expression contains the regularisation parameters (e.g. µ and 4-d), the bare mass m and coupling constant g. So L is a function of m, g, µ, d and momentum p.
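
    Schematically (my shorthand, dropping all numerical factors), the regularised 1-loop self-energy in phi^4 theory has the form

    [tex] \Sigma \;\sim\; \frac{g\, m^2}{16\pi^2}\left[\frac{1}{4-d} + \mathrm{finite}(m,\mu)\right] [/tex]

    and the 1-loop correction to L is the free propagator times (-i Sigma) times the free propagator, which is where the dependence on m, g, µ, d and p enters.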
    We now carry out renormalisation, using a certain renormalisation scheme such as the MS scheme. So L becomes a different function F of the renormalised quantities M, G.
    Using a different renormalisation (renormalisation scheme), i.e. different renormalised quantities, we should get again a different function F'.

    If we fit with masses from the Particle Data Book (experiment) we receive different predictions for the probability |M|², i.e. different renormalisation schemes lead to different predictions!

    P.S. I'll check Collins' book in our library.

    Best regards
  8. Jan 31, 2007 #7
    In Zee's QFT in a Nutshell, ch. III.1 "Cutting off...":

    He defines the renormalised coupling constant G as

    -iG = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ] (4)

    and he gets the result

    M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ] (9)

    Why iG ? I mean I could write an arbitrary function -if(G) instead of -iG in formula (4):
    -if(G) = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ]

    Then you receive your result:

    M = -if(G) + iCf²(G) [ log(s0/s) + log(t0/t) + log(u0/u) ]

    ( e.g. f(G)=G² )

    :confused: Isn't this a problem?

    What I can measure in experiment is M for s,t,u .
  9. Feb 4, 2007 #8
    Hi Sunset,

    What you are pointing out is that infinity minus infinity is completely indeterminate. Since it is completely indeterminate, the arbitrary function that you talk about could even depend on variables that never entered the equations in the first place. What Zee and the other textbooks on renormalized QFT do is therefore nonsense. I have been pointing this out, on and off, for twenty years, and the response of HEP theorists is to pretend that I do not exist. Whether you would have better luck, I cannot say, but as things stand, I would recommend getting out and doing something based on logic instead.
  10. Feb 4, 2007 #9



    I may not have understood your point completely, so let me know if I am way off. But the point is that we are working in a loop expansion. So it is understood that all the calculations are meant to be accurate to some order in the coupling constant. So it would not be consistent to introduce a function of the coupling constant, given that you are already doing all the calculations up to some power of that coupling constant. You would especially not redefine via G^2, since you have already neglected all the loop diagrams of order G^2 relative to the diagram you are working with!! I suppose you could use something fancier like (1-e^(-G)), but then again, since all the diagrams of order G^2 or higher relative to the diagram you are considering have been neglected anyway, there is no point in doing something like this. One should simply stick with G^1.

    I hope this helps. Those are great questions. Renormalization is a very tricky concept.

  11. Feb 4, 2007 #10
    Hi cgoakley!

    On the other hand, QCD and QED are QFTs which apparently describe nature pretty well after renormalisation. I doubt this can be achieved only by "fitting theory to experiment" - ok, maybe you have to put something in by hand, but I assume the theory makes predictions that go beyond that ("verifying theory by experiment").

    Best regards
  12. Feb 4, 2007 #11
    Hi Patrick!

    You are right, the function f is not completely arbitrary then. But f(G)=G² would be ok, because I didn't neglect second order in g:

    M(s,t,u) = -ig +iCg² [log(cutoff²/s) + log( ... ) + ... ]
  13. Feb 4, 2007 #12



    Hi. Ok, sorry about the confusion. Let's first clear up the notation. I am assuming you use G for the renormalized coupling constant and "g" for the bare coupling constant, right?
    Then the way people would normally write that would be

    [tex] -iG = M(s_0,t_0,u_0) = -ig + iC G^2 \left[\log\!\left(\frac{\mathrm{cutoff}^2}{s_0}\right) + \log(\ldots)\right] [/tex]

    Notice that in the second term, one uses the renormalized coupling constant. So this equation allows one to relate G to g up to order G^2. At the next order, one would get an expression of the form

    -iG = -ig + stuff to order G^2 plus stuff of order G^3 etc
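
    (This also shows why it is legitimate to use G^2 rather than g^2 inside the loop term: since

    [tex] g = G + \mathcal{O}(G^2) \quad\Longrightarrow\quad iCg^2[\ldots] = iCG^2[\ldots] + \mathcal{O}(G^3) , [/tex]

    the replacement only changes things at the next order, which is beyond the accuracy of the one-loop relation anyway.)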

    I have much more to say about renormalization and renormalization schemes and the relation between bare and renormalized parameters but I will just post a bit at a time so that we can be on the same wavelength.

    Great questions

  14. Feb 4, 2007 #13



    Let me say it in a different way:

    The goal is to write g as an expansion in G:

    [tex] g= f_1 G + f_2 G^2 + f_3 G^3 + \ldots [/tex]

    where the f_i are functions of the parameters of the theory and of the cutoff (and are generally divergent... notice that even if there were no infinities in the theory, one would still have to renormalize!)

    The above expression is the starting point. All you have to do is to plug this into your diagrams. Now, by definition, we impose that the amplitude be equal to some measured value at some kinematic point: like you said, M(s0,t0,u0) is defined to be -iG. Now, you calculate a tree-level diagram with the Lagrangian (which contains the bare parameter g) and fix this to be -iG. But since you are working at tree level, you keep only the first term in the expansion I gave above. This fixes f_1 to be 1.

    Now, you repeat with the one-loop diagram. You plug in your g given above up to order G^2 and use f_1 = 1 (which you found from the tree-level matching). That will allow you to fix f_2.

    And on and on.
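
    To see it in formulas with the Zee example (schematically, writing Lambda for the cutoff): at tree level

    [tex] -iG = -i f_1 G + \mathcal{O}(G^2) \;\Rightarrow\; f_1 = 1 , [/tex]

    and at one loop

    [tex] -iG = -i\left(G + f_2 G^2\right) + iCG^2\left[\log\frac{\Lambda^2}{s_0} + \ldots\right] + \mathcal{O}(G^3) \;\Rightarrow\; f_2 = C\left[\log\frac{\Lambda^2}{s_0} + \ldots\right] , [/tex]

    which is the same order-by-order logic written out in equations.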

    In the context of effective field theories, this is called "matching". Renormalization is really just that: matching the bare coefficients to measurable quantities, order by order in a coupling constant expansion.

    The question of "scheme" is something added on top of that and is really not necessary at all!!! It's one of those things that people do because it makes things simpler in practice, but it is NOT necessary in principle.

    I will write more if you want me to.

    Hope this helps.

  15. Feb 4, 2007 #14



    To go back to your question about using a different function f(G): we don't want f to change as we renormalize.
    The whole point is that we want to define the bare coupling constant "g" such that calculations with the theory reproduce a fixed, measured quantity, no matter how many loops we include. The only thing is that, since we can only do calculations in a perturbative context, we can only define the bare parameters as expansions in the coupling constant.

    So the point is to impose that the theory reproduce the measurable
    M(s0,t0,u0) up to the number of loops we want to use in whatever calculation we are doing.

    So if you plan to do tree level calculations, you must impose that, up to tree level,

    calculation of tree-level M_0 with bare parameter = measured value of M_0

    (where I use M_0 = M(s0,t0,u0))

    If you plan to do one-loop calculations, you must impose

    calculation of one-loop M_0 with bare parameter = measured value of M_0

    and so on. You always fix to the same measurable quantity, no matter how many loops you keep in your calculation. Now, in the case we are discussing here, the measurable quantity M_0 happens to be -iG, the renormalized coupling constant. But the principle is still that, order by order in the loop expansion, we impose that the calculation give a measured, fixed quantity.
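
    In symbols, with M_0 = M(s0,t0,u0) as above, and writing M_0^{(0)} for its tree-level value and M_0^{(1)} for its value including the one-loop correction (my notation), the conditions read

    [tex] M_0^{(0)} = -iG , \qquad M_0^{(1)} = -iG , \qquad \ldots [/tex]

    with the *same* -iG on the right-hand side every time; what changes from one order to the next is how the bare g is fixed in terms of G.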

    Hope this makes sense

  16. Feb 4, 2007 #15
    I doubt very much that, to tree level at least, there is anything seriously wrong with the standard model.

    The problem is loops, which mostly diverge. HEP theorists have a gourmet's appreciation of different kinds of divergent integral, classifying them as "log", "quadratic", "quartic", etc. but the reality is that either an integral diverges or it does not. Reparametrising a quartic-divergent integral will get you a quadratically divergent integral and vice versa, and differencing divergent integrals just gives you an indeterminate value. A completely indeterminate value - it may be plus or minus infinity, it may be a (finite) constant, or it may be a finite function of any variables you could possibly think of. It is just completely indeterminate. Most often, HEP theorists choose a constant here, but, as you discovered, the mathematics does not require this. In effect, what they are doing is deciding in advance the kind of answer they want out of the calculation. It turns out that "reasonable" choices here accurately get us the Lamb Shift (despite the absence of a QFT description of the single-electron atom), the anomalous magnetic moment of the electron, and up till recently, the anomalous magnetic moment of the muon. But without a consistent mathematical substructure beneath, this success is bogus and serves only to confuse the issue.

    QFT text books should not try to give the impression that so-called "effective" field theory can be uniquely derived from first principles. It cannot. Their derivations, such as the one you cite, prove only that if one lowers mathematical standards sufficiently, one can prove practically anything.
  17. Feb 4, 2007 #16
    Thanks for your help!

    Yes, G for the renormalized coupling constant and "g" for the bare coupling constant

    Yes, I just realized that to get
    M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ]

    you have to use

    -iG =M(s0,t0,u0)=-ig + iCG² [ log(cutoff²/s0) + ... ]

    instead of -iG = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ] (4)

    I don't have the book here at the moment, but as far as I remember Zee doesn't stress that he "changes g into G" - or maybe I read over it.
    I haven't understood that issue in Ryder either: why the error involved by this change is of negligible order.
  18. Feb 4, 2007 #17




    If you read my other post it might make things more clear (where I write g as an expansion in G).

    You have a put your finger on an important point: why can we do this?
    Or, in other words, why is it legitimate to write g as an expansion in powers of G?

    Strictly speaking, this is nonsense, since the coefficients of the expansion are formally infinite! This is what bugs/bugged a lot of people. If there is a term alpha log(cutoff/m) in the QED expansion (m being some reference mass scale), the whole approach clearly only makes sense if alpha log(cutoff/m) << 1, i.e. cutoff << m e^(1/alpha) ≈ m e^137. (One can be much more rigorous than this when discussing convergence, but you get the main idea.)
    So, strictly speaking, if we let the cutoff go to infinity the whole thing makes no sense. The modern point of view is that any of the field theories we work with are low-energy effective field theories, so that the cutoff has a physical meaning: the scale at which our field theory is no longer a good description of nature. In that case, there is no problem with the whole procedure, since the cutoff should really not be taken to infinity.
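
    (Rough numbers, just for orientation: with alpha ≈ 1/137 and the electron mass as the reference scale,

    [tex] e^{137} \approx 3\times 10^{59} , \qquad m_e\, e^{1/\alpha} \sim 10^{56}\ \mathrm{GeV} \;\gg\; M_{\mathrm{Planck}} \sim 10^{19}\ \mathrm{GeV} , [/tex]

    so the scale where the QED expansion would break down sits absurdly far beyond anything physical.)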

    (Things are much worse in QCD of course since the coupling constant is not that small at most energy scales (and is of order one at low energy scales). This is what makes QCD so tough. )

  19. Feb 4, 2007 #18



    That is true, the relation between the bare and renormalized parameters depends on your choice of scheme. But that does not matter! The physical results of the theory will not depend on that choice of scheme (*up to a given order in the coupling constant*!)

    The key point is that any calculation of a physical quantity involves calculations of loop diagrams with bare parameters. There are therefore *two* sources of divergences: those coming from the loops themselves *and* those coming from the bare parameters expressed in terms of physical quantities (the renormalized quantities). The two divergences cancel, leaving a unique and well-defined result for the physical quantity being calculated.
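
    Schematically (my notation), for a divergence showing up as a pole in epsilon in dimensional regularization:

    [tex] \left[\frac{A}{\epsilon} + F_{\mathrm{loops}}\right] + \left[-\frac{A}{\epsilon} + F_{\mathrm{bare}}\right] = \mathrm{finite} , [/tex]

    where F_loops and F_bare stand for the finite pieces coming from the loops and from the bare parameters respectively; the finite sum is the physical prediction, independent of how you chose to split things between the two brackets.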

    Now, what are "renormalization schemes"? Well, it's just that people get lazy. For example, in dimensional regularization, divergences pop up in the form of 1/epsilon. If there is such a term in the bare parameters, of course the same term with the opposite sign will arise from the loops. So people invent this rule: whenever you see a 1/epsilon, just ignore it (the actual schemes used in dim reg are not that simple, but I just want to illustrate the idea)... don't write it down. So when they fix the bare parameters in terms of the renormalized quantities, they ignore all those terms and quote a relation between the bare and renormalized quantities in that scheme. Now, when doing a loop calculation with those expressions for the bare parameters, people must use the same rule when calculating the loops. So they ignore the 1/epsilon there as well, giving a finite and well-defined result for the physical quantity being calculated.

    Of course, you could adopt any rule you want, for example: whenever you do an integral that has a 1/epsilon divergence, always drop that term and add +5 to your diagram. As long as you follow the same rule everywhere, you will get the same result for any physical quantity as before. This sounds crazy, but in dim reg there are often combinations of certain constants that always appear grouped with the divergences, so one might as well drop them.

    Of course, all this is not necessary. One could simply calculate all the integrals completely in whatever regularization one is using and never talk about "schemes".

    As I said, a choice of scheme is totally unnecessary, and it confuses things a great deal. It is just something people do to make their lives easier, but there is no need at all to do it in principle.
  20. Feb 4, 2007 #19



    Hi Sunset.

    I don't want to dump too much information at once, so I will be quiet for a while (plus I need to prepare my classes!). But I will mention something briefly. There is a whole different aspect to this renormalization-scheme business, but in my opinion it's better not to pile it all on at once, as it tends to confuse the issues more than clarify them. Once what I explained so far is clear to you, we can get into the next level of things (if I have time to post... the next few weeks will be crazy in terms of teaching). Let me just mention it briefly so that at least you have heard of it.
    The whole business makes sense as long as the expansion in the coupling constant times the expressions f_1, f_2, etc. (which are formally divergent! but must be considered finite in the spirit of effective field theories) is valid. This is shaky in QCD. Now, in dimensional regularization, two parameters appear: epsilon (which has to be formally taken to zero at the end of a calculation) and a scale "mu". Divergences appear as inverse powers of epsilon. The mu appears in logs. After renormalization, they disappear, of course. But in QCD, people worry a lot about the convergence of the expansion. So they start assigning some physical meaning to "mu" (even though it formally disappears from any physical result). And they start looking at the relation between the bare and renormalized parameters (even though that formally plays no role in any calculation of a physical quantity). They look at what values of "mu" would make the terms of the expansion decrease faster, thereby improving the convergence of the expansion between the bare and renormalized parameters. This is a whole different can of worms that we can discuss later. But for now, I think it is important to set this aside and focus on the basic idea as I presented it in my previous posts.
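
    (In equation form, the statement that mu drops out of physical predictions is just, schematically,

    [tex] \mu\,\frac{dP}{d\mu} = 0 , [/tex]

    where P stands for any physical quantity (my notation); studying how the various pieces of a calculation must conspire to satisfy this is what leads to the renormalization group equations and the "choice of mu" business I just mentioned.)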


  21. Feb 4, 2007 #20
    Hey, thanks for all of your replies, I have to think about it for a while! What I ask myself spontaneously:

    You say I start with

    [tex] g= f_1 G + f_2 G^2 + f_3 G^3 + \ldots [/tex] ,

    and later it will drop out that f1=1 and f2 = C[log(cutoff²/s0) + ...], right?

    [tex] M = -i f_1 G + ( i C f_1 G )^2 ( ( log ( cutoff^2 / s ) ) + \ldots ) [/tex]

    or what do you mean?