
A Are there different "types" of renormalization

  1. Feb 17, 2017 #1

    ftr


    I see renormalization being discussed in many situations, and it is not very clear what unites them. For example, it comes up for the self-energy, then when integrals blow up at high energy (in scattering problems, I presume), or with IR problems (the opposite end).

    Then there are these "techniques" - the renormalization group, Wilson, Kadanoff - and it is not clear how, when, or whether they are used, and where.
    Please do not link to Wikipedia; I have noticed its articles are close to useless, apart from some of the references.
     
  3. Feb 18, 2017 #2

    bhobba

    Science Advisor
    Gold Member

    Yes - there are different ways - even ways to avoid it completely.

    There is the conventional way of shuffling infinities around (I hate it - it's not legitimate math IMHO).

    Then there is the BPH method which for me makes much more sense.

    They are equivalent, but as I said, I don't like or even really understand the conventional way.

    See:
    https://arxiv.org/pdf/1208.4700.pdf

    Thanks
    Bill
     
  4. Feb 18, 2017 #3

    A. Neumaier

    Science Advisor
    2016 Award

    Renormalization is a very general technique. It is applied in many different contexts (see, e.g., here) and has a priori nothing to do with removing infinities.

    There are two distinct (but related) types of renormalization procedures - those of Peterman-Stueckelberg type that preserve the content of the theory and lead to a renormalization group, and those of Kadanoff-Wilson type that average over high frequency degrees of freedom (integrating out high energy modes, blocking, etc.) and lead only to a renormalization semigroup (though the ''semi'' is usually suppressed). The difference is that one can reverse the RG transformation in the first case, but not in the second case since high frequency information has been lost.
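    To make the Kadanoff-Wilson (semigroup) type concrete, here is a toy sketch of my own (not from any of the linked references): block-spin decimation for the 1D Ising chain, where summing out every second spin gives the exact recursion tanh(K') = tanh(K)^2. The coupling flows irreversibly toward the high-temperature fixed point K = 0, and each step discards half the spins - steps compose forward, but no decimation step restores the lost degrees of freedom, which is the "semi" in semigroup.

```python
import math

def decimate(K):
    """One decimation (block-spin) step for the 1D Ising chain: summing
    out every second spin maps the coupling K to K', via the exact
    recursion tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

# the coupling flows monotonically toward the K = 0 fixed point
K = 1.5
for step in range(1, 7):
    K = decimate(K)
    print(step, round(K, 4))
```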

    You may also wish to look at my tutorial paper Renormalization without infinities - a tutorial and my Insight article Causal Perturbation Theory.
     
  5. Feb 18, 2017 #4

    ftr


    Thanks Arnold, bhobba. While I continue reading (trying to understand) all the references you gave, I have some sub-questions.

    1. Is there a different treatment of renormalization for the path-integral vs. operator vs. Schrödinger pictures?
    2. Is the high-energy cutoff prescription (sounds like a disease) there to cure the infinities of the integral, OR do the couplings and mass ACTUALLY change with the energy scale, which we have to take into account since interactions happen at all scales up to the energy of the particles (I hope I am clear)?
    3. Isn't it the case that renormalization indicates something was wrong from the beginning, where we put in the "physical" constants in the first place? Meaning there was something wrong with the original equations we set up, in whichever picture (they all seem to suffer the same fate).


    Edit: One thing that is also not clear to me: even when a coupling constant like alpha is calculated at high energy, it seems the STRENGTH of the interaction is hardly due to that correction. In other words, the strength of the interaction is not only charge dependent - is that true?
     
    Last edited: Feb 18, 2017
  6. Feb 18, 2017 #5

    bhobba

    Science Advisor
    Gold Member

    Ok.

    As far as I know, the electron charge is the strength of the interaction.

    The root cause of renormalization is the cutoff dependence of certain physical quantities like charge and mass. It is thought to be due to taking the continuum limit of a field. It doesn't occur in so-called lattice field theory (and you don't have things like virtual particles either, which is proof positive they don't exist - but that is another story) - but one is left with the embarrassing question, just as with a cutoff in ordinary QFT, of what lattice size to use:
    http://www.itp.uni-hannover.de/~lechtenf/Events/Lectures/wiese.pdf

    One can even use it as a regularization technique (see section 2.2):
    http://www.physics.umd.edu/courses/Phys851/Luty/notes/renorm.pdf
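    To see how the lattice acts as a regulator, here is a toy sketch of my own (not from the linked notes): the 2D lattice "tadpole" loop integral for a scalar field is finite at any lattice spacing a, but grows logarithmically as a → 0 - the spacing plays exactly the role of a momentum cutoff of order π/a.

```python
import math

def lattice_tadpole(a, m=1.0, N=200):
    """2D lattice loop integral ('tadpole') for a scalar of mass m:
    sum over the N x N Brillouin-zone momentum grid of 1/(k_hat^2 + m^2),
    normalized by (N*a)^2, where k_hat^2 is the standard lattice momentum
    squared. Finite for any spacing a; grows ~ ln(1/a) toward a -> 0."""
    total = 0.0
    for i in range(N):
        for j in range(N):
            # a*k/2 = pi*i/N on the momentum grid, so a cancels inside sin
            khat2 = (4.0 / a**2) * (math.sin(math.pi * i / N) ** 2
                                    + math.sin(math.pi * j / N) ** 2)
            total += 1.0 / (khat2 + m**2)
    return total / (N * a) ** 2

# finite at every spacing, but growing as the continuum limit is approached
for a in (1.0, 0.3, 0.1):
    print(a, lattice_tadpole(a))
```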

    Thanks
    Bill
     
  7. Feb 18, 2017 #6

    ftr


    I don't have much time now, but maybe later I will explain what I mean by my question.

    I am not clear on whether the mass and coupling PHYSICALLY change (depending on the momenta of the interaction) OR whether it is just calculational, because of the cutoff. I am not even sure my question makes sense.
     
  8. Feb 18, 2017 #7

    bhobba

    Science Advisor
    Gold Member

    They physically change - it's not just calculational. If you measure the charge at higher energy, it's larger than at lower energy. I read somewhere this has been confirmed experimentally. The renormalization group flow allows you to calculate what it would be at any energy scale.

    https://en.wikipedia.org/wiki/Renormalization
    The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under study in the first place; so finite measured quantities would in general imply divergent bare quantities.

    In order to make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams.

    The counterterm method is the BPH method I mentioned before. One writes the Lagrangian in two parts - the first part has the actual measured values at whatever energy scale you want to measure them at. You subtract that from the original Lagrangian, which contains the cutoff-dependent 'bare' quantities, to give a remainder called the counterterms. You then do the calculations with the first part just as you normally would - it still blows up, of course - but now you have these counterterms you can adjust to cancel the divergences, so the actual measured values appear in your equations.

    I very simply explain it here:
    https://www.physicsforums.com/insights/renormalisation-made-easy/

    From the above, you pick some U, measure it, and call it the renormalized quantity. Expand it in a Taylor series in the un-renormalized quantity, which has some cutoff so you aren't mucking around with infinity, and substitute. Behold - the cutoff disappears. That's basically all that's going on with BPH. When you subtract 2 from 1 in the above paper, you see the terms with the cutoff (which would go to infinity without the cutoff) cancel. So in BPH you get sneaky: you say beforehand that you want this to happen and adjust the counterterms to do just that.
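    Here is a numerical toy version of that bookkeeping (my own sketch, not from the Insights article): take a fake one-loop "amplitude" g0 + g0^2 ln(Λ/E), fix the renormalized coupling by a "measurement" at a reference scale μ, and solve for the cutoff-dependent bare coupling. The predicted amplitude at any other scale is then nearly independent of Λ; the residual drift is of higher order in the coupling.

```python
import math

def bare_coupling(g_ren, mu, cutoff):
    """Invert the toy one-loop relation g_ren = g0 + g0**2 * ln(cutoff/mu)
    for the bare coupling g0 (quadratic formula, physical root)."""
    L = math.log(cutoff / mu)
    if L == 0.0:
        return g_ren
    return (-1.0 + math.sqrt(1.0 + 4.0 * L * g_ren)) / (2.0 * L)

def amplitude(g0, cutoff, E):
    """Toy 'amplitude': diverges logarithmically if the cutoff is removed
    at fixed g0."""
    return g0 + g0**2 * math.log(cutoff / E)

g_ren, mu, E = 0.01, 1.0, 100.0  # coupling measured at mu; predict at E
for cutoff in (1e3, 1e6, 1e9):
    g0 = bare_coupling(g_ren, mu, cutoff)
    print(f"cutoff={cutoff:.0e}  g0={g0:.6f}  amplitude={amplitude(g0, cutoff, E):.6f}")
```

    The bare coupling g0 drifts as the cutoff changes, but the predicted amplitude barely moves: the cutoff dependence cancels to the order kept, which is exactly what adjusting the counterterms achieves.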

    Thanks
    Bill
     
    Last edited: Feb 18, 2017
  9. Feb 18, 2017 #8

    atyy

    Science Advisor

    No, Wilsonian renormalization does not necessarily lose information - asymptotic freedom and asymptotic safety are possibilities.
     
  10. Feb 18, 2017 #9

    atyy

    Science Advisor

    By the Thevenin and Norton theorems, one can reduce large circuits to simple ones. Are the large circuits being "physically" reduced to simple ones? Whatever answer is correct in that case is also correct for renormalization. Remember - the simplified theory is an effective theory, and renormalization says QED is an effective theory.
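    To make the analogy concrete, here is a minimal sketch (illustrative component values of my choosing) of replacing a voltage divider by its Thevenin equivalent: any load attached to the terminals cannot tell the two circuits apart, just as low-energy experiments cannot resolve the "full circuit" behind an effective theory.

```python
def thevenin_divider(V, R1, R2):
    """Thevenin equivalent (Vth, Rth) of a source V behind series R1 with
    R2 to ground, seen from the R2 terminals."""
    Vth = V * R2 / (R1 + R2)   # open-circuit voltage
    Rth = R1 * R2 / (R1 + R2)  # source suppressed: R1 parallel R2
    return Vth, Rth

def load_current_full(V, R1, R2, RL):
    """Load current computed in the full circuit: RL parallel R2, fed through R1."""
    Rpar = R2 * RL / (R2 + RL)
    return V * Rpar / (R1 + Rpar) / RL

def load_current_equiv(Vth, Rth, RL):
    """Load current predicted by the two-element effective circuit."""
    return Vth / (Rth + RL)

V, R1, R2, RL = 10.0, 1000.0, 2000.0, 500.0
Vth, Rth = thevenin_divider(V, R1, R2)
print(load_current_full(V, R1, R2, RL), load_current_equiv(Vth, Rth, RL))
```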
     
  11. Feb 19, 2017 #10

    A. Neumaier

    Science Advisor
    2016 Award

    These cases only reduce the loss to zero in the limit where the renormalization parameter approaches a given value (which may be infinity). At all other points on the renormalization trajectory, one has a loss.
     
  12. Feb 19, 2017 #11

    A. Neumaier

    Science Advisor
    2016 Award

    1. No, though what it looks like may differ. This is because there are many specific ways to carry out the renormalization, but all lead to the same final results at fixed loop order.

    2. The physical mass and charge are not parameters of the theory but things computed from the theory that must match experimental values. The cutoff dependence of the bare parameters is needed because, no matter what the physical mass and charge are, the true interaction coefficients are infinitesimally small and cannot be resolved without rescaling them by an infinite amount. This is what happens during renormalization. In nonrelativistic theories there is no such problem and all couplings and renormalizations are finite - though still necessary to get a sensible thermodynamic limit afterwards.

    3. What is wrong from the beginning is that it is assumed that the Hilbert space of the interacting theory is the same as that of the free theory. That this is not the case (Haag's theorem) is the root of the infinities in the first place.

    ad Edit: All physical parameters (which in particular determine the interaction strength) are set not by the cutoff-dependent parameters but by the renormalization conditions imposed.
     
  13. Feb 19, 2017 #12

    ftr


    Being an EE myself, I worked on those problems for a long time, even designing digital circuits. I understand your point; however, in the circuit case (and much other engineering) we know the physicality of the components, then we stick a bunch of them together so that we get a certain output for a certain input. Then we play around until we find the "best solution/equivalent" (optimized with software these days). Then we essentially treat the system as a black box.

    When I was doing my Master's, I impressed my adviser by debugging a problem with an electronics box that controlled a motor I was working with (I was only responsible for analyzing the motor's behavior). All the numbers on the chips had been wiped off because the system was "patented" (somebody thought that was prudent!). But my knowledge of electronics was enough to figure out the box, since I already knew the input and the output.

    The problem in physics is similar, except we don't know the inner workings; we are trying to figure them out from input and output (experiments) and some clever guessing, since we don't live in that realm. Guess what - this time I am trying to debug the black box of reality. :biggrin:
     
    Last edited: Feb 19, 2017
  14. Feb 20, 2017 #13

    A. Neumaier

    Science Advisor
    2016 Award

    This means that the effective charge changes. But the number ##e## in the renormalization condition determining the interaction remains fixed. For QED, there is just a 2-parameter family of QED theories, one for each fixed positive choice of ##e## and ##m##.
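    The change of the effective charge with the probing scale can be sketched with the one-loop running of the QED coupling. This is my own illustrative simplification with a single charged fermion - real QED switches on quark and lepton thresholds as the energy grows, which is why the measured value at the Z mass is closer to 1/128 than this toy formula gives.

```python
import math

ALPHA_0 = 1 / 137.035999  # measured fine-structure constant at low energy

def alpha_running(Q, mu=0.000511, n_f=1):
    """One-loop QED running coupling with n_f unit-charge Dirac fermions
    active between the reference scale mu and Q (both in GeV).
    Illustrative only: real QED sums over charged-particle thresholds."""
    log = math.log(Q / mu)
    return ALPHA_0 / (1 - (2 * n_f * ALPHA_0 / (3 * math.pi)) * log)

# the effective charge grows with the probing energy
for Q in (0.000511, 1.0, 91.19):  # electron mass, 1 GeV, Z mass (GeV)
    print(f"1/alpha({Q} GeV) = {1 / alpha_running(Q):.2f}")
```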
     
  15. Feb 20, 2017 #14

    A. Neumaier

    Science Advisor
    2016 Award

    That's quite a bit more complex, though! Moreover, you are inside this box, which further complicates things - at least if you want to treat the black box as a quantum system!
     
    Last edited: Feb 20, 2017
  16. Feb 20, 2017 #15

    ftr


    I am like you, Arnold: I think I know what is going on, but like you I am adamant about making sure. :smile:

    Back to the thick of it. I was reading this reference, which I found from your links (thank you for that):

    https://arxiv.org/pdf/hep-th/9602024.pdf

    I have a few questions, if you don't mind:
    1. On page 14 he says that Weinberg proposed the running mass. Why is that? How come they did not have this problem before?
    2. The review is nice but not well translated. Is there a similar but modern review?
    3. It's amazing that after all these years the relation between the different approaches is still not clear (debatable) among the experts.

    Thanks
     
  17. Feb 20, 2017 #16

    A. Neumaier

    Science Advisor
    2016 Award

    1. At that time, the RG was not in the mainstream.
    2.+3. Most people aren't really interested and blur the distinction, so the topic is not often discussed.
     
  18. Feb 20, 2017 #17

    ftr


    I am asking why mass was not treated as running before that. Also, is there a difference in the renormalization for elastic vs. inelastic scattering? Thank you.
     
  19. Feb 21, 2017 #18

    A. Neumaier

    Science Advisor
    2016 Award

    He says on p. 14 that it was already known in the mid-1950s; Weinberg gave it a catchy name in the context of the RG.
    No. The latter only refers to which states you pick to evaluate S-matrix elements.
     