
Does renormalization mean discarding corrections to a known constant?

  1. Apr 24, 2009 #1
    Dear experts,

    Does renormalization mean discarding corrections to a known constant?

    I mean, we assign a known value to the electron mass or charge, whatever, in the zeroth order of perturbation theory, for example in QED. In the next order we obtain a correction to this value (a finite one, for instance). AFAIK, renormalization means replacing the known constant plus the correction with the same known constant. To justify this, one says that the constant is no longer the known value but a "bare" one, and that only the sum is observable.

    In my opinion, this replacement may be understood, without ambiguity, as discarding the correction and keeping the known value intact. Is that so?


  3. Apr 24, 2009 #2
    Renormalization in QED is a bit more complicated than just a manipulation of constants (the electron's mass and charge).

    When we calculate the QED S-matrix with the original Hamiltonian in high perturbation orders, we find that the matrix elements (scattering amplitudes) come out infinite. The deep reason for this is that the QED Hamiltonian contains self-interactions of particles. For example, a single electron interacts with itself, so that the scattering amplitude of the innocent process electron -> electron is infinite already in the 2nd perturbation order.

    Renormalization theory then says that the original QED Hamiltonian is bad and should be replaced by another Hamiltonian with additional "counterterms". The actual form of these counterterms is fixed by imposing two physically reasonable conditions. The first (mass renormalization) condition states that the scattering amplitude of a single particle (e.g., an electron) should be exactly the same as if it were a free, non-interacting particle. The corresponding counterterms then cancel all self-interaction effects in the S-matrix. The second (charge renormalization) condition states that in the low-energy limit the scattering amplitude of two charged particles should tend to the value known from classical electrodynamics. For example, this amplitude should be proportional to the charge squared, e^2, and should not receive any higher-order contributions, e.g. in e^4.
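    In rough symbols (my own shorthand, not the poster's notation), the two conditions can be written as:

```latex
% mass renormalization: a single electron propagates as if free
\langle \mathbf{p}\,|\,S\,|\,\mathbf{p}'\rangle \;=\; \delta^{3}(\mathbf{p}-\mathbf{p}')
\qquad \text{(one-electron states)},

% charge renormalization: the low-energy amplitude is the classical one
\mathcal{M}\big|_{k\to 0} \;\propto\; e^{2}
\qquad \text{(no residual } e^{4},\, e^{6}, \dots \text{ pieces at this point)}.
```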

    It appears that there are three types of counterterms needed to satisfy these conditions. Counterterms of one type are designed to cancel infinite contributions from electron self-energy loops in Feynman diagrams. The other counterterm type cancels divergences due to photon self-energy loops. Counterterms of the third type cancel divergences in vertex loops. The counterterm interaction operators are formally infinite, so in order to perform calculations we need to introduce artificial cutoffs in momentum integrals. This is called regularization. In the end, the cutoffs should be removed, and, hopefully, all scattering amplitudes should remain finite in this limit.

    After adding these counterterms to the QED Hamiltonian, one finds that all divergences in the S-matrix are indeed cancelled by the counterterms, and this cancellation holds in all perturbation orders. However, the counterterms do not exactly cancel the contributions of the loop integrals. Some residual (finite) terms remain in each perturbation order. These terms are called "radiative corrections". They appear as remainders of infinity-minus-infinity subtractions, and their calculation is the major task of renormalization theory. The calculation of these radiative corrections in high perturbation orders is what made renormalized QED the most accurate physical theory ever proposed.
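    Schematically (my sketch, not the poster's), each divergent loop is paired with an equally divergent counterterm, and only their finite difference survives:

```latex
m_{0}(\Lambda) \;=\; m_{\mathrm{phys}} + \delta m(\Lambda), \qquad
\Sigma_{\mathrm{loop}}(\Lambda) \;-\; \delta m(\Lambda)
\;\xrightarrow[\;\Lambda\to\infty\;]{}\; \text{finite radiative correction},
```

    with both $\Sigma_{\mathrm{loop}}(\Lambda)$ and $\delta m(\Lambda)$ diverging individually as the cutoff $\Lambda$ is removed.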

    As far as I know, there is no explanation of why this messy renormalization process actually works. It leads to very accurate predictions, but the reason for this success is rather mysterious. Another problem is that by adding counterterms to the Hamiltonian of QED, this operator becomes formally infinite (in the limit of removed cutoffs). This is the high price we pay for obtaining a finite and accurate S-matrix. Without a well-defined finite Hamiltonian, renormalized QFT is not a complete and self-consistent theory. For example, in QFT it is impossible to evaluate the time evolution of states and observables. Bound states and their energies cannot be obtained (as in ordinary quantum mechanics) by diagonalizing the Hamiltonian.
  4. Apr 24, 2009 #3
    Thank you, Eugene, but I know all that. That is why I proposed an explanation and a solution in my "Reformulation instead of Renormalizations".

    My question is kind of Yes-or-No question. I would like to learn the answer.

  5. Apr 24, 2009 #4


    Science Advisor

    Unless I'm mistaken, is this not exactly what renormalization is on the formal level?
    Shifting the constants to have infinite values corresponding to divergences.

    Not to nitpick, but the condition is that it should have a pole at the physical mass. The two-point function will be very different from that of the free field theory. Unless I've misunderstood you and something else is meant.
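    For reference, the pole condition alluded to here is the standard textbook one (my notation):

```latex
S_F(p) \;\sim\; \frac{i\,Z_2}{\slashed{p} - m_{\mathrm{phys}} + i\epsilon}
\qquad \text{as } p^{2} \to m_{\mathrm{phys}}^{2},
```

    with the full two-point function differing from the free one away from the pole.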

    Actually, renormalization makes it rigorously finite in the limit of removed cut-offs. See the lovely paper of Glimm and Jaffe, "Infinite Renormalization of the Hamiltonian is necessary".
  6. Apr 24, 2009 #5


    Science Advisor

    I think perturbative renormalization, on the physical level, is the process of working out, order by order, what the bare mass should be so that the physical mass comes out at the value you know from experiment. It doesn't really involve ignoring things.
  7. Apr 24, 2009 #6
    The question is about equivalence of renormalizations (whatever ideology is used) and discarding corrections. Does anybody (an expert) know the answer?

  8. Apr 24, 2009 #7
    There is no bare mass in the zeroth order; only an observable mass m (it can be taken equal to 1). (The same for the electron charge.) There is a correction to a known fundamental constant in the next PT order. It is discarded and the result (a scattering amplitude, an energy level, whatever) becomes better than in the zeroth approximation. Is it so?

    Last edited: Apr 24, 2009
  9. Apr 24, 2009 #8


    Science Advisor

    No, I don't believe so. In the theory there is the bare mass, which must be a function of the cutoff so that the correlation functions do not vanish identically. The problem is: what function of the cutoff? Usually you use the renormalization conditions and perturbation theory to figure this out, and it so happens that at zeroth order it equals the physical mass. At the next order you see it's the physical mass plus some function of the physical mass, and so on. However, you're not discarding anything; you're just working out the bare mass.
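    In symbols (my sketch of what "working out the bare mass" order by order means):

```latex
m_{0}(\Lambda) \;=\; m_{\mathrm{phys}}
\;+\; e^{2}\, a_{1}(m_{\mathrm{phys}}, \Lambda)
\;+\; e^{4}\, a_{2}(m_{\mathrm{phys}}, \Lambda)
\;+\; \cdots,
```

    where the coefficients $a_{n}$ are fixed order by order by the renormalization conditions, and each diverges as the cutoff $\Lambda$ is removed.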
  10. Apr 24, 2009 #9
    You are right that usually one imposes two conditions on the electron propagator in the renormalized theory: The propagator must have a pole at the physical electron mass, and the residue must be equal to 1. However, propagators are not physical quantities. They are just abstract factors in Feynman integrals. So, I thought it would be more helpful to express the mass renormalization condition through something more directly related to the experiment. In any realistic theory of scattering, a free electron should propagate without (self-)interaction. This means that the matrix element of the S-operator between two 1-electron states should be <p|S |p'>= delta(p-p'). This simple condition is violated in the naive QED. Its validity is restored by adding renormalization counterterms to the interaction Hamiltonian.

    Thanks for the reference. I'll check it out. Perhaps you can explain briefly how the Hamiltonian of renormalized QFT can be finite? The counterterms are divergent, right? So, by adding the divergent counterterms to the original Hamiltonian, we make the full Hamiltonian divergent too. Right? Thanks.
  11. Apr 24, 2009 #10
    Hi Bob,

    Most certainly, I misunderstood your question. Reading your post, I got the impression that you had reduced the renormalization procedure to a trivial adjustment of the parameters m and e in each perturbation order. I attempted to tell you that the actual renormalization steps are more complicated. But you already know that, so I'm not sure what the question was.
  12. Apr 25, 2009 #11
    People first encountered mass "corrections" in the classical electrodynamics of the self-acting electron (Lorentz, Abraham). As I showed in "Reformulation instead of Renormalizations" (see the Introduction, formula (I3)), the mass renormalization there meant discarding the correction to the known value and postulating the resulting new equation as the dynamical equation.

    The QED Hamiltonian uses the same self-interaction ansatz, so the same problem appears. In the Feynman-Schwinger QED the corrections to the fundamental constants were discarded too.
    The notion of a bare mass (or charge) is a way to make the discarding look like a legitimate prescription. But we all agree that in the zeroth order (in the initial and final states of a scattering problem, for example) the values of the fundamental constants are the experimental ones. Many features are built into the zeroth-order solutions and their Lagrangian to describe the experimental properties: Lorentz invariance, gauge invariance, classical limits, CPT, spin-statistics, propagator poles at the physical parameters, etc.

    Then the theory is constructed so that in the next order some "corrections" to these fundamental constants appear (self-action effects). Even if these corrections were small, the next-order solutions would not agree with experiment, since the new mass and charge would differ from the experimental values. So these corrections are discarded and the initial fundamental constants are kept intact. Since this discarding is obviously not legitimate, the notion of "bare" constants that "absorb" the corrections was invented. There are many a posteriori ways to "justify" the renormalizations: the effective-field approach, counter-terms, the R-operation, etc. They are nothing but ways of saying that it is the mass that is wrong, not the corrections to it. (Poor guilty mass: it is wrong, but we know how to "doctor" it.)

    I state that with the same success we can say that the fundamental constants are right and the corrections to them are wrong, and that this is the reason for discarding them. They are simply not necessary. This prescription is absolutely equivalent to the renormalization results, isn't it?

    Last edited: Apr 25, 2009
  13. Apr 25, 2009 #12


    Science Advisor

    That is true, of classical electrodynamics.

    The same problem does not occur; rather, a different one does: the multiplication of fields at the same point, a distribution-theoretic problem.

    That's only true for theories where the fields are directly related to the particles. It is not true in QCD, for instance, where the bare mass of the Dirac (quark) fields is at no order equal to the experimentally measured mass of any particle in the theory, such as the hadrons or mesons.

    No, renormalization is basically looking at your Lagrangian and seeing that it contains distributions being multiplied. From distribution theory you know that well-defined powers can be constructed. It turns out that for QFT the well-defined powers are just the old undefined powers with "infinite constants" in front. We call these the bare mass, etc., according to the power they multiply. It's difficult to work out these powers, so we do it order by order, using certain conditions to help us. The physical mass is a parameter of the theory used as such a condition. However, at no point is anything discarded.
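    The simplest instance of such a "well-defined power" is the Wick square (my example, not the poster's):

```latex
{:}\varphi^{2}(x){:} \;=\; \varphi^{2}(x) \;-\; \langle 0|\,\varphi^{2}(x)\,|0\rangle,
```

    where the subtracted vacuum expectation value is the "infinite constant" multiplying the identity.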
  14. Apr 25, 2009 #13
    I do not say that you discard something. I say I keep the fundamental constants intact and discard corrections to them. Is it equivalent to your results? Yes or no question.

  15. Apr 25, 2009 #14


    Science Advisor

    Well, if the bare mass were kept equal to the physical mass and all its corrections at higher powers of the coupling were discarded, the theory would not exist, which is very different from the conventional approach, where it does exist.
  16. Apr 25, 2009 #15


    Science Advisor

    Oh, that's fine; I didn't see what you meant at first.

    It sounds strange, all right. Basically, the original Hamiltonian is totally divergent as an operator on Fock space. It is not densely defined, self-adjoint, or semi-bounded, because powers of the field, being distributions, are undefined. The renormalized Hamiltonian is not, either because the divergences of the field powers are cancelled by the counterterm divergences, or because the counterterms move you to a non-Fock space where the Hamiltonian is defined.
    See page 4 of the thread "Unbounded operators in non-relativistic QM of one spin-0 particle", located at https://www.physicsforums.com/showthread.php?t=304711&page=4 . I've tried to give an explanation there.
    Last edited by a moderator: Apr 24, 2017
  17. Apr 25, 2009 #16
    Your statement is wrong. Apart from the corrections to the fundamental constants, there are other corrections to the solutions. I discard only those that "correct" the fundamental constants, so a nontrivial remainder survives the discarding. I do not discard all higher-order contributions, and I do not stay within the zeroth approximation of the field-equation solutions.

    Let us make it evident. Let us take a QED Lagrangian without counter-terms, and let us put the mass m = 1 and the charge e = 1 in the zeroth order. That is our guess for the Lagrangian, and those are our units.

    In the next order I obtain a finite correction to the mass, say 3: 1 + 3. I see that in a weighing experiment my electron now weighs more than 1 g. I do not like that, so I discard the addendum 3 from this expression. My prescription is thus 1 + 3 -> 1.

    You say that in the expression 1 + 3 the value 1 is not 1 anymore but uncertain (?). To fix it, you require ? + 3 = 1, which gives the same result as my prescription. We obtain the same amplitudes and energies because we have returned to the right constant values, and nothing else is involved in the renormalizations. In your approach, your value (?) changes from order to order so that the sum is always 1. In my approach, I eliminate the unnecessary addenda to the right phenomenological constants. We obtain the same solutions (with 1 for the mass, for example).
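    The arithmetic of the two bookkeeping schemes in the numbers above can be lined up side by side (a toy sketch of my own, nothing more: m_phys and delta simply stand for the "1" and the "3"):

```python
# Toy comparison of the two prescriptions from the 1 + 3 -> 1 example.
# m_phys = 1 is the experimentally known mass; delta = 3 is a hypothetical
# finite first-order "correction" generated by the self-action.

m_phys = 1.0
delta = 3.0

# Prescription A (discarding): keep the known constant, drop the correction.
m_A = m_phys                      # 1 + 3 -> 1

# Prescription B (bare constant): demand m_bare + delta = m_phys.
m_bare = m_phys - delta           # the "?" solving ? + 3 = 1
m_B = m_bare + delta              # reassemble the first-order mass

# Both schemes hand the same number to every subsequent formula.
assert m_A == m_B == m_phys
print("mass used in observables:", m_A)
```

    At this toy level the two prescriptions are numerically indistinguishable; the disagreement in the thread is about whether the full theory (with divergent, order-dependent corrections) survives prescription A.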

    Of course, these complications are due to self-action ansatz. I managed to reformulate the theory in terms of "interaction" ansatz, but it is another story.

    I would like to make it evident that the constant renormalizations can be understood as discarding the extra addenda to the right fundamental constants within the framework of the original Lagrangian.

    Last edited: Apr 25, 2009
  18. Apr 25, 2009 #17


    Science Advisor

    It is those additional terms in the expression for the bare mass that allow the theory to exist. Without them the theory does not exist; this has been rigorously proven.

    If what you are discussing is looking at the expansions of the correlation functions and discarding the infinities which you know to be related to the values of the constants, you will still not get the right results, because certain finite parts of the final renormalized expression arise from the specific cancellations between the expression for the bare constants at that order and the correlation functions.
  19. Apr 25, 2009 #18
    So your answer is "no". But in my opinion the "specific nature of cancellations" in your approach and in mine gives absolutely equivalent "renormalized" constants. Then we both obtain the same finite (improved) solutions in each higher order.

    Can anybody else express himself on this subject?

    Last edited: Apr 25, 2009
  20. Apr 25, 2009 #19


    Science Advisor
    Homework Helper
    Gold Member

    From an operational point of view, yes that's correct (as long as it is done very carefully).

    From a conceptual point of view, what really happens is that there are parameters in the Lagrangian which are, a priori, completely arbitrary. The only way to find out what they are equal to is to do some computation and compare the result to some physical quantity. This must be repeated, order by order, in the loop expansion. This is what renormalization accomplishes.
  21. Apr 26, 2009 #20
    Does this take into account the work of Connes, Kreimer, and Marcolli on the Hopf algebra of the renormalization group?
  22. Apr 26, 2009 #21
    I've tried to understand their paper, but going on what I can understand at the moment, I don't think they've solved anything about renormalisation yet. They provide a way to renormalise systematically, provided you can sum the 1PI diagrams. Personally, I do not find that a helpful conclusion, since even the summation of the 1PI diagrams leads to UV divergences.

    Incidentally, on the original discussion, I would like to point at Ken Wilson's Nobel speech as an excellent intuitive take on what renormalisation is doing.
  23. Apr 26, 2009 #22

    Do I understand correctly that in the Wilsonian approach QED is not considered a self-consistent closed theory? One cannot take the limit of infinite momentum cutoff, because of "space-time granularity", the non-point nature of particles (strings?), or some similar, as yet unknown, short-distance behavior. This seems to be consistent with my statement that the Hamiltonian of renormalized QED becomes infinite in the infinite-cutoff limit.
  24. Apr 26, 2009 #23
    Thank you, NRQED, for your positive answer. The discarding of corrections was practised under the name "renormalization on the go". It boils down to the requirement that the renormalization factors (like Z, Z_1, etc.) are equal to 1. It is briefly described in "Quantum Electrodynamics" by Berestetskii, Lifshitz, and Pitaevskii. It has some practical conveniences.

    Of course, such discarding is not legitimate mathematically, and much effort has been spent to "justify" it. We know how many different "concepts" have been proposed to "justify" the renormalizations. But the only working "justification", or the real reason for practising it, is the good agreement with experiments.

    That is one of the "concepts". It rejects the original Lagrangian from the very beginning: each term in it is declared wrong, and to "cure" it one must add counter-terms of the same (wrong) structure. The fundamental constants are just fitting (bare) parameters, and their values change from order to order. It gives the same renormalized solutions, but that is the only consolation. The logic and the mathematics are not satisfactory. No wonder all that does not help in the non-renormalizable theories, for example in quantum gravity.

    Many years ago (in the 1980s) I encountered divergent perturbative corrections in a simple problem. The perturbation "potential" V(z) was proportional to the Dirac delta-function squared, so the matrix elements were clearly divergent. (This is similar to the QED divergences seen in the coordinate representation: products of distributions.) I could have made a "regularization" and performed the renormalization procedure, but fortunately I found the physical reason for the appearance of such a singular potential, and I managed to reformulate the whole problem in terms of a better initial approximation. The perturbation series became finite from the very beginning, as it should be. I could compare my perturbative results with the exact solution, since the problem admitted one. I published my results in academic journals of the former USSR.

    Since then I have been sure that the problems in QED are of the same nature: due to too poor an initial approximation. Finally I found a better approximation and reformulated the theory in such a way that no divergences appear. I answered all the questions that puzzle people in QED. In order to explain the mathematical and physical reasons for the appearance of perturbative corrections to the fundamental, known constants, I reduced the problem to its essential part and uploaded it to arXiv. There you can find the answer to why the renormalizations "work". I hope I made the story sufficiently transparent that anyone who knows classical mechanics and quantum mechanics can easily find the answers. You will see why corrections to the fundamental constants appear, when they are legitimate and when they are not, and how to reformulate the theory of interacting things in a correct way.

    Briefly, in my approach the distribution product in QED is not singular because of a natural electron form-factor: in my initial approximation the electron charge is no longer point-like but is smeared quantum mechanically due to the coupling to the quantized electromagnetic field. The electron and the quantized EMF together form an "electronium", whose mathematical structure is similar to the atomic one. We all know that the negative and positive (!) charges in an atom are not point-like but smeared. I called the corresponding compound system in QED an "electronium". In other words, I managed to take the coupling into account exactly in the zeroth-order (initial) approximation of the field-equation solutions. I think it is the most physical approach to describing interacting fields. The theory now resembles a potential interaction of compound systems ("fermioniums") in ordinary quantum mechanics.

    For those who do not get bored with reading simple articles, I provide the references:
    “Reformulation instead of Renormalizations”, arxiv:0811.4416
    and “Atom as a “Dressed” Nucleus”, arxiv:0806.2635 by Vladimir Kalitvianski. The latter was published in the Central European Journal of Physics, V. 7, N. 1., pp. 1-11, (2009) (Russian publications are not available in the West).

    You will not regret reading them. I am very interested in discussing my ideas and solutions.
    I am also interested in getting a position in academia to be able to finalize my research.

    I thank you all for your participation in this thread.


    Last edited: Apr 26, 2009
  25. Apr 27, 2009 #24
    Dear Strangerep,

    Thank you for your letter. I received it as an e-mail. Surprisingly it is not visible in the forum!

    Thank you for your suggestions on my writing style. It is not perfect, I admit, and it is not my purpose to offend anyone. My style is explained partially by my lack of a "good education" and partially by the lack of feedback from scientists who could have made suggestions at an early stage of writing. Any suggestions are welcome anytime.

    Concerning the "classical" rather than "quantum" consideration, I did that on purpose. Historically, I first worked it out in QM and then reduced it to a simple CM problem to make the issue as explicit as possible. If you have the patience to finish reading my papers despite my bad style, you will find it justified, I hope. The CM approach is the simplest, most comprehensible, and unambiguous. It reveals the problem of the appearing corrections in the most transparent way. The transition to QM is then elementary.

    My "dressing" is somewhat different from the approaches known in the literature; I hope you will catch the difference quickly. My dressing is an ansatz (an insight) based on the atomic analogy. In 1985 I found that any atom has a positive-charge form-factor and inelastic scattering channels in large-angle scattering, and I quickly realised that this was the physical and technical solution of the QED conceptual problems. I published it first as a preprint of the Sukhumi Institute of Physics and Technology, then as an article in the Ukrainian Physical Journal, and recently as an article in the CEJP (2009). You will see that the problem of infrared and ultraviolet divergences is eliminated at one stroke by using a better initial approximation for the solutions: one more correct physically and closer to the exact solutions mathematically.

    Please, do not take my bad style too personally. Just try to extract the useful information.

    Sincerely yours,

    Last edited: Apr 27, 2009
  26. Apr 27, 2009 #25


    Science Advisor

    I changed my mind and decided that some of my suggestions were not appropriate
    in a public forum, so I deleted my posting. But you're obviously subscribed to this thread
    so you got it immediately by email before I deleted it.

    Hmmm. Your paper arxiv:0811.4416 is 26 pages, most of which is classical non-relativistic
    motivation for your "trial relativistic Hamiltonian" for "novel QED" in your eq(60) on p25.
    Your Hamiltonian contains a free part in terms of what you call "electronium" or "fermionium",
    expressed in terms of center-of-inertia (CI) momentum P, together with an interaction
    potential expressed in terms of CI coordinates R and modified relative coordinates
    r which involve both R and the electric field E.

    In other approaches, sometimes called "unitary dressing transformation", one tries to
    deduce a better Hamiltonian in terms of composite fields (so-called "dressed particles",
    although I don't like that term), using certain criteria about the desired new Hamiltonian
    to guide one's choice of transformation. (References: M.I.Shirokov in arxiv:nucl-th/0102037,
    E.V.Stefanovich in arxiv:hep-th/0503076, and other references therein.)

    These other approaches differ from yours in that they start from standard relativistic
    QED/QFT, and then deduce perturbatively what the "correct" modified Hamiltonian should
    be (but only up to low order). In contrast, you use classical non-relativistic arguments to
    motivate a new QED Hamiltonian eq(60) as an ansatz. Apparently, you claim this is the
    exactly correct QED Hamiltonian? I.e., not merely a low order approximation?
    If so, such a claim is not substantiated in your paper. You only say that "no infrared
    divergences arise" and "[...]No ultraviolet divergences arise either [...]", but without
    giving details. To substantiate such a large claim requires much more extensive treatment,
    quantizing the theory of which eq(60) is a starting point, showing that it is
    relativistically acceptable in every way, deriving scattering formulae for well-known
    cases, including the anomalous magnetic moment to high order, all done in a rigorous way
    (not merely qualitatively). If such treatment and calculations already exist, they should be
    translated into English and put on the arxiv.

    You might also consider discussing your ideas on the Independent Research forum,
    which is where some of this discussion belongs, in my humble opinion.
    (If you decide to do this, be sure to read the guidelines at the top of that forum,
    explaining the necessary format of the postings there.)