Renormalization with cutoff

  • #1
RedX
The cutoff method used for regulating divergences amounts to not integrating, in the path integral, over field configurations whose Fourier momentum is greater than the cutoff. However, later on the cutoff is taken to infinity, so in fact we do integrate over all field configurations! The divergences are removed by writing the original bare couplings in terms of experimentally measured couplings, and one consequence is that the experimentally measured couplings change with the energy of the experiment (a change governed by the renormalization group equation).

Do the bare couplings change with respect to the cutoff? Or, since the cutoff is going to be taken to infinity anyway, is this a question that doesn't need to be asked?

In the effective-field-theory approach, there is also a cutoff for the path integral. However, the cutoff is not taken to infinity at the end - it really is a cutoff. As a consequence, loop momenta are only integrated up to the cutoff and not to infinity, so there are no divergences in this approach at all. Another consequence is that all bare couplings are finite. These bare couplings change according to exactly the same beta function as the physical couplings discussed earlier (when the cutoff is taken to infinity), except that the derivative is with respect to the cutoff rather than the energy of the experiment.

What is the relationship between the energy of an experiment and the cutoff used in an effective-field-theory approach, such that they obey the same equations? Surely the coupling constants must have different initial values: there is a real difference between integrating to infinity and integrating to the cutoff, so the couplings must differ in order to give back the same answer?
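To make "governed by the renormalization group equation" a bit more concrete, here is a minimal numerical sketch (my own toy illustration, not from any reference): it integrates the one-loop equation dg/d(ln mu) = -A g^3 between two scales. In the effective-field-theory reading, the same integration would be carried out in ln(Lambda) for the bare coupling instead of in ln(mu) for the measured one.

[code]
# Toy sketch: one-loop running of a coupling with the log-scale t = ln(mu).
# The ODE is dg/dt = -A g^3; the coefficient and starting values are assumed
# for illustration only.
import math

A = 7.0 / (16.0 * math.pi ** 2)  # assumed one-loop coefficient (QCD-like)

def run_coupling(g_start, mu_start, mu_end, steps=10000):
    """Integrate dg/d(ln mu) = -A g^3 from mu_start to mu_end with Euler steps."""
    t, t_end = math.log(mu_start), math.log(mu_end)
    dt = (t_end - t) / steps
    g = g_start
    for _ in range(steps):
        g -= A * g ** 3 * dt
    return g

# Assumed "measurement": g = 1.2 at mu = 91 GeV; run down to mu = 10 GeV.
g_numeric = run_coupling(1.2, 91.0, 10.0)
# Closed form from the one-loop solution: 1/g^2 = 1/g_start^2 + 2*A*ln(mu/mu_start)
g_exact = (1.2 ** -2 + 2 * A * math.log(10.0 / 91.0)) ** -0.5
print(g_numeric, g_exact)  # the two agree to good accuracy
[/code]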
 
  • #2
RedX said:
...Do the bare couplings change with respect to the cutoff?

If you look at the relationship between the bare and physical constants, it depends on the cut-off L. One obtains the Landau pole, for example.

But you can look at the constant renormalizations as discarding perturbative corrections to the fundamental, physical values of the constants. Then the L-dependence is contained in the terms being discarded.
 
  • #3
Bob_for_short said:
If you look at the relationship between the bare and physical constants, it depends on the cut-off L. One obtains the Landau pole, for example.

Right, so the physical constant cannot change with the cutoff, but can change with the energy scale.
If there is a relationship between the physical constant, the cutoff, and the bare constant, then since the physical constant does not depend on the cutoff, the bare constant must depend on the cutoff.

So the physical constant depends on the energy scale, and the bare constant depends on the cutoff? Are both dependences of the same form, i.e., is there a beta function for the bare constant in terms of the cutoff that is the same as the beta function for the physical constant in terms of the energy scale?

As for the Landau pole, I've got a question about that too.

The beta function for QCD is given by:

[tex]\beta(g)=-\frac{7}{16 \pi^2}g^3=-Ag^3 [/tex]

which integrates to:

[tex]g(t)=\left(\frac{1}{2At+C}\right)^{1/2}[/tex]

where C is an arbitrary constant and [tex]t=\ln\mu[/tex], where [tex]\mu[/tex] is the energy scale. Therefore there is a Landau pole at [tex]t=-\frac{C}{2A} [/tex]. To determine C, someone needs to measure the coupling [tex]g_P [/tex] at some energy [tex]\mu_P [/tex]. C ends up being:

[tex]C=\frac{1}{g_{P}^{2}}-2A \ln(\mu_P) [/tex]

Once you have C, taken from a measurement, you can calculate the Landau pole to be [tex]\mu \approx 0.2\ \mathrm{GeV} [/tex].
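For what it's worth, here is a rough numerical version of that extraction (my own sketch, one-loop only, with the assumed input [tex]\alpha_s(M_Z)\approx 0.118[/tex] measured at [tex]\mu_P = 91.2[/tex] GeV; the commonly quoted 0.2 GeV value uses flavour thresholds and higher-loop running, so this crude one-loop estimate comes out lower):

[code]
# Rough one-loop Landau-pole estimate (illustration only; inputs are assumed).
import math

A = 7.0 / (16.0 * math.pi ** 2)      # one-loop coefficient used above
mu_P = 91.2                          # GeV, assumed measurement scale (Z mass)
alpha_s = 0.118                      # assumed measured value of the coupling
g_P_sq = 4.0 * math.pi * alpha_s     # g_P^2 = 4*pi*alpha_s

C = 1.0 / g_P_sq - 2.0 * A * math.log(mu_P)  # constant fixed by the measurement
mu_pole = math.exp(-C / (2.0 * A))           # scale where 2*A*t + C = 0

print(f"one-loop pole scale ~ {1000.0 * mu_pole:.0f} MeV")  # of order tens of MeV
[/code]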

My question is at what value of energy do they measure the amplitude in order to calculate the Landau pole?
 
  • #4
I am afraid I cannot answer your questions. Maybe somebody else, more knowledgeable, will. I am very critical of the whole (self-action + renormalizations) way of building interacting fields. You can find my arguments on my blog.
 
  • #5
Hello

I would say that you actually described two different things:

- Sending the cut-off to infinity is usually done in high-energy physics because you don't actually know where it stands. Sending it to infinity and keeping the finite part gives you the asymptotic behaviour, in a sense: "what you could expect if you look at it from very far away". This process is perturbative renormalization. The RG conditions you mention as being "experimental" are actually a reparametrization of the theory, since you are not interested in what happens in the UV (that is not physically seen in the experiment).

- The other problem, of having a finite cut-off and the related question of the change of the bare parameters, is actually addressing Wilson's ideas on the RG (and his Nobel prize). A clear picture of this can be found on Wikipedia with the illustration of the Kadanoff block-spin transformation. But you are right: there are "flow" equations for the bare quantities as well, describing their evolution as the UV cut-off is moved downwards...
In this viewpoint, the cut-off is the microscopic scale at which all fluctuations have been averaged out (in the sense that the coupling constants you deal with mimic the content of the physics for the scales above). The experimental scale you face in such a case has to be lower than the cut-off of your effective theory.

A simple example:
- QCD is perturbatively renormalizable in the UV (asymptotic freedom)
- when entering low-energy physics, this collapses at the cut-off \Lambda_QCD
- below this cut-off, non-perturbative effects appear and quarks and gluons are not "relevant" anymore. You use a baryon and meson description for the scales below. Usually this is done with chiral perturbation theory, which is not perturbatively renormalizable. You have a physical cut-off \Lambda_QCD for your effective theory (e.g. the Weinberg Lagrangian) that describes the dynamics of the low-energy degrees of freedom (hadrons). But if you are doing an experiment with typical momentum above \Lambda_QCD, you won't see hadrons, so your effective theory breaks down.

And finally, if you are able, using flow equations, to decrease the UV cut-off below the experimental scale you are looking at, you can do classical computations with your effective theory! And it will include all the quantum effects from the higher-energy physics!
 
  • #6
Here's how I understand it. The bare constants do depend on the cutoff. However, they do not depend on an energy scale. This energy scale can be either the mass parameter introduced in dimensional regularization or a momentum parameter introduced in cutoff renormalization (I believe this is called a momentum subtraction point).

Because of Wilson's ideas, you cannot take the cutoff to infinity! The cutoff represents a scale above which we believe our physics is incorrect - so, for example, one cannot take the cutoff higher than the Planck scale, because our current physics is probably wrong at that energy. But here's the rub: one can actually take the cutoff higher than the Planck scale! What happens is that the error you make can all be absorbed in a finite renormalization of all operators (including those of negative mass dimension). This has to do with the fact that the mistakes made by keeping high-energy modes of the field can be represented by local operators (local because of the energy-time uncertainty relation: for a short time [tex]t=1/\Lambda[/tex], which goes to zero when [tex]\Lambda[/tex] is large, these high-energy modes exist only instantaneously and can be represented by local or point interactions (e.g. the 4-point interaction of electroweak theory) rather than virtual-particle exchanges). If the cutoff [tex]\Lambda[/tex] is taken to infinity, then all the coefficients of operators of negative mass dimension vanish, so there is no trouble with renormalization. Incidentally, the reason one can dimensionally regulate an effective field theory (thereby including the higher momentum modes which you're not supposed to include) is the very same: the error you make can be represented by local interactions, i.e., a finite renormalization of the coefficients in the terms of the Wilson Lagrangian.
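To make the electroweak 4-point example above explicit (a standard textbook illustration, added here for concreteness): exchanging a heavy boson of mass [tex]M_W[/tex] between fermions gives a propagator that, for momenta [tex]q^2 \ll M_W^2[/tex], collapses into a tower of local operators,

[tex]\frac{g^2}{q^2-M_W^2}=-\frac{g^2}{M_W^2}\left(1+\frac{q^2}{M_W^2}+\frac{q^4}{M_W^4}+\cdots\right),[/tex]

so the leading low-energy effect is the local four-fermion (Fermi) interaction with coefficient of order [tex]g^2/M_W^2[/tex], and the higher powers of [tex]q^2/M_W^2[/tex] correspond to local operators of increasingly negative mass dimension.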

Low-energy QCD and the fermion condensates that come from chiral perturbation theory are a particularly peculiar use of effective field theory. I would say it's atypical of most effective theories, so it's not really a good example for understanding effective field theories. But it is an effective field theory.

But if you are doing an experiment with typical momentum above \Lambda_QCD, you won't see hadrons, so your effective theory breaks down.

This is where I'm a little confused. Usually in effective field theories, going past the cutoff introduces new heavier particles that were integrated out in the original Wilson scheme, but the lighter particles still do exist.

Take the LHC for example. They accelerate protons to really high energies. Does this mean that we don't see the hadrons (the proton), but only the quarks?
 
  • #7
Here's how I understand it. The bare constants do depend on the cutoff. However, they do not depend on an energy scale. This energy scale can be either the mass parameter introduced in dimensional regularization or a momentum parameter introduced in cutoff renormalization (I believe this is called a momentum subtraction point).

This is the perturbative way of seeing things... The only purpose of such a process is to get rid of the physics you don't know (i.e. the UV). You reparametrize everything in the IR with the IR couplings and degrees of freedom. Perturbation theory moves the UV cut-off (up to infinity at the end) and tries to keep the IR physics unchanged (in this sense it is Wilson's ideas applied in the IR).

Because of Wilson's ideas, you cannot take the cutoff to infinity! The cutoff represents a scale above which we believe our physics is incorrect - so, for example, one cannot take the cutoff higher than the Planck scale, because our current physics is probably wrong at that energy.

Yes, in the Wilson picture the cut-off has to be finite and all theories are effective. At the same time, they are all renormalizable. Thus it is always wrong to take the cut-off to infinity in this picture. You have to start at some known finite UV scale with a "God-given" theory. The main difference with perturbation theory is that you explicitly know where your UV scale lies (e.g. statistical physics, where you have the lattice spacing as a natural cut-off). But if you are able to reach asymptotic scaling, i.e. the UV and IR scales are far enough apart and the trajectory remains perturbative, both give the same result.

Inserting the Planck scale is rather tricky since the first principles of your quantization break down, and gravity also has to be strong there (and we have no clue about the shape of UV gravity). You can of course put the Planck scale or higher in the Wilson picture, but it makes no sense since the description you have implicitly relies on your quantization.

What happens is that the error you make can all be absorbed in a finite renormalization of all operators (including those of negative mass dimension). This has to do with the fact that the mistakes made by keeping high-energy modes of the field can be represented by local operators (local because of the energy-time uncertainty relation: for a short time [tex]t=1/\Lambda[/tex], which goes to zero when [tex]\Lambda[/tex] is large, these high-energy modes exist only instantaneously and can be represented by local or point interactions (e.g. the 4-point interaction of electroweak theory) rather than virtual-particle exchanges). If the cutoff [tex]\Lambda[/tex] is taken to infinity, then all the coefficients of operators of negative mass dimension vanish, so there is no trouble with renormalization. Incidentally, the reason one can dimensionally regulate an effective field theory (thereby including the higher momentum modes which you're not supposed to include) is the very same: the error you make can be represented by local interactions, i.e., a finite renormalization of the coefficients in the terms of the Wilson Lagrangian.

Wow! That's pretty dense! I am not sure I fully understand everything you imply here... But what I can say is: the vanishing operators you mention are not perturbatively renormalizable, so you have no right to take the cut-off to infinity. In the RG picture, this is because they are irrelevant w.r.t. the Gaussian UV fixed point. At finite UV you can add them, but you expect their effect to be washed out as you compute the fluctuations towards lower scales (they should be totally suppressed if the UV scaling regime is infinitely long). This does not mean they play no role. It is clear that such a theory can be rephrased into a renormalizable one while changing the bare parameter values, but this should change the beta functions: this is universality at play! For example, take a phi^6 theory in d=4. This is non-renormalizable from power counting. Keeping the cut-off finite and computing the one-loop correction to phi^4, you realize that the phi^6 vertex contributes to the phi^4 one and that this correction is finite even when the cut-off is removed.
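Spelling that example out a bit (my own rough estimate, up to combinatorial factors): if the [tex]\phi^6[/tex] coupling carries its natural power of the cut-off, [tex]c_6/\Lambda^2[/tex], then contracting two of its legs in a loop gives a contribution to the [tex]\phi^4[/tex] vertex of order

[tex]\delta\lambda_4 \sim \frac{c_6}{\Lambda^2}\int^{\Lambda}\frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2}\sim\frac{c_6}{16\pi^2},[/tex]

which indeed stays finite as the cut-off is pushed up.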

I don't understand your local-operators picture. When blocking, there is no good reason why the theory re-expressed in the low-energy degrees of freedom has to be local... To be more prosaic, the result of computing the fluctuations is given by a Tr Log coming from a functional determinant. This gives a local effective theory ONLY if the leading term is homogeneous... which means that you need to be able to perform a gradient expansion... And there are a lot of funny cases where this breaks down (UV completion of GR breaking Lorentz invariance, non-relativistic cold bosons, etc.).

Low-energy QCD and the fermion condensates that come from chiral perturbation theory are a particularly peculiar use of effective field theory. I would say it's atypical of most effective theories, so it's not really a good example for understanding effective field theories. But it is an effective field theory.

Why? What is so particular about it? (Except the fact that we are not able to solve QCD directly, in contrast to the Standard Model and its effective four-fermion theory of the electroweak interaction.)


This is where I'm a little confused. Usually in effective field theories, going past the cutoff introduces new heavier particles that were integrated out in the original Wilson scheme, but the lighter particles still do exist.

Take the LHC for example. They accelerate protons to really high energies. Does this mean that we don't see the hadrons (the proton), but only the quarks?

Well, \Lambda_QCD is the upper bound of validity of chiral PT as far as I know (I am not an expert on this). Above it you have high-energy QCD, and below it the weird structure of the QCD vacuum induces the bound states: baryons, mesons, etc...

At the LHC, the ALICE experiment is doing this job: heavy-ion collisions. There may be a quark-gluon plasma due to the high temperature and density. But at lower temperature, the quarks and gluons produced in the collision (the same for p-p) are not stable and you have, for example, hadronization. So you never see the quarks directly, but they are the ones produced...
 
  • #8
From page 23 of some effective field theory notes ( http://arxiv.org/PS_cache/hep-th/pdf/0701/0701053v2.pdf ):

2.2.3 The dimensionally regularized Wilson action: The Wilson
action defined with an explicit cutoff is somewhat cumbersome for practical calculations,
for a variety of reasons. Cutoffs make it difficult to keep the gauge
symmetries of a problem manifest when there are spin-one gauge bosons (like
photons) in the problem. Cutoffs also complicate our goal of following how the
heavy scale, M, appears in physically interesting quantities like γ[ℓ], because
they muddy the dimensional arguments used to identify which interactions in SW
contribute to observables order-by-order in 1/M.

It is much more convenient to use dimensional regularization, even though
dimensional regularization seems to run counter to the entire spirit of a low-energy
action by keeping momenta which are arbitrarily high. This is not a problem in
practice, however, because the error we make by keeping such high-momentum
modes can itself always be absorbed into an appropriate renormalization of the
effective couplings. This is always possible precisely because our ‘mistake’ is
to keep high-energy modes, whose contributions at low energies can always be
represented using local effective interactions. Whatever damage we do by using
dimensional regularization to define the low-energy effective action can always be
undone by appropriately renormalizing our effective couplings.

You can guess the form of the Wilson Lagrangian (all terms that satisfy the underlying symmetries, including negative-mass-dimension terms). The next step is to draw Feynman diagrams using this Lagrangian, and you have to calculate loops. If the Wilson Lagrangian were regulated with a finite cutoff, these loops would present no problem because they are already finite. If the Wilson Lagrangian is regulated using dimensional regularization, then the integrals suffer from infinities that are the result of keeping the high-energy modes in your integration (these infinities would have been cut off had you used cutoff regularization). The infinities in a dimensionally regularized Wilson action are taken care of by "matching" to low-energy experiment order by order in E/M for some heavy mass scale M: this matching sidesteps the infinite amount of work you would otherwise need to renormalize a nonrenormalizable theory.

And finally, if you are able, using flow equations, to decrease the UV cut-off below the experimental scale you are looking at, you can do classical computations with your effective theory! And it will include all the quantum effects from the higher-energy physics!

This is actually a common mistake. There is some bad terminology with regard to effective field theories. There are three types of actions: the fundamental action, the quantum (or quantum effective) action, and the Wilson action (or Wilsonian effective action). The quantum action is the Legendre transformation of W[J], the generating functional of connected diagrams (the logarithm of the vacuum-to-vacuum amplitude in the presence of a source): [tex]\Gamma[\phi]=W[J]-\int J \phi \, d^dx [/tex]. It has the property that its tree diagrams give the correct result to all loops. Also, as you say, it obeys the classical equation [tex]\frac{\delta \Gamma}{\delta \phi}=-J [/tex] for motion in the presence of the source. It is usually calculated with saddle-point methods (the Coleman-Weinberg calculation, I believe it's sometimes called). The Wilson action, on the other hand, is a Lagrangian for which you still have to calculate loops, and it does not obey any classical equations of motion. You can, however, find a quantum action for the Wilson action; that quantum action would give the same results from classical (tree-level) calculations as the Wilson action gives with full quantum-mechanical path integration.
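Collecting the standard defining relations for the quantum effective action (textbook form, added for reference):

[tex]\phi(x)=\frac{\delta W[J]}{\delta J(x)},\qquad \Gamma[\phi]=W[J]-\int d^dx\, J\phi,\qquad \frac{\delta \Gamma[\phi]}{\delta \phi(x)}=-J(x).[/tex]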

The reason I think chiral perturbation theory is so weird is that you take a certain vacuum expectation value of a product of quark fields, then quantize it, thereby turning it into a meson, and then construct a meson Lagrangian that is a Wilson Lagrangian because you include meson-meson terms that are non-renormalizable. Moreover, the baryon fields are represented by the very same spinors that are usually reserved for the more fundamental fields like electrons, neutrinos, and quarks. There are just too many weird things going on.
 
  • #9
I think we have a misunderstanding here. When I discuss Wilson's ideas, I mean what is now called the Functional Renormalization Group. For this there are mainly three schemes: Polchinski's, relying on the functional W and a smooth cut-off function; Wegner-Houghton, with a sharp cut-off; and the average action, which is a kind of Schwinger-Dyson approach (in the sense that we never compute a functional integral directly). Burgess is an expert in none of these approaches, which are genuinely non-perturbative in essence. A perturbative expansion has no reason to work for seeing the emergence of one effective theory from another, since it relies only on the asymptotic scaling around Gaussian fixed points.

Another thing is the use of dimensional regularization, which is possible only in the framework of a perturbative expansion. This analytic continuation, although a powerful tool for analytical computations, has at least one severe shortcoming: it breaks the decoupling theorem (e.g. Appelquist-Carazzone), since it only takes into account contributions that follow three prescriptions (which you can find at the beginning of e.g. Zinn-Justin's book). The one related to scaling properties is clearly violated, for example, in the Higgs sector, which amounts to the impossibility of seeing, with dim. reg., the hierarchy problem of the Higgs mass (one of the reasons that push us to believe that the S.M. is only effective). But you are right in saying that it naturally preserves gauge invariance and thus is widely used.

Let me note in passing that having an explicit finite cut-off is not trivially gauge invariant, but many approaches can overcome this difficulty and lead to a gauge-invariant result: the Stueckelberg approach, the background field method, etc...

Another point, since you are mentioning a matching of parameters that relies on a decoupling theorem: I have never found a proof of such a theorem that holds within a broken scalar phase (e.g. the Higgs!). And a broken scalar phase has to be IR non-perturbative, at least as far as the flows defined by fluctuations around phi=0 are concerned (such as the Yukawa couplings!).

I agree with your definitions of the different actions, even if in my mind only the effective one is important to compute. By the way, you can show that what you call the Wilson action is actually the effective action if you perform infinitely small blocking steps (the aim of the functional RG). The Coleman-Weinberg mechanism has nothing to do with the loop expansion (actually, it breaks it explicitly, since you allow the order-hbar and classical terms to have the same order of magnitude). It is a mechanism for spontaneous symmetry breaking.

When blocking your original action, you recover at each step an action that is effective in the sense that the cut-off has been lowered, and the new couplings in it result from taking into account the fluctuations of higher momentum. This blocking, even if started from the bare action, actually builds the effective action at the new cut-off. So, after infinitely many blocking steps, you have the effective action with no fluctuations remaining. Your entire system dynamics has been encoded in the coupling constants (and in the shape of this effective action, which is in general very complicated). This action used at tree level gives you all the 1PI vertices you need, for example, to compute an S-matrix for your original theory, but WITHOUT ANY perturbative assumption.
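For reference (my addition), the flow equation usually written down for the average-action scheme mentioned above is

[tex]k\,\partial_k \Gamma_k[\phi]=\frac{1}{2}\,\mathrm{Tr}\left[\left(\Gamma_k^{(2)}[\phi]+R_k\right)^{-1}k\,\partial_k R_k\right],[/tex]

where [tex]R_k[/tex] is the infrared regulator suppressing modes below the running scale [tex]k[/tex]; lowering [tex]k[/tex] from the UV cut-off down to zero implements exactly this infinitesimal blocking, and [tex]\Gamma_{k\to 0}[/tex] is the full effective action.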
 
  • #10
Wow, you really know a lot about effective field theory. I've just read the Burgess paper and a chapter from a textbook (Srednicki).

Burgess mentions that a renormalizable theory is one in which it is okay to neglect q/M, where q is the scale of the experiment and M is a heavy mass scale coming from what you call a "God-given" or more fundamental theory. That is, what determines renormalizability is not whether you can take something to infinity, but whether you are satisfied with results to zeroth order in q/M.

Take the [tex]\phi^4 [/tex] scalar theory in 4 dimensions. In 2-to-2 scattering, if you don't take the cutoff to infinity, then you get terms like:

[tex]\frac{s}{\Lambda^2}+\left(\frac{s}{\Lambda^2}\right)^2+... [/tex]

that still survive (if you took the cutoff to infinity you wouldn't have to worry about these terms, as they would go to zero),
where s is a Mandelstam variable. Take the first term:

[tex]\frac{k_{1}^{2}+2\, k_1 \cdot k_2+k_{2}^{2}}{\Lambda^2} [/tex]

To create a counter-term to cancel the middle term (as the result should not depend on an artificial cutoff [tex]\Lambda [/tex]), you have to add the following term to the Lagrangian:

[tex]L=(\partial_\mu \phi \partial^\mu \phi )\frac{\phi^2}{\Lambda^2}[/tex]

because this term produces the cross term [tex]k_1 \cdot k_2[/tex] and allows attachment to 4 particles. This counter-term is of course non-renormalizable by power counting. To cancel the [tex]k_{1}^2 / \Lambda^2 [/tex] term you would add

[tex]L=(\partial^2 \phi)\frac{\phi^3}{\Lambda^2}[/tex]

So according to Burgess, [tex]\phi^4 [/tex] theory with only quartic interaction terms looks renormalizable, but there are really non-renormalizable terms that we've neglected because we don't care about terms of order [tex]q^2/\Lambda^2 [/tex], such as the ones above.

What really confuses me is that terms like [tex](\partial_\mu \phi \partial^\mu \phi )\frac{\phi^2}{\Lambda^2}[/tex] involve an arbitrary cutoff, and not a God-given mass scale like, say, the Planck mass or a heavy particle mass. Non-renormalizable terms that involve an arbitrary cutoff by necessity exist only to cancel other terms that involve the cutoff. If non-renormalizable terms instead involve a real heavy mass rather than a cutoff, then they need not exist only to cancel something, and can be part of the actual answer. So why bother calculating these non-renormalizable cutoff terms when they don't contribute to anything anyway?

Well, \Lambda_QCD is the upper bound of validity of chiral PT as far as I know (I am not an expert on this). Above it you have high-energy QCD, and below it the weird structure of the QCD vacuum induces the bound states: baryons, mesons, etc...

At the LHC, the ALICE experiment is doing this job: heavy-ion collisions. There may be a quark-gluon plasma due to the high temperature and density. But at lower temperature, the quarks and gluons produced in the collision (the same for p-p) are not stable and you have, for example, hadronization. So you never see the quarks directly, but they are the ones produced...

I actually have a problem with \Lambda_QCD being a natural cutoff for chiral perturbation theory. \Lambda_QCD doesn't really represent something physical. It's where the coupling of QCD goes to infinity, and it depends slightly on your renormalization scheme (although it is standard to quote \Lambda_QCD in MS-bar, it changes with the renormalization scheme). Hadronization is something physical. It's convenient to define \Lambda_QCD as the cutoff because that's where perturbation theory breaks down, but the cutoff doesn't seem to me a natural or God-given scale, just an arbitrary cutoff.

Do you happen to know how the official value of \Lambda_QCD is measured? All you need is one experiment at any high energy to measure it, and this experiment can be quark-quark scattering or gluon-gluon-gluon vertex or something else.
 
  • #11
Well, when it comes to effective theories, there are actually two "schools".

There are people trying to use perturbation theory at any cost, relying mainly on power-counting arguments, which seems to be what Burgess does. Let me note, by the way, what Georgi (a very smart guy) wrote about a question Coleman (another genius) asked him about effective theories:

One of my motivations in agreeing to write this article was a question that my colleague Sidney Coleman asked me: What's wrong with form factors? What's wrong with just integrating out heavy particles and large momentum modes, a la the Wilson approach, and using the resulting nonlocal theory as your interaction?
The answer of course is "Nothing" - but this is not an effective field theory calculation. It is just a way of doing the full theory calculation. But the fact that one of the world's greatest field theorists would ask such a question convinced me that the idea of continuum effective field theory is not universally understood.
The real answer to the question "Why continuum effective field theory?" is "Because it is easier!"

You can find it in http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.ns.43.120193.001233 if you have access. This is a very nice introduction to effective theories and has a lot of critical discussions about what is currently done.

From my point of view, I would say that perturbation theory should usually work provided that you know its UV completion, that the flow remains perturbative (which, unfortunately, can be tested only by non-perturbative means), and that the saddle-point expansion is doable (for example, in broken phi^4 the loop expansion is lost in the IR of the broken phase).

If you are interested in such weird nonlocal operators of higher order, if I may advise you, I would try something very simple for a deeper understanding: try to put the antiferromagnetic situation into field-theory language. Clearly, the p^2 term cannot be dominant anymore, since your ground state is basically a plane wave whose wave number is given by the lattice spacing. So the only possibility to describe the situation efficiently is to have, for the kinetic term, something like p^4/const - p^2...
Of course, if you send the would-be cut-off to infinity, you face huge troubles...
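A quick way to see the point (my own illustration, writing the constant as [tex]\Lambda^2[/tex]): a kinetic term of the schematic form

[tex]E(p)\sim\frac{p^4}{\Lambda^2}-p^2[/tex]

is minimized at the nonzero momentum [tex]p^2=\Lambda^2/2[/tex], so the ground state is a modulated, plane-wave-like configuration whose wavelength is tied to the microscopic scale; sending the would-be cut-off [tex]\Lambda[/tex] to infinity pushes that scale away and the description degenerates.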

Well, you are asking very smart and hard questions (at least from my point of view). What I can tell about \Lambda_QCD is that, from my point of view, it doesn't have an exact physical meaning. It is just a typical scale. It is true that it is not directly related to a natural cut-off of chiral perturbation theory, but it can mimic one pretty well. From what I can say (this is also my current research), \Lambda_QCD is the typical IR scale where everything goes bad for QCD (you are certainly aware of the deep IR problems). Its exact location has to depend on the RG scheme, since it is also at the boundary of validity of PT (in a certain sense, the triviality bound in scalar field theory faces the same problem):
The role of the gauge fixing seems prominent, and the non-perturbative structure of the vacuum is large and so far unknown (even if some people claim, for example, that it is composed for 90% of 't Hooft-Polyakov monopoles). I have never heard of a computation below \Lambda_QCD that is actually theoretically fully reliable and understood. But what is clear is that the difficulty of the IR sector is not only due to the large QCD coupling: you need many more assumptions than the coupling strength to have a viable PT: a unique saddle point, homogeneous leading contributions, etc...

So for me, \Lambda_QCD is the upper bound of my job. The purpose of chiral PT is to try to mimic (very well, actually) the bound states in this "phase" with more relevant degrees of freedom with "simpler dynamics" (for example, the gluon propagator below \Lambda_QCD is much more involved, as in the previous antiferromagnetism example, but you can propagate a nucleon quite easily using e.g. the Weinberg Lagrangian). And in such a case it is not stupid to use \Lambda_QCD as the cut-off (mainly implicit, by the way), since the hadrons are genuinely non-perturbative solutions, i.e. they cannot be found from QCD using PT. You can get a toy-model picture with the Gross-Neveu model, which has a broken chiral phase and exhibits baryons (the kink-anti-kink bound states) that can exist only where the flow has turned non-perturbative.

But please remember that I am not an expert in chiral PT. My current job is non-perturbative RG for IR QCD... So don't take my considerations (which may be disputed by other physicists) for granted: in the IR sector it is mainly a matter of point of view, since we know basically nothing but nuclear-spectrum phenomenology...

If you are interested in "how to build effective theories" using Wilson's ideas (what Georgi said was too difficult), there is a Physics Reports article presenting the current research status:
http://dx.doi.org/10.1016/S0370-1573(01)00098-9 or if you don't have access, the preprint : http://arxiv.org/abs/hep-ph/0005122
 

What is renormalization with cutoff?

Renormalization with cutoff is a technique used in theoretical physics to remove infinite or unphysical quantities from calculations by imposing an upper limit, or cutoff, on certain parameters. This allows for more accurate and meaningful results.

Why is renormalization with cutoff necessary?

In many quantum field theories, calculations can produce infinite values due to the underlying mathematical equations. Renormalization with cutoff helps to remove these infinities and provide physically meaningful results by setting a limit on the values that can be used in the equations.

How does renormalization with cutoff work?

Renormalization with cutoff involves introducing a parameter that represents the maximum energy or distance that can be probed in a system. This cutoff is used to eliminate divergences in calculations and allows for more precise predictions of physical phenomena.

What are the limitations of renormalization with cutoff?

One limitation of renormalization with cutoff is that it is not always applicable, particularly in cases where there are no natural bounds on the parameters being studied. Additionally, the choice of cutoff can affect the final results and may require additional theoretical considerations.

What are some applications of renormalization with cutoff?

Renormalization with cutoff is commonly used in quantum field theory, statistical mechanics, and condensed matter physics. It has also been applied in other fields such as cosmology and fluid dynamics to remove infinities and improve the accuracy of calculations.
