# Why Renormalisation Needs a Cutoff - Comments

bhobba submitted a new PF Insights post

Why Renormalisation Needs a Cutoff

Continue reading the Original PF Insights Post.

Watch out... the presentation is a bit misleading, for the following reason: in actual calculations, when integrating loop diagrams, one almost never gets a pure log of the form $\ln(\Lambda/k)$ where k is some energy scale. It is almost never like this. Instead, one typically gets something of the form $\ln((\Lambda^2+k_1^2)/k_2^2)$. And one can even have cases (when there are scalar-boson loops, for example) where, in addition to these terms, one has terms of the form $1/(k^2 + \Lambda^2)$.

So after renormalizing, the cutoff does NOT go away if we keep it at a finite value, even when we are dealing with logs! Instead, one generically gets terms of the form $\ln((\Lambda^2+k_1^2)/(\Lambda^2+k_2^2))$ or $1/(\Lambda^2+k_1^2) - 1/(\Lambda^2 + k_2^2)$.

We see that the cutoff does not go away, even if the theory is renormalizable!

BUT we see that if we take the limit $\Lambda \rightarrow \infty$, THEN the cutoff disappears. This is the reason for taking this limit!
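That limit can be illustrated numerically. A minimal Python sketch (my own toy check, not from the Insight; the scales k1 and k2 are arbitrary) showing that the leftover term only dies off as Λ grows:

```python
import math

def leftover(Lam, k1, k2):
    # generic cutoff-dependent remainder after renormalization,
    # of the form ln((Lambda^2 + k1^2) / (Lambda^2 + k2^2))
    return math.log((Lam**2 + k1**2) / (Lam**2 + k2**2))

k1, k2 = 1.0, 2.0  # arbitrary external scales
for Lam in (10.0, 100.0, 1000.0):
    print(Lam, leftover(Lam, k1, k2))  # shrinks toward 0 as Lam grows
```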

This is the "old" approach to renormalization (pre Ken Wilson, say). The modern point of view is that the cutoff should not be taken to infinity. But then one must treat the theory as an effective field theory and there is an infinite number of terms to be included in the Lagrangian. That is for another post. But my point here was to convey that the cutoff does not go away, even in renormalizable theories, if we don't take the limit where the cutoff goes to infinity.

Cheers, Patrick

nrqed said:
This is the "old" approach to renormalization (pre Ken Wilson, say). The modern point of view is that the cutoff should not be taken to infinity. But then one must treat the theory as an effective field theory and there is an infinite number of terms to be included in the Lagrangian. That is for another post. But my point here was to convey that the cutoff does not go away, even in renormalizable theories, if we don't take the limit where the cutoff goes to infinity.

Including the (usually finite) cutoff and the infinite number of terms is so important conceptually. I don't know why even modern texts like Srednicki or Schwartz introduce it so late, and even then make it hard to extract the key concept (well, Schwartz is pretty good, actually). On the other hand, the statistical mechanics texts do this right away.

FYI I can't see the TeX in Patrick's reply in the original Insights post.

Just trying to get a handhold (I would like to understand this)

"Suppose we have a function G(x) that depends on some parameter λ, i.e. G(x,λ). Then, so perturbation theory can be used, expand it in a power series about λ:

G(x) = G0 + G1(x)*λ + G2(x)*λ^2 + …….."

Why isn't this written:

G(x,λ) = G0 + G1(x)*λ + G2(x)*λ^2 + …….. ?

But regardless, do I understand correctly that this is saying that G(x,λ) can be decomposed into a linear combination of functions ${ G }_{ i }(x)$ multiplied by powers of λ (that's just what the power series expansion technique is)?

(somewhat aside) It's been a long time since I learned about power series expansions. But they have always bugged me because of their dependence on "convergence at infinity". I get that there are lots of key tools that use infinite limits. But it has been a regular thorn in my side. To be honest I always sort of associated the QM "infinities" problem with this... that you had to "sum over histories" but that there was effectively no limit to the terms in the sum. Only recently have I realized that the "energy level" is associated with the "cutoff".

"In perturbation theory, for theoretical convenience, it is usual to define a new function F(x) = (G(x) – G0)/G1 so:

F(x) = λ + F2(x) *λ^2 + …….."

Do I understand correctly that this just normalizes (scales) the "power series representation of G(x,λ)" using the first two coefficients of the expansion of G(x,λ)?

"Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless"

...This is also seen by its definition where G(x) – G0 is divided by G1(x). But let's expand F2(x) in a power series about x so F2(x) = F20 + F21*x + F22*x^2 + ……. = F20 + F21*x + O(x^2). Suppose x is small, so O(x^2) can be neglected, then F2(x) has the dimensions of x, hence to second order of λ, F(x) has the dimensions of x. Here we have a dimensional mismatch. This is the exact reason the equations blow up – in order for it to be dimensionless it can't depend on x. This can only happen if F2(x) is a constant or infinity. Either of course is death for our theory – but nature seemed to choose infinity – the reason for which will be examined later."

This is because powers of small numbers go to zero in the limit, correct?

I guess I find this confusing because (at least in the software I use) I wouldn't be able to get away with just assuming the "dimension" x of my expression completely vanishes. The software won't automatically start to neglect the dimensionality of a system just because the value of the range in that dimension is eensie-weensie, or whatever. This has always seemed ontologically correct to me. Nor will it automatically add dimension.

I can declare something "dimensional" to be suddenly "dimensionless" (change length into Btu's or whatever). After all it's just a computer; I can make it do whatever I want. But it seems telling to me that without instructions for how/when/where to do this, the computer can't "automatically" do so.

I guess I have assumed this was for a pretty deep reason, that somehow logically there is simply not enough information in any scalar value alone (even zero) to determine its dimensionality (or lack thereof)?

I think this is clicking. Now "x" is a number of apples in the world of apples "Λ":

"Now for the solution. The only way to avoid this is to divide x by some parameter, Λ, of the same units as x, so it becomes dimensionless.

The correct equation is:

F(x/Λ) = λ + F2(x/Λ)*λ^2 + F3(x/Λ)*λ^3 + ……… + Fi(x/Λ)*λ^i + ……………

We see, due to dimensional analysis of the perturbation methods used, we have neglected a parameter in our theory, which can be interpreted as a cut-off. It is this oversight that has caused the trouble all along."

I'm interested to see where the logs come from now...
But... must... eat...

nrqed said:
Instead, one gets typically something of the form $\ln( (\Lambda^2+k_1^2)/k_2^2)$.

But for large Λ it's the same. I have gone through the exact calculations for the meson-meson scattering in my original paper, and even without taking a large-number approximation you get $\ln(\Lambda^2/k_2^2)$. Are you sure you are talking about the large energy approximation I am using in the paper?

Thanks
Bill

Jimster41 said:
But regardless, do I understand correctly that this is saying that G(x,λ) can be decomposed into a linear combination of functions ${ G }_{ i }(x)$ multiplied by powers of λ (that's just what the power series expansion technique is)?

It's a Taylor series expansion - in applied math you generally assume you can do that.
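As a toy illustration of such an expansion (my own example, not from the Insight): take G(x, λ) = exp(λx), so G0 = 1 and G1(x) = x. Then F(x) = (G − G0)/G1 = λ + (x/2)λ^2 + …, and the second coefficient F2(x) = x/2 grows with x, just like the F2 discussed in the Insight:

```python
import math

def F(x, lam):
    # F = (G - G0)/G1 with the toy choice G(x, lam) = exp(lam * x),
    # so G0 = 1 and G1(x) = x
    return (math.exp(lam * x) - 1.0) / x

x, lam = 0.5, 1e-3
series = lam + (x / 2.0) * lam**2  # first two terms of the expansion
print(F(x, lam), series)           # agree to O(lam^3)
```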

Jimster41 said:
Do I understand correctly that this just normalizes (scales) the "power series representation of G(x,λ)" using the first two coefficients of the expansion of G(x,λ)?

Not quite - because of the division it creates something dimensionless - it's different from a rescaling, which would simply be a change of units.

"Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless"

Jimster41 said:
This is because powers of small numbers go to zero in the limit, correct?

No. It's because F must be dimensionless - but the expansion says it isn't. This is an inconsistency - to accommodate it, it must be infinity or a constant - if it actually depended on x it would not be dimensionless.

Jimster41 said:
I can declare something "dimensional" to be suddenly "dimensionless" (change length into Btu's or whatever). After all it's just a computer; I can make it do whatever I want. But it seems telling to me that without instructions for how/when/where to do this, the computer can't "automatically" do so.

I am sorry - but you can't do that. It's modelling something - nothing you can do can change what it's modelling.

Thanks
Bill

bhobba said:
It's a Taylor series expansion - in applied math you generally assume you can do that.
Not quite - because of the division it creates something dimensionless - it's different from a rescaling, which would simply be a change of units.

"Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless"
No. It's because F must be dimensionless - but the expansion says it isn't. This is an inconsistency - to accommodate it, it must be infinity or a constant - if it actually depended on x it would not be dimensionless.
I am sorry - but you can't do that. It's modelling something - nothing you can do can change what it's modelling.

Thanks
Bill
Thanks for the direction Bill.
I need to chew on this more, but I feel like I'm learning something.

I just wanted to clarify, I didn't mean to imply a conversion from length to Btu's had some specific, real quality of meaning; I just meant that the computer can be told to "recast" some value. Like, "hey computer, I know I said 10 degF + 20 deltaF = 30 degF, but I just totally changed my mind. It equals 30 ice cream cones." If I tell it not to care, it will let me do things that are dimensionally nonsensical. At the end of the day, I am the one telling it what "modeling something" means. But no, of course I would be disappointed and confused, to say the least, if the temperature outside changed 20 degrees and I somehow expected "ice cream cones".

bhobba said:
No. It's because F must be dimensionless - but the expansion says it isn't. This is an inconsistency - to accommodate it, it must be infinity or a constant - if it actually depended on x it would not be dimensionless.

I might be just being stupid, but I don't understand this point. You have a dimensionless function of x, $F(x)$. It can be written as a power series in x, as follows:

$F(x) = F_0 + x F_1 + x^2 F_2 + ...$

If x is small, then we can approximate F by just the first two terms, so:

$F(x) = F_0 + x F_1$

I don't understand why you say that if $F$ actually depended on x, it would not be dimensionless. What it seems to me is that $F$ is dimensionless, and so is $F_0$, but $F_1$ has the dimensions of $\frac{1}{D}$, where $D$ is the dimensions of x.

I agree that if you want all the $F_i$ to be dimensionless, then you can't have an expansion in $x$, you have to have an expansion in $\frac{x}{\Lambda}$ where $\Lambda$ has the same dimensions as $x$. But saying that $F$ is dimensionless doesn't imply that $F_1$ is dimensionless.
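This bookkeeping can be made mechanical. A minimal sketch (my own illustration; the Quantity class is hypothetical, stdlib only) that tracks powers of the dimension of x and shows that F = F0 + x·F1 is consistent exactly when F1 carries dimension 1/D:

```python
class Quantity:
    """A value plus an exponent d, meaning the object has units of x**d."""
    def __init__(self, value, d):
        self.value, self.d = value, d
    def __mul__(self, other):
        return Quantity(self.value * other.value, self.d + other.d)
    def __add__(self, other):
        if self.d != other.d:
            raise ValueError("dimensional mismatch")
        return Quantity(self.value + other.value, self.d)

x  = Quantity(0.1, 1)    # carries the dimension of x
F0 = Quantity(2.0, 0)    # dimensionless
F1 = Quantity(3.0, -1)   # dimensions 1/D, as argued above

F = F0 + x * F1          # fine: both terms are dimensionless
assert F.d == 0

try:
    F0 + x * Quantity(3.0, 0)  # a dimensionless F1 breaks the sum
except ValueError:
    print("dimensional mismatch, as expected")
```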

bhobba said:
But for large Λ it's the same. I have gone through the exact calculations for the meson-meson scattering in my original paper, and even without taking a large-number approximation you get $\ln(\Lambda^2/k_2^2)$. Are you sure you are talking about the large energy approximation I am using in the paper?

Thanks
Bill
Hi Bill,

I was talking about a general calculation, so I am not assuming that the external momenta are much larger than other physical scales (like the masses of the particles in the loops). My point is that in general, if one does a one-loop QFT calculation, the Λ's do not cancel out unless we take the infinite limit.
We could discuss a more specific example if you want; you could just give me the Feynman rules you were using. Or we could just consider vacuum polarization in QED, or a vertex correction, or even a Higgs loop. Of course, if you assume that all the masses of the particles are negligible compared to external momenta, things simplify greatly. But one should also be able to calculate quantities where this is not a valid approximation. And even if it is a good approximation, one should be able to go beyond that limit.

Regards,

Patrick

stevendaryl said:
I might be just being stupid, but I don't understand this point. You have a dimensionless function of x, $F(x)$. It can be written as a power series in x, as follows:

$F(x) = F_0 + x F_1 + x^2 F_2 + ...$

If x is small, then we can approximate F by just the first two terms, so:

$F(x) = F_0 + x F_1$

I don't understand why you say that if $F$ actually depended on x, it would not be dimensionless. What it seems to me is that $F$ is dimensionless, and so is $F_0$, but $F_1$ has the dimensions of $\frac{1}{D}$, where $D$ is the dimensions of x.

I agree that if you want all the $F_i$ to be dimensionless, then you can't have an expansion in $x$, you have to have an expansion in $\frac{x}{\Lambda}$ where $\Lambda$ has the same dimensions as $x$. But saying that $F$ is dimensionless doesn't imply that $F_1$ is dimensionless.
You are absolutely correct, stevendaryl. I was going to make the same remark. The higher Fs come from a Taylor expansion and the terms in a Taylor expansion (the terms that multiply the powers of the variable in which we expand) all have different dimensions since they come from derivatives with respect to a dimensional quantity! So there is no problem with the dimensions of any of the F's. We cannot conclude anything one way or another from the existence of the Taylor expansion.

stevendaryl said:
I don't understand why you say that if $F$ actually depended on x, it would not be dimensionless.

I simply expanded F2 to make it clearer what's going on. You can argue it has the dimension of F2(x) - whatever F2 is - if it's squared it has dimensions x^2, if it's a square root it has dimensions x^(1/2), etc. If it's constant then it's dimensionless. It looks obvious to me from what dimension means in dimensional analysis.

Thanks
Bill

bhobba said:
I simply expanded F2 to make it clearer what's going on. You can argue it has the dimension of F2(x) - whatever F2 is - if it's squared it has dimensions x^2, if it's a square root it has dimensions x^(1/2), etc. If it's constant then it's dimensionless. It looks obvious to me from what dimension means in dimensional analysis.

I guess it's obvious that if $F(x)$ is dimensionless, but $x$ is not, then $F(x)$ can be rewritten as a function of $\frac{x}{\Lambda}$, where $\Lambda$ is a scale factor with the same dimensions as $x$. But I don't see what that shows about the interpretation of $\Lambda$ as a cutoff.

nrqed said:
I was talking about a general calculation, so I am not assuming that the external momenta are much larger than other physical scales (like the masses of the particles in the loops). My point is that in general, if one does a one loop QFT calculation

Hi Patrick.

I am sure you are correct in general - but I was not considering the general case - I was considering the high-energy scale of a one-parameter theory. The example you gave has two parameters, k_1 and k_2.

An example is the scattering amplitude from the Φ^4 theory, which is what I considered in the first paper. From Zee page 145 - it's

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/((k^2 - m^2)((K - k)^2 - m^2)).

For simplicity we go to the high energy regime so m can be neglected

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/(k^2*(K - k)^2)

Zee claims it's i*λ + i*C*log(Λ^2/K^2), which is exactly the form I came up with.

However, I have to say there were a number of approximations made in deriving it (Zee did it on pages 151 and 152 - but it's rather tricky). I don't know if that's a factor.

Thanks
Bill

stevendaryl said:
I guess it's obvious that if $F(x)$ is dimensionless, but $x$ is not, then $F(x)$ can be rewritten as a function of $\frac{x}{\Lambda}$, where $\Lambda$ is a scale factor with the same dimensions as $x$. But I don't see what that shows about the interpretation of $\Lambda$ as a cutoff.

Ahhhh.

That I agree with. Its simply a reasonable interpretation.

Thanks
Bill

In the Wilsonian view, the cutoff is not simply a reasonable interpretation. The cutoff is truly a cutoff (unless it turns out the theory is asymptotically free or safe).

We basically start with a cutoff, because we assume that the true high energy theory is strings or something unknown to us. However, we guess that we can do physics at low energy by assuming (for example) that special relativity, electrons, positrons and the electromagnetic field approximately exist, even though they may not really do so in the true theory of everything. Then we write down all theories consistent with this assumption, and see what predictions we can make for physics at low energy and finite resolution. Because we write down all theories, all possible terms automatically appear in our initial guess. Even if we don't write down all possible terms, we will find that the flow to low energy automatically generates them. Since we started with all terms, at low enough energy, we will find the traditional "renormalizable" terms as well as "nonrenormalizable" terms that we traditionally don't want, but the nonrenormalizable terms will be suppressed in powers of the cutoff.

If for some reason we made such a fantastic guess that is potentially the true theory of everything, we will find that we can remove the cutoff. Regardless, it is not necessary in order to do low energy physics with finite resolution.

bhobba said:
Hi Patrick.

I am sure you are correct in general - but I was not considering the general case - I was considering the high-energy scale of a one-parameter theory. The example you gave has two parameters, k_1 and k_2.

An example is the scattering amplitude from the Φ^4 theory, which is what I considered in the first paper. From Zee page 145 - it's

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/((k^2 - m^2)((K - k)^2 - m^2)).

For simplicity we go to the high energy regime so m can be neglected

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/(k^2*(K - k)^2)

Zee claims it's i*λ + i*C*log(Λ^2/K^2), which is exactly the form I came up with.

However, I have to say there were a number of approximations made in deriving it (Zee did it on pages 151 and 152 - but it's rather tricky). I don't know if that's a factor.

Thanks
Bill

Hi Bill,

Yes, my point was about not considering the high energy regime. I realize that things simplify greatly when we can neglect all the masses (and my k's can be masses).
But I did not get the feeling that you were implying that your conclusions were only valid on the condition of working in the high energy regime. Such a result is then of limited interest, since of course a real calculation includes the corrections due to finite masses, or is not necessarily done at high energy.

You wrote in your first blog

"The cut-off is gone. Everything is expressed in terms of quantities we know. That we don’t know what cut-off to use now doesn’t matter."

My point is that this conclusion does not follow in general for a quantum field theory, even if it is renormalizable. Your demonstration worked only because you were taking the high energy limit; it does not follow in general.

Regards,

Patrick

nrqed said:
Your demonstration worked only because you were taking the high energy limit but it does not follow in general, which was my point.

Good point.

But what do you think of the following argument - take the integral before:
∫ dk^4 1/((k^2 - m^2)((K - k)^2 - m^2)).

We divide the integral into two bits - the sum of a finite integral and two high-energy pieces of the form ∫ dk^4 1/k^4, because k swamps the other terms - one from -∞, the other to +∞. Then, by subtracting from the finite integral ∫ dk^4 1/k^4 over that finite region, the improper integrals become ∫ dk^4 1/k^4 from -∞ to ∞. This has the form lim Λ → ∞ C*log(Λ^2). In this case it has the form for my original argument to be applicable. Of course approximations are used - but what Zee used was full of them as well.

This was like in your example where Λ was large.

Thanks
Bill
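Bill's splitting argument above can be checked numerically. After the angular integration (in Euclidean 4-space the unit 3-sphere has solid angle 2π²), ∫ dk^4 1/k^4 over the shell μ < |k| < Λ reduces to 2π² ∫ dk/k = 2π² ln(Λ/μ), i.e. it grows like log Λ². A stdlib Python sketch (my own check; μ and Λ are arbitrary illustration values):

```python
import math

def shell_integral(mu, Lam, n=100_000):
    # midpoint rule for 2*pi^2 * integral_mu^Lam dk / k,
    # the radial form of the dk^4 1/k^4 integral over the shell
    h = (Lam - mu) / n
    total = sum(1.0 / (mu + (i + 0.5) * h) for i in range(n)) * h
    return 2 * math.pi**2 * total

mu, Lam = 1.0, 100.0
print(shell_integral(mu, Lam), 2 * math.pi**2 * math.log(Lam / mu))
```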

bhobba said:
Good point.

But what do you think of the following argument - take the integral before:
∫ dk^4 1/((k^2 - m^2)((K - k)^2 - m^2)).

We divide the integral into two bits - the sum of a finite integral and two high-energy pieces of the form ∫ dk^4 1/k^4, because k swamps the other terms - one from -∞, the other to +∞. Then, by subtracting from the finite integral ∫ dk^4 1/k^4 over that finite region, the improper integrals become ∫ dk^4 1/k^4 from -∞ to ∞. This has the form lim Λ → ∞ C*log(Λ^2). In this case it has the form for my original argument to be applicable. Of course approximations are used - but what Zee used was full of them as well.

This was like in your example where Λ was large.

Thanks
Bill
Hi Bill,

Well, even if one does something like this, one has to make some approximation in some of the regions. One makes this type of approximation only when making hand-waving arguments. The actual calculations should be done without such approximations. Or else, one should make clear the approximations made to reach a conclusion. But if we are serious about using a QFT to calculate physical results, there is no justification for dropping all masses, and then, as I said, the cutoff does not cancel out unless we take it to infinity.

nrqed said:
there is no justification for dropping all masses and then, as I said, the cutoff does not cancel out unless we take it to infinity.

Again good point

Thanks
Bill

Hi Guys

Just a quick question about how one actually does integrals like that. Zee used a Pauli-Villars regularization and even then you end up with

C* ∫ log(Λ^2/(m^2 - α(1-α)*K^2)) dα, where the integral is from 0 to 1

Do you do it in dimensional regularisation?

Thanks
Bill

bhobba said:
Hi Guys

Just a quick question about how one actually does integrals like that. Zee used a Pauli-Villars regularization and even then you end up with

C* ∫ log(Λ^2/(m^2 - α(1-α)*K^2)) dα, where the integral is from 0 to 1

Do you do it in dimensional regularisation?

Thanks
Bill
Using dimensional regularization would mean not introducing the cutoff and the integral would be quite different.

But I don't see any problem carrying out this integral; it can be done explicitly. You can use the online Mathematica integrator, for example.
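As a numerical cross-check of the Λ-dependence (my own sketch; m, K and Λ are arbitrary values chosen with K^2 < 4m^2, so the argument of the log stays positive): the Feynman-parameter integral depends on Λ only through log Λ^2, so raising Λ by a factor of 10 shifts it by exactly 2 ln 10.

```python
import math

def pv_integral(Lam, m, K, n=100_000):
    # midpoint rule for the integral over a in [0, 1] of
    # log(Lam^2 / (m^2 - a(1-a) K^2)),
    # the Pauli-Villars-regulated form quoted from Zee
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * h
        total += math.log(Lam**2 / (m**2 - a * (1.0 - a) * K**2))
    return total * h

m, K = 1.0, 1.0  # below threshold: K^2 < 4 m^2
print(pv_integral(100.0, m, K) - pv_integral(10.0, m, K))  # = 2 ln 10
```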

nrqed said:
You can use the online Mathematica integrator, for example.

I do - yes - it's actually in a table of integrals - plug and chug.

Thanks
Bill

bhobba said:
I do - yes - it's actually in a table of integrals - plug and chug.

Thanks
Bill
Ah, your question was about what people in the field prefer to use to regularize their integrals? Then yes, dimensional regularization is almost always used in practice. In an abelian gauge theory like QED, one may use a Pauli-Villars regularization, but in non-abelian gauge theories one uses dimensional regularization, and then, by habit, people use dim reg everywhere (after introducing the gauge-fixing terms, Faddeev-Popov ghosts, etc., in the lagrangian).

Is "the cutoff" whatever it needs to be, dimensionally and in value, depending on the specific problem being solved?
Or is there a canonical "maximal/minimal" cutoff? For some reason I was thinking it was the limit given by the Planck constant.

Is it correct to say that the cutoff is really along the "probability" axis regardless of the dimension in question? That renormalization is drawing a boundary around the peak of the "probability" wave. Even though the wave keeps going, the probability of observation is always (or "generally") decreasing, so introducing a "cutoff" is practical, and is just about how infinitesimal a degree of uncertainty is tolerable.

If this is the case, is it true - do I understand correctly - that the "many body" case is more worrisome?

nrqed said:
Ah, your question was about what people in the field prefer to use to regularize their integrals?

That's it. But just for the heck of it I did the integral from the table - yuck.

Interestingly, in the high energy regime it's log(Λ^2/K^2) + ∫ log(α - α^2) dα - where the integral is from 0 to 1 and I dropped the C. ∫ log(α - α^2) dα is a bit nasty as well - but it's a constant, whatever it is. This is the constant I got in my paper from solving the differential equation for F2.
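That constant is actually known in closed form: ∫ log(α - α^2) dα from 0 to 1 equals 2 ∫ log(α) dα = -2. A quick numerical check (my own sketch; the endpoint singularities are integrable, so a midpoint rule converges):

```python
import math

def log_constant(n=500_000):
    # midpoint rule for the integral of log(a - a^2) over [0, 1];
    # the exact value is -2
    h = 1.0 / n
    return sum(math.log(a * (1.0 - a))
               for a in ((i + 0.5) * h for i in range(n))) * h

print(log_constant())  # close to -2
```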

Thanks
Bill

Jimster41 said:
Is "the cutoff" whatever it needs to be, dimensionally and in value, depending on the specific problem being solved?
Or is there a canonical "maximal/minimal" cutoff? For some reason I was thinking it was the limit given by the Planck constant.

Is it correct to say that the cutoff is really along the "probability" axis regardless of the dimension in question? That renormalization is drawing a boundary around the peak of the "probability" wave. Even though the wave keeps going, the probability of observation is always (or "generally") decreasing, so introducing a "cutoff" is practical, and is just about how infinitesimal a degree of uncertainty is tolerable.

If this is the case, is it true - do I understand correctly - that the "many body" case is more worrisome?

First, the cutoff is not sharp, since it just represents an energy above which new degrees of freedom must enter. Roughly, the "absolutely necessary" (nonsharp) cutoff is determined by our guess of the low energy theory.

For example, in quantum electrodynamics, the "absolutely necessary" (nonsharp) cutoff is given by the Landau pole and lies above the Planck scale. However, reality may intervene way before that, and cause our theory to be false. In particular, we expect quantum gravity to render QED already false below the Planck scale.

Of course, the Planck scale is again the "absolutely necessary" (nonsharp) cutoff for quantum general relativity. But if there are low energy stringy effects, then quantum general relativity will be false even at energies far below the Planck scale.

Jimster41 said:
Is "the cutoff" whatever it needs to be, dimensionally, and in value, depending on the specific problem being solved?

In Wilson's approach it's an actual value - but the cut-off used depends on the coupling constant chosen and the regime you are working with. This will be a lot clearer when I do my paper on renormalisation group flow.

Jimster41 said:
Is it correct to say that the cut-off is really along the "probability" axis regardless of the dimension in question?

I don't know what you mean by that.

Thanks
Bill

atyy said:
For example, in quantum electrodynamics, the "absolutely necessary" (nonsharp) cutoff is given by the Landau pole and lies above the Planck scale. However, reality may intervene way before that, and cause our theory to be false. In particular, we expect quantum gravity to render QED already false below the Planck scale.

There is no "may" in QED - long before then the electroweak theory takes over - but for the electroweak theory, which I believe has its own Landau pole, that looks like a real issue.

Thanks
Bill

I had the Feynman "Path Integral Formulation" roughly in mind, and a connection between his "probability amplitudes" and Schrodinger's wave equation.

I was associating the "cutoff" with a boundary on the space of "possible paths", limiting the integration to run only over the ones with some minimum probability, rather than an infinite number of them.

And that at the heart of that approach was a dependence on the natural "diffusivity" of the wave equation.

Jimster41 said:
I was associating the "cutoff" with a boundary on the space of "possible paths", limiting the integration to run only over the ones with some minimum probability, rather than an infinite number of them.

The Feynman path integral must always sum over all paths (or all energies). We can split the sum into "partial" sums over high, intermediate and low energies, and each partial sum must enter the final sum. But how can we sum over all paths when we don't know the true theory of everything? We cannot, but we can guess. The cutoff represents our guess as to what the sum looks like after the partial sum over the high energies (strings or whatever) has been done.


## 1. What is renormalization and why does it need a cutoff?

Renormalization is a mathematical technique used in theoretical physics to handle infinities that arise in certain calculations. It is necessary because in quantum field theories, loop integrals diverge when parameters such as energy or momentum are allowed to become arbitrarily large. The cutoff is a mathematical tool used to regulate these infinities, allowing sensible calculations to be made.

## 2. How does the cutoff work in renormalization?

The cutoff is a maximum value imposed on certain parameters in the equations, such as the momentum running in a loop. This keeps the integrals finite and allows calculations to be carried out. In the traditional approach the cutoff is taken to infinity at the end of the calculation; in the modern effective-field-theory view discussed in the thread above, it is kept finite.

## 3. Why is renormalization important in theoretical physics?

Renormalization is a crucial tool in theoretical physics because it allows the prediction of physical phenomena at extremely small scales, such as in quantum field theory. Without renormalization, many important calculations would be impossible or would yield meaningless results.

## 4. Are there any limitations to renormalization with a cutoff?

While renormalization with a cutoff is a powerful technique, it does have its limitations. For example, it cannot be used in all quantum field theories and may not always give physically meaningful results. Additionally, the choice of cutoff can affect the final result, leading to some degree of uncertainty.

## 5. What are some real-world applications of renormalization with a cutoff?

Renormalization with a cutoff is used in a wide range of fields, including particle physics, condensed matter physics, and cosmology. It has been successfully applied to explain phenomena such as the behavior of quarks and gluons in the strong nuclear force, the behavior of electrons in superconductors, and the evolution of the universe in the early stages of the Big Bang.
