# Why Renormalisation in Quantum Theory Needs a Cutoff


## Introduction

This is a follow-on from my paper explaining renormalization. A question was raised: why exactly do we need a cut-off? There is a deep reason, to do with dimensional analysis and the power series expansion used in perturbation theory. Along the way, we will see renormalization in a more general setting, and see exactly why logarithms, like those in the previous paper, crop up so often.

## A More General Look At Renormalisation

Suppose we have a function G(x) that depends on some parameter λ, i.e. G(x,λ). Then, so that perturbation theory can be used, expand it in a power series in λ:

G(x) = G0 + G1(x)*λ + G2(x)*λ^2 + …

In perturbation theory, for theoretical convenience, it is usual to define a new function F(x) = (G(x) – G0)/G1(x), so that:

F(x) = λ + F2(x)*λ^2 + …
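As a concrete (purely illustrative) sketch of this normalisation step, here is how it looks in sympy; the symbol names are mine, not from any standard library:

```python
import sympy as sp

# illustrative symbols only; G0, G1, G2 stand for the expansion coefficients
x, lam = sp.symbols('x lam')
G0 = sp.Symbol('G0')
G1, G2 = sp.Function('G1'), sp.Function('G2')

# G(x, lam) expanded in a power series in lam, truncated at second order
G = G0 + G1(x)*lam + G2(x)*lam**2

# F(x) = (G(x) - G0)/G1(x): the series now starts at lam with coefficient 1
F = sp.expand((G - G0)/G1(x))
F2 = F.coeff(lam, 2)  # F2(x) = G2(x)/G1(x)
print(F)
```

The point is only that dividing by G1(x) makes the series start at λ with coefficient 1, which is what makes formal inversion convenient.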

This makes things like formal inversion easier. It seems a pretty innocuous thing to do but, by dimensional analysis, it has a consequence that lies at the heart of where QFT infinities come from, and of the need for a cut-off. To see this, suppose x has some kind of dimension, for example momentum squared, and λ is dimensionless. A number of theories fall into this class, such as:

- quantum electrodynamics, where the fine structure constant is dimensionless, and only high energies are considered, so the electron mass is negligible;
- the Weinberg-Salam model of electroweak interactions;
- the meson theory used as an example in my previous paper, again at high energies so the mass is negligible; K^2 has dimensions of momentum squared and the coupling constant is dimensionless.

Suppose λ is small; then F(x) = λ, so F has the dimensions of λ and is dimensionless. This is also seen from its definition, where G(x) – G0 is divided by G1(x). But let's expand F2(x) in a power series about x, so F2(x) = F20 + F21*x + F22*x^2 + … = F20 + F21*x + O(x^2). Suppose x is small, so O(x^2) can be neglected; then F2(x) has the dimensions of x, hence to second order in λ, F(x) has the dimensions of x. Here we have a dimensional mismatch. This is the exact reason the equations blow up: for F to be dimensionless it can't depend on x, which can only happen if F2(x) is a constant or infinity. Either, of course, is death for our theory, but nature seemed to choose infinity; the reason will be examined later.

Now for the solution. The only way to avoid this is to divide x by some parameter, Λ, with the same units as x, so that the argument becomes dimensionless.

The correct equation is:

F(x/Λ) = λ + F2(x/Λ)*λ^2 + F3(x/Λ)*λ^3 + … + Fi(x/Λ)*λ^i + …

We see that, by the dimensional analysis of the perturbation methods used, we have neglected a parameter in our theory, one which can be interpreted as a cut-off. It is this oversight that has caused the trouble all along.

## Consequence Of The Introduction Of Λ

To second order we have F(x/Λ) = λ + F2(x/Λ)*λ^2. (1)

The issue is that while we know there is a Λ, we do not know its value, so, as in the example in my previous paper, we want a formula without it. Similar to what was done before, we define the renormalized coupling constant:

λr = F(u/Λ) = λ + F2(u/Λ)*λ^2. (2)

Here u is some arbitrarily chosen value of x that yields a value of λr that can be measured.

Subtracting (2) from (1), and noting that to second order λr^2 = λ^2, we get:

F(x/Λ) = λr + (F2(x/Λ) – F2(u/Λ))*λr^2.

We want this not to depend on Λ, so F2(x/Λ) – F2(u/Λ) = f(x,u), where f(x,u) depends on x and u but not on Λ. Theories where this works to eliminate Λ are called renormalizable. Not all theories are renormalizable, and as we will see, when a theory is, this imposes restrictions on its equations.

Let g(x) = f(x,1) = F2(x/Λ) – F2(1/Λ) ⇒ f(x,u) = g(x) – g(u). Let K(x) = F2(x/Λ) – g(x) ⇒ K(x) – K(u) = 0 ⇒ K(x) = K(u). But since x and u are independent, K can't depend on x or u, so it can only depend on Λ, i.e. K = K(Λ). Thus:

F2(x/Λ) = g(x) + K(Λ).

We see that the renormalization condition, which is basically that we want to get rid of the unknown Λ, determines the form of F2(x/Λ): it is the sum of a function of x and a function of Λ. The reason renormalization works is that when you subtract (2) from (1), the Λ-dependent term cancels.
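A quick numeric sanity check of this cancellation, using the logarithmic form of F2 derived in the next section (the values of α, C, x and u below are arbitrary):

```python
import math

def F2(y, alpha=0.5, C=1.7):
    # logarithmic form derived below; alpha and C are arbitrary sample values
    return -alpha*math.log(y) + C

x, u = 4.0, 1.0
# F2(x/Lam) - F2(u/Lam) is the same for every Lam: the Lam dependence cancels
values = [F2(x/Lam) - F2(u/Lam) for Lam in (10.0, 1e3, 1e6)]
print(values)
```

Every entry equals -α*log(x/u), with no trace of Λ, which is exactly the additive-split property at work.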

## Why You Get Logarithms

An interesting consequence of this is that it must involve logarithms. That the meson/meson scattering formula in the previous paper contained them is no accident.

Taking the derivative with respect to x in F2(x/Λ) = g(x) + K(Λ) ⇒ F2′(x/Λ)/Λ = g′(x). Let x = 1: F2′(1/Λ)/Λ = g′(1), which will be called −α; it's conventional to use a minus sign because that's what tends to occur in equations, such as the C in the previous paper. Let 1/Λ = y ⇒ F2′(y) = −α/y, whose solution is F2(y) = −α*log(y) + C.
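The ODE step can be cross-checked with sympy's solver; this is just a verification sketch:

```python
import sympy as sp

y, alpha = sp.symbols('y alpha', positive=True)
F2 = sp.Function('F2')

# the ODE from the text: F2'(y) = -alpha/y
sol = sp.dsolve(sp.Eq(F2(y).diff(y), -alpha/y), F2(y))
print(sol)  # Eq(F2(y), C1 - alpha*log(y))
```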

Hence we have:

F2(x/Λ) = -α*log (x/Λ) + C = α*log (Λ/x) + C = α*log (Λ) – α*log (x) + C.

As promised, we see that α occurs in α*log(Λ), just as in the meson/meson scattering equation, justifying the choice of the negative sign.

This has exactly the same form as the equation for meson/meson scattering in the previous paper. However, it can be simplified further to eliminate C. This is done by subtracting C*λ^2 from F(x/Λ) to give F(x/Λ) – C*λ^2. Using this new F we have:

F2(x/Λ) = α*log (Λ/x) = α*log (Λ) – α*log (x).

## Why Did This Take So Long To Sort Out?

We have seen that the use of perturbation theory secretly requires another parameter to make sense. If you don't include it, dimensional analysis shows you will get nonsense, with this nonsense manifesting as the infinities.

Even worse was an incorrect assumption about the coupling constant λ. Measurements showed it was much less than 1, so it looked reasonable to use it in a perturbation expansion. But now that we know there is a neglected parameter, Λ, in our equations, let's look at what happens to λ when that is taken into account.

To second order:

λ = λr + a*λr^2

F(x/Λ) = λ + α*log(Λ/x)*λ^2 = λr + a*λr^2 + α*log(Λ/x)*λr^2 = λr + (α*log(Λ/x) + a)*λr^2

But λr = F(u/Λ) = λr + (α*log(Λ/u) + a)*λr^2 ⇒ α*log(Λ/u) + a = 0 ⇒ a = -α*log(Λ/u) = α*log(u/Λ). Hence:

λ = λr + a*λr^2 = λr + α*log(u/Λ)*λr^2.

We see the coupling constant depends on this new parameter. Now, making the reasonable interpretation of Λ as a cut-off, let's remove it by taking the limit at infinity, as in the previous paper. When this is done we see that, to first order, the coupling constant λ = λr, so no problem arose in our first-order calculations. But at second order it blows up to −∞. It's also interesting to note that the other reasonable choice for getting rid of Λ, taking the limit to zero, also leads to it blowing up, this time to +∞.
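A small numeric illustration of this blow-up, using the second-order relation λ = λr + α*log(u/Λ)*λr^2 with arbitrary sample values:

```python
import math

alpha, u, lam_r = 0.5, 1.0, 0.1  # arbitrary sample values

# lam = lam_r + alpha*log(u/Lam)*lam_r^2: as Lam grows, log(u/Lam) -> -infinity
lams = [lam_r + alpha*math.log(u/Lam)*lam_r**2 for Lam in (1e1, 1e4, 1e8, 1e16)]
print(lams)  # steadily decreasing; eventually negative, heading to -infinity
```

The bare coupling drifts away from λr logarithmically in Λ, which is exactly why "λ is small" was a trap.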

In perturbation theory, you want the parameter you expand in to be much less than one. For it to actually be infinite is really, really bad, and no wonder you get nonsense infinite answers.

Measurements gave small values of the coupling constant which, from the above equation, means Λ isn't too large or too small. This is what fooled people all those years.

## Conclusion

We have seen there is a secret parameter in our theories, required by dimensional analysis. The inclusion of this parameter, together with the renormalization condition, leads to the theories having a certain form. For theories with a dimensional parameter and a dimensionless coupling constant, to second order it is F(x/Λ) = λ + α*log(Λ/x)*λ^2.

It's very interesting that dimensional considerations show why there is a parameter missing. When it's not introduced, you get nonsense. If it is included, then requiring our equations to be renormalizable constrains their form.

I posted the following paper before:

It extends these ideas a lot further by calculating higher-order terms and investigating the important renormalization group. The trouble is it has a few (relatively minor) errors, and it isn't 100% clear what's going on in some areas.

I hope to do some further papers giving the third and higher-order terms, plus the renormalization group.

My favourite interest is exactly how we can view the world so that what science tells us is intuitive.

[QUOTE=”Jimster41, post: 5120363, member: 517770″]I was associating the “cutoff” with a boundary on the space of “possible paths”, limiting the integration to run only over the ones with some minimum probability, rather than an infinite number of them.[/QUOTE]

The Feynman path integral must always sum over all paths (or all energies). We can split the sum into “partial” sums over high, intermediate and low energies, and each partial sum must enter the final sum. But how can we sum over all paths when we don’t know the true theory of everything? We cannot, but we can guess. The cutoff represents our guess as to what the sum looks like after the partial sum over the high energies (strings or whatever) has been done.

I had the Feynman “Path Integral Formulation” roughly in mind, and I had a connection between his “probability amplitudes” and Schrodinger’s wave equation.

I was associating the “cutoff” with a boundary on the space of “possible paths”, limiting the integration to run only over the ones with some minimum probability, rather than an infinite number of them.

And that at the heart of that approach was a dependence on the natural “diffusivity” of the wave equation.

[QUOTE=”atyy, post: 5120343, member: 123698″]For example, in quantum electrodynamics, the “absolutely necessary” (nonsharp) cutoff is given by the Landau pole and lies above the Planck scale. However, reality may intervene way before that, and cause our theory to be false. In particular, we expect quantum gravity to render QED already false below the Planck scale.[/QUOTE]

There is no "may" in QED – long before then the electroweak theory takes over. But for the electroweak theory, which I believe has its own Landau pole, that looks like a real issue.

Thanks

Bill

[QUOTE=”Jimster41, post: 5120323, member: 517770″]Is “the cutoff” whatever it needs to be, dimensionally, and in value, depending on the specific problem being solved?[/QUOTE]

In Wilson's approach it's an actual value – but the cut-off used depends on the coupling constant chosen and the regime you are working in. This will be a lot clearer when I do my paper on renormalisation group flow.

[QUOTE=”Jimster41, post: 5120323, member: 517770″]Is it correct to say that the cut-off is really along the “probability” axis regardless of the dimension in question?[/QUOTE]

I don’t know what you mean by that.

Thanks

Bill

[QUOTE=”Jimster41, post: 5120323, member: 517770″]Is “the cutoff” whatever it needs to be, dimensionally, and in value, depending on the specific problem being solved?

Or, is there a canonical "maximal/minimal" cutoff? For some reason I was thinking it was the limit given by the Planck constant.

Is it correct to say that the cutoff is really along the "probability" axis regardless of the dimension in question? That renormalization is drawing a boundary around the peak of the "probability" wave. Even though the wave keeps going, the probability of observation is always (or "generally") decreasing, so introducing a "cutoff" is practical, and is just about how infinitesimal a degree of uncertainty is tolerable.

If this is the case, is it true, do I understand correctly that “many body” case is more worrisome?[/QUOTE]

First, the cutoff is not sharp, since it just represents an energy above which new degrees of freedom must enter. Roughly, the “absolutely necessary” (nonsharp) cutoff is determined by our guess of the low energy theory.

For example, in quantum electrodynamics, the “absolutely necessary” (nonsharp) cutoff is given by the Landau pole and lies above the Planck scale. However, reality may intervene way before that, and cause our theory to be false. In particular, we expect quantum gravity to render QED already false below the Planck scale.

Of course, the Planck scale is again the "absolutely necessary" (nonsharp) cutoff for quantum general relativity. But if there are low energy stringy effects, then quantum general relativity will be false even at energies far below the Planck scale.

[QUOTE=”nrqed, post: 5120287, member: 15416″]Ah, your question was about what people in the field prefer to use to regularize their integrals?[/QUOTE]

That’s it. But just for the heck of it I did the integral from the table – yuck.

Interestingly, in the high energy regime it's log(Λ^2/K^2) + ∫ log(α – α^2) dα – where the integral is from 0 to 1 and I dropped the C. ∫ log(α – α^2) dα is a bit nasty as well – but it's a constant whatever it is. This is the constant I got in my paper from solving the differential equation for F2.

Thanks

Bill

Is “the cutoff” whatever it needs to be, dimensionally, and in value, depending on the specific problem being solved?

Or, is there a canonical "maximal/minimal" cutoff? For some reason I was thinking it was the limit given by the Planck constant.

Is it correct to say that the cutoff is really along the "probability" axis regardless of the dimension in question? That renormalization is drawing a boundary around the peak of the "probability" wave. Even though the wave keeps going, the probability of observation is always (or "generally") decreasing, so introducing a "cutoff" is practical, and is just about how infinitesimal a degree of uncertainty is tolerable.

If this is the case, is it true, do I understand correctly that “many body” case is more worrisome?

[QUOTE=”bhobba, post: 5120280, member: 366323″]Do – yes – it's actually in a table of integrals – plug and chug.

Thanks

Bill[/QUOTE]

Ah, your question was about what people in the field prefer to use to regularize their integrals? Then yes, dimensional regularization is almost always used in practice. In an abelian gauge theory like QED one may use a Pauli-Villars regularization, but in non-abelian gauge theories one uses dimensional regularization, and then, by habit, people use dim reg everywhere (after introducing the gauge-fixing terms, Faddeev-Popov ghosts, etc., in the Lagrangian).

[QUOTE=”nrqed, post: 5120257, member: 15416″]You can use the online Mathematica integrator, for example.[/QUOTE]

Do – yes – it's actually in a table of integrals – plug and chug.

Thanks

Bill

[QUOTE=”bhobba, post: 5120236, member: 366323″]Hi Guys

Just a quick question how one actually does integrals like that. Zee used a Pauli-Villars approximation and even then you end up with

C* ∫ log(Λ^2/(m^2 – α(1-α)*K^2)) dα, where the integral is from 0 to 1

Do you do it in dimensional regularisation?

Thanks

Bill[/QUOTE]

Using dimensional regularization would mean not introducing the cutoff, and the integral would be quite different.

But I don't see any problem carrying out this integral explicitly. You can use the online Mathematica integrator, for example.

Hi Guys

Just a quick question how one actually does integrals like that. Zee used a Pauli-Villars approximation and even then you end up with

C* ∫ log(Λ^2/(m^2 – α(1-α)*K^2)) dα, where the integral is from 0 to 1

Do you do it in dimensional regularisation?

Thanks

Bill
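For anyone wanting to see a number come out, the integral can be evaluated numerically for sample (hypothetical) values of Λ^2, m^2 and a spacelike K^2, where the log stays real:

```python
from scipy.integrate import quad
import math

# hypothetical sample values; spacelike K^2 < 0 keeps the log's argument positive
Lam2, m2, K2 = 100.0, 1.0, -4.0

val, err = quad(lambda a: math.log(Lam2/(m2 - a*(1 - a)*K2)), 0, 1)
print(val)
```

With these values the integrand runs between log(100/2) and log(100/1), so the result sits between roughly 3.9 and 4.6.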

[QUOTE=”nrqed, post: 5120213, member: 15416″] there is no justification for dropping all masses and then, as I said, the cutoff does not cancel out unless we take it to infinity.[/QUOTE]

Again good point :smile::smile::smile::smile::smile::smile:

Thanks

Bill

[QUOTE=”bhobba, post: 5120121, member: 366323″]Good point.

But what do you think of the following argument – take the integral before:

∫ dk^4 1/((k^2 – m^2)((K – k)^2 – m^2)).

We divide the integral into two bits – the sum of a finite integral bit and two high energy bits that are of the form ∫ dk^4 1/k^4 because k swamps the other terms – one from – ∞ the other to +∞. Then by subtracting from the finite integral ∫ dk^4 1/k^4 over that finite region, the improper integrals become ∫ dk^4 1/k^4 for -∞ to ∞. This has the form Limit Λ → ∞ C*log (Λ^2). In this case it has the form for my original argument to be applicable. Of course approximations are used – but what Zee used was full of them as well.

This was like in your example where Λ was large.

Thanks

Bill[/QUOTE]

Hi Bill,

Well, even if one does something like this, one has to make some approximation in some of the regions. One makes this type of approximation only when making hand-waving arguments. The actual calculations should be done without such approximations. Or else, one should make clear the approximations made to reach a conclusion. But if we are serious about using a QFT to calculate physical results, there is no justification for dropping all masses and then, as I said, the cutoff does not cancel out unless we take it to infinity.

[QUOTE=”nrqed, post: 5120120, member: 15416″]Your demonstration worked only because you were taking the high energy limit but it does not follow in general, which was my point.[/QUOTE]

Good point.

But what do you think of the following argument – take the integral before:

∫ dk^4 1/((k^2 – m^2)((K – k)^2 – m^2)).

We divide the integral into two bits – the sum of a finite integral bit and two high energy bits that are of the form ∫ dk^4 1/k^4 because k swamps the other terms – one from – ∞ the other to +∞. Then by subtracting from the finite integral ∫ dk^4 1/k^4 over that finite region, the improper integrals become ∫ dk^4 1/k^4 for -∞ to ∞. This has the form Limit Λ → ∞ C*log (Λ^2). In this case it has the form for my original argument to be applicable. Of course approximations are used – but what Zee used was full of them as well.

This was like in your example where Λ was large.

Thanks

Bill
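The logarithmic growth claimed here is easy to see from the radial reduction (a sketch under the stated assumption that the angular integration contributes the constant 2π^2):

```python
import math

# assumption: in 4D Euclidean space the angular integration gives 2*pi^2, so
# the high-energy piece of the integral of d^4k / k^4, cut off between mu and
# Lam, reduces to 2*pi^2 * integral of dk/k = 2*pi^2 * log(Lam/mu)
mu = 1.0
vals = [2*math.pi**2 * math.log(Lam/mu) for Lam in (10.0, 100.0, 1000.0)]
print(vals)  # each decade of Lam adds the same amount: a log divergence
```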

[QUOTE=”bhobba, post: 5119967, member: 366323″]Hi Patrick.

I am sure you are correct in general – but I was not considering the general case – I was considering the high energy scale of a one parameter theory. The example you gave has two parameters K1 and K2.

An example is the scattering amplitude from the Φ^4 theory which is what I considered in the first paper. From Zee page 145 – its

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/((k^2 – m^2)((K – k)^2 – m^2)).

For simplicity we go to the high energy regime so m can be neglected

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/(k^2*(K – k)^2)

Zee claims it's i*λ + i*C*log(Λ^2/K^2), which is exactly the form I came up with.

However I have to say there were a number of approximations made in deriving it (Zee did it on pages 151 and 152 – but it's rather tricky). I don't know if that's a factor.

Thanks

Bill[/QUOTE]

Hi Bill,

Yes, my point was about not considering the high energy regime. I realize that things simplify greatly when we can neglect all the masses (and my k’s can be masses).

But I did not get the feeling that you were implying that your conclusions were only valid on the condition of working in the high energy regime. Such a result is then of limited interest, since of course a real calculation includes the corrections due to finite masses, or is not necessarily done at high energy.

You wrote in your first blog

“The cut-off is gone. Everything is expressed in terms of quantities we know. That we don’t know what cut-off to use now doesn’t matter.”

My point is that this conclusion does not follow in general for a quantum field theory, even if it is renormalizable. Your demonstration worked only because you were taking the high energy limit but it does not follow in general, which was my point.

Regards,

Patrick

In the Wilsonian view, the cutoff is not simply a reasonable interpretation. The cutoff is truly a cutoff (unless it turns out the theory is asymptotically free or safe).

We basically start with a cutoff, because we assume that the true high energy theory is strings or something unknown to us. However, we guess that we can do physics at low energy by assuming (for example) that special relativity, electrons, positrons and the electromagnetic field approximately exist, even though they may not really do so in the true theory of everything. Then we write down all theories consistent with this assumption, and see what predictions we can make for physics at low energy and finite resolution. Because we write down all theories, all possible terms automatically appear in our initial guess. Even if we don’t write down all possible terms, we will find that the flow to low energy automatically generates them. Since we started with all terms, at low enough energy, we will find the traditional “renormalizable” terms as well as “nonrenormalizable” terms that we traditionally don’t want, but the nonrenormalizable terms will be suppressed in powers of the cutoff.

If for some reason we made such a fantastic guess that is potentially the true theory of everything, we will find that we can remove the cutoff. Regardless, it is not necessary in order to do low energy physics with finite resolution.

[QUOTE=”stevendaryl, post: 5119955, member: 372855″]I guess it's obvious that if [itex]F(x)[/itex] is dimensionless, but [itex]x[/itex] is not, then [itex]F(x)[/itex] can be rewritten as a function of [itex]\frac{x}{\Lambda}[/itex], where [itex]\Lambda[/itex] is a scale factor with the same dimensions as [itex]x[/itex]. But I don't see what that shows about the interpretation of [itex]\Lambda[/itex] as a cutoff.[/QUOTE]

Ahhhh.

That I agree with. Its simply a reasonable interpretation.

Thanks

Bill

[QUOTE=”nrqed, post: 5119733, member: 15416″]I was talking about a general calculation, so I am not assuming that the external momenta are much larger than other physical scales (like the masses of the particles in the loops). My point is that in general, if one does a one loop QFT calculation[/QUOTE]

Hi Patrick.

I am sure you are correct in general – but I was not considering the general case – I was considering the high energy scale of a one parameter theory. The example you gave has two parameters K1 and K2.

An example is the scattering amplitude from the Φ^4 theory which is what I considered in the first paper. From Zee page 145 – its

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/((k^2 – m^2)((K – k)^2 – m^2)).

For simplicity we go to the high energy regime so m can be neglected

i*λ + 1/(2*(2π)^4) ∫ dk^4 1/(k^2*(K – k)^2)

Zee claims it's i*λ + i*C*log(Λ^2/K^2), which is exactly the form I came up with.

However I have to say there were a number of approximations made in deriving it (Zee did it on pages 151 and 152 – but it's rather tricky). I don't know if that's a factor.

Thanks

Bill

[QUOTE=”bhobba, post: 5119845, member: 366323″]I simply expanded F2 to make it clearer what's going on. You can argue it has the dimension of F2(x) – whatever F2 is – if it's squared, its dimensions are x^2; if it's a square root, it has dimensions x^1/2; etc. If it's constant then it's dimensionless. It looks obvious to me from what dimensions mean in dimensional analysis.[/QUOTE]

I guess it's obvious that if [itex]F(x)[/itex] is dimensionless, but [itex]x[/itex] is not, then [itex]F(x)[/itex] can be rewritten as a function of [itex]\frac{x}{\Lambda}[/itex], where [itex]\Lambda[/itex] is a scale factor with the same dimensions as [itex]x[/itex]. But I don't see what that shows about the interpretation of [itex]\Lambda[/itex] as a cutoff.

[QUOTE=”stevendaryl, post: 5119666, member: 372855″]I don’t understand why you say that if [itex]F[/itex] actually depended on x, it would not be dimensionless.[/QUOTE]

I simply expanded F2 to make it clearer what's going on. You can argue it has the dimension of F2(x) – whatever F2 is – if it's squared, its dimensions are x^2; if it's a square root, it has dimensions x^1/2; etc. If it's constant then it's dimensionless. It looks obvious to me from what dimensions mean in dimensional analysis.

Thanks

Bill

[QUOTE=”stevendaryl, post: 5119666, member: 372855″]I might be just being stupid, but I don’t understand this point. You have a dimensionless function of x, [itex]F(x)[/itex]. It can be written as a power series in x, as follows:

[itex]F(x) = F_0 + x F_1 + x^2 F_2 + …[/itex]

If x is small, then we can approximate F by just the first two terms, so:

[itex]F(x) = F_0 + x F_1[/itex]

I don't understand why you say that if [itex]F[/itex] actually depended on x, it would not be dimensionless. What it seems to me is that [itex]F[/itex] is dimensionless, and so is [itex]F_0[/itex], but [itex]F_1[/itex] has the dimensions of [itex]\frac{1}{D}[/itex], where [itex]D[/itex] is the dimensions of x.

I agree that if you want all the [itex]F_i[/itex] to be dimensionless, then you can't have an expansion in [itex]x[/itex]; you have to have an expansion in [itex]\frac{x}{\Lambda}[/itex], where [itex]\Lambda[/itex] has the same dimensions as [itex]x[/itex]. But saying that [itex]F[/itex] is dimensionless doesn't imply that [itex]F_1[/itex] is dimensionless.[/QUOTE]

You are absolutely correct, stevendaryl. I was going to make the same remark. The higher Fs come from a Taylor expansion and the terms in a Taylor expansion (the terms that multiply the powers of the variable in which we expand) all have different dimensions since they come from derivatives with respect to a dimensional quantity! So there is no problem with the dimensions of any of the F’s. We cannot conclude anything one way or another from the existence of the Taylor expansion.

[QUOTE=”bhobba, post: 5119589, member: 366323″]But for large Λ it's the same. I have gone through the exact calculations for the meson/meson scattering in my original paper, and even without taking a large-number approximation you get [itex]\ln(\Lambda^2/k_2^2)[/itex]. Are you sure you are talking about the large energy approximation I am using in the paper?

Thanks

Bill[/QUOTE]

Hi Bill,

I was talking about a general calculation, so I am not assuming that the external momenta are much larger than other physical scales (like the masses of the particles in the loops). My point is that in general, if one does a one loop QFT calculation, the Lambda do not cancel out unless we take the infinite limit.

We could discuss a more specific example if you want, you could just give me the Feynman rules you were using. Or we could just consider a vacuum polarization in QED or a vertex correction or even a Higgs loop. Of course, if you assume that all the masses of the particles are negligible compared to external momenta, things simplify greatly. But one should also be able to calculate quantities where this is not a valid approximation. And even if it is a good approximation, one should be able to go beyond that limit.

Regards,

Patrick

[QUOTE=”bhobba, post: 5119611, member: 366323″]No. It's because F must be dimensionless – but the expansion says it isn't. This is an inconsistency – to accommodate it, F2 must be infinity or a constant – if it actually depended on x, F would not be dimensionless.[/QUOTE]

I might be just being stupid, but I don’t understand this point. You have a dimensionless function of x, [itex]F(x)[/itex]. It can be written as a power series in x, as follows:

[itex]F(x) = F_0 + x F_1 + x^2 F_2 + …[/itex]

If x is small, then we can approximate F by just the first two terms, so:

[itex]F(x) = F_0 + x F_1[/itex]

I don't understand why you say that if [itex]F[/itex] actually depended on x, it would not be dimensionless. What it seems to me is that [itex]F[/itex] is dimensionless, and so is [itex]F_0[/itex], but [itex]F_1[/itex] has the dimensions of [itex]\frac{1}{D}[/itex], where [itex]D[/itex] is the dimensions of x.

I agree that if you want all the [itex]F_i[/itex] to be dimensionless, then you can't have an expansion in [itex]x[/itex]; you have to have an expansion in [itex]\frac{x}{\Lambda}[/itex], where [itex]\Lambda[/itex] has the same dimensions as [itex]x[/itex]. But saying that [itex]F[/itex] is dimensionless doesn't imply that [itex]F_1[/itex] is dimensionless.

[QUOTE=”bhobba, post: 5119611, member: 366323″]It's a Taylor series expansion – in applied math you generally assume you can do that.

Not quite – because of the division it creates something dimensionless – it's different from a rescaling, which would simply be a change of units.

“Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless”

No. It's because F must be dimensionless – but the expansion says it isn't. This is an inconsistency – to accommodate it, F2 must be infinity or a constant – if it actually depended on x, F would not be dimensionless.

I am sorry – but you can't do that. It's modelling something – nothing you can do can change what it's modelling.

Thanks

Bill[/QUOTE]

Thanks for the direction Bill.

I need to chew on this more, but I feel like I’m learning something.

I just wanted to clarify, I didn’t mean to imply a conversion from length to Btu’s had some specific, real, quality of meaning, I just meant that the computer can be told to “recast” some value. Like, “hey computer, I know I said 10degF + 20 deltaF = 30 degF, but I just totally changed my mind. It equals 30 “Ice cream cones”. If I tell it not to care, It will let me do things that are dimensionally nonsensical. At the end of the day, I am the one telling it what “modeling something” means. But no, of course I would be disappointed and confused to say the least, if the temperature outside changed 20 degrees and I somehow expected “ice-cream cones”.

[QUOTE=”Jimster41, post: 5119508, member: 517770″]But regardless, do I understand correctly that this is saying that G(x,λ) can be decomposed into a linear combination of functions [itex]{ G }_{ i }(x)[/itex] multiplied by powers of λ (That just what the power series expansion technique)?[/QUOTE]

It's a Taylor series expansion – in applied math you generally assume you can do that.

[QUOTE=”Jimster41, post: 5119508, member: 517770″]Do I understand correctly that this just normalizes (scales) the “power series representation of G(x,λ)” to the difference between the first two constants of expansion of G(x,λ)?[/QUOTE]

Not quite – because of the division it creates something dimensionless – it's different from a rescaling, which would simply be a change of units.

“Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless”

[QUOTE=”Jimster41, post: 5119508, member: 517770″]This is because powers of small numbers go to zero in the limit, correct?[/QUOTE]

No. It's because F must be dimensionless – but the expansion says it isn't. This is an inconsistency – to accommodate it, F2 must be infinity or a constant – if it actually depended on x, F would not be dimensionless.

[QUOTE=”Jimster41, post: 5119508, member: 517770″]I can declare something “Dimensional” to be suddenly “Dimensionless”, change length into Btu’s or whatever). After all it’s just a computer, I can make it do whatever I want. But it seems telling to me that without instructions for how/when/where to do this, the computer can’t “automatically” do so .[/QUOTE]

I am sorry – but you can't do that. It's modelling something – nothing you can do can change what it's modelling.

Thanks

Bill

[QUOTE=”nrqed, post: 5119334, member: 15416″]Instead, one typically gets something of the form [itex]\ln((\Lambda^2+k_1^2)/k_2^2)[/itex].[/QUOTE]

But for large Λ it's the same. I have gone through the exact calculations for the meson/meson scattering in my original paper, and even without taking a large-number approximation you get [itex]\ln(\Lambda^2/k_2^2)[/itex]. Are you sure you are talking about the large energy approximation I am using in the paper?

Thanks

Bill

FYI I can’t see the Tex in Patrick’s reply in the original Insights post.

Just trying to get a handhold (I would like to understand this)

“Suppose we have a function G(x) that depends on some parameter λ ie G(x,λ). Then, so perturbation theory can be used, expand it in a power series about λ:

G(x) = G0 + G1(x)*λ + G2(x)*λ^2 + ……..”

Why isn’t this written:

G(x,λ) = G0 + G1(x)*λ + G2(x)*λ^2 + …….. ?

But regardless, do I understand correctly that this is saying that G(x,λ) can be decomposed into a linear combination of functions [itex]{ G }_{ i }(x)[/itex] multiplied by powers of λ (That just what the power series expansion technique)?

(Somewhat aside) It's been a long time since I learned about power series expansions, but they have always bugged me because of their dependence on convergence at infinity. I get that there are lots of key tools that use infinite limits, but it has been a regular thorn in my side. To be honest, I always sort of associated the QM "infinities" problem with this – that you had to "sum over histories" but that there was effectively no limit to the terms in the sum. Only recently have I realized that the "energy level" is associated with the "cutoff".

“In perturbation theory, for theoretical convenience, it is usual to define a new function F(x) = (G(x) – G0)/G1 so:

F(x) = λ + F2(x) *λ^2 + ……..”

Do I understand correctly that this just normalizes (scales) the power series representation of G(x,λ), subtracting the first constant of the expansion and dividing by the second?
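Essentially yes: subtracting G0 removes the constant term, and dividing by G1(x) makes the linear term exactly λ. A small sketch, using the same hypothetical toy coefficients as above (not a physical amplitude):

```python
# F = (G - G0)/G1(x): after the rescaling the series starts at lam, and the
# second-order coefficient becomes F2(x) = G2(x)/G1(x). Toy choices:
# G0 = 1, G1(x) = 1+x, G2(x) = (1+x)^2 / 2, so F2(x) = (1+x)/2.

def F(x, lam):
    G0 = 1.0
    G1 = 1.0 + x
    G2 = 0.5 * (1.0 + x) ** 2
    G = G0 + G1 * lam + G2 * lam ** 2
    return (G - G0) / G1              # = lam + (G2/G1) * lam^2

print(F(2.0, 0.1))                    # = 0.1 + 1.5 * 0.01 = 0.115
```

Notice that the toy F2(x) = (1+x)/2 grows linearly in x, which is exactly the dimensional red flag the article goes on to discuss: if x is dimensionful, so is F2(x)·λ^2, yet F itself should be dimensionless.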

“Suppose λ is small, then F(x) = λ, F has the dimensions of λ, so is dimensionless”

…This is also seen by its definition, where G(x) – G0 is divided by G1(x). But let's expand F2(x) in a power series about x, so F2(x) = F20 + F21*x + F22*x^2 + ……. = F20 + F21*x + O(x^2). Suppose x is small, so O(x^2) can be neglected; then F2(x) has the dimensions of x, hence to second order of λ, F(x) has the dimensions of x. Here we have a dimensional mismatch. This is the exact reason the equations blow up – in order for it to be dimensionless it can't depend on x. This can only happen if F2(x) is a constant or infinity. Either of course is death for our theory – but nature seemed to choose infinity – the reason for which will be examined later.

This is because powers of small numbers go to zero in the limit, correct?

I guess I find this confusing because (at least in the software I use) I wouldn't be able to get away with just assuming the dimension x of my expression completely vanishes. The software won't automatically start to neglect the dimensionality of a system just because the value of the range in that dimension is eensy-weensy, or whatever. This has always seemed ontologically correct to me. Nor will it automatically add a dimension.

I can declare something "dimensional" to be suddenly "dimensionless" (change length into BTUs or whatever). After all, it's just a computer – I can make it do whatever I want. But it seems telling to me that without instructions for how/when/where to do this, the computer can't "automatically" do so.

I guess I have assumed this was for a pretty deep reason: that somehow, logically, there is simply not enough information in any scalar value alone (even zero) to determine its dimensionality (or lack thereof)?
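That intuition can be made concrete. A bare float carries no record of its units, so dimensional consistency has to be tracked explicitly. This toy `Quantity` class (hypothetical, not any particular library) refuses to add a dimensionful value to a dimensionless one, no matter how small the value is – which is the software analogue of the article's point that smallness of x does not rescue the mismatch:

```python
class Quantity:
    """A value tagged with a dimension string, e.g. "momentum^2" or "" for dimensionless."""

    def __init__(self, value, dim):
        self.value = value
        self.dim = dim

    def __add__(self, other):
        # Addition is only meaningful between quantities of the same dimension.
        if self.dim != other.dim:
            raise ValueError(f"dimension mismatch: {self.dim!r} vs {other.dim!r}")
        return Quantity(self.value + other.value, self.dim)

lam = Quantity(0.1, "")                 # dimensionless coupling
x = Quantity(1e-30, "momentum^2")       # tiny, but still dimensionful

try:
    total = lam + x                     # analogue of lam + F21*x*lam^2
except ValueError as e:
    print(e)                            # smallness does not rescue the mismatch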

Still reading (and re-reading).

[Edit] I [I]think[/I] this is clicking. Now “x” is a number of apples in the world of apples “Λ” :

“Now for the solution. The only way to avoid this is to divide x by some parameter, Λ, of the same units as x, so it becomes dimensionless.

The correct equation is:

F(x/Λ) = λ + F2(x/Λ)*λ^2 + F3(x/Λ)*λ^3 + ……… + Fi(x/Λ)*λ^i + ……………

We see, due to dimensional analysis of the perturbation methods used, we have neglected a parameter in our theory, which can be interpreted as a cut-off. It is this oversight that has caused the trouble all along.”
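The fix can be checked numerically: once the coefficients depend only on the ratio x/Λ, the series is invariant under a change of units (rescaling x and Λ by the same factor c). A minimal sketch, with hypothetical toy coefficient functions F2, F3:

```python
import math

def F(x, Lam, lam):
    r = x / Lam                       # the dimensionless combination x/Lambda
    F2 = 1.0 + r                      # toy F2(x/Lambda)
    F3 = math.log(1.0 + r)            # toy F3(x/Lambda)
    return lam + F2 * lam ** 2 + F3 * lam ** 3

c = 1000.0                            # rescale units of x and Lambda together
a = F(x=2.0, Lam=10.0, lam=0.1)
b = F(x=2.0 * c, Lam=10.0 * c, lam=0.1)
print(abs(a - b) < 1e-12)             # True: only the ratio x/Lambda matters
```

A function of x alone would fail this test, since rescaling the units of x would change its value – that is the dimensional mismatch the cutoff repairs.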

I'm interested to see where the logs come from now…

But… must… eat…

[QUOTE=”nrqed, post: 5119343, member: 15416″]This is the “old” approach to renormalization (pre Ken Wilson, say). The modern point of view is that the cutoff should not be taken to infinity. But then one must treat the theory as an effective field theory, and there is an infinite number of terms to be included in the Lagrangian. This is for another post. But my point here was to convey that the cutoff does not go away even in renormalizable theories if we don’t take the limit cutoff goes to infinity.[/QUOTE]

Including the (usually finite) cutoff and the infinite number of terms is so important conceptually. I don't know why even modern texts like Srednicki or Schwartz put it so late, and even then make it hard to extract the key concept (well, Schwartz is pretty good, actually). On the other hand, the statistical mechanics texts do this right away.

I am sorry, I messed up again by unintentionally including my first post in my reply, making the whole thing a mess. And I don't know how to go back and edit or delete a post, so here is my final version!

Watch out… the presentation is a bit misleading, for the following reason. In actual calculations, when integrating loop diagrams, one almost never gets a pure log of the form ln(Λ/k), where k is some energy scale (it could be a mass). It is almost never like this. Instead, one typically gets something of the form ln((Λ^2 + k^2)/u^2), where u is another energy scale. And one can even have cases (when there are scalar boson loops, for example) where, in addition to these terms, one can have terms of the form 1/(k^2 + Λ^2).

So after renormalizing, the cutoff does NOT go away if we keep it at a finite value, even when we are dealing with logs! Instead, one generically gets terms of the form

ln[(Λ^2 + k^2)/(Λ^2 + u^2)] or 1/(Λ^2 + k^2) – 1/(Λ^2 + u^2)

We see that the cutoff does not go away, even if the theory is renormalizable! BUT we see that if we take the limit Λ → ∞, THEN the cutoff disappears. This is the reason for taking this limit!

This is the "old" approach to renormalization (pre Ken Wilson, say). The modern point of view is that the cutoff should not be taken to infinity. But then one must treat the theory as an effective field theory, and there is an infinite number of terms to be included in the Lagrangian. This is for another post. But my point here was to convey that the cutoff does not go away even in renormalizable theories if we don't take the limit cutoff → ∞.

Cheers,
Patrick
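Patrick's point is easy to verify numerically: the renormalized combination ln((Λ^2 + k1^2)/(Λ^2 + k2^2)) retains cutoff dependence at any finite Λ, but tends to ln(1) = 0 as Λ → ∞. A quick sketch with arbitrary sample scales k1, k2:

```python
import math

k1, k2 = 3.0, 7.0                     # arbitrary sample energy scales

def renorm_log(Lam):
    """The cutoff-dependent remainder after renormalization, for a finite cutoff Lam."""
    return math.log((Lam ** 2 + k1 ** 2) / (Lam ** 2 + k2 ** 2))

for Lam in (10.0, 100.0, 10000.0):
    print(Lam, renorm_log(Lam))
# The magnitude shrinks roughly like (k1^2 - k2^2)/Lambda^2 as Lambda grows,
# so the cutoff dependence survives at finite Lambda but vanishes in the limit.
```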

