Is Renormalisation Unique for Physically Measurable Quantities?

  • Thread starter Sunset
  • #1
Sunset
Hi!

As far as I understood renormalisation, you try to find renormalised quantities R1, R2, ... which are related to the bare quantities B1, B2, ... in the following way:
- Ri is finite when you send Bi to infinity
- you can write every Lorentz-invariant amplitude of the theory in terms of the Ri instead of the Bi

I have one concern about this: is your choice of Ri unique (up to an additive constant)? I mean, is the relation between Ri and Bi unambiguously given? What we want are physically measurable quantities, which can all be derived from Lorentz-invariant amplitudes for processes. If there is only one possible choice for your renormalised quantity, renormalisation is a method that really makes sense; otherwise it would depend on arbitrariness.
Lorentz-invariant amplitudes are constructed from vertex functions and propagators, so you have to make sure you can write all vertex functions and propagators in terms of unambiguously defined Ri.

Does anybody know the textbooks of Griffiths or Ryder? Then we could discuss it explicitly...

Best regards Martin
 
  • #2
Ryder, for example, has the following relation (at 1-loop order) between renormalised mass M and bare mass m in Phi^4 theory:
m² = M²(1 + g/(16π²[4-d]))

The thing is: he dropped all the finite terms in the result of the calculation of the (only) divergent 1-loop diagram. I guess that by ignoring all the finite terms he chooses a particular "renormalisation scheme" (i.e. I guess the choice of which finite terms you drop corresponds to the choice of the parameters in the counterterm method). The finite terms depend on the bare mass! So you could include such an arbitrary finite term in the relation above (choose another renormalisation scheme), which means the relation between bare and renormalised quantity is ambiguous.

But the predictions for measurable quantities of your theory depend on the choice of renormalisation-scheme (e.g. http://arxiv.org/abs/hep-ph/9412236) .

So there should be a canonically determined choice of renormalisation scheme (something which motivates it!) if renormalisation is to make sense; otherwise it's simply fitting your theory (and not fitting the parameters of a theory) to the experiment. :grumpy:
 
  • #3
Hi Sunset,

You are correct, renormalization is far from unique. There are essentially two parts to what is broadly called "renormalization."

The first part involves a choice of regulator, the regulator being your tool to tame divergent amplitudes. The most popular choice of regulator is definitely dimensional regularization.

The second part involves the actual procedure of renormalization, and again there are many choices available. One very popular choice is the [tex] \overline{MS} [/tex] scheme. This scheme is defined by subtracting off only the divergent pieces of amplitudes while allowing finite corrections to accrue. You can contrast this with the most naive (and seemingly physical) scheme where your renormalization conditions amount to fixing certain propagator poles and residues.

There are any number of technical and physical reasons why the simplest "physical" scheme may not be useful, but the important point is that it doesn't matter. The physical predictions of your theory are quite independent of your renormalization scheme, they simply follow from your bare lagrangian (inserted in some path integral on a lattice, say). The renormalization scheme is simply a choice about how to break up your bare lagrangian into a renormalized piece and a counterterm piece (in the context of renormalized perturbation theory). Within a given scheme, the renormalization group refers to your ability to shuffle things back and forth between the renormalized lagrangian and the counterterm lagrangian.
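To make that "choice of split" point concrete, here is a toy numerical sketch in Python (the numbers and the amplitude function are invented purely for illustration):

```python
# Toy illustration: a renormalization scheme is just a choice of how to split
# the bare coupling into a renormalized piece plus a counterterm. The physical
# prediction is computed from the bare coupling alone, so the split cannot matter.
g_bare = 0.30                       # made-up bare coupling

def amplitude(g):
    """Stand-in for a physical prediction computed from the bare coupling."""
    return g + 0.1 * g**2           # made-up 'tree + loop' structure

# Scheme A: no finite part in the counterterm; scheme B: shuffle 0.02 over.
gR_A, ct_A = g_bare, 0.0
gR_B, ct_B = g_bare - 0.02, 0.02

# The bare coupling, and hence the prediction, is the same either way:
assert gR_A + ct_A == gR_B + ct_B == g_bare
print(amplitude(gR_A + ct_A), amplitude(gR_B + ct_B))
```

Within a given scheme, "shuffling things back and forth" as described above amounts to moving pieces between gR and ct while keeping their sum fixed.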

If you found this helpful I can try to answer more of your questions here, but may I also suggest you pick up the book "Renormalization" by John Collins. It will go a long way toward demystifying renormalization for you.

P.S. I just saw your article link, so let me comment on that. While the predictions of your physical theory are independent of renormalization scheme, it's important to remember that we don't really know how to get those predictions exactly. What we have is perturbation theory, and since renormalization is basically a way to do perturbation theory in a smart way, it follows that some renormalization schemes may do better than others. This is what your linked article refers to, I think.
 
  • #4
A small question: can all types of divergences be written in the form

[tex] \int_{0}^{\infty}dp\, p^{m} [/tex] with m an integer,

since if we had

[tex] \int_{0}^{\infty}dp\,\mathcal{F}(p)
[/tex]

we could make a Taylor expansion of F in powers of p. Is that right?
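To see what such an expansion looks like in practice, here is a quick symbolic check (the integrand F is an invented example, just for illustration):

```python
import sympy as sp

p, m = sp.symbols('p m', positive=True)
F = p**3 / (p**2 + m**2)     # invented loop-like integrand

# Expanding F for large p shows which powers p^m appear: the terms with
# non-negative powers of p are the ones that make the integral diverge
# at the upper limit.
print(sp.series(F, p, sp.oo, 6))   # leading terms: p - m**2/p + ...
```

Note the expansion relevant for the divergence is the one at large p (an asymptotic expansion), not a Taylor series around p = 0.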
 
  • #5
The choice of regularization can and does change the actual results your theory outputs; in this sense there is somewhat of an ambiguity in field theory (albeit greatly demystified in the 70s and somewhat intuitively obvious). For instance, dimensional regularization completely misses the hierarchy problem of particle physics.

On the other hand, the choice of renormalization scheme is more a technical issue in the sense that while it does change the results, it just means one of them is approaching some attractor point better than the other, as was pointed out earlier. Sometimes you can actually improve the results to even obtain some nonperturbative sectors of the theory.

In general you can more or less match these various schemes theoretically in the appropriate regimes and can show that they are consistent at a physicist's level of rigor (although this is extremely hard to do, and the papers that do it are somewhat hard to track down and sometimes even unpublished).
 
  • #6
Ok, you say that if we consider enough orders in perturbation theory, predictions using different renormalisation schemes become more and more equal. That would be satisfying.

When I speak of predictions, I always have in mind something like the Lorentz-invariant amplitude L. Let's consider the simplest process possible: the propagation of one particle of momentum p. The perturbation series for L is: free propagator + 1-loop correction diagram (neglecting higher loop orders). We have to regularise the 1-loop correction diagram, i.e. our regularised expression contains the regularisation parameters (e.g. µ and 4-d), bare mass m and coupling constant g. So L is a function of m, g, µ, d and momentum p:
L = f(m, g, µ, d, p)
We now carry out renormalisation, using a certain renormalisation scheme such as the MS scheme. So L becomes a different function of renormalised quantities M, G:
L = F(M, G, µ, d, p)
Using a different renormalisation (renormalisation scheme), i.e. different renormalised quantities, we should again get a different function F':
L = F'(M', G', µ, d, p)

If we fit with masses from the Particle Data Book (experiment) we receive different predictions for the probability |M|², i.e. different renormalisation schemes lead to different predictions!

P.S. I'll check Collins book in our library

Best regards
 
  • #7
Zee's in a nutshell ch. III.1 Cutting off...

He defines the renormalised coupling constant G as

-iG = M(s0,t0,u0) = -ig + iCg² [log(cutoff²/s0) + ...] (4)

and he gets the result

M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ] (9)



Why -iG? I mean, I could write an arbitrary function -if(G) instead of -iG in formula (4):
-if(G) = M(s0,t0,u0) = -ig + iCg² [log(cutoff²/s0) + ...]

Then you receive your result:

M = -if(G) + iCf²(G) [ log(s0/s) + log(t0/t) + log(u0/u) ]

( e.g. f(G)=G² )

:confused: Isn't this a problem?

What I can measure in experiment is M for s,t,u .
 
  • #8
Hi Sunset,

What you are pointing out is that infinity minus infinity is completely indeterminate. Since it is completely indeterminate, the arbitrary function that you talk about could even depend on variables that never entered the equations in the first place. What Zee and the other textbooks on renormalized QFT do is therefore nonsense. I have been pointing this out, on and off, for twenty years, and the response of HEP theorists is to pretend that I do not exist. Whether you would have better luck, I cannot say, but as things stand, I would recommend getting out and doing something based on logic instead.
 
  • #9
Sunset said:
He defines the renormalised coupling constant G as

-iG = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ] (4)

and he gets the result

M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ] (9)



Why iG ? I mean I could write an arbitrary function -if(G) instead of -iG in formula (4):
-if(G) = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ]

Then you receive your result:

M = -if(G) + iCf²(G) [ log(s0/s) + log(t0/t) + log(u0/u) ]

( e.g. f(G)=G² )

:confused: Isn't this a problem?

What I can measure in experiment is M for s,t,u .

I may not have understood your point completely, so let me know if I am way off. But the point is that we are working in a loop expansion, so it is understood that all the calculations are meant to be accurate to some order in the coupling constant. It would therefore not be consistent to introduce a function of the coupling constant, given that you are already doing all the calculations up to some power of that coupling constant. You would especially not redefine G^2, since you have already neglected all the loop diagrams of order G^2 relative to the diagram you are working with! I suppose you could use something fancier like (1-e^(-G)), but then again, since all the diagrams of order G^2 or higher relative to the diagram you are considering have been neglected anyway, there is no point in doing something like this. One should simply redefine G^1.

I hope this helps. Those are great questions. Renormalization is a very tricky concept.

Patrick
 
  • #10
Hi cgoakley!

On the other hand, QCD and QED are QFTs which apparently describe nature pretty well after renormalisation. I doubt this can be reached only by "fitting theory to experiment" - ok, maybe you have to put something in by hand, but I assume your theory makes predictions that go beyond that ("verifying theory by experiment").

Best regards
 
  • #11
Hi Patrick!

nrqed said:
You would especially not redefine G^2 since you have already neglected all the loop diagrams of order G^2 relative to the diagram you are working with!

You are right, the function f is not completely arbitrary then. But f(G)=G² would be ok, because I didn't neglect second order in g:

M(s,t,u) = -ig +iCg² [log(cutoff²/s) + log( ... ) + ... ]
 
  • #12
Sunset said:
Hi Patrick!



You are right, the function f is not completely arbitrary then. But f(G)=G² would be ok, because I didn't neglect second order in g:

M(s,t,u) = -ig +iCg² [log(cutoff²/s) + log( ... ) + ... ]

Hi. Ok, sorry about the confusion. Let's first clear up the notation. I am assuming you use G for the renormalized coupling constant and "g" for the bare coupling constant, right?
Then the way people would normally write that would be

[tex]-iG = M(s0,t0,u0) = -ig +iC G^2 [log(\frac{cutoff^2}{s0}) + log( ... ] [/tex]

Where notice that in the second term, one uses the renormalized coupling constant. So this equation allows one to relate G to g up to order G^2. At the next order, one would get an expression of the form

-iG = -ig + stuff to order G^2 plus stuff of order G^3 etc

I have much more to say about renormalization and renormalization schemes and the relation between bare and renormalized parameters but I will just post a bit at a time so that we can be on the same wavelength.

Great questions

Patrick
 
  • #13
nrqed said:
Hi. Ok, sorry about the confusion. Let's first clear up the notation. I am assuming you use G for the renormalized coupling constant and "g" for the bare coupling constant, right?
Then the way people would normally write that would be

[tex]-iG = M(s0,t0,u0) = -ig +iC G^2 [log(\frac{cutoff^2}{s0}) + log( ... ] [/tex]

Where notice that in the second term, one uses the renormalized coupling constant. So this equation allows to relate G to g up to order G^2. At the next order, one would get an expression of the form

-iG = -ig + stuff to order G^2 plus stuff of order G^3 etc

I have much more to say about renormalization and renormalization schemes and the relation between bare and renormalized parameters but I will just post a bit at a time so that we can be on the same wavelength.

Great questions

Patrick

Let me say it in a different way:

The goal is to write g as an expansion in G:

[tex] g= f_1 G + f_2 G^2 + f_3 G^3 + \ldots [/tex]

where the f_i are functions of the parameters of the theory and of the cutoff (and are generally divergent... notice that even if there were no infinities in the theory, one would still have to renormalize!)

The above expression is the starting point. All you have to do is to plug this into your diagrams. Now, by definition, we impose that the amplitude be equal to some measured value at some kinematic point; like you said, M(s0,t0,u0) is defined to be -iG. Now, you calculate a tree-level diagram with the Lagrangian (which contains the bare parameter g) and fix this to be -iG. But since you are working at tree level, you keep only the first term in the expansion I gave above. This fixes f_1 to be 1.

Now, you repeat with the one-loop diagram. You plug in your g given above up to order G^2 and use f_1 = 1 (which you found from the tree-level matching). That will allow you to fix f_2.

And on and on.

In the context of effective field theories, this is called "matching". Renormalization is really just that: matching the bare coefficients to measurable quantities, order by order in a coupling constant expansion.

The question of "scheme" is something added on top of that and is really not necessary at all! It's one of those things that people do because it makes things simpler in practice, but it is NOT necessary in principle.

I will write more if you want me to.

Hope this helps.

Patrick
 
  • #14
To go back to your question about using a different function f(G).
We don't want to change that as we renormalize. The whole point is that we want to define the bare coupling constant "g" such that calculations with the theory reproduce a fixed, measured quantity, no matter how many loops we include. The only thing is that since we can only do calculations in a perturbative context, we can only define the bare parameters as expansions in the coupling constant.

So the point is to impose that the theory reproduce the measurable
M(s0,t0,u0) up to the number of loops we want to use in whatever calculation we are doing.

So if you plan to do tree level calculations, you must impose that, up to tree level,

calculation of tree-level M_0 with bare parameter = measured value of M_0

(where I use M_0 = M(s0,t0,u0) )


If you plan to do one-loop calculations, you must impose


calculation of one-loop M_0 with bare parameter = measured value of M_0


and so on. You always fix to the same measurable quantity, no matter how many loops you keep in your calculation. Now, in the case we are discussing here, the measurable quantity M_0 happens to be -iG, the renormalized coupling constant. But the principle is still that, order by order in the loop expansion, we impose that the calculation give a measured, fixed quantity.


Hope this makes sense

Patrick
 
  • #15
Sunset said:
Hi cgoakley!

On the other hand QCD and QED are QFT's which describe nature apparently pretty well after Renormalisation. I doubt this can be reached only by "fitting theory to experiment" - ok maybe you have to put in something in by hand, but I assume your theory makes predictions that go beyond that ("verifying theory by experiment").

Best regards

I doubt very much that, to tree level at least, there is anything seriously wrong with the standard model.

The problem is loops, which mostly diverge. HEP theorists have a gourmet's appreciation of different kinds of divergent integral, classifying them as "log", "quadratic", "quartic", etc. but the reality is that either an integral diverges or it does not. Reparametrising a quartic-divergent integral will get you a quadratically divergent integral and vice versa, and differencing divergent integrals just gives you an indeterminate value. A completely indeterminate value - it may be plus or minus infinity, it may be a (finite) constant, or it may be a finite function of any variables you could possibly think of. It is just completely indeterminate. Most often, HEP theorists choose a constant here, but, as you discovered, the mathematics does not require this. In effect, what they are doing is deciding in advance the kind of answer they want out of the calculation. It turns out that "reasonable" choices here accurately get us the Lamb Shift (despite the absence of a QFT description of the single-electron atom), the anomalous magnetic moment of the electron, and up till recently, the anomalous magnetic moment of the muon. But without a consistent mathematical substructure beneath, this success is bogus and serves only to confuse the issue.

QFT textbooks should not try to give the impression that so-called "effective" field theory can be uniquely derived from first principles. It cannot. Their derivations, such as the one you cite, prove only that if one lowers mathematical standards sufficiently, one can prove practically anything.
 
  • #16
Thanks for your help!

Yes, G for the renormalized coupling constant and "g" for the bare coupling constant

Where notice that in the second term, one uses the renormalized coupling constant.

Yes, I just realized
to receive
M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ]

you have to use

-iG =M(s0,t0,u0)=-ig + iCG² [ log(cutoff²/s0) + ... ]

instead of -iG = M(s0,t0,u0) = -ig + iCg² [log(cutoff²/s0) + ...] (4)

I don't have the book here at the moment, but as far as I remember Zee doesn't stress that he "changes g into G" - or I read over it.
I haven't understood that issue in Ryder either: why the error involved by this change is of negligible order.
 
  • #17
Sunset said:
Thanks for your help!

Yes, G for the renormalized coupling constant and "g" for the bare coupling constant



Yes, I just realized
to receive
M = -iG + iCG² [ log(s0/s) + log(t0/t) + log(u0/u) ]

you have to use

-iG =M(s0,t0,u0)=-ig + iCG² [ log(cutoff²/s0) + ... ]

instead of -iG = M(s0,t0,u0) = -ig +iCg² [log(cutoff²/s0) + log( ... ] (4)

I haven't the book here in the moment, but as far as I remember Zee doesn't stress that he "changes g into G" - or I read over.
I haven't understood that issue in Ryder too, why the error involved by this change is of neglectable order.


Ok.

If you read my other post it might make things more clear (where I write g as an expansion in G).

You have put your finger on an important point: why can we do this?
Or, in other words, why is it legitimate to write g as an expansion in powers of G?

Strictly speaking, this is nonsense, since the coefficients of the expansion are formally infinite! This is what bugs/bugged a lot of people. If there is a term alpha·log(cutoff) in the QED expansion, the whole approach clearly makes sense only if alpha·log(cutoff) << 1, i.e. cutoff << e^(1/alpha) ≈ e^137 (one can be much more rigorous than this to discuss convergence, but you get the main idea).
So, strictly speaking, if we let the cutoff go to infinity the whole thing makes no sense. The modern point of view is that any of the field theories we work with are low-energy effective field theories, so that the cutoff has a physical meaning: the scale at which our field theory is no longer a good description of nature. In that case, there is no problem with the whole procedure, since the cutoff should really not be taken to infinity.
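Just to put a number on that e^(1/alpha) bound (a back-of-envelope check; using the electron mass as the reference scale is my own choice):

```python
import math

alpha = 1 / 137.0                  # fine-structure constant, roughly
m_e_GeV = 0.000511                 # electron mass in GeV, chosen as reference scale

# cutoff << m_e * e^(1/alpha): the scale at which the log-enhanced
# terms in the expansion would become of order one.
bound_GeV = m_e_GeV * math.exp(1 / alpha)
planck_GeV = 1.22e19               # Planck scale in GeV, for comparison

print(f"bound ~ {bound_GeV:.1e} GeV vs Planck ~ {planck_GeV:.1e} GeV")
```

The bound sits vastly above the Planck scale, which is why this formal limitation of QED never bites in practice.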


(Things are much worse in QCD of course since the coupling constant is not that small at most energy scales (and is of order one at low energy scales). This is what makes QCD so tough. )

Pat
 
  • #18
Sunset said:
Ryder for example has the following relation (for 1-loop-oder) between renormalised mass M and bare mass m in Phi^4-theory:
m²=M²(1+ g/(16Pi²[4-d]) )

The thing is: he dropped all the finite terms in the result of the calculation of the (onliest) divergent 1-loop-order diagram. I guess by ignoring all the finite terms he chooses a particular "renormalisation-scheme" (i.e. I guess the choice which of the finite terms you drop corresponds to the choice of the parameters in the counter-term-method). The finite terms depend on the bare mass! So you could involve such an arbitrary finite term in the relation above (choose another renormalisation-scheme), which means the relation between bare and renormalised quantity is ambiguous.
That is true, the relation between the bare and renormalized parameters depends on your choice of scheme. But that does not matter! The physical results of the theory will not depend on that choice of scheme (*up to a given order in the coupling constant*!)

The key point is that any calculation of a physical quantity involves calculations of loop diagrams with bare parameters. There are therefore *two* sources of divergences: those coming from the loops themselves *and* those coming from the bare parameters expressed in terms of physical quantities (the renormalized quantities). The two divergences cancel, leaving a unique and well-defined result for the physical quantity being calculated.

Now, what are "renormalization schemes"? Well, it's just that people get lazy. For example, in dimensional regularization, divergences pop up in the form of 1/epsilon. If there is such a term in the bare parameters, of course the same term with an opposite sign will arise from the loops. So people invent this rule: whenever you see a 1/epsilon, just ignore it (the actual schemes used in dim reg are not that simple, but I just want to illustrate the idea)... don't write it down. So when they fix the bare parameters in terms of the renormalized quantities, they ignore all those terms and quote a relation between the bare and renormalized quantities in that scheme. Now, when doing a loop calculation with those expressions for the bare parameters, people must use the same rule when calculating the loops. So they ignore the 1/epsilon there as well, giving a finite and well-defined result for the physical quantity being calculated.

Of course, you could say anything you want, for example: whenever you do any integral that has a 1/epsilon divergence, always drop that term and add +5 to your diagram. As long as you follow the same rule everywhere, you will get the same result for any physical quantity as before. This sounds crazy, but in dim reg there are often combinations of certain constants that always appear grouped with the divergences, so one might as well drop them.

Of course, all this is not necessary. One could simply calculate all the integrals completely in whatever regularization one is using and never talk about a "scheme".

As I said, a choice of scheme is totally unnecessary, so it confuses things a great deal. It is just something people do to make their lives easier, but there is no need at all to do it in principle.
 
  • #19
nrqed said:
Ok.
(Things are much worse in QCD of course since the coupling constant is not that small at most energy scales (and is of order one at low energy scales). This is what makes QCD so tough. )

Pat


Hi Sunset.

I don't want to dump too much information at once, so I will be quiet for a while (plus I need to prepare my classes!). But I will mention something briefly. There is a whole different aspect to this renormalization scheme business, but in my opinion it's better not to dump too much information at once, as it tends to confuse the issues more than clarify them. Once all I explained is clear to you, we can get to the next level of things (if I have time to post... the next few weeks will be crazy in terms of teaching). Let me just mention it briefly so that at least you have heard of it.
The whole business makes sense as long as the expansion in the coupling constant times the expressions f_1, f_2, etc. (which are formally divergent, but must be considered finite in the spirit of effective field theories) is valid. This is shaky in QCD. Now, in dimensional regularization, two parameters appear: epsilon (which has to be formally taken to zero at the end of a calculation) and a scale "mu". Divergences appear as inverse powers of epsilon. The mu appears in logs. After renormalization, they disappear, of course. But in QCD, people worry a lot about the convergence of the expansion. So they start assigning some physical meaning to "mu" (even though it formally disappears from any physical result). And they start looking at the relation between the bare and renormalized parameters (even though that formally plays no role in any calculation of a physical quantity). They look at what values of "mu" would make the terms of the expansion decrease faster, thereby improving the convergence of the expansion between the bare and renormalized parameters. This is a whole different can of worms that we can discuss later. But for now, I think it is important to set this aside and focus on the basic idea as I presented it in my previous posts.

Regards

Patrick
 
  • #20
Hey, thanks for all of your replies, I have to think about it for a while! What I ask myself spontaneously:

You say I start with

[tex] g= f_1 G + f_2 G^2 + f_3 G^3 + \ldots [/tex] ,

and later it will turn out that f1 = 1 and f2 = C[log(cutoff²/s0) + ...], right?


All you have to do is to plug this in your diagrams..

[tex] M = -i f_1 G + ( i C f_1 G )^2 ( ( log ( cutoff^2 / s ) ) + \ldots ) [/tex]

or what do you mean?
 
  • #21
nrqed said:
I don't want to dump too much information at once so I will be quiet for a while (plus I need to prepare my classes!). But I will mention something briefly. There is a whole different aspect to this renormalization scheme business but in my opinion, it's better to not dump too much information at once as it tends to confuse the issues more than clarifying them. After all I explained is clear to you we can get in the next level of things (if I have time to post...the next few weeks will be crazy in terms of teaching).

All right then, I'll think about it calmly, and we go on with discussion another time

Thanks again, and best regards, Martin
 
  • #22
Sunset said:
Hey thanks for all of your replies, I have to think about it for a while! What I ask myself spontanously:

You say I start with

[tex] g= f_1 G + f_2 G^2 + f_3 G^3 + \ldots [/tex] ,

and later it will drop out that f1=1 and f2 = C[log(cutoff²/s0) + ...], right?




[tex] M = -i f_1 G + ( i C f_1 G )^2 ( ( log ( cutoff^2 / s ) ) + \ldots ) [/tex]

or what do you mean?

What I mean is the following (it would be easier with a blackboard in front of us!)

First, it's important to distinguish the *measured* M and the *calculated* M.
The two are set equal to one another, of course. But the calculated M is a set of Feynman diagrams. Now, in those Feynman diagrams, whenever there is a vertex, you use the bare coupling constant "g" given by the above expansion. Those diagrams, if they have loops, will of course contain divergences. So you end up with something like this:

TREE LEVEL MATCHING:

M_measured = tree-level diagram, using g = f_1 G for the coupling constant



This gives

M_measured = -i(f_1 G)

In the example you gave, M_measured is defined to be -i G so we get f_1 = 1.

ONE-LOOP MATCHING:

M_measured = -i(G + f_2 G^2+...) PLUS L_1 (G+ f_2 G^2 + ...)^2

where L_1 is the result of the one-loop integration with all the vertex factors and propagators etc. It's a divergent quantity that must be regularized.

Notice that we should use only g= G in the one-loop diagram since that coupling is squared and we are working up to order G^2. So we get

M_measured = -i(G + f_2 G^2+...) PLUS L_1 G^2

Since the tree-level matching imposed the definition given above, you end up with

0 = -i f_2 G^2 + L_1 G^2

so f_2 = -i L_1


and so on to any number of loops. Does that make sense?
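The one-loop matching step can be checked symbolically; here is a small sketch with sympy (the symbol names are mine, and L_1 is left abstract):

```python
import sympy as sp

G = sp.symbols('G', positive=True)
f2, L1 = sp.symbols('f_2 L_1')   # L_1: regularized one-loop value, kept abstract

# One-loop matching as in the post: the calculated amplitude
#   M = -i(G + f2*G^2) + L1*G^2
# must still equal the measured value, which was defined to be -iG.
M_calc = -sp.I * (G + f2 * G**2) + L1 * G**2
sol = sp.solve(sp.Eq(M_calc, -sp.I * G), f2)

print(sol)    # recovers f_2 = -i*L_1
```

The G^1 pieces cancel by the tree-level condition, so the equation fixes f_2 alone, exactly as in the matching described above.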
 
  • #23
Sunset said:
All right then, I'll think about it calmly, and we go on with discussion another time

Thanks again, and best regards, Martin

But keep posting and asking questions! It's just that I might reply once a day or so. But keep asking and I will keep replying!


Regards

patrick
 
  • #24
But the calculated M is a set of Feynman diagrams.

In Zee we have only the M for Meson-Meson-Scattering (2 Mesons going in, two mesons going out) , so always the same diagrams (tree-level: 4 external lines attached to vertex, 1loop: the 3 diagrams for s,t and u channel)

so f_2 = -i L_1

where L_1 = i f_2 = iC[log(cutoff²/s) + ...]

Yes, that makes sense, although I wonder whether it should be C[log(cutoff²/s0) + ...]... but ok.

But keep posting and asking questions! It's just that I might reply once a day or so.

No problem, I have time! Only at my office do I have a computer, so my replies can be delayed too... CU
 
  • #25
nrqed said:
What I mean is the following (it would be easier with a blackboard in front of us!)

First, it's important to distinguish the *measured* M and the *calculated* M.
The two are set equal to one another, of course. But the calculated M is a set of Feynman diagrams. Now, in those Feynman diagrams, whenever there is a vertex, you use the bare coupling constant "g" given by the above expansion. Those diagrams, if they have loops, will of course contain divergences. So you end up with something like this:

TREE LEVEL MATCHING:

M_measured = tree level diagram using for the coupling constant g = f_1 G



This gives

M_measured = -i(f_1 G)

In the example you gave, M_measured is defined to be -i G so we get f_1 = 1.

ONE-LOOP MATCHING:

M_measured = -i(G + f_2 G^2+...) PLUS L_1 (G+ f_2 G^2 + ...)^2

where L_1 is the result of the one-loop integration with all the vertex factors and propagators etc. It's a divergent quantity that must be regularized.

Notice that we should use only g= G in the one-loop diagram since that coupling is squared and we are working up to order G^2. So we get

M_measured = -i(G + f_2 G^2+...) PLUS L_1 G^2

Since the tree-level imposed the definition given a bit above, you end up with

0 = -i f_2 G^2 + L_1 G^2

so f_2 = -i L_1


and so on to any number of loops. Does that make sense?

Just to add to this.

So now that g is fixed up to one loop, it can be used in any one-loop calculation (on the condition that all the other parameters have been fixed to one loop as well, of course, but to keep things simple, let's ignore mass renormalization and wavefunction renormalization). For example, let's say you want to calculate the amplitude at some other kinematical point than s0, t0 and u0. Let's call this new amplitude M'.

Let's say you wanted to calculate it up to one loop. Then you would calculate

M' = tree-level diagram using g = G + f_2 G² PLUS one-loop diagram using g = G


Now the loop diagram will diverge. But it will not be exactly equal to the value L_1 you had when you were fixing g to one-loop. In fact, the one-loop diagram will be equal to L_1 plus some finite piece. So we get

M'= -i (G -iL_1 G^2) + G^2 (L_1 + finite piece) = -iG + G^2 (finite piece)


and everything is finite and everything is in terms of G which is a measurable quantity (since M_0 is defined as -iG).

So that's the key point: the divergences in your loop and in the bare parameter (through the appearance of L_1) cancel.
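That cancellation can be verified symbolically. A sketch with sympy (one channel only, symbol names mine, following the formulas in this post):

```python
import sympy as sp

G, s, s0, C, Lam = sp.symbols('G s s_0 C Lambda', positive=True)

# One-loop values at the matching point s0 and at the new kinematic point s:
L1     = sp.I * C * sp.log(Lam**2 / s0)
L1_new = sp.I * C * sp.log(Lam**2 / s)

g = G - sp.I * L1 * G**2              # bare coupling fixed by the matching
Mprime = -sp.I * g + L1_new * g**2    # amplitude at the new point, bare g inside

# Keep terms up to order G^2:
Mprime2 = sp.expand(Mprime).subs({G**3: 0, G**4: 0})

# The cutoff Lambda drops out of the order-G^2 piece:
assert sp.simplify(sp.diff(Mprime2, Lam)) == 0
print(sp.simplify(Mprime2))           # -iG plus a finite, cutoff-free G^2 term
```

The divergent log(Λ²) in the bare coupling cancels against the one in the new loop, leaving only the finite log(s0/s)-type piece, as stated above.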

Does that make sense??

PAtrick
 
  • #26
Sunset said:
In Zee we have only the M for Meson-Meson-Scattering (2 Mesons going in, two mesons going out) , so always the same diagrams (tree-level: 4 external lines attached to vertex, 1loop: the 3 diagrams for s,t and u channel)
Ok.
where L_1 = i f_2 = iC[log(cutoff²/s)]

Yes, that makes sense, although I wonder whether it should be C[log(cutoff²/s0)]... but ok
Well, it should be at the kinematic point where you define your amplitude to be -iG. You seemed to be assuming s0, t0, u0, so it should be s0 in the log, indeed. (As for the factors of "i", I admit that I am not being too careful.)
Notice that if you fix the coupling constant to be defined at some other kinematic point (let's say s_1, t_1, u_1), the value that will be measured will usually be different (so the "G" used in the expansion will have a different numerical value). That's the whole story of the "running coupling constant". That's a different issue.
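To spell that out (a hedged sketch in the notation above, keeping only a single s-channel log for brevity): if G_0 denotes the coupling defined at s_0 and G_1 the one defined at s_1, matching both to the same bare g gives, at this order,

[tex] -iG_1 = -ig + iC g^2 log(\frac{\Lambda^2}{s_1}) = -iG_0 + iC G_0^2 log(\frac{s_0}{s_1}) + O(G_0^3) \Rightarrow G_1 = G_0 - C G_0^2 log(\frac{s_0}{s_1}) + O(G_0^3) [/tex]

so the numerical value of the coupling indeed shifts with the matching point, which is the running alluded to here.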

Aside: it does tie in, though, with the whole thing about assigning a specific value to [itex] \mu [/itex] in dim reg, as I discussed in the other post. The idea is that one can go back and use a different matching point in such a way that the *physical* expansion in the coupling constant G converges more rapidly (I am talking about the final expansion after the full renormalization has been carried out and all the divergences and bare parameters have disappeared). It turns out that the relation between the bare and renormalized parameters may be used in dim reg to optimize this expansion. We are getting into a discussion of the renormalization group at this point, but again, it's better to first make sure that basic renormalization is clear before getting into that.

Patrick
 
  • #27
Well, it should be at the kinematic point where you define your amplitude to be -iG. You seemed to be assuming s0,t0,u0 so it should be s0 in the log, indeed
ok


Too bad, I have to leave my office now (in Germany it's 20:30). I need to catch my train; tomorrow I have to work, but on Tuesday I'm back at university. I'll print the whole (really helpful!) thread and go through it at home,

have a nice day, CU
 
  • #28
Sunset said:
ok


Too bad, I have to leave my office now (in Germany it's 20:30). I need to catch my train; tomorrow I have to work, but on Tuesday I'm back at university. I'll print the whole (really helpful!) thread and go through it at home,

have a nice day, CU

Tschuss!

Pat
 
  • #29
Hi Patrick!

Tried to LaTeX it, but the forum doesn't like my code; I will try to edit this (don't know if it's readable)

Let me put things together:

Assumption: g = f_1 G + f_2 G² + O(G³)
Definition: -iG := M(s0,t0,u0), which I write M_{0}


[tex] M_{0} = -ig +iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) = -if_{1}G + O(G^2) \Rightarrow \underline{f_{1}=1} [/tex]

[tex] M_{0} = -ig +iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) [/tex]

[tex] = -if_{1}G - if_{2} G^2 + iC f_{1}^2 G^2 [ log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}}) ] + O(G^3) \Rightarrow \underline{f_{2} = C[log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})]} [/tex]

[tex] \mbox{I use } [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] \equiv [point0] [/tex]

What you call M' is

[tex] M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log( \frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3) = -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3) [/tex]

[tex] = -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'}) ] + O(G^3) [/tex]

I tried out some other definitions (what I meant before with f(G) instead of G); I'll tell you what happened:

1) M_{0} := -iH -5 (I use H to differentiate from G)

Assumption: g = a_{1} H + a_{2} H^2 + O(H³)

[tex] M_{0} = -ig +iC g^2 [point0] + O(g^3) = -i a_{1} H + O(H^2) \Rightarrow a_{1} = 1 - 5i H^{-1} [/tex]

which is not consistent with the assumption g = a_{1} H + a_{2} H^2 + O(H³). Therefore M_{0} := -iH -5 is NOT a possible definition.

2) M_{0} := -iH -5H

[tex] \underline{a_{1} = 1 - 5i} [/tex]

[tex] M_{0} = -ig +iC g^2 [point0] + O(g^3) = -ia_{1} H - ia_{2} H^2 + iC a_{1}^2 H^2 [point0] + O(H^3) [/tex]

[tex] = -iH -5H -i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0] [/tex]

[tex] \Rightarrow 0 = -i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0] \Rightarrow \underline{a_{2} = 6C[point0]} [/tex]

[tex] \Rightarrow M' = -ig + iC g^2 [point'] + O(g^3) = (-i-5)H - 6iC[point0] H^2 + (iC + 10C - 25iC)[point'] H^2 + O(H^3) [/tex]

That would mean M' depends on \Lambda^2, with no way to cancel it out! So the definition is again NOT possible, this time for another reason.

Well, -iG := M(s0,t0,u0), which I write M_{0}, seems to be the only possible choice for your definition.



[tex] M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log( \frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3) = -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3) [/tex]

[tex] = -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'}) ] + O(G^3) \quad (*) [/tex]

tells us the following:

By experiment, you can measure the probability for 2-meson → 2-meson scattering to happen at certain scattering angles, i.e. for different s', t' and u'. You can plot M' as a function of s (or t or u), and the prediction of our renormalised theory is: there exists a constant G for which the curve described by formula (*) fits your data points, where the value of G depends on s0, t0, u0. (Something else might be seen in experiment!) This is not trivial; you may find in experiment that there is no such constant.
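As a toy illustration of this fit (everything here is an assumed example: the value of C, s0, the synthetic "data", and a single-channel form of (*), whose overall log sign depends on conventions), one can check that a single constant G reproduces the amplitude at every kinematic point:

```python
import math

# Synthetic "measurements" of M' at several s, generated from the
# one-parameter formula; a 1-d scan then recovers the single G.
C = 1.0 / (32 * math.pi ** 2)
s0 = 1.0
G_true = 0.5

def amplitude(s, G):
    # toy single-channel form of (*): M'(s) = -iG + i*C*G^2*log(s0/s)
    return -1j * G + 1j * C * G ** 2 * math.log(s0 / s)

data = [(s, amplitude(s, G_true)) for s in (2.0, 4.0, 8.0, 16.0)]

def chi2(G):
    # sum of squared residuals against the synthetic data
    return sum(abs(amplitude(s, G) - m) ** 2 for s, m in data)

# crude least-squares scan over candidate values of G
G_fit = min((k / 1000 for k in range(1, 1001)), key=chi2)
print(G_fit)  # 0.5: one constant fits all kinematic points
```

If no single G minimized the residuals at all points simultaneously, that would be the experimental signal that the theory's prediction fails, which is exactly the non-trivial content of (*).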
 
Last edited:
  • #30
Every time I try to edit, it displays my old entry, and in the preview I don't see the LaTeX result; sorry, I have to post some crap
 
Last edited:
  • #31
[tex] M_{0} = -ig +iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) \newline
= -if_{1}G + O(G^2) \Rightarrow \underline{f_{1}=1} [/tex]
 
  • #32
Sunset said:
Hi Patrick!
Hi Martin!
I am between classes now so I won't be able to reply until later tonight. I will just try to see if I can get your equations to display properly in Tex.


-----------------------------

Tried to LaTeX it, but the forum doesn't like my code; I will try to edit this (don't know if it's readable)

Let me put things together:

Assumption: g = f_1 G + f_2 G² + O(G³)
Definition: -iG := M(s0,t0,u0), which I write M_{0}


[tex] M_{0} = -ig +iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) +log(\frac{\Lambda^2}{u_{0}})] + O(g^3) = -if_{1}G + O(G^2)
\Rightarrow \underline{f_{1}=1} [/tex]

[tex]
M_{0} = -ig +iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) \\
[/tex]

[tex] = -if_{1}G - if_{2} G^2 + iC f_{1}^2 G^2 [ log(\frac{\Lambda^2}{s_{0}})
+log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}}) ] + O(G^3)
\Rightarrow \underline{f_{2} = C[log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})]}
[/tex]


[tex]
\mbox{ I use} [log(\frac{\Lambda^2}{s_{0}}) + log( \frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] \equiv [point0] \\
[/tex]

[tex]
\mbox{what you call M' is } \\
M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log( \frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3) [/tex]

[tex]= -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3) [/tex]

[tex]
= -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'}) ] + O(G^3) [/tex]

I tried out some other definitions (what I meant before with f(G) instead of G), I tell you what happened:

1) M_{0} := -iH -5 (I use H to differentiate from G)

Assumption: g = a_{1} H + a_{2} H^2 + O(H³) \\

[tex]
M_{0} = -ig +iC g^2 [point0] + O(g^3) = -i a_{1} H + O(H^2) \\
\Rightarrow a_{1}=1- 5i H^{-1} [/tex]
which is not consistent with the assumption g = a_{1} H + a_{2} H^2 + O(H³)
Therefore M_{0} := -iH -5 is NOT a possible definition

2) M_{0} := -iH -5H
[tex]\underline{a_{1}=1-5i} \\

M_{0} = -ig +iC g^2 [point0] + O(g^3) = -ia_{1} H - ia_{2} H^2 + iC a_{1}^2 H^2 [point0] + O(H^3) [/tex]

[tex]
=-iH -5H -i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0] \\
\Rightarrow 0=-i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0] \Rightarrow \underline{a_{2}=6C[point0]} \\
[/tex]

[tex]
\Rightarrow M'= -ig + iC g^2 [point'] + O(g^3) = (-i-5)H -6iC[point0] H^2 + (iC + 10C -25iC)[point'] H^2 + O(H^3) [/tex]
That would mean M' depends on \Lambda^2, with no way to cancel it out! So the definition is again NOT possible, this time for another reason.

Well, -iG := M(s0,t0,u0), which I write M_{0}, seems to be the only possible choice for your definition.


[tex]
M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log( \frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3) [/tex]

[tex]= -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3) \\
= -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'}) ] + O(G^3) (*) [/tex]

tells us the following:

By experiment, you can measure the probability for 2-meson → 2-meson scattering to happen at certain scattering angles, i.e. for different s', t' and u'. You can plot M' as a function of s (or t or u), and the prediction of our renormalised theory is: there exists a constant G for which the curve described by formula (*) fits your data points, where the value of G depends on s0, t0, u0. (Something else might be seen in experiment!) This is not trivial; you may find in experiment that there is no such constant.
 
Last edited:
  • #33
Great! Thanks! The \\ doesn't work
 
  • #34
With
[tex]
\mbox{what you call M' is } \\
M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log( \frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3) [/tex]
I meant
[tex]
\mbox{what you call M' is } \\
M' = -ig +iC g^2 [log(\frac{\Lambda^2}{s'}) + log( \frac{\Lambda^2}{t'}) + log(\frac{\Lambda^2}{u'})] + O(g^3) [/tex]
 
  • #35
Sunset said:
Great! Thanks! The \\ doesn't work

Hi Martin...

Just a quick comment.

It seems to me that you are not being consistent with your "H" case.

In the one-loop matching, the one-loop diagram will contain the bare coupling constant squared, so it will contain [itex] (a_1 H + a_2 H^2)^2 \approx a_1^2 H^2 [/itex]. You seem to have simply used H^2 (so you are doing it as if a_1 were one!). Now you see what will happen: this will be [tex] (1- \frac{5i}{H})^2 H^2 = H^2 - 10 i H - 25 [/tex]
So when you will solve for [itex] a_2 [/itex], it will contain a constant piece plus a term in 1/H plus a term in 1/H^2.

If you do this consistently, all the divergences will drop out. But you see that this type of definition is a pain, because all your coefficients a_i will contain expansions in powers of 1/H, and that makes them a pain to work with. This is why defining the amplitude to simply be your coupling constant (as opposed to the coupling constant plus some number) makes things *much* easier!
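A hedged numerical sketch of that claim (a toy single-channel model where the loop at squared energy s is i*C*log(Lambda^2/s); C and all numerical values are assumed, not from the posts): carrying the 1/H pieces of a_1 and a_2 consistently, and truncating at order H^2, the cutoff indeed drops out even for the definition M_0 := -iH - 5:

```python
import math

# Toy check: with a1 = 1 - 5i/H and a2 = C*[point0]*a1^2 used
# consistently, the Lambda dependence cancels at order H^2.
C = 1.0 / (32 * math.pi ** 2)
H = 0.5
s0, s1 = 1.0, 9.0

def amplitude(s, Lambda):
    point0 = math.log(Lambda ** 2 / s0)
    a1 = 1 - 5j / H
    a2 = C * point0 * a1 ** 2  # from 0 = -i*a2 + i*C*a1^2*[point0]
    # truncate at order H^2: tree uses a1*H + a2*H^2, loop uses (a1*H)^2
    tree = -1j * (a1 * H + a2 * H ** 2)
    loop = 1j * C * math.log(Lambda ** 2 / s) * (a1 * H) ** 2
    return tree + loop

# Lambda-independent, and reproduces the definition at the matching point:
a = amplitude(s1, Lambda=1e3)
b = amplitude(s1, Lambda=1e9)
print(abs(a - b))          # ~0
print(amplitude(s0, 1e6))  # ~ -5 - 0.5i, i.e. -iH - 5
```

So the shifted definition is not inconsistent, just awkward: the price is that a1 and a2 carry inverse powers of H, exactly as described above.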

Do you see what I mean?

Patrick
 
