Is Renormalisation Unique for Physically Measurable Quantities?

  • Context: Graduate
  • Thread starter: Sunset

Discussion Overview

The discussion revolves around the uniqueness of renormalization in quantum field theory, particularly whether the choice of renormalized quantities is unambiguous and how this affects physically measurable quantities. Participants explore various aspects of renormalization, including its dependence on regularization schemes and the implications for theoretical predictions.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants express concern about the uniqueness of renormalized quantities, questioning if the relationship between renormalized and bare quantities is unambiguous.
  • Others point out that different renormalization schemes can lead to different predictions for measurable quantities, suggesting an inherent ambiguity in the process.
  • A participant notes that the choice of regularization can affect the results of a theory, highlighting the complexity of renormalization.
  • Some argue that while predictions may vary with different renormalization schemes, they can converge with sufficient orders in perturbation theory.
  • There is mention of specific renormalization schemes, such as the \bar{MS} scheme, and how they handle divergent amplitudes differently.
  • One participant raises a question about the implications of defining arbitrary functions in the context of renormalized coupling constants, suggesting potential indeterminacy in the results.

Areas of Agreement / Disagreement

Participants generally do not agree on the uniqueness of renormalization, with multiple competing views on the implications of different renormalization schemes and their effects on theoretical predictions. The discussion remains unresolved regarding the clarity and consistency of renormalization in relation to physically measurable quantities.

Contextual Notes

Limitations include the dependence on specific definitions of renormalization schemes and regularization methods, as well as unresolved questions about the nature of divergences and their treatment in various contexts.

  • #31
M_{0} = -ig + iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) = -if_{1}G + O(G^2) \Rightarrow \underline{f_{1}=1}
 
  • #32
Sunset said:
Hi Patrick!
Hi Martin!
I am between classes now so I won't be able to reply until later tonight. I will just try to see if I can get your equations to display properly in Tex.


-----------------------------

I tried to use LaTeX, but the forum doesn't like my code; I will try to edit this (I don't know whether it's readable).

Let me put things together:

Assumption: g = f_1 G + f_2 G^2 + O(G^3)
Definition: -iG := M(s_0, t_0, u_0), which I write as M_{0}


M_{0} = -ig + iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3) = -if_{1}G + O(G^2) \Rightarrow \underline{f_{1}=1}

M_{0} = -ig + iC g^2 [log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(g^3)

= -if_{1}G - if_{2} G^2 + iC f_{1}^2 G^2 [log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] + O(G^3)

\Rightarrow \underline{f_{2} = C[log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})]}


\mbox{I use } [log(\frac{\Lambda^2}{s_{0}}) + log(\frac{\Lambda^2}{t_{0}}) + log(\frac{\Lambda^2}{u_{0}})] \equiv [point0]

\mbox{what you call M' is}
M' = -ig + iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log(\frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3)

= -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3)

= -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'})] + O(G^3)

I tried out some other definitions (what I meant before with f(G) instead of G); I'll tell you what happened:

1) M_{0} := -iH -5 (I use H to differentiate from G)

Assumption: g = a_{1} H + a_{2} H^2 + O(H^3)

M_{0} = -ig + iC g^2 [point0] + O(g^3) = -i a_{1} H + O(H^2) \Rightarrow a_{1} = 1 - 5i H^{-1}
which is not consistent with the assumption g = a_{1} H + a_{2} H^2 + O(H^3).
Therefore M_{0} := -iH - 5 is NOT a possible definition.

2) M_{0} := -iH - 5H
\underline{a_{1}=1-5i}

M_{0} = -ig + iC g^2 [point0] + O(g^3) = -ia_{1} H - ia_{2} H^2 + iC a_{1}^2 H^2 [point0] + O(H^3)

= -iH - 5H - i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0]
\Rightarrow 0 = -i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0] \Rightarrow \underline{a_{2}=6C[point0]}

\Rightarrow M' = -ig + iC g^2 [point'] + O(g^3) =
= (-i-5)H - 6iC[point0] H^2 + (iC + 10C - 25iC)[point'] H^2 + O(H^3)
That would mean M' depends on \Lambda^2, with no way to cancel it out! So this definition again is NOT possible, this time for another reason.

Well, -iG := M(s_0, t_0, u_0), which I write M_{0}, seems to be the only possible choice for your definition.


M' = -ig + iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log(\frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3)

= -iG - iC[point0] G^2 + iC G^2 [point'] + O(G^3)
= -iG + iC G^2 [log(\frac{s_{0}}{s'}) + log(\frac{t_{0}}{t'}) + log(\frac{u_{0}}{u'})] + O(G^3) (*)

tells us the following:

In an experiment, you can measure the probability for 2-meson/2-meson scattering at certain scattering angles, for different s', t' and u'. You can plot M' as a function of s (or t, or u), and the prediction of our renormalised theory (i.e. something else might be seen in experiment!) is: there exists a constant G for which the curve described by formula (*) fits your data points, where the value of G depends on s_0, t_0, u_0! This is not trivial; you may find in experiment that there is no such constant.
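The cancellation of \Lambda claimed here can be checked numerically. The following Python sketch is not from the thread: C, G and the kinematic points are arbitrary stand-in values. It builds M' from g = G + f_2 G^2 with f_2 = C[point0] and verifies that the cutoff dependence survives only at O(G^3):

```python
import math

# Numerical sketch of the one-loop matching. All values are arbitrary.

def point(s, t, u, lam2):
    """[point] = log(Lam^2/s) + log(Lam^2/t) + log(Lam^2/u)."""
    return math.log(lam2 / s) + math.log(lam2 / t) + math.log(lam2 / u)

def M_prime(G, C, ref, kin, lam2):
    """One-loop amplitude at kin = (s', t', u'), with the bare coupling g
    eliminated in favour of G via the condition M(s0, t0, u0) = -iG."""
    f2 = C * point(*ref, lam2)   # f_2 = C*[point0]
    g = G + f2 * G**2            # bare coupling to this order
    return -1j * g + 1j * C * g**2 * point(*kin, lam2)

C, G = 0.1, 1e-4
ref = (1.0, 2.0, 3.0)   # (s0, t0, u0), the renormalisation point
kin = (4.0, 5.0, 6.0)   # (s', t', u'), the measured kinematics

# Two very different cutoffs give the same M' up to an O(G^3) remainder:
m_lo = M_prime(G, C, ref, kin, lam2=1e6)
m_hi = M_prime(G, C, ref, kin, lam2=1e10)
print(abs(m_lo - m_hi))
```

To this order the G^2 coefficient collapses to the \Lambda-independent combination iC G^2 [log(s_0/s') + log(t_0/t') + log(u_0/u')]; the printed difference is only the O(G^3) truncation remainder.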
 
  • #33
Great! Thanks! The \\ doesn't work
 
  • #34
With
\mbox{what you call M' is}
M' = -ig + iC g^2 [log(\frac{\Lambda^2}{s'_{0}}) + log(\frac{\Lambda^2}{t'_{0}}) + log(\frac{\Lambda^2}{u'_{0}})] + O(g^3)
I meant
M' = -ig + iC g^2 [log(\frac{\Lambda^2}{s'}) + log(\frac{\Lambda^2}{t'}) + log(\frac{\Lambda^2}{u'})] + O(g^3)
 
  • #35
Sunset said:
Great! Thanks! The \\ doesn't work

Hi Martin...

Just a quick comment.

It seems to me that you are not being consistent with your "H" case.

In the one-loop matching, the one-loop diagram will contain the bare coupling constant squared, so it will contain (a_1 H + a_2 H^2)^2 \approx a_1^2 H^2. You seem to have simply used H^2 (so you are doing it as if a_1 were one!). Now you see what will happen: this will be (1 - \frac{5i}{H})^2 H^2 = H^2 - 10iH - 25.
So when you solve for a_2, it will contain a constant piece plus a term in 1/H plus a term in 1/H^2.

If you do this consistently, all the divergences will drop out. But you see that this type of definition is a pain, because all your coefficients a_i will contain expansions in powers of 1/H, and that makes it a pain to work with. This is why defining the amplitude to simply be your coupling constant (as opposed to the coupling constant plus some number) makes things *much* easier!

Do you see what I mean?

Patrick
 
  • #36
Hi!
Patrick said:
You seem to have simply used H^2 (so you are doing it as if a_1 was one!).

Yes I made a mistake here:

-ia_{1} H - ia_{2} H^2 + iC a_{1}^2 H^2 [point0] + O(H^3)
= -iH - 5H - i a_{2} H^2 + iC H^2 [point0] + 5iC H^2 [point0]

I didn't use a_1 = 1, but I forgot the "^2": the right side is -i(1-5i)H - ia_2 H^2 + iC(1-5i)^2 H^2 [point0]
\Rightarrow \underline{a_2 = C(1-5i)^2 [point0]}
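This corrected matching can be checked numerically (a sketch; [point0] gets an arbitrary stand-in value, and the sign of a_2 follows from the O(H^2) condition -i a_2 + iC a_1^2 [point0] = 0):

```python
# Verify that with M0 := -iH - 5H, the consistent coefficients
#   a1 = 1 - 5i,   a2 = C * a1^2 * [point0]
# reproduce M0 = -(i + 5)H up to O(H^3). Values below are arbitrary.

point0 = 39.6   # hypothetical stand-in for [point0]
C = 0.1
H = 1e-4

a1 = 1 - 5j
a2 = C * a1**2 * point0                 # from -i*a2 + i*C*a1^2*[point0] = 0
g = a1 * H + a2 * H**2                  # bare coupling to this order

M0 = -1j * g + 1j * C * g**2 * point0   # one-loop expression for M0
residual = M0 - (-(1j + 5) * H)         # should be O(H^3)
print(abs(residual))
```

The residual is far below the size of the individual H^2 terms, so the order-H and order-H^2 pieces really do cancel with this choice of a_1 and a_2.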

Patrick said:
all your coefficients a_i will contain expansions in powers of 1/H

So that contradicts our assumption that you can expand g in powers of H, if the coefficients themselves depend on H!

Best regards
 
  • #37
If you hadn't said the coefficients can depend on H, I would say the assumption g = f_1 G + f_2 G^2 + ... makes the choice M_{0} = -iG unique, because only with this definition can you make M' independent of \Lambda.

Patrick said:
But you see that this type of definition is a pain because all your coefficients a_i will contain expansions in powers of 1/H and that makes it a pain to work with. This is why defining the amplitude to simply be your coupling constant (as opposed to the coupling constant plus some number) makes things *much* easier!
After what you say, I have to assume there exist other possible definitions for H than M_{0} = -iH. Here's the interesting point: does a different definition change my predictions?
Remember what I have in mind with "predictions":
In an experiment, you can measure the probability for 2-meson/2-meson scattering at certain scattering angles, for different s', t' and u'. You can plot M' as a function of s (or t, or u), and the prediction of our renormalised theory (i.e. something else might be seen in experiment!) is: there exists a constant G for which the curve described by formula (*) fits your data points, where the value of G depends on s_0, t_0, u_0! This is not trivial; you may find in experiment that there is no such constant.

I would say NO, because although you have a different expression for M', you will get the same curve. Let's assume, as you said, that you can cancel out \Lambda when defining M_{0} := -iH - 5H. M' is then an expression different from (*), depending on H. This time your prediction is that there exists a constant H for which you can fit your data points. This is exactly the same KIND of information, the same essence as before.
The two expressions describe the same curve; only the fitting process might in some cases be more convenient.

Is it the same principle here? I mean, using a different renormalisation scheme doesn't change the KIND of information, although the way it is packaged looks a bit different...?
Haelfix said:
Otoh the choice of renormalization scheme is more a technical issue in the sense that while it does change the results, it just means one of them is approaching some attractor point better than the other as was pointed out earlier. Sometimes you can actually improve the results to even obtain some nonperturbative sectors of the theory.

This would be very satisfying to me.


Although it would be interesting how g = f_1 G + f_2 G^2 + ... can be motivated further. I mean, we're dealing with infinities, so it is not trivial that such a power expansion exists, right?
We haven't used a particular renormalisation scheme in that case, right? Or does the scheme correspond to the assumption g = f_1 G + f_2 G^2 + ...?
 
  • #38
Sunset said:
using a different renormalisation scheme doesn't change the KIND of information, although the way it is packaged looks a bit different...?

I came to the opinion that this should be true:

To be more precise, let's take the MS scheme applied to \phi^4 theory (I read it in Ryder):

He uses dimensional regularisation (it's clear to me why regularisation doesn't change the predictions, I'm OK with that). He regularises and renormalises only the 2-point vertex function and the 4-point vertex function, I guess because every possible M is constructible from these two. Renormalisation of the 4-point function is in principle the same as what we discussed so far (renormalisation of the coupling constant).

The regularised self-energy is
\Sigma = -\frac{g m^2}{16 \pi^2 \epsilon} - \frac{g m^2}{32 \pi^2} \left(1 - 0.577 + ln\frac{4 \pi \mu^2}{m^2}\right) + O(\epsilon) + O(g^2)
(0.577 is the Euler constant \gamma), and
\Gamma^{(2)} = p^2 - m^2 - \Sigma
He drops all the finite terms (MS scheme), so he gets
\Gamma^{(2)} = p^2 - m^2 \left[1 - \frac{g}{16 \pi^2 \epsilon}\right]

He defines the renormalised mass M by
\Gamma^{(2)} = p^2 - M^2

So he gets
m^2 = M^2 \left[1 + \frac{g}{16 \pi^2 \epsilon}\right]

Dropping the O(\epsilon) terms is OK, because you let \epsilon go to zero. \mu is arbitrary, because in the regularised Lagrangian it appears as \mu^\epsilon, so it can be chosen as \mu^2 = m^2/(4\pi); then the only finite term (remember: at order g) which contributes is
-\frac{g m^2}{32 \pi^2} (1 - 0.577)
So if you compute M for a certain process, you end up with a different expression (as in the case discussed before). So if you fit your curve to the measured values of M and the measured mass M (the mass is NOT a parameter of your theory; it is measured with well-known apparatus which has never heard of quantum field theory, so you don't have to fit it like g!), you again receive different values for g, but it's exactly the same kind of prediction as discussed above.
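The \mu-choice remark can be illustrated in a few lines of Python (a sketch; the values of g and m^2 are arbitrary): with \mu^2 = m^2/(4\pi) the logarithm vanishes exactly, leaving only the (1 - 0.577) piece at order g.

```python
import math

# Finite O(g) part of the regularised self-energy quoted above
# (0.577 approximates the Euler constant gamma). Values are arbitrary.
def finite_sigma(g, m2, mu2):
    return -(g * m2) / (32 * math.pi**2) * (1 - 0.577 + math.log(4 * math.pi * mu2 / m2))

g, m2 = 0.1, 1.0
mu2 = m2 / (4 * math.pi)   # the choice that kills the log term

print(finite_sigma(g, m2, mu2))
```

With this choice the log argument is exactly 1, so the printed value equals -(g m^2 / 32\pi^2)(1 - 0.577).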

Best regards Martin
 
