geoduck
In Zee's QFT book he writes an amplitude as:
M(p)=\lambda_0+\Gamma(\Lambda,p,\lambda_0)
He then states that you make a measurement:
M(\mu)=\lambda_0+\Gamma(\Lambda,\mu,\lambda_0) \equiv \lambda_R
and substitute that into M(p) to get:
M(p)=\lambda_R+\left[\Gamma(\Lambda,p,\lambda_0)-\Gamma(\Lambda,\mu,\lambda_0) \right]
which is independent of \Lambda. But isn't this only true if \Gamma diverges logarithmically, depending on the cutoff and momentum only through a ratio like \log(\Lambda^2/p^2)? What if this is not the case?
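To make the question concrete, here is the form I have in mind (my schematic recollection of Zee's one-loop \phi^4 example, up to signs and numerical factors, not a quote):
\Gamma(\Lambda,p,\lambda_0) = -C\lambda_0^2\log\!\left(\frac{\Lambda^2}{p^2}\right)+\ldots
With that form the subtraction gives
\Gamma(\Lambda,p,\lambda_0)-\Gamma(\Lambda,\mu,\lambda_0) = -C\lambda_0^2\log\!\left(\frac{\mu^2}{p^2}\right)+\ldots
so \Lambda drops out explicitly. What I want to know is whether the argument still works when \Gamma does not have this specific form.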
But generally speaking, doesn't the RG equation say that if:
M(p)=\lambda_0+\Gamma(\Lambda,p,\lambda_0)
then it must be true that:
M(p)=\lambda_R+\Gamma(\mu,p,\lambda_0)
Doesn't this force a log dependence, because:
M(p)=\lambda_R+\Gamma(\mu,p,\lambda_0)=\lambda_R+\left[\Gamma(\Lambda,p,\lambda_0)-\Gamma(\Lambda,\mu,\lambda_0)\right]
which gives an equation involving \Gamma, and doesn't that equation force a log dependence of \Gamma?
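Spelled out, the equality above (assuming the two expressions for M(p) really must agree) amounts to the constraint
\Gamma(\mu,p,\lambda_0) = \Gamma(\Lambda,p,\lambda_0)-\Gamma(\Lambda,\mu,\lambda_0),
which a pure log certainly satisfies, since
\log\!\left(\frac{\Lambda^2}{p^2}\right)-\log\!\left(\frac{\Lambda^2}{\mu^2}\right)=\log\!\left(\frac{\mu^2}{p^2}\right).
Is that the sense in which the log form is forced?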
But surely you can have divergences that aren't log!