Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

latentcorpse
If \{ \theta_i \} are a set of Grassmann numbers then what is

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)

I know that \frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}. We need this to be the case because if we set j=k then the LHS becomes the derivative of \theta_j^2=0 and so we need the RHS to vanish as well (hence the minus sign!).

However, now that there are three variables present, I am confused as to what should pick up a minus sign upon differentiation and what should not?

Thanks.
 
The minus sign in

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}

can also be derived from noting that \partial/\partial \theta_i is Grassmann, so

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k - \theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).

This method applies directly to the product of three Grassmann numbers.
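This graded Leibniz rule is easy to check mechanically. Below is a minimal sketch (the tuple representation of monomials and the helper name `dtheta` are mine, not from the thread): the left derivative anticommutes \theta_i to the front, picking up one sign per transposition, then strips it.

```python
def dtheta(i, mono):
    """Left derivative d/d(theta_i) of the monomial theta_{mono[0]}...theta_{mono[-1]}.

    Anticommute theta_i to the front (one minus sign per transposition),
    then strip it.  Returns (sign, remaining monomial); (0, ()) if theta_i
    is absent, since the derivative of a constant in theta_i vanishes.
    """
    if i not in mono:
        return 0, ()
    pos = mono.index(i)
    return (-1) ** pos, mono[:pos] + mono[pos + 1:]

# d/dtheta_i (theta_j theta_k theta_l) for (j, k, l) = (1, 2, 3):
assert dtheta(1, (1, 2, 3)) == (1, (2, 3))    # + theta_k theta_l
assert dtheta(2, (1, 2, 3)) == (-1, (1, 3))   # - theta_j theta_l
assert dtheta(3, (1, 2, 3)) == (1, (1, 2))    # + theta_j theta_k
```

So the three-variable answer is \frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l) = \delta_{ij} \theta_k \theta_l - \delta_{ik} \theta_j \theta_l + \delta_{il} \theta_j \theta_k, with alternating signs from moving the derivative past each factor.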
 
fzero said:
The minus sign in

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}

can also be derived from noting that \partial/\partial \theta_i is Grassmann, so

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k - \theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).

This method applies directly to the product of three Grassmann numbers.

Great. I do have a question about Grassmann integration though.

\int d^n \theta f(\theta_1, \dots \theta_n) = \int d \theta_n d \theta_{n-1} \dots d \theta_1 f(\theta_1, \dots , \theta_n)
where f(\theta_1 , \dots , \theta_n) = a + a_i \theta_i + \dots + \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}

So I can see that only the last term will survive because all the preceding terms are constant with respect to \theta_n and since the integral of a constant with respect to a Grassmann variable is zero, they will all die. Therefore this simplifies to

\int d^n \theta \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}

My notes then say that \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n} where \epsilon_{i_1 \dots i_n} is the antisymmetric tensor that equals 1 if the indices are ordered in an even permutation, -1 if they are ordered in an odd permutation and 0 if any two indices are the same.

Apparently this then means that \int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?


Given this definition, does that mean that say \int d^3 \theta \theta_1 \theta_3 \theta_2 =-1 as \epsilon_{132}=-1 i.e. we would have the last two variables to do this integral and that would pick up a minus sign?

Thanks.
 
latentcorpse said:
Apparently this then means that \int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?

The a_i are odd variables. Just expand \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}, count the number of terms and apply permutations to write each one as a_{1} \cdots a_{n}.

Given this definition, does that mean that say \int d^3 \theta \theta_1 \theta_3 \theta_2 =-1 as \epsilon_{132}=-1 i.e. we would have the last two variables to do this integral and that would pick up a minus sign?

Yes. You should be able to derive

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}

from the basic formula

\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).
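The derivation fzero suggests can be automated. Here is a sketch in the thread's convention d^n \theta = d \theta_n \cdots d \theta_1, with the innermost integral done first (the representation and the helper names `berezin` and `eps` are mine, not notation from the thread):

```python
from itertools import permutations

def berezin(idx):
    """Integral d theta_n ... d theta_1 of theta_{i_1} ... theta_{i_n}.

    The innermost measure d theta_1 sits next to the integrand, so integrate
    theta_1 first: anticommute it to the front (one sign per transposition),
    strip it, then repeat for theta_2, theta_3, ...
    """
    seq, sign = list(idx), 1
    for k in range(1, len(idx) + 1):
        if k not in seq:
            return 0          # integral of a constant in theta_k vanishes
        pos = seq.index(k)
        sign *= (-1) ** pos
        seq.pop(pos)
    return sign

def eps(idx):
    """Totally antisymmetric symbol epsilon_{i_1 ... i_n}."""
    if sorted(idx) != list(range(1, len(idx) + 1)):
        return 0
    inv = sum(idx[a] > idx[b] for a in range(len(idx)) for b in range(a + 1, len(idx)))
    return (-1) ** inv

assert berezin((1, 3, 2)) == -1      # the integral asked about above
assert all(berezin(p) == eps(p) for p in permutations(range(1, 5)))
```

The sign produced by shuffling each \theta_k next to its matching d\theta_k is exactly the inversion count of the index permutation, which is why the answer is \epsilon_{i_1 \dots i_n}.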
 
fzero said:
The a_i are odd variables. Just expand \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}, count the number of terms and apply permutations to write each one as a_{1} \cdots a_{n}.

Got it.
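The counting in fzero's reply can also be checked by machine for n = 3 (a sketch; the helper name `sort_sign` is mine): each permutation contributes \epsilon_\pi times the sign of reordering the odd a's, and the two signs always cancel, leaving n! identical terms.

```python
from itertools import permutations
from math import factorial

def sort_sign(seq):
    """Sign picked up when sorting a repetition-free sequence of odd variables."""
    inv = sum(seq[a] > seq[b] for a in range(len(seq)) for b in range(a + 1, len(seq)))
    return (-1) ** inv

n = 3
coeff = 0
for p in permutations(range(1, n + 1)):
    eps_val = sort_sign(p)    # value of the epsilon symbol on this index choice
    reorder = sort_sign(p)    # sign from a_{p1}...a_{pn} -> a_1...a_n
    coeff += eps_val * reorder
assert coeff == factorial(n)
# so (1/n!) epsilon_{i_1...i_n} a_{i_1}...a_{i_n} = a_1 ... a_n
```

The n! from the sum over permutations cancels the explicit 1/n!, which is where the epsilon and the factorial both go.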

fzero said:
Yes. You should be able to derive

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}

from the basic formula

\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).
This I don't know how to do... And also, when it comes to complex integration, we have been given \theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - i \theta_2}{\sqrt{2}}

We are then told that d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i

However, I find that d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}
Obviously the \theta^2 terms vanish and so I find
d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2 which is out by a minus sign?

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder, i.e. why is \int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1?

And finally, why do we have \int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)?
Shouldn't we have additional higher order terms from the expansion of the exponential?
 
latentcorpse said:
This I don't know how to do...

I actually have a mistake, it should be

\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).

The proof of that formula just amounts to counting signs when you permute to express

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.

And also, when it comes to complex integration, we have been given \theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - \theta_2}{\sqrt{2}}

We are then told that d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i

However, I find that d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}
Obviously the \theta^2 terms vanish and so I find
d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2 which is out by a minus sign?

The minus sign looks correct. That sign cancels out in any nonzero integral, such as

\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder i.e. why is \int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1?

The point is that the b-dependent terms in the integrand vanish, so that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.

The 1/b is put in later to compare with the integral without \theta \bar{\theta} inserted.

latentcorpse said:
And finally, why do we have \int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)?
Shouldn't we have additional higher order terms from the expansion of the exponential?

The higher order terms vanish as \theta^2=0.
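That the b-dependent term dies because it repeats \theta can be seen with a two-line monomial product (the index-tuple representation and the helper name `gmul` are my own sketch):

```python
def gmul(a, b):
    """Product of Grassmann monomials given as index tuples: zero if any
    generator repeats, otherwise the sorted tuple with the permutation sign."""
    seq = list(a + b)
    if len(seq) != len(set(seq)):
        return 0, ()
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv, tuple(sorted(seq))

THETA, THETABAR = 0, 1
# theta thetabar times (b theta thetabar) repeats theta, so the b term in
# theta thetabar e^{-thetabar b theta} vanishes and the integrand is just theta thetabar.
assert gmul((THETA, THETABAR), (THETA, THETABAR)) == (0, ())
```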
 
fzero said:
I actually have a mistake, it should be

\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).

The proof of that formula just amounts to counting signs when you permute to express
My notes and, by the looks of things, P&S define it the way you had it originally, i.e. that \int d \theta_i \theta_i =1 (since the d \theta_i is Grassmann as well, we should get a minus sign if we go from that to the way you have it above). Are there different conventions at play here?

As for trying to prove the identity, I don't get it...
We know \int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1} so surely, \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1
Where does the permuting come into play?

fzero said:
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.



The minus sign looks correct. That sign cancels out in any nonzero integral, such as

\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.
Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have \int d \bar{\theta} d \theta \theta \bar{\theta}=1 which is different to what we get if we assume that minus sign is right?


fzero said:
The point is that the b-dependent terms in the integrand vanish, so that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.

The 1/b is put in later to compare with the integral without \theta \bar{\theta} inserted.
Ok. But how do you actually do that integral? I find that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )

The first term gives us a b and the second one seems to give b \theta \bar{\theta}

And the last thing we do on Grassmann integrals is that I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}

and we can complete the square to show

I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried \tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j

Thanks a lot again!
 
latentcorpse said:
My notes and, by the looks of things P&S, define it the way you had it originally i.e. that \int d \theta_i \theta_i =1 (since the d \theta_i is Grassmann as well, we should get a minus sign if we got to the way you have it above. Are there different conventions at play here?

Every minus sign is convention-dependent. For instance, your d^n\theta = d\theta_n\cdots d\theta_1 is another convention.

As for trying to prove the identity, I don't get it...
We know \int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1} so surely, \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1
Where does the permuting come into play?

Because d^n\theta = d\theta_n\cdots d\theta_1 by your convention, so

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n}

depends on how we rearrange to put the appropriate \theta with the matching d\theta.

Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have \int d \bar{\theta} d \theta \theta \bar{\theta}=1 which is different to what we get if we assume that minus sign is right?

No, the sign is common to d\theta d \bar{\theta} and \theta \bar{\theta} so it cancels in the product.

Ok. But how do you actually do that integral, I find that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )
The first term gives us a b and the second one seems to give b \theta \bar{\theta}

The 2nd term vanishes because it's proportional to \theta^2 \bar{\theta}^2 and
\theta^2 = \bar{\theta}^2=0 since they are odd variables.

And the last thing we do on Grassmann integrals is that I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}
and we can complete the square to show
I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried \tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j

You want (in matrix form)

\theta = S \tilde{\theta} + B^{-1} \eta,

where S diagonalizes B, so S^{-1} B S = I.
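For N = 1 the claimed result can be checked by brute force. The sketch below is entirely my own scaffolding (the dict-of-monomials representation, the helper names, and the sample value b = 2.5 are not from the thread): it expands e^{-\bar{\theta} b \theta + \bar{\eta}\theta + \bar{\theta}\eta} in a four-generator Grassmann algebra (\theta, \bar{\theta}, \eta, \bar{\eta}), does the Berezin integral over \theta, \bar{\theta} in the convention \int d\bar{\theta} d\theta\, \theta\bar{\theta} = 1, and compares with \det B \cdot e^{\bar{\eta} B^{-1} \eta} = b(1 + \bar{\eta}\eta/b).

```python
from itertools import product

# Generators: 0 = theta, 1 = thetabar, 2 = eta, 3 = etabar.
# An algebra element is a dict {sorted index tuple: coefficient}.

def gmul(a, b):
    seq = list(a + b)
    if len(seq) != len(set(seq)):
        return 0, ()
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv, tuple(sorted(seq))

def mul(x, y):
    out = {}
    for (ma, ca), (mb, cb) in product(x.items(), y.items()):
        s, m = gmul(ma, mb)
        if s:
            out[m] = out.get(m, 0.0) + s * ca * cb
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def add(x, y):
    out = dict(x)
    for m, c in y.items():
        out[m] = out.get(m, 0.0) + c
    return {m: c for m, c in out.items() if abs(c) > 1e-12}

def gexp(x):
    # the exponent is even and built from 4 generators, so x^3 = 0 exactly
    return add(add({(): 1.0}, x), {m: 0.5 * c for m, c in mul(x, x).items()})

def int_thetas(x):
    """Berezin integral over thetabar, theta with convention
    int dthetabar dtheta theta thetabar = 1: keep monomials containing
    generators 0 and 1 and strip them off the (sorted) front."""
    out = {}
    for m, c in x.items():
        if 0 in m and 1 in m:      # m is sorted, so m = (0, 1, rest...)
            out[m[2:]] = out.get(m[2:], 0.0) + c
    return out

b = 2.5
# exponent -thetabar b theta + etabar theta + thetabar eta in canonical form:
# -b g1 g0 = +b (0,1),   g3 g0 = -(0,3),   g1 g2 = +(1,2)
X = {(0, 1): b, (0, 3): -1.0, (1, 2): 1.0}

result = int_thetas(gexp(X))
# det B * e^{etabar B^{-1} eta} = b (1 + etabar eta / b) = b + etabar eta
expected = add({(): b}, mul({(3,): 1.0}, {(2,): 1.0}))
assert result.keys() == expected.keys()
assert all(abs(result[m] - expected[m]) < 1e-9 for m in expected)
```

The only cross-terms that survive the Berezin integral are the ones pairing \bar{\eta}\theta with \bar{\theta}\eta, and they assemble into exactly \bar{\eta}\eta, which is the completing-the-square statement for N = 1.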
 
fzero said:
Every minus sign is convention-dependent. For instance, your d^n\theta = d\theta_n\cdots d\theta_1 is another convention.



Because d^n\theta = d\theta_n\cdots d\theta_1 by your convention, so

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n}

depends on how we rearrange to put the appropriate \theta with the matching d\theta.
So if you were asked to show this in an exam, what would you do? As far as I can tell, we just get it to \int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N} and then just describe in words that it's to do with making permutations? Is there any maths we can do?

fzero said:
No, the sign is common to d\theta d \bar{\theta} and \theta \bar{\theta} so it cancels in the product.
Yeah but there is some inconsistency here. We claim that if that -i was correct then
\int d \bar{\theta} d \theta \bar{\theta} \theta =1 but both my notes and P&S say \int d \bar{\theta} d \theta \theta \bar{\theta} =1


fzero said:
You want (in matrix form)

\theta = S \tilde{\theta} + B^{-1} \eta,

where S diagonalizes B, so S^{-1} B S = I.

I've been playing about with this but can't get it. How are we going to get an S^{-1} out of anything? Also, how on Earth did you know to take that to complete the square?
 
  • #10
latentcorpse said:
So if you were asked to show this is an exam, what would you do? As far as I can tell, we just get it to \int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N} and then just describe in words that it's to do with making permutations? Is there any maths we can do?

You could also show that \theta_{i_1} \cdots \theta_{i_N} = \epsilon_{i_1\cdots i_N} \theta_1\cdots \theta_N. It's obvious by considering various cases, but maybe there's a more elegant proof.

Yeah but there is some inconsistency here. We claim that if that -i was correct then
\int d \bar{\theta} d \theta \bar{\theta} \theta =1 but both my notes and P&S say \int d \bar{\theta} d \theta \theta \bar{\theta} =1

I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from i^2.


I've been playing about with this but can't get it. How are we going to get an S^{-1} out of anything? Also, how on Earth did you know to take that to complete the square?

S is Hermitian and when you consider the \bar{\theta} expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose \bar{\theta}, so S^\dagger=S^{-1} appears in the correct places. The \eta term comes from comparing the cross-terms.
 
  • #11
fzero said:
I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from i^2.
Ok. So we agreed that d \theta d \bar{\theta} = -i d \theta_1 d \theta_2

And if you work out \theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2
\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1

Therefore \int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in d \theta d \bar{\theta} = -i d \theta_1 d \theta_2?



fzero said:
S is Hermitian and when you consider the \bar{\theta} expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose \bar{\theta}, so S^\dagger=S^{-1} appears in the correct places. The \eta term comes from comparing the cross-terms.

Ok. So the exponent looks like - \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta
Under the transformation \theta \rightarrow S \tilde{\theta} + B^{-1} \eta we get
- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta
Is this going in the right direction?

Cheers.
 
  • #12
latentcorpse said:
Ok. So we agreed that d \theta d \bar{\theta} = -i d \theta_1 d \theta_2

And if you work out \theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2
\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1

Therefore \int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in d \theta d \bar{\theta} = -i d \theta_1 d \theta_2?

Probably. I couldn't say for certain, since there could be some convention that's different.


Ok. So the exponent looks like - \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta
Under the transformation \theta \rightarrow S \tilde{\theta} + B^{-1} \eta we get
- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta
Is this going in the right direction?

Note: I accidentally called S Hermitian when I meant unitary. I gave the correct property as an equation, but wanted to correct that.

If you're going to use the matrix notation (and I think it's easier to do so) you have to keep track of transposes. So \bar{\theta} is Hermitian conjugate in this notation, not just complex conjugate, so

\bar{\theta} = \bar{\tilde{\theta}} S^\dagger + \bar{\eta} B^{-1}.

I derived this by canceling cross-terms, but it could also be computed directly by using the Hermiticity of B.
 
  • #13
fzero said:
Probably. I couldn't say for certain, since there could be some convention that's different.

I actually disagree again!

I find d \theta d \bar{\theta}= - i d \theta_1 d \theta_2

and \bar{\theta} \theta = -i \theta_2 \theta_1

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!
And also, could you confirm whether \gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} where \gamma^\mu are the matrices of the Clifford algebra?

Thanks.
 
  • #14
latentcorpse said:
I actually disagree again!

I find d \theta d \bar{\theta}= - i d \theta_1 d \theta_2

and \bar{\theta} \theta = -i \theta_2 \theta_1

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!

\theta \bar{\theta} and d \theta d \bar{\theta} have the same structure so your previous calculation was correct.

And also, could you confirm whether \gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} where \gamma^\mu are the matrices of the Clifford algebra?

Thanks.

That would imply that [\gamma^\mu, \gamma^\nu]=0, so no.
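Concretely, \frac{1}{2}\{\gamma^\mu,\gamma^\nu\} = \eta^{\mu\nu} I is only the symmetric part of \gamma^\mu\gamma^\nu; the antisymmetric commutator part survives. A quick numeric check in the Dirac representation (the matrix construction here is a standard one, written out as my own sketch):

```python
import numpy as np

# Dirac representation of the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} I,
# with metric eta = diag(+1, -1, -1, -1).
I2 = np.eye(2)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# (1/2){gamma^mu, gamma^nu} = eta^{mu nu} I: the symmetric part only
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# but gamma^0 gamma^1 is nonzero while (1/2){gamma^0, gamma^1} = 0,
# so gamma^mu gamma^nu != (1/2){gamma^mu, gamma^nu} in general
assert not np.allclose(gamma[0] @ gamma[1], np.zeros((4, 4)))
```

The correct decomposition is \gamma^\mu \gamma^\nu = \eta^{\mu\nu} + \frac{1}{2}[\gamma^\mu, \gamma^\nu].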
 
  • #15
fzero said:
\theta \bar{\theta} and d \theta d \bar{\theta} have the same structure so your previous calculation was correct.

Explicitly:

d\theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2

and \bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2

so \int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times (-1) = i^2 =-1

I simply cannot spot the mistake!

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

And take for example a simple loop integral, in Euclidean space feynman rules this gives
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}
Where does that proportionality come from?

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?
And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this \phi^4 theory. Why on Earth is that?
 
  • #16
latentcorpse said:
Explicitly:

d\theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2

and \bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2

so \int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times (-1) = i^2 =-1

I simply cannot spot the mistake!

Your calculation is correct. P&S claim that

\int d \bar{\theta} d \theta \theta \bar{\theta} =1.

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

It makes path integrals more convergent, since the weight is now e^{-S/\hbar}. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.

And take for example a simple loop integral, in Euclidean space feynman rules this gives
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}
Where does that proportionality come from?

The RHS is the result of choosing spherical coordinates in momentum space.

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?

We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

\int d^4k \frac{1}{k^4}.

And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

Yes.

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this \phi^4 theory. Why on Earth is that?

You can use the \phi\rightarrow -\phi symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
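The \phi \rightarrow -\phi argument can be illustrated in a zero-dimensional toy "path integral", where the correlation functions are ordinary moments of a \phi^4-weighted measure. This is my own illustration (the coupling \lambda = 0.3 and the cutoff L are arbitrary sample values): the odd moments vanish by symmetry while the even ones do not.

```python
import math

def moment(n, lam=0.3, L=8.0, steps=4000):
    """Toy 0-dim analogue of <phi^n>: integral of phi^n e^{-phi^2/2 - lam phi^4/4!}
    over [-L, L] by the midpoint rule (lam and L are illustrative choices)."""
    h = 2 * L / steps
    total = 0.0
    for i in range(steps):
        phi = -L + (i + 0.5) * h
        total += phi ** n * math.exp(-phi * phi / 2 - lam * phi ** 4 / 24)
    return total * h

# phi -> -phi symmetry: every odd moment vanishes, even ones do not
assert abs(moment(1)) < 1e-8
assert abs(moment(3)) < 1e-8
assert moment(2) > 0
```

The same symmetry argument, applied to the full functional integral, forces \langle \phi(x_1)\cdots\phi(x_{2k+1})\rangle = 0, which is the statement about diagrams with an odd number of external legs.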
 
  • #17
fzero said:
Your calculation is correct. P&S claim that

\int d \bar{\theta} d \theta \theta \bar{\theta} =1.
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)
d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2
And \theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1
So \int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1
fzero said:
It makes path integrals more convergent, since the weight is now e^{-S/\hbar}. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.
So previously our path integrals had weight e^{iS/\hbar}, right? How does the weight change to what you said? And why does that new weight make it more convergent?

fzero said:
The RHS is the result of choosing spherical coordinates in momentum space.
Well shouldn't there also be a \int k^{d-1} piece? Why can that be dropped from the proportionality?

fzero said:
We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

\int d^4k \frac{1}{k^4}.
How do you know those are the simplest diagrams? Experience?
And how does that integral give a log? I tried substituting u=k^4 but that doesn't help?
fzero said:
You can use the \phi\rightarrow -\phi symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship \Gamma(n+1)=n! for n \in \mathbb{Z}^+ which is fair enough. Then he goes through the procedure of using \Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1) to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for \alpha near 0, we have \Gamma(\alpha) \sim \frac{1}{\alpha} and since \Gamma(n+1)=n!, wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

Thanks again!
 
  • #18
latentcorpse said:
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)
d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2
And \theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1
So \int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1

It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on i, this introduces an extra minus sign, which is what we need to get P&S's result.

So previously our path integrals had weight e^{iS/\hbar}, right? How does the weight change to what you said? And why does that new weight make it more convergent?

Just consider a quadratic action. One integral is oscillatory and the other is a Gaussian.

Well shouldn't there also be a \int k^{d-1} piece? Why can that be dropped from the proportionality?

The k^{d-1} factor is there.

How do you know those are the simplest diagrams? Experience?

You can just try to draw diagrams that are one-particle irreducible.

And how does that integral give a log? I tried substituting u=k^4 but that doesn't help?

\int \frac{d^4k}{k^4} \sim \int \frac{dk}{k}
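The logarithmic behaviour can be seen numerically: with an IR cutoff \mu and UV cutoff \Lambda, the radial integral \int_\mu^\Lambda k^3\,dk/k^4 = \ln(\Lambda/\mu) grows by a fixed amount per decade. A quick sketch (cutoff values are illustrative choices of mine):

```python
import math

def loop4d(cutoff, mu=1.0, steps=400000):
    """Radial part of the 4d loop integral between cutoffs:
    integral from mu to cutoff of k^3 dk / k^4 = dk / k, midpoint rule."""
    h = (cutoff - mu) / steps
    return sum(h / (mu + (i + 0.5) * h) for i in range(steps))

# ln(e^4) = 4, and doubling the number of e-foldings doubles the answer:
assert abs(loop4d(math.e ** 4) - 4.0) < 1e-4
assert abs(loop4d(math.e ** 8) - 2 * loop4d(math.e ** 4)) < 1e-3
```

No power of the cutoff appears, only \ln \Lambda, which is what "log divergent" means here.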

I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

You can show for example that

\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship \Gamma(n+1)=n! for n \in \mathbb{Z}^+ which is fair enough. Then he goes through the procedure of using \Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1) to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for \alpha near 0, we have \Gamma(\alpha) \sim \frac{1}{\alpha} and since \Gamma(n+1)=n!, wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

The Gamma function has simple poles at \alpha = 0,-1,-2,\ldots. However 0!=\Gamma(1) = 1.
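These statements are easy to check against Python's standard-library `math.gamma`, which implements the same analytic continuation (and raises `ValueError` at the poles):

```python
import math

# 0! = Gamma(1) = 1: the pole is at Gamma(0), not at Gamma(1)
assert math.gamma(1) == 1.0
assert math.isclose(math.gamma(5), math.factorial(4))

# near alpha = 0, Gamma(alpha) ~ 1/alpha (simple pole)
assert math.isclose(math.gamma(1e-6), 1e6, rel_tol=1e-3)

# negative non-integers are fine under the continuation, e.g. Gamma(-1/2) = -2 sqrt(pi)
assert math.isclose(math.gamma(-0.5), -2 * math.sqrt(math.pi))

# ...but (-2)! = Gamma(-1) sits on a pole: there is no finite value to assign
try:
    math.gamma(-1)
    raise AssertionError("expected a pole")
except ValueError:
    pass
```

So (-2)!, read as \Gamma(-1), has no finite value: the continuation gives every non-integer a value but leaves simple poles at 0, -1, -2, \ldots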
 
  • #19
fzero said:
It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on i, this introduces an extra minus sign, which is what we need to get P&S's result.
I don't follow what you're on about with reciprocal powers etc?
fzero said:
The k^{d-1} factor is there.
Why is it treated as a constant though? Is it because if we break up \int d^dk = \int k^{d-1} dk \int d^{d-1}k, the \int d^{d-1}k measure has no k dependence?

fzero said:
You can just try to draw diagrams that are one-particle irreducible.
Why one particle irreducible? Do we always just work with these?
As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.
When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!
fzero said:
You can show for example that

\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle

How though? I am not sure how to expand \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle in this case?

fzero said:
The Gamma function has simple poles at \alpha = 0,-1,-2,\ldots. However 0!=\Gamma(1) = 1.
Yes because \Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha} and so \Gamma(0)=\Gamma(1) / 1 = 1/1 = 1 which agrees with 0!=1
Although, is it true to say that there is still a simple pole at \Gamma(0), since if we take \alpha near 0 then \Gamma(\alpha) \sim \frac{1}{\alpha} (because \Gamma(1)=1), right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, (-2)!?

Thanks.
 
Last edited:
  • #20
latentcorpse said:
I don't follow what you're on about with reciprocal powers etc?

You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.

Why is it treated as a constant though? Is it because if we break up \int d^dk = \int k^{d-1} dk \int d^{d-1}k, the \int d^{d-1}k measure has no k dependence?

No, the k-dependence is there. Specifically

<br /> \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2}= \int \frac{d\Omega_{d-1}}{(2 \pi)^d} \int \frac{dk~ k^{d-1}}{k^2+m^2}= \frac{V_{d-1}}{ (2 \pi)^d} \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2},<br />

where V_{d-1} is the volume of the (d-1)-sphere. The proportionality constant is just the ratio appearing in front of the last expression.

Why one particle irreducible? Do we always just work with these?

The diagrams corresponding to the expansion of what's usually called the generating function (usually written as W[J]) are connected and include propagators on the external legs. What's usually called the effective action (\Gamma[\phi]) involves the one-particle irreducible diagrams with no propagators on the external legs.

As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.

That sounds like you're using cubic instead of quartic vertices.

When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!

The diagram I'm talking about is

[attached diagram: fish.jpg]


How though? I am not sure how to expand \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle in this case?

All we've done is applied the \phi\rightarrow -\phi symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.

Yes because \Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha} and so \Gamma(0)=\Gamma(1) / 1 = 1/1 = 1 which agrees with 0!=1
Although, is it true to say that there is still a simple pole at \Gamma(0), since if we take \alpha near 0 then \Gamma(\alpha) \sim \frac{1}{\alpha} (because \Gamma(1)=1), right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, (-2)!?

You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
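A quick numerical illustration of these statements, as a minimal sketch using Python's standard-library math.gamma (which raises ValueError exactly at the poles):

```python
import math

# 0! corresponds to Gamma(1), which is perfectly finite:
print(math.gamma(1.0))           # 1.0, i.e. 0! = 1

# Gamma has a simple pole at alpha = 0: alpha * Gamma(alpha) -> Gamma(1) = 1,
# which is the "Gamma(alpha) ~ 1/alpha" behaviour discussed above.
for a in (1e-2, 1e-4, 1e-6):
    print(a, a * math.gamma(a))  # tends to 1 as a -> 0

# The nonpositive integers are genuine poles, so no value can be assigned:
try:
    math.gamma(0.0)
except ValueError:
    print("Gamma(0) is a pole")

# Away from the poles the continuation is fine, e.g. (-1.5)! = Gamma(-0.5):
print(math.gamma(-0.5))          # -2*sqrt(pi), about -3.5449
```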
 

Attachments

  • fish.jpg
  • #21
fzero said:
You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.
So what are our J_{i_1j_1} in this case?
fzero said:
The diagrams corresponding to the expansion of what's usually called the generating function (usually written as W[J]) are connected and include propagators on the external legs. What's usually called the effective action (\Gamma[\phi]) involves the one-particle irreducible diagrams with no propagators on the external legs.
Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

fzero said:
The diagram I'm talking about is

[attached diagram: fish.jpg]
Ok so given this graph, if we take k as the loop momentum then the Euclidean-space Feynman rules tell us that we get
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}
If we regularise by imposing a high-momentum UV cutoff k \leq \Lambda,
then we obtain a result \sim \log{\Lambda}, as on p324 of P&S.
Is this correct?

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like \sim \log(k) next to it, but doesn't actually say what it is that goes like \log(k). It's not a Green's function or anything like that. What actually is it?

fzero said:
All we've done is applied the \phi\rightarrow -\phi symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.
Do you just mean that you took \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle and made the transformation \phi \rightarrow - \phi, which gives 3 minus signs (one for each \phi), and since (-1)^3=-1 we get an overall minus sign that we can pull out, showing that the 3-point Green's function is equal to minus itself and therefore must vanish?

fzero said:
You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
Ok. So you're telling me that \Gamma is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that \Gamma(0)=1 to be consistent with 0!=1

So if you were asked to find (-1.5)!, could you just compute \Gamma(-0.5) using its integral definition \Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}\,dx?

Lastly, my notes discuss three methods of regularisation: (i) a UV cut-off, which I think I understand (assuming you agree with my calculation of \log(\Lambda) above);
(ii) using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing);
(iii) dimensional regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4, which depends on the dimension d.
Usually this is just defined for d \in \mathbb{Z}^+ but we can regulate by analytic continuation to d \in \mathbb{C}.
He has included an example afterwards and I can follow the maths in it, but I don't really understand what we are doing. How do we analytically continue to complex d, and what's the point of doing so?
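As a sanity check of that power-counting formula, here is a minimal sketch in Python (the function name and the dict encoding of the vertex content are my own, not from the notes):

```python
def superficial_degree(d, L, V, E):
    """D = (d-4)L + sum_n (n-4) V_n - E + 4, where L is the number of loops,
    V = {n: number of n-point vertices}, and E the number of external legs."""
    return (d - 4) * L + sum((n - 4) * Vn for n, Vn in V.items()) - E + 4

# The one-loop phi^4 diagram discussed above: d=4, one loop,
# two 4-point vertices, four external legs.
print(superficial_degree(4, 1, {4: 2}, 4))   # 0: a log divergence, matching log(Lambda)

# The one-loop phi^4 correction to the 2-point function:
print(superficial_degree(4, 1, {4: 1}, 2))   # 2: quadratically divergent
```

The d-dependence is what dimensional regularisation exploits: the same diagram in d = 4 - \epsilon has D shifted by -\epsilon L, and the divergence reappears as a pole in \epsilon.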

Thanks very much!
 
  • #22
latentcorpse said:
So what are our J_{i_1j_1} in this case?

That's easy to figure out by inverting the formula for the complex variables. Srednicki does the calculation a bit later in that section.

Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

The LSZ formula tells us that S-matrix elements are related to time-ordered correlation functions. We can build all of the correlation functions from the 1PIs as ingredients, I guess.

Ok so given this graph, if we put k as the loop momenta then the Euclidean space Feynman rules tell us that we get
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}
If we regularise by imposing a high-momentum UV cutoff k \leq \Lambda,
then we obtain a result \sim \log{\Lambda}, as on p324 of P&S.
Is this correct?

Basically yes.

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like \sim \log(k) next to it, but doesn't actually say what it is that goes like \log(k). It's not a Green's function or anything like that. What actually is it?

Each diagram is a specific way of Wick contracting the operators in a momentum space correlation function.

Do you just mean that you took \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle and made the transformation \phi \rightarrow - \phi, which gives 3 minus signs (one for each \phi), and since (-1)^3=-1 we get an overall minus sign that we can pull out, showing that the 3-point Green's function is equal to minus itself and therefore must vanish?

Yes.

Ok. So you're telling me that \Gamma is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that \Gamma(0)=1 to be consistent with 0!=1

No, 0!=\Gamma(1)

So if you were asked to find (-1.5)!, could you just compute \Gamma(-0.5) using its integral definition \Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}\,dx?

There are identities for \Gamma(z/2) and \Gamma(1-z) that are usually easier to work with than the integral.

Lastly, my notes discuss three methods of regularisation: (i) a UV cut-off, which I think I understand (assuming you agree with my calculation of \log(\Lambda) above);
(ii) using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing);
(iii) dimensional regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4, which depends on the dimension d.
Usually this is just defined for d \in \mathbb{Z}^+ but we can regulate by analytic continuation to d \in \mathbb{C}.
He has included an example afterwards and I can follow the maths in it, but I don't really understand what we are doing. How do we analytically continue to complex d, and what's the point of doing so?

You'd be better off trying to read the details in a decent text. You are generally expressing loop integrals in terms of Gamma functions which define the analytic continuation. Since the Gamma functions have a known pole structure, one can determine the divergent and finite parts of each integral.
 
  • #23
fzero said:
Basically yes.
I'm getting an infinity though when I write it out in full

\int \frac{d^4k}{k^4} = \int \frac{dk}{k} \Big|_0^\Lambda = \log{\Lambda} - \log{0}, but \log{0}=-\infty, so I end up with a +\infty term?

fzero said:
No, 0!=\Gamma(1)

Ok. So we're saying that \Gamma(0) is undefined and we can never know its value because of the pole there? That makes sense I guess...

fzero said:
There are identities for \Gamma(z/2) and \Gamma(1-z) that are usually easier to work with than the integral.

So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible provided it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers: does (-1+i)! exist?

And so, given these two methods of regularisation, (i) momentum-space cut-off and (ii) dimensional regularisation, how do we know which one to use? Will questions generally state which one they want you to use, or is it better to use a particular one?

Thanks.
 
Last edited:
  • #24
latentcorpse said:
I'm getting an infinity though when i write it out in full

\int \frac{d^4k}{k^4} = \int \frac{dk}{k}|_0^\Lambda = \log{\Lambda} - \log{0} but \log{0}=-\infty so I end up with a +\infty term?
You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.

latentcorpse said:
Ok. So we're saying that \Gamma(0) is undefined and we can never know its value because of the pole there? That makes sense I guess...

I should have been more careful. We can isolate the divergent and finite parts of the Gamma function at the poles. This is what's used in dimensional regularization. My comment above was that 0! does not correspond to a pole of the Gamma function.

latentcorpse said:
So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible provided it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers: does (-1+i)! exist?

Both of those exist.
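The mass-as-IR-cutoff and \log\Lambda statements can be checked numerically. Below is a minimal sketch (the function names and the trapezoidal scheme are my own choices), using the p = 0 bubble so that both propagators coincide: the radial d = 4 integrand k^3/(k^2+m^2)^2 is finite at k = 0 precisely because of the mass, and the cutoff dependence is logarithmic.

```python
import math

def radial_integrand(k, m):
    """d=4 Euclidean bubble at p=0, angular volume stripped off: k^3/(k^2+m^2)^2."""
    return k**3 / (k**2 + m**2)**2

def cutoff_integral(lam, m, n=100000):
    """Trapezoidal estimate of the radial integral from 0 up to the cutoff."""
    h = lam / n
    s = 0.5 * (radial_integrand(0.0, m) + radial_integrand(lam, m))
    for j in range(1, n):
        s += radial_integrand(j * h, m)
    return s * h

def exact(lam, m):
    """Closed form for comparison; antiderivative (1/2)[ln(k^2+m^2) + m^2/(k^2+m^2)]."""
    F = lambda k: 0.5 * (math.log(k**2 + m**2) + m**2 / (k**2 + m**2))
    return F(lam) - F(0.0)

for lam in (10.0, 100.0, 1000.0):
    print(lam, cutoff_integral(lam, 1.0), exact(lam, 1.0))
# Each decade in Lambda adds roughly ln(10) ~ 2.30: a log(Lambda) divergence,
# while the k -> 0 end is harmless because m != 0.
```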
 
  • #25
fzero said:
You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.
So going back to
\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be \log{\Lambda}, but \Lambda is finite, so it can't diverge. This is the point of regularisation, right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem, as we can still just take \Lambda \rightarrow \infty and the integral will diverge again, won't it?

fzero said:
Both of those exist.
So how would we calculate (1-i)!, say?

Can you try and explain to me what renormalisation is all about, as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the fact that the mass parameter and the physical mass (which appears as the pole in the full two-point function) are different?

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes \hat{F}^{(0)}_2(p,-p)=p^2+m^2
I don't see why though? Surely the Feynman rules tell us it should have a \frac{1}{p^2+m^2}, no?

Cheers.
 
Last edited:
  • #26
latentcorpse said:
So going back to
\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be \log{\Lambda}, but \Lambda is finite, so it can't diverge. This is the point of regularisation, right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem, as we can still just take \Lambda \rightarrow \infty and the integral will diverge again, won't it?

This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.

So how would we calculate (1-i)!, say?

You probably have to do an integral somewhere, since I'm not remembering an expression for \Gamma(\pm i).
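To illustrate that the integral route is workable, here is a rough numerical sketch (the substitution, truncation limits, and function names are my own choices): substituting x = e^t in \Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\,dx gives \int_{-\infty}^{\infty} e^{zt - e^t}\,dt, which decays rapidly at both ends for \mathrm{Re}(z) > 0, and the recursion \Gamma(i) = \Gamma(1+i)/i then reaches the imaginary axis.

```python
import cmath
import math

def gamma_right_half(z, t_min=-40.0, t_max=6.0, n=4000):
    """Gamma(z) for Re(z) > 0 from the Euler integral, after substituting
    x = exp(t); the integrand exp(z*t - e^t) decays exponentially at both
    ends, so composite Simpson's rule converges quickly."""
    if n % 2:
        n += 1
    h = (t_max - t_min) / n
    total = 0j
    for k in range(n + 1):
        t = t_min + k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += w * cmath.exp(z * t - math.exp(t))
    return total * h / 3

# Gamma(i) sits on the imaginary axis, where the Euler integral no longer
# converges absolutely, so use the recursion Gamma(i) = Gamma(1+i)/i:
g = gamma_right_half(1 + 1j) / 1j
print(g)   # approximately -0.1549 - 0.4980i, i.e. (-1+i)!
# Consistency check against the identity |Gamma(i)|^2 = pi / sinh(pi):
print(abs(g)**2, math.pi / math.sinh(math.pi))
```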

Can you try and explain to me what renormalisation is all about as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the mass and the physical mass (which appears as the pole in the full two point function are different?

I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes \hat{F}^{(0)}_2(p,-p)=p^2+m^2
I don't see why though? Surely the Feynman rules tell us it should have a \frac{1}{p^2+m^2}, no?

The definition of the amputated correlation function is

G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).

Since

G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)

we find that

G_{\text{amp}}(p,-p) \sim p^2+m^2.
 
  • #27
latentcorpse said:
If \{ \theta_i \} are a set of Grassmann numbers then what is

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)

The \frac{\partial}{\partial \theta_i} is undefined on a Grassmann algebra. You either have a left derivative, or a right derivative operator depending on your wish. The 2 operators in general are not the same and in particular this difference is felt once you have a product of anticommuting objects.

Read more on this in Henneaux and Teitelboim's book on the quantization of gauge theories.
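To see the left-derivative sign rules in action, here is a toy symbolic implementation (entirely my own construction, not from Henneaux and Teitelboim): an element is a dict mapping ordered index tuples to coefficients, and the left derivative anticommutes \theta_i to the front, picking up (-1)^p where p is its position.

```python
def normalize(indices):
    """Sort Grassmann indices into increasing order, tracking the sign of the
    permutation; returns (None, 0) if any index repeats (theta_i^2 = 0)."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):            # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) != len(idx):
        return None, 0
    return tuple(idx), sign

def multiply(a, b):
    """Product of two Grassmann elements {index-tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, sign = normalize(ia + ib)
            if key is not None:
                out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

def left_deriv(a, i):
    """Left derivative d/d(theta_i): move theta_i to the front through p
    other thetas, giving (-1)^p, then delete it."""
    out = {}
    for idx, c in a.items():
        if i in idx:
            p = idx.index(i)
            rest = idx[:p] + idx[p + 1:]
            out[rest] = out.get(rest, 0) + ((-1) ** p) * c
    return {k: v for k, v in out.items() if v}

# d/d(theta_2) (theta_1 theta_2) = -theta_1, the minus sign discussed above:
print(left_deriv({(1, 2): 1}, 2))       # {(1,): -1}
# d/d(theta_2) (theta_1 theta_2 theta_3) = -theta_1 theta_3:
print(left_deriv({(1, 2, 3): 1}, 2))    # {(1, 3): -1}
# theta_1 * theta_1 = 0:
print(multiply({(1,): 1}, {(1,): 1}))   # {}
```

Running left_deriv over the monomial {(j,k,l): 1} reproduces the identity from the opening question, \partial_i(\theta_j\theta_k\theta_l) = \delta_{ij}\theta_k\theta_l - \delta_{ik}\theta_j\theta_l + \delta_{il}\theta_j\theta_k.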
 
  • #28
fzero said:
This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.
I see how he gets the first line of (10.20) from Euclidean-space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to (-i \lambda)^2 \cdot i V(p^2) where V(p^2) is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the Feynman parameter he's using, but our lecture notes haven't mentioned these...

fzero said:
You probably have to do an integral somewhere, since I'm not remembering an expression for \Gamma(\pm i).
So we have (-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx and we'd have to evaluate this?

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

fzero said:
I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.
Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
\mathcal{L}+\mathcal{L}_{ct} where the counter term lagrangian is given by
\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)
and the counter term potential is given by E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

fzero said:
The definition of the amputated correlation function is

G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).

Since

G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)

we find that

G_{\text{amp}}(p,-p) \sim p^2+m^2.

So we have that (I think this is the same as what you have)
F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}\left( \displaystyle\sum_{i=1}^n p_i \right) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)
where \hat{F}_n(p_1 , \dots , p_n) corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get (-i/(p^2+m^2))^2*\hat{F} on the RHS. Why is the full correlation function equal to 1/(p^2+m^2)? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out \hat{F}?

Thanks again.
 
Last edited by a moderator:
  • #29
latentcorpse said:
I see how he gets the first line of (10.20) from Euclidean space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to (-i \lambda)^2 \cdot i V(p^2) where V(p^2) is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the feynman parameter he's on about but our lecture notes haven't mentioned these...

(10.20) serves as the definition of V(p^2). The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.
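Since the lecture notes don't cover Feynman parameters: the identity being used is \frac{1}{AB} = \int_0^1 \frac{dx}{[xA + (1-x)B]^2}, which is exactly what introduces the x integral. A quick numerical check of the identity (a minimal sketch; the names are mine):

```python
def feynman_param_check(A, B, n=2000):
    """Simpson's-rule evaluation of the Feynman-parameter integral
    ∫_0^1 dx / (x*A + (1-x)*B)^2, which should reproduce 1/(A*B)."""
    if n % 2:
        n += 1
    h = 1.0 / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += w / (x * A + (1 - x) * B) ** 2
    return total * h / 3

print(feynman_param_check(2.0, 5.0), 1 / (2.0 * 5.0))   # both 0.1
```

In the loop integral one takes A and B to be the two propagator denominators; the x integral then lets you complete the square in the loop momentum.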

So we have (-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx and we'd have to evaluate this?

Or use some other representation of the Gamma function. Wolframalpha seems to use the series representations: http://www.wolframalpha.com/input/?i=gamma(i)

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.

Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
\mathcal{L}+\mathcal{L}_{ct} where the counter term lagrangian is given by
\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)
and the counter term potential is given by E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the \hat{\tau}s at the top of that page, while the counterterms contribute by adding the \hat{\tau}_{c.t.}s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.

So we have that (I think this is the same as what you have)
F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}\left( \displaystyle\sum_{i=1}^n p_i \right) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)
where \hat{F}_n(p_1 , \dots , p_n) corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get (-i/(p^2+m^2))^2*\hat{F} on the RHS. Why is the full correlation function equal to 1/(p^2+m^2)? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out \hat{F}?

There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
 
Last edited by a moderator:
  • #30
fzero said:
(10.20) serves as the definition of V(p^2). The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.

I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an \Omega_4?
Surely \int d^4k = \int_{S^3} d \Omega_3 \int k^3 dk, no?

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom, i.e. 32, since we had a (2\pi)^4 and also a 1/2 from (2.7)? He also appears to have only \pi^2 rather than \pi^4 on the bottom?

fzero said:
The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.
So, given an arbitrary loop integral to compute, is it best to start off by trying a momentum-space cut-off (since it is easier) and, if that fails, then try dim reg? Or should we just always try dim reg straight away since we know that it works?

fzero said:
The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the \hat{\tau}s at the top of that page, while the counterterms contribute by adding the \hat{\tau}_{c.t.}s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.
Yep, so I worked out those divergent parts already using the dimensional regularisation prescription, I think. And I understand (or at least am prepared to accept) that we can add new terms to the Lagrangian in order to cancel off these divergences. However, I do not understand why the counter terms in the Lagrangian give the contributions that they do to the amputated n-point function
i.e. why is \hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B?
And why do they then give those diagrams drawn underneath?
fzero said:
There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
Ok, so given my convention, the RHS is equal to \frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p) and the LHS is equal to F(p,-p). So what do we substitute for F(p,-p) and why?
 
Last edited:
  • #31
latentcorpse said:
I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an \Omega_4?
Surely \int d^4k = \int_{S^3} d \Omega_3 \int k^3 dk, no?

Yes, he's wrong.

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

The integral can be done by substitution. I get a slightly different answer than he does.

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom, i.e. 32, since we had a (2\pi)^4 and also a 1/2 from (2.7)? He also appears to have only \pi^2 rather than \pi^4 on the bottom?

I think he has a factor of 2 wrong in the integral above, so it's pointless to try to follow his algebra.

So, given an arbitrary loop integral to compute, is it best to start off by trying a momentum-space cut-off (since it is easier) and, if that fails, then try dim reg? Or should we just always try dim reg straight away since we know that it works?

It's probably best to use DR since it's probably not obvious from just one or two diagrams whether the primitive cutoff is going to break something important.

Yep, so I worked out those divergent parts already using the dimensional regularisation prescription, I think. And I understand (or at least am prepared to accept) that we can add new terms to the Lagrangian in order to cancel off these divergences. However, I do not understand why the counter terms in the Lagrangian give the contributions that they do to the amputated n-point function
i.e. why is \hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B?
And why do they then give those diagrams drawn underneath?

The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

A(\partial \phi)^2 \rightarrow - A p^2

-B \phi^2 \rightarrow - B,

etc. On the RHS we've gotten rid of the factors of \hat{\phi}(p) as usual when writing Feynman rules.

Ok, so given my convention, the RHS is equal to \frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p) and the LHS is equal to F(p,-p). So what do we substitute for F(p,-p) and why?

F(p,-p) is the connected 2pt function, which should just be the propagator.
 
  • #32
fzero said:
The integral can be done by substitution. I get a slightly different answer than he does.
What would your substitution be? I tried u=k^2+p^2x(1-x)+m^2 as well as u=(k^2+p^2x(1-x)+m^2)^2 but couldn't get either to work?

fzero said:
The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

A(\partial \phi)^2 \rightarrow - A p^2

-B \phi^2 \rightarrow - B,

etc. On the RHS we've gotten rid of the factors of \hat{\phi}(p) as usual when writing Feynman rules.
Ok. So you're saying that we take the Fourier transform to get the momentum space Feynman rules? How do you know only to Fourier transform the A and B contributions for \hat{\tau}_2^{(0)} though?

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

Lastly, are the vertices he has drawn on that page corresponding to B,C,D,E or \hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}?
fzero said:
F(p,-p) is the connected 2pt function, which should just be the propagator.
Ok but surely the full 2 point function should have external leg contributions as well, no?

My notes also claim that the renormalised parameters m, \lambda, \phi depend on the RG (renormalisation group) scale \mu but are independent of the cut-off \epsilon, whereas the bare parameters are dependent on the cut-off and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters i.e. ones that don't change when you change the scale? But clearly these will change and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?
(ii) When he talks about the cutoff \epsilon, even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff \Lambda from UV cutoff regularisation, since surely taking the limits \Lambda \rightarrow \infty and \epsilon \rightarrow 0 have the same effect?
Thanks.
 
Last edited:
  • #33
latentcorpse said:
What would your substitution be? I tried u=k^2+p^2x(1-x)+m^2 as well as u=(k^2+p^2x(1-x)+m^2)^2 but couldn't get either to work?

u=k^2+p^2x(1-x)+m^2 works. You might want to try again.
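For what it's worth, here is a quick numerical sanity check of that substitution (pure Python; the values of \Delta and the upper cutoff K are made up for the test). The point is only that u = k^2 + \Delta, with \Delta = p^2x(1-x)+m^2, linearizes the radial integrand:

```python
# With u = k^2 + Delta we have du = 2k dk, so the radial integral becomes
#   int_0^K k dk/(k^2+Delta)^2 = (1/2) int_Delta^{K^2+Delta} du/u^2
#                              = 1/(2*Delta) - 1/(2*(K^2+Delta)).
# Check this against a midpoint-rule evaluation of the left-hand side.
Delta = 1.7          # stands in for p^2 x(1-x) + m^2; value is arbitrary
K = 50.0             # finite upper cutoff, just for the check
N = 200_000
h = K / N
num = h * sum((i + 0.5) * h / (((i + 0.5) * h) ** 2 + Delta) ** 2
              for i in range(N))
exact = 1.0 / (2 * Delta) - 1.0 / (2 * (K ** 2 + Delta))
assert abs(num - exact) < 1e-6
```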

Ok. So you're saying that we take the Fourier transform to get the momentum space Feynman rules? How do you know only to Fourier transform the A and B contributions for \hat{\tau}_2^{(0)} though?

The leading divergence from the counterterms is at tree-level (since we choose the coefficients to be divergent). There are no contributions to the tree level 2pt function from the other counterterms.

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

The number of external legs corresponds to the number of fields in the term from the Lagrangian. The factor is the coupling constant up to sign or other conventions. The rules are given on page 21 of your notes.

Lastly, are the vertices he has drawn on that page corresponding to B,C,D,E or \hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}?

\hat{\tau}_2^{(0)} is the leading divergence in the 2pt function. It comes from one-loop graphs. \hat{\tau}_{2,\text{c.t.}}^{(0)} comes from the tree-level counterterms, since we're choosing the coefficients of the counterterms themselves to be divergent.

Ok but surely the full 2 point function should have external leg contributions as well, no?

The 2pt function is what you compute from \langle T \phi(x)\phi(y)\rangle. At tree level there aren't external leg contributions.

My notes also claim that the renormalised parameters m, \lambda, \phi depend on the RG (renormalisation group) scale \mu but are independent of the cutoff \epsilon, whereas the bare parameters are dependent on the cutoff and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters i.e. ones that don't change when you change the scale? But clearly these will change and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?

The point of renormalization is to get finite physical parameters. As a consequence they depend on scale. The bare parameters aren't directly measurable because they are divergent.

(ii) When he talks about the cutoff \epsilon, even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff \Lambda from UV cutoff regularisation, since surely taking the limits \Lambda \rightarrow \infty and \epsilon \rightarrow 0 have the same effect?

The divergent parts should agree with the substitution 1/\epsilon \sim \log \Lambda. I don't think that there's any deeper connection between the two methods.
Thanks.
 
  • #34
fzero said:
u=k^2+p^2x(1-x)+m^2 works. You might want to try again.
Ok, I'll try it.

My notes keep chopping and changing between G's and F's for Green's functions. I thought F was a Green's function in momentum space and G was a Green's function in position space, but I just saw a G(p). Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to use the two interchangeably.
fzero said:
The 2pt function is what you compute from \langle T \phi(x)\phi(y)\rangle. At tree level there aren't external leg contributions.

How does this look:

We have the equation F^{(n)}(p_1, \dots , p_n) = i (2 \pi)^d \delta^{d}(\displaystyle\sum_{i=1}^n p_i) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1, \dots , p_n)

Now we're trying to solve for \hat{F}_2(p,-p)

So the RHS of our above equation (neglecting the delta function piece) is
\frac{-1}{(p^2+m^2)^2} \hat{F}_2(p,-p)

Now the LHS is the full two-point function, which should just be a single propagator contribution, so we should have F_2(p,-p) = \frac{1}{p^2+m^2}

Rearranging this gives \hat{F}_2(p,-p)=-(p^2+m^2)=-p^2-m^2
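That rearrangement can be checked mechanically. The following sympy snippet (a sketch, with symbol names of my own choosing) solves the n = 2 case of the equation above, with the delta-function and i(2\pi)^d factors stripped as in the text:

```python
import sympy as sp

p, m = sp.symbols('p m', positive=True)
F2hat = sp.symbols('F2hat')              # the amputated function we solve for

F2 = 1 / (p**2 + m**2)                   # tree-level propagator = connected 2pt fn
# n = 2 case: F_2(p,-p) = [-1/(p^2+m^2)^2] * F2hat, solve for F2hat
sol = sp.solve(sp.Eq(-F2hat / (p**2 + m**2)**2, F2), F2hat)[0]
assert sp.simplify(sol + p**2 + m**2) == 0   # i.e. F2hat = -(p^2 + m^2)
```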
Furthermore, the renormalisation theorem tells us that if all the subdivergences are removed and we find the superficial degree of divergence, D, to be greater than or equal to 0, then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree D.

However, in my notes, we work out that

\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2 is the divergent piece for a theory with \phi^4 (coupling \lambda) and \phi^3 (coupling g) interactions in 4 dimensions for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as D=4-E-V_3 where E is the number of external legs and V_3 is the number of valence 3 vertices. So for 1 external line above we find D=3-V_3=2 and so we should have a degree 2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of g, not 2?
What's going on?
I thought that it might be that the mass counts, but it isn't a coupling OR a momentum, is it?

This is a problem throughout as I have \hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda
This should have D=4-3-1=0 but quite clearly this polynomial is quadratic in couplings, leading to the same problem!

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator) and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities though, since physical quantities should be independent of RG scale and regulator, shouldn't they?
(i)I'm not sure my understanding of how renormalised and bare quantities are produced is correct? Re-reading what I wrote above it sounds like I've said they both arise for the same reason. This can't be true as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.
 
Last edited:
  • #35
latentcorpse said:
Ok, I'll try it.

My notes keep chopping and changing between G's and F's for Green's functions. I thought F was a Green's function in momentum space and G was a Green's function in position space, but I just saw a G(p). Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to use the two interchangeably.

Use of F and G is not standard between authors, though usually people use G. Momentum space vs position space should be clear from context. Usually people don't bother to put a tilde over the momentum space functions because of this.

All Green functions are correlation functions of some type, but can differ in terms of connectedness. For example,

G(1,2,\ldots N) = (-i)^N \left. \frac{\delta}{\delta J_1} \cdots \frac{\delta}{\delta J_N} W[J]\right|_{J=0}

are the Green functions computed from the generating functional. The RHS is clearly a correlation function of the fields. However the Green functions computed from Z[J] = -i \log W[J] are the connected Green functions.


Furthermore, the renormalisation theorem tells us that if all the subdivergences are removed and we find the superficial degree of divergence, D, to be greater than or equal to 0, then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree D.

The degree corresponds to the expression \Lambda^D in the primitive cutoff scheme (where D=0 corresponds to \log \Lambda). The polynomial in couplings and momenta must be of scaling dimension D to compensate.
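As a cross-check on the counting used in the thread (a throwaway helper, not from the notes), the quoted formula for the \phi^3 + \phi^4 theory in d = 4, D = 4 - E - V_3, reproduces the cases under discussion:

```python
def superficial_degree(E, V3):
    """D = 4 - E - V3 for a phi^3 + phi^4 graph in d = 4 dimensions.

    E is the number of external legs; V3 is the number of three-point
    vertices (each phi^3 coupling g has mass dimension 1 and lowers D by one,
    while the phi^4 coupling lambda is marginal in d = 4).
    """
    return 4 - E - V3

assert superficial_degree(E=1, V3=1) == 2   # one-loop tadpole: ~ g m^2 / eps
assert superficial_degree(E=3, V3=1) == 0   # one-loop 3-pt:    ~ g lambda / eps
assert superficial_degree(E=2, V3=0) == 2   # one-loop 2-pt in pure phi^4
```

D here counts the mass dimension that the divergent polynomial in momenta and m^2 must carry, not the number of coupling factors.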

However, in my notes, we work out that

\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2 is the divergent piece for a theory with \phi^4 (coupling \lambda) and \phi^3 (coupling g) interactions in 4 dimensions for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as D=4-E-V_3 where E is the number of external legs and V_3 is the number of valence 3 vertices. So for 1 external line above we find D=3-V_3=2 and so we should have a degree 2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of g, not 2?
What's going on?
I thought that it might be that the mass counts, but it isn't a coupling OR a momentum, is it?

This is a problem throughout as I have \hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda
This should have D=4-3-1=0 but quite clearly this polynomial is quadratic in couplings leading to the same problem!

The expression of couplings and momenta must have scaling or mass dimension D. The actual number of n-pt couplings would be determined from V_n, while the dependence on momenta is determined by the scaling dimension.

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator) and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities though, since physical quantities should be independent of RG scale and regulator, shouldn't they?
(i)I'm not sure my understanding of how renormalised and bare quantities are produced is correct? Re-reading what I wrote above it sounds like I've said they both arise for the same reason. This can't be true as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.

The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.
 
  • #36
fzero said:
The degree corresponds to the expression \Lambda^D in the primitive cutoff scheme (where D=0 corresponds to \log \Lambda). The polynomial in couplings and momenta must be of scaling dimension D to compensate.

The expression of couplings and momenta must have scaling or mass dimension D. The actual number of n-pt couplings would be determined from V_n, while the dependence on momenta is determined by the scaling dimension.
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

fzero said:
The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counter terms that changes the parameters? And if so, how does this work?


Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

Thanks.
 
  • #37
latentcorpse said:
I'm afraid I still don't really understand why the examples of the polynomials I posted in my previous post weren't working out.

Go back and read the renormalization theorem. It says that the polynomial has dimension D. Somehow you misread this as "degree D," which is incorrect.

So I'm confused as to how the renormalised parameters arise in renormalisation. Is it the addition of the counter terms that changes the parameters? And if so, how does this work?

The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

Finally, if renormalisation produces the renormalised, physical parameters that we were after all along, why do we then go and define bare parameters! What's the point in that?

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.
 
  • #38
fzero said:
Go back and read the renormalization theorem. It says that the polynomial has dimension D. Somehow you misread this as "degree D," which is incorrect.
Ok I see. That's fine then.

fzero said:
The addition of the counterterms to the original Lagrangian gives the bare parameters. The renormalized quantities are what we get when we remove the divergence from the bare parameters.

It's natural to define the bare parameters because there's a counterterm for every original term in the Lagrangian.

So I have the result for \phi^4 theory with 1 loop that the addition of counterterms gives m_B^2=\frac{m^2+B}{Z}=m^2 ( 1 + \frac{\lambda}{16 \pi^2 \epsilon})

So all I need to do to get the physical, renormalised mass is just rearrange the above for m?

That would give m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}
However, I don't see how this is physical as it still has dependence on the regulator and in fact as \epsilon \rightarrow 0 we see m^2 \rightarrow 0

Surely this can't be right, can it?

Thanks.
 
  • #39
latentcorpse said:
So I have the result for \phi^4 theory with 1 loop that the addition of counterterms gives m_B^2=\frac{m^2+B}{Z}=m^2 ( 1 + \frac{\lambda}{16 \pi^2 \epsilon})

So all I need to do to get the physical, renormalised mass is just rearrange the above for m?

That would give m^2=\frac{m_B^2}{1+\frac{\lambda}{16 \pi^2 \epsilon}}
However, I don't see how this is physical as it still has dependence on the regulator and in fact as \epsilon \rightarrow 0 we see m^2 \rightarrow 0

Surely this can't be right, can it?

Thanks.

The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.
 
  • #40
fzero said:
The bare mass itself depends on the regulator and is divergent, so that expression isn't very helpful. The right way to derive the renormalized parameters is to use the bare parameters to write down the renormalization group equations and then solve those for the scale dependent renormalized parameters.

We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass though?
 
  • #41
latentcorpse said:
We discussed how to solve the Callan-Symanzik equation (which I think is the RG equation) by introducing a running coupling. This can be used to find the coupling as a function of energy scale, which tells us the regions in which perturbation theory is applicable. Is this what you are talking about?

How would one solve it for the mass though?

For \lambda \phi^4 you have a pair of equations to one-loop order which are

\mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2},~~\mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.

They are simple to integrate.
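To make "simple to integrate" concrete: with t = \ln(\mu/\mu_0), the first equation gives \lambda(t) = \lambda_0/(1 - 3\lambda_0 t/16\pi^2), and feeding that into the second gives m^2(t) = m_0^2\,(\lambda(t)/\lambda_0)^{1/3}. A small sketch (the initial values \lambda_0, m_0^2 and the range of t are made up) checks these closed forms against a direct RK4 integration:

```python
import math

A = 1.0 / (16 * math.pi ** 2)

def rhs(t, y):
    # t = ln(mu/mu0); y = (lambda, m^2); the one-loop equations quoted above
    lam, m2 = y
    return (3 * A * lam ** 2, A * lam * m2)

def rk4(t_end, y0, n=10_000):
    # classical fourth-order Runge-Kutta from t = 0 to t = t_end
    h = t_end / n
    t, y = 0.0, list(y0)
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return y

lam0, m20, t_end = 0.5, 1.0, 2.0        # made-up initial data at mu = mu0
lam, m2 = rk4(t_end, (lam0, m20))

lam_exact = lam0 / (1 - 3 * A * lam0 * t_end)          # Landau-pole form
m2_exact = m20 * (lam_exact / lam0) ** (1.0 / 3.0)     # m^2 grows as lambda^{1/3}
assert abs(lam - lam_exact) < 1e-8
assert abs(m2 - m2_exact) < 1e-8
```

Note that m^2(\mu) ends up as a function of \lambda(\mu), which anticipates the question below.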
 
Last edited:
  • #42
fzero said:
For \lambda \phi^4 you have a pair of equations to one-loop order which are

\mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2},~~\mu \frac{dm^2}{d\mu} = \frac{m^2\lambda}{16\pi^2}.

They are simple to integrate.

I have \beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}
where \beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon with \hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}
So it appears that we can only get the equation you had, and integrate to get an \epsilon-independent result, if \epsilon=0, i.e. if d=4. Surely this procedure should also work in other dimensions though?

My m equation is slightly different I have

\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}

Either way I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik eqn?

Is it ok that m is going to turn out to be a function of \lambda?

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang-Mills Lagrangian (in the free theory) encounters a problem because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a,
we find S_0=\int d^dx \mathcal{L}_{\text{YM}} = -\int d^dx \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a changes by
\delta S_0 = \frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right)
I understand this so far (its just substitution).
He then claims that this can be integrated by parts to give
-\int d^d x A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0

I don't follow either of these last two equalities? Why does integration by parts give that term and why does it then vanish?

Thanks again!
 
Last edited:
  • #43
latentcorpse said:
I have \beta_\lambda=\frac{3 \lambda^2}{16 \pi^2}
where \beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon with \hat{\beta}_\lambda=\mu \frac{\partial \lambda}{\partial \mu}
So it appears that we can only get the equation you had, and integrate to get an \epsilon-independent result, if \epsilon=0, i.e. if d=4. Surely this procedure should also work in other dimensions though?

My m equation is slightly different I have

\frac{\mu}{m^2} \frac{d m^2}{d \mu} = \frac{\lambda}{16 \pi^2}

Either way I can see how this will work out.

So we get the renormalised parameters by solving for the coefficients of the Callan-Symanzik eqn?

Is it ok that m is going to turn out to be a function of lambda?

The solution of the RG equations is a curve in the (m,\lambda) plane parameterized by \mu. A form like m^2 = f(\lambda,\mu) is reasonable.

Also, I have a calculation in my notes I am unsure of. He is showing that naive quantisation of the pure SU(N) Yang Mills lagrangian (in free theory) encounters a problem because zero modes appear as a result of the action being gauge invariant.
Anyway, I am struggling to prove the gauge invariance of the action.
He writes that under an infinitesimal gauge transformation A_\mu^a \rightarrow A_\mu^a + \partial_\mu \lambda^a,
we find S_0=\int d^dx \mathcal{L}_{\text{YM}} = -\int d^dx \frac{1}{2} A_\mu^a \Delta^{\mu \nu} A_\nu^a changes by
\delta S_0 = \frac{1}{2} \int d^dx \left( \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a \right)
I understand this so far (its just substitution).
He then claims that this can be integrated by parts to give
-\int d^d x A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a=0

I don't follow either of these last two equalities? Why does integration by parts give that term and why does it then vanish?

Thanks again!

You have to integrate by parts with the \partial_\mu in the first term of \delta S_0. This gives a surface term and a term of the form quoted. \Delta^{\mu \nu} \partial_\nu \lambda^a=0 is simple algebra using the expression for (\Delta A)_\nu given on page 69 of those notes.
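The "simple algebra" is easiest to see in momentum space, where \Delta^{\mu\nu}(p) \propto -p^2\eta^{\mu\nu} + p^\mu p^\nu and a pure-gauge mode A_\nu \propto p_\nu\lambda is annihilated identically. A numerical illustration (the normalization and the mostly-plus metric are my own convention choices; the transversality statement is insensitive to both):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus Minkowski metric
rng = np.random.default_rng(0)

p_up = rng.normal(size=4)              # arbitrary momentum p^mu
p_dn = eta @ p_up                      # p_mu = eta_{mu nu} p^nu
p2 = p_up @ p_dn                       # p^2 = p^mu p_mu

# Free gauge kinetic operator in momentum space (overall sign is a guess at
# the notes' convention; it drops out of the transversality check):
#   Delta^{mu nu}(p) = -p^2 eta^{mu nu} + p^mu p^nu
eta_up = np.linalg.inv(eta)            # eta^{mu nu}
Delta = -p2 * eta_up + np.outer(p_up, p_up)

# Pure-gauge mode A_nu ~ p_nu lambda:
#   Delta^{mu nu} p_nu = -p^2 p^mu + p^mu (p^nu p_nu) = 0
assert np.allclose(Delta @ p_dn, 0.0)
```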
 
  • #44
fzero said:
The solution of the RG equations is a curve in the (m,\lambda) plane parameterized by \mu. A form like m^2 = f(\lambda,\mu) is reasonable.
But our \lambda solution was just in terms of \lambda and \mu. Shouldn't it also have some m dependence if it is in the (m,\lambda) plane? Or is that contained within the \mu dependence?

And what about the issue of \beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon that I raised in my last post, which seems to suggest this will only work if \epsilon=0, i.e. d=4?

fzero said:
You have to integrate by parts with the \partial_\mu in the first term of \delta S_0. This gives a surface term and a term of the form quoted. \Delta^{\mu \nu} \partial_\nu \lambda^a=0 is simple algebra using the expression for (\Delta A)_\nu given on page 69 of those notes.
Right, I still don't get that. Integrating the first term as you suggest I find
\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu
So why can I neglect the surface term? Does \lambda vanish at infinity? If so, why?
And how does that second term rearrange to what I want? The derivative is now on the A rather than the lambda like we want in the final answer?
 
Last edited:
  • #45
latentcorpse said:
But our \lambda solution was just in terms of \lambda and \mu. Shouldn't it also have some m dependence if it is in the (m,\lambda) plane? Or is that contained within the \mu dependence?

It's not necessary, I was just explaining why it would be natural if it did.

And what about the issue of \beta_\lambda=\hat{\beta}_\lambda+\lambda \epsilon that I raised in my last post, which seems to suggest this will only work if \epsilon=0, i.e. d=4?

All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.

Right, I still don't get that. Integrating the first term as you suggest I find
\frac{1}{2}\lambda^a \Delta^{\mu \nu} A_\nu^a - \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A^a_\nu
So why can I neglect the surface term? Does \lambda vanish at infinity? If so, why?
And how does that second term rearrange to what I want? The derivative is now on the A rather than the lambda like we want in the final answer?

That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either \lambda or A as usual. As for the other term, I misspoke a bit. However you should still be able to show that \Delta^{\mu\nu}\partial_\mu A_\nu = 0.
 
  • #46
fzero said:
All of the dependence on the regulator is contained in the bare parameters. Since the bare parameters are independent of the RG scale, the RG equations don't depend on the regulator.
I don't get this - which of beta and beta hat is bare? I guess explicitly beta would be bare as it has obvious epsilon dependence. However, I don't see how this solves the problem?

fzero said:
That first term should be a surface integral. That integral should vanish by suitable boundary conditions on either \lambda or A as usual. As for the other term, I mispoke a bit. However you should still be able to show that \Delta^{\mu\nu}\partial_\mu A_\nu = 0.
Ok I need to get it to the form -\int d^dx A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a though.
So I integrate the 1st term by parts, giving two terms (one of which goes away as it's a surface term), but the remaining term won't combine with the 2nd term in the original integral. I tried integrating the 2nd term by parts but it didn't give me anything that I could combine with the remaining term.
 
  • #47
latentcorpse said:
I don't get this - which of beta and beta hat is bare? I guess explicitly beta would be bare as it has obvious epsilon dependence. However, I don't see how this solves the problem?

I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.

Ok I need to get it to the form -\int d^dx A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a though.

You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (2x) with the derivatives in \Delta^{\mu \nu}.

So I integrate the 1st term by parts, giving two terms (one of which goes away as it's a surface term), but the remaining term won't combine with the 2nd term in the original integral. I tried integrating the 2nd term by parts but it didn't give me anything that I could combine with the remaining term.

The non surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.
 
  • #48
fzero said:
I was making a more general statement whose consequence is that the correct beta functions to use are the ones that are not dependent on the regulator.
Ok. So we agree that we can get to \beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)
But \beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon by definition.
Now how do we get rid of the \lambda \epsilon term so that we can integrate up and solve for the renormalised coupling?

fzero said:
You don't need to do so in order to show the result. If you want to do that, you'll need to integrate by parts (2x) with the derivatives in \Delta^{\mu \nu}.

The non surface terms you end up with both vanish. I don't know why you'd want to integrate either by parts.

Ok. In the original expression I find that the 2nd term A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a vanishes when we expand out \Delta^{\mu \nu} (which I think is right).

So this means that \delta S_0 = - \frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a
I'm now confused as to what to do to show this is 0. Should I expand out the \Delta^{\mu \nu} as well?

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial Lagrangian and if these counterterms take the same form as terms present in the original Lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it"(i.e. the thing that gets rendered finite) actually is?
Cheers.
 
Last edited:
  • #49
latentcorpse said:
Ok. So we agree that we can get to \beta_\lambda = \frac{3 \lambda^2}{16 \pi^2} + \mathcal{O}(\lambda^3)
But \beta(\lambda)=\hat{\beta}(\lambda) + \lambda \epsilon = \mu \frac{d \lambda}{d \mu} +\lambda \epsilon by definition.
Now how do we get rid of the \lambda \epsilon term so that we can integrate up and solve for the renormalised coupling?

You can just take \epsilon\rightarrow 0.

Ok. In the original expression I find that the 2nd term A_\mu^a \Delta^{\mu \nu} \partial_\nu \lambda^a vanishes when we expand out \Delta^{\mu \nu} (which I think is right).

So this means that \delta S_0 = - \frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a
I'm now confused as to what to do to show this is 0. Should I expand out the \Delta^{\mu \nu} as well?

We already dealt with this term in posts 44 and 45.

Finally, we say that a theory is renormalisable if it is rendered finite by adding a finite number of counterterms to the initial lagrangian and if these counterterms take the same form as terms present in the original lagrangian.
This is correct, yes?
However, when we say "it is rendered finite", what is "it"? It doesn't really make sense to say "the theory is rendered finite" when it's really something like the divergences in the amputated Green's functions associated with loop diagrams, or something like that...
Can you clear up what "it"(i.e. the thing that gets rendered finite) actually is?
Cheers.

The Green functions (connected, amputated or otherwise) are the physical observables. These are what we make finite initially. From them and the RG equations, we can also show that the physical parameters are finite as well.
 
  • #50
fzero said:
You can just take \epsilon\rightarrow 0.
We already dealt with this term in posts 44 and 45.
The Green functions (connected, amputated or otherwise) are the physical observables. These are what we make finite initially. From them and the RG group, we can also show that the physical parameters are finite as well.

Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts -\frac{1}{2} \int d^dx \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a
haven't we lost an index on the first term there? So the indices aren't balanced in this equation?

Could you take a look at the thread I made on solving the Callan-Symanzik equation please?

Thanks very much.
 
Last edited: