Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

In summary, the conversation discusses Grassmann numbers and their derivatives, as well as properties of Grassmann integration. It also touches on complex integration and the derivation of the formula for [itex]d \theta d \bar{\theta}[/itex].
  • #1
latentcorpse
If [itex]\{ \theta_i \}[/itex] are a set of Grassmann numbers then what is

[itex]\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)[/itex]

I know that [itex]\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}[/itex] - we need this to be the case because if we set [itex]j=k[/itex] then the LHS becomes the derivative of [itex]\theta_j^2=0[/itex], so we need the RHS to vanish as well (hence the minus sign!)

However, now that there are three variables present, I am confused as to what should pick up a minus sign upon differentiation and what should not?

Thanks.
 
  • #2
The minus sign in

[itex]
\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}
[/itex]

can also be derived from noting that [tex]\partial/\partial \theta_i[/tex] is Grassmann, so

[itex]
\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k -
\theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).
[/itex]

This method applies directly to the product of 3 Grassmann numbers.
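For concreteness, the graded Leibniz rule gives [itex]\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l ) = \delta_{ij} \theta_k \theta_l - \delta_{ik} \theta_j \theta_l + \delta_{il} \theta_j \theta_k[/itex]. The sign bookkeeping can be checked mechanically; here is a minimal Python sketch (the tuple representation and the helper name are illustrative choices, not standard notation):

```python
def left_derivative(indices, i):
    """Left derivative d/dtheta_i of the monomial theta_{i1} theta_{i2} ...
    (given as the tuple `indices`).  theta_i is anticommuted to the front,
    picking up (-1)**position, and then removed."""
    for pos, k in enumerate(indices):
        if k == i:
            return (-1) ** pos, indices[:pos] + indices[pos + 1:]
    return 0, ()  # theta_i absent: the derivative kills the monomial

# d/dtheta_2 (theta_1 theta_2 theta_3) = -theta_1 theta_3
print(left_derivative((1, 2, 3), 2))  # (-1, (1, 3))
# d/dtheta_1 (theta_1 theta_2 theta_3) = +theta_2 theta_3
print(left_derivative((1, 2, 3), 1))  # (1, (2, 3))
```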
 
  • #3
fzero said:
The minus sign in

[itex]
\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}
[/itex]

can also be derived from noting that [tex]\partial/\partial \theta_i[/tex] is Grassmann, so

[itex]
\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k -
\theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).
[/itex]

This method applies directly to the product of 3 Grassmann numbers.

Great. I do have a question about Grassmann integration though.

[itex]\int d^n \theta f(\theta_1, \dots \theta_n) = \int d \theta_n d \theta_{n-1} \dots d \theta_1 f(\theta_1, \dots , \theta_n)[/itex]
where [itex]f(\theta_1 , \dots , \theta_n) = a + a_i \theta_i + \dots + \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}[/itex]

So I can see that only the last term will survive because all the preceding terms are constant with respect to [itex]\theta_n[/itex] and since the integral of a constant with respect to a Grassmann variable is zero, they will all die. Therefore this simplifies to

[itex]\int d^n \theta \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}[/itex]

My notes then say that [itex]\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}[/itex] where [itex]\epsilon_{i_1 \dots i_n}[/itex] is the antisymmetric tensor that equals 1 if the indices are ordered in an even permutation, -1 if they are ordered in an odd permutation and 0 if any two indices are the same.

Apparently this then means that [itex]\int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}[/itex]

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?


Given this definition, does that mean that say [itex]\int d^3 \theta \theta_1 \theta_3 \theta_2 =-1[/itex] as [itex]\epsilon_{132}=-1[/itex], i.e. we would have to swap the last two variables to do this integral and that would pick up a minus sign?

Thanks.
 
  • #4
latentcorpse said:
Apparently this then means that [itex]\int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}[/itex]

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?

The [tex]a_i[/tex] are odd variables. Just expand [tex] \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}[/tex], count the number of terms and apply permutations to write each one as [tex]a_{1} \cdots a_{n}[/tex].
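To spell out the counting: each term [itex]\epsilon_{i_1 \dots i_n} a_{i_1} \cdots a_{i_n}[/itex] picks up the same permutation sign a second time when the odd [itex]a[/itex]'s are reordered to [itex]a_1 \cdots a_n[/itex], so every nonzero term equals [itex]+a_1 \cdots a_n[/itex] and the [itex]n![/itex] of them cancel the [itex]1/n![/itex]. A quick check of the sign counting (a sketch; `parity` is an ad hoc helper):

```python
from itertools import permutations
from math import factorial

def parity(p):
    """Sign of the permutation p, computed by counting adjacent swaps."""
    sign, q = 1, list(p)
    for i in range(len(q)):
        for j in range(len(q) - 1):
            if q[j] > q[j + 1]:
                q[j], q[j + 1] = q[j + 1], q[j]
                sign = -sign
    return sign

# Each term epsilon_p a_{p(1)}...a_{p(n)}: restoring the order of the odd
# a's costs sgn(p) a second time, so every term equals +a_1...a_n and the
# n! terms cancel the 1/n! prefactor.
n = 4
total = sum(parity(p) * parity(p) for p in permutations(range(n)))
print(total == factorial(n))  # True
```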

Given this definition, does that mean that say [itex]\int d^3 \theta \theta_1 \theta_3 \theta_2 =-1[/itex] as [itex]\epsilon_{132}=-1[/itex] i.e. we would have the last two variables to do this integral and that would pick up a minus sign?

Yes. You should be able to derive

[itex]
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}
[/itex]

from the basic formula

[tex]\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).[/tex]
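As a sanity check, the identity [itex]\int d^n \theta \, \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}[/itex] amounts to counting the adjacent swaps needed to sort the indices. A small Python sketch (the helper name is mine; only the anticommutation signs are being checked, not any measure convention):

```python
from itertools import permutations

def sort_sign(indices):
    """Sign accumulated while sorting Grassmann indices into increasing
    order (each adjacent swap costs -1); 0 if any index repeats."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return 0 if len(set(idx)) < len(idx) else sign

# The integral from the thread: int d^3theta theta_1 theta_3 theta_2
print(sort_sign((1, 3, 2)))  # -1, i.e. epsilon_132
# All of epsilon_{i1 i2 i3} for permutations of (1, 2, 3):
print([sort_sign(p) for p in permutations((1, 2, 3))])  # [1, -1, -1, 1, 1, -1]
```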
 
  • #5
fzero said:
The [tex]a_i[/tex] are odd variables. Just expand [tex] \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}[/tex], count the number of terms and apply permutations to write each one as [tex]a_{1} \cdots a_{n}[/tex].

Got it.

fzero said:
Yes. You should be able to derive

[itex]
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}
[/itex]

from the basic formula

[tex]\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).[/tex]
This I don't know how to do...

And also, when it comes to complex integration, we have been given [itex]\theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - i \theta_2}{\sqrt{2}}[/itex]

We are then told that [itex]d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i[/itex]

However, I find that [itex]d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}[/itex]
Obviously the [itex]d \theta_i^2[/itex] terms vanish and so I find
[itex]d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2[/itex] which is out by a minus sign?

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder, i.e. why is [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1[/itex]?

And finally, why do we have [itex]\int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)[/itex]?
Shouldn't we have additional higher order terms from the expansion of the exponential?
 
Last edited:
  • #6
latentcorpse said:
This I don't know how to do...

I actually have a mistake, it should be

[tex]
\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).
[/tex]

The proof of that formula just amounts to counting signs when you permute to express

[itex]
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.
[/itex]

And also, when it comes to complex integration, we have been given [itex]\theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - \theta_2}{\sqrt{2}}[/itex]

We are then told that [itex]d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i[/itex]

However, I find that [itex]d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}[/itex]
Obviously the [itex]d \theta_i^2[/itex] terms vanish and so I find
[itex]d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2[/itex] which is out by a minus sign?

The minus sign looks correct. That sign cancels out in any nonzero integral, such as

[tex]\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.[/tex]
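The coefficient algebra can at least be checked independently of any measure-ordering convention. A sketch in Python, expanding products in the [itex]\theta_1 \theta_2[/itex] component (function and variable names are mine; no Berezin-measure ordering is assumed):

```python
import cmath

def product_t1t2_coeff(a, b):
    """Coefficient of theta_1 theta_2 in (a1 t1 + a2 t2)(b1 t1 + b2 t2),
    using t1^2 = t2^2 = 0 and t2 t1 = -t1 t2."""
    a1, a2 = a
    b1, b2 = b
    return a1 * b2 - a2 * b1  # the minus sign is the anticommutation

r = 1 / cmath.sqrt(2)
theta = (r, 1j * r)       # theta     = (t1 + i t2)/sqrt(2)
thetabar = (r, -1j * r)   # bar-theta = (t1 - i t2)/sqrt(2)

print(product_t1t2_coeff(theta, thetabar))  # approx -1j: theta bar-theta = -i t1 t2
print(product_t1t2_coeff(thetabar, theta))  # approx +1j: bar-theta theta = +i t1 t2
```

The same expansion applies formally to [itex]d\theta \, d\bar{\theta}[/itex], which is why the two [itex]-i[/itex]'s combine to a convention-independent sign in the product.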

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder i.e. why is [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1[/itex]?

The point is that the [tex]b[/tex]-dependent terms in the integrand vanish, so that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.[/itex]

The [tex]1/b[/tex] is put in later to compare with the integral without [tex]\theta \bar{\theta}[/tex] inserted.

And finally, why do we have [itex]\int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)[/itex]?
Shouldn't we have additional higher order terms from the expansion of the exponential?

The higher order terms vanish as [tex]\theta^2=0[/tex].
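The truncation can be made explicit: the exponent is an even, nilpotent element, so its square already vanishes and [itex]e^{-\bar{\theta} b \theta} = 1 - \bar{\theta} b \theta[/itex] exactly. A sketch with a two-generator Grassmann algebra (the dict representation and helper name are illustrative):

```python
def gmul(p, q):
    """Multiply elements of the Grassmann algebra on generators 1 and 2,
    represented as dicts {sorted index tuple: coefficient}."""
    out = {}
    for ki, ci in p.items():
        for kj, cj in q.items():
            idx = list(ki) + list(kj)
            if len(set(idx)) < len(idx):
                continue  # a repeated generator: the term vanishes
            sign = 1
            for a in range(len(idx)):          # sort, tracking swap signs
                for b in range(len(idx) - 1):
                    if idx[b] > idx[b + 1]:
                        idx[b], idx[b + 1] = idx[b + 1], idx[b]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ci * cj
    return {k: v for k, v in out.items() if v != 0}

b = 3.0
x = {(1, 2): b}    # stands in for -bar(theta) b theta = b t1 t2
print(gmul(x, x))  # {} : x^2 = 0, so exp(x) = 1 + x exactly
```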
 
  • #7
fzero said:
I actually have a mistake, it should be

[tex]
\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).
[/tex]

The proof of that formula just amounts to counting signs when you permute to express
My notes and, by the looks of things, P&S define it the way you had it originally, i.e. that [itex]\int d \theta_i \theta_i =1[/itex] (since the [itex]d \theta_i[/itex] is Grassmann as well, we should get a minus sign if we go to the way you have it above). Are there different conventions at play here?

As for trying to prove the identity, I don't get it...
We know [itex]\int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1}[/itex] so surely, [itex]\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1[/itex]
Where does the permuting come into play?

fzero said:
[itex]
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.
[/itex]



The minus sign looks correct. That sign cancels out in any nonzero integral, such as

[tex]\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.[/tex]
Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have [itex]\int d \bar{\theta} d \theta \theta \bar{\theta}=1[/itex] which is different to what we get if we assume that minus sign is right?


fzero said:
The point is that the [tex]b[/tex]-dependent terms in the integrand vanish, so that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.[/itex]

The [tex]1/b[/tex] is put in later to compare with the integral without [tex]\theta \bar{\theta}[/tex] inserted.
Ok. But how do you actually do that integral, I find that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )[/itex]
The first term gives us a b and the second one seems to give [itex]b \theta \bar{\theta}[/itex]



And the last thing we do on Grassmann integrals is that [itex]I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B[/itex] by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

[itex]I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}[/itex]
and we can complete the square to show
[itex]I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}[/itex]

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried [itex]\tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j[/itex]

Thanks a lot again!
 
  • #8
latentcorpse said:
My notes and, by the looks of things, P&S define it the way you had it originally, i.e. that [itex]\int d \theta_i \theta_i =1[/itex] (since the [itex]d \theta_i[/itex] is Grassmann as well, we should get a minus sign if we go to the way you have it above). Are there different conventions at play here?

Every minus sign is convention-dependent. For instance, your [tex]d^n\theta = d\theta_n\cdots d\theta_1[/tex] is another convention.

As for trying to prove the identity, I don't get it...
We know [itex]\int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1}[/itex] so surely, [itex]\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1[/itex]
Where does the permuting come into play?

Because [tex]d^n\theta = d\theta_n\cdots d\theta_1[/tex] by your convention, so

[tex]\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n} [/tex]

depends on how we rearrange to put the appropriate [tex]\theta[/tex] with the matching [tex]d\theta[/tex].

Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have [itex]\int d \bar{\theta} d \theta \theta \bar{\theta}=1[/itex] which is different to what we get if we assume that minus sign is right?

No, the sign is common to [tex]d\theta d \bar{\theta}[/tex] and [tex]\theta \bar{\theta}[/tex] so it cancels in the product.

Ok. But how do you actually do that integral, I find that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )[/itex]
The first term gives us a b and the second one seems to give [itex]b \theta \bar{\theta}[/itex]

The 2nd term vanishes because it's proportional to [tex]\theta^2 \bar{\theta}^2[/tex] and
[tex]\theta^2 = \bar{\theta}^2=0[/tex] since they are odd variables.

And the last thing we do on Grassmann integrals is that [itex]I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B[/itex] by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

[itex]I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}[/itex]
and we can complete the square to show
[itex]I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}[/itex]

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried [itex]\tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j[/itex]

You want (in matrix form)

[tex] \theta = S \tilde{\theta} + B^{-1} \eta,[/tex]

where [tex]S[/tex] diagonalizes [tex]B[/tex], so [tex] S^{-1} B S = I[/tex].
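Spelling out the completion of the square (a sketch in matrix notation, assuming [itex]B[/itex] is Hermitian and invertible so that [itex](B^{-1})^\dagger = B^{-1}[/itex]; shift [itex]\theta \to \theta + B^{-1}\eta[/itex] and [itex]\bar\theta \to \bar\theta + \bar\eta B^{-1}[/itex], and note the cross terms cancel):

[tex]
\begin{aligned}
-(\bar\theta + \bar\eta B^{-1}) B (\theta + B^{-1}\eta)
&\;+ \bar\eta(\theta + B^{-1}\eta) + (\bar\theta + \bar\eta B^{-1})\eta \\
&= -\bar\theta B \theta - \bar\theta\eta - \bar\eta\theta - \bar\eta B^{-1}\eta
 + \bar\eta\theta + \bar\eta B^{-1}\eta + \bar\theta\eta + \bar\eta B^{-1}\eta \\
&= -\bar\theta B \theta + \bar\eta B^{-1} \eta .
\end{aligned}
[/tex]

Integrating the shifted Gaussian then gives [itex]\text{det } B[/itex], hence [itex]I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta} B^{-1} \eta}[/itex]. No factor reordering is needed here, so no extra anticommutation signs arise.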
 
  • #9
fzero said:
Every minus sign is convention-dependent. For instance, your [tex]d^n\theta = d\theta_n\cdots d\theta_1[/tex] is another convention.



Because [tex]d^n\theta = d\theta_n\cdots d\theta_1[/tex] by your convention, so

[tex]\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n} [/tex]

depends on how we rearrange to put the appropriate [tex]\theta[/tex] with the matching [tex]d\theta[/tex].
So if you were asked to show this in an exam, what would you do? As far as I can tell, we just get it to [itex]\int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N}[/itex] and then just describe in words that it's to do with making permutations? Is there any maths we can do?

fzero said:
No, the sign is common to [tex]d\theta d \bar{\theta}[/tex] and [tex]\theta \bar{\theta}[/tex] so it cancels in the product.
Yeah but there is some inconsistency here. We claim that if that -i was correct then
[itex]\int d \bar{\theta} d \theta \bar{\theta} \theta =1[/itex] but both my notes and P&S say [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} =1[/itex]


fzero said:
You want (in matrix form)

[tex] \theta = S \tilde{\theta} + B^{-1} \eta,[/tex]

where [tex]S[/tex] diagonalizes [tex]B[/tex], so [tex] S^{-1} B S = I[/tex].

I've been playing about with this but can't get it. How are we going to get an [itex]S^{-1}[/itex] out of anything? Also, how on Earth did you know to take that to complete the square?
 
  • #10
latentcorpse said:
So if you were asked to show this in an exam, what would you do? As far as I can tell, we just get it to [itex]\int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N}[/itex] and then just describe in words that it's to do with making permutations? Is there any maths we can do?

You could also show that [tex]\theta_{i_1} \cdots \theta_{i_N} = \epsilon_{i_1\cdots i_N} \theta_1\cdots \theta_N[/tex]. It's obvious by considering various cases, but maybe there's a more elegant proof.

Yeah but there is some inconsistency here. We claim that if that -i was correct then
[itex]\int d \bar{\theta} d \theta \bar{\theta} \theta =1[/itex] but both my notes and P&S say [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} =1[/itex]

I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from [tex]i^2[/tex].


I've been playing about with this but can't get it. How are we going to get an [itex]S^{-1}[/itex] out of anything? Also, how on Earth did you know to take that to complete the square?

[tex]S[/tex] is Hermitian and when you consider the [tex]\bar{\theta}[/tex] expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose [tex]\bar{\theta}[/tex], so [tex]S^\dagger=S^{-1}[/tex] appears in the correct places. The [tex]\eta[/tex] term comes from comparing the cross-terms.
 
  • #11
fzero said:
I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from [tex]i^2[/tex].
Ok. So we agreed that [itex]d \theta d \bar{\theta} = -i d \theta_1 d \theta_2[/itex]

And if you work out [itex]\theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2[/itex]
[itex]\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1[/itex]

Therefore [itex]\int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1[/itex]

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in [itex]d \theta d \bar{\theta} = -i d \theta_1 d \theta_2[/itex]?



fzero said:
[tex]S[/tex] is Hermitian and when you consider the [tex]\bar{\theta}[/tex] expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose [tex]\bar{\theta}[/tex], so [tex]S^\dagger=S^{-1}[/tex] appears in the correct places. The [tex]\eta[/tex] term comes from comparing the cross-terms.

Ok. So the exponent looks like [itex]- \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta[/itex]
Under the transformation [itex]\theta \rightarrow S \tilde{\theta} + B^{-1} \eta[/itex] we get
[itex]- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta[/itex]
Is this going in the right direction?

Cheers.
 
  • #12
latentcorpse said:
Ok. So we agreed that [itex]d \theta d \bar{\theta} = -i d \theta_1 d \theta_2[/itex]

And if you work out [itex]\theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2[/itex]
[itex]\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1[/itex]

Therefore [itex]\int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1[/itex]

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in [itex]d \theta d \bar{\theta} = -i d \theta_1 d \theta_2[/itex]?

Probably. I couldn't say for certain, since there could be some convention that's different.


Ok. So the exponent looks like [itex]- \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta[/itex]
Under the transformation [itex]\theta \rightarrow S \tilde{\theta} + B^{-1} \eta[/itex] we get
[itex]- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta[/itex]
Is this going in the right direction?

Note: I accidentally called [tex]S[/tex] Hermitian when I meant unitary. I gave the correct property as an equation, but wanted to correct that.

If you're going to use the matrix notation (and I think it's easier to do so), you have to keep track of transposes: [tex]\bar{\theta}[/tex] is the Hermitian conjugate in this notation, not just the complex conjugate, so

[tex]\bar{\theta} = \bar{\tilde{\theta}} S^\dagger + \bar{\eta} B^{-1}.[/tex]

I derived this by canceling cross-terms, but it could also be computed directly by using the Hermiticity of [tex]B[/tex].
 
  • #13
fzero said:
Probably. I couldn't say for certain, since there could be some convention that's different.

I actually disagree again!

I find [itex]d \theta d \bar{\theta}= - i d \theta_1 d \theta_2[/itex]

and [itex]\bar{\theta} \theta = -i \theta_2 \theta_1[/itex]

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!
And also, could you confirm whether [itex]\gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \}[/itex] where [itex]\gamma^\mu[/itex] are the matrices of the Clifford algebra?

Thanks.
 
Last edited:
  • #14
latentcorpse said:
I actually disagree again!

I find [itex]d \theta d \bar{\theta}= - i d \theta_1 d \theta_2[/itex]

and [itex]\bar{\theta} \theta = -i \theta_2 \theta_1[/itex]

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!

[tex] \theta \bar{\theta}[/tex] and [tex]d \theta d \bar{\theta}[/tex] have the same structure so your previous calculation was correct.

And also, could you confirm whether [itex]\gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \}[/itex] where [itex]\gamma^\mu[/itex] are the matrices of the Clifford algebra?

Thanks.

That would imply that [tex] [\gamma^\mu, \gamma^\nu]=0[/tex], so no.
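A two-dimensional counterexample makes this concrete: the Pauli matrices satisfy the Euclidean Clifford relation [itex]\{\sigma_i, \sigma_j\} = 2\delta_{ij} I[/itex], yet [itex]\sigma_1 \sigma_2 = i\sigma_3 \neq 0 = \frac{1}{2}\{\sigma_1, \sigma_2\}[/itex]. A quick check in plain Python (the helper functions are ad hoc):

```python
def mmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

s1 = [[0, 1], [1, 0]]        # sigma_1
s2 = [[0, -1j], [1j, 0]]     # sigma_2

anti = madd(mmul(s1, s2), mmul(s2, s1))  # {sigma_1, sigma_2}, vanishes for i != j
print(anti)           # zero matrix
print(mmul(s1, s2))   # i sigma_3, which is certainly not (1/2){s1, s2} = 0
```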
 
  • #15
fzero said:
[tex] \theta \bar{\theta}[/tex] and [tex]d \theta d \bar{\theta}[/tex] have the same structure so your previous calculation was correct.

Explicitly:

[itex]d \theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2[/itex]

and [itex]\bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2[/itex]

so [itex]\int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times -1 = i^2 =-1[/itex]

I simply cannot spot the mistake!

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

And take for example a simple loop integral, in Euclidean space Feynman rules this gives
[itex]\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}[/itex]
Where does that proportionality come from?

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?
And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this [itex]\phi^4[/itex] theory. Why on Earth is that?
 
  • #16
latentcorpse said:
Explicitly:

[itex]d \theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2[/itex]

and [itex]\bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2[/itex]

so [itex]\int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times -1 = i^2 =-1[/itex]

I simply cannot spot the mistake!

Your calculation is correct. P&S claim that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} =1.[/itex]

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

It makes path integrals more convergent, since the weight is now [tex]e^{-S/\hbar}[/tex]. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.

And take for example a simple loop integral, in Euclidean space Feynman rules this gives
[itex]\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}[/itex]
Where does that proportionality come from?

The RHS is the result of choosing spherical coordinates in momentum space.

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?

We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

[tex] \int d^4k \frac{1}{k^4}.[/tex]

And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

Yes.

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this [itex]\phi^4[/itex] theory. Why on Earth is that?

You can use the [tex]\phi\rightarrow -\phi [/tex] symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
 
  • #17
fzero said:
Your calculation is correct. P&S claim that

[itex]\int d \bar{\theta} d \theta \theta \bar{\theta} =1.[/itex]
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
[itex]\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)[/itex]
[itex]d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2[/itex]
And [itex]\theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1[/itex]
So [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1[/itex]
fzero said:
It makes path integrals more convergent, since the weight is now [tex]e^{-S/\hbar}[/tex]. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.
So previously our path integrals had weight [itex]e^{iS/\hbar}[/itex], right? How does the weight change to what you said? And why does that new weight make it more convergent?

fzero said:
The RHS is the result of choosing spherical coordinates in momentum space.
Well shouldn't there also be a [itex]\int k^{d-1}[/itex] piece? Why can that be dropped from the proportionality?

fzero said:
We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

[tex] \int d^4k \frac{1}{k^4}.[/tex]
How do you know those are the simplest diagrams? Experience?
And how does that integral give a log? I tried substituting [itex]u=k^4[/itex] but that doesn't help?
fzero said:
You can use the [tex]\phi\rightarrow -\phi [/tex] symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship [itex]\Gamma(n+1)=n![/itex] for [itex]n \in \mathbb{Z}^+[/itex] which is fair enough. Then he goes through the procedure of using [itex]\Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1)[/itex] to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for [itex]\alpha[/itex] near 0, we have [itex] \Gamma(\alpha) \sim \frac{1}{\alpha}[/itex] and since [itex]\Gamma(n+1)=n![/itex], wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

Thanks again!
 
Last edited:
  • #18
latentcorpse said:
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
[itex]\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)[/itex]
[itex]d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2[/itex]
And [itex]\theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1[/itex]
So [itex]\int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1[/itex]

It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on [tex]i[/tex], this introduces an extra minus sign, which is what we need to get P&S's result.

So previously our path integrals had weight [itex]e^{iS/\hbar}[/itex], right? How does the weight change to what you said? And why does that new weight make it more convergent?

Just consider a quadratic action. One integral is oscillatory and the other is a Gaussian.

Well shouldn't there also be a [itex]\int k^{d-1}[/itex] piece? Why can that be dropped from the proportionality?

The [tex]k^{d-1}[/tex] factor is there.

How do you know those are the simplest diagrams? Experience?

You can just try to draw diagrams that are one-particle irreducible.

And how does that integral give a log? I tried substituting [itex]u=k^4[/itex] but that doesn't help?

[tex]\int \frac{d^4k}{k^4} \sim \int \frac{dk}{k}[/tex]
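The logarithm can also be seen numerically: with the angular factor stripped off, the radial integral [itex]\int_1^\Lambda dk/k[/itex] grows like [itex]\log \Lambda[/itex]. A rough check in Python (the IR cutoff at [itex]k=1[/itex], the UV cutoffs, and the step count are arbitrary choices for illustration):

```python
import math

def radial_integral(cutoff, steps=200000):
    """Midpoint rule for int_1^cutoff dk/k, i.e. the radial part of
    int d^4k / k^4 after the k^3 measure factor cancels three powers."""
    h = (cutoff - 1.0) / steps
    return sum(h / (1.0 + (n + 0.5) * h) for n in range(steps))

for cutoff in (10.0, 100.0, 1000.0):
    # the two columns agree: the integral grows like log(cutoff)
    print(radial_integral(cutoff), math.log(cutoff))
```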

I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

You can show for example that

[tex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/tex]

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship [itex]\Gamma(n+1)=n![/itex] for [itex]n \in \mathbb{Z}^+[/itex] which is fair enough. Then he goes through the procedure of using [itex]\Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1)[/itex] to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for [itex]\alpha[/itex] near 0, we have [itex] \Gamma(\alpha) \sim \frac{1}{\alpha}[/itex] and since [itex]\Gamma(n+1)=n![/itex], wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

The Gamma function has simple poles at [tex]\alpha = 0,-1,-2,\ldots[/tex]. However [tex]0!=\Gamma(1) = 1[/tex].
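Both statements can be checked numerically with Python's standard library (a sketch; `math.gamma` raises `ValueError` at the poles):

```python
import math

# Gamma extends the factorial: Gamma(n+1) = n!, so 0! = Gamma(1) = 1.
print(math.gamma(1))      # 1.0
print(math.gamma(5))      # 24.0  (= 4!)

# Negative non-integer arguments are fine: (-1.5)! = Gamma(-0.5) = -2 sqrt(pi).
print(math.gamma(-0.5))   # -3.5449077018...

# The poles sit at 0, -1, -2, ...: math.gamma raises there.
try:
    math.gamma(0)
except ValueError as e:
    print("pole at 0:", e)

# Near the pole at 0, Gamma(x) ~ 1/x, consistent with Gamma(x) = Gamma(x+1)/x.
for eps in (1e-3, 1e-6):
    print(eps, math.gamma(eps) * eps)   # -> 1 as eps -> 0
```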
 
  • #19
fzero said:
It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on [tex]i[/tex], this introduces an extra minus sign, which is what we need to get P&S's result.
I don't follow what you're on about with reciprocal powers etc?
fzero said:
The [tex]k^{d-1}[/tex] factor is there.
Why is it treated as a constant though? Is it because if we break up [itex]\int d^dk = \int k^{d-1} dk \int d^{d-1}k[/itex], the [itex]\int d^{d-1}k[/itex] measure has no [itex]k[/itex] dependence?

fzero said:
You can just try to draw diagrams that are one-particle irreducible.
Why one particle irreducible? Do we always just work with these?
As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.
When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!
fzero said:
You can show for example that

[tex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/tex]

How though? I am not sure how to expand [itex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/itex] in this case?

fzero said:
The Gamma function has simple poles at [tex]\alpha = 0,-1,-2,\ldots[/tex]. However [tex]0!=\Gamma(1) = 1[/tex].
Yes because [itex]\Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha}[/itex] and so [itex]\Gamma(0)=\Gamma(1) / 1 = 1/1 = 1[/itex] which agrees with [itex]0!=1[/itex]
Although, is it true to say that there is still a simple pole associated with [itex]\Gamma(0)[/itex] since if we take [itex]\alpha[/itex] near 0 then [itex]\Gamma(\alpha) \sim \frac{1}{\alpha}[/itex] since [itex]\Gamma(1)=1[/itex], right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, [itex](-2)![/itex]?

Thanks.
 
Last edited:
  • #20
latentcorpse said:
I don't follow what you're on about with reciprocal powers etc?

You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf. What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.
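For reference, the rule Srednicki derives there is that a linear change of Grassmann variables carries the inverse of the Jacobian determinant, opposite to the bosonic case (ordering and sign conventions vary between texts):

[tex]\theta'_i = J_{ij}\,\theta_j \quad\Rightarrow\quad d\theta'_1 \cdots d\theta'_n = (\det J)^{-1}\, d\theta_1 \cdots d\theta_n.[/tex]

This follows from demanding that [itex]\int d\theta'_1 \cdots d\theta'_n\, \theta'_n \cdots \theta'_1 = 1[/itex] in both sets of variables.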

Why is it treated as a constant though? Is it because if we break up [itex]\int d^dk = \int k^{d-1} dk \int d^{d-1}k[/itex], the [itex]\int d^{d-1}k[/itex] measure has no [itex]k[/itex] dependence?

No, the k-dependence is there. Specifically

[itex]
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2}= \int \frac{d\Omega_{d-1}}{(2 \pi)^d} \int \frac{dk~ k^{d-1}}{k^2+m^2}= \frac{V_{d-1}}{ (2 \pi)^d} \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2},
[/itex]

where [tex]V_{d-1}[/tex] is the volume of the unit [tex](d-1)[/tex]-sphere. The proportionality constant is just the ratio appearing in front of the last expression.
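The sphere volume has the closed form [itex]V_{d-1} = \frac{2\pi^{d/2}}{\Gamma(d/2)}[/itex], easy to sanity-check in Python (`sphere_area` is my own helper name):

```python
import math

def sphere_area(d):
    """Surface 'volume' of the unit (d-1)-sphere embedded in R^d:
    V_{d-1} = 2 pi^(d/2) / Gamma(d/2)."""
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

print(sphere_area(2))  # 2*pi    (circle circumference)
print(sphere_area(3))  # 4*pi    (ordinary sphere)
print(sphere_area(4))  # 2*pi^2, the V_3 relevant to d = 4 loop integrals
```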

Why one particle irreducible? Do we always just work with these?

The diagrams corresponding to the expansion of what's usually called the generating function (usually written as [tex]W[J][/tex]) are connected and include propagators on the external legs. What's usually called the effective action ([tex]\Gamma[\phi][/tex]) involves the one-particle irreducible diagrams with no propagators on the external legs.
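In the usual conventions the two generating functionals are related by a Legendre transform:

[tex]\Gamma[\phi] = W[J] - \int d^dx\, J(x)\,\phi(x), \qquad \phi(x) = \frac{\delta W[J]}{\delta J(x)}.[/tex]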

As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.

That sounds like you're using cubic instead of quartic vertices.

When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!

The diagram I'm talking about is

[attached image: fish.jpg — the one-loop diagram with two quartic vertices and two internal lines]


How though? I am not sure how to expand [itex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/itex] in this case?

All we've done is applied the [tex]\phi\rightarrow -\phi[/tex] symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.
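The same parity argument can be seen in a zero-dimensional toy model: for a weight [itex]e^{-S(\phi)}[/itex] with an even action, all odd moments vanish. A quick Metropolis sketch (the toy action and sampler are mine, purely illustrative):

```python
import math
import random

# Zero-dimensional "path integral": weight exp(-S) with S even in phi,
# sampled with a Metropolis random walk. Odd moments vanish by phi -> -phi.
random.seed(0)

def S(phi):
    return 0.5 * phi ** 2 + 0.25 * phi ** 4   # even action

phi, samples = 0.0, []
for _ in range(200_000):
    prop = phi + random.uniform(-1.0, 1.0)
    if random.random() < math.exp(min(0.0, S(phi) - S(prop))):
        phi = prop
    samples.append(phi)

n = len(samples)
print(sum(samples) / n)                  # <phi>   ~ 0
print(sum(s * s for s in samples) / n)   # <phi^2> clearly nonzero
print(sum(s ** 3 for s in samples) / n)  # <phi^3> ~ 0
```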

Yes because [itex]\Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha}[/itex] and so [itex]\Gamma(0)=\Gamma(1) / 1 = 1/1 = 1[/itex] which agrees with [itex]0!=1[/itex]
Although, is it true to say that there is still a simple pole associated with [itex]\Gamma(0)[/itex] since if we take [itex]\alpha[/itex] near 0 then [itex]\Gamma(\alpha) \tilde \frac{1}{\alpha}[/itex] since [itex]\Gamma(1)=1[/itex], right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, [itex](-2)![/itex]?

You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
 

Attachments

  • fish.jpg
  • #21
fzero said:
You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf. What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.
So what are our [itex]J_{i_1j_1}[/itex] in this case?
fzero said:
The diagrams corresponding to the expansion of what's usually called the generating function (usually written as [tex]W[J][/tex]) are connected and include propagators on the external legs. What's usually called the effective action ([tex]\Gamma[\phi][/tex]) involves the one-particle irreducible diagrams with no propagators on the external legs.
Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

fzero said:
The diagram I'm talking about is

[attached image: fish.jpg — the one-loop diagram with two quartic vertices and two internal lines]
Ok so given this graph, if we put k as the loop momenta then the Euclidean space Feynman rules tell us that we get
[itex]\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}[/itex]
If we regularise by imposing a high momentum UV cutoff [itex]k \leq \Lambda[/itex]
then we obtain the result [itex]\sim \log{\Lambda}[/itex] as on p324 P&S.
Is this correct?
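The claimed [itex]\log \Lambda[/itex] growth is easy to verify numerically for the radial integral [itex]\int_0^\Lambda \frac{k^3\,dk}{(k^2+m^2)^2}[/itex] (angular factors stripped off). A sketch with a midpoint Riemann sum (`loop` is my own helper):

```python
import math

# Check that int_0^Lam dk k^3/(k^2+m^2)^2 grows like log(Lam).
# Exact antiderivative: F(k) = (1/2)[log(k^2 + m^2) + m^2/(k^2 + m^2)].
def loop(lam, m=1.0, n=200_000):
    h = lam / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h          # midpoint rule
        total += h * k ** 3 / (k * k + m * m) ** 2
    return total

for lam in (10.0, 100.0, 1000.0):
    exact = 0.5 * (math.log(lam ** 2 + 1) + 1 / (lam ** 2 + 1) - 1)  # m = 1
    print(lam, loop(lam), exact, math.log(lam))
```

The numeric sum tracks the exact antiderivative, and both approach [itex]\log \Lambda[/itex] up to a constant as [itex]\Lambda[/itex] grows.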

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like ~ log (k) next to it but doesn't actually say WHAT IS ~log(k). It's not a green's function or anything like that. What actually is it?

fzero said:
All we've done is applied the [tex]\phi\rightarrow -\phi[/tex] symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.
Do you just mean that you took [itex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/itex] and made the transformation [itex]\phi \rightarrow - \phi[/itex] which gives 3 minus signs (one for each [itex]\phi[/itex]) and since [itex](-1)^3=-1[/itex] we get an overall minus sign that we can pull out showing that the 3 point green's function is equal to minus itself and therefore must vanish?

fzero said:
You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
Ok. So you're telling me that [itex]\Gamma[/itex] is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that [itex]\Gamma(0)=1[/itex] to be consistent with [itex]0!=1[/itex]

So if you were asked to find [itex](-1.5)![/itex], could you just compute [itex]\Gamma(-0.5)[/itex] using its integral definition as [itex]\Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}dx[/itex]?

Lastly, my notes discuss three methods of regularisation: (i) UV cut-off, which I think I understand (assuming you agree with my calculation of [itex]\log(\Lambda)[/itex] above).
(ii) Using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing).
(iii) Dimensional Regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by [itex]D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4[/itex] which depends on the dimension d.
Usually this is just defined for [itex]d \in \mathbb{Z}^+[/itex] but we can regulate by analytic continuation to [itex]d \in \mathbb{C}[/itex].
He has included an example afterwards and I can follow the maths in it but I don't really understand what we are doing. How do we analytically continue to complex d and what's the point of doing so?

Thanks very much!
 
  • #22
latentcorpse said:
So what are our [itex]J_{i_1j_1}[/itex] in this case?

That's easy to figure out by inverting the formula for the complex variables. Srednicki does the calculation a bit later in that section.

Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

The LSZ formula tells us that S-matrix elements are related to time-ordered correlation functions. We can build all of the correlation functions from the 1PIs as ingredients, I guess.

Ok so given this graph, if we put k as the loop momenta then the Euclidean space Feynman rules tell us that we get
[itex]\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}[/itex]
If we regularise by imposing a high momentum UV cutoff [itex]k \leq \Lambda[/itex]
then we obtain the result [itex]\sim \log{\Lambda}[/itex] as on p324 P&S.
Is this correct?

Basically yes.

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like ~ log (k) next to it but doesn't actually say WHAT IS ~log(k). It's not a green's function or anything like that. What actually is it?

Each diagram is a specific way of Wick contracting the operators in a momentum space correlation function.

Do you just mean that you took [itex]\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle[/itex] and made the transformation [itex]\phi \rightarrow - \phi[/itex] which gives 3 minus signs (one for each [itex]\phi[/itex]) and since [itex](-1)^3=-1[/itex] we get an overall minus sign that we can pull out showing that the 3 point green's function is equal to minus itself and therefore must vanish?

Yes.

Ok. So you're telling me that [itex]\Gamma[/itex] is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that [itex]\Gamma(0)=1[/itex] to be consistent with [itex]0!=1[/itex]

No, [tex]0!=\Gamma(1) [/tex]

So if you were asked to find [itex](-1.5)![/itex], could you just compute [itex]\Gamma(-0.5)[/itex] using its integral definition as [itex]\Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}dx[/itex]?

There are identities for [tex]\Gamma(z/2)[/tex] and [tex]\Gamma(1-z)[/tex] that are usually easier to work with than the integral.

Lastly, my notes discuss three methods of regularisation: (i) UV cut-off, which I think I understand (assuming you agree with my calculation of [itex]\log(\Lambda)[/itex] above).
(ii) Using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing).
(iii) Dimensional Regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by [itex]D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4[/itex] which depends on the dimension d.
Usually this is just defined for [itex]d \in \mathbb{Z}^+[/itex] but we can regulate by analytic continuation to [itex]d \in \mathbb{C}[/itex].
He has included an example afterwards and I can follow the maths in it but I don't really understand what we are doing. How do we analytically continue to complex d and what's the point of doing so?

You'd be better off trying to read the details in a decent text. You are generally expressing loop integrals in terms of Gamma functions which define the analytic continuation. Since the Gamma functions have a known pole structure, one can determine the divergent and finite parts of each integral.
 
  • #23
fzero said:
Basically yes.
I'm getting an infinity though when I write it out in full:

[itex]\int \frac{d^4k}{k^4} = \int \frac{dk}{k} \Big|_0^\Lambda = \log{\Lambda} - \log{0}[/itex] but [itex]\log{0}=-\infty[/itex] so I end up with a [itex]+\infty[/itex] term?
fzero said:
No, [tex]0!=\Gamma(1) [/tex]
Ok. So we're saying that [itex]\Gamma(0)[/itex] is undefined and we can never know its value because of the pole there? That makes sense I guess...

fzero said:
There are identities for [tex]\Gamma(z/2)[/tex] and [tex]\Gamma(1-z)[/tex] that are usually easier to work with than the integral.
So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible providing it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers (-1+i)! Does that exist?

And so, given these two methods of regularisation :(i) momentum space cut-off and (ii) dimensional regularisation, how do we know which one to use? Will questions generally state which one they want you to use or is it better to use a particular one?

Thanks.
 
Last edited:
  • #24
latentcorpse said:
I'm getting an infinity though when I write it out in full:

[itex]\int \frac{d^4k}{k^4} = \int \frac{dk}{k} \Big|_0^\Lambda = \log{\Lambda} - \log{0}[/itex] but [itex]\log{0}=-\infty[/itex] so I end up with a [itex]+\infty[/itex] term?

You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.
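This is also easy to see numerically: with the mass retained, [itex]\int_0^\Lambda \frac{k^3\,dk}{(k^2+m^2)^2}[/itex] is finite at [itex]k=0[/itex], and the would-be IR divergence resurfaces as [itex]\log(1/m)[/itex] when [itex]m \to 0[/itex]. A sketch using the exact antiderivative (`loop` is my own helper):

```python
import math

# With the mass kept, int_0^Lam dk k^3/(k^2+m^2)^2 is finite at small k;
# sending m -> 0 at fixed Lam reveals the IR divergence as log(1/m).
def loop(lam, m):
    # exact antiderivative (1/2)[log(k^2+m^2) + m^2/(k^2+m^2)], evaluated 0..lam
    def F(k):
        return 0.5 * (math.log(k * k + m * m) + m * m / (k * k + m * m))
    return F(lam) - F(0.0)

for m in (1.0, 0.1, 0.01, 0.001):
    print(m, loop(10.0, m), math.log(1.0 / m))   # grows like log(1/m)
```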

Ok. So we're saying that [itex]\Gamma(0)[/itex] is undefined and we can never know its value because of the pole there? That makes sense I guess...

I should have been more careful. We can isolate the divergent and finite parts of the Gamma function at the poles. This is what's used in dimensional regularization. My comment above was that [tex]0![/tex] does not correspond to a pole of the Gamma function.

So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible providing it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers (-1+i)! Does that exist?

Both of those exist.
 
  • #25
fzero said:
You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.
So going back to
[itex]\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}[/itex]
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be [itex]\log{\Lambda}[/itex] but [itex]\Lambda[/itex] is bounded above meaning that it can't diverge. This is the point of regularisation right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem as we can still just take [itex]\Lambda \rightarrow \infty[/itex] and the integral will diverge again, won't it?

fzero said:
Both of those exist.
So how would we calculate (1-i)!, say?

Can you try and explain to me what renormalisation is all about as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the mass and the physical mass (which appears as the pole in the full two point function) being different?

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes [itex]\hat{F}^{(0)}_2(p,-p)=p^2+m^2[/itex]
I don't see why though? Surely the Feynman rules tell us it should have a [itex]\frac{1}{p^2+m^2}[/itex], no?

Cheers.
 
Last edited:
  • #26
latentcorpse said:
So going back to
[itex]\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}[/itex]
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be [itex]\log{\Lambda}[/itex] but [itex]\Lambda[/itex] is bounded above meaning that it can't diverge. This is the point of regularisation right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem as we can still just take [itex]\Lambda \rightarrow \infty[/itex] and the integral will diverge again, won't it?

This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.

So how would we calculate (1-i)!, say?

You probably have to do an integral somewhere, since I'm not remembering an expression for [tex]\Gamma(\pm i)[/tex].

Can you try and explain to me what renormalisation is all about as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the mass and the physical mass (which appears as the pole in the full two point function) being different?

I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes [itex]\hat{F}^{(0)}_2(p,-p)=p^2+m^2[/itex]
I don't see why though? Surely the Feynman rules tell us it should have a [itex]\frac{1}{p^2+m^2}[/itex], no?

The definition of the amputated correlation function is

[tex]G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).[/tex]

Since

[tex] G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)[/tex]

we find that

[tex]G_{\text{amp}}(p,-p) \sim p^2+m^2.[/tex]
 
  • #27
latentcorpse said:
If [itex]\{ \theta_i \}[/itex] are a set of Grassmann numbers then what is

[itex]\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)[/itex]

The [itex] \frac{\partial}{\partial \theta_i} [/itex] is undefined on a Grassmann algebra. You either have a left derivative, or a right derivative operator depending on your wish. The 2 operators in general are not the same and in particular this difference is felt once you have a product of anticommuting objects.

Read more on this in Henneaux and Teitelboim's book on the quantization of gauge theories.
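Concretely, with the usual conventions the two derivatives act as (example mine):

[tex]\frac{\partial^L}{\partial\theta_1}(\theta_1\theta_2) = \theta_2, \qquad (\theta_1\theta_2)\frac{\overleftarrow{\partial}}{\partial\theta_1} = -\theta_2,[/tex]

since for the right derivative one first anticommutes [itex]\theta_1[/itex] to the right, [itex]\theta_1\theta_2 = -\theta_2\theta_1[/itex], before stripping it off.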
 
  • #28
fzero said:
This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.
I see how he gets the first line of (10.20) from Euclidean space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to [itex](-i \lambda)^2 \cdot i V(p^2)[/itex] where [itex]V(p^2)[/itex] is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the Feynman parameter he's on about but our lecture notes haven't mentioned these...

fzero said:
You probably have to do an integral somewhere, since I'm not remembering an expression for [tex]\Gamma(\pm i)[/tex].
So we have [itex](-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx[/itex] and we'd have to evaluate this?

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

fzero said:
I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.
Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
[itex]\mathcal{L}+\mathcal{L}_{ct}[/itex] where the counter term lagrangian is given by
[itex]\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)[/itex]
and the counter term potential is given by [itex]E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4[/itex]
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

fzero said:
The definition of the amputated correlation function is

[tex]G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).[/tex]

Since

[tex] G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)[/tex]

we find that

[tex]G_{\text{amp}}(p,-p) \sim p^2+m^2.[/tex]

So we have that (I think this is the same as what you have)
[itex]F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}( \displaystyle\sum_{i=1}^n p_i ) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)[/itex]
where [itex]\hat{F}_n(p_1 , \dots , p_n)[/itex] corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get [itex]\left( \frac{-i}{p^2+m^2} \right)^2 \hat{F}[/itex] on the RHS. Why is the full correlation function equal to [itex]\frac{1}{p^2+m^2}[/itex]? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out [itex]\hat{F}[/itex]?

Thanks again.
 
Last edited by a moderator:
  • #29
latentcorpse said:
I see how he gets the first line of (10.20) from Euclidean space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to [itex](-i \lambda)^2 \cdot i V(p^2)[/itex] where [itex]V(p^2)[/itex] is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the Feynman parameter he's on about but our lecture notes haven't mentioned these...

(10.20) serves as the definition of [tex]V(p^2)[/tex]. The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.

So we have [itex](-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx[/itex] and we'd have to evaluate this?

Or use some other representation of the Gamma function. Wolframalpha seems to use the series representations: http://www.wolframalpha.com/input/?i=gamma(i)
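The integral representation is in fact perfectly tractable numerically once the oscillatory endpoint at [itex]x=0[/itex] is tamed. A sketch (the substitution [itex]x = e^t[/itex] and the trapezoid sum are my own choices): compute [itex]\Gamma(1+i) = \int_0^\infty x^i e^{-x}dx[/itex], whose integrand has unit modulus envelope [itex]e^{-x}[/itex], then divide by [itex]i[/itex] to get [itex]\Gamma(i)[/itex].

```python
import cmath

# Gamma(1+i) = int_0^inf x^i e^{-x} dx, then Gamma(i) = Gamma(1+i)/i.
# Substituting x = e^t gives the smooth, rapidly decaying integrand
#   e^{(1+i)t} exp(-e^t)  on the whole real t-line.
def gamma_one_plus_i(t0=-30.0, t1=5.0, n=200_000):
    h = (t1 - t0) / n
    total = 0j
    for k in range(n + 1):
        t = t0 + k * h
        w = 0.5 if k in (0, n) else 1.0          # trapezoid weights
        total += w * cmath.exp((1 + 1j) * t - cmath.exp(t))
    return h * total

g = gamma_one_plus_i()
print(g)        # ~ 0.4980 - 0.1549j
print(g / 1j)   # Gamma(i) ~ -0.1549 - 0.4980j
```

As a consistency check, [itex]|\Gamma(i)|^2 = \pi/\sinh\pi \approx 0.2720[/itex], which the result above satisfies.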

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.

Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
[itex]\mathcal{L}+\mathcal{L}_{ct}[/itex] where the counter term lagrangian is given by
[itex]\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)[/itex]
and the counter term potential is given by [itex]E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4[/itex]
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the [tex]\hat{\tau}[/tex]s at the top of that page, while the counterterms contribute by adding the [tex]\hat{\tau}_{c.t.}[/tex]s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.
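Schematically, with [itex]d = 4-\epsilon[/itex] and numerical coefficients suppressed (this is a sketch of the logic, not the notes' exact expressions), the one-loop amputated two-point function plus its counterterm contribution looks like

[tex]\hat{\tau}^{(1)}_2(p,-p) + \hat{\tau}^{(1)}_2(p,-p)_{\text{c.t.}} \sim \left(\frac{a\lambda}{\epsilon} + \text{finite}\right) - A p^2 - B,[/tex]

where [itex]a[/itex] is some numerical coefficient. One then chooses A and B so that the [itex]1/\epsilon[/itex] pole (and, depending on the scheme, some of the finite part) cancels; minimal subtraction keeps only the pole.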

So we have that (I think this is the same as what you have)
[itex]F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}( \displaystyle\sum_{i=1}^n p_i ) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)[/itex]
where [itex]\hat{F}_n(p_1 , \dots , p_n)[/itex] corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get [itex]\left( \frac{-i}{p^2+m^2} \right)^2 \hat{F}[/itex] on the RHS. Why is the full correlation function equal to [itex]\frac{1}{p^2+m^2}[/itex]? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out [itex]\hat{F}[/itex]?

There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
 
Last edited by a moderator:
  • #30
fzero said:
(10.20) serves as the definition of [tex]V(p^2)[/tex]. The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.

I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an [itex]\Omega_4[/itex]?
Surely [itex]\int d^4k = \int_{S^3} d \Omega_3 \int k^3 dk[/itex], no?

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom i.e. 32 since we had a (2 pi)^4 and also a 1/2 from (2.7)? He also appears to have only pi^2 rather than pi^4 on the bottom?

fzero said:
The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.
So, given an arbitrary loop integral to compute, is it best to start off by trying the momentum space cut-off (since it is easier) and if that fails then try dim reg? Or should we just always try dim reg straight away since we know that it works?

fzero said:
The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the [tex]\hat{\tau}[/tex]s at the top of that page, while the counterterms contribute by adding the [tex]\hat{\tau}_{c.t.}[/tex]s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.
Yep, so I worked out those divergent parts already using the dimensional regularisation prescription, I think. And I understand (or at least am prepared to accept) that we can add new terms to the lagrangian in order to cancel off these divergences. However, I do not understand why the counter terms in the lagrangian give the contributions that they do to the amputated n point function
i.e. why is [itex]\hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B[/itex]?
And why do they then give those diagrams drawn underneath?
fzero said:
There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
Ok, so given my convention, we get the RHS equal to [itex]\frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p)[/itex] and the LHS equal to [itex]F(p,-p)[/itex]. So what do we substitute for [itex]F(p,-p)[/itex] and why?
 
Last edited:
  • #31
latentcorpse said:
I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an [itex]\Omega_4[/itex]?
Surely [itex]\int d^4k = \int_{S^3} d \Omega_3 \int k^3 dk[/itex], no?

Yes, he's wrong.

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

The integral can be done by substitution. I get a slightly different answer than he does.

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom i.e. 32 since we had a (2 pi)^4 and also a 1/2 from (2.7)? He also appears to have only pi^2 rather than pi^4 on the bottom?

I think he has a factor of 2 wrong in the integral above, so it's pointless to try to follow his algebra.

So, given an arbitrary loop integral to compute, is it best to start off by trying the momentum space cut-off (since it is easier) and if that fails then try dim reg? Or should we just always try dim reg straight away since we know that it works?

It's probably best to use DR since it's probably not obvious from just one or two diagrams whether the primitive cutoff is going to break something important.

Yep, so I worked out those divergent parts already using the dimensional regularisation prescription, I think. And I understand (or at least am prepared to accept) that we can add new terms to the Lagrangian in order to cancel off these divergences. However, I do not understand why the counterterms in the Lagrangian give the contributions that they do to the amputated n-point function
i.e. why is [itex]\hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B[/itex]?
And why do they then give those diagrams drawn underneath?

The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

[tex]A(\partial \phi)^2 \rightarrow - A p^2[/tex]

[tex]- B \phi^2 \rightarrow - B,[/tex]

etc. On the RHS we've gotten rid of the factors of [tex]\hat{\phi}(p)[/tex] as usual when writing Feynman rules.
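
The sign bookkeeping here is just the Fourier transform of the derivatives. Assuming the convention [itex]\phi(x) = \int \frac{d^dp}{(2\pi)^d} e^{ip\cdot x} \hat{\phi}(p)[/itex], the two fields in a quadratic term carry momenta [itex]p[/itex] and [itex]-p[/itex], so

[tex]\partial_\mu \phi \, \partial^\mu \phi \rightarrow (ip_\mu)(-ip^\mu) \, \hat{\phi}(p)\hat{\phi}(-p) = p^2 \, \hat{\phi}(p)\hat{\phi}(-p),[/tex]

and stripping off the [itex]\hat{\phi}[/itex]'s leaves a vertex proportional to [itex]p^2[/itex]; the overall sign and the factors of [itex]i[/itex] depend on the conventions chosen for the action and the generating functional.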

Ok, so given my convention, we get that the RHS is equal to [itex]\frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p)[/itex] and the LHS is equal to [itex]F(p,-p)[/itex]. So what do we substitute for [itex]F(p,-p)[/itex] and why?

[itex]F(p,-p)[/itex] is the connected 2pt function, which should just be the propagator.
 
  • #32
fzero said:
The integral can be done by substitution. I get a slightly different answer than he does.
What would your substitution be? I tried [itex]u=k^2+p^2x(1-x)+m^2[/itex] as well as [itex]u=(k^2+p^2x(1-x)+m^2)^2[/itex] but couldn't get either to work.

fzero said:
The counterterms are used to compute new vertices. So you just have to compute Fourier transforms to get the tree level diagrams:

[tex]A(\partial \phi)^2 \rightarrow - A p^2[/tex]

[tex]- B \phi^2 \rightarrow - B,[/tex]

etc. On the RHS we've gotten rid of the factors of [tex]\hat{\phi}(p)[/tex] as usual when writing Feynman rules.
Ok. So you're saying that we take the Fourier transform to get the momentum space Feynman rules? How do you know only to Fourier transform the A and B contributions for [itex]\hat{\tau}_2^{(0)}[/itex] though?

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

Lastly, are the vertices he has drawn on that page corresponding to [itex]B,C,D,E[/itex] or [itex]\hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}[/itex]?
fzero said:
[itex]F(p,-p)[/itex] is the connected 2pt function, which should just be the propagator.
Ok but surely the full 2 point function should have external leg contributions as well, no?

My notes also claim that the renormalised parameters [itex]m, \lambda, \phi[/itex] depend on the RG (renormalisation group) scale [itex]\mu[/itex] but are independent of the cutoff [itex]\epsilon[/itex], whereas the bare parameters are dependent on the cutoff and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters i.e. ones that don't change when you change the scale? But clearly these will change and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?
(ii) When he talks about the cutoff [itex]\epsilon[/itex], even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff [itex]\Lambda[/itex] from UV cutoff regularisation, since surely taking the limits [itex]\Lambda \rightarrow \infty[/itex] and [itex]\epsilon \rightarrow 0[/itex] have the same effect?
Thanks.
 
Last edited:
  • #33
latentcorpse said:
What would your substitution be? I tried [itex]u=k^2+p^2x(1-x)+m^2[/itex] as well as [itex]u=(k^2+p^2x(1-x)+m^2)^2[/itex] but couldn't get either to work?

[tex]u=k^2+p^2x(1-x)+m^2[/tex] works. You might want to try again.
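
As a sanity check, here is a small sympy sketch of that substitution (my own construction: I take only the radial part of the Euclidean integral with a hard cutoff [itex]\Lambda[/itex], writing [itex]\Delta = p^2 x(1-x)+m^2[/itex] for the Feynman-parametrised denominator):

```python
import sympy as sp

k, u, Lam = sp.symbols('k u Lambda', positive=True)
Delta = sp.Symbol('Delta', positive=True)  # Delta = p^2 x(1-x) + m^2

# Radial part of the one-loop integral in 4 Euclidean dimensions,
# regulated with a hard cutoff Lambda:
I_direct = sp.integrate(k**3 / (k**2 + Delta)**2, (k, 0, Lam))

# The substitution u = k^2 + Delta (so k^2 = u - Delta, k dk = du/2)
# turns the integrand into (1/2)(u - Delta)/u^2:
I_sub = sp.Rational(1, 2) * sp.integrate((u - Delta) / u**2,
                                         (u, Delta, Lam**2 + Delta))

# Both routes give (1/2)[ln((Lambda^2+Delta)/Delta) + Delta/(Lambda^2+Delta) - 1]
print(sp.simplify(I_direct - I_sub))
```

The leading behaviour as [itex]\Lambda \to \infty[/itex] is the expected logarithm plus a finite constant.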

Ok. So you're saying that we take the Fourier transform to get the momentum space Feynman rules? How do you know only to Fourier transform the A and B contributions for [itex]\hat{\tau}_2^{(0)}[/itex] though?

The leading divergence from the counterterms is at tree-level (since we choose the coefficients to be divergent). There are no contributions to the tree level 2pt function from the other counterterms.

And even if I went through and did all the Fourier transforms, how do we know what the corresponding vertices look like?

The number of external legs corresponds to the number of fields in the term from the Lagrangian. The factor is the coupling constant up to sign or other conventions. The rules are given on page 21 of your notes.

Lastly, are the vertices he has drawn on that page corresponding to [itex]B,C,D,E[/itex] or [itex]\hat{\tau}_1^{(0)},\hat{\tau}_2^{(0)},\hat{\tau}_3^{(0)},\hat{\tau}_4^{(0)}[/itex]?

[itex]\hat{\tau}_2^{(0)}[/itex] is the leading divergence in the 2pt function. It comes from one-loop graphs. [itex]\hat{\tau}_{2,\text{c.t.}}^{(0)}[/itex] comes from the tree-level counterterms, since we're choosing the coefficients of the counterterms themselves to be divergent.

Ok but surely the full 2 point function should have external leg contributions as well, no?

The 2pt function is what you compute from [tex]\langle T \phi(x)\phi(y)\rangle[/tex]. At tree level there aren't external leg contributions.

My notes also claim that the renormalised parameters [itex]m, \lambda, \phi[/itex] depend on the RG (renormalisation group) scale [itex]\mu[/itex] but are independent of the cutoff [itex]\epsilon[/itex], whereas the bare parameters are dependent on the cutoff and independent of the RG scale.
I have two questions about this statement:
(i) I thought the whole point of renormalisation was to get physical parameters i.e. ones that don't change when you change the scale? But clearly these will change and we have to further manipulate them (i.e. go to the bare parameters) to get fixed ones. What do the bare parameters represent?

The point of renormalization is to get finite physical parameters. As a consequence they depend on scale. The bare parameters aren't directly measurable because they are divergent.

(ii) When he talks about the cutoff [itex]\epsilon[/itex], even though this has come from dimensional regularisation, does this correspond exactly to the high-momentum cutoff [itex]\Lambda[/itex] from UV cutoff regularisation, since surely taking the limits [itex]\Lambda \rightarrow \infty[/itex] and [itex]\epsilon \rightarrow 0[/itex] have the same effect?

The divergent parts should agree with the substitution [tex]1/\epsilon \sim \log \Lambda[/tex]. I don't think that there's any deeper connection between the two methods.
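
For the standard log-divergent one-loop integral, the matching of the divergent parts can be made explicit (Euclidean signature, [itex]d = 4 - 2\epsilon[/itex]; these are textbook results rather than anything specific to the notes above). Dimensional regularisation gives

[tex]\int \frac{d^d k}{(2\pi)^d} \frac{1}{(k^2+\Delta)^2} = \frac{\Gamma(2-\tfrac{d}{2})}{(4\pi)^{d/2}} \, \Delta^{\frac{d}{2}-2} = \frac{1}{16\pi^2}\left( \frac{1}{\epsilon} - \gamma + \ln 4\pi - \ln \Delta \right) + O(\epsilon),[/tex]

while a hard cutoff gives

[tex]\int_{|k|<\Lambda} \frac{d^4 k}{(2\pi)^4} \frac{1}{(k^2+\Delta)^2} = \frac{1}{16\pi^2}\left( \ln \frac{\Lambda^2}{\Delta} - 1 \right) + O(\Delta/\Lambda^2),[/tex]

so the divergences line up under [itex]1/\epsilon \leftrightarrow \ln \Lambda^2[/itex], but the finite parts differ, which is why finite renormalisation conditions are scheme dependent.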
Thanks.
 
  • #34
fzero said:
[tex]u=k^2+p^2x(1-x)+m^2[/tex] works. You might want to try again.
Ok, I'll try it.

My notes keep chopping and changing between G's and F's for Green's functions. I thought F was a Green's function in momentum space and G was a Green's function in position space, but I just saw a G(p)? Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to have used the two interchangeably.
fzero said:
The 2pt function is what you compute from [tex]\langle T \phi(x)\phi(y)\rangle[/tex]. At tree level there aren't external leg contributions.

How does this look:

We have the equation [itex]F^{(n)}(p_1, \dots , p_n) = i (2 \pi)^d \delta^{d}(\displaystyle\sum_{i=1}^n p_i) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1, \dots , p_n)[/itex]

Now we're trying to solve for [itex]\hat{F}_2(p,-p)[/itex]

So the RHS of our above equation (neglecting the delta function piece) is
[itex]\frac{-1}{(p^2+m^2)^2} \hat{F}_2(p,-p)[/itex]

Now the LHS is the full two-point function, which at tree level should just be a single internal line, so we should have [itex]F_2(p,-p) = \frac{1}{p^2+m^2}[/itex]

Rearranging this gives [itex]\hat{F}_2(p,-p)=-(p^2+m^2)=-p^2-m^2[/itex]

Furthermore, the renormalisation theorem tells us that if, once all the subdivergences are removed, we find the superficial degree of divergence, D, to be greater than or equal to 0, then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree D.

However, in my notes, we work out that

[itex]\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2[/itex] is the divergent piece for a theory with [itex]\phi^4[/itex] (coupling [itex]\lambda[/itex]) and [itex]\phi^3[/itex] (coupling [itex]g[/itex]) interactions in 4 dimensions for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as D = 4 - E - V_3, where E is the number of external legs and V_3 is the number of valence-3 vertices. So for 1 external line above (V_3 = 1) we find D = 3 - V_3 = 2, and so we should have a degree-2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of g, not 2?
What's going on?
I thought that it might be that the mass counts but it isn't a coupling OR a momenta, is it?

This is a problem throughout as I have [itex]\hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda[/itex]
This should have D=4-3-1=0 but quite clearly this polynomial is quadratic in couplings, leading to the same problem!

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator), and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities though, since physical quantities should be independent of the RG scale and the regulator, shouldn't they?
(i)I'm not sure my understanding of how renormalised and bare quantities are produced is correct? Re-reading what I wrote above it sounds like I've said they both arise for the same reason. This can't be true as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.
 
Last edited:
  • #35
latentcorpse said:
ok ill try it

My notes keep chopping and changing between G's and F's for Green's functions. I thought F was a Green's function in momentum space and G was a Green's function in position space, but I just saw a G(p)? Is this notation standard? If so, can you enlighten me as to what they are meant to be?

Also, is there a difference between Green's functions and correlation functions? Up until now we appear to have used the two interchangeably.

Use of F and G is not standard between authors, though usually people use G. Momentum space vs position space should be clear from context. Usually people don't bother to put a tilde over the momentum space functions because of this.

All Green functions are correlation functions of some type, but can differ in terms of connectedness. For example,

[tex] G(1,2,\ldots N) = (-i)^N \left. \frac{\delta}{\delta J_1} \cdots \frac{\delta}{\delta J_N} W[J]\right|_{J=0}[/tex]

are the Green functions computed from the generating functional. The RHS is clearly a correlation function of the fields. However the Green functions computed from [tex]Z[J] = -i \log W[J][/tex] are the connected Green functions.
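
The full-vs-connected distinction is exactly the moments-vs-cumulants relation, which can be made concrete in a zero-dimensional toy model (my own illustration; I drop the factors of [itex](-i)^N[/itex] and use a single Gaussian "field" with mean [itex]a[/itex] and connected 2pt function [itex]c[/itex]):

```python
import sympy as sp

J, a, c = sp.symbols('J a c')

# Toy generating functional: one Gaussian variable, mean a, variance c.
W = sp.exp(a * J + c * J**2 / 2)

# Full Green functions: derivatives of W at J = 0.
G1_full = sp.diff(W, J).subs(J, 0)       # <phi> = a
G2_full = sp.diff(W, J, 2).subs(J, 0)    # <phi^2> = c + a^2

# Connected Green functions: derivatives of log W at J = 0.
G2_conn = sp.diff(sp.log(W), J, 2).subs(J, 0)  # <phi^2>_c = c

# Taking the log subtracts exactly the disconnected piece a^2:
assert sp.simplify(G2_full - (G2_conn + G1_full**2)) == 0
```

The same mechanism in field theory is what removes the disconnected diagrams from the correlation functions generated by the logarithm of the generating functional.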


Furthermore, the renormalisation theorem tells us that if all the subdivergences are removed and we find the superficial degree of divergence, D, to be positive or equal to 0 then our diagram is divergent and the divergent part is given by a polynomial in the couplings and the external momenta of degree D

The degree corresponds to the expression [tex]\Lambda^D[/tex] in the primitive cutoff scheme (where D=0 corresponds to [tex]\log \Lambda[/tex]). The polynomial in couplings and momenta must be of scaling dimension D to compensate.

However, in my notes, we work out that

[itex]\hat{F}_1^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} g m^2[/itex] is the divergent piece for a theory with [itex]\phi^4[/itex] (coupling [itex]\lambda[/itex]) and [itex]\phi^3[/itex] (coupling [itex]g[/itex]) interactions in 4 dimensions for 1 loop with 1 external line (that has been amputated).

However, naive power counting gives the superficial degree of divergence as D = 4 - E - V_3, where E is the number of external legs and V_3 is the number of valence-3 vertices. So for 1 external line above (V_3 = 1) we find D = 3 - V_3 = 2, and so we should have a degree-2 polynomial in couplings and momenta.

However, this divergent piece only has one factor of g, not 2?
What's going on?
I thought that it might be that the mass counts but it isn't a coupling OR a momenta, is it?

This is a problem throughout as I have [itex]\hat{F}_3^{(1)} \sim \frac{1}{16 \pi^2 \epsilon} 3 g \lambda[/itex]
This should have D=4-3-1=0 but quite clearly this polynomial is quadratic in couplings leading to the same problem!

The combination of couplings, masses, and momenta must have scaling (mass) dimension D, so the mass does count. The actual number of n-pt couplings is determined by [tex]V_n[/tex], while the dependence on momenta and masses is fixed by the remaining scaling dimension.
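
To make the bookkeeping concrete, here is a small helper using the D = 4 - E - V_3 formula quoted above for the [itex]\phi^3 + \phi^4[/itex] theory in four dimensions (the function and its comments are my own sketch):

```python
def superficial_divergence(E, V3):
    """Superficial degree of divergence D = 4 - E - V3 for a theory with
    phi^3 (coupling g, mass dimension 1) and phi^4 (dimensionless lambda)
    interactions in d = 4.

    E  : number of external legs
    V3 : number of phi^3 vertices (the phi^4 vertices drop out of the
         counting because lambda is dimensionless)
    """
    return 4 - E - V3

# One-loop tadpole: 1 external leg, one phi^3 vertex.  D = 2 means the
# divergence carries mass dimension 2; in dim reg the surviving log piece
# is ~ g*m^2/epsilon, where m^2 supplies the dimension -- the number of
# g's is fixed by V3 = 1, not by D.
print(superficial_divergence(1, 1))  # 2
print(superficial_divergence(3, 1))  # 0 -> logarithmic, ~ g*lambda/epsilon
```

This is the resolution of the puzzle above: D fixes the mass dimension of the divergent polynomial, while the vertex count fixes how many couplings appear.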

And lastly, is the following true: we want to use renormalisation to get finite physical parameters. During renormalisation, we create renormalised parameters (that depend on the RG (renormalisation group) scale but are independent of the regulator), and then when we add the counterterms to the original Lagrangian we create the bare Lagrangian (the bare quantities depend on the regulator but not the RG scale). Neither of these are the physical quantities though, since physical quantities should be independent of the RG scale and the regulator, shouldn't they?
(i)I'm not sure my understanding of how renormalised and bare quantities are produced is correct? Re-reading what I wrote above it sounds like I've said they both arise for the same reason. This can't be true as they are different things!
(ii) How do we get the physical parameters out at the end of all this?

Cheers.

The renormalized parameters are the physical parameters. If you were to measure the fine structure constant from the strength of the electromagnetic interaction, you would find a different value at different energy scales. If you were to scatter electrons at a few MeV, you'd find a value close to 1/137, while if you scattered electrons at a center of mass energy around 80 GeV, you'd find 1/128.
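
The scale dependence fzero describes can be sketched with the standard one-loop running of the QED coupling. The snippet below (my own sketch) keeps only the electron loop, which is why it recovers only part of the measured shift from 1/137 towards 1/128 at the weak scale; the rest comes from the other charged fermions:

```python
import math

def alpha_qed(Q_GeV, alpha0=1 / 137.036, m_e=0.000511):
    """One-loop QED running with only the electron in the loop:
    alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) ln(Q^2 / m_e^2))."""
    L = math.log(Q_GeV**2 / m_e**2)
    return alpha0 / (1 - alpha0 / (3 * math.pi) * L)

# At Q = m_e there is no running; at the weak scale the electron loop
# alone takes 1/alpha from about 137 down to about 134.5.
print(1 / alpha_qed(0.000511))  # ~137.0
print(1 / alpha_qed(80.0))      # ~134.5
```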
 
