Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

  • Thread starter: latentcorpse
  • Tags: Algebra, Grassmann

SUMMARY

The discussion focuses on the differentiation of products of Grassmann numbers, specifically the expression $\frac{\partial}{\partial \theta_i} (\theta_j \theta_k \theta_l)$. Participants clarify that the differentiation follows the rule $\frac{\partial}{\partial \theta_i} (\theta_j \theta_k) = \delta_{ij} \theta_k - \theta_j \delta_{ik}$, emphasizing the importance of the minus sign due to the anticommutative nature of Grassmann variables. Additionally, they explore Grassmann integration, concluding that $\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}$, where $\epsilon$ is the antisymmetric tensor, and discuss the implications of this in various integrals.

PREREQUISITES
  • Understanding of Grassmann algebra and its properties
  • Familiarity with differentiation rules for Grassmann variables
  • Knowledge of antisymmetric tensors and their applications
  • Basic concepts of Grassmann integration
NEXT STEPS
  • Study the differentiation of products of Grassmann numbers in detail
  • Learn about the properties and applications of the antisymmetric tensor $\epsilon_{i_1 \dots i_n}$
  • Explore advanced topics in Grassmann integration techniques
  • Investigate the implications of Grassmann variables in quantum field theory
USEFUL FOR

The discussion is beneficial for physicists, mathematicians, and students studying quantum field theory, particularly those interested in the applications of Grassmann numbers and their integration techniques.

latentcorpse
If \{ \theta_i \} are a set of Grassmann numbers then what is

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)

I know that \frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}. We need this to be the case because if we set j=k then the LHS becomes the derivative of \theta_j^2=0, and so we need the RHS to vanish as well (hence the minus sign!).

However, now that there are three variables present, I am confused as to what should pick up a minus sign upon differentiation and what should not?

Thanks.
 
The minus sign in

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}

can also be derived from noting that \partial/\partial \theta_i is Grassmann, so

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k - \theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).

This method applies directly to the product of 3 Grassmann numbers.
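To make the sign bookkeeping concrete, here is a minimal sketch of this graded Leibniz rule in code (my own illustration, not part of the thread; the representation and the names gproduct and dtheta are ad hoc). Grassmann elements are stored as dictionaries mapping sorted index tuples to coefficients.

[CODE]
def gproduct(a, b):
    """Multiply two Grassmann elements given as {sorted index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia) + list(ib)
            if len(set(idx)) < len(idx):
                continue  # theta_i^2 = 0 kills any repeated index
            sign = 1
            # bubble sort the indices; every neighbour swap costs a factor -1
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def theta(i):
    """The generator theta_i."""
    return {(i,): 1}

def dtheta(i, a):
    """Left derivative d/dtheta_i: anticommute theta_i to the front, then strip it."""
    out = {}
    for idx, c in a.items():
        if i in idx:
            pos = idx.index(i)  # theta_i sits behind pos other factors
            rest = idx[:pos] + idx[pos + 1:]
            out[rest] = out.get(rest, 0) + ((-1) ** pos) * c
    return out

# d/dtheta_2 of theta_1 theta_2 theta_3: the middle factor costs one minus sign
m = gproduct(theta(1), gproduct(theta(2), theta(3)))
print(dtheta(2, m))  # {(1, 3): -1}, i.e. -theta_1 theta_3
[/CODE]

The printed result matches the three-factor rule \delta_{ij} \theta_k \theta_l - \delta_{ik} \theta_j \theta_l + \delta_{il} \theta_j \theta_k evaluated at (i,j,k,l)=(2,1,2,3), which reduces to -\theta_1 \theta_3.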
 
fzero said:
The minus sign in

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \delta_{ij} \theta_k - \theta_j \delta_{ik}

can also be derived from noting that \partial/\partial \theta_i is Grassmann, so

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k ) = \left( \frac{\partial\theta_j}{\partial \theta_i} \right) \theta_k - \theta_j \left( \frac{\partial \theta_k }{\partial \theta_i} \right).

This method applies directly to the product of 3 Grassmann numbers.

Great. I do have a question about Grassmann integration though.

\int d^n \theta f(\theta_1, \dots \theta_n) = \int d \theta_n d \theta_{n-1} \dots d \theta_1 f(\theta_1, \dots , \theta_n)
where f(\theta_1 , \dots , \theta_n) = a + a_i \theta_i + \dots + \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}

So I can see that only the last term will survive because all the preceding terms are constant with respect to \theta_n and since the integral of a constant with respect to a Grassmann variable is zero, they will all die. Therefore this simplifies to

\int d^n \theta \, \frac{1}{n!} a_{i_1} \dots a_{i_n} \theta_{i_1} \dots \theta_{i_n}

My notes then say that \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n} where \epsilon_{i_1 \dots i_n} is the antisymmetric tensor that equals 1 if the indices are ordered in an even permutation, -1 if they are ordered in an odd permutation and 0 if any two indices are the same.

Apparently this then means that \int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?


Given this definition, does that mean that, say, \int d^3 \theta \theta_1 \theta_3 \theta_2 =-1 as \epsilon_{132}=-1, i.e. we would have to swap the last two variables to do this integral and that would pick up a minus sign?

Thanks.
 
latentcorpse said:
Apparently this then means that \int d^n \theta f(\theta_1 , \dots , \theta_n) = \frac{1}{n!} \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n} = a_{1 \dots n}

So my question is, how does that last equality work? How does he get rid of the epsilon? And where does the n! go?

The a_i are odd variables. Just expand \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}, count the number of terms and apply permutations to write each one as a_{1} \cdots a_{n}.

Given this definition, does that mean that, say, \int d^3 \theta \theta_1 \theta_3 \theta_2 =-1 as \epsilon_{132}=-1, i.e. we would have to swap the last two variables to do this integral and that would pick up a minus sign?

Yes. You should be able to derive

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}

from the basic formula

\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).
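As a quick numerical cross-check of that claim (a sketch of my own, using the convention \int d^n \theta \, \theta_1 \cdots \theta_n = 1 implicit above; berezin is an ad hoc name):

[CODE]
def berezin(indices, n):
    """Integral of theta_{i_1} ... theta_{i_n} over d^n theta, normalized so
    that the identity ordering (1, 2, ..., n) integrates to 1."""
    if sorted(indices) != list(range(1, n + 1)):
        return 0  # a missing (hence repeated) generator kills the integral
    sign, idx = 1, list(indices)
    for i in range(n):              # bubble sort back to (1, ..., n);
        for j in range(n - 1 - i):  # each neighbour swap costs a factor -1
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign  # this is exactly epsilon_{i_1 ... i_n}

print(berezin((1, 3, 2), 3))  # -1, matching epsilon_{132} = -1
[/CODE]

which reproduces the \int d^3 \theta \, \theta_1 \theta_3 \theta_2 = -1 guess from the previous post.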
 
fzero said:
The a_i are odd variables. Just expand \epsilon_{i_1 \dots i_n} a_{i_1} \dots a_{i_n}, count the number of terms and apply permutations to write each one as a_{1} \cdots a_{n}.

Got it.

fzero said:
Yes. You should be able to derive

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \epsilon_{i_1 \dots i_n}

from the basic formula

\int d\theta_i \theta_i = 1 ~~(\text{no sum on} ~i).
This I don't know how to do... And also, when it comes to complex integration, we have been given \theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - i \theta_2}{\sqrt{2}}

We are then told that d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i

However, I find that d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}
Obviously the \theta^2 terms vanish and so I find
d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2 which is out by a minus sign?

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder, i.e. why is \int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1?

And finally, why do we have \int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)?
Shouldn't we have additional higher order terms from the expansion of the exponential?
 
latentcorpse said:
This I don't know how to do...

I actually have a mistake, it should be

\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).

The proof of that formula just amounts to counting signs when you permute to express

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.

And also, when it comes to complex integration, we have been given \theta=\frac{\theta_1+i \theta_2}{\sqrt{2}} , \bar{\theta} = \frac{\theta_1 - i \theta_2}{\sqrt{2}}

We are then told that d \theta d \bar{\theta} = d \theta_1 d \theta_2 \times i

However, I find that d \theta d \bar{\theta} = \frac{d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2}{2}
Obviously the \theta^2 terms vanish and so I find
d \theta d \bar{\theta} = \frac{ -2 i d \theta_1 d \theta_2}{2} = - i d \theta_1 d \theta_2 which is out by a minus sign?

The minus sign looks correct. That sign cancels out in any nonzero integral, such as

\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.

Also, do you know how we get the extra factor of 1/b in 9.67 in Peskin and Schroeder i.e. why is \int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} = \frac{1}{b}b = 1?

The point is that the b-dependent terms in the integrand vanish, so that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.

The 1/b is put in later to compare with the integral without \theta \bar{\theta} inserted.

latentcorpse said:
And finally, why do we have \int d \theta d \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta ( 1 - \bar{\theta} b \theta)?
Shouldn't we have additional higher order terms from the expansion of the exponential?

The higher order terms vanish as \theta^2=0.
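Spelled out, in the P&S convention where \int d \bar{\theta} d \theta \, \theta \bar{\theta} = 1, the pair of integrals behind the 1/b works out as follows (a check of my own, not a quote from the book), using -\bar{\theta} b \theta = + b \theta \bar{\theta}:

\int d \bar{\theta} d \theta \, e^{-\bar{\theta} b \theta} = \int d \bar{\theta} d \theta \, (1 + b \theta \bar{\theta}) = b

\int d \bar{\theta} d \theta \, \theta \bar{\theta} \, e^{-\bar{\theta} b \theta} = \int d \bar{\theta} d \theta \, \theta \bar{\theta} = 1 = \frac{1}{b} \cdot b

since the 1 integrates to zero in the first line, and the b-dependent term in the second contains \theta^2 = 0.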
 
fzero said:
I actually have a mistake, it should be

\int \theta_i d\theta_i = 1 ~~(\text{no sum on} ~i).

The proof of that formula just amounts to counting signs when you permute to express
My notes and, by the looks of things P&S, define it the way you had it originally, i.e. that \int d \theta_i \theta_i =1 (since the d \theta_i is Grassmann as well, we would get a minus sign if we commuted it to the form you have above). Are there different conventions at play here?

As for trying to prove the identity, I don't get it...
We know \int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1} so surely, \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1
Where does the permuting come into play?

fzero said:
\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \prod_i \int \theta_i d\theta_i.



The minus sign looks correct. That sign cancels out in any nonzero integral, such as

\int d\theta d\bar{\theta} \theta\bar{\theta} = 1.
Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have \int d \bar{\theta} d \theta \theta \bar{\theta}=1 which is different to what we get if we assume that minus sign is right?


fzero said:
The point is that the b-dependent terms in the integrand vanish, so that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{-\bar{\theta} b \theta} =1.

The 1/b is put in later to compare with the integral without \theta \bar{\theta} inserted.

Ok. But how do you actually do that integral? I find that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )

The first term gives us a b and the second one seems to give b \theta \bar{\theta}.

And the last thing we do on Grassmann integrals is that I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}

and we can complete the square to show

I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried \tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j

Thanks a lot again!
 
latentcorpse said:
My notes and, by the looks of things P&S, define it the way you had it originally, i.e. that \int d \theta_i \theta_i =1 (since the d \theta_i is Grassmann as well, we would get a minus sign if we commuted it to the form you have above). Are there different conventions at play here?

Every minus sign is convention-dependent. For instance, your d^n\theta = d\theta_n\cdots d\theta_1 is another convention.

As for trying to prove the identity, I don't get it...
We know \int d^n \theta = \int d \theta_{i_n} d \theta_{i_{n-1}} \dots d \theta_{i_1} so surely, \int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d \theta_{i_n} \dots d \theta_{i_1} \theta_{i_1} \dots \theta_{i_n} =\int d \theta_{i_n} \dots d \theta_{i_2} \theta_{i_2} \dots \theta_{i_n} = \dots = \int d \theta_{i_n} \theta_{i_n} = 1
Where does the permuting come into play?

Because d^n\theta = d\theta_n\cdots d\theta_1 by your convention, so

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n}

depends on how we rearrange to put the appropriate \theta with the matching d\theta.
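For instance, filling in the n = 2 case under that convention (my worked example):

\int d^2 \theta \, \theta_1 \theta_2 = \int d \theta_2 d \theta_1 \, \theta_1 \theta_2 = \int d \theta_2 \, \theta_2 = 1 = \epsilon_{12}, \qquad \int d^2 \theta \, \theta_2 \theta_1 = -\int d \theta_2 d \theta_1 \, \theta_1 \theta_2 = -1 = \epsilon_{21}.

Each misordered pair costs one anticommutation, and the accumulated sign is exactly the parity of the permutation.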

Yeah I can't see what's going wrong either but there must be something because P&S and my notes both have \int d \bar{\theta} d \theta \theta \bar{\theta}=1 which is different to what we get if we assume that minus sign is right?

No, the sign is common to d\theta d \bar{\theta} and \theta \bar{\theta} so it cancels in the product.

Ok. But how do you actually do that integral, I find that

\int d \bar{\theta} d \theta \theta \bar{\theta} e^{- \bar{\theta} b \theta} = \int d \bar{\theta} d \theta \theta \bar{\theta} ( 1 + b \theta \bar{\theta}) = \int d \bar{\theta} d \theta ( \theta \bar{\theta} + b \theta \bar{\theta} \theta \bar{\theta} )
The first term gives us a b and the second one seems to give b \theta \bar{\theta}

The 2nd term vanishes because it's proportional to \theta^2 \bar{\theta}^2 and
\theta^2 = \bar{\theta}^2 = 0 since they are odd variables.

And the last thing we do on Grassmann integrals is that I_B = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j} = \text{det } B by making a unitary transformation. Anyway I can do that calculation fine. But he then says that in the presence of source terms,

I_B(\eta , \bar{\eta}) = \int d^N \bar{\theta} d^N \theta e^{-\bar{\theta}_i B_{ij} \theta_j + \bar{\eta}_i \theta_i + \bar{\theta}_i \eta_i}
and we can complete the square to show
I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta}_i B_{ij}^{-1} \eta_j}

Now I have been trying to show this but I don't think I am completing the square correctly as I keep getting cross terms that I can't get rid of. I tried \tilde{\theta}_i = \theta_i - B_{ij}^{-1} \eta_j

You want (in matrix form)

\theta = S \tilde{\theta} + B^{-1} \eta,

where S diagonalizes B, so S^{-1} B S = I.
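Filling in the algebra as a check of my own (assuming the shifts \theta = \tilde{\theta} + B^{-1} \eta and \bar{\theta} = \bar{\tilde{\theta}} + \bar{\eta} B^{-1}; the unitary S only enters afterwards, when the remaining Gaussian is evaluated as \det B):

-\bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta = -(\bar{\tilde{\theta}} + \bar{\eta} B^{-1}) B (\tilde{\theta} + B^{-1} \eta) + \bar{\eta} (\tilde{\theta} + B^{-1} \eta) + (\bar{\tilde{\theta}} + \bar{\eta} B^{-1}) \eta = -\bar{\tilde{\theta}} B \tilde{\theta} + \bar{\eta} B^{-1} \eta,

since every cross-term cancels in pairs. The Berezin measure is unchanged by the constant shift, which gives I_B(\eta , \bar{\eta}) = \text{det } B \times e^{+\bar{\eta} B^{-1} \eta}.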
 
fzero said:
Every minus sign is convention-dependent. For instance, your d^n\theta = d\theta_n\cdots d\theta_1 is another convention.



Because d^n\theta = d\theta_n\cdots d\theta_1 by your convention, so

\int d^n \theta \theta_{i_1} \dots \theta_{i_n} = \int d\theta_n\cdots d\theta_1\theta_{i_1} \dots \theta_{i_n}

depends on how we rearrange to put the appropriate \theta with the matching d\theta.
So if you were asked to show this in an exam, what would you do? As far as I can tell, we just get it to \int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N} and then just describe in words that it's to do with making permutations? Is there any maths we can do?

fzero said:
No, the sign is common to d\theta d \bar{\theta} and \theta \bar{\theta} so it cancels in the product.
Yeah but there is some inconsistency here. We claim that if that -i was correct then
\int d \bar{\theta} d \theta \bar{\theta} \theta =1 but both my notes and P&S say \int d \bar{\theta} d \theta \theta \bar{\theta} =1


fzero said:
You want (in matrix form)

\theta = S \tilde{\theta} + B^{-1} \eta,

where S diagonalizes B, so S^{-1} B S = I.

I've been playing about with this but can't get it. How are we going to get an S^{-1} out of anything? Also, how on Earth did you know to take that to complete the square?
 
  • #10
latentcorpse said:
So if you were asked to show this in an exam, what would you do? As far as I can tell, we just get it to \int d \theta_N \dots d \theta_1 \theta_{i_1} \dots \theta_{i_N} and then just describe in words that it's to do with making permutations? Is there any maths we can do?

You could also show that \theta_{i_1} \cdots \theta_{i_N} = \epsilon_{i_1\cdots i_N} \theta_1\cdots \theta_N. It's obvious by considering various cases, but maybe there's a more elegant proof.

Yeah but there is some inconsistency here. We claim that if that -i was correct then
\int d \bar{\theta} d \theta \bar{\theta} \theta =1 but both my notes and P&S say \int d \bar{\theta} d \theta \theta \bar{\theta} =1

I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from i^2.


I've been playing about with this but can't get it. How are we going to get an S^{-1} out of anything? Also, how on Earth did you know to take that to complete the square?

S is Hermitian and when you consider the \bar{\theta} expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose \bar{\theta}, so S^\dagger=S^{-1} appears in the correct places. The \eta term comes from comparing the cross-terms.
 
  • #11
fzero said:
I wrote the former term down using the opposite convention for the integral. If you work things out in P&S conventions, the relative minus sign is precisely what you need to cancel the minus from i^2.
Ok. So we agreed that d \theta d \bar{\theta} = -i d \theta_1 d \theta_2

And if you work out \theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2
\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1

Therefore \int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in d \theta d \bar{\theta} = -i d \theta_1 d \theta_2?



fzero said:
S is Hermitian and when you consider the \bar{\theta} expression, you end up with a complex conjugation. Then when you put things together into the quadratic expression, you also transpose \bar{\theta}, so S^\dagger=S^{-1} appears in the correct places. The \eta term comes from comparing the cross-terms.

Ok. So the exponent looks like - \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta
Under the transformation \theta \rightarrow S \tilde{\theta} + B^{-1} \eta we get
- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta
Is this going in the right direction?

Cheers.
 
  • #12
latentcorpse said:
Ok. So we agreed that d \theta d \bar{\theta} = -i d \theta_1 d \theta_2

And if you work out \theta \bar{\theta} = \frac{1}{2} ( i \theta_2 \theta_1 - i \theta_1 \theta_2 ) = - i \theta_1 \theta_2
\Rightarrow \bar{\theta} \theta = i \theta_2 \theta_1

Therefore \int d \theta d \bar{\theta} ( \bar{\theta} \theta ) = - i \int d \theta_1 d \theta_2 ( i \theta_2 \theta_1) = -i \times i =1

So this is correct, yes? This means then that my written notes are wrong and that there should indeed be the minus sign in d \theta d \bar{\theta} = -i d \theta_1 d \theta_2?

Probably. I couldn't say for certain, since there could be some convention that's different.


Ok. So the exponent looks like - \bar{\theta} B \theta + \bar{\eta} \theta + \bar{\theta} \eta
Under the transformation \theta \rightarrow S \tilde{\theta} + B^{-1} \eta we get
- (S^* \bar{\tilde{\theta}} + ( B^{-1})^* \bar{\eta} ) B ( S \tilde{\theta} + B^{-1} \eta ) + \bar{\eta} S \tilde{\theta} + \bar{\eta} B^{-1} \eta + S^* \bar{\tilde{\theta}} \eta + (B^{-1})^* \bar{\eta} \eta
Is this going in the right direction?

Note: I accidentally called S Hermitian when I meant unitary. I gave the correct property as an equation, but wanted to correct that.

If you're going to use the matrix notation (and I think it's easier to do so) you have to keep track of transposes. So \bar{\theta} is Hermitian conjugate in this notation, not just complex conjugate, so

\bar{\theta} = \bar{\tilde{\theta}} S^\dagger + \bar{\eta} B^{-1}.

I derived this by canceling cross-terms, but it could also be computed directly by using the Hermiticity of B.
 
  • #13
fzero said:
Probably. I couldn't say for certain, since there could be some convention that's different.

I actually disagree again!

I find d \theta d \bar{\theta}= - i d \theta_1 d \theta_2

and \bar{\theta} \theta = -i \theta_2 \theta_1

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!
And also, could you confirm whether \gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} where \gamma^\mu are the matrices of the Clifford algebra?

Thanks.
 
  • #14
latentcorpse said:
I actually disagree again!

I find d \theta d \bar{\theta}= - i d \theta_1 d \theta_2

and \bar{\theta} \theta = -i \theta_2 \theta_1

which would give an answer of -i x -i =-1 which is wrong! AAArrgghh!

\theta \bar{\theta} and d \theta d \bar{\theta} have the same structure so your previous calculation was correct.

And also, could you confirm whether \gamma^\mu \gamma^\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} where \gamma^\mu are the matrices of the Clifford algebra?

Thanks.

That would imply that [\gamma^\mu, \gamma^\nu]=0, so no.
 
  • #15
fzero said:
\theta \bar{\theta} and d \theta d \bar{\theta} have the same structure so your previous calculation was correct.

Explicitly:

d\theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2

and \bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2

so \int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times -1 = i^2 =-1

I simply cannot spot the mistake!

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

And take for example a simple loop integral, in Euclidean space Feynman rules this gives
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}
Where does that proportionality come from?

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?
And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this \phi^4 theory. Why on Earth is that?
 
  • #16
latentcorpse said:
Explicitly:

d\theta d \bar{\theta} = \frac{1}{2} ( d \theta_1 + i d \theta_2 )( d \theta_1 - i d \theta_2) = \frac{1}{2} ( d \theta_1^2 - i d \theta_1 d \theta_2 + i d \theta_2 d \theta_1 + d \theta_2^2) = - i d \theta_1 d \theta_2

and \bar{ \theta} \theta = \frac{1}{2} ( \theta_1 - i \theta_2 )( \theta_1 + i \theta_2) = \frac{1}{2} ( \theta_1^2 + i \theta_1 \theta_2 - i \theta_2 \theta_1 + \theta_2^2) = i \theta_1 \theta_2

so \int d \theta d \bar{\theta} \theta \bar{\theta} = - i \times i \int d \theta_1 d \theta_2 \theta_1 \theta_2 = - i \times i \times (-1) \int d \theta_1 d \theta_2 \theta_2 \theta_1 = -i^2 \times -1 = i^2 =-1

I simply cannot spot the mistake!

Your calculation is correct. P&S claim that

\int d \bar{\theta} d \theta \theta \bar{\theta} =1.

Moving on, what is the point of Wick Rotation? We introduce it so that we can use Euclidean space Feynman rules for loop integrals rather than momentum space Feynman rules. What's the reason for this? Is it just easier in Euclidean rules or something?

It makes path integrals more convergent, since the weight is now e^{-S/\hbar}. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.

And take for example a simple loop integral, in Euclidean space Feynman rules this gives
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \propto \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2}
Where does that proportionality come from?

The RHS is the result of choosing spherical coordinates in momentum space.

Lastly, I was trying to replicate the calculation for the amplitude at the top of p324 in P&S for the diagram with 4 external lines, but I can't get it. How do we do that?

We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

\int d^4k \frac{1}{k^4}.

And I'm assuming when he says "the only divergent amplitudes are..." and then draws those 3 diagrams on p324, that the grey circles just represent any collection of internal lines that can be made so that they have 4 point vertices, yes?

Yes.

EDIT: Also at the bottom of p323, he says that all amplitudes with an odd number of vertices will vanish in this \phi^4 theory. Why on Earth is that?

You can use the \phi\rightarrow -\phi symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
 
  • #17
fzero said:
Your calculation is correct. P&S claim that

\int d \bar{\theta} d \theta \theta \bar{\theta} =1.
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)
d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2
And \theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1
So \int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1
fzero said:
It makes path integrals more convergent, since the weight is now e^{-S/\hbar}. The divergence structure of loop integrals is also easier to elucidate in Euclidean space, hence regularization is easier.
So previously our path integrals had weight e^{iS/\hbar}, right? How does the weight change to what you said? And why does that new weight make it more convergent?

fzero said:
The RHS is the result of choosing spherical coordinates in momentum space.
Well shouldn't there also be a \int k^{d-1} piece? Why can that be dropped from the proportionality?

fzero said:
We only have a quartic vertex, so the simplest 1-loop diagrams have 2 vertices and 2 internal lines. Therefore there are 2 internal propagators involving the loop momentum, so the diagram is proportional to

\int d^4k \frac{1}{k^4}.
How do you know those are the simplest diagrams? Experience?
And how does that integral give a log? I tried substituting u=k^4 but that doesn't help?
fzero said:
You can use the \phi\rightarrow -\phi symmetry to show directly that correlation functions of an odd number of fields must vanish. Alternatively, if you set up a diagram with an odd number of external legs, you'll find that you must connect an odd number of internal legs in pairs, which is impossible.
I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship \Gamma(n+1)=n! for n \in \mathbb{Z}^+ which is fair enough. Then he goes through the procedure of using \Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1) to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for \alpha near 0, we have \Gamma(\alpha) \sim \frac{1}{\alpha} and since \Gamma(n+1)=n!, wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

Thanks again!
 
  • #18
latentcorpse said:
Sorry to drag this out. I think I may have made a mistake in what I wrote out last time. To try and reproduce P&S calculation:
\theta=\frac{1}{\sqrt{2}}(\theta_1 + i \theta_2), \quad \bar{\theta} = \frac{1}{\sqrt{2}} ( \theta_1 - i \theta_2)
d \bar{\theta} d \theta = \frac{1}{2} ( d \theta_1 - i d \theta_2 ) ( d \theta_1 + i d \theta_2 ) = \frac{1}{2} ( i d \theta_1 d \theta_2 - i d \theta_2 d \theta_1) = i d \theta_1 d \theta_2
And \theta \bar{\theta} = \frac{1}{2} ( \theta_1 + i \theta_2 )( \theta_1 - i \theta_2 ) = \frac{1}{2} ( - i \theta_1 \theta_2 + i \theta_2 \theta_1 ) = i \theta_2 \theta_1
So \int d \bar{\theta} d \theta \theta \bar{\theta} = i^2 \int d \theta_1 d \theta_2 \theta_2 \theta_1 = i^2 = -1

It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on i, this introduces an extra minus sign, which is what we need to get P&S's result.

So previously our path integrals had weight e^{iS/\hbar}, right? How does the weight change to what you said? And why does that new weight make it more convergent?

Just consider a quadratic action. One integral is oscillatory and the other is a Gaussian.
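As a toy version of that statement (my example, not from the thread): in zero dimensions the two weights are

\int_{-\infty}^{\infty} dx \, e^{i a x^2/\hbar} \qquad \text{versus} \qquad \int_{-\infty}^{\infty} dx \, e^{-a x^2/\hbar} = \sqrt{\pi \hbar / a} \quad (a > 0),

and the first is only conditionally convergent (a Fresnel integral), while the Euclidean one converges absolutely and damps large fluctuations exponentially.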

Well shouldn't there also be a \int k^{d-1} piece? Why can that be dropped from the proportionality?

The k^{d-1} factor is there.

How do you know those are the simplest diagrams? Experience?

You can just try to draw diagrams that are one-particle irreducible.

And how does that integral give a log? I tried substituting u=k^4 but that doesn't help?

\int \frac{d^4k}{k^4} \sim \int \frac{dk}{k}

I can see this from the diagrams but not quite sure what you mean for doing it for the correlation functions - what formula do I use?

You can show for example that

\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle

Also, he's introducing analytic continuation and discusses how we can use analytic continuation of the Gamma function to extend the definition of n! to negative numbers.
We have the relationship \Gamma(n+1)=n! for n \in \mathbb{Z}^+ which is fair enough. Then he goes through the procedure of using \Gamma(\alpha) = \frac{1}{\alpha} \Gamma(\alpha+1) to extend this to -1 and so on and so forth till we can extend over the whole complex plane. However, we find that for \alpha near 0, we have \Gamma(\alpha) \sim \frac{1}{\alpha} and since \Gamma(n+1)=n!, wouldn't this mean that 0! would diverge? But we expect 0!=1?
Furthermore, does this definition actually mean we can put a value on things like (-2)! and if so how?

The Gamma function has simple poles at \alpha = 0,-1,-2,\ldots. However 0!=\Gamma(1) = 1.
 
  • #19
fzero said:
It seems that one has to be more careful. The right way to keep track of the change in measure is to compute the Jacobian. For Grassmann variables, the determinant of the Jacobian comes in with the reciprocal power (this is worked out in Srednicki's book for example). Since the Jacobian here depends on i, this introduces an extra minus sign, which is what we need to get P&S's result.
I don't follow what you're on about with reciprocal powers etc?
fzero said:
The k^{d-1} factor is there.
Why is it treated as a constant though? Is it because if we break up \int d^dk = \int k^{d-1} dk \int d^{d-1}k, the \int d^{d-1}k measure has no k dependence?

fzero said:
You can just try to draw diagrams that are one-particle irreducible.
Why one particle irreducible? Do we always just work with these?
As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.
When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!
fzero said:
You can show for example that

\langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle = - \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle

How though? I am not sure how to expand \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle in this case?

fzero said:
The Gamma function has simple poles at \alpha = 0,-1,-2,\ldots. However 0!=\Gamma(1) = 1.
Yes because \Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha} and so \Gamma(0)=\Gamma(1) / 1 = 1/1 = 1 which agrees with 0!=1
Although, is it true to say that there is still a simple pole associated with \Gamma(0) since if we take \alpha near 0 then \Gamma(\alpha) \sim \frac{1}{\alpha} since \Gamma(1)=1, right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, (-2)!?

Thanks.
 
  • #20
latentcorpse said:
I don't follow what you're on about with reciprocal powers etc?

You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.

Why is it treated as a constant though? Is it because if we break up \int d^dk = \int k^{d-1} dk \int d^{d-1}k, the \int d^{d-1}k measure has no k dependence?

No, the k-dependence is there. Specifically

\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2}= \int \frac{d\Omega_{d-1}}{(2 \pi)^d} \int \frac{dk~ k^{d-1}}{k^2+m^2}= \frac{V_{d-1}}{ (2 \pi)^d} \int_0^\infty \frac{k^{d-1}dk}{k^2+m^2},

where V_{d-1} is the volume of a d-1 sphere. The proportionality constant is just the ratio appearing in front of the last expression.
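For reference, the standard formula for that volume (not spelled out above) is

V_{d-1} = \frac{2 \pi^{d/2}}{\Gamma(d/2)},

so in d = 4 the angular factor is V_3 = 2\pi^2.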

Why one particle irreducible? Do we always just work with these?

The diagrams corresponding to the expansion of what's usually called the generating function (usually written as W[J]) are connected and include propagators on the external legs. What's usually called the effective action (\Gamma[\phi]) involves the one-particle irreducible diagrams with no propagators on the external legs.

As for finding one with 2 vertices and 2 lines - I'm struggling!
I can draw my four external lines coming in and each one hitting the corner of a square. That is definitely 1PI but it has 4 internal lines and 4 vertices.

That sounds like you're using cubic instead of quartic vertices.

When trying to get one with 2 vertices and 2 internal lines, I drew a cross, with a vertex at the centre and then stuck a loop on one of the external legs. However, I'm fairly sure that is not 1PI as I can cut the loop and the diagram will no longer be connected!

The diagram I'm talking about is

[attached diagram: fish.jpg]


How though? I am not sure how to expand \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle in this case?

All we've done is applied the \phi\rightarrow -\phi symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.

Yes because \Gamma(\alpha) = \frac{\Gamma(\alpha+1)}{\alpha} and so \Gamma(0)=\Gamma(1) / 1 = 1/1 = 1 which agrees with 0!=1
Although, is it true to say that there is still a simple pole associated with \Gamma(0) since if we take \alpha near 0 then \Gamma(\alpha) \sim \frac{1}{\alpha} since \Gamma(1)=1, right?

However, since we are claiming that the gamma function extends the factorial to the whole complex plane, I am wondering, how do you go about computing, say, (-2)!?

You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
 

  • #21
fzero said:
You should probably just look at the computation leading up to (44.18) in http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf What I should have realized sooner is that multiplying differentials almost never leads to the correct measure. What is true is that you could compute the measure from the volume form if you're careful. But it's usually faster to use the Jacobian.
So what are our J_{i_1j_1} in this case?
fzero said:
The diagrams corresponding to the expansion of what's usually called the generating function (usually written as W[J]) are connected and include propagators on the external legs. What's usually called the effective action (\Gamma[\phi]) involves the one-particle irreducible diagrams with no propagators on the external legs.
Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

fzero said:
The diagram I'm talking about is

[attached diagram: fish.jpg]
Ok so given this graph, if we put k as the loop momentum then the Euclidean space Feynman rules tell us that we get
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}
If we regularise by imposing a high momentum UV cutoff k \leq \Lambda
then we obtain the result \sim \log{\Lambda} as on p324 P&S.
Is this correct?

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like \sim \log(k) next to it but doesn't actually say what \sim \log(k) actually IS. It's not a Green's function or anything like that. What actually is it?

fzero said:
All we've done is applied the \phi\rightarrow -\phi symmetry to the correlator. There's no need to evaluate it since we've shown that it must be zero.
Do you just mean that you took \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle and made the transformation \phi \rightarrow - \phi, which gives 3 minus signs (one for each \phi), and since (-1)^3=-1 we get an overall minus sign that we can pull out, showing that the 3-point Green's function is equal to minus itself and therefore must vanish?

fzero said:
You could compute a residue or principal value, but the factorial does not extend to the whole plane. It extends to everywhere that it doesn't have a pole, which is the plane minus the nonpositive integers.
Ok. So you're telling me that \Gamma is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that \Gamma(0)=1 to be consistent with 0!=1

So if you were asked to find (-1.5)!, could you just compute \Gamma(-0.5) using its integral definition \Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}dx?

Lastly, my notes discuss three methods of regularisation: (i) UV cut-off, which I think I understand (assuming you agree with my calculation of \log(\Lambda) above);
(ii) using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing);
(iii) dimensional regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4 which depends on the dimension d.
Usually this is just defined for d \in \mathbb{Z}^+ but we can regulate by analytic continuation to d \in \mathbb{C}.
He has included an example afterwards and I can follow the maths in it but I don't really understand what we are doing. How do we analytically continue to complex d and what's the point of doing so?

Thanks very much!
 
  • #22
latentcorpse said:
So what are our J_{i_1j_1} in this case?

That's easy to figure out by inverting the formula for the complex variables. Srednicki does the calculation a bit later in that section.

Yes, I have these same definitions. But this doesn't explain why we consider 1PI diagrams? Is it because they don't have propagators on the external legs and are therefore simpler to deal with?

The LSZ formula tells us that S-matrix elements are related to time-ordered correlation functions. We can build all of the correlation functions from the 1PIs as ingredients I guess.

Ok so given this graph, if we put k as the loop momentum then the Euclidean space Feynman rules tell us that we get
\int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2} \rightarrow \int \frac{d^dk}{(2 \pi)^d} \frac{1}{k^4}
If we regularise by imposing a high momentum UV cutoff k \leq \Lambda
then we obtain the result \sim \log{\Lambda} as on p324 P&S.
Is this correct?

Basically yes.

Also, when we compute these loop integrals, what exactly are we computing? He just draws a diagram and then puts something like \sim \log(k) next to it but doesn't actually say what \sim \log(k) actually IS. It's not a Green's function or anything like that. What actually is it?

Each diagram is a specific way of Wick contracting the operators in a momentum space correlation function.

Do you just mean that you took \langle \phi(x_1) \phi(x_2) \phi(x_3) \rangle and made the transformation \phi \rightarrow - \phi which gives 3 minus signs (one for each \phi) and since (-1)^3=-1 we get an overall minus sign that we can pull out showing that the 3 point green's function is equal to minus itself and therefore must vanish?

Yes.

Ok. So you're telling me that \Gamma is defined everywhere except 0,-1,-2,-3,...?
But we just showed a few posts ago that \Gamma(0)=1 to be consistent with 0!=1

No, 0!=\Gamma(1)

So if you were asked to find (-1.5)!, could you just compute \Gamma(-0.5) using its integral definition \Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}dx?

There are identities for \Gamma(z/2) and \Gamma(1-z) that are usually easier to work with than the integral.
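For instance, using just the recursion \Gamma(\alpha) = \Gamma(\alpha+1)/\alpha together with the standard value \Gamma(1/2) = \sqrt{\pi} (a worked example of mine):

(-1.5)! = \Gamma(-\tfrac{1}{2}) = \frac{\Gamma(\tfrac{1}{2})}{-\tfrac{1}{2}} = -2\sqrt{\pi} \approx -3.545.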

Lastly, my notes discuss three methods of regularisation: (i) UV cut-off, which I think I understand (assuming you agree with my calculation of \log(\Lambda) above);
(ii) using a spacetime lattice (although this doesn't seem very important as he just mentions it in passing);
(iii) dimensional regularisation. This seems like the most important. He just says though:
The superficial degree of divergence is given by D=(d-4)L + \displaystyle\sum_n (n-4) V_n - E + 4 which depends on the dimension d.
Usually this is just defined for d \in \mathbb{Z}^+ but we can regulate by analytic continuation to d \in \mathbb{C}.
He has included an example afterwards and I can follow the maths in it but I don't really understand what we are doing. How do we analytically continue to complex d and what's the point of doing so?

You'd be better off trying to read the details in a decent text. You are generally expressing loop integrals in terms of Gamma functions which define the analytic continuation. Since the Gamma functions have a known pole structure, one can determine the divergent and finite parts of each integral.
 
  • #23
fzero said:
Basically yes.
I'm getting an infinity though when I write it out in full

\int \frac{d^4k}{k^4} = \int \frac{dk}{k} \Big|_0^\Lambda = \log{\Lambda} - \log{0}, but \log{0}=-\infty so I end up with a +\infty term?

fzero said:
No, 0!=\Gamma(1)

Ok. So we're saying that \Gamma(0) is undefined and we can never know its value because of the pole there? That makes sense I guess...

fzero said:
There are identities for \Gamma(z/2) and \Gamma(1-z) that are usually easier to work with than the integral.

So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible providing it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers: does (-1+i)! exist?

And so, given these two methods of regularisation, (i) momentum space cut-off and (ii) dimensional regularisation, how do we know which one to use? Will questions generally state which one they want you to use or is it better to use a particular one?

Thanks.
 
  • #24
latentcorpse said:
I'm getting an infinity though when I write it out in full

\int \frac{d^4k}{k^4} = \int \frac{dk}{k}|_0^\Lambda = \log{\Lambda} - \log{0} but \log{0}=-\infty so I end up with a +\infty term?
You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.

latentcorpse said:
Ok. So we're saying that \Gamma(0) is undefined and we can never know its value because of the pole there? That makes sense I guess...

I should have been more careful. We can isolate the divergent and finite parts of the Gamma function at the poles. This is what's used in dimensional regularization. My comment above was that 0! does not correspond to a pole of the Gamma function.

latentcorpse said:
So I should know these identities I guess. But you are saying that calculating the factorial of a negative number is possible providing it isn't at a pole? i.e. (-1.5)! exists?

What about complex numbers: does (-1+i)! exist?

Both of those exist.
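To see the mass doing its job as an IR cutoff, here is the d = 4, zero-external-momentum version of the integral with the cutoff in place (my computation, using the substitution u = k^2 + m^2):

\int_0^\Lambda \frac{k^3 \, dk}{(k^2+m^2)^2} = \frac{1}{2} \int_{m^2}^{\Lambda^2+m^2} \frac{u - m^2}{u^2} \, du = \frac{1}{2} \left[ \ln \frac{\Lambda^2+m^2}{m^2} + \frac{m^2}{\Lambda^2+m^2} - 1 \right] \sim \ln \frac{\Lambda}{m},

which is perfectly finite at k = 0 and diverges only logarithmically as \Lambda \rightarrow \infty.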
 
  • #25
fzero said:
You get this divergence because you dropped the particle mass out of the integral. That was valid when we were considering the UV behavior, but not valid when considering small momenta. If you consider the complete integral, the mass acts as an IR cutoff.
So going back to
\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be \log{\Lambda} but \Lambda is bounded above meaning that it can't diverge. This is the point of regularisation right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem as we can still just take \Lambda \rightarrow \infty and the integral will diverge again, won't it?

fzero said:
Both of those exist.
So how would we calculate (1-i)!, say?

Can you try and explain to me what renormalisation is all about as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the mass and the physical mass (which appears as the pole in the full two-point function) being different?

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes \hat{F}^{(0)}_2(p,-p)=p^2+m^2
I don't see why though? Surely the Feynman rules tell us it should have a \frac{1}{p^2+m^2}, no?

Cheers.
 
  • #26
latentcorpse said:
So going back to
\int_{k \leq \Lambda} \frac{d^4k}{(2 \pi)^4} \frac{1}{k^2+m^2} \frac{1}{(p-k)^2+m^2}
What should I use as a substitution? I can't get rid of the p term?

Also, aren't we meant to be talking about divergent graphs here? We get our answer to be \log{\Lambda} but \Lambda is bounded above meaning that it can't diverge. This is the point of regularisation right? We have imposed this upper limit on momenta in order to render an initially divergent loop integral finite?

Isn't this just a sort of "quick fix" though? It doesn't really seem to solve the problem as we can still just take \Lambda \rightarrow \infty and the integral will diverge again, won't it?

This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.

So how would we calculate (1-i)!, say?

You probably have to do an integral somewhere, since I'm not remembering an expression for \Gamma(\pm i).

Can you try and explain to me what renormalisation is all about as my notes offer a fairly inadequate description and I can't really follow what P&S is on about! It seems to be something to do with the mass and the physical mass (which appears as the pole in the full two-point function) being different?

I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.

And there's something in my notes that's bugging me: He claims that a tree level (i.e. no loops) diagram with 2 external legs that are amputated contributes \hat{F}^{(0)}_2(p,-p)=p^2+m^2
I don't see why though? Surely the Feynman rules tell us it should have a \frac{1}{p^2+m^2}, no?

The definition of the amputated correlation function is

G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).

Since

G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)

we find that

G_{\text{amp}}(p,-p) \sim p^2+m^2.
 
  • #27
latentcorpse said:
If \{ \theta_i \} are a set of Grassmann numbers then what is

\frac{\partial}{\partial \theta_i} ( \theta_j \theta_k \theta_l)

The \frac{\partial}{\partial \theta_i} is undefined on a Grassmann algebra. You either have a left derivative, or a right derivative operator depending on your wish. The 2 operators in general are not the same and in particular this difference is felt once you have a product of anticommuting objects.

Read more on this in Henneaux and Teitelboim's book on the quantization of gauge theories.
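A two-generator example of the distinction (my illustration; conventions for the right derivative vary by a sign between books): the left derivative strips \theta_1 from the left, the right derivative from the right, so

\frac{\partial^L}{\partial \theta_1} (\theta_1 \theta_2) = \theta_2, \qquad (\theta_1 \theta_2) \frac{\overleftarrow{\partial}}{\partial \theta_1} = -\theta_2,

because \theta_1 \theta_2 = -\theta_2 \theta_1 must first be reordered to bring \theta_1 next to the operator acting from the right.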
 
  • #28
fzero said:
This integral is eq (10.20) in P&S and is evaluated in dimensional regularization there.
I see how he gets the first line of (10.20) from Euclidean space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to (-i \lambda)^2 \cdot i V(p^2) where V(p^2) is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the Feynman parameter he's on about but our lecture notes haven't mentioned these...

fzero said:
You probably have to do an integral somewhere, since I'm not remembering an expression for \Gamma(\pm i).
So we have (-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx and we'd have to evaluate this?

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

fzero said:
I'd suggest trying some other texts. I can try to answer specific questions, but I can't try to regurgitate material that takes pages and pages in books.
Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
\mathcal{L}+\mathcal{L}_{ct} where the counter term lagrangian is given by
\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)
and the counter term potential is given by E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

fzero said:
The definition of the amputated correlation function is

G(p_1,\ldots p_n) = G(p_1)\cdots G(p_n) G_{\text{amp}}(p_1,\ldots p_n).

Since

G(p,-p) \sim \frac{1}{p^2+m^2} \sim G(p)

we find that

G_{\text{amp}}(p,-p) \sim p^2+m^2.

So we have that (I think this is the same as what you have)
F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}( \displaystyle\sum_{i=1}^n p_i ) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)
where \hat{F}_n(p_1 , \dots , p_n) corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get (-i/(p^2+m^2))^2*\hat{F} on the RHS. Why is the full correlation function equal to 1/(p^2+m^2)? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out \hat{F}?

Thanks again.
 
  • #29
latentcorpse said:
I see how he gets the first line of (10.20) from Euclidean space Feynman rules, then he seems to do lots of steps at once and I can't keep up! He says that this is equal to (-i \lambda)^2 \cdot i V(p^2) where V(p^2) is given on the next page. Why is that the formula for V? And what's this x integral all about? I reckon it's the Feynman parameter he's on about but our lecture notes haven't mentioned these...

(10.20) serves as the definition of V(p^2). The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.

So we have (-1+i)! = \Gamma(i) = \int_0^\infty x^{i-1} e^{-x} dx and we'd have to evaluate this?

Or use some other representation of the Gamma function. Wolframalpha seems to use the series representations: http://www.wolframalpha.com/input/?i=gamma(i)

What about my question concerning which method to use for regularisation? How do we decide between dimensional regularisation or imposing a momentum space cut-off?

The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.

Sure. Well the notes that (more or less) follow my lectures are here:
http://www.damtp.cam.ac.uk/user/ho/Notes.pdf
Our renormalisation stuff is round about p50. Half-way down this page he says that the loop integrals lead to divergent terms and these divergent terms can be canceled off by adding new "counter terms" to the Lagrangian so that we have a new lagrangian of the form
\mathcal{L}+\mathcal{L}_{ct} where the counter term lagrangian is given by
\mathcal{L}_{ct}=-\frac{1}{2}A \partial_\mu \phi \partial^\mu \phi - V_{ct}(\phi)
and the counter term potential is given by E \phi + \frac{B}{2} \phi^2 + \frac{C}{3!} \phi^3 + \frac{D}{4!} \phi^4
Why does the counter term lagrangian take this form? I don't see why any of these terms are there or how they help cancel stuff?

The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the \hat{\tau}s at the top of that page, while the counterterms contribute by adding the \hat{\tau}_{c.t.}s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.
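Concretely, a sketch under the sign conventions of those notes: the quadratic counterterms -\frac{1}{2} A \partial_\mu \phi \partial^\mu \phi - \frac{B}{2} \phi^2 are read off in momentum space exactly like the kinetic and mass terms, so they insert a two-point vertex

\hat{\tau}^{(0)}(p,-p)_{\text{c.t.}} = -A p^2 - B,

and A and B are then fixed order by order so that this cancels the divergent part of the loop contribution to the two-point function; C and D play the same role for the three- and four-point vertices.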

So we have that (I think this is the same as what you have)
F_n(p_1 , \dots , p_n) = i (2 \pi)^d \delta^{(d)}( \displaystyle\sum_{i=1}^n p_i ) \displaystyle\prod_{i=1}^n \left( \frac{-i}{p_i^2+m^2} \right) \hat{F}_n(p_1 , \dots , p_n)
where \hat{F}_n(p_1 , \dots , p_n) corresponds to the amputated correlation function.
I can see how my definition falls out of the Euclidean space Feynman rules although it seems to have some extra terms relative to yours?
Anyway, I agree that if we have two external legs then we get (-i/(p^2+m^2))^2*\hat{F} on the RHS. Why is the full correlation function equal to 1/(p^2+m^2)? How do you know that?
And my definition looks as though it's going to disagree with yours by an overall minus sign when we work out \hat{F}?

There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
 
  • #30
fzero said:
(10.20) serves as the definition of V(p^2). The Feynman parameter method is the same thing that you were asking about in https://www.physicsforums.com/showthread.php?t=478538 P&S refer to their section 7.5.

I found the calculation in full here:
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2208v1.pdf
See p14 & p15.
I follow up to (2.6): why does he get an \Omega_4?
Surely \int d^4k = \int_{S^3} d \Omega_3 \int k^3 dk, no?

And then in (2.7), I understand the change of variables he used to get the expression on the left of this line but how does that integrate to give the RHS?

And then how does (2.8) come about? Surely we have 5 factors of 2 on the bottom, i.e. 32, since we had a (2 \pi)^4 and also a 1/2 from (2.7)? He also appears to have only \pi^2 rather than \pi^4 on the bottom?

fzero said:
The momentum space cut-off will eventually break gauge covariance. For some diagrams it works, but for general processes it will turn up as a nonzero photon mass generated through radiative corrections. Dimensional regularization preserves gauge covariance at the expense of being difficult to understand.
So, given an arbitrary loop integral to compute, is it best to start of by trying momentum space cut off (since it is easier) and if that fails then try dim reg? Or should we just always try dim reg straight away since we know that it works?

fzero said:
The idea is that we can consider the effect of adding any single term to our original Lagrangian, either by just noting that we shift the parameters of the original theory or by adding new diagrams where we replace some lines and vertices with the appropriate counterterms. Either way, we can now express our physical quantities in terms of the bare couplings and the coefficients of the counterterms. The divergent parts of these quantities are given by the \hat{\tau}s at the top of that page, while the counterterms contribute by adding the \hat{\tau}_{c.t.}s to the bare result. The renormalization scheme consists of choosing the counterterm coefficients in such a way that the divergences are removed.
Yep, so I worked out those divergent parts already using the dimensional regularisation prescription I think. And I understand (or at least am prepared to accept) that we can add new terms to the Lagrangian in order to cancel off these divergences. However, I do not understand why the counter terms in the Lagrangian give the contributions that they do to the amputated n-point function,
i.e. why is \hat{\tau}^{(0)}(p,-p)_{\text{c.t.}}=-Ap^2-B?
And why do they then give those diagrams drawn underneath?
fzero said:
There are various conventions that can be used to define these expressions. It's probably unnecessary to put the momentum conservation delta function in by hand, since it should be enforced by the Feynman rules already when the external lines are on-shell.
Ok, so given my convention, we get the RHS is equal to \frac{-1}{(p^2+m^2)^2} \hat{F}(p,-p) and the RHS is equal to F(p,-p). So what do we substitute for F(p,-p) and why?
 
