Questions about Srednicki's QFT

haushofer
Hi,

I did some searching and found quite some questions about the Srednicki book on QFT, so apparently there are more people working with it. I thought maybe it would be a nice idea to have some sort of "questions about QFT encountered while reading Srednicki's book"-topic, so I hope I'm being appropriate here. If not, let me know.



I'm still a little confused about how the Feynman diagrams are generated with the functional Z. Just like you can define \Pi(k^2) as the sum of all one-particle irreducible diagrams (1PI's), you can define V_n(k_1, \ldots, k_n) as the sum of all 1PI's with n external lines.

Now Srednicki claims that there is no tree-level contribution to V_{n \geq 4} in \phi^3-theory. The connected diagram of V=1, P=3 is a tree diagram, right? (Three external lines coming together at a single vertex.) So does he basically mean that "you don't have E=4,P=4,V=1 diagrams in \phi^3 theory, and all the other tree diagrams are not 1PI"?

Also, a question about regularization which I already posed, but I'm still confused (but RedX, thanks for your efforts!) ;)

I have another small question about Srednicki's book; it's about the ultraviolet cutoff. In eq. (9.22) Srednicki makes the replacement

\Delta(x-y) \rightarrow \int\frac{d^4k}{(2\pi)^4}\frac{e^{ik(x-y)}}{k^2 + m^2 - i\epsilon} \Bigl(\frac{\Lambda^2}{k^2 + \Lambda^2 - i\epsilon}\Bigr)^2
instead of cutting the integral off explicitly at \Lambda. Are there any arguments besides Lorentz invariance why such a particular convergent replacement makes sense?
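To get a feeling for what the replacement does, here is a quick sympy check (my own sketch, not from the book) of the two limits that make it sensible: as \Lambda \rightarrow \infty the extra factor goes to 1, so the original propagator is recovered, while at fixed \Lambda the integrand falls off like 1/k^6 instead of 1/k^2, which is what makes loop integrals converge.

```python
# Sketch: the regulated propagator factor from eq. (9.22), checked in
# Euclidean-style variables (i*epsilon dropped for the symbolic check).
import sympy as sp

k, m, Lam = sp.symbols('k m Lambda', positive=True)
prop = 1 / (k**2 + m**2)
regulated = prop * (Lam**2 / (k**2 + Lam**2))**2

print(sp.limit(regulated, Lam, sp.oo))       # recovers 1/(k**2 + m**2)
print(sp.limit(k**6 * regulated, k, sp.oo))  # finite: Lambda**4
```

So the modification is invisible for k much below \Lambda and only changes the ultraviolet behavior, which is the whole point of a cutoff.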
 
I think I got the first question: in the considered \phi^3 theory we have P = (E + 3V)/2 (where E = # external lines, P = # propagators, V = # vertices), so for E=4 we have to start with V=2, P=5, and this diagram is not 1PI.
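A quick way to sanity-check this counting (my own sketch, nothing from the book) is to enumerate which (V, P) pairs make P = (E + 3V)/2 an integer:

```python
# Sketch: enumerate (V, P) pairs allowed by the counting rule
# P = (E + 3V)/2 in phi^3 theory with E external lines.
# A combination is only possible when P comes out an integer,
# i.e. when E + 3V is even.

def allowed_vp(E, max_V=6):
    """Return the list of (V, P) with integer P = (E + 3V)/2."""
    pairs = []
    for V in range(1, max_V + 1):
        total = E + 3 * V
        if total % 2 == 0:          # P must be an integer
            pairs.append((V, total // 2))
    return pairs

print(allowed_vp(4))  # for E=4 only even V survive: [(2, 5), (4, 8), (6, 11)]
```

For E=4 the lowest allowed order is indeed V=2, P=5.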
 
I'm also having trouble with the "skeleton expansion" described in chapter 19 (perturbation theory to all orders).

First of all: if we are interested in a certain process, then we fix our E, right? We know how many particles come in and out, and we want to calculate the cross-section of that process. So I'm not sure why we have to sum over all n-point vertices n=3,4,...,E.

The expansion itself is described as:

This means that we draw all the contributing 1PI diagrams, but omit diagrams that include either propagator or 3-point vertex corrections. That is, we omit any 1PI diagram that contains a subdiagram with two or three external lines that is more complicated than a single tree-level propagator (for a subdiagram with two external lines) or tree-level vertex (for a subdiagram with three external lines).

Can someone elaborate on this?
 
Ok, to take a concrete example (like in chapter 20): elastic 2-particle scattering. I take E=4 here. Because P=(E+3V)/2, I get

P = 2 + \frac{3}{2}V

This gives me the following list:

V: | 1 | 2 | 3 | 4 | 5 | 6
P: | x | 5 | x | 8 | x | 11

where an x means that this particular combination of V and P is not possible in our \phi^3 theory. So, to start at lowest order in V, we get

V=2,P=5
V=4,P=8
V=6,P=11

The first one, V=2 and P=5, is a diagram with 2 external lines coming in at a vertex, and this vertex is connected by an internal line to another vertex. This last vertex is connected, of course, to again 2 external lines.

The second, V=4 and P=8, is a square where every corner is connected to an external line.

The third, V=6, P = 11 is, I think, the same diagram as the second with an extra internal line in the square.

Of course, every diagram can be obtained in different ways (the first one, for instance, in 3! different ways, etc.). Is this skeleton expansion then the idea that:

You take these first three orders in V, insert for the internal lines the exact propagator, and for the vertices the exact 3-point functions? What about the exact propagators of the external lines? Can I find some book/link where this is explained in detail and up to a reasonable order in V?

I hope my question is a bit clear :)
 
haushofer said:
You take these first three orders in V, insert for the internal lines the exact propagator, and for the vertices the exact 3-point functions?
Yes.
haushofer said:
What about the exact propagators of the external lines?
Yes, but after the expression for the diagram is put into the LSZ formula, the external propagators get replaced by the residue of the pole at the physical mass, which (at least at this stage of the book) has been set equal to 1.
haushofer said:
Can I find some book/link where this is explained in detail and up to a reasonable order in V?
Srednicki appears to be following the program outlined by 't Hooft, http://arxiv.org/abs/hep-th/0405032, which seems to me to be different than the standard BPHZ procedure. For this, see Sterman or Kaku.

As for your question about the cutoff procedure, the whole idea is that the details of the procedure should not matter (in a renormalizable theory), so we can use whatever is most convenient.
 
Avodyne said:
Yes.

Yes, but after the expression for the diagram is put into the LSZ formula, the external propagators get replaced by the residue of the pole at the physical mass, which (at least at this stage of the book) has been set equal to 1.

Srednicki appears to be following the program outlined by 't Hooft, http://arxiv.org/abs/hep-th/0405032, which seems to me to be different than the standard BPHZ procedure. For this, see Sterman or Kaku.

Great! It's good to hear I'm getting the hang of it! Indeed, I forgot the LSZ formula; I see the point now! ;) When I asked,

"We know how many particles come in and out, and we want to calculate the cross-section of that process. So I'm not sure why we have to sum over all n-point vertices n=3,4,...,E."

the point is then that IN the diagram there are n-point functions which we want to evaluate. For instance, in the E=4, P=8 case we want to evaluate the vertices exactly which are 3-point functions, and if we go higher in order in our skeleton we will encounter higher n-point functions, right?

As for your question about the cutoff procedure, the whole idea is that the details of the procedure should not matter (in a renormalizable theory), so we can use whatever is most convenient.
So I could use

\Delta(x-y) \rightarrow \int\frac{d^4k}{(2\pi)^4}\frac{e^{ik(x-y)}}{k^2 + m^2 - i\epsilon} \Bigl(\frac{\Lambda^2}{k^2 + \Lambda^2 - i\epsilon}\Bigr)^n

for an arbitrary, finite n?
 
haushofer said:
For instance, in the E=4, P=8 case we want to evaluate the vertices exactly which are 3-point functions, and if we go higher in order in our skeleton we will encounter higher n-point functions, right?
Right!

haushofer said:
So I could use ... for an arbitrary, finite n?
Yes.
 
Another question :)

Chapter 25 talks about decay. A Lagrangian of 2 different particles \phi and \chi is written down, and in this Lagrangian an interaction between \phi and \chi is included. Then it is stated that

For m_{\phi} > 2m_{\chi} it is kinematically possible for the \phi particle to decay into two \chi particles.

My question is: does this follow directly from Z(J)? Of course, I'm familiar with relativistic kinematics and decays and all that, but I'm wondering where and how exactly this possibility slips into our definition of Z(J).
 
haushofer -> It's built into Z(J) through translation invariance. You use Z(J) to generate the n-point functions of the theory, and these are translation invariant. In momentum space this means you will end up with a momentum-conserving delta function. For the decay of a particle you'll have \delta(k_1 + k_2 - k). (See Srednicki's eq. 25.4.) Now, the Dirac delta is only nonzero when its argument is zero. And if m_\phi < 2 m_\chi then always k_1 + k_2 - k \ne 0, so the momentum delta is identically zero, giving you an overall vanishing probability for the process to occur.
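The kinematic side of this can be made explicit with a tiny numeric sketch (my own, with hypothetical numbers): in the \phi rest frame momentum conservation forces the two \chi's to be back to back, and their total energy E_1 + E_2 is minimized at zero momentum, where it equals 2m_\chi. The energy delta therefore has support only if m_\phi >= 2m_\chi.

```python
import math

def total_energy(p, m_chi):
    """Total energy of two back-to-back chi particles with 3-momenta +/- p."""
    return 2 * math.sqrt(p**2 + m_chi**2)

# Energy conservation in the phi rest frame needs E1 + E2 = m_phi.
# The left-hand side is minimized at p = 0, where it equals 2*m_chi.
m_chi = 1.0
energies = [total_energy(p, m_chi) for p in (0.0, 0.5, 1.0, 2.0)]
print(energies)
print(min(energies) == 2 * m_chi)  # True: threshold is m_phi = 2*m_chi
```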
 
  • #10
DrFaustus said:
haushofer -> It's built into Z(J) through translation invariance. You use Z(J) to generate the n-point functions of the theory, and these are translation invariant. In momentum space this means you will end up with a momentum-conserving delta function. For the decay of a particle you'll have \delta(k_1 + k_2 - k). (See Srednicki's eq. 25.4.) Now, the Dirac delta is only nonzero when its argument is zero. And if m_\phi < 2 m_\chi then always k_1 + k_2 - k \ne 0, so the momentum delta is identically zero, giving you an overall vanishing probability for the process to occur.

Hey DrFaustus!

I also traced down what happens if I plug in the Lagrangian, and also came down to the Dirac delta functions. It's nice to see how energy conservation is expressed in that way in QFT.

(I often tend to throw down questions here before intensive investigation, because this really helps me to come closer to an answer, and it's very good to see how other people think about it. So sometimes I find an answer some time after posing the question ;) Thanks for your answer! :) )
 
  • #11
haushofer said:
... for an arbitrary, finite n?
Shouldn't Re{n}>1? (So, "no, not completely arbitrary.")
 
  • #12
Yes, I implicitly assumed that n was real. :)
 
  • #13
Okido, another question, which I encountered in chapter 27. It's about equations 27.11 and 27.12. We have

m^2_{ph} = m^2\Bigl[1+ \frac{5}{12}\alpha\Bigl(\ln{\frac{\mu^2}{m^2}} + c'\Bigr) + O(\alpha^2)\Bigr]

He takes the log on both sides and divides by 2. So that leaves us with

\ln{m_{ph}} = \ln{m} + \frac{1}{2}\ln{[\ldots]}

Now I don't understand how he gets equation 27.12. He obviously does a Taylor expansion

\ln(1+x) = x - \frac{x^2}{2} + \ldots

and implicitly assumes that alpha is very small, or m>>mu. Or can we sweep the corrections under the rug of O(\alpha^2)? Can someone comment on this? :)

edit: I see the point; you expand in x, and x is of at least order alpha.
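The step can be checked symbolically (a sketch of my own, with made-up symbol names): take the log of m_ph^2 = m^2[1 + (5/12)\alpha(L + c')], halve it, and expand in \alpha. Since the correction inside \ln(1+x) is itself O(\alpha), the higher terms of the expansion are O(\alpha^2) and get absorbed.

```python
import sympy as sp

alpha, m, L, c = sp.symbols('alpha m L cprime', positive=True)
msq_ph = m**2 * (1 + sp.Rational(5, 12) * alpha * (L + c))

# log of both sides, divided by 2, expanded to first order in alpha
ln_m_ph = sp.expand(sp.series(sp.log(msq_ph) / 2, alpha, 0, 2).removeO())
print(ln_m_ph)
```

The result is \ln m + (5/24)\alpha(L + c'), i.e. the coefficient 5/12 halves to 5/24, which is the form of eq. 27.12.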
 
  • #14
Another question about Srednicki, chapter 34. It's about Lorentz representations.

The Lorentz representation is described by (2n+1, 2m+1), which gives the dimensions of the two SU(2) algebras to which SO(3,1) is isomorphic.

In ordinary QM we can add two electron spins together and obtain a singlet (which is antisymmetric) and a triplet (which is symmetric). This is written as

2 \otimes 2 = 1_A \oplus 3_S

At the bottom of page 211 the book mentions "For the Lorentz group, the relevant equation is

(2,1) \otimes (2,1) = (1,1)_A \oplus (3,1)_S

"

Why is this exactly? Another question arises at page 213, "For example, we can deduce the existence of g_{\mu\nu}=g_{\nu\mu} from

(2,2) \otimes (2,2) = (1,1)_S \oplus (1,3)_A \oplus (3,1)_A \oplus (3,3)_S
", and the following "another invariant symbol..." Frankly, I couldn't derive these results by my own, so could some-one elaborate on this or give a link/book/whatever in which this is properly explained?
 
  • #15
  • #16
Great Sam, I've never been properly exposed to Clebsch-Gordan, so now is the time to be properly exposed :) I'll look at your posts!
 
  • #17
I think that you need to translate between your notation and sam's. Basically, your notation specifies the dimensionality of the representation, whereas sam's specifies the "total angular momentum" (in multiples of hbar). To put it simply, your values of n and m are twice those in sam's notation.
 
  • #18
turin said:
Shouldn't Re{n}>1? (So, "no, not completely arbitrary.")

Almost certain this integral converges (as a distribution) even for n=0. It's the Feynman propagator - it doesn't need to be regularised.

I'm not familiar with Srednicki's book - why did s/he regularise this?

Cheers

Dave
 
  • #19
schieghoven said:
Almost certain this integral converges (as a distribution) even for n=0. It's the Feynman propagator - it doesn't need to be regularised.

I'm not familiar with Srednicki's book - why did s/he regularise this?

Cheers

Dave

I missed this one. But it's a good point, and I have to say that his treatment of this isn't very clear to me at all. So if anyone can comment on it, I'm curious!

But I have another computational question, which I encountered in Chapter 4, eqn. (4.7).
For a proper orthochronous Lorentz Transformation we have

U(\Lambda)^{-1}\phi(x)U(\Lambda)=\phi(\Lambda^{-1}x) \ \ \ \ \ (1)
which means for the annihilation operator (the creation operator goes the same)

U(\Lambda)^{-1}a(\bold{k})U(\Lambda) = a(\Lambda^{-1}\bold{k}) \ \ \ \ \ (2)

I thought the following: expand \phi in creation and annihilation operators and plug this into (1). On the right-hand side we then have, in the exponential of the expansion, an inner product between \Lambda^{-1}x and k. This is basically the same as the inner product between \Lambda k and x (k and x transform oppositely). If we then change variables in the expansion,

k \rightarrow \Lambda^{-1}k

and use that the measure \tilde{dk} doesn't change, we arrive at the result.

However, in (1) our Lorentz transformation acts on the four-vector x, while in (2) it acts on the three-vector \bold{k}. So is my computation valid? What does it mean for this Lorentz transformation to act on a three-vector in (2)?
 
  • #20
haushofer said:
However, in (1) our Lorentz transformation acts on the four-vector x, while in (2) it acts on the three-vector \bold{k}. So is my computation valid? What does it mean for this Lorentz transformation to act on a three-vector in (2)?
Yeah, that's just a kind of cheap notation. You can just as well index the mode operator with the 4-momentum, subject to the mass-shell constraint. Using the 3-momentum as the index just helps to remind you that the index space is (homeomorphic? to) R3, not R4. The meaning of the transform of the 3-momentum is precisely the resulting value of the 3-momentum components after the transform of the 4-momentum.
 
  • #21
Ah, ok, thanks for the clarification!
 
  • #22
Hi all,

I have posted this question as a separate thread in the forum originally but I think this is the better place for it:
I have two questions regarding chapter 27 and 28 in Srednicki's book. On page 163 he states:
"furthermore, the residue of the pole is no longer one. Let us call the residue R. The LSZ formula must now be corrected by multiplying its rightnad side by a factor of R^(-1/2) for each external particle...This is because it is the field R^{(-1/2)} \phi (x) that now has unit amplitude to create a one-particle state."

But this would mean that
|<k|\phi|0>|^2 = R

I cannot see why this is. I would expect the result to be R^2, because there is a factor of k^2 + m^2 in the LSZ formula...

My second question: On p.170 Srednicki states that bare parameters must be independent of \mu. Because if we "were smart enough, we would be able to compute the exact scattering amplitudes in terms of them". Why is this? After all bare parameters have no physical meaning at all (at least as far as I understand this), so why can't they be dependent on \mu? How would you calculate an exact amplitude just with arbitrary, unphysical bare parameters?

Hope anyone can help me and thanks for reading!
 
  • #23
PJK said:
Hi all,

I have posted this question as a separate thread in the forum originally but I think this is the better place for it:
I have two questions regarding chapter 27 and 28 in Srednicki's book. On page 163 he states:
"furthermore, the residue of the pole is no longer one. Let us call the residue R. The LSZ formula must now be corrected by multiplying its rightnad side by a factor of R^(-1/2) for each external particle...This is because it is the field R^{(-1/2)} \phi (x) that now has unit amplitude to create a one-particle state."

But this would mean that
|<k|\phi|0>|^2 = R

I cannot see why this is. I would expect the result to be R^2, because there is a factor of k^2 + m^2 in the LSZ formula...

I think Peskin explains this better than Srednicki, so you might want to check out Peskin's explanation.

Maybe you can view it like this (just for now):
1) the LSZ formula requires that the residue of the propagator is one;
2) the propagator is the vacuum expectation value of two fields;
3) therefore, if you multiply each of the two fields by 1/R^(1/2), which multiplies out to 1/R, the residue becomes R*(1/R) = 1;
4) therefore you need to multiply all in and out fields by 1/R^(1/2).

PJK said:
My second question: On p.170 Srednicki states that bare parameters must be independent of \mu. Because if we "were smart enough, we would be able to compute the exact scattering amplitudes in terms of them". Why is this? After all bare parameters have no physical meaning at all (at least as far as I understand this), so why can't they be dependent on \mu? How would you calculate an exact amplitude just with arbitrary, unphysical bare parameters?

I don't own a copy of Srednicki's book, but I have the trial version, and page numbers aren't in sync, so I'm not exactly sure what Srednicki is saying. The bare parameters can't be arbitrary. Although they're infinity, they aren't just any infinity, but just the right infinity that's needed to cancel the infinities from loops.

You get all sorts of weird things in your amplitude like a mass scale \mu, or the Euler-Mascheroni constant, etc., that are artifacts of your regularization scheme. Thankfully however, we're saved because all that junk can always be absorbed by the bare coefficients. So even though it seems that the bare coefficients depend on how we choose to regulate the loop, the bare coefficients aren't really changing at all - they are just covering for our ignorance.

If we always chose one method of regularization at one scale, then the bare coefficients would always be the same.
 
  • #24
So even though it seems that the bare coefficients depend on how we choose to regulate the loop, the bare coefficients aren't really changing at all - they are just covering for our ignorance.

Ok I think I got that... so the bare parameters are 'infinite constants' and our ignorance is that we do not know their exact (infinite) value?

Maybe you can view it like this (just for now):
1) the LSZ formula requires that the residue of the propagator is one;
2) the propagator is the vacuum expectation value of two fields;
3) therefore, if you multiply each of the two fields by 1/R^(1/2), which multiplies out to 1/R, the residue becomes R*(1/R) = 1;
4) therefore you need to multiply all in and out fields by 1/R^(1/2).

Well I understand your argumentation, but I do not see why the LSZ formula requires that the residue of the propagator is one. All I can see is that for each external propagator it gives a factor of (k^2+m_{ph}^2)\frac{1}{k^2+m^2-\Pi_{\overline{MS}}(k^2)}, which results in a factor of R. But Srednicki says on p.173 (trial version) that 'combined with the correction factor of R^(-1/2) for each field, we get a net factor of R^(1/2) for each external line when using the MSbar scheme'. But following your argumentation this should result in a 1, because the propagator is corrected by a factor of R (because of its two field corrections in <0|T\{\phi\phi\}|0>).


Thank you so much for your answers, I have been thinking about this for quite a while and I hadn't made any progress until your reply!
 
  • #25
PJK said:
Well I understand your argumentation, but I do not see why the LSZ formula requires that the residue of the propagator is one. All I can see is that for each external propagator it gives a factor of (k^2+m_{ph}^2)\frac{1}{k^2+m^2-\Pi_{\overline{MS}}(k^2)}, which results in a factor of R. But Srednicki says on p.173 (trial version) that 'combined with the correction factor of R^(-1/2) for each field, we get a net factor of R^(1/2) for each external line when using the MSbar scheme'. But following your argumentation this should result in a 1, because the propagator is corrected by a factor of R (because of its two field corrections in <0|T\{\phi\phi\}|0>).


Thank you so much for your answers, I have been thinking about this for quite a while and I hadn't made any progress until your reply!

Okay, I'll refer you to Srednicki's words in the trial version. The key is in chapter 5, beginning on page 51 with the quote:

"However, our derivation of the LSZ formula relied on the supposition that the creation operators...this is a rather suspect assumption, and so we must review it."

Equation (5.18) is what you're interested in. You want <p|\phi(0)|0> to equal one. Why? So that the interacting theory reduces to the free-field theory.

Now turn to chapter 13, equation (13.17). That is the full-interacting propagator, with all the loops contained in it already. It has a simple pole at -m^2 with residue 1, so that it agrees with the free-field theory when the interactions are turned off (the free-field theory has a simple pole at -m^2). So what assumptions were used in deriving (13.17)? The main one is (13.8), 2nd line. That 2nd line is responsible for the first term in (13.17) that has a simple pole at -m^2. And that 2nd line used (5.18)!

Hope that helps in seeing all the relationships between the residue, <p|\phi(0)|0>, and the assumption of the LSZ formula that it can transition from free field to interacting field.

I believe pg 215 of Peskin and Schroeder's book discusses this also (you can read it free at google books). Their equation (7.9) has the same formula as Srednicki's (13.17), except the pole has residue Z and not 1.

edit:

summary -

1) <p|\phi(0)|0> is equal to one in the free-field theory. View this as the creation operator acting on the vacuum to the right to produce the state |p>, and <p|\phi(0)|0> = <p|p> = 1 for correct normalization of |p>.

2) Because we are using creation and annihilation operators of the free-field theory, and extending them to the interacting field theory in deriving the LSZ formula, it should make sense that in the limit that all interactions are turned off, <p|\phi(0)|0> equals one, even if \phi is not a free field but an interacting field.

3) It can be shown that the exact propagator is \Delta(k^2)=\frac{Z}{k^2+m^2-i\epsilon}+\int^{\infty}_{4m^2}ds\,\rho(s)\frac{1}{k^2+s-i\epsilon} (see Srednicki eqn. 13.17 or Peskin eqn. 7.9), where Z=|<k|\phi(0)|0>|^2.

4) Therefore if <k|\phi(0)|0> equals one, then Z=1, which means that the exact propagator has a pole at -m^2 with residue 1.

5) Hence, if the residue is not 1, then <k|\phi(0)|0> does not equal one. If <k|\phi(0)|0> is not equal to 1, then from 1), the creation operator does not produce a correctly normalized state in the free-field case, i.e., <k|k> is not equal to one. Therefore, one must normalize the creation operator to produce a correctly normalized state.
 
  • #26
Thank you so much RedX! I really understood this now! Wow!
 
  • #27
PJK said:
Thank you so much RedX! I really understood this now! Wow!

It's extremely tricky. Srednicki's book is really good, and I've only ever found two areas where he does a poor job, and this was one of them.

Srednicki's chapter on the derivation of the Lehmann-Källén form of the exact propagator (ch. 13) seems out of place, but is absolutely necessary, as it shows that the residue of the exact propagator must be one, which is used in the next chapter as a sort of boundary condition on the calculation of the propagator to 1-loop.

But the problem is that Srednicki doesn't emphasize that the reason the residue is one is the assumption that <p|\phi(0)|0>=1, or equivalently <p|\phi(x)|0>=e^{-ipx} (he mentions that he is using this assumption, but doesn't emphasize it). The residue without the assumption, if you actually follow it through by not inserting 1 for <p|\phi|0>, is |<p|\phi|0>|^2. Therefore if your residue is not 1, then <p|\phi|0> can't be 1, which means the creation and annihilation operators aren't normalized correctly. But the LSZ formula began with the assumption of correctly normalized creation and annihilation operators, which are used to create the in and out states from vacuum. So you have to divide them by |<p|\phi|0>|, which is the residue raised to the 1/2 power.
 
  • #28
Hi!

I have one more question: I do not understand at all how one gets eq 29.11 and (maybe) connected with this how Srednicki gets eq. 29.13...
I would expect that the O-Operators also include the fields of higher momenta. Where does the propagator for the higher momenta fields come from?

Sorry for bothering again!
 
  • #29
I have a small question about spinor manipulations. I've asked these kinds of things before, but can't find nor remember the answer anymore. Somehow I have issues with dotted and undotted notation. It's about the expression 35.29,

[\psi_{\dot{a}}\bar{\sigma}^{\mu\dot{a}c}\xi_c]^{\dagger} = \xi_{\dot{c}}(\bar{\sigma}^{\mu a \dot{c}})^* \psi_a

Now, from Linear algebra I remember that if I have an expression like xAy, with x,y vectors and A a matrix, I can write this as

xAy = x_i A_{ij}y_j = A_{ij}x_i y_j

Daggering this whole expression just means that I complex conjugate A and take its transpose, which basically amounts to switching the contraction:

(xAy)^{\dagger} = A^*_{ji}x^*_i y^*_j

So in xAy I contract x with the first index of A, while in the daggered expression I contract the complexified x with the second index of A. However, in the spinor expression I still contract \psi with the first index of sigma after daggering. This has something to do with the dotting, but I can't see what's going on. So, I would say that

[\psi_{\dot{a}}\bar{\sigma}^{\mu\dot{a}c}\xi_c]^{\dagger} = \xi_{\dot{a}}(\bar{\sigma}^{\mu \dot{a}c})^* \psi_c

Why is this wrong?
 
  • #30
haushofer said:
xAy = x_i A_{ij}y_j = A_{ij}x_i y_j

Adjointing a 1x1 matrix is just taking the complex conjugate. Therefore:

(xAy)^{\dagger} = A^{*}_{ij}x^{*}_i y^{*}_j

which disagrees with your result.

There is the issue of dotted indices becoming undotted and vice versa. If you change the dottedness of a spinor, then you must make the corresponding change to whatever it contracts with. Hence Srednicki's result is correct.
 
  • #31
But my A in my example is a general nxn matrix. To make it more concrete, you could take A to be 2x2, just like the Pauli matrices.

I understand that it doesn't make sense to contract dotted and undotted indices, I just wonder why the particular contraction order is taken. I see that in the Hermitian conjugation the dotted a-index of psi becomes undotted (conjugation brings you from one SU(2) sector into the other), but I don't understand why it's still contracted with the FIRST index of sigma. I would contract it with the SECOND, because sigma is also Hermitian conjugated.

So my reasoning would be: take the Hermitian conjugate of the whole expression, note that Hermitian conjugating spinors brings you from dotted to undotted and vice versa, complex conjugate the matrix sigma, and switch the contraction order, in which of course dotted indices are contracted with dotted and undotted with undotted. Our convention is also such that the Hermitian conjugate of 2 spinors reverses the order of the spinors.

This equation is before the remark that sigma is Hermitian, so it's just a linear algebra thing I would say.
 
  • #32
haushofer said:
But my A in my example is a general nxn matrix.
Of course A is, but xAy is a number. x is a vector, Ay is a vector, and xAy is their inner product. So (xAy)^{\dagger}=(xAy)^*
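This is easy to verify numerically (a sketch of my own, with random hypothetical data): since xAy is a 1x1 object, its dagger is plain complex conjugation, and the conjugate keeps the SAME index placement as the original contraction, not the transposed one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

lhs = np.conj(x @ A @ y)  # (x A y)^dagger: just the complex conjugate
# sum_ij A*_ij x*_i y*_j -- same contraction pattern as x_i A_ij y_j
rhs = np.einsum('ij,i,j->', np.conj(A), np.conj(x), np.conj(y))
print(np.allclose(lhs, rhs))  # True
```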
 
  • #33
Ah, of course, how stupid of me! Thanks RedX and Landau!
 
  • #34
Hi, I was looking at loop corrections in Srednicki's chapter 14, and I have a question about equation 14.8

From general considerations one knows that the exact propagator has a pole at k^2 = -m^2 with residue one. But why does this demand that

\frac{\partial \Pi}{\partial (k^2)}\Big|_{k^2 = -m^2} = 0

?
 
  • #35
haushofer said:
Hi, I was looking at loop corrections in Srednicki's chapter 14, and I have a question about equation 14.8

From general considerations one knows that the exact propagator has a pole at k^2 = -m^2 with residue one. But why does this demand that

\frac{\partial \Pi}{\partial (k^2)}\Big|_{k^2 = -m^2} = 0

?

The propagator is

\Delta(k^2)=\frac{1}{k^2+m^2-i\epsilon-\Pi(k^2)}

The residue is given by multiplying this by (k^2+m^2):

\frac{k^2+m^2}{k^2+m^2-i\epsilon-\Pi(k^2)}

and setting k^2=-m^2. However, before we do this, expand \Pi(k^2) in a power series about k^2=-m^2:

\frac{k^2+m^2}{k^2+m^2-i\epsilon-\Pi(-m^2)-\frac{\partial\Pi(k^2)}{\partial k^2}|_{k^2=-m^2}(k^2+m^2)-...}

From this you can see that after dividing the numerator and denominator by (k^2+m^2), the only way you get that the residue is equal to 1 is if the partial of Pi is equal to 0. Otherwise the residue in general would be 1/(1-partial Pi) after dividing the numerator and denominator by (k^2+m^2) and then setting k^2=-m^2 (note that Pi(-m^2)=0).

Note that it is not absolutely necessary that the derivative of Pi is equal to zero. If it's not, then the residue is 1/(1-partial Pi). This is considered in chapter 27 of Srednicki. Having the partial derivative of Pi equal to zero is sometimes called the on-shell renormalization scheme. It is necessary that Pi(-m^2) is equal to zero however, or else there won't be a pole at k^2=-m^2
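The expansion above can be checked with sympy (my own sketch, with made-up symbol names): model \Pi(k^2) by its first-order Taylor term about k^2=-m^2 (using \Pi(-m^2)=0) and read off the residue of the propagator at the pole.

```python
import sympy as sp

s = sp.Symbol('s', real=True)                 # stands for k^2
m = sp.Symbol('m', positive=True)
Pi_prime = sp.Symbol('Pi_prime', real=True)   # dPi/d(k^2) at k^2 = -m^2

# Pi(s) ~ Pi(-m^2) + Pi'(-m^2)*(s + m^2), with Pi(-m^2) = 0
Pi = Pi_prime * (s + m**2)
Delta = 1 / (s + m**2 - Pi)

# residue at s = -m^2: multiply by (s + m^2) and take the limit
residue = sp.limit((s + m**2) * Delta, s, -m**2)
print(sp.simplify(residue))
```

The result is 1/(1 - Pi_prime), which is 1 exactly when the derivative of \Pi vanishes at the pole, as in the on-shell scheme.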
 
  • #36
That's very clarifying. Thanks RedX!
 
  • #37
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.
 
  • #38
RedX said:
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.

"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e. we define a to act in the same way, but now time-dependent due to the interactions.

See Weinberg for further information.
 
  • #39
ansgar said:
"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e. we define a to act in the same way, but now time-dependent due to the interactions.

See Weinberg for further information.

So the free-field is given by the Fourier expansion:

\phi(x)=\int d^3 \tilde k [a(k)e^{ikx}+a^\dagger(k)e^{-ikx}]

where k is on-shell and d^3 \tilde k=\frac{d^3k}{2E_k}.

Adding time dependence to the coefficients leads to:

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

However, deriving this:

a(k)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

where A\bar \partial_0 B=A\partial_0 B-B\partial_0 A

only works when a(k,t)=a(k), i.e., when a(k) is not a function of time.

Does this mean that in the interacting theory, \phi(x) can't be written as:


\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]
 
  • #40
Yes, you can still write the field phi like that -- it is simply the Fourier transform of the field phi. Remember that the field operator phi satisfies the equations of motion. In the free case these equations are linear in the field. When you take the Fourier transform of the field phi, the components a(k,t) also satisfy an equation of motion. What's nice about a free field theory is that all these equations of motion for the modes a(k,t) decouple, and can be solved separately -- this is where the phase factor exp[iw_k t] comes from. So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interacting version of the equations of motion, which is non-linear (the phi^3 interaction contributes a term quadratic in phi to these equations). As a consequence the equations of motion of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to resolve the time-dependent structure of a(k,t) is through perturbation theory.

But to get back to your question: the a(k,t) are the Fourier components of the field phi at time t. You can always define those. But only in the free-field case do you have a simple relation between a(k,t_1) and a(k,t_2). This can be traced back to the decoupling of the equations of motion for the Fourier components. For the interacting case you need perturbation theory.
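
The decoupling statement above can be sketched explicitly (a sketch only; the sign and normalization of the source term depend on conventions, and I take L_1 = (g/3!)\phi^3):

```latex
% Spatial Fourier transform of the Heisenberg field at fixed time t:
\tilde\phi(\vec{k},t) \equiv \int d^3x\; e^{-i\vec{k}\cdot\vec{x}}\, \phi(\vec{x},t).
% The free Klein-Gordon equation (\partial_t^2 - \nabla^2 + m^2)\phi = 0 becomes
\partial_t^2\, \tilde\phi(\vec{k},t) + \omega_k^2\, \tilde\phi(\vec{k},t) = 0,
\qquad \omega_k^2 = \vec{k}^2 + m^2,
% one decoupled harmonic oscillator per mode, solved by e^{\mp i\omega_k t}.
% With the \phi^3 interaction the field equation picks up a \phi^2 source, which in
% Fourier space is a convolution:
\partial_t^2\, \tilde\phi(\vec{k},t) + \omega_k^2\, \tilde\phi(\vec{k},t)
  = \tfrac{g}{2} \int \frac{d^3k'}{(2\pi)^3}\; \tilde\phi(\vec{k}',t)\, \tilde\phi(\vec{k}-\vec{k}',t),
% so the mode at \vec{k} is driven by the modes at all other momenta.
```

This is why, as stated above, a(k,t) at one momentum cannot be solved without knowing all the others.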
 
  • #41
xepma said:
So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interacting version of the equations of motion, which is non-linear (the phi^3 interaction contributes a term quadratic in phi to these equations). As a consequence the equations of motion of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to resolve the time-dependent structure of a(k,t) is through perturbation theory.

So for the free field:

a(k,t)e^{i\vec{k}\cdot\vec{x}}=a(k)e^{-i\omega t}e^{i\vec{k}\cdot\vec{x}}=a(k)e^{ik_\mu x^\mu}

but in general the field \phi(x,t) is a linear combination of:

a(k,t)e^{ikx} and hermitian conjugate.

This sounds good and is mathematically correct, but the one problem I have with it is that this equation no longer seems to hold:

a(k,t)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x,t)

i.e., solving backwards for a(k,t) in terms of \phi(x,t)

I know you said that solving for a(k,t) is unsolvable in the interacting case, as the equations are nonlinear so a(k,t) depends not only on coefficients at past times but also coefficients with different momenta. But I think you were referring to a simple time dependence like a(k,t)=(sin t)^3 t^2 log(t) a(k) . However, can you write a(k,t) not in terms of a definite function of t, but in terms of the unknown interacting field \phi(x,t)?
According to Srednicki, you can, and the answer is the same as in the free-field case:

a(k,t)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x,t)

except now the field \phi(x,t) is interacting and not free. I'm not sure how this is true in the interacting case.
 
  • #42
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum |\Omega \rangle
 
  • #43
ansgar said:
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum |\Omega \rangle

Sure. But I should say that I was a bit careless with the notation. In some contexts e^{ikx} is a contraction of 4-vectors, and in others it is a contraction of 3-vectors. The formula that xepma is referring to, I believe, is the 3-vector case. Also, I'm using the (-+++) signature.

\int d^3x e^{i\vec{k}\cdot\vec{x}} \bar \partial_0 \phi(x)

The time derivative acting on e^{i\vec{k}\cdot\vec{x}} is zero, so this expression becomes

\int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 \phi(x) = \int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 \Bigl[\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]\Bigr]

and I don't see how one can get rid of the time derivative of the creation and annihilation operators to get just the creation and annihilation operators without any derivatives.

So the expression is not equal to just a(k,t).
----------------------------------------------------------------------------------------
correction:

actually, I got everything mixed up, so ignore everything above this correction. here's the new post:

\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

So taking the time derivatives, this expression becomes

i \int d^3x\, k_0 e^{-ikx} \phi(x) + \int d^3x\, e^{-ikx} \partial_0 \Bigl[\int d^3 \tilde k [a(k,t)e^{i\vec{k}\cdot\vec{x}}+a^\dagger(k,t)e^{-i\vec{k}\cdot\vec{x}}]\Bigr]

but this to me runs into the same problem, that you'll get time derivatives of the creation and annihilation operator, so there is no way to get just the creation and annihilation operator without time derivatives.
 
Last edited:
  • #44
Never mind, I got it. It wasn't exactly pretty and I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) \int d^3x e^{-ikx} \bar \partial_0 \phi(x)
and for the time derivative of \phi(x), use:

\dot{\phi}=i[H,\phi]=iH\phi-i\phi H

Then show that (1) operating on |0> gives zero, and that (1) operating on a^\dagger(q)|0> is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)
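
For what it's worth, there is also a route that avoids acting on particular states: take the inversion formula as the *definition* of a(k,t), and check that the mode expansion then holds as an identity at every t, without using any equation of motion (a sketch; I keep Srednicki's overall factor of +i and the measure \widetilde{dk} = d^3k/(2\pi)^3 2\omega, metric (-+++)):

```latex
% Define, at each fixed time t:
a(\vec{k},t) \equiv i \int d^3y\; e^{-iky}\, \overleftrightarrow{\partial_0}\, \phi(y)
  = \int d^3y\; e^{-iky}\, \bigl[\, \omega\, \phi(\vec{y},t) + i\, \dot\phi(\vec{y},t) \,\bigr].
% Substitute this and its conjugate back into the would-be expansion; the time
% phases cancel pairwise, leaving only spatial exponentials:
\int \widetilde{dk}\, \bigl[\, a(\vec{k},t)\, e^{ikx} + a^\dagger(\vec{k},t)\, e^{-ikx} \,\bigr]
 = \int \frac{d^3k}{(2\pi)^3\, 2\omega} \int d^3y\,
   \Bigl[ \bigl( e^{i\vec{k}\cdot(\vec{x}-\vec{y})} + e^{-i\vec{k}\cdot(\vec{x}-\vec{y})} \bigr)\, \omega\, \phi(\vec{y},t)
        + \bigl( e^{i\vec{k}\cdot(\vec{x}-\vec{y})} - e^{-i\vec{k}\cdot(\vec{x}-\vec{y})} \bigr)\, i\, \dot\phi(\vec{y},t) \Bigr]
 = \phi(\vec{x},t).
% The \omega\phi terms produce \delta^3(\vec{x}-\vec{y}); the \dot\phi terms cancel
% under \vec{k} \to -\vec{k}. No equation of motion was used, so the identity holds
% for the interacting field too: a(k,t) is just a repackaging of \phi and \dot\phi
% at time t.
```

On this reading, Srednicki's eq. (5.10) is simply a definition of a(k,t); what is *not* free-field-like in the interacting theory is only the time evolution of a(k,t).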
 
  • #45
I already posted this in the homework/course section but got no reply, so I'm crossposting here (sorry for this).


Problem with the ordering of integrals in the derivation of the Lehmann-Källén form of the exact propagator in Srednicki's book.

We start with the definition of the exact propagator in terms of the 2-point correlation function and introduce the complete set of momentum eigenstates and then define a certain spectral density in terms of a delta function. But the spectral density is also a function of 'k', so we cannot take the spectral density outside the integral over 'k'. Since that is not possible, the subsequent manipulations fail too.


2. Homework Equations

In Srednicki's book :
Equations 13.11 and 13.12

If that interchange is not valid, the use of 13.15 to get 13.16 is not possible either.

3. The Attempt at a Solution

I don't see how it is possible to derive the equation without that interchange.

I'd appreciate any clarifications on this issue. Am I missing some trivial thing?
 
  • #46
No, the spectral density is only a function of s.

use eq. 13.9

we get

|\langle k,n|\phi(0)|0\rangle|^2, which is just a (nonnegative real) number.
 
  • #47
Sorry, I still do not get it. Isn't |\langle k,n|\phi(0)|0\rangle|^{2} dependent on 'k'? Could you please elaborate?
 
  • #48
msid said:
Sorry, I still do not get it. Isn't |\langle k,n|\phi(0)|0\rangle|^{2} dependent on 'k'? Could you please elaborate?

you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf
 
  • #49
\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.
 
  • #50
ansgar said:
you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf

\phi(0) is not a number; it is an operator at a specified location in spacetime, which in this case is the origin.

Avodyne said:
\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.

It makes sense that it can only depend on k^2, and k^2 = -M^2 is what we are summing over. This is acceptable provided the interchange of the summation over 'n' and the integral over 'k' is valid. Thanks a lot for the clarification, Avodyne.
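
To spell out how the delta function resolves the interchange (schematically, in the spirit of Srednicki 13.11-13.16; exact normalizations and the separate one-particle term are suppressed here):

```latex
% Insert a complete set of states |k,n> with momentum k and mass M_n:
\langle 0|\phi(x)\phi(y)|0\rangle
 = \sum_n \int \widetilde{dk}\; e^{ik(x-y)}\, \bigl|\langle k,n|\phi(0)|0\rangle\bigr|^2,
\qquad k^0 = \sqrt{\vec{k}^2 + M_n^2}.
% Insert 1 = \int_0^\infty ds\, \delta(s - M_n^2) into each term and exchange
% the n-sum with the s-integral:
 = \int_0^\infty ds \int \widetilde{dk}\; e^{ik(x-y)}
   \Bigl[ \sum_n \delta(s - M_n^2)\, \bigl|\langle k,n|\phi(0)|0\rangle\bigr|^2 \Bigr],
\qquad k^0 = \sqrt{\vec{k}^2 + s}.
% The delta fixes the contributing states' mass^2 to s, so the matrix element
% depends on k only through k^2 = -s (it is a Lorentz scalar). The bracket is
% therefore a function of s alone, \rho(s), and comes out of the k-integral:
 = \int_0^\infty ds\; \rho(s) \int \widetilde{dk}\; e^{ik(x-y)}
 = \int_0^\infty ds\; \rho(s)\, \Delta(x-y; s),
% the free two-point function at mass^2 = s, smeared with \rho(s).
```

The point is that \rho(s) is never pulled out of the k-integral while it still depends on k; the delta function converts the k-dependence into s-dependence first.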
 