General: Questions about Srednicki's QFT

  • Thread starter: haushofer
  • Tags: QFT
  • #51
This is a great thread; I really need to read it thoroughly when I get the chance. I've just gotten through the spin-zero part of Srednicki and started on the spin-1/2 stuff, but the group representation material is fazing me a bit here: all this stuff about the (1,2) representation, the (2,2) vector rep, etc. I was wondering if anyone could explain what this means, or recommend any good books/online references that go through this material?
 
  • #52
1 means singlet, 2 means doublet. It is "just" the addition of two spin-1/2 particles; same algebra.
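A minimal sketch of that statement (the helper names are mine, not from the book): combining two SU(2) irreps of spins j1 and j2 gives every spin from |j1 - j2| up to j1 + j2, so doublet times doublet decomposes into singlet plus triplet.

```python
# Decompose the tensor product of two SU(2) irreps labeled by spin.
# A spin-j irrep has dimension 2j+1; j1 (x) j2 = |j1-j2| (+) ... (+) (j1+j2).
from fractions import Fraction

def su2_tensor_product(j1, j2):
    """Return the list of spins appearing in j1 (x) j2."""
    j1, j2 = Fraction(j1), Fraction(j2)
    lo, hi = abs(j1 - j2), j1 + j2
    return [lo + n for n in range(int(hi - lo) + 1)]

def dim(j):
    return int(2 * Fraction(j) + 1)

# Two spin-1/2 doublets: 2 (x) 2 = 1 (+) 3 (singlet plus triplet).
spins = su2_tensor_product(Fraction(1, 2), Fraction(1, 2))
print([dim(j) for j in spins])  # -> [1, 3]
```

The same bookkeeping gives the (2,2) of SL(2,C) four components, which is why it is the vector representation.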
 
  • #53
xepma said:
But in the interacting case the field obeys the interacting version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence, the equations of motion of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

<0|a(k_{final},t=129) a^\dagger(k_{initial},t=-10)|0>

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

In perturbation theory, is there an assumption that at t = ±infinity all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?
 
  • #54
RedX said:
Never mind. I got it. It wasn't exactly pretty, so I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) \qquad \int d^3x\, e^{-ikx}\, \bar\partial_0\, \phi(x)
and for the time derivative of \phi(x), use:

\dot{\phi}=i[H,\phi]=iH\phi-i\phi H

Then show that (1) operating on |0> gives zero, and that (1) operating on a^\dagger(q)|0> is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)

I'm not convinced this is enough. You would need to see if it holds for all possible states.

I did, however, find a different way of writing the inverse:

a(k,t) = \int d^3x\, e^{-ikx+i\omega t}\left[\omega \phi(x,t) + i \Pi(x,t)\right]

where \Pi(x,t) is the field conjugate to \phi(x,t) (which is just the time-derivative). Now the mode expansion of the conjugate field is

\Pi(x,t) = -i \int\frac{d^3k}{(2\pi)^3(2\omega)}\, \omega\left[a(k,t) e^{ikx-i\omega t} - a^\dag(k,t) e^{-ikx+i\omega t}\right]

This is the same expansion as in the non-interacting case, but again the modes pick up a time dependence. The expansion is restricted to this form by the equal-time commutation relations between the field phi and its conjugate. So in conclusion, yes, the relation should hold in the interacting case.

I probably ran into the same problem as you: acting with the time derivative on the field phi generates time derivatives of a and a^dag as well. But I think the resolution lies in the fact that the basis of modes is complete, so these time derivatives can be written as a linear sum of the modes a and a^\dag as well.
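For what it's worth, the inversion formula can be sanity-checked for a single free mode (a toy classical check with my own normalization, not Srednicki's 2ω measure): if phi(t) = a e^{-iωt} + a* e^{iωt} and Pi = dphi/dt, then a = e^{iωt}(ω phi + i Pi)/(2ω) at every time.

```python
# Toy check of the mode-inversion formula for a single free oscillator mode:
# phi(t) = a e^{-iwt} + a* e^{iwt},  Pi = d(phi)/dt,
# then a = e^{iwt} (w*phi + i*Pi) / (2w), independently of t.
import cmath

w = 1.7          # mode frequency (arbitrary)
a = 0.3 - 0.8j   # mode amplitude (arbitrary)

for t in (0.0, 0.5, 2.3):
    phi = a * cmath.exp(-1j * w * t) + a.conjugate() * cmath.exp(1j * w * t)
    Pi = -1j * w * a * cmath.exp(-1j * w * t) \
         + 1j * w * a.conjugate() * cmath.exp(1j * w * t)
    a_rec = cmath.exp(1j * w * t) * (w * phi + 1j * Pi) / (2 * w)
    assert abs(a_rec - a) < 1e-12
print("inversion recovers a at every t")
```

In the interacting theory a itself becomes time dependent, which is exactly the complication discussed above; the algebraic inversion per se is unchanged.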
 
  • #55
RedX said:
I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

<0|a(k_{final},t=129)\, a^\dagger(k_{initial},t=-10)|0>

Yes, you can solve the correlators in that case.

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

You don't have to do anything with your Fock space. The virtual particles are intermediate states that pop up when you perturbatively determine the time dependence of the fields and the modes. They are a mathematical construction of perturbation theory. If you could solve the time dependence exactly, you wouldn't need these virtual particles.

Let me give an example of why the time evolution of the mode operators is problematic. The \phi^4 interaction looks like:

H_I = \int d^3x \phi^4(x)

Written in terms of the modes this is something like

H_I = \int d^3 k_1\, d^3k_2\, d^3k_3\, d^3k_4\, (2\pi)^3\delta(k_1+k_2+k_3+k_4)\, a^\dag_{k_1}a^\dag_{k_2} a_{k_3} a_{k_4}
This is probably not completely correct, but the point is that the interaction is given by (a sum over) a product of two creation and two annihilation operators subject to momentum conservation.

Now, the time evolution of the mode operator a is given by (Heisenberg picture)

-i\hbar \partial_t a = [H_0 + H_I,a]

Now the commutator of a with the interaction terms H_I will generate (a sum over) a product of three mode operators, a^\dag a a. To solve it we need the time dependence of this product. You guessed it, this is given by:

-i\hbar \partial_t (a^\dag a a) = [H_0 + H_I,a^\dag a a]

But this commutator generates terms with an even larger product of mode operators, which, in turn, also determine the time evolution of the operator a. And so the problem is clear: the time evolution of a product of mode operators is determined by the time dependence of an even larger product of mode operators.

I hope I'm not being too vague here... ;)
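The hierarchy described above can be illustrated in a truncated single-mode Fock space (a toy stand-in for the phi^4 interaction; the construction is illustrative and not from the book): the commutator [H_I, a] is not a linear combination of 1, a, and a^dag, because it connects Fock states three levels apart.

```python
# Truncated-Fock-space illustration that [H_I, a] generates higher products
# of mode operators.  Here H_I ~ x^4 with x ~ (a + a^dag)/sqrt(2), a toy
# single-mode stand-in for the phi^4 interaction (coupling set to 1).
import numpy as np

N = 20                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
ad = a.T                                      # creation operator
x = (a + ad) / np.sqrt(2)
H_int = x @ x @ x @ x                         # quartic interaction

comm = H_int @ a - a @ H_int                  # [H_I, a]

# Any *linear* combination alpha*1 + beta*a + gamma*a^dag only connects
# Fock states with |Delta n| <= 1.  But [H_I, a] connects |0> to |3>,
# so cubic products of mode operators must appear:
print(abs(comm[3, 0]))    # nonzero (analytically sqrt(6) for this toy H_I)
```

So already at the first step the Heisenberg equation for a feeds in a^dag a a terms, exactly as in the hierarchy above.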

In perturbation theory, is there an assumption that at t = ±infinity all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?

Well, you're touching on something deep here. The assumption is, indeed, that at t = ±infinity the theory is non-interacting. At these instances we can construct the Fock space, and we assume the Fock space is the same for the interacting case. It's not a pretty assumption at all that these two Fock spaces (interacting vs. non-interacting) are the same, and, as far as I know, it's not clear it should hold. But I don't think there's a way around this at the moment... If you could define the Fock space directly in your interacting theory, that would be great. But I have no clue how to do that.
 
  • #56
xepma said:
I'm not convinced this is enough. You would need to see if it holds for all possible states.

Using the (-+++) metric and \dot{\phi}=i[H,\phi]=iH\phi-i\phi H:

i\int d^3x\, e^{-ikx} \bar \partial_0 \phi(x)= i\left[i\int d^3x\, e^{-ikx}H\phi(x)-i\int d^3x\, e^{-ikx}\phi(x)H-i\int d^3x\, e^{-ikx} E_k\phi(x)\right]

Using that \phi(x)=\int d^3 \tilde q [a(q,t)e^{iqx}+a^\dagger(q,t)e^{-iqx}] this becomes:

=-H\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]
+\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]H
+E_k\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]

Now consider the operation of this on a state |M> with energy M. The 2nd terms (the creation operators) in each of the lines cancel, because the Hamiltonian in the first line pulls out an energy of -(M+E_k), the second line pulls out an energy of M, and the third line is just E_k: these add to give zero. So far so good, so examine the 1st terms (the destruction operators). Consider two cases: |M> contains the particle |k>, or it does not. If it does not, then all the destruction operators produce zero acting on |M>. If |M> does contain |k>, then the Hamiltonian brings out a -(M-E_k) in the 1st line, a +M in the second line, and an E_k in the 3rd line. This adds to 2E_k, and multiplying by \frac{a(k,t)}{2E_k} leaves just the destruction operator.

Anyways, there are some subtleties that I'm bothered by, but I'm convinced that a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) is still true in an interacting theory. What's remarkable is that if you plug this expression in for a(k,t) to calculate the commutator of the a's, and use the canonical commutation relations, then you get the standard equal-time commutation relations for a(k,t):

[a(k,t),a^\dagger(q,t)]=\delta^3(k-q)(2\pi)^32E_k

All that's required is that \Pi=\frac{\partial \mathcal L}{\partial \dot{\phi}}=\dot{\phi}, so that you can identify the time-derivative terms in a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) as the canonical momentum, so the commutation relation is easy to compute.

So basically, we're pretty much required to have the Lagrangian be no more than 2nd order in time derivatives, so that the canonical momentum is just the time derivative of the field, or there are no equal-time commutation relations for the creation operators.

So basically the relation a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) is actually more fundamental than the Lagrangian!

Anyways, I was reading Srednicki again, and he showed that a^\dagger(k,t) can actually create multiparticle states when acting on the vacuum! This is equation (5.23). However, as t goes to +- infinity, you don't have this happening.

This is interesting stuff, but I heard that getting into more detail on this stuff takes constructive quantum field theory.
 
  • #57
The notion of a (so-called real) particle is ambiguous at "finite" time, basically due to the HUP. Or, to say it the other way around, particles become well-defined after enough time has passed in order to subdue the HUP. The poles of the correlation function are (what should be) interpreted as particles, not the lines on Feynman diagrams. In the spirit of QM, only a superposition of different particle number states, not a definite particle number state, exists at finite time.
 
  • #58
Equation 11.20 of Srednicki's book is the expression for the probability per unit time for the scattering of two particles. It is equal to a Lorentz invariant part times a non-Lorentz invariant part, and the non-Lorentz invariant part is:

\frac{1}{E_1E_2V}

I'm having trouble seeing how aliens on a spaceship observing the LHC will see everything slowed down by \frac{1}{\sqrt{1-v^2}}, where v is the velocity of the Earth relative to their spaceship.

In equation 11.48 for the decay rate it's obvious this is true, that aliens will observe a decay time longer by \frac{1}{\sqrt{1-v^2}}.

But is there a quick way to verify that \frac{1}{E_1E_2V} divided by \frac{1}{E_1'E_2'V'} is equal to \frac{1}{\sqrt{1-v^2}} in the primed frame?
 
  • #59
Okay, I sort of intuitively derived it, after I read Weinberg's section on cross-sections.

First of all, I'm a bit shocked that if you take the probability per unit time, and divide by the flux, you get something Lorentz invariant. Weinberg on page 138 Vol 2, says: "It is conventional to define the cross-section to be a Lorentz-invariant function of 4-momenta."
Seriously, is that how the cross-section is defined? I thought it would be defined by the experimentalist reporting his or her results by dividing by the flux, because that's all that they can do! By sheer luck when you do that, you get something that's Lorentz-invariant!

Anyways, dividing by the flux, you get a now Lorentz-invariant part that looks like this:

\frac{1}{E_1E_2V}\frac{V}{u}

where \frac{u}{V} is the flux, and u is the relative velocity defined as:

u=\frac{\sqrt{(p_1\cdot p_2)^2-m_{1}^2m_{2}^2}}{E_1E_2}
in 3.4.17 of Weinberg.

Now examine the \frac{V}{u} term.

If you are boosting away from the COM frame, V undergoes a length contraction, and also the length numerator of u undergoes a length contraction. These cancel. But the time denominator of u undergoes a time dilation, so overall \frac{V}{u} increases. That means
\frac{1}{E_1E_2V} must decrease, since their product is Lorentz-invariant. So the probability per unit time is smaller in a boosted frame, which is time dilation.

My special relativity is really shoddy, so I just assumed that the COM frame of two particles is like a rest frame of one particle, so that a boost away from this frame results in a length contraction and a time dilation. Does anyone know how to do this more rigorously (legitimately)?
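One piece of this can be checked concretely (a numerical sketch; the function names are mine): the factor F = sqrt((p1·p2)^2 - m1^2 m2^2) appearing in the numerator of Weinberg's u is built entirely from invariants, so it is unchanged by any boost, longitudinal or not.

```python
# Numerical check that F = sqrt((p1.p2)^2 - m1^2 m2^2), the invariant in
# Weinberg's relative velocity u = F/(E1 E2), is unchanged by a boost.
import numpy as np

def minkowski_dot(p, q):                 # metric (+,-,-,-)
    return p[0]*q[0] - p[1:] @ q[1:]

def boost_z(p, beta):                    # boost along z with velocity beta
    g = 1.0 / np.sqrt(1 - beta**2)
    return np.array([g*(p[0] - beta*p[3]), p[1], p[2], g*(p[3] - beta*p[0])])

def four_momentum(m, p3):                # on-shell 4-momentum
    p3 = np.asarray(p3, float)
    return np.array([np.sqrt(m**2 + p3 @ p3), *p3])

def F(p1, p2, m1, m2):
    return np.sqrt(minkowski_dot(p1, p2)**2 - (m1*m2)**2)

p1 = four_momentum(0.5, [0.1, -0.3, 2.0])
p2 = four_momentum(1.2, [0.4, 0.2, -1.5])

f0 = F(p1, p2, 0.5, 1.2)
f1 = F(boost_z(p1, 0.77), boost_z(p2, 0.77), 0.5, 1.2)
assert abs(f0 - f1) < 1e-9
print("flux factor F is boost invariant")
```

The frame dependence of the probability per unit time then sits entirely in the E1 E2 V factors, as argued above.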
 
  • #60
I think that you must restrict to the case pA.pB = -|pA||pB| (where A and B are the incoming particles), and therefore restrict the set of Lorentz transformations to rotations and only longitudinal boosts. I'm pretty sure that's what Weinberg means, but I don't have his book, so I can't check. The cross section is invariant to longitudinal boosts, but not transverse boosts.
 
  • #61
turin said:
I think that you must restrict to the case pA.pB = -|pA||pB| (where A and B are the incoming particles), and therefore restrict the set of Lorentz transformations to rotations and only longitudinal boosts. I'm pretty sure that's what Weinberg means, but I don't have his book, so I can't check. The cross section is invariant to longitudinal boosts, but not transverse boosts.

I sort of read the same thing in some lecture notes for experimentalists. Basically, they say that since the cross-section is an area, a boost perpendicular to the area results in no change. That would seem to imply that if the boost is not perpendicular to the area, then the dimension of the area in the direction of the boost should get length contracted, decreasing the cross-section.

But Weinberg's expression for the cross section is Lorentz-invariant to boosts in any direction:

d\sigma=\frac{1}{4\sqrt{(k_1\cdot k_2)^2-m_{1}^2m_{2}^2}}\, |\mathcal T|^2\, dLIPS_n(k_1+k_2)

where all vectors in that expression are 4-vectors, and k_1,k_2 are the incoming 4-momenta of the two colliding particles.
 
  • #62
I have a number of basic conceptual questions from chapter 5 on the LSZ formula. Firstly, Srednicki defines a^{\dag}_1 :=\int d^3k f_1(\vec{k})a^{\dag}(\vec{k}), where f_1(\vec{k}) \propto exp[-(\vec{k}-\vec{k}_1)^2/4\sigma^2].

I understand how this creates a particle localized in momentum space near \vec{k}_1, as it is effectively a weighted sum over momenta, but I don't understand how this endows the particle with any kind of position, or a position near the origin for that matter, as Srednicki states.

Also, Srednicki asks us to consider the state a^{\dag}_{1}\mid 0 \rangle. Why exactly does this state propagate and spread out, and why is it localized far from the origin as t \rightarrow \pm \infty?

I follow Srednicki's mathematics leading to 5.11, but I can't see why the creation ops would no longer be time-independent in an interacting theory, or why indeed there would be any issue in assuming that free-field creation ops would work comparably in an interacting theory.
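On the localization question: a one-dimensional numerical sketch (toy parameters of my own choosing) shows that the Gaussian momentum profile f_1(k) Fourier-transforms to a packet centered at the origin in position space, with width of order 1/sigma, which is why Srednicki can say the particle starts out near the origin.

```python
# The state a_1^dag |0> uses the profile f_1(k) ~ exp(-(k-k1)^2/(4 sigma^2)).
# In 1d its Fourier transform satisfies
# |int dk f_1(k) e^{ikx}| = const * exp(-sigma^2 x^2):
# a packet peaked at the ORIGIN in x, regardless of the central momentum k1.
import numpy as np

k1, sigma = 5.0, 0.8
k = np.linspace(k1 - 12*sigma, k1 + 12*sigma, 4001)
f1 = np.exp(-(k - k1)**2 / (4 * sigma**2))
dk = k[1] - k[0]

def packet(x):
    # position-space amplitude: int dk f_1(k) e^{ikx}
    return np.sum(f1 * np.exp(1j * k * x)) * dk

xs = np.linspace(-4.0, 4.0, 81)
amps = np.array([abs(packet(x)) for x in xs])
print("packet peaks at x =", xs[np.argmax(amps)])   # the origin
```

The phase e^{ik1 x} carries the momentum information, but the envelope, and hence the position, is centered at x = 0; under free time evolution the stationary-phase point then moves with the group velocity, which is why the packet ends up far from the origin as t goes to ±infinity.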
 
  • #63
I also have a few questions regarding equations from Srednicki's book, though I'm afraid they are all rather trivial.

A draft version of the book can be found at http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf

First, why is equation 14.33, {A^{\varepsilon /2}} = 1 + \frac{\varepsilon }{2}\ln A + O({\varepsilon ^2}) true?

Second, equations 28.19 to 28.21, where it is argued that from
0 = (1 + \frac{\alpha G_1'(\alpha)}{\varepsilon} + \frac{\alpha G_2'(\alpha)}{\varepsilon^2} + ...)\frac{d\alpha}{d\ln \mu} + \varepsilon \alpha

and some physical reasoning in the paragraph below this equation, we should have \frac{d\alpha}{d\ln \mu} = -\varepsilon \alpha + \beta(\alpha)

Now, I do not understand how the terms on the RHS of this equation are fixed. The text says the beta function is determined by matching the O(\varepsilon^0) terms, which should give \beta(\alpha) = \alpha^2 G_1'(\alpha)

What O({\varepsilon ^0}) terms? How to match them?
Also, what are the O({\varepsilon}) terms?

thank you
 
  • #64
Lapidus said:
why is equation 14.33, {A^{\varepsilon /2}} = 1 + \frac{\varepsilon }{2}\ln A + O({\varepsilon ^2}) true?
A^{\varepsilon /2}=\exp[({\varepsilon /2})\ln A].

Now expand the exponential in powers of \varepsilon.
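The expansion can be checked symbolically (a sketch using sympy, not part of the book):

```python
# Expanding A^{eps/2} = exp((eps/2) ln A) in powers of eps reproduces 14.33:
# A^{eps/2} = 1 + (eps/2) ln A + O(eps^2).
import sympy as sp

A, eps = sp.symbols('A epsilon', positive=True)
expansion = sp.series(A**(eps/2), eps, 0, 2).removeO()
print(expansion)   # matches 1 + (epsilon/2)*log(A)
```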
Lapidus said:
What O({\varepsilon ^0}) terms? How to match them? Also, what are the O({\varepsilon}) terms?
Take the equation

\frac{{d\alpha }}{{d\ln \mu }} = - \varepsilon \alpha + \beta (\alpha )

and plug it into

0 = (1 + \frac{\alpha G_1'(\alpha)}{\varepsilon} + \frac{\alpha G_2'(\alpha)}{\varepsilon^2} + ...)\frac{d\alpha}{d\ln \mu} + \varepsilon \alpha.

This is supposed to be true for all \varepsilon, so if we do a Laurent expansion in powers of \varepsilon, the coefficient of each power of \varepsilon must be zero. The coefficient of \varepsilon^0 (that is, the constant term with no powers of \varepsilon) is

\alpha^2 G'_1(\alpha)-\beta(\alpha),

so this must equal zero.
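The matching can also be done symbolically (a sympy sketch; the names G1p and beta are my shorthand, and I keep only the G_1 term of the series, which is all the epsilon^0 matching needs):

```python
# Matching powers of eps in
#   0 = (1 + alpha*G1'(alpha)/eps + ...) * d(alpha)/d(ln mu) + eps*alpha
# with the ansatz d(alpha)/d(ln mu) = -eps*alpha + beta(alpha).
import sympy as sp

eps, alpha = sp.symbols('epsilon alpha', positive=True)
G1p = sp.Function('G1p')(alpha)        # shorthand for G_1'(alpha)
beta = sp.Function('beta')(alpha)

ansatz = -eps*alpha + beta
eq = sp.expand((1 + alpha*G1p/eps) * ansatz + eps*alpha)

# The eps^1 pieces (-eps*alpha and +eps*alpha) cancel identically; the
# eps^0 coefficient is beta(alpha) - alpha^2 G_1'(alpha), which must vanish:
coeff_eps0 = eq.coeff(eps, 0)
print(coeff_eps0)
```

Setting that coefficient to zero gives exactly beta(alpha) = alpha^2 G_1'(alpha).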
 
  • #65
Thanks, Avodyne!

I got two more. (Actually, I have plenty more.)

In post #13 of this thread (https://www.physicsforums.com/showpost.php?p=2331156&postcount=13), Haushofer asks how we get from 27.11 to 27.12.

I got this far: \ln m_{ph} = \ln m + \frac{1}{2}\ln\left[\frac{5}{12}2\alpha\left(\ln(\mu/m)\right) + c'\right] + \frac{1}{2}\ln O(\alpha^2), but what now?

Also, in chapter 9, in the paragraph below equation 9.9 it says that we will see that
Y = O(g) and Z_i = 1 + O(g^2). Where and when do we see it? In equation 9.18? Does 9.18 require Y and Z to be first and second order in g, respectively?
 
  • #66
You are not taking logarithms correctly!

Start with:

m_{ph}^2 = m^2\left[1 + c\alpha + O(\alpha^2)\right].

Take the log:

\ln(m_{ph}^2) = \ln(m^2)+\ln\left[1 + c \alpha + O(\alpha^2)\right].

Now use \ln(m^2)=2\ln m and \ln[1+ c \alpha + O(\alpha^2)]= c\alpha.

As for Y and the Z's, I would say 9.20 for Y, 14.37 and 14.38 for Z_ph and Z_m, and 16.12 for Z_g.
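The log expansion described above, in symbols (a sympy sketch; c is a stand-in for the O(alpha) coefficient, not a quantity from the book):

```python
# Avodyne's point: ln[m^2 (1 + c*alpha)] = 2 ln m + c*alpha + O(alpha^2),
# so ln(m_ph) = ln m + (c/2) alpha + O(alpha^2).  The alpha term is NOT
# inside a separate log of its own.
import sympy as sp

m, c, alpha = sp.symbols('m c alpha', positive=True)
lhs = sp.log(m**2 * (1 + c*alpha))
expansion = sp.series(sp.expand_log(lhs, force=True), alpha, 0, 2).removeO()
assert sp.simplify(expansion - (2*sp.log(m) + c*alpha)) == 0
print("ln(m_ph^2) = 2 ln m + c*alpha + O(alpha^2)")
```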
 
  • #67
Good stuff, Avodyne!

I will be back with more. For now, many thanks!
 
  • #68
I have a question about this part: why can't \frac{d \alpha}{d ln \mu} be an infinite series in /positive/ powers of epsilon? I see that if it is a /finite/ series in positive powers of epsilon, then it must terminate at the eps^1 term, since if it goes to eps^n with n > 1 then there's no way the eps^n term on the right hand side can be zero. But I could imagine the appropriate cancellations happening if it is an infinite series.
 
  • #69
Because you would end up with an infinite series in \varepsilon that summed to zero. The only such series has all zero coefficients.
 
  • #70
My Srednicki questions for today!

As I already had problems with the rather trivial 14.33, the more daunting equations 14.34 and 14.36 are unfortunately also not clear to me.

Question 1

how to go from 14.32

\frac{1}{2}\alpha\, \Gamma(-1 + \frac{\varepsilon}{2})\int\limits_0^1 dx\, D\left(\frac{4\pi\tilde\mu^2}{D}\right)^{\varepsilon/2}

to 14.34

- \frac{1}{2}\alpha \left[ (\frac{2}{\varepsilon} + 1)(\frac{1}{6}k^2 + m^2) + \int\limits_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{e^\gamma D}\right) \right]

knowing that

D = x(1 - x)k^2 + m^2 \qquad and \qquad \int\limits_0^1 dx\, D = \frac{1}{6}k^2 + m^2 \qquad and \qquad \Gamma(-1 + \frac{\varepsilon}{2}) = -(\frac{2}{\varepsilon} - \gamma + 1) \qquad and \qquad A^{\varepsilon/2} = 1 + \frac{\varepsilon}{2}\ln A + O(\varepsilon^2)

My first step would be

- (\frac{2}{\varepsilon} - \gamma + 1)\int\limits_0^1 dx\, D\left[1 + \frac{\varepsilon}{2}\ln\left(\frac{4\pi\tilde\mu^2}{D}\right)\right] = - (\frac{2}{\varepsilon} - \gamma + 1)\left[(\frac{1}{6}k^2 + m^2) + \frac{\varepsilon}{2}\int\limits_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{D}\right)\right]

Question 2

how to go from 14.34

- \frac{1}{2}\alpha \left[ (\frac{2}{\varepsilon} + 1)(\frac{1}{6}k^2 + m^2) + \int\limits_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{e^\gamma D}\right) \right] - Ak^2 - Bm^2 + O(\alpha^2)

to 14.36

\frac{1}{2}\alpha \int\limits_0^1 dx\, D\ln(D/m^2) - \left\{ \frac{1}{6}\alpha\left[\frac{1}{\varepsilon} + \ln(\mu/m) + \frac{1}{2}\right] + A \right\}k^2 - \left\{ \alpha\left[\frac{1}{\varepsilon} + \ln(\mu/m) + \frac{1}{2}\right] + B \right\}m^2 + O(\alpha^2)

with the help of redefining
\mu \equiv \sqrt {4\pi } {e^{ - \gamma /2}}\tilde \mu

Sadly, even after staring at it for half an hour, I have no clue what he does here. Though I assume it is rather simple.

thanks in advance for any hints and help
 
  • #71
For question one, just insert the small-epsilon approximations for the gamma function and the integrand, multiply everything out to get a bunch of terms, drop terms proportional to epsilon (since they go to zero) and then collect the remaining terms.

For question two, you're going to want to play with logs using some manipulation like ln(mu^2/D) = -ln(D/m^2) + 2*ln(mu/m)
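For question one, the bookkeeping can be checked symbolically (a sympy sketch; I0 and J are my shorthand for the two x-integrals, and terms of order epsilon are dropped, as The_Duck says):

```python
# Symbolic check of the step 14.32 -> 14.34.  Shorthand (mine):
#   I0 = int_0^1 dx D,   J = int_0^1 dx D ln(4 pi mu~^2 / D).
# Use Gamma(-1 + eps/2) = -(2/eps - gammaE + 1) + O(eps) and
# A^{eps/2} = 1 + (eps/2) ln A, then compare the 1/eps and eps^0 pieces.
import sympy as sp

eps, g, I0, J, alpha = sp.symbols('eps gammaE I0 J alpha')

lhs = sp.expand(sp.Rational(1, 2)*alpha * (-(2/eps - g + 1)) * (I0 + eps/2*J))
# 14.34, using  int dx D ln(4 pi mu~^2/(e^gamma D)) = J - gammaE*I0:
rhs = sp.expand(-sp.Rational(1, 2)*alpha * ((2/eps + 1)*I0 + (J - g*I0)))

diff = sp.expand(lhs - rhs)
assert diff.coeff(eps, -1) == 0
assert diff.coeff(eps, 0) == 0
print("14.32 and 14.34 agree through O(eps^0)")
```

The leftover mismatch is O(eps), which vanishes as eps goes to 0; that is exactly the "drop terms proportional to epsilon" step.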
 
  • #72
Thanks for answering, The Duck.

The_Duck said:
For question one, just insert the small-epsilon approximations for the gamma function and the integrand, multiply everything out to get a bunch of terms, drop terms proportional to epsilon (since they go to zero) and then collect the remaining terms.

What do you mean by "insert the small-epsilon approximations for the gamma function"? How and where can I insert them? Is my first step given in the last post correct?


The_Duck said:
For question two, you're going to want to play with logs using some manipulation like ln(mu^2/D) = -ln(D/m^2) + 2*ln(mu/m)

Ahhh! But why are the two ln(mu/m) not in the integrand anymore?
 
  • #73
Lapidus said:
Is my first step given in the last post correct?
Yes, and that's what The Duck meant.
Lapidus said:
But why are the two ln(mu/m) not in the integrand anymore?
Because ln(mu/m) is just a constant multiplying D, it comes out of the integral, and the integral of D has already been done.
 
  • #74
Got it! Thank you

I hate to test more of your patience, but still two minor quibbles in chapter 14.

In 14.40, the -ln m^2 from the integrand in 14.39 is lumped into the 'linear in k^2 and m^2' part in 14.40, right?

How does Pi(-m^2) vanish via 14.41 and 14.42? I assume the two terms in 14.41 neutralize each other. How can I see this?

Now to the excellent post #35 by RedX (https://www.physicsforums.com/showpost.php?p=2725516&postcount=35), where he addresses 14.8. Can it be said that 14.7 corresponds to mass renormalization and 14.8 to field renormalization? I remember reading that somewhere.

again I would be thankful for any answers
 
  • #75
Lapidus said:
How does Pi(-m^2) vanish via 14.41 and 14.42? I assume the two terms in 14.41 neutralize each other. How can I see this?

When you plug in k^2 = -m^2, then D = D0, so the log vanishes (ln 1 = 0). Furthermore k^2+m^2 = 0, so the term linear in (k^2 + m^2) also vanishes.
 
  • #76
A simple, new question on Srednicki's book:
Equation 2.26 :
U(\Lambda)^{-1} \varphi(x)U(\Lambda)= \varphi(\Lambda^{-1}x)

which describes how a scalar field transforms under Lorentz transformation is not derived in the book. Instead it seems to be inspired by time-translation equation (2.24).
Anyone can point me to a proof ?
 
  • #77
emz said:
A simple, new question on Srednicki's book:
Equation 2.26 :
U(\Lambda)^{-1} \varphi(x)U(\Lambda)= \varphi(\Lambda^{-1}x)

which describes how a scalar field transforms under Lorentz transformation is not derived in the book. Instead it seems to be inspired by time-translation equation (2.24).
Anyone can point me to a proof ?

I believe this is the definition of a scalar field.
 
  • #78
emz said:
A simple, new question on Srednicki's book:
Equation 2.26 :
U(\Lambda)^{-1} \varphi(x)U(\Lambda)= \varphi(\Lambda^{-1}x)

which describes how a scalar field transforms under Lorentz transformation is not derived in the book. Instead it seems to be inspired by time-translation equation (2.24).
Anyone can point me to a proof ?

That has always been a point of confusion for me. Forgetting about operators, a solution to the KG equation can be the c-number:

\phi(x)=ae^{ikx}

This shouldn't change under Lorentz transform, so I think what happens is that if you change x to x', k changes to k' such that k'x'=kx. The coefficient 'a' just stays the same.

Now take a superposition of plane waves:

\phi(x)=\int d^3 \tilde{k} a(k)e^{ikx}

How does this behave under Lorentz transform? Well isn't it the same thing:

\phi(x')=\int d^3 \tilde{k}\, a(k)e^{ik'x'}

?

But this isn't the same as:

\phi(x')=\int d^3 \tilde{k}\, a(k')e^{ik'x'}

right?

Anyways, what if you look at it differently. What if, under a Lorentz transform of \phi(x)=\int d^3 \tilde{k}\, a(k)e^{ikx},

only the x is changed?

\phi(x')=\int d^3 \tilde{k}\, a(k)e^{ikx'}= \int d^3 \tilde{k}\, a(k)e^{ik\Lambda x}= \int d^3 \tilde{k}\, a(k)e^{i(\Lambda^{-1}k)x} =\int d^3 \tilde{k}'\, a(\Lambda k')e^{ik'x}

But is this equal to \phi(x)=\int d^3 \tilde{k}\, a(k)e^{ikx}

? Anyways, I don't know. I think I confused myself.
 
  • #79
Avodyne said:
I believe this is the definition of a scalar field.


I thought the definition of a scalar field was that the numerical value of the field at a given point is Lorentz invariant. That is,
U(\Lambda) \varphi(\Lambda x)=\varphi(x)
 
  • #80
No. Think about a temperature field T(x) (a scalar) under rotations in Euclidean space. The rotated version of T(\vec{x}) is T(R^{-1}\vec{x}) where R is the rotation matrix that implements the rotation on 3-vectors. Contrast, say, the electric field, a vector, where if you rotate \vec{E}(\vec{x}) you get R \vec{E}(R^{-1}\vec{x}). A scalar transforms in the simplest possible way: more complicated fields have different components that, like the components of the electric field, rotate into each other under rotations.

Also, while states in QM transform with one transformation operator: | \psi \rangle \to U(\Lambda) | \psi \rangle, operators transform with two transformation operators: \phi(x) \to U(\Lambda)^{-1} \phi(x) U(\Lambda)
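The temperature-field picture can be made concrete (a numerical sketch with an arbitrary field; nothing here is from the book): the rotated field T'(x) = T(R^{-1}x) carries the old value at x0 to the rotated point R x0.

```python
# Rotating a scalar (temperature) field: the rotated field is
# T'(x) = T(R^{-1} x), so its value at the rotated point equals the
# original value: T'(R x0) == T(x0).
import numpy as np

def T(x):                       # some scalar field on R^3 (arbitrary choice)
    return x[0]**2 - 0.5*x[1]*x[2] + np.sin(x[2])

theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])  # rotation about the z-axis

def T_rot(x):                   # the rotated field
    return T(R.T @ x)           # R^{-1} = R^T for a rotation

x0 = np.array([0.3, -1.1, 0.7])
assert abs(T_rot(R @ x0) - T(x0)) < 1e-12
print("rotated field carries the old value to the rotated point")
```

A vector field like E would additionally need the R acting on its components, which is exactly the "components rotate into each other" statement above.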
 
  • #81
Okay, I got a question. Consider just the annihilation part of the field operator

\phi_+(x)=\int_R d^3 \tilde{k} a(k)e^{ikx}

where R is a region in momentum space. Srednicki takes R to be all of R^3, but here I'm taking it to be a connected subset of R^3.

Now consider a Lorentz transform of this operator:

U^{-1}\phi_+(x)U=\int_R d^3 \tilde{k}\, U^{-1}a(k)U\, e^{ikx} =\int_R d^3 \tilde{k}\, a(\Lambda^{-1}k)e^{ikx} =\int_{\Lambda^{-1}R} d^3 \tilde{k}\, a(k)e^{i(\Lambda k)x} =\int_{\Lambda^{-1}R} d^3 \tilde{k}\, a(k)e^{ik(\Lambda^{-1}x)}

But is U^{-1}\phi_+(x)U=\int_{\Lambda^{-1}R} d^3 \tilde{k} a(k)e^{ik(\Lambda^{-1}x)} really equal to: \int_{R} d^3 \tilde{k} a(k)e^{ik(\Lambda^{-1}x)}=\phi(\Lambda^{-1}x) ?

The integration volumes are different. So for this Lorentz transform to work, does it rely on the fact that the volume is over the entire R3? That's weird.

Addendum: Okay, I got it, but I won't erase my posts in case anyone else got confused like me. When you integrate over all of momentum space, you don't have to specify a special momentum k or a special region of momenta. Therefore the final result can only depend on the transformation of the coordinate x, as there aren't any special 4-vectors to contract with x. But if your wavefunction is over a special region or a special value of the momentum, you have to specify k and contract it with x; therefore both k and x transform. So really you should label the wavefunction \phi(x,k). In other words, the last integral I have depends on k through the region of integration (and not the dummy indices). This region has to transform to \Lambda^{-1}R, making the last two integrals equal:

\phi(\Lambda^{-1}x)\neq\int_{R} d^3 \tilde{k}\, a(k)e^{ik(\Lambda^{-1}x)}

\phi(\Lambda^{-1}x)=\int_{R'} d^3 \tilde{k}\, a(k)e^{ikx'} =\int_{\Lambda^{-1}R} d^3 \tilde{k}\, a(k)e^{ik(\Lambda^{-1}x)}
 
  • #82
Back to chapter 14.

Concerning the fixing of the two purely numerical constants, the two \kappa by imposing the two conditions 14.7 and 14.8.

How do we get 14.43? (Srednicki says it is straightforward; I once again do not see it. Where does the 1/12 come from? When I differentiate 14.41 wrt k^2, the k^2 disappears.)

And what are now the two numerical constants that we wanted to fix?

thanks
 
  • #83
I admit this is probably not the most sophisticated question ever asked on PF, but could someone nevertheless give me a hint...

thank you
 
  • #84
Lapidus said:
When I differentiate 14.41 wrt k^2, the k^2 disappears..
Then you made a mistake. Remember that D depends on k^2.
 
  • #85
Avodyne said:
Then you made a mistake. Remember that D depends on k^2.

I have to take the derivative of a ln term inside an integral?? And Srednicki calls that straightforward? Sorry, I still can't see it...

And what are now the two numerical constants??

thank you
 
  • #86
First of all, if you set k^2=-m^2 in eq.(14.39), the result is supposed to be zero, so this gives you -{1\over6}\kappa_A+\kappa_B as an integral over x of a messy function of x. Next you need to differentiate eq.(14.39) with respect to k^2, and then set k^2=-m^2; once again the result is supposed to be zero. To take this derivative, you need to differentiate D\ln(D/m^2) under the integral over x. This gives you \kappa_A as another integral of another messy function of x.
 