General: Questions about Srednicki's QFT

  • Thread starter: haushofer
  • Tags: QFT
Summary
The discussion revolves around clarifications regarding Srednicki's Quantum Field Theory (QFT) textbook, particularly focusing on the generation of Feynman diagrams and the implications of tree-level contributions in φ³ theory. Participants express confusion about the absence of tree-level contributions for certain vertex configurations and the rationale behind regularization techniques, specifically the use of a cutoff in integrals. The conversation also touches on the skeleton expansion method in perturbation theory, emphasizing the importance of evaluating n-point functions for processes with fixed external lines. Additionally, the role of translation invariance in decay processes and the derivation of Lorentz representations are discussed, with requests for further resources and explanations on these topics. Overall, the thread serves as a collaborative effort to deepen understanding of complex QFT concepts presented in Srednicki's work.
  • #31
But my A in my example is a general nxn matrix. To make it more concrete, you could take A to be 2x2, just like the Pauli matrices.

I understand that it doesn't make sense to contract dotted and undotted indices, I just wonder why the particular contraction order is taken. I see that in the Hermitian conjugation the dotted a-index of psi becomes undotted (conjugation brings you from one SU(2) sector into the other), but I don't understand why it's still contracted with the FIRST index of sigma. I would contract it with the SECOND, because sigma is also Hermitian conjugated.

So my reasoning would be: take the Hermitian conjugate of the whole expression, note that Hermitian conjugating spinors brings you from dotted to undotted and vice versa, complex conjugate the matrix sigma, and switch the contraction order, in which of course dotted indices are contracted with dotted and undotted with undotted. Our convention is also such that the Hermitian conjugate of 2 spinors reverses the order of the spinors.

This equation is before the remark that sigma is Hermitian, so it's just a linear algebra thing I would say.
 
  • #32
haushofer said:
But my A in my example is a general nxn matrix.
Of course A is, but xAy is a number. x is a vector, Ay is a vector, and xAy is their inner product. So (xAy)^{\dagger}=(xAy)^*.
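A quick numerical sanity check of this point (a NumPy sketch; the vectors and matrix are arbitrary complex test values, not anything from the book): since xAy is a scalar, its Hermitian conjugate y†A†x† is just its complex conjugate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary complex vectors and matrix (illustrative values only)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

s = x @ A @ y                      # the scalar xAy (x taken as a row vector)
# Hermitian conjugation reverses the order and daggers each factor:
s_dagger = y.conj() @ A.conj().T @ x.conj()

# Since s is just a number, its dagger equals its complex conjugate
assert np.isclose(s_dagger, s.conjugate())
```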
 
  • #33
Ah, of course, how stupid of me! Thanks RedX and Landau!
 
  • #34
Hi, I was looking at loop corrections in Srednicki's chapter 14, and I have a question about equation 14.8

From general considerations one knows that the exact propagator has a pole at k^2 = -m^2 with residue one. But why does this demand that

\frac{\partial \Pi}{\partial (k^2)}\Big|_{k^2 = -m^2} = 0

?
 
  • #35
haushofer said:
Hi, I was looking at loop corrections in Srednicki's chapter 14, and I have a question about equation 14.8

From general considerations one knows that the exact propagator has a pole at k^2 = -m^2 with residue one. But why does this demand that

\frac{\partial \Pi}{\partial (k^2)}\Big|_{k^2 = -m^2} = 0

?

The propagator is

\Delta(k^2)=\frac{1}{k^2+m^2-i\epsilon-\Pi(k^2)}

The residue is given by multiplying this by (k^2+m^2):

\frac{k^2+m^2}{k^2+m^2-i\epsilon-\Pi(k^2)}

and setting k^2=-m^2. However, before doing so, expand \Pi(k^2) in a power series about k^2=-m^2:

\frac{k^2+m^2}{k^2+m^2-i\epsilon-\Pi(-m^2)-\frac{\partial\Pi(k^2)}{\partial k^2}|_{k^2=-m^2}(k^2+m^2)-...}

Using \Pi(-m^2)=0 (needed for the pole to sit at k^2=-m^2 in the first place), divide the numerator and denominator by (k^2+m^2) and then set k^2=-m^2: the residue comes out to 1/(1-\partial\Pi), where \partial\Pi is the derivative evaluated at k^2=-m^2. The only way to get a residue equal to 1 is if this derivative vanishes.

Note that it is not absolutely necessary that the derivative of Pi is equal to zero. If it's not, then the residue is 1/(1 - partial Pi). This is considered in chapter 27 of Srednicki. Having the partial derivative of Pi equal to zero is sometimes called the on-shell renormalization scheme. It is necessary that Pi(-m^2) equal zero, however, or else there won't be a pole at k^2=-m^2.
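A concrete check of the residue claim (a SymPy sketch; the toy self-energy below is an invented example, not Srednicki's): take Pi(k^2) = b(k^2+m^2) + a(k^2+m^2)^2, which satisfies Pi(-m^2) = 0 but has Pi'(-m^2) = b, and compute the residue of the propagator at the pole.

```python
from sympy import symbols, residue, Rational

x = symbols('x')                        # x stands for k^2
m2 = 1                                  # m^2 = 1 (illustrative value)
a, b = Rational(3, 10), Rational(1, 5)  # toy Taylor coefficients (invented)

# Toy self-energy: Pi(-m^2) = 0 but Pi'(-m^2) = b != 0
Pi = b*(x + m2) + a*(x + m2)**2
Delta = 1/(x + m2 - Pi)                 # exact propagator (i*epsilon dropped)

res = residue(Delta, x, -m2)
print(res)  # 5/4, i.e. 1/(1 - b); the residue is 1 only if b = Pi'(-m^2) = 0
```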
 
  • #36
That's very clarifying. Thanks RedX!
 
  • #37
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.
 
  • #38
RedX said:
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.

"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e. we define that a should work in the same way but also time dependent due to interactions.

See Weinberg for further information.
 
  • #39
ansgar said:
"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e. we define that a should work in the same way but also time dependent due to interactions.

See Weinberg for further information.

So the free-field is given by the Fourier expansion:

\phi(x)=\int d^3 \tilde k [a(k)e^{ikx}+a^\dagger(k)e^{-ikx}]

where k is on-shell and d^3 \tilde k=\frac{d^3k}{2E_k}.

Adding time dependence to the coefficients leads to:

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

However, deriving this:

a(k)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

where A\bar \partial_0 B=A\partial_0 B-B\partial_0 A

only works when a(k,t)=a(k), i.e., when a(k) is not a function of time.

Does this mean that in the interacting theory, \phi(x) can't be written as:


\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]
 
  • #40
Yes, you can still write the field phi like that -- it is simply the Fourier transform of the field phi. Remember that the field operator phi satisfies the equations of motion. In the free case these equations are linear in the field. When you take the Fourier transform of the field phi, the components a(k,t) also satisfy an equation of motion. What's nice about a free field theory is that all these equations of motion for the modes a(k,t) decouple, and can be solved separately -- this is where the phase factor exp[iw_k t] comes from. So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

But to get back to your question: the a(k,t) are the Fourier components of the field phi at time t. You can always define those. But only in the free field case do you have a simple relation between a(k,t_1) and a(k,t_2). This can be traced back to the decoupling of the equations of motions for the Fourier components. For the interacting case you need perturbation theory.
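Schematically (loose normalizations, so take this as a sketch rather than Srednicki's exact equations): for the free field each mode's Heisenberg equation closes on itself,

i\partial_t a(\vec k,t) = [a(\vec k,t), H_0] = \omega_k\, a(\vec k,t) \quad\Rightarrow\quad a(\vec k,t) = e^{-i\omega_k t}\, a(\vec k)

while a \phi^3 interaction adds a term that couples different momenta,

i\partial_t a(\vec k,t) = \omega_k\, a(\vec k,t) + g\int d^3k_1\, d^3k_2\, \delta^3(\vec k \pm \vec k_1 \pm \vec k_2)\,(\text{products of two mode operators})

so no mode evolves independently and the system of equations never closes.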
 
  • #41
xepma said:
So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

So for the free field:

a(k,t)e^{ikx}=a(k)e^{-i\omega t}e^{ikx}=a(k)e^{ik_\mu x^\mu}

but in general the field \phi(x,t) is a linear combination of:

a(k,t)e^{ikx} and hermitian conjugate.

This sounds good, and mathematically is correct, but the only problem I have with it is this equation seems no longer true:

a(k,t)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x,t)

i.e., solving backwards for a(k,t) in terms of \phi(x,t)

I know you said that solving for a(k,t) is intractable in the interacting case, as the equations are nonlinear, so a(k,t) depends not only on coefficients at past times but also on coefficients with different momenta. But I think you were referring to a definite time dependence like a(k,t)=(\sin t)^3 t^2 \log(t)\, a(k). However, can you write a(k,t) not as a definite function of t, but in terms of the unknown interacting field \phi(x,t)?
According to Srednicki, you can, and the answer is the same as the free-field case:a(k,t)=\int d^3x e^{-ikx} \bar \partial_0 \phi(x,t)

except now the field \phi(x,t) is interacting and not free. I'm not sure how this is true in the interacting case.
 
  • #42
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum |\Omega \rangle
 
  • #43
ansgar said:
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum |\Omega \rangle

Sure. But I should say that I was a bit careless with the notation. In some contexts e^{ikx} is a 4-vector contraction and in others a 3-vector contraction. The formula that xepma is referring to, I believe, is the 3-vector case. Also, I'm using the (-+++) signature.

\int d^3x e^{i\vec{k}\cdot\vec{x}} \bar \partial_0 \phi(x)

Taking the time derivative on the left side is zero. So this expression becomes

\int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 \phi(x) = \int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 \left[\int d^3 \tilde k\, [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]\right]

and I don't see how one can get rid of the time derivative of the creation and annihilation operators to get just the creation and annihilation operators without any derivatives.

So the expression is not equal to just a(k,t).
----------------------------------------------------------------------------------------
Correction:

Actually, I got everything mixed up, so ignore everything above this correction. Here's the new post:

\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

So taking the time derivatives, this expression becomes

i \int d^3x\, k_0 e^{-ikx} \phi(x) + \int d^3x\, e^{-ikx} \partial_0 \left[\int d^3 \tilde k\, [a(k,t)e^{i\vec{k}\cdot\vec{x}}+a^\dagger(k,t)e^{-i\vec{k}\cdot\vec{x}}]\right]

but this to me runs into the same problem, that you'll get time derivatives of the creation and annihilation operator, so there is no way to get just the creation and annihilation operator without time derivatives.
 
  • #44
Never mind. I got it. It wasn't exactly pretty, so I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) \int d^3x\, e^{-ikx} \bar \partial_0 \phi(x)
and for the time derivative of \phi(x), use:

\dot{\phi}=i[H,\phi]=iH\phi-i\phi H

Then show that (1) operating on |0> gives zero, and that (1) operating on a^\dagger(q)|0> is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)
 
  • #45
I already posted this in the homework/course section but got no reply, so I'm crossposting here (sorry for that).


Problem with the ordering of integrals in the derivation of the Lehmann-Källén form of the exact propagator in Srednicki's book.

We start with the definition of the exact propagator in terms of the 2-point correlation function and introduce the complete set of momentum eigenstates and then define a certain spectral density in terms of a delta function. But the spectral density is also a function of 'k', so we cannot take the spectral density outside the integral over 'k'. Since that is not possible, the subsequent manipulations fail too.


2. Homework Equations

In Srednicki's book :
Equation 13.11 and 13.12

If that is incorrect, the use of 13.15 to get 13.16 is not possible.

3. The Attempt at a Solution

I don't see how it is possible to derive the equation without that interchange.

I'd appreciate any clarifications on this issue. Am I missing some trivial thing?
 
  • #46
No, the spectral density is only a function of s.

use eq. 13.9

we get

|\langle k,n | \phi(0) | 0 \rangle|^2, which is just a (complex) number.
 
  • #47
Sorry, I still do not get it. Isn't |\langle k,n|\phi(0)|0\rangle|^{2} dependent on 'k'? Could you please elaborate?
 
  • #48
msid said:
Sorry, I still do not get it. Isn't |\langle k,n|\phi(0)|0\rangle|^{2} dependent on 'k'? Could you please elaborate?

you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf
 
  • #49
\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.
 
  • #50
ansgar said:
you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf

\phi(0) is not a number; it is an operator at a specified location in spacetime, which in this case is the origin.

Avodyne said:
\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.

It makes sense that it can only depend on k^2, and k^2 = -M^2, which we are summing over. This is acceptable if the interchange of the summation over 'n' and the integral over 'k' is valid. Thanks a lot for the clarification, Avodyne.
 
  • #51
This is a great thread; I really need to read it thoroughly when I get a chance. I've just got through the spin-zero part of Srednicki and started on the spin-1/2 stuff, but the group representation material is fazing me a bit: all this stuff about the (1,2) representation, the (2,2) vector rep, etc. I was wondering if anyone could explain what this means, or recommend any good books/online references that go through this stuff?
 
  • #52
1 means singlet, 2 means doublet... it is "just" the addition of two spin-1/2 particles, same algebra.
 
  • #53
xepma said:
But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

\langle 0|a(k_{final},t=129)\, a^\dagger(k_{initial},t=-10)|0\rangle

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

In perturbation theory, is there an assumption that at t = +-infinity all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?
 
  • #54
RedX said:
Never mind. I got it. It wasn't exactly pretty, so I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) <br /> <br /> \int d^3x e^{-ikx} \bar \partial_0 \phi(x) <br /> <br />
and for the time derivative of \phi(x), use:

\dot{\phi}=i[H,\phi]=iH\phi-i\phi H

Then show that (1) operating on |0> gives zero, (1) operating on a^\dagger(q)|0&gt; is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)

I'm not convinced this is enough. You would need to see if it holds for all possible states.

I did, however, find a different way of writing the inverse,

a(k,t) = \int d^3x e^{-ikx+iwt}\left[\omega \phi(x,t) + i \Pi(x,t)\right]

where \Pi(x,t) is the field conjugate to \phi(x,t) (which is just the time-derivative). Now the mode expansion of the conjugate field is

\Pi(x,t) = -i \int\frac{d^3k}{(2\pi)^3(2\omega)} \omega\left[a(k,t) e^{ikx-iwt} - a^\dag(k,t) e^{-ikx+iwt}\right]

This is the same expansion as in the non-interacting case, but again the modes pick up a time dependence. The expansion is restricted to this form by the equal-time commutation relations between the field phi and its conjugate. So in conclusion, yes, the relation should hold in the interacting case.

I probably ran into the same problem as you: acting with the time derivative on the field phi generates time derivatives of a and a^\dag as well. But I think the resolution lies in the fact that the basis of modes is complete, so these time derivatives can be written as a linear sum over the modes a and a^\dag as well.
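The algebra can at least be verified for a single free mode (a SymPy sketch with noncommuting symbols a, a^\dag; the normalization is simplified to \phi = a e^{-i\omega t} + a^\dag e^{i\omega t}, so factors of 2\omega and the spatial integral are suppressed): the combination \omega\phi + i\Pi really does project out the annihilation operator.

```python
from sympy import symbols, I, exp, diff, simplify

t, w = symbols('t omega', real=True, positive=True)
a, ad = symbols('a adag', commutative=False)

# Single free mode with the normalization suppressed (illustrative only)
phi = a*exp(-I*w*t) + ad*exp(I*w*t)
Pi = diff(phi, t)                     # conjugate momentum Pi = d(phi)/dt

combo = simplify(exp(I*w*t)*(w*phi + I*Pi))
assert simplify(combo - 2*w*a) == 0   # only the annihilation operator survives
```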
 
  • #55
RedX said:
I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

\langle 0|a(k_{final},t=129)\, a^\dagger(k_{initial},t=-10)|0\rangle

Yes, you can solve the correlators in that case.

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

You don't have to do anything with your Fock space. The virtual particles are intermediate states that pop up when you perturbatively try to determine the time dependence of the fields / the modes. They are a mathematical construction that appears in perturbation theory. If you could solve the time dependence exactly, you wouldn't need these virtual particles.

Let me give an example by the way of why the time-evolution of the mode operators is problematic. The \phi^4 interaction looks like:

H_I = \int d^3x \phi^4(x)

Written in terms of the modes this is something like

H_I = \int d^3 k_1 d^3k_2 d^3k_3 d^3k_4 (2\pi)^3\delta^3(k_1+k_2+k_3+k_4) a^\dag_{k_1}a^\dag_{k_2} a_{k_3} a_{k_4}

This is probably not completely correct, but the point is that the interaction is given by (a sum over) a product of two creation and two annihilation operators subject to momentum conservation.

Now, the time evolution of the mode operator a is given by (Heisenberg picture)

-i\hbar \partial_t a = [H_0 + H_I,a]

Now the commutator of a with the interaction terms H_I will generate (a sum over) a product of three mode operators, a^\dag a a. To solve it we need the time dependence of this product. You guessed it, this is given by:

-i\hbar \partial_t (a^\dag a a) = [H_0 + H_I,a^\dag a a]

But this commutator generates terms with an even larger product of mode operators! Which, in turn, also determine the time evolution of the operator a. And so the problem is clear: the time evolution of a product of mode operators such as a^\dag a is determined by the time dependence of a still larger product of mode operators.

I hope I'm not being too vague here... ;)

In perturbation theory, is there an assumption that at t = +-infinity all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?

Well, you're touching on something deep here. The assumption is, indeed, that at t= +/- infinity the theory is non-interacting. At these instances we can construct the Fock space, and assume the Fock space is the same for the interacting case. It's not a pretty assumption at all that these two Fock spaces (interacting vs non-interacting) are the same, and --as far as I know-- it's not clear it should hold. But I don't think there's a way around this at the moment... If you can define the Fock space directly in your interacting theory, that would be great. But I got no clue on how to do that.
 
  • #56
xepma said:
I'm not convinced this is enough. You would need to see if it holds for all possible states.

Using the (-+++) metric and \dot{\phi}=i[H,\phi]=iH\phi-i\phi H:

i\int d^3x e^{-ikx} \bar \partial_0 \phi(x)= i\left[i\int d^3x e^{-ikx}H\phi(x)-i\int d^3x e^{-ikx}\phi(x)H-i\int d^3x e^{-ikx} E_k\phi(x)\right]

Using that \phi(x)=\int d^3 \tilde q [a(q,t)e^{iqx}+a^\dagger(q,t)e^{-iqx}] this becomes:

=-H\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]
+\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]H
+E_k\left[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}\right]

Now consider the operation of this on a state |M> with energy M. The 2nd terms (the creation operators) in each of the lines cancel because the Hamiltonian in the first line pulls out an energy of -(M+Ek), the second line pulls out an energy of M, and the third line is just Ek: these add to give zero. So far so good, so examine the 1st terms (the destruction operators). Consider two cases, where |M> contains the particle |k>, and not. If M does not, then all the destruction operators will produce zero acting on |M>. If |M> contains |k>, then the Hamiltonian brings out a -[M-Ek] in the 1st line, a +M in the second line, and an Ek in the 3rd line. This adds to 2Ek, and multiplying this by the term \frac{a(k,t)}{2E_k}, this leaves just the destruction operator.

Anyways, there are some subtleties that I'm bothered by, but I'm convinced that a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) is still true in an interacting theory. What's remarkable is if you plug this expression in for a(k,t) to calculate the commutator of the a's, and use the canonical commutation relations, then you get the standard equal time commutation relations for a(k,t):

[a(k,t),a^\dagger(q,t)]=\delta^3(k-q)(2\pi)^32E_k

All that's required is that \Pi=\frac{\partial \mathcal L}{\partial \dot{\phi}}=\dot{\phi}, so that you can identify the time derivative terms in a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) as the canonical momentum, so the commutation relation is easy to take.

So basically, we're pretty much required to have the Lagrangian be no more than 2nd order in time derivatives, so that the canonical momentum is just the time derivative of the field; otherwise there are no equal-time commutation relations for the creation operators.

So basically the relation a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) is actually more fundamental than the Lagrangian!

Anyways, I was reading Srednicki again, and he showed that a^\dagger(k,t) can actually create multiparticle states when acting on the vacuum! This is equation (5.23). However, as t goes to +- infinity, you don't have this happening.

This is interesting stuff, but I heard that getting into more detail on this stuff takes constructive quantum field theory.
 
  • #57
The notion of a (so-called real) particle is ambiguous at "finite" time, basically due to the HUP. Or, to say it the other way around, particles become well-defined only after enough time has passed to subdue the HUP. The poles of the correlation function are what should be interpreted as particles, not the lines of Feynman diagrams. In the spirit of QM, only a superposition of different particle-number states, not a definite particle-number state, exists at finite time.
 
  • #58
Equation 11.20 of Srednicki's book is the expression for the probability per unit time for the scattering of two particles. It is equal to a Lorentz invariant part times a non-Lorentz invariant part, and the non-Lorentz invariant part is:

\frac{1}{E_1E_2V}

I'm having trouble seeing how aliens on a spaceship observing the LHC will see everything slowed down by \frac{1}{\sqrt{1-v^2}}, where v is the velocity of the Earth relative to their spaceship.

In equation 11.48 for the decay rate it's obvious this is true, that aliens will observe a decay time longer by \frac{1}{\sqrt{1-v^2}}.

But is there a quick way to verify that \frac{1}{E_1E_2V} divided by \frac{1}{E_1'E_2'V'} is equal to \frac{1}{\sqrt{1-v^2}} in the primed frame?
 
  • #59
Okay, I sort of intuitively derived it, after I read Weinberg's section on cross-sections.

First of all, I'm a bit shocked that if you take the probability per unit time, and divide by the flux, you get something Lorentz invariant. Weinberg on page 138 Vol 2, says: "It is conventional to define the cross-section to be a Lorentz-invariant function of 4-momenta."
Seriously, is that how the cross-section is defined? I thought it would be defined by the experimentalist reporting his or her results by dividing by the flux, because that's all that they can do! By sheer luck when you do that, you get something that's Lorentz-invariant!

Anyways, dividing by the flux, you get a now Lorentz-invariant part that looks like this:

\frac{1}{E_1E_2V}\frac{V}{u}

where \frac{u}{V} is the flux, and u is the relative velocity defined as:

u=\frac{\sqrt{(p_1\cdot p_2)^2-m_{1}^2m_{2}^2}}{E_1E_2}

as in eq. (3.4.17) of Weinberg.

Now examine the \frac{V}{u} term.

If you are boosting away from the COM frame, V undergoes a length contraction, and also the length numerator of u undergoes a length contraction. These cancel. But the time denominator of u undergoes a time dilation, so overall \frac{V}{u} increases. That means
\frac{1}{E_1E_2V} must decrease, since their product is Lorentz-invariant. So the probability per unit time is smaller in a boosted frame, which is time dilation.

My special relativity is really shoddy, so I just assumed that the COM frame of two particles is like a rest frame of one particle, so that a boost away from this frame results in a length contraction and a time dilation. Does anyone know how to do this more rigorously (legitimately)?
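For the collinear case the claim about u can be checked numerically (a quick sketch; the masses and velocities are arbitrary test values): Weinberg's u reduces to the ordinary relative velocity |v_1 - v_2| when both particles move along the beam axis.

```python
import math

def four_momentum(m, v):
    """Return (E, p_z) for mass m moving along z with speed v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v*v)
    return gamma*m, gamma*m*v

m1, v1 = 1.0, 0.9       # arbitrary test values
m2, v2 = 2.0, -0.5      # opposite sign: head-on collision

E1, p1 = four_momentum(m1, v1)
E2, p2 = four_momentum(m2, v2)

# (p1.p2) up to an overall sign convention; only its square enters u
p1dotp2 = E1*E2 - p1*p2
u = math.sqrt(p1dotp2**2 - (m1*m2)**2) / (E1*E2)   # Weinberg's eq. 3.4.17

print(u)  # approx 1.4 = |v1 - v2| for collinear beams
```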
 
  • #60
I think you must restrict to the case p_A\cdot p_B = -|p_A||p_B| (where A and B are the incoming particles), and therefore restrict the set of Lorentz transformations to rotations and longitudinal boosts. I'm pretty sure that's what Weinberg means, but I don't have his book, so I can't check. The cross-section is invariant under longitudinal boosts, but not transverse boosts.
 
