# Understanding how to get the unpolarized cross-sections


## Summary:

I want to understand the beautiful technique to sum over spin states.

## Main Question or Discussion Point

The following material comes from the book Quantum Field Theory by Mandl & Shaw, chapter 8, section 8.2.

Here we discussed how to obtain the cross-sections from ##S_{fi}## elements. These cross-sections were assumed to be fully polarized (i.e. the particles involved in the collision have definite initial and final polarization states). However, the beams of colliding particles are usually unpolarized, and the polarizations of the particles produced in the collision cannot be measured. Thus, we must average and sum over polarization states of initial and final particles, respectively. This procedure implies summing over spin and photon polarization states.

In this post I aim to discuss the technique for obtaining the unpolarized cross-section involving lepton-spin states only.

As an example, Mandl & Shaw worked out Compton scattering for electrons and took the following Feynman amplitude
$$\mathscr{M}=\bar{u_s} (\mathbf p') \Gamma u_r (\mathbf p) \ \ \ \ (1)$$

Where the four-component spinors ##u_r( \mathbf p)## and ##\bar u_s(\mathbf p')## completely specify the momentum and spins of the electron in the initial and final states, and the operator ##\Gamma## is a ##4 \times 4## matrix built up out of ##\gamma##-matrices. Eq. (1) gives rise to an unpolarized cross-section of the form

$$X:= \frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 |\mathscr{M}|^2 \ \ \ \ (2)$$

Where we've averaged over initial spins (i.e. ##\sum_r##) and summed over final spins (i.e. ##\sum_s##).

Defining:

$$\tilde \Gamma := \gamma^0 \Gamma^{\dagger} \gamma^0 \ \ \ \ (3)$$
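As a quick numerical sanity check of definition (3) (my own sketch with numpy, using the Dirac representation of the ##\gamma##-matrices; not from the book), one can verify that each ##\gamma^\mu## is its own tilde and that the tilde operation reverses products:

```python
import numpy as np

# Dirac representation of the gamma matrices
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, si], [-si, Z2]]) for si in s]

def tilde(G):
    """Gamma-tilde := gamma^0 Gamma^dagger gamma^0, Eq. (3)."""
    return g[0] @ G.conj().T @ g[0]

# Each gamma matrix is its own tilde: gamma-tilde^mu = gamma^mu
for mu in range(4):
    assert np.allclose(tilde(g[mu]), g[mu])

# On a product, tilde reverses the order: (AB)~ = B~ A~
A, B = g[1] @ g[2], g[0] @ g[3]
assert np.allclose(tilde(A @ B), tilde(B) @ tilde(A))
```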

Eq. (2) can be rewritten as follows

$$X= \frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 \Big( \bar{u_s} (\mathbf p') \Gamma u_r (\mathbf p)\Big)\Big( \bar{u_r} (\mathbf p) \tilde \Gamma u_s (\mathbf p') \Big) \ \ \ \ (4)$$

Writing out the spinor indices explicitly, (4) can be written as

$$X= \frac 1 2 \Big( \sum_s u_{s \delta} (\mathbf p') \bar u_{s \alpha} (\mathbf p') \Big) \Gamma_{\alpha \beta} \Big( \sum_r u_{r \beta} (\mathbf p) \bar u_{r \gamma}(\mathbf p)\Big) \tilde \Gamma_{\gamma \delta} \ \ \ \ (5)$$

The positive energy projection operator satisfies the following equation

$$\Lambda_{\alpha \beta}^+ (\mathbf p) = \Big( \frac{ \not{\!p}+m}{2m} \Big)_{\alpha \beta} = \sum_{r=1}^2 u_{r \alpha} (\mathbf p) \bar u_{r \beta} (\mathbf p) \ \ \ \ (6)$$
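The completeness relation (6) can also be checked numerically (again my own sketch, not from the book; the spinors below are the standard Dirac-representation ones with the ##\bar u_r u_r = 1## normalization that matches the ##1/2m## in the projector):

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]

m = 1.0
p = np.array([0.3, -0.4, 0.5])      # an arbitrary 3-momentum
E = np.sqrt(m**2 + p @ p)

def u(r):
    """Positive-energy spinor u_r(p), normalized so that ubar_r u_r = 1."""
    chi = np.zeros(2, dtype=complex); chi[r] = 1.0
    sp = sum(p[i] * s[i] for i in range(3))          # sigma . p
    return np.sqrt((E + m)/(2*m)) * np.concatenate([chi, (sp @ chi)/(E + m)])

def ubar(r):
    return u(r).conj() @ g0

# sum_r u_r(p) ubar_r(p)  vs  (pslash + m)/(2m)
spin_sum = sum(np.outer(u(r), ubar(r)) for r in range(2))
pslash = E*g0 - sum(p[i]*gs[i] for i in range(3))
Lam = (pslash + m*np.eye(4)) / (2*m)
assert np.allclose(spin_sum, Lam)
```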

Eq. (6) allows us to eliminate the sums over positive-energy states, leading to the final result

$$X= \frac 1 2 \Lambda_{\delta \alpha}^+ (\mathbf p') \Gamma _{\alpha \beta} \Lambda_{\beta \gamma}^+ (\mathbf p) \tilde \Gamma _{\gamma \delta}=\frac 1 2 Tr \Big[\Lambda^+ (\mathbf p') \Gamma \Lambda^+ (\mathbf p) \tilde \Gamma \Big] \ \ \ \ (7)$$
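The whole chain (2) → (7) can be tested end to end with numpy (my own sketch, not from the book): the brute-force spin sum of ##|\mathscr{M}|^2## for an arbitrary vertex matrix ##\Gamma## should equal the trace formula. ##\Gamma## below is just a random stand-in, not the actual Compton vertex.

```python
import numpy as np

# Dirac-representation gamma matrices
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, si], [-si, Z2]]) for si in s]

m = 1.0

def u(r, p3):
    """Positive-energy spinor u_r(p), normalized so that ubar u = 1."""
    E = np.sqrt(m**2 + p3 @ p3)
    chi = np.zeros(2, dtype=complex); chi[r] = 1.0
    sp = sum(p3[i] * s[i] for i in range(3))         # sigma . p
    return np.sqrt((E + m)/(2*m)) * np.concatenate([chi, (sp @ chi)/(E + m)])

def ubar(r, p3):
    return u(r, p3).conj() @ g0

def Lam(p3):
    """Positive-energy projector (pslash + m)/(2m), Eq. (6)."""
    E = np.sqrt(m**2 + p3 @ p3)
    pslash = E*g0 - sum(p3[i]*gs[i] for i in range(3))
    return (pslash + m*np.eye(4)) / (2*m)

p  = np.array([0.1, 0.2, 0.3])       # arbitrary initial momentum
pp = np.array([-0.2, 0.4, 0.1])      # arbitrary final momentum
Gamma  = g0 @ gs[0] + 2j * gs[2]     # arbitrary 4x4 "vertex" matrix
Gtilde = g0 @ Gamma.conj().T @ g0    # Eq. (3)

# Eq. (2): brute-force average/sum over spins of |M|^2
lhs = 0.5 * sum(abs(ubar(sdx, pp) @ Gamma @ u(r, p))**2
                for r in range(2) for sdx in range(2))

# Eq. (7): the trace formula
rhs = 0.5 * np.trace(Lam(pp) @ Gamma @ Lam(p) @ Gtilde)

assert np.isclose(lhs, rhs.real) and np.isclose(rhs.imag, 0.0)
```

The derivation never uses any special property of ##\Gamma##, which is why a random matrix works here: (7) holds for any ##4\times 4## vertex.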

So my questions are:

1) How to get Eq. (4)

I am aware of the following definition

$$\bar u_{s,r} (\mathbf p) := u^{\dagger}_{s,r} (\mathbf p) \gamma^0$$

But I do not know what property of ##\gamma##-Dirac matrices should I use to get (4).

2) How to get Eq. (5).

Again, there has to be a ##\gamma##-Dirac matrix property I am missing...

3) Why does the Trace show up in (7)?

Any help is appreciated.

Thank you.

Regarding 2), how to get Eq. (5):
Ahhh so it has nothing to do with Dirac matrix properties.

Apparently, it turns out that when we write out matrix indices explicitly we can treat spinors and ##\Gamma## matrices as numbers!

That means that we can rearrange ##(5)## at will (recall that the last factor in ##(5)## is ##\tilde \Gamma_{\gamma \delta}##). For instance

$$X= \frac 1 2 \Big( \sum_s u_{s \delta} (\mathbf p') \bar u_{s \alpha} (\mathbf p') \Big) \Gamma_{\alpha \beta} \tilde \Gamma_{\gamma \delta} \Big( \sum_r u_{r \beta} (\mathbf p) \bar u_{r \gamma}(\mathbf p)\Big) \ \ \ \ (5')$$
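The observation that components are ordinary numbers can be made concrete with `np.einsum` (my own sketch; the random matrices are stand-ins for ##\Lambda^+(\mathbf p')##, ##\Gamma##, ##\Lambda^+(\mathbf p)##, ##\tilde\Gamma##):

```python
import numpy as np

rng = np.random.default_rng(0)
A, G, B, Gt = (rng.standard_normal((4, 4)) for _ in range(4))
# A, G, B, Gt stand in for Lambda+(p'), Gamma, Lambda+(p), Gamma-tilde

# X = A_{da} G_{ab} B_{bg} Gt_{gd}: since every component is just a
# number, the order in which the factors are written is irrelevant
x1 = np.einsum('da,ab,bg,gd->', A, G, B, Gt)
x2 = np.einsum('da,ab,gd,bg->', A, G, Gt, B)   # rearranged ordering
x3 = np.trace(A @ G @ B @ Gt)                  # closed index loop = trace
assert np.allclose([x1, x2], x3)
```

The closed cycle of indices ##\delta\alpha,\ \alpha\beta,\ \beta\gamma,\ \gamma\delta## is exactly what `np.trace` of the matrix product computes, which previews why the trace appears in (7).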

Regarding 1), how to get Eq. (4):
Alright, I got a proof for Eq. (4) (I know it'll be trivial for most of you, but I have to say I am pretty excited). Please feel free to check it out.

I assumed that

$$X:= \frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 |\mathscr{M}|^2=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2\mathscr{M}\mathscr{M}^{\dagger}$$

Where by ##\dagger## I mean the conjugate transpose. Thus explicitly we get

$$X= \frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2\mathscr{M}\mathscr{M}^{\dagger}=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 ( \bar{u_s} (\vec p') \Gamma u_r (\vec p))\Big( \bar{u_s} (\vec p') \Gamma u_r (\vec p)\Big)^{\dagger}=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 ( \bar{u_s} (\vec p') \Gamma u_r (\vec p))\Big( u_r^{\dagger} (\vec p) \Gamma^{\dagger} \bar{u_s}^{\dagger} (\vec p')\Big) \ \ \ \ (6)$$

Now we need to work out ##\bar{u_s}^{\dagger} (\vec p')## and ##u_r^{\dagger} (\vec p)##.

We know that the adjoint spinor is, by definition,

$$\bar{u_s} (\vec p') := u_s^{\dagger} (\vec p') \gamma^0$$

Taking ##\dagger## on both sides of this equation we get

$$\bar{u_s}^{\dagger} (\vec p') = (u_s^{\dagger} (\vec p') \gamma^0)^{\dagger}=\gamma^{0\dagger} u_s (\vec p') \ \ \ \ (7)$$

Where

$$\gamma^{0\dagger}=\gamma^{0}$$

And here comes the key step: I assumed that ##\Big(\gamma^{0}\Big)^{-1}=\gamma^{0}##. Thus we get

$$u_r^{\dagger} (\vec p) = \bar u_r (\vec p) \gamma^{0} \ \ \ \ (8)$$

Plugging ##(7)## and ##(8)## into ##(6)## we get the desired ##(4)##

$$X=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 ( \bar{u_s} (\vec p') \Gamma u_r (\vec p))\Big( u_r^{\dagger} (\vec p) \Gamma^{\dagger} \bar{u_s}^{\dagger} (\vec p')\Big)=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 ( \bar{u_s} (\vec p') \Gamma u_r (\vec p))\, \bar u_r (\vec p) \gamma^{0}\Gamma^{\dagger}\gamma^{0} u_s(\vec p')=\frac 1 2 \sum_{r=1}^2 \sum_{s=1}^2 \Big( \bar{u_s} (\vec p') \Gamma u_r (\vec p)\Big)\Big( \bar{u_r} (\vec p) \tilde \Gamma u_s (\vec p') \Big)$$

Note that in my proof I assumed ##\Big(\gamma^{0}\Big)^{-1}=\gamma^{0}##. If this is correct, why is this true?

I think I got it! ##\Big(\gamma^{0}\Big)^{-1}=\gamma^{0}## is true, and we can prove it based on the anticommutation relation (which can be found on page 452 of Mandl & Shaw) ##\left\{\gamma^\mu, \gamma^{\nu} \right\}=2g^{\mu \nu}##.

Using the convention ##(+, -, -, -)## for the metric, setting ##\mu=\nu=0## gives ##\left\{\gamma^0, \gamma^{0} \right\}=2\Big(\gamma^0\Big)^2=2g^{00}=2##, i.e.

$$\Big(\gamma^0\Big)^2=1$$

Which indeed proves my assumption.
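This too is a one-liner to confirm numerically (a sketch in the Dirac representation, where ##\gamma^0## is diagonal):

```python
import numpy as np

# Dirac representation: gamma^0 = diag(1, 1, -1, -1)
g0 = np.diag([1.0, 1.0, -1.0, -1.0])

# {gamma^0, gamma^0} = 2 g^{00} = 2  =>  (gamma^0)^2 = 1
assert np.allclose(g0 @ g0, np.eye(4))
# gamma^0 is also Hermitian, so (gamma^0)^{-1} = (gamma^0)^dagger = gamma^0
assert np.allclose(g0, g0.conj().T)
```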