# Dirac Equation and Pauli Matrices

HeavyMetal
I have been reading through Mark Srednicki's QFT book because it seems to be well regarded here at Physics Forums. He discusses the Dirac Equation very early on, and then demonstrates that squaring the Hamiltonian will, in fact, return momentum eigenstates in the form of the momentum-energy relation.

He uses the Hamiltonian $H_{ab}=cP_{j}(\alpha^{j})_{ab}+mc^2(\beta)_{ab}$.
So it is easy to see that $(H^2)_{ab}=c^2P_{j}P_{k}(\alpha^{j}\alpha^{k})_{ab}+mc^{3}P_{j}(\alpha^{j}\beta+\beta\alpha^{j})_{ab}+m^{2}c^{4}(\beta^2)_{ab}$.

He then explains that if we choose suitable matrices satisfying a few conditions, we can obtain the momentum eigenstates in the correct energy-momentum form. These matrices satisfy:
$\{\alpha^{j},\alpha^{k}\}_{ab}=2\delta^{jk}\delta_{ab}$
$\{\alpha^{j},\beta\}_{ab}=0$
$(\beta^2)_{ab}=\delta_{ab}$
where the curly brackets denote the anticommutator.

Ultimately, after some arithmetic we find that:
$(H^2)_{ab}=\textbf{P}^{2}c^{2}\delta_{ab}+m^{2}c^{4}\delta_{ab}=(\textbf{P}^{2}c^{2}+m^{2}c^{4})\delta_{ab}$
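This identity can also be checked numerically. Here is a minimal sketch (my own, not from Srednicki — it assumes the standard Dirac representation, in which the alpha matrices are built from the 2 x 2 Pauli matrices and beta is block-diagonal):

```python
import numpy as np

# 2x2 Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

# Dirac representation: alpha^j has sigma^j on the off-diagonal blocks,
# beta = diag(I, -I); these satisfy the anticommutation relations above
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])

P = np.array([0.3, -1.2, 0.7])   # arbitrary momentum components
m, c = 1.5, 1.0                  # arbitrary mass and speed of light

# H_{ab} = c P_j (alpha^j)_{ab} + m c^2 (beta)_{ab}
H = c * sum(P[j] * alpha[j] for j in range(3)) + m * c**2 * beta
H2 = H @ H

# H^2 should be (P^2 c^2 + m^2 c^4) times the 4x4 identity
expected = (P @ P * c**2 + m**2 * c**4) * np.eye(4)
assert np.allclose(H2, expected)
```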

I'm having trouble seeing the steps it takes to get there. Substituting in the values obtained from the matrices, I can see that the middle term drops out because its value is zero, i.e. $mc^{3}P_{j}(\alpha^{j}\beta+\beta\alpha^{j})_{ab}=mc^{3}P_{j}\cdot 0=0$.
I can also see that the last term is included because $m^{2}c^{4}(\beta^2)_{ab}=m^{2}c^{4}\delta_{ab}$.

I cannot, however, see why $c^{2}P_{j}P_{k}(\alpha^{j}\alpha^{k})_{ab}=\textbf{P}^{2}c^{2}\delta_{ab}$.

I can see that $c^{2}P_{j}P_{k}(\alpha^{j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{j},\alpha^{k}\}_{ab}=c^2\textbf{P}^2\delta^{jk}\delta_{ab}$. I am confused as to where the $\delta^{jk}$ goes!

Is it simply the fact that $\delta^{jk}$ and $\delta_{ab}$ are Kronecker deltas? I reread the previous sections, but I don't think it was mentioned. If this is the case, then I understand where that $\delta^{jk}$ goes.
More importantly, I tried to work out the three equivalencies above using Pauli matrices, but I quickly get lost! When $\{\alpha^{j},\alpha^{k}\}_{ab}$ is written, does it refer to a 2 x 2 matrix (a x b) in which each entry is a 2 x 2 Pauli matrix (j x k), and is therefore ultimately a 4 x 4 matrix?

HeavyMetal \m/

## Answers and Replies

The_Duck
> I can see that $c^{2}P_{j}P_{k}(\alpha^{j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{j},\alpha^{k}\}_{ab}=c^2\textbf{P}^2\delta^{jk}\delta_{ab}$. I am confused as to where the $\delta^{jk}$ goes!
>
> Is it simply the fact that $\delta^{jk}$ and $\delta_{ab}$ are Kronecker deltas?

Right, so it goes

$c^{2}P_{j}P_{k}(\alpha^{j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{j},\alpha^{k}\}_{ab}=c^2 P_{j}P_{k}\delta^{jk}\delta_{ab} = c^2\textbf{P}^2\delta_{ab}$

> When $\{\alpha^{j},\alpha^{k}\}_{ab}$ is written, does it refer to a 2 x 2 matrix (a x b) in which each entry is a 2 x 2 Pauli matrix (j x k), and is therefore ultimately a 4 x 4 matrix?

$\{\alpha^{j},\alpha^{k}\}_{ab}$ says: compute the matrix sum of products $\alpha^j \alpha^k + \alpha^k \alpha^j$, then take the $(a, b)$ component of the resulting matrix. As Srednicki explains just below this, $\alpha^j$ and $\alpha^k$ are 4x4 matrices. So ##a## and ##b## both range from 1 to 4. Meanwhile ##j## and ##k## range from 1 to 3.

Some things to help understand the index notation:

##\delta_{ab}## gives the entries of the identity matrix.

$\{\alpha^{j},\alpha^{k}\}_{ab} = 2 \delta^{jk} \delta_{ab}$ means: the square of any alpha matrix is the identity matrix (this is the ##j=k## case), and the anticommutator of any alpha matrix with a *different* alpha matrix is the zero matrix (this is the ##j \neq k## case).
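Both cases can be verified directly with a few lines of numpy (a sketch of my own, assuming the standard Dirac representation built from the Pauli matrices rather than Srednicki's exact conventions):

```python
import numpy as np

# Pauli matrices and the Dirac-representation alpha, beta built from them
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])

def anticomm(A, B):
    """Anticommutator {A, B} = AB + BA."""
    return A @ B + B @ A

for j in range(3):
    for k in range(3):
        # {alpha^j, alpha^k} = 2 delta^{jk} * (4x4 identity):
        # identity (times 2) when j == k, zero matrix when j != k
        expected = 2 * (j == k) * np.eye(4)
        assert np.allclose(anticomm(alpha[j], alpha[k]), expected)
    # {alpha^j, beta} = 0
    assert np.allclose(anticomm(alpha[j], beta), np.zeros((4, 4)))
# beta^2 = identity
assert np.allclose(beta @ beta, np.eye(4))
```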

As you can see, your proficiency in QFT is limited by how well you can see through a fog of indices :P This is just a warmup; eventually you are dealing with equations that have half a dozen indices on each term, half of which are suppressed.

ddd123
$P_{j}P_{k}\delta^{jk} =P_{j}P^{j}$

It's Einstein's summation convention; you can check that it works by writing out the sums explicitly. You don't get a scalar unless you contract the indices. When an equation has unpaired (free) indices, they stay there to indicate that you in fact have a system of equations.
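The contraction can be made concrete with numpy's `einsum`, whose subscript string mirrors the index notation (a small sketch; the momentum values are arbitrary):

```python
import numpy as np

P = np.array([0.3, -1.2, 0.7])  # arbitrary momentum components P_j
delta = np.eye(3)               # Kronecker delta delta^{jk}

# P_j P_k delta^{jk}: the repeated indices j and k are summed over,
# leaving no free indices -- the result is a scalar
contracted = np.einsum('j,k,jk->', P, P, delta)

# The delta collapses the double sum to P_j P^j = |P|^2
assert np.isclose(contracted, P @ P)
```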

HeavyMetal
> Right, so it goes
>
> $c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{j},\alpha^{k}\}_{ab}=c^2 P_{j}P_{k}\delta^{jk}\delta_{ab}$

It was this intermediate step that I missed. I made the mistake of trusting myself to do the last two steps in one. Very careless error :D

> $\{\alpha^{j},\alpha^{k}\}_{ab}$ says: compute the matrix sum of products $\alpha^j \alpha^k + \alpha^k \alpha^j$, then take the $(a, b)$ component of the resulting matrix. As Srednicki explains just below this, $\alpha^j$ and $\alpha^k$ are 4x4 matrices. So ##a## and ##b## both range from 1 to 4. Meanwhile ##j## and ##k## range from 1 to 3.

I see why I was confused. I was thinking that the alpha matrices were 2 x 2 Pauli matrices. So what I think you're saying is that both $\alpha^j$ and $\alpha^k$ correspond to the 1, 2, and 3 Pauli gamma matrices?

> $\{\alpha^{j},\alpha^{k}\}_{ab} = 2 \delta^{jk} \delta_{ab}$ means: the square of any alpha matrix is the identity matrix (this is the ##j=k## case), and the anticommutator of any alpha matrix with a *different* alpha matrix is the zero matrix (this is the ##j \neq k## case).

> As you can see, your proficiency in QFT is limited by how well you can see through a fog of indices :P This is just a warmup; eventually you are dealing with equations that have half a dozen indices on each term, half of which are suppressed.

This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. Even in physical chemistry we use it as a tool -- a means to an end -- and are rarely allowed to bask in the beautiful depth of theory. As a matter of fact, my school doesn't offer any physics courses above physics 1 and 2! So I will take your advice and practice recognizing suppressed indices. I take it that you're referring to the omission of repeated identical contravariant and covariant indices, correct? Thanks for your response :D

> $P_{j}P_{k}\delta^{jk} =P_{j}P^{j}$ You don't get a scalar unless you contract the indices.

I am aware of the Einstein summation convention, but the obvious benefit was completely obscured until you pointed this out. Thank you!

ddd123
> This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. ... I am aware of the Einstein summation convention, but the obvious benefit was completely obscured until you pointed this out. Thank you!

I had problems too even when starting from pure physics. I strongly suggest these short notes: http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf they contain all the applied "benefits" and explain the mysteries plainly. You'd go insane without reading something similar.

The_Duck
> I see why I was confused. I was thinking that the alpha matrices were 2 x 2 Pauli matrices. So what I think you're saying is that both $\alpha^j$ and $\alpha^k$ correspond to the 1, 2, and 3 Pauli gamma matrices?

Right.

> This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. Even in physical chemistry we use it as a tool -- a means to an end -- and are rarely allowed to bask in the beautiful depth of theory. As a matter of fact, my school doesn't offer any physics courses above physics 1 and 2! So I will take your advice and practice recognizing suppressed indices. I take it that you're referring to the omission of repeated identical contravariant and covariant indices, correct?

Sure, or as another example the alpha matrix anticommutation relation would often be written

$$\{\alpha^j, \alpha^k\} = 2\delta^{jk}$$

where the ##a## and ##b## indices have been suppressed, leaving it up to you to remember that ##\alpha^j## and ##\alpha^k## are matrices, and that when we say a matrix equals a number we mean it equals that number times the identity matrix.

Don't worry if you get tripped up at first, QFT is slow going for everyone.

aabottom
> I had problems too even when starting from pure physics. I strongly suggest these short notes: http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf they contain all the applied "benefits" and explain the mysteries plainly. You'd go insane without reading something similar.
Yep, gonna have to read that tensor.pdf. I've been studying relativistic QM and the Dirac equation. I'm getting tensor overload.

Thanks for the link.