Proving \{ \gamma^5 , \gamma^\mu \} = 0: Gamma 5 Matrix Verification

  • Thread starter: latentcorpse
  • Tags: Gamma Matrix
latentcorpse
How do I verify \{ \gamma^5 , \gamma^\mu \} = 0?

I have

\{ \gamma^5 , \gamma^\mu \} = \gamma^5 \gamma^\mu + \gamma^\mu \gamma^5
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu + \gamma^\mu \gamma^0 \gamma^1 \gamma^2 \gamma^3 )
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu - \gamma^0 \gamma^\mu \gamma^1 \gamma^2 \gamma^3 )
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu + \gamma^0 \gamma^1 \gamma^\mu \gamma^2 \gamma^3 )
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu - \gamma^0 \gamma^1 \gamma^2 \gamma^\mu \gamma^3 )
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu + \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu )

But this is not quite right, because at some point I will have shifted \gamma^\mu past itself, and so I will get an additional term +2 \eta^{\mu \mu}, since \{ \gamma^\mu , \gamma^\nu \} = 2 \eta^{\mu \nu}.

So I should get three terms:
= -i ( \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu + \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu - 2 \eta^{\mu \mu} \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu )

and then

= -i ( (2 - 2 \eta^{\mu \mu}) \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^\mu ) \neq 0 since \eta^{\mu \mu} = 4, no?
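
If you just want to verify the identity numerically rather than algebraically, you can also build the gamma matrices explicitly. A minimal sketch in Python/NumPy, using the Dirac representation and the same -i phase for \gamma^5 as in the thread (neither choice affects the anticommutator):

import numpy as np

# Pauli matrices and 2x2 blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Gamma matrices in the Dirac representation, signature (+,-,-,-)
gamma = [
    np.block([[I2, Z2], [Z2, -I2]]),   # gamma^0
    np.block([[Z2, sx], [-sx, Z2]]),   # gamma^1
    np.block([[Z2, sy], [-sy, Z2]]),   # gamma^2
    np.block([[Z2, sz], [-sz, Z2]]),   # gamma^3
]

# gamma^5 with the -i phase used above; the phase drops out of the anticommutator
gamma5 = -1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

for mu, g in enumerate(gamma):
    anticomm = gamma5 @ g + g @ gamma5
    print(f"{{gamma^5, gamma^{mu}}} = 0 ?", np.allclose(anticomm, 0))

Each line should print True, for every mu.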
 
So you have to prove that
\gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^{\mu} + \gamma^{\mu} \gamma^0 \gamma^1 \gamma^2 \gamma^3 = 0

The main idea is to get the matrices with identical indices next to each other.
Note that
1. every time you switch two \gamma's with different indices (\mu \neq \nu) you pick up a minus sign
2. if you have to move \gamma^\mu k times to the left in the first summand, you will have to move it (3 - k) times to the right in the second summand, which gives the two terms a relative factor of (-1); see the example worked out below.
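
For example, written out explicitly for \mu = 2 (so k = 1 swaps to the left in the first term and 3 - k = 2 swaps to the right in the second):

\gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^2 = - \gamma^0 \gamma^1 \gamma^2 \gamma^2 \gamma^3 = - \eta^{22} \gamma^0 \gamma^1 \gamma^3
\gamma^2 \gamma^0 \gamma^1 \gamma^2 \gamma^3 = + \gamma^0 \gamma^1 \gamma^2 \gamma^2 \gamma^3 = + \eta^{22} \gamma^0 \gamma^1 \gamma^3

and the two terms cancel. In general there are k + (3 - k) = 3 swaps in total, which is odd, so the two terms always come with opposite signs. Note also that \gamma^\mu is never pushed past itself: it is only brought next to its twin and then squared, \gamma^\mu \gamma^\mu = \eta^{\mu \mu} (no sum), so no extra 2 \eta^{\mu \mu} term ever appears.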
 
quZz said:
So you have to prove that
\gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^{\mu} + \gamma^{\mu} \gamma^0 \gamma^1 \gamma^2 \gamma^3 = 0
...

Thanks. Can I also ask why equation 3.25 is correct? Isn't that the interaction Lagrangian rather than the interaction Hamiltonian? I think you'll need to look at eqn 3.7 as well.

Cheers.
 
em... which book? =)
 
It comes straight from the connection between the Hamiltonian and the Lagrangian.

If the Lagrangian is L = L1 + L2 and L2 does not depend on the time derivatives of the fields, then the Hamiltonian is H = H1 + H2, where H1 corresponds to L1 and H2 = -L2.

Even more: if you add an infinitesimal term (which may depend on derivatives) to the Lagrangian, L = L0 + L', then the Hamiltonian is H = H0 + H' (to first order in L'), where H0 corresponds to L0 and H' = -L'.
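
A minimal sketch of where the sign flip comes from in the non-derivative case (a single field \varphi, no constraints assumed):

\pi = \partial (L_1 + L_2) / \partial \dot\varphi = \partial L_1 / \partial \dot\varphi , since L_2 contains no \dot\varphi
H = \pi \dot\varphi - L = ( \pi \dot\varphi - L_1 ) - L_2 = H_1 - L_2

So the canonical momentum is unchanged by L_2, and its entire effect on the Hamiltonian is H_2 = -L_2. In the infinitesimal case, the shift in \pi induced by L' only affects H at second order, which is why H' = -L' still holds to first order.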
 
quZz said:
... if you add an infinitesimal term (which may depend on derivatives) to the Lagrangian, L = L0 + L', then the Hamiltonian is H = H0 + H', where H0 corresponds to L0 and H' = -L'.

When you say H0 corresponds to L0, do you mean H0 = L0? Or is it just some function of L0?
 
H0 can't be equal to L0, because they don't depend on the same (field) variables. For a system without constraints, the Hamiltonian is the Legendre transform of the Lagrangian with respect to the generalized velocities.
 