Spinorial Field Representation

Breo
What does the following mean (why do we write it like this, why is it a direct sum, why is a 0 paired first with a 1/2 and then vice versa):

$$ (\frac{1}{2} , 0 ) \oplus (0, \frac{1}{2}) $$

?
 
Do you know how the representations of the Lorentz group are classified? The ##(1/2, 0)## and ##(0, 1/2)## are two irreducible representations of the Lorentz group; the two labels refer to the two ##su(2)## factors in the complexified Lorentz algebra ##so(1,3) \simeq su(2) \oplus su(2)##. These representations are also called the "left-handed Weyl spinor" and the "right-handed Weyl spinor." The ##\oplus## is the symbol for a direct sum of group representations.
 
I asked this in order to eventually understand the following decomposition. If we take the tensor product of two spinors ##\Psi\Psi^*##:

$$[(\frac{1}{2}, 0 ) \oplus (0, \frac{1}{2})] \otimes [(\frac{1}{2}, 0 ) \oplus (0, \frac{1}{2})] = 2(\frac{1}{2},\frac{1}{2}) \oplus (1,0) \oplus 2(0, 0 ) \oplus (0, 1)$$

Why do we say these are 16 terms, and how do we decompose ##2(\frac{1}{2},\frac{1}{2})## into the vector ##\bar{\Psi}\gamma^{\mu}\Psi## and the pseudovector ##\bar{\Psi}\gamma^5 \gamma^{\mu}\Psi##?
 
May I suggest supplementing whatever you're reading with Wu-Ki Tung's Group Theory in Physics? It explains things nicely. Miller's book is more detailed (especially on the Clebsch-Gordan theorem), but more intricate. Why are there 16 terms in a basis? The representation space of the product representation is the tensor product of the two spaces. How do you compute dimensions?
 
##\dim(D \otimes D) = \dim(D) \cdot \dim(D)##?
 
Bingo, so if the direct sum of 2 elementary Weyl spinors has dimension 4, what's the dimension of the product of 2 such direct sums?
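As a quick numerical illustration of this dimension counting (a numpy sketch of my own, not from the thread; the component values are arbitrary):

```python
import numpy as np

# Two "spinors" living in a 4-dimensional space (Dirac = Weyl + Weyl)
psi = np.arange(1, 5)    # shape (4,)
chi = np.arange(5, 9)    # shape (4,)

# The tensor (Kronecker) product lives in a 4*4 = 16 dimensional space
product = np.kron(psi, chi)
print(product.shape)     # (16,)
```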
 
16. I misunderstood the dimensions and terms... Thank you for the clarification.

Do you know how to decompose the ##2(1/2, 1/2)## as I stated in the original post?
 
Breo said:
I asked this to eventually understand the next decomposition. If we take the tensor product of 2 spinors ## \Psi\Psi* ##:

$$[(\frac{1}{2}, 0 ) \oplus (0, \frac{1}{2})] \otimes [(\frac{1}{2}, 0 ) \oplus (0, \frac{1}{2})] = 2(\frac{1}{2},\frac{1}{2}) \oplus (1,0) \oplus 2(0, 0 ) \oplus (0, 1)$$
The LHS of this equation is the vector space ##S \otimes \bar{S}## of all 16-component vectors of the form ##\psi \otimes \bar{\psi}##. We can also take this "vector" to be the ##4 \times 4## matrix (of fields on ##M^{(1,3)}##) ##\psi_a \bar{\psi}_b## formed by the following direct product
$$\psi \otimes \bar{\psi} = \left( \begin{array}{c} \psi_1 \\ \psi_2 \\ \psi_3 \\ \psi_4 \end{array} \right) \otimes \left( \psi_1^*, \ \psi_2^*, \ -\psi_3^*, \ -\psi_4^* \right).$$
Or
$$\psi \otimes \bar{\psi} = \left( \begin{array}{cccc}
\psi_1 \psi_1^* & \psi_1 \psi_2^* & -\psi_1 \psi_3^* & -\psi_1 \psi_4^* \\
\psi_2 \psi_1^* & \psi_2 \psi_2^* & -\psi_2 \psi_3^* & -\psi_2 \psi_4^* \\
\psi_3 \psi_1^* & \psi_3 \psi_2^* & -\psi_3 \psi_3^* & -\psi_3 \psi_4^* \\
\psi_4 \psi_1^* & \psi_4 \psi_2^* & -\psi_4 \psi_3^* & -\psi_4 \psi_4^*
\end{array} \right).$$
But any ##4 \times 4## matrix can be expanded in the standard (complete) basis of the Clifford algebra ##\mathcal{C}_{(1,3)}(\mathbb{C})##:
$$\Gamma^A = \{ I_{4 \times 4}, \ \gamma^\mu, \ \gamma^\mu \gamma_5, \ \sigma^{\mu\nu}, \ \gamma_5 \},$$
where ##A = 1, 2, \cdots, 16##. So, we can write
$$\psi \otimes \bar{\psi} = \sum_{A=1}^{16} a_A \, \Gamma^A. \ \ \ \ (1)$$
To find the coefficients ##a_A## we use the trace properties of the ##\Gamma^A##:
$$\operatorname{Tr}(\Gamma^A) = 0 \ \mbox{for} \ \Gamma^A \neq I, \qquad \operatorname{Tr}(\Gamma^A) = 4 \ \mbox{for} \ \Gamma^A = I,$$
and
$$\operatorname{Tr}(\Gamma_B \Gamma^A) \sim 2^2 \, \delta^A_B.$$
So, multiplying Eq. (1) with ##\Gamma_B## and taking the trace, we find
$$\operatorname{Tr}(\Gamma_B \, \psi \otimes \bar{\psi}) = a_I \operatorname{Tr}(\Gamma_B) + \sum_{\Gamma^R \neq I} a_R \operatorname{Tr}(\Gamma_B \Gamma^R). \ \ \ (2)$$
For ##\Gamma_B = I##, we find
$$a_I = \frac{1}{4} \operatorname{Tr}(\psi \otimes \bar{\psi}) = \frac{1}{4} \bar{\psi}\psi.$$
For ##\Gamma_B \neq I##, Eq. (2) gives
$$\operatorname{Tr}(\Gamma_B \, \psi \otimes \bar{\psi}) = 0 + \sum_{\Gamma^R \neq I} a_R \operatorname{Tr}(\Gamma_B \Gamma^R) \sim 2^2 \, a_B.$$
Thus
$$a_B \sim \frac{1}{4} (\Gamma_B)_{ab} (\psi \otimes \bar{\psi})_{ba} = \frac{1}{4} \bar{\psi}_a (\Gamma_B)_{ab} \psi_b = \frac{1}{4} \bar{\psi} \, \Gamma_B \, \psi.$$
So, our expansion Eq. (1) becomes
$$\psi \otimes \bar{\psi} = \frac{1}{4} (\bar{\psi}\psi) \, I + \frac{c}{4} \sum_{\Gamma^B \neq I} (\bar{\psi} \, \Gamma^B \psi) \, \Gamma_B,$$
where ##c## is a constant whose value depends on the definitions of ##\gamma_5## and ##\sigma^{\mu\nu}## and on the signature of the spacetime metric.
From this it follows that
$$\bar{\psi}\psi + \bar{\psi}\gamma_5\psi \ \in \ (0,0) \oplus (0,0), \ \mbox{dim}: \ [1] \oplus [1],$$
$$\bar{\psi}\gamma^\mu\psi + \bar{\psi}\gamma^\mu\gamma_5\psi \ \in \ (1/2,1/2) \oplus (1/2,1/2) \sim [4] \oplus [4],$$
and
$$\bar{\psi}\sigma^{\mu\nu}\psi \ \in \ (1,0) \oplus (0,1) \sim [6].$$
So, going back to the vector space ##S \otimes \bar{S}##, we can say that the ##16 \times 16## matrices which act naturally on the 16-component "vector" ##\psi \otimes \bar{\psi} \in S \otimes \bar{S}## can be brought into block-diagonal form with blocks of dimensions ##[1], [1], [4], [4]## and ##[6]## on the diagonal.
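The expansion (1) and the coefficient ##a_I = \frac{1}{4}\bar{\psi}\psi## can be checked numerically. The sketch below is my own illustration (not part of the thread): it builds the 16-element Clifford basis with numpy, using one common convention for the Dirac-representation gamma matrices, expands a random ##\psi \otimes \bar{\psi}## in it, and verifies the identity coefficient.

```python
import numpy as np
from itertools import combinations

# Pauli matrices and 2x2 blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Gamma matrices in the Dirac representation (one common convention)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
sigma = [0.5j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])
         for m, n in combinations(range(4), 2)]

# The 16-element basis {I, gamma^mu, gamma^mu gamma_5, sigma^mu nu, gamma_5}
basis = [np.eye(4, dtype=complex)] + gamma + [g @ g5 for g in gamma] + sigma + [g5]
assert len(basis) == 16

# Expand a random psi (x) psibar in this basis
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psibar = psi.conj() @ gamma[0]           # Dirac adjoint psi^dagger gamma^0
M = np.outer(psi, psibar)                # the 4x4 matrix psi_a psibar_b

B = np.column_stack([G.ravel() for G in basis])   # 16x16, invertible
a = np.linalg.solve(B, M.ravel())                 # the coefficients a_A of Eq. (1)

# The identity coefficient is (1/4) psibar psi, since all other Gamma^A are traceless
assert np.isclose(a[0], (psibar @ psi) / 4)
```

Solving the ##16 \times 16## linear system also confirms that the ##\Gamma^A## really do span all ##4 \times 4## matrices, which is the completeness claim used above.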
Why do we say these are 16 terms, and how do we decompose ##2(\frac{1}{2},\frac{1}{2})## into the vector ##\bar{\Psi}\gamma^{\mu}\Psi## and the pseudovector ##\bar{\Psi}\gamma^5 \gamma^{\mu}\Psi##?
You need to spend more time on group theory in general and the representation theory of Lorentz algebra in particular.

Sam
 
Thank you very much.

Yes, I started studying theoretical physics this year. I lack many "basic" things, but I am trying to fill those gaps :).

Indeed, one question came up while reading your reply: where can I learn the properties of traces? I know the trace is the sum of the diagonal entries of a matrix, but I do not always follow how it is used. Maybe you could explain why you were able to use it in your derivation...
 
Which traces derivation?
The ##\gamma^\mu## are traceless matrices, and so is ##\gamma_5##.
Then the ##\gamma^\mu \gamma_5## are also traceless. Take, e.g., a particular representation (Weyl):
$$\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix}, \qquad \gamma_5 = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},$$
with ##\sigma^\mu = (1, \vec{\sigma})## and ##\bar{\sigma}^\mu = (1, -\vec{\sigma})##.

Take the product:

$$\gamma^\mu \gamma_5 = \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix} \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & \sigma^\mu \\ -\bar{\sigma}^\mu & 0 \end{pmatrix}.$$

It is still traceless. Of course, using the anticommutation relations can also give you this answer (so it is a result of the algebra):
$$\operatorname{Tr}(\gamma^\mu \gamma_5) = \operatorname{Tr}(\gamma_5 \gamma^\mu) = -\operatorname{Tr}(\gamma^\mu \gamma_5) \Rightarrow \operatorname{Tr}(\gamma^\mu \gamma_5) = 0,$$
where I used first ##\operatorname{Tr}(AB) = \operatorname{Tr}(BA)## and then ##\{\gamma_5, \gamma^\mu\} = 0##.

For ##\sigma^{\mu\nu} \equiv \frac{i}{2}[\gamma^\mu, \gamma^\nu]##: since it is a commutator, it is traceless:
$$\operatorname{Tr}(\sigma^{\mu\nu}) \sim \operatorname{Tr}(\gamma^\mu \gamma^\nu) - \operatorname{Tr}(\gamma^\nu \gamma^\mu) = \operatorname{Tr}(\gamma^\mu \gamma^\nu) - \operatorname{Tr}(\gamma^\mu \gamma^\nu) = 0,$$
where I used the trace property ##\operatorname{Tr}(AB) = \operatorname{Tr}(BA)##.
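These tracelessness statements are easy to verify numerically. A small numpy sketch (my own, not from the thread) in the Weyl representation written above:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
pauli = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# Weyl (chiral) representation: sigma^mu = (1, s), sbar^mu = (1, -s)
sigma_mu = [I2] + pauli
sbar_mu = [I2] + [-s for s in pauli]
gamma = [np.block([[Z2, sigma_mu[m]], [sbar_mu[m], Z2]]) for m in range(4)]
g5 = np.block([[-I2, Z2], [Z2, I2]])

# gamma^mu, gamma^mu gamma_5, and sigma^mu nu are all traceless
for g in gamma:
    assert abs(np.trace(g)) < 1e-12
    assert abs(np.trace(g @ g5)) < 1e-12
for m in range(4):
    for n in range(4):
        smn = 0.5j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m])
        assert abs(np.trace(smn)) < 1e-12
```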

The rest is in complete analogy with what you do for ##2 \times 2## complex matrices with the basis ##\Gamma^A = \{1, \sigma^i\}##, when you write a general matrix
$$M = \sum_B a_B \Gamma^B$$
and want to find the coefficients ##a_B##.
http://en.wikipedia.org/wiki/Pauli_matrices#Completeness_relation
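This ##2 \times 2## analogy can be sketched directly with numpy (my own illustration; the random matrix and seed are arbitrary), using the orthogonality ##\operatorname{Tr}(\sigma_A \sigma_B) = 2\delta_{AB}## of the basis ##\{1, \sigma^i\}##:

```python
import numpy as np

# The basis {1, sigma^x, sigma^y, sigma^z} of 2x2 complex matrices
pauli = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# a_B = Tr(sigma_B M) / 2, from Tr(sigma_A sigma_B) = 2 delta_AB
coeffs = [np.trace(s @ M) / 2 for s in pauli]
M_rebuilt = sum(a * s for a, s in zip(coeffs, pauli))
assert np.allclose(M, M_rebuilt)
```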

An even simpler version is what you do to find the coefficients of a vector in some basis,
$$\vec{v} = \sum_i a_i \hat{x}_i,$$
where you take the inner product with the basis vectors (the analogy becomes clear once you realize that the inner product is a scalar, as is the trace of a matrix).
 
Just as expected, samalkhaiat writes a stunning reply. :) Bravo!
 
dextercioby said:
Just as expected, samalkhaiat writes a stunning reply. :) Bravo!

Thanks, it is always a pleasure to help someone understand something. Also, it is fun to do.
 
