SU_L x SU_R and SU_V x SU_A equivalence

whitewolf91
I want to show that the group ##SU_L(N)\times SU_R(N)## is the same as ##SU_V(N)\times SU_A(N)##, i.e. that it is possible to rewrite the transformation:

$$\begin{cases} \psi_L \to \psi'_L = V_L\,\psi_L\\ \psi_R \to \psi'_R = V_R\,\psi_R \end{cases}$$

where ##V_L## and ##V_R## are ##N\times N## ##SU(N)## matrices and ##\psi_L## and ##\psi_R## are ##N##-component vectors, as:

$$\begin{cases} \psi_L \to \psi'_L = V A\,\psi_L\\ \psi_R \to \psi'_R = V A^\dagger\,\psi_R \end{cases}$$

where again ##V## and ##A## are ##N\times N## ##SU(N)## matrices. The system:

$$\begin{cases} V_L = V A\\ V_R = V A^\dagger \end{cases}$$

must be solved for ##V## and ##A##. Eliminating ##V## (using ##V^\dagger V = 1##):

$$V_R^\dagger V_L = (V A^\dagger)^\dagger (V A) = A\,V^\dagger V\,A = A^2$$

So, ##A## is the square root of the ##SU(N)## matrix ##V_R^\dagger V_L## (and then ##V = V_L A^\dagger##).

Problems:

1. The only way I can think of to give meaning to the square root of a matrix is a naive series expansion, but how do I prove convergence?

2. Does the series expansion define an ##SU(N)## matrix?


Random thoughts:

Problem 2 is probably the less serious one. I believe that if the square of a matrix equals an ##SU(N)## matrix, then that matrix still lives in ##SU(N)##. Since it is not necessary to find explicit expressions for ##A## and ##V##, it would be enough to prove that the square root of an ##SU(N)## matrix is again an ##SU(N)## matrix. Another approach is to consider infinitesimal elements of the group: the system above is easily satisfied in the Lie algebra. Is this enough to conclude that there are finite group elements satisfying the same relations?

Thank you for your time.
 
whitewolf91 said:
1. The only way I can think of to give meaning to the square root of a matrix is a naive series expansion, but how to prove convergence?

If a matrix ##A## is similar to a diagonal matrix (there is some invertible matrix ##P## and some diagonal matrix ##D## such that ##A = P^{-1} D P##), then you can define ##A^{1/2} \equiv P^{-1} D^{1/2} P##, where ##D^{1/2}## is defined by just taking the square root of the diagonal entries.
 
An ##SU(N)## matrix is not necessarily Hermitian, so it is not guaranteed that it can be diagonalized. If it were Hermitian, however, then ##P## would be unitary and it would be automatic that the square root is in ##SU(N)##.
 
Unitary matrices are all diagonalizable with a unitary ##P##, since they are normal (spectral theorem). Their eigenvalues are phases ##e^{i\theta_k}##, so ##D^{1/2}## is again diagonal with unit-modulus entries, and the branches ##\pm e^{i\theta_k/2}## can be chosen so that the determinant is ##+1##. So all ##SU(N)## matrices have a square root in ##SU(N)##.
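To make this concrete, here is a minimal numerical sketch (my own, not from the thread; the helper names `random_su` and `su_sqrt` are hypothetical) that builds ##A## by diagonalizing the unitary matrix ##V_R^\dagger V_L## as described above, fixes the branch so that ##\det A = +1##, and checks the decomposition from the opening post:

```python
import numpy as np
from scipy.stats import unitary_group  # random U(N) samples for testing

def random_su(n, seed=None):
    """Draw a random SU(n) matrix: sample U(n), divide out the determinant phase."""
    u = unitary_group.rvs(n, random_state=seed)
    return u * np.linalg.det(u) ** (-1.0 / n)

def su_sqrt(u):
    """Square root of a special unitary matrix via eigendecomposition,
    choosing branches of the eigenvalue square roots so that det = +1."""
    w, p = np.linalg.eig(u)        # u = p @ diag(w) @ inv(p); w are phases
    s = np.sqrt(w)                 # principal square roots of the eigenvalues
    if np.real(np.prod(s)) < 0:    # det would be -1: flip one branch,
        s[0] = -s[0]               # (-s0)^2 = s0^2, so it is still a square root
    return p @ np.diag(s) @ np.linalg.inv(p)

n = 3
v_l, v_r = random_su(n, seed=1), random_su(n, seed=2)
a = su_sqrt(v_r.conj().T @ v_l)    # A^2 = V_R^dagger V_L
v = v_l @ a.conj().T               # V = V_L A^dagger

assert np.allclose(v @ a, v_l)                 # V_L = V A
assert np.allclose(v @ a.conj().T, v_r)        # V_R = V A^dagger
assert np.allclose(a.conj().T @ a, np.eye(n))  # A is unitary
assert np.isclose(np.linalg.det(a), 1)         # det A = +1
```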
 
@The_Duck: thank you, that solves the problem.

@samalkhaiat: I follow you until the very end, when you imply that:

$$e^{i(\alpha+\epsilon\gamma_5)\cdot\tau/2} = e^{i\alpha\cdot\tau/2}\,e^{i\epsilon\gamma_5\cdot\tau/2}$$

Isn't this false, since the exponents are non-commuting?
 
whitewolf91 said:
@samalkhaiat: I follow you until the very end, when you imply that:

$$e^{i(\alpha+\epsilon\gamma_5)\cdot\tau/2} = e^{i\alpha\cdot\tau/2}\,e^{i\epsilon\gamma_5\cdot\tau/2}$$

Isn't this false, since the exponents are non-commuting?

This should be understood as an ##8 \times 8## matrix when you use the BCH identity. Write it as
$$e^{\frac{i}{2}(\alpha\cdot\tau)\otimes I}\, e^{\frac{i}{2}(\epsilon\cdot\tau)\otimes\gamma_5}.$$
 
  • #10
My reasoning:

$$[(\alpha\cdot\tau)\otimes I,\ (\epsilon\cdot\tau)\otimes\gamma_5] = [\alpha\cdot\tau,\ \epsilon\cdot\tau]\otimes\gamma_5 = \alpha_a\epsilon_b\,[\tau_a,\tau_b]\otimes\gamma_5 = 2i\,\alpha_a\epsilon_b\,\epsilon_{abc}\,\tau_c\otimes\gamma_5 = 2i\,(\alpha\times\epsilon)\cdot\tau\otimes\gamma_5$$

(using ##[\tau_a,\tau_b] = 2i\epsilon_{abc}\tau_c## for the Pauli matrices), which is zero only if ##\alpha## and ##\epsilon## are parallel.
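A quick numerical check of this commutator (my own sketch, assuming ##\tau## are the Pauli matrices and ##\gamma_5 = \mathrm{diag}(1,1,-1,-1)## in the chiral basis):

```python
import numpy as np

# Pauli matrices tau_a (2x2) and chiral-basis gamma_5 = diag(1,1,-1,-1) (4x4)
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
gamma5 = np.diag([1, 1, -1, -1]).astype(complex)
I4 = np.eye(4)

dot = lambda v: np.einsum('a,aij->ij', v, tau)  # (v . tau) as a 2x2 matrix

rng = np.random.default_rng(0)
alpha, eps = rng.normal(size=3), rng.normal(size=3)

op1 = np.kron(dot(alpha), I4)     # (alpha.tau) x I, an 8x8 matrix
op2 = np.kron(dot(eps), gamma5)   # (eps.tau) x gamma_5

lhs = op1 @ op2 - op2 @ op1
rhs = 2j * np.kron(dot(np.cross(alpha, eps)), gamma5)
assert np.allclose(lhs, rhs)      # [(a.t) x I, (e.t) x g5] = 2i ((a x e).t) x g5
assert not np.allclose(lhs, 0)    # nonzero for generic (non-parallel) alpha, eps
```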
 
  • #11
One should stress that you shouldn't write ##\mathrm{SU}(2)_V \times \mathrm{SU}(2)_A## (BAD NOTATION!), since the ##\times## symbol should be reserved for a direct product of groups. That's the case for ##\mathrm{SU}(2)_L \times \mathrm{SU}(2)_R##, but ##\mathrm{SU}(2)_A## is not even a subgroup but a coset.
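This can also be seen at the level of generators. A minimal sketch (my own, same Pauli/chiral-basis assumptions as above): the bracket of two axial generators ##\frac{\tau_a}{2}\otimes\gamma_5## lands among the vector generators ##\frac{\tau_c}{2}\otimes I##, so the axial set is not closed under the Lie bracket:

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
gamma5 = np.diag([1, 1, -1, -1]).astype(complex)
I4 = np.eye(4)

T = [np.kron(t / 2, I4) for t in tau]      # vector generators T_a = tau_a/2 x I
X = [np.kron(t / 2, gamma5) for t in tau]  # axial generators  X_a = tau_a/2 x g5

eps = np.zeros((3, 3, 3))                  # Levi-Civita symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

for a in range(3):
    for b in range(3):
        comm = X[a] @ X[b] - X[b] @ X[a]
        # [X_a, X_b] = i eps_abc T_c : axial brackets close into VECTOR
        # generators, so the axial generators alone are not a subalgebra.
        assert np.allclose(comm, 1j * sum(eps[a, b, c] * T[c] for c in range(3)))
```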
 
  • #12
whitewolf91 said:
My reasoning:

$$[(\alpha\cdot\tau)\otimes I,\ (\epsilon\cdot\tau)\otimes\gamma_5] = 2i\,(\alpha\times\epsilon)\cdot\tau\otimes\gamma_5,$$

which is zero only if ##\alpha## and ##\epsilon## are parallel.

I am very sorry, maybe I was drunk. For some reason, I thought you were asking about eq. (3) with the projection matrices. But you were asking about
$$U_L U_R = \exp\!\left(i\,\frac{\alpha\cdot\tau}{2}\right)\exp\!\left(i\gamma_5\,\frac{\epsilon\cdot\tau}{2}\right)$$
This is not an algebraic equation. As explained in the paragraph below it, it means the following:

For all ##g(\epsilon_L, \epsilon_R) \in SU_L \times SU_R##, a transformation with ##\epsilon_L = \epsilon_R## corresponds to the vector transformation ##g(\alpha)##, and a transformation with ##\epsilon_L = -\epsilon_R## corresponds to the axial transformation ##g(\epsilon)##.
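A quick way to see this correspondence (my addition, assuming the chiral convention ##\gamma_5\psi_R = +\psi_R## and ##\gamma_5\psi_L = -\psi_L##): since ##\gamma_5## commutes with the flavour matrices ##\tau##,

$$e^{i\gamma_5\,\epsilon\cdot\tau/2}\,\psi_R = e^{+i\,\epsilon\cdot\tau/2}\,\psi_R\,,\qquad e^{i\gamma_5\,\epsilon\cdot\tau/2}\,\psi_L = e^{-i\,\epsilon\cdot\tau/2}\,\psi_L\,,$$

so the axial factor is a chiral transformation with ##\epsilon_R = \epsilon = -\epsilon_L##, while the vector factor ##e^{i\alpha\cdot\tau/2}## rotates both components equally, ##\epsilon_L = \epsilon_R = \alpha##.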

As pointed out by vanhees71, the sign ##\times## in ##SU \times SU_5## is not a direct product, because the Lie bracket of ##SU_5## does not close on itself. But the group ##G = SU \times SU_5## is a direct product of the two independent groups ##SU_L## and ##SU_R##, with Lie algebra given as
$$\mathcal{G} = su_L \oplus su_R$$
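In a basis adapted to the vector/axial split (my notation: ##T^L_a## and ##T^R_a## generate ##SU_L## and ##SU_R##, with ##[T^L_a, T^R_b] = 0##), the combinations

$$T^V_a = T^L_a + T^R_a\,,\qquad T^A_a = T^L_a - T^R_a$$

satisfy

$$[T^V_a, T^V_b] = i\epsilon_{abc}T^V_c\,,\qquad [T^V_a, T^A_b] = i\epsilon_{abc}T^A_c\,,\qquad [T^A_a, T^A_b] = i\epsilon_{abc}T^V_c\,,$$

so the vector combinations close into a subalgebra while the axial ones close back onto the vector ones; this is why the axial part is only a coset.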
Sorry for the confusion

Sam
 