Justify matrices form basis for SO(4)

  • Context: Graduate
  • Thread starter: OhNoYaDidn't
  • Tags: Basis, Form, Matrices
SUMMARY

The discussion focuses on justifying a set of six 4x4 matrices as a basis for the Lie algebra of the group SO(4). The matrices must satisfy the conditions of the Lie algebra, specifically A + A^T = 0 and tr(A) = 0, rather than the conditions for the group itself. To establish that these matrices form a basis, one must demonstrate their linear independence and that they span the vector space of the Lie algebra, which consists of the real antisymmetric 4x4 matrices. The dimension of the Lie algebra is confirmed to be 6, matching the number of matrices provided.

PREREQUISITES
  • Understanding of Lie Algebras, specifically the properties of the Lie algebra of SO(4)
  • Knowledge of matrix operations, including transpose and trace
  • Familiarity with linear independence and spanning sets in vector spaces
  • Basic concepts of linear algebra, particularly in relation to matrices
NEXT STEPS
  • Study the properties of the Lie algebra of SO(4) in detail
  • Learn about the conditions for matrices to belong to a Lie algebra, focusing on A + A^T = 0 and tr(A) = 0
  • Explore methods for proving linear independence of matrix sets
  • Investigate the concept of spanning sets and their significance in vector spaces
USEFUL FOR

Mathematicians, physicists, and students studying Lie groups and algebras, particularly those interested in the structure and properties of SO(4) and its applications in theoretical physics.

OhNoYaDidn't
I am given the following set of 4x4 matrices. How can I justify that they form a basis for the Lie algebra of the group SO(4)? I know that they must be real matrices with ##AA^{T}=\mathbb{I}## and ##\det A = \pm 1##. Do I show that the matrices are linearly independent, verify these properties, and conclude they are a basis? Why are there 6 elements?

$$
A_1=\begin{pmatrix} 0&0&0&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&0 \end{pmatrix},\qquad
A_2=\begin{pmatrix} 0&0&-1&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0 \end{pmatrix},\qquad
A_3=\begin{pmatrix} 0&-1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix},
$$
$$
B_1=\begin{pmatrix} 0&0&0&-1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0 \end{pmatrix},\qquad
B_2=\begin{pmatrix} 0&0&0&0\\ 0&0&0&-1\\ 0&0&0&0\\ 0&1&0&0 \end{pmatrix},\qquad
B_3=\begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&-1&0 \end{pmatrix}
$$
 
OhNoYaDidn't said:
I am given the following set of 4x4 matrices. How can I justify that they form a basis for the Lie algebra of the group SO(4)? I know that they must be real matrices with ##AA^{T}=\mathbb{I}## and ##\det A = \pm 1##. Do I show that the matrices are linearly independent, verify these properties, and conclude they are a basis? Why are there 6 elements?

I get the impression you confuse the special orthogonal group ##SO(4)## and its Lie algebra ##\mathfrak{so}(4)##, at least on the level of their defining conditions. Your matrices are elements of a Lie algebra: none of them is invertible, so none satisfies ##AA^T=\mathbb{1}## or has determinant ##+1##, which are the conditions for the group (note that ##SO(4)## requires ##\det A=+1##; allowing ##\det A=\pm 1## gives the full orthogonal group ##O(4)##). In the Lie algebra ##\mathfrak{so}(4)## the defining conditions are ##A+A^T=0## and ##\operatorname{tr}(A)=0## (the trace condition in fact already follows from antisymmetry).
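For reference, here is how the two sets of conditions fit together. If ##A+A^T=0## (so ##A## and ##A^T=-A## commute), then
$$
e^{A}\,\big(e^{A}\big)^{T}=e^{A}e^{A^{T}}=e^{A}e^{-A}=\mathbb{1},
\qquad
\det\big(e^{A}\big)=e^{\operatorname{tr}(A)}=e^{0}=1,
$$
so exponentiating an element of ##\mathfrak{so}(4)## produces an element of ##SO(4)##.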

And yes, to form a basis they have to be linearly independent, which therefore has to be shown. I think this will be easier if you write them as ##\delta_{ij}-\delta_{ji}##. To show they form a basis, you also have to show that they span the entire vector space of ##\mathfrak{so}(4)##, which is most easily done by counting the dimension: how many entries does a matrix in ##\mathbb{M}_\mathbb{R}(4)## have, and how many linear conditions does ##A+A^T=0## impose on them?
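If you want a quick numerical sanity check of both points, here is a minimal sketch assuming NumPy (the little E(i,j) helper and the variable names are just for illustration): build the six matrices, confirm ##A+A^T=0##, and check that their flattened versions have rank 6.
Code:
import numpy as np

def E(i, j, n=4):
    # standard basis matrix: a single 1 in row i, column j (1-indexed)
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

# the six matrices from the post, written as E_ij - E_ji
A1 = E(2, 3) - E(3, 2)
A2 = E(3, 1) - E(1, 3)
A3 = E(2, 1) - E(1, 2)
B1 = E(4, 1) - E(1, 4)
B2 = E(4, 2) - E(2, 4)
B3 = E(3, 4) - E(4, 3)
basis = [A1, A2, A3, B1, B2, B3]

# each matrix satisfies the Lie algebra condition A + A^T = 0
assert all(np.allclose(M + M.T, 0) for M in basis)

# linear independence: stack the flattened matrices into a 6x16 array;
# rank 6 means the set is independent, and since dim so(4) = 4*3/2 = 6,
# an independent set of 6 elements is automatically a spanning set
stacked = np.array([M.flatten() for M in basis])
print(np.linalg.matrix_rank(stacked))  # prints 6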
 
fresh_42 said:
I get the impression you confuse the special orthogonal group ##SO(4)## and its Lie algebra ##\mathfrak{so}(4)##, at least on the level of their defining conditions. [...]
Thank you, fresh_42. I'm new to the subject, so things are still a little confusing.
So, we know that a matrix in ##\mathbb{M}_\mathbb{R}(4)## has 16 entries, but since ##A=-A^T##, the diagonal entries must vanish and the lower triangular part is determined by the upper triangular part. That fixes ##N(N+1)/2## of the ##N^2## entries, so we need ##n=N^2-N(N+1)/2=N(N-1)/2## elements for this basis; for ##N=4## this gives 6, which is exactly what I wanted to understand.
To show that they are linearly independent, I write a linear combination of them with coefficients a, b, c, d, e, f, set it equal to zero, and prove ##a=b=c=d=e=f=0##.
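Written out explicitly from the matrices above, that linear combination is
$$
aA_1+bA_2+cA_3+dB_1+eB_2+fB_3=
\begin{pmatrix} 0 & -c & -b & -d\\ c & 0 & a & -e\\ b & -a & 0 & f\\ d & e & -f & 0 \end{pmatrix},
$$
which is the zero matrix only if all six coefficients vanish.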

What do you mean by writing them as ##\delta_{ij}-\delta_{ji}##?
 
OhNoYaDidn't said:
What do you mean by writing them as ##\delta_{ij}-\delta_{ji}##?
The matrices ##A,B## above are all of the form ##\delta_{ij}-\delta_{ji}##, where ##\delta_{ij}## is the matrix with exactly one ##1## in the ##i##-th row and ##j##-th column and ##0## elsewhere. These matrices are also written as ##E_{ij}##, ##e_{ij}## or ##\mathfrak{e}_{ij}## or similar, since they are the analogue of the standard basis vectors ##(0,\ldots,1,\ldots,0) \in \mathbb{R}^n##. Choosing ##E_{ij}## is probably preferable over the ##\delta## notation, because it reserves ##\delta## for the Kronecker delta on pairs of integers as usual (my fault, sorry).

It is almost obvious that the matrices above are linearly independent, since their non-zero entries are all in different positions, so they cannot cancel each other out. The advantage of the ##E_{ij}-E_{ji}## notation becomes important once you start multiplying them: ##E_{ij}\cdot E_{mn}=\delta_{jm}E_{in}##, i.e. match the inner indices and take the outer indices as the new ones; the product is zero if the inner indices don't match.
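For example, reading off from the matrices above, ##A_1=E_{23}-E_{32}##, ##A_2=E_{31}-E_{13}## and ##A_3=E_{21}-E_{12}##, and the multiplication rule gives
$$
[A_1,A_2]=(E_{23}-E_{32})(E_{31}-E_{13})-(E_{31}-E_{13})(E_{23}-E_{32})=E_{21}-E_{12}=A_3 .
$$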
 
A set B of vectors (in this case matrices) is a basis of a vector space V if and only if any element of V can be written as a linear combination of the elements of B (i.e. B spans V) and the elements of B are linearly independent.

In order to show that B spans V=##\mathfrak{so}(4)##, you can simply find the general expression for an arbitrary element A of ##\mathfrak{so}(4)## using the "defining conditions" that fresh_42 referred to (A is traceless and ##A=-A^T##). If you do this, you will be able to write this matrix A as a linear combination of the matrices you wrote, so they span V.
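Concretely, writing a general antisymmetric matrix in terms of its upper-triangular entries ##a_{ij}## (with ##a_{ji}=-a_{ij}##), one finds
$$
\begin{pmatrix} 0 & a_{12} & a_{13} & a_{14}\\ -a_{12} & 0 & a_{23} & a_{24}\\ -a_{13} & -a_{23} & 0 & a_{34}\\ -a_{14} & -a_{24} & -a_{34} & 0 \end{pmatrix}
= a_{23}A_1 - a_{13}A_2 - a_{12}A_3 - a_{14}B_1 - a_{24}B_2 + a_{34}B_3 ,
$$
so every element of ##\mathfrak{so}(4)## lies in the span of the six matrices.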

In order to show that the elements of B are linearly independent, you can write a general linear combination of them, set it equal to zero, and solve the resulting equation: you will see that all coefficients must be zero, and this by definition tells you that B is linearly independent.
 
Thank you guys, I got it :)
 
