Block Diagonalization - Representation Theory

In summary, the matrix (7) on page 4 of the document linked below is obtained via the Jordan–Chevalley decomposition: the general procedure is to find the semisimple parts of all the matrices and diagonalize them simultaneously.
  • #1
nigelscott
How does one go about finding a matrix ##U## such that ##U^{-1}D(g)U## produces a block diagonal matrix for all ##g## in ##G##? For example, I am trying to figure out how the matrix (7) on page 4 of this document is obtained.
 
  • #2
I would study the Jordan–Chevalley decomposition, but I don't know the algorithmic procedure.
 
  • #3
You can do this by looking for the eigenvectors of the representation of each element. Start with one of the matrices (not the representation of the identity) and deduce its eigenvectors. Then take one of the eigenvectors and see what you get back from acting with the other elements on that vector. The irrep containing the original eigenvector is then the linear span of all of those vectors. Set that subspace of the representation aside, take one of the remaining eigenvectors, and repeat the procedure.

Caveat: two eigenvectors in different irreps may have the same eigenvalue. Thus, you should pick eigenvectors that correspond to non-degenerate eigenvalues.
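
For concreteness, here is a minimal NumPy sketch of this procedure. The setup is hypothetical: `mats` is assumed to be a list containing the representation matrix of every group element (all elements, not just generators, so a single pass over the list already yields an invariant subspace).

```python
# A minimal NumPy sketch of the eigenvector procedure described above.
# Hypothetical setup: `mats` is a list of the representation matrices
# M(g), one per group element.
import numpy as np

def irrep_subspace(mats, tol=1e-8):
    """Return (as columns) a spanning set of one invariant subspace."""
    A = next(M for M in mats if not np.allclose(M, np.eye(len(M))))
    vals, vecs = np.linalg.eig(A)
    # prefer an eigenvector with a non-degenerate eigenvalue (the caveat above)
    v = vecs[:, 0]
    for k, lam in enumerate(vals):
        if np.sum(np.isclose(vals, lam)) == 1:
            v = vecs[:, k]
            break
    basis = [v]
    for M in mats:                      # act with every group element on v
        w = M @ v
        trial = np.column_stack(basis + [w])
        if np.linalg.matrix_rank(trial, tol=tol) > len(basis):
            basis.append(w)             # keep images that enlarge the span
    return np.column_stack(basis)
```

Repeating this on eigenvectors outside the span found so far carves the representation into invariant subspaces, whose bases stacked together give the columns of ##U##.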
 
  • #4
OK. Thanks. It seems that this involves numerical analysis and is best solved using MATLAB etc. Is that a fair assessment?
 
  • #5
That depends on the complexity. There can also be other methods that work in particular cases, and sometimes some subspaces are obvious (as in your case, where ##(1,1,1,1)/2## is a normalised eigenvector with eigenvalue one of all your matrices, and so spans a subspace carrying the trivial representation of your group).

Also note that you just need to act with the generators of the group on your vectors until they do not produce new linearly independent ones.

I was planning to write down a common physical example which is the coupling of two spin-1/2 particles, but I am on my mobile. I might do it later.

Also note that there are some nice results about irreps, such as Schur's lemma, that can allow you to do things more easily.
 
  • #6
The general procedure would be to figure out all semisimple matrices and diagonalize them simultaneously. I am sure there are good algorithms to achieve a Jordan–Chevalley decomposition or a simple Jordan normal form.
 
  • #7
If you have moderate computing power and not too many elements in G you can construct projection operators. Long explanation, with the basic recipe at the end.

Basically, let's say you have a group ##G##, a vector space ##V##, and a (possibly reducible) representation of ##G## on ##V##,

##\mathbf{M}: G \to \mathrm{Hom}(V,V)##

i.e. ##\forall g \in G## you have ##\mathbf{M}\left(g\right): V\to V##

Assuming you have the character tables for all irreducible representations of this group, with ##\chi^{(i)} : G\to\mathbb{C}## being the character of the ##i##-th irreducible representation, you can define:

##\mathbf{P}_i : V \to V##

such that ##\mathbf{P}_i = \frac{\#i}{\#G}\sum_{g\in G} \chi^{(i)}\left(g^{-1}\right)\mathbf{M}(g)##

where ##\#G## is the number of elements in ##G## and ##\#i## is the dimension of the ##i##-th irrep.

Now remember the great orthogonality theorem. Given irreducible representations ##D^{(i)}## and ##D^{(j)}## with matrix elements ##D^{(i)}(g)_{\alpha\beta}##, we have:

##\frac{1}{\#G}\sum_{g\in G} D^{(i)}\left(g^{-1}\right)_{\alpha,\beta} D^{(j)}\left(g\right)_{\mu,\nu}=\frac{1}{\#i} \delta_{i,j}\delta_{\alpha,\nu}\delta_{\beta,\mu}##

Taking the trace over ##\alpha,\beta## we get:

##\frac{1}{\#G}\sum_{g\in G} \chi^{(i)}\left(g^{-1}\right) D^{(j)}\left(g\right)_{\mu,\nu}=\frac{1}{\#i} \delta_{i,j}\delta_{\mu,\nu}##

So now I can drop the matrix indices and write:

##\frac{1}{\#G}\sum_{g\in G} \chi^{(i)}\left(g^{-1}\right) \mathbf{D}^{(j)}\left(g\right)=\frac{1}{\#i} \delta_{i,j}\mathbf{Id}##

Let's get back to your representation. We know that there will be a way of decomposing ##\mathbf{M}## into irreducible representations. Let's say ##\mathbf{M}=\mathbf{D}^{(1)}\oplus \mathbf{D}^{(3)}##

Then

##\mathbf{P}_1 = \frac{\#1}{\#G}\sum_{g\in G} \chi^{(1)}\left(g^{-1}\right)\mathbf{D}^{(1)}(g) \oplus \frac{\#1}{\#G}\sum_{g\in G} \chi^{(1)}\left(g^{-1}\right)\mathbf{D}^{(3)}(g) = \mathbf{Id}\oplus\mathbf{0}##

So ##\mathbf{P}_1## projects all vectors onto the first irrep, i.e. it sends vectors from the other irreps to zero, whilst keeping vectors in the first irrep untouched.

Now simply find the eigenvectors of ##\mathbf{P}_1## with eigenvalue one (numerically). These span all the copies of the first irrep in your representation. Do the same with the projection operators of all the other irreps that occur in your representation (use characters to test which ones do), i.e. find the eigenvectors of all the ##\mathbf{P}_i##. Finally, represent ##\mathbf{M}## in that eigenvector basis for all ##g\in G## (i.e. the ##\mathbf{U}## you wanted has these eigenvectors as its columns). This will be block-diagonal.
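
For what it's worth, here is a minimal NumPy sketch of this recipe under a hypothetical setup: `mats` lists ##\mathbf{M}(g)## for every ##g\in G##, `chi` lists the corresponding characters ##\chi^{(i)}(g)## in the same order, and `dim_irrep` is ##\#i##. For a finite group ##\chi^{(i)}(g^{-1})=\overline{\chi^{(i)}(g)}##, which is what the code uses.

```python
# A minimal sketch of the projector recipe above (hypothetical setup:
# `mats` lists M(g) for every g in G, `chi` lists chi^{(i)}(g) in the
# same order, dim_irrep is the dimension #i of the i-th irrep).
import numpy as np

def projector(mats, chi, dim_irrep):
    """P_i = (#i/#G) * sum_g chi^{(i)}(g^{-1}) M(g), as defined above."""
    return (dim_irrep / len(mats)) * sum(np.conj(c) * M for c, M in zip(chi, mats))

def block_basis(projectors, tol=1e-8):
    """Stack the eigenvalue-1 eigenvectors of each P_i into the columns of U;
    U^{-1} M(g) U is then block diagonal for every g in G."""
    cols = []
    for P in projectors:
        vals, vecs = np.linalg.eig(P)
        cols.append(vecs[:, np.isclose(vals, 1, atol=tol)])  # image of P_i
    return np.column_stack(cols)
```

One caveat: if an irrep occurs more than once, the eigenvalue-one eigenspace of ##\mathbf{P}_i## spans all its copies at once, which is precisely the multiplicity issue discussed below.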
 
  • #8
This will take me a little time to digest, but in the meantime I wanted to thank everyone for your responses.
 
  • #9
Cryo said:
Finally, represent ##\mathbf{M}## in that eigenvector basis for all ##g\in G## (i.e. the ##\mathbf{U}## you wanted has these eigenvectors as its columns). This will be block-diagonal.

What bugs me is how to get block-diagonal matrices if your representation is ##\mathbf{D}_1\oplus \mathbf{D}_1##, or something similar, i.e. if you have two or more copies of the same irrep. The projection operators will not separate the copies.

Does anyone know?
 
  • #10
Cryo said:
What bugs me is how to get block-diagonal matrices if your representation is ##\mathbf{D}_1\oplus \mathbf{D}_1##, or something similar, i.e. if you have two or more copies of the same irrep. The projection operators will not separate the copies.

Does anyone know?
How is it given, if not already in block form? I.e. how do you know that the two spaces are invariant?
If they are, find a basis for both and perform the change of basis on your matrices.
 
  • #11
fresh_42 said:
How is it given, if not already in block form? I.e. how do you know that the two spaces are invariant?

It is motivated by my earlier post (see above), but it is not too important.

So let us say that for all elements ##g## of a finite group ##G## we have a representation ##\mathbf{M}\left(g\right):\mathcal{V}\to\mathcal{V}##, where ##\mathcal{V}## is the vector space. Crucially, the ##\mathbf{M}## are, by construction, not block-diagonal in the initial basis. From the character calculus we know that ##\mathbf{M}\left(g\right)=\mathbf{D}\left(g\right)\oplus\mathbf{D}\left(g\right)##, where ##\mathbf{D}\left(g\right)## is an irreducible representation of ##G##.

fresh_42 said:
If they are, find a basis for both and perform the change of basis on your matrices.

That's my question: how do I find such a basis in this specific situation, i.e. where I know that the representation is a direct sum of two copies of the same irrep, but I do not know in which basis? What is the procedure?
 
  • #12
What does 'we know that' mean? If you know it, then you have the spaces, ergo the block form. Otherwise, find invariant vectors.
 
  • #13
So let's say we are working with D3 (the dihedral group), and more specifically with its 2D irrep (denoted by 'E' here: http://symmetry.jacobs-university.de/cgi-bin/group.cgi?group=303&option=4).

The matrix representation is:

##\mathbf{D}\left(Id\right)=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}##
##\mathbf{D}\left(r_{120^\circ}\right)=\begin{pmatrix}-\frac{1}{2} & -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} & -\frac{1}{2}\end{pmatrix}##
##\mathbf{D}\left(r_{240^\circ}\right)=\begin{pmatrix}-\frac{1}{2} & \frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2} & -\frac{1}{2}\end{pmatrix}##
##\mathbf{D}\left(m_{90^\circ}\right)=\begin{pmatrix}-1 & 0\\ 0 & 1\end{pmatrix}##
##\mathbf{D}\left(m_{210^\circ}\right)=\begin{pmatrix}-\frac{1}{2} & -\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2} & \frac{1}{2}\end{pmatrix}##
##\mathbf{D}\left(m_{330^\circ}\right)=\begin{pmatrix}-\frac{1}{2} & \frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} & \frac{1}{2}\end{pmatrix}##

The characters are: ##\chi=2,\,-1,\,-1,\,0,\,0,\,0##, respectively.

Next I create a bigger representation ##\mathbf{M}=\mathbf{U}\,\left(\mathbf{D}\oplus\mathbf{D}\right)\,\mathbf{U}^\dagger##, where ##\mathbf{U}## is simply a real orthogonal (hence unitary) matrix.

##\mathbf{U}=\exp\left(\mathbf{P}\right)##
##\mathbf{P}=\begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & -1 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 1 & 1 & 0 \end{pmatrix}##

Clearly the identity will look like the 4d identity, but, e.g.:

##\mathbf{M}\left(m_{210^\circ}\right)\approx\begin{pmatrix} 0.3221 & 0.5709 & 0.1493 & -0.7403 \\ 0.5709 & -0.5176 & -0.5786 & -0.2674 \\ 0.1493 & -0.5786 & 0.7693 & -0.2261 \\ -0.7403 & -0.2674 & -0.2261 & -0.5738 \end{pmatrix}##

The trace of this is still zero, so I know it corresponds to a reflection, but as you can see it is not at all block-diagonal. The same goes for the other matrices. Now, if I were given these matrices (##\mathbf{M}\left(Id\right),\,\mathbf{M}\left(r_{120^\circ}\right),\,\dots ##) and told that they are a representation of the D3 group, I could work out that this representation is isomorphic to ##\mathbf{D}\oplus\mathbf{D}## using characters alone. But then, how would I find the basis in which this representation is block-diagonal? How would I find ##\mathbf{U}## if I was not given it?
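
For the record, here is a short sketch that reproduces this construction numerically (assuming NumPy and SciPy; the element labels like `r120` and `m210` are just shorthand for the group elements above):

```python
# Build the 2x2 'E' irrep of D3, form D(g) ⊕ D(g), and scramble it
# with U = exp(P), using the same P as in the post above.
import numpy as np
from scipy.linalg import expm, block_diag

c, s = -0.5, np.sqrt(3) / 2
D = {
    "Id":   np.eye(2),
    "r120": np.array([[c, -s], [s,  c]]),
    "r240": np.array([[c,  s], [-s, c]]),
    "m90":  np.array([[-1, 0], [0,  1]]),
    "m210": np.array([[c, -s], [-s, -c]]),
    "m330": np.array([[c,  s], [s,  -c]]),
}

P = np.array([[0, -1,  0,  0],
              [1,  0, -1, -1],
              [0,  1,  0, -1],
              [0,  1,  1,  0]])
U = expm(P)                            # real orthogonal, since P is antisymmetric

# the scrambled 4x4 representation M(g) = U (D(g) ⊕ D(g)) U^T
M = {g: U @ block_diag(Dg, Dg) @ U.T for g, Dg in D.items()}
print(np.round(M["m210"], 4))          # should reproduce the matrix quoted above
print(np.trace(M["m210"]))             # ~0, as for any reflection
```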
 
  • #14
I haven't checked your statements, but given they are correct, why isn't ##\mathbf{U}^\dagger \mathbf{M}\mathbf{U}## the matrix representation you are looking for? If it really equals ##\mathbf{D}\oplus \mathbf{D}## then it is in block form.
 
  • #15
fresh_42 said:
I haven't checked your statements, but given they are correct, why isn't ##\mathbf{U}^\dagger \mathbf{M}\mathbf{U}## the matrix representation you are looking for? If it really equals ##\mathbf{D}\oplus \mathbf{D}## then it is in block form.
This is not necessarily true. It may be written in a form which is rotated away from the block form and still be a ##\mathbf{D}\oplus\mathbf{D}## representation, i.e., it may be written in a basis that mixes the irreps. This would not mean that it is not a ##\mathbf{D}\oplus\mathbf{D}## representation.
 
  • #16
Orodruin said:
This is not necessarily true. It may be written in a form which is rotated away from the block form and still be a ##\mathbf{D}\oplus\mathbf{D}## representation, i.e., it may be written in a basis that mixes the irreps. This would not mean that it is not a ##\mathbf{D}\oplus\mathbf{D}## representation.
But he wrote the ##\mathbf{D}## as ##2\times 2## matrices and ##\mathbf{M}## as ##4\times 4##!
There are no further diagonal blocks possible in the given situation.

In case ##\mathbf{M}## is given anyhow, plus only the abstract knowledge that the representation splits (in which case that information cannot have arisen from given block matrices), my earlier statement applies: look for invariants and start with all semisimple parts.
 
  • #17
Orodruin said:
This is not necessarily true. It may be written in a form which is rotated away from the block form and still be a ##\mathbf{D}\oplus\mathbf{D}## representation

Precisely. The rules of the 'game' are that you are given ##\mathbf{M}\left(g\right)## for each ##g\in G## and you are told what ##G## is, and ##G## is finite. Then

Cryo said:
How would I find ##\mathbf{U}## if I was not given it?
 
  • #18
fresh_42 said:
look for invariants and start with all semisimple parts.
Can you please explain it in more detail, or maybe give a reference for it? I have six 4-by-4 matrices, and I know that they are a representation of D3 isomorphic to the direct sum of two copies of the two-dimensional irrep of that group. What do I do to block-diagonalize all the ##\mathbf{M}##'s?
 
  • #19
I have to take a look. I could give you a reference for semisimple groups and I assume that for other matrix groups the procedure is similar, but I don't have it in mind.
 
  • #20
I'd be glad to get any help you can provide.
 
  • #21
If you have access to it, then
J.E. Humphreys: Linear Algebraic Groups
https://www.amazon.com/dp/0387901086/?tag=pfamazon01-20

is an excellent book and good to read. The keyword is Jordan decomposition (chapter VI ff.).
The difficulty is that books are full of fancy theorems about the properties of, e.g., tori, but don't give an explicit algorithm. The way indeed leads over characters and tori (maximal diagonalizable subgroups).

I think it is along the lines:

Write each element additively ##(*)## as ##g=g_s+g_n##, a sum of a semisimple (diagonalizable) and a nilpotent endomorphism, then pass over to ##g_u=1+g_s^{-1}g_n## so that ##g=g_sg_u## is the multiplicative decomposition into a semisimple and a unipotent part. Here ##g_s=p(g)## and ##g_n=q(g)## for polynomials ##p,q## without constant term.

Then all semisimple parts are simultaneously diagonalizable, which yields the basis we look for.

(*) The additive part can be found in his other book:
J.E. Humphreys: Introduction to Lie Algebras and Representation Theory
https://www.amazon.com/dp/0387900535/?tag=pfamazon01-20
if not in any linear algebra book (using the characteristic polynomial of ##g## and its linear factors).
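
As a small illustration of this additive decomposition, here is a SymPy sketch (the ##2\times 2## matrix is just a hypothetical example): the semisimple part is read off the Jordan form, and the nilpotent and unipotent parts follow.

```python
# Additive Jordan decomposition g = g_s + g_n via the Jordan form,
# and the multiplicative version g = g_s * g_u described above.
import sympy as sp

g = sp.Matrix([[2, 1], [0, 2]])                 # hypothetical example
Pm, J = g.jordan_form()                         # g = Pm * J * Pm**(-1)
g_s = Pm * sp.diag(*[J[i, i] for i in range(J.rows)]) * Pm.inv()  # semisimple part
g_n = g - g_s                                   # nilpotent part, commutes with g_s
g_u = sp.eye(g.rows) + g_s.inv() * g_n          # unipotent factor
assert g == g_s + g_n and g == g_s * g_u        # both decompositions hold
```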
 

1. What is block diagonalization in representation theory?

Block diagonalization is a technique used in representation theory to simplify the study of a large matrix by breaking it down into smaller, more manageable blocks. This allows for a better understanding of the structure and properties of the matrix.

2. How is block diagonalization different from diagonalization?

Diagonalization involves finding a basis in which a matrix is represented by a diagonal matrix. Block diagonalization, on the other hand, involves finding a basis in which a matrix is represented by a block diagonal matrix, with each block representing a smaller matrix.

3. What is the significance of block diagonalization in representation theory?

Block diagonalization is significant because it allows for the study of large matrices by breaking them down into smaller, more manageable blocks. This can reveal important properties and relationships between the blocks, which can then be used to understand the original matrix.

4. Can any matrix be block diagonalized?

A single matrix over the complex numbers can always be brought to block diagonal form; the Jordan normal form is one example. The interesting question is whether a whole family of matrices can be block diagonalized simultaneously. For a representation of a finite group over the complex numbers, Maschke's theorem guarantees complete reducibility, so a common block-diagonalizing basis always exists; for a general family of matrices it depends on whether they share invariant subspaces.

5. What are some applications of block diagonalization in representation theory?

Block diagonalization has various applications in fields such as physics, engineering, and computer science. It can be used to simplify and analyze large matrices in quantum mechanics, signal processing, and data compression, among others.
