How can I calculate the Killing form of a Lie algebra using a specific basis?

Dogtanian
OK, firstly I hope this is the right place for my question. I'm in a bit of a bind. I need to be able to calculate the Killing form for a Lie algebra by next week, but I'm stuck and won't be able to get any help in 'real life' until Friday, which doesn't leave me enough time to sort out my problem. So I was hoping someone here might be able to give me some pointers.


So I have the Lie algebra of all upper triangular 2x2 matrices and am using the basis
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

The Killing form is defined as K(X,Y) = trace(adX adY) for all X, Y in the Lie algebra, where ad is the adjoint map, defined by adX(Z) = [X, Z] = XZ - ZX for Z in the Lie algebra.

I know I have to consider adX and adY as linear transformations and work out their matrices (and hence multiply them together and find the trace). What I'm not sure about is how to go about this, and at what point I should use the basis elements instead of a general element of the Lie algebra.

Any help would be much appreciated, as I've spent a fair bit of time on this and gotten nowhere.

I also have to do the same for the special linear Lie algebra sl_2(C) (C = complex numbers), but I think if I have some hints for the triangular one I should hopefully be able to work the special linear one out myself.
 
name your three matrices A, B, and C.

then you have the following "multiplication" table:

[A,B]=B
[A,C]=0
[B,C]=B

so ad(A) takes A to 0, B to B, and C to 0. relative to the basis {A,B,C}, these images have coordinate vectors (0 0 0), (0 1 0), and (0 0 0). as a matrix, you build ad(A) by taking its value on each basis vector as the corresponding column, so ad(A) is

0 0 0
0 1 0
0 0 0

ad(B) is
0 0 0
-1 0 1
0 0 0

and ad(C)
0 0 0
0 -1 0
0 0 0

from which I get K(A,A)=1, K(A,B) = 0, K(A,C)=-1, K(B,B)=0, K(B,C)=0, and K(C,C)=1

note that the Killing form is degenerate, so by Cartan's criterion (K is nondegenerate iff g is semisimple), this algebra is not semisimple. which is good, since this algebra is solvable: [g,g] = span(B) and [[g,g],[g,g]] = 0. Cartan's other criterion says that g is solvable iff [g,g] is contained in the kernel of K. in this case [g,g] = span(B), and K vanishes on B, so that checks out. When you do sl(2,C), which is semisimple (simple, even), you should get a nondegenerate Killing form.
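If it helps to check the arithmetic by machine, here is a minimal sketch (mine, not from the thread, using NumPy) that builds the ad matrices in the basis {A, B, C} and recovers the Killing form values above:

```python
import numpy as np

# Basis of the upper triangular 2x2 matrices: A = E11, B = E12, C = E22
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])
C = np.array([[0, 0], [0, 1]])
basis = [A, B, C]

def bracket(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def coords(M):
    """Coordinates of an upper triangular M in the basis {A, B, C}."""
    return np.array([M[0, 0], M[0, 1], M[1, 1]])

def ad(X):
    """3x3 matrix of ad(X): column j is the coordinates of [X, j-th basis element]."""
    return np.column_stack([coords(bracket(X, E)) for E in basis])

def K(X, Y):
    """Killing form K(X, Y) = trace(ad(X) ad(Y))."""
    return int(np.trace(ad(X) @ ad(Y)))

print(K(A, A), K(A, B), K(A, C))   # 1 0 -1
print(K(B, B), K(B, C), K(C, C))   # 0 0 1
```

By bilinearity these six values determine K completely.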
 
I am feeling really dumb here, as I'm sure all this is not too difficult; I'm just missing something that makes it all fit into place.

Why are we ending up with 3x3 matrices to describe the transformation? Is it because we have a 3-dimensional Lie algebra?

Is it possible to get a single matrix for the transformation that would work for any general X as adX, rather than having separate ones for each basis element? Would we just add the three together to get this (which would essentially just be adB)? This doesn't seem right to me, but my mind's gone blank and I can't think it through.

Also, how would we then apply a 3x3 matrix form of the transformation to the 2x2 matrices it should be applied to?
I've seen an example elsewhere with 3x3 matrices involved and it confused me. I'm sure there is simply something I'm forgetting/not understanding, which is the key to all this... :confused:

(BTW, I do understand all the Killing form/Cartan Criteria stuff at the bottom, so no worries about that bit)

EDIT: I've just thought, it should be possible to know the behaviour of the Killing form just from its effect on the basis elements, due to the Killing form's bilinearity. Thus, we should really only need to work out the Killing form for adX when X is a basis element (I think this is what you have done above). So really, what I need to get my head around is how we get the 3x3 matrices. I can sort of see where they come from in your post, but not why we get them or why we should get them. I'd have thought that we'd need 2x2 matrices... for some reason, though I'm not even sure why I would think that now. ...I'm really confusing myself here :confused:
 
Dogtanian said:
I am feeling really dumb here, as I'm sure all this is not too difficult; I'm just missing something that makes it all fit into place.

Why are we ending up with 3x3 matrices to describe the transformation? Is it because we have a 3-dimensional Lie algebra?

yes. we are turning the lie algebra into a (3-dimensional) representation, upon which the lie algebra acts via the adjoint action.

Is it possible to get a single matrix for the transformation that would work for any general X as adX, rather than having separate ones for each basis element? Would we just add the three together to get this (which would essentially just be adB)? This doesn't seem right to me, but my mind's gone blank and I can't think it through.

ad(X) is a linear thing: ad(X+Y)=ad(X)+ad(Y), so if you know ad on the basis vectors of the lie algebra you know it on all elements by linearity.


Also, how would we then apply a 3x3 matrix form of the transformation to the 2x2 matrices it should be applied to?
I've seen an example elsewhere with 3x3 matrices involved and it confused me. I'm sure there is simply something I'm forgetting/not understanding, which is the key to all this... :confused:


but you aren't applying ad(X) to the 2x2 matrices via matrix multiplication! you are applying it to the 3-dimensional vector space of upper triangular matrices.
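To make that concrete, here is a small sketch of mine (not from the thread, using NumPy): the 3x3 matrix ad(A) acts on the coordinate vector of an element Z, and gives the same answer as taking the commutator [A, Z] with 2x2 matrices. The sample element Z = 2A + 5B - 3C is just an arbitrary choice for illustration.

```python
import numpy as np

# Basis A = E11, B = E12, C = E22 of the upper triangular 2x2 matrices
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])
C = np.array([[0, 0], [0, 1]])

# The 3x3 matrix of ad(A) relative to {A, B, C}, as computed earlier in the thread
adA = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])

# A general element, say Z = 2A + 5B - 3C, and its coordinate vector
Z = 2 * A + 5 * B - 3 * C
z = np.array([2, 5, -3])

# Acting with the 3x3 matrix on the coordinate vector ...
print(adA @ z)                                         # [0 5 0]

# ... matches the commutator [A, Z] computed with 2x2 matrices
comm = A @ Z - Z @ A
print(np.array([comm[0, 0], comm[0, 1], comm[1, 1]]))  # [0 5 0]
```

So the 3x3 matrix never multiplies a 2x2 matrix directly; it acts on coordinates.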

(BTW, I do understand all the Killing form/Cartan Criteria stuff at the bottom, so no worries about that bit)

EDIT: I've just thought, it should be possible to know the behaviour of the Killing form just from its effect on the basis elements, due to the Killing form's bilinearity. Thus, we should really only need to work out the Killing form for adX when X is a basis element (I think this is what you have done above). So really, what I need to get my head around is how we get the 3x3 matrices. I can sort of see where they come from in your post, but not why we get them or why we should get them. I'd have thought that we'd need 2x2 matrices... for some reason, though I'm not even sure why I would think that now. ...I'm really confusing myself here :confused:



Let M be the above Lie algebra; M is a 3-dimensional vector space. For an element X in that space there is a linear map from M to itself called ad(X), and the action of ad(X) on a vector v in M is given by declaring

ad(X).v = [X, v]

this is a linear map from M to M and can thus be represented by a 3x3 matrix.

It so happens that M is a subset of the 2x2 matrices, and that we calculate [X, v] from the matrix multiplication Xv - vX. I think yours is a common issue.
 
If it helps, think of two copies of M: one as matrices (the Lie algebra) and one just as a vector space.

In fact, let's use the label g for the Lie algebra version and V for the vector space.

V is a 3-d vector space and we can identify elements of V with g via

$$\left( \begin{array}{cc} a & b\\ 0 & c\end{array} \right) \leftrightarrow \left( \begin{array}{c} a \\ b \\ c \end{array} \right)$$


Now, we want to make g act on V as a Lie algebra representation.

so we define a map p: g --> End(V); p(x) = ad(x).

we define p(x).v = ad(x).v by recalling that we can identify v with some 2x2 matrix, and setting ad(x).v = [x, v], the commutator bracket.

This defines an action of g on V, i.e. it realises g inside the 3x3 matrices of End(V).

Does that make it clearer?
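For what it's worth, the same recipe mechanised for sl(2, C) (a sketch of mine, not from the thread, using the standard basis e, f, h) produces a nondegenerate Killing form, as predicted:

```python
import numpy as np

# Standard basis of sl(2, C): e = E12, f = E21, h = E11 - E22
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
h = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [e, f, h]

def bracket(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def coords(M):
    """Coordinates of a traceless 2x2 matrix M = a*e + b*f + c*h."""
    return np.array([M[0, 1], M[1, 0], M[0, 0]])

def ad(X):
    """3x3 matrix of ad(X) relative to {e, f, h}."""
    return np.column_stack([coords(bracket(X, E)) for E in basis])

# Gram matrix of the Killing form in the basis {e, f, h}
K = np.array([[np.trace(ad(X) @ ad(Y)) for Y in basis] for X in basis])
print(K.real)             # [[0 4 0], [4 0 0], [0 0 8]]
print(np.linalg.det(K))   # nonzero, so K is nondegenerate
```

The nonzero determinant of the Gram matrix is exactly what nondegeneracy means, matching Cartan's criterion for the semisimplicity of sl(2, C).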
 
Yes, I think it does make it clear now. I sort of got it a few minutes ago as I was making a post on this subject over on Science Forums and Debate. I remember doing a hell of a lot of work on this sort of thing back in my topology course last year, only I'd somehow temporarily put it out of my mind... as I always manage to do with the things I need when I worry about not being able to understand something... I often already know what I need, I just don't think about it :blushing: :bugeye:

Thanks very much for everyone's help here :biggrin:
 