MHB Confusion about the Killing form for A1

topsquark
This is a long one if you have to follow all of my steps. If you are reasonably familiar with Killing forms then you can probably just skip to the three questions.

Okay, I'm on my latest project, which is to get some idea about how Dynkin diagrams and Coxeter labels work. (How do you pronounce "Coxeter" anyway?) I haven't got quite that far, but I'm not too far from it. But I'm having problems working with the Killing form, using A1 as an example. I'm going to tell you what I know, but I'm also going to skip some steps that I don't believe are relevant, for convenience. If you need more, just let me know.

Okay. A1 is a simple Lie algebra over the field of complex numbers. It has three basis elements, with Lie brackets [math][ H, E_{\pm} ] = \pm 2~E_{\pm}[/math] and [math][ E_+ , E_- ] = H[/math]. Using [math]\left( \begin{matrix} H \\ E_+ \\E_- \end{matrix} \right )[/math] as the basis (and calling H --> 1, [math]E_+[/math] --> 2, and [math]E_-[/math] --> 3 for ease of writing) we obtain the structure factors [math]f^{23}_{~~~~1} = -f^{32}_{~~~~1} = 1[/math] and [math]f^{12}_{~~~~2} = -f^{21}_{~~~~2} = - f^{13}_{~~~~3} = f^{31}_{~~~~3} = 2[/math].
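As a quick sanity check on the brackets, here is a minimal numerical sketch (Python with numpy; the 2x2 matrices for H, E_+, E_- below are the standard defining ones, which I'm assuming is what my text has in mind):

[code]
import numpy as np

# Standard 2x2 defining matrices for A1: H = diag(1, -1), E+ raising, E- lowering.
H  = np.array([[1, 0], [0, -1]], dtype=float)
Ep = np.array([[0, 1], [0,  0]], dtype=float)
Em = np.array([[0, 0], [1,  0]], dtype=float)

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# The defining brackets: [H, E+-] = +-2 E+-  and  [E+, E-] = H.
assert np.allclose(bracket(H, Ep),  2 * Ep)
assert np.allclose(bracket(H, Em), -2 * Em)
assert np.allclose(bracket(Ep, Em), H)
print("brackets (and hence the structure factors above) check out")
[/code]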

The adjoint representation is simple to derive from here using [math]\left ( R_{ad} \left ( T^a \right ) \right )_b^{~c} = f^{ac}_{~~~~b}[/math]. I don't see the need to list them explicitly, so I'll move on.
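If it helps to see them anyway, here is a sketch of how I tabulate them numerically, using the standard convention [math]ad(T^a) T^b = f^{ab}_{~~~~c} T^c[/math] (which may differ from the text's index placement by a transpose; the array layout is my own choice):

[code]
import numpy as np

# Structure factors f[a][b][c] for [T^a, T^b] = f^{ab}_c T^c,
# with 0-based indices: H = 0, E+ = 1, E- = 2.
f = np.zeros((3, 3, 3))
f[1, 2, 0], f[2, 1, 0] = 1, -1     # [E+, E-] = H
f[0, 1, 1], f[1, 0, 1] = 2, -2     # [H, E+] = 2 E+
f[0, 2, 2], f[2, 0, 2] = -2, 2     # [H, E-] = -2 E-

# Adjoint matrices: (ad T^a)^c_b = f^{ab}_c, i.e. row index c, column index b.
ad = np.array([f[a].T for a in range(3)])

# Sanity check: the adjoint matrices reproduce the same brackets,
# [ad T^a, ad T^b] = f^{ab}_c ad T^c.
for a in range(3):
    for b in range(3):
        lhs = ad[a] @ ad[b] - ad[b] @ ad[a]
        rhs = sum(f[a, b, c] * ad[c] for c in range(3))
        assert np.allclose(lhs, rhs)
print(ad)
[/code]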

From the structure factors we can write the Killing form for A1 by use of the formula: [math]\kappa ^{ab} = \frac{1}{I_{ad}} \left ( f^{ae}_{~~~~c} ~ f^{bc}_{~~~~e} \right )[/math] (summation convention implied.) [math]I_{ad}[/math] is the Dynkin index of the adjoint representation, whatever that means. After a bit of work the Killing form for A1 is
[math]\kappa = \frac{1}{I_{ad}} \left ( \begin{matrix} 8 & 0 & 0 \\ 0 & 0 &4 \\ 0 & 4 & 0 \end{matrix} \right )[/math]
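Numerically (ignoring the 1/[math]I_{ad}[/math] factor, since I'm not sure of its value yet), the same matrix also comes out of [math]tr(ad\, T^a \, ad\, T^b)[/math], which is the trace form of the formula above. A sketch, reusing the arrays from the previous snippet:

[code]
import numpy as np

# Structure factors and adjoint matrices as in the previous snippet.
f = np.zeros((3, 3, 3))
f[1, 2, 0], f[2, 1, 0] = 1, -1
f[0, 1, 1], f[1, 0, 1] = 2, -2
f[0, 2, 2], f[2, 0, 2] = -2, 2
ad = np.array([f[a].T for a in range(3)])

# Killing form without the 1/I_ad normalization:
# kappa^{ab} = f^{ae}_c f^{bc}_e = tr(ad T^a ad T^b).
kappa = np.einsum('aec,bce->ab', f, f)
kappa_trace = np.array([[np.trace(ad[a] @ ad[b]) for b in range(3)]
                        for a in range(3)])
assert np.allclose(kappa, kappa_trace)
print(kappa)
# [[8. 0. 0.]
#  [0. 0. 4.]
#  [0. 4. 0.]]
[/code]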

At this point I come to my first question.
Since for any simple Lie algebra over [math]\mathbb{R}[/math], [math]\kappa[/math] is non-degenerate, an appropriate choice of basis brings [math]\kappa[/math] to the Canonical diagonal form:
[math]\left ( \begin{matrix} 1_p & 0 \\ 0 & -1_q \end{matrix} \right )[/math]
Question: I am working over the field of complex numbers, not the reals. As it happens, the Killing form I have is non-degenerate and can be brought into the mentioned form. Is this just chance, or am I misinterpreting what they mean by "over [math]\mathbb{R}[/math]"?

Question: Splitting the above (adjoint representation) Killing form into eigenvalues/eigenvectors, I can write it in the form given in the quote. But when I rewrite the basis this way, the resulting algebra no longer has the same structure factors. I would have thought that any representation of the Lie algebra would always have the same structure factors. Why would changing the basis change them?
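To be concrete, here is a sketch of the computation I'm describing (the rescaled eigenbasis below is my own choice of normalization, not the text's): diagonalize [math]\kappa[/math], rewrite the generators in that basis, recompute the brackets, and the constants come out different.

[code]
import numpy as np

# 2x2 defining matrices as before.
H  = np.array([[1, 0], [0, -1]], dtype=float)
Ep = np.array([[0, 1], [0,  0]], dtype=float)
Em = np.array([[0, 0], [1,  0]], dtype=float)

def bracket(a, b):
    return a @ b - b @ a

# A basis that diagonalizes kappa = diag(8, 4, -4), rescaled so the Killing
# form becomes diag(1, 1, -1) (my normalization, not the text's):
# T1 ~ H,  T2 ~ E+ + E-,  T3 ~ E+ - E-.
T = [H / np.sqrt(8), (Ep + Em) / np.sqrt(8), (Ep - Em) / np.sqrt(8)]

def components(x, basis):
    """Expand the 2x2 matrix x in the given basis (least-squares solve)."""
    A = np.array([b.flatten() for b in basis]).T   # columns = flattened basis
    return np.linalg.lstsq(A, x.flatten(), rcond=None)[0]

# New structure factors: [T^a, T^b] = f'^{ab}_c T^c.
f_new = np.array([[components(bracket(T[a], T[b]), T) for b in range(3)]
                  for a in range(3)])
print(np.round(f_new, 3))   # e.g. f'^{12}_3 = 1/sqrt(2), not the original 2
[/code]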

--------------------------------------------------------------------------------------------------------------------------------------------

Now I'm going to list a few things that are simple enough to calculate. I'll give you the blow-by-blow and get to my final question.

The (only) Cartan subalgebra of A1 is [math]g_0 = \text{span}\{ H \}[/math], with [math]r = rank(A1) = dim(g_0) = 1[/math]. This means we can split the adjoint representation of A1 into a direct sum of one-dimensional root spaces [math]g = \bigoplus _{\alpha} g^{\alpha} = g_0 \oplus g^2 \oplus g^{-2}[/math], where [math]g^{\alpha} = \{ x \in A1 ~ | ~ ad_{H}(x) = \alpha (H) \cdot x \}[/math]

and [math][ H, E^{\alpha} ] = \alpha E^{\alpha}[/math]. From the Lie brackets we get two values of [math]\alpha[/math], and thus we have the root system (of A1 over [math]g_0[/math]) [math]\Phi = \{ 2, -2 \}[/math]. This gives (finally!) the Cartan-Weyl basis of A1: [math]\beta = \{ H \} \cup \{E^2, ~E^{-2} \}[/math] (with [math]E^2 = E_+, ~E^{-2} = E_-[/math]). This is actually the basis I started with, which makes the next part very confusing to me.
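As a numerical check of the decomposition, the eigenvalues of [math]ad_H[/math] on A1 are exactly 0, 2, -2. A short sketch (again using the standard 2x2 matrices, which I'm assuming match the text's conventions):

[code]
import numpy as np

H  = np.array([[1, 0], [0, -1]], dtype=float)
Ep = np.array([[0, 1], [0,  0]], dtype=float)
Em = np.array([[0, 0], [1,  0]], dtype=float)
basis = [H, Ep, Em]

def bracket(a, b):
    return a @ b - b @ a

def components(x):
    """Coefficients of x in the basis (H, E+, E-)."""
    A = np.array([b.flatten() for b in basis]).T
    return np.linalg.lstsq(A, x.flatten(), rcond=None)[0]

# ad_H as a 3x3 matrix acting on the ordered basis (H, E+, E-).
ad_H = np.array([components(bracket(H, b)) for b in basis]).T

# The eigenvalues alpha(H) label the root spaces: 0 for g_0 = span{H},
# and the roots +-2 for g^{+-2}.
print(np.round(np.linalg.eigvals(ad_H), 6))   # [ 0.  2. -2.]
[/code]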

Finally we get to the last question. My text says that the Killing form in the Cartan-Weyl basis can be written as (after a normalization of the basis)
[math]\kappa = \frac{1}{I_{ad}} \left ( \begin{matrix} 1 & 0 \\ 0 & \delta ^{\alpha, ~-\beta} \end{matrix} \right )[/math]

Question: What the heck is the [math]\delta ^{\alpha, -\beta}[/math] supposed to be? For A1 I can intuit [math]\alpha = 2 \text{ and } -\beta = -(-2) = 2[/math] from the root system, so [math]\kappa[/math] in the Cartan-Weyl basis is just the 3x3 identity matrix. But the Cartan-Weyl basis is just the basis I used at the beginning, and the Killing form of the adjoint representation is not the same as the identity, even after an appropriate normalization of the basis. It looks more like the Killing form in the quote... it has a signature of ++-. Where did I go wrong?

-Dan
 
Okay, let's break this up a little.

My text says that [math]SL(2, \mathbb{C})[/math] is isomorphic to [math]A_1[/math]. But by a choice of basis we find that [math]SL(2, \mathbb{C} )[/math] is isomorphic to [math]SU(2) \oplus SU(2)[/math]. This means that [math]A_1[/math] is isomorphic to [math]SU(2) \oplus SU(2)[/math]. The problem I'm having is this: [math]A_1[/math] has no proper ideals but [math]SU(2) \oplus SU(2)[/math] seems to have one: [math]1 \oplus SU(2)[/math]. So how can [math]A_1[/math] be isomorphic to [math]SU(2) \oplus SU(2)[/math]?

-Dan
 
Okay, I've gotten somewhere anyway. [math]SL(2, \mathbb{C} )[/math] is isomorphic to [math]SU(2) \oplus SU(2)[/math] because [math]SL(2, \mathbb{C} )[/math] is the complexification of two real Lie algebras. And [math]1 \oplus SU(2)[/math] is a subalgebra, not an ideal, so that problem is fixed.

My next level of confusion is the notation [math]\delta ^{\alpha, -\beta}[/math] appearing in the matrix in the OP near the bottom. Is this a Kronecker delta of some kind? Or is [math]\delta ^{\alpha, -\beta}[/math] meant to represent a block matrix?

-Dan
 
I'm thinking too hard. It's just a Kronecker delta on the roots: the entry is 1 when [math]\alpha = -\beta[/math] and 0 otherwise. For A1 the root-space block is

[math]\delta ^{\alpha, -\beta} = \left ( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right )[/math]

so in the Cartan-Weyl basis [math](H, E^2, E^{-2})[/math] the full normalized Killing form is

[math]\kappa = \frac{1}{I_{ad}} \left ( \begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{matrix} \right )[/math]
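For what it's worth, this is the numerical check (still ignoring the overall 1/[math]I_{ad}[/math] factor): rescaling the Cartan-Weyl basis by [math]H \to H/\sqrt{8}[/math] and [math]E^{\pm 2} \to E^{\pm 2}/2[/math] turns the earlier Killing form into exactly that block structure. The specific rescaling is my own; the text presumably has its own normalization.

[code]
import numpy as np

# Unnormalized Killing form in the ordered basis (H, E^2, E^{-2}), from before.
kappa = np.array([[8.0, 0.0, 0.0],
                  [0.0, 0.0, 4.0],
                  [0.0, 4.0, 0.0]])

# Rescale the basis: T'^a = S^a_b T^b with S = diag(1/sqrt(8), 1/2, 1/2).
# A bilinear form with two upper indices transforms as kappa' = S kappa S^T.
S = np.diag([1 / np.sqrt(8), 0.5, 0.5])
kappa_cw = S @ kappa @ S.T
print(np.round(kappa_cw, 6))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
# i.e. kappa(H, H) = 1 and kappa(E^alpha, E^beta) = delta^{alpha, -beta}.
[/code]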

I still have one question left in this thread, but I'm going to open a new one on the topic.

-Dan
 