HDB1 said:
Please, could you explain more on the above? That would be great.
fresh_42 said:
To see that a nilpotent transformation ##\varphi## has trace zero, consider that ##\varphi^n=0,## so its characteristic polynomial is ##t^n,## i.e. all coefficients below the leading term vanish. Since the trace is, up to sign, the coefficient of the second-highest term ##t^{n-1}## of the characteristic polynomial, it has to be zero.
A nilpotent linear transformation ##\varphi## is one for which there is a natural number ##k## such that ##\varphi^k=0.## This means ##\varphi (\varphi (\varphi (\ldots (\varphi (v)))\ldots)) =0## for every ##v\in L.## Starting with a vector that needs the most applications of ##\varphi## to be annihilated, then one with the second most, etc., we can build a basis of ##L## such that ##\varphi## is represented by a strictly upper triangular matrix. Therefore, all entries on the diagonal of the matrix of ##\varphi## are zero. That means that the trace of ##\varphi## is zero and that ##\det(\varphi -t\cdot I)=(-t)^n## is the characteristic polynomial of ##\varphi.## (The trace is also always, up to sign, the coefficient of the second-highest term of the characteristic polynomial, which in the case of nilpotent transformations like ##\varphi## doesn't exist, i.e. equals zero.) Long story short:
$$
\varphi \text{ nilpotent } \Longrightarrow \varphi^n=0 \Longrightarrow \operatorname{trace}(\varphi )=0
$$
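A quick sanity check of this chain with a computer algebra system, using a hypothetical strictly upper triangular example matrix `N` (my choice, not from the post):

```python
import sympy as sp

# a strictly upper triangular 3x3 matrix is nilpotent: N**3 = 0
N = sp.Matrix([[0, 1, 2],
               [0, 0, 3],
               [0, 0, 0]])

t = sp.symbols('t')
assert N**3 == sp.zeros(3, 3)            # nilpotent
assert N.charpoly(t).as_expr() == t**3   # characteristic polynomial is t^n
assert N.trace() == 0                    # the t^{n-1} coefficient, i.e. the trace, is 0
```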
HDB1 said:
Also, please, I am confused about the first direction, I mean in:
$$
\text{According to Cartan's Criterion (4.3), }\operatorname{ad}_L S\text{ is solvable, hence }S\text{ is solvable.}
$$
Are you confused about how to apply Cartan's criterion, or about Cartan's criterion itself?
Cartan's criterion says: Consider a Lie algebra ##L## which consists of matrices of a fixed finite size. Assume that the trace of every matrix product ##x\cdot y## is zero, where ##y\in L## is any matrix of ##L## and ##x\in [L,L],## i.e. ##x## can be written as ##x=\sum a_{ij}[a_i,a_j]## for some ##a_k\in L.## Then ##L## is solvable, meaning ##[\ldots,[[[L,L],[L,L]],[[L,L],[L,L]]]\ldots]=\{0\}.## This is the sequence we get if we multiply ##L## with itself, then the product of it with itself, then the result of that product with itself, and so on. Solvable means that this process ends in ##\{0\}.##
Examples are:
the Borel subalgebra of ##\mathfrak{sl}(2)## (solvable)
$$
\mathfrak{B}(\mathfrak{sl}(2))=\left\{\begin{pmatrix}a&b\\0&-a\end{pmatrix}\, : \,a,b\in \mathbb{R}\right\}
$$
or the Heisenberg algebra (nilpotent and thus in particular solvable)
$$
\mathfrak{H}=\left\{\begin{pmatrix}0&a&b\\0&0&c\\0&0&0\end{pmatrix}\, : \,a,b,c\in \mathbb{R}\right\}
$$
So Cartan's criterion for the Heisenberg algebra says: If
\begin{align*}
\operatorname{trace}&\left(\left[\begin{pmatrix}0&a&b\\0&0&c\\0&0&0\end{pmatrix},\begin{pmatrix}0&u&v\\0&0&w\\0&0&0\end{pmatrix}\right]\cdot \begin{pmatrix}0&x&y\\0&0&z\\0&0&0\end{pmatrix}\right)\\
&=\operatorname{trace}\left(\begin{pmatrix}0&a&b\\0&0&c\\0&0&0\end{pmatrix}\cdot \begin{pmatrix}0&u&v\\0&0&w\\0&0&0\end{pmatrix}\cdot \begin{pmatrix}0&x&y\\0&0&z\\0&0&0\end{pmatrix}-
\begin{pmatrix}0&u&v\\0&0&w\\0&0&0\end{pmatrix}\cdot\begin{pmatrix}0&a&b\\0&0&c\\0&0&0\end{pmatrix}\cdot \begin{pmatrix}0&x&y\\0&0&z\\0&0&0\end{pmatrix}
\right)\\
&=0
\end{align*}
for all ##a,b,c,u,v,w,x,y,z## then ##L=\mathfrak{H}## is solvable.
In short: if ##\operatorname{trace}([A,U]\cdot X)=0## for all ##A,U,X\in L,## then ##L## is solvable.
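One can let a computer algebra system do this symbolic trace computation; a minimal sketch with sympy (the helper name `heis` is mine):

```python
import sympy as sp

a, b, c, u, v, w, x, y, z = sp.symbols('a b c u v w x y z')

def heis(p, q, r):
    # generic element of the Heisenberg algebra
    return sp.Matrix([[0, p, q],
                      [0, 0, r],
                      [0, 0, 0]])

A, U, X = heis(a, b, c), heis(u, v, w), heis(x, y, z)
comm = A * U - U * A                           # the bracket [A, U]
assert sp.simplify(sp.trace(comm * X)) == 0    # Cartan's hypothesis holds for all entries
```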
Cartan's criterion gives us a sufficient condition for Lie algebras built from matrices - and all finite-dimensional Lie algebras over fields of characteristic zero (##\mathbb{Q},\mathbb{R},\mathbb{C},## etc.) are matrix algebras (Ado's theorem) - to determine solvability by checking whether the traces
$$
\operatorname{trace}\left([A,U]\cdot X\right)=\operatorname{trace}\left(A\cdot U\cdot X- U\cdot A\cdot X\right)
$$
vanish for every choice of ##A,U,X.##
Hence, given a Lie algebra defined by matrices, e.g. the simple and therefore not solvable ##\mathfrak{sl}(2)## or the solvable ##\mathfrak{H},## compute that monster ##A\cdot U\cdot X- U\cdot A\cdot X## and add up the diagonal entries. I would use
https://www.symbolab.com/solver/matrix-calculator for that, although the pre-phone layout of this website was better.
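For ##\mathfrak{sl}(2)## the hypothesis already fails on the standard basis vectors ##e,f,h## (the usual basis, my notation), consistent with ##\mathfrak{sl}(2)## not being solvable:

```python
import numpy as np

# standard basis of sl(2)
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

comm = e @ f - f @ e                 # [e, f] = h
assert np.array_equal(comm, h)
print(np.trace(comm @ h))            # trace(h·h) = 2 ≠ 0: Cartan's hypothesis fails
```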
That is the criterion. I leave it at that in order to answer your question about theorem 5.1. In case you have a question about Cartan's criterion, please open a new thread.
In the proof of theorem 5.1 (1st direction), we assume that ##L## is semisimple, i.e. its (solvable) radical ##\operatorname{Rad}(L)=\{0\}## is zero. But forget this for a moment. We start anew by considering
$$
S\stackrel{\text{def}}{=}\operatorname{Rad}(K)=\left\{x\in L\, : \,K(x,y)=0\text{ for all }y\in L\right\}
$$
If we choose an ##x\in S,## then ##K(x,y)=0## for all ##y\in L## by definition. Hence we have
$$
0=K(x,y)=\operatorname{trace}(\operatorname{ad}x \cdot \operatorname{ad}y)
$$
for all ##y\in [L,L]\subseteq L.## The ##\operatorname{ad}z## are matrices, building the Lie algebra ##\operatorname{ad}L## of all such matrices with elements ##z\in L,## and for matrix algebras, Cartan's criterion applies. We have ##K(x,y)=0## for all ##y\in L,## so in particular for all ##y\in [L,L].## Thus, by Cartan, the Lie algebra of matrices ##\operatorname{ad}_L S## is solvable. The index ##L## only indicates that we still multiply in ##L,## which determines e.g. the size of the matrices ##\operatorname{ad}_L x,## but ##x\in S,## and ##\operatorname{ad}_L S## is solvable by Cartan's criterion. (We silently assumed that ##S## is a Lie algebra. We even need in a moment that it is an ideal of ##L##. You might want to check this!)
Now go back to what solvable means and what ##\operatorname{ad}## means. We have with the Jacobi identity
\begin{align*}
(\operatorname{ad}[x,y])(z)&=[[x,y],z]=-[z,[x,y]]=[x,[y,z]]-[y,[x,z]]\\
&=(\operatorname{ad}x \operatorname{ad}y)(z)-(\operatorname{ad}y\operatorname{ad}x)(z)=[\operatorname{ad}x,\operatorname{ad}y](z)
\end{align*}
So ##\operatorname{ad}## is a Lie algebra homomorphism, ##\operatorname{ad}[x,y]=[\operatorname{ad}x,\operatorname{ad}y],## i.e. it walks into and out of the brackets.
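The homomorphism property can also be checked numerically: writing ##\operatorname{ad}x## as the matrix ##x\otimes I-I\otimes x^\top## acting on row-major flattened matrices (a standard vec-trick, not from the post), we get for instance:

```python
import numpy as np

def ad(x):
    # vec([x, z]) = (x ⊗ I − I ⊗ xᵀ) vec(z) for row-major flattening
    n = x.shape[0]
    I = np.eye(n)
    return np.kron(x, I) - np.kron(I, x.T)

rng = np.random.default_rng(0)
x = rng.integers(-3, 4, (2, 2))
y = rng.integers(-3, 4, (2, 2))
bracket = x @ y - y @ x
# ad is a Lie algebra homomorphism: ad[x,y] = [ad x, ad y]
assert np.allclose(ad(bracket), ad(x) @ ad(y) - ad(y) @ ad(x))
```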
Since ##\operatorname{ad}_L S## is solvable, we know that
$$
[\ldots,[[[\operatorname{ad}_L S,\operatorname{ad}_L S],[\operatorname{ad}_L S,\operatorname{ad}_L S]],[[\operatorname{ad}_L S,\operatorname{ad}_L S],[\operatorname{ad}_L S,\operatorname{ad}_L S]]]\ldots]=\{0\}
$$
This means by pulling out ##\operatorname{ad}## that
$$
\operatorname{ad}_L[\ldots,[[[S,S],[S,S]],[[S,S],[S,S]]]\ldots]=\{0\}
$$
But ##\operatorname{ad}x(y)=[x,y]## hence
\begin{align*}
(\operatorname{ad}_L[\ldots,[[[S,S],[S,S]],[[S,S],[S,S]]]\ldots])(y)&=0 \text{ for all }y\in L\\
&\Longrightarrow \\
[[\ldots,[[[S,S],[S,S]],[[S,S],[S,S]]]\ldots],y]&=\{0\}\text{ for all }y\in L\\
&\Longrightarrow \\
[[\ldots,[[[S,S],[S,S]],[[S,S],[S,S]]]\ldots],[\ldots,[[[S,S],[S,S]],[[S,S],[S,S]]]\ldots]]&=\{0\}
\end{align*}
The last line is exactly the next term of the derived series of ##S,## so the series ends in ##\{0\}## and ##S## is solvable by the definition of solvability. Next we need that ##S\subseteq L## is an ideal. I hope you have checked it.
As a solvable ideal of ##L## it is contained in the maximal solvable ideal of ##L,## which is the radical of ##L.## Now we go back to our initial assumption that ##L## is semisimple. This means the maximal solvable ideal of ##L## is zero. So ##S\subseteq \operatorname{Rad}(L)=\{0\}.##
Finally, since ##S=\operatorname{Rad}(K)=\{0\},## there is no nonzero ##x\in L## with ##K(x,y)=0## for all ##y\in L,## i.e. the Killing form of ##L## is non-degenerate.
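To see a non-degenerate Killing form in coordinates: for ##\mathfrak{sl}(2)## with the usual basis ##h,e,f## one can compute the Gram matrix of ##K(x,y)=\operatorname{trace}(\operatorname{ad}x\cdot \operatorname{ad}y)## and check that its determinant is nonzero. A sketch (the ##\operatorname{ad}## matrices below act on all of ##\mathfrak{gl}(2),## which gives the same traces because ##\operatorname{ad}## kills the multiples of the identity):

```python
import numpy as np

def ad(x):
    # ad(x) on row-major flattened 2x2 matrices: vec([x,z]) = (x ⊗ I − I ⊗ xᵀ) vec(z)
    I = np.eye(2)
    return np.kron(x, I) - np.kron(I, x.T)

h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])

basis = [h, e, f]
K = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
# Gram matrix: K(h,h) = 8, K(e,f) = K(f,e) = 4, all other entries 0
assert np.linalg.det(K) != 0   # non-degenerate, so Rad(K) = {0}
```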
(I wonder how many tiny steps are included in these proofs. I will answer the rest in a separate post.)
HDB1 said:
My last question, please: when we say
$$
\operatorname{Rad} L=0,
$$
does that mean there is no simple ideal ##I## satisfying
$$
[I, I]^n=0\,?
$$