An Infinite-Dimensional Lie Algebra

fresh_42
TL;DR Summary
Are these two (infinite-dimensional) Lie algebras isomorphic or not?
Let ##\mathfrak{A}:=\operatorname{span}\left\{D_n:=x^n\dfrac{d}{dx}\, : \,n\in \mathbb{Z}\right\}## and ##\mathfrak{B}:=\operatorname{span}\left\{E_n:=x^n\dfrac{d}{dx}\, : \,n\in \mathbb{N}_0\right\}## with the usual commutation rule.
My question is: How can we prove or disprove the Lie algebra isomorphism ##\mathfrak{A}\cong \mathfrak{B}?##

The multiplication is given by ##[D_n,D_m]=(m-n)D_{n+m-1}## and ##[E_n,E_m]=(m-n)E_{n+m-1}.##
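The rule can be sanity-checked mechanically; here is a quick SymPy script (my own throwaway check, treating ##D_n## as the operator ##x^n\frac{d}{dx}## on a generic smooth function) that verifies it for a few index pairs:

```python
# Sanity check of [D_n, D_m] = (m - n) D_{n+m-1}, computed directly on
# differential operators acting on a generic test function f(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

def D(n, g):
    """Apply D_n = x^n d/dx to the expression g."""
    return x**n * sp.diff(g, x)

for n, m in [(2, 5), (-3, 4), (0, 1), (-1, -2)]:
    lhs = D(n, D(m, f(x))) - D(m, D(n, f(x)))      # [D_n, D_m] f
    rhs = (m - n) * D(n + m - 1, f(x))             # (m - n) D_{n+m-1} f
    assert sp.simplify(lhs - rhs) == 0
print("commutation rule verified")
```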

The easy invariants are the same: dimension ##\aleph_0##, center ##\{0\}##, derived algebras ##[\mathfrak{A},\mathfrak{A}]=\mathfrak{A}## and ##[\mathfrak{B},\mathfrak{B}]=\mathfrak{B}##, and no proper nontrivial ideals. My suspicion is that they are not isomorphic, since there are infinitely many subalgebras ##\mathfrak{sl}(2)\cong\operatorname{span}\{D_{-n+1},D_1,D_{n+1}\}\leq \mathfrak{A},## and as far as I can see only one ##\mathfrak{sl}(2)\cong \operatorname{span}\{E_0,E_1,E_2\}\leq \mathfrak{B}.## However, this is not obvious (to me), and any manual calculations are a mess of indices. Other common properties (solvability, semisimplicity, Killing form) aren't of help either, since we have an infinite-dimensional vector space.
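Those candidate copies can at least be checked mechanically. Here is a small sketch (my own encoding: elements as finitely supported dictionaries ##\{k:\text{coefficient of }D_k\}##; the normalization ##e=D_{n+1}##, ##f=-\tfrac{1}{n^2}D_{-n+1}##, ##h=\tfrac{2}{n}D_1## is one choice among many) verifying the ##\mathfrak{sl}(2)## relations ##[h,e]=2e##, ##[h,f]=-2f##, ##[e,f]=h## for many ##n##:

```python
# Check the sl(2) relations for e = D_{n+1}, f = -(1/n^2) D_{-n+1},
# h = (2/n) D_1 inside A, with elements stored as finitely supported
# dictionaries {index: coefficient}.
from fractions import Fraction

def bracket(u, v):
    """[D_n, D_m] = (m - n) D_{n+m-1}, extended bilinearly."""
    out = {}
    for n, a in u.items():
        for m, b in v.items():
            k = n + m - 1
            out[k] = out.get(k, 0) + a * b * (m - n)
    return {k: c for k, c in out.items() if c != 0}

def scale(c, u):
    """Multiply an element by the scalar c."""
    return {k: c * a for k, a in u.items()}

for n in range(1, 20):
    e = {n + 1: Fraction(1)}
    f = {1 - n: Fraction(-1, n * n)}
    h = {1: Fraction(2, n)}
    assert bracket(h, e) == scale(2, e)
    assert bracket(h, f) == scale(-2, f)
    assert bracket(e, f) == h
print("sl(2) relations hold for n = 1, ..., 19")
```

For ##n=1## this is exactly the triple ##\{E_0,E_1,E_2\}## sitting inside ##\mathfrak{B}##; for ##n\geq 2## the triples use genuinely negative indices and so live only in ##\mathfrak{A}##.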

##D_1## is almost a ##1## in both Lie algebras, and presumably spans their maximal toral subalgebra. So how can we prove that there aren't any other copies of ##\mathfrak{sl}(2)## in ##\mathfrak{B}## than the obvious one? Or is there an easy invariant I haven't thought of?
 
I think in ##B## only elements of the form ##x=aE_0 +b E_1## have the property that for any finite-dimensional subspace ##V##, repeatedly applying ##[x,\cdot\,]## to ##V## yields spaces that are all contained in one finite-dimensional subspace (namely ##\operatorname{span}(E_0,\dots,E_n)##, where ##E_n## is the highest-degree term needed to represent anything in ##V##).
But in ##A##, I think only the multiples ##a D_1## work. In particular, ##D_0## stops working, since ##[D_0,D_{-1}] = -D_{-2}##, ##[D_0,D_{-2}]=-2 D_{-3}##, etc.

I'm not 100% confident in this proof that they are not isomorphic, but it might be right.

Edit: Maybe slightly simpler: for any ##e\in B##, repeatedly applying ##[E_0,\cdot\,]## to it eventually kills it. There is no element of ##A## with a similar property.
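Here is a small sketch of that invariant in code (my own dictionary encoding of finite sums, purely as an experiment): ##\operatorname{ad}E_0## kills an arbitrary element of ##B## after finitely many steps, while ##\operatorname{ad}D_0## pushes ##D_{-1}## in ##A## off to ever more negative indices:

```python
# ad E_0 is locally nilpotent on B: [E_0, E_m] = m E_{m-1} lowers every
# index until everything dies.  In A the same operator, applied to D_{-1},
# produces more and more negative indices instead.
def bracket(u, v):
    """[D_n, D_m] = (m - n) D_{n+m-1} on dictionaries {index: coefficient}."""
    out = {}
    for n, a in u.items():
        for m, b in v.items():
            out[n + m - 1] = out.get(n + m - 1, 0) + a * b * (m - n)
    return {k: c for k, c in out.items() if c != 0}

E0 = {0: 1}

v = {0: 2, 3: 1, 7: 5}           # an arbitrary element of B
steps = 0
while v:
    v = bracket(E0, v)
    steps += 1
print(f"killed after {steps} steps")   # 8 steps: one more than the top index

w = {-1: 1}                      # D_{-1}, available only in A
for _ in range(4):
    w = bracket(E0, w)           # E_0 and D_0 are the same operator
    print(w)                     # {-2: -1}, {-3: 2}, {-4: -6}, {-5: 24}
```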
 
Yes, ##\operatorname{ad}E_0## is "nilpotent" on all others, locally nilpotent to be precise. But so is ##\operatorname{ad} D_0## on the positive part of ##\mathfrak{A}## (it stops at ##0##) and ##\operatorname{ad}D_2## on the negative part (it stops at ##2##). I think it is possible, but unpleasant, to figure out whether or not there is a kind of diagonal element that combines the two. Those possible diagonals are the difficulty, especially if only finitely many coefficients may be nonzero, a restriction I'm not sure is necessary.
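For instance, with the obvious candidate ##x=D_0+D_2## (just one guess at such a diagonal, using the same dictionary encoding as above), a quick experiment suggests the orbit of ##D_0## under ##\operatorname{ad}x## cycles forever instead of dying:

```python
# Probe: does x = D_0 + D_2 act locally nilpotently on A?  Following the
# orbit of D_0 under ad x, the support cycles {1}, {0, 2}, {1}, ... forever.
def bracket(u, v):
    """[D_n, D_m] = (m - n) D_{n+m-1} on dictionaries {index: coefficient}."""
    out = {}
    for n, a in u.items():
        for m, b in v.items():
            out[n + m - 1] = out.get(n + m - 1, 0) + a * b * (m - n)
    return {k: c for k, c in out.items() if c != 0}

x = {0: 1, 2: 1}                 # D_0 + D_2
v = {0: 1}                       # start from D_0
for step in range(1, 7):
    v = bracket(x, v)
    print(step, sorted(v))       # supports alternate between [1] and [0, 2]
```

Of course this only tests one candidate, so it proves nothing by itself.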

How are, e.g., derivations ##\delta## of infinite-dimensional algebras usually defined? If we set ##\delta(D_k)=\sum_j d_{jk}D_j##, is it required that almost all ##d_{jk}=0##? The defining relation ##\delta[X,Y]=[\delta X,Y]+[X,\delta Y]## allows both. Is there a "canonical" way to define it, restricted or not?
 
Suppose we have a single element ##x=\sum_i a_i D_i## (a finite sum) whose adjoint action is locally nilpotent on all of ##A##. Suppose that the largest ##i## with ##a_i \neq 0## is at least ##2##, and call it ##m##. Then for any ##\sum_j b_j D_j## whose largest index with a non-zero coefficient is some ##n>m##, we can write
##[\sum_i a_i D_i, \sum_j b_j D_j]= a_m b_n(n-m) D_{m+n-1} + \text{stuff},##
where the stuff consists of terms ##D_k## with ##k<m+n-1##. As long as ##m \geq 2##, the largest index strictly increased (##m+n-1\geq n+1>n##) and is still larger than ##m##, so applying ##\operatorname{ad}x## repeatedly cannot kill anything of this form.

Similarly, looking at the smallest (i.e. most negative) index shows that the smallest index appearing in ##x## cannot be ##0## or less. That leaves only multiples of ##D_1##, and ##\operatorname{ad}D_1## is diagonal, ##[D_1,D_n]=(n-1)D_n##, so it obviously doesn't kill anything outside ##D_1## itself.
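A quick numerical illustration of the top-index half of this (the particular ##x## below is an arbitrary choice with top index ##m=2##):

```python
# If the top index m of x is at least 2, then bracketing with any D_n with
# n > m raises the top index by m - 1 >= 1, so the orbit under ad x can
# never die.  Elements are dictionaries {index: coefficient}.
def bracket(u, v):
    """[D_n, D_m] = (m - n) D_{n+m-1}, extended bilinearly."""
    out = {}
    for n, a in u.items():
        for m, b in v.items():
            out[n + m - 1] = out.get(n + m - 1, 0) + a * b * (m - n)
    return {k: c for k, c in out.items() if c != 0}

x = {-1: 3, 0: 1, 2: 5}          # top index m = 2
v = {4: 1}                       # D_4, with 4 > m
for _ in range(5):
    v = bracket(x, v)
    print(max(v))                # 5, 6, 7, 8, 9: the top index keeps growing
```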

I'm pretty sure this is an approximately complete proof?

As far as infinite sums go, I'm not sure. It seems weird to ask the question about the derivations; surely the right question is whether your original space is allowed to contain infinite sums? If it's not (and I thought a span canonically is not), then a derivation with infinitely many nonzero ##d_{jk}## isn't well-defined, since it maps the finite sum ##D_k## to an infinite sum outside the space.
 
Thanks. I'll have a closer look when it's not late at night or early in the morning, depending on whether you're an early bird or a night owl. (See my mistake in the HW thread. Guess I need some sleep.)
 