# O(3), sp(2) Lie algebra isomorphism problem


#### jostpuur

I'm mainly hoping that somebody else might have done the same exercise earlier. In that case it could be possible to spot where I'm going wrong.

## Homework Statement

I'm supposed to prove that Lie algebras $$\mathfrak{o}(3)$$ and $$\mathfrak{sp}(2)$$ are isomorphic.

## Homework Equations

Let's see if I have the definitions right...

$$\mathfrak{o}(3) = \{x_1e_1 + x_2e_2 + x_3e_3\;|\;x_1,x_2,x_3\in\mathbb{R}\},$$

where

$$e_1 = \left(\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array}\right), \quad e_2 = \left(\begin{array}{rrr} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \\ \end{array}\right),\quad e_3 = \left(\begin{array}{rrr} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & -1 & 0 \\ \end{array}\right),$$

and

$$\mathfrak{sp}(2) = \{x_1a_1 + x_2a_2 + x_3a_3\;|\;x_1,x_2,x_3\in\mathbb{R}\},$$

where

$$a_1 = \left(\begin{array}{rr} 0 & 1 \\ 0 & 0 \\ \end{array}\right),\quad a_2 = \left(\begin{array}{rr} 0 & 0 \\ 1 & 0 \\ \end{array}\right),\quad a_3 = \left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \\ \end{array}\right).$$

Lie brackets are

$$[e_1,e_2] = e_3,\quad [e_2,e_3] = e_1,\quad [e_3,e_1] = e_2,$$

$$[a_1,a_2] = a_3,\quad [a_2,a_3] = 2a_2,\quad [a_3,a_1] = 2a_1.$$
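These defining matrices and bracket relations are easy to sanity-check numerically; here is a small numpy sketch (my own addition, not part of the problem statement):

```python
import numpy as np

# Basis of o(3) as defined above
e1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)
e2 = np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], dtype=float)
e3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=float)

# Basis of sp(2) as defined above
a1 = np.array([[0, 1], [0, 0]], dtype=float)
a2 = np.array([[0, 0], [1, 0]], dtype=float)
a3 = np.array([[1, 0], [0, -1]], dtype=float)

def bracket(x, y):
    # Matrix commutator [x, y] = xy - yx
    return x @ y - y @ x

assert np.array_equal(bracket(e1, e2), e3)
assert np.array_equal(bracket(e2, e3), e1)
assert np.array_equal(bracket(e3, e1), e2)
assert np.array_equal(bracket(a1, a2), a3)
assert np.array_equal(bracket(a2, a3), 2 * a2)
assert np.array_equal(bracket(a3, a1), 2 * a1)
print("all six bracket relations hold")
```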

## The Attempt at a Solution

So I want an isomorphism $$\phi:\mathfrak{o}(3)\to\mathfrak{sp}(2)$$. I tried to conclude that I could assume the isomorphism to be of the form

$$\phi(e_1) = \lambda_1 a_1 + \lambda_2 a_2 + \lambda_3 a_3$$

$$\phi(e_2) = \pi_1 a_1 + \pi_2 a_2 + \pi_3 a_3$$

$$\phi(e_3) = \alpha a_3,$$

because the cross-product structure on $$\mathbb{R}^3$$ is invariant under rotations. If there exists some isomorphism $$\bar{\phi}$$, then there is a vector $$v\in\mathfrak{o}(3)$$ such that $$\bar{\phi}(v)= a_3$$. Then I choose a new basis for $$\mathfrak{o}(3)$$ so that $$e_3\propto v$$.

Then it started to look like the task is impossible. The demand

$$[\phi(e_2),\phi(e_3)] = \phi([e_2,e_3])$$

implies the set of equations

$$-2\pi_1\alpha = \lambda_1,\quad 2\pi_2\alpha = \lambda_2,\quad 0=\lambda_3,$$

and the demand

$$[\phi(e_3),\phi(e_1)]=\phi([e_3,e_1])$$

implies the set of equations

$$2\alpha\lambda_1 =\pi_1,\quad -2\alpha\lambda_2 = \pi_2,\quad 0=\pi_3.$$

Then we get

$$\lambda_1 = -2\pi_1\alpha = -4\alpha^2 \lambda_1,\quad\implies\quad \alpha=\pm\frac{i}{2},\;\textrm{or}\;\lambda_1 = 0$$

and both possibilities are unacceptable.
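That obstruction can also be checked symbolically. Here is a small sympy sketch (my own addition): substituting $$\pi_1 = 2\alpha\lambda_1$$ into $$\lambda_1 = -2\pi_1\alpha$$ gives $$(1+4\alpha^2)\lambda_1 = 0$$, so $$\lambda_1\neq 0$$ forces $$1+4\alpha^2=0$$, which has no real root:

```python
import sympy as sp

alpha = sp.Symbol('alpha', real=True)      # looking for a real isomorphism
alpha_c = sp.Symbol('alpha_c')             # no reality assumption

# Eliminating pi_1 (via pi_1 = 2*alpha*lambda_1) from lambda_1 = -2*pi_1*alpha
# yields (1 + 4*alpha**2)*lambda_1 = 0.  If lambda_1 != 0, then
# 1 + 4*alpha**2 = 0, which has no real solution:
print(sp.solve(1 + 4 * alpha**2, alpha))      # -> []

# Over the complex numbers the equation does have the roots +-i/2:
print(sp.solve(1 + 4 * alpha_c**2, alpha_c))
```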

You made an assumption. It didn't work - and I don't see any justification for the assumption to be honest. So perhaps your assumption was incorrect? Try 'just doing it'.

Other methods include altering the inner product on R^3 that SO(3) preserves, to get so(3) in a new basis.

Here's my proof that the assumption should have been right:

First assume that some isomorphism $$\bar{\phi}:\mathfrak{o}(3)\to\mathfrak{sp}(2)$$ exists. Then there is a vector $$v\in\mathfrak{o}(3)$$ such that $$\bar{\phi}(v)=a_3$$. We can choose a new basis $$\bar{e}_1,\bar{e}_2,\bar{e}_3$$ for $$\mathfrak{o}(3)$$ so that

$$[\bar{e}_1,\bar{e}_2]=\bar{e}_3,\quad [\bar{e}_2,\bar{e}_3]=\bar{e}_1,\quad [\bar{e}_3,\bar{e}_1]=\bar{e}_2,\quad \bar{e}_3 = \alpha v$$

with some $$\alpha\in\mathbb{R}$$. Let $$\psi:\mathfrak{o}(3)\to\mathfrak{o}(3)$$ be the automorphism so that

$$\psi(e_1)=\bar{e}_1,\quad\psi(e_2)=\bar{e}_2,\quad\psi(e_3)=\bar{e}_3.$$

Now $$\phi=\bar{\phi}\circ\psi$$ should be the isomorphism so that $$\phi(e_3)=\alpha a_3$$.

I already know that the following isomorphisms hold: $$\mathfrak{o}(3)=\mathfrak{su}(2)$$ and $$\mathfrak{sl}(2,\mathbb{R})=\mathfrak{sp}(2)$$. The definitions are

$$\mathfrak{o}(3) = \{X\in\mathfrak{gl}(3,\mathbb{R})\;|\; X^T+X = 0\}$$

$$\mathfrak{su}(2) = \{X\in\mathfrak{gl}(2,\mathbb{C})\;|\; X^{\dagger} + X = 0\;\textrm{and}\;\textrm{Tr}(X)=0\}$$

$$\mathfrak{sl}(2,\mathbb{R}) = \{X\in\mathfrak{gl}(2,\mathbb{R})\;|\; \textrm{Tr}(X)=0\}$$

$$\mathfrak{sp}(2) = \{X\in\mathfrak{gl}(2,\mathbb{R})\;|\; X^T J + J X=0\}$$

where

$$J=\left(\begin{array}{rr} 0 & -1 \\ 1 & 0 \\ \end{array}\right)$$
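As a quick sanity check (a numpy sketch of my own, not part of the thread), the basis matrices above really do satisfy these defining conditions:

```python
import numpy as np

J = np.array([[0, -1], [1, 0]], dtype=float)
a1 = np.array([[0, 1], [0, 0]], dtype=float)
a2 = np.array([[0, 0], [1, 0]], dtype=float)
a3 = np.array([[1, 0], [0, -1]], dtype=float)

# Each a_i satisfies the defining relation X^T J + J X = 0 of sp(2)
for X in (a1, a2, a3):
    assert np.array_equal(X.T @ J + J @ X, np.zeros((2, 2)))

# Each e_i is antisymmetric, X^T + X = 0, as required for o(3)
e1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)
e2 = np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], dtype=float)
e3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=float)
for X in (e1, e2, e3):
    assert np.array_equal(X.T + X, np.zeros((3, 3)))
print("basis matrices satisfy the defining relations")
```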

Is the claim $$\mathfrak{sl}(2,\mathbb{R})=\mathfrak{su}(2)$$ true?

I still don't buy your claim. There are 2 reasons for this.

1) You seem to be saying that given any vector, you can complete it to a basis with a certain Lie bracket. This may be true, but I don't see that it is obvious.

2) You keep not noticing that since things are defined up to scalars, it is unnecessary to use that alpha.

Remember that choosing an isomorphism is just picking a new basis. Why should I be able to complete to two different bases like that given the same starting vector?

Finally, on that example. In sp(2), a_3 lies in the Cartan subalgebra $\mathfrak{h}$, and a_1, a_2 are eigenvectors with respect to the adjoint action. It is certainly not true that you can pick an arbitrary element and expect it to lie in $\mathfrak{h}$. Using your logic, I can suppose that there is an isomorphism of sp(2) that sends a_3 to a_1. But that can't be: ad(a_3) has nonzero real eigenvalues (namely 2 and -2), while ad(a_1) has none at all. In fact in the adjoint action a_1 is nilpotent - write out the matrix, or notice that ad(a_1) sends a_2 to a multiple of a_3, and a_3 to a multiple of a_1, hence some power of ad(a_1) annihilates sp(2).
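Both claims about the adjoint action can be checked directly (a numpy sketch, my own addition; the coefficient extraction uses the fact that $$x_1a_1+x_2a_2+x_3a_3$$ is the matrix with entries $$x_3, x_1$$ on the first row and $$x_2, -x_3$$ on the second):

```python
import numpy as np

a1 = np.array([[0, 1], [0, 0]], dtype=float)
a2 = np.array([[0, 0], [1, 0]], dtype=float)
a3 = np.array([[1, 0], [0, -1]], dtype=float)

def bracket(x, y):
    return x @ y - y @ x

def ad(x):
    # Matrix of ad(x) on sp(2) in the basis (a1, a2, a3).
    # A general element y1*a1 + y2*a2 + y3*a3 equals [[y3, y1], [y2, -y3]],
    # so its coefficients can be read off the matrix entries.
    cols = []
    for b in (a1, a2, a3):
        c = bracket(x, b)
        cols.append([c[0, 1], c[1, 0], c[0, 0]])
    return np.array(cols).T

# ad(a1) is nilpotent: its third power already vanishes
assert np.allclose(np.linalg.matrix_power(ad(a1), 3), 0)

# ad(a3) has real eigenvalues 2, -2, 0
print(sorted(np.linalg.eigvals(ad(a3)).real))  # -> [-2.0, 0.0, 2.0]
```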

All 3-dim complex semi-simple Lie algebras are isomorphic. I can't say I've ever studied the real case.

The complexifications are all isomorphic, by the above, but su(2) isn't a complex Lie algebra. I still think that over the reals they will be isomorphic.

Oh, and by Schur's lemma, any isomorphism of sp(2) to itself is just multiplication by a scalar, since it is simple - we're attempting to say something about the (simple) adjoint rep.

1) You seem to be saying that given any vector, you can complete it to a basis with a certain Lie bracket. This may be true, but I don't see that it is obvious.

I'm not saying this for an arbitrary Lie algebra, but precisely for $$\mathfrak{o}(3)$$. This is standard geometric knowledge: the cross product looks the same after a rotation of the basis.

2) You keep not noticing that since things are defined up to scalars, it is unnecessary to use that alpha.

The Lie brackets are not invariant under scaling. I have already fixed the basis $$a_n$$, and at least the magnitudes of the basis vectors $$e_n$$, so the $$\alpha$$ in the definition of the isomorphism $$\phi$$ is not redundant. If I set $$\alpha=1$$, I would need to introduce some other equivalent parameter in the basis of $$\mathfrak{sp}(2)$$.

Using your logic, I can suppose that there is an isomorphism of sp(2) that sends a_3 to a_1.

I'm not using an isomorphism $$\mathfrak{sp}(2)\to\mathfrak{sp}(2)$$, but only an isomorphism $$\mathfrak{o}(3)\to\mathfrak{o}(3)$$, when justifying the form of the attempt $$\phi$$.

The *only* isomorphisms of o(3) are multiplication by scalars, by Schur's lemma. (I think.) You're just writing down an isometry of the underlying vector space and assuming it is a Lie algebra homomorphism.

The question just needs you to write out the three linear equations

$$\phi(e_i) = \sum_{j=1}^3 \lambda_{ij} a_j$$

for $$i=1,2,3$$ and work out what the $$\lambda_{ij}$$ are. It is very straightforward.

The *only* isomorphisms of o(3) are multiplication by scalars by Schur's lemma.

This must be a mistake.

For example, set

$$\psi(e_1) = e_2,\quad \psi(e_2) = -e_1,\quad \psi(e_3)=e_3.$$

Now $$\psi:\mathfrak{o}(3)\to\mathfrak{o}(3)$$ is a linear bijection, and also

$$[\psi(e_1),\psi(e_2)] = [e_2,-e_1] = e_3 = \psi(e_3) = \psi([e_1,e_2])$$

$$[\psi(e_2),\psi(e_3)] = [-e_1,e_3] = e_2 = \psi(e_1) =\psi([e_2,e_3])$$

$$[\psi(e_3),\psi(e_1)] = [e_3,e_2] = -e_1 = \psi(e_2) = \psi([e_3,e_1])$$

so $$\psi$$ is a Lie algebra isomorphism.
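This counterexample can also be verified numerically on all nine basis pairs at once (a numpy sketch of my own):

```python
import numpy as np
from itertools import product

e1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)
e2 = np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], dtype=float)
e3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=float)

def bracket(x, y):
    return x @ y - y @ x

def psi(x):
    # psi(e1) = e2, psi(e2) = -e1, psi(e3) = e3, extended linearly.
    # x = x1*e1 + x2*e2 + x3*e3 has x1 = x[0,1], x2 = x[2,0], x3 = x[1,2].
    x1, x2, x3 = x[0, 1], x[2, 0], x[1, 2]
    return x1 * e2 - x2 * e1 + x3 * e3

# psi respects the bracket on every pair of basis elements
for x, y in product((e1, e2, e3), repeat=2):
    assert np.allclose(psi(bracket(x, y)), bracket(psi(x), psi(y)))
print("psi is a Lie algebra automorphism of o(3)")
```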

Hmm, so where am I going wrong?* In any case, if you do what I suggested above (with e_j, not a_j, in the sum) you will get the answer.

* perhaps in trying to invoke Schur's lemma, which is after all about modules, in the wrong place...

By strange coincidence, I proved Schur's lemma yesterday in another exercise in this Lie algebra context. It says that if $$\phi$$ is an irreducible representation of a Lie algebra in some finite-dimensional vector space $$V$$, and if a matrix $$A\in\textrm{End}(V)$$ commutes with all $$\phi(x)$$, then $$A$$ is a scalar multiple of the identity mapping. The lemma is related to simple Lie algebras because their adjoint representations are irreducible. But it doesn't seem immediately clear to me which matrices commute with all the matrices $$\textrm{ad}_X:\mathfrak{o}(3)\to\mathfrak{o}(3)$$, or how they are related to Lie algebra isomorphisms.

I'm not using an isomorphism $$\mathfrak{sp}(2)\to\mathfrak{sp}(2)$$, but only an isomorphism $$\mathfrak{o}(3)\to\mathfrak{o}(3)$$, when justifying the form of the attempt $$\phi$$.

Suppose there is an iso from o(3) to sp(2); then something is sent to a_1 and something to a_3 - call them x and y. By your reasoning I can compose the inverse back to o(3) with a map that sends x to a multiple of y (since you're claiming I can choose any isometry), and then map down to sp(2) again. So I've sent a_1 to a multiple of a_3, but that can't happen.

Right, got the Schur's lemma thing straight - sorry for misleading you. I was attempting to find a justification for something without thinking it through.

The point is, if f is an isomorphism of the adjoint representation, then f is multiplication by a scalar. This doesn't help work out the Lie algebra homomorphisms.

It does say something like: if f is a homomorphism of the adjoint rep, i.e.

$$[x,f(y)]=f([x,y]),$$

then f is a scalar - but that isn't the same condition as being a homomorphism of algebras, just of reps.

Suppose there is an iso from o(3) to sp(2); then something is sent to a_1 and something to a_3 - call them x and y. By your reasoning I can compose the inverse back to o(3) with a map that sends x to a multiple of y (since you're claiming I can choose any isometry), and then map down to sp(2) again. So I've sent a_1 to a multiple of a_3, but that can't happen.

Looks like a proof that o(3) and sp(2) are not isomorphic.

I'm starting to believe that there is a mistake in this exercise.

Every 3-dim simple Lie algebra is isomorphic to sl_2, so there's no mistake (albeit I'm talking over C, not R).

Could it be that the maker of the exercise has confused something with real and complex vector spaces?

I'm going to see this professor at some point anyway, so I'm going to be asking his advice/opinion on the problem also.

Possible, entirely possible that they were assuming over C. I was playing around with something like this:

$$e_1 + e_2,\quad e_1 - e_2$$

as $$a_1, a_2$$, and got close, but I think $$e_1 + ie_2$$ and $$e_1 - ie_2$$ might be better - I've not checked yet.
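For what it's worth, that idea does seem to work over C. Taking $$\alpha = i/2$$ in the earlier equations suggests (my own guess, not checked in the thread) the complex-linear map $$\phi(e_1) = \tfrac{1}{2}(a_1 - a_2)$$, $$\phi(e_2) = \tfrac{i}{2}(a_1 + a_2)$$, $$\phi(e_3) = \tfrac{i}{2}a_3$$, and a short numpy check confirms it respects all three cyclic brackets:

```python
import numpy as np

a1 = np.array([[0, 1], [0, 0]], dtype=complex)
a2 = np.array([[0, 0], [1, 0]], dtype=complex)
a3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bracket(x, y):
    return x @ y - y @ x

# Candidate complex-linear map on the basis e_1, e_2, e_3 (alpha = i/2)
phi = {1: (a1 - a2) / 2, 2: 1j * (a1 + a2) / 2, 3: 1j * a3 / 2}

# The cyclic relations [e1,e2]=e3, [e2,e3]=e1, [e3,e1]=e2 are preserved:
assert np.allclose(bracket(phi[1], phi[2]), phi[3])
assert np.allclose(bracket(phi[2], phi[3]), phi[1])
assert np.allclose(bracket(phi[3], phi[1]), phi[2])
print("phi preserves the brackets over C")
```

Since the three images are linearly independent over C, this would give an isomorphism of the complexifications, consistent with the earlier finding that no real solution exists.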