Structure constants of su(2) and so(3)

Click For Summary
SU(2) and SO(3) share the same Lie algebra structure, as evidenced by their identical commutation relations. While the algebras are isomorphic, the groups themselves are distinct: although both obey the same algebra, SO(3) is not simply connected, whereas SU(2) is and serves as its simply connected cover. This is consistent with the theorem that each Lie algebra corresponds to a unique simply connected Lie group, and it resolves the apparent contradiction regarding the relationship between SU(2) and SO(3).
SU(2) and SO(3) "have the same Lie algebra".

While I understand that their corresponding Lie algebras su(2) and so(3) have the same commutation relations

\mbox{SO(3)}: \left[ \tau^i, \tau^j\right] = i \varepsilon_{ijk} \tau^k

\mbox{SU(2)}: \left[ \frac{\sigma^i}{2}, \frac{\sigma^j}{2}\right] = i \varepsilon_{ijk} \frac{\sigma^k}{2}

so that the structure constants of each are identical, and since a Lie algebra is uniquely determined by its structure constants, the two algebras are identical.
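For concreteness, here is a minimal numerical check (assuming NumPy is available) that the Pauli matrices over two and the standard 3x3 so(3) generators (T^i)_{jk} = -i \varepsilon_{ijk} really do satisfy the same commutation relations:

```python
# Minimal numerical check (assumes NumPy): the Pauli matrices divided by 2
# and the standard so(3) generators (T^i)_{jk} = -i eps_{ijk} obey the same
# commutation relations [X^i, X^j] = i eps_{ijk} X^k.
import numpy as np

# Levi-Civita symbol eps_{ijk}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# su(2) generators: Pauli matrices over 2
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
su2 = sigma / 2

# so(3) generators in the defining (3x3) representation: (T^i)_{jk} = -i eps_{ijk}
so3 = -1j * eps

def check(gen):
    """Return True if [gen_i, gen_j] = i eps_{ijk} gen_k for all i, j."""
    for i in range(3):
        for j in range(3):
            lhs = gen[i] @ gen[j] - gen[j] @ gen[i]
            rhs = 1j * np.einsum('k,kab->ab', eps[i, j], gen)
            if not np.allclose(lhs, rhs):
                return False
    return True

print(check(su2), check(so3))  # True True: identical structure constants
```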

The elements being operated on are, clearly, very different. But if I were to distinguish the two groups in a discussion, I could not appeal to the algebra, since the algebra is purely the rule of composition between elements of the group (not the elements themselves)? The algebras are truly identical.

To distinguish, I would say

"Both Groups obey the same Lie Algebra, but whose infinitesimal generators are different"

yes?

Thanks

edit: I did search the forum, but could only find much more general questions about Lie Algebras so I decided to make a new topic.
 
The algebra is the set of rules that determines the operation. In this case the operation is commutation, and by linearity of that operation, it is uniquely determined once it is given on the basis elements:
\left[ \sum_i a_i \tau^i, \sum_j b_j \tau^j \right] = \sum_{i, j, k} a_i b_j c^{ijk} \tau^k
where a, b are coefficients (real or complex), the \tau are the basis elements and the c are the structure constants; i and j index the basis elements.
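As a small sketch (assuming NumPy), one can extract the structure constants c^{ijk} of su(2) by projecting the commutators of the basis elements back onto the basis, and then check that the bracket of two general elements is indeed fixed by linearity:

```python
# Sketch (assumes NumPy): extract the structure constants c^{ijk} from a chosen
# basis by projecting the commutator back onto the basis, then check that the
# bracket of two general elements is fixed by linearity, as in the text.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
tau = sigma / 2                      # basis elements tau^i of su(2)

# tr(tau^i tau^j) = delta_ij / 2, so c^{ijk} = 2 tr([tau^i, tau^j] tau^k)
c = np.zeros((3, 3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        comm = tau[i] @ tau[j] - tau[j] @ tau[i]
        for k in range(3):
            c[i, j, k] = 2 * np.trace(comm @ tau[k])

# Bilinearity: [sum_i a_i tau^i, sum_j b_j tau^j] = sum_{ijk} a_i b_j c^{ijk} tau^k
a, b = np.random.randn(3), np.random.randn(3)
X, Y = np.einsum('i,iab->ab', a, tau), np.einsum('j,jab->ab', b, tau)
lhs = X @ Y - Y @ X
rhs = np.einsum('i,j,ijk,kab->ab', a, b, c, tau)
print(np.allclose(lhs, rhs))         # True
```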

Now a Lie group, L, is a group with elements of the form g(a_1, a_2, \cdots) = \exp\left( \sum_i a_i \tau^i \right), where the exponential is defined by the power series:
g(a_1, a_2, \cdots) = \sum_n \frac{1}{n!} \left( \sum_i a_i \tau^i \right)^n.
The \tau are called the generators of the group; though the group itself usually has infinitely many elements, in practice the number of generators for such a group is often finite (but it can be infinite, of course).
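As a small sketch (assuming NumPy and SciPy), exponentiating an su(2) algebra element in this way really does produce an SU(2) group element, i.e. a 2x2 unitary matrix with unit determinant; here the coefficients a_i are taken to be i times real angles so that the exponential lands in the unitary group:

```python
# Sketch (assumes NumPy/SciPy): exponentiating an algebra element gives a group
# element, g(a) = exp(sum_i a_i tau^i). With a_i = i * (real angles), the su(2)
# basis sigma/2 exponentiates to a 2x2 unitary matrix with determinant 1,
# i.e. an element of SU(2).
import numpy as np
from scipy.linalg import expm

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
tau = sigma / 2

angles = np.array([0.3, -1.1, 0.7])           # arbitrary real parameters
X = 1j * np.einsum('i,iab->ab', angles, tau)  # algebra element sum_i a_i tau^i
g = expm(X)                                   # power series sum_n X^n / n!

print(np.allclose(g.conj().T @ g, np.eye(2)))  # unitary: True
print(np.isclose(np.linalg.det(g), 1))         # det = 1: True
```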

Finally, there is an object called a representation. For a group, this is a map \phi: G \to GL_n which assigns to each group element g an n×n matrix in a way that preserves the group structure, i.e. \phi(g \circ h) = \phi(g) \phi(h) (group composition on the left, matrix multiplication on the right). One group can have many representations, for many values of n (some are more interesting than others). So there is now a distinction between the elements of a group and the matrix representation of those elements.

You can similarly imagine linking matrices to elements of the algebra. Because matrix multiplication is linear, it suffices to assign to each generator \tau^i a matrix M^i; then for each element \sum_i a_i \tau^i you have the matrix \sum_i a_i M^i. To be a representation of the algebra, these matrices must be chosen such that their commutation relations are the same as those of the algebra elements. Again, there can be several sets of matrices which satisfy these commutation relations, and they are all equally valid representations of the algebra; the matrices can, however, look very different. Of course, once you have such a representation, you can make a representation of the corresponding Lie group by just exponentiating the matrices.

Note that you can also build as many representations as you want. One trivial trick you can always use is to take your favourite representation M_i and make a new one with matrices N_i defined by
N_i = \begin{pmatrix} M_i & 0 & \cdots & 0 \\ 0 & M_i & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & M_i \end{pmatrix}.
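As a small sketch (assuming NumPy and SciPy), this block-diagonal construction can be checked directly: stacking copies of a representation along the diagonal gives bigger matrices that satisfy the same commutation relations:

```python
# Sketch (assumes NumPy/SciPy): the block-diagonal trick above. If matrices M_i
# satisfy the commutation relations, so do N_i = diag(M_i, M_i, ..., M_i),
# giving a new (reducible) representation of the same algebra.
import numpy as np
from scipy.linalg import block_diag

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
M = sigma / 2                                    # a representation of su(2)
N = np.array([block_diag(m, m, m) for m in M])   # three copies on the diagonal

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

ok = all(np.allclose(N[i] @ N[j] - N[j] @ N[i],
                     1j * np.einsum('k,kab->ab', eps[i, j], N))
         for i in range(3) for j in range(3))
print(ok)  # True: the bigger matrices obey the same commutation relations
```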

What probably confuses you as well is this way of creating Lie algebras. As I just described, what we formally do is fix the algebra - which is the thing we want to work with. Then we find a convenient representation and do our calculations there, because it is easier for both us and computers to calculate with matrices. In practice, however, one often does it the other way around: we have some set of matrices (for example, rotations in three dimensions) and notice that they form a Lie group. One can then isolate the generators and write down their commutation relations. On one hand, the matrices have been used to define the algebra; on the other hand, this original set of matrices is already a representation of that algebra. It may not be the only one, but it is usually the one which is most intuitive to us.
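As a small sketch (assuming NumPy), here is that "other way around" for rotations in three dimensions: start from the familiar rotation matrices, isolate the generators by differentiating at the identity, and read off the commutation relations:

```python
# Sketch (assumes NumPy): start from the familiar 3x3 rotation matrices,
# isolate the generators by differentiating at the identity, and check the
# commutation relations numerically.
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Generators: T_i = i * dR_i/dt at t = 0 (numerical central difference)
h = 1e-6
T = np.array([1j * (R(h) - R(-h)) / (2 * h) for R in (Rx, Ry, Rz)])

# These reproduce (T_i)_{jk} = -i eps_{ijk}, so [T_i, T_j] = i eps_{ijk} T_k
comm = T[0] @ T[1] - T[1] @ T[0]
print(np.allclose(comm, 1j * T[2], atol=1e-6))  # True
```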

Hope that clears some of the confusion.
 
The usual 'equal' and 'isomorphic' misunderstanding. Clearly, they are not equal, since they are different objects, but equally clearly they are isomorphic.
 
Yeah, the second post makes sense. I decided not to reply to the first one till I digested it entirely.

I can see how they're isomorphic (I can't formalise it right now, but by analogy with everyday manifolds).

You see, my lecturer contradicted himself. He said that su(2) and so(3) were the same algebra, and then, almost directly afterwards, gave the theorem:

"To every Lie algebra there corresponds a unique simply connected Lie
group."

Taking the su(2) algebra, which is supposedly the "same as" so(3), the "one" algebra would then correspond to both SO(3) and SU(2), which would contradict the theorem. However, if the two algebras are understood to be merely isomorphic, then there is no difficulty.

Have I picked you up correctly?

CompuChip: Thank you. I will reply to your post as soon as I have made my best efforts to understand it.
 
There is no contradiction there: SO(3) isn't simply connected. Its fundamental group is C_2 (the group with two elements), and it has SU(2) as its simply connected cover. These facts are illustrated quite nicely by 'the soup bowl trick' and quaternions.
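As a small sketch (assuming NumPy and SciPy), the covering map SU(2) \to SO(3) can be written down explicitly via R_{jk} = \frac{1}{2}\mathrm{tr}(\sigma_j U \sigma_k U^\dagger); both U and -U are sent to the same rotation, which is the two-to-one nature of the cover:

```python
# Sketch (assumes NumPy/SciPy): the 2-to-1 covering map SU(2) -> SO(3). Both U
# and -U in SU(2) are sent to the same rotation R via
# R_jk = (1/2) tr(sigma_j U sigma_k U^dagger), which is why SU(2) is the
# simply connected (double) cover of SO(3).
import numpy as np
from scipy.linalg import expm

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def to_rotation(U):
    """Image of U in SO(3) under the adjoint (covering) map."""
    R = np.empty((3, 3))
    for j in range(3):
        for k in range(3):
            R[j, k] = 0.5 * np.real(np.trace(sigma[j] @ U @ sigma[k] @ U.conj().T))
    return R

angles = np.array([0.4, 1.2, -0.8])
U = expm(-0.5j * np.einsum('i,iab->ab', angles, sigma))  # an SU(2) element

R1, R2 = to_rotation(U), to_rotation(-U)
print(np.allclose(R1, R2))                       # True: U and -U give the same R
print(np.allclose(R1.T @ R1, np.eye(3)),
      np.isclose(np.linalg.det(R1), 1))          # R is a proper rotation
```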
 
