Explicit construction of SO(n)

  • Context: Graduate
  • Thread starter: Lapidus
  • Tags: Construction, Explicit

Discussion Overview

The discussion centers around the construction of special orthogonal group matrices, specifically SO(n), with a focus on SO(8). Participants explore various methods of constructing these matrices, including connections to Clifford algebras, antisymmetric matrices, and geometric algebra.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that SO(n) matrices can be constructed from antisymmetric matrices A, where e^A is in SO(n).
  • One participant provides an example of a 2x2 antisymmetric matrix and its exponential, suggesting this method is close to what is needed for SO(8).
  • Another participant describes the structure of SO(n) matrices in terms of rotation matrices and the requirement for positive determinants.
  • There is a question regarding the relationship between the subscript m in rotation matrices and n in SO(n), as well as the infinite number of elements in SO(n).
  • Some participants express confusion about the construction of SO(n) and Spin groups from Clifford algebras, indicating a need for clarification on the indices used in relevant equations.
  • One participant mentions the number of generators for SO(n) and provides differing views on the count, suggesting a need for further discussion.
  • Another participant introduces the concept of using rotors from geometric algebra as an alternative to rotation matrices, emphasizing the simplicity of this approach.

Areas of Agreement / Disagreement

Participants express a variety of views on the construction methods for SO(n) matrices, with no consensus reached on a single approach. Some methods are discussed in detail, while others remain unclear or contested.

Contextual Notes

Participants note the complexity of finding orthogonal bases and the non-uniqueness of matrix decompositions in SO(n). There are also unresolved questions regarding the specific construction methods and the interpretation of mathematical indices.

Who May Find This Useful

This discussion may be of interest to those studying advanced linear algebra, group theory, or geometric algebra, particularly in the context of physics and mathematics.

Lapidus
I would love to know how we can construct SO(n) matrices. I know we get them from Clifford algebras somehow.

I am especially interested in SO(8).

Could anybody help?

thank you
 
I once had a discussion on this subject on a MySpace forum (defunct now, alas). We satisfied ourselves that the following two statements are true:

1. For any real n x n antisymmetric matrix A (i.e., A = -A^T), e^A \in SO(n).
2. For any O \in SO(n), there exists an antisymmetric matrix A such that O = e^A.

For instance,

\begin{eqnarray*}
A & = & \left(\begin{array}{cc} 0 & \theta \\ -\theta & 0 \end{array}\right) \\
e^A & = & \left(\begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{array}\right)
\end{eqnarray*}

This map is onto but not one-to-one, as the example shows -- you can change theta by any multiple of 2pi and get the same e^A. Not sure if this does what you want, but it's pretty close, it seems to me.
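The 2 x 2 case above is easy to check numerically. Here is a minimal pure-Python sketch (my own, not from the thread): it approximates e^A with a truncated Taylor series and compares the result against the closed-form rotation matrix. The helper names `mat_mul` and `mat_exp` are mine.

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Approximate e^A by the truncated Taylor series sum_k A^k / k!."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]  # holds A^k
    for k in range(1, terms):
        power = mat_mul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

theta = 0.7
A = [[0.0, theta], [-theta, 0.0]]  # antisymmetric: A = -A^T
E = mat_exp(A)
# E matches [[cos(theta), sin(theta)], [-sin(theta), cos(theta)]]
```

The same series works for any antisymmetric A; for larger angles you would scale-and-square, but for a single 2 x 2 block the plain series converges quickly.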
 
Relative a suitable orthogonal basis, the elements of SO(n) are of the form

A=\left(\begin{array}{cccccc}
R_1 & \cdots & 0 & 0 & \cdots & 0\\
0 & \ddots & 0 & 0 & \cdots & 0\\
0 & \cdots & R_m & 0 & \cdots & 0\\
0 & \cdots & 0 & \pm 1 & \cdots & 0\\
0 & \cdots & 0 & 0 & \ddots & 0\\
0 & \cdots & 0 & 0 & \cdots & \pm 1
\end{array}\right)

where the R_k are rotation matrices and thus have the form

\left(\begin{array}{cc} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{array}\right)~\text{or}~\left(\begin{array}{cc} \sin\theta & \cos\theta\\ \cos\theta & -\sin\theta \end{array}\right)

The only thing you'll need to make sure is that A has positive determinant.
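The block-diagonal form is straightforward to assemble and check numerically. Below is a minimal Python sketch (my own helpers `rot2`, `block_diag`, and `det`; none of this is from the thread) that builds an SO(4) element from two rotation blocks and confirms its determinant is +1.

```python
import math
from itertools import permutations

def rot2(theta):
    """A 2x2 rotation block R_k as in the block-diagonal form."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def block_diag(*blocks):
    """Place square blocks along the diagonal, zeros elsewhere."""
    n = sum(len(b) for b in blocks)
    M = [[0.0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                M[offset + i][offset + j] = v
        offset += len(b)
    return M

def det(M):
    """Determinant via the Leibniz permutation formula (fine for small n)."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inversions * prod
    return total

# An element of SO(4) built from two rotation blocks (m = 2, no +/-1 entries)
O = block_diag(rot2(0.3), rot2(1.1))
```

Adding an even number of -1 diagonal entries keeps the determinant +1; an odd number flips it, which is what the "positive determinant" condition rules out.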
 
Micromass, are you saying that every element of SO(n) looks basically the same, with only the angles in the rotation matrices varying for each element? How is the subscript m in R related to the n of SO(n)?

How many elements do SO(4), SO(6), SO(8) have?

The matrices you gave are those of the Lie algebra of SO(n), right?

Also, I would really like to see the construction of SO(n) or Spin groups from the Clifford algebras as it is done in the document that I attached below. Unfortunately, I don't understand it properly. In particular, the indices in equations (2) through (6) do not make sense to me.

I really would love to work this out explicitly for SO(4), SO(6), and so on, as the author advises, but I do not know how...
 

Attachments

Lapidus said:
Micromass, are you saying that every element of SO(n) looks basically the same, with only the angles in the rotation matrices varying for each element? How is the subscript m in R related to the n of SO(n)?
They differ in the angles for the rotation matrices, AND in the choice of "a suitable orthogonal basis". This is my objection to this as an explicit method for constructing elements of SO(n). You also have to come up with orthogonal bases, which is the problem of finding elements of O(n) -- essentially the problem you started with. What's more, the decomposition of a matrix in SO(n) in this form is not unique -- there are O's and R's such that O_1 R_1 O_1^T = O_2 R_2 O_2^T = O_3 R_3 O_3^T...

The whole block-diagonal matrix is n x n, so m is any integer with 0 <= m <= n/2. You fill out the rest of the diagonal with 1s and -1s.

How many elements do SO(4), SO(6), SO(8) have?
Well, it's infinite, of course. The number of degrees of freedom in SO(n) is \frac{n(n-1)}{2}. You can see this from the construction I gave from antisymmetric matrices. Choose any \frac{n(n-1)}{2} real numbers and use them to fill out the upper triangle of an n x n matrix. Fill out the lower diagonal with their negatives, leave the diagonal 0, and you have an antisymmetric matrix A. Exponentiate it, and you have an element of SO(n). You don't have to filter by determinant -- all such matrices have determinant 1, out of the box.

If you want to think about this a little more intuitively, to specify an elementary rotation, one needs to specify an angle and a plane. (Not an axis -- the axis of rotation idea only works in 3 dimensions, because in 3 d there is a unique plane through the origin perpendicular to an axis.) Basis planes in n dimensions are the plane that contains x_1 and x_2, the plane that contains x_1 and x_3, ..., x_2 and x_3, ..., x_{n-1} and x_n -- \frac{n(n-1)}{2} of them. To fully specify a rotation in n dimensions, you need an angle for each of those planes. The planes are directed, so the 2,1 angle is the negative of the 1,2 angle. Those angles are the elements of the upper and lower triangles respectively of an antisymmetric matrix A.
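The recipe described above -- fill the upper triangle with n(n-1)/2 angles, mirror with negatives, exponentiate -- can be sketched in a few lines of Python. This is my own illustration (helper names `mat_mul`, `mat_exp`, `random_so_n` are assumptions, not from the thread); it builds a random element of SO(4) and checks that O^T O is the identity.

```python
import math
import random

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=40):
    """Truncated Taylor series for e^A; adequate for small-norm matrices."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in R]  # holds A^k
    for k in range(1, terms):
        P = mat_mul(P, A)
        R = [[R[i][j] + P[i][j] / math.factorial(k) for j in range(n)]
             for i in range(n)]
    return R

def random_so_n(n, rng):
    """Choose n(n-1)/2 angles for the upper triangle, mirror with
    negatives below, leave the diagonal zero, then exponentiate."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a = rng.uniform(-1.0, 1.0)
            A[i][j] = a    # angle theta_{ij} for the (i, j) basis plane
            A[j][i] = -a   # the directed plane gives the negative
    return mat_exp(A)

rng = random.Random(0)
O = random_so_n(4, rng)
OtO = mat_mul([list(col) for col in zip(*O)], O)  # O^T O
```

No determinant filtering is needed, matching the remark above: every matrix produced this way lands in SO(n) "out of the box".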

The matrices you gave are those of the Lie algebra of SO(n), right?
Antisymmetric matrices are the elements of the Lie algebra of SO(n). However, the statements we (the MySpace Mathematics group) made about the mapping are stronger than that, since they imply a global correspondence, not just a local differential mapping.

Also, I would really like to see the construction of SO(n) or Spin groups from the Clifford algebras as it is done in the document that I attached below. Unfortunately, I don't understand it properly. In particular, the indices in equations (2) through (6) do not make sense to me.

I really would love to work this out explicitly for SO(4), SO(6), and so on, as the author advises, but I do not know how...
Sorry, I'm not familiar with this.
 
thanks, pmsrw3 (btw, a fun name you picked there!)

Well, it's infinite, of course.

I meant the number of generators. I believe they are 2n. Right?
 
Lapidus said:
I meant the number of generators. I believe they are 2n. Right?
\frac{n(n-1)}{2}, as I discussed above.
 
Very interesting question! I'm going to try and wing it using my knowledge of Geometric Algebra.

pmsrw3's observations from myspace look an awful lot like how you make rotors out of bivectors. That's the approach I'm going to take.

First, though: If you want to understand the geometric "intuition" behind the Clifford Algebra, an excellent resource is the first handout:
http://www.mrao.cam.ac.uk/~clifford/ptIIIcourse/handouts/hout01.ps.gz
(note that it is a gzipped postscript file), found from this page:
http://www.mrao.cam.ac.uk/~clifford/ptIIIcourse/
I find the first few chapters make for breezy, engaging reading. If you're impatient, section 2.4 talks about reflections, and section 2.5 builds on that to make rotations.

By the way: the above link is essentially identical to the first few chapters of the excellent "Geometric Algebra for Physicists" by Doran and Lasenby. If you can find that at your library, it's always nice to read from an actual, physical book. :)

Now: about those rotation matrices!

The essence of the above reading is that we don't need rotation matrices; it is conceptually simpler to compute rotations using "rotors". As pmsrw3 pointed out, rotations are parameterized in terms of angles and planes. A rotor is simply the combination of the angle, \theta, and the unit bivector B which represents the plane.(1) To rotate a vector x into a vector \bar{x}, we apply the double-sided transformation law:
\bar{x} = e^{-B\theta/2}xe^{B\theta/2}

This (coordinate-free!) expression tells us everything we could want to know about the rotation. In particular, we can get the rotation matrix from it. The strategy is to pick a particular basis, \gamma_i, i \in \{1 \ldots n \}, so that
x = x_i \gamma_i
For simplicity, assume the \{\gamma_i\} are orthonormal. Define our rotation matrix R_{ij} by its action on the components of a vector:
\bar{x}_i = R_{ij} x_j
Naturally, this is true for any x. In particular, when x is a basis vector, we find that the components of \bar{\gamma}_j are simply the entries in the jth column of R_{ij}.

This is how you could compute the entries using Clifford Algebra: rotate the basis vectors one at a time, and take the components relative to the basis. Explicitly:
R_{ij} = \gamma_i \cdot (e^{-B\theta/2}\gamma_j e^{B\theta/2})

I think the hard part would be actually computing the exponentiated bivectors. Especially if you're interested in something like SO(8)! The exponential would have one scalar term, 28 bivector terms, 70 quadvector terms, 28 hexvector terms, and one pseudoscalar term.
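In two dimensions the rotor machinery is small enough to write out by hand. Here is a toy pure-Python sketch of the double-sided transformation law (my own implementation, not from the handout): a multivector is a 4-tuple of coefficients on (1, e1, e2, e12), with e1^2 = e2^2 = 1 and e12^2 = -1, and `gp` is the geometric product.

```python
import math

def gp(a, b):
    """Geometric product in 2D GA; multivectors are (scalar, e1, e2, e12)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def rotor(theta):
    """R = exp(-e12 theta/2) = cos(theta/2) - sin(theta/2) e12."""
    return (math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2))

def reverse(a):
    """Reversion; in 2D it just flips the sign of the bivector part."""
    return (a[0], a[1], a[2], -a[3])

def rotate(x, theta):
    """Double-sided law: x' = R x R~  (i.e. e^{-B theta/2} x e^{B theta/2})."""
    R = rotor(theta)
    return gp(gp(R, x), reverse(R))

e1 = (0.0, 1.0, 0.0, 0.0)
x = rotate(e1, math.pi / 2)
# rotating e1 by 90 degrees gives e2, i.e. (0, 0, 1, 0) up to rounding
```

For SO(8) the same scheme needs all 2^8 = 256 coefficients and the grade bookkeeping described above, which is exactly where the exponentiation gets laborious.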

But my main point was to try to make the conceptual connection. Hope it helped.

(1) Technically, in 4 or more dimensions, a bivector might not represent a single plane in the most general case. That's because we can have multiple planes which are completely independent, in the sense of having no lines in common. Rotations in the e_1e_2 plane are independent of rotations in the e_3e_4 plane, and commute with them.
 
