Using the orbit-stabilizer theorem to identify groups

SUMMARY

The discussion focuses on applying the orbit-stabilizer theorem to identify the sphere ##S^n## with the quotient of the orthogonal group ##O(n+1, \mathbb{R})## by ##O(n, \mathbb{R})##, and ##S^{2n+1}## with the quotient of the unitary group ##U(n+1)## by ##U(n)##. The stabilizer of a point on each sphere is worked out, with explicit block matrices given for clarity. The participants establish that these groups act transitively on the spheres and discuss whether the matrix multiplication defining the action is well defined.

PREREQUISITES
  • Understanding of the orbit-stabilizer theorem in group theory.
  • Familiarity with orthogonal groups, specifically ##O(n, \mathbb{R})## and ##O(n+1, \mathbb{R})##.
  • Knowledge of unitary groups, particularly ##U(n)## and ##U(n+1)##.
  • Basic concepts of linear algebra, including matrix operations and transformations.
NEXT STEPS
  • Study the orbit-stabilizer theorem in detail to understand its applications in group theory.
  • Explore the properties and applications of orthogonal groups, particularly in relation to transformations on spheres.
  • Investigate the structure and properties of unitary groups and their actions on complex vector spaces.
  • Review linear algebra concepts, focusing on matrix multiplication and orthogonal transformations.
USEFUL FOR

Mathematicians, particularly those specializing in group theory, algebra, and geometry, as well as students and researchers interested in the applications of the orbit-stabilizer theorem in identifying geometric structures.

aalma
TL;DR
Using the orbit-stabilizer theorem to identify groups.
I want to identify:
  1. ##S^n## with the quotient of ##O(n+1,\mathbb{R})## by ##O(n,\mathbb{R})##.
  2. ##S^{2n+1}## with the quotient of ##U(n + 1)## by ##U(n)##.

The orbit-stabilizer theorem would give the result, but my problem is applying it; in particular, how to find the stabilizer.
In 1, how do I define the action of ##O(n+1,\mathbb{R})## on ##S^n## and then conclude that ##\operatorname{stab}(x)=O(n,\mathbb{R})## for ##x \in S^n##? And similarly in 2?

It would be helpful and appreciated if you could explain the method for showing this.
 
Thanks. I think I got 1: the stabilizer ##H## consists of the block matrices
$$
\begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix},\qquad A\in O(n),
$$
so ##H## is a collection of ##(n+1)\times(n+1)## orthogonal matrices of this block form.
In 2, I say that ##U(n+1)## acts on ##S^{2n+1}## as follows:
for a matrix ##A\in U(n+1)## and a vector ##v\in S^{2n+1}## we consider ##Av##, i.e. the map
##U(n+1)\to S^{2n+1}## given by ##A\mapsto Av##.
But then, is the multiplication valid here?
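For reference, a minimal sketch of how the dimensions can be reconciled, assuming ##S^{2n+1}## is identified with the unit sphere in ##\mathbb{C}^{n+1}\cong\mathbb{R}^{2n+2}##:
$$
S^{2n+1}=\{v\in\mathbb{C}^{n+1}\,:\,\|v\|=1\},\qquad A\in U(n+1),\ v\in S^{2n+1}\ \Longrightarrow\ \|Av\|=\|v\|=1 .
$$
Here ##A## is a complex ##(n+1)\times(n+1)## matrix and ##v## a complex ##(n+1)##-column, so the product ##Av## is defined and lies again on ##S^{2n+1}##; the ##2n+2## only appears when everything is rewritten over ##\mathbb{R}##.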
 
aalma said:
Thanks. I think I got 1: the stabilizer ##H## consists of the block matrices
$$
\begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix},\qquad A\in O(n),
$$
so ##H## is a collection of ##(n+1)\times(n+1)## orthogonal matrices of this block form.
In 2, I say that ##U(n+1)## acts on ##S^{2n+1}## as follows:
for a matrix ##A\in U(n+1)## and a vector ##v\in S^{2n+1}## we consider ##Av##, i.e. the map
##U(n+1)\to S^{2n+1}## given by ##A\mapsto Av##.
But then, is the multiplication valid here? Since ##A## is ##(n+1)\times(n+1)## but ##v## is ##(2n+1)\times 1##?!
That looks a bit confusing.

An action of a group ##G## on a set ##X## is a group homomorphism ##G\rightarrow \operatorname{Sym}(X)##; for a linear action on a vector space ##X## it is a homomorphism ##G\rightarrow \operatorname{GL}(X).## What is your group homomorphism in this case? The natural choice would be the inclusion ##\operatorname{O}(n+1,\mathbb{R}) \rightarrow \operatorname{GL}(n+1,\mathbb{R}).##
$$
H = \{A\in SO(n+1)\, : \,A.\mathfrak{e}_{n+1}=\mathfrak{e}_{n+1}\} \leq SO(n+1)
$$
is the stabilizer of ##\mathfrak{e}_{n+1}\in \mathbb{R}^{n+1}.##

Now show that ##H\cong \operatorname{SO}(n,\mathbb{R})## and ##\operatorname{SO}(n+1,\mathbb{R})/H \cong S^n.##
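A minimal sketch of the first isomorphism, using the conventions above: an ##A\in H## fixes ##\mathfrak{e}_{n+1}## and, being orthogonal, also preserves ##\mathfrak{e}_{n+1}^{\perp}\cong\mathbb{R}^{n}##, so it has the block form
$$
A=\begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix},\qquad B\in \operatorname{SO}(n,\mathbb{R}),
$$
and ##A\mapsto B## gives ##H\cong \operatorname{SO}(n,\mathbb{R})##. Once transitivity is settled (see the posts below), the orbit map ##A\mapsto A.\mathfrak{e}_{n+1}## is onto ##S^n##, and the orbit-stabilizer theorem identifies the coset space ##\operatorname{SO}(n+1,\mathbb{R})/H## with ##S^n##.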
 
fresh_42 said:
That looks a bit confusing.

An action of a group ##G## on a set ##X## is a group homomorphism ##G\rightarrow \operatorname{Sym}(X)##; for a linear action on a vector space ##X## it is a homomorphism ##G\rightarrow \operatorname{GL}(X).## What is your group homomorphism in this case? The natural choice would be the inclusion ##\operatorname{O}(n+1,\mathbb{R}) \rightarrow \operatorname{GL}(n+1,\mathbb{R}).##
$$
H = \{A\in SO(n+1)\, : \,A.\mathfrak{e}_{n+1}=\mathfrak{e}_{n+1}\} \leq SO(n+1)
$$
is the stabilizer of ##\mathfrak{e}_{n+1}\in \mathbb{R}^{n+1}.##

Now show that ##H\cong \operatorname{SO}(n,\mathbb{R})## and ##\operatorname{SO}(n+1,\mathbb{R})/H \cong S^n.##
I think you're right. But I can look at the action ##\phi:O(n+1)\times S^n\to S^n## given by ##\phi(A,v)=Av##. So I need to see that it is transitive, that is, for every ##v, w\in S^n## there exists ##A\in O(n+1)## such that ##Av=w##. How can I find such an ##A##?
 
aalma said:
How can I find such an ##A##?
Visually, it is easy. You have two points (##v,w##) on a sphere (##S^n##) and rotations (##\operatorname{SO}(n+1)##) at hand to move one to the other. A "soft" argument is that there exists an invertible matrix ##A\in \operatorname{GL}(n+1)## such that ##A.v=w.## Since ##\|v\|=\|w\|##, we can do this with a matrix that leaves lengths and angles invariant, i.e. an orthogonal matrix.

Polar coordinates seem to be suitable for a formal proof, possibly with an induction over ##n.##

You can choose ##v## to be the pole and simply move the polar axis so that the pole lands on ##w.##
 
aalma said:
I think you're right. But I can look at the action ##\phi:O(n+1)\times S^n\to S^n## given by ##\phi(A,v)=Av##. So I need to see that it is transitive, that is, for every ##v, w\in S^n## there exists ##A\in O(n+1)## such that ##Av=w##. How can I find such an ##A##?
Pick coordinates so that your two points are in the ##x_1x_2## plane. In the plane the problem is clear: there is an element of ##O(2)## that takes one to the other. For your ##A##, take the block diagonal matrix with that ##2\times 2## block in the upper left and ##1##'s on the rest of the diagonal.
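Written out, a sketch of such an ##A## (with ##R_\theta## the ##2\times 2## rotation by the angle ##\theta## between the two points in that plane):
$$
A=\begin{pmatrix} R_\theta & 0 \\ 0 & I_{n-1} \end{pmatrix}\in O(n+1),\qquad R_\theta=\begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix},
$$
which rotates the ##x_1x_2## plane by ##\theta## and fixes the remaining coordinates.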
 
You could use induction for transitivity, essentially what fresh is suggesting: if ##O(n)## is transitive on ##S^{n-1}##, then ##O(n+1)## is transitive on the points of ##S^n## perpendicular to ##e_1##. But given any two points of ##S^n##, ##n\ge 2##, we can pass a hyperplane through them, choose a new orthonormal basis so that the hyperplane is spanned by ##e_2,\dots,e_{n+1}##, and use the same argument.

Here we think of ##S^n## as living in an abstract coordinate-free space equipped with an inner product, and ##O(n+1)## as the length-preserving linear transformations of that space. They are represented by matrices only when we choose a basis, which we may change as desired from time to time.

For this, however, you must start the induction by proving it for ##n = 0## and ##1##. It is not true for ##S^1## that any two points lie in a hyperplane, so the case of ##O(1)## being transitive on ##S^0## does not help prove that ##O(2)## is transitive on ##S^1##. It should not be too hard to show rotations act transitively on the circle, using sines and cosines, or just geometry: the first column of the matrix should be the desired destination of ##e_1##, and the second should be perpendicular to that, and also of length one.
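For instance, for the base case on the circle: to send ##e_1## to a given point ##(a,b)\in S^1## (so ##a^2+b^2=1##), one can take
$$
A=\begin{pmatrix} a & -b \\ b & a \end{pmatrix}\in \operatorname{SO}(2),
$$
whose first column is the desired destination of ##e_1## and whose second column is a unit vector perpendicular to it.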

martinbn's succinct remarks should also be helpful.
 
Extend ##v## to an orthonormal basis so that it is the first vector; extend ##w## to one as well. There is a matrix that takes one basis to the other. It will be orthogonal.
 
martinbn said:
Extend ##v## to an orthonormal basis so that it is the first vector; extend ##w## to one as well. There is a matrix that takes one basis to the other. It will be orthogonal.
If I extend ##v## to an orthonormal basis ##\{v_1,\dots,v_{n+1}\}## and write it as a matrix whose columns are these vectors, is it then enough to say that any orthogonal matrix (orthogonal transformation) ##A## acting on this matrix gives an orthogonal matrix whose columns are the orthonormal basis ##\{Av_1,\dots,Av_{n+1}\}=\{w_1,\dots,w_{n+1}\}##?
I just do not see what our matrix must look like!
 
aalma said:
If I extend ##v## to an orthonormal basis ##\{v_1,\dots,v_{n+1}\}## and write it as a matrix whose columns are these vectors, is it then enough to say that any orthogonal matrix (orthogonal transformation) ##A## acting on this matrix gives an orthogonal matrix whose columns are the orthonormal basis ##\{Av_1,\dots,Av_{n+1}\}=\{w_1,\dots,w_{n+1}\}##?
I just do not see what our matrix must look like!
I am not sure what you are saying. Take the standard basis ##\{e_i\}##; then the matrix ##A_v## that takes it to the basis ##\{v_i\}## has the vectors ##v_i## as its columns, so it is orthogonal. Same for the matrix ##A_w## that takes ##\{e_i\}## to ##\{w_i\}##. The matrix that takes the ##v_i## to the ##w_i## is ##A_wA_v^{-1}##.
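In matrix form, a sketch of this construction (with ##v=v_1## and ##w=w_1##):
$$
A_v=\begin{pmatrix} v_1 & \cdots & v_{n+1}\end{pmatrix},\qquad A_w=\begin{pmatrix} w_1 & \cdots & w_{n+1}\end{pmatrix},\qquad A:=A_wA_v^{-1}=A_wA_v^{\top}.
$$
Both factors are orthogonal, hence so is ##A##, and ##A v_i=A_w A_v^{\top}v_i=A_w e_i=w_i## for each ##i##; in particular ##Av=w##.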
 
