# Using the orbit-stabilizer theorem to identify groups

In summary, the conversation discusses identifying ##S^n## with the quotient of ##O(n+1, \mathbb{R})## by ##O(n, \mathbb{R})## and ##S^{2n+1}## with the quotient of ##U(n+1)## by ##U(n)##. The orbit-stabilizer theorem is mentioned as a possible method for showing this, but the difficulty lies in finding the stabilizer. One participant suggests using explicit homomorphisms from the German Wikipedia page, while another suggests using polar coordinates for a formal proof. Ultimately, it is determined that an element of ##O(n+1)## can be used to take one point to another in the ##x_1x_2## plane.

#### aalma

TL;DR Summary
Using the orbit-stabilizer theorem to identify groups.
I want to identify:
1. ##S^n## with the quotient of ##O(n+1,\mathbb{R})## by ##O(n,\mathbb{R})##.
2. ##S^{2n+1}## with the quotient of ##U(n + 1)## by ##U(n)##.

The orbit-stabilizer theorem would give the result, but my problem is applying it: how do I find the stabilizer?
In 1, how do I define the action of ##O(n+1,\mathbb{R})## on ##S^n## and then conclude that ##\operatorname{stab}(x)=O(n,\mathbb{R})## for ##x \in S^n##? And similarly in 2?

It would be helpful and appreciated if you simplify/explain the method for showing this.

Thanks. I think I got 1: the stabilizer is the set of block matrices
H =
[A 0]
[0 1]
where ##A## is in ##O(n)##, so ##H## is a collection of ##(n+1)\times(n+1)## orthogonal matrices.
In 2, I say that ##U(n+1)## acts on ##S^{2n+1}## as follows:
for a matrix ##A## in ##U(n+1)## and a vector ##v## in ##S^{2n+1}## we consider ##Av##, which gives the map
##U(n+1)\to S^{2n+1}##, ##A\mapsto Av##.
But is the multiplication valid here?
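The dimension worry resolves itself if one views ##S^{2n+1}## as the unit sphere in ##\mathbb{C}^{n+1}## (which has ##2n+2## real coordinates), so that a point of the sphere is a complex ##(n+1)\times 1## vector and ##Av## is well defined. A quick numerical sketch of this viewpoint (my own illustration, not from the thread, assuming NumPy):

```python
# Illustration (not from the thread): a point of S^{2n+1} is a unit vector
# in C^{n+1}, so the product Av with A in U(n+1) is well defined.
import numpy as np

n = 2
rng = np.random.default_rng(0)

# A unitary (n+1)x(n+1) matrix: Q from the QR factorization of a
# random complex matrix is unitary.
Z = rng.normal(size=(n + 1, n + 1)) + 1j * rng.normal(size=(n + 1, n + 1))
A, _ = np.linalg.qr(Z)

# A point of S^{2n+1}: a complex unit vector with n+1 entries
# (equivalently 2n+2 real coordinates).
v = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
v /= np.linalg.norm(v)

w = A @ v  # an (n+1)x(n+1) matrix times an (n+1)x1 vector: valid
print(np.isclose(np.linalg.norm(w), 1.0))  # True: A maps the sphere to itself
```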

aalma said:
Thanks. I think I got 1: the stabilizer is the set of block matrices
H =
[A 0]
[0 1]
where ##A## is in ##O(n)##, so ##H## is a collection of ##(n+1)\times(n+1)## orthogonal matrices.
In 2, I say that ##U(n+1)## acts on ##S^{2n+1}## as follows:
for a matrix ##A## in ##U(n+1)## and a vector ##v## in ##S^{2n+1}## we consider ##Av##, which gives the map
##U(n+1)\to S^{2n+1}##, ##A\mapsto Av##.
But is the multiplication valid here? ##A## is ##(n+1)\times(n+1)##, but ##v##, as a real vector, is ##(2n+2)\times 1##?!
That looks a bit confusing.

An action of a group ##G## on a set ##X## is a group homomorphism ##G\rightarrow \operatorname{GL}(X).## What is your group homomorphism in this case? The natural operation would be ##\operatorname{O}(n+1,\mathbb{R}) \rightarrow \operatorname{GL}(n+1,\mathbb{R}).##
$$H = \{A\in SO(n+1)\, : \,A.\mathfrak{e}_{n+1}=\mathfrak{e}_{n+1}\} \leq SO(n+1)$$
is the stabilizer of ##\mathfrak{e}_{n+1}\in \mathbb{R}^{n+1}.##

Now show that ##H\cong \operatorname{SO}(n,\mathbb{R})## and ##\operatorname{SO}(n+1,\mathbb{R})/H \cong S^n.##
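Not part of the thread, but a quick NumPy sanity check of the block structure (my own illustration): matrices of the form diag(##B##, 1) with ##B\in SO(n)## do stabilize ##\mathfrak{e}_{n+1}## and lie in ##SO(n+1)##.

```python
# Numerical check (my illustration): diag(B, 1) with B in SO(n) fixes e_{n+1}
# and lies in SO(n+1).
import numpy as np

n = 3
rng = np.random.default_rng(1)

# A random B in SO(n): orthonormalize by QR, then correct the determinant.
B, _ = np.linalg.qr(rng.normal(size=(n, n)))
if np.linalg.det(B) < 0:
    B[:, 0] *= -1  # flipping one column keeps orthogonality, fixes det = +1

A = np.block([[B, np.zeros((n, 1))],
              [np.zeros((1, n)), np.ones((1, 1))]])
e = np.zeros(n + 1)
e[-1] = 1.0  # the vector e_{n+1}

print(np.allclose(A @ e, e))              # True: A stabilizes e_{n+1}
print(np.isclose(np.linalg.det(A), 1.0))  # True: A lies in SO(n+1)
```

Conversely, if ##A\in SO(n+1)## fixes ##\mathfrak{e}_{n+1}##, its last column is ##\mathfrak{e}_{n+1}##, and orthonormality of the rows forces the last row to be ##\mathfrak{e}_{n+1}^T## as well, giving exactly this block shape.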

fresh_42 said:
That looks a bit confusing.

An action of a group ##G## on a set ##X## is a group homomorphism ##G\rightarrow \operatorname{GL}(X).## What is your group homomorphism in this case? The natural operation would be ##\operatorname{O}(n+1,\mathbb{R}) \rightarrow \operatorname{GL}(n+1,\mathbb{R}).##
$$H = \{A\in SO(n+1)\, : \,A.\mathfrak{e}_{n+1}=\mathfrak{e}_{n+1}\} \leq SO(n+1)$$
is the stabilizer of ##\mathfrak{e}_{n+1}\in \mathbb{R}^{n+1}.##

Now show that ##H\cong \operatorname{SO}(n,\mathbb{R})## and ##\operatorname{SO}(n+1,\mathbb{R})/H \cong S^n.##
I think you're right. But I can look at the action ##\phi:O(n+1)\times S^n\to S^n## given by ##\phi(A,v)=Av##. So I need to see that it is transitive, that is, for every ##v, w\in S^n## there exists ##A\in O(n+1)## such that ##Av=w##. How can I find such ##A##?

aalma said:
How can I find such ##A##?
Visually, it is easy. You have two points (##v,w##) on a sphere (##S^n##) and rotations (##\operatorname{SO}(n+1)##) at hand to do so. A "soft" argument is that there exists an invertible matrix ##A\in \operatorname{GL}(n+1)## such that ##A.v=w.## Since ##\|v\|=\|w\|## we can do this with a matrix that leaves lengths and angles invariant, i.e. an orthogonal matrix.

Polar coordinates seem to be suitable for a formal proof, possibly an induction over ##n.##

You can choose ##v## to be the pole and simply tilt the rotation axis toward ##w.##

aalma said:
I think you're right. But I can look at the action ##\phi:O(n+1)\times S^n\to S^n## given by ##\phi(A,v)=Av##. So I need to see that it is transitive, that is, for every ##v, w\in S^n## there exists ##A\in O(n+1)## such that ##Av=w##. How can I find such ##A##?
Pick coordinates so that your two points are in the ##x_1x_2## plane. In that plane the problem is clear: there is an element of ##O(2)## that takes one to the other. For your ##A##, take the block-diagonal matrix with this ##2\times 2## block in the upper left and ##1##'s on the rest of the diagonal.
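The block idea can be carried out explicitly in coordinates: rotate in the plane spanned by ##v## and ##w## and leave its orthogonal complement fixed. A NumPy sketch (my own illustration, not from the thread):

```python
# Sketch (mine, not from the thread): build the block-diagonal A explicitly.
import numpy as np

def rotation_taking(v, w):
    """Orthogonal matrix sending unit vector v to unit vector w: a 2x2
    rotation on span{v, w}, the identity on the orthogonal complement."""
    p = w - (w @ v) * v                  # part of w orthogonal to v
    if np.linalg.norm(p) < 1e-12:        # w = ±v: use ±identity
        return np.eye(len(v)) if w @ v > 0 else -np.eye(len(v))
    u = p / np.linalg.norm(p)            # unit vector completing the plane
    c, s = w @ v, w @ u                  # cosine and sine of the angle
    R2 = np.array([[c, -s], [s, c]])     # the O(2) element in the plane
    U = np.column_stack([v, u])          # orthonormal basis of the plane
    # identity everywhere, replaced by R2 on span{v, u}:
    return np.eye(len(v)) + U @ (R2 - np.eye(2)) @ U.T

rng = np.random.default_rng(2)
v = rng.normal(size=4); v /= np.linalg.norm(v)
w = rng.normal(size=4); w /= np.linalg.norm(w)
A = rotation_taking(v, w)
print(np.allclose(A @ v, w))            # True: A takes v to w
print(np.allclose(A.T @ A, np.eye(4)))  # True: A is orthogonal
```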

you could use induction for transitivity, essentially what fresh is suggesting. i.e. if ##O(n)## is transitive on ##S^{n-1}##, then ##O(n+1)## is transitive on the points of the hemisphere in ##S^n## perpendicular to ##e_1##. But given any two points of ##S^n##, ##n\ge 2##, we can pass a hyperplane through them, and choose a new orthonormal basis so that the hyperplane is spanned by ##e_2,\dots,e_{n+1}##, and use the same argument.

Here we think of ##S^n## as living in an abstract coordinate-free space equipped with an inner product, and ##O(n+1)## as the length-preserving linear transformations of that space. They are represented by matrices only when we choose a basis, which we may change as desired from time to time.

for this however, you must start the induction by proving it for ##n = 0## and ##1##. i.e. it is not true for ##S^1## that any 2 points lie in a hyperplane, so the case of ##O(1)## being transitive on ##S^0## does not help prove ##O(2)## is transitive on ##S^1##. It should not be too hard to show rotations act transitively on the circle, using sines and cosines, or just geometry. i.e. the first column of the matrix should be the desired destination of ##e_1##, and the second should be perpendicular to that, and also of length one.
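The sines-and-cosines matrix for the base case can be written down directly; a small check of it (my own illustration, assuming NumPy):

```python
# Mini-check (mine): the matrix whose first column is the destination of e_1
# and whose second column is the perpendicular unit vector rotates the circle.
import numpy as np

def rot_to(theta):
    """Rotation of R^2 taking e_1 to (cos theta, sin theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s, c]])

t = 1.234  # an arbitrary target angle on the circle
A = rot_to(t)
e1 = np.array([1.0, 0.0])
print(np.allclose(A @ e1, [np.cos(t), np.sin(t)]))  # True: e_1 lands on target
print(np.allclose(A.T @ A, np.eye(2)))              # True: A is orthogonal
```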

martinbn's succinct remarks should also be helpful.

Extend ##v## to an orthonormal basis so that it is the first vector, extend ##w## to one as well. There is a matrix that takes one basis to the other. It will be orthogonal.

martinbn said:
Extend ##v## to an orthonormal basis so that it is the first vector, extend ##w## to one as well. There is a matrix that takes one basis to the other. It will be orthogonal.
If I extend ##v## to an orthonormal basis ##\{v_1,\dots,v_{n+1}\}## and write it as a matrix whose columns are these vectors, is it then enough to say that any orthogonal transformation ##A## acting on this matrix gives an orthogonal matrix whose columns are the orthonormal basis ##\{Av_1,\dots,Av_{n+1}\}=\{w_1,\dots,w_{n+1}\}##?
I just do not see how our matrix must look!

aalma said:
If I extend ##v## to an orthonormal basis ##\{v_1,\dots,v_{n+1}\}## and write it as a matrix whose columns are these vectors, is it then enough to say that any orthogonal transformation ##A## acting on this matrix gives an orthogonal matrix whose columns are the orthonormal basis ##\{Av_1,\dots,Av_{n+1}\}=\{w_1,\dots,w_{n+1}\}##?
I just do not see how our matrix must look!
I am not sure what you are saying. Take the standard basis ##\{e_i\}##; then the matrix ##A_v## that takes it to the basis ##\{v_i\}## has the vectors ##v_i## as its columns, so it is orthogonal. Same for the matrix ##A_w## that takes ##\{e_i\}## to ##\{w_i\}##. The matrix that takes ##\{v_i\}## to ##\{w_i\}## is ##A_wA_v^{-1}##.
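The recipe ##A = A_wA_v^{-1}## is easy to check numerically. Here is a sketch (my own, assuming NumPy), extending each unit vector to an orthonormal basis with a QR factorization:

```python
# Sketch (mine, not from the thread) of the construction A = A_w A_v^{-1}.
import numpy as np

def basis_matrix(v):
    """Orthogonal matrix whose first column is the unit vector v, i.e. v
    extended to an orthonormal basis (QR does the Gram-Schmidt work)."""
    n = len(v)
    Q, _ = np.linalg.qr(np.column_stack([v, np.eye(n)]))
    if Q[:, 0] @ v < 0:
        Q[:, 0] *= -1  # fix the sign so the first column is v, not -v
    return Q

rng = np.random.default_rng(3)
v = rng.normal(size=5); v /= np.linalg.norm(v)
w = rng.normal(size=5); w /= np.linalg.norm(w)

A_v, A_w = basis_matrix(v), basis_matrix(w)
A = A_w @ A_v.T  # A_v^{-1} = A_v^T since A_v is orthogonal

print(np.allclose(A @ v, w))            # True: A takes v to w
print(np.allclose(A.T @ A, np.eye(5)))  # True: A is orthogonal
```

Indeed ##A_v^T v = e_1## (the first column of ##A_v## is ##v##, the others are orthogonal to it), so ##A v = A_w e_1 = w##.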
