# Restriction of SO(N) to 2 dim subspaces

#### jostpuur

If $N>2$ and $A\in\textrm{SO}(N)$ are arbitrary, do there exist subspaces $V_1,V_2\subset\mathbb{R}^N$ such that

$$V_1+ V_2 = \mathbb{R}^N,\quad\quad \textrm{dim}(V_1)=2,\quad \textrm{dim}(V_2)=N-2$$

and such that the restriction of $A$ to $V_1$ belongs to $\textrm{SO}(2)$, while the restriction of $A$ to $V_2$ is the identity?


#### Hurkyl

Staff Emeritus
Gold Member
If $V$ is a nonzero subspace, then its image under $\textrm{SO}(N)$ is all of $\mathbb{R}^N$.

#### jostpuur

You must have paid too much attention to the title of my post, which I did not write very carefully.

In the actual problem I'm interested in finding such subspaces for a fixed matrix $A\in\textrm{SO}(N)$.

#### Hurkyl

Staff Emeritus
Gold Member
Ah, my bad. Have you tried looking at the eigenvalues and eigenvectors of A?

#### jostpuur

I don't know anything else about the eigenvalues except that they have norm 1.

The eigenvalue equation of an orthogonal matrix looks like this:

$$Az=\lambda z,\quad A\in\textrm{SO}(N),\quad z\in\mathbb{C}^N,\quad \lambda\in\mathbb{C},\quad |\lambda|=1$$

I know how to prove the claim for $N=3$. I guess it is often considered trivial... "every rotation has an axis". IMO it's not trivial. At least my own proof is not trivial but complicated... It involves such special trickery that it cannot be generalised to $N>3$.

#### jostpuur

Okay, here's a counterexample to my own claim:

$$A = \frac{1}{\sqrt{2}}\left(\begin{array}{cccc} 1 & 1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & -1 & 1 \\ \end{array}\right) \in \textrm{SO}(4)$$

duh...
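For what it's worth, the counterexample can be verified numerically; a sketch using NumPy:

```python
import numpy as np

# The counterexample matrix above, entered directly.
A = (1 / np.sqrt(2)) * np.array([
    [ 1,  1,  0,  0],
    [-1,  1,  0,  0],
    [ 0,  0,  1,  1],
    [ 0,  0, -1,  1],
])

# A is orthogonal with determinant +1, so A lies in SO(4).
assert np.allclose(A.T @ A, np.eye(4))
assert np.isclose(np.linalg.det(A), 1.0)

# Its eigenvalues are exp(+-i*pi/4), each twice; in particular 1 is not
# an eigenvalue, so A fixes no nonzero vector and no subspace of any
# dimension can carry the identity restriction.
eigvals = np.linalg.eigvals(A)
assert np.allclose(np.abs(eigvals), 1.0)
assert not np.any(np.isclose(eigvals, 1.0))
```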

#### jostpuur

New problem!

Let $N=1,2,3,\ldots$ and $A\in\textrm{SO}(2N+1)$ be arbitrary. Does there exist a one-dimensional subspace $V_1\subset\mathbb{R}^{2N+1}$ such that the restriction of $A$ to $V_1$ is the identity, and the restriction to $V_1^{\perp}$ is in $\textrm{SO}(2N)$?

#### jostpuur

hmhm.... I only now started thinking about diagonalizability of an orthogonal matrix.

If I wanted to prove that a symmetric matrix is diagonalizable, I would use the facts that a matrix always has at least one eigenvalue and that a symmetric matrix remains symmetric under orthogonal conjugation. Using these facts the proof can be carried out inductively.

An orthogonal matrix likewise remains orthogonal under orthogonal conjugation, so can the diagonalizability of an orthogonal matrix be proven in precisely the same way as that of a symmetric matrix?

The answer to my latest question seems to be positive. It can be proven by using an eigenbasis of the orthogonal matrix. So every rotation of an odd-dimensional space $\mathbb{R}^{2N+1}$ is a rotation of some even-dimensional ($2N$) subspace, around a one-dimensional axis?
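The positive answer can be illustrated numerically. This is a sketch using NumPy; the helper `random_special_orthogonal` is my own name, not a library function. It samples a matrix in $\textrm{SO}(2N+1)$ and locates the fixed axis $V_1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_special_orthogonal(n):
    """Random matrix in SO(n): QR of a Gaussian matrix, sign-fixed to det +1."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q @ np.diag(np.sign(np.diag(r)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one column to land in SO(n)
    return q

n = 5  # odd dimension 2N+1, here with N = 2
A = random_special_orthogonal(n)

# 1 is always an eigenvalue in odd dimensions ...
vals, vecs = np.linalg.eig(A)
k = np.argmin(np.abs(vals - 1.0))
assert np.isclose(vals[k], 1.0)

# ... and the corresponding real eigenvector spans the fixed axis V_1:
axis = np.real(vecs[:, k])
axis /= np.linalg.norm(axis)
assert np.allclose(A @ axis, axis)
```

On $V_1^{\perp}$ the map $A$ then restricts to an orthogonal map with determinant $1$, i.e. an element of $\textrm{SO}(2N)$.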

#### lavinia

Gold Member
> hmhm.... I only now started thinking about diagonalizability of an orthogonal matrix.
>
> Also an orthogonal matrix remains orthogonal in orthogonal transformations, so the diagonalizability of an orthogonal matrix can be proven precisely in the same way as the diagonalizability of a symmetric matrix?
>
> The answer to my latest question seems to be positive.
$$\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$$

This matrix is orthogonal but not diagonalizable over the reals.
Over the complexes it is just the complex number $i$.


#### jostpuur

> $$\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$$
>
> This matrix is orthogonal but not diagonalizable - over the reals.
It is well known that real matrices can have complex eigenvalues, so diagonalizable usually does not mean diagonalizable in real vector space only.

> Over the complexes it is just the complex number, i
It is not a complex number $i$ unless it is specifically and explicitly defined to be a complex number through some identification $\mathbb{C}\subset\mathbb{R}^{2\times 2}$.

The standard answer is that that matrix is diagonalizable and its eigenvalues are $i$ and $-i$, with eigenvectors

$$\left(\begin{array}{c} 1+i \\ 1-i \\ \end{array}\right),\quad\quad \left(\begin{array}{c} i+1 \\ i-1 \\ \end{array}\right)$$

That means that your matrix is similar to a diagonal matrix

$$\left(\begin{array}{cc} i & 0 \\ 0 & -i \\ \end{array}\right)$$
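The eigen-decomposition above can be double-checked numerically; a sketch with NumPy, where the names `J` and `P` are mine:

```python
import numpy as np

# The rotation-by-90-degrees matrix from the discussion.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Columns are the quoted eigenvectors (1+i, 1-i) and (i+1, i-1).
P = np.array([[1 + 1j,  1 + 1j],
              [1 - 1j, -1 + 1j]])

# Conjugating J by P produces the diagonal matrix diag(i, -i).
D = np.linalg.inv(P) @ J @ P
assert np.allclose(D, np.diag([1j, -1j]))
```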

#### lavinia

Gold Member
> It is well known that real matrices can have complex eigenvalues, so diagonalizable usually does not mean diagonalizable in real vector space only.
>
> It is not a complex number $i$ unless it is specifically and explicitly defined to be a complex number through some identification $\mathbb{C}\subset\mathbb{R}^{2\times 2}$.
>
> The standard answer is that that matrix is diagonalizable and its eigenvalues are $i$ and $-i$, with eigenvectors
>
> $$\left(\begin{array}{c} 1+i \\ 1-i \\ \end{array}\right),\quad\quad \left(\begin{array}{c} i+1 \\ i-1 \\ \end{array}\right)$$
>
> That means that your matrix is similar to a diagonal matrix
>
> $$\left(\begin{array}{cc} i & 0 \\ 0 & -i \\ \end{array}\right)$$
I was responding to the thought that an orthogonal matrix can be shown to be diagonalizable in the same way that a symmetric matrix can. The matrix I provided is a counterexample: you cannot diagonalize it with orthogonal transformations.

I was just pointing out that multiplication by $i$, which is the same as this matrix, has no one-dimensional real invariant subspaces. Perhaps it would have been clearer if I had called it rotation by 90 degrees; the choice of orientation of the plane and of the basis should be obvious from the example.

#### jostpuur

> I was responding to the thought that an orthogonal matrix can be shown to be diagonalizable in the same way that a symmetric matrix can. The matrix I provided is a counterexample.
> You cannot diagonalize this matrix with orthogonal transformations.

I see. My comment that orthogonal matrices remain orthogonal under orthogonal conjugation is not enough for the proof, because unitary transformations are going to be needed.

The claim that orthogonality would be preserved under unitary conjugation seems to be incorrect, if I checked it correctly just now.

I think the fact that a unitary matrix remains unitary under unitary conjugation is what is needed. Using it, we can prove that unitary matrices are diagonalizable in the same way as we prove that Hermitian matrices are.

Then, since orthogonal matrices are unitary, the diagonalizability of orthogonal matrices follows.
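This line of reasoning can be illustrated numerically. The sketch below (NumPy; the concrete matrix is my own construction) checks that an orthogonal matrix is normal, i.e. commutes with its transpose, and that its eigenvector matrix can then be taken unitary, which is exactly unitary diagonalizability:

```python
import numpy as np

def rot(theta):
    """2x2 rotation by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A concrete matrix in SO(4): two rotation blocks with distinct angles,
# conjugated by a random orthogonal change of basis so nothing special
# about the coordinates remains.
B = np.zeros((4, 4))
B[:2, :2] = rot(0.7)
B[2:, 2:] = rot(2.1)
q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))
A = q @ B @ q.T

# Orthogonal matrices are normal: A A^T = A^T A, since A^T = A^{-1}.
assert np.allclose(A @ A.T, A.T @ A)

# A normal matrix is unitarily diagonalizable.  Here the eigenvalues are
# distinct, so NumPy's normalized eigenvectors already form a unitary matrix.
vals, U = np.linalg.eig(A)
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(U.conj().T @ A @ U, np.diag(vals))
```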

#### Hurkyl

Staff Emeritus
Gold Member
> It can be proven by using eigenbasis of the orthogonal matrix. So all rotations in odd (2N+1) dimensional spaces are always rotations in even (2N) dimensional subspace, around some one dimensional axis?
I believe this to be true -- you can (I'm pretty sure) show that eigenvalues that aren't 1 come in pairs -- either (-1,-1) or a complex-conjugate pair. I'm not 100% sure, though -- I can't seem to recall why I believe that to be true.

#### jostpuur

$A\in\textrm{SO}(2N+1)$ has an odd number of eigenvalues (counted with multiplicity), all on the circle $|z|=1$, and those that are not $\pm 1$ come in conjugate pairs $(\lambda,\lambda^*)$. So the number of eigenvalues equal to $\pm 1$ must be odd. Each pair $(\lambda,\lambda^*)$ contributes $\lambda\lambda^*=|\lambda|^2=1$ to the product of all eigenvalues, and that product equals $\det A = 1$, so the number of eigenvalues $\lambda=-1$ has to be even. The number of eigenvalues $\lambda=1$ is therefore odd, and in particular cannot be zero.
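A numerical spot-check of this counting argument, sketched with NumPy on random elements of $\textrm{SO}(2N+1)$ (the helper `random_so` is my own name):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_so(n):
    """Random matrix in SO(n) via QR of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one column to force det +1
    return q

for n in (3, 5, 7):  # odd dimensions 2N+1
    A = random_so(n)
    vals = np.linalg.eigvals(A)
    assert np.allclose(np.abs(vals), 1.0)       # all eigenvalues on |z| = 1
    assert np.isclose(np.prod(vals).real, 1.0)  # their product is det A = 1
    n_minus = int(np.sum(np.isclose(vals, -1.0)))
    n_plus = int(np.sum(np.isclose(vals, 1.0)))
    assert n_minus % 2 == 0                     # the -1's come in even number
    assert n_plus % 2 == 1 and n_plus >= 1      # odd count of +1's, so at least one
```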


#### Hurkyl

Staff Emeritus
Gold Member
Oh right, the norm 1 thing; that's what I couldn't remember today. I feel better now. I was getting worried I gave some bad advice to someone recently!
