Recognizing a product of two 3d rotations (matrices)


Hi, I have a problem identifying some 3d rotation matrices. Actually I don't know whether the result can be brought into the desired form, but it would make sense from a physics point of view. My two questions are given at the bottom.

$$\mathbf{s}=\left( \begin{array}{c} s_{x} \\ s_{y} \\ s_{z} \end{array} \right),\; \mathbf{S}=\left( \begin{array}{c} S_{x} \\ S_{y} \\ S_{z} \end{array} \right)$$

$$H=\left[\left( \begin{array}{ccc} 1-\left(1-C_{z}^{2}+C_{x}^{2}\right)\gamma^{2}& 0 & -2\gamma\left(1-C_{x}C_{z}\gamma\right) \\ 0 & 1-\left(1+C_{z}^{2}+C_{x}^{2}\right)\gamma^{2} & 0 \\ 2\gamma\left(1-C_{x}C_{z}\gamma\right)& 0 & 1-\left(1+C_{z}^{2}-C_{x}^{2}\right)\gamma^{2} \end{array} \right) \mathbf{s}\right]\cdot\mathbf{S}$$

(for the physics interested: this describes Kondo effect in a quantum dot with spin-orbit interaction)

The goal is to bring H into the form $$H = \left(\mathbf{A}\mathbf{s}\right)\cdot\left(\mathbf{B}\mathbf{S}\right)$$ where A and B are matrices describing rotations.

For Cx=Cz=0 :

$$H=\left[\left(1-\gamma^{2}\right)\underbrace{\left( \begin{array}{ccc} 1 & 0 & \theta \\ 0 & 1 & 0 \\ -\theta & 0 & 1 \end{array} \right)}_{R_{y}(\theta)+O(\theta^{2})} \mathbf{s}\right]\cdot\mathbf{S} \;\; \approx \;\; \left(1-\gamma^{2}\right)\left(R_{y}(\theta) \mathbf{s} \right)\cdot \mathbf{S} \;, \qquad \theta=\frac{-2\gamma}{1-\gamma^{2}}$$

That is, H is a scalar (dot) product between the spin S and a spin s that has been rotated about the y-axis by the angle theta.
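This special case can be checked numerically. The sketch below (a NumPy illustration; the value of gamma is an arbitrary small sample, not from the original problem) builds the Cx = Cz = 0 matrix and compares it to $(1-\gamma^2)R_y(\theta)$ with $\theta = -2\gamma/(1-\gamma^2)$; the two agree up to $O(\theta^2)$ corrections:

```python
import numpy as np

# Cx = Cz = 0 case: the coupling matrix reduces to
#   Z = [[1-g^2, 0, -2g], [0, 1-g^2, 0], [2g, 0, 1-g^2]],
# which should equal (1-g^2) * R_y(theta) up to O(theta^2),
# with theta = -2g / (1 - g^2).
g = 0.05  # illustrative small gamma
Z = np.array([[1 - g**2, 0.0, -2*g],
              [0.0, 1 - g**2, 0.0],
              [2*g, 0.0, 1 - g**2]])

theta = -2*g / (1 - g**2)
Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])

# Largest entry-wise deviation; should be of order theta^2 / 2.
err = np.max(np.abs(Z - (1 - g**2) * Ry))
print(err)  # small for small gamma
```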

For Cx and Cz different from 0 :

In this case the matrix cannot be directly identified as a rotation about the x, y or z axis. The questions are now:

1) Can it be a rotation about some other axis?
2) Is it possible to write it as $$H = \left(\mathbf{A}\mathbf{s}\right)\cdot\left(\mathbf{B}\mathbf{S}\right)$$ where A and B are matrices describing rotations?

I would be glad if anybody has an idea about how to deal with this.

Fredrik (Staff Emeritus, Gold Member)
Your H is in the form $(As)^T S$ where s and S are 3x1 and A is 3x3, and you're trying to put it in the form $(As)^T(BS)$? It already is, with B equal to the 3x3 identity matrix. So I'm not sure I understand the question.

If you have a matrix Z and want to find out if it's a rotation, check if it's orthogonal, i.e. check if $Z^TZ=1$. If you want to find the rotation axis, find its eigenvectors. Only a vector along the rotation axis can be an eigenvector.
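This test is easy to automate. A minimal sketch (my own illustration, using a sample rotation about y rather than the matrix from the original post): check $Z^TZ = I$ and $\det Z = 1$, then read off the axis as the eigenvector with eigenvalue 1.

```python
import numpy as np

# A 3x3 matrix Z is a rotation iff Z^T Z = I and det Z = +1;
# its rotation axis is the eigenvector with eigenvalue 1.
def is_rotation(Z, tol=1e-10):
    return (np.allclose(Z.T @ Z, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(Z), 1.0, atol=tol))

def rotation_axis(Z):
    w, v = np.linalg.eig(Z)
    i = np.argmin(np.abs(w - 1.0))   # pick the eigenvalue closest to 1
    axis = np.real(v[:, i])
    return axis / np.linalg.norm(axis)

# Example: rotation by 0.3 rad about the y-axis.
t = 0.3
Ry = np.array([[np.cos(t), 0.0, np.sin(t)],
               [0.0, 1.0, 0.0],
               [-np.sin(t), 0.0, np.cos(t)]])
print(is_rotation(Ry))     # True
print(rotation_axis(Ry))   # ~ [0, 1, 0] up to sign
```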

Your H is in the form $(As)^T S$ where s and S are 3x1 and A is 3x3, and you're trying to put it in the form $(As)^T(BS)$? It already is, with B equal to the 3x3 identity matrix. So I'm not sure I understand the question.
I am trying to put it in the form $(As)^T(BS)$ where A and B are matrices describing rotations. Taking B = identity, A will not be a rotation matrix (as far as I can see) since $\det A\neq 1$.

I have $(Zs)^T S$ where Z does not describe a rotation. This I want to put on the form $(As)^T(BS)$, where both A and B are rotations.

I'm going to assume that you mean A and B are rotation matrices in the sense that Det(A) = Det(B) = 1, like a standard matrix for rotating vectors in real space. I have heard that there is a possibly different meaning of rotation matrix in quantum mechanics (I don't know much about this), so it may be that what I'm about to say has nothing to do with what you're talking about.

Ok, assuming you mean rotation matrices in the sense that Det(A) = Det(B) = 1, I don't think it's going to work unless there are other special conditions on Cx, Cz, etc.

Right now the equation looks like this: $\textbf{s}^T Z^T \textbf{S}$, where Z is that big matrix full of Cx's, Cz's and so forth.

I ran the matrix Z through a computer algebra system to calculate the determinant, and the result was a big ugly mess of Cx's, Cz's, and gamma's that doesn't simplify to 1.

Under your hypothesis, $H=\textbf{s}^T A^T B \textbf{S}$, where A and B are rotation matrices with Det(A) = 1 and Det(B) = 1. Now here's the problem: Det(A)Det(B) must equal Det(Z). However, this is not going to work, since Det(Z) ≠ 1.

The intuition behind this is that s → As and S → BS can only rotate the vectors s and S around and must keep them the same length. On the other hand, s → Zs might scale the length of s, since its determinant is not 1.
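The determinant can be spot-checked numerically. This sketch (my own illustration; the sample values Cx = Cz = 1/2, gamma = 1/10 are arbitrary) builds the coupling matrix from the opening post and evaluates its determinant, which indeed differs from 1:

```python
import numpy as np

# If H = (As).(BS) with rotations A and B, then writing H = s^T Z^T S
# forces det(Z) = det(A) det(B) = 1. For generic Cx, Cz, gamma the
# determinant of the matrix in the original H is not 1.
def coupling_matrix(Cx, Cz, g):
    return np.array([
        [1 - (1 - Cz**2 + Cx**2)*g**2, 0.0, -2*g*(1 - Cx*Cz*g)],
        [0.0, 1 - (1 + Cz**2 + Cx**2)*g**2, 0.0],
        [2*g*(1 - Cx*Cz*g), 0.0, 1 - (1 + Cz**2 - Cx**2)*g**2],
    ])

detZ = np.linalg.det(coupling_matrix(0.5, 0.5, 0.1))  # sample values
print(detZ)  # differs measurably from 1
```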

AA1983,

Hmmmm. Writing your matrix as a rotation matrix looks promising.

I wish I knew more about the Lie algebras of the classical groups. I have the vague impression that they are kind of like the exponential map (i.e., the geodesic map) in differential geometry.

The matrix you have written in your special case, with ones on the diagonal and theta in the two corners, is not a rotation matrix in the usual sense of the word in mathematics. I'm guessing that it might be a matrix in the Lie algebra of the rotation group that projects onto the rotation matrix you describe when the exponential map is applied.

But, forget the Lie algebras, there is a simpler way to see what is going on. If you replace the thetas by sin(theta) and if you replace the ones in the other two corners with cos(theta), then you do get the rotation matrix about the y-axis like you describe, and the matrix is in the classic mathematical form (with determinant equal to one, orthogonal column vectors, and Rot*Rot = RotRot* = I); in other words, both the row and column vectors form an orthonormal basis.

I don't want to be pedantic here. For small theta, the form in which you have written the rotation matrix asymptotically approaches the classic form of a rotation matrix. I've seen this convention used before many times. Nothing wrong with it.
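The Lie-algebra remark above can be made concrete. In this sketch (my own illustration, using SciPy's matrix exponential and an arbitrary sample angle), the skew-symmetric matrix with theta in the two corners and zeros elsewhere is an element of so(3), and the exponential map sends it to the exact rotation $R_y(\theta)$:

```python
import numpy as np
from scipy.linalg import expm

# Skew-symmetric generator: theta times the so(3) generator of
# rotations about the y-axis.
theta = 0.3
G = np.array([[0.0, 0.0, theta],
              [0.0, 0.0, 0.0],
              [-theta, 0.0, 0.0]])

R = expm(G)  # exponential map: so(3) -> SO(3)

# Exact rotation about y by theta, for comparison.
Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
print(np.allclose(R, Ry))  # True
```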

Deacon John