# Passive Transformation and Rotation Matrix

TL;DR Summary
I got confused by the rotation matrices for the basis and for the components, i.e., why they are written that way. Working from linear algebra, I wrote out how I think it should be done. I would like to confirm whether my reasoning is correct or whether I missed something.
I'm reading Group Theory by A. Zee, specifically chapter I.3 on rotations. He used the passive transformation in analyzing a point ##P## in space. There are two observers, one labeled with unprimed coordinates and the other with primed coordinates. From the figure below, he deduced the relation between the two observers' coordinates,

$$x' = x \cos\theta + y \sin\theta, \qquad y' = -x \sin\theta + y \cos\theta \tag{1}$$

This can be written in vector notation as ##\; \vec{r}' = R(\theta) \vec{r} \;## where,

$$\vec{r} = \begin{bmatrix} x \\ y \end{bmatrix}, \quad \vec{r}' = \begin{bmatrix} x' \\ y' \end{bmatrix}, \quad R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \tag{2}$$
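As a quick numerical sanity check (a minimal pure-Python sketch; the helper names `R` and `matvec` are mine, not Zee's), we can confirm that multiplying ##\vec{r}## by the matrix in eq. ##(2)## reproduces the component form of eq. ##(1)##:

```python
from math import cos, sin, pi

def R(theta):
    # Passive rotation matrix of eq. (2), as a 2x2 nested list
    return [[cos(theta), sin(theta)],
            [-sin(theta), cos(theta)]]

def matvec(M, v):
    # 2x2 matrix acting on a 2-vector
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

theta = pi / 6
x, y = 2.0, 1.0
xp, yp = matvec(R(theta), [x, y])

# Compare against the component form of eq. (1)
assert abs(xp - (x*cos(theta) + y*sin(theta))) < 1e-12
assert abs(yp - (-x*sin(theta) + y*cos(theta))) < 1e-12
```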

I'll briefly discuss what I know from linear algebra. Given a vector ##\vec{v}##, we can always express it in different bases. Here, the vector ##\vec{v}## is the arrow from the origin to the point ##P##. I'll use an unprimed basis ##\{\hat{x},\hat{y}\}## and a primed basis ##\{\hat{x}',\hat{y}'\}##. We can rotate the basis ##\{\hat{x},\hat{y}\}## using the rotation operator ##R(\theta)## so that it yields the new basis ##\{\hat{x}',\hat{y}'\}##. In equations,

$$\begin{bmatrix} \hat{x}' \\ \hat{y}' \end{bmatrix} = R(\theta) \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} \tag{3}$$

This can be seen in the figure below where the unprimed basis is colored in blue while the primed basis is colored in red,

We can write,

$$\vec{v} = x' \hat{x}' + y' \hat{y}' = \begin{bmatrix} x' & y' \end{bmatrix} \begin{bmatrix} \hat{x}' \\ \hat{y}' \end{bmatrix} = \begin{bmatrix} x' & y' \end{bmatrix} R(\theta) \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} \tag{4}$$

which means,

$$\begin{bmatrix} x & y \end{bmatrix} = \begin{bmatrix} x' & y' \end{bmatrix} R(\theta) \tag{5}$$

taking the transpose we get,

$$\begin{bmatrix} x \\ y \end{bmatrix} = R(\theta)^T \begin{bmatrix} x' \\ y' \end{bmatrix} \tag{6}$$

This tells us that there exists an operator ##\tilde{R}(\theta)## such that,

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \tilde{R}(\theta)^T \begin{bmatrix} x \\ y \end{bmatrix} \tag{7}$$

so that,

$$\begin{bmatrix} x \\ y \end{bmatrix} = R(\theta)^T \begin{bmatrix} x' \\ y' \end{bmatrix} = R(\theta)^T \tilde{R}(\theta)^T \begin{bmatrix} x \\ y \end{bmatrix} = (\tilde{R}(\theta) R(\theta))^T \begin{bmatrix} x \\ y \end{bmatrix} \tag{8}$$

This means ##\tilde{R}(\theta) = R(\theta)^{-1} = R(\theta)^T## (we are working with orthogonal transformations, so the inverse is just the transpose). This implies that,

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = R(\theta) \begin{bmatrix} x \\ y \end{bmatrix} \tag{9}$$
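The orthogonality used in that last step can also be checked numerically (a small pure-Python sketch with my own helper names, not part of the book):

```python
from math import cos, sin

def R(theta):
    # eq. (2): passive rotation matrix
    return [[cos(theta), sin(theta)],
            [-sin(theta), cos(theta)]]

def transpose(M):
    return [[M[0][0], M[1][0]],
            [M[0][1], M[1][1]]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7
RT_R = matmul(transpose(R(theta)), R(theta))

# Orthogonality: R^T R should be the identity, so R^{-1} = R^T
for i in range(2):
    for j in range(2):
        assert abs(RT_R[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```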

My original confusion while reading is that given the basis transforms through ##R(\theta)##, we know that the coordinates should transform inversely so as to preserve the vector ##\vec{v}##, i.e.,

$$\begin{bmatrix} \hat{x}' \\ \hat{y}' \end{bmatrix} = R(\theta) \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} \qquad \leftrightarrow \qquad \begin{bmatrix} x' \\ y' \end{bmatrix} = R(\theta)^T \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \tag{10}$$

This led me to be confused by Zee's rotation matrix in eq. ##(2)##, which stems from the coordinate transformation in eq. ##(1)##: the matrix is different from what I expected, namely ##R(\theta)^T##.

Eventually, I think I found the reason. It has to do with the row–column product used to write the vector. From the linear algebra discussion above, to preserve the vector, and considering eq. ##(3)## and eq. ##(9)##,

$$\vec{v} = x' \hat{x}' + y' \hat{y}' = \begin{bmatrix} x' & y' \end{bmatrix} \begin{bmatrix} \hat{x}' \\ \hat{y}' \end{bmatrix} = \begin{bmatrix} x & y \end{bmatrix} R(\theta)^T R(\theta) \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} = x \hat{x} + y \hat{y} \tag{11}$$
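This invariance of ##\vec{v}## can be verified concretely (a pure-Python sketch of my own, representing the basis vectors numerically): rotate the basis by eq. ##(3)##, rotate the coordinates by eq. ##(9)##, and check that reassembling ##x' \hat{x}' + y' \hat{y}'## gives back the original components.

```python
from math import cos, sin

theta = 0.4
x, y = 3.0, -1.5

# Primed basis vectors from eq. (3): xhat' = cos(t) xhat + sin(t) yhat, etc.,
# written here in the unprimed basis, where xhat = (1, 0) and yhat = (0, 1)
xhat_p = (cos(theta), sin(theta))
yhat_p = (-sin(theta), cos(theta))

# Primed coordinates from eq. (9): [x'; y'] = R(theta) [x; y]
xp = x*cos(theta) + y*sin(theta)
yp = -x*sin(theta) + y*cos(theta)

# Reassemble the vector in the primed basis; it must equal (x, y)
vx = xp*xhat_p[0] + yp*yhat_p[0]
vy = xp*xhat_p[1] + yp*yhat_p[1]
assert abs(vx - x) < 1e-12 and abs(vy - y) < 1e-12
```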

So, this is what we mean by "preserving" the vector. The naive notion that the coordinates transform inversely to the basis as written in eq. ##(10)##, i.e., with both written column-wise, is incorrect. The key here is the expression ##\begin{bmatrix} x' & y' \end{bmatrix} = \begin{bmatrix} x & y \end{bmatrix} R(\theta)^T##. In other words, what we mean by "preserve" is that the basis is written as a column while the components are written as a row. However, we usually express the components as a column vector, so that instead of the right-hand equation in eq. ##(10)## we have

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = R(\theta) \begin{bmatrix} x \\ y \end{bmatrix} \tag{12}$$

which matches Zee's rotation matrix. This column version of the coordinate rotation may lead one to think that both the basis and the coordinates transform through ##R(\theta)##, but in fact they don't: the coordinates transform through ##R(\theta)^T##, and the column version is just a convenient way of expressing the rotation of coordinates. I think this should be correct (it follows directly from the linear algebra), but can anyone comment on whether my realization is correct?

I am probably not the best person to sift through your derivation to identify exactly where something is wrong (if anything), but I would like to say that, working with both passive and active transformations, I find it helps immensely to adopt a notation that clearly distinguishes what is a (passive) coordinate change and what is an active transformation. For instance, having two reference frames with associated coordinate systems A and B, the (passive) coordinate change of a vector ##v^B## coordinated in B to the same vector coordinated in A can be written using superscripts as $$v^A = R^{A\gets B} \, v^B,$$ whereas the active transformation of a vector ##v## rotated to the vector ##u## can be written using subscripts with the notation $$u = R_{A\gets B} \, v,$$ where all vectors and rotation matrices are coordinated in the same arbitrary coordinate system, e.g. using a third coordinate system C as ##u^C = R_{A\gets B}^C \, v^C##. With that notation, and using ##R_{B\gets A} = R_{A\gets B}^T##, one can then express the relation between passive and active transformations as $$R^{A\gets B} = R_{B\gets A}^A = R_{B\gets A}^B,$$ that is, the passive coordinate change matrix changing to A from B is the same as the active transform to B from A when the latter is coordinated either in A or in B.

Of course, the notation can be varied, with the roles of superscript and subscript swapped, the arrows dropped, and so on, but the important point is that with a proper notation you can much better keep track of what you are expressing and avoid mistakes like accidentally equating an active transform with a passive coordinate change. Keeping the notation clear should make it a fairly straightforward exercise to derive the relationship above.
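The relation between passive and active transformations can be illustrated numerically. In this sketch (pure Python; frame B is taken as frame A rotated by ##\theta##, and all helper names are my own), a vector lying along B's first axis has coordinates ##(1,0)## in B, and the passive change to A should give its A-coordinates ##(\cos\theta, \sin\theta)## via the active rotation matrix coordinated in A:

```python
from math import cos, sin, isclose

theta = 0.9

def R_active(t):
    # Active rotation by t, coordinated in A: takes frame A's axes onto frame B's
    return [[cos(t), -sin(t)],
            [sin(t),  cos(t)]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# A vector lying along frame B's first axis has coordinates (1, 0) in B.
v_B = [1.0, 0.0]

# Passive change to A from B, using R^{A<-B} = R_{B<-A}^A (the active matrix above)
v_A = matvec(R_active(theta), v_B)

# Frame B's first axis, expressed in A, is (cos theta, sin theta)
assert isclose(v_A[0], cos(theta)) and isclose(v_A[1], sin(theta))
```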
