FallenLeibniz
I was reading a section in my book discussing the commutativity of infinitesimal and finite rotations. The authors set up a scenario to explain why finite rotations are not commutative. The following is an excerpt from the book:
"The impossibility of describing a finite rotation by a vector results from the fact that such rotations do not commute, and therefore, in general, different results will be obtained depending on the order in which the rotations are made. To illustrate this statement, consider the successive application of two finite rotations described by rotation matrices ##\lambda_1## and ##\lambda_2##. Let us associate the vectors ##\vec{A}## and ##\vec{B}## in a one-to-one manner with these rotations. The vector sum is ##\vec{C}=\vec{A}+\vec{B}##, which is equivalent to the matrix ##\lambda_3=\lambda_2\lambda_1##. But because vector addition is commutative, we also have ##\vec{C}=\vec{B}+\vec{A}##, with ##\lambda_4=\lambda_1\lambda_2##. But we know that matrix operations are not commutative, so that in general ##\lambda_3\neq\lambda_4##."
Now, to me, this language seems kind of murky, so I had to think on this for a while. Assume you have a point P designated by a vector ##\vec{r}## in a reference frame S. Applying the first rotation ##\lambda_1## can be represented by rotating P in S. Once the rotation is complete, the point is at P'. If you take ##\vec{A}## to be the vector that connects P to P', it would be a representation of the rotation. Is this a physically plausible way to interpret the above explanation if you apply the same reasoning to the case of multiple rotations?
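The non-commutativity the excerpt describes is easy to check numerically. Here is a quick NumPy sketch (the choice of 90° rotations about the x- and y-axes is mine, not the book's; any two rotations about different axes would do):

```python
import numpy as np

def rot_x(theta):
    # Standard rotation matrix about the x-axis by angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(theta):
    # Standard rotation matrix about the y-axis by angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

lam1 = rot_x(np.pi / 2)
lam2 = rot_y(np.pi / 2)

lam3 = lam2 @ lam1  # apply lam1 first, then lam2
lam4 = lam1 @ lam2  # opposite order

print(np.allclose(lam3, lam4))  # False: the two orders give different results
```

Applying the two products to the same initial ##\vec{r}## sends the point to two different final positions, which is exactly the book's point about ##\lambda_3\neq\lambda_4##.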
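For contrast with the infinitesimal case the book mentions, one can also check numerically that the commutator of two rotations about different axes vanishes to first order in the angle: shrinking the angle by a factor of 10 shrinks the largest entry of the commutator by roughly 100, i.e. it scales as ##\theta^2##. A sketch of that check (same standard x- and y-axis rotation matrices as above; my construction, not the book's):

```python
import numpy as np

def rot_x(theta):
    # Standard rotation matrix about the x-axis by angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(theta):
    # Standard rotation matrix about the y-axis by angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

for theta in (1e-1, 1e-2, 1e-3):
    # Difference between the two orders of application
    commutator = rot_y(theta) @ rot_x(theta) - rot_x(theta) @ rot_y(theta)
    # The largest entry shrinks like theta**2, so infinitesimal
    # rotations commute to first order in theta
    print(theta, np.abs(commutator).max())
```

This is why infinitesimal rotations can be added like vectors even though finite ones cannot.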