# Static Rotation as an Axial Vector

Hetware
If I rotate an object about an arbitrary axis, I can draw an arrow along that axis, and assign it a length proportional to the amount of rotation. I can project that arrow onto the basis of a rectangular coordinate system. The question arises, is it a vector?

I can certainly transform it from one coordinate system to another, preserving its meaning. I could add two such arrows using standard vector addition, and assert that the sum means to rotate about the resulting axis by the resulting magnitude.

What doesn't work is to perform the rotation determined by one such arrow, then the rotation determined by a second arrow, and, in general, get the rotation corresponding to the vector sum of the two.
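This failure is easy to check numerically. Below is a minimal sketch in Python/NumPy (the `rot` helper and the choice of 90° turns about x and y are my own illustration, not from the thread): composing two finite rotations depends on the order, and neither order matches the single rotation built from the vector sum of the two arrows.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by `angle`, via Rodrigues' formula."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

a = np.array([1.0, 0.0, 0.0]) * (np.pi / 2)   # "arrow" for a 90° turn about x
b = np.array([0.0, 1.0, 0.0]) * (np.pi / 2)   # "arrow" for a 90° turn about y

seq_ab = rot([1, 0, 0], np.pi / 2) @ rot([0, 1, 0], np.pi / 2)  # b first, then a
seq_ba = rot([0, 1, 0], np.pi / 2) @ rot([1, 0, 0], np.pi / 2)  # a first, then b
summed = rot(a + b, np.linalg.norm(a + b))                      # vector-sum "arrow"

print(np.allclose(seq_ab, seq_ba))   # False: finite rotations do not commute
print(np.allclose(seq_ab, summed))   # False: sequencing is not vector addition
```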

Displacements, by contrast, do work this way: performing successive displacements is the same as summing their vectors.

Now, if I require that the combination of the rotation arrows be done simultaneously, using some real number parameter, such that both angles about the respective axes of rotation increase in proportion to the magnitudes, then, I believe, the result will be the same as the rotation defined by the vector sum.

The reason I got to thinking about this is because I was pondering Feynman's unproved assertion that angular velocity vectors can be summed.

Is this something that has an intuitive discussion in textbooks? Is there a position axial vector analogous to the polar position vector used to depict a displacement from an origin?


## Answers and Replies

Muphrid
Yes, you can add angular velocity vectors, but this is not the same as performing the rotations in a specified sequence. How could it be? We all know rotations do not commute, yet vector addition is a commutative operation.

There may be a useful way of thinking about the meaning of vector addition in this context, but it is not the sequential application of rotations. Even constraining to the case where the angles about both axes are forced to be the same doesn't change this.

You must remember that position vectors are really displacement vectors between two coordinate points (points are not vectors). This is naturally a one-dimensional object.

Rotations, on the other hand, are generally determined by the plane of rotation, and we cheat by talking about the normal vectors to these planes, a cheat you can get away with in 3D. Nevertheless, the plane of a rotation is a naturally two-dimensional object. It is fundamentally different from a (displacement) vector.

Hetware
Angular rotations were called pseudovectors in this thread: https://www.physicsforums.com/showthread.php?t=585240 I think that's the correct term.

You've missed my point. I am quite aware of the term "pseudo vector". That is not the issue. My question is regarding treating angular position as an axial vector. It seems possible, with some qualifications.

Hetware
> Yes, you can add angular velocity vectors, but this is not the same as performing the rotations in a specified sequence. How could it be? We all know rotations do not commute, yet vector addition is a commutative operation.

Do infinitesimal rotations commute?

Muphrid
Yes, infinitesimal rotations commute. It is finite rotations that do not commute.
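A quick numerical illustration of this (a Python/NumPy sketch; the `rot` helper and the sample angles are assumptions for the demo): the commutator of two rotations about the x and y axes shrinks like the square of the angle, so it vanishes to first order.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by `angle`, via Rodrigues' formula."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

for theta in (1e-1, 1e-2, 1e-3):
    A = rot([1, 0, 0], theta)
    B = rot([0, 1, 0], theta)
    gap = np.max(np.abs(A @ B - B @ A))
    print(f"theta={theta:g}  commutator gap={gap:.2e}")  # shrinks like theta**2
```

Each factor-of-10 reduction in the angle shrinks the commutator by roughly a factor of 100, which is exactly the "infinitesimal rotations commute to first order" statement.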

Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.

Hetware
> Yes, infinitesimal rotations commute. It is finite rotations that do not commute.
>
> Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.

I guess I don't follow. If I have an angular velocity perpendicular to the xy plane, I can pick it up, turn it upside down and stab it through that plane so it pokes out the bottom, representing rotation in the opposite direction.

I believe the short answer to my overall question is yes. I could contrive an angular position vector function. I just replace the time factor with some other parameter, and require that I get the same results as I would from multiplying by time. I'll have to think about this.

Right now I get to go play on I66 for an hour or so.

Hetware
> Yes, infinitesimal rotations commute. It is finite rotations that do not commute.
>
> Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.

OK, I think I get your meaning. If I hold it up to a mirror, the reflected image has a reversed sense, even though the arrowhead points in the same direction, or words to that effect. Yes, my rotation vector would act like an angular velocity vector in that respect.

Muphrid
I guess I'm not sure I understand exactly what it is you want to do. I think what you're asking for is similar to an "attitude quaternion"--i.e. you pick a reference direction and express other directions by the quaternion required to rotate that reference direction to the new direction. Such a quaternion is not a vector--it has four components, although only three are independent because we usually constrain it to have unit magnitude--but it is a well-defined geometric object, and the degrees of freedom correspond to the angle to rotate through and the unit vector defining the rotation axis.

Staff Emeritus
Can you add angular velocity vectors? Sure. Just add the numbers. However, the result is meaningless, at least in general.

The result is physically meaningful if the two angular velocities are parallel or anti-parallel. There's a reason for that: this is really just a plane rotation, and rotations do commute in ℝ2. Rotations do not commute in ℝ3 or higher, and that loss of commutativity is ultimately what makes adding angular velocities fail to make sense.

It's already been noted that pseudovector is a better name for those angular velocity vectors. There's an even better name: bivector. This concept generalizes to ℝn. That the time derivative of rotation in ℝ3 can be expressed as something that looks like a vector in ℝ3 is special to ℝ3. It doesn't work in ℝ2; angular velocity is just a scalar (better: pseudoscalar) in ℝ2. It doesn't work in ℝ4; angular velocity has six degrees of freedom in ℝ4. In general, angular velocity in ℝn has n(n-1)/2 degrees of freedom. The only case where n(n-1)/2=n is n=3. ℝ3 is a special case, a very special case.

To see what's going on, let's look at the time derivative $\dot{\boldsymbol T}$ of a rotation matrix $\boldsymbol T$ in $\mathbb R^n$. Define $\boldsymbol S \equiv \dot{\boldsymbol T} \boldsymbol T^{\top}$. Note that $\boldsymbol S^{\top} = \boldsymbol T\dot{\boldsymbol T}^{\top}$. That $\boldsymbol T$ is a (proper) rotation matrix means that $\boldsymbol T \boldsymbol T^{\top} = \boldsymbol I$, the $n\times n$ identity matrix. Differentiating with respect to time yields $\dot{\boldsymbol T} \boldsymbol T^{\top} + \boldsymbol T\dot{\boldsymbol T}^{\top} = 0$. The left-hand side is just the sum of our matrix $\boldsymbol S$ and its transpose: $\boldsymbol S+\boldsymbol S^{\top}=0$. This means $\boldsymbol S$ is a skew-symmetric matrix. This skew-symmetric matrix is the generalization of angular velocity. It happens to be representable as a 3-vector in ℝ3 and ℝ3 only.
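This derivation can be checked numerically. Here is a sketch in Python/NumPy (the axis, rate, and finite-difference step are arbitrary choices for the demo): build a rotation matrix T(t) about a fixed axis, differentiate it numerically, and confirm that S = Ṫ Tᵀ is skew-symmetric and, in ℝ3, packs into the usual angular velocity 3-vector.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis by `angle`, via Rodrigues' formula."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

axis = np.array([1.0, 2.0, 3.0])   # arbitrary fixed rotation axis (illustrative)
w, t, h = 0.7, 1.3, 1e-6           # rotation rate, sample time, difference step

def T(tt):
    return rot(axis, w * tt)

Tdot = (T(t + h) - T(t - h)) / (2 * h)   # numerical d/dt of the rotation matrix
S = Tdot @ T(t).T                        # S = Tdot T^T

print(np.allclose(S, -S.T, atol=1e-6))   # True: S is skew-symmetric

# In R^3 (and only there) S packs into a 3-vector: the angular velocity.
omega = np.array([S[2, 1], S[0, 2], S[1, 0]])
print(np.allclose(omega, w * axis / np.linalg.norm(axis), atol=1e-5))  # True
```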

Note that our definition of $\boldsymbol S$ means that $\dot{\boldsymbol T} = \boldsymbol S \boldsymbol T$. Note how similar this looks to $\dot x = ax$. In the case where $x$ is a scalar and $a$ is a constant, the solution to the differential equation $\dot x = ax$, $\left. x(t) \right|_{t=0}=x_0$, is $x(t)=\exp(at)x_0$. Does this generalize to constant-rate rotations in ℝn? Could we naively say that the solution to $\dot{\boldsymbol T} = \boldsymbol S \boldsymbol T$, $\left. \boldsymbol T(t) \right|_{t=0}=\boldsymbol T_0$, is $\boldsymbol T(t) = \exp(\boldsymbol S t)\boldsymbol T_0$? The answer is yes. The matrix exponential is a well-defined concept, and it has exactly the same Taylor series as the exponential function. There is one huge difference, however. With the exponential function, $\exp(a)\exp(b) = \exp(a+b)$. This no longer works with the matrix exponential. The reason, once again, is lack of commutativity: matrix multiplication is not commutative.
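The failure of exp(A)exp(B) = exp(A+B) for matrices is also easy to demonstrate. In ℝ3 the matrix exponential of a skew-symmetric matrix has a closed form (Rodrigues' formula), so no general-purpose `expm` routine is needed; this Python/NumPy sketch uses that, with the specific generators chosen purely for illustration.

```python
import numpy as np

def skew(v):
    """The 3x3 skew-symmetric matrix [v]_x with v as its axial vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def expm_skew(v):
    """Closed-form matrix exponential of skew(v), via Rodrigues' formula."""
    th = np.linalg.norm(v)
    if th == 0:
        return np.eye(3)
    K = skew(v / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

a = np.array([1.0, 0.0, 0.0])    # generator of rotation about x, angle 1 rad
b = np.array([0.0, 1.0, 0.0])    # generator of rotation about y, angle 1 rad

lhs = expm_skew(a) @ expm_skew(b)   # exp(A) exp(B)
rhs = expm_skew(a + b)              # exp(A + B)
print(np.allclose(lhs, rhs))        # False: the scalar identity fails here
```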

What I've done above is to give a very brief overview of some of the features of what are called Lie groups. Rotation in ℝn is but one example (a fairly simple one) of a Lie group. Rotation in ℝ3 is one of the simplest (maybe the simplest?) non-commutative Lie groups. ℝ3 is where rotation starts to get interesting. It gets even weirder in ℝ4, and weirder yet in higher dimensions.

Muphrid
The concept of bivectors is also elegantly handled in geometric algebra.

Geometric algebra defines a new product of vectors. The rules of this product are simple. If you have two basis vectors $e_i,e_j$, then the geometric product is denoted $e_i e_j$ (just a juxtaposition of the two) and is equal to the following:

$$e_i e_j = \begin{cases} 1 & i = j \\ - e_j e_i & i \neq j \end{cases}$$

So if the two vectors are the same, it produces a scalar like the dot product. If they're different, then the vectors anticommute under the product, like the cross product. We call this latter result a bivector. The geometric product is associative, so you can build even bigger objects (trivectors, for instance) and so on.

Bivectors have interesting properties when treated with the geometric product. Consider the bivector $e_x e_y$, which might represent the unit bivector of the xy plane. See that

$$(e_x e_y)(e_x e_y) = - e_x e_y e_y e_x = -1$$

Or, more succinctly, $(e_x e_y)^2 = -1$. Bivectors in Euclidean spaces act like imaginary units, but each different bivector specifies its own plane.
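One concrete way to check these multiplication rules (my own illustration, not part of the post): the Pauli matrices furnish a faithful matrix representation of the basis vectors of 3D geometric algebra, so the bivector identities can be verified with ordinary matrix products in Python/NumPy.

```python
import numpy as np

# Pauli matrices represent the basis vectors e_x, e_y of the 3D geometric
# algebra: they satisfy e_i e_j + e_j e_i = 2*delta_ij under matrix products.
ex = np.array([[0, 1], [1, 0]], dtype=complex)
ey = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

print(np.allclose(ex @ ex, I2))          # e_x e_x = 1 (a scalar)
print(np.allclose(ex @ ey, -(ey @ ex)))  # distinct basis vectors anticommute

B = ex @ ey                              # the unit bivector of the xy plane
print(np.allclose(B @ B, -I2))           # (e_x e_y)^2 = -1
```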

The exponential of a bivector is taken through power series, and since Euclidean bivectors square to -1, this generates the trig functions. Namely, for any unit Euclidean bivector $i$, we get

$$\exp(i \theta) = \cos \theta + i \sin \theta$$

But with the machinery of geometric algebra, this is no longer a statement about the complex plane. This is valid in any number of dimensions, as long as $i$ is properly specified.

Under the geometric product, vectors are rotated by way of such exponentials of bivectors. Define a "rotor" $\psi = \exp(-i \theta/2)$; then the rotation operator on any vector $a$ is

$$\underline R(a) = \psi a \psi^{-1}$$
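A numerical sketch of this sandwich product (again using the Pauli-matrix representation of the 3D geometric algebra purely as an illustration): rotating $e_x$ by the rotor of the xy-plane bivector should give $\cos\theta\, e_x + \sin\theta\, e_y$.

```python
import numpy as np

# Pauli-matrix representation of the basis vectors e_x, e_y.
ex = np.array([[0, 1], [1, 0]], dtype=complex)
ey = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
B = ex @ ey                      # unit bivector of the xy plane, B @ B = -I

theta = 0.3
psi = np.cos(theta / 2) * I2 - np.sin(theta / 2) * B   # rotor exp(-B theta/2)
a_rot = psi @ ex @ np.linalg.inv(psi)                  # sandwich psi a psi^-1

expected = np.cos(theta) * ex + np.sin(theta) * ey     # e_x rotated by theta
print(np.allclose(a_rot, expected))                    # True
```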

When $i \theta = \omega t$ for some scalar $t$, then $\omega$ is a bivector as well. We can take a time derivative of the rotation operator to get

$$\dot{\underline R}(a) = \frac{1}{2}(- \omega \underline R(a) + \underline R(a) \omega) = \underline R(a) \times \omega$$

where $\times$ denotes an antisymmetric "commutator" product (you can take the expression above as a definition of this product).

As D H points out, in general the exponential of a sum of bivectors is not the product of the exponentials of the individual bivectors. However, each rotation is described by a rotor $\psi$, and one can expand these out in terms of the associated trig functions and multiply to find the combined rotor and the overall bivector underlying the rotation.