Static Rotation as an Axial Vector

In summary, the conversation discusses whether finite rotations can be added like vectors and whether angular position can be treated as an axial vector. It is noted that infinitesimal rotations commute while finite rotations do not, and the idea of using an attitude quaternion to represent orientation is brought up. It is ultimately concluded that, while angular velocity vectors can be added component-wise, the result does not in general have a meaningful interpretation.
  • #1
Hetware
If I rotate an object about an arbitrary axis, I can draw an arrow along that axis, and assign it a length proportional to the amount of rotation. I can project that arrow onto the basis of a rectangular coordinate system. The question arises, is it a vector?

I can certainly transform it from one coordinate system to another, preserving its meaning. I could add such arrows together using standard vector addition, and assert that this sum means to rotate about the resulting axis by the resulting magnitude.

What doesn't work is to perform the rotation determined by one such arrow and then the rotation determined by a second arrow, and expect, in general, to get the rotation corresponding to the vector sum of the two.
http://sphotos-a.xx.fbcdn.net/hphotos-ash3/75130_484117724951803_2017520621_n.jpg
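
Here is a minimal numerical sketch of that point (my own illustration, not from the thread), using scipy's Rotation class purely for convenience: applying one axis-angle rotation and then another is generally not the rotation given by the vector sum of the two arrows.

[code]
import numpy as np
from scipy.spatial.transform import Rotation as R

# Two axis-angle "arrows": direction = rotation axis, length = angle (radians).
a = np.array([0.0, 0.0, np.pi / 2])   # 90 degrees about z
b = np.array([np.pi / 2, 0.0, 0.0])   # 90 degrees about x

# Rotation obtained by applying a first, then b (composition of rotations).
sequential = R.from_rotvec(b) * R.from_rotvec(a)

# Rotation obtained from the plain vector sum of the two arrows.
summed = R.from_rotvec(a + b)

# Compare their action on a test vector: the results differ for finite angles.
v = np.array([1.0, 0.0, 0.0])
print(sequential.apply(v))  # [0, 0, 1]
print(summed.apply(v))      # a different direction
[/code]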

With displacements, by contrast, performing them one after another corresponds exactly to summing the vectors.

If, however, I require that the two rotations be carried out simultaneously, parameterized by a single real number so that the angles about the respective axes grow in proportion to the arrows' magnitudes, then, I believe, the result will be the same as the rotation defined by the vector sum.

The reason I got to thinking about this is that I was pondering Feynman's unproved assertion that angular velocity vectors can be summed.

Is this something that has an intuitive discussion in textbooks? Is there a position axial vector analogous to the polar position vector used to depict a displacement from an origin?
 
  • #3
Yes, you can add angular velocity vectors, but this is not the same as performing the rotations in a specified sequence. How could it be? We all know rotations do not commute, yet vector addition is a commutative operation.

There may be a useful way of thinking about the meaning of vector addition in this context, but it is not the sequential application of rotations. Even constraining to the case where the angles about both axes are forced to be the same doesn't change this.

You must remember that position vectors are really displacement vectors between two coordinate points (points are not vectors). This is naturally a one-dimensional object.

Rotations, on the other hand, are generally determined by the plane of rotation, and we cheat by talking about the normal vectors to these planes, a cheat you can get away with in 3D. Nevertheless, the plane of a rotation is a naturally two-dimensional object. It is fundamentally different from a (displacement) vector.
 
  • #4
Stephen Tashi said:
Angular rotations were called pseudovectors in this thread: https://www.physicsforums.com/showthread.php?t=585240 I think that's the correct term.

You've missed my point. I am quite aware of the term "pseudovector". That is not the issue. My question is about treating angular position as an axial vector. It seems possible, with some qualifications.
 
  • #5
Muphrid said:
Yes, you can add angular velocity vectors, but this is not the same as performing the rotations in a specified sequence. How could it be? We all know rotations do not commute, yet vector addition is a commutative operation.

Do infinitesimal rotations commute?
 
  • #6
Yes, infinitesimal rotations commute. It is finite rotations that do not commute.

Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.
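
A quick numerical check of the first claim (my own sketch, again leaning on scipy's Rotation class): the commutator of two rotations by a small angle shrinks like the square of that angle, so to first order the rotations commute.

[code]
import numpy as np
from scipy.spatial.transform import Rotation as R

def commutator_size(theta):
    """Measure how much rotations by theta about x and about y fail to commute."""
    rx = R.from_rotvec([theta, 0.0, 0.0]).as_matrix()
    ry = R.from_rotvec([0.0, theta, 0.0]).as_matrix()
    return np.linalg.norm(rx @ ry - ry @ rx)

for theta in [1e-1, 1e-2, 1e-3]:
    print(theta, commutator_size(theta))
# The printed values fall off roughly like theta**2: infinitesimal
# rotations commute, finite ones do not.
[/code]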
 
  • #7
Muphrid said:
Yes, infinitesimal rotations commute. It is finite rotations that do not commute.

Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.

I guess I don't follow. If I have an angular velocity perpendicular to the xy plane, I can pick it up, turn it upside down and stab it through that plane so it pokes out the bottom, representing rotation in the opposite direction.

I believe the short answer to my overall question is yes. I could contrive an angular position vector function. I just replace the time factor with some other parameter, and require that I get the same results as I would from multiplying by time. I'll have to think about this.

Right now I get to go play on I66 for an hour or so.
 
  • #8
Muphrid said:
Yes, infinitesimal rotations commute. It is finite rotations that do not commute.

Regardless, you seem to be interested in "angular position" as an axial vector. Would such a vector have the same transformation law under reflections as an axial vector should? For instance, a polar vector perpendicular to a plane of reflection inverts on reflections, but an axial vector perpendicular to that plane does not.

OK, I think I get your meaning. If I hold it up to a mirror, the reflected image has a reversed sense, even though the arrowhead points in the same direction, or words to that effect. Yes, my rotation vector would act like an angular velocity vector in that respect.
 
  • #9
I guess I'm not sure I understand exactly what it is you want to do. I think what you're asking for is similar to an "attitude quaternion"--i.e. you pick a reference direction and express other directions by the quaternion required to rotate that reference direction to the new direction. Such a quaternion is not a vector--it has four components, although only three are independent because we usually constrain it to have unit magnitude--but it is a well-defined geometric object, and the degrees of freedom correspond to the angle to rotate through and the unit vector defining the rotation axis.
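
For concreteness, here is a small sketch of the attitude-quaternion idea (my own hand-rolled quaternion arithmetic, not anything from the post): a unit quaternion built from an axis and an angle, used to carry a reference direction into a new direction.

[code]
import numpy as np

def attitude_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation by `angle` about unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q v q^-1 (inverse = conjugate)."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# Four components, three degrees of freedom (unit-norm constraint): axis + angle.
q = attitude_quaternion([0.0, 0.0, 1.0], np.pi / 2)   # 90 degrees about z
print(rotate(q, np.array([1.0, 0.0, 0.0])))            # ~ [0, 1, 0]
[/code]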
 
  • #10
Can you add angular velocity vectors? Sure. Just add the numbers. However, the result is meaningless, at least in general.

The result is physically meaningful if the two angular velocities are parallel or anti-parallel. There's a reason for that: this is really just a plane rotation, and rotations do commute in ℝ². Rotations do not commute in ℝ³ or higher, and that loss of commutativity is ultimately what makes it so that adding angular velocities doesn't make sense.

It's already been noted that pseudovector is a better name for those angular velocity vectors. There's an even better name: bivector. This concept generalizes to ℝⁿ. That the time derivative of rotation in ℝ³ can be expressed as something that looks like a vector in ℝ³ is special to ℝ³. It doesn't work in ℝ²; angular velocity is just a scalar (better: pseudoscalar) in ℝ². It doesn't work in ℝ⁴; angular velocity has six degrees of freedom in ℝ⁴. In general, angular velocity in ℝⁿ has n(n-1)/2 degrees of freedom. The only case where n(n-1)/2 = n is n = 3. ℝ³ is a special case, a very special case.

To see what's going on, let's look at the time derivative [itex]\dot{\boldsymbol T}[/itex] of a rotation matrix [itex]\boldsymbol T[/itex] in [itex]\mathbb R^n[/itex]. Define [itex]\boldsymbol S \equiv \dot{\boldsymbol T} \boldsymbol T^{\top}[/itex]. Note that [itex]\boldsymbol S^{\top} = \boldsymbol T\dot{\boldsymbol T}^{\top}[/itex]. That [itex]\boldsymbol T[/itex] is a (proper) rotation matrix means that [itex]\boldsymbol T \boldsymbol T^{\top} = \boldsymbol I[/itex], the n×n identity matrix. Differentiating with respect to time yields [itex]\dot{\boldsymbol T} \boldsymbol T^{\top} + \boldsymbol T\dot{\boldsymbol T}^{\top} = 0[/itex]. Note that the left-hand side is just the sum of our matrix S and its transpose: [itex]\boldsymbol S+\boldsymbol S^{\top}=0[/itex]. This means S is a skew-symmetric matrix. This skew-symmetric matrix is the generalization of angular velocity. It happens to be representable as a 3-vector in ℝ³ and ℝ³ only.

Note that our definition of S means that [itex]\dot{\boldsymbol T} = \boldsymbol S \boldsymbol T[/itex]. Note how similar this looks to [itex]\dot x = ax[/itex]. Examine the case where x is a scalar and a is a constant. The solution to the differential equation [itex]\dot x = ax,\,\,\left. x(t) \right|_{t=0}=x_0[/itex] is [itex]x(t)=\exp(at)x_0[/itex]. Does this generalize to constant-rate rotations in ℝⁿ? Could we naively say that the solution to [itex]\dot{\boldsymbol T} = \boldsymbol S \boldsymbol T,\,\, \left. \boldsymbol T(t) \right|_{t=0}=\boldsymbol T_0[/itex] is [itex]\boldsymbol T(t) = \exp(\boldsymbol S t)\boldsymbol T_0[/itex]? The answer is yes. The matrix exponential is a well-defined concept, and it has exactly the same Taylor series as the ordinary exponential function. There is one huge difference, however. With the exponential function, [itex]\exp(a)\exp(b) = \exp(a+b)[/itex]. This no longer holds for the matrix exponential, and the reason is once again the lack of commutativity: matrix multiplication is not commutative.
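
A short numerical illustration of those two points (my own sketch with numpy and scipy, not part of the original post): a constant skew-symmetric S exponentiates to a proper rotation matrix, and exp(S₁ + S₂) is generally not exp(S₁)exp(S₂) because the matrices do not commute.

[code]
import numpy as np
from scipy.linalg import expm

# Skew-symmetric generators in R^3 (angular velocity about z and about x).
Sz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])
Sx = np.array([[0.0, 0.0,  0.0],
               [0.0, 0.0, -1.0],
               [0.0, 1.0,  0.0]])

t = 0.7  # any time; T(t) = expm(S*t) solves dT/dt = S T with T(0) = I
T = expm(Sz * t)
print(np.allclose(T @ T.T, np.eye(3)), np.isclose(np.linalg.det(T), 1.0))  # True True

# exp(A)exp(B) == exp(A+B) fails when A and B do not commute...
print(np.allclose(expm(Sz) @ expm(Sx), expm(Sz + Sx)))    # False
# ...but holds when they do (e.g. both proportional to Sz).
print(np.allclose(expm(Sz) @ expm(2 * Sz), expm(3 * Sz)))  # True
[/code]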

What I've done above is to give a very brief overview of some of the features of what are called Lie groups. Rotation in ℝⁿ is but one example (a fairly simple one) of a Lie group. Rotation in ℝ³ is one of the simplest (maybe the simplest?) non-commutative Lie groups. ℝ³ is where rotation starts to get interesting. It gets even weirder in ℝ⁴, and weirder yet in higher dimensions.
 
  • #11
The concept of bivectors is also elegantly handled in geometric algebra.

Geometric algebra defines a new product of vectors. The rules of this product are simple. If you have two orthonormal basis vectors [itex]e_i, e_j[/itex], then the geometric product is denoted [itex]e_i e_j[/itex] (just a juxtaposition of the two) and is equal to the following:

[tex]e_i e_j = \begin{cases} 1 & i = j \\ - e_j e_i & i \neq j \end{cases}[/tex]

So if the two vectors are the same, it produces a scalar like the dot product. If they're different, then the vectors anticommute under the product, like the cross product. We call this latter result a bivector. The geometric product is associative, so you can build even bigger objects (trivectors, for instance) and so on.

Bivectors have interesting properties when treated with the geometric product. Consider the bivector [itex]e_x e_y[/itex], which might represent the unit bivector of the xy plane. See that

[tex](e_x e_y)(e_x e_y) = - e_x e_y e_y e_x = -1[/tex]

Or, more succinctly, [itex](e_x e_y)^2 = -1[/itex]. Bivectors in Euclidean spaces act like imaginary units, but each different bivector specifies its own plane.

The exponential of a bivector is taken through power series, and since Euclidean bivectors square to -1, this generates the trig functions. Namely, for any unit Euclidean bivector [itex]i[/itex], we get

[tex]\exp(i \theta) = \cos \theta + i \sin \theta[/tex]

But with the machinery of geometric algebra, this is no longer a statement about the complex plane. This is valid in any number of dimensions, as long as [itex]i[/itex] is properly specified.
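
One concrete way to check this (a sketch of mine, using a matrix stand-in rather than a geometric algebra library): represent the unit bivector of the xy plane by a matrix that squares to -1; its exponential then reproduces cos θ + i sin θ, which is exactly the rotation matrix of that plane.

[code]
import numpy as np
from scipy.linalg import expm

# A 2x2 matrix representation of the unit bivector of the xy plane:
# like e_x e_y, it squares to -1 (times the identity).
i = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.allclose(i @ i, -np.eye(2)))  # True

theta = 0.3
# exp(i*theta) = cos(theta) + i*sin(theta), realized here as a matrix identity;
# the result is the rotation matrix of the xy plane by theta.
print(np.allclose(expm(i * theta),
                  np.cos(theta) * np.eye(2) + np.sin(theta) * i))  # True
[/code]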

Under the geometric product, vectors are rotated by way of such exponentials of bivectors. Let there be a "rotor" [itex]\psi = \exp(-i \theta/2)[/itex]; then the rotation operator acting on any vector [itex]a[/itex] is

[tex]\underline R(a) = \psi a \psi^{-1}[/tex]

When [itex]i \theta = \omega t[/itex] for some scalar [itex]t[/itex], then [itex]\omega[/itex] is a bivector as well. We can take a time derivative of the rotation operator to get

[tex]\dot{\underline R}(a) = \frac{1}{2}(- \omega \underline R(a) + \underline R(a) \omega) = \underline R(a) \times \omega[/tex]

where [itex]\times[/itex] denotes an antisymmetric "commutator" product (you can take the expression above as a definition of this product).

As D H points out, the exponential of a sum of bivectors is not, in general, the product of the exponentials of the individual bivectors. However, each rotation is described by a rotor [itex]\psi[/itex], and one can expand these rotors in terms of the associated trig functions, multiply them out, and read off the combined rotor and the overall bivector underlying the rotation.
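
As a final sketch (mine, not from the thread): in ℝ³ rotors behave like unit quaternions, so one can multiply two rotors and read off the single rotation they compose to, as the paragraph above describes. Here scipy's quaternion-backed Rotation class stands in for the rotor algebra.

[code]
import numpy as np
from scipy.spatial.transform import Rotation as R

# Two rotations, each described by its own "rotor" (here: a unit quaternion).
r1 = R.from_rotvec([0.0, 0.0, np.pi / 3])   # 60 degrees about z
r2 = R.from_rotvec([np.pi / 4, 0.0, 0.0])   # 45 degrees about x

# Multiplying the rotors gives the combined rotation in one step...
combined = r2 * r1                 # apply r1 first, then r2

# ...from which the overall axis and angle can be read off.
rotvec = combined.as_rotvec()
angle = np.linalg.norm(rotvec)
axis = rotvec / angle
print(angle, axis)

# Check against applying the two rotations one after the other.
v = np.array([0.0, 1.0, 0.0])
print(np.allclose(combined.apply(v), r2.apply(r1.apply(v))))  # True
[/code]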
 

What is static rotation as an axial vector?

Static rotation as an axial vector refers to describing a fixed rotation of an object about a single axis by an arrow along that axis, with no accompanying translation of the object.

How is static rotation different from dynamic rotation?

Static rotation describes a fixed angular displacement about a single, unchanging axis, while dynamic rotation describes an ongoing motion in which the axis may change with time and the object may also translate.

Can static rotation occur in all three dimensions?

Yes, static rotation can occur in all three dimensions, as long as there is a fixed axis of rotation.

What is the significance of axial vectors in static rotation?

Axial vectors are important in static rotation because they represent the direction and magnitude of the rotation around the fixed axis. They can also help determine the torque and angular momentum of the rotating object.

How is static rotation measured and calculated?

Static rotation can be measured and calculated using various methods, such as using a protractor to measure the angle of rotation or using mathematical equations involving the axis of rotation and the distance from the axis to the object. In some cases, specialized equipment such as gyroscopes may be used to measure static rotation.
