Why is the Cross Product Used in Mathematics? Understanding its Role and History

  • #1
Trying2Learn
TL;DR Summary
The question says it all.
I know that there exists a generalization of the cross product (the exterior product), but this question does not concern that (and I would prefer we not discuss it).

I know that the cross product (that Theodore Frankel, for example, calls "the most toxic operation in math") works in 3D only. (Why does he say this? Simply because it fails associativity?)

I am aware that the operation has two vectors as input, takes their magnitude and the angle between them and outputs a vector that is perpendicular to both and contains information about the angle and the magnitudes.

I can REASON out why this strange operation is so useful when constructing the "moment." I can reason out why it can deliver information about the "tendency" to keep rotating (angular momentum) and can work the operations to show that the rate of change of angular momentum is the moment.

All that is fine.

However, despite that, this operation unnerves me, and I do not know why.

I can see (in my mind) how integrals "sum up effects." I can see, in my mind, the role of differentiating. I can see the role of the dot product (in 3D space, not its generalizations). However, this operation called the "cross product" seems like a rabbit pulled out of a hat: "Oh! It's useful! So, let's use it."

Can anyone discuss the role/need/history of this operation? It seems laden with baggage (e.g., the "right hand rule" to determine the direction of the resulting vector).

Why (how?) did this operation come about? It just seems whimsical (esp. the sine of the angle between the vectors and, say, not the cosine)

Anything?
 
  • #2
Trying2Learn said:
TL;DR Summary: The question says it all.

"Oh! It's useful! So, let's use it."
Is any other justification needed?
 
  • Like
Likes Vanadium 50, russ_watters and nasu
  • #3
Dale said:
Is any other justification needed?
OK, that was good. That was funny. Put a smile on my face.

But could you elaborate?

In my senses, I see rotations and tendencies to rotate.

In my mind, I see a paraphernalia of mathematical tools.

Fine, then along comes this strange operation: norm of the first vector, norm of the second, the SINE of the angle, make the final vector perpendicular to both, and then (icing on the cake of confusion), use the right hand rule.

This is so infused with apparent whimsy that it baffles me how it came about.

Or should I just stop thinking and just use it?
 
  • Skeptical
Likes Motore
  • #4
Trying2Learn said:
But could you elaborate?
We use it because it is useful. Its weirdness is not relevant. Using it, regardless of its weirdness, allows us to accurately predict the outcome of experiments and build useful devices like communication systems. That is all that is needed.
 
  • Like
Likes russ_watters
  • #5
Dale said:
We use it because it is useful. Its weirdness is not relevant. Using it, regardless of its weirdness, allows us to accurately predict the outcome of experiments and build useful devices like communication systems. That is all that is needed.
OK, be that way ;-) (joking here)

How did they stumble upon this operation?

Did they just try everything under the sun? Why not the cosine of the angle between the vectors?

So, yes, it is useful. But how did they know in advance (when constructing this operation) that it would be useful?
 
  • #6
Trying2Learn said:
Why not the cosine of the angle between the vectors.
An operation that produces a vector perpendicular to the plane defined by two vectors cannot work on two parallel or antiparallel vectors. Nor can it be defined if one or other vector has zero magnitude. ##|\vec a||\vec b|\sin\theta## has to be in the running just on that basis, where a cosine-based one is not.
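To see this point numerically, here is a minimal pure-Python sketch (my own illustration, not part of the original post): the magnitude of the cross product equals ##|\vec a||\vec b|\sin\theta##, and parallel inputs collapse gracefully to the zero vector instead of breaking.

```python
import math

def cross(a, b):
    # component formula for the 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    return math.sqrt(sum(x * x for x in v))

a = (2.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)  # 45 degrees from a

# angle between a and b, recovered from the dot product
theta = math.acos(sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b)))

# |a x b| = |a| |b| sin(theta)
print(abs(norm(cross(a, b)) - norm(a) * norm(b) * math.sin(theta)) < 1e-12)  # True

# parallel vectors give the zero vector, so nothing blows up
print(cross(a, (3.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)
```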
 
  • Like
Likes Dale
  • #7
Trying2Learn said:
But how did they know in advance (when constructing this operation) that it would be useful?
I don’t know the history here, so I cannot comment on that. It is entirely possible that they did not know in advance that it would be useful. Sometimes new math is developed because there is an immediate practical need, like calculus. Sometimes it is developed just because a mathematician liked it, and only later finds a practical use. Sometimes it is useful for one thing immediately, and later it is found to be useful elsewhere. I don’t know how specifically that history played out with the cross product.

Does it matter?

Trying2Learn said:
Why not the cosine of the angle between the vectors
I do have to say that I find this part of your reaction a little strange. You are repeatedly harping on the sine rather than the cosine. I don’t get that at all. What is weirder about sine than cosine? Why do you object to the sine? To me that seems the most natural part. The angle is important and it has to go to zero at 0 degrees and be finite at 90 degrees. What else would it be?
 
  • Like
Likes Trying2Learn
  • #8
Trying2Learn said:
TL;DR Summary: The question says it all.

So I do know that there does exist a generalization of the cross product (the exterior product), but this question does not concern that (and I would prefer it not )

I know that the cross product (that Theodore Frankel, for example, calls "the most toxic operation in math") works in 3D only. (Why does he say this? Simply because it fails associativity?)
I have no clue why he says this. It's nonsense.

The vector product in ##\mathbb{E}^3## has two quite intuitive geometrical meanings.

(a) it measures oriented areas

If you have two linearly independent vectors ##\vec{a}## and ##\vec{b}## by definition ##\vec{a} \times \vec{b}## is a vector perpendicular to the plane spanned by the two vectors (with the direction chosen according to the right-hand rule) with the magnitude given by the area of the parallelogram spanned by the two vectors.

It turns out that the operation is linear in both arguments and that it's antisymmetric, i.e., ##\vec{b} \times \vec{a}=-\vec{a} \times \vec{b}##.

[Image: vector-product.png — the parallelogram spanned by ##\vec{a}## and ##\vec{b}##]


(b) it describes "infinitesimal rotations"

If you have a rotation matrix ##\hat{D}_{\vec{n}}(\alpha)##, which describes a rotation by an angle ##\alpha## along an axis defined by the unit vector ##\vec{n}## (with the sense of the rotation according to the right-hand rule), then for small ##\alpha##
$$\hat{D}_{\vec{n}}(\alpha) \vec{V}=\vec{V} + \alpha \vec{n} \times \vec{V} + \mathcal{O}(\alpha^2).$$
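Meaning (b) is easy to verify numerically; here is a small pure-Python check (my own sketch, not from the post): an exact rotation about the z-axis by a small angle agrees with ##\vec{V} + \alpha \vec{n} \times \vec{V}## up to ##\mathcal{O}(\alpha^2)##.

```python
import math

def cross(a, b):
    # component formula for the 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_z(alpha, v):
    # exact rotation about the z-axis
    c, s = math.cos(alpha), math.sin(alpha)
    x, y, z = v
    return (c*x - s*y, s*x + c*y, z)

n = (0.0, 0.0, 1.0)   # rotation axis (unit vector)
v = (1.0, 2.0, 3.0)
alpha = 1e-4          # small rotation angle

exact = rotate_z(alpha, v)
approx = tuple(vi + alpha * ci for vi, ci in zip(v, cross(n, v)))

# the difference between the exact rotation and V + alpha * (n x V) is O(alpha^2)
err = max(abs(e - a) for e, a in zip(exact, approx))
print(err < 1e-7)  # True
```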
Trying2Learn said:
I am aware that the operation has two vectors as input, takes their magnitude and the angle between them and outputs a vector that is perpendicular to both and contains information about the angle and the magnitudes.
The magnitude is, according to meaning (a)
$$|\vec{a} \times \vec{b}|=|\vec{a}| |\vec{b}| \sin \angle(\vec{a},\vec{b}),$$
where ##\angle(\vec{a},\vec{b}) \in [0,\pi]##.
Trying2Learn said:
I can REASON out why this strange operation is so useful when constructing the "moment." I can reason out why it can deliver information about the "tendency" to keep rotating (angular momentum) and can work the operations to show that the rate of change of angular momentum is the moment.
That's of course directly related to meaning (b) and is thus important for defining the angular velocity as an axial vector, ##\vec{\omega}##.
Trying2Learn said:
All that is fine.

However, despite that, this operation unnerves me, and I do not know why.
Maybe due to the nonsensical statement of Theodore Frankel ;-)). Just use another textbook.
Trying2Learn said:
I can see (in my mind) how integrals "sum up effects." I can see in my mind, the role of differentiating. I can see the role of the dot product (in 3D space, not its generalizations). However, this operation called the "cross product" seems to be like rabbit pulled out of a hat: "Oh! It's useful! So, let's use it."
That's true for all the standard operations defined with vectors.
Trying2Learn said:
Can anyone discuss the role/need/history of this operation. It seems laden with baggage (i.e.: the "right hand rule" to determine the direction of the resulting vector)
The history is funny. In the beginning, when physicists dealt with (vector) fields, they used quaternions, discovered by Hamilton in 1843. Maxwell famously formulated his electromagnetic theory first in terms of quaternions, and it was Heaviside and Gibbs, toward the end of the 19th century, who introduced the much simpler notion of vectors in 2D and 3D Euclidean space (which itself is an affine space). If you want to appreciate how much simpler physics gets with vectors instead of quaternions, try to read Maxwell's treatise ;-).
Trying2Learn said:
Why (how?) did this operation come about? It just seems whimsical (esp. the sine of the angle between the vectors and, say, not the cosine)
The cosine occurs in the dot product, which (if one of the vectors is a unit vector) describes the projection of the other vector onto the direction of the unit vector; that's why the cosine appears there.

Of course you can also combine the cross and the dot product in the "scalar triple product" (I like the German name "Spatprodukt" better, because of its intuitive meaning), ##(\vec{a} \times \vec{b}) \cdot \vec{c}##. For three non-coplanar vectors this gives the volume of the parallelepiped (or "Spat", like a calcite crystal).

[Image: spat-produkt.png — the parallelepiped ("Spat") spanned by ##\vec{a}##, ##\vec{b}##, and ##\vec{c}##]
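The volume interpretation of the Spatprodukt can be checked in a few lines of pure Python (a throwaway sketch; the helper functions are mine):

```python
def cross(a, b):
    # component formula for the 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# edges of the unit cube: the triple product gives its volume, 1
a, b, c = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
print(dot(cross(a, b), c))  # 1.0

# coplanar vectors span zero volume
print(dot(cross(a, b), (2.0, 3.0, 0.0)))  # 0.0
```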

Trying2Learn said:
Anything?
I hope that helped.
 
  • Informative
  • Like
Likes Trying2Learn and Dale
  • #9
In the article Convenient Equations for Projectile Motion, Am. J. Phys. 29, 623 (1961); doi: 10.1119/1.1937861, J.R. Winans, using quaternions (no less), derives the expression $$\mathbf{a} \times\mathbf{s} = \mathbf{v}\times\mathbf{u}.$$ It is applicable to an object moving under constant acceleration ##\mathbf{a}##. Here, ##\mathbf{u}## is the initial velocity. The other two quantities, ##\mathbf{v}## and ##\mathbf{s}##, are, respectively, the velocity and the position at a later time.

From this, one can derive the horizontal displacement of a projectile ##\Delta x## $$\Delta x=\left|\frac{\mathbf{v}\times\mathbf{u}}{g}\right|.$$ This expression is especially useful not only for finding the horizontal range, but also for maximizing it. The optimization condition for the horizontal range ##\mathbf{v}\cdot\mathbf{u}=0## is obvious.
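A quick numerical sanity check of this range expression (my own pure-Python sketch; the launch values are arbitrary): for level ground, ##|\mathbf{v}\times\mathbf{u}|/g## matches the usual ##v_{0x}\,t_{\text{flight}}##.

```python
import math

g = 9.81
# arbitrary launch: 20 m/s at 35 degrees above horizontal, level ground
speed, angle = 20.0, math.radians(35.0)
u = (speed * math.cos(angle), speed * math.sin(angle))  # initial velocity (2D)

t_flight = 2 * u[1] / g                  # time to return to launch height
v = (u[0], u[1] - g * t_flight)          # final velocity

# in 2D the cross product reduces to the scalar vx*uy - vy*ux
range_cross = abs(v[0] * u[1] - v[1] * u[0]) / g
range_kinematic = u[0] * t_flight        # the usual v0x * t

print(abs(range_cross - range_kinematic) < 1e-9)  # True
```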

It is true that everything related to projectile motion can be derived starting with the kinematics equations describing the horizontal and vertical components of the position vector as functions of time. However, doing so is long and tedious for finding, for example, the projection angle that maximizes the horizontal range of a projectile fired from the edge of a cliff of height ##H.##
Trying2Learn said:
So, yes, it is useful. But how did they know in advance (when constructing this operation) that it would be useful?
I don't know about "them" but I can tell you from my own experience how I happened to come upon the range equation on my own before reading Winans's article. I started by considering the development of an alternate method for solving projectile motion problems using geometry and trigonometry instead of quadratic equations. That work is described here. I used what I call "the velocity triangle", consisting of the initial and final velocity vectors. Then I considered how one can use the triangle to find what one usually finds in projectile problems, the horizontal range included.

I did not know in advance that the range can be written as the magnitude of a cross product divided by ##g##, but I knew that half of the magnitude of the cross product is the area of the triangle having the two vectors as sides.

[Image: Velocity triangle.png — triangle formed by the initial and final velocity vectors]

Looking at the velocity triangle (see figure), it became obvious to me that the range ##R=v_{0x}~t_{\!f}##, where ##t_{\!f}=\dfrac{v_{fy}-v_{0y}}{g}##, is twice the area of the triangle divided by ##g##. Putting the two thoughts together gave me the cross product equation for the horizontal range. Just to make sure, I verified that ##|\mathbf{v}_f\times\mathbf{v}_0|=|v_{0x}(v_{fy}-v_{0y})|.##

To make a long story short, I didn't start out intending to find use for the cross product. I stumbled onto it by diddling with the velocity triangle in order to find an alternate description for projectile motion. As we say in the profession, if we knew what we are doing, it wouldn't be called research.
 
Last edited:
  • Like
Likes Trying2Learn, Dale and vanhees71
  • #10
As an engineer I would like to add that most of the properties of the vector cross product can at some level be understood by looking at the properties of the equivalent skew-symmetric cross product matrix, ##\left[\mathbf a\right]_\times##, such that for two vectors ##\mathbf a## and ##\mathbf b## we have
$$\left[\mathbf a\right]_\times \mathbf b = \mathbf a \times \mathbf b.$$
As an example of how useful this notation can be consider that the rotation matrix for angle ##\theta## around the unit axis ##\mathbf n## can be written as
$$\mathbf R_{\theta\mathbf n} = e^{\left[\theta\mathbf n\right]_\times} = \sum_{i=0}^\infty\frac{\left[\theta\mathbf n\right]_\times^i}{i!} = \mathbf I + \sin\theta \left[\mathbf n\right]_\times + (1-\cos\theta) \left[\mathbf n\right]_\times^2.$$
That is, repeated multiplication of a cross product matrix has a cyclic structure: ##\left[\mathbf n\right]_\times##, ##\left[\mathbf n\right]_\times^2##, ##\left[\mathbf n\right]_\times^3 = -\left[\mathbf n\right]_\times##, and ##\left[\mathbf n\right]_\times^4 = -\left[\mathbf n\right]_\times^2##, with ##\mathbf n## being a unit vector. By the way, note that $$-\left[\mathbf n\right]_\times^2 = \mathbf I - \mathbf n \mathbf n^T$$ is the plane projection matrix with ##\mathbf n## being the plane normal.
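The Rodrigues-style formula above is easy to exercise in pure Python (my own sketch; the helper names `skew`, `matmul`, and `rodrigues` are mine): build ##\left[\mathbf n\right]_\times##, assemble ##\mathbf I + \sin\theta \left[\mathbf n\right]_\times + (1-\cos\theta)\left[\mathbf n\right]_\times^2##, and rotate a vector.

```python
import math

def skew(n):
    # cross-product matrix [n]_x, so that skew(n) @ b == n x b
    x, y, z = n
    return [[0.0,  -z,   y],
            [  z, 0.0,  -x],
            [ -y,   x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues(theta, n):
    # R = I + sin(theta) [n]_x + (1 - cos(theta)) [n]_x^2
    K = skew(n)
    K2 = matmul(K, K)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), math.cos(theta)
    return [[I[i][j] + s * K[i][j] + (1 - c) * K2[i][j] for j in range(3)]
            for i in range(3)]

# rotate (1,0,0) by 90 degrees about the z-axis -> (0,1,0)
R = rodrigues(math.pi / 2, (0.0, 0.0, 1.0))
v = [1.0, 0.0, 0.0]
rv = [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
print([round(x, 12) for x in rv])  # [0.0, 1.0, 0.0]
```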

I used to consider the vector cross product a kind of "bastard" operator that could quickly mess up an otherwise beautiful equation, but using the cross product matrix instead I found some of the elegance in notation can be regained, and overall I found the cross product matrix much more useful (for the engineering work I do; your mileage may of course vary).
 
  • Like
Likes vanhees71
  • #11
To answer very roughly and with hand waving, this is how I see it, in the big picture.

Given two vectors, you can either be interested in properties these vectors have relative to each other (for example, the projection of one onto the direction of the other), or in properties of the geometrical object they span, such as its area and orientation.

For the former kind of properties the scalar product is usually involved, whereas for the latter the cross product is usually involved.

Now, two vectors span a parallelogram, which is a two-dimensional object. In ##\mathbb{R}^3##, however, neglecting its shape, every oriented parallelogram can be described by a vector perpendicular to it. The direction of the vector describes the orientation of the parallelogram, and the length represents the area.

This works only in ##\mathbb{R}^3##, since that is the only case where the number of dimensions (x, y and z) coincides with the number of basis planes (xy, yz, zx).

In ##\mathbb{R}^4## you have (x, y, z, t) vs. (xy, yz, zt, tx, xz, ty), so vectors are not suitable for representing two-dimensional objects.
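This dimension count can be tabulated in a couple of lines (a throwaway Python sketch of the argument above): the number of coordinate planes in ##\mathbb{R}^n## is ##\binom{n}{2}##, and it equals ##n## only for ##n=3##.

```python
from math import comb

# axes vs. independent coordinate planes in R^n; they match only for n = 3
for n in range(2, 6):
    print(n, comb(n, 2), n == comb(n, 2))
# 2 1 False
# 3 3 True
# 4 6 False
# 5 10 False
```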

PS. Sorry for my english. DS.
 
Last edited:
  • Like
Likes Trying2Learn
