Why this form of the vector product?

In summary, the cross product is computed as a pseudodeterminant rather than a true determinant, because the top row consists of unit vectors while the second and third rows consist of the coordinates of the two vectors being multiplied.
  • #1
davidge
Hi. This week I was investigating why differential forms exist, why they are anti-symmetric, and why the Jacobian appears when expressing the volume in a different coordinate system. This was just fantastic! I found all the connections between these topics. And I found that all of these things are due to the fact that the volume element (the same works for an area element) is defined through what is called the triple product, namely ##(A \times B) \cdot C## for vectors ##A, B## and ##C##.

But now I wonder: why is the vector product defined in terms of a determinant in the first place? I thought at first that it had to do with the way basis vectors transform between coordinate systems, but I noticed that it doesn't matter, because covectors are defined in such a way that the inner product is left invariant anyway. So why is the vector product defined that way?
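As a quick illustration, here is a small numpy sketch (the vectors are arbitrary examples) checking that the triple product ##(A \times B) \cdot C## equals the determinant of the matrix whose rows are ##A##, ##B## and ##C##, i.e. the signed volume of the parallelepiped they span:

```python
import numpy as np

# Arbitrary example vectors; any three vectors in R^3 would do.
A = np.array([1.0, 2.0, 0.5])
B = np.array([-1.0, 0.0, 3.0])
C = np.array([2.0, 1.0, 1.0])

# Triple product (A x B) . C ...
triple = np.dot(np.cross(A, B), C)

# ... equals the determinant of the matrix with rows A, B, C,
# i.e. the signed volume of the parallelepiped they span.
det = np.linalg.det(np.array([A, B, C]))

print(triple, det)              # same value
assert np.isclose(triple, det)
```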
 
  • #3
jedishrfu said:
I think I see what this article says. I actually considered this before:

Suppose we want to form a vector out of two other vectors. We also want that vector to be orthogonal to the plane containing the other two vectors. We would discover that the determinant of their components mnemonically gives us the third vector we want, by trying this on the most trivial (and perhaps most fundamental) case:

##(1,0,0) \times (0,1,0) = (0,0,1)##.

But how can we be sure that this procedure is correct for any pair of vectors (in any dimension)?
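For concreteness, here is a small sketch of the component formula that the determinant mnemonic encodes, checked on the trivial basis case and against numpy's built-in cross product for arbitrary vectors:

```python
import numpy as np

def cross_via_cofactors(u, v):
    """Expand the symbolic 3x3 'determinant' along its top row of unit vectors."""
    return np.array([
        u[1] * v[2] - u[2] * v[1],   # coefficient of e1
        u[2] * v[0] - u[0] * v[2],   # coefficient of e2
        u[0] * v[1] - u[1] * v[0],   # coefficient of e3
    ])

# The trivial case from the post: e1 x e2 = e3.
print(cross_via_cofactors([1, 0, 0], [0, 1, 0]))   # [0 0 1]

# And it agrees with numpy's cross product for arbitrary vectors.
u, v = np.random.rand(3), np.random.rand(3)
assert np.allclose(cross_via_cofactors(u, v), np.cross(u, v))
```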
 
  • #4
The key is that the basis unit vectors must be orthogonal for the determinant method to apply.
 
  • #5
jedishrfu said:
The key is that the basis unit vectors must be orthogonal for the determinant method to apply.
I don't understand. The spherical polar basis vectors aren't orthogonal to each other and we still have a determinant.
 
  • #6
davidge said:
I don't understand. The spherical polar basis vectors aren't orthogonal to each other and we still have a determinant.

I thought they were an orthonormal set.
 
  • #8
davidge said:
But now I wonder: why is the vector product defined in terms of a determinant in the first place?
This isn't something I've ever lost much sleep over, wondering why this definition is as it is.

However, technically speaking, the cross product (or vector product) isn't really a determinant -- it's called a pseudodeterminant, because the top row consists of unit vectors, while the second and third rows consist of the coordinates of the two vectors making up the cross product.

Another definition of the cross product is ##\vec u \times \vec v = \left( |\vec u| |\vec v| \sin(\theta) \right) \vec w##, where ##\vec w## is a unit vector that is orthogonal to both ##\vec u## and ##\vec v##. I don't know who came up with the idea of a pseudodeterminant that produces the coordinates of the vector that is normal to both of the other vectors, but whatever, the formula does what it's supposed to.
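A quick numerical sanity check (with arbitrary example vectors) that the pseudodeterminant formula and the ##|\vec u| |\vec v| \sin(\theta)## definition agree, and that the result is orthogonal to both factors:

```python
import numpy as np

u = np.array([2.0, -1.0, 0.5])
v = np.array([0.3, 4.0, 1.0])

w = np.cross(u, v)   # the pseudodeterminant-style formula

# Magnitude: |u x v| = |u| |v| sin(theta).
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)
assert np.isclose(np.linalg.norm(w),
                  np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta))

# Direction: w is orthogonal to both u and v.
assert np.isclose(np.dot(w, u), 0.0)
assert np.isclose(np.dot(w, v), 0.0)
```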
davidge said:
The spherical polar basis vectors aren't orthogonal to each other and we still have a determinant.
I'm pretty sure the spherical basis vectors are all orthogonal. In fact, they are an orthonormal set. See the definitions of the basis vectors here -- https://en.wikipedia.org/wiki/Spherical_basis#basis_definition.
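As a quick check, here is a small sketch (using the physics convention, with polar angle ##\theta## and azimuth ##\phi##) verifying that the normalized spherical basis vectors are orthonormal at an arbitrary point:

```python
import numpy as np

theta, phi = 0.7, 2.1   # an arbitrary point (polar angle, azimuth)

# Unit basis vectors: differentiate the position r*(sin t cos p, sin t sin p, cos t)
# with respect to r, theta, phi and normalize.
e_r     = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),  np.cos(theta)])
e_theta = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
e_phi   = np.array([-np.sin(phi),                np.cos(phi),                  0.0])

basis = np.array([e_r, e_theta, e_phi])
# The matrix of pairwise dot products is the identity: an orthonormal set.
assert np.allclose(basis @ basis.T, np.eye(3))
```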

davidge said:
But how can we be sure that this procedure is correct for any pair of vectors (in any dimension)?
It's not applicable in just any dimension. For the usual cross product, the vectors involved are three-dimensional. I've heard there are extensions to some higher dimensional spaces, such as quaternions and octonions, but this isn't something I've ever studied.
 
  • #9
davidge said:
But now I wonder: why is the vector product defined in terms of a determinant in the first place?
As is always the case with "why" questions: it depends on what you will accept as an answer.

From a historical point of view we have to mention Graßmann who was indeed driven by geometric considerations like generalizing the one-dimensional length of a vector.
From a geometric point of view you can find an interesting treatise about it on the first pages here: https://arxiv.org/pdf/1205.5935.pdf.
From an algebraic point of view, one might refer to the cross product as being the multiplication in the three dimensional simple Lie algebra.
From a topological point of view, one might consider simplicial complexes and corresponding boundary operators.
From a physicist's point of view, we will probably have to mention the right hand rule.
For a differential geometer, you already mentioned the connection to differential forms, which brings us back to the geometric treatment I cited and to Hermann Graßmann, which closes the circle.

So the answer to why it is as it is depends on which approach you prefer and consider "natural".
 
  • #10
Mark44 said:
I'm pretty sure the spherical basis vectors are all orthogonal. In fact, they are an orthonormal set
Indeed. I'm sorry, but I messed things up earlier.
Mark44 said:
However, technically speaking, the cross product (or vector product) isn't really a determinant -- it's called a pseudodeterminant, because the top row consists of unit vectors, while the second and third rows consist of the coordinates of the two vectors making up the cross product.
Yeah.
Mark44 said:
It's not applicable in just any dimension. For the usual cross product, the vectors involved are three-dimensional.
by "in any dimension" in post #1, I meant one carries over any dimension the determinant (Jacobian determinant).
Nevertheless, I think one can prove by induction that in any dimension, using the components of the canonical basis vectors ##(1,0,...,0), (0,1,...,0), ..., (0,0,...,1)## in the determinant, one obtains the remaining basis vector, i.e. it's as if we were displacing the ##1## one slot to the right.
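To make that concrete, here is a small sketch of one standard generalization, the ##(n-1)##-ary product defined by ##w \cdot x = \det(v_1, \dots, v_{n-1}, x)##; feeding in the first ##n-1## canonical basis vectors indeed returns the remaining one:

```python
import numpy as np

def generalized_cross(vectors):
    """(n-1)-ary cross product in R^n: component i is det(v_1, ..., v_{n-1}, e_i)."""
    vectors = np.asarray(vectors, dtype=float)
    n = vectors.shape[1]
    return np.array([np.linalg.det(np.vstack([vectors, np.eye(n)[i]]))
                     for i in range(n)])

# In R^3 this is the usual cross product: e1 x e2 = e3.
print(generalized_cross([[1, 0, 0], [0, 1, 0]]))   # [0. 0. 1.]

# In R^4, the first three canonical basis vectors produce the fourth.
e = np.eye(4)
assert np.allclose(generalized_cross(e[:3]), e[3])
```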

Now, if we accept that the canonical basis I mentioned above is the most fundamental one, because it arises naturally from Euclidean space and the Cartesian coordinates can be written as functions of any curvilinear coordinates, e.g. ##x = x(r, \theta), y = y(r, \theta)## (derivatives with respect to the coordinate directions give the basis vectors), then it's clear why we should carry the vector product, defined that way, over to other coordinate systems.
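As a concrete example of that statement, in plane polar coordinates the derivatives of ##x = r\cos\theta, y = r\sin\theta## with respect to ##r## and ##\theta## give the basis vectors, and the determinant of the resulting Jacobian gives the familiar area-element factor ##r##; a minimal sketch:

```python
import numpy as np

r, theta = 2.0, 0.6   # an arbitrary point

# Coordinate basis vectors: derivatives of (x, y) = (r cos t, r sin t).
e_r     = np.array([np.cos(theta),      np.sin(theta)])       # d(x, y)/dr
e_theta = np.array([-r * np.sin(theta), r * np.cos(theta)])   # d(x, y)/dtheta

# They are orthogonal (though e_theta is not unit length) ...
assert np.isclose(np.dot(e_r, e_theta), 0.0)

# ... and the Jacobian determinant reproduces the area element dA = r dr dtheta.
J = np.array([e_r, e_theta]).T
assert np.isclose(np.linalg.det(J), r)
```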
Mark44 said:
I don't know who came up with the idea of a pseudodeterminant that produces the coordinates of the vector that is normal to both of the other vectors
This raises another question: why is the determinant defined in the way we know? :biggrin:
Note that I'm not trying to fall into an endless loop of asking why things are the way they are; I'm just trying to understand what I said in post #1.
 
  • #11
fresh_42 said:
As is always the case with "why" questions: it depends on what you will accept as an answer.

From a historical point of view we have to mention Graßmann who was indeed driven by geometric considerations like generalizing the one-dimensional length of a vector.
From a geometric point of view you can find an interesting treatise about it on the first pages here: https://arxiv.org/pdf/1205.5935.pdf.
From an algebraic point of view, one might refer to the cross product as being the multiplication in the three dimensional simple Lie algebra.
From a topological point of view, one might consider simplicial complexes and corresponding boundary operators.
From a physicist's point of view, we will probably have to mention the right hand rule.
For a differential geometer, you already mentioned the connection to differential forms, which brings us back to the geometric treatment I cited and to Hermann Graßmann, which closes the circle.

So the answer to why it is as it is depends on which approach you prefer and consider "natural".
Thanks. This is a helpful reply.
 
  • #12
The why is perhaps rooted in Cramer and the rule he discovered for solving a system of equations:

https://en.wikipedia.org/wiki/Cramer's_rule
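For reference, a minimal sketch of Cramer's rule on an arbitrary 2x2 system: each unknown is a ratio of determinants, with the right-hand side replacing the corresponding column:

```python
import numpy as np

# An arbitrary 2x2 system A x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Cramer's rule: x_i = det(A with column i replaced by b) / det(A).
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    x[i] = np.linalg.det(A_i) / np.linalg.det(A)

assert np.allclose(x, np.linalg.solve(A, b))
print(x)   # [1. 3.]
```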

However, there are earlier uses found in the math of other cultures as described in the HISTORY section of the wikipedia article and quoted below:

History
Historically, determinants were used long before matrices: originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, 2 × 2 determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz.[19][20][21][22]

In Japan, Seki Takakazu (関 孝和) is credited with the discovery of the resultant and the determinant (at first in 1683, the complete version no later than 1710). In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrence law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions.[19] Laplace (1772)[23][24] gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied it to questions of elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word determinant (Laplace had used resultant), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word determinant in its present sense,[25][26] summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's.[19][27] With him begins the theory in its generality.

The next important figure was Jacobi[20] (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle's Journal for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.[28][29]

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
 
  • #13
Here's a further reference on the history of determinants:

http://www-groups.dcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html

and Muir's book:

http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF
 
  • #14
I will take a look at these links. Thanks!
 

1. Why is the vector product also known as the cross product?

The vector product is called the cross product because it is written with the cross symbol, as in ##\vec u \times \vec v##, in contrast with the dot used for the scalar (dot) product. The resulting vector is also perpendicular to both of the original vectors.

2. How is the vector product different from the scalar product?

The vector product results in a vector quantity, while the scalar product results in a scalar quantity. Additionally, the vector product is calculated using the sine of the angle between the vectors, while the scalar product is calculated using the cosine of the angle.

3. What is the geometric interpretation of the vector product?

The magnitude of the vector product is the magnitude of one vector multiplied by the magnitude of the other vector, multiplied by the sine of the angle between them. The resulting vector is perpendicular to both original vectors and has a magnitude equal to the area of the parallelogram formed by the two vectors.

4. Why is the vector product useful in physics and engineering?

The vector product is useful in physics and engineering because it allows us to calculate the direction and magnitude of a resulting force or torque in systems with multiple forces acting simultaneously. This is particularly useful in calculating the motion of objects in three-dimensional space.

5. How is the vector product related to the right-hand rule?

The vector product is related to the right-hand rule: if you point the fingers of your right hand in the direction of the first vector and curl them toward the second vector, your thumb points in the direction of the resulting vector, which is perpendicular to both original vectors. This is a useful tool for visualizing the direction of the resulting vector in three dimensions.
