# Proof of the Cross Product

Hey guys, can you help me out with this one? I want to prove that the cross product $$|A||B| \sin(\theta)$$ is equal to its component form,
$$= \langle a_2b_3-a_3b_2,\; a_3b_1-a_1b_3,\; a_1b_2-a_2b_1 \rangle .$$

A thing I find annoying is that most books just say "we define the dot product in component form as ..." or "we define the cross product in component form as ...". What nonsense; who's to say what it's defined as? The equal sign demands a REASON why, not just a definition. Stewart provided a good geometric proof of why this is true in component form for the dot product. Here is what he has for the cross product:

$$\begin{multline*} a \times b = (a_1i+a_2j+a_3k) \times (b_1i+b_2j+b_3k) \\ = a_1b_1i \times i + a_1b_2i \times j + a_1b_3i \times k \\ + a_2b_1 j \times i + a_2b_2 j \times j + a_2b_3 j \times k \\ + a_3b_1k \times i + a_3b_2k \times j + a_3b_3k \times k \\ = a_1b_2k+a_1b_3(-j)+a_2b_1(-k)+a_2b_3i+a_3b_1j+a_3b_2(-i) \\ =(a_2b_3-a_3b_2)i+(a_3b_1-a_1b_3)j+(a_1b_2-a_2b_1)k \end{multline*}$$

O.K., this is fine by me, but there is one problem: we ASSUMED that the distributive law holds for $$(a_1i+a_2j+a_3k) \times (b_1i+b_2j+b_3k)$$

So to make me happy, I would like to see a proof of why the distributive law holds for this.

I've seen a few people prove it by proving A x (B+D) = (A x B) + (A x D). This is fine; I don't see any problem with that. But when they go about proving it, they use the determinant method for components, and then that is using the very method you are setting out to prove! That does not seem right...
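As a quick numerical sanity check (not a proof), one can verify with numpy (an assumed tool, not part of this thread) that the component formula produces a vector whose length is |A||B| sin(θ) and which is perpendicular to both inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# The component formula under discussion
cross = np.array([a[1]*b[2] - a[2]*b[1],
                  a[2]*b[0] - a[0]*b[2],
                  a[0]*b[1] - a[1]*b[0]])

# Recover theta from the dot product
cos_t = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
sin_t = np.sqrt(1.0 - cos_t**2)

# |a x b| = |a||b| sin(theta), and a x b is perpendicular to a and b
assert np.isclose(np.linalg.norm(cross),
                  np.linalg.norm(a) * np.linalg.norm(b) * sin_t)
assert np.isclose(cross @ a, 0.0) and np.isclose(cross @ b, 0.0)
```

Of course this only checks the claim at random points; the thread below is about why it holds in general.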

dextercioby
Homework Helper
Why do you think the determinant method is faulty...?

$$\vec{a}\times\vec{b}=:\left |\begin{array}{ccc} \vec{i} & \vec{j} & \vec{k} \\ a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \end{array} \right |$$

helps you prove distributivity.

Daniel.
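For what it's worth, that formal determinant can be expanded symbolically to recover exactly the components quoted in the first post. A sketch using sympy (an assumed tool, not mentioned in the thread), treating i, j, k as formal symbols:

```python
import sympy as sp

# Formal basis symbols in the first row of the determinant
i, j, k = sp.symbols('i j k')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1:4 b1:4')

det = sp.Matrix([[i, j, k],
                 [a1, a2, a3],
                 [b1, b2, b3]]).det()

# The coefficient of each basis symbol is the corresponding component
comps = [sp.expand(det).coeff(s) for s in (i, j, k)]
expected = [a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1]
assert all(sp.expand(c - e) == 0 for c, e in zip(comps, expected))
```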

I don't; that's not what I'm saying. I'm asking to see a proof of the distributive law without using the determinant. If you use the determinant, you're using the result of what you're trying to prove in its very proof! You see, the determinant gives you a result that is consistent with the cross product ASSUMING you can apply the distributive law. The entire proof of the cross product is based on this assumption, and it is the REASON why we use the determinant. So we have to find a way of proving the distributive law WITHOUT using the determinant... perhaps a sort of geometric proof.

dextercioby
Homework Helper
Nope, that determinant is the definition of the cross product for two Euclidean vectors. Period.

Here's another approach using Lie algebras.

So you have a 3D vector space spanned by $\left\{\vec{e}_{i}\right\}^{i=\overline{1,3}}$.

I define the abstract product

$$\vec{e}_{i} \circ \vec{e}_{j} =: \epsilon_{ijk} \vec{e}_{k}$$

and I can prove that it structures the vector space into a Lie algebra.

So I'll take three vectors and, by denoting the abstract Lie product $\circ$ by $\times$, I'll prove distributivity.

$$\vec{a}\times\left(\vec{b}+\vec{c}\right) =a_{i} \left(b_{j}+c_{j}\right) \vec{e}_{i} \times \vec{e}_{j} =a_{i}b_{j}\epsilon_{ijk}\vec{e}_{k}+a_{i} c_{j}\epsilon_{ijk}\vec{e}_{k}$$

and using $$\vec{a}\times\vec{b}=\epsilon_{ijk}a_{i}b_{j}\vec{e}_{k}$$

I get the desired result.

Daniel.
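Daniel's $\epsilon_{ijk}$ definition can be spot-checked numerically; distributivity then follows from the bilinearity of the summed expression. A sketch with numpy (assumed, not part of the thread):

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def lie_cross(a, b):
    # (a x b)_k = eps_{ijk} a_i b_j, summing over i and j
    return np.einsum('ijk,i,j->k', eps, a, b)

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

# Distributivity comes for free from bilinearity in a and b
assert np.allclose(lie_cross(a, b + c), lie_cross(a, b) + lie_cross(a, c))
# and the product agrees with the usual cross product
assert np.allclose(lie_cross(a, b), np.cross(a, b))
```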

Last edited:
dextercioby
Homework Helper
Incidentally, this Lie algebra, with $\left\{\vec{e}_{i}\right\}^{i=\overline{1,3}}$ as basis and the Lie product

$$\vec{e}_{i} \circ_{\mbox{Lie}} \vec{e}_{j}= \epsilon_{ijk} \vec{e}_{k}$$

is isomorphic to the $so(3)$ algebra, which is isomorphic to the $su(2)$ algebra, which is isomorphic to the angular momentum algebra in QM.

Daniel.

Gokul43201
Staff Emeritus
Science Advisor
Gold Member

cyrusabdollahi said:
A thing I find annoying is that most books just say "we define the dot product in component form as ..." or "we define the cross product in component form as ...". What nonsense; who's to say what it's defined as?

This is hardly nonsense. The book is only telling you what the mathematical community in general believes is the definition of a thing. Knowing the definition, you or I, or anyone else, can say "what it's defined as".

cyrusabdollahi said:
The equal sign demands a REASON why, not just a definition.

If it is a definition, it needs no justification whatsoever, other than perhaps to answer the question "but why is this useful?"

cyrusabdollahi said:
Stewart provided a good geometric proof of why this is true in component form for the dot product. Here is what he has for the cross product:

$$\begin{multline*} a \times b = (a_1i+a_2j+a_3k) \times (b_1i+b_2j+b_3k) \\ = a_1b_1i \times i + a_1b_2i \times j + a_1b_3i \times k \\ + a_2b_1 j \times i + a_2b_2 j \times j + a_2b_3 j \times k \\ + a_3b_1k \times i + a_3b_2k \times j + a_3b_3k \times k \\ = a_1b_2k+a_1b_3(-j)+a_2b_1(-k)+a_2b_3i+a_3b_1j+a_3b_2(-i) \\ =(a_2b_3-a_3b_2)i+(a_3b_1-a_1b_3)j+(a_1b_2-a_2b_1)k \end{multline*}$$

O.K., this is fine by me, but there is one problem: we ASSUMED that the distributive law holds for $$(a_1i+a_2j+a_3k) \times (b_1i+b_2j+b_3k)$$

So to make me happy, I would like to see a proof of why the distributive law holds for this.

It's trivial to prove that A x (B+D) = (A x B) + (A x D) when A, B, and D lie along one or more of three orthogonal directions; this is all you need to ensure the correctness of the line from Stewart that you had a problem with. The proof lies simply in the geometric construction of the terms on the LHS and RHS as per the "geometric" definition of the cross product.

Notice that what you are asking for here is that it be proved that alternative definitions of a thing are in fact defining the same thing. That is a fair gripe. The definition itself is something that needs no proof.

Last edited:
HallsofIvy
Homework Helper
Cyrusabdollahi started by asserting that the cross product of two vectors A and B is defined as |A||B| sin(θ), where θ is the angle between A and B.
He then asked for a proof that that is equal to the "determinant" definition.

If you want to take the "determinant" as the definition, fine. Then prove that that is equal to |A||B| sin(θ)!

dextercioby
Homework Helper
I can prove that

$$|\vec{a}\times\vec{b}|=|\vec{a}||\vec{b}|\sin\theta=\sqrt{\left\langle \left |\begin{array}{ccc} \vec{i} & \vec{j} & \vec{k} \\ a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \end{array} \right |,\left |\begin{array}{ccc} \vec{i} & \vec{j} & \vec{k} \\ a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \end{array} \right | \right\rangle}$$

without any problem.

Daniel.

Gokul43201
Staff Emeritus
Gold Member
HallsofIvy said:
Cyrusabdollahi started by asserting that the cross product of two vectors A and B is defined as |A||B| sin(θ), where θ is the angle between A and B.
He then asked for a proof that that is equal to the "determinant" definition.

If you want to take the "determinant" as the definition, fine. Then prove that that is equal to |A||B| sin(θ)!
I'll take this as addressed to me.

I was reacting to this part of cyrus' post: "The equal sign demands a REASON why, not just a definition." A definition does not require a reason why. Both definitions of the cross product are equivalent, and the proof has already been provided (partly by cyrus and the rest by me).

You're the man, robphy! That's exactly the REASONING behind my question.
To Gokul:
I know that it is a definition, but think about what I'm saying. Let's say I make up a new thing called the cyrus product. :-) And I tell you by definition that this is how you compute the cyrus product. Surely you would ask me, why did you choose this definition, and why are the two sides equal? The problem I had with the componentwise proof was that, in using the determinant as the cross product, we assumed LINEARITY, i.e. that the cross product distributes over addition, to arrive at our final answer. It was a small subtlety that I never really paid attention to until now.

Thanks to all for your help!

dextercioby
Homework Helper
cyrusabdollahi said:
You're the man, robphy! That's exactly the REASONING behind my question.
To Gokul:
I know that it is a definition, but think about what I'm saying. Let's say I make up a new thing called the cyrus product. :-)

Okay.

cyrusabdollahi said:
And I tell you by definition that this is how you compute the cyrus product.

Okay.

cyrusabdollahi said:
Surely you would ask me, why did you choose this definition, and why are the two sides equal?

Unless there is a "cyrus product" already defined in the mathematical/physical literature, I have no right and no reason to do that. If it is defined already, I can ask you to prove the equivalence of the two statements. One definition implies/should imply the other, and vice versa. You'd have to prove the equivalence, else nobody will accept your definition.

It's as simple as that, believe me. I reckon you have serious problems in understanding the meaning of "definition". To you the traditional sign $:=$ (I prefer and am accustomed to another version, $=:$) doesn't say too much. Let me exemplify with something common:

Before you go into relatively complicated matters such as exterior Lie algebras, a physics/mathematics student is taught two definitions of the CROSS product of two 3D Euclidean vectors.

The first one is operational: you give a mathematical expression for the modulus/magnitude of the resulting vector (*) and a practical way to get its direction and sense (using the right-hand corkscrew rule).
This is the first one you learn.

The second one requires itsy-bitsy linear algebra. You see the vectors as elements of 3D Euclidean space; you're free to choose the traditional orthonormal Cartesian basis and then, once you've defined the elements, simply write down the determinant using the symbol $:=$.

Both definitions are for the same mathematical notion: the cross product of two Euclidean vectors.

The questions which should be raised are as follows:

1. Why do we have two?
2. In what respect are they different?
3. Are they logically equivalent?

I won't post the answers; I hope the idea of this message will lead you to them.

cyrusabdollahi said:
Thanks to all for your help!

You're welcome.

Daniel.

(*) pseudovector actually in the general case, but for the connected component of $O(3)$ (i.e. $SO(3)$ ) it is a genuine vector.

Last edited:
Let's say I make up a new thing called the cyrus product. :-) And I tell you by definition that this is how you compute the cyrus product. Surely you would ask me, why did you choose this definition, and why are the two sides equal?

If it is a definition, it needs no justification whatsoever, other than perhaps to answer the question "but why is this useful ?"

Well dex, here is a similar example of what I mean. Some books just "define" the dot product as axbx+ayby+azbz and say it is equal to |A||B|cos(theta), and they do this by taking the product (ax, ay, az) . (bx, by, bz) and expanding using the distributive property. But we have not even proved that the dot product is distributive yet! Meanwhile, Stewart uses a triangle and the law of cosines to very neatly show why it is axbx+ayby+azbz. It's not a matter of DEFINING it; there is a proof behind the definition. Now I can do something similar for the cross product using the paper robphy presented. As for your explanation, it's a bit above my head. I have taken some linear algebra, but I have not seen anything similar to your work before.

dextercioby
Homework Helper
It is distributive by one of the axioms of Euclidean space. A Euclidean space is an example of a pre-Hilbert space. Specifically, it is a linear space over $\mathbb{R}$ endowed with a bilinear map usually denoted by $\langle \cdot , \cdot\rangle$. Bilinearity is with respect to vector addition.

Ergo, the scalar product in a Euclidean space is bilinear by definition.

Daniel.

cyrusabdollahi said:
Some books just "define" the dot product as axbx+ayby+azbz and say it is equal to |A||B|cos(theta), and they do this by taking the product (ax, ay, az) . (bx, by, bz) and expanding using the distributive property. But we have not even proved that the dot product is distributive yet!

When you define the dot product of two vectors (ax,ay,az) and (bx,by,bz) to be axbx+ayby+azbz, distributivity is implied in the definition itself, because it reduces the dot product to multiplication and addition of scalar components, which are obviously distributive. From this definition, one can easily prove that the dot product is equal to |A||B|cos(theta).

Similarly, if we define the cross product in the determinant form, it is obviously distributive.

Also, using the identity |AxB|^2 = |A|^2|B|^2 - (A.B)^2, it can easily be proved that |AxB| = |A||B|sin(theta).
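The identity in question is the Lagrange identity, |AxB|^2 = |A|^2|B|^2 - (A.B)^2. A quick numerical spot-check (with numpy, an assumed tool):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(3), rng.standard_normal(3)

c = np.cross(a, b)
# Lagrange identity: |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
lhs = c @ c
rhs = (a @ a) * (b @ b) - (a @ b) ** 2
assert np.isclose(lhs, rhs)

# which gives |a x b| = |a||b| sin(theta), since a . b = |a||b| cos(theta)
cos_t = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sqrt(1 - cos_t**2))
```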

because it converts dot product into scalar multiplication of components which is obviously distributive

Not true. It IS in component form, but remember, being in component form actually involves THREE vectors: e1, e2 and e3. So it is NOT obviously distributive. The distributive property can be shown via a geometric construction.

Hurkyl
Staff Emeritus
Gold Member
"This has gone on just long enough!" -- Homer Simpson

One does not prove definitions. Period.¹ (You know I'm serious: I used both bold and italic font!)

Going back to the original question: a direct proof that the cross product is distributive depends on what you take the definition of the cross product to be.

If you take, as a definition, that:

(a, b, c) x (d, e, f) := (bf - ce, cd - af, ae - bd)

then the distributive law is fairly straightforward to prove (although tedious).
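That tedious-but-straightforward componentwise check can be delegated to a computer algebra system. A sketch with sympy (an assumed tool, not part of the thread):

```python
import sympy as sp

a1, a2, a3, b1, b2, b3, d1, d2, d3 = sp.symbols('a1:4 b1:4 d1:4')

def cross(u, v):
    # the componentwise definition quoted above
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

A = (a1, a2, a3)
B = (b1, b2, b3)
D = (d1, d2, d3)

lhs = cross(A, tuple(x + y for x, y in zip(B, D)))            # A x (B + D)
rhs = tuple(x + y for x, y in zip(cross(A, B), cross(A, D)))  # A x B + A x D
assert all(sp.expand(l - r) == 0 for l, r in zip(lhs, rhs))
```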

Someone familiar with axiomatic thinking would prefer a more concise definition: the cross product is defined to be the "most general" antisymmetric bilinear function on 3-vectors satisfying these identities:

i x j = k
j x k = i
k x i = j

In this definition, distributivity is utterly trivial, because it's part of the definition.

Even for those not accustomed to this train of thought, it is still useful as motivation, maybe by showing what one would like to be able to do, or maybe as motivation for adopting the previous definition.

All of the same applies to the dot product as well (with the appropriate dot product facts replacing the cross product facts).

¹ Although one does not prove definitions, one might need to prove that a definition is well-defined. A common pattern is to prove that something exists, and then to name that thing via a definition.

cyrusabdollahi said:
Not true, it IS in component form, but remember, being in component form actually converts it into THREE vectors!, e1 e2 and e3. So it is NOT obviously distributive. The distributive property can be shown via a geometric construction.

What do you mean by "component form converts it into three vectors"?

You want me to show the proof? Then here it is:
We have three vectors A = (ax, ay, az), B = (bx, by, bz) and C = (cx, cy, cz).
Now A.(B+C) = (ax, ay, az).(bx+cx, by+cy, bz+cz)
= ax(bx+cx) + ay(by+cy) + az(bz+cz)
= axbx + ayby + azbz + axcx + aycy + azcz
= A.B + A.C
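The same algebra can be verified symbolically, exactly as in the expansion above (a sketch with sympy, assumed):

```python
import sympy as sp

ax, ay, az, bx, by, bz, cx, cy, cz = sp.symbols('ax ay az bx by bz cx cy cz')
A = sp.Matrix([ax, ay, az])
B = sp.Matrix([bx, by, bz])
C = sp.Matrix([cx, cy, cz])

# A . (B + C) = A . B + A . C
diff = sp.expand(A.dot(B + C) - (A.dot(B) + A.dot(C)))
assert diff == 0
```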

What do you mean by component form converts it into three vectors???

axi + ayj + azk

That is the sum of THREE vectors, broken down into component form (i, j, k are vectors).

LoL sorry Hurkyl.

Yes, that's my point, you got it.
¹ Although one does not prove definitions, one might need to prove that a definition is well-defined. A common pattern is to prove that something exists, and then to name that thing via a definition.

I wanted to prove it exists and then name it via a definition, and the paper that robphy shared does just that. Anyway, I'm satisfied with the paper's answer. Let's kill this thread before you guys kill me!

Hurkyl
Staff Emeritus
Gold Member
I wanted to prove it exists

No, I'm pretty sure that's not what you meant. Something about whose existence one might worry is:

f(p/q) = q/p    (p, q > 0)

because, in order for this to be a function, you have to make sure that different representatives of the same rational number give you the same value. (this one exists)

Another is:

f(n) is the number such that n * f(n) = 1

because there might be numbers n for which there does not exist a unique number m such that n * m = 1 (either no such number, or multiple such numbers; for example, look at n = 0).

However, there's nothing to worry about in the definition:

(a, b, c) x (d, e, f) := (bf - ce, cd - af, ae - bd)

There is no worry about the same vector having different representatives, there is no worry that the results of the operations might be undefined for a vector.

You've been worried about distributivity this whole thread. If you were really looking for a proof that the above function is well-defined, then you've set loose the biggest red herring ever. :tongue:

mathwonk
Homework Helper
well, not to quibble, but the statement you say you want to prove is false: a scalar |A||B| sin(t) cannot equal a vector given by those components.

it seems you are confusing a vector with its length. i.e. the cross product of A and B is a vector of length |A||B| sin(t), with direction normal to both A and B, forming a right-hand coordinate system in the order A, B, AxB.

so you seem to be asking for a proof that this geometric definition of the cross product is the same as the determinant one, and maybe why one chooses one or the other of them?

well before choosing a definition, ask yourself what a cross product is supposed to do.

i.e. why do you want a concept of cross product? your answer might be: well, i want a quick way of constructing a vector that is perpendicular to two other vectors. then you could use the fact that a determinant is zero when it has two equal rows, and you may be led to the determinant definition. i.e. if you take the determinant of the matrix having i, j, k in the first row, and a1, a2, a3 and b1, b2, b3 as the other two rows, then by the expansion rule for three dimensional determinants you will get a vector that dots to zero with both A and B.

so in that case the determinant definition is really the more natural one.

now you may want also to then relate the cross product to the other two vectors, and compute its length in terms of their lengths.

then you can just check the equation |AxB|^2 = |A|^2 |B|^2 sin^2(t) = |A|^2|B|^2 - (A.B)^2, if you really want to.

notice this also gives you a way to compute the area of the parallelogram spanned by A and B, as |AxB|, without using sines, but it's ugly.
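Both of mathwonk's points, that the determinant definition dots to zero with its inputs and that it computes the parallelogram area with no sines, can be spot-checked numerically (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = rng.standard_normal(3), rng.standard_normal(3)

c = np.cross(a, b)  # the determinant/component definition

# a determinant with a repeated row vanishes, so c dots to zero with a and b
assert np.isclose(c @ a, 0.0) and np.isclose(c @ b, 0.0)

# parallelogram area as |a x b|, computed without any sines
area_sq = (a @ a) * (b @ b) - (a @ b) ** 2
assert np.isclose(c @ c, area_sq)
```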