Understanding the Cross Product of Vectors: Perpendicular Properties Explained

welle
Why does the cross product of two vectors produce a vector which is perpendicular to the plane in which the original two lie? (Whenever I go to look it up, it is already assumed that it is perpendicular.)
 
yes, how do you prove it?
 
Well, you could do
a X b . a
and
a X b . b

and you would see that both are zero for generic a and b.
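A quick numerical spot-check of that claim (a minimal sketch in Python with NumPy; the two vectors are arbitrary examples):

import numpy as np

# two arbitrary, non-parallel example vectors
a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])

c = np.cross(a, b)

# both dot products vanish (up to floating-point error),
# so a x b is orthogonal to both a and b
print(np.dot(c, a))  # 0.0
print(np.dot(c, b))  # 0.0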
 
I don't quite understand how that proves that the resulting vector is perpendicular and not at any other angle.
 
That's true, but you are talking about the dot product while my question was about the cross product (vector product). A x B = C.
If vectors A and B lie in a plane, why should the resulting vector C be perpendicular to that plane?
 
Because (\vec{a} \times \vec{b}) \cdot \vec{a} = 0 = (\vec{a} \times \vec{b}) \cdot \vec{b}.
 
What definition of cross-product are you using?

A perfectly good definition is:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v and directed according to the "right hand rule".
 
I tried taking the dot product of v x u with v as Ambitwistor and NateTG suggested and got it equal to v^(2) u sin(theta) cos(theta), and when I used the component method I still couldn't get it to zero, although I probably did it wrong. Can someone show me how to get it to zero? (I apologize for my misunderstanding.)
 
welle,
What I know is that vector multiplication was developed so as to make it useful for different physical and mathematical applications. It could even have been a.b = a^(1/2) x b^(1/3).
But this isn't useful!


As far as vector multiplication is concerned, we know that the resulting 'thing' should have a direction too. But what direction should it be given? So we search for a unique direction, and the choice of the perpendicular direction looks natural; moreover, it can be used in different applications. So there.

Tell me if I'm wrong, because I'm still searching for its history.
 
  • #10
What definition of cross-product are you using?
Just to muddy the waters even more, I'll add Spivak's general definition:

If v_1, \ldots, v_{n-1} \in \mathbb{R}^n and \varphi is defined by

\varphi(w) = \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ w \end{pmatrix}

then \varphi is a 1-form over \mathbb{R}^n and [I assume by the Riesz Rep. Theorem] there is a unique z \in \mathbb{R}^n such that the inner product

\langle w,z\rangle = \varphi(w) = \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ w \end{pmatrix}

This z is denoted v_1\times \cdots \times v_{n-1}.


Spivak finishes: "It is uncommon in mathematics to have a 'product' that depends on more than two factors. In the case of two vectors v,w\in \mathbb{R}^3, we obtain a more conventional looking product, v\times w \in \mathbb{R}^3. For this reason it is sometimes maintained that the cross product can be defined only in \mathbb{R}^3."

I only think this definition is interesting because I was not aware that the cross product could be generalized to more (or less!) than two vectors (always n-1 vectors in n dimensions) until I saw this.
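To make the general definition concrete, here is a minimal Python/NumPy sketch (my own illustration, not Spivak's) that computes the cross product of n-1 vectors in \mathbb{R}^n by cofactor expansion along the missing last row:

import numpy as np

def generalized_cross(*vectors):
    # cross product of n-1 vectors in R^n: the unique z with
    # <w, z> = det(v_1; ...; v_{n-1}; w) for every w
    vs = np.array(vectors, dtype=float)   # (n-1) x n matrix of row vectors
    n = vs.shape[1]
    assert vs.shape[0] == n - 1
    z = np.empty(n)
    for i in range(n):
        # cofactor of the entry w_i in the (hypothetical) last row
        minor = np.delete(vs, i, axis=1)
        z[i] = (-1) ** (n - 1 + i) * np.linalg.det(minor)
    return z

print(generalized_cross([1, 0, 0], [0, 1, 0]))  # [0. 0. 1.], i.e. i x j = k in R^3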
 
  • #11
I think that what welle was trying to get at is how the mathematics of it works, not just showing that it works. What might be helpful in this case is a derivation of the cross product, provided that it isn't too complicated and doesn't require knowledge that we don't have (I believe that it originally came from quaternions somehow, but I don't know how quaternions work, either).

For example, the non-trig version of the dot product can be derived from applying the cosine definition. You can also think about it in the opposite direction: given constant magnitudes, the more different (greater the angle between) two vectors are, the smaller the dot product will be (which is related to the fact that (a+c)(a-c) < a^2), and this coincides with the fact that the cosine decreases as the vectors become more different (have a greater angle between them).
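For reference, that derivation runs in one line from the law of cosines (a standard sketch, assuming the Euclidean norm):

|\vec{a}-\vec{b}|^2 = |\vec{a}|^2 + |\vec{b}|^2 - 2|\vec{a}||\vec{b}|\cos\theta

\sum_i (a_i - b_i)^2 = \sum_i a_i^2 + \sum_i b_i^2 - 2\sum_i a_i b_i

Comparing the two right-hand sides gives \sum_i a_i b_i = |\vec{a}||\vec{b}|\cos\theta, the non-trig form.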

But I'm at a loss for similar explanations for the cross product.
 
  • #12
Basically, it would be extremely good to know how, when, and by whom the cross product was cooked up, and for what reasons.

Does anyone know?
 
  • #13

Here's a good explanation of the historical development of the cross product.


HTH

Sol.
 
  • #14
Originally posted by Welle
I tried taking the dot product of v x u with v as Ambitwistor and NateTG suggested and got it equal to v^(2) u sin(theta) cos(theta), and when I used the component method I still couldn't get it to zero, although I probably did it wrong. Can someone show me how to get it to zero? (I apologize for my misunderstanding.)

Once again, what is your definition of "cross product"??
One definition of cross product is:
(ai + bj + ck) x (ui + vj + wk) = (bw - cv)i - (aw - cu)j + (av - bu)k, which is what Ambitwistor was doing. Using that definition, the proof is a tedious but direct calculation.

Another definition is the one I gave before:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v and directed according to the "right hand rule".

Because you write that, for the dot product of u with uxv, you got
"v^(2) u sin(theta) cos(theta)" you appear to be using the second definition but that includes "perpendicular to both u and v" by definition.
 
  • #15
Originally posted by HallsofIvy
Once again, what is your definition of "cross product"??
One definition of cross product is:
(ai + bj + ck) x (ui + vj + wk) = (bw - cv)i - (aw - cu)j + (av - bu)k, which is what Ambitwistor was doing. Using that definition, the proof is a tedious but direct calculation.

Another definition is the one I gave before:
The cross product of vectors u and v is the vector with length equal to the length of u times the length of v times the sine of the angle between u and v, perpendicular to both u and v and directed according to the "right hand rule".

Because you write that, for the dot product of u with uxv, you got
"v^(2) u sin(theta) cos(theta)" you appear to be using the second definition but that includes "perpendicular to both u and v" by definition.

Part of his problem might be that he's using theta there twice for two different angles, one of which might not be 90° while the other must be 90°, and hence the expression is zero.
 
  • #16
Originally posted by welle
Why does the cross product of two vectors produce a vector which is perpendicular to the plane in which the original two lie? (Whenever I go to look it up, it is already assumed that it is perpendicular.)

welle,

I think the answer to your question about the cross product is this: the cross product is a definition. You can't prove a definition, because it's a definition. Similarly, you cannot prove the result of a dot product. Cross products and dot products simply describe what is happening physically.

Most textbooks are unsettling because they do not explain "why" the cross product is defined the way it is. Nearly all of mathematics was developed to solve physical or financial problems. To be honest, I don't think anyone really knows "why", except that it became convention to "define" the cross product the way it is. You have to study the history of the topic.

You can derive the expressions that lead to the result that is "defined" as the cross product by considering a force acting at a distance on an object (torque). When you consider the trigonometry and the definition of torque, what you get is the magnitude of the resulting "vector", and you define its direction (because it makes the most physical sense) to be normal to the plane containing the force and position vectors.

The resulting vector defined by the cross product is sometimes called a "pseudovector" because it is not a result of an agent such as force. For basically the same reason, the centripetal acceleration of a mass times the mass is a "pseudoforce" because it is a force that is not the result of a physical entity.

I can send you the derivations. Let me know.

Hope this helps,

JDH
 
  • #17
Force is a vector, but a vector isn't a force. It's an ordered n-tuple of numbers. The cross product is the operator on the exterior algebra. It so happens that the degree two part of the exterior algebra is isomorphic (as a vector space) to the degree 1 part, which is R^3 for R^3. It is not correct to say it is a pseudo-vector for physical 'force' reasons. That might be one model for you to think of about this, but that isn't in general true. Perhaps if you realized that by no means 'nearly all' of mathematics was devised to solve physical or financial problems...
 
  • #18
I think that I am still a bit confused. How come a product of two vectors can be in one case a scalar and in another a vector? Does the product depend on the nature of the vectors? By which I mean that the product of force and displacement vectors will produce a scalar while that of force and moment arm a vector. Then does it make sense to take a dot product or a cross product of any two vectors, and if it doesn't, can one really use both of those techniques to prove something as Ambitwistor did? Mathematically, Ambitwistor's proof is perfect and leaves no doubt, yet I still don't understand whether the proof required a definition, an assumption, or nothing but pure algebra and geometry. If it was an assumption, and had purely physical grounds, is there no way of proving this assumption?

Originally posted by J. D. Heine,
The resulting vector defined by the cross product is sometimes called a "pseudovector" because it is not a result of an agent such as force. For basically the same reason, the centripetal acceleration of a mass times the mass is a "pseudoforce" because it is a force that is not the result of a physical entity.

Is the direction of a "pseudoforce" dependent on other forces while the force itself is not?
 
  • #19
Originally posted by welle
I think that I am still a bit confused. How come a product of two vectors can be in one case a scalar and in another a vector? Does the product depend on the nature of the vectors? By which I mean that the product of force and displacement vectors will produce a scalar while that of force and moment arm a vector. Then does it make sense to take a dot product or a cross product of any two vectors, and if it doesn't, can one really use both of those techniques to prove something as Ambitwistor did? Mathematically, Ambitwistor's proof is perfect and leaves no doubt, yet I still don't understand whether the proof required a definition, an assumption, or nothing but pure algebra and geometry. If it was an assumption, and had purely physical grounds, is there no way of proving this assumption?

Originally posted by J. D. Heine,


Is the direction of a "pseudoforce" dependent on other forces while the force itself is not?

No, whether the result of a product is a number or a vector does not depend upon the vectors; it depends upon the type of product. There are three different "products" or types of multiplication generally defined for vectors:
scalar product: multiply a number by a vector, and the result is a vector.
dot product: multiply two vectors, and the result is a number.
cross product: multiply two vectors, and the result is a vector.

Each has different applications. It might happen that the correct formula for calculating such things as angular momentum involves the cross product while the formula for "work" involves a dot product.
That depends on the formula, not just on what kind of vector is used.
 
  • #20
Perhaps annoyingly, 'scalar product' is also widely used in the sense of the dot product (because the answer is a scalar, presumably).

It is also called the inner product, and the cross or vector product is called the outer product too.
 
  • #21
Just to make things worse: the term "outer product" is also occasionally used to mean the "tensor" product of two vectors:
The outer product of vectors written <a, b, c>, <d, e, f> in some coordinate system is the tensor with components
[ad ae af]
[bd be bf]
[cd ce cf]
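For what it's worth, NumPy's outer computes exactly that component array (a small illustration with arbitrary numeric vectors):

import numpy as np

u = np.array([1, 2, 3])  # plays the role of <a, b, c>
v = np.array([4, 5, 6])  # plays the role of <d, e, f>

print(np.outer(u, v))
# [[ 4  5  6]
#  [ 8 10 12]
#  [12 15 18]]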
 
  • #22
Let me throw some more geometric intuition into this.

I'm going to argue that (1) the cross product defines an area in space, (2) you get the perpendicular vector only after you've defined what "perpendicular" means, (3) you get perpendiculars when you define an inner or dot product.

(1)

We have an intuitive notion that a square is the product of two adjacent sides, a length side and a breadth side. After all, A = l x b. Expressing things with vectors means that we keep track of the directions of things like sides as well as their magnitudes. So it would be nice if we could speak of multiplying the unit vectors i and j together, to get the unit square in the xy-plane (where we're writing vectors as xi + yj + ...).

It would be even better if we could do that for any two vectors a and b. That is, we would like to have a calculation of the area of the parallelogram with a and b as sides.

The cross product does just that:

If a = a1 i + a2 j and b = b1 i + b2 j

then

a x b = a1 b1 (i x i) + a1 b2 (i x j) + a2 b1 (j x i) + a2 b2 (j x j) (1)

= a1 b2 (i x j) + a2 b1 (j x i) (2)

= (a1 b2 - a2 b1) (i x j) (3)

We get (2) because i x i = 0, etc., which makes sense because a parallelogram with both adjacent sides the same is squashed flat, i.e. has zero area. We get (3) because j x i = -(i x j): this is a vector idea of a parallelogram, which lets you keep track of which is the top face and which is the bottom.

You can verify that (a1 b2 - a2 b1) actually is the area of the parallelogram.

The excellent thing about this, is we now have calculations for a concept of "vector area" which keeps track of its orientation in space; just as "vector" keeps track of the direction of an interval in space.

Even better: the trick still works if, instead of a rectangular Cartesian coordinate system, we use an oblique system with "unit vectors", i.e. basis vectors, which need not be at right angles. In fact, angles haven't even got a mention so far. We can do without angles completely, in fact, but if we do then we also have to get along without a calculation for scalar area: all we get out of the cross-multiplication is a set of "area components" such as (a1 b2 - a2 b1).

And better still: the trick works in any number of dimensions. In 4, 5, 6, ..., n dimensions, vector areas are well defined. The standard name for the result of the cross-multiplication giving a vector area is "bivector"; that's what people always call it when working in n dimensions.

But wait, there's more: we can go on from vector areas in 3,...n-space, to vector volumes, 4-volumes and so on: trivectors, quadvectors,...,n-vectors. The calculation of the components is done by writing out the vector components:

a1 a2 a3 ... an

b1 b2 b3 ... bn

c1 c2 c3 ... cn

... etc

and then calculating all the determinants of the square matrices you get by selecting as many columns as there are rows. The scalar determinant values are the n-vector components.
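A minimal Python sketch of that recipe (my own illustration): stack the k vectors as the rows of a k x n matrix, then take the determinant of every k x k submatrix obtained by choosing k of the n columns:

import numpy as np
from itertools import combinations

def multivector_components(*vectors):
    # components of the k-vector spanned by k vectors in R^n:
    # one determinant per choice of k of the n columns
    m = np.array(vectors, dtype=float)  # k x n matrix
    k, n = m.shape
    return {cols: np.linalg.det(m[:, list(cols)])
            for cols in combinations(range(n), k)}

# bivector i ^ j in R^3: only the (x, y) component is non-zero
print(multivector_components([1, 0, 0], [0, 1, 0]))
# {(0, 1): 1.0, (0, 2): 0.0, (1, 2): 0.0}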

In 3-space, we get some extra structure for bivectors. There are just as many unit bivectors as there are unit vectors, so there's a natural identification:

i x j = k

j x k = i

k x i = j

That is (and here is the crux of the matter): for every vector area A (i x j) in the xy-plane, there's a vector A k along the z-axis; etc.

(2)

But, is there a special reason why we make the identification above? Why not

i x j = k - i - j

for example, apart from simplicity? That is, why insist that a vector area orientation corresponds to a particular vector direction?

Well the reason is, if we do the trivector calculation for volume, we are just calculating the determinant of

a1 a2 a3

b1 b2 b3

c1 c2 c3

(There's only one component, because there's only one way a volume can be oriented in 3-space.)

The determinant works out as

(a1 b2 - a2 b1) c3 + (a2 b3 - a3 b2) c1 + (a3 b1 - a1 b3) c2

in other words, it's a dot product, (a x b) . c.
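That identity, det(a; b; c) = (a x b) . c, is easy to confirm symbolically (a quick SymPy check, my own addition):

import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))

det = sp.Matrix.vstack(a.T, b.T, c.T).det()
# the difference simplifies to zero identically
print(sp.simplify(det - a.cross(b).dot(c)))  # 0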

Now if we suspend our identification for a moment, and give the vectors which correspond to bivectors new names:

i x j = k*

j x k = i*

k x i = j*

Then we have a sensible orthonormal system if we can say that

i . i* = 1

j . j* = 1

k . k* = 1

and every other combination such as i . j* = 0.

In an orthonormal system we can state that the volume spanned by the unit vectors is 1. Equivalently we can say: every vector in the xy-plane is perpendicular to the unit bivector in the xy-plane: (i x j) . (a i + b j) = k* . (a i + b j) = 0.

If we can say that, then we have an orthonormal or Euclidean vector basis. Otherwise we have an oblique or stretched basis.

(3)

Now here I've been defining "perpendicular" as a relation between bivectors and vectors. But if we make the specific identification:

i* = i

j* = j

k* = k

then our statement above becomes: every vector in the xy-plane is perpendicular to the vector cross product (k = k* = i x j) of the unit vectors in the xy-plane. Which is exactly what we wanted to establish in the first place.

The identification we make between bivectors and vectors defines the metric that we're imposing on the vector space. When we choose

i* = i

j* = j

k* = k

we are saying that

i . i = j . j = k . k = 1

i . j = 0 etc

and that means we're choosing the Euclidean or Pythagorean metric, in which

a . b = a1 b1 + a2 b2 + a3 b3

In summary then:

(1) Cross multiplication defines a vector area or bivector.

(2) In 3-space we can identify bivectors with vectors, specifying a metric.

(3) The metric in which unit bivectors are identified with unit vectors is the metric which makes the unit vectors perpendicular to each other.

Now I realize that one can just say: do the calculation in components and you'll see that (a x b) . a = 0, i.e. they're perpendicular. But taking the "high road" ...

(a) makes the idea of cross product as "vector area" explicit.

(b) opens up the concept of vector areas, vector volumes, etc, which in advanced work are multivector or wedge products, the natural extension of the cross product.

(c) allows for advanced work where we transform into oblique and stretched coordinate systems, where we have to keep track of the metric.
 
  • #23
Originally posted by matt grime
It is also called the inner product, and the cross or vector product is called the outer product too.

i think i saw selfAdjoint saying this in some thread somewhere too. that's two people who have called the cross product an outer product.

so perhaps i am wrong.

in my head, it works like this:

an inner product takes two vectors and makes a scalar

a vector product takes two vectors and makes a new vector

an outer product takes two vectors and makes a tensor

so in my world, you cannot call a cross product an outer product (there is a relationship between the cross product and the exterior product, and the exterior product is constructed from the outer product, but as i said to selfAdjoint, the cross product is not isomorphic to the exterior product)

and it seems to make sense: the inner product somehow takes you to an "inner space" (the scalar field) and the outer product takes you to a larger "outer space", the space of tensors

but perhaps i should stop harping on about what i think the definitions should be, and find out what they actually are
 
  • #24
lethe writes:

i think i saw selfAdjoint saying this in some thread somewhere too. that's two people who have called the cross product an outer product.

so perhaps i am wrong.

in my head, it works like this:

an inner product takes two vectors and makes a scalar

a vector product takes two vectors and makes a new vector

an outer product takes two vectors and makes a tensor

I'm afraid the terminology depends on context.

an inner product takes two vectors and makes a scalar

The only ambiguity here is that sometimes only certain kinds of vector should have inner products formed with other kinds. Really, vectors should only have inner products formed with vectors from the dual space. When you have a metric defined, you can happily convert vectors to duals, so anything goes.

The inner product is also called a dot product or contraction.

a vector product takes two vectors and makes a new vector

Yes, in 3-space with orthonormal basis vectors, i.e. rectangular coordinates.

In other cases, we take two vectors and form a bivector. This is generally called a wedge product or exterior product. However, the Clifford algebra community, following Grassmann, call this an outer product.

an outer product takes two vectors and makes a tensor

Mathematicians usually call that a tensor product. However, in computing usage, e.g. Wolfram Mathematica, it's called an outer product - or at least the operation on components is.
 
  • #25
Originally posted by saski
lethe writes:



I'm afraid the terminology depends on context.
well, i would like to establish whether this terminology as i have it is correct (in context).

The only ambiguity here is that sometimes only certain kinds of vector should have inner products formed with other kinds. Really, vectors should only have inner products formed with vectors from the dual space. When you have a metric defined, you can happily convert vectors to duals, so anything goes.
nonsense. the inner product on a vector space exists between vectors and vectors (not dual vectors). there is no ambiguity here.

it is possible to induce an inner product on the dual space, given an inner product on a vector space, but still, this inner product is between dual vectors and dual vectors.

sometimes the contraction of an (m,n) rank tensor (with n>0) with a vector is called an inner product in differential geometry, but this terminology is misleading; this is certainly not an inner product space.

The inner product is also called a dot product or contraction.
hmm... the dot product is a special case of an inner product, but they are not the same thing. contraction is also not the same thing. both of those are never applied to, for example, Hilbert spaces. Hilbert spaces have an inner product, not contraction (since there are no indices), and not a dot product (unless it is a finite dimensional Hilbert space)


Yes, in 3-space with orthonormal basis vectors, i.e. rectangular coordinates.
nonsense. vector product has nothing to do with orthogonality or basis vectors. consider, for example, the Lie bracket in a Lie algebra. or the matrix multiplication in the algebra of matrices. what have these to do with orthogonal bases? nothing at all. remember, a vector product is a product which is a vector. just like it sounds.


In other cases, we take two vectors and form a bivector. This is generally called a wedge product or exterior product. However, the Clifford algebra community, following Grassmann, call this an outer product.
ahh... now that answers my question. the vector product in a Clifford algebra is sometimes called an outer product.

but you seem a little confused... why would the Clifford algebra people follow Grassmann? Grassmann invented the Grassmann algebra (also known as the exterior algebra; you know, the one with the wedge product). i think the Clifford people must be following Clifford, not Grassmann... eh?

do you mean to imply that Grassmann also called his wedge product an outer product?

Mathematicians usually call that a tensor product. However, in computing usage, e.g. Wolfram Mathematica, it's called an outer product - or at least the operation on components is.
you bet. i usually call it a tensor product too. that is a good name for a product that produces a tensor (just like vector product is a good name for a product that produces a vector)

but i am aware that this is also called an outer product, and that name makes sense to me.

so, you agree with me, and would like to add that Clifford product is sometimes also called outer product. i don't like it, but i will take your word for it (or at least check in a book.)
 
  • #26
lethe writes:

nonsense. the inner product on a vector space exists between vectors and vectors (not dual vectors). there is no ambiguity here.

...

sometimes the contraction of a (m,n) rank tensor (with n>0) with a vector is called an inner product in differential geometry, but this terminology is misleading; this is certainly not an inner product space.

You're right, I'm wrong: see

http://courses.cs.vt.edu/~cs5485/notes/ch1-2/linalg.pdf

hmm... the dot product is a special case of an inner product, but they are not the same thing.

Sure.

contraction is also not the same thing. both of those are never applied to, for example, Hilbert spaces. Hilbert spaces have an inner product, not contraction (since there are no indices)[...]

But I can quote MTW: "Contraction seals off two of the tensor's slots, reducing the rank by two." That has to include the contraction of the tensor product of a contravariant and a covariant vector. If we're not to call that a contraction, what should we call it?

As for Hilbert spaces, see e.g.

http://farside.ph.utexas.edu/teaching/qm/fundamental/node9.html

"Mathematicians term <B|A> the inner product of a bra and a ket." Bras are defined as linear functionals operating on kets, i.e. dual vectors. So <B|A> is a contraction by MTW's definition. Howvever, there's a norm on the Hilbert space allowing one to convert between bras and kets, so <B|A> is also the inner product of |A> and |B>.

nonsense. vector product has nothing to do with orthogonality or basis vectors.

Consider:

w_i = \epsilon_{ijk} u^j v^k

It becomes a vector product only by raising the index on w, which requires a metric, i.e. definition of orthogonality. Or you can write the exterior product:

u^j v^k - v^j u^k

but you need a Hodge star to make a vector out of it, and you need the metric for the Hodge star.

the Lie bracket in a Lie algebra. or the matrix multiplication in the algebra of matrices. what have these to do with orthogonal bases?

The Lie bracket expresses non-commutation of Lie derivative operators; it's not a simple matter of alternating tensor products, and it's certainly not the same thing as a vector product.

And matrix multiplication is entirely the multiplication of row with column vectors. Again, not a vector product.
I stand by what I said.

i think the Clifford people must be following Clifford, not Grassmann... eh?

do you mean to imply that Grassmann also called his wedge product an outer product?

en.wikipedia.org/wiki/Hermann_Grassmann

"Following an idea of his father, as Grassmann himself quotes in the A1, he invented a new type of product, the exterior product which he calls also combinatorial product (In German: äußeres Produkt or kombinatorisches Produkt)."

I think the usage "outer" caught on as a synonym for "exterior". That's how "outer" is used in e.g.

www.mrao.cam.ac.uk/~clifford/introduction/intro/node5.html

However, I've permitted confusion between that exterior product and the Clifford product.

What the Clifford people do is make a grand basis containing scalar unity, the unit vectors, unit bivectors, unit trivectors, etc; the whole graded sequence of exterior products. Then they define a super-product on the span of that, which they call the "associative" OR "geometric" OR "Clifford" product.

Thanks - I'll watch my expression more carefully.
 
  • #27
Where did you get the impression that orthogonality and metrics are linked, as in the phrase

'requires a metric, ie the definition of orthogonality'

It is perfectly possible to define exterior products without reference to a metric. Indeed, I cannot think of one off hand that uses a metric, but then I'm an algebraist.

Also, I would rather that quote about bra and ket said that physicists and some applied mathematicians use the terminology. The identification of the Hilbert space and its dual implies reflexivity, which is not true for a general Banach space (or, Hilbert spaces I believe) where we also have linear functionals, though I don't believe you want to see the generalization of Riesz's Representation Theorem (actually, it isn't really a generalization, so much as the Hilbert space version you're used to is a specialization).
 
  • #28
matt grime writes:

Where did you get the impression that orthogonality and metrics are linked, as in the phrase

'requires a metric, ie the definition of orthogonality'

It is perfectly possible to define exterior products without reference to a metric. Indeed, I cannot think of one off hand that uses a metric, but then I'm an algebraist.

You're right, exterior products don't require a metric. However, inner products do, even if the metric is just the boring old sum-of-squares norm.

Here's my reasoning.

(1) How do you measure a vector (in a vector space V), i.e. resolve it to its components v^i? Answer: with a set of linear functions, e_i: V -> R, in other words, covectors. No metric is required.

(2) But here's a harder one. Can you resolve a vector using only the basis vectors?

Obviously, you take one basis vector at a time, project it and drop a perpendicular to it from the vector you want to resolve.

Obvious, that is, in rectangular coordinates - but incorrect in oblique coordinates. In oblique coordinates, the line you have to drop is not a perpendicular to the basis vector, but rather a line in the (n-1)-plane spanned by all the other basis vectors.

In fact what you're doing there is constructing the wedge product of the other basis vectors, so as to use it as a covector. You're calculating

( v ^ e2 ^ ... en) / ( e1 ^ e2 ^ ... en)

or in components in some arbitrary basis,

det( v, e2, ... en) / det( e1, e2, ... en) [Cramer's Rule]

to get your first component, and so on.

With an orthogonal basis, you don't need to go to that trouble, because each basis vector corresponds directly to its covector, courtesy of the metric.

The calculation for the first component then becomes

(v ^ j ^ k ^ ...) / (i ^ j ^ k ^ ...)

= (v ^ j ^ k ^ ...) / 1

= (v . i)

as we all learned in school.

Let's try it out.

v = (3, 4)

Basis vectors are

a = (1, 0)
b = (1, 1)

(v . a, v . b) = (3, 7)

But

(v ^ b)/(a ^ b) = -1
(a ^ v)/(a ^ b) = 4

(-1) a + 4 b = (3, 4)

Constructing covectors as (n-1)-vectors works, regardless of metric; making inner products with the basis vectors only works in orthogonal systems.
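A numerical check of the example above (a small NumPy sketch; in 2-d the wedge ratio is just a ratio of determinants):

import numpy as np

def wedge2(u, v):
    # the single bivector component in 2-d: a 2 x 2 determinant
    return u[0] * v[1] - u[1] * v[0]

v = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])  # basis vector
b = np.array([1.0, 1.0])  # oblique basis vector

alpha = wedge2(v, b) / wedge2(a, b)  # -1.0
beta = wedge2(a, v) / wedge2(a, b)   #  4.0
print(alpha * a + beta * b)          # [3. 4.], recovering v

# the naive dot products give the wrong components in this oblique basis
print(np.dot(v, a), np.dot(v, b))    # 3.0 7.0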
Also, I would rather that quote about bra and ket said that physicists and some applied mathematicians use the terminology. The identification of the Hilbert space and its dual implies reflexivity, which is not true for a general Banach space (or, Hilbert spaces I believe) where we also have linear functionals, though I don't believe you want to see the generalization of Riesz's Representation Theorem (actually, it isn't really a generalization, so much as the Hilbert space version you're used to is a specialization).

It was news to me, actually, that there is an identification of bras with kets. I don't know QM well; the reference I quoted was the first I've heard of Riesz.
 
  • #29
You absolutely do not need a metric. An inner product is just a non-degenerate positive definite bilinear form. If you are not in characteristic two, then the parallelogram law lets you link a norm and an inner product. I think you are using "metric" in a non-metric-space sense, which is why I was being pernickety about the term 'metric'.

Actually, any basis element 'corresponds' to its dual basis element by definition, independent of 'angles'; these are all defined for arbitrary vector spaces, not just those over characteristic zero. Note we are assuming finite dimensional.

And in answer to the question of resolving without using a metric:

let e_i be a basis and f_i the corresponding dual basis of any finite dimensional vector space; then any vector is:

v= sum over i f_i(v)e_i

no mention of norms.
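Concretely (a small NumPy sketch of that resolution, reusing the oblique basis from earlier in the thread; the dual basis is obtained by inverting the basis matrix, no norm in sight):

import numpy as np

e = np.array([[1.0, 0.0],   # e_1
              [1.0, 1.0]])  # e_2 (rows are the basis vectors)

f = np.linalg.inv(e.T)      # rows are the dual basis functionals f_i
v = np.array([3.0, 4.0])

coeffs = f @ v              # the numbers f_i(v)
print(coeffs)               # [-1.  4.]
print(coeffs @ e)           # [3. 4.], i.e. v = sum_i f_i(v) e_i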

I think there is some confusion between "the dual basis" and the dual basis with respect to some inner product here.

There are more inner products lying around than simply the obvious one defined component-wise, which presumes a choice of basis a priori.

You should always try and work co-ordinate free - a gentleman only takes bases when he has to.

As for QM, usually one thinks about l_2, which has the nice property that

f_y(x) = <x,y> is a linear functional, and every linear functional arises in this way (note: linear in the first factor, conjugate-linear in the second). This is the Riesz representation theorem in its nice form, and it allows an unnatural isomorphism between the Hilbert space and its dual.
 
  • #30
Originally posted by saski

But I can quote MTW: "Contraction seals off two of the tensor's slots, reducing the rank by two." That has to include the contraction of the tensor product of a contravariant and a covariant vector. If we're not to call that a contraction, what should we call it?
contraction refers to what you do when you get rid of a pair of indices in tensor index notation by summing over them.

i have never ever heard this term applied to Hilbert spaces. i believe the reason is that no one uses indices to label the states of their Hilbert space (which would be a very awkward notation indeed if it is not finite dimensional)


"Mathematicians term <B|A> the inner product of a bra and a ket." Bras are defined as linear functionals operating on kets, i.e. dual vectors. So <B|A> is a contraction by MTW's definition. Howvever, t there's a norm on the Hilbert space allowing one to convert between bras and kets, so <B|A> is also the inner product of |A> and |B>.
well, i am not a mathematician, so perhaps i shouldn't speak for them, but as far as i can tell, mathematicians do not use bra ket notation at all, because it is extremely sloppy.

and every math book i know defines an inner product as a positive definite bilinear (or perhaps antilinear in one argument) form. some more physically minded texts allow for more general nondegenerate forms (instead of positive definite)

but the point is, it is an operation on two VECTORS



Consider:

w_i = \epsilon_{ijk} u^j v^k

It becomes a vector product only by raising the index on w, which requires a metric, i.e. definition of orthogonality. Or you can write the exterior product:

u^j v^k - v^j u^k

but you need a Hodge star to make a vector out of it, and you need the metric for the Hodge star.
when you say "metric", do you mean Riemannian metric?

anyway, which point are you trying to prove here? that you need orthogonality to define the cross product? i am a little lost with what you are trying to show here, but i have a strong suspicion that it is wrong.



The Lie bracket expresses non-commutation of Lie derivative operators; it's not a simple matter of alternating tensor products, and it's certainly not the same thing as a vector product.
perhaps you should review the definition of a Lie algebra.

want me to tell you? ok:
firstly, a Lie algebra is an algebra, which means it is a vector space with a vector product.

this vector product is bilinear (as all products must be), but neither commutative nor associative.

perhaps you can tell me your definition of vector product, so we can make sure we both know what the other is talking about. i told you mine: a product which is vector valued. the Lie bracket certainly satisfies this requirement. if you think otherwise, you are just wrong.
And matrix multiplication is entirely the multiplication of row with column vectors. Again, not a vector product.
I stand by what I said.
see above. please tell me your definition of vector product. if you define the vector product to be the cross product in R3, then of course anything else will not be.



However, I've permitted confusion between that exterior product and the Clifford product.

What the Clifford people do is make a grand basis containing scalar unity, the unit vectors, unit bivectors, unit trivectors, etc; the whole graded sequence of exterior products. Then they define a super-product on the span of that, which they call the "associative" OR "geometric" OR "Clifford" product.
yeah, i know what a Clifford algebra is, but thanks.

according to that link, geometric algebra people call the exterior product the outer product. OK, although this is different from your previous answer, i find it more plausible, so i will accept this answer.

and so according to your quote above, Grassmann invented the Grassmann algebra, aka the exterior algebra. this is exactly what i claimed.
 
  • #31
Originally posted by matt grime
- a gentleman only takes bases when he has to.

that is an awesome quote. Physicsforums should have a quote of the day feature. this should be it at least once a month.

now, i was hoping you would respond to my original question above, which boils down to "what exactly is the definition of an outer product, and why is the vector cross product an outer product?"
 
  • #32
Originally posted by lethe

now, i was hoping you would respond to my original question above, which boils down to "what exactly is the definition of an outer product, and why is the vector cross product an outer product?"

mathworld seems to define an outer product as a tensor product (which is what i was claiming), whereas wikipedia thinks that an outer product is the wedge product, which is what someone else (you? i forget now) claimed, and i objected to. it is also what is claimed in the references that saski links.

i guess it can be either, depending on your preference.
 
  • #33
Originally posted by lethe
that is an awesome quote. Physicsforums should have a quote of the day feature. this should be it at least once a month.

now, i was hoping you would respond to my original question above, which boils down to "what exactly is the definition of an outer product, and why is the vector cross product an outer product?"

I'm not sure I can adequately answer that.

In generality, given a finite dimensional vector space V there is the n-fold tensor product, which can safely be thought of as:

let v_i be a basis of V, then a basis of V^{\otimes n} is given by the set of all symbols v_{i_1}\otimes\ldots \otimes v_{i_n}

(I am, slightly contradictorily, being a gentleman here by not defining that in a basis-free way; if you want I can, but it is more confusing for the non-algebraist, trust me)

now the permutation group on n elements acts on the n-fold tensor product by swapping factors in the expression.

the exterior algebra is that sub-space where swapping the elements is the same as multiplying by -1


eg in the two fold tensor, it is the subspace spanned by expressions like u\otimes v - v\otimes u

It is not hard to see that in an r dimensional vector space this is only non-zero if n<=r, and it has dimension r choose n.

The element u \otimes v - v\otimes u is labelled u wedge v (can't recall the tex for it off hand; it's probably \wedge)

The reason why it's so damn easy and confusing at the same time for R^3 is because 3 choose 2 is 3, and the space of bivectors is therefore a 3 dimensional vector space. Different groups label these different things all over the place, but the wedge is usually called the exterior product.


Let's stick to the 3-d case: where does the rule ixj=k come from? Note I'm using x for ease of typesetting; it is the wedge above.

well, the answer is: there is an exact pairing between the first and second exterior algebras


basically, there is a unique element in the third exterior algebra's base:
ixjxk, corresponding to the invariant element i\otimes j\otimes k - j\otimes i\otimes k + j\otimes k\otimes i - i\otimes k\otimes j + k\otimes i\otimes j - k\otimes j\otimes i in the tensor algebra.

so the pairing matches ixj with k and jxk with i and so on.

Is that clear in the slightest?

I think that might answer the question but I'm not sure.

Incidentally the quote is very old, and I can't recall who came up with it. I think the first time I saw it was in some lecture notes by Tom Korner, but it was a quote from someone else.
 
  • #34
Originally posted by matt grime
In generality, given a finite dimensional vector space V there is the n-fold tensor product, which can safely be thought of as:

let v_i be a basis of V, then a basis of \bigotimes^nV is given by the set of all symbols

v_{i_1}\otimes\ldots \otimes v_{i_n}

(I am, slightly contradictorily, being a gentleman here by not defining that in a basis-free way; if you want I can, but it is more confusing for the non-algebraist, trust me)
while i am not myself an algebraist, i do have a fair amount of algebra. i am, for example, familiar with the universal construction of a tensor product. as well as the coordinate definition of a tensor product of vector spaces. so whatever.

mostly, i am just looking to establish what is the correct terminology, and why.
now the permutation group on n elements acts on the n-fold tensor product by swapping factors in the expression.

the exterior algebra is that sub-space where swapping the elements is the same as multiplying by -1
sure.

eg in the two fold tensor, it is the subspace spanned by expressions like u\otimes v - v\otimes u

It is not hard to see that in an r dimensional vector space this is only non-zero if n<=r, and it has dimension r choose n.

The element u\otimes v - v\otimes u is labelled u\wedge v (can't recall tex for it off hand; it's probably \wedge)

The reason why it's so damn easy and confusing at the same time for R^3 is because 3 choose 2 is 3, and the space of bivectors is therefore a 3 dimensional vector space. Different groups label these different things all over the place, but the wedge is usually called the exterior product.
yes, as i understand it, wedge product and exterior product are synonymous.


Let's stick to the 3-d case: where does the rule \imath\times\jmath=k come from? Note I'm using x for ease of typesetting; it is the wedge above.

well, the answer is: there is an exact pairing between the first and second exterior algebras
sure, because \binom{3}{1} and \binom{3}{2} are equal, there is an isomorphism between the two given by the Hodge dual (which depends on the orientation and the inner product we give to the vector space)


basically, there is a unique element in the third exterior algebra's base:
ixjxk corresponding to the invariant element \hat{\imath}\otimes\hat{\jmath} \otimes \hat{k} -\hat{\jmath}\otimes \hat{\imath}\otimes\hat{k} + \hat{\jmath}\otimes\hat{k}\otimes\hat{\imath} -\hat{\imath}\otimes \hat{k}\otimes\hat{\jmath} + \hat{k}\otimes\hat{\imath}\otimes\hat{\jmath} - \hat{k}\otimes\hat{\jmath} \otimes \hat{\imath} in the tensor algebra.
but this exactly illustrates my point of why the exterior algebra is not the cross product algebra. first of all, the exterior algebra is associative, and the cross product algebra is not. it follows that in the cross product algebra,

(\hat{\imath}\times\hat{\jmath})\times\hat{k}=\hat{\imath}\times(\hat{\jmath}\times\hat{k})=0

whereas in the exterior algebra, you get the invariant element e_1\wedge e_2\wedge e_3\neq 0.

if we assign a product on the exterior algebra e_i\times e_j=\star(e_i\wedge e_j), then we get an explicit correspondence with the vector cross product algebra in R3
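As a concrete check of that correspondence (a sketch using the Levi-Civita symbol, assuming the standard Euclidean inner product and orientation):

import numpy as np

# Levi-Civita symbol in R^3
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

def star_wedge(u, v):
    # components of *(u ^ v): eps_ijk u^j v^k
    return np.einsum('ijk,j,k->i', eps, u, v)

u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
print(np.allclose(star_wedge(u, v), np.cross(u, v)))  # True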

so the pairing matches ixj with k and jxk with i and so on.

Is that clear in the slightest?

I think that might answer the question but I'm not sure.

well, this isn't really my question. i know what the exterior algebra is and how it relates to the cross product.

i guess all i really want to know is: what does the term "outer product" mean to you? some people are saying "outer product"="exterior product", i think saski might have even said "outer product = Clifford product (geometric product)", but i guess we ruled that out.

on the other hand, i think "outer product"="tensor product"

i really just wanted you to weigh in with your opinion of what the term outer product should mean.
 
  • #35
Oh, ok an opinion, erm, well i never use the phrase outer product, but would take it to be the exterior product. sorry, i thought you wanted to see where the link with tensors is. sorry for telling you stuff you already know.

as for ixjxk=0 but i^j^k not being zero: well, I'm not making the identification between (R^3, x) as an algebra and \wedge^{\bullet}(R^3) that would be required for that contradiction to make sense. One is a 3 dimensional ungraded algebra, the other is an 8 dimensional graded algebra.
 
  • #36
Originally posted by matt grime
Oh, ok an opinion, erm, well i never use the phrase outer product, but would take it to be the exterior product.
OK. well, that's that then. perhaps i will endeavor to just never use that phrase as well. too much ambiguity.

as for ixjxk=0 but i^j^k not being zero: well, I'm not making the identification between (R^3, x) as an algebra and \wedge^{\bullet}(R^3) that would be required for that contradiction to make sense. One is a 3 dimensional ungraded algebra, the other is an 8 dimensional graded algebra.

right, but there is an identification you can make; i am not quite sure how to make it explicit. i think it is what you were describing here, right?:

Originally posted by matt grime

Let's stick to the 3-d case: where does the rule ixj=k come from? Note I'm using x for ease of typesetting; it is the wedge above.

well, the answer is: there is an exact pairing between the first and second exterior algebras

so if you consider the R3 subspace of \bigwedge \mathbb{R}^3, and identify the space of bivectors with their Hodge duals, then this becomes isomorphic to R3 with the cross product.

i know this is true; is there a nicer way to describe it, though?
 
  • #37
take the two 3-d bits in the exterior algebra, the vectors and 'bivectors', and mod out by the relation v ~ *v (its Hodge star dual, I mean), just considered as a vector space. See if the product descends to the quotient? Which might just be another way of saying what you just said. That seems the slickest way to do it. But that isn't the same as nice.
 
  • #38
I'm not 100% sure of this, but I believe that, historically, vector products (and vectors) actually originated with quaternions.
I'm not telling how quaternions originated (for that, look here http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Hamilton.html), only how vectors came from them.

Quaternions may be taken to have a real (scalar) part and a vector part, so in that notation, they are written like this: a=(a_o,\vec{a}), where \vec{a}=a_1\hat{i}+a_2\hat{j}+a_3\hat{k}.
Then, the product of 2 quaternions can be written like this: ab=(a_ob_o-\vec{a}\cdot\vec{b},a_o\vec{b}+b_o\vec{a}+\vec{a}\times\vec{b}).
From what I read, physicists (and mathematicians) originally used quaternions instead of vectors, but then they noticed that they could just drop the scalar part.
Notice now what we get if we multiply 2 scalar-less quaternions:
ab=(0,\vec{a})(0,\vec{b})=(-\vec{a}\cdot\vec{b},\vec{a}\times\vec{b})
See, the scalar part is just minus the dot product and the vector part is the cross product. And so (I've been told), vectors and their products were born.
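A minimal Python sketch of that product, representing a quaternion as a (scalar, 3-vector) pair as above:

import numpy as np

def qmul(p, q):
    # quaternion product of p = (p0, pv) and q = (q0, qv)
    p0, pv = p
    q0, qv = q
    return (p0 * q0 - np.dot(pv, qv),
            p0 * qv + q0 * pv + np.cross(pv, qv))

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])

s, v = qmul((0.0, a), (0.0, b))
print(s == -np.dot(a, b))              # True: scalar part is minus the dot product
print(np.allclose(v, np.cross(a, b)))  # True: vector part is the cross product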

I've also been told that Maxwell's original work on electromagnetism was done using quaternions. I've been trying really hard to confirm that (looking for a copy of Maxwell's original work), but I haven't had any luck. Can anyone here help me with that?

P.S. To find more about quaternions, look here http://mathworld.wolfram.com/Quaternion.html

*Edit: I had the first link above (about the history of Hamilton and quaternions) wrong, but now it's fixed.
 
  • #39
Vectors were used before Hamilton came up with quaternions. A quaternion is a 4-vector; I'm not sure where this idea that it is a scalar plus a vector came from (how does one even add scalars and vectors?), but you're not the first person to have said it, even in this forum. It is no more a vector plus a scalar than any n-vector is a sum of n scalars. Perhaps it is because people think the i, j, k are the unit vectors in R^3?

Hamilton came up with them in order to find an extension of degree greater than 2 of the real numbers (i.e. a vector space over R with a field structure on it). He attempted first to make a degree-three extension, and it took him a while to figure out that it can't be done (it can be for 2, 4, 8, 16 and no others).
 
  • #40
First, a quaternion is NOT a 4-vector, pretty much like a complex number is not a 2-vector, because, for example, you can't divide two 4-vectors (anyway, I don't think this is the place to discuss that).
I didn't say a quaternion is a vector added to a scalar, and I didn't say either that the quaternion units i, j, k are equal to the vector basis i, j, k.
What I said is that you may look at a quaternion as a pair (scalar, vector) (pretty much like a complex number can be looked at as a pair of real numbers). That is just notation, and it's useful, for example, as it makes it really easy to compute the product of 2 quaternions using the well-known vector products.

It may be right that the story is not real and that the use of that notation originated it; as I said, I'm not 100% sure about it and I haven't had the chance to confirm it with serious sources.

You may want to give a look at this link:
http://www.rtis.com/nat/user/jfullerton/school/math251/cproduct.htm

Among other things, it says that Hamilton was the first to use the word vector, based on the Latin vehere, "to draw".
 
  • #41
Quaternions do form a vector space of dimension 4 over the real numbers; a quaternion is a 4-vector. Quaternions are a (4-d) division algebra (over R). If you wish to say it behaves LIKE a pair (u,v), with u a scalar and v a 3-vector, then fine, but all you are doing is defining a 4-vector there anyway. An n-vector is just an n-tuple of elements of a field with an addition and a scalar multiplication on them.

Just because not every vector space of dimension A has the property that some object X does, does not mean X cannot be an A-dimensional vector space. The set of L^2 functions on the circle is an infinite dimensional vector space, yet an arbitrary infinite dimensional vector space doesn't have an integral defined on it.


Seeing as a+bi+cj+dk is a quaternion, I was assuming, when you spoke of a scalar and a vector part, that the a was the scalar part and bi+cj+dk was the vector; that is where my misunderstanding about adding a scalar to a vector came from. Just notational issues.


A complex number is a 2-vector over R (C is a two dimensional real space that happens to also be a field; it was in attempting to make a 3-dimensional real space that was also a field that Hamilton realized you had to drop the commutativity requirement and make it 4-d).

As every n-dimensional vector space (over R) is abstractly isomorphic to every other n-dimensional vector space (over R) for n in N, one can define a division algebra structure (any element is invertible) on any real 4-dimensional vector space, simply because the quaternionic structure preserves the norm. One may also do this to real vector spaces of dimension 2 (iso to C), 4 (quaternions), 8 (octonions) and 16 (can't remember the name). It might not be very useful to do this.



Hamilton may have been the first to use the word vector; however, their use has been implicit for many centuries. Another instance: formal group theory wasn't developed until approximately the start of the 20th century, but Galois used groups in the 1830s.
 
  • #42
Before I forget: the 16-d ones are called sedenions.

Well, yes, you're right, that's the definition of an n-vector, and according to it a quaternion is indeed a 4-vector; it just has some extra properties, so it's not just a 4-vector (just in case: I know you never said a quaternion was just a 4-vector, I'm just putting that there to leave it clear).

Coming to think of it, since the parallelogram law is so intuitive, geometrical vector addition may be as old as Euclidean geometry, and there's no doubt vectors were somehow implicitly used when analytical geometry was created around the XVII century.
Vector use must also be implicit in Newton's work, which dates from the XVII century too.
But was there a formal theory of vectors before Hamilton?
What about the dot and the cross products?
 
  • #43
As always in mathematical history, it is very hard to decide at what point these abstract, disparate elements came together to form a theory as we now recognize it. I am prepared to state categorically that vectors came before quaternions, and to defend that statement in all its interpretations, since Hamilton came up with quaternions after '10 years' of trying to define a (non-degenerate) multiplication on a 3-d real vector space. St Andrews University has a website about the history of maths that has plenty of details about these things. As a geometer: vector spaces (though they weren't called that) were known before Hamilton; analytic geometry, for instance, predates Hamilton. However, as physical quantities such as velocity, I'd be prepared to accept Hamilton as the father of that interpretation. Switching hats, as an algebraist: he wouldn't have understood our vector spaces over fields of non-zero characteristic, as such fields weren't known to him; all groups to 19th-century mathematicians were at best concrete permutation groups, and fields as we now know them weren't defined until such time as abstract groups were developed. Inner products? Pass.

Octonions are now exciting the interest of theoretical physicists, perhaps in the same way quaternions did in the 19th Century.
 