Exterior algebra and n-dimensional volumes

  • #1
coquelicot
Hello,

In R^3, the area of the parallelogram determined by two vectors u and v is given by the norm of the cross product of u and v. For my research, I need to know whether this can be generalized in the following manner:
Let e_1,...,e_n be the canonical basis of R^n, and let Ext_k be the exterior algebra of rank k over R^n (k<=n). We assume that the basis e_1^...^e_k, e_1^...^e_{k-1}^e_{k+1}, ..., e_2^...^e_{k+1}, ... of Ext_k is oriented positively, and we endow Ext_k with the canonical scalar product with respect to this basis. Now, is the following assertion true: the k-dimensional volume of the parallelotope determined by k vectors u_1,...,u_k is equal to the norm of u_1^...^u_k, computed with respect to the scalar product in the above basis?
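To make the claim concrete, here is a small numerical sketch (Python/NumPy; the helper names are mine). It uses the standard fact that the coordinates of u_1^...^u_k in the above basis are the k x k minors of the matrix whose columns are the u_j, and compares their Euclidean norm with the k-dimensional volume obtained from the Gram determinant:

```python
import itertools
import numpy as np

def wedge_coords(U):
    """Coordinates of u_1 ^ ... ^ u_k in the basis e_{i1} ^ ... ^ e_{ik}, i1 < ... < ik:
    they are the k x k minors of the n x k matrix U whose columns are the u_j."""
    n, k = U.shape
    return np.array([np.linalg.det(U[list(rows), :])
                     for rows in itertools.combinations(range(n), k)])

def k_volume(U):
    """k-dimensional volume of the parallelotope spanned by the columns of U,
    computed from the Gram determinant: sqrt(det(U^T U))."""
    return np.sqrt(np.linalg.det(U.T @ U))

rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))   # three vectors in R^5

# The claim: the norm of u_1 ^ u_2 ^ u_3 equals the 3-volume they span.
print(np.linalg.norm(wedge_coords(U)), k_volume(U))   # the two numbers agree
```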

When trying to prove this theorem, I have come to another theorem that implies it easily (any suggestion will be appreciated). To begin with, consider again the case of R^3, and let A be an orthonormal matrix of R^3. Then it should be clear to a physicist that for two vectors u and v of R^3, Au^Av = det(A) A(u^v) (here ^ is the usual cross product).
Now, with the above notations for the bases, scalar product etc., is the following true:
if u_1,...,u_k are k vectors of R^n and if A is an orthonormal matrix of R^n, then Au_1^...^Au_k = (-1)^n det(A) B(u_1^...^u_k), where B is a rotation matrix of Ext_k with respect to the basis above. Furthermore, if k = n-1, then B = Refl(A), where the columns of Refl(M) are the columns of M in reverse order.

thx.
 
  • #2
I think your general assertion that you want to prove is true. I'm confused by

[tex]Au \wedge Av= (\det A) A(u\wedge v)[/tex]

Could you describe what this means?
 
  • #3
Muphrid said:
I think your general assertion that you want to prove is true. I'm confused by

[tex]Au \wedge Av= (\det A) A(u\wedge v)[/tex]

Could you describe what this means?
Yes. The matrix A has been defined to be orthonormal, hence det(A) = ±1.
u^v is the cross product of u and v, and A(u^v) is the action of A on u^v (the parentheses may be confusing, but they are there to prevent readers from reading this as (Au)^v).
 
  • #4
All right, that's what I thought. Why is the determinant needed then? Isn't the action of a linear operator on a wedge product just the wedge of transformed vectors?
 
  • #5
Try u = e_1, v = e_2, and A = [0 1 0; 1 0 0; 0 0 1] (the matrix that exchanges e_1 and e_2).
 
  • #6
So I get [itex]\underline A(e_1) \wedge \underline A(e_2) = e_2 \wedge e_1 = -e_1 \wedge e_2[/itex].

Multiplying by a factor of [itex]\det(\underline A) = -1[/itex] would reverse the orientation of this object. I'm not clear why you would choose to do this when the object's orientation is already the way it should be?
 
  • #7
Well, let us write it down:
(Ae_1)^(Ae_2) = e_2^e_1 = -e_1^e_2 = -e_3 = -Ae_3 (since A fixes e_3) = -A(e_1^e_2), and not A(e_1^e_2) as you thought.
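For the record, here is the same computation checked numerically (a quick NumPy sketch; the variable names are mine):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])      # exchanges e_1 and e_2, det(A) = -1
e1, e2 = np.eye(3)[0], np.eye(3)[1]

lhs = np.cross(A @ e1, A @ e2)                     # (A e_1) x (A e_2)
rhs = np.linalg.det(A) * (A @ np.cross(e1, e2))    # det(A) * A(e_1 x e_2)
print(lhs, rhs)   # both [0. 0. -1.], i.e. -e_3, while A(e_1 x e_2) itself is +e_3
```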
 
  • #8
Oh, you actually are doing the cross product. I thought that, because you were writing wedges, you were mentioning the cross product just so as not to lose people.

If you're actually going to use cross products in this fashion, I don't see how anything will extend to N dimensions at all. Wedges can be abused to work this way in 3D, but in N dimensions, saying that two vectors wedge to another vector is nonsense. Is it typical in exterior algebra to say [itex]e_1 \wedge e_2 = e_3[/itex] instead of explicitly showing that a duality operation is in play?
 
  • #9
Admittedly, my question is not elementary, but I hope someone will understand it. I am not saying that in n dimensions two vectors wedge to another vector. But the exterior algebra of rank k is a finite-dimensional vector space, hence a scalar product can be defined in an obvious manner given a basis, and so can the norm of a vector. Now, if you use the duality between vectors and wedges x^y in dimension 3 (that is, replace the wedge product with the cross product), then the theorem I am trying to prove/disprove is the exact generalization of the well-known fact that Ax^Ay = det(A) A(x^y) (please pay attention to the definition of the basis that serves to define the scalar product in Ext_k).
 
  • #10
coquelicot said:
I am not saying that in n-dimensions two vectors wedge to another vector.
Ah, but you are! A is a linear operator on vectors, and you are treating your wedge as a vector so that you can apply A.

This is especially confusing because any linear operator A on a vector space naturally extends to a linear operator on the tensor algebra:

[tex]A(a \otimes b \otimes c \otimes \cdots) \mapsto (Aa) \otimes (Ab) \otimes (Ac) \otimes \cdots[/tex]

but this natural extension is clearly not what you intend.

I think you really mean something like, for the wedge of two vectors in three dimensions,

[tex]\star((Av) \wedge (Aw)) = \det(A) A \star(v \wedge w)[/tex]
 
  • #11
Let me make things clear.
1. You are dealing with the second assertion of my first post (true/false)
2. I said "I am not saying that in n-dimensions two vectors wedge to another vector", which means I am not claiming that if u and v are two vectors of R^n with n>3, then one can make sense of u^v AS A VECTOR OF R^n (true/false).
3. What I am saying in my first post is that u_1^...^u_k IS a vector of Ext_k(R^n), and I claimed that possibly Au_1^...^Au_k = (-1)^n det(A) B(u_1^...^u_k), where B is a rotation matrix of Ext_k(R^n), and where the meaning of "rotation matrix" is relative to the canonical scalar product of Ext_k(R^n) in the special basis I specified above. This assertion may be true or false, but it is not ill-posed (true/false).
4. In the last line of your post, what does the asterisk mean?
 
  • #12
It's the Hodge star -- the duality operator between k-forms and (n-k)-forms on an n-dimensional oriented inner product space.
 
  • #13
Thank you. Indeed, a look at the Hodge star operator in Wikipedia (section "Inner product of k-vectors", last line) seems to indicate that my claim is true (or close to being true). But I'm not sure.
 
  • #14
In other words, I personally would never say [itex]\wedge[/itex] represents the cross product in 3D. It's much clearer to say [itex]u\times v=-\star (u \wedge v)[/itex] and keep the difference between the products explicit.

That is why I wrote [itex]\underline A(u\wedge v)=\underline A(u) \wedge \underline A(v)[/itex], for as Hurkyl said, this is correct for the usual definition of a wedge product.
 
  • #15
If I understand correctly, you and Hurkyl are using the same letter "A" to denote the operator B defined by B(u^v) = Au^Av, and this is the reason for part of the discussion above. Of course, it is common practice in mathematics to call by the same name a map that is canonically induced by another one. But what I claimed is much more precise: the actual matrix B is equal to B = det(A) A, if B is written in the basis (e_1^e_2, e_1^e_3, e_2^e_3).
But all of this concerns, after all, the motivation for the theorem I seek, and is not so important. What is important is the general claim in the theorem. It seems that you and Hurkyl agree more or less with it, but I'm not sure. I would, of course, appreciate a link to a text that deals with this matter.
 
  • #16
No, I don't think I agree with what you're saying. A simple example should illustrate.

Let there be an operator [itex]\underline S[/itex] on [itex]\mathbb R^3[/itex] such that

[tex]\begin{align*}
\underline S(e_1) &= s_1 \\
\underline S(e_2) &= s_2 \\
\underline S(e_3) &= s_3
\end{align*}[/tex]

These should all be understood to be vectors. It's clear then that,

[tex]\underline S(e_1) \wedge \underline S(e_2) = s_1 \wedge s_2[/tex]

And similarly for other wedges and combinations of basis vectors. The [itex]e_1 \wedge e_2[/itex] component of [itex]s_1 \wedge s_2[/itex] is not in general the [itex]e_1[/itex] component of [itex]s_1[/itex] times [itex]\det \underline S[/itex]. When you say [itex]\underline B = (\det \underline A)\underline A[/itex], this is how I interpret that statement. Please correct me if you mean something else.
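A quick numerical illustration of this point (a sketch; I take the matrix of the extended operator in the basis e_1^e_2, e_1^e_3, e_2^e_3 to be the matrix of 2x2 minors of S, which is just the coordinate form of S(u)^S(v), and the helper names are mine):

```python
import itertools
import numpy as np

def second_compound(S):
    """Matrix of the extension of S to 2-vectors, in the basis
    e_1 ^ e_2, e_1 ^ e_3, e_2 ^ e_3: its entries are the 2 x 2 minors of S."""
    pairs = list(itertools.combinations(range(S.shape[0]), 2))
    return np.array([[np.linalg.det(S[np.ix_(rows, cols)]) for cols in pairs]
                     for rows in pairs])

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))          # a generic (non-orthogonal) operator

B = second_compound(S)                   # how S acts on e1^e2, e1^e3, e2^e3
print(np.allclose(B, np.linalg.det(S) * S))   # False: B != det(S) * S in general
```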

I think there's a subtle distinction going on that separates our lines of thinking. In classic linear algebra, you have an operator [itex]\underline A[/itex] that operates on a vector, and that's that. Typically, we extend the meaning of this operator such that it can act on elements of a multilinear algebra (tensors or members of an exterior algebra or something else) in the way Hurkyl and I described. It's not that we're deliberately abusing notation when we say [itex]\underline A(u \wedge v) = \underline A(u) \wedge \underline A(v)[/itex]. We consider this part of the definition of how the operator acts on elements of the algebra--it's the same operator, just acting on different inputs.

You're correct to say that an operator on the basis [itex]e_1 \wedge e_2, e_1 \wedge e_3, e_2 \wedge e_3[/itex] should have a different matrix representation than one that acts just on basis 1-vectors, but that doesn't mean the operator underlying the transformation is fundamentally different. This is a big reason I have eschewed matrix representation in this entire post. I take the view that the operator [itex]\underline A[/itex] can act on any element of the algebra, and I think this is the prevailing viewpoint, honestly.
 
  • #17
I agree with you that it's OK to denote the operator and the extended operator by the same letter. But now that we agree on this point, can I ask you a question?
Let A be an orthonormal operator on R^3, and let B be the extension of this operator to Ext_2(R^3) (which, as you pointed out, can be denoted by the same letter A).
Is the matrix of B in the basis (e_1^e_2, e_1^e_3, e_2^e_3) orthonormal with respect to the standard scalar product in this basis, and what is the representation of B in this basis?
More generally, what is the answer to the same question when R^3 is replaced by R^n and Ext_2(R^3) by Ext_k(R^n), with k<n? (I wrote down the basis explicitly in my first post.)
 
  • #18
coquelicot said:
I agree with you that it's OK to denote the operator and the extended operator by the same letter. But now that we agree on this point, can I ask you a question?
Let A be an orthonormal operator on R^3, and let B be the extension of this operator to Ext_2(R^3) (which, as you pointed out, can be denoted by the same letter A).
Is the matrix of B in the basis (e_1^e_2, e_1^e_3, e_2^e_3) orthonormal with respect to the standard scalar product in this basis, and what is the representation of B in this basis?
More generally, what is the answer to the same question when R^3 is replaced by R^n and Ext_2(R^3) by Ext_k(R^n), with k<n? (I wrote down the basis explicitly in my first post.)

Hey coquelicot.

For this, are you talking about the inner product <Bx,By> for x, y in R^3 and some operator B?
 
  • #19
Sorry, I haven't understood your question; can you be more explicit?
 
  • #20
coquelicot said:
Sorry, I haven't understood your question, can you be more explicit?

Basically, if you have an inner product <.,.> and vectors x and y in the untransformed basis, where Bx = B*x (matrix multiplication of the operator B with x) and By = B*y, are you trying to find <Bx,By>, where B is your operator transforming a vector from one basis to another?

Your question was about understanding the orthonormality conditions of the inner product, and I'm asking whether <Bx,By> is your inner product.
 
  • #21
My impression is that you are talking about something that is not the subject of the question I asked. Are you sure you have read my first post and the other posts and replies? I have explicitly stated what the scalar product in Ext_k(R^n) is (namely the canonical one with respect to the basis e_1^...^e_k, ... written out in my first post).
 
  • #22
coquelicot said:
I agree with you that it's OK to denote the operator and the extended operator by the same letter. But now that we agree on this point, can I ask you a question?
Let A be an orthonormal operator on R^3, and let B be the extension of this operator to Ext_2(R^3) (which, as you pointed out, can be denoted by the same letter A).
Is the matrix of B in the basis (e_1^e_2, e_1^e_3, e_2^e_3) orthonormal with respect to the standard scalar product in this basis, and what is the representation of B in this basis?
More generally, what is the answer to the same question when R^3 is replaced by R^n and Ext_2(R^3) by Ext_k(R^n), with k<n? (I wrote down the basis explicitly in my first post.)

Consider a rotation in the xy plane.

[tex]\begin{align*}
\underline A(e_1) &= e_1 \cos \theta + e_2 \sin \theta \\
\underline A(e_2) &= e_2 \cos \theta - e_1 \sin \theta \\
\underline A(e_3) &= e_3
\end{align*}
[/tex]

When acting on basis 2-vectors, we get

[tex]\begin{align*}
\underline A(e_1 \wedge e_2) &= e_1 \wedge e_2 \\
\underline A(e_1 \wedge e_3) &= e_1 \wedge e_3 \cos \theta + e_2 \wedge e_3 \sin \theta \\
\underline A(e_2 \wedge e_3) &= e_2 \wedge e_3 \cos \theta - e_1 \wedge e_3 \sin \theta
\end{align*}
[/tex]

I think I see what you're driving at now. You're making a very, very specific statement about the components of the matrix when the basis is ordered according to certain rules. I can see how that might be useful for computational applications, but it's all just an artifact of needing certain rules for the ordering of the basis.
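Incidentally, the action on basis 2-vectors displayed above can be reproduced numerically from the 2x2 minors of the rotation matrix (a sketch; the minor-based matrix construction and the names are mine):

```python
import itertools
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
A = np.array([[c, -s, 0.],
              [s,  c, 0.],
              [0., 0., 1.]])            # rotation in the xy plane

pairs = list(itertools.combinations(range(3), 2))   # (0,1), (0,2), (1,2)
# Matrix of A on the basis e1^e2, e1^e3, e2^e3: entries are 2 x 2 minors of A.
B = np.array([[np.linalg.det(A[np.ix_(row, col)]) for col in pairs] for row in pairs])
print(np.round(B, 3))
# Numerically equals [[1, 0, 0], [0, cos t, -sin t], [0, sin t, cos t]],
# matching the action on basis 2-vectors listed above.
```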
 
  • #23
Indeed, as I stated it, this theorem depends on a particular choice of the basis of Ext_k(R^n). But let us return to the first claim of my first post. I said that the second claim would easily imply that the k-dimensional volume of the parallelotope determined by k vectors u_1,...,u_k of R^n is equal to the norm of u_1^...^u_k in Ext_k(R^n), which would be very useful for computing this volume. In fact, less is needed to imply this theorem: it would suffice to show that the extension B to Ext_k(R^n) of an orthonormal operator A of R^n is always orthonormal. I believe it is possible to define the scalar product on Ext_k(R^n) in an intrinsic manner, so that the theorem would not seem artificial to you. In fact, this is probably what is treated in the section "Inner product" of the Wikipedia article on the exterior algebra (see also the Hodge star operator in Wikipedia, section "Inner product of k-vectors", last line). But the description in Wikipedia is too succinct for me to be sure.
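For what it's worth, here is a numerical check of the reduction I have in mind (a sketch; it assumes the matrix of the extended operator in the basis of my first post is the matrix of k x k minors, and the helper names are mine): for a random orthonormal A, the extended matrix again satisfies B^T B = I, so it preserves the norm of u_1^...^u_k.

```python
import itertools
import numpy as np

def kth_compound(A, k):
    """Matrix of the extension of A to Ext_k(R^n), in the basis
    e_{i1} ^ ... ^ e_{ik} with i1 < ... < ik: entries are k x k minors of A."""
    subsets = list(itertools.combinations(range(A.shape[0]), k))
    return np.array([[np.linalg.det(A[np.ix_(rows, cols)]) for cols in subsets]
                     for rows in subsets])

rng = np.random.default_rng(2)
n, k = 5, 3
A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthonormal matrix
B = kth_compound(A, k)

# The extended operator is again orthonormal, which is exactly what the
# volume formula needs.
print(np.allclose(B.T @ B, np.eye(B.shape[0])))    # True
```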
 
  • #24
That [itex]a\wedge b\wedge c[/itex] has as its magnitude the volume spanned by the three vectors is safe to say, and it generalizes to arbitrary dimension.

That an orthonormal operator is still orthonormal on k-vectors is also safe, I think.

The stuff on using the dual to define the inner product is traditional... and also silly, in my opinion. It arises because exterior algebraists don't want to define any product other than the wedge, but taking the dual basically requires one. This treats the dual as something fundamental when it isn't.

The elegant solution is to define a "geometric" product. Let [itex]e_i e_j[/itex] denote the geometric product of two orthonormal basis vectors. When i=j, the product is defined to be 1, capturing the properties of the inner product. When i and j are different, the product is anticommutative, reducing to the outer product. This makes scalar products of k-vectors arise naturally, and it shows that the Hodge star operator is just multiplication of a k-vector by the unit N-vector in N-dimensional space under the geometric product.
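To make the two defining rules concrete, here is a minimal sketch (plain Python; the blade representation and names are mine) of the geometric product on basis blades over an orthonormal basis, including the remark that the dual comes from multiplying by the unit pseudoscalar (the sign depends on dimension and convention):

```python
def blade_product(a, b):
    """Geometric product of two basis blades over an orthonormal basis.
    A blade is a tuple of strictly increasing basis indices; returns (sign, blade)."""
    idx, sign = list(a + b), 1
    # Sort the indices; each swap of two distinct adjacent e_i, e_j flips the
    # sign (anticommutativity), which is the outer-product rule for i != j.
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                swapped = True
    # Cancel adjacent equal indices: e_i e_i = 1, the inner-product rule.
    out = []
    for j in idx:
        if out and out[-1] == j:
            out.pop()
        else:
            out.append(j)
    return sign, tuple(out)

print(blade_product((1,), (1,)))                             # (1, ()): e1 e1 = 1
print(blade_product((1,), (2,)), blade_product((2,), (1,)))  # e1 e2 = -e2 e1
# Multiplying a blade by the unit pseudoscalar e1 e2 e3 of R^3 gives its dual,
# up to a convention- and dimension-dependent sign:
print(blade_product((1, 2), (1, 2, 3)))                      # (-1, (3,)): (e1 e2) i = -e3
```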
 
  • #25
I think we are coming to a conclusion. I am happy that you think part of what I claimed is true. If you could point me to a reference where I could find a rigorous justification, I would be the happiest of men.
Your idea of involving the geometric product is also very interesting; I guess this is related to Grassmann calculus, and the theorems I stated are probably superseded by known theorems there.
 
  • #26
I think the key to what you want to prove here lies in some nifty inverse formulas for operators. First, let [itex]i[/itex] denote some multivector such that [itex]ia=\star a[/itex] under the geometric product. i represents the unit pseudoscalar of the space, i.e. [itex]i=e_1 e_2 e_3[/itex]. (Wedges are optional because the vectors are orthogonal.)

There is then a formula for inverting an operator.
[tex]\underline A^{-1}(a)=\overline A(ai) [\underline A(i)]^{-1}[/tex]

The overline represents the transpose. Realize that orthogonal operators have inverses equal to their transposes; there are big possibilities to use this, I think.

Beyond that, geometric algebra makes a statement like

[tex](e_1 \wedge e_2)\cdot [\underline A(e_1) \wedge \underline A(e_2)][/tex]

meaningful as a scalar product between bivectors.

I realize this language and set of ideas is unfamiliar, however. I would start with [itex]\underline A(a) \wedge \underline A(b)=\underline A(a\wedge b)[/itex], then prove the corresponding statement for the transpose, and then show that the other properties still hold. It could be done as an induction proof.
 
  • #27
Thank you. I have found a good set of lecture notes on geometric algebra. I'll try to follow your outline when I feel sufficiently familiar with these concepts.
Anyway, thank you again for the time you have devoted to this discussion.
 

1. What is exterior algebra and how does it relate to n-dimensional volumes?

Exterior algebra is a mathematical construction that extends the properties of vectors and matrices to higher dimensions. It is used to represent and manipulate multilinear functions and forms. One of its applications is in calculating n-dimensional volumes: the n-dimensional volume of the parallelepiped spanned by n vectors is the absolute value of the coefficient of the top-degree basis element in their wedge product.

2. How is exterior algebra different from traditional algebra?

Traditional algebra deals with operations on numbers and variables, while exterior algebra deals with operations on multivectors. In traditional algebra the order of multiplication does not matter, but in exterior algebra the wedge product is anticommutative on vectors, so the order of multiplication matters: swapping two factors changes the sign.

3. Can exterior algebra be applied to non-Euclidean spaces?

Yes, exterior algebra can be applied to non-Euclidean spaces. In fact, it is often used in differential geometry to study curved spaces. The exterior algebra can be defined on any vector space, regardless of its geometric properties.

4. What is a wedge product in exterior algebra?

The wedge product (or exterior product) is a binary operation in exterior algebra that generalizes the cross product in three dimensions. Applied to two vectors, it produces a bivector that represents the oriented area of the parallelogram spanned by the two vectors.

5. How is the determinant of a matrix related to exterior algebra?

The determinant of a matrix can be seen as a special case of the wedge product in exterior algebra. The determinant of an n x n matrix is, up to sign, the n-dimensional volume of the parallelepiped formed by the column vectors of the matrix. Equivalently, the wedge product of the column vectors equals the determinant times e_1^...^e_n, the top-degree basis element.
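A short numerical check of this statement (a NumPy sketch; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))

vol = np.sqrt(np.linalg.det(M.T @ M))   # 4-volume spanned by the columns of M
print(abs(np.linalg.det(M)), vol)       # equal: |det(M)| is that volume
```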
