## Exterior algebra and n-dimensional volumes

 Quote by coquelicot I agree with you that it's OK to denote the operator and the extended operator by the same letter. But now that we agree on this point, can I ask you a question? Let A be an orthonormal operator on R^3, and let B be the extension of this operator to Ext_2(R^3) (which can be denoted by the same letter A, as you pointed out). Is the matrix of B in the basis (e_1^e_2, e_1^e_3, e_2^e_3) orthonormal with respect to the standard scalar product in this basis, and what is the representation of B in this basis? More generally, what is the answer to the same question when R^3 is replaced by R^n and Ext_2(R^3) by Ext_k(R^n), with k
Hey coquelicot.

For this, are you talking about the inner product <Bx,By> for x, y in R^3 and some operator B?

 Sorry, I haven't understood your question, can you be more explicit?

 Quote by coquelicot Sorry, I haven't understood your question, can you be more explicit?
Basically, if you have an inner product <.,.> and vectors x and y in the untransformed basis, where Bx = B*x (matrix multiplication of the operator B with x) and By = B*y, are you trying to find <Bx,By>, where B is the operator that transforms a vector from one basis to another?

Your question was about understanding orthonormality conditions for the inner product, and I'm asking: is <Bx,By> your inner product?

 My impression is that you are dealing with something that is not the subject of the question I asked. Are you sure you have read my first post, and at least part of the posts and replies of the others? I have explicitly stated what the scalar product in Ext_k(R^n) is (namely, the canonical one on the basis e_1^...^e_k, ..., made explicit in my first post).

 Quote by coquelicot I agree with you that it's OK to denote the operator and the extended operator by the same letter. But now that we agree on this point, can I ask you a question? Let A be an orthonormal operator on R^3, and let B be the extension of this operator to Ext_2(R^3) (which can be denoted by the same letter A, as you pointed out). Is the matrix of B in the basis (e_1^e_2, e_1^e_3, e_2^e_3) orthonormal with respect to the standard scalar product in this basis, and what is the representation of B in this basis? More generally, what is the answer to the same question when R^3 is replaced by R^n and Ext_2(R^3) by Ext_k(R^n), with k
Consider a rotation in the xy plane.

\begin{align*} \underline A(e_1) &= e_1 \cos \theta + e_2 \sin \theta \\ \underline A(e_2) &= e_2 \cos \theta - e_1 \sin \theta \\ \underline A(e_3) &= e_3 \end{align*}

When acting on basis 2-vectors, we get

\begin{align*} \underline A(e_1 \wedge e_2) &= e_1 \wedge e_2 \\ \underline A(e_1 \wedge e_3) &= e_1 \wedge e_3 \cos \theta + e_2 \wedge e_3 \sin \theta \\ \underline A(e_2 \wedge e_3) &= e_2 \wedge e_3 \cos \theta - e_1 \wedge e_3 \sin \theta \end{align*}

I think I see what you're driving at now. You're making a very, very specific statement about the components of the matrix when the basis is ordered according to certain rules and such. I can see how that might be useful for computing applications, but it's all just an artifact of needing certain rules for the ordering of the basis.
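That very specific statement about the components can be checked numerically. Here is a minimal numpy sketch (the `compound` helper and all names are my own illustration, not from the thread): in the lexicographically ordered basis of 2-vectors, the matrix of the extended operator is the 2nd compound matrix of A, i.e. the matrix of its 2-by-2 minors, and for the rotation above it comes out orthogonal.

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th compound matrix of A: entries are the k-by-k minors,
    rows/columns indexed by k-element index sets in lexicographic order.
    This is the matrix of the extended operator on Ext_k(R^n)."""
    n = A.shape[0]
    idx = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx]
                     for r in idx])

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
A = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])          # rotation in the xy-plane

B = compound(A, 2)                       # basis order: e1^e2, e1^e3, e2^e3
print(np.round(B, 3))
print(np.allclose(B @ B.T, np.eye(3)))   # True: B is again orthogonal
```

The printed B is [[1,0,0],[0,c,-s],[0,s,c]], matching the formulas for the action on the basis 2-vectors given earlier in this post.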

 Indeed, as I stated it, this theorem depends on a particular choice of the basis of Ext_k(R^n). But let us return to the first claim of my first post. I said that the second claim would easily imply that the k-dimensional volume of the parallelotope determined by k vectors u_1,..., u_k of R^n is equal to the norm of u_1^...^u_k in Ext_k(R^n), which would be very useful for computing this volume. In fact, less is needed to imply this theorem: it would suffice to show that the extension B to Ext_k(R^n) of an orthonormal operator A of R^n is always orthonormal. I believe that it is possible to define the scalar product in Ext_k(R^n) in an intrinsic manner, so that the theorem wouldn't seem artificial to you. In fact, this is probably what is dealt with in the section "Inner product" of the Wikipedia article on the exterior algebra (see also the Wikipedia article on the Hodge star operator, section "Inner product of k-vectors", last line). But the description in Wikipedia is too succinct for me to be sure.
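The volume claim itself can also be checked numerically. A minimal sketch assuming numpy (the variable names are my own): the Gram-determinant formula sqrt(det(U^T U)) for the k-volume agrees with the Euclidean norm of the coordinate vector of u_1^...^u_k, whose components are the k-by-k minors of U; the agreement is the Cauchy-Binet formula.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, k = 5, 3
U = rng.standard_normal((n, k))      # columns are u_1, ..., u_k in R^n

# Gram-determinant formula for the k-volume of the parallelotope
vol_gram = np.sqrt(np.linalg.det(U.T @ U))

# Coordinates of u_1 ^ ... ^ u_k in the basis {e_i1 ^ ... ^ e_ik}:
# the k-by-k minors of U obtained by choosing k of the n rows.
minors = np.array([np.linalg.det(U[list(rows), :])
                   for rows in combinations(range(n), k)])
vol_wedge = np.linalg.norm(minors)

print(np.isclose(vol_gram, vol_wedge))   # True, by Cauchy-Binet
```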
 That $a\wedge b\wedge c$ has as its magnitude the volume the three vectors span is safe to say, and it generalizes to arbitrary dimension. That an orthonormal operator is still orthonormal on k-vectors is also safe, I think. The approach of using the dual to define the inner product is traditional...and also silly, in my opinion. It arises because exterior algebraists don't want to define any product other than the wedge, but taking the dual basically requires one. This treats the dual as something fundamental when it isn't. The elegant solution is to define a "geometric" product. Let $e_i e_j$ denote the geometric product of two orthonormal basis vectors. When i=j the product is defined to be 1, capturing the properties of the inner product. When i and j are different, the product is anticommutative, reducing to the outer product. This makes scalar products of k-vectors arise naturally, and it shows that the Hodge star operator is just multiplication of a k-vector by the unit N-vector in N-dimensional space under the geometric product.
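Those two product rules are enough to multiply any basis blades mechanically. A minimal sketch of this (the tuple representation and the helper name are my own; the sign of the dual depends on one's orientation convention):

```python
def blade_product(a, b):
    """Geometric product of two orthonormal basis blades, each given as a
    strictly increasing tuple of indices. Returns (sign, blade), using
    e_i e_i = 1 and e_i e_j = -e_j e_i for i != j."""
    coeffs = list(a) + list(b)
    sign = 1
    changed = True
    while changed:
        changed = False
        for i in range(len(coeffs) - 1):
            if coeffs[i] > coeffs[i + 1]:
                # transposing two distinct basis vectors flips the sign
                coeffs[i], coeffs[i + 1] = coeffs[i + 1], coeffs[i]
                sign = -sign
                changed = True
            elif coeffs[i] == coeffs[i + 1]:
                del coeffs[i:i + 2]      # e_i e_i = 1
                changed = True
                break
    return sign, tuple(coeffs)

print(blade_product((1,), (1,)))         # (1, ())     : e1 e1 = 1
print(blade_product((2,), (1,)))         # (-1, (1, 2)): e2 e1 = -e1 e2
# Multiplying by the unit pseudoscalar i = e1 e2 e3 in R^3 produces the
# dual blade (up to the sign fixed by the orientation convention):
print(blade_product((1, 2), (1, 2, 3)))  # (-1, (3,))
```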
 I think we are coming to a conclusion. I am happy that you think part of what I claimed is true. If you can point me to some link where I could find a rigorous justification, I would be the happiest of men. Your idea of involving the geometric product is also very interesting; I guess this is related to Grassmann calculus, and the theorems I stated are probably superseded by known theorems there.
 I think the key to what you want to prove here lies in some nifty inverse formulas for operators. First, let $i$ denote some multivector such that $ia=\star a$ under the geometric product. Then $i$ represents the unit pseudoscalar of the space: e.g., in 3D, $i=e_1 e_2 e_3$. (Wedges are optional because the vectors are orthogonal.) There is then a formula for inverting an operator: $$\underline A^{-1}(a)=\overline A(ai) [\underline A(i)]^{-1}$$ The overline represents the transpose. Note that orthogonal operators have inverse equal to transpose, and there are big possibilities to use this, I think. Beyond that, geometric algebra makes a statement like $$(e_1 \wedge e_2)\cdot [\underline A(e_1) \wedge \underline A(e_2)]$$ meaningful as a scalar product between bivectors. I realize this language and set of ideas is unfamiliar, however. I would start with $\underline A(a) \wedge \underline A(b)=\underline A(a\wedge b)$, then prove this for the transpose, and prove that the other properties still hold. It could be done like an induction proof.
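The claim being aimed at, that an orthogonal operator preserves the scalar product of bivectors, can at least be sanity-checked numerically. A sketch assuming numpy, with all names mine: the scalar product of a^b and c^d equals the 2x2 Gram determinant (a.c)(b.d) - (a.d)(b.c), and it is unchanged when all four vectors are pushed through a random orthogonal Q.

```python
import numpy as np
from itertools import combinations

def wedge2(a, b):
    """Coordinates of a ^ b in the lexicographic basis e_i ^ e_j (i < j)."""
    return np.array([a[i] * b[j] - a[j] * b[i]
                     for i, j in combinations(range(len(a)), 2)])

rng = np.random.default_rng(1)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal Q
a, b, c, d = rng.standard_normal((4, n))

# scalar product of bivectors = 2x2 Gram determinant
gram = (a @ c) * (b @ d) - (a @ d) * (b @ c)
before = wedge2(a, b) @ wedge2(c, d)
after = wedge2(Q @ a, Q @ b) @ wedge2(Q @ c, Q @ d)

print(np.isclose(before, gram))    # the Gram-determinant identity
print(np.isclose(after, before))   # Q's extension preserves the product
```

Since orthogonal Q preserves every dot product appearing in the Gram determinant, the preservation on bivectors follows, which is the non-inductive core of the outlined proof.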
 Thank you. I have found a good lesson on geometric algebra. I'll try to follow your outline when I feel sufficiently familiar with these concepts. Anyway, thank you again for the time you have spent on this discussion.