How to generalize determinant and cross product

  • Thread starter jostpuur
  • #1
jostpuur
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.
 

Answers and Replies

  • #2
jshtok
Just a thought - the vectors reside in a k-dimensional subspace of [tex]\mathbb{R}^n[/tex], so when represented in some basis of this subspace they constitute a square matrix and you can take the determinant. However, this leaves me puzzled about the obvious dependence of the determinant on the specific chosen basis. Shouldn't this also be a problem in the case k=n?
 
  • #3
adriank
But the determinant doesn't depend on the chosen basis.
 
  • #4
jshtok
But the determinant doesn't depend on the chosen basis.

Please explain this to me. Take any basis B in which the matrix X has a certain determinant value d. Now multiply all the vectors in B by 2. That will cause all the coordinate representations of the X_i to shrink by half, which introduces a factor of [tex]2^{-n}[/tex] in the determinant.
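For concreteness, here is a minimal numerical sketch of this scaling effect, assuming Python with numpy (the basis B and the vectors X below are arbitrary examples):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 3
B = rng.standard_normal((n, n))      # columns form a basis of R^n
X = rng.standard_normal((n, n))      # columns are the fixed vectors X_i

coords = np.linalg.solve(B, X)       # coordinates of the X_i in the basis B
coords2 = np.linalg.solve(2 * B, X)  # coordinates in the doubled basis 2B

# doubling the basis scales the determinant by 2^{-n} = 0.125 for n = 3
print(np.linalg.det(coords2) / np.linalg.det(coords))
[/code]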
 
  • #5
Landau
Science Advisor
adriank means that the determinant of a linear map is independent of the basis you choose to represent that linear map by a matrix. (After all, the determinant of a linear map is defined to be the determinant of its matrix with respect to any basis. This is well-defined precisely because it is independent of the chosen basis.)

You are computing the determinant of the same matrix with respect to different bases, i.e. the matrix represents two different linear maps.
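A quick numerical illustration of this invariance, assuming Python with numpy (M and P are arbitrary examples): the determinant of a linear map is unchanged under a change of basis, since det(P^{-1}MP) = det(M).

[code]
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))  # matrix of some linear map in one basis
P = rng.standard_normal((4, 4))  # change-of-basis matrix (invertible with probability 1)

print(np.linalg.det(M))
print(np.linalg.det(np.linalg.inv(P) @ M @ P))  # same value up to rounding
[/code]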
 
  • #6
monea83
If A is the matrix whose column vectors are [tex]X_1, ..., X_k[/tex], the "hypervolume" V of the parallelepiped spanned by the vectors is given by

[tex]V^2 = \det(A^TA)[/tex]
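A minimal sketch of this formula in use, assuming Python with numpy (the two column vectors below are arbitrary examples); for this 3x2 case it agrees with the cross product norm:

[code]
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])  # columns are X_1, X_2 in R^3

V = np.sqrt(np.linalg.det(A.T @ A))                # area of the spanned parallelogram
print(V)                                           # sqrt(46)
print(np.linalg.norm(np.cross(A[:, 0], A[:, 1])))  # same value
[/code]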
 
  • #7
jostpuur
I have finally found a proof of the determinant formula given in the previous reply.

Suppose [itex]A\in\mathbb{R}^{n\times k}[/itex] is such that its columns are linearly independent, and [itex]1\leq k\leq n[/itex]. First choose some [itex]U\in\mathbb{R}^{n\times k}[/itex] whose columns are orthonormal and span the same [itex]k[/itex]-dimensional subspace as the columns of [itex]A[/itex]. That means

[tex]
U_{*i}\cdot U_{*j} = \delta_{ij}
[/tex]

and that if we define the coefficients

[tex]
\alpha_{ij} = U_{*j} \cdot A_{*i}
[/tex]

then

[tex]
A_{*i} = \sum_{j=1}^{k} \alpha_{ij} U_{*j}
[/tex]

These conditions can be written like this:

[tex]
U^TU = \textrm{id}_{k\times k},\quad\quad UU^TA = A
[/tex]

Here [itex]\alpha[/itex] is a [itex]k\times k[/itex] matrix whose rows give the coordinates of the [itex]A_{*i}[/itex] in the [itex]U[/itex]-basis, so the absolute value of its determinant gives the answer to the problem.

[tex]
\det(\alpha)^2 = \det(\alpha)\det(\alpha^T) = \det(A^TU)\det(U^TA) = \det(A^TUU^TA) = \det(A^TA)
[/tex]
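A numerical check of this construction, assuming Python with numpy: a reduced QR factorization of A supplies a valid U (orthonormal columns spanning the column space of A), and then alpha = A^T U, since alpha_{ij} = U_{*j} . A_{*i}. The matrix A is an arbitrary example.

[code]
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
A = rng.standard_normal((n, k))

U, _ = np.linalg.qr(A)  # reduced QR: U is n x k with U^T U = I_k
alpha = A.T @ U         # coordinates of the columns of A in the U-basis

print(abs(np.linalg.det(alpha)))        # |det(alpha)|
print(np.sqrt(np.linalg.det(A.T @ A)))  # sqrt(det(A^T A)), the same value
[/code]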
 
  • #8
jostpuur
Suppose that the task is to find the area of the parallelogram spanned by the columns of the matrix

[tex]
A = \left(\begin{array}{cc}
A_{11} & A_{12} \\
A_{21} & A_{22} \\
A_{31} & A_{32} \\
\end{array}\right)
[/tex]

There are actually two different formulas for this now. One is [itex]\sqrt{\det(A^TA)}[/itex], and the other is [itex]\|A_{*1}\times A_{*2}\|[/itex]. The formulas look quite different:

[tex]
\det(A^TA) = (A_{11}^2 + A_{21}^2 + A_{31}^2)(A_{12}^2 + A_{22}^2 + A_{32}^2) - (A_{12}A_{11} + A_{22}A_{21} + A_{32}A_{31})^2
[/tex]

[tex]
\|A_{*1}\times A_{*2}\|^2 = (A_{21}A_{32} - A_{31}A_{22})^2 + (A_{31}A_{12} - A_{11}A_{32})^2 + (A_{11}A_{22} - A_{21}A_{12})^2
[/tex]

You need to go through some effort if you want to prove that these are the same (the equality is Lagrange's identity).
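A symbolic check of the equality, assuming Python with sympy (the entries of A are symbols):

[code]
import sympy as sp

A11, A21, A31, A12, A22, A32 = sp.symbols('A11 A21 A31 A12 A22 A32')
A = sp.Matrix([[A11, A12],
               [A21, A22],
               [A31, A32]])

gram = (A.T * A).det()                     # det(A^T A)
cross = A[:, 0].cross(A[:, 1])             # classical cross product of the columns
print(sp.expand(gram - cross.dot(cross)))  # prints 0
[/code]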

At this point I would still put forward the question: How do you generalize the cross product? It's not settled IMO.

If [itex]A^{n\times k}[/itex] is something larger, [itex]n>3, k>2[/itex], do we still have two different formulas for the measure of the generalized parallelepiped? One being [itex]\sqrt{\det(A^TA)}[/itex], and the other one something else?
 
  • #9
jostpuur
If [itex]A^{n\times k}[/itex] ...

That was supposed to be [itex]A\in\mathbb{R}^{n\times k}[/itex].
 
  • #10
lavinia
Science Advisor
Gold Member
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.

Try deriving a formula yourself. Start with two vectors.
 
  • #11
Ben Niehoff
Science Advisor
Gold Member
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.

The cross product formula generalizes naturally using wedge products:

[tex]\lVert \vec X_1 \wedge \vec X_2 \wedge \ldots \wedge \vec X_k \rVert = \sqrt{ \lvert \det [ \vec X_i \cdot \vec X_j ] \rvert } [/tex]
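A quick numerical sanity check of this formula, assuming Python with numpy (the vectors are arbitrary examples); for k = 2, n = 3 it reproduces the cross product norm:

[code]
import numpy as np

X1 = np.array([1.0, 2.0, 0.0])
X2 = np.array([0.0, 1.0, 3.0])

G = np.array([[X1 @ X1, X1 @ X2],
              [X2 @ X1, X2 @ X2]])       # Gram matrix [X_i . X_j]

print(np.sqrt(abs(np.linalg.det(G))))    # wedge norm via the Gram determinant
print(np.linalg.norm(np.cross(X1, X2)))  # classical cross product norm, same value
[/code]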
 
  • #12
jostpuur
The cross product formula generalizes naturally using wedge products:

[tex]\lVert \vec X_1 \wedge \vec X_2 \wedge \ldots \wedge \vec X_k \rVert = \sqrt{ \lvert \det [ \vec X_i \cdot \vec X_j ] \rvert } [/tex]

Can you name a book where this is explained?
 
  • #13
Ben Niehoff
Science Advisor
Gold Member
Can you name a book where this is explained?

Practically every book or article on exterior algebra? It seemed like an obvious fact to me.

If you know the answer for an n-parallelepiped in n-space, then a k-parallelepiped in n-space follows by simply restricting yourself to the k-dimensional subspace in which the k-parallelepiped lives, and using what you already know. You should be able to derive it without too much effort.
 
  • #14
jostpuur
Every time I try to read about exterior algebras, I'm only shown some abstract definitions and properties.
 
  • #15
Ben Niehoff
Science Advisor
Gold Member
Then follow the outline I gave of the derivation.

Choose an orthonormal basis [itex]e_1, \ldots, e_n[/itex] such that the k-parallelepiped lies in the subspace generated by [itex]e_1, \ldots, e_k[/itex], and the subspace generated by [itex]e_{k+1}, \ldots, e_n[/itex] is orthogonal to it. Then [itex]e_1, \ldots, e_k[/itex] give an orthonormal basis for the k-subspace.

Now you just need to write down a square matrix in that basis and take its determinant, just like you did for the n-parallelepiped case.

To generalize to an arbitrary orthonormal basis, use the fact that dot products are preserved under rotations (so the basis-invariant version of the formula must be written in terms of dot products).
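A numerical sketch of this rotation-invariance, assuming Python with numpy (Q and A are arbitrary examples): applying an orthogonal Q to the columns of A leaves sqrt(det(A^T A)) unchanged, because (QA)^T (QA) = A^T A.

[code]
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal matrix

vol = lambda M: np.sqrt(np.linalg.det(M.T @ M))
print(vol(A), vol(Q @ A))  # identical up to rounding
[/code]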
 
  • #16
jostpuur
Every time I ask something here, I end up needing to prove everything myself...

So if
[tex]
\omega:\underbrace{\mathbb{R}^n \times \cdots \times \mathbb{R}^n}_{k\;\textrm{times}}\to\mathbb{R}
[/tex]
is an alternating multilinear form, we define its "norm" by the formula
[tex]
\|\omega\|^2 = \frac{1}{k!}\sum_{i_1,\cdots, i_k=1}^{n} \omega_{i_1\cdots i_k}^2
[/tex]
where the coefficients [itex]\omega_{i_1\cdots i_k}[/itex] are chosen so that
[tex]
\omega = \sum_{i_1,\cdots,i_k=1}^n \omega_{i_1\cdots i_k} dx_{i_1}\otimes\cdots\otimes dx_{i_k}
[/tex]

Then we interpret a column vector [itex]A_{*i}[/itex] as a linear form
[tex]
A_{*i} = \sum_{i'=1}^n A_{i'i} dx_{i'}
[/tex]
and put forward the claim
[tex]
\|A_{*1}\wedge\cdots\wedge A_{*k}\| = \sqrt{\det(A^TA)}
[/tex]

How to prove this?

Here it goes! First verify
[tex]
\omega = A_{*1}\wedge\cdots\wedge A_{*k}\quad\implies\quad \omega_{i_1\cdots i_k} = \sum_{\sigma\in S_k} \epsilon(\sigma) A_{i_1,\sigma(1)}\cdots A_{i_k,\sigma(k)}
[/tex]

Then the final calculation begins:
[tex]
\|A_{*1}\wedge\cdots\wedge A_{*k}\|^2 = \frac{1}{k!}\sum_{i_1,\cdots,i_k=1}^n \Big(\sum_{\sigma\in S_k} \epsilon(\sigma) A_{i_1,\sigma(1)}\cdots A_{i_k,\sigma(k)}\Big)^2
[/tex]
[tex]
= \frac{1}{k!}\sum_{i_1,\cdots,i_k=1}^n\Big( \sum_{\sigma,\sigma'\in S_k} \epsilon(\sigma)\epsilon(\sigma') (A_{i_1,\sigma'(1)} A_{i_1,\sigma(1)})\cdots (A_{i_k,\sigma'(k)} A_{i_k,\sigma(k)})\Big)
[/tex]
[tex]
= \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma\in S_k} \epsilon(\sigma)\epsilon(\sigma') (A^TA)_{\sigma'(1),\sigma(1)}\cdots (A^TA)_{\sigma'(k),\sigma(k)}\Big) = \cdots
[/tex]
With [itex]\sigma'[/itex] fixed, we can make a change of variable [itex]\sigma\mapsto\sigma''[/itex] in the inner sum via the formula [itex]\sigma''=\sigma\circ(\sigma')^{-1}[/itex]. Then
[tex]
\cdots = \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma''\in S_k} \epsilon(\sigma''\circ\sigma')\epsilon(\sigma') (A^TA)_{\sigma'(1),\sigma''(\sigma'(1))}\cdots (A^TA)_{\sigma'(k),\sigma''(\sigma'(k))}\Big)
[/tex]
[tex]
= \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma''\in S_k} \epsilon(\sigma'') (A^TA)_{1,\sigma''(1)}\cdots (A^TA)_{k,\sigma''(k)}\Big) = \cdots
[/tex]
Now we see that the outer sum simply adds up [itex]k![/itex] equal terms, none of which depends on [itex]\sigma'[/itex].
[tex]
\cdots = \sum_{\sigma''\in S_k} \epsilon(\sigma'') (A^TA)_{1,\sigma''(1)} \cdots (A^TA)_{k,\sigma''(k)} = \det(A^TA).
[/tex]
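For what it's worth, a brute-force numerical check of the whole identity, assuming Python with numpy (the matrix A is an arbitrary example); the wedge coefficients and the 1/k! norm are computed directly from the definitions above.

[code]
import itertools
import math

import numpy as np

def perm_sign(p):
    """Sign of a permutation given as a tuple of 0-based indices (via inversion count)."""
    inv = sum(1 for a, b in itertools.combinations(range(len(p)), 2) if p[a] > p[b])
    return -1 if inv % 2 else 1

rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.standard_normal((n, k))

# omega_{i1...ik} = sum over sigma of eps(sigma) A_{i1,sigma(1)} ... A_{ik,sigma(k)}
norm_sq = 0.0
for idx in itertools.product(range(n), repeat=k):
    coeff = sum(perm_sign(s) * math.prod(A[idx[m], s[m]] for m in range(k))
                for s in itertools.permutations(range(k)))
    norm_sq += coeff ** 2
norm_sq /= math.factorial(k)

print(np.sqrt(norm_sq))                 # ||A_{*1} ^ ... ^ A_{*k}||
print(np.sqrt(np.linalg.det(A.T @ A)))  # sqrt(det(A^T A)), the same value
[/code]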
 
  • #17
Ben Niehoff
Science Advisor
Gold Member
Every time I ask something here, I end up needing to prove everything myself...

That's because we have other work to do, and you'll learn more by doing it yourself anyway. Looks like it worked out.
 
