How to generalize determinant and cross product

In summary: the k-dimensional measure of the parallelepiped spanned by k vectors in n-dimensional space can be computed either as the square root of the determinant of the Gram matrix formed from the vectors, or as the norm of their wedge product. This generalizes both the determinant formula for the volume of an n-parallelepiped and the cross product formula for the area of a parallelogram in three-dimensional space. The construction is explained in books on exterior algebra.
  • #1
jostpuur
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.
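
For concreteness, here is a quick numerical illustration of the two known special cases (a sketch in Python with NumPy; the example vectors are arbitrary):

[code]
import numpy as np

# k = n: the volume is |det(X)|, where the columns of X are the vectors.
X = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(abs(np.linalg.det(X)))              # 3.0

# k = 2, n = 3: the area is ||X^1 x X^2||.
X1 = np.array([1.0, 0.0, 2.0])
X2 = np.array([0.0, 1.0, 1.0])
print(np.linalg.norm(np.cross(X1, X2)))   # sqrt(6) ~ 2.449
[/code]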
 
  • #2
Just a thought - the vectors reside in a k-dimensional subspace of [tex]\mathbb{R}^n[/tex], so when represented in some basis of this subspace they do constitute a square matrix, and you can take the determinant. However, what puzzles me is the obvious dependence of the determinant on the specific chosen basis. Shouldn't this also be a problem in the case k = n?
 
  • #3
But the determinant doesn't depend on the chosen basis.
 
  • #4
adriank said:
But the determinant doesn't depend on the chosen basis.

Please explain this to me. Take any basis B in which the matrix X has a certain determinant value d. Now multiply all vectors in B by 2. That will halve all the coordinate representations of the X_i, which multiplies the determinant by [tex]2^{-n}[/tex].
 
  • #5
adriank means that the determinant of a linear map is independent of the basis you choose to represent that linear map by a matrix. (After all, the determinant of a linear map is defined to be the determinant of its matrix with respect to any basis. This is well-defined precisely because it is independent of the chosen basis.)

You are computing the determinant of the same matrix with respect to different bases, i.e. the matrix represents two different linear maps.
 
  • #6
If A is the matrix whose column vectors are [tex]X_1, ..., X_k[/tex], the "hypervolume" V of the parallelepiped spanned by the vectors is given by

[tex]V^2 = \det(A^TA)[/tex]
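
As a sanity check (not part of the original post), this Gram determinant formula can be compared numerically against the cross product in the [itex]n=3[/itex], [itex]k=2[/itex] case. A minimal Python/NumPy sketch:

[code]
import numpy as np

# Example 3x2 matrix; its columns span a parallelogram in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 1.0]])

V = np.sqrt(np.linalg.det(A.T @ A))                 # Gram determinant formula
area = np.linalg.norm(np.cross(A[:, 0], A[:, 1]))   # classical cross product
print(V, area)                                      # both equal sqrt(6)
[/code]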
 
  • #7
I have finally found a proof of the determinant formula given in the previous reply.

Suppose [itex]A\in\mathbb{R}^{n\times k}[/itex] has linearly independent columns, and [itex]1\leq k\leq n[/itex]. First choose some [itex]U\in\mathbb{R}^{n\times k}[/itex] whose columns are orthonormal and span the same [itex]k[/itex]-dimensional subspace as the columns of [itex]A[/itex]. That means

[tex]
U_{*i}\cdot U_{*j} = \delta_{ij}
[/tex]

and that if we define the coefficients

[tex]
\alpha_{ij} = U_{*j} \cdot A_{*i}
[/tex]

then

[tex]
A_{*i} = \sum_{j=1}^{k} \alpha_{ij} U_{*j}
[/tex]

These conditions can be written like this:

[tex]
U^TU = \textrm{id}_{k\times k},\quad\quad UU^TA = A
[/tex]

Here [itex]\alpha[/itex] is a [itex]k\times k[/itex] matrix whose rows give the coordinates of the [itex]A_{*i}[/itex] in the [itex]U[/itex]-basis, so the absolute value of its determinant gives the answer to the problem:

[tex]
\det(\alpha)^2 = \det(\alpha)\det(\alpha^T) = \det(A^TU)\det(U^TA) = \det(A^TUU^TA) = \det(A^TA)
[/tex]
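
Numerically, the orthonormal [itex]U[/itex] in this proof can be produced with a QR factorization; the following Python sketch (assuming NumPy, with random example data) verifies [itex]|\det(\alpha)| = \sqrt{\det(A^TA)}[/itex]:

[code]
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.standard_normal((n, k))   # columns almost surely independent

# QR factorization yields U with orthonormal columns spanning col(A),
# playing the role of U in the proof above.
U, _ = np.linalg.qr(A)            # U is n x k, with U^T U = id
alpha = A.T @ U                   # alpha_ij = U_{*j} . A_{*i}

print(abs(np.linalg.det(alpha)))
print(np.sqrt(np.linalg.det(A.T @ A)))   # the two numbers agree
[/code]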
 
  • #8
Suppose that the task is to find the area of the parallelogram spanned by the columns of the matrix

[tex]
A = \left(\begin{array}{cc}
A_{11} & A_{12} \\
A_{21} & A_{22} \\
A_{31} & A_{32} \\
\end{array}\right)
[/tex]

There are actually two different formulas for this now. One is [itex]\sqrt{\det(A^TA)}[/itex], and the second is [itex]\|A_{*1}\times A_{*2}\|[/itex]. The formulas look quite different:

[tex]
\det(A^TA) = (A_{11}^2 + A_{21}^2 + A_{31}^2)(A_{12}^2 + A_{22}^2 + A_{32}^2) - (A_{12}A_{11} + A_{22}A_{21} + A_{32}A_{31})^2
[/tex]

[tex]
\|A_{*1}\times A_{*2}\|^2 = (A_{21}A_{32} - A_{31}A_{22})^2 + (A_{31}A_{12} - A_{11}A_{32})^2 + (A_{11}A_{22} - A_{21}A_{12})^2
[/tex]

You need to go through some effort if you want to prove that these are the same.
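
One way to avoid that effort is a symbolic check; the following sketch (assuming SymPy, with symbols mirroring the entries above) confirms that the difference of the two expansions is identically zero:

[code]
import sympy as sp

A11, A21, A31, A12, A22, A32 = sp.symbols('A11 A21 A31 A12 A22 A32')

gram = ((A11**2 + A21**2 + A31**2)*(A12**2 + A22**2 + A32**2)
        - (A12*A11 + A22*A21 + A32*A31)**2)
cross = ((A21*A32 - A31*A22)**2 + (A31*A12 - A11*A32)**2
         + (A11*A22 - A21*A12)**2)

print(sp.expand(gram - cross))   # prints 0: the formulas agree
[/code]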

At this point I would still put forward the question: How do you generalize the cross product? It's not settled IMO.

If [itex]A\in\mathbb{R}^{n\times k}[/itex] is something larger, [itex]n>3[/itex], [itex]k>2[/itex], do we still have two different formulas for the measure of the generalized parallelepiped? One being [itex]\sqrt{\det(A^TA)}[/itex], and the other something else?
 
  • #10
jostpuur said:
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.

Try deriving a formula yourself. Start with 2 vectors.
 
  • #11
jostpuur said:
Assume that [itex]X^1,X^2,\ldots, X^k[/itex] are vectors in [itex]\mathbb{R}^n[/itex], and [itex]1\leq k\leq n[/itex]. Is there a simple formula for the k-dimensional measure of the generalised "quadrangle" spanned by these vectors?

If [itex]k=n[/itex], then the solution is [itex]|\textrm{det}(X)|[/itex] with [itex]X_{ij}=(X^{j})_i[/itex].

If [itex]k=2[/itex] and [itex]n=3[/itex], then the solution is [itex]\|X^1\times X^2\|[/itex].

I know that a wedge product exists between alternating multilinear forms, and that it is related to measures because it is used in differential geometry and integration, but the definition of the wedge product doesn't immediately answer my question.

The cross product formula generalizes naturally using wedge products:

[tex]\lVert \vec X_1 \wedge \vec X_2 \wedge \ldots \wedge \vec X_k \rVert = \sqrt{ \lvert \det [ \vec X_i \cdot \vec X_j ] \rvert } [/tex]
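
In code, the right-hand side is just the determinant of the matrix of pairwise dot products; a minimal NumPy sketch (the example data is arbitrary):

[code]
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((7, 4))          # columns X_1, ..., X_4 in R^7

G = X.T @ X                              # Gram matrix of dot products X_i . X_j
print(np.sqrt(abs(np.linalg.det(G))))    # 4-volume of the spanned parallelepiped
[/code]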
 
  • #12
Ben Niehoff said:
The cross product formula generalizes naturally using wedge products:

[tex]\lVert \vec X_1 \wedge \vec X_2 \wedge \ldots \wedge \vec X_k \rVert = \sqrt{ \lvert \det [ \vec X_i \cdot \vec X_j ] \rvert } [/tex]

Can you mention a name of a book where this is explained?
 
  • #13
jostpuur said:
Can you mention a name of a book where this is explained?

Practically every book or article on exterior algebra? It seemed like an obvious fact to me.

If you know the answer for an n-parallelepiped in n-space, then the k-parallelepiped in n-space follows by restricting yourself to the k-dimensional subspace in which the k-parallelepiped lies, and using what you already know. You should be able to derive it without too much effort.
 
  • #14
Every time I try to read about exterior algebras, I'm only shown some abstract definitions and properties.
 
  • #15
Then follow the outline I gave of the derivation.

Choose an orthonormal basis [itex]e_1, \ldots, e_n[/itex] such that the k-parallelepiped lies in the subspace generated by [itex]e_1, \ldots, e_k[/itex], and the subspace generated by [itex]e_{k+1}, \ldots, e_n[/itex] is orthogonal to it. Then [itex]e_1, \ldots, e_k[/itex] give an orthonormal basis for the k-subspace.

Now you just need to write down a square matrix in that basis and take its determinant, just like you did for the n-parallelepiped case.

To generalize to arbitrary orthonormal basis, use the fact that dot products are preserved under rotations (so, the basis-invariant version of the formula must be written in terms of dot products).
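
That invariance is easy to check numerically: an orthogonal change of basis [itex]Q[/itex] leaves [itex]A^TA[/itex], and hence the Gram determinant, unchanged. A small NumPy sketch with random data:

[code]
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))

# A random orthogonal Q (rotation/reflection) via QR of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

print(np.linalg.det(A.T @ A))
print(np.linalg.det((Q @ A).T @ (Q @ A)))   # identical: dot products survive Q
[/code]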
 
  • #16
Every time I ask something here, I end up needing to prove everything myself...

So if
[tex]
\omega:\underbrace{\mathbb{R}^n \times \cdots \times \mathbb{R}^n}_{k\;\textrm{times}}\to\mathbb{R}
[/tex]
is an alternating multilinear form, we define its "norm" by the formula
[tex]
\|\omega\|^2 = \frac{1}{k!}\sum_{i_1,\cdots, i_k=1}^{n} \omega_{i_1\cdots i_k}^2
[/tex]
where the coefficients [itex]\omega_{i_1\cdots i_k}[/itex] are chosen so that
[tex]
\omega = \sum_{i_1,\cdots, i_k=1}^n \omega_{i_1\cdots i_k}\, dx_{i_1}\otimes\cdots\otimes dx_{i_k}
[/tex]

Then we interpret each column [itex]A_{*i}[/itex] as a linear form
[tex]
A_{*i} = \sum_{i'=1}^n A_{i'i} dx_{i'}
[/tex]
and put forward a claim
[tex]
\|A_{*1}\wedge\cdots\wedge A_{*k}\| = \sqrt{\det(A^TA)}
[/tex]

How to prove this?

Here it goes! First verify
[tex]
\omega = A_{*1}\wedge\cdots\wedge A_{*k}\quad\implies\quad \omega_{i_1\cdots i_k} = \sum_{\sigma\in S_k} \epsilon(\sigma) A_{i_1,\sigma(1)}\cdots A_{i_k,\sigma(k)}
[/tex]

Then the final calculation begins:
[tex]
\|A_{*1}\wedge\cdots\wedge A_{*k}\|^2 = \frac{1}{k!}\sum_{i_1,\cdots,i_k=1}^n \Big(\sum_{\sigma\in S_k} \epsilon(\sigma) A_{i_1,\sigma(1)}\cdots A_{i_k,\sigma(k)}\Big)^2
[/tex]
[tex]
= \frac{1}{k!}\sum_{i_1,\cdots,i_k=1}^n\Big( \sum_{\sigma,\sigma'\in S_k} \epsilon(\sigma)\epsilon(\sigma') (A_{i_1,\sigma'(1)} A_{i_1,\sigma(1)})\cdots (A_{i_k,\sigma'(k)} A_{i_k,\sigma(k)})\Big)
[/tex]
[tex]
= \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma\in S_k} \epsilon(\sigma)\epsilon(\sigma') (A^TA)_{\sigma'(1),\sigma(1)}\cdots (A^TA)_{\sigma'(k),\sigma(k)}\Big) = \cdots
[/tex]
With [itex]\sigma'[/itex] fixed we can make a change of variable [itex]\sigma\mapsto\sigma''[/itex] in the inner sum via [itex]\sigma''=\sigma\circ(\sigma')^{-1}[/itex]. Then
[tex]
\cdots = \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma''\in S_k} \epsilon(\sigma''\circ\sigma')\epsilon(\sigma') (A^TA)_{\sigma'(1),\sigma''(\sigma'(1))}\cdots (A^TA)_{\sigma'(k),\sigma''(\sigma'(k))}\Big)
[/tex]
Since [itex]\sigma'[/itex] is a bijection, [itex]\epsilon(\sigma''\circ\sigma')\epsilon(\sigma')=\epsilon(\sigma'')[/itex], and the factors of the product may be reordered, giving
[tex]
= \frac{1}{k!}\sum_{\sigma'\in S_k}\Big(\sum_{\sigma''\in S_k} \epsilon(\sigma'') (A^TA)_{1,\sigma''(1)}\cdots (A^TA)_{k,\sigma''(k)}\Big) = \cdots
[/tex]
Now the outer sum simply adds up [itex]k![/itex] identical terms that no longer depend on [itex]\sigma'[/itex], cancelling the [itex]1/k![/itex]:
[tex]
\cdots = \sum_{\sigma''\in S_k} \epsilon(\sigma'') (A^TA)_{1,\sigma''(1)} \cdots (A^TA)_{k,\sigma''(k)} = \det(A^TA).
[/tex]
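
The whole chain can also be verified numerically: compute the coefficients [itex]\omega_{i_1\cdots i_k}[/itex] from the permutation formula and compare the resulting norm with [itex]\sqrt{\det(A^TA)}[/itex]. A Python sketch (assuming NumPy; brute force over all [itex]n^k[/itex] index tuples, with 0-based indices mirroring the 1-based formula):

[code]
import itertools
import math
import numpy as np

def perm_sign(sigma):
    """Sign of a permutation, computed from its inversion count."""
    inv = sum(1 for a in range(len(sigma)) for b in range(a + 1, len(sigma))
              if sigma[a] > sigma[b])
    return -1 if inv % 2 else 1

rng = np.random.default_rng(3)
n, k = 5, 3
A = rng.standard_normal((n, k))

# ||A_1 ^ ... ^ A_k||^2 = (1/k!) * sum over all index tuples of omega^2,
# with omega_{i1..ik} = sum_sigma sign(sigma) * A[i1, sigma(1)] ... A[ik, sigma(k)].
norm_sq = 0.0
for idx in itertools.product(range(n), repeat=k):
    omega = sum(perm_sign(s) * math.prod(A[idx[m], s[m]] for m in range(k))
                for s in itertools.permutations(range(k)))
    norm_sq += omega**2
norm_sq /= math.factorial(k)

print(math.sqrt(norm_sq))
print(math.sqrt(np.linalg.det(A.T @ A)))   # the two values coincide
[/code]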
 
  • #17
jostpuur said:
Every time I ask something here, I end up needing to prove everything myself...

That's because we have other work to do, and you'll learn more by doing it yourself anyway. Looks like it worked out.
 

1. What is the general formula for calculating the determinant of a matrix?

The general formula is the Leibniz formula: the determinant of an n-by-n matrix is a sum of n! terms, one for each permutation of the columns. Each term is the product of n entries of the matrix, one from every row and column, multiplied by +1 or -1 according to the sign of the corresponding permutation.
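
A direct, if inefficient, implementation of the Leibniz formula (a Python sketch; the helper names are illustrative):

[code]
import itertools
import math

def perm_sign(sigma):
    """+1 for even permutations, -1 for odd, via inversion count."""
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def leibniz_det(M):
    """Leibniz formula: sum of n! signed products, one per permutation."""
    n = len(M)
    return sum(perm_sign(s) * math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

print(leibniz_det([[1, 2], [3, 4]]))   # -2
[/code]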

2. How can the determinant be generalized for higher dimensions?

The determinant itself is defined for square matrices of any size, so it already covers any number of dimensions. The generalization discussed in this thread concerns k vectors in n-dimensional space with k < n: their k-dimensional volume is [itex]\sqrt{\det(A^TA)}[/itex], where A is the n-by-k matrix whose columns are the vectors. In the language of exterior algebra, this is the norm of their wedge product.

3. What is the cross product and how is it related to the determinant?

The cross product is a mathematical operation that takes two vectors in three-dimensional space and produces a third vector perpendicular to both. Its magnitude equals the product of the magnitudes of the input vectors multiplied by the sine of the angle between them, which is the area of the parallelogram they span. The cross product can be written as a formal determinant whose first row contains the unit vectors i, j, k and whose remaining rows are the two input vectors.
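
For example, expanding that formal determinant along its first row reproduces the usual component formula (a NumPy sketch with arbitrary example vectors):

[code]
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Cofactor expansion along the first row of the formal determinant
# | i  j  k  |
# | a1 a2 a3 |
# | b1 b2 b3 |
by_hand = np.array([a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0]])
print(by_hand, np.cross(a, b))   # both [-3.  6. -3.]
[/code]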

4. Can the cross product be generalized for higher dimensions?

Yes, via exterior algebra. The wedge (exterior) product of k vectors is defined in any number of dimensions; the result is not a vector but a k-vector. Its norm equals the k-dimensional volume of the parallelepiped the vectors span, [itex]\|X_1\wedge\cdots\wedge X_k\| = \sqrt{\det[X_i\cdot X_j]}[/itex], recovering the length of the 3D cross product when k = 2 and n = 3.

5. How are the determinant and cross product used in real-world applications?

The determinant and cross product have numerous real-world applications in fields such as physics, engineering, and computer graphics. They are used to calculate areas, volumes, and angles in 2D and 3D spaces, as well as to solve systems of linear equations and to determine whether a set of vectors is linearly independent. In computer graphics, the cross product is often used to calculate surface normals and to perform rotations in 3D space.
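
For instance, a typical surface-normal computation (a minimal NumPy sketch with made-up triangle vertices):

[code]
import numpy as np

# Unit normal of a triangle, as used for surface normals in graphics.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])

normal = np.cross(p1 - p0, p2 - p0)   # perpendicular to both edges
normal /= np.linalg.norm(normal)      # normalize to unit length
print(normal)   # [0. 0. 1.]
[/code]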
