Wedge Product and Determinants .... Tu, Proposition 3.27 ....

In summary, Tu and Walschap use different-looking formulas for the determinant of a matrix: Tu permutes the column index, while Walschap permutes the row index. Since both formulas sum over all permutations, and every permutation has the same sign as its inverse, the two definitions give the same answer.
  • #1
Math Amateur
In Loring W. Tu's book: "An Introduction to Manifolds" (Second Edition) ... Proposition 3.27 reads as follows:
[Image: Tu, Proposition 3.27]

The above proposition expresses the wedge product of k linear functionals as a determinant ... Walschap, in his book "Multivariable Calculus and Differential Geometry", gives the definition of a determinant as follows:
[Image: Walschap, definition of the determinant, page 15]


From Tu's proof above we can say that ...

##\text{det} [ \alpha^i ( v_j ) ] = \text{det} \begin{bmatrix} \alpha^1 ( v_1 ) & \alpha^1 ( v_2 ) & \cdots & \alpha^1 ( v_k ) \\ \alpha^2 ( v_1 ) & \alpha^2 ( v_2 ) & \cdots & \alpha^2 ( v_k ) \\ \vdots & \vdots & & \vdots \\ \alpha^k ( v_1 ) & \alpha^k ( v_2 ) & \cdots & \alpha^k ( v_k ) \end{bmatrix} = \sum_{ \sigma \in S_k } ( \operatorname{sgn} \sigma ) \, \alpha^1 ( v_{ \sigma (1) } ) \cdots \alpha^k ( v_{ \sigma (k) } )##

Thus Tu is indicating that the column index ##j## is permuted ... that is, we permute the columns of the determinant matrix ...

But in the definition of the determinant given by Walschap we have

##\text{det} \begin{bmatrix} a_{11} & \cdots & a_{ 1n } \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{ nn } \end{bmatrix} = \sum_{ \sigma \in S_n } \varepsilon ( \sigma ) \, a_{ \sigma (1) 1 } \cdots a_{ \sigma (n) n }##

Thus Walschap is indicating that the row index ##i## is permuted ... that is, we permute the rows of the determinant matrix ... in contrast to Tu, who permutes the columns ...

Can someone please reconcile these two approaches ... do we get the same answer from both ...?
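The two expansions can be checked numerically. Below is a small sketch (my own, not from either book) that expands the determinant both ways, once permuting the column index and once permuting the row index, and confirms they agree on a concrete matrix:

```python
# Sketch: expand det(A) in Tu's form (column index permuted) and in
# Walschap's form (row index permuted) and check that the sums agree.
from itertools import permutations

def sign(perm):
    # Parity of a permutation of (0, ..., n-1): +1 if even, -1 if odd.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_columns_permuted(a):
    # Tu: sum over sigma of sgn(sigma) * a[1][sigma(1)] * ... * a[n][sigma(n)]
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][p[i]]
        total += sign(p) * prod
    return total

def det_rows_permuted(a):
    # Walschap: sum over sigma of eps(sigma) * a[sigma(1)][1] * ... * a[sigma(n)][n]
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for j in range(n):
            prod *= a[p[j]][j]
        total += sign(p) * prod
    return total

A = [[2, 1, 0],
     [1, 3, 4],
     [0, 5, 6]]
```

Both expansions give ##-10## for this matrix, consistent with cofactor expansion along the first row.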

Clarification of the above issues will be much appreciated ... ...

Peter
 

Attachments

  • Tu - Proposition 3.27 ... .png (16.6 KB)
  • Walschap - Defn of Determinant ... Page 15 ... .png (32.6 KB)
  • #2
Both definitions are the same, because both sum over all permutations.

Short answer, writing ##a_{ij}## for the ##(i,j)## entry: ##a_{11}a_{22}-a_{12}a_{21}=a_{1\operatorname{id}(1)}\cdot a_{2\operatorname{id}(2)}-a_{1\sigma(1)}\cdot a_{2\sigma(2)}=a_{\operatorname{id}(1)1}\cdot a_{\operatorname{id}(2)2}-a_{\sigma(1)1}\cdot a_{\sigma(2)2}##
for ##\sigma = (1\,2) \in S_2##.

The long answer is to write ##\sum_{\sigma} f(j,\sigma(j))##, then substitute ##i=\sigma(j)##, which gives ##\sum_{\sigma}f(\sigma^{-1}(i),i)##, and observe that summing over all ##\sigma## is the same as summing over all ##\sigma^{-1}##

... plus ##\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1})##.
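The substitution argument can also be checked mechanically. The sketch below (an illustration of mine, not from the post) verifies, for every ##\sigma \in S_3##, that ##\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1})## and that each term ##\prod_j f(j,\sigma(j))## equals the re-indexed term ##\prod_i f(\sigma^{-1}(i),i)##:

```python
# Check the re-indexing i = sigma(j): each summand of one expansion equals
# the corresponding summand of the other, and sgn(sigma) = sgn(sigma^{-1}).
from itertools import permutations

def sign(perm):
    # Parity of a permutation of (0, ..., n-1): +1 if even, -1 if odd.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def inverse(perm):
    # Inverse permutation: inv[sigma(j)] = j.
    inv = [0] * len(perm)
    for j, i in enumerate(perm):
        inv[i] = j
    return tuple(inv)

f = [[7, 2, 5],   # f(j, i) stored as a concrete 3x3 table of values
     [1, 9, 3],
     [4, 8, 6]]

all_terms_match = True
signs_match = True
for p in permutations(range(3)):
    q = inverse(p)
    signs_match = signs_match and (sign(p) == sign(q))
    term1 = 1
    term2 = 1
    for j in range(3):
        term1 *= f[j][p[j]]   # f(j, sigma(j))
    for i in range(3):
        term2 *= f[q[i]][i]   # f(sigma^{-1}(i), i)
    all_terms_match = all_terms_match and (term1 == term2)
```

The terms match pairwise because the factors of each product are the same numbers listed in a different order.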
 
  • #3
fresh_42 said:
Both definitions are the same, because both sum over all permutations.

Short answer, writing ##a_{ij}## for the ##(i,j)## entry: ##a_{11}a_{22}-a_{12}a_{21}=a_{1\operatorname{id}(1)}\cdot a_{2\operatorname{id}(2)}-a_{1\sigma(1)}\cdot a_{2\sigma(2)}=a_{\operatorname{id}(1)1}\cdot a_{\operatorname{id}(2)2}-a_{\sigma(1)1}\cdot a_{\sigma(2)2}##
for ##\sigma = (1\,2) \in S_2##.

The long answer is to write ##\sum_{\sigma} f(j,\sigma(j))##, then substitute ##i=\sigma(j)##, which gives ##\sum_{\sigma}f(\sigma^{-1}(i),i)##, and observe that summing over all ##\sigma## is the same as summing over all ##\sigma^{-1}##.
Thanks fresh_42 ...

Reflecting on what you have written ...

Peter
 

1. What is the Wedge Product?

The wedge product (also known as the exterior product), denoted by the symbol ∧, is a mathematical operation that combines vectors, covectors, or differential forms into a new alternating multilinear object. In particular it is antisymmetric: ##v \wedge w = -\, w \wedge v##.

2. What is the significance of the Wedge Product in mathematics?

The Wedge Product is important in differential geometry and multilinear algebra, as it allows for the creation of new geometric objects and the calculation of their properties. It is also used in physics, particularly in the study of electromagnetism and relativity.

3. What is a determinant in linear algebra?

A determinant is a number computed from a square matrix. It is the (signed) scaling factor for volumes under the linear transformation the matrix represents, and it is used to solve systems of linear equations and to compute areas and volumes in geometry.
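As a concrete illustration of the scaling-factor interpretation (my own example, not from the thread): the image of the unit square under a 2×2 matrix is the parallelogram spanned by the matrix's columns, whose area is the absolute value of the determinant.

```python
# |det M| as the area scaling factor of the linear map M on the plane.
def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] = a*d - b*c.
    (a, b), (c, d) = m
    return a * d - b * c

M = [[3, 1],
     [1, 2]]

# The unit square (area 1) maps to the parallelogram spanned by the
# columns (3, 1) and (1, 2); its area is |det M| = |3*2 - 1*1| = 5.
area_scale = abs(det2(M))
```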

4. What is Proposition 3.27 in relation to the Wedge Product and Determinants?

Proposition 3.27 (in Tu's "An Introduction to Manifolds") states that the wedge product of k covectors, evaluated on k vectors, equals the determinant of the matrix of pairings: ##(\alpha^1 \wedge \cdots \wedge \alpha^k)(v_1, \ldots, v_k) = \det[\alpha^i(v_j)]##. This identity connects the wedge product to determinants and simplifies many calculations involving alternating multilinear forms.
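A small sketch of this identity (my own, assuming covectors on ##\mathbb{R}^3## written as coefficient lists): evaluate the wedge product of three covectors on three vectors via the permutation sum, and compare with the determinant of the matrix of pairings ##[\alpha^i(v_j)]##, computed independently by cofactor expansion.

```python
# Check Proposition 3.27 numerically:
# (alpha^1 ^ alpha^2 ^ alpha^3)(v_1, v_2, v_3) == det[alpha^i(v_j)].
from itertools import permutations

def sign(perm):
    # Parity of a permutation of (0, ..., n-1): +1 if even, -1 if odd.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def covector(c):
    # A covector on R^n given by coefficients: alpha(v) = sum_i c[i] * v[i].
    return lambda v: sum(ci * vi for ci, vi in zip(c, v))

def wedge_eval(alphas, vs):
    # Sum over sigma of sgn(sigma) * alpha^1(v_sigma(1)) * ... * alpha^k(v_sigma(k)).
    k = len(alphas)
    total = 0
    for p in permutations(range(k)):
        prod = 1
        for i in range(k):
            prod *= alphas[i](vs[p[i]])
        total += sign(p) * prod
    return total

def det_cofactor(m):
    # Determinant by Laplace expansion along the first row (independent check).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_cofactor(minor)
    return total

alphas = [covector([1, 0, 2]), covector([0, 1, 1]), covector([1, 1, 0])]
vs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
pairings = [[a(v) for v in vs] for a in alphas]
```

With the standard basis vectors as inputs, the pairing matrix is just the coefficient matrix, and both sides evaluate to ##-3##.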

5. How is Proposition 3.27 used in practical applications?

Proposition 3.27 is used in various fields of mathematics and science, such as differential geometry, physics, and computer graphics. It allows for efficient calculation of determinants and can be applied to solve systems of linear equations and calculate volumes and areas in higher dimensions.
