Wedge Product and Determinants: Tu, Proposition 3.27

SUMMARY

The discussion centers on reconciling two definitions of the determinant, as presented in Loring W. Tu's "An Introduction to Manifolds" (Second Edition) and Walschap's "Multivariable Calculus and Differential Geometry". Both authors define the determinant as a signed sum over permutations of indices, with Tu permuting the column index and Walschap the row index. Despite the different conventions, both definitions yield the same result: each sums over all permutations, and the substitution σ → σ⁻¹ carries one sum into the other.

PREREQUISITES
  • Understanding of linear functions and their properties
  • Familiarity with determinants and their mathematical definitions
  • Knowledge of permutation groups, specifically S_n
  • Basic concepts of differential geometry as presented in Tu's and Walschap's texts
NEXT STEPS
  • Study the properties of determinants in linear algebra
  • Explore the concept of wedge products in differential forms
  • Learn about permutation groups and their applications in mathematics
  • Review the proofs and applications of determinants in multivariable calculus
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in understanding the relationship between determinants and wedge products in the context of differential geometry.

Math Amateur
In Loring W. Tu's book: "An Introduction to Manifolds" (Second Edition) ... Proposition 3.27 reads as follows:
[Attached image: Tu - Proposition 3.27]

The above proposition expresses the wedge product of ##k## linear functions as a determinant. Walschap, in his book "Multivariable Calculus and Differential Geometry", gives the definition of a determinant as follows:
[Attached image: Walschap - Definition of the determinant, page 15]


From Tu's proof above we can say that

##\text{det} [ \alpha^i ( v_j ) ] = \text{det} \begin{bmatrix} \alpha^1 ( v_1 ) & \alpha^1 ( v_2 ) & \cdots & \alpha^1 ( v_k ) \\ \alpha^2 ( v_1 ) & \alpha^2 ( v_2 ) & \cdots & \alpha^2 ( v_k ) \\ \vdots & \vdots & & \vdots \\ \alpha^k ( v_1 ) & \alpha^k ( v_2 ) & \cdots & \alpha^k ( v_k ) \end{bmatrix} = \sum_{ \sigma \in S_k } ( \operatorname{sgn} \sigma ) \, \alpha^1 ( v_{ \sigma (1) } ) \cdots \alpha^k ( v_{ \sigma (k) } )##

Thus Tu is indicating that the column index ##j## is permuted ... that is, we permute the columns of the determinant matrix.

But in the definition of the determinant given by Walschap we have

##\text{det} \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} = \sum_{ \sigma \in S_n } \varepsilon ( \sigma ) \, a_{ \sigma (1) 1 } \cdots a_{ \sigma (n) n }##

Thus Walschap is indicating that the row index ##i## is permuted ... that is, we permute the rows of the determinant matrix ... in contrast to Tu, who permutes the columns.

Can someone please reconcile these two approaches ... do we get the same answer from both?
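One way to check numerically that the two conventions agree is to implement both permutation sums directly. The sketch below (plain Python; the function names `det_tu` and `det_walschap` are my own labels for the two conventions, not from either book) computes Tu's sum over the column index and Walschap's sum over the row index for the same matrix:

```python
from itertools import permutations
from math import prod

def sgn(p):
    # sign of a permutation given as a tuple of 0-indexed values:
    # (-1) raised to the number of inversions
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_tu(a):
    # Tu's convention: sum of sgn(sigma) * a[1][sigma(1)] * ... * a[k][sigma(k)]
    # (the column index is permuted)
    n = len(a)
    return sum(sgn(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det_walschap(a):
    # Walschap's convention: sum of eps(sigma) * a[sigma(1)][1] * ... * a[sigma(n)][n]
    # (the row index is permuted)
    n = len(a)
    return sum(sgn(p) * prod(a[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

a = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_tu(a), det_walschap(a))  # -> 25 25
```

Both sums agree with the cofactor expansion of the same matrix, which is what the question asks to confirm.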

Clarification of the above issues will be much appreciated ...

Peter
 

Attachments

  • Tu - Proposition 3.27 ... .png (16.6 KB)
  • Walschap - Defn of Determinant ... Page 15 ... .png (32.6 KB)
Both definitions are the same, because we sum over all permutations.

Short answer, writing ##(i,j)## for the matrix entry in row ##i## and column ##j##: ##(1,1)(2,2)-(1,2)(2,1)=(1,\operatorname{id}(1))\cdot(2,\operatorname{id}(2))-(1,\sigma(1))\cdot(2,\sigma(2))=(\operatorname{id}(1),1)\cdot(\operatorname{id}(2),2)-(\sigma(1),1)\cdot(\sigma(2),2)##
for ##\sigma = (12) \in S_2##.

The long answer is to write ##\sum_{\sigma} f(j,\sigma(j))##, then substitute ##i=\sigma(j)##, which gives ##\sum_{\sigma}f(\sigma^{-1}(i),i)##, and observe that ##\sum_{\sigma} = \sum_{\sigma^{-1}}## and ##\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1})##.
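The substitution in the long answer can also be checked term by term: pairing each ##\sigma## with ##\sigma^{-1}## matches every summand of one convention with a summand of the other. A small sketch of that pairing (plain Python; `sgn` and `inverse` are my own helper names):

```python
from itertools import permutations
from math import prod

def sgn(p):
    # (-1) raised to the number of inversions of a 0-indexed permutation tuple
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def inverse(p):
    # inverse permutation: q[p[j]] = j
    q = [0] * len(p)
    for j, pj in enumerate(p):
        q[pj] = j
    return tuple(q)

a = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
n = len(a)
for p in permutations(range(n)):
    q = inverse(p)
    assert sgn(p) == sgn(q)  # sgn(sigma) = sgn(sigma^{-1})
    # the column-permuted term for sigma equals the row-permuted term for sigma^{-1}
    assert sgn(p) * prod(a[i][p[i]] for i in range(n)) == \
           sgn(q) * prod(a[q[j]][j] for j in range(n))
```

Since ##\sigma \mapsto \sigma^{-1}## is a bijection of ##S_n##, the two sums are rearrangements of the same terms.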
 
fresh_42 said:
Both definitions are the same, because we sum over all permutations.

Short answer: ##(1,1)(2,2)-(1,2)(2,1)=(1,\operatorname{id}(1))\cdot(2,\operatorname{id}(2))-(1,\sigma(1))\cdot(2,\sigma(2))=(\operatorname{id}(1),1)\cdot(\operatorname{id}(2),2)-(\sigma(1),1)\cdot(\sigma(2),2)##
for ##\sigma = (12) \in S_2##.

The long answer is to write ##\sum_{\sigma} f(j,\sigma(j))##, then substitute ##i=\sigma(j)##, which gives ##\sum_{\sigma}f(\sigma^{-1}(i),i)##, and observe that ##\sum_{\sigma} = \sum_{\sigma^{-1}}##.
Thanks fresh_42 ...

Reflecting on what you have written ...

Peter
 
