I Permutations and Determinants .... Walschap, Theorem 1.3.1 ...

Math Amateur
I am reading Gerard Walschap's book: "Multivariable Calculus and Differential Geometry" and am focused on Chapter 1: Euclidean Space ... ...

I need help with an aspect of the proof of Theorem 1.3.1 ...

The start of Theorem 1.3.1 and its proof read as follows:
[Image: the start of Theorem 1.3.1 and its proof (attachment: Walschap - Theorem 1.3.1 ... .png)]


I tried to understand how/why

##\text{det} ( v_1, \cdot \cdot \cdot , v_n ) = \sum_{ \sigma } a_{ \sigma (1) 1 } \cdot \cdot \cdot a_{ \sigma (n) n } \ \text{det} ( e_{ \sigma (1) } , \cdot \cdot \cdot , e_{ \sigma (n) } )##

where the sum runs over all maps ##\sigma \ : \ J_n = \{ 1 , \cdot \cdot \cdot , n \} \to J_n = \{ 1 , \cdot \cdot \cdot , n \}##

... so ...

... I tried an example with ##\text{det} \ : \ (\mathbb{R}^n)^n \to \mathbb{R}## in the case ##n = 2## ...

so we have ##v_1 = \sum_k a_{ k1 } e_k = a_{11} e_1 + a_{21} e_2##

and

##v_2 = \sum_k a_{ k2 } e_k = a_{12} e_1 + a_{22} e_2##

and then we have

##\text{det} ( v_1, v_2 ) = \text{det} ( a_{11} e_1 + a_{21} e_2 , a_{12} e_1 + a_{22} e_2 )##

##= a_{11} a_{12} \ \text{det} ( e_1, e_1 ) + a_{11} a_{22} \ \text{det} ( e_1, e_2 ) + a_{21} a_{12} \ \text{det} ( e_2, e_1 ) + a_{21} a_{22} \ \text{det} ( e_2, e_2 )##

##= \sum_{ \sigma } a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )##

where the sum runs over all maps

##\sigma \ : \ J_2 = \{ 1, 2 \} \to J_2 = \{ 1 , 2 \}## ... that is, the sum includes the two permutations

##\sigma_1 = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}## and ##\sigma_2 = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}##

BUT how does the formula ##\sum_{ \sigma } a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )## ...

... incorporate or deal with the terms involving ##\text{det} ( e_1, e_1 )## and ##\text{det} ( e_2, e_2 )## ... ?

Hope someone can help ...

Peter
 

Math Amateur said:
I tried to understand how/why ##\text{det} ( v_1, \cdot \cdot \cdot , v_n ) = \sum_{ \sigma } a_{ \sigma (1) 1 } \cdot \cdot \cdot a_{ \sigma (n) n } \ \text{det} ( e_{ \sigma (1) } , \cdot \cdot \cdot , e_{ \sigma (n) } )## where the sum runs over all maps ##\sigma \ : \ J_n \to J_n## ... how does the formula incorporate or deal with the terms involving ##\text{det} ( e_1, e_1 )## and ##\text{det} ( e_2, e_2 )## ... ?

Quick reply (I will have more time this evening, if necessary):

The author seems to be overcomplicating things. Just define the determinant as the sum that runs over all permutations in the first place; then you don't even need to worry about what happens with non-injective maps.

As for your question: ##\det(e_1,e_1) = 0 = \det(e_2,e_2)##. This follows easily from the defining properties of the determinant: it is alternating, so it vanishes whenever two arguments are equal.
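To see this concretely, here is a small Python sketch (the matrix entries are chosen arbitrarily) that sums over all four maps ##\sigma : \{1,2\} \to \{1,2\}## and confirms that the two non-injective maps contribute ##0##, so only the two permutations survive:

```python
from itertools import product
import numpy as np

# Columns v_1, v_2 of a 2x2 matrix; the entries a_{k j} are arbitrary.
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
e = np.eye(2)  # e[:, 0] = e_1, e[:, 1] = e_2

# Sum over ALL maps sigma: {1,2} -> {1,2}, not just the bijections.
total = 0.0
for s1, s2 in product((0, 1), repeat=2):
    # det(e_{sigma(1)}, e_{sigma(2)}) is 0 whenever sigma(1) == sigma(2),
    # so the two non-injective maps drop out of the sum automatically.
    d = np.linalg.det(np.column_stack((e[:, s1], e[:, s2])))
    total += A[s1, 0] * A[s2, 1] * d

print(total, np.linalg.det(A))  # both equal 3*2 - 1*4 = 2
```

The loop runs over four maps, but only the terms for the identity and the transposition are nonzero, which is exactly why the sum over all maps collapses to a sum over permutations.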
 
Math_QED said:
As for your question ##\det(e_1,e_1) = 0 = \det(e_2,e_2)##. This easily follows by the defining determinant properties.

Thanks Math_QED ...

Peter
 
Math Amateur said:
... I tried an example with ##\text{det} \ : \ (\mathbb{R}^n)^n \to \mathbb{R}## ... ...
I am not sure I understand this part, but determinants apply only to square matrices; a map from ##\mathbb R^{n^2} \rightarrow \mathbb R## is not represented by a square matrix.
 
WWGD said:
I am not sure I understand this part but determinants apply only to square matrices; a map from ##\mathbb R^{n^2} \rightarrow \mathbb R ## is not represented by a square matrix.

The determinant function maps ##\mathbb{R}^{n^2}## into ##\mathbb{R}##.
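In other words, ##\det## takes ##n^2## real inputs (the entries of the ##n## column vectors) and returns a single real number. A trivial Python illustration for ##n = 2## (the entries are arbitrary):

```python
import numpy as np

# det as a map R^{n^2} -> R: four numbers in, one number out (n = 2).
entries = np.array([1.0, 2.0, 3.0, 4.0])      # a point of R^4 = R^{2^2}
value = np.linalg.det(entries.reshape(2, 2))  # rows (1, 2) and (3, 4)
print(value)  # 1*4 - 2*3 = -2
```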
 
Math Amateur said:
I tried to understand how/why ##\text{det} ( v_1, \cdot \cdot \cdot , v_n ) = \sum_{ \sigma } a_{ \sigma (1) 1 } \cdot \cdot \cdot a_{ \sigma (n) n } \ \text{det} ( e_{ \sigma (1) } , \cdot \cdot \cdot , e_{ \sigma (n) } )## where the sum runs over all maps ##\sigma \ : \ J_n = \{ 1 , \cdot \cdot \cdot , n \} \to J_n##
This is just the definition of multilinearity: it means you can pull scaling factors out of each argument separately.

For 2 variables: ##M( c_1 a_1, c_2 a_2 ) = c_1 M( a_1, c_2 a_2 ) = c_1 c_2 M( a_1, a_2 )##.

For ##k## variables: ##M( c_1 a_1, \cdot \cdot \cdot , c_k a_k ) = c_1 M( a_1, c_2 a_2, \cdot \cdot \cdot , c_k a_k ) = \cdot \cdot \cdot = c_1 c_2 \cdot \cdot \cdot c_k M( a_1, \cdot \cdot \cdot , a_k )##.

According to the exercise, if a map is multilinear, alternating, and equal to ##1## on ##( e_1, \cdot \cdot \cdot , e_n )##, then it must be the determinant.
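A quick numerical check of this scaling property, using NumPy's determinant as the multilinear map ##M## (the vectors and scalars are arbitrary):

```python
import numpy as np

v1 = np.array([1.0, 3.0])
v2 = np.array([2.0, 5.0])
c1, c2 = 4.0, -2.0

# Multilinearity in each column: det(c1*v1, c2*v2) = c1*c2*det(v1, v2)
lhs = np.linalg.det(np.column_stack((c1 * v1, c2 * v2)))
rhs = c1 * c2 * np.linalg.det(np.column_stack((v1, v2)))
print(lhs, rhs)  # both equal 4*(-2)*(1*5 - 2*3) = 8
```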
 
Thanks for clarifying the issues, WWGD ...

Just for interest ... after proving Theorem 1.3.1 Walschap defines a determinant as follows:
[Image: Walschap's definition of the determinant, page 15 (attachment: Walschap - Defn of Determinant ... Page 15 ... .png)]
Peter
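The attached definition is not reproduced here, but the standard permutation-sum (Leibniz) definition, ##\det A = \sum_{ \sigma \in S_n } \text{sgn}( \sigma ) \, a_{ \sigma (1) 1 } \cdot \cdot \cdot a_{ \sigma (n) n }##, can be sketched in Python and checked against NumPy (the ##3 \times 3## matrix is arbitrary):

```python
from itertools import permutations
import numpy as np

def leibniz_det(A):
    """Determinant via the permutation sum: for each sigma in S_n,
    add sgn(sigma) * a_{sigma(1) 1} * ... * a_{sigma(n) n}."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sgn(sigma) = (-1)^(number of inversions of the permutation)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = (-1) ** inversions
        prod = 1.0
        for col in range(n):
            prod *= A[perm[col], col]
        total += sign * prod
    return total

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
print(leibniz_det(A), np.linalg.det(A))  # both equal 21
```

Note that here the sum runs over the ##n!## permutations only; by the discussion above, including the remaining non-injective maps would add nothing, since each of those terms carries a factor ##\det## with a repeated basis vector.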
 
