Permutations and Determinants .... Walschap, Theorem 1.3.1 ...


Discussion Overview

The discussion revolves around understanding an aspect of the proof of Theorem 1.3.1 from Gerard Walschap's book "Multivariable Calculus and Differential Geometry," specifically focusing on the determinant of vectors in Euclidean space. Participants explore the implications of the determinant's definition and its application to non-injective maps.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant, Peter, questions how the formula for the determinant incorporates terms involving ##\text{det}(e_1, e_1)## and ##\text{det}(e_2, e_2)##, suggesting confusion over the treatment of these terms.
  • Another participant suggests that the author complicates the case and proposes defining the determinant as a sum over all permutations from the outset, implying that this would avoid issues with non-injective maps.
  • It is noted that ##\text{det}(e_1, e_1) = 0## and ##\text{det}(e_2, e_2) = 0##, which follows from the properties of determinants.
  • Some participants express uncertainty about the application of determinants to non-square matrices, raising concerns about the mapping from ##\mathbb{R}^{n^2}## to ##\mathbb{R}##.
  • Another participant mentions that the definition of multilinearity allows for pulling out scaling factors in the context of determinants.

Areas of Agreement / Disagreement

Participants express differing views on how to approach the definition and application of determinants, particularly regarding non-injective maps and the treatment of specific cases. The discussion remains unresolved with multiple competing perspectives on the topic.

Contextual Notes

There are limitations regarding the assumptions made about the nature of the determinant and its application to non-square matrices, as well as the implications of multilinearity in the context of the theorem.

Math Amateur
I am reading Gerard Walschap's book: "Multivariable Calculus and Differential Geometry" and am focused on Chapter 1: Euclidean Space ... ...

I need help with an aspect of the proof of Theorem 1.3.1 ...

The start of Theorem 1.3.1 and its proof read as follows:
[Attachment: Walschap - Theorem 1.3.1 ... .png]


I tried to understand how/why

##\text{det} ( v_1, \cdots , v_n ) = \sum_{ \sigma } a_{ \sigma (1) 1 } \cdots a_{ \sigma (n) n } \ \text{det} ( e_{ \sigma (1) } , \cdots , e_{ \sigma (n) } )##

where the sum runs over all maps ##\sigma \ : \ J_n = \{ 1 , \cdot \cdot \cdot , n \} \to J_n = \{ 1 , \cdot \cdot \cdot , n \}##

... so ...

... I tried an example with ##\text{det} \ : \ (\mathbb{R}^n)^n \to \mathbb{R}## in the case ##n = 2## ... ...

so we have ##v_1 = \sum_k a_{ k1 } e_k = a_{11} e_1 + a_{21} e_2##

and

##v_2 = \sum_k a_{ k2 } e_k = a_{12} e_1 + a_{22} e_2##

and then we have

##\text{det} ( v_1, v_2 ) = \text{det} ( a_{11} e_1 + a_{21} e_2 , \ a_{12} e_1 + a_{22} e_2 )##

##= a_{11} a_{12} \ \text{det} ( e_1, e_1 ) + a_{11} a_{22} \ \text{det} ( e_1, e_2 ) + a_{21} a_{12} \ \text{det} ( e_2, e_1 ) + a_{21} a_{22} \ \text{det} ( e_2, e_2 )##

##= \sum_{ \sigma } a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )##

where the sum runs over all maps

##\sigma \ : \ J_2 = \{ 1, 2 \} \to J_2 = \{ 1 , 2 \}## ...

... that is, the sum runs over the two permutations

##\sigma_1 = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}## and ##\sigma_2 = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}##

BUT how does the formula ##\sum_{ \sigma } a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )## ...

... incorporate or deal with the terms involving ##\text{det} ( e_1, e_1 )## and ##\text{det} ( e_2, e_2 )## ... ?

Hope someone can help ...

Peter
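A quick numerical sketch (my own, not from the book) of the 2x2 computation above: summing ##a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )## over all four maps ##\sigma : \{1,2\} \to \{1,2\}##, including the two non-injective ones, gives the same value as summing over the two permutations only, precisely because ##\text{det}(e_i, e_i) = 0##. The concrete entries ##a_{ij}## below are an arbitrary example.

```python
# Check, for a concrete 2x2 example, that summing over ALL maps
# sigma: {1,2} -> {1,2} (injective or not) agrees with summing over
# permutations only, since det(e_i, e_i) = 0 kills the extra terms.
from itertools import product, permutations

def det_e(i, j):
    """det(e_i, e_j) for the standard basis vectors of R^2."""
    if i == j:
        return 0               # repeated argument => determinant 0
    return 1 if (i, j) == (1, 2) else -1

# columns v_1 = (a11, a21), v_2 = (a12, a22); arbitrary sample values
a = {(1, 1): 3, (2, 1): 5, (1, 2): 2, (2, 2): 7}

# sum over all 4 maps sigma
all_maps = sum(a[(s[0], 1)] * a[(s[1], 2)] * det_e(s[0], s[1])
               for s in product((1, 2), repeat=2))

# sum over the 2 permutations only
perms = sum(a[(s[0], 1)] * a[(s[1], 2)] * det_e(s[0], s[1])
            for s in permutations((1, 2)))

print(all_maps, perms)  # both equal a11*a22 - a21*a12 = 3*7 - 5*2 = 11
```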
 

Attachments

  • Walschap - Theorem 1.3.1 ... .png
Math Amateur said:
BUT how does the formula ##\sum_{ \sigma } a_{ \sigma (1) 1 } a_{ \sigma (2) 2 } \ \text{det} ( e_{ \sigma (1) }, e_{ \sigma (2) } )## incorporate or deal with the terms involving ##\text{det} ( e_1, e_1 )## and ##\text{det} ( e_2, e_2 )## ... ?

Quick reply (I will have more time this evening, if necessary):

The author seems to overcomplicate things. Just define the determinant as the sum that runs over all permutations in the first place. Then you don't even need to worry about what happens with non-injective maps.

As for your question: ##\det(e_1,e_1) = 0 = \det(e_2,e_2)##. This follows easily from the defining properties of the determinant: an alternating form vanishes whenever two of its arguments coincide.
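Math_QED's suggestion — taking the signed sum over permutations (the Leibniz formula) as the definition — can be sketched as follows. The inversion-count sign function is an illustrative implementation choice of mine, not anything from Walschap's text.

```python
# Sketch of the Leibniz formula:
#   det(A) = sum over permutations sigma of sgn(sigma) * a_{sigma(1)1} * ... * a_{sigma(n)n}.
# Non-injective maps simply never enter the sum.
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation (a tuple of 0..n-1), via its inversion count."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    n = len(A)
    # A[p[i]][i] is the entry a_{sigma(i+1), i+1} in 0-based indexing,
    # i.e. the product runs down the columns as in the thread's formula
    return sum(sign(p) * prod(A[p[i]][i] for i in range(n))
               for p in permutations(range(n)))

print(leibniz_det([[3, 2], [5, 7]]))  # 3*7 - 5*2 = 11
```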
 
Math_QED said:
As for your question ##\det(e_1,e_1) = 0 = \det(e_2,e_2)##. This easily follows by the defining determinant properties.


Thanks Math_QED ...

Peter
 
Math Amateur said:
... I tried an example with ##\text{det} \ : \ (\mathbb{R}^n)^n \to \mathbb{R}## ... ...
I am not sure I understand this part, but determinants apply only to square matrices; a map ##\mathbb R^{n^2} \rightarrow \mathbb R## is not represented by a square matrix.
 
WWGD said:
I am not sure I understand this part but determinants apply only to square matrices; a map from ##\mathbb R^{n^2} \rightarrow \mathbb R ## is not represented by a square matrix.

The determinant function maps ##\mathbb{R}^{n^2}## into ##\mathbb{R}##.
 
Math Amateur said:
##\text{det} ( v_1, \cdots , v_n ) = \sum_{ \sigma } a_{ \sigma (1) 1 } \cdots a_{ \sigma (n) n } \ \text{det} ( e_{ \sigma (1) } , \cdots , e_{ \sigma (n) } )##

where the sum runs over all maps ##\sigma \ : \ J_n = \{ 1 , \cdots , n \} \to J_n = \{ 1 , \cdots , n \}##
This is just the definition of multilinearity: it means you can pull out scaling factors, one slot at a time.

For 2 variables: ##M( c_1 a_1, c_2 a_2 ) = c_1 M( a_1, c_2 a_2 ) = c_1 c_2 M( a_1, a_2 )##

For k variables: ##M( c_1 a_1, \dots, c_k a_k ) = c_1 M( a_1, c_2 a_2, \dots, c_k a_k ) = c_1 c_2 M( a_1, a_2, c_3 a_3, \dots, c_k a_k ) = \dots = c_1 c_2 \cdots c_k M( a_1, \dots, a_k )##

According to the exercise, if a map is multilinear, alternating, and equal to ##1## on ##( e_1, \dots, e_n )##, then it must be the determinant.
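A small check (my own example, not from the thread) that scalars pull out of each slot of the 2x2 determinant, as described above:

```python
# Check multilinearity slot by slot: det(c1*v1, c2*v2) = c1*c2*det(v1, v2),
# using the explicit 2x2 determinant.
def det2(v, w):
    """Determinant of the 2x2 matrix with columns v and w."""
    return v[0] * w[1] - v[1] * w[0]

def scale(c, v):
    return [c * x for x in v]

v1, v2 = [3, 5], [2, 7]   # arbitrary sample columns
c1, c2 = 4, -3            # arbitrary scaling factors

lhs = det2(scale(c1, v1), scale(c2, v2))
rhs = c1 * c2 * det2(v1, v2)
print(lhs == rhs)  # True
```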
 
Thanks for clarifying the issues, WWGD ...

Just for interest ... after proving Theorem 1.3.1 Walschap defines a determinant as follows:
[Attachment: Walschap - Defn of Determinant ... Page 15 ... .png]
Peter
 

Attachments

  • Walschap - Defn of Determinant ... Page 15 ... .png
