About a permutation acting on the identity matrix

  • #1
aalma
Question:
Let ##\sigma\in S_n## be a permutation and let ##T_{\sigma}## be the matrix we obtain from ##I## by applying ##\sigma## to the rows of ##I## (i.e., ##\sigma## acts on the rows of ##I##). Then:

1. ##\det(T_{\sigma}) = sgn(\sigma) ##
and 2. ##T_{\sigma} T_{\tau} =T_{\sigma\circ \tau}##, for all ##\sigma, \tau \in S_n##.

For the first part, I know that each permutation can be written as a product of transpositions.
For ##\sigma\in S_n##, either it is even (its signature is ##1##) or odd (its signature is ##-1##). And here I think ##\det(T_{\sigma})## would equal ##(-1)^r \cdot \det(I)=(-1)^r##, depending on how many transpositions ##r## appear in the product.
Now, how do I connect these things together correctly?

The second part seems somehow trivial, but what is the formal explanation of this equality?
 
  • #2
Look at the general definition of the determinant of a matrix.

What is the determinant?
It is a sum that runs over all permutations of the indices {1,2,3,...,n} of the square matrix (all n! of them):
$$det(A)
=
\begin{vmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & ~ & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n}
\end{vmatrix}
=
\sum_{\sigma \in S_n} sign(\sigma) \cdot a_{1,\sigma(1)} \cdot a_{2,\sigma(2)} \cdot a_{3,\sigma(3)} \dots a_{n,\sigma(n)}$$

[Method 1]
For ##T_{\sigma}##, the sum has ##n!## terms, but only one of them is nonzero.
For every other permutation ##\tau## in the sum, at least one of the factors ##a_{1,\tau(1)}, a_{2,\tau(2)}, \dots, a_{n,\tau(n)}## is zero, so the whole product is zero.

$$det(T_{\sigma})
=
det\left(\sigma
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
\right)
= sign(\sigma) \cdot 1 \cdot 1 \cdot 1 \cdots 1
= sign(\sigma)
$$

You do not have to worry about the number of transpositions in a permutation.
It is already "built in" to the determinant definition (formula):
##sign(\sigma)=(-1)^{t(\sigma)}##, where ##t(\sigma)## is the number of transpositions (swaps of two elements) in any decomposition of ##\sigma##; only its parity is well defined.
That is one method.
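
To see this concretely, here is a minimal Python sketch (my own addition, not from the thread; the helper names perm_matrix, sign and leibniz_det are made up) that builds ##T_{\sigma}## and evaluates the Leibniz sum term by term:

```python
# A minimal sketch (my own, with made-up helper names): build T_sigma and
# evaluate the n!-term Leibniz sum directly.
from itertools import permutations
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix sending row i of I to row sigma[i] (0-based)."""
    n = len(sigma)
    T = np.zeros((n, n), dtype=int)
    for i in range(n):
        T[sigma[i], i] = 1
    return T

def sign(p):
    """Sign of a permutation tuple p, via the number of inversions."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inv

def leibniz_det(A):
    """Determinant computed as the n!-term Leibniz sum."""
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

sigma = (1, 0, 2)                   # 0-based version of the swap 1 <-> 2
T = perm_matrix(sigma)
# The single surviving term belongs to sigma^{-1}, whose sign equals sign(sigma).
print(leibniz_det(T), sign(sigma))  # -1 -1
```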

[Method 2]
Another method uses the fact that swapping two rows of a matrix flips the sign of its determinant.

Now start swapping rows of the matrix ##I## in order to obtain the matrix ##T_{\sigma}##.

In the first column of the matrix ##T_{\sigma}##, where is the number 1? (All other rows of that column are 0s.)

If the 1 is in the first row, do not swap any rows; the determinant stays 1. Go to the next column.
If the 1 is not in the first row, swap the first row with that row, so that the 1 of the first column goes to its place in the first column of ##T_{\sigma}##.

In the 2nd column of the matrix ##T_{\sigma}##, where is the number 1?

If the 1 is in the 2nd row, all is fine; go to the next column.
If the 1 is not in the 2nd row, swap the 2nd row with that row, so that the 1 of the 2nd column goes to its place in the 2nd column of ##T_{\sigma}##.
...
And so on, up to the last (n-th) column of the matrix.
$$
det(T_{\sigma})
= det(I) \cdot (-1)^{t(\sigma)}
= sign(\sigma)
$$
where ##t(\sigma)## is the number of row swaps performed (one transposition of the permutation per swap).
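
A short Python sketch of this swap-counting procedure (again my own code, not from the thread) might look like:

```python
# Sketch of method 2 (my own code): undo the permutation by row swaps,
# counting them; each swap flips the sign of the determinant once.
import numpy as np

def det_by_row_swaps(T):
    A = T.copy()
    n = A.shape[0]
    swaps = 0
    for col in range(n):
        row = int(np.argmax(A[:, col]))    # the row holding this column's 1
        if row != col:
            A[[col, row]] = A[[row, col]]  # swap two rows
            swaps += 1
    return (-1) ** swaps                   # det(I) * (-1)^swaps

T = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
print(det_by_row_swaps(T))                 # -1 : one swap suffices
```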
 
  • #3
Bosko said:
Look at the general definition of the determinant of a matrix. […]
Thanks! In the first part, I am still confused about what ##a_{1,\sigma(1)}## and so on represent. Are they the entries of ##I## after applying ##\sigma##, so that the product of the ##a_{i, \sigma(i)}## is nonzero only when ##\sigma=id##?

In the other part, can I say that
##T_{\sigma} T_{\tau}=\sigma(I)\,\tau(I)## and
##T_{\sigma \circ\tau} =(\sigma \circ \tau)(I)##, so the equality follows from the definition of composition of permutations?
 
  • #4
aalma said:
Thanks! In the first part, I am still confused about what ##a_{1,\sigma(1)}## and so on represent. Are they the entries of ##I## after applying ##\sigma##, so that the product of the ##a_{i, \sigma(i)}## is nonzero only when ##\sigma=id##?
Yes, and the same holds for ##T_\tau##: the product ##a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}## of entries of ##T_\tau## is nonzero only when ##\sigma=\tau##.

For example, take 3×3 matrices.
The identity matrix, whose entries are the Kronecker delta ##\delta_{ij}##, is
##I=
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
##

and permutation

##\sigma : (1,2,3) \mapsto (2,1,3)##

meaning
##\sigma(1)=2##
##\sigma(2)=1##
##\sigma(3)=3##

The corresponding ##T_{\sigma}## matrix is
##T_{\sigma}=
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}
##

Now you want to calculate ##det(T_{\sigma})##. We have to go through all 6 permutations of the indices (1,2,3) and add up the products:

##(1,2,3) \Rightarrow (-1)^0 \cdot a_{1,1} \cdot a_{2,2} \cdot a_{3,3} = 0 \cdot 0 \cdot 1 =0##
2,3 swap ...
##(1,3,2) \Rightarrow (-1)^1 \cdot a_{1,1} \cdot a_{2,3} \cdot a_{3,2} = - 0 \cdot 0 \cdot 0=0##
1,2 swap ...
##(2,3,1) \Rightarrow (-1)^2 \cdot a_{1,2} \cdot a_{2,3} \cdot a_{3,1} = 1 \cdot 0 \cdot 0=0##
3,1 swap ...
##(2,1,3) \Rightarrow (-1)^3 \cdot a_{1,2} \cdot a_{2,1} \cdot a_{3,3} = - 1 \cdot 1 \cdot 1=-1##
2,3 swap ...
##(3,1,2) \Rightarrow (-1)^4 \cdot a_{1,3} \cdot a_{2,1} \cdot a_{3,2} = 0 \cdot 1 \cdot 0=0##
1,2 swap ...
##(3,2,1) \Rightarrow (-1)^5 \cdot a_{1,3} \cdot a_{2,2} \cdot a_{3,1} = - 0 \cdot 0 \cdot 0=0##

The sum is ##det(T_{\sigma})=-1##
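
A quick numerical cross-check of this hand computation (my own addition, using numpy's built-in determinant):

```python
import numpy as np

T = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
print(round(np.linalg.det(T)))   # -1, matching the six-term sum above
```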

aalma said:
In the other part, can I say that
##T_{\sigma} T_{\tau}=\sigma(I)\,\tau(I)## and
##T_{\sigma \circ\tau} =(\sigma \circ \tau)(I)##, so the equality follows from the definition of composition of permutations?
Yes, that is it ... That is just yet another method to calculate the same thing

##I=
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
##

If you swap the first and the second row ...
##T_{\sigma}=
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}
##

Just one swap ##det(T_{\sigma})=(-1)^1 det(I)=-1##
 
  • #5
Bosko said:
Yes, and the same holds for ##T_\tau##: the product ##a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}## of entries of ##T_\tau## is nonzero only when ##\sigma=\tau##. […]
Just to make sure: I think there is some confusion in the calculations in the ##S_3## example?

##T_{\sigma}T_{\tau}## is a product of two matrices and equals the matrix ##T_{\sigma\circ\tau}##, right? Now, even though that is intuitively clear, what is the way to show it formally? Is what was already shown in the first part useful here?
 
  • #6
Bosko said:
Yes, and the same holds for ##T_\tau##: the product ##a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}## of entries of ##T_\tau## is nonzero only when ##\sigma=\tau##. […]

Can you please take a look at the previous comment?
 
  • #7
aalma said:
Just to make sure: I think there is some confusion in the calculations in the ##S_3## example?

##T_{\sigma}T_{\tau}## is a product of two matrices and equals the matrix ##T_{\sigma\circ\tau}##, right? Now, even though that is intuitively clear, what is the way to show it formally? Is what was already shown in the first part useful here?
Just try it with two 3×3 matrices ##T_{\sigma}, T_{\tau}## and check what you get (##T_{\sigma \circ \tau}## or something else).
Check it also with the permutation acting on columns instead of rows; the formulas produce analogous results with rows and columns interchanged.
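
For instance, a small Python sketch of this check (my own code; perm_matrix and the sample permutations are illustrative) could be:

```python
import numpy as np

def perm_matrix(sigma):
    """Row i of I is moved to row sigma[i] (0-based), so T e_i = e_{sigma(i)}."""
    n = len(sigma)
    T = np.zeros((n, n), dtype=int)
    for i in range(n):
        T[sigma[i], i] = 1
    return T

sigma = (1, 0, 2)                               # swap rows 1 and 2
tau   = (0, 2, 1)                               # swap rows 2 and 3
comp  = tuple(sigma[tau[i]] for i in range(3))  # sigma o tau
print(np.array_equal(perm_matrix(sigma) @ perm_matrix(tau),
                     perm_matrix(comp)))        # True
```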
 
  • #8
I think it's actually easier to prove that these multiply the way you want if you think of them as functions, not matrices:

##T_\sigma : (x_1,\dots,x_n)\mapsto (x_{\sigma(1)},\dots,x_{\sigma(n)})##
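
A note one can add here (my own aside, not from the thread): the convention matters. With the map above, ##(T_\sigma x)_i = x_{\sigma(i)}##, and the index computation actually gives ##T_\sigma T_\tau = T_{\tau\circ\sigma}##. To get ##T_{\sigma\circ\tau}##, as in the original problem, use the convention in which ##\sigma## sends row ##i## of ##I## to row ##\sigma(i)##, i.e. ##(T_\sigma)_{ij}=\delta_{i,\sigma(j)}##. Then

$$(T_{\sigma} T_{\tau})_{ik}=\sum_{j}(T_{\sigma})_{ij}(T_{\tau})_{jk}=\sum_{j}\delta_{i,\sigma(j)}\,\delta_{j,\tau(k)}=\delta_{i,\sigma(\tau(k))}=(T_{\sigma\circ\tau})_{ik}.$$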
 

What is a permutation acting on the identity matrix?

A permutation acting on the identity matrix involves rearranging the rows or columns of the identity matrix according to a specific permutation. The identity matrix, which is a square matrix with ones on the diagonal and zeros elsewhere, transforms into another matrix where the rows or columns are shuffled according to the rules of the permutation.

How does a permutation change the identity matrix?

When a permutation is applied to the identity matrix, it alters the positions of the '1's in the matrix. For example, if a permutation σ acts on the identity matrix by permuting its rows, the i-th row of the identity matrix moves to the σ(i)-th position. This results in a new matrix where the diagonal elements might no longer align on the main diagonal, depending on the permutation.

What are the mathematical properties of a matrix obtained by permuting the identity matrix?

The matrix obtained by permuting the identity matrix, often called a permutation matrix, retains several important properties. It is still a square matrix, it is orthogonal, and its inverse is its transpose. Each row and each column contains exactly one entry equal to 1, with all other entries 0. Such matrices are used to represent permutations in matrix form and are key in operations such as row and column exchanges in algorithms.
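
For example, a quick numpy check of the inverse-equals-transpose property (an illustrative sketch, not part of the original discussion):

```python
import numpy as np

P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])          # the permutation matrix from the thread
print(np.array_equal(P @ P.T, np.eye(3, dtype=int)))  # True: P^{-1} = P^T
```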

Can permuting the identity matrix result in the same matrix?

Yes, permuting the identity matrix can result in the same matrix if the permutation applied is the identity permutation, where each element maps to itself. In any other non-trivial permutation, the matrix will change, reflecting the specific reordering dictated by the permutation.

What are the practical applications of permuting the identity matrix?

Permuting the identity matrix has practical applications in various fields including computer science, mathematics, and engineering. It is particularly used in algorithms for matrix computations, solving linear systems, and in operations research for tasks such as optimizing network flows or rearranging data structures. In computer graphics, permutation matrices are used for operations like image transformation.
