Proving Identity for Determinant of $A^tA$

In summary, the conversation discusses how to prove an identity about determinants for a matrix A with m rows and n columns. The identity states that the determinant of A^tA equals the sum of the squared determinants of the n×n submatrices A_{j_1...j_n} of A, where A_{j_1...j_n} is the matrix whose (i,k)-entry is a_{j_i,k} and A^t is the transpose of A. The conversation also clarifies the notation for A_{j_1...j_n} and discusses potential methods for proving the identity, including induction on the size of the matrix.
  • #1
tommyxu3
I have a problem proving an identity about determinants. For ##A\in M_{m\times n}(\mathbb{R}),## a matrix with ##m## rows and ##n## columns, prove the following identity.

$$|\det(A^tA)|=\sum_{1\le j_1\le ... \le j_n \le m} (\det(A_{j_1...j_n}))^2$$
where ##A_{j_1...j_n}## is the matrix whose ##(i,k)##-entry is ##a_{j,k},## and ##A^t## is the transpose of ##A.##

Here is an example to make this clear:

$$\det (
\begin{pmatrix}
1 & 3 &5\\
2 & 4 &6\\
\end{pmatrix}
\begin{pmatrix}
1 & 2 \\
3 & 4 \\
5 & 6
\end{pmatrix}) =
(\det\begin{pmatrix}
1 & 2 \\
3 & 4 \\
\end{pmatrix})^2+
(\det\begin{pmatrix}
1 & 2 \\
5 & 6 \\
\end{pmatrix})^2+
(\det\begin{pmatrix}
3 & 4 \\
5 & 6 \\
\end{pmatrix})^2
.$$

The equation clearly holds for a square matrix, but I cannot solve the general case... I tried to prove it by induction from ##m\times n## to ##(m+1)\times n## but failed. This may be related to the notion of area (the given example is the area of a triangle in a plane).
Thanks for any ideas in advance!
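For readers who want to check the stated example numerically, here is a small sketch (not part of the original post; it uses NumPy and assumes the row-selection reading of ##A_{j_1...j_n}## that is clarified later in the thread):

```python
import numpy as np
from itertools import combinations

# The 3x2 matrix A from the example (m = 3 rows, n = 2 columns).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

m, n = A.shape

# Left-hand side: det(A^t A).
lhs = np.linalg.det(A.T @ A)

# Right-hand side: sum of squared determinants of the n x n matrices
# obtained by keeping n of the m rows of A.
rhs = sum(np.linalg.det(A[list(rows), :]) ** 2
          for rows in combinations(range(m), n))

print(lhs, rhs)  # both are 24 (up to floating-point rounding)
```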
 
  • Like
Likes Paul Colby
  • #2
I have trouble understanding the notation ##A_{j_1 ... j_n}##. Your example is an ##m \times n = 3 \times 2## matrix, so the possible index arrays are ##(1,1),(1,2),(1,3),(2,2),(2,3),(3,3).## Could you explain, e.g., ##A_{13}## and ##A_{22}##?
 
  • #3
fresh_42 said:
I have trouble understanding the notation ##A_{j_1 ... j_n}##. Your example is an ##m \times n = 3 \times 2## matrix, so the possible index arrays are ##(1,1),(1,2),(1,3),(2,2),(2,3),(3,3).## Could you explain, e.g., ##A_{13}## and ##A_{22}##?
That's my error, as you may see from my example. The correct version should be: ##\sum_{1\le j_1<...<j_n\le n}.##
 
  • #4
tommyxu3 said:
That's my error, as you may see from my example. The correct version should be: ##\sum_{1\le j_1<...<j_n\le n}.##
That still doesn't explain ##A_{13}##. Normally such a notation denotes a diagonal entry or a cofactor. What does it mean for a matrix that isn't square?
 
  • #5
It means the matrix$$
\begin{pmatrix}
1 & 2 \\
5 & 6 \\
\end{pmatrix},$$
like the notation above.
Besides, I found another error; it should be: "where ##A_{j_1...j_n}## is the matrix whose ##(i,k)##-entry is ##a_{j_i k},## and ##A^t## is the transpose of ##A.##"
 
  • #6
Sorry. Maybe I'm stupid or stubborn, possibly both. But if
$$A_{13} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}_{13} = \begin{bmatrix} 1 & 2 \\ 5 & 6 \end{bmatrix},$$
does ##A_{13}## mean the second row is deleted? But your sum allows only the array ##(1,2)##, since you corrected the summation boundary from ##m = 3## to ##n=2##.
Or do you write the columns first and the rows second? You have, as I am used to it, a ##(3 \times 2)##-matrix ##A \in \mathbb{M}_{3 \times 2}(\mathbb{R}).## Note that ##A## is the second factor in ##\det (A^t A)##.

I'm trying to understand the task and whether it is possible at all. Matrices that aren't square don't occur that often, so I'm not used to them. In particular, I'm asking myself what the formula would look like if ##A## were simply a vector.
 
  • #7
Oops, an error again... My correction was only about changing ##"\le"## to ##"<"##; the boundary runs from ##1## to ##m## as it did initially. Thanks for checking again! Do you have any ideas?
 
  • #8
tommyxu3 said:
Oops, an error again... My correction was only about changing ##"\le"## to ##"<"##; the boundary runs from ##1## to ##m## as it did initially. Thanks for checking again! Do you have any ideas?
If ##A^t = (a_1, \dots ,a_m)##, then ##A## has ##m## rows and ##n=1## column.
Thus ##| \det(A^t A) | = | \sum_{j=1}^{m} a_j^2 |##, and ##1 \leq j_1 \leq m## gives the only possible arrays of indices ##j_k##. Therefore ##A_j = a_j## is the only meaningful way to define ##A_j##, and the formula is valid. So, I assume the ##j##-array denotes the numbers of rows that are not to be eliminated.
Give me a second to check the left-hand-side determinant of your example. The general formula for determinants is a bit complicated to handle.
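A tiny numeric illustration of this ##n = 1## base case (my own sketch, not from the post): for a single column, both sides reduce to the sum of the squared entries.

```python
import numpy as np

a = np.array([[2.0], [-1.0], [3.0]])      # m = 3 rows, n = 1 column

lhs = np.linalg.det(a.T @ a)              # determinant of a 1x1 matrix
rhs = sum(x ** 2 for x in a.ravel())      # sum of the squared 1x1 "minors" a_j

print(lhs, rhs)  # both 14.0
```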
 
  • Like
Likes tommyxu3
  • #9
fresh_42 said:
So, I assume the ##j##-array denotes the numbers of rows that are not to be eliminated.
That's exactly what I wanted to convey.
Thanks for your assistance!
 
  • #10
Ok. Your example isn't a counterexample. Both sides sum up to ##24##.
Induction is probably an ugly task. The base case is ##m \times 1##, so the induction step would be proving the equation for ##m \times (n+1)## given the formula for ##m \times n##. I have to think about a less uncomfortable way. I'm sure there is one.
Edit: It could certainly be done by expanding the determinant along the additional last column.
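Not a proof, but as further numerical evidence one can test the identity on random matrices of several shapes (my own sketch; the helper name both_sides is made up here):

```python
import numpy as np
from itertools import combinations

def both_sides(A):
    """Return (det(A^t A), sum of squared n x n row-minors) for an m x n matrix A."""
    m, n = A.shape
    lhs = np.linalg.det(A.T @ A)
    rhs = sum(np.linalg.det(A[list(rows), :]) ** 2
              for rows in combinations(range(m), n))
    return lhs, rhs

rng = np.random.default_rng(0)
for m, n in [(3, 2), (4, 2), (5, 3), (4, 4)]:
    lhs, rhs = both_sides(rng.standard_normal((m, n)))
    assert np.isclose(lhs, rhs), (m, n, lhs, rhs)
print("identity holds numerically for the sampled shapes")
```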
 
  • #11
An idea is to make ##A## a square matrix by filling the additional positions with zeros; only on the main diagonal do the new entries have to be ones.
 
  • Like
Likes tommyxu3
  • #12
I have tried that; let me look at it again later! Thanks~~
But I remember that the remaining entries may cause some trouble?
 
  • #13
the det of the transpose equals the det of the original matrix, and the det of a product equals the product of the dets.

see pages 62-64 of these notes:

http://alpha.math.uga.edu/%7Eroy/4050sum08.pdf
 
  • #14
mathwonk said:
the det of the transpose equals the det of the original matrix, and the det of a product equals the product of the dets.
Yes, and I said that the statement is true for a square matrix. You may have misunderstood the statement: I didn't require ##A## to be square.
 
  • #15
fresh_42 said:
Induction is probably an ugly task. The base case is ##m \times 1##, so the induction step would be proving the equation for ##m \times (n+1)## given the formula for ##m \times n##. I have to think about a less uncomfortable way. I'm sure there is one.
Edit: It could certainly be done by expanding the determinant along the additional last column.
Another idea of mine: since it holds for square matrices, do induction from ##m\times n## to ##(m+1)\times n##?
By the way, did your edit mean that the induction from ##m\times n## to ##m\times (n+1)## works?
 
  • #16
tommyxu3 said:
Another idea of mine: since it holds for square matrices, do induction from ##m\times n## to ##(m+1)\times n##?
By the way, did your edit mean that the induction from ##m\times n## to ##m\times (n+1)## works?
It should work, because the statement appears to be true. I have chosen this inductive step because the base case ##m \times 1## is obviously true and can be proven for any ##m##, so no other induction is necessary. And expanding the determinant along only one column (the ##+1## in ##m \times (n+1)##) and applying the induction hypothesis (##m \times n##) to the sub-determinants of this expansion might keep the mess bounded.

If we take ##A## to be a ##(1 \times n)##-matrix, then ##A^t A## is an ##(n \times n)##-matrix of rank ##1##, i.e. (for ##n > 1##) of zero determinant. On the RHS we then get an empty sum, which is by definition ##0##. So even in this extreme case the identity is true. Have you checked other matrices of determinant zero, just to see what happens? I have also thought about other ways. Replacing the determinants by their complete formulas (the weighted sum over all diagonal products) looks even more messy. The LHS might be controllable, but I have no idea how to interpret the RHS with its wiped-out rows.
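Along the lines of the question about zero-determinant matrices, here is a quick rank-deficient check (my own sketch): if the columns of ##A## are linearly dependent, then ##A^tA## is singular and every ##n\times n## row-submatrix of ##A## is singular too, so both sides vanish.

```python
import numpy as np
from itertools import combinations

# A 4x2 matrix whose second column is twice the first, so rank(A) = 1 < n = 2.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [-1.0, -2.0],
              [3.0, 6.0]])

lhs = np.linalg.det(A.T @ A)
rhs = sum(np.linalg.det(A[list(rows), :]) ** 2
          for rows in combinations(range(4), 2))

print(lhs, rhs)  # both are 0 up to floating-point rounding
```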
 
  • #17
Really... other expansions may take lots of time...anyway...
Thanks for your help~~
 
  • #18
tommyxu3 said:
Really... other expansions may take lots of time...anyway...
Thanks for your help~~
If you are allowed to interpret determinants as volumes (of the parallelepipeds spanned by the rows), this might shorten the proof. The reduced matrices ##A_{j_1 ... j_n}## could then be interpreted as orthogonal projections onto coordinate subspaces.
 
  • #19
in relation to fresh's idea to make the matrix square: what about adding (to the 2x3 factor in the given 3x2 example) a third row which is a unit vector orthogonal to the other two rows? then the new product is 3x3 and equals the old product plus a "1" in the lower right corner, and zeroes elsewhere in the new row and column, so it has the same determinant. this should generalize to adding new rows that are orthogonal to the old rows and to each other and have length one. does this work?

oops, this makes the matrix square, but does not obviously prove the result.
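A numeric illustration of this construction for the 3x2 example (my own sketch, using a cross product to get the extra orthogonal unit vector): appending it as an extra column of ##A## (an extra row of ##A^t##) gives a square matrix ##B## with ##B^tB## block diagonal, hence ##\det(B)^2 = \det(A^tA)##; as noted above, this alone does not produce the sum of minors.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Unit vector orthogonal to both columns of A (equivalently, to both rows of A^t).
u = np.cross(A[:, 0], A[:, 1])
u = u / np.linalg.norm(u)

B = np.column_stack([A, u])        # the extended, now square, 3x3 matrix
print(np.linalg.det(B) ** 2)       # 24.0
print(np.linalg.det(A.T @ A))      # 24.0, since B^t B has blocks A^t A and 1
```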
 
Last edited:
  • #20
isn't this related to the pythagorean theorem for areas? i.e. isn't the sum of the squares of those 2x2 determinants equal to the square of the area of the parallelogram spanned by the 2 rows in 3 space? i.e. as fresh(?) suggested, the sum of the squares of the areas of the projections of the parallelogram onto the 3 planes, equals the square of the area of the original parallelogram. i seem to recall this, maybe provable from looking at the coordinates of the cross product?
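A quick check of this cross-product observation on the example (my own sketch): the coordinates of the cross product of the two rows of ##A^t## are, up to sign, exactly the three 2x2 minors, so its squared length equals the right-hand side, and it also equals ##\det(A^tA)##.

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
a, b = A[:, 0], A[:, 1]            # the two rows of A^t, viewed as vectors in R^3

cross = np.cross(a, b)             # components are the 2x2 minors, up to sign
minors = [np.linalg.det(A[list(rows), :]) for rows in combinations(range(3), 2)]

print(np.dot(cross, cross))        # 24.0, squared area of the spanned parallelogram
print(sum(d ** 2 for d in minors)) # 24.0
print(np.linalg.det(A.T @ A))      # 24.0
```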
 
  • #21
mathwonk said:
isn't this related to the pythagorean theorem for areas?
It is!~~
And finally I proved it by decomposing the matrix into matrix units~ I think the induction would also be troublesome... haha
Thanks for your help both!
 

What is the determinant of a matrix?

The determinant of a matrix is a numerical value that can be calculated from the elements of the matrix. It represents the scaling factor of the transformation represented by the matrix. In other words, it tells us how much the area/volume is scaled when the matrix is applied to a set of points.

What does it mean to prove identity for the determinant of a matrix?

Proving an identity for the determinant of a matrix means showing that two expressions involving determinants are always equal. This can be done by transforming the matrices into a form where the determinants are easy to compute and then showing that the resulting values agree.

Why is it important to prove identity for the determinant of a matrix?

Proving identity for the determinant of a matrix is important because it allows us to verify the properties of determinants and to simplify calculations involving determinants. It also helps us to better understand the relationships between matrices and their determinants.

How can we prove identity for the determinant of $A^tA$?

For a square matrix $A$, we can use the property that the determinant of a product of matrices equals the product of their determinants, together with $\det(A^t)=\det(A)$: this gives $\det(A^tA)=\det(A^t)\det(A)=(\det A)^2$. For a rectangular $m\times n$ matrix, $\det(A^t)$ is not defined on its own, and the identity discussed in this thread instead expresses $\det(A^tA)$ as the sum of the squared determinants of the $n\times n$ submatrices obtained by choosing $n$ of the $m$ rows of $A$.

What are the practical applications of proving identity for the determinant of $A^tA$?

Proving identity for the determinant of $A^tA$ has practical applications in various fields such as physics, engineering, and statistics. It allows us to simplify calculations involving determinants and to better understand the relationships between matrices and their determinants. It also helps us to verify the properties of determinants, which can be useful in solving real-world problems.
