Proving ##(cof ~A)^t ~A = (det A)I##

Hall
Homework Statement
##cof~A## means the cofactor matrix of ##A##, and ##(cof~A)^t## means the transpose of the cofactor matrix of ##A## (do you call it the adjoint of ##A##? Well, I too used to, but no longer). ##det~A## is the determinant of ##A##, and ##I## is the identity matrix of order compatible with the LHS.
Relevant Equations
The idea I would use is to show that every diagonal element of ##(cof~A)^t~A## is equal to ##det~A## and that all off-diagonal elements are zero.
The ##i##-th column of ##cof~A## =
$$
\begin{bmatrix}
(-1)^{i+1} det~A_{1i} \\
(-1)^{i+2} det~A_{2i}\\
\vdots \\
(-1)^{i+n} det~A_{ni}\\
\end{bmatrix}$$

Therefore, the ##i##-th row of ##(cof~A)^t## = ##\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det~A_{2i}, \cdots, (-1)^{i+n} det~A_{ni} \big]##

The ##(i,i)## element of ##(cof~A)^t~A## is
$$
\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det~A_{2i}, \cdots, (-1)^{i+n} det~A_{ni}\big] \times
\begin{bmatrix}
a_{1i}\\
a_{2i}\\
\vdots \\
a_{ni}\\
\end{bmatrix}
= \sum_{k=1}^{n} (-1)^{i+k} a_{ki} det~A_{ki}$$
Well, the RHS is simply ##det~A## expanded along the ##i##-th column. Therefore, every diagonal element of ##(cof~A)^t~A## is equal to ##det~A##.
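As a quick sanity check of the diagonal claim, here is a minimal Python sketch (the ##3\times 3## matrix `A` is an arbitrary example, and `det` and `cof` are small helper routines written just for this check):

```python
# Sketch: check that every diagonal entry of (cof A)^t A equals det A
# for one concrete 3x3 matrix. det() expands along the first column,
# mirroring the cofactor expansion used above.

def minor(A, r, c):
    # Submatrix of A with row r and column c deleted.
    return [[A[i][j] for j in range(len(A)) if j != c]
            for i in range(len(A)) if i != r]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0))
               for i in range(len(A)))

def cof(A):
    # Cofactor matrix: entry (i, j) is (-1)^(i+j) times the (i, j) minor.
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 3], [0, 4, 5], [1, 0, 6]]
cofT = [list(row) for row in zip(*cof(A))]  # transpose of cof A
P = [[sum(cofT[i][k] * A[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]                     # (cof A)^t A
print(det(A))                               # -> 41
print([P[i][i] for i in range(3)])          # -> [41, 41, 41]
```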

Now, I would try to prove that all off-diagonal elements are zero. Consider the ##(i,j)## element (##i \neq j##) of ##(cof~A)^t~A##:
$$
\big[ (-1)^{i+1} det~A_{1i}, (-1)^{i+2} det~A_{2i}, \cdots, (-1)^{i+n} det~A_{ni}\big] \times
\begin{bmatrix}
a_{1j}\\
a_{2j}\\
\vdots \\
a_{nj}\\
\end{bmatrix}
= \sum_{k=1}^{n} (-1)^{i+k} a_{kj} det~A_{ki}$$

But I'm unable to prove that the RHS is equal to zero. Will you help me?

 
The RHS coincides with the definition of ##det~A## for ##i=j##. How about trying ##n=2## in order to confirm your way? Writing the RHS as ##R_{ij}##, I see
##R_{11}=R_{22}=a_{11}a_{22}-a_{12}a_{21}##
##R_{12}=R_{21}=0##
Then you can go to ##n=3## to find a general rule of cancellation.
 
anuttarasammyak said:
##R_{11}=R_{22}=a_{11}a_{22}-a_{12}a_{21}##
##R_{12}=R_{21}=0##
Then you can go to n=3 to find a general rule of cancellation.
I showed ##R_{ij}=0## when ##i\neq j## for ##2\times 2## and ##3\times 3## matrices (for ##3 \times 3## we have three cases for ##i \neq j##). But proving it in general seems unattainable at the moment.
 
For ##i \neq j##, you may interpret the RHS as the determinant of a matrix which has two identical columns. Thus it is zero.
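This equal-columns fact is easy to check numerically; a minimal Python sketch (the matrix `B` below is an arbitrary example with identical first and third columns, and `det` is a cofactor-expansion routine written just for this check):

```python
# Sketch: a determinant with two identical columns is zero.

def minor(A, r, c):
    # Submatrix of A with row r and column c deleted.
    return [[A[i][j] for j in range(len(A)) if j != c]
            for i in range(len(A)) if i != r]

def det(A):
    # Cofactor expansion along the first column.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0))
               for i in range(len(A)))

B = [[1, 5, 1], [2, 7, 2], [3, 9, 3]]  # columns 1 and 3 coincide
print(det(B))  # -> 0
```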
 
anuttarasammyak said:
For ##i \neq j##, you may interpret the RHS as the determinant of a matrix which has two identical columns. Thus it is zero.
Yes, I found this proof by Tom Apostol:

For any matrix ##A##,
$$
A=\begin{bmatrix}
a_{11} & a_{12}& \cdots &a_{1k} &\cdots& a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2k} &\cdots & a_{2n}\\
\vdots&\vdots& &\vdots & &\vdots\\
a_{n1}& a_{n2} & \cdots &a_{nk} &\cdots & a_{nn}\\
\end{bmatrix}
$$
Consider a new matrix ##B##, such that
$$B=\begin{bmatrix}
a_{11} & a_{12}& \cdots& a_{1k} & \cdots &a_{1k} &\cdots& a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2k}& \cdots & a_{2k} &\cdots & a_{2n}\\
\vdots&\vdots& &\vdots & & \vdots & &\vdots\\
a_{n1}& a_{n2} &\cdots & a_{nk}& \cdots &a_{nk} &\cdots & a_{nn}\\
\end{bmatrix}
$$
where the first of the two repeated columns stands in position ##j## and the second in position ##k##.

That is, the ##j##-th column of ##B## is equal to the ##k##-th column of ##A##, and everything else is the same. Since ##B## has two identical columns, ##det~B=0##. (We have taken the ##k##-th column for generality; we could show the result for any column.)

Now consider the expression ##\sum_{i=1}^{n} (-1)^{i+j} a_{ik}~det~A_{ij}## with ##j \neq k##. Since ##B## and ##A## differ only in the ##j##-th column, deleting that column leaves the same minors, ##B_{ij} = A_{ij}## for every ##i##, while the entries of that column are ##b_{ij} = a_{ik}##. Expanding ##det~B## along its ##j##-th column therefore gives
$$
\sum_{i=1}^{n} (-1)^{i+j} a_{ik}~det~A_{ij} = \sum_{i=1}^{n} (-1)^{i+j} b_{ij}~det~B_{ij} = det~B = 0$$

Thus any expression of the form ##\sum_{i=1}^{n} (-1)^{i+j} a_{ik}~det~A_{ij}## (where ##j \neq k##) is equal to zero, and after relabelling the indices this is exactly the off-diagonal entry computed earlier. But this seems to me like a tyranny of Mathematics: we have proved something to be zero by taking it into a completely new system. Changing the context of something must change its meaning; the expression is zero in the context of matrix ##B##, not in ##A##.
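Whatever one makes of that, the full identity ##(cof~A)^t~A = (det~A)I## can at least be spot-checked numerically. A minimal Python sketch (random integer matrices and helper routines written just for this check, not taken from Apostol; integer arithmetic is exact, so the equalities are literal):

```python
# Sketch: spot-check (cof A)^t A = (det A) I on random integer
# matrices of sizes 2, 3 and 4.
import random

def minor(A, r, c):
    # Submatrix of A with row r and column c deleted.
    return [[A[i][j] for j in range(len(A)) if j != c]
            for i in range(len(A)) if i != r]

def det(A):
    # Cofactor expansion along the first column.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det(minor(A, i, 0))
               for i in range(len(A)))

def cof(A):
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
            for i in range(n)]

random.seed(0)
for n in (2, 3, 4):
    A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
    cofT = [list(row) for row in zip(*cof(A))]          # (cof A)^t
    P = [[sum(cofT[i][k] * A[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    d = det(A)
    assert all(P[i][j] == (d if i == j else 0)
               for i in range(n) for j in range(n))
print("identity verified for the sampled matrices")
```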
 