Understanding proof about A(adj A) elements

Summary
The discussion focuses on understanding the theorem stating that the elements of A(adj A) are |A| on the diagonal and 0 elsewhere. The diagonal elements come from the product of a row of A with its corresponding cofactors in adj A. The confusion arises when trying to connect the construction of a matrix B, formed by replacing a row of A, to the proof that the off-diagonal elements of A(adj A) are zero. It is clarified that the scalar product of a row of A with the cofactors of a different row (a column of adj A) equals the determinant of a matrix with two identical rows, which is zero, confirming that the off-diagonal elements are indeed zero. Overall, the connection lies in the properties of determinants and cofactors in relation to matrix multiplication.
CoolFool
Hey everyone,
I've been going through a linear algebra textbook and there's one theorem whose proof I can't quite follow. It shows that the elements of A(adj A) are |A| on the diagonal and 0 elsewhere, and therefore A(adj A) = |A|I.

The first part shows that the diagonal elements of the product A(adj A) are equal to |A|, because each one is obtained by multiplying a row of A by the corresponding column of adj A, which consists of the cofactors of that same row. This first part I understand, even if I haven't put it very clearly.
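
To put that in symbols (my own notation, with c_{ik} denoting the cofactor of a_{ik}), the diagonal case is just the cofactor expansion of |A| along row i: \left[A(\operatorname{adj}A)\right]_{ii} = \sum^{n}_{k=1}a_{ik}c_{ik} = |A|.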

The main theorem is: for any n \times n matrix A, \sum^{n}_{k=1}a_{tk}c_{ik} = \delta_{ti}|A|, where \delta_{ti} is 1 if t = i, and 0 otherwise.

As an example, a 3x3 matrix B is constructed by replacing the second row of A with any of A's rows, including the second row itself. If the second row is replaced with itself, then B = A, so multiplying that row by the corresponding column of adj A gives |A|, which we already know.

So far I follow, though constructing B as an example seems awkward. But the next step confuses me. Now we have to prove that the elements not on the diagonal are equal to zero. The example is that if B's second row is replaced with a different row of A, then B has two identical rows; subtracting one from the other leaves a zero row, which means |B| = 0. The rule that a matrix with two identical rows has a determinant of zero makes sense to me, but I don't get what the construction of B has to do with A(adj A). That's what throws me off: I don't understand the connection between this product and B.

What am I missing? I feel like it's right in front of me but I can't see it!
Thanks for your help!
 
Suppose we want to calculate the element in position (i, j) of A(adj A), with i ≠ j. Then we take the scalar product of row i of A and column j of adj A. The latter column consists of the cofactors of the elements of row j of A. Now, that scalar product is exactly the determinant of the matrix B from your post: replace row j of A with row i, and expand |B| along row j. The cofactors of row j are unchanged by the replacement, since they don't involve row j itself, so the expansion is precisely row i of A dotted with the cofactors of row j. But this B has two identical rows, so |B| = 0. Thus the desired element of A(adj A) is 0.
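
Not part of the original thread, but here is a quick numerical sketch (plain NumPy, with my own helper name cofactor_matrix) that builds adj A from the cofactors and checks both claims: A(adj A) = |A|I, and that a row of A dotted with the cofactors of a different row gives 0.

```python
import numpy as np

def cofactor_matrix(A):
    """Return C where C[i, k] is the cofactor of A[i, k]."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            # Minor: delete row i and column k, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            C[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 2.0]])

C = cofactor_matrix(A)
adjA = C.T                    # adj A is the transpose of the cofactor matrix

print(A @ adjA)               # approximately |A| * I: |A| on the diagonal, 0 elsewhere
print(np.linalg.det(A))       # matches the diagonal entries

# Off-diagonal case: row 0 of A dotted with the cofactors of row 1 of A.
# This equals det(B), where B is A with row 1 replaced by row 0, hence 0.
print(A[0] @ C[1])            # approximately 0
```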
 