# Determinant and its transpose

1. Oct 26, 2004

### Hyperreality

I've been doing revisions for my final exams, and I got stuck on the proof

det A = det A^T, i.e. the determinant of A equals the determinant of A transpose.

How do I prove it?

2. Oct 27, 2004

### Galileo

Use the fact that you can expand a determinant by minors along any row or column.
(Unless you have to prove that fact..., it's tedious)

3. Oct 27, 2004

### HallsofIvy

Another way to do it is this: if you "row reduce" a matrix to diagonal form, its determinant is just the product of the numbers on the diagonal (keeping in mind that each row swap flips the sign, and scaling a row scales the determinant).

Conversely, if you "column reduce" (exactly the same as "row reduce" except that you use "column operations" rather than "row operations"), the determinant is still the product of the numbers on the diagonal.

But "column reducing" a matrix is exactly the same as "row reducing" its transpose.
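This argument can be sketched numerically. Below is a minimal row-reduction determinant (the helper is mine, just for illustration, using exact rational arithmetic to avoid rounding issues); applied to a matrix and to its transpose, it gives the same value:

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Determinant via Gaussian elimination: reduce to upper-triangular
    form with row operations, then multiply the diagonal entries.
    Row swaps flip the sign; adding a multiple of one row to another
    leaves the determinant unchanged."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a pivot row; if the column is all zeros, the determinant is 0.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]
    return result

A = [[2, 1, 3], [0, 4, 5], [1, 0, 6]]
At = [list(col) for col in zip(*A)]  # transpose
print(det_by_row_reduction(A), det_by_row_reduction(At))  # → 41 41
```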

4. Oct 29, 2004

### Hyperreality

Umm, I am actually trying to prove the general case here.

My text says it is possible to prove the identity using the product rule,

i.e. det(AB) = det A x det B.

Would the fact that (AB)^T = (B^T)(A^T) also help?
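As a sanity check on those two identities, here is a minimal 2x2 sketch (the helper functions are mine, purely illustrative):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [list(col) for col in zip(*m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Product rule: det(AB) = det(A) * det(B)
assert det2(matmul2(A, B)) == det2(A) * det2(B)

# Transpose of a product reverses the order: (AB)^T = B^T A^T
assert transpose(matmul2(A, B)) == matmul2(transpose(B), transpose(A))
print("both identities check out on a 2x2 example")
```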

5. Oct 29, 2004

### shmoe

Hi, both the suggestions you've been given will work for proving the general case.

And yes, those two identities will help; here is a third method that uses them:

Case 1) A is not invertible. Is A^T invertible? What is the determinant of a non-invertible matrix?

Case 2) A is invertible. Prove that for any elementary matrix E, det E = det E^T. This should be fairly simple, but you'll want to consider each type of elementary matrix separately. Now write A as a product of elementary matrices, apply the two equations you just gave liberally, and you're done.
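The key step in Case 2, det E = det E^T for each elementary type, can be checked concretely. A sketch with 3x3 example matrices of my own choosing (the swap and scaling matrices are symmetric, so only the row-addition type is really at stake):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def transpose(m):
    return [list(col) for col in zip(*m)]

# The three types of elementary matrices (positions/values are my examples):
swap  = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # swap rows 1 and 2, det = -1
scale = [[1, 0, 0], [0, 5, 0], [0, 0, 1]]   # multiply row 2 by 5, det = 5
shear = [[1, 0, 0], [7, 1, 0], [0, 0, 1]]   # add 7*(row 1) to row 2, det = 1

for E in (swap, scale, shear):
    assert det3(E) == det3(transpose(E))
print("det E == det E^T for all three elementary types")
```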

6. Feb 2, 2009

### nishsweet

How do you go about proving that the determinant of an n×n matrix A is equal to the determinant of its transpose using Laplace's expansion?

How can you use det(AB) = det A x det B to help with this?

7. Feb 2, 2009

### d_b

Do the expansion along row 1 of A: it is the same thing as the expansion along column 1 of A^T. By the theorem that expansion along any row or column gives the determinant, you get the same answer.

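A small recursive sketch of the two expansions (function names are mine): expanding A along row 1 and A^T along column 1 uses the same entries, with each minor of A^T being the transpose of the corresponding minor of A.

```python
def minor(m, i, j):
    """The submatrix with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det_row(m, i=0):
    """Laplace (cofactor) expansion along row i (0-based)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** (i + j) * m[i][j] * det_row(minor(m, i, j))
               for j in range(len(m)))

def det_col(m, j=0):
    """Laplace (cofactor) expansion along column j (0-based)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** (i + j) * m[i][j] * det_col(minor(m, i, j))
               for i in range(len(m)))

A = [[2, 1, 3], [0, 4, 5], [1, 0, 6]]
At = [list(col) for col in zip(*A)]  # transpose

print(det_row(A, 0), det_col(At, 0))  # → 41 41
```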

8. Feb 3, 2009

### nishsweet

Can Laplace's Expansion be used either across a row or down a column? I thought by definition it needed to be used across a row?

9. Feb 12, 2009

### LogicalTime

Since the eigenvalues of A are the same as the eigenvalues of $$A^{T}$$ (with the same multiplicities),
and you know that det(A) is the product of the eigenvalues of A,
it follows that det(A) = det($$A^{T}$$).

seems the fastest way, but I don't know what you are allowed to assume
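For the 2x2 case this can be checked directly from the characteristic polynomial (a sketch under the assumption that eigenvalues are counted with multiplicity; the helper is mine):

```python
import cmath

def eigenvalues2(m):
    """Eigenvalues of a 2x2 matrix: the roots of t^2 - tr(m)*t + det(m)."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[1, 2], [3, 4]]
At = [[1, 3], [2, 4]]  # transpose

# A and A^T have the same trace and determinant, hence the same
# characteristic polynomial and the same eigenvalues.
assert eigenvalues2(A) == eigenvalues2(At)

# det(A) = product of the eigenvalues (here det(A) = 1*4 - 2*3 = -2).
lam1, lam2 = eigenvalues2(A)
assert abs(lam1 * lam2 - (-2)) < 1e-9
print("eigenvalue argument checks out for this 2x2 example")
```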

10. Feb 12, 2009

### Matthollyw00d

I'm sure he's not gotten to eigenvalues yet this early in the semester.

11. Feb 12, 2009

### LogicalTime

he's studying for final exams

12. Feb 12, 2009

### Take_it_Easy

Heheheh... this is called Binet's theorem, if I remember correctly.
The proof I read was very boring and technical; it relied on properties of the permutation group $$\mathbb S_n$$ and the signature homomorphism.
I don't know of a simple proof that uses just the definition of the determinant.
:(

13. Feb 12, 2009

### Fredrik

Staff Emeritus
It's not too hard to prove it directly from the definition. The only thing that's tricky is the notation. The definition can be expressed as

$$\det A=\sum_P(-1)^P A_{1,P1}\cdots A_{n,Pn}$$

where the sum is over all permutations P of the ordered set (1,2,...,n). The factor (-1)^P is interpreted as +1 when the permutation is even and -1 when it is odd, and Pk, for k=1,2,...,n, denotes the number that k is mapped to by the permutation P.
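This definition translates almost directly into code (a sketch, with 0-based indices instead of the 1-based ones above; the function names are mine):

```python
from itertools import permutations

def sign(perm):
    """+1 for an even permutation, -1 for an odd one, by counting
    inversions of the tuple (P0, ..., P(n-1))."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(a):
    """det A = sum over all permutations P of
    (-1)^P * A[0][P0] * ... * A[n-1][P(n-1)]."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for k in range(n):
            term *= a[k][p[k]]
        total += term
    return total

A = [[2, 1, 3], [0, 4, 5], [1, 0, 6]]
print(det_leibniz(A))  # → 41
```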

We have

$$\det A^T=\sum_P(-1)^P (A^T)_{1,P1}\cdots (A^T)_{n,Pn}=\sum_P(-1)^P A_{P1,1}\cdots A_{Pn,n}$$

The idea is to prove that any given term from the right-hand side of the first equation occurs exactly once on the right-hand side of the second equation. This is sufficient to prove that the sums are equal because the number of terms is the same in both.

So let's write the given term as

$$(-1)^P A_{1,P1}\cdots A_{n,Pn}$$

and let's rearrange the order of the factors in all the terms of the second equation so that the column indices appear in the same order as in the given term from the first equation. For example, if Pn=3, we rearrange the factors of the term

$$(-1)^Q A_{Q1,1}\cdots A_{Qn,n}$$

(where Q is some permutation) so that the factor we write last is $A_{Q3,3}$. Note that all factors are of the form $A_{Qm,m}$ for some m. Our rearrangement ensures that the column index of the kth factor is Pk, and that means the kth factor is $A_{QPk,Pk}$. So each term of the right-hand side of the second equation can be expressed as

$$(-1)^Q A_{QP1,P1}\cdots A_{QPn,Pn}$$

but the sum is over all permutations, so exactly one of those terms has $Q=P^{-1}$, and that term is equal to

$$(-1)^{P^{-1}} A_{1,P1}\cdots A_{n,Pn}$$

$P^{-1}$ is of course even if and only if P is even, so this is equal to the given term we started with, and we're done.
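The term-by-term matching via Q = P^(-1) can also be checked mechanically for a small example (a sketch; the helper names are mine):

```python
import math
from itertools import permutations

def sign(perm):
    """+1 for an even permutation, -1 for an odd one (0-based tuple)."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def inverse(perm):
    """The inverse permutation: inverse(p)[p[k]] == k."""
    q = [0] * len(perm)
    for k, pk in enumerate(perm):
        q[pk] = k
    return tuple(q)

A = [[2, 1, 3], [0, 4, 5], [1, 0, 6]]
At = [list(col) for col in zip(*A)]  # transpose
n = len(A)

# For every P, the P-term of det A equals the (Q = P^{-1})-term of
# det A^T, and sign(P^{-1}) = sign(P), exactly as argued above.
for p in permutations(range(n)):
    q = inverse(p)
    assert sign(p) == sign(q)
    term_A = sign(p) * math.prod(A[k][p[k]] for k in range(n))
    term_At = sign(q) * math.prod(At[k][q[k]] for k in range(n))
    assert term_A == term_At
print("term-by-term match confirmed")
```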

You can prove all of the properties of the determinant that are mentioned in an introductory text directly from the definition, using the notation above, but the result $\det(AB)=\det A \det B$ is of course much harder to prove than the others.

Last edited: Feb 12, 2009