Can I Use Linear Algebra to Prove the Equation det(AB) = det(A) det(B)?

  • Context: Undergrad 
  • Thread starter: Castilla

Discussion Overview

The discussion revolves around the equation det(AB) = det(A) det(B) and the use of linear algebra to prove it. Participants explore various definitions and properties of determinants, as well as different approaches to the proof, including recursive definitions, multilinearity, and the effects of elementary row operations.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant seeks guidance on the linear algebra concepts necessary to understand the proof of det(AB) = det(A) det(B).
  • Another participant suggests a tedious method involving the determinant of matrix products but expresses a desire for a simpler approach.
  • A participant notes that the determinant can be defined recursively, but this definition may not be practical for proofs.
  • Some participants discuss the properties of determinants, including multilinearity, alternation, and the effect of column operations on the determinant.
  • One participant outlines a proof that involves defining a multilinear and alternating map and shows how it leads to the desired equation.
  • Another proof is presented that utilizes the effects of elementary row operations on determinants, particularly for diagonal matrices.
  • There is a discussion about the distinction between diagonalization and transforming matrices into upper triangular form, with some participants clarifying their statements.
  • Concerns are raised about the definitions of determinants and their implications for proving multiplicativity.
  • One participant emphasizes the importance of understanding determinants as scale factors of volume change.

Areas of Agreement / Disagreement

Participants express differing views on the definitions and properties of determinants, as well as the methods for proving the multiplicative property. No consensus is reached on a single approach or definition.

Contextual Notes

Participants mention various definitions of determinants, including recursive definitions and properties related to volume change, which may influence the proof's clarity and applicability. The discussion also highlights the complexity of the topic and the potential for confusion regarding matrix transformations.

Castilla
I am trying to advance in my theoretical study of change of variables for double integrals, but it seems I need to use this equation:
[tex]\det(AB) = \det(A) \det(B)[/tex]. I would like to know which elements of linear algebra I need to know to follow a proof of that statement.
Thanks for your answer.
 
I never liked these types of proofs. This is the only thing I can say, maybe someone can add to it or show you a different way:

[tex]AB=\left[Ab_1\cdots Ab_n\right][/tex]

The right side can be simplified or rewritten, and then you take the determinant. I think this is the way I've seen it before, although it's really tedious. I hope someone knows an easier way :rolleyes:
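For what it's worth, the column identity above is easy to check numerically. A minimal pure-Python sketch, with made-up 2x2 example matrices (an illustration, not a proof):

```python
def mat_mul(A, B):
    # plain row-by-column matrix product for square matrices
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(A, v):
    # matrix applied to a column vector
    n = len(v)
    return [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]

A = [[1, 2], [3, 4]]        # made-up example matrices
B = [[5, 6], [7, 8]]
AB = mat_mul(A, B)
for j in range(2):
    b_j = [B[i][j] for i in range(2)]
    # column j of AB equals A applied to column j of B
    assert [AB[i][j] for i in range(2)] == mat_vec(A, b_j)
```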

Alex
 
It depends... Sometimes the determinant is defined recursively - of course, you don't need much linear algebra for that. On the other hand, such a "definition" isn't very useful to work with, although it's easy to understand.
Intrinsically though, a determinant can be defined using permutations; it involves being multilinear, alternating and having the property that det(In) = 1. If you've seen it this way, the proof isn't too long.
 
Thanks to both.

TD, could I request a sketch of the proof?
 
Ok, since it uses some of the previous definitions I will make a short introduction.

Firstly, we define a map d(A) (I think it's called this in English) which is multilinear and alternating. We can prove it satisfies the following properties:
- d(A) changes sign if you swap two columns.
- d(A) doesn't change if you add a linear combination of columns to another column.
- d(A) = 0 if one of the columns of A is 0.
- If rank(A) < n (assuming we're starting with an n x n matrix), then d(A) = 0.

After that, we define "det" as the map [itex]\det :M_{nn} \left( K \right) \to K[/itex] which is as above (alternating and multilinear) and satisfies [itex]\det \left( {I_n } \right) = 1[/itex]. We can show that this det is unique.

Then you can prove a small lemma: given that initial map d again, it can always be written as a multiple of det, so that for all matrices A: [itex]d\left( A \right) = \det \left( A \right)d\left( {I_n } \right)[/itex].

Once we've done all of that, proving our theorem isn't that hard anymore.
We take A and B and want det(AB) = det(A)det(B). Start by fixing A and consider the map [itex]d_A :M_{nn} \left( K \right) \to K:d_A \left( B \right) = \det \left( {AB} \right)[/itex], or, written in columns:

[tex]d_A \left( {\begin{array}{*{20}c} {B_1 } & {B_2 } & \cdots & {B_n } \\ \end{array}} \right) = \det \left( {\begin{array}{*{20}c} {AB_1 } & {AB_2 } & \cdots & {AB_n } \\ \end{array}} \right)[/tex]

It is now easy to see that this d_A is multilinear and alternating again, so we get (using our lemma) that [itex]d_A \left( B \right) = \det \left( B \right)d_A \left( {I_n } \right)[/itex]. But seeing how we defined d_A, we also have [itex]d_A \left( {I_n } \right) = \det \left( A \right)[/itex]. Putting that together yields:

[tex]\det \left( {AB} \right) = d_A \left( B \right) = \det \left( A \right)\det \left( B \right)[/tex]

Note:
- A function of a matrix is multilinear if it's linear in every column.
- A function of a matrix is alternating if it's 0 when 2 columns (or rows) are equal.
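The properties of the map d_A(B) = det(AB) used in this argument can be probed numerically. A brute-force sketch with made-up 3x3 integer matrices and a Leibniz-sum determinant (illustrative only, and exact because everything is an integer):

```python
from itertools import permutations
from math import prod

def det(M):
    # brute-force Leibniz determinant: signed sum over permutations
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sgn = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sgn * prod(M[i][p[i]] for i in range(n))
    return total

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def with_col(M, j, col):
    # copy of M with column j replaced by col
    n = len(M)
    return [[col[i] if k == j else M[i][k] for k in range(n)] for i in range(n)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]   # made-up test matrices
B = [[1, 2, 0], [0, 1, 1], [1, 0, 2]]
dA = lambda M: det(mat_mul(A, M))       # the map d_A(B) = det(AB)

# linearity in column 0: d_A(u + 3v, ...) = d_A(u, ...) + 3 d_A(v, ...)
u, v = [1, 0, 2], [0, 1, 1]
lhs = dA(with_col(B, 0, [u[i] + 3 * v[i] for i in range(3)]))
assert lhs == dA(with_col(B, 0, u)) + 3 * dA(with_col(B, 0, v))

# alternating: two equal columns give 0
assert dA(with_col(B, 1, [row[0] for row in B])) == 0

# d_A(I_n) = det(A), and the theorem itself
I = [[int(i == j) for j in range(3)] for i in range(3)]
assert dA(I) == det(A)
assert dA(B) == det(A) * det(B)
```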
 
Here's another proof which uses the effect of elementary row operations on the determinant:
- Swapping 2 rows switches the sign of the determinant
- Adding a scalar multiple of a row to another doesn't change the determinant
- If a single row is multiplied by a scalar r, then the determinant of the resulting matrix is r times the determinant of the original matrix.

So first, note that det(AB)=det(A)det(B) if A is a diagonal matrix, since AB is then the matrix B with its ith row multiplied by a_ii. Using the scalar multiplication property on each row, we see that for diagonal A:
det(AB)=(a_11)(a_22)...(a_nn)det(B)=det(A)det(B)
since the determinant of a diagonal matrix is the product of the diagonal elements.

If A is singular, then AB is also singular, so det(AB)=0=det(A)det(B).

For the nonsingular case we can row reduce A to diagonal form by Gauss-Jordan elimination (we avoid row-scaling). Every row-operation can be represented by an elementary matrix, the product of which we call E. Then EA=D, where D is the reduced diagonal matrix of A. So E(AB)=(EA)B=DB.
Let r be the number of row swaps. Now we have:
[tex]\det(AB)=(-1)^r \det(DB)=(-1)^r \det(D)\det(B)=\det(A)\det(B)[/tex]
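These three row-operation rules translate directly into a working determinant routine. A sketch using exact rational arithmetic (made-up test matrices; note it only reduces to upper triangular form, which is already enough, since the determinant of a triangular matrix is also the product of its diagonal):

```python
from fractions import Fraction

def det_by_elimination(M):
    # Gaussian elimination, tracking only sign flips from row swaps;
    # adding a multiple of one row to another leaves the determinant unchanged
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)                   # singular case: det = 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]  # row swap flips the sign
            sign = -sign
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][k] - f * A[col][k] for k in range(n)]
    d = Fraction(sign)
    for i in range(n):
        d *= A[i][i]                             # product of the diagonal
    return d

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 3]]                             # det = 5
B = [[0, 1], [4, 2]]                             # det = -4 (forces a row swap)
assert det_by_elimination(mat_mul(A, B)) == \
       det_by_elimination(A) * det_by_elimination(B) == -20
```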
 
Galileo said:
You can show every matrix can be reduced to a diagonal matrix with these operations (Gaussian elimination).
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?
 
TD and Galileo:

It won't be easy to understand your posts but it will be a good test for me.

Thanks again.
Castilla.
 
TD said:
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?

Yeah, my mistake. I treated the nonsingular case separately in the proof so I could diagonalize.
 
  • #10
TD said:
Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangle matrix (e.g. with Gaussian elimination). Is that what you meant?


be careful not to confuse (or cause to be confused) the notion of gaussian elimination to put something into upper triangular *non-conjugate* form, which has nothing to do with the base field being C or anything else, and the notion of conjugate upper triangular matrix (jordan normal form)
 
  • #11
incidentally, the proof that det is multiplicative depends on your definition of determinant. of course they are all equivalent, but with either of my two definitions of det it is obvious that det is multiplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.

it is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative
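The "scale factor of volume" reading is concrete even in 2D: a linear map sends the unit square to a parallelogram of area |det A|. A small sketch using the shoelace formula for polygon area (a made-up 2x2 matrix; an illustration, not the exterior-algebra argument):

```python
def shoelace(pts):
    # polygon area from vertices listed in order (shoelace formula)
    n = len(pts)
    s = 0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def apply(A, p):
    # apply a 2x2 matrix to a point
    return (A[0][0] * p[0] + A[0][1] * p[1],
            A[1][0] * p[0] + A[1][1] * p[1])

A = [[2, 1], [1, 3]]                       # det = 5
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, area 1
image = [apply(A, p) for p in square]      # its image: a parallelogram
assert shoelace(square) == 1
assert shoelace(image) == 5                # area scales by |det A|
```

Since composing maps multiplies their volume scale factors, det(AB) = det(A)det(B) (up to sign) falls out of this picture immediately.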
 
  • #12
matt grime said:
be careful not to confuse (or cause to be confuesd) the notion of gaussian elimnation to put something into upper triangular *non-conjugate* form whcihc has nothing to do with the base field being C or anything else, and the notion of conjugate upper triangular matrix (jordan normal form)
Right, thanks for pointing that out.
matt grime said:
incidentally, the proof that det is multiplicative depends on your definition of determinant. of course they are all equivalent but with either of my two definitions of det it is obvious that det is mutliplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.
it is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative
May I ask what those two definitions are?
In my linear algebra course (as I mentioned earlier), we first defined a 'determinant map' [itex]\det :M_{nn} \left( K \right) \to K[/itex] which had to be multilinear, alternating and satisfy det(In) = 1. Then we showed that this map exists, is unique and is given by:
[tex]\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}[/tex]
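That Leibniz sum is easy to implement verbatim, parity and all; a brute-force O(n·n!) sketch, useful only for tiny matrices:

```python
from itertools import permutations
from math import prod

def leibniz_det(A):
    # det(A) = sum over permutations sigma of sgn(sigma) * prod_i a_{sigma(i), i}
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # sgn(p) = (-1)^(number of inversions)
        sgn = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sgn * prod(A[p[i]][i] for i in range(n))
    return total

assert leibniz_det([[1, 2], [3, 4]]) == -2
assert leibniz_det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
```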
 
  • #13
i told you: det is the scale factor of volume change.

formally, look at the induced action on the n'th exterior power of the vector space.
 
  • #14
TD said:
Ok, since it uses some of the previous definitions I will make a short introduction. [...]
I may have to try harder later to follow this, but it seems like a rather advanced proof for something which should be basic.
 
