
Det(AB) = Det(A) Det(B)

  1. Oct 12, 2005 #1
    I am trying to advance in my theoretical study of change of variables for double integrals, but it seems I need to use this equation:
    [tex] \det(AB) = \det(A) \det(B)[/tex]. I would like to know which elements of linear algebra I need to know to follow a proof of that statement.
    Thanks for your answer.
     
  3. Oct 12, 2005 #2
    I never liked these types of proofs. This is the only thing I can say, maybe someone can add to it or show you a different way:

    [tex]AB=\left[Ab_1\cdots Ab_n\right][/tex]

    The right side can be simplified / rewritten and then you take the determinant. I think this is the way I've seen it before, although it's really tedious. I hope someone knows an easier way :rolleyes:

    Alex
     
  4. Oct 12, 2005 #3

    TD

    User Avatar
    Homework Helper

    It depends... Sometimes the determinant is defined recursively - of course, you don't need much linear algebra for that. On the other hand, such a "definition" isn't very useful to work with, although it's easy to understand.
    Intrinsically though, a determinant can be defined using permutations: it has to be multilinear, alternating and have the property that det(In) = 1. If you've seen it this way, the proof isn't too long.
     
  5. Oct 12, 2005 #4
    Thanks to both.

    TD, could I request a sketch of the proof?
     
  6. Oct 13, 2005 #5

    TD

    User Avatar
    Homework Helper

    Ok, since it uses some of the previous definitions I will make a short introduction.

    Firstly, we define a map d(A) (I think it's called this in English) which is multilinear and alternating. We can prove it satisfies the following properties:
    - d(A) changes sign if you swap two columns.
    - d(A) doesn't change if you add a linear combination of the other columns to a column.
    - d(A) = 0 if one of the columns of A is 0.
    - If rank(A) < n (assuming we're starting with an n x n matrix), then d(A) = 0.
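    These properties can be sanity-checked numerically. A minimal sketch in pure Python, using the Leibniz formula as a concrete multilinear alternating map (the helper names are my own, not from the thread):

    ```python
    from itertools import permutations
    from math import prod

    def det(A):
        """Leibniz formula: sum over permutations p of sgn(p) * prod_i a_{p(i),i}."""
        n = len(A)
        def sgn(p):
            # parity of the permutation via its inversion count
            inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
            return -1 if inv % 2 else 1
        return sum(sgn(p) * prod(A[p[i]][i] for i in range(n)) for p in permutations(range(n)))

    A = [[2, 1, 0], [1, 3, 4], [0, 5, 6]]

    # swapping two columns flips the sign
    swapped = [[row[1], row[0], row[2]] for row in A]
    assert det(swapped) == -det(A)

    # adding a multiple of another column to a column leaves det unchanged
    sheared = [[row[0] + 7 * row[1], row[1], row[2]] for row in A]
    assert det(sheared) == det(A)

    # a zero column forces det = 0
    zeroed = [[row[0], 0, row[2]] for row in A]
    assert det(zeroed) == 0
    ```

    Of course a finite check proves nothing, but it makes the list of properties concrete.
    
    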

    After that, we define "det" as a map [itex]\det :M_{nn} \left( K \right) \to K[/itex] which is as above (alternating and multilinear) and satisfies [itex]\det \left( {I_n } \right) = 1[/itex]. We can show that this det is unique.
    Then you can prove a small lemma. Suppose we have that initial map d again; then d can always be written as [itex]d\left( {I_n } \right)\det[/itex], so that for all matrices A: [itex]d\left( A \right) = \det \left( A \right)d\left( {I_n } \right)[/itex].

    Now we've done all of that, proving our theorem isn't that hard anymore.
    We take A and B and want to show that det(AB) = det(A)det(B). Start by fixing A and consider the map [itex]d_A :M_{nn} \left( K \right) \to K:d_A \left( B \right) = \det \left( {AB} \right)[/itex], or, written in columns: [itex]
    d_A \left( {\begin{array}{*{20}c}
    {B_1 } & {B_2 } & \cdots & {B_n } \\
    \end{array}} \right) = \det \left( {\begin{array}{*{20}c}
    {AB_1 } & {AB_2 } & \cdots & {AB_n } \\
    \end{array}} \right)[/itex]

    It is now easy to see that this d_A is multilinear and alternating again, so we get (using our lemma) that [itex]d_A \left( B \right) = \det \left( B \right)d_A \left( {I_n } \right)[/itex], but seeing how we defined d_A, we also have [itex]d_A \left( {I_n } \right) = \det \left( A \right)[/itex]. Putting that together yields: [itex]\det \left( {AB} \right) = d_A \left( B \right) = \det \left( A \right)\det \left( B \right)[/itex]

    Note:
    - A function of a matrix is multilinear if it's linear in each column separately.
    - A function of a matrix is alternating if it's 0 whenever 2 columns (or rows) are equal.
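    The conclusion can be spot-checked numerically. The Leibniz-formula det and the matmul helper below are illustrative stand-ins of my own, not part of the proof:

    ```python
    from itertools import permutations
    from math import prod

    def det(M):
        # Leibniz formula: sum over permutations p of sgn(p) * prod_i m_{p(i),i}
        n = len(M)
        def sgn(p):
            inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
            return -1 if inv % 2 else 1
        return sum(sgn(p) * prod(M[p[i]][i] for i in range(n)) for p in permutations(range(n)))

    def matmul(A, B):
        # plain n x n matrix product
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

    A = [[1, 2, 0], [3, 1, 4], [0, 2, 5]]
    B = [[2, 0, 1], [1, 1, 0], [3, 2, 2]]

    # the theorem, checked exactly on integer matrices
    assert det(matmul(A, B)) == det(A) * det(B)
    ```
    
    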
     
  7. Oct 13, 2005 #6

    Galileo

    User Avatar
    Science Advisor
    Homework Helper

    Here's another proof which uses the effect of elementary row operations on the determinant:
    - Swapping 2 rows switches the sign of the determinant
    - Adding a scalar multiple of a row to another doesn't change the determinant
    - If a single row is multiplied by a scalar r, then the determinant of the resulting matrix is r times the determinant of the original matrix.

    So first, note that det(AB)=det(A)det(B) if A is a diagonal matrix, since then AB is the matrix B with its ith row multiplied by a_ii. So using the scalar multiplication property for each row we see that for diagonal A:
    det(AB)=(a_11)(a_22)...(a_nn)det(B)=det(A)det(B),
    since the determinant of a diagonal matrix is the product of the diagonal elements.

    If A is singular, then AB is also singular, so det(AB)=0=det(A)det(B).

    For the nonsingular case we can row reduce A to diagonal form by Gauss-Jordan elimination (we avoid row-scaling). Every row-operation can be represented by an elementary matrix, the product of which we call E. Then EA=D, where D is the reduced diagonal matrix of A. So E(AB)=(EA)B=DB.
    Let r be the number of row swaps. Now we have:
    [tex]\det(AB)=(-1)^r \det(DB)=(-1)^r \det(D)\det(B)=\det(A)\det(B)[/tex]
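    The three ingredients of this argument - the diagonal case, the row swap and the row addition - can each be checked on concrete elementary matrices. A minimal sketch (the det and matmul helpers are my own illustrative stand-ins):

    ```python
    from itertools import permutations
    from math import prod

    def det(M):
        # Leibniz formula: sum over permutations p of sgn(p) * prod_i m_{p(i),i}
        n = len(M)
        def sgn(p):
            inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
            return -1 if inv % 2 else 1
        return sum(sgn(p) * prod(M[p[i]][i] for i in range(n)) for p in permutations(range(n)))

    def matmul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

    B = [[1, 4, 2], [0, 3, 1], [5, 2, 2]]

    # diagonal case: DB scales row i of B by d_ii, so det(DB) = d_11*d_22*d_33*det(B)
    D = [[2, 0, 0], [0, -1, 0], [0, 0, 3]]
    assert det(matmul(D, B)) == 2 * (-1) * 3 * det(B)

    # elementary row operations as matrices acting on the left
    P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # swap rows 1 and 2: flips the sign
    S = [[1, 0, 0], [4, 1, 0], [0, 0, 1]]   # add 4*(row 1) to row 2: no change
    assert det(matmul(P, B)) == -det(B)
    assert det(matmul(S, B)) == det(B)
    ```
    
    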
     
    Last edited: Oct 13, 2005
  8. Oct 13, 2005 #7

    TD

    User Avatar
    Homework Helper

    Not every matrix can be diagonalized. Over C though, it is possible to turn every matrix into an upper triangular matrix (e.g. with Gaussian elimination). Is that what you meant?
     
  9. Oct 13, 2005 #8
    TD and Galileo:

    It won't be easy to understand your posts but it will be a good test for me.

    Thanks again.
    Castilla.
     
  10. Oct 13, 2005 #9

    Galileo

    User Avatar
    Science Advisor
    Homework Helper

    Yeah, my mistake. I treated the nonsingular case separately in the proof so I could diagonalize.
     
  11. Oct 13, 2005 #10

    matt grime

    User Avatar
    Science Advisor
    Homework Helper


    be careful not to confuse (or cause to be confused) the notion of gaussian elimination to put something into upper triangular *non-conjugate* form, which has nothing to do with the base field being C or anything else, and the notion of a conjugate upper triangular matrix (jordan normal form)
     
  12. Oct 13, 2005 #11

    matt grime

    User Avatar
    Science Advisor
    Homework Helper

    incidentally, the proof that det is multiplicative depends on your definition of determinant. of course they are all equivalent, but with either of my two definitions of det it is obvious that det is multiplicative, and it is only if you define det as some expansion by rows that it is not clear that it is multiplicative.

    it is better to prove that det is the scale factor of volume, whence it becomes trivial to prove it is multiplicative
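    In the 2 x 2 case the "scale factor of volume" reading is easy to check directly: the unit square maps to the parallelogram spanned by the columns, and its signed (shoelace) area equals the determinant. A small sketch, assuming the standard shoelace formula (the helper name is my own):

    ```python
    def shoelace_area(pts):
        # signed area of a polygon given its vertices in order
        n = len(pts)
        return sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n)) / 2

    a, b, c, d = 3, 1, 1, 2          # the matrix [[a, b], [c, d]]
    det2 = a * d - b * c             # its 2x2 determinant

    # image of the unit square: 0, first column, column sum, second column
    square_image = [(0, 0), (a, c), (a + b, c + d), (b, d)]
    assert shoelace_area(square_image) == det2
    ```

    Multiplicativity then just says that composing two linear maps composes their volume scale factors.
    
    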
     
  13. Oct 14, 2005 #12

    TD

    User Avatar
    Homework Helper

    Right, thanks for pointing that out.
    May I ask what those two definitions are?
    In my linear algebra course (as I mentioned earlier), we first defined a 'determinant map' [itex]\det :M_{nn} \left( K \right) \to K[/itex] which had to be multilinear, alternating and satisfy det(In) = 1. Then we showed that this map exists, is unique and is given by:
    [tex]\det(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}[/tex]
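    That permutation formula translates almost verbatim into code. The sketch below (function names are my own) implements it and checks it against a recursive cofactor expansion:

    ```python
    from itertools import permutations
    from math import prod

    def det_leibniz(A):
        # det(A) = sum over sigma in S_n of sgn(sigma) * prod_i a_{sigma(i),i}
        n = len(A)
        def sgn(p):
            inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
            return -1 if inv % 2 else 1
        return sum(sgn(p) * prod(A[p[i]][i] for i in range(n)) for p in permutations(range(n)))

    def det_cofactor(A):
        # recursive Laplace expansion along the first row, for comparison
        n = len(A)
        if n == 1:
            return A[0][0]
        return sum((-1) ** j * A[0][j] *
                   det_cofactor([row[:j] + row[j + 1:] for row in A[1:]])
                   for j in range(n))

    A = [[2, 0, 1], [3, 1, 4], [1, 2, 0]]
    assert det_leibniz(A) == det_cofactor(A)
    assert det_leibniz([[1, 0], [0, 1]]) == 1   # det(I_n) = 1
    ```
    
    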
     
  14. Oct 15, 2005 #13

    matt grime

    User Avatar
    Science Advisor
    Homework Helper

    i told you: det is the scale factor of volume change.

    formally, look at the induced action on the n'th exterior power of the vector space.
     
  15. Feb 8, 2008 #14
    I may try harder later to follow this, but it seems like a rather advanced proof for something which should be basic.
     