
Prove trace of matrix: Tr(AB) = Tr(BA)

  1. Aug 4, 2017 #1
    1. The problem statement, all variables and given/known data

    The trace of a matrix is defined to be the sum of its diagonal matrix elements.
    1. Show that Tr(ΩΛ) = Tr(ΛΩ)
    2. Show that Tr(ΩΛθ) = Tr(θΩΛ) = Tr(ΛθΩ) (the permutations are cyclic)

    my note: the cross here, U†, is supposed to signify the adjoint of the unitary matrix U

    2. Relevant equations

    $$
    Tr(Ω) = \sum_{i=1}^n Ω_{ii} \\
    I = \sum_{k=1}^n | k \rangle \langle k |
    $$

    3. The attempt at a solution

    $$
    Tr(Ω) = \sum_{i=1}^n Ω_{ii} \\
    Tr(ΛΩ) = \sum_{i=1}^n (ΛΩ)_{ii} \\
    Ω_{ij} = \langle i | Ω | j \rangle
    $$

    (this is saying that when we take the product of the matrices, we sum the diagonal entries of the product, i.e. the element in the ith row and ith column)

    1.
    $$
    = \sum_{i=1}^n (ΛΩ)_{ii} \\
    = \sum_{i=1}^n \langle i |ΛΩ| i \rangle \\
    = \sum_{i=1}^n \langle i |Λ I Ω| i \rangle \\
    = \sum_{i=1}^n \sum_{k=1}^n \langle i |Λ| k \rangle \langle k |Ω| i \rangle \\
    = \sum_{i=1}^n \sum_{k=1}^n Λ_{ik} Ω_{ki}
    $$

    Unsure about how to finish; I think I'm on the right track but my thinking is a bit cloudy.

    To finish the proof I need to show the following, since this is where we end up if we start with the operators in the reverse order:

    $$
    \sum_{k=1}^n \sum_{i=1}^n Λ_{ik} Ω_{ki} = \sum_{k=1}^n \sum_{i=1}^n Ω_{ki} Λ_{ik}
    $$

    So correct me where/if I'm wrong.

    The way I'm thinking about it is in terms of matrix multiplication. The trace only sums the diagonal elements of a matrix, so when multiplying two matrices/operators the only terms that 'survive' are those that land on the diagonal, i.e. the ith row times the ith column. This is reflected in the indices: $$Λ_{ik}$$ runs along the ith row of Λ while $$Ω_{ki}$$ runs down the ith column of Ω as we sum over k = 1 to k = n.

    Is it possible that we can just swap the two indices k and i, since they both run to n? Can I just swap their positions, given that they are matrix elements (i.e. scalars) and thus commute?

    For part 2 I can use a similar method to get to:
    $$
    \sum_{k=1}^n \sum_{i=1}^n \sum_{j=1}^n Λ_{ik} Ω_{kj} θ_{ji}
    $$
    I notice that the last index of each matrix matches the first index of the next operator, which may be a clue to why it's cyclic, but I can't figure out why.

    Apologies for the long-winded attempt; I just wanted to be clear for anybody trying to understand my confusion. In my text this proof is in the Unitary Matrix/Determinant section. Those topics are obviously relevant to later parts of this question, but they don't appear to be relevant here.

    Any help greatly appreciated
     
  3. Aug 4, 2017 #2

    fresh_42

    Staff: Mentor

    It might be easier to work with matrices ##A_1,A_2,A_3## and calculate ##\operatorname{tr}(A_1A_2)## and ##\operatorname{tr}(A_1A_2A_3)##. Then the result should be symmetric in ##(1,2)## and cyclic in ##(1,2,3)##. In any case, you have finite sums here, so changing the order of summation is no issue, and the rest comes from the fact that your matrix entries are hopefully from a commutative and associative domain.
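
    For instance (a quick sketch of that calculation in the ##2 \times 2## case): ##\operatorname{tr}(A_1A_2) = \sum_{i,k} (A_1)_{ik}(A_2)_{ki}##, which written out is

    ##(A_1)_{11}(A_2)_{11} + (A_1)_{12}(A_2)_{21} + (A_1)_{21}(A_2)_{12} + (A_1)_{22}(A_2)_{22},##

    visibly unchanged if you swap ##A_1## and ##A_2##.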
     
  4. Aug 5, 2017 #3

    StoneTemplePython

    Gold Member

    My advice would be to spend most of your time on the first statement; you ultimately need to find a way to prove it that is intuitively satisfying to you, as traces are ridiculously useful and the cyclic property is vital. I.e. spend your time on proving

    ##trace\Big(\mathbf {AB}\Big) = trace\Big(\mathbf {BA}\Big)##

    Personally, I like to start simple and build, which in this context means rank one matrices, i.e. that

    ##trace\Big(\mathbf a \tilde{ \mathbf b}^T \Big) = trace\Big(\tilde{ \mathbf b}^T \mathbf a \Big)##

    If you look at matrix multiplication in terms of a series of rank one updates vs a bunch of dot products, you can build on this simple vector setup.
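
    For concreteness (a sketch, taking ##\mathbf a## and ##\tilde{ \mathbf b}## to be real ##n##-vectors): the rank one case collapses to a single dot product, since

    ##trace\Big(\mathbf a \tilde{ \mathbf b}^T \Big) = \sum_{i=1}^n a_i b_i = \tilde{ \mathbf b}^T \mathbf a = trace\Big(\tilde{ \mathbf b}^T \mathbf a \Big),##

    where the final trace is taken of a ##1 \times 1## matrix, i.e. a scalar.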

    - - - -
    once you have this locked down, use that fact plus associativity of matrix multiplication to get the cyclic property. Specifically consider

    ##trace\Big(\mathbf {XYZ}\Big) = trace\Big(\big(\mathbf {XY}\big)\big(\mathbf Z \big)\Big)##

    now call on the fact that ##trace\Big(\mathbf {AB}\Big) = trace\Big(\mathbf {BA}\Big)##, which if you assigned ##\mathbf A := \big(\mathbf {XY}\big)## and ##\mathbf B := \big(\mathbf Z\big)##, gives you the below:

    ##trace\Big(\big(\mathbf {XY}\big)\big(\mathbf Z \big)\Big) = trace\Big(\big(\mathbf Z \big)\big(\mathbf {XY}\big)\Big)##

    and use associativity and original proof once more to finish this off.
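
    (A sketch of that finish: regroup ##trace\Big(\big(\mathbf Z \big)\big(\mathbf {XY}\big)\Big) = trace\Big(\big(\mathbf {ZX}\big)\big(\mathbf Y \big)\Big)## and apply the two-matrix result once more to get ##trace\Big(\mathbf {YZX}\Big)##, so all three cyclic arrangements agree.)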
     
  5. Aug 5, 2017 #4
    Thanks for the help guys. I am beginning to understand some of my issues.

    However, I realized that this identity also applies to non-square matrix products, i.e. if we have matrix A (a 2x3 matrix) and matrix B (a 3x2 matrix), then AB produces a 2x2 matrix and BA produces a 3x3 matrix, yet the traces are still the same. That was something I hadn't even considered, and it's very interesting.
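
    To convince myself, I ran a quick numerical check (a sketch, assuming Python with numpy):

    Code (Python):
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(2, 3)   # a 2x3 matrix
    B = np.random.randn(3, 2)   # a 3x2 matrix

    tr_AB = np.trace(A @ B)     # trace of the 2x2 product AB
    tr_BA = np.trace(B @ A)     # trace of the 3x3 product BA

    print(tr_AB, tr_BA)         # same value twice
    assert np.isclose(tr_AB, tr_BA)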

    I was just wondering if you could elaborate on this a bit more? More specifically, why do you denote the complex conjugate and transpose of b? Also, the term 'rank one updates' is unfamiliar to me. My linear algebra is geared towards QM and lacks some generality.

    Also, I figured out the answer to why they are cyclic. Very nice proof indeed :)
     
  6. Aug 5, 2017 #5

    StoneTemplePython

    Gold Member

    I would probably start off by assuming the scalars are in ##\mathbb R##; that way you don't get lost in things like dot product versus inner product. If you right-click the LaTeX and do 'show math as -> Tex commands', you'll see that the symbol above the b vector in

    ##trace\Big(\mathbf a \tilde{ \mathbf b}^T \Big) = trace\Big(\tilde{ \mathbf b}^T \mathbf a \Big)##

    is actually the "tilde" sign.

    People don't always use it, but basically ##\tilde{ \mathbf b}^T## indicates that the vector is a row vector and was originally a row vector. (As opposed to if it didn't have the tilde sign, it would mean you have a column vector that has been transposed so that it is now a row vector.)

    Notation is not uniform though... FWIW, if I were showing a conjugate transpose I would write ##\mathbf b^H## or ##\mathbf b^*##, but again, in the example shown here, conjugation has nothing to do with anything... you're certainly welcome to ignore the tilde signs if you want. There is a hint, though, tied in with the tilde sign: consider how to partition ##\mathbf A## and ##\mathbf B## appropriately into row and column vectors, and look at the two different types of multiplication mentioned.

    As for rank one updates -- you can just call it a finite series of rank one matrices if you want. Sometimes people call it the outer product interpretation of matrix multiplication.
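
    In symbols (a sketch, writing ##\mathbf a_k## for the ##k##-th column of ##\mathbf A## and ##\tilde{ \mathbf b}_k^T## for the ##k##-th row of ##\mathbf B##):

    ##\mathbf {AB} = \sum_{k} \mathbf a_k \tilde{ \mathbf b}_k^T##

    i.e. the product is a finite sum of rank one matrices, and since the trace is linear, the rank one identity above does the heavy lifting.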

    Again, the key thing is finding a way to prove

    ##trace\Big(\mathbf {AB}\Big) = trace\Big(\mathbf {BA}\Big)##

    in a way that is satisfying and intuitive to you. Most proofs I've seen involve juggling double sigmas. I prefer something a bit more visual, and building off of rank one matrices.

    In Linear Algebra Done Wrong, they strongly hint at using, in effect, indicator variables for ##\mathbf B## to prove this, but ultimately the problem is left as an exercise. It is a very good book, and free. You may want to read it at some point. It is available here:

    https://www.math.brown.edu/~treil/papers/LADW/book.pdf
     
  7. Aug 5, 2017 #6

    fresh_42

    Staff: Mentor

    You have already done all the work needed in your opening post under point ##1##. Just use ##\Lambda_{ik}\Omega_{ki}=\Omega_{ki}\Lambda_{ik}##, switch the summation order, and walk the same way back to end up with ##\operatorname{Tr}(\Omega \Lambda)##. The ##3##-cycles follow from what @StoneTemplePython said in post #3, using the associativity of matrix multiplication. No need for ranks, row or column vectors, transposes, conjugates, or special fields, except that the scalar field has to be commutative and associative.
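
    Spelled out, that walk back is: ##\operatorname{Tr}(\Lambda\Omega) = \sum_{i=1}^n\sum_{k=1}^n \Lambda_{ik}\Omega_{ki} = \sum_{k=1}^n\sum_{i=1}^n \Omega_{ki}\Lambda_{ik} = \sum_{k=1}^n (\Omega\Lambda)_{kk} = \operatorname{Tr}(\Omega\Lambda).##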
     