
Commutativity of Tensor Field Multiplication

  1. Jan 22, 2010 #1
    It may seem like a very simple question, but I just want to clarify something:

    Is tensor field multiplication non-commutative in general?

    For example, if I have two tensors [itex] A_{ij}, B_k^\ell [/itex] then in general, is it true that

    [tex] A_{ij} B_k^\ell \neq B_k^\ell A_{ij} [/tex]

    I remember them being non-commutative, but I want to make sure.
     
    Last edited: Jan 22, 2010
  3. Jan 22, 2010 #2

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    The example you wrote down should have an equals sign. Just write out the sum without the Einstein summation convention; all you're doing is reversing the order of two real numbers being multiplied in each term.

    On the other hand, [itex]A_i^j B_{jk} \ne B_i^j A_{jk}[/itex], because, e.g., in a metric with +++ signature, this would just be a way of writing ordinary matrix multiplication, which is noncommutative.

    Another example would be that the covariant derivative acts like a tensor, in the sense that you can raise and lower indices on it, but covariant derivatives don't commute with each other -- their commutator is the Riemann tensor.
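    Both points are easy to check numerically. A quick numpy sketch (random 3x3 components, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # components A_ij
B = rng.standard_normal((3, 3))   # components B_k^l

# Reversing the written order of A_ij B_k^l only reverses two real
# numbers in each component, so the two products are identical.
AB = np.einsum('ij,kl->ijkl', A, B)
BA = np.einsum('kl,ij->ijkl', B, A)
print(np.allclose(AB, BA))        # True

# Contracting an index, as in A_i^j B_jk, is ordinary matrix
# multiplication, and that does depend on the order of the factors.
print(np.allclose(A @ B, B @ A))  # False
```

    The first comparison swaps only the written order of the factors; the second actually rearranges the contracted indices, which is where the noncommutativity lives.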
     
  4. Jan 22, 2010 #3

    bapowell

    Science Advisor

    Yes, in general tensor multiplication is non-commutative. Matrix multiplication is an example.
     
    Last edited: Jan 22, 2010
  5. Jan 22, 2010 #4
    This is what I thought, though I would like to clarify a bit.

    So for fixed indices, if the underlying (mathematical) field is commutative, then the product as I've written it commutes, since the factors are just field elements. However, the moment we introduce a summation, we can no longer guarantee commutativity? Scalars being the exception.
     
  6. Jan 22, 2010 #5

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    This is incorrect. His example is not an example of reversing the order of multiplication of two matrices. See my #2. If all you do is reverse the order of the two factors, written in Einstein summation convention, that isn't the same as reversing the order of multiplication of two matrices; you have to change the arrangement of the indices with respect to the two tensors, or else you're just writing another expression that's equivalent to the original expression.
     
  7. Jan 22, 2010 #6
    So the noncommutativity really arises from the index placement, not from the order in which the factors are written.
     
  8. Jan 22, 2010 #7

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    The issue isn't that there's a summation. Your example includes a summation (an implied Einstein summation), and is an equality. My example in #2 includes a summation, and is an inequality. The issue is that you didn't rearrange the indices in the way you'd have to in order to represent something like commutation of matrix multiplication.
     
  9. Jan 22, 2010 #8

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    Right :-)
     
  10. Jan 22, 2010 #9

    bapowell

    Science Advisor

    Agreed. I didn't look closely enough at his example. Tried to edit my post but, alas, too late! Apologies.
     
  11. Jan 23, 2010 #10

    George Jones

    Staff Emeritus
    Science Advisor
    Gold Member

    There seems to be some confusion in this thread, so I am going to try to contribute further confusion.

    Suitably interpreted, the answer to the question "Is tensor multiplication commutative?" is "No," and this agrees with everything that bcrowell wrote.

    I think (but I could be wrong, and apologies if so) that Kreizhn and bapowell mean "tensor product" when they write "tensor multiplication," and the tensor product of two tensors is non-commutative, that is, if [itex]\mathbf{A}[/itex] and [itex]\mathbf{B}[/itex] are two tensors, then it is not generally true that [itex]\mathbf{A} \otimes \mathbf{B} = \mathbf{B} \otimes \mathbf{A}[/itex].

    Consider a simpler example. Let [itex]V[/itex] be a finite-dimensional vector space, and let [itex]\mathbf{u}[/itex] and [itex]\mathbf{v}[/itex] both be non-zero vectors in [itex]V[/itex]. Form the tensor product space [itex]V \otimes V[/itex]. To see when

    [tex]0 = \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u},[/tex]

    introduce a basis [itex]\left\{ \mathbf{e}_i \right\}[/itex] for [itex]V[/itex] so that [itex]\left\{ \mathbf{e}_i \otimes \mathbf{e}_j \right\}[/itex] is a basis for [itex]V \otimes V[/itex]. Then,

    [tex]
    \begin{equation*}
    \begin{split}
    0 &= \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u} \\
    &= \left(u^i v^j - u^j v^i \right) \mathbf{e}_i \otimes \mathbf{e}_j .
    \end{split}
    \end{equation*}
    [/tex]

    Because the basis elements are linearly independent,

    [tex]u^i v^j = u^j v^i[/tex]

    for all possible [itex]i[/itex] and [itex]j[/itex]. WLOG, assume that all the components of [itex]\mathbf{u}[/itex] are non-zero. Consequently,

    [tex]\frac{v^j}{u^j} = \frac{v^i}{u^i}[/tex]

    (no sum) for all possible [itex]i[/itex] and [itex]j[/itex], i.e., [itex]\mathbf{u}[/itex] and [itex]\mathbf{v}[/itex] are parallel.

    Thus, if non-zero [itex]\mathbf{u}[/itex] and [itex]\mathbf{v}[/itex] are not parallel,

    [tex]\mathbf{u} \otimes \mathbf{v} \ne \mathbf{v} \otimes \mathbf{u}.[/tex]

    In component form, this reads

    [tex]u^i v^j \ne u^j v^i[/tex]

    for some [itex]i[/itex] and [itex]j[/itex]. As bcrowell emphasized, placement of indices is crucial.

    In the original post, I think (again, I could be wrong) that Kreizhn was trying to formulate the property of non-commutativity of tensor products in the abstract-index approach advocated by, for example, Penrose and Wald. In this approach, indices do *not* refer to components with respect to a basis (no basis is chosen) and indices do *not* take on numerical values (like 0, 1, 2, 3); rather, an index picks out a copy of the vector space [itex]V[/itex]. The index [itex]i[/itex] on [itex]v^i[/itex] indicates the copy of [itex]V[/itex] in which [itex]v^i[/itex] resides. Vectors [itex]v^i[/itex] and [itex]v^j[/itex] live in different copies of [itex]V[/itex]. Vectors [itex]v^i[/itex] and [itex]u^i[/itex] live in the same copy of [itex]V[/itex].

    In the component approach, [itex]v^i u^j = u^j v^i[/itex] because multiplication of real numbers is commutative. In the abstract-index approach, [itex]v^i u^j = u^j v^i[/itex] because on each side [itex]v^i[/itex] lives in the same copy of [itex]V[/itex], and on each side [itex]u^j[/itex] lives in the same (different) copy of [itex]V[/itex].

    In the abstract index approach, non-commutativity of tensor products is indicated by, for example, [itex]v^i u^j \ne v^i u^j[/itex].
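    The parallel-vector criterion derived above is easy to verify numerically. A small numpy sketch, with arbitrary illustrative vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = 2.0 * u                      # parallel to u
w = np.array([1.0, 0.0, 1.0])   # not parallel to u

# The tensor product (u ⊗ v)_ij = u_i v_j, computed as an outer product.
print(np.allclose(np.outer(u, v), np.outer(v, u)))  # True: parallel factors commute
print(np.allclose(np.outer(u, w), np.outer(w, u)))  # False: non-parallel factors don't
```

    Swapping the factors transposes the outer product, so the two orders agree exactly when u_i v_j = u_j v_i for all i, j, i.e., when the vectors are parallel.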
     
  12. Jan 23, 2010 #11

    bcrowell

    Staff Emeritus
    Science Advisor
    Gold Member

    I'm not sure that it really matters which way you interpret Kreizhn's original post. Let's say he wrote down the conjecture

    [tex] A_{ij} B_k^\ell \neq B_k^\ell A_{ij} [/tex]

    with abstract index notation in mind. Then one way to test the conjecture is like this. We know by the definition of manifolds that the manifold is locally compatible with coordinate systems, so since coordinate systems exist, let's arbitrarily fix one. Rewrite the equation with Greek indices to show that they refer to these coordinates, rather than being abstract indices.

    [tex] A_{\mu \nu} B_\kappa^\lambda \neq B_\kappa^\lambda A_{\mu \nu} [/tex]

    By the axioms of the real numbers we can see that this is actually an equality, not an inequality. Since the equality held regardless of any assumption about which particular coordinates we chose, it follows that the original inequality, in abstract index notation, should also be an equality.

    In other words, the rules of tensor gymnastics don't change just because you're using abstract index notation.

    This seems fine to me, but I would emphasize that, as in the example I gave above, you don't need to forswear the manipulation of symbols according to the ordinary axioms of the real number system just because you're using abstract index notation. All you have to forswear is invocation of any special properties of a particular set of coordinates.

    Here you've really lost me. I think this must just be a typo or something, because both sides of the inequality are written using identical symbols, so it must be an equality.
     
  13. Jan 23, 2010 #12
    Kreizhn brought up the same question in another thread, where I answered it fairly fully.

    But now I want to clear up, from my own point of view, some of the confusion that has arisen here.


    bcrowell's answer above is correct, and George Jones confirms it.

    Rest assured that both are correct. I know exactly where the question comes from. I think Kreizhn works in some area of physics that treats tensors from a physicist's standpoint. I say this because physicists usually don't spell out the mathematical machinery behind their notation, and perhaps shy away from symbols like [tex]\otimes[/tex]. When something like [tex]A_{ij} B_k^\ell[/tex] is seen for the first time in a textbook, two scenarios can come to mind. First, one can take it to be a tensor product, which is a natural reading even for an expert: in GR we treat a symbol like [tex]g_{ab}[/tex] -- basically the metric tensor -- as a second-rank 4-by-4 matrix, while it really denotes the components of that matrix, so someone could well read [tex]g_{ab}v^a[/tex] as a tensor product, with [itex]v^a[/itex] a 4-by-1 tensor (matrix). On that reading, the scenario is logical, and the OP's concern that [tex] A_{ij} B_k^\ell \neq B_k^\ell A_{ij} [/tex] becomes entirely rational.

    Remember that, to avoid confusion, mathematicians use bold-faced Latin letters for a second-rank tensor or matrix, for example, so they are not worried about anything arising from a componential representation such as [tex]g_{ab}v^a[/tex]. That is the second scenario: [tex]g_{ab}v^a[/tex] is the componential representation of a matrix multiplication, i.e., in each component, a sum of ordinary products of real numbers, and componential multiplication is commutative. This scenario, besides being logical, is TRUE, and that is what distinguishes it from the first one.

    And the last thing to recall is that in the componential approach one CAN regard the multiplication as non-commutative, but only under the substantial proviso that all the multiplications are understood to be matrix multiplications.
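    To make the second scenario concrete, here is a small numpy sketch of the componential contraction [tex]g_{ab}v^a[/tex], with arbitrary illustrative components:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))  # components g_ab
v = rng.standard_normal(4)       # components v^a

# In each component, g_ab v^a is a sum of products of real numbers,
# so the written order of the factors is irrelevant.
lhs = np.einsum('ab,a->b', g, v)
rhs = np.einsum('a,ab->b', v, g)
print(np.allclose(lhs, rhs))     # True
```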

    AB
     
    Last edited: Jan 23, 2010