Do Tensor Fields Commute When Multiplied?

  • Thread starter: Kreizhn
  • Tags: Field Tensor

SUMMARY

The discussion clarifies that the product of two tensor fields, specifically (0, 2) tensors A and B, is generally non-commutative when considering their tensor product A ⊗ B versus B ⊗ A. While scalar components of these tensors can commute, the ordering of indices in tensor notation is crucial, leading to different tensor products. The conversation also distinguishes between tensor products and matrix multiplication, emphasizing that matrix products reflect a contraction and are fundamentally non-commutative. Thus, understanding the distinction between tensor operations and matrix operations is essential for accurate calculations in tensor algebra.

PREREQUISITES
  • Understanding of tensor notation and indices
  • Familiarity with tensor products, specifically (0, 2) tensors
  • Knowledge of matrix multiplication and linear transformations
  • Basic principles of Einstein summation convention
NEXT STEPS
  • Study the properties of tensor products in detail
  • Learn about the Einstein summation convention and its applications
  • Explore the differences between tensor operations and matrix operations
  • Investigate the implications of non-commutativity in tensor algebra
USEFUL FOR

Mathematicians, physicists, and students of advanced mathematics who are working with tensor calculus, linear algebra, or general relativity will benefit from this discussion.

Kreizhn

Homework Statement


The folks in the SR & GR forum aren't being very helpful, so maybe I can get this quickly resolved here.

I want to know if the product of two tensor fields is generally non-commutative. That is, if I have two tensor fields [itex]A_{ij}, B_k^\ell[/itex] do these representations commute?

The Attempt at a Solution



I feel generally quite conflicted about this subject, and I think it's because I don't fully understand what the representations mean. On one hand, I want to say that for a fixed i,j these simply represent scalar elements and so certainly commute. However, taken as general tensors (for example matrices), they would not commute. That is, if A and B were matrices, then [itex]AB \neq BA[/itex] in general, but given the representations [itex](A)_{ij} = a_{ij}, (B)_{ij} = b_{ij}[/itex] then certainly [itex]a_{ij}b_{k\ell} = b_{k\ell} a_{ij}[/itex] - the only "non-commutativity" comes in the ordering of the indices. Can anybody shed some light on this situation?
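The tension described above can be seen concretely in a short numerical check. This is a minimal numpy sketch (the particular 2x2 matrices are illustrative assumptions, not taken from the thread):

```python
import numpy as np

# Two fixed matrices: their individual entries are scalars and commute,
# but the matrices themselves do not commute under matrix multiplication.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Component-wise, a_ij * b_kl == b_kl * a_ij for every choice of indices.
i, j, k, l = 0, 1, 1, 0
assert A[i, j] * B[k, l] == B[k, l] * A[i, j]

# But the matrix products differ: AB != BA in general.
print(np.allclose(A @ B, B @ A))  # False for these matrices
```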
 
If [tex]A[/tex] and [tex]B[/tex] are two [tex](0, 2)[/tex] tensors with components [tex]a_{ij}[/tex] and [tex]b_{ij}[/tex] respectively, then the [tex](0, 4)[/tex] tensor with components [tex]c_{ijkl} = a_{ij} b_{kl}[/tex] is the tensor product [tex]A \otimes B[/tex]. You are correct to observe that this tensor differs from [tex]B \otimes A[/tex], which has components [tex]c'_{ijkl} = b_{ij} a_{kl}[/tex], only in the ordering of the indices -- but since the order of the indices for tensors is very important, you can't think of [tex]\otimes[/tex] as a commutative product.
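The point above can be checked numerically. Here is a minimal numpy sketch, with two (0, 2) tensors represented as 2x2 arrays (the random values are an illustrative assumption):

```python
import numpy as np

# Two (0,2) tensors represented as 2x2 arrays of components.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# Tensor products as (0,4) tensors: c_ijkl = a_ij b_kl vs c'_ijkl = b_ij a_kl.
AB = np.einsum('ij,kl->ijkl', A, B)   # components of A (x) B
BA = np.einsum('ij,kl->ijkl', B, A)   # components of B (x) A

# The two tensors contain the same numbers, but with the index blocks swapped:
# c'_ijkl = c_klij, so A(x)B != B(x)A as (0,4) tensors.
print(np.allclose(AB, BA))                        # False in general
print(np.allclose(BA, AB.transpose(2, 3, 0, 1)))  # True: only index order differs
```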

When you mention matrices, you are talking about a different product, which is noncommutative in a more fundamental way. In their role as linear transformations, you can think of matrices as [tex](1, 1)[/tex] tensors. If [tex]A[/tex] and [tex]B[/tex] are matrices expressed this way, with components [tex]a^i_j[/tex] and [tex]b^i_j[/tex], then the matrix product [tex]AB[/tex] (composition of linear transformations) has components [tex]c^i_j = \sum_k a^i_k b^k_j[/tex], or simply [tex]a^i_k b^k_j[/tex] in the Einstein summation convention. In tensor language, the matrix product (composition) is actually reflected as a contraction (which is also how the product of two [tex](1, 1)[/tex] tensors can be another [tex](1, 1)[/tex] tensor and not a [tex](2, 2)[/tex] tensor, as the tensor product would be).
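The contraction picture of the matrix product can also be made explicit with numpy's einsum (the specific matrices are illustrative assumptions):

```python
import numpy as np

# The matrix product as a tensor contraction: c^i_j = a^i_k b^k_j.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],
              [3.0, 1.0]])

C = np.einsum('ik,kj->ij', A, B)   # contraction over the shared index k
assert np.allclose(C, A @ B)       # identical to ordinary matrix multiplication
print(np.allclose(A @ B, B @ A))   # False: composition is noncommutative

# The full tensor product keeps all four indices; contracting over k
# collapses it back down to a (1,1) tensor, i.e. an ordinary matrix.
full = np.einsum('ik,lj->iklj', A, B)                   # no summation: 4-index tensor
assert np.allclose(np.einsum('ikkj->ij', full), A @ B)  # trace over k recovers AB
```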
 
ystael said:
If [tex]A[/tex] and [tex]B[/tex] are two [tex](0, 2)[/tex] tensors ... the matrix product (composition) is actually reflected as a contraction.

Some prefer to identify matrices with (1,1) tensors, while others reserve the matrix picture for (0,2) and (2,0) tensors. The reason is that the 4-by-4 matrices physicists usually write down on a 4d spacetime, such as the metric, are presented as second-rank square matrices, and mixed tensors are not commonly identified with matrices in that language; calling a mixed tensor "the same kind of matrix" can therefore seem odd. Besides, if we represent [tex]v^i[/tex] (i=0,...,3) as a 1-by-4 matrix (i.e. a row vector) and a mixed tensor as a 4-by-4 matrix, then from the transformation formula

[tex]v^i=\frac{\partial x^i}{\partial \bar{x}^j}\bar{v}^j[/tex]

one would expect to compute a [tex](4\times 4)(1\times 4)[/tex] product, which is not even defined (the inner dimensions do not match), whereas if the transformation formula is written as

[tex]v^i=\bar{v}^j\frac{\partial x^i}{\partial \bar{x}^j}[/tex],

everything would be okay. The same issue arises when one lowers or raises an index using the metric [tex]g_{ij}[/tex], i.e.

[tex]v_i=g_{ij}v^j[/tex],

then, taking the preceding path, [tex]g_{ij}[/tex] read as a 4-by-4 matrix cannot act from the left on a 1-by-4 row vector; the contraction must be written vector-first, [tex]v_i = v^j g_{ji}[/tex], which equals [tex]g_{ij} v^j[/tex] because the metric is symmetric.
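The row-vector bookkeeping above can be sketched in numpy. This assumes the Minkowski metric with signature (-,+,+,+) and an arbitrary illustrative vector:

```python
import numpy as np

# Minkowski metric g_ij = diag(-1, 1, 1, 1) (signature convention assumed).
g = np.diag([-1.0, 1.0, 1.0, 1.0])
v_upper = np.array([[2.0, 1.0, 0.0, 0.0]])   # v^j as a 1x4 row vector

# With a row vector, the contraction must be written v^j g_ji (vector first):
v_lower = v_upper @ g                        # shape (1, 4): still a row vector
print(v_lower)                               # [[-2.  1.  0.  0.]]

# Writing the matrix first is not even a defined product for a row vector:
try:
    g @ v_upper                              # (4x4) @ (1x4): shapes don't align
except ValueError as e:
    print("shape mismatch:", e)
```

Since the metric is symmetric, `v_upper @ g` gives the same components as the index expression [tex]g_{ij} v^j[/tex]; only the matrix bookkeeping forces the vector-first ordering.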

So, to answer the OP's question directly about why non-commutativity does not show up in the componential representation: individual components are just numbers, and you can freely swap two numbers under ordinary multiplication, but you cannot do the same to an arrangement of numbers governed by a different law of multiplication which is, deep down, not commutative. When we treat rank-2 tensors as matrices, or mix matrices and tensors in one expression, like [tex]Ag_{ab}C[/tex] where A and C are 4-by-4 matrices and [tex]g_{ab}[/tex] is the second-rank metric tensor, non-commutativity must be taken seriously in our calculations.

And note that only second-rank tensors can be written as matrices in the first place, so none of this carries over to something like a (0,3) tensor.

AB
 
