ystael said:
If A and B are two (0, 2) tensors with components a_{ij} and b_{ij} respectively, then the (0, 4) tensor with components c_{ijkl} = a_{ij} b_{kl} is the tensor product A \otimes B. You are correct to observe that this tensor differs from B \otimes A, which has components c'_{ijkl} = b_{ij} a_{kl}, only in the ordering of the indices -- but since the order of the indices for tensors is very important, you can't think of \otimes as a commutative product.
When you mention matrices, you are talking about a different product, which is noncommutative in a more fundamental way. In their role as linear transformations, you can think of matrices as (1, 1) tensors. If A and B are matrices expressed this way, with components a^i_j and b^i_j, then the matrix product AB (composition of linear transformations) has components c^i_j = \sum_k a^i_k b^k_j, or simply a^i_k b^k_j in the Einstein summation convention. In tensor language, the matrix product (composition) is actually reflected as a contraction (which is also how the product of two (1, 1) tensors can be another (1, 1) tensor and not a (2, 2) tensor, as the tensor product would be).
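Both claims in the quoted answer can be checked numerically. Here is a small sketch (the dimension 3 and the random components are my own invention, just to make the indices concrete) using NumPy's `einsum`:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))  # components a_{ij} (or a^i_j below)
b = rng.standard_normal((3, 3))  # components b_{kl} (or b^k_l below)

# Tensor product of two (0,2) tensors: c_{ijkl} = a_{ij} b_{kl}.
c = np.einsum('ij,kl->ijkl', a, b)        # A tensor B
c_prime = np.einsum('ij,kl->ijkl', b, a)  # B tensor A

print(np.allclose(c, c_prime))                              # generally False
print(np.allclose(c, np.transpose(c_prime, (2, 3, 0, 1))))  # True: same numbers, reordered indices

# Matrix product as a contraction of the (2,2) tensor product:
t = np.einsum('ij,kl->ijkl', a, b)  # full tensor product, a (2,2) tensor
m = np.einsum('ijjl->il', t)        # contract j with k: c^i_l = a^i_k b^k_l
print(np.allclose(m, a @ b))        # True: contraction recovers the matrix product
print(np.allclose(a @ b, b @ a))    # generally False: composition is noncommutative
```

The transpose check makes the quoted point explicit: A ⊗ B and B ⊗ A contain the same numbers and differ only in index order, while the matrix product is genuinely order-sensitive.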
Some prefer to identify matrices with (1, 1) tensors, while others reserve the matrix picture for (0, 2) and (2, 0) tensors, which certainly sounds reasonable. The reason is that among physicists, 4-by-4 matrices (on a 4d spacetime) are rarely identified with mixed tensors; in their language you often find statements like "the metric tensor is a second-rank square matrix", and if that is the convention, then treating mixed tensors as matrices of the same kind does seem odd. Besides, if we represent v^i (i = 0, ..., 3) as a 1-by-4 matrix (i.e. a row vector) and a mixed tensor as a 4-by-4 matrix, then from the transformation formula
v^i=\frac{\partial x^i}{\partial \bar{x}^j}\bar{v}^j
one would have to form a (4\times 4)(1\times 4) product, which is not even defined, whereas if the transformation formula were written as
v^i=\bar{v}^j\frac{\partial x^i}{\partial \bar{x}^j},
everything would be fine, since (1\times 4)(4\times 4) = (1\times 4). The same situation arises when one wants to lower or raise an index using the metric matrix g_{ij}, i.e.
v_i=g_{ij}v^j,
then, taking the preceding path, one would again be led to assign a 4-by-4 array to the covector v_i!
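The shape bookkeeping above can be made concrete with a short sketch (the Jacobian J and the Minkowski metric are my own invented stand-ins, chosen only to fix the shapes):

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.standard_normal((4, 4))      # J[i, j] stands in for the Jacobian of x^i w.r.t. xbar^j
v_bar = rng.standard_normal((1, 4))  # vbar as a 1-by-4 row vector

# (4x4)(1x4) is not conformable: NumPy refuses the first ordering.
try:
    J @ v_bar
except ValueError:
    print("shapes (4, 4) and (1, 4) are not conformable")

# The row-vector convention forces the row to multiply from the left,
# with a transpose so the sum runs over the second index of J:
v = v_bar @ J.T                      # v^i = vbar^j J[i, j], a 1-by-4 row
print(v.shape)                       # (1, 4)

# Same bookkeeping when lowering an index with a metric:
g = np.diag([1.0, -1.0, -1.0, -1.0])  # a sample metric g_{ij}, written as a matrix
v_lower = v @ g.T                     # v_i = g_{ij} v^j, again a 1-by-4 row
print(v_lower.shape)                  # (1, 4)
```

The `try`/`except` makes the point of the paragraph explicit: with row vectors, only one ordering of the factors is even defined.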
So, to give a simple answer to the OP's question about why non-commutativity does not show up in the component representation of matrices/tensors: the components are just numbers, and you can freely swap two numbers under ordinary multiplication, but you cannot do the same to whole arrangements of numbers under a law of multiplication that is, deep down, not commutative. So when we deal with rank-2 tensors as matrices, or with a mix of matrices and tensors together, like Ag_{ab}C where A and C are 4-by-4 matrices and g_{ab} is the second-rank metric tensor, non-commutativity must be taken seriously in our calculations.
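A quick check of that last claim (the entries of A, C, and the metric here are arbitrary invented numbers, used only to show the order sensitivity):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))       # a 4-by-4 matrix
C = rng.standard_normal((4, 4))       # another 4-by-4 matrix
g = np.diag([1.0, -1.0, -1.0, -1.0])  # the metric g_{ab} written as a matrix

# Reordering the factors in a chain like A g C changes the result:
print(np.allclose(A @ g @ C, C @ g @ A))  # generally False
```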
And I assume you know that a second-rank tensor has exactly the index structure of a matrix, so its components fill a square array; none of this matrix language works if we are given something like a (0, 3) tensor.