Why is a Matrix a Tensor of Type (1,1)?

SUMMARY

A matrix, when used as a linear operator, is naturally a tensor of type (1,1): it is contracted with one row vector on the left and one column vector on the right to produce a scalar, and under coordinate changes it transforms with one contravariant and one covariant index. Tensors of types (2,0) and (0,2) can also be written in matrix form, but this is an abuse of notation that implicitly invokes the basis-dependent transpose, and those objects obey different transformation laws. Contraction makes the type visible: under contraction, ranks of opposite type must cancel to yield a scalar.

PREREQUISITES
  • Understanding of tensor notation and indexing
  • Familiarity with linear algebra concepts, particularly matrices and vectors
  • Knowledge of covariant and contravariant transformations
  • Basic grasp of Einstein summation convention
NEXT STEPS
  • Study the properties of covariant and contravariant tensors in detail
  • Learn about tensor contraction and its implications in tensor algebra
  • Explore the differences between rank (1,1) tensors and higher rank tensors
  • Investigate the role of orthonormal bases in simplifying tensor operations
USEFUL FOR

This discussion is beneficial for students and professionals in mathematics, physics, and engineering who are working with tensors, particularly those seeking to deepen their understanding of tensor types and transformations.

mmmboh:
I decided to take out a book and read about tensors on my own, but I'm having a bit of trouble, mainly with the indexing (although I understand covariance and contravariance, at least superficially). Why is a matrix a tensor of type (1,1)? Obviously it is of order 2, but why not (2,0) or (0,2)? Does it have to do with the fact that a matrix consists of rows and columns, and a (1,0) tensor is a row vector while a (0,1) tensor is a column vector? I understand that under coordinate transformations a (1,1) tensor transforms a certain way, but I don't see why a matrix must necessarily transform as a (1,1) tensor.

Can someone help me out please?
 
Note that we often write the components of rank (2,0) and (0,2) tensors as matrices (as, for example, when we write a metric), but this is a corruption of the proper notation. When we do so we are "kludging" a bit by invoking the transpose, which is a highly basis-dependent entity.

The best way to see the rank of a tensor is to contract it with enough rank (0,1) and (1,0) vectors to get a scalar. Under contraction, ranks of opposite type must cancel.

To get a scalar from a matrix you must multiply on the left by a row vector and on the right by a column vector.
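For instance, a minimal numpy sketch (the values of M, row, and col are made up purely for illustration):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
row = np.array([5.0, 6.0])   # multiplies M on the left
col = np.array([7.0, 8.0])   # multiplies M on the right

# Contracting both indices of M -- one against the row vector,
# one against the column vector -- leaves a single number.
scalar = row @ M @ col
print(scalar)  # a scalar, not an array
```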

If you have an object which maps three row vectors and two column vectors to a scalar, then it is a rank (2,3) tensor: the two column vectors are rank (0,1) tensors and the three row vectors are rank (1,0) tensors, so their product is a rank (3,2) tensor, which the (2,3) cancels to yield a scalar.
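To make the counting concrete, here is a sketch with np.einsum (the tensor T and the five vectors are random stand-ins; the index string just pairs each of T's five indices with one vector):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 2, 2, 2, 2))   # a rank (2,3) object: five slots in total
a, b, u, v, w = (rng.normal(size=2) for _ in range(5))

# Contract all five indices of T against the five vectors;
# no free index is left over, so the result is a scalar.
s = np.einsum('ijklm,i,j,k,l,m->', T, a, b, u, v, w)
print(s)
```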

The more proper statement is that matrices, when used as linear operators, are rank (1,1).

This is easiest to see, and most natural, in index notation (repeated indices within a term are summed via Einstein's convention). Write MA = B, with M a matrix operator and A and B column vectors. In component form:

M^i_j A^j = B^i
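As a sketch, np.einsum mirrors this index notation directly (M and A are arbitrary example values):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.array([1.0, -1.0])

# B^i = M^i_j A^j: the repeated index j is summed, the free index i survives.
B = np.einsum('ij,j->i', M, A)
assert np.allclose(B, M @ A)  # identical to the ordinary matrix-vector product
```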

If we really wanted to write a metric (on the space of column vectors) properly, we should write it as a row vector of row vectors (a rank (2,0) tensor). For example (the entries here are numbers, not tensor types):

g = ( (2,0)\, , \, (0,3) )

A metric applied to one vector gives its "geometric transpose" or "orthogonal dual" (I'm sure there's a more proper name for this but it escapes me at the moment.):
g(X) = g\left(\begin{array}{c}x_1\\ x_2\end{array}\right)=( (2,0)\, , \, (0,3) )\left(\begin{array}{c}x_1\\ x_2\end{array}\right)= (2,0)x_1 + (0,3)x_2 = (2 x_1,\, 3 x_2)
g has mapped a column vector to a row vector. Applying to two column vectors gives their dot product under that metric:
g(X,Y) = (gX)Y = (2x_1, 3 x_2)\left(\begin{array}{c}y_1\\ y_2\end{array}\right)=2x_1 y_1 + 3 x_2 y_2
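A quick numerical check of the example above (the components of X and Y are arbitrary):

```python
import numpy as np

g = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # the metric from the post, written as a matrix
X = np.array([1.0, 2.0])     # (x1, x2)
Y = np.array([4.0, 5.0])     # (y1, y2)

gX = g @ X                   # the "geometric transpose": (2 x1, 3 x2)
print(gX)                    # [2. 6.]
print(gX @ Y)                # 2 x1 y1 + 3 x2 y2 = 38.0
```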

Similarly, a dual metric (rank (0,2)) would be written as a column vector of column vectors; I'll skip typesetting it as it takes up too much space.

Typically we try to work in an orthonormal basis; as long as we're working in a Euclidean space, the metric in matrix form then looks like the identity matrix, and the matrix transpose corresponds to the "orthogonal dual".
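A one-line check of that remark (sketch; the basis is assumed orthonormal, so the metric's components are the identity matrix):

```python
import numpy as np

g = np.eye(2)                  # metric components in an orthonormal basis
X = np.array([1.0, 2.0])
# With g = I, the metric dual of X has the same components as X itself,
# so the plain matrix transpose reproduces the "orthogonal dual".
print(np.allclose(g @ X, X))   # True
```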
 
OK, so I get the motivation behind it, but are you saying that you can't actually represent a (2,0) or (0,2) tensor as a matrix?

Also, I still don't quite understand what the contraction argument has to do with covariance and contravariance. Why does a matrix transform like a (1,1) tensor, i.e., A_j^{*i}=A_n^m\frac{\partial z^i}{\partial x^m}\frac{\partial x^n}{\partial z^j}? And why does a row vector necessarily transform in a contravariant way, and a column vector in a covariant way? This covariant and contravariant indexing is my biggest confusion with tensors so far; if I could get past this I think I could move on quickly.
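(A numerical sketch may help with exactly this question. Below, a randomly chosen invertible matrix J stands in for the Jacobian \frac{\partial z^i}{\partial x^m}; if the matrix and the two kinds of vectors each transform as in the formula above, the scalar they produce together is coordinate-independent, and the matrix keeps acting as the same linear operator.)

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(3, 3))   # the matrix, claimed to be type (1,1)
A = rng.normal(size=3)        # a column vector
w = rng.normal(size=3)        # a row vector

J = rng.normal(size=(3, 3))   # stand-in Jacobian dz/dx (invertible with
Jinv = np.linalg.inv(J)       # probability 1 for random entries)

# The (1,1) law quoted above: M*^i_j = (dz^i/dx^m) M^m_n (dx^n/dz^j)
M_new = J @ M @ Jinv
A_new = J @ A                 # the vector on the right picks up a J
w_new = w @ Jinv              # the vector on the left picks up a J^{-1}

# The scalar w M A comes out the same in both coordinate systems...
print(np.allclose(w_new @ M_new @ A_new, w @ M @ A))  # True
# ...and M* A* is just the transform of M A, so M still represents
# the same linear operator after the change of coordinates.
print(np.allclose(M_new @ A_new, J @ (M @ A)))        # True
```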
 
