
Can any matrix be considered a tensor?

  1. Jun 9, 2010 #1
    From the list of very fundamental things I am confused about:

    Let's say I have two bases and a transformation matrix T that allows me to convert between them, like so:

    [tex]A'_i = T_{ik} A_k[/tex], where A and A' express the same vector in the two bases.

    If I have a second rank tensor, it will transform in a similar way:

    [tex]C'_{ij} = T_{ik} T_{jl} C_{kl}[/tex], or in the matrix notation:

    [tex]C' = T C T^T[/tex]
    On the other hand, if I consider C as a matrix that acts on the vector A, [tex]C A = B[/tex], I can write
    [tex]B' = T B = T C A = T C T^{-1} T A = (T C T^{-1}) A'[/tex].

    This all is very basic and familiar, but I'm having trouble understanding what exactly this implies about the connection between matrices and tensors. It seems that the formulas are saying the following: "you can always convert matrices between bases using similarity transformation. However, you cannot call your matrix a tensor unless your bases are connected by an orthogonal transformation, at which point we would have [tex]B' = ( T C T^T) A' = C' A'[/tex]". Is this a correct statement?.. But if so, this means that whether or not something is a tensor is determined not by the intrinsic properties of that object, but by the specifics of how one picks the bases and converts between them, which seems a little odd.

    Can some linear algebra guru shed some light on what's going on here?..
     
  3. Jun 9, 2010 #2
    I don't know the answer, but I think the difference is larger than you suggest: a tensor is NOT a matrix, but it can be represented by a matrix, depending on how you look at it.

    For example, the electromagnetic field at a fixed point is a tensor: this tensor can be expressed as a 4x4 matrix, but the elements of this matrix depend on your velocity relative to that fixed point. In other words, the tensor can have multiple matrix representations depending on how you view it. But the underlying EM field tensor does not change.
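
    For concreteness (a sketch in one common sign convention, with c = 1; texts differ on the signs), the contravariant components are

    [tex]F^{\mu\nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix}.[/tex]

    A Lorentz boost reshuffles the E and B entries of this matrix, but it does so precisely by the rank-2 transformation law, so all observers are describing the same underlying tensor.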
     
  4. Jun 9, 2010 #3

    phyzguy (Science Advisor)

    A tensor is a physical object, which has a physical reality which transcends the basis representation you choose. By contrast, a matrix can be any collection of numbers, and the different components of the matrix need not be related at all. Think of a rank 1 tensor, i.e. a vector in 3D space. It has certain physical attributes, such as its length and direction, which do not change regardless of how you look at it or what coordinate system you choose to represent it. In contrast, any old collection of three numbers or three functions can be a (3x1) matrix, and this need not have any physical reality at all.
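
    As a quick numerical illustration of the "length does not change" point (a minimal numpy sketch; the vector and rotation angle are arbitrary):

    [code]
    import numpy as np

    # Components of a vector in some basis.
    v = np.array([3.0, 4.0, 12.0])

    # An orthogonal change of basis: rotation by 0.7 rad about the z-axis.
    t = 0.7
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])

    v_new = R @ v  # components of the same vector in the rotated basis

    # Both print 13.0: the length belongs to the vector, not the basis.
    print(np.linalg.norm(v), np.linalg.norm(v_new))
    [/code]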
     
  5. Jun 9, 2010 #4

    D H (Staff Emeritus, Science Advisor)

    A tensor is something that transforms according to very specific rules under a change of coordinates. Something that does not transform according to those rules is not a tensor. It is important to distinguish what a tensor represents from how a tensor is represented. What a tensor represents transcends its representation. The inertia tensor of some object is the same tensor in any set of axes. While the representation of that tensor changes with a change of coordinates, the tensor is still the same tensor.

    Not all n-vectors are tensors. To be a tensor that vector has to transform as a rank 1 covariant or contravariant tensor. Similarly, not all n×n matrices are tensors. To be a tensor, a matrix has to transform as a rank 2 tensor (there are three types: (2,0) tensors, (1,1) tensors, and (0,2) tensors).
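
    To spell those rules out for a linear change of coordinates [itex]x'^i = T^i{}_k x^k[/itex] (a sketch, placing the indices of the first post's T carefully; texts differ on conventions):

    [tex]C'^{ij} = T^i{}_k T^j{}_l \, C^{kl}, \qquad C'^i{}_j = T^i{}_k (T^{-1})^l{}_j \, C^k{}_l, \qquad C'_{ij} = (T^{-1})^k{}_i (T^{-1})^l{}_j \, C_{kl}.[/tex]

    In matrix notation the middle, (1,1), case reads [itex]C' = T C T^{-1}[/itex]: exactly a similarity transformation, which bears directly on the question in the first post.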
     
  6. Jun 9, 2010 #5
    My problem is that I've heard all these statements before -- but can anyone point me to a concrete, preferably numerical, example of when something is or is not a tensor?..
     
  7. Jun 12, 2010 #6
    I figured it out. The problem was sloppy conversion between index and matrix notation (to this day I haven't seen a good reference that explains the best way to perform it). In any case, if you do this carefully and introduce distinction between upper and lower indices, defining contraction accordingly, it will be clear that what I thought was a transpose is actually a matrix inverse. Thus, "tensor" transformation reduces to the similarity transformation of a matrix.

    Another moral of the story is that the condition under which you can ignore distinctions between upper and lower indices is the case of orthonormal bases -- in which case transpose and inverse of transformation are one and the same, so everything is self-consistent in the formulas as written in the first post.
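
    Here is a numerical check of both morals (a minimal numpy sketch with arbitrary matrices):

    [code]
    import numpy as np
    rng = np.random.default_rng(0)

    C = rng.standard_normal((3, 3))  # components of a (1,1) tensor: a linear map
    A = rng.standard_normal(3)       # components of a vector
    B = C @ A                        # B = C A in the original basis

    T = rng.standard_normal((3, 3))  # a generic, non-orthogonal change of basis

    A_new, B_new = T @ A, T @ B
    C_sim = T @ C @ np.linalg.inv(T)  # similarity transform: the (1,1) rule
    C_orth = T @ C @ T.T              # the "transpose" rule from the first post

    print(np.allclose(C_sim @ A_new, B_new))   # True: C' A' = B' for any invertible T
    print(np.allclose(C_orth @ A_new, B_new))  # False here; True only if T is orthogonal
    [/code]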
     
  8. Jun 12, 2010 #7
    The most general mathematical definition of a tensor I've found is this: a type-(p,q) tensor with respect to a vector space, V, is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V). The dual space is a second vector space over the same field as V whose vectors are the set of all scalar-valued linear functions of vectors in the underlying set of V. So, you could have matrices which are tensors with respect to some vector space, provided you choose an appropriate vector space and an appropriate class of matrices.
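
    A standard illustration of this definition: the Euclidean dot product is a type-(0,2) tensor with respect to [itex]V = \mathbb{R}^3[/itex], since

    [tex]g(u, v) = u \cdot v[/tex]

    takes two vectors, returns a scalar, and is linear in each argument separately; its components [itex]g_{ij} = g(\hat{e}_i, \hat{e}_j)[/itex] in an orthonormal basis happen to form the identity matrix.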

    But when people talk in a physics context about what is or isn't a tensor, they're usually discussing whether an object is or isn't a tensor with respect to some particular (often implicit) vector space or kind of vector space. In relativity, it's the tangent spaces defined at each point of a spacetime manifold. Tensors with respect to a tangent space are called simply "tensors", and contrasted with the various coordinate representations that each of these tensors can be given, or with other tensor-like functions, such as pseudo-tensors, whose value depend on coordinates in some way, and therefore aren't simply tensors with respect to the tangent spaces.

    And, of course, the word tensor is also often used loosely in a physics context as a synonym for tensor field.
     
  9. Jun 12, 2010 #8
    Well the short answer is NO.

    Tensors of rank 2 can be represented by matrices, but only by square matrices.

    Of course not all matrices are square.

    In physics, a reality can be given to the entities that we call tensors, defined in mathematics by:

    A tensor is a multilinear form, invariant with respect to a specified group of coordinate transformations in n-space.
    This definition encapsulates what others have already said here.

    The important point from this definition is the word linear. Tensor algebra is a branch of linear algebra.
    The theorems etc are predicated upon linear mathematics and do not hold in the non-linear arena.

    I don't know in what context you are studying tensors, but the Schaum series book by Mase: Continuum Mechanics has an accessible introduction to tensors and related objects and their usefulness.
     
    Last edited: Jun 12, 2010
  10. Jun 13, 2010 #9
    Example: the gamma matrices in the Dirac equation. The four matrices together, labeled by an index [itex]\mu[/itex] ranging from 0 to 3, transform as a four-vector. But the individual four-by-four gamma matrices do not transform; they are therefore not tensors.
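
    Explicitly, in the usual convention, writing [itex]S(\Lambda)[/itex] for the spinor representation of a Lorentz transformation [itex]\Lambda[/itex], the defining property is

    [tex]S(\Lambda)^{-1} \gamma^\mu S(\Lambda) = \Lambda^\mu{}_\nu \gamma^\nu,[/tex]

    so the index [itex]\mu[/itex] behaves like a four-vector index, while each fixed [itex]\gamma^\mu[/itex] is just a constant matrix.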
     
  11. Jun 13, 2010 #10
    Inasmuch as every square matrix represents a linear transformation from a space to itself, every square matrix represents a (1,1) tensor, since linear transformations are naturally isomorphic to (1,1) tensors.
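
    Concretely, the isomorphism sends a linear map [itex]L : V \to V[/itex] to the (1,1) tensor

    [tex]T_L(\omega, v) = \omega(L v),[/tex]

    which is linear in the dual vector [itex]\omega[/itex] and in the vector [itex]v[/itex] separately; in any basis, the components [itex]T^i{}_j[/itex] are exactly the matrix entries of L.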
     
  12. Jun 13, 2010 #11
    It is customary to notate vectors as variables with upper indices and dual vectors as variables with lower indices. In any case, to obtain a dual vector from a tensor acting on a dual vector:

    [tex]A'_i = T_{i}{^k} A_k \ .[/tex]

    Now, a square matrix acts from the left on a column vector and spits out another column vector. We take column vectors to be vectors and row vectors to be dual vectors. The tensor equivalent of this matrix operation is:

    [tex]B^i = T^{i}{_k} A^k \ .[/tex]

    Edit: I see eok20 has already pointed this out, though it helps to see it written out.

    But this doesn't seem to fully define a tensor... I hadn't really thought about it before. A tensor is also a vector in its own right: tensors of a given type obey the axioms of a vector space over a field.
     
    Last edited: Jun 13, 2010
  13. Jun 16, 2010 #12
    Yes, I agree with you. The notion of a tensor is broader than that of a matrix. And in tensor calculations, matrix notation is often more convenient than working with individual components.
     
  14. Jun 16, 2010 #13
    Then there are rectangular tensors whose indices do not range over the same values.
     
  15. Jun 18, 2010 #14
    It would be instructive to display some examples, since the properties of square matrices are just those we require for tensors, viz. the existence of a determinant, an inverse, and so on.
     
  16. Jun 19, 2010 #15
    Thanks for pointing that out, Phrak. Does this modified version work?

    At first, I wrote in an exception for constant functions, defined as type-(0,0) tensors, i.e. scalars, but then it occurred to me that these can be considered 1-dimensional vectors if we take the field's addition to serve as the vector addition as well.
     
  17. Jun 19, 2010 #16
    Rasalhague,

    You can check for yourself that a type-(p,q) tensor is an element of a vector space against the set of axioms of a vector space: http://mathworld.wolfram.com/VectorSpace.html

    But what are the bases of this vector space? It's useful to consider an example. Take the tensor A with rows [A1, A2] and [A3, A4] on a two-dimensional space, R2.

    We can define a basis for such tensors in terms of the basis imposed on the underlying space.

    [tex]\hat{\theta}_1 = \hat{e}_1 \otimes \hat{e}_1[/tex]
    [tex]\hat{\theta}_2 = \hat{e}_1 \otimes \hat{e}_2[/tex]
    [tex]\hat{\theta}_3 = \hat{e}_2 \otimes \hat{e}_1[/tex]
    [tex]\hat{\theta}_4 = \hat{e}_2 \otimes \hat{e}_2[/tex]

    In this example A is a four-dimensional vector in the [itex]\hat{\theta}_i[/itex] basis.

    [tex]A = A^{i}\hat{\theta}_i[/tex]
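
    As a numerical sanity check of this "2×2 matrix as 4-component vector" picture (a numpy sketch; row-major flattening reads off the components in the [itex]\hat{\theta}_i[/itex] order defined above):

    [code]
    import numpy as np
    rng = np.random.default_rng(1)

    A = rng.standard_normal((2, 2))  # a rank-2 tensor on a 2d space, stored as a matrix
    T = rng.standard_normal((2, 2))  # an arbitrary change of basis

    # The rank-2 rule applied to the matrix...
    A_matrix_rule = T @ A @ T.T

    # ...agrees with the rank-1 (vector) rule applied to the four
    # theta-components, using the 4x4 transformation kron(T, T):
    A_vector_rule = np.kron(T, T) @ A.reshape(4)

    print(np.allclose(A_matrix_rule.reshape(4), A_vector_rule))  # True
    [/code]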

    Bring in the mathematicians who can, no doubt, criticize the sloppy language, but this is the general idea.

    There's an interesting element to all this. If A is a type (1,0) tensor in R4, how is the dual basis represented in the original R2 space?
     
    Last edited by a moderator: May 4, 2017
  18. Jun 19, 2010 #17
    I suppose it makes more sense to express it the other way around, since "linear" doesn't mean anything unless we have some way to add tensors and multiply them by scalars:

    A type-(p,q) tensor with respect to an n-dimensional vector space, V, is an element of the underlying set of an [itex]n^{p+q}[/itex]-dimensional vector space, T, over the same field as V, such that the tensor is a scalar-valued function, linear in each of its arguments, of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V).

    Or if it's preferable not to define them in advance as vectors, but to let that fact emerge:

    A type-(p,q) tensor with respect to an n-dimensional vector space, V, is a scalar-valued function of q vectors (i.e. elements of the underlying set of V) and p dual vectors (i.e. elements of the underlying set of V*, the dual space of V), on which we define binary operations

    +(A,B) = C, where A, B and C are all type-(p,q) tensors;
    *(s,X) = Y, where X, Y are type-(p,q) tensors, and s a scalar with respect to V,

    such that the tensor is linear in each of its arguments.
     
  19. Jun 19, 2010 #18
    I don't understand this part. Maybe I'm missing something obvious... If A is a type-(1,0) tensor in R4, and we select a basis for R4, then at most two dual vectors of the dual basis can be represented in R2. With respect to R4, maybe R2 would be where type-(-2,0) tensors live, if there is such a thing, but I haven't seen any definition of tensors where p or q are negative, and I don't know what it would mean. (I should have specified in my definition that p and q are nonnegative integers.)
     
  20. Jun 20, 2010 #19
    Your descriptive language is better than mine.

    What I was asking was how to go from the dual space of the theta basis back to the dual of the original space with basis e.

    So, starting again with this

    [tex]\hat{\theta}_1 = \hat{e}_1 \otimes \hat{e}_1[/tex]
    [tex]\hat{\theta}_2 = \hat{e}_1 \otimes \hat{e}_2[/tex]
    [tex]\hat{\theta}_3 = \hat{e}_2 \otimes \hat{e}_1[/tex]
    [tex]\hat{\theta}_4 = \hat{e}_2 \otimes \hat{e}_2[/tex]

    then the dual vector space has basis [itex]\hat{\omega}^i[/itex], where

    [tex]\hat{\omega}^i \hat{\theta}_j = \delta^i_j[/tex]

    There is also a dual space to the vector space in e, with basis [itex]\hat{a}^\mu[/itex], where

    [tex]\hat{a}^\mu \hat{e}_\nu = \delta^\mu_\nu[/tex]

    I don't mean to imply that I have worked this out; I was suddenly curious as to the relationship between a and omega. Of course, the original tensor could be of mixed type, so it could get more involved.
     
  21. Jun 20, 2010 #20
    I don't know if this counts as an answer or is just a restatement of the problem, but I think it would depend on which of the 24 permutations of the numbers 1,2,3 and 4 we arbitrarily choose to label the thetas by in the equations that define them, wouldn't it? I can't see how there could be a simple, general way of relating the a's and the omegas that doesn't require us to already know how the thetas are defined in terms of the original basis, unless there was some universal convention for labelling them. But if we know the thetas in terms of the e's, we can get the a's simply and directly from the e's.
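
    Spelling it out for the labelling chosen above: since [itex](\hat{a}^\mu \otimes \hat{a}^\nu)(\hat{e}_\rho \otimes \hat{e}_\sigma) = \delta^\mu_\rho \delta^\nu_\sigma[/itex], the dual basis of a tensor-product basis is the tensor product of the dual bases, so

    [tex]\hat{\omega}^1 = \hat{a}^1 \otimes \hat{a}^1, \quad \hat{\omega}^2 = \hat{a}^1 \otimes \hat{a}^2, \quad \hat{\omega}^3 = \hat{a}^2 \otimes \hat{a}^1, \quad \hat{\omega}^4 = \hat{a}^2 \otimes \hat{a}^2.[/tex]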
     