
Basic tensor help

  1. Apr 7, 2010 #1
    I am a little confused about the notation of tensors. Could someone please explain the indices? For example, what is the difference between [itex]T_{mn}[/itex] and [itex]T_{nm}[/itex]? And what about [itex]T^{m}_{n}[/itex] and [itex]T^{n}_{m}[/itex]? It would really be helpful if someone could give me an "intro to basic tensor notation" answer. I have no resources other than this, so please help!
     
  4. Apr 8, 2010 #3
    A type (p,q) tensor--also called a valence (p,q) tensor--with respect to a given vector space, is a scalar-valued function of some number, q, of vectors of that space, and some number, p, of elements of the dual space of that vector space, linear in each of its arguments. The dual space is defined as another vector space over the same field (in the abstract algebraic sense) whose elements are scalar-valued linear functions of one vector of the primary vector space.

    Elements of the primary vector space are called vectors, contravariant vectors, tangent vectors or kets, while elements of the dual space are called dual vectors, covectors, covariant vectors, cotangent vectors, linear functionals, linear forms, one-forms, 1-forms or bras. Some of these names are more general, others restricted to certain contexts or applications. The tensors used in relativity are defined with respect to the tangent spaces of a spacetime manifold, whose elements are called tangent vectors; elements of their dual spaces may be called cotangent vectors.

    When a coordinate system is specified, a tensor can be represented by an ordered set of (scalar) components with respect to that coordinate system. For example, a type (0,2) tensor (a tensor with two vector arguments) is

    [tex]\textbf{T}\left ( ; \_ \_ \_ , \_ \_ \_ \right ) = \sum_{\mu = 0, \nu = 0}^{3} T_{\mu \nu} \, \tilde{\boldsymbol{\varepsilon}}^\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

    where [itex]\{ \tilde{\boldsymbol{\varepsilon}}^\mu \}[/itex] denote members of what's called a dual basis (with respect to a given basis [itex]\{ \vec{\textbf{e}}_\nu \}[/itex] for the primary vector space). The dual basis is the unique ordered set of dual vectors such that

    [tex] \tilde{\boldsymbol{\varepsilon}}^\mu (\vec{\textbf{e}}_\nu) = \delta^\mu_\nu[/tex]

    where [itex] \delta^\mu_\nu[/itex] is the Kronecker delta, equal to one when [itex]\mu = \nu[/itex] and zero otherwise. The Kronecker delta can be represented in matrix form as the identity matrix:

    [tex] \begin{bmatrix}
    1 & 0 & 0 & 0\\
    0 & 1 & 0 & 0\\
    0 & 0 & 1 & 0\\
    0 & 0 & 0 & 1
    \end{bmatrix}[/tex]
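    As a numerical sketch (not from the original post; the basis is made up for illustration), the dual-basis condition can be checked with NumPy: take the basis vectors as the columns of a matrix, and the dual basis covectors turn out to be the rows of its inverse.

```python
import numpy as np

# An arbitrary invertible basis for R^4: the basis vectors e_nu are the COLUMNS of E.
E = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])

# The dual basis covectors eps^mu are the ROWS of the inverse matrix;
# a covector acts on a vector via the row-times-column product.
Eps = np.linalg.inv(E)

# eps^mu(e_nu) is the (mu, nu) entry of Eps @ E: the Kronecker delta.
delta = Eps @ E
print(np.allclose(delta, np.eye(4)))  # True
```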

    By the Einstein summation convention (which just means that an index is summed over if it appears twice in the same term, once up, once down), the above expression is abbreviated to

    [tex]T_{\mu \nu} \; \tilde{\boldsymbol{\varepsilon}}^\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

    which is usually abbreviated further to [itex]T_{\mu \nu}[/itex], with the basis tensors left implicit. So the tensor is represented by its components. Similarly, [itex]T^{\mu\nu} = \textbf{T}\left ( \_ \_ \_ , \_ \_ \_ ; \right ) [/itex] is a function of two dual vectors, and [itex] T^\mu\,_\nu = \textbf{T}\left ( \_ \_ \_ ; \_ \_ \_ \right )[/itex] is a function of one dual vector and one vector. More fully,

    [tex]\textbf{T} = T^\mu\,_\nu \; \vec{\textbf{e}}_\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

    The action of this tensor on a dual vector [itex]\tilde{\boldsymbol{\omega}}[/itex] and a vector [itex]\vec{\textbf{v}}[/itex] is denoted

    [tex] \textbf{T}(\tilde{\boldsymbol{\omega}}, \vec{\textbf{v}}) = T^\mu\,_\nu \; \omega_\rho \; v^\sigma \; \vec{\textbf{e}}_\mu (\tilde{\boldsymbol{\varepsilon}}^\rho) \; \tilde{\boldsymbol{\varepsilon}}^\nu(\vec{\textbf{e}}_\sigma) = \sum_{\mu=0,\nu=0}^{3} T^\mu\,_\nu \; \omega_\mu \; v^\nu[/tex]

    the result being a single number. The notation [itex]T_{\mu \nu}[/itex] can either stand for a particular array of components, in this case a matrix:

    [tex]\begin{bmatrix}
    T_{00} & T_{01} & T_{02} & T_{03}\\
    T_{10} & T_{11} & T_{12} & T_{13}\\
    T_{20} & T_{21} & T_{22} & T_{23}\\
    T_{30} & T_{31} & T_{32} & T_{33}
    \end{bmatrix}[/tex]

    where each element of the matrix is a number. Or, since tensor equations that describe physical laws don't depend on which coordinate system you choose to represent them in, [itex]T_{\mu \nu}[/itex] can also denote the tensor itself, whatever set of components happens to be used. This use of the notation is called abstract index notation or slot-naming index notation. Blandford and Thorne liken a tensor to a machine with slots, like a toaster: there are as many slots as there are indices, so the indices show how many arguments the tensor has, that is, the number of vectors and dual vectors it takes.
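    To make the contraction concrete, here is an illustrative NumPy sketch (the component arrays are made up, not from the post): np.einsum does exactly the summation-convention bookkeeping, with the component matrix laid out as upper index = row, lower index = column.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))  # components T^mu_nu (mu = row, nu = column)
omega = rng.standard_normal(4)   # dual vector components omega_mu
v = rng.standard_normal(4)       # vector components v^nu

# Einstein summation: the repeated indices mu and nu are summed over,
# leaving no free index, i.e. a single number.
scalar = np.einsum('mn,m,n->', T, omega, v)

# The same contraction written out as explicit sums:
total = sum(T[m, n] * omega[m] * v[n] for m in range(4) for n in range(4))
print(np.isclose(scalar, total))  # True
```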

    The number of subscript (down) indices tells you how many vector arguments the tensor has. The number of superscript (up) indices tells you how many dual vector arguments it has. A dual vector is, by definition, a scalar-valued function of one vector and so has one down index. A vector can be viewed as a scalar-valued function of one dual vector, and so has one up index. The action of a dual vector on a vector (which can also be viewed as the action of a vector on a dual vector) is denoted thus:

    [tex] \tilde{\boldsymbol{\omega}}(\vec{\textbf{v}})= \vec{\textbf{v}}(\tilde{\boldsymbol{\omega}}) = \omega_\mu \, \tilde{\boldsymbol{\varepsilon}}^\mu(v^\nu \vec{\textbf{e}}_\nu ) = \omega_\mu \, v^\nu \, \tilde{\boldsymbol{\varepsilon}}^\mu( \vec{\textbf{e}}_\nu ) = \omega_\mu \; v^\mu = v^\mu \; \omega_\mu[/tex]
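    Numerically (an illustrative sketch with made-up components), this pairing is just the dot product of the two component arrays:

```python
import numpy as np

omega = np.array([1., -2., 0., 3.])  # covector components omega_mu
v = np.array([2., 1., 1., 0.])       # vector components v^mu

# omega(v) = v(omega) = omega_mu v^mu: the sum of products of components.
print(np.dot(omega, v))              # 0.0
print(np.einsum('m,m->', omega, v))  # same contraction via einsum
```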

    The notation [itex]T_{\mu\nu}=T_{\nu\mu}[/itex] says that the tensor T is symmetric. In matrix language, [itex]T=T^T[/itex]: T is equal to its own transpose, so its indices are interchangeable.
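    For a quick numerical check (a made-up component matrix, purely for illustration), the symmetry condition is exactly the matrix-transpose condition:

```python
import numpy as np

# A made-up symmetric component matrix: T_{mu nu} = T_{nu mu}.
S = np.array([[0., 1., 2., 3.],
              [1., 4., 5., 6.],
              [2., 5., 7., 8.],
              [3., 6., 8., 9.]])

# Symmetry of the components is the same as S being equal to its transpose.
print(np.allclose(S, S.T))  # True
```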

    Index rules:

    1. Every unique (non-repeated) index (called a free index) that appears on one side of an equation appears on the other side at the same height. These indices aren’t summed over.

    2. No more than two identical indices (called dummy indices, because it doesn’t matter what letter you use, or summation indices) appear in any term, and when a pair of identical indices appear in one term, they must be on different levels, one up and one down.

    3. When a tensor with two indices at different heights is represented as a matrix, the upper index is the row number, the lower index the column number. When they’re the same height, the left index is the row number, and the right index stands for the column.
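    Rule 3 can be illustrated with a small NumPy sketch (arrays chosen arbitrarily, not from the post): contracting the down index of a mixed tensor [itex]T^\mu\,_\nu[/itex] with a vector is, in this row/column convention, ordinary matrix-vector multiplication.

```python
import numpy as np

T = np.arange(16.).reshape(4, 4)  # T^mu_nu: upper index mu = row, lower index nu = column
v = np.array([1., 0., 2., 0.])    # vector components v^nu

# Contracting the down index nu with v^nu leaves one free up index mu,
# i.e. the result is a vector; in matrix language this is T @ v.
w = np.einsum('mn,n->m', T, v)
print(np.allclose(w, T @ v))  # True
```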

    In relativity, some authors follow a convention whereby Greek indices run from 0 to 3 (spacetime dimensions), while Roman indices run from 1 to 3 (purely spatial dimensions).

    Some more resources:

    http://www.pma.caltech.edu/Courses/ph136/yr2008/

    http://preposterousuniverse.com/grnotes/

    http://repository.tamu.edu/handle/1969.1/2502

    http://repository.tamu.edu/handle/1969.1/3609

    Also try the other external links at the foot of the Wikipedia article Tensor, although as a beginner I found the article itself quite opaque. I've found some helpful explanations in Bernard Schutz's Geometrical Methods of Mathematical Physics, though I've only dipped into it on Google Books.
     
    Last edited by a moderator: Apr 25, 2017