
GR Math: how does tensor linearity work?

  1. Mar 23, 2009 #1
    So I'm reading these notes about differential geometry as it relates to general relativity. It defines a tensor as being, among other things, a linear scalar function, and soon after it gives the following equation as an example of this property of linearity:

    T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)

    where T is the tensor function, P, Q, R, and S are vectors, and a, b, c, and d are scalar coefficients.

    Now I can follow the above leap from left hand side to right hand side as far as:

    T(aP + bQ, cR + dS) = T(aP, cR + dS) + T(bQ, cR + dS) = T(aP, cR) +T(aP, dS) + T(bQ, cR) + T(bQ, dS)

    but I don't quite understand the reasoning behind how the coefficients get outside of the function brackets. Somehow I managed to get a bachelor's in physics without ever taking a single linear algebra course, so I'm a little stumped.

    Can anyone here give me a hand with this? Any help would be greatly appreciated.
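    The expansion being asked about can be checked numerically. As a minimal sketch (the matrix representation and all values below are made-up examples, not from the thread), represent a (0,2) tensor on R^3 by a matrix M, so that T(P, R) = Pᵀ M R:

```python
import numpy as np

# Represent a (0,2) tensor on R^3 by a matrix M, so T(P, R) = P^T M R.
# M and the vectors below are arbitrary example values.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))

def T(x, y):
    return x @ M @ y

P, Q, R, S = rng.normal(size=(4, 3))
a, b, c, d = 2.0, -1.5, 0.5, 3.0

lhs = T(a*P + b*Q, c*R + d*S)
rhs = a*c*T(P, R) + a*d*T(P, S) + b*c*T(Q, R) + b*d*T(Q, S)
print(np.isclose(lhs, rhs))  # True: the two sides agree by bilinearity
```

    Any bilinear T on a finite-dimensional space can be written this way in a basis, which is why the single matrix M suffices for the check.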
     
  3. Mar 23, 2009 #2
    A linear operator T satisfies T(sa + b) = sT(a) + T(b) where s is a number. Tensors are multilinear operators (linear in each argument).
     
  4. Mar 23, 2009 #3
    Yeah, I understand that bit, but does that also mean that:

    T(sa, tb) = stT(a, b)

    where s and t are scalars, for a linear operator T with multiple arguments?
     
  5. Mar 24, 2009 #4
    It is linear in each argument: T(sa, tb) = sT(a, tb) = stT(a, b).
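    The two-step pull-out above (first slot, then second slot) can be verified directly. This is a sketch using the same matrix-as-tensor representation as before; M and the vectors are illustrative values, not from the thread:

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0]])  # example (0,2) tensor on R^2

def T(x, y):
    return x @ M @ y

a = np.array([1.0, -2.0])
b = np.array([0.5, 3.0])
s, t = 4.0, -0.25

# Pull the scalars out one argument at a time:
assert np.isclose(T(s*a, t*b), s*T(a, t*b))  # linear in the first slot
assert np.isclose(s*T(a, t*b), s*t*T(a, b))  # then linear in the second slot
print(np.isclose(T(s*a, t*b), s*t*T(a, b)))  # True
```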
     
  6. Apr 1, 2009 #5
    It may or may not be helpful to think of a (0,2) tensor as a regular linear map from a vector space [tex]V[/tex] to the space of all linear maps from [tex]V[/tex] to [tex]\mathbb{R}[/tex]. Such a statement sounds convoluted, but you've actually encountered this kind of thing before: Given a function [tex]f : \mathbb{R}^n \to \mathbb{R}^m[/tex], the full derivative of [tex]f[/tex] is a mapping [tex]D : \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R}^m)[/tex], where [tex]L(\mathbb{R}^n,\mathbb{R}^m)[/tex] denotes the space of all linear maps [tex]\mathbb{R}^n \to \mathbb{R}^m[/tex] (i.e., the space of all [tex]m \times n[/tex] matrices), such that [tex]D(p)[/tex] is the Jacobian of [tex]f[/tex] at [tex]p[/tex]. However, the map [tex]D[/tex] isn't necessarily linear (although the Jacobian at any given point certainly is).

    In the same way, a (0,2) tensor can be thought of as a function [tex]T: V \to L(V,k)[/tex], where [tex]k[/tex] is the base field. (The space [tex]L(V,k)[/tex] is better known as the dual vector space, [tex]V^{*}[/tex].) Specifically, if [tex]T[/tex] is a (0,2) tensor, there are two ways to define the map [tex]T(\mathbf{v}) : V \to V^{*}[/tex]: Either [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{v},\mathbf{w})[/tex], or [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{w},\mathbf{v})[/tex]. The defining property of such a tensor is that, considered as a map [tex]V \to V^{*}[/tex], it is linear (regardless of which ordering you choose). Likewise, higher-rank tensors can be thought of as linear maps from some space [tex] V^{*} \otimes \ldots \otimes V^{*} \otimes V \otimes \ldots \otimes V [/tex] to [tex]V^{*}[/tex] or [tex]V[/tex]. Using this property of tensors, it is relatively trivial to show that [tex]T(a\mathbf{v},b\mathbf{w}) = abT(\mathbf{v}, \mathbf{w})[/tex].
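    The "curried" view described above can be sketched numerically: fixing the first argument of a (0,2) tensor yields a covector, and the map from vectors to covectors is itself linear. The matrix M and all vectors here are made-up example values:

```python
import numpy as np

# Curried view of a (0,2) tensor T(v, w) = v^T M w:
# fixing v gives the covector T(v, .) in V*.
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))

def T(v, w):
    return v @ M @ w

def curry(v):
    # The linear functional w -> T(v, w), represented as a row vector
    return v @ M

v, w = rng.normal(size=(2, 3))
u = rng.normal(size=3)

# Applying the covector curry(v) to w agrees with T(v, w):
print(np.isclose(curry(v) @ w, T(v, w)))           # True
# And v -> curry(v) is itself a linear map V -> V*:
print(np.allclose(curry(2*v + u), 2*curry(v) + curry(u)))  # True
```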
     
  8. Apr 1, 2009 #7
    A tensor is a machine that you insert vectors into and obtain a real number. It is that simple.
     