GR Math: how does tensor linearity work?

nonne
So I'm reading these notes about differential geometry as it relates to general relativity. It defines a tensor as being, among other things, a linear scalar function, and soon after it gives the following equation as an example of this property of linearity:

T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)

where T is the tensor function, P, Q, R, and S are vectors, and a, b, c, and d are scalar coefficients.

Now I can follow the above leap from the left-hand side to the right-hand side as far as:

T(aP + bQ, cR + dS) = T(aP, cR + dS) + T(bQ, cR + dS) = T(aP, cR) + T(aP, dS) + T(bQ, cR) + T(bQ, dS)

but I don't quite understand the reasoning behind how the coefficients get outside of the function brackets. Somehow I managed to get a bachelor's in physics without ever taking a single linear algebra course, so I'm a little bit stumped.

Can anyone here give me a hand with this? Any help would be greatly appreciated.
 
A linear operator T satisfies T(sa + b) = sT(a) + T(b) where s is a number. Tensors are multilinear operators (linear in each argument).
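As a quick numeric sanity check (not from the notes themselves, and with all names hypothetical): any matrix M defines a bilinear form T(v, w) = vᵀ M w, which is exactly a (0,2) tensor in components, and it satisfies this linearity in each slot. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))  # any 3x3 matrix defines a bilinear form

def T(v, w):
    """Bilinear form T(v, w) = v^T M w -- a (0,2) tensor in components."""
    return v @ M @ w

a, b = rng.standard_normal(2)
P = rng.standard_normal(3)
Q = rng.standard_normal(3)
R = rng.standard_normal(3)

# Linearity in the first argument: T(aP + bQ, R) = a T(P, R) + b T(Q, R)
lhs = T(a * P + b * Q, R)
rhs = a * T(P, R) + b * T(Q, R)
assert np.isclose(lhs, rhs)
```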
 
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?
 
nonne said:
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?

It is linear in each argument: T(sa, tb) = sT(a, tb) = stT(a, b).
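Putting that together with the expansion from the first post, the full chain from the left-hand side to the right-hand side is:

```latex
\begin{align*}
T(aP + bQ,\, cR + dS)
  &= T(aP,\, cR) + T(aP,\, dS) + T(bQ,\, cR) + T(bQ,\, dS) \\
  &= a\,T(P,\, cR) + a\,T(P,\, dS) + b\,T(Q,\, cR) + b\,T(Q,\, dS) \\
  &= ac\,T(P, R) + ad\,T(P, S) + bc\,T(Q, R) + bd\,T(Q, S)
\end{align*}
```

Each step uses linearity in one argument at a time: first in the second slot (already done in the original post), then in the first slot, then again in the second slot to pull out c and d.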
 
It may or may not be helpful to think of a (0,2) tensor as a regular linear map from a vector space V to the space of all linear maps from V to \mathbb{R}. Such a statement sounds convoluted, but you've actually encountered this kind of thing before: Given a function f : \mathbb{R}^n \to \mathbb{R}^m, the full derivative of f is a mapping D : \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R}^m), where L(\mathbb{R}^n,\mathbb{R}^m) denotes the space of all linear maps \mathbb{R}^n \to \mathbb{R}^m (i.e., the space of all m \times n matrices), such that D(p) is the Jacobian of f at p. However, the map D isn't necessarily linear (although the Jacobian at any given point certainly is).

In the same way, a (0,2) tensor can be thought of as a function T: V \to L(V,k), where k is the base field. (The space L(V,k) is better known as the dual vector space, V^{*}.) Specifically, if T is a (0,2) tensor, there are two ways to define the map T : V \to V^{*}: Either T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{v},\mathbf{w}), or T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{w},\mathbf{v}). The defining property of such a tensor is that, considered as a map V \to V^{*}, it is linear (regardless of which ordering you choose). Likewise, higher-rank tensors can be thought of as linear maps from some space V^{*} \otimes \ldots \otimes V^{*} \otimes V \otimes \ldots \otimes V to V^{*} or V. Using this property of tensors, it is relatively trivial to show that T(a\mathbf{v},b\mathbf{w}) = abT(\mathbf{v}, \mathbf{w}).
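This "curried" view can be sketched numerically (a hypothetical example, again using a matrix-based bilinear form): T_curried(v) returns the covector w ↦ vᵀ M w, and the assignment v ↦ T_curried(v) is itself linear.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))  # components of a (0,2) tensor

def T(v, w):
    """The bilinear form T(v, w) = v^T M w."""
    return v @ M @ w

def T_curried(v):
    """The map V -> V*: send v to the covector w |-> T(v, w)."""
    return lambda w: T(v, w)

s, t = 2.5, -1.5
v1 = rng.standard_normal(3)
v2 = rng.standard_normal(3)
w = rng.standard_normal(3)

# v |-> T_curried(v) is linear as a map into V*:
assert np.isclose(T_curried(s * v1 + v2)(w),
                  s * T_curried(v1)(w) + T_curried(v2)(w))

# and each covector T_curried(v) is linear in w, which together give
# T(sv, tw) = st T(v, w):
assert np.isclose(T(s * v1, t * w), s * t * T(v1, w))
```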
 
A tensor is a machine: you insert vectors and obtain a real number. It is that simple.
 