GR Math: how does tensor linearity work?

SUMMARY

This discussion focuses on the linearity of tensors in the context of differential geometry and general relativity. The key equation presented is T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S), illustrating how tensors behave as multilinear operators. Participants clarify that a linear operator T satisfies T(sa + b) = sT(a) + T(b), leading to the conclusion that T(sa, tb) = stT(a, b) for scalars s and t. The conversation emphasizes the importance of understanding tensors as linear maps from vector spaces to dual vector spaces.

PREREQUISITES
  • Understanding of linear algebra concepts, particularly linear operators.
  • Familiarity with vector spaces and dual vector spaces.
  • Basic knowledge of differential geometry and its application in general relativity.
  • Comprehension of multilinear functions and their properties.
NEXT STEPS
  • Study the properties of multilinear maps and their applications in physics.
  • Learn about dual vector spaces and their significance in tensor analysis.
  • Explore the relationship between tensors and linear transformations in vector spaces.
  • Investigate the role of tensors in the formulation of general relativity and differential geometry.
USEFUL FOR

Students and professionals in physics, particularly those focusing on general relativity, differential geometry, and anyone seeking to deepen their understanding of tensor mathematics.

nonne
So I'm reading these notes about differential geometry as it relates to general relativity. It defines a tensor as being, among other things, a linear scalar function, and soon after it gives the following equation as an example of this property of linearity:

T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)

where T is the tensor function, P, Q, R, and S are vectors, and a, b, c, and d are scalar coefficients.

Now I can follow the above leap from left hand side to right hand side as far as:

T(aP + bQ, cR + dS) = T(aP, cR + dS) + T(bQ, cR + dS) = T(aP, cR) + T(aP, dS) + T(bQ, cR) + T(bQ, dS)

but I don't quite understand the reasoning behind how the coefficients get outside of the function brackets. Somehow I managed to get a bachelor's in physics without ever taking a single linear algebra course, so I'm a little bit stumped.

Can anyone here give me a hand with this? Any help would be greatly appreciated.
 
A linear operator T satisfies T(sa + b) = sT(a) + T(b) where s is a number. Tensors are multilinear operators (linear in each argument).
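
One way to see how a coefficient escapes the bracket (a small derivation, not spelled out in the post above, but it follows directly from the stated property): taking a = b = 0 in T(sa + b) = sT(a) + T(b) gives T(0) = sT(0) + T(0), so T(0) = 0. Then taking b = 0 gives

T(sa) = sT(a) + T(0) = sT(a).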
 
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?
 
nonne said:
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?

It is linear in each argument: T(sa, tb) = sT(a, tb) = stT(a, b).
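
Chaining those steps gives the full expansion from the original post, spelled out here for completeness using only additivity and the scalar rule in each slot:

T(aP + bQ, cR + dS)
= T(aP, cR + dS) + T(bQ, cR + dS)
= T(aP, cR) + T(aP, dS) + T(bQ, cR) + T(bQ, dS)
= acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)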
 
It may or may not be helpful to think of a (0,2) tensor as a regular linear map from a vector space V to the space of all linear maps from V to R. Such a statement sounds convoluted, but you've actually encountered this kind of thing before: given a function f : R^n → R^m, the full derivative of f is a mapping D : R^n → L(R^n, R^m), where L(R^n, R^m) denotes the space of all linear maps R^n → R^m (i.e., the space of all m × n matrices), such that D(p) is the Jacobian of f at p. However, the map D isn't necessarily linear (although the Jacobian at any given point certainly is).

In the same way, a (0,2) tensor can be thought of as a function T : V → L(V, k), where k is the base field. (The space L(V, k) is better known as the dual vector space, V*.) Specifically, if T is a (0,2) tensor, there are two ways to define the element T(v) of V*: either T(v) : w ↦ T(v, w), or T(v) : w ↦ T(w, v). The defining property of such a tensor is that, considered as a map V → V*, it is linear (regardless of which ordering you choose). Likewise, higher-rank tensors can be thought of as linear maps from some space V* ⊗ … ⊗ V* ⊗ V ⊗ … ⊗ V to V* or V. Using this property of tensors, it is relatively trivial to show that T(av, bw) = abT(v, w).
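
If a numerical sanity check helps, here is a short sketch (not from the thread; it assumes a concrete bilinear form T(v, w) = v^T M w on R^3, with M an arbitrary fixed matrix chosen purely for illustration) that verifies the identity from the original post:

import numpy as np

rng = np.random.default_rng(0)

# A concrete (0,2) tensor on R^3: T(v, w) = v^T M w for a fixed matrix M.
# M is an arbitrary illustrative choice, not anything from the thread.
M = rng.normal(size=(3, 3))

def T(v, w):
    return v @ M @ w

# Arbitrary vectors and scalar coefficients, named as in the original post.
P, Q, R, S = (rng.normal(size=3) for _ in range(4))
a, b, c, d = 2.0, -1.5, 0.5, 3.0

lhs = T(a*P + b*Q, c*R + d*S)
rhs = a*c*T(P, R) + a*d*T(P, S) + b*c*T(Q, R) + b*d*T(Q, S)

print(np.isclose(lhs, rhs))  # prints True: T is linear in each argument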
 
A tensor is a machine that you insert vectors into and obtain a real number from. It is that simple.
 
