GR Math: how does tensor linearity work?


Discussion Overview

The discussion revolves around the concept of tensor linearity in the context of differential geometry and general relativity. Participants explore the properties of tensors as linear operators and their behavior with respect to scalar coefficients and multiple arguments.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant presents an equation illustrating the linearity of a tensor function T with respect to vectors and scalar coefficients.
  • Another participant defines a linear operator and states that tensors are multilinear operators, emphasizing linearity in each argument.
  • A question is raised about whether the property T(sa, tb) = stT(a, b) holds for a linear operator T with multiple arguments.
  • Further clarification is provided that T(sa, tb) can be expressed as sT(a, tb) = stT(a, b), reinforcing the linearity in each argument.
  • One participant suggests conceptualizing a (0,2) tensor as a linear map from a vector space to the space of linear maps, relating it to the full derivative of a function.
  • Another participant reiterates the idea of tensors as linear maps and discusses the defining property of tensors being linear regardless of the ordering of arguments.
  • A simplified perspective is offered, describing tensors as machines that take vectors and produce real numbers.

Areas of Agreement / Disagreement

Participants converge on the key point that a tensor is linear in each argument separately, which resolves the original question. The later posts offer complementary ways of viewing the same property (currying into the dual space, the "machine" picture) rather than substantive disagreement.

Contextual Notes

Some statements rely on specific definitions and assumptions about linearity and tensor properties that may not be universally agreed upon. The discussion includes complex relationships between tensors and their representations that are not fully resolved.

nonne
So I'm reading these notes about differential geometry as it relates to general relativity. It defines a tensor as being, among other things, a linear scalar function, and soon after it gives the following equation as an example of this property of linearity:

T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)

where T is the tensor function, P, Q, R, and S are vectors, and a, b, c, and d are scalar coefficients.

Now I can follow the above leap from left hand side to right hand side as far as:

T(aP + bQ, cR + dS) = T(aP, cR + dS) + T(bQ, cR + dS) = T(aP, cR) + T(aP, dS) + T(bQ, cR) + T(bQ, dS)

but I don't quite understand the reasoning behind how the coefficients get outside of the function brackets. Somehow I managed to get a bachelor's in physics without ever taking a single linear algebra course, so I'm a little bit stumped.

Can anyone here give me a hand with this? Any help would be greatly appreciated.
 
A linear operator T satisfies T(sa + b) = sT(a) + T(b), where s is a number and a and b are vectors. Tensors are multilinear operators, i.e. linear in each argument separately.
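A quick way to see this concretely is to pick an explicit bilinear map and test the rule numerically. The matrix M and the function T below are illustrative choices, not something from the thread; any fixed matrix M defines a (0,2) tensor via T(x, y) = xᵀMy:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))  # any fixed matrix defines a bilinear map

def T(x, y):
    """A concrete bilinear map: T(x, y) = x^T M y."""
    return x @ M @ y

a, b, c = rng.standard_normal((3, 3))  # three random vectors
s = 2.5

# Linear in the first argument: T(s*a + b, c) = s*T(a, c) + T(b, c)
assert np.isclose(T(s*a + b, c), s*T(a, c) + T(b, c))
# ...and separately in the second: T(a, s*b + c) = s*T(a, b) + T(a, c)
assert np.isclose(T(a, s*b + c), s*T(a, b) + T(a, c))
```

The assertions pass for any choice of M and vectors, up to floating-point tolerance, because each slot of T is linear on its own.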
 
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?
 
nonne said:
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?

It is linear in each argument: T(sa, tb) = sT(a, tb) = stT(a, b).
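Numerically, the same concrete stand-in (an arbitrary matrix M defining T(x, y) = xᵀMy; a hypothetical example, not from the notes) confirms both the two-scalar rule and the full four-term expansion from the original post:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

def T(x, y):
    # an arbitrary bilinear map standing in for the tensor
    return x @ M @ y

P, Q, R, S = rng.standard_normal((4, 3))  # four random vectors
a, b, c, d = 2.0, -1.5, 0.5, 3.0

# Pull the scalars out one argument at a time:
# T(aP, cR) = a*T(P, cR) = a*c*T(P, R)
assert np.isclose(T(a*P, c*R), a*c*T(P, R))

# The full four-term expansion from the original post:
lhs = T(a*P + b*Q, c*R + d*S)
rhs = a*c*T(P, R) + a*d*T(P, S) + b*c*T(Q, R) + b*d*T(Q, S)
assert np.isclose(lhs, rhs)
```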
 
It may or may not be helpful to think of a (0,2) tensor as a regular linear map from a vector space [tex]V[/tex] to the space of all linear maps from [tex]V[/tex] to [tex]\mathbb{R}[/tex]. Such a statement sounds convoluted, but you've actually encountered this kind of thing before: Given a function [tex]f : \mathbb{R}^n \to \mathbb{R}^m[/tex], the full derivative of [tex]f[/tex] is a mapping [tex]D : \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R}^m)[/tex], where [tex]L(\mathbb{R}^n,\mathbb{R}^m)[/tex] denotes the space of all linear maps [tex]\mathbb{R}^n \to \mathbb{R}^m[/tex] (i.e., the space of all [tex]m \times n[/tex] matrices), such that [tex]D(p)[/tex] is the Jacobian of [tex]f[/tex] at [tex]p[/tex]. However, the map [tex]D[/tex] isn't necessarily linear (although the Jacobian at any given point certainly is).

In the same way, a (0,2) tensor can be thought of as a function [tex]T: V \to L(V,k)[/tex], where [tex]k[/tex] is the base field. (The space [tex]L(V,k)[/tex] is better known as the dual vector space, [tex]V^{*}[/tex].) Specifically, if [tex]T[/tex] is a (0,2) tensor, there are two ways to define the functional [tex]T(\mathbf{v}) \in V^{*}[/tex]: Either [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{v},\mathbf{w})[/tex], or [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{w},\mathbf{v})[/tex]. The defining property of such a tensor is that, considered as a map [tex]V \to V^{*}[/tex], it is linear (regardless of which ordering you choose). Likewise, higher-rank tensors can be thought of as linear maps from some space [tex]V^{*} \otimes \ldots \otimes V^{*} \otimes V \otimes \ldots \otimes V[/tex] to [tex]V^{*}[/tex] or [tex]V[/tex]. Using this property of tensors, it is relatively trivial to show that [tex]T(a\mathbf{v},b\mathbf{w}) = abT(\mathbf{v}, \mathbf{w})[/tex].
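The "map into the dual space" picture above can be sketched in code: fixing the first argument of a bilinear T yields a linear functional on V, and the assignment v ↦ T(v, ·) is itself linear. As before, the matrix M playing the role of the tensor is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))  # stands in for a (0,2) tensor

def curry(v):
    """Fix the first slot: return the linear functional w -> T(v, w)."""
    return lambda w: v @ M @ w

v, w, u = rng.standard_normal((3, 3))  # three random vectors
s = 1.7

# T(v) is a linear functional on V:
f = curry(v)
assert np.isclose(f(s*w + u), s*f(w) + f(u))

# and v -> T(v, .) is itself linear as a map V -> V*:
assert np.isclose(curry(s*v + u)(w), s*curry(v)(w) + curry(u)(w))
```

Applying `curry` twice with scaled inputs reproduces the two-scalar rule T(av, bw) = abT(v, w) discussed earlier in the thread.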
 
 
A tensor is a machine that you insert vectors into and that produces a real number. It is that simple.
 
 
