GR Math: how does tensor linearity work?

In summary: tensors are defined as multilinear scalar functions, meaning they are linear in each of their arguments. For a tensor T taking two vector arguments, this gives T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S), where P, Q, R, and S are vectors and a, b, c, and d are scalar coefficients. In particular, scalar factors pull out of each argument separately, so T(sa, tb) = sT(a, tb) = stT(a, b).
  • #1
nonne
So I'm reading these notes about differential geometry as it relates to general relativity. It defines a tensor as being, among other things, a linear scalar function, and soon after it gives the following equation as an example of this property of linearity:

T(aP + bQ, cR + dS) = acT(P, R) + adT(P, S) + bcT(Q, R) + bdT(Q, S)

where T is the tensor function, P, Q, R, and S are vectors, and a, b, c, and d are scalar coefficients.

Now I can follow the above leap from left hand side to right hand side as far as:

T(aP + bQ, cR + dS) = T(aP, cR + dS) + T(bQ, cR + dS) = T(aP, cR) + T(aP, dS) + T(bQ, cR) + T(bQ, dS)

but I don't quite understand the reasoning behind how the coefficients get outside of the function brackets. Somehow I managed to get a bachelor's in physics without ever taking a single linear algebra course, so I'm a little bit stumped.

Can anyone here give me a hand with this? Any help would be greatly appreciated.
 
  • #2
A linear operator T satisfies T(sa + b) = sT(a) + T(b) where s is a number. Tensors are multilinear operators (linear in each argument).
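Spelled out for a tensor with two vector slots, "linear in each argument" amounts to the following pair of identities (a standard statement of bilinearity, added here for reference rather than quoted from the notes):

[tex]T(a\mathbf{u} + b\mathbf{v}, \mathbf{w}) = a\,T(\mathbf{u}, \mathbf{w}) + b\,T(\mathbf{v}, \mathbf{w}), \qquad T(\mathbf{u}, c\mathbf{w} + d\mathbf{x}) = c\,T(\mathbf{u}, \mathbf{w}) + d\,T(\mathbf{u}, \mathbf{x})[/tex]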
 
  • #3
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?
 
  • #4
nonne said:
Yeah, no, I understand that bit, but does that also mean that:

T(sa, tb) = stT(a, b)

where s and t are scalars, for a linear operator T with multiple arguments?

It is linear in each argument: T(sa, tb) = sT(a, tb) = stT(a, b).
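Applied one slot at a time, this turns the expansion from post #1 into the full four-term result (the same computation, written out step by step):

[tex]T(aP + bQ,\, cR + dS) = a\,T(P, cR + dS) + b\,T(Q, cR + dS) = a\big[c\,T(P,R) + d\,T(P,S)\big] + b\big[c\,T(Q,R) + d\,T(Q,S)\big] = ac\,T(P,R) + ad\,T(P,S) + bc\,T(Q,R) + bd\,T(Q,S)[/tex]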
 
  • #5
It may or may not be helpful to think of a (0,2) tensor as a regular linear map from a vector space [tex]V[/tex] to the space of all linear maps from [tex]V[/tex] to [tex]\mathbb{R}[/tex]. Such a statement sounds convoluted, but you've actually encountered this kind of thing before: Given a function [tex]f : \mathbb{R}^n \to \mathbb{R}^m[/tex], the full derivative of [tex]f[/tex] is a mapping [tex]D : \mathbb{R}^n \to L(\mathbb{R}^n, \mathbb{R}^m)[/tex], where [tex]L(\mathbb{R}^n,\mathbb{R}^m)[/tex] denotes the space of all linear maps [tex]\mathbb{R}^n \to \mathbb{R}^m[/tex] (i.e., the space of all [tex]m \times n[/tex] matrices), such that [tex]D(p)[/tex] is the Jacobian of f at [tex]p[/tex]. However, the map [tex]D[/tex] isn't necessarily linear (although the Jacobian at any given point certainly is).

In the same way, a (0,2) tensor can be thought of as a function [tex]T: V \to L(V,k)[/tex], where [tex]k[/tex] is the base field. (The space [tex]L(V,k)[/tex] is better known as the dual vector space, [tex]V^{*}[/tex].) Specifically, if [tex]T[/tex] is a (0,2) tensor, there are two ways to define the linear functional [tex]T(\mathbf{v}) \in V^{*}[/tex]: Either [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{v},\mathbf{w})[/tex], or [tex]T(\mathbf{v}) : \mathbf{w} \mapsto T(\mathbf{w},\mathbf{v})[/tex]. The defining property of such a tensor is that, considered as a map [tex]V \to V^{*}[/tex], it is linear (regardless of which ordering you choose). Likewise, higher-rank tensors can be thought of as linear maps from some space [tex] V^{*} \otimes \ldots \otimes V^{*} \otimes V \otimes \ldots \otimes V [/tex] to [tex]V^{*}[/tex] or [tex]V[/tex]. Using this property of tensors, it is relatively straightforward to show that [tex]T(a\mathbf{v},b\mathbf{w}) = abT(\mathbf{v}, \mathbf{w})[/tex].
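To make the curried picture concrete, here is a minimal numerical sketch of the same idea (illustrative only, with an arbitrary matrix standing in for the tensor components): in a fixed basis on [tex]\mathbb{R}^3[/tex], a (0,2) tensor is just a matrix [tex]G[/tex], evaluation is [tex]T(\mathbf{v},\mathbf{w}) = \mathbf{v}^{T} G \mathbf{w}[/tex], and fixing the first slot produces the covector [tex]\mathbf{v}^{T} G[/tex].

[code]
import numpy as np

# In a fixed basis on R^3, a (0,2) tensor is represented by a 3x3 matrix G,
# and evaluating the tensor on two vectors is T(v, w) = v^T G w.
# The matrix below is arbitrary, a stand-in for some set of tensor components.
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3))

def T(v, w):
    # Feed two vectors into the "machine", get back a single real number.
    return v @ G @ w

u = rng.normal(size=3)
v = rng.normal(size=3)
w = rng.normal(size=3)
a, b = 2.0, -3.5

# Linearity in each slot: scalar factors pull out one argument at a time,
# so T(a v, b w) = a b T(v, w).
assert np.isclose(T(a * v, b * w), a * b * T(v, w))

# Additivity in the first slot as well: T(a v + b u, w) = a T(v, w) + b T(u, w).
assert np.isclose(T(a * v + b * u, w), a * T(v, w) + b * T(u, w))

# The curried view from the post: fixing v gives a linear functional T(v, .),
# i.e. an element of the dual space, represented by the row vector v^T G.
covector = v @ G
assert np.isclose(covector @ w, T(v, w))
[/code]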
 
  • #7
A tensor is a machine that you insert vectors into to obtain a real number. It is that simple.
 
 

Related to GR Math: how does tensor linearity work?

1. What is tensor linearity in GR Math?

Tensor linearity refers to a defining property of tensors, the mathematical objects used in General Relativity to describe physical quantities and the curvature of spacetime. A tensor is a multilinear map: it takes a fixed number of vector (and covector) arguments, returns a scalar, and is linear in each argument separately, so scalar factors pull out of any slot and a sum in any slot splits into a sum of tensor values.
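For instance, for the metric tensor [tex]g[/tex] acting on vectors (one slot shown; the same holds for the other):

[tex]g(a\mathbf{u} + b\mathbf{v}, \mathbf{w}) = a\,g(\mathbf{u}, \mathbf{w}) + b\,g(\mathbf{v}, \mathbf{w})[/tex]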

2. How does tensor linearity affect calculations in GR Math?

Tensor linearity is a crucial property because it makes component calculations possible: once a basis is chosen, a tensor is completely determined by its components, and any expression can be expanded slot by slot using the summation convention. This keeps the manipulations of General Relativity systematic and its equations consistent from one basis to another.

3. Can tensors with different ranks be added together using tensor linearity?

No. Tensors can only be added if they have the same type (rank) and are defined at the same point; the sum is then a tensor of that same type. Tensors of different ranks can be combined in other ways, such as the tensor (outer) product or contraction, but not by addition.

4. How does tensor linearity relate to the principle of covariance in GR Math?

Tensor linearity is closely related to the principle of covariance. The principle of covariance states that the laws of physics should take the same form in every coordinate system. Because a tensor is multilinear, its components obey a definite transformation law under a change of coordinates, so an equation written between tensors holds in every coordinate system.
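Concretely, multilinearity is what gives tensor components their transformation law. For a (0,2) tensor (a standard result, written here for illustration):

[tex]T_{\mu'\nu'} = \frac{\partial x^{\mu}}{\partial x^{\mu'}} \frac{\partial x^{\nu}}{\partial x^{\nu'}} T_{\mu\nu}, \qquad T(\mathbf{v}, \mathbf{w}) = T_{\mu\nu} v^{\mu} w^{\nu} = T_{\mu'\nu'} v^{\mu'} w^{\nu'}[/tex]

so the number [tex]T(\mathbf{v}, \mathbf{w})[/tex] is the same in every coordinate system, which is exactly what covariance requires.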

5. Are there any exceptions to tensor linearity in GR Math?

No. Multilinearity is part of the definition of a tensor, so it does not break down anywhere, including near singularities or in strong gravitational fields. What is nonlinear is Einstein's field equations themselves, which relate the curvature tensors to the metric in a nonlinear way, and at a genuine singularity the tensor fields may fail to be defined at all. Neither fact affects the linearity of a tensor in its arguments.
