Induced maps on tensor spaces

In summary, the conversation discusses the relationship between a map P from vector space V to W and its corresponding dual map P* from W* to V*. The focus is on generalizing this relationship to higher rank tensors and understanding the pushforward and pullback operations. The conversation introduces the use of covariant and contravariant tensors and explains how these are mapped using P* and P, respectively.
  • #1
quasar_4
Hi all,

Given a linear map P: V --> W between vector spaces V and W, and the dual map P*: W* --> V*, we have the relationship that many of us are familiar with:

For e in V and F in W*, we can say that

(P*(F))(e)=F(P(e)).

This is nice and fine. So this is kind of the case for a rank 1 tensor. Now can anyone help me generalize this to rank r tensors, namely those of type (0,r) and (r,0)? We can't worry about the case of type (r,s) unless we know that P is an invertible map. But I'm having a REALLY REALLY hard time understanding the case for some higher rank tensor.
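In finite dimensions the rank 1 identity is easy to check numerically. A minimal numpy sketch, assuming V and W are finite-dimensional with P represented by a matrix (so P* is represented by its transpose); the dimensions here are just illustrative:

```python
import numpy as np

# Illustrative 2 -> 3 dimensional example: P: V -> W as a 3x2 matrix,
# so the dual map P*: W* -> V* is represented by the transpose P.T.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 2))   # P: V -> W
e = rng.standard_normal(2)        # e in V
F = rng.standard_normal(3)        # F in W* (as a coordinate vector)

lhs = (P.T @ F) @ e               # (P*(F))(e)
rhs = F @ (P @ e)                 # F(P(e))
print(np.isclose(lhs, rhs))      # True
```

Both sides reduce to the same number F^T P e, which is why the identity holds for any choice of P, F, and e.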

I am also hoping that whoever can help with this can introduce it with also explaining which of these induced maps is the pushforward and which is the pullback.

Thanks.
 
  • #2
To use the same notation as you have used here, it is convenient to represent a rank-r covariant tensor as a product of covectors: [tex]F_1 \otimes F_2 \otimes ... \otimes F_r [/tex]. To map this to a real number (as you did with the rank 1 case above), we pair it with a contravariant tensor [tex]e_1 \otimes e_2 \otimes ... \otimes e_r[/tex]. Extending the rank 1 relationship is actually rather simple, since each factor is mapped individually as follows:

[tex][P^*(F_1 \otimes F_2 \otimes ... \otimes F_r)](e_1\otimes e_2 \otimes ... \otimes e_r) = [F_1 \otimes F_2 \otimes ... \otimes F_r]\left((P \otimes P \otimes ... \otimes P)(e_1\otimes e_2 \otimes ... \otimes e_r)\right)[/tex]

which can be rewritten as:

[tex][P^*(F_1) \otimes P^*(F_2) \otimes ... \otimes P^*(F_r)](e_1\otimes e_2 \otimes ... \otimes e_r) = [F_1 \otimes F_2 \otimes ... \otimes F_r](P(e_1)\otimes P(e_2) \otimes ... \otimes P(e_r))[/tex]

which reduces to the scalar product:

[tex][P^*(F_1)](e_1) \cdot [P^*(F_2)](e_2) \cdot ... \cdot [P^*(F_r)](e_r) = F_1(P(e_1)) \cdot F_2(P(e_2)) \cdot ... \cdot F_r(P(e_r))[/tex]

Covectors are mapped via the pullback [tex]P^*(F) \in V^*[/tex], whereas vectors are mapped via the pushforward [tex]P(e) \in W[/tex].
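The r = 2 case of the above can be checked concretely. A small numpy sketch, using the same matrix representations as before (P as a matrix, P* as its transpose; dimensions are illustrative):

```python
import numpy as np

# r = 2 check: pulling back F1 ⊗ F2 along P and pairing with e1 ⊗ e2
# gives the same number as pairing F1 ⊗ F2 with P(e1) ⊗ P(e2).
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 2))                          # P: V -> W
F1, F2 = rng.standard_normal(3), rng.standard_normal(3)  # covectors in W*
e1, e2 = rng.standard_normal(2), rng.standard_normal(2)  # vectors in V

# [P*(F1) ⊗ P*(F2)](e1 ⊗ e2) = [P*(F1)](e1) * [P*(F2)](e2)
lhs = ((P.T @ F1) @ e1) * ((P.T @ F2) @ e2)
# [F1 ⊗ F2](P(e1) ⊗ P(e2)) = F1(P(e1)) * F2(P(e2))
rhs = (F1 @ (P @ e1)) * (F2 @ (P @ e2))
print(np.isclose(lhs, rhs))   # True
```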
 
  • #3


Hello there,

Induced maps on tensor spaces can be a bit tricky to understand, but I will try my best to explain the generalization to higher rank tensors. First, let's go over the definitions of pushforward and pullback maps. Given a linear map P: V --> W, the pushforward goes in the same direction as P itself: it carries vectors (and more generally contravariant tensors) from V to W. The pullback goes in the opposite direction on the dual spaces: it carries covectors (and covariant tensors) on W back to covectors on V. Now, let's see how these give induced maps on tensor spaces.

For a rank (0,r) tensor T in W*⊗...⊗W* (r times), i.e. a multilinear map on W×...×W, the induced map is the pullback P*: W*⊗...⊗W* --> V*⊗...⊗V*, given by (P*(T))(v1,v2,...,vr) = T(P(v1),P(v2),...,P(vr)) for vectors v1, v2, ..., vr ∈ V. In other words, P*(T) = T∘(P×...×P). This is the pullback, since it carries a covariant tensor on the codomain W back to a covariant tensor on the domain V, opposite to the direction of P.
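As a concrete check for r = 2 (note that, conventionally, the map carrying a (0,r) tensor on W back to one on V is the pullback): in matrix form a (0,2) tensor T on W is a matrix, and its pullback is P^T T P. The representations and dimensions below are illustrative:

```python
import numpy as np

# Pullback of a (0,2) tensor T on W: in matrix form P*(T) = P.T @ T @ P,
# so that (P*(T))(v1, v2) = T(P(v1), P(v2)).
rng = np.random.default_rng(4)
P = rng.standard_normal((3, 2))   # P: V -> W
T = rng.standard_normal((3, 3))   # T in W*⊗W*, dim W = 3
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)  # vectors in V

lhs = v1 @ (P.T @ T @ P) @ v2     # (P*(T))(v1, v2)
rhs = (P @ v1) @ T @ (P @ v2)     # T(P(v1), P(v2))
print(np.isclose(lhs, rhs))       # True
```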

For a rank (r,0) tensor S in V⊗...⊗V (r times), the induced map is the pushforward P*: V⊗...⊗V --> W⊗...⊗W, determined on decomposable tensors by P*(v1⊗v2⊗...⊗vr) = P(v1)⊗P(v2)⊗...⊗P(vr). Equivalently, viewing S as a multilinear map on covectors, (P*(S))(f1,f2,...,fr) = S(P*(f1),P*(f2),...,P*(fr)) for covectors f1, f2, ..., fr ∈ W*. This is the pushforward, since it carries a contravariant tensor on V forward to a contravariant tensor on W, in the same direction as P.
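A matching check for the pushforward of a (2,0) tensor: in matrix form a (2,0) tensor S on V is a matrix, and its pushforward is P S P^T. The representations and dimensions below are illustrative:

```python
import numpy as np

# Pushforward of a (2,0) tensor S in V⊗V: in matrix form P S P.T,
# so that evaluating it on covectors g1, g2 in W* equals S(P*(g1), P*(g2)).
rng = np.random.default_rng(2)
P = rng.standard_normal((3, 2))   # P: V -> W
S = rng.standard_normal((2, 2))   # S in V⊗V, dim V = 2
g1, g2 = rng.standard_normal(3), rng.standard_normal(3)  # covectors in W*

lhs = g1 @ (P @ S @ P.T) @ g2     # (pushforward of S)(g1, g2)
rhs = (P.T @ g1) @ S @ (P.T @ g2) # S(P*(g1), P*(g2))
print(np.isclose(lhs, rhs))       # True
```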
 

1. What is an induced map on tensor spaces?

An induced map on tensor spaces is a linear transformation between two tensor spaces that preserves the algebraic structure of the tensors. This means that the map operates on individual components of the tensors in a way that maintains their tensor product properties.

2. How is an induced map defined?

An induced map is defined by its action on the basis elements of the tensor space. By specifying how the map operates on the basis elements, the entire map can be determined for any tensor in the space.

3. What does it mean for an induced map to be tensor-product preserving?

A tensor-product preserving map is one that preserves the tensor product structure of the tensors it operates on. This means that the map will distribute over tensor products and maintain the same algebraic properties, such as linearity and associativity.
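In finite dimensions this property can be made very concrete: on vectorized tensors the induced map P⊗P is the Kronecker product kron(P, P), and distributing over tensor products is exactly the Kronecker mixed-product identity. A small numpy sketch (dimensions illustrative):

```python
import numpy as np

# The induced map P⊗P on V⊗V distributes over tensor products:
# (P⊗P)(u⊗v) = P(u) ⊗ P(v), which in coordinates is the
# Kronecker mixed-product identity kron(P, P) @ kron(u, v) = kron(Pu, Pv).
rng = np.random.default_rng(3)
P = rng.standard_normal((3, 2))                          # P: V -> W
u, v = rng.standard_normal(2), rng.standard_normal(2)    # vectors in V

lhs = np.kron(P, P) @ np.kron(u, v)   # (P⊗P)(u⊗v) on vectorized tensors
rhs = np.kron(P @ u, P @ v)           # P(u) ⊗ P(v)
print(np.allclose(lhs, rhs))          # True
```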

4. What is the significance of induced maps on tensor spaces?

Induced maps on tensor spaces play a crucial role in many areas of science and engineering, particularly in the fields of quantum mechanics and general relativity. They allow for the manipulation and transformation of tensor quantities, which are essential for describing physical systems and mathematical structures.

5. Can an induced map be non-linear?

No, an induced map on tensor spaces must be a linear transformation in order to preserve the tensor product structure. Non-linear maps would distort the algebraic properties of the tensors and would not be considered induced maps.
