How Do Induced Maps Affect Higher Rank Tensors?

  • Context: Graduate
  • Thread starter: quasar_4
  • Tags: Induced, Tensor
SUMMARY

This discussion focuses on the relationship between induced maps and higher rank tensors, specifically the mapping of covariant tensors via pullbacks and contravariant tensors via pushforwards. The relationship is established through the equation (P*(F))(e) = F(P(e)), which is generalized to rank r tensors of type (0,r) and (r,0). The discussion emphasizes the distinction between the pushforward and pullback operations in tensor analysis.

PREREQUISITES
  • Understanding of vector spaces and linear maps
  • Familiarity with tensor notation and operations
  • Knowledge of covariant and contravariant tensors
  • Basic concepts of pushforward and pullback in differential geometry
NEXT STEPS
  • Study the properties of induced maps in linear algebra
  • Learn about the contraction of tensors and its applications
  • Explore the differences between pushforward and pullback in detail
  • Investigate the implications of invertible maps on tensor transformations
USEFUL FOR

Mathematicians, physicists, and students specializing in linear algebra, differential geometry, or tensor analysis who seek to deepen their understanding of tensor mappings and their applications.

quasar_4
Hi all,

Given a linear map P: V --> W between vector spaces V and W, and the induced map P*: W* --> V*, we have the relationship that many of us are familiar with:

For e in V and F in W*, we can say that

(P*(F))(e)=F(P(e)).

This is nice and fine, and it covers the rank 1 case. Now can anyone help me generalize this to rank r tensors, namely those of type (0,r) and (r,0)? (We can't treat the mixed type (r,s) case unless we know that P is invertible, so let's set it aside.) I'm having a really hard time seeing how this works for higher rank tensors.

I am also hoping that whoever can help with this can introduce it with also explaining which of these induced maps is the pushforward and which is the pullback.

Thanks.
 
To use the same notation as you have used here, it is convenient to work with a decomposable covariant tensor F_1 \otimes F_2 \otimes ... \otimes F_r (a general (0,r) tensor is a sum of such terms, and everything below extends by linearity). To map this to a real number (as you did with the rank 1 tensor above), we contract it with a contravariant tensor e_1 \otimes e_2 \otimes ... \otimes e_r. Extending the rank 1 relation is then simple, since each factor is mapped individually:

[P^*(F_1 \otimes F_2 \otimes ... \otimes F_r)](e_1\otimes e_2 \otimes ... \otimes e_r) = [F_1 \otimes F_2 \otimes ... \otimes F_r]((P \otimes P \otimes ... \otimes P)(e_1\otimes e_2 \otimes ... \otimes e_r))

where P \otimes P \otimes ... \otimes P is the induced map acting on each factor of the tensor product. This

which can be rewritten as:

[P^*(F_1) \otimes P^*(F_2) \otimes ... \otimes P^*(F_r)](e_1\otimes e_2 \otimes ... \otimes e_r) = [F_1 \otimes F_2 \otimes ... \otimes F_r](P(e_1)\otimes P(e_2) \otimes ... \otimes P(e_r))

which reduces to a product of scalars:

[P^*(F_1)](e_1) \cdot [P^*(F_2)](e_2) \cdot ... \cdot [P^*(F_r)](e_r) = F_1(P(e_1)) \cdot F_2(P(e_2)) \cdot ... \cdot F_r(P(e_r))
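The factorwise identity above can be checked numerically for r = 2, where a decomposable (0,2) tensor F_1 \otimes F_2 evaluates as a product of two linear functionals. A minimal sketch in plain Python (the dimensions, matrix entries, and the helper names apply, pullback, and pair are illustrative, not from the thread):

```python
# Finite-dimensional sketch: V = R^3, W = R^2, P represented by a 2x3 matrix.

def apply(P, e):
    """Push a vector e in V forward to P(e) in W."""
    return [sum(P[i][j] * e[j] for j in range(len(e))) for i in range(len(P))]

def pullback(F, P):
    """Pull a covector F in W* back to P*(F) = F o P in V*."""
    return [sum(F[i] * P[i][j] for i in range(len(F))) for j in range(len(P[0]))]

def pair(F, e):
    """Evaluate a covector on a vector: F(e)."""
    return sum(f * x for f, x in zip(F, e))

P = [[1.0, 2.0, 0.0],
     [0.0, -1.0, 3.0]]
F1, F2 = [2.0, 1.0], [-1.0, 4.0]          # covectors in W*
e1, e2 = [1.0, 0.0, 2.0], [3.0, 1.0, -1.0]  # vectors in V

# (0,2) case: [P*(F1) ⊗ P*(F2)](e1, e2) should equal (F1 ⊗ F2)(P(e1), P(e2)).
lhs = pair(pullback(F1, P), e1) * pair(pullback(F2, P), e2)
rhs = pair(F1, apply(P, e1)) * pair(F2, apply(P, e2))
assert abs(lhs - rhs) < 1e-12
```

Both sides agree because the pullback is defined precisely so that evaluating P*(F) on e is the same as evaluating F on P(e), factor by factor.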

To answer the terminology question: covectors are mapped via the pullback P^*(F) \in V^*, which runs against the direction of P, whereas vectors are mapped via the pushforward P(e) \in W, which runs with it.
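For the (r,0) case raised in the original question, the induced map runs the other way: the pushforward (often written P_* to distinguish it from P itself) acts factorwise on decomposable contravariant tensors. A sketch, by direct analogy with the covariant case above:

```latex
P_*(e_1 \otimes e_2 \otimes \cdots \otimes e_r)
  = P(e_1) \otimes P(e_2) \otimes \cdots \otimes P(e_r)
  \in W \otimes W \otimes \cdots \otimes W
```

and, again, a general (r,0) tensor is handled by extending this by linearity over sums of decomposable terms.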
 
