Understanding tensor contraction

jdstokes
Hi all,

I'm teaching myself the algebraic side of tensors and I was wondering if you would be able to clarify a few things for me.

I prefer to think of a tensor in the set theoretic manner as a multi-linear mapping taking several copies of a vector space and its dual space to the base field.

I'm trying to convince myself about all of the commonly used tensor index laws from this. If we consider the basis vectors of each space as tensors themselves, then it is clear that a basis for the (k,l) tensor space can be constructed by suitable tensor products of the basis vectors and their duals.

Consider any basis vector \hat{\theta}^{(\alpha)} of the dual space. This basis vector forms a (0,1) tensor
\hat{\theta}^{(\alpha)} : T_p \to \mathbb{R} such that \hat{\theta}^{(\alpha)}: \hat{e}_{(\beta)} \mapsto 1 if \alpha = \beta and zero otherwise.

If we take the tensor product \hat{\theta}^{(\alpha)} \otimes \hat{e}_{(\beta)} we get a (1,1) tensor T_p^\ast \times T_p \to \mathbb{R}; (\omega,v) \mapsto \hat{\theta}^{(\alpha)}(v)\hat{e}_{(\beta)}(\omega) = v^\alpha \hat{e}_{(\beta)}(\omega).

Now I think we can say that \hat{e}_{(\beta)}(\omega) = \omega_\beta if we treat T_p as the dual space of T_p^\ast, which gives \hat{\theta}^{(\alpha)}(v)\hat{e}_{(\beta)}(\omega) = v^\alpha \omega_\beta.
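As a quick numerical sanity check (my own illustration, not part of the argument), identify T_p with \mathbb{R}^3 and pick hypothetical basis indices \alpha = 0, \beta = 1; each factor of the tensor product just picks out one component:

```python
import numpy as np

n = 3
alpha, beta = 0, 1  # a hypothetical choice of the two basis indices

v = np.array([2.0, 3.0, 5.0])        # an arbitrary vector, components v^mu
omega = np.array([7.0, 11.0, 13.0])  # an arbitrary covector, components omega_mu

theta_alpha = np.eye(n)[alpha]  # dual basis covector theta^(alpha)
e_beta = np.eye(n)[beta]        # basis vector e_(beta)

# the (1,1) tensor theta^(alpha) (x) e_(beta) acting on the pair (omega, v):
# theta^(alpha)(v) picks the alpha-th component of v, e_(beta)(omega) the
# beta-th component of omega
val = (theta_alpha @ v) * (e_beta @ omega)
assert val == v[alpha] * omega[beta]
```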

I'm not exactly sure how to show that this is the Kronecker delta. Is this just the way the Kronecker delta is defined?

Thanks
 
Well, your trouble is due to the fact that it's not the Kronecker delta!


You have the identity map V \to V.

The Kronecker delta is a transpose of the identity map: it's a map \mathbb{R} \to V^* \otimes V, which is effectively the same thing as choosing an element of V^* \otimes V; then we get (with respect to a basis and its dual)
\sum_{i = 1}^n \hat{\theta}^i \otimes \hat{e}_i.

We get the evaluation map if we transpose the other way to get a map V^* \otimes V \to \mathbb{R}.
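Concretely (a NumPy sketch of my own, identifying V with \mathbb{R}^n so that elements of V^* \otimes V become n \times n 
arrays): the Kronecker delta \sum_i \hat{\theta}^i \otimes \hat{e}_i is the identity matrix, and the evaluation map acts as the trace:

```python
import numpy as np

n = 4
basis = np.eye(n)  # rows are the basis vectors e_i; the dual basis
                   # theta^i has the same components in this identification

# the Kronecker delta as the element sum_i theta^i (x) e_i of V* (x) V
delta = sum(np.outer(basis[i], basis[i]) for i in range(n))
assert np.array_equal(delta, np.eye(n))  # it is the identity matrix, delta^i_j

# the evaluation map eps_V : V* (x) V -> R sends a simple tensor
# omega (x) v to omega(v); on a general element (an n x n array) it is the trace
omega = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([5.0, 6.0, 7.0, 8.0])
assert np.isclose(np.trace(np.outer(omega, v)), omega @ v)
```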
 
Hi Hurkyl,

Thanks for replying.

First of all, how do you define the transpose of the identity map \mathrm{id} : V \to V?

I assume by V^\ast \otimes V you mean the vector space of (1,1) tensors?

I'm not sure what you mean by the last line. How do you define the evaluation map?
 
By the way, how can the Kronecker delta be a map \mathbb{R} \to V^* \otimes V? Isn't it a tensor, so it maps from the vector space and its dual?
 
Another related question: if you take a (1,1) tensor T^\mu_\nu and multiply it tensorially with a (1,0) tensor V^\nu, you get a (1,0) tensor by tensor contraction, right?

So let's see this (note I'm using Einstein summation everywhere)

(T^\mu_\nu \hat{e}_{(\mu)} \otimes \hat{\theta}^{(\nu)}) \otimes (V^\nu \hat{e}_{(\nu)}) = T^\mu_\nu V^\nu \hat{e}_{(\mu)} \otimes \hat{\theta}^{(\nu)}\otimes \hat{e}_{(\nu)}

so one would expect that

\hat{\theta}^{(\nu)}\otimes \hat{e}_{(\nu)} : (v,\omega) \mapsto 1 \; \forall (v,\omega) \in T_p \times T_p^\ast

but this implies v^\nu \omega_\nu = 1 which seems like it must be wrong.
 
Yes; V^* \otimes V would be the space of all (1,1)-tensors. But there's a little more information there: it remembers that the dual space is the left factor. In index notation, you'd write an element as A_i{}^j. It is, of course, "naturally isomorphic" to V \otimes V^*, but the distinction is there, should it matter to you.

(I'm assuming all vector spaces are finite dimensional)

For \omega \in V^*, and v \in V, the evaluation map \epsilon_V : V^* \otimes V \to \mathbb{R} is defined by \epsilon_V(\omega \otimes v) = \omega(v).


If you have a map A : V \to W^*, you obtain a corresponding map B : V \otimes W \to \mathbb{R} by:
B(v \otimes w) = A(v)(w).
(note that A(v) is a dual vector on W, so we can evaluate it at w)
And we can go in the reverse direction:
A(v)(w) = B(v \otimes w).
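In components this correspondence is one line each way; here is a small sketch of my own (matrix entries are hypothetical, chosen just for the demo), with V \cong \mathbb{R}^2 and W \cong \mathbb{R}^3:

```python
import numpy as np

# a linear map A : V -> W*, with V ~ R^2 and W ~ R^3, as a 3x2 matrix
A = np.arange(6.0).reshape(3, 2)

v = np.array([1.0, 2.0])        # a vector in V
w = np.array([3.0, 4.0, 5.0])   # a vector in W

# the corresponding B : V (x) W -> R on a simple tensor v (x) w:
# A(v) is a covector on W, which we then evaluate at w
def B(v, w):
    return (A @ v) @ w

# the same number, written as a single contraction of all three arrays
assert np.isclose(B(v, w), np.einsum('i,ij,j->', w, A, v))
```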
This can be generalized: the "Hom-\otimes adjunction". Its statement involves the vector spaces \hom(V, W) of linear maps from V to W.

We can also dualize: the dual of B is a map B^* : \mathbb{R} \to (V \otimes W)^*:
B^*(1) = B



Everything I've said above ignores the question about what tensors really "are". There are natural isomorphisms between, for example,
(1) Elements of V \otimes W
(2) Linear maps V^* \otimes W^* \to \mathbb{R}
(3) Bilinear maps V^* \times W^* \to \mathbb{R}
(4) Linear maps \mathbb{R} \to V \otimes W
(5) Linear maps V^* \to W
(6) Linear maps W^* \to V

I always find the sheer number of ways to interpret a tensor like this somewhat bewildering! :frown: So, you can think of tensors as multilinear maps if you want, but the tensor algebra is somewhat indifferent to the representation.
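In components several of these identifications are one line each. A sketch of my own (the array entries are hypothetical), taking V \cong \mathbb{R}^2, W \cong \mathbb{R}^3, so an element of V \otimes W is a 2 \times 3 array M^{ij}:

```python
import numpy as np

M = np.arange(6.0).reshape(2, 3)  # (1) an element of V (x) W, components M^{ij}

phi = np.array([1.0, 2.0])        # a covector in V*
psi = np.array([3.0, 4.0, 5.0])   # a covector in W*

# (3) the bilinear map V* x W* -> R determined by M
bilinear = phi @ M @ psi

# (5) the linear map V* -> W determined by M
to_W = phi @ M          # a vector in W

# (6) the linear map W* -> V determined by M
to_V = M @ psi          # a vector in V

# consistency: feeding (5)'s output to psi, or phi to (6)'s output,
# reproduces the bilinear pairing (3)
assert np.isclose(to_W @ psi, bilinear)
assert np.isclose(phi @ to_V, bilinear)
```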
 
Hurkyl said:
...
Everything I've said above ignores the question about what tensors really "are". There are natural isomorphisms between, for example,
(1) Elements of V \otimes W
(2) Linear maps V^* \otimes W^* \to \mathbb{R}
(3) Bilinear maps V^* \times W^* \to \mathbb{R}
(4) Linear maps \mathbb{R} \to V \otimes W
(5) Linear maps V^* \to W
(6) Linear maps W^* \to V

I always find the sheer number of ways to interpret a tensor like this somewhat bewildering! :frown: So, you can think of tensors as multilinear maps if you want, but the tensor algebra is somewhat indifferent to the representation...

I don't know if you are really confused about tensors or just wanted to show they are "slippery" and hard to pin down :)
But this is my view:

First you have to define the tensor product of two vector spaces, V \otimes W, somehow. One of the common ways is your (3): V \otimes W := bilinear maps V^* \times W^* \to \mathbb{R}.
(If you are playing with vector spaces over the field of real numbers.)

Only after that does your (2) have meaning (remember V is any vector space, so it can be the dual space of some other space if you wish...), because now you have defined what that \otimes means.

And only after some definition of V \otimes W can you call its elements tensors ... (1).

(4) is related to the common laziness of physicists: if a physicist says tensor, (s)he often actually means a tensor field...
And a tensor field is defined as a map "from real space to some tensor space".

Numbers (5) and (6) are tensor spaces too, but somehow trivial: these are only (1,1) tensor spaces (related to the spaces V and W, though of course it could be that W = V^* \otimes V^* \otimes V^*).
 
I find it interesting that you come up with such vastly different qualitative descriptions of those 6 spaces, despite the fact that they are all 'the same'.

(In the same sense that V and V^{**} are 'the same')
 
Tensors are combinations of vectors and functions on vectors. Contracting means you have one of each type and you evaluate the function on the vector (or multivector).
 
Hurkyl said:
(In the same sense that V and V^{**} are 'the same')

I guess you mean this:
\forall A \in V^{**} \exists ! a \in V: \forall \alpha \in V^*: A(\alpha)=\alpha(a)

OK, if you have somehow defined the tensor product of two vector spaces, V \otimes W, then by the above, plus the abstractness of all the vector spaces used (where V, W can stand for anything), you can do magic... :-)

(And if you define contraction of tensors and use the representation theorem for linear 1-forms, things are even more interesting...)

I think that everything depends on which way you define V \otimes W.

And excuse me for my mistake: (4) is of course not a general tensor field; I simply overlooked the linearity of the map.
 
mathwonk said:
tensors are combinations of vectors and functions on vectors. contracting means you have one of each type and you evaluate the function on the vector (or multivector).
You mean linear functions on vectors, and those I prefer to call forms.
And please don't talk about contraction before you define what a tensor is... Because if you do (and people do), then everything related to tensors shrinks to manipulation of components...
And still I find your description somehow inaccurate: you should specify what your vectors mean in every single place in your statement (e.g. \alpha \in V^* is a one-form on V but a vector in V^*).
 