## Tensor fields and multiplication

Hello! I'm currently reading John Lee's books on different kinds of manifolds, and three questions have come up.

In 'Introduction to Smooth Manifolds' Lee writes that a tensor of rank 2 can always be decomposed into a symmetric and an antisymmetric part:

A = Sym(A) + Alt(A).

We define the symmetric product, which picks out the symmetric part of A \otimes B:

AB = Sym(A \otimes B),

while the wedge product describes the antisymmetric part:

A \wedge B = Alt(A \otimes B).
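As a concrete sketch (my own illustration, not from the book, using numpy arrays as coordinate representations of covariant tensors), one can check that for the tensor product of two covectors the symmetric and antisymmetric parts add back up to the whole:

```python
import numpy as np

# Two covectors (rank-1 covariant tensors) on R^3; values are arbitrary.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, -1.0])

T = np.outer(a, b)        # a ⊗ b, a rank-2 tensor
sym = 0.5 * (T + T.T)     # Sym(a ⊗ b), the symmetric product
alt = 0.5 * (T - T.T)     # Alt(a ⊗ b), the wedge product (up to convention)

# For rank 2 the decomposition is exact: T = Sym(T) + Alt(T)
print(np.allclose(T, sym + alt))  # True
```

The factor 1/2 is the averaging over the two permutations of the index pair; for higher rank the average runs over all k! permutations.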

Now, first of all, the fact that a tensor of, say, rank 3 cannot be decomposed in this way seems quite counter-intuitive to me. How do you think of it? Is there an easy way to picture it?

Secondly: for tensors of rank higher than 2, can we define a product for the leftover term (which is neither symmetric nor antisymmetric)? In other words:

A * B = (A \otimes B) - Sym(A \otimes B) - Alt(A \otimes B) ?
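To see numerically that something really is left over in rank 3, here is a sketch (my own helper functions `sym` and `alt`, implementing full symmetrization and antisymmetrization over all index permutations, are assumptions, not from the book):

```python
import itertools
import math

import numpy as np

def perm_sign(p):
    # Sign of a permutation, computed by counting inversions.
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def sym(T):
    # Average of T over all permutations of its indices.
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / math.factorial(T.ndim)

def alt(T):
    # Signed average of T over all permutations of its indices.
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(perm_sign(p) * np.transpose(T, p) for p in perms) / math.factorial(T.ndim)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))   # a generic rank-3 tensor

residual = T - sym(T) - alt(T)       # the "mixed symmetry" part
print(np.allclose(residual, 0))      # prints False for a generic T
```

A dimension count makes the picture clear: symmetric rank-3 tensors on R^3 form a 10-dimensional space and antisymmetric ones a 1-dimensional space, while all rank-3 tensors form a 27-dimensional space, so a 16-dimensional "mixed symmetry" piece is unaccounted for.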

The last question concerns the total covariant derivative that is defined in the book on Riemannian manifolds. Lee first claims:

'Although the definition of a linear connection resembles the characterization of (2,1)-tensor fields [...], a linear connection is not a tensor field because it is not linear over C^∞(M) in Y, but instead satisfies the product rule.' (- 'Riemannian Manifolds: An Introduction to Curvature' by John Lee)

Later, however, he states that the total covariant derivative (the generalization of this linear connection) of a (k,l)-tensor field is a (k+1,l)-tensor field. This seems contradictory... or am I mixing something up?

Thanks for all the help!

Kind regards,
Kontilera
Regarding the second question: what Lee is saying is that a connection ∇: $\Gamma(TM)\times \Gamma(TM)\rightarrow \Gamma(TM)$ looks like a (2,1)-tensor (compare with Lemma 2.4), but it is not one, because it is not $C^{\infty}(M)$-linear in its second argument.

Later, he defines the covariant derivative of a tensor and remarks that if you take a tensor T of type (k,l) and take its covariant derivative ∇T, you get a tensor of type (k+1,l). In particular, if you take a vector field Y (tensor of type (0,1)) and jam it up the second slot of the connection map like so: ∇Y, you get a tensor, because the problem was in the second argument of ∇ and you've now eliminated that problem.
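To spell out the distinction in formulas (my paraphrase of Lee's point, not a quote): a genuine $(2,1)$-tensor field $F$ would be $C^\infty(M)$-linear in both slots, whereas a connection trades linearity in the second slot for the Leibniz rule:

```latex
F(X, fY) = f\,F(X, Y),
\qquad\text{but}\qquad
\nabla_X(fY) = f\,\nabla_X Y + (Xf)\,Y,
\qquad f \in C^\infty(M).
```

Once $Y$ is fixed, the offending $(Xf)\,Y$ term never arises: the map $X \mapsto \nabla_X Y$ is $C^\infty(M)$-linear in $X$, which is exactly why $\nabla Y$ is a tensor field.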
Thanks for the answer! Could anyone give some response to the idea of the new multiplication? Maybe it's just not so useful, so Lee doesn't mention it...
