Tensor Product of Covariant and Contravariant Vectors

meteor
Is it possible to take the tensor product of two contravariant vectors?
Is it possible to take the tensor product of two covariant vectors?
 
Yes, and yes.

- Warren
 
Ok, I think it goes this way:
If you want to take the tensor product of two covariant vectors A and B, with their components represented by two row vectors, then you form A^T X B, where X denotes the matrix product and T the transpose; the resulting matrix is the tensor product.
Similarly, to take the tensor product of two contravariant vectors C and D, represented by column vectors, you form C X D^T.
Is this correct?
 
meteor said:
Ok, I think it goes this way:
If you want to take the tensor product of two covariant vectors A and B, with their components represented by two row vectors, then you form A^T X B, where X denotes the matrix product and T the transpose; the resulting matrix is the tensor product.
Similarly, to take the tensor product of two contravariant vectors C and D, represented by column vectors, you form C X D^T.
Is this correct?

As long as the column vector is to the left of the row vector, you'll get it right. Another way to think about it is to let the indices in the product:

x^μ x^ν

represent the addresses in the matrix representation. In other words, the component x^2 x^3 is the matrix element in the 2nd row, 3rd column.
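
A quick numerical check of both recipes (a minimal sketch in numpy; the component values are made up for illustration):

```python
import numpy as np

# Two covariant vectors represented as row vectors, shape (1, 3).
A = np.array([[1.0, 2.0, 3.0]])
B = np.array([[4.0, 5.0, 6.0]])

# Tensor product of covariant vectors: A^T X B, column times row.
cov = A.T @ B                        # shape (3, 3)
assert np.allclose(cov, np.outer(A, B))

# Two contravariant vectors represented as column vectors, shape (3, 1).
C = np.array([[1.0], [2.0], [3.0]])
D = np.array([[4.0], [5.0], [6.0]])

# Tensor product of contravariant vectors: C X D^T, column times row.
contra = C @ D.T                     # shape (3, 3)

# Index addressing: x^2 x^3 sits in the 2nd row, 3rd column (1-based),
# which is index [1, 2] in Python's 0-based indexing.
print(cov[1, 2])                     # 2.0 * 6.0 = 12.0
```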
 
I have just finished a math tensor course; it's really a lot of fun once you get going!
 
I presume when you say covariant vectors you mean in the classical backwards terminology.

So a covariant vector is a linear function L on the tangent space, with values which are numbers, and M is another such guy.

The tensor product of these two guys is a bilinear function L⊗M acting on pairs of tangent vectors in the only sensible way, i.e. (L⊗M)(v, w) = L(v)M(w), a product of numbers.
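
In code the definition is just as direct. A minimal sketch, with hypothetical components chosen for L and M:

```python
import numpy as np

a = np.array([1.0, 2.0])   # components of the covariant vector L
b = np.array([3.0, 4.0])   # components of the covariant vector M

def L(v):
    # a linear function on tangent vectors, with numerical values
    return a @ v

def M(w):
    return b @ w

def L_tens_M(v, w):
    # the only sensible bilinear definition: (L tens M)(v, w) = L(v) M(w)
    return L(v) * M(w)

v = np.array([5.0, 6.0])
w = np.array([7.0, 8.0])
print(L_tens_M(v, w))      # L(v) * M(w) = 17.0 * 53.0 = 901.0
```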

Now if e_1, ..., e_r is a basis for tangent vectors and h_1, ..., h_r is the dual basis for covariant vectors, where h_j has value zero at all the e_i except that h_j(e_j) = 1,

then the basic tensors h_j⊗h_k form a basis for all the second order tensors.

Hence we must be able to write out L⊗M in terms of these basic tensors.

I.e. there must be a doubly indexed family C_jk of numbers such that

L⊗M = Σ_jk C_jk h_j⊗h_k.

Now to learn what these numbers C_jk are, note that C_jk is the value of the right hand side on the contravariant tensor e_j⊗e_k. So to compute C_jk we just apply the left hand side to this same tensor.

I.e. we should have C_jk = (L⊗M)(e_j⊗e_k) = L(e_j)M(e_k).
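
Spelling out that evaluation of the right hand side, using the dual-basis property h_l(e_j) = δ_lj:

$$\Big(\sum_{l,m} C_{lm}\, h_l\otimes h_m\Big)(e_j\otimes e_k) = \sum_{l,m} C_{lm}\, h_l(e_j)\, h_m(e_k) = \sum_{l,m} C_{lm}\, \delta_{lj}\, \delta_{mk} = C_{jk}.$$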

Hence if the covariant vector L was represented by the row vector (...a_j...), and M by (...b_k...), i.e. if L(e_j) = a_j and M(e_k) = b_k, then L⊗M is represented by the matrix product

(a_j b_k) = (a_j)^T (b_k),

which seems to agree with what was said above.
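
The whole computation can be checked numerically; a small sketch with made-up components a and b:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # a_j = L(e_j)
b = np.array([4.0, 5.0, 6.0])   # b_k = M(e_k)

e = np.eye(3)                   # standard basis e_1, ..., e_r

# C_jk = L(e_j) M(e_k), computed entry by entry from the functionals.
C = np.array([[(a @ e[j]) * (b @ e[k]) for k in range(3)]
              for j in range(3)])

# It agrees with the matrix product (a_j)^T (b_k), i.e. the outer product.
assert np.allclose(C, np.outer(a, b))
print(C)
```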

Now my argument for the conceptual approach is that although I did not know how to multiply tensors when I started this post, I still seem to have got it right because I knew what a tensor actually is.

Or at least if I got it wrong, you can follow what I did and see where I went wrong.
 