## quick question about interpretation of contravariant and covariant components

Can the covariant components of a vector, v, be thought of as v multiplied by a matrix whose rows are linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, as v multiplied by the *inverse* of that same matrix?

Thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is the normalized, mutually orthogonal one, for example: the matrix is then just the identity, which is its own inverse.

That's what the definitions I read seem to imply.
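A small numerical sketch of this (assuming "the matrix" here means the matrix B whose columns are the basis vectors, which is one way to read it): the contravariant components come out as B⁻¹v, the covariant ones as Bᵀv, and the two coincide exactly when the basis is orthonormal, since then Bᵀ = B⁻¹.

```python
import numpy as np

# Basis vectors as the columns of B (an illustrative choice of
# linearly independent, non-orthonormal basis).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # e1 = (1, 0), e2 = (1, 1)
v = np.array([2.0, 3.0])

contravariant = np.linalg.inv(B) @ v  # coefficients in v = sum_i v^i e_i
covariant = B.T @ v                   # projections v_i = v . e_i

print(contravariant)  # [-1.  3.]
print(covariant)      # [2. 5.]

# With an orthonormal basis, B is orthogonal, so B^{-1} = B^T and the
# two sets of components coincide:
Q = np.eye(2)
print(np.linalg.inv(Q) @ v, Q.T @ v)  # both [2. 3.]
```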

Thanks!


 Quote by enfield Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, the vector v multiplied by the *inverse* of that same matrix?
Hey enfield.

At first I didn't get what you were getting at but I think I do now.

The way I interpret what you said is to think about the metric tensor and its conjugate. Written as $g_{ij}$ versus $g^{ij}$, the metric represents the map from A to B versus the map from B to A, and in this context, if we look at covariant versus contravariant components, all we are doing under this interpretation is going from A to B in one case and from B to A in the other.
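That two-directions picture can be sketched concretely: take any symmetric positive-definite matrix as an illustrative metric (the numbers below are made up), lower an index with $g_{ij}$, and raise it back with its inverse $g^{ij}$; the two maps undo each other.

```python
import numpy as np

# An illustrative metric (any symmetric positive-definite matrix will do).
g_lower = np.array([[2.0, 1.0],
                    [1.0, 3.0]])   # plays the role of g_ij  (A -> B)
g_upper = np.linalg.inv(g_lower)   # plays the role of g^ij  (B -> A)

a_contra = np.array([1.0, 2.0])    # contravariant components a^i

a_cov = g_lower @ a_contra         # lower the index: a_i = g_ij a^j
back = g_upper @ a_cov             # raise it again:  a^i = g^ij a_j

print(a_cov)                        # [4. 7.]
print(np.allclose(back, a_contra))  # True
```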

But then I tried to think about the situation where you have a mixed tensor. In the metric case the matrix picture makes sense, as you alluded to. For a mixed tensor, my guess is that since all tensors are multilinear objects, they should still have a matrix expansion: find the tensor product decomposition, then get the matrix form of the multilinear representation, which, like other matrices, has a basis and, if invertible, an inverse map.

I'd be interested to hear your reply on this if you don't mind.

 Thanks for the thoughtful response! Okay, I hadn't read about tensors yet when I posted this - just the definitions of contra- and co-variant vector components. Now I have a bit, though.

As I understand them, the contra- and co-variant components of a vector can be defined in many ways (as the result of multiplying any invertible matrix of the right dimensions by the vector). But with tensors, the matrix used to define the components is exclusively the Jacobian. This pdf has an accessible section near the start on "Jacobian matrices and metric tensors": http://medlem.spray.se/gorgelo/tensors.pdf.

So using the Jacobian to define them, you might have $$A_v = Jv$$ and $$B_v = J^{-1}v$$ (A would be the co-variant components of v here, I think, because they co-vary with the Jacobian), and then the metric tensor could be interpreted as $J^2$ or the inverse of that, depending on which way you were relating the components - which is obviously the matrix that would relate A and B to each other.
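One caveat worth flagging on that last step: in the usual convention the metric induced by a coordinate change is $g = J^T J$ rather than $J^2$ (the two agree when $J$ is symmetric). A quick check with the polar-to-Cartesian map $x = r\cos\theta$, $y = r\sin\theta$, whose metric should come out as $\mathrm{diag}(1, r^2)$:

```python
import numpy as np

r, theta = 2.0, 0.5

# Jacobian of (x, y) = (r cos t, r sin t) with respect to (r, theta)
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])

g = J.T @ J  # induced metric in polar coordinates

print(np.round(g, 10))                       # [[1. 0.] [0. 4.]], i.e. diag(1, r^2)
print(np.allclose(g, np.diag([1.0, r**2])))  # True
```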
