Quick question about interpretation of contravariant and covariant components

enfield
Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector as v multiplied by the *inverse* of that same matrix?

Thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is orthonormal, for example: the matrix is then just the identity, which is its own inverse.

That's what the definitions I read seem to imply.

Thanks!
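To make that concrete, here is a small numerical sketch (my own example; I take the basis vectors as the columns of a matrix E, so "dotting with the basis" is multiplication by E^T and the expansion coefficients come from E^{-1}):

```python
import numpy as np

# Hypothetical skewed basis: the columns of E are the basis vectors e1, e2.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([2.0, 3.0])

contra = np.linalg.solve(E, v)  # contravariant: coefficients with v = E @ contra
co = E.T @ v                    # covariant: dot products of v with each basis vector

# The metric g = E^T E converts one set of components into the other.
g = E.T @ E
assert np.allclose(g @ contra, co)

# With an orthonormal basis E^T = E^{-1}, so the two recipes coincide.
Q = np.eye(2)
assert np.allclose(np.linalg.solve(Q, v), Q.T @ v)
```

So the identity-matrix case is the special case of the general fact that for an orthonormal basis the inverse and the transpose agree.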
 
enfield said:
Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector as v multiplied by the *inverse* of that same matrix?

Hey enfield.

At first I didn't get what you were getting at, but I think I do now.

The way I interpret what you said is in terms of the metric tensor and its inverse. Written as ##g_{ij}## versus ##g^{ij}##, the metric takes you from A to B in one direction and from B to A in the other, and in this context that is exactly what relating covariant to contravariant components is doing.

But then I tried to think about the situation when you have a mixed tensor. In the metric situation it makes sense, as you have alluded to, with matrices. For a mixed tensor, my guess is that since all tensors are multilinear objects, they should have a matrix expansion: find the tensor product decomposition, then get the matrix form of the multilinear representation, which like other matrices has a basis and, if invertible, an inverse map.

I'd be interested to hear your reply on this if you don't mind.
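If it helps, the ##g_{ij}##-versus-##g^{ij}## picture can be checked numerically; this is a sketch with a made-up skewed 2-D basis, not anything from a particular textbook:

```python
import numpy as np

# Made-up basis: the columns of E are the basis vectors.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
g_lower = E.T @ E                  # g_{ij} = e_i . e_j  (lowers an index)
g_upper = np.linalg.inv(g_lower)   # g^{ij}              (raises an index)

v_contra = np.array([-1.0, 3.0])   # some contravariant components
v_co = g_lower @ v_contra          # "A to B": lower the index
back = g_upper @ v_co              # "B to A": raise it again
assert np.allclose(back, v_contra)
```

So going from A to B with one and from B to A with the other really is just multiplying by a matrix and then by its inverse.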
 
Thanks for the thoughtful response!

Okay, I hadn't read about tensors yet when I posted this, just the definitions of contravariant and covariant vector components. Now I have a bit, though.

As I understand them, the contravariant and covariant components of a vector can be defined in many ways (as the result of multiplying the vector by any invertible matrix of the right dimensions). But with tensors, the matrix that is used to transform the components between coordinate systems is specifically the Jacobian.

This PDF has an accessible section near the start on "Jacobian matrices and metric tensors": http://medlem.spray.se/gorgelo/tensors.pdf.

So, using the Jacobian ##J## to define them, you might have ##A_v = J^T v## and ##B_v = J^{-1} v## (##A## would be the covariant components of ##v## here, I think, because they co-vary with the basis). The metric tensor can then be interpreted as ##g = J^T J##, or its inverse, depending on which way you relate the components; that is exactly the matrix that relates ##A## and ##B## to each other, since ##A_v = J^T J \, B_v##.
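As a sanity check on relating the metric to the Jacobian (note it comes out as ##J^T J## rather than ##J^2##; the transpose matters when ##J## is not symmetric), here is polar coordinates, where the answer is known in closed form. This example is mine, not from the PDF:

```python
import numpy as np

# Jacobian of the map (r, t) -> (x, y) = (r cos t, r sin t).
def jacobian(r, t):
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

r, t = 2.0, 0.7
J = jacobian(r, t)
g = J.T @ J  # metric induced by the coordinate change

# For polar coordinates this recovers the familiar ds^2 = dr^2 + r^2 dt^2.
assert np.allclose(g, np.diag([1.0, r**2]))
```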
 
Yes, if your coordinate system has perpendicular straight lines as coordinate lines (Cartesian coordinates), the "covariant" and "contravariant" components are the same.

Another way of looking at it is this. To describe a point relative to a pair of unit-length coordinate axes, one recipe is: drop a perpendicular from the point to each coordinate axis and measure the distance from the foot of the perpendicular to the origin. Another recipe is: draw a line from the point parallel to each axis and measure where it crosses the other axis. As long as the coordinate axes are perpendicular straight lines, the two recipes give the same numbers. Otherwise, the first (perpendicular projection) gives the "covariant" components and the second (parallel projection) gives the "contravariant" components.
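The two recipes are easy to check numerically; a minimal sketch with hypothetical unit axes 60 degrees apart:

```python
import numpy as np

# Two unit axes, 60 degrees apart (not perpendicular).
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])
E = np.column_stack([e1, e2])
v = np.array([2.0, 1.0])

co = E.T @ v                    # perpendicular projection onto each axis
contra = np.linalg.solve(E, v)  # parallel projection: v = contra[0]*e1 + contra[1]*e2
assert not np.allclose(co, contra)  # skew axes: the two sets of components differ

# With perpendicular axes the two recipes agree again.
Q = np.column_stack([e1, [0.0, 1.0]])
assert np.allclose(Q.T @ v, np.linalg.solve(Q, v))
```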
 