Quick question about interpretation of contravariant and covariant components

SUMMARY

The discussion focuses on the interpretation of covariant and contravariant components of vectors, specifically how they relate to matrices and the metric tensor. Participants agree that covariant components can be represented as a vector multiplied by a matrix built from linearly independent basis vectors, while contravariant components come from multiplying by the inverse of that matrix. The Jacobian matrix is identified as the key tool for defining these components, with the metric tensor serving as the link between them. The conversation highlights that the two kinds of components coincide in an orthonormal basis.

PREREQUISITES
  • Understanding of covariant and contravariant components in vector spaces
  • Familiarity with the Jacobian matrix and its role in transformations
  • Knowledge of metric tensors and their properties
  • Basic concepts of multilinear algebra and tensor products
NEXT STEPS
  • Study the properties and applications of the Jacobian matrix in vector transformations
  • Explore the concept of metric tensors in differential geometry
  • Learn about the relationship between covariant and contravariant components in various coordinate systems
  • Investigate the tensor product decomposition and its implications for mixed tensors
USEFUL FOR

Students and professionals in mathematics, physics, and engineering who are working with vector spaces, tensors, and differential geometry will benefit from this discussion.

enfield
Can the covariant components of a vector v be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector as v multiplied by the *inverse* of that same matrix?

Thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is the normalized, mutually orthogonal one, for example: the matrix is then just the identity, which is its own inverse.

That's what the definitions I read seem to imply.

Thanks!
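To make the picture concrete, here is a minimal NumPy sketch (the basis E below is an arbitrary illustrative choice, not anything canonical). With the basis vectors as the columns of E, the covariant components are E^T v and the contravariant components are E^{-1} v, so the two coincide exactly when E^T = E^{-1}, i.e. when the basis is orthonormal.

Code:
import numpy as np

# Columns of E form a linearly independent (but non-orthogonal) basis of R^2.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([2.0, 3.0])

covariant = E.T @ v                    # v_i = v . e_i
contravariant = np.linalg.solve(E, v)  # solve v = E c, i.e. c = E^{-1} v

print(covariant)       # [2. 5.]
print(contravariant)   # [-1.  3.]

# With an orthonormal basis, E^T = E^{-1}, so the two sets of components agree:
Q = np.eye(2)
print(Q.T @ v, np.linalg.solve(Q, v))  # both [2. 3.]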
 
enfield said:
Can the covariant components of a vector v be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector as v multiplied by the *inverse* of that same matrix?

Hey enfield.

At first I didn't get what you were getting at but I think I do now.

The way I interpret what you said is to think about the metric tensor and its conjugate. The metric tensor written as g_{ij} vs. g^{ij} represents the map from A to B vs. from B to A, and in this context, if we look at covariant vs. contravariant components, then under this interpretation all we are doing is going from A to B in one case and from B to A in the other.

But then I tried to think about the situation where you have a mixed tensor. In the metric situation it makes sense, as you alluded to with matrices, but for a mixed tensor my guess is that, since all tensors are multilinear objects, they should have a matrix expansion: find the tensor product decomposition, then get the matrix form of the multilinear representation, which, like other matrices, has a basis and, if invertible, an inverse map.

I'd be interested to hear your reply on this if you don't mind.
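For the metric picture specifically, here is a minimal NumPy sketch (using the same illustrative basis E as above): the metric g_{ij} = e_i . e_j lowers an index, its inverse g^{ij} raises it, and the two maps undo each other, which is the "A to B vs. B to A" idea.

Code:
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # columns are the basis vectors e_1, e_2
g = E.T @ E                      # g_{ij} = e_i . e_j
g_inv = np.linalg.inv(g)         # g^{ij}

v_contra = np.array([-1.0, 3.0])   # contravariant components of some v
v_co = g @ v_contra                # lower the index: v_i = g_{ij} v^j

print(v_co)            # [2. 5.]
print(g_inv @ v_co)    # [-1.  3.] : raising the index recovers v^i
print(g_inv @ g)       # identity, i.e. g^{ik} g_{kj} = delta^i_j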
 
Thanks for the thoughtful response!

Okay, I hadn't read about tensors yet when I posted this, just the definitions of contravariant and covariant vector components. Now I have a bit, though.

As I understand them, the contravariant and covariant components of a vector can be defined in many ways (as the result of multiplying the vector by any invertible matrix of the right dimensions). But with tensors, the matrix used to define the components is specifically the Jacobian.

This PDF has an accessible section near the start on "Jacobian matrices and metric tensors": http://medlem.spray.se/gorgelo/tensors.pdf.

So, using the Jacobian to define them, you might have A_v = J v and B_v = J^{-1} v (A would be the covariant components of v here, I think, because they co-vary with the Jacobian). The metric tensor could then be interpreted as J^2, or the inverse of that, depending on which way you relate the components, since that is the matrix that relates A and B to each other: A_v = J^2 B_v.
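A sketch of that in a concrete curvilinear system (polar coordinates; the sample point is an arbitrary choice). One caveat worth flagging: in general the matrix relating the two sets of components is the metric g = J^T J rather than J^2; the two agree when J is symmetric.

Code:
import numpy as np

# Polar coordinates (r, theta): x = r cos(theta), y = r sin(theta).
r, theta = 2.0, np.pi / 3

# Jacobian J[k][i] = dx^k/du^i; its columns are the coordinate basis vectors.
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])

g = J.T @ J                          # metric tensor g_{ij}
print(g)                             # diag(1, r^2) for polar coordinates

v = np.array([1.0, 2.0])             # a vector in Cartesian components
v_contra = np.linalg.solve(J, v)     # contravariant: v = J v_contra
v_co = J.T @ v                       # covariant: v_i = v . e_i

print(np.allclose(g @ v_contra, v_co))   # True: g relates the two sets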
 
Yes, if your coordinate system has perpendicular straight lines as coordinate lines, the "covariant" and "contravariant" components are the same.

Another way of looking at it is this. To identify a point in a coordinate system, we can drop perpendiculars from the point to each of the coordinate axes, then measure the distance from the foot of each perpendicular to the origin. Another way is to draw lines through the point parallel to each of the axes and measure where those lines cross the other axes. As long as the coordinate axes are perpendicular straight lines, the two procedures give the same numbers. Otherwise, the first (perpendicular projection) gives the "covariant components" and the second (parallel projection) gives the "contravariant components".
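A small numeric check of the two procedures (NumPy; the oblique angle and the point are arbitrary choices). With unit axes e_1, e_2 at angle t, perpendicular projection gives the covariant components and parallel projection the contravariant ones, and the two agree when t is 90 degrees.

Code:
import numpy as np

def components(t, p):
    e1 = np.array([1.0, 0.0])
    e2 = np.array([np.cos(t), np.sin(t)])   # unit axes at angle t
    E = np.column_stack([e1, e2])
    covariant = E.T @ p                     # perpendicular projection onto each axis
    contravariant = np.linalg.solve(E, p)   # parallel projection: p = E c
    return covariant, contravariant

p = np.array([2.0, 1.0])
print(components(np.pi / 4, p))   # oblique axes: the two differ
print(components(np.pi / 2, p))   # perpendicular axes: both [2. 1.]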
 
