Quick question about interpretation of contravariant and covariant components

In summary, the covariant components of a vector can be thought of as the vector multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, as the vector v multiplied by the *inverse* of that same matrix.
  • #1
enfield
Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, the vector v multiplied by the *inverse* of that same matrix?

thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is the normalized mutually orthogonal one, for example, because then the matrix is just the identity one, which is its own inverse.
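For concreteness, here is a small worked example of the picture I have in mind (my own numbers, with the basis vectors written as the columns of a matrix E; strictly the covariant components come from the transpose E^T rather than from E itself, which is exactly why the orthonormal case is special, since there E^T = E^{-1}):

[tex] E = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad v = \begin{pmatrix} 2 \\ 3 \end{pmatrix} [/tex]

[tex] \text{covariant: } v_i = v \cdot e_i, \quad E^T v = \begin{pmatrix} 2 \\ 5 \end{pmatrix}, \qquad \text{contravariant: } v = v^1 e_1 + v^2 e_2, \quad E^{-1} v = \begin{pmatrix} -1 \\ 3 \end{pmatrix} [/tex]

For an orthonormal basis E is orthogonal, so E^T = E^{-1} and the two sets of components coincide.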

that's what the definitions i read seem to imply.

Thanks!
 
  • #2
enfield said:
Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, the vector v multiplied by the *inverse* of that same matrix?

thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is the normalized mutually orthogonal one, for example, because then the matrix is just the identity one, which is its own inverse.

that's what the definitions i read seem to imply.

Thanks!

Hey enfield.

At first I didn't get what you were getting at but I think I do now.

The way I interpret what you said is to think about the metric tensor and its conjugate. The metric tensor written as g_{ij} vs g^{ij} represents the metric going from A to B vs from B to A, and in this context, if we look at covariant vs contravariant components, then under this interpretation all we are doing is going from A to B in one case and from B to A in the other.
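To make the "from A to B vs from B to A" picture concrete (standard index conventions, my own addition): the two forms of the metric are matrix inverses of each other,

[tex] g^{ik} g_{kj} = \delta^i{}_j , [/tex]

so whatever one of them does to a set of components, the other one undoes.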

But then I tried to think about the situation where you have a mixed tensor. In the metric situation it makes sense, like you have alluded to, with matrices, but for a mixed tensor my guess is that, since all tensors are multilinear objects, they should have a matrix expansion: find the tensor product decomposition and then get the matrix form of the multilinear representation, which, like other matrices, has a basis and, if invertible, an inverse map.
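A small illustration of that "matrix form" idea (my own notation): once a basis is chosen, a mixed (1,1) tensor T^i{}_j acts on contravariant components as an ordinary matrix,

[tex] w^i = T^i{}_j \, v^j , [/tex]

so it literally is a linear map, and if that matrix is non-singular it has an inverse map like any other.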

I'd be interested to hear your reply on this if you don't mind.
 
  • #3
Thanks for the thoughtful response!

Okay, I hadn't read about tensors yet when I posted this - just the definitions of contra- and co-variant vector components. Now I have read a bit, though.

As I understand them, the contra- and co-variant components of a vector can be defined in many ways (as the result of multiplying the vector by any invertible matrix of the right dimensions). But with tensors, the matrix used to define the components is exclusively the Jacobian.

This PDF has an accessible section near the start on "Jacobian matrices and metric tensors": http://medlem.spray.se/gorgelo/tensors.pdf.

So using the Jacobian to define them you might have [tex] A_v = Jv [/tex] and [tex] B_v=J^{-1}v [/tex] (A would be the co-variant components of v here, I think, because they co-vary with the Jacobian), and then the metric tensor could be interpreted as J^2 or the inverse of that, depending on which way you were relating the components, which is obviously the matrix that would relate A and B to each other.
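One note on the bookkeeping (my own addition, under the assumption that J is invertible and its columns are the new basis vectors): the covariant components actually pick up a transpose, so the metric comes out as J^T J rather than literally J^2; the two agree when J is symmetric, and both collapse to the identity for an orthonormal basis:

[tex] B_v = J^{-1} v \ \ \text{(contravariant)}, \qquad A_v = J^{T} v \ \ \text{(covariant)}, [/tex]

[tex] A_v = J^{T} J \, B_v = g \, B_v , \qquad g = J^{T} J . [/tex]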
 
  • #4
Yes, if your coordinate system has perpendicular straight lines as coordinate lines, "covariant" and "contravariant" components are the same.

Another way of looking at it is this. To identify a point in a coordinate system, we can drop perpendiculars from the point to each of the coordinate axes and measure the distance from the foot of each perpendicular to the origin. Another way is to draw lines through the point parallel to the coordinate axes and measure the distance from the origin to where each line meets the other axis. As long as the coordinate axes are perpendicular straight lines, those are the same. But otherwise, the first (perpendicular projection) gives the "covariant components" and the second (parallel projection) gives the "contravariant components".
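In formulas, for unit basis vectors e_i along the axes (the usual convention for this picture), the perpendicular-drop construction is just the dot product,

[tex] v_i = \mathbf{v} \cdot \mathbf{e}_i = |\mathbf{v}| \cos\theta_i , [/tex]

while the parallel-projection construction solves

[tex] \mathbf{v} = v^1 \mathbf{e}_1 + v^2 \mathbf{e}_2 [/tex]

for the coefficients v^1 and v^2; when the axes are perpendicular the two constructions give the same numbers.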
 
  • #5


Yes, your interpretation is correct. The covariant components of a vector can be thought of as the vector multiplied by a matrix of linearly independent vectors that span the vector space. Similarly, the contravariant components can be thought of as the vector multiplied by the inverse of that same matrix.

This interpretation is often used in tensor calculus, where the transformation between covariant and contravariant components depends on the choice of basis vectors. In the case of a normalized mutually orthogonal basis, the matrix of basis vectors is indeed the identity matrix, making the transformation between covariant and contravariant components trivial.

It is important to note that this interpretation only holds for basis vectors that are linearly independent and span the vector space. In other cases, the transformation between covariant and contravariant components may be more complex.
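A minimal numerical sketch of this relationship in Python/NumPy (my own illustration, not from the thread; the basis vectors are stored as the columns of E):

[code]
import numpy as np

# A non-orthogonal 2D basis, with the basis vectors as the columns of E.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([2.0, 3.0])          # the vector in ordinary Cartesian components

# Contravariant components: the coefficients of v in the basis, v = E @ v_contra.
v_contra = np.linalg.solve(E, v)  # same as E^{-1} v

# Covariant components: dot products with the basis vectors, v_i = v . e_i.
v_cov = E.T @ v                   # same as E^T v

# The metric (Gram matrix) g = E^T E relates the two: v_cov = g @ v_contra.
g = E.T @ E
assert np.allclose(v_cov, g @ v_contra)

print(v_contra)  # [-1.  3.]
print(v_cov)     # [2.  5.]
[/code]

For an orthonormal basis E is an orthogonal matrix, E^T equals E^{-1}, and the two arrays come out identical.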

I hope this helps clarify the concept of covariant and contravariant components for you. Let me know if you have any further questions.
 

1. What is the difference between contravariant and covariant components?

Contravariant and covariant components are two types of mathematical objects used to represent vector quantities in different coordinate systems. Contravariant components are represented by upper indices (e.g. x^i) and transform according to the inverse of the coordinate transformation, while covariant components are represented by lower indices (e.g. x_i) and transform according to the coordinate transformation.
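As a concrete statement of those transformation rules (standard notation, using the Jacobian of a coordinate change from x to x'):

[tex] A'^i = \frac{\partial x'^i}{\partial x^j} A^j \quad \text{(contravariant)}, \qquad A'_i = \frac{\partial x^j}{\partial x'^i} A_j \quad \text{(covariant)}. [/tex]

The two Jacobian matrices appearing here are inverses of each other, which is where the "inverse of the coordinate transformation" wording comes from.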

2. How are contravariant and covariant components related?

Contravariant and covariant components are related through the metric tensor, which is a mathematical object that describes the relationship between the two types of components. The metric tensor is used to raise and lower indices, allowing for the conversion between contravariant and covariant components.
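In index notation (standard conventions), raising and lowering looks like

[tex] v_i = g_{ij} v^j , \qquad v^i = g^{ij} v_j , [/tex]

with g^{ij} denoting the matrix inverse of g_{ij}.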

3. Why is it important to understand contravariant and covariant components?

Understanding contravariant and covariant components is important in many areas of physics and engineering, as they are used to describe vector quantities in different coordinate systems. This allows for the consistent representation and manipulation of vector quantities in different contexts, making calculations and problem-solving more efficient and accurate.

4. How do contravariant and covariant components relate to tensors?

Contravariant and covariant components are the building blocks of tensors, which are multi-dimensional mathematical objects used to represent physical quantities that transform in a specific way under coordinate transformations. Tensors can be thought of as generalizations of vectors and matrices, with contravariant and covariant components representing different directions of the tensor.

5. Can you provide an example of contravariant and covariant components in action?

An example of contravariant and covariant components in action is the stress tensor in mechanics. The stress tensor is a second-order tensor that describes the distribution of forces and stresses within a material. Contracting its contravariant components with the covariant components of a surface's unit normal gives the force per unit area (traction) acting across that surface: the tensor's index gives the direction of the force, and the normal's index gives the orientation of the surface on which it acts, as in the formula below.
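As a concrete formula (Cauchy's relation, in standard index notation): the traction t, the force per unit area acting across a surface with unit normal n, is

[tex] t^i = \sigma^{ij} n_j , [/tex]

so the normal enters with a lower (covariant) index and the resulting force components carry an upper (contravariant) index.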
