Difference between Co-variant and Contra-variant vectors?

In summary: contravariant vectors are written with an upper index and covariant vectors with a lower index, and the difference between them shows up in how their components transform under a change of coordinates. Contravariant components change in the opposite way to the coordinates, while covariant components change in the same way as the coordinates; the components of a linear form, for example, transform covariantly, like the basis vectors. Which form is used in a given physical situation is largely a matter of convenience, especially when a metric is available to convert between the two.
  • #1
sderamus
I understand that Contravariant vectors have an upper index and co-variant vectors use a lower index. But why is one preferred over another in a specific physical situation?

I am just struggling with which form is preferred, and why one would bother trying to raise or lower the index in a specific situation.

Thanks!
 
  • #2
In short, one is a tangent vector and the other a gradient. The difference is in describing how far a curve moves under an infinitesimal change of the curve parameters versus considering how much a function changes under an infinitesimal change in the coordinates.

Now, as long as you have a metric and can raise and lower indices at will, the distinction is not that important in practice (though it is still there), since the metric gives you a one-to-one map between the two. If you do not have a metric, the distinction is vital.
 
  • #3
Intuitively, the difference between a contravariant and a covariant vector is how their components transform when you change coordinates. The components of a contravariant vector shrink when you stretch the coordinates and stretch when you shrink the coordinates; the components of a covariant vector stretch or shrink with the coordinates.

For example, velocity is a contravariant vector (strictly speaking, a rank-one contravariant tensor). If your velocity vector is ##\vec{A}## and its x-component is ##A^1##, then when you apply the trivial coordinate transform ##x'=.001x;\ y'=y;\ z'=z;\ t'=t##, the first component of the velocity written using the primed coordinates will be ##A^{1'}=.001A^1##. You've stretched the coordinate system so that a unit step along the x'-axis is 1000 times as long as it used to be, and the contravariant component of the vector shrank by a factor of 1000.

Conversely, a gradient such as the change in temperature as you move from one point to another is a covariant vector (strictly speaking, a rank-one covariant tensor) and transforms the other way. Under the same coordinate transformation the temperature changes more per unit change in ##x'## than it would per unit change in ##x##, so we have ##A_{1'}=1000A_1##; the covariant component grew along with the size of one step along the x'-axis.

You can get both of these results by applying the tensor transformation rules. That's not necessary for this particular transformation because it's trivial (it's also silly - Cartesian axes with a perverse non-isotropic distance scale), but for more complex coordinate transformations it will be essential.
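For anyone who wants to see this worked out numerically, here is a minimal Python/NumPy sketch (the velocity and gradient values below are made up purely for illustration). It applies the Jacobian of the transform ##x'=.001x;\ y'=y;\ z'=z## to a contravariant and a covariant set of components and checks that their contraction is unchanged:

```python
# A minimal numerical sketch of the example above (component values are made up).
# Coordinates transform as x' = 0.001 x, y' = y, z' = z.
import numpy as np

# Jacobian of the coordinate change, J[i, j] = d x'^i / d x^j
J = np.diag([0.001, 1.0, 1.0])
J_inv = np.linalg.inv(J)                   # d x^j / d x'^i

# Contravariant components (e.g. a velocity) transform with J:
#   A'^i = (d x'^i / d x^j) A^j
A_up = np.array([2.0, 3.0, 5.0])           # hypothetical velocity components
A_up_primed = J @ A_up                     # -> [0.002, 3.0, 5.0]: x-component shrinks by 1000

# Covariant components (e.g. a temperature gradient) transform with the inverse:
#   B'_i = (d x^j / d x'^i) B_j
B_down = np.array([4.0, 1.0, 0.5])         # hypothetical gradient components
B_down_primed = J_inv.T @ B_down           # -> [4000.0, 1.0, 0.5]: x-component grows by 1000

# The contraction B_i A^i is a scalar and is unchanged by the transformation.
print(np.dot(B_down, A_up), np.dot(B_down_primed, A_up_primed))
```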

As for which one is preferred: In the problems that you'll encounter in relativity, you'll have a metric and as Orodruin says, if you have a metric you can always map a covariant tensor to a contravariant one and vice versa. Thus, there's no particular reason to prefer one form over the other in these problems; you work with what you're given and if it turns out to be inconvenient at some point in the calculation you raise and lower indices as necessary to make life easy again.
 
  • #4
sderamus said:
I understand that Contravariant vectors have an upper index and co-variant vectors use a lower index. But why is one preferred over another in a specific physical situation?

I am just struggling with which form is preferred, and why one would bother trying to raise or lower the index in a specific situation.

Thanks!

I think it's simply a result of selecting the form that obtains the desired result. Note, however, that the Ricci tensor, the metric, and the Riemann tensor, as they normally appear alongside the stress-energy tensor in the field equations, all carry lower indices.

One can write the action of the electromagnetic field either in terms of tensors with mixed upper and lower indices, or strictly in terms of lower-index tensors. The mixed formulation is the one with popular exposure; the strictly lower-index formulation is not well known. The two are equivalent, except that under an odd (orientation-reversing) transformation of the spacetime manifold the lower-index formulation changes sign, whereas the mixed formulation is invariant.

I tend to think, speculatively, that purely gravitational physics involves upper indices in the measured physical quantities, whereas quantities involving the strong, weak, and electromagnetic forces are more simply described by covariant tensors.

This is probably too much jargon, right?
 
  • #5
First of all, you are talking about tensor components, not tensors; the latter are invariant objects under basis transformations. Your question is best answered for a plain vector space without additional structure like a scalar product (or an indefinite fundamental bilinear form, as in the case of Minkowski space).

Then you have vectors, which you can add and multiply with scalars. A basis ##\vec{b}_j##, ##j \in \{1,2,\ldots,d \}##, where ##d## is the dimension of the vector space, is a set of vectors for which any given vector ##\vec{x}## can be expressed uniquely as a linear combination of these basis vectors:
$$\vec{x}=\vec{b}_j x^j,$$
where the summation over equal pairs of indices (one subscript and one superscript) is understood (Einstein summation convention).

The next simple objects you can define on the vector space are linear forms, i.e., linear mappings from the vector space to the scalars. Obviously such a linear form ##L## is already determined if you know its values on the basis:
$$L_j=L(\vec{b}_j).$$
Then you have
$$L(\vec{x})=L(\vec{b}_j x^j)=x^j L(\vec{b}_j)=L_j x^j.$$
The linear forms themselves also form a vector space in a natural way. You add two linear forms or multiply a linear form by a scalar pointwise, i.e., for each vector ##\vec{x}## one defines
$$(L+M)(\vec{x})=L(\vec{x})+M(\vec{x}), \quad (\lambda L)(\vec{x})=\lambda L(\vec{x}).$$
Then a basis of linear forms is given by the dual basis of the vector-space basis ##\vec{b}_j##, which we denote by ##\vec{b}^k##. It is defined by its values on the basis vectors:
$$\vec{b}^k(\vec{b}_j)=\delta_j^k=\begin{cases} 1 & \text{for} \quad j=k, \\ 0 & \text{for} \quad j \neq k. \end{cases}$$
Then each linear form ##L## can be written as
$$L=L_j \vec{b}^j,$$
because then indeed
$$L(\vec{b}_k)=L_j \vec{b}^j(\vec{b}_k)=L_j \delta_k^j=L_k.$$
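As a concrete numerical illustration (the two-dimensional basis below is an arbitrary example, not anything specific from the discussion), the dual basis can be read off as the rows of the inverse of the matrix whose columns are the basis vectors, which makes ##\vec{b}^k(\vec{b}_j)=\delta^k_j## easy to check:

```python
# A small numerical sketch of the dual basis; the basis chosen here is arbitrary.
# Columns of B are the basis vectors b_1, b_2 expressed in some reference coordinates.
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])         # b_1 = (2, 0), b_2 = (1, 3)

# The dual basis vectors b^1, b^2 are the rows of B^{-1}: acting by an ordinary
# dot product, they satisfy b^k(b_j) = delta^k_j.
B_dual = np.linalg.inv(B)
print(B_dual @ B)                   # identity matrix, i.e. b^k(b_j) = delta^k_j

# Components of a vector x in this basis are x^j = b^j(x):
x = np.array([5.0, 6.0])
x_components = B_dual @ x
print(B @ x_components, x)          # reconstructs x = x^j b_j

# A linear form L is fixed by its values L_j = L(b_j); then L(x) = L_j x^j.
L_values = np.array([4.0, -1.0])    # hypothetical values L(b_1), L(b_2)
print(np.dot(L_values, x_components))
```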
Now we choose another basis for the vector space. Then there is a matrix ##{T^j}_k##, which is invertible with inverse ##{U^j}_k##, such that
$$\vec{b}_j={T^k}_j \vec{b}_k', \quad \vec{b}_k'={U^j}_k \vec{b}_j.$$
For the vector components we get
$$\vec{x}=x^j \vec{b}_j=x'^k \vec{b}_k'=x^j {T^k}_j \vec{b}_k'.$$
Since the decomposition of ##\vec{x}## in terms of the basis ##\vec{b}_k'## is unique, we have
$$x'^k={T^k}_j x^j.$$
One says the vector components transform contragrediently to the basis vectors, or equivalently: the basis vectors transform covariantly and the vector components contravariantly.

Now we can also figure out the transformation properties of the corresponding dual bases of the vector space of linear forms (dual space). By definition we have
$$L_k'=L(\vec{b}_k')=L({U^j}_k \vec{b}_j)={U^j}_k L(\vec{b}_j)=L_j {U^j}_k.$$
This implies
$$L=L_k' \vec{b}'^k=L_j {U^j}_k \vec{b}'^k=L_j \vec{b}^j \; \Rightarrow \; \vec{b}^j={U^j}_k \vec{b}'^k \; \Rightarrow \; \vec{b}'^k={T^k}_j \vec{b}^j.$$
The latter follows from ##\hat{U}^{-1}=\hat{T}##. The dual-basis vectors thus transform contragrediently to the basis vectors, i.e., in the same way as the contravariant vector components.
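A quick numerical check of these transformation rules (the basis and the invertible matrix ##\hat{T}## below are arbitrary choices made only for illustration):

```python
# Numerical check of the transformation laws derived above; B and T are arbitrary.
import numpy as np

B = np.array([[2.0, 1.0],          # columns are the old basis vectors b_1, b_2
              [0.0, 3.0]])
T = np.array([[1.0, 2.0],          # an arbitrary invertible matrix T^k_j
              [1.0, 3.0]])
U = np.linalg.inv(T)               # U = T^{-1}

B_new = B @ U                      # basis transforms with U:  b'_k = U^j_k b_j

x_old = np.array([5.0, 6.0])       # contravariant components x^j in the old basis
x_new = T @ x_old                  # components transform with T:  x'^k = T^k_j x^j
print(B @ x_old, B_new @ x_new)    # same geometric vector in both bases

L_old = np.array([4.0, -1.0])      # covariant components L_j = L(b_j)
L_new = U.T @ L_old                # they transform with U:  L'_k = U^j_k L_j
print(L_old @ x_old, L_new @ x_new)   # L(x) = L_j x^j is basis independent
```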

There's no basis-independent isomorphism between a vector space and its dual space, but they are isomorphic, because you can define a one-to-one linear map between the two spaces using a basis. You can define a coordinate-independent map for a vector space which comes equipped with a non-degenerate bilinear form (like the scalar product of a Euclidean vector space or the Minkowski pseudo-scalar product of Minkowski space). This "fundamental bilinear form" we denote by ##\vec{x} \cdot \vec{y}## for any two vectors ##\vec{x}## and ##\vec{y}##. Again it is completely determined for a given basis by the values
$$\eta_{jk}=\vec{b}_j \cdot \vec{b}_k.$$
That the fundamental form is non-degenerate means that the matrix ##\hat{\eta}=(\eta_{jk})## has an inverse, which we call ##\hat{\eta}^{-1}=(\eta^{jk})##.

Now given a linear form ##L## we have
$$L_j=L(\vec{b}_j).$$
Now we can write for any ##\vec{x}##
$$L(\vec{x})=L_j x^j.$$
Now we define the vector
$$\vec{L}=L^j \vec{b}_j=\eta^{jk} L_k \vec{b}_j.$$
Then we have
$$\vec{L} \cdot \vec{x}=\eta_{jk} L^j x^k=L_k x^k=L(\vec{x}),$$
and this defines a coordinate-independent one-to-one mapping from the dual space to the vector space. Now it makes sense to simply call ##L_j## the covariant and ##L^k## the contravariant components of the vector ##\vec{L}##. You can convert from one to the other with the help of the (pseudo-)metric components:
$$L_j=\eta_{jk} L^k, \quad L^k=\eta^{kj} L_j.$$
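As a small numerical illustration of this last step (a sketch assuming the Minkowski metric with signature (+,-,-,-) and made-up component values):

```python
# Sketch of raising and lowering indices with the metric; uses the Minkowski
# metric diag(1, -1, -1, -1) and made-up component values.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])     # eta_{jk}
eta_inv = np.linalg.inv(eta)               # eta^{jk} (equals eta for this metric)

L_up = np.array([2.0, 1.0, 0.0, -3.0])     # contravariant components L^k
L_down = eta @ L_up                        # lower the index: L_j = eta_{jk} L^k
print(eta_inv @ L_down)                    # raising it again recovers L^k = eta^{kj} L_j

# The pairing of the linear form with a vector equals the (pseudo)scalar product:
x_up = np.array([1.0, 4.0, 2.0, 0.5])
print(np.dot(L_down, x_up), L_up @ eta @ x_up)   # L_j x^j  ==  L . x
```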
 
  • #6
A coordinate transformation acts in the inverse way on a contravariant index compared with a covariant index of a vector or tensor.
 

FAQ: Difference between Co-variant and Contra-variant vectors?

What is the difference between co-variant and contra-variant vectors?

Co-variant and contra-variant vectors are two types of vectors that are commonly used in mathematical and scientific fields. The main difference between these two types of vectors lies in how they transform under coordinate transformations.

How do co-variant and contra-variant vectors transform under coordinate transformations?

Covariant components transform in the same way as the basis vectors, stretching or shrinking with the coordinate axes, while contravariant components transform in the opposite way. This means that covariant components grow when the coordinate axes are stretched, while contravariant components shrink.

What are some examples of co-variant and contra-variant vectors?

An example of a contravariant vector is velocity: its components shrink when the coordinates are stretched. An example of a covariant vector is a gradient, such as the temperature gradient: its components stretch along with the coordinates.

Why is understanding the difference between co-variant and contra-variant vectors important?

Understanding the difference between these two types of vectors is crucial in many fields of science and mathematics, such as physics and differential geometry. It allows for a deeper understanding of how quantities behave and transform under coordinate transformations.

Can a vector be both co-variant and contra-variant?

Strictly speaking, covariance and contravariance refer to the components, not to the geometric vector itself: when a metric is available, the same vector can be described by either covariant or contravariant components, and the metric converts between them. A mixed tensor (a mathematical object that generalizes the concept of a vector) can carry both covariant and contravariant indices at once.
