Trying to understand covariant tensors

1. Oct 22, 2016

kosmonavt5

I am taking a course on GR and trying to understand tensor calculus. I think I understand contravariant tensors (the transformation of objects such as vectors from one frame to another), but I am having a hard time with covariant tensors.

The contravariant part makes sense: vector A is decomposed into components along e1 and e2. However, I do not understand what is meant by "The covariant components are obtained by projecting onto the normal lines to the coordinate hyperplanes." In which direction are these normal lines pointing? I'd appreciate it if anyone could help me visualize what is happening.

2. Oct 22, 2016

andrewkirk

The $k$th covariant component of a vector under a basis $B=\{\vec e_1,\dots,\vec e_n\}$ is its projection onto the 1D normal space of the hyperplane spanned by $B \setminus \{\vec e_k\}$.

In an orthonormal coordinate system (one in which all coordinate vectors are mutually perpendicular and have unit magnitude) that will just be the projection onto the 1D space spanned by $\vec e_k$. But that won't be the case otherwise.

To visualise, I find it helpful to consider the 2D case where the basis vectors are (1,0) and (1,1). Each covariant (dual) basis vector is perpendicular to one of those two: the first dual vector is perpendicular to (1,1), and the second to (1,0).
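As a concrete numerical check of this picture, here is a short NumPy sketch using the basis (1,0), (1,1) from above; the test vector and variable names are my own illustrative choices:

```python
import numpy as np

# Basis from the 2D example above: e1 = (1, 0), e2 = (1, 1), stored as columns.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# An arbitrary test vector, given in ordinary Cartesian components.
V = np.array([2.0, 3.0])

# Contravariant components: the coefficients in V = V^1 e1 + V^2 e2.
V_contra = np.linalg.solve(E, V)   # -> [-1., 3.]

# Covariant components: projections onto the basis vectors, V_k = V . e_k.
V_cov = E.T @ V                    # -> [2., 5.]

# The metric g_{kj} = e_k . e_j lowers the index: g @ V_contra equals V_cov.
g = E.T @ E
assert np.allclose(g @ V_contra, V_cov)
```

Note that V_cov differs from V_contra precisely because the basis is not orthonormal; with E equal to the identity the two sets of components coincide.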

3. Oct 23, 2016

Justintruth

You should look at the concept of a reciprocal basis.

If you have a basis that is neither orthogonal nor made of unit vectors, you can construct a second basis, called a reciprocal basis, in which each new vector is orthogonal to every original basis vector except its corresponding one, and its dot product with that corresponding vector equals 1.

Then see how you would have to transform the original basis and the reciprocal basis so that the two remain reciprocal to each other. From that you can see how co- and contravariant vectors are motivated.

In the simplest case, imagine a scalar, say 4. Its reciprocal is 1/4. Now I transform the 4 by multiplying by 2 and get 8. But if I use the same transformation on the reciprocal I get 1/2, which is not the reciprocal of 8. If I divide instead of multiply, I do get the reciprocal. So two different transformations are required, and these end up defining co- and contravariance.

For an orthonormal basis the Pythagorean theorem yields ds^2 = dx^2 + dy^2. For a non-orthonormal basis that is not true; instead ds^2 = dx*dx' + dy*dy', where dx', dy' are the components with respect to the reciprocal basis of dx, dy. By doing all this you can transform two vectors from one system to another and still combine them to produce scalars.
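That ds^2 statement can be checked numerically. In this sketch (the basis and vector are arbitrary choices of mine), contracting the components along the original basis with the projections onto it recovers the ordinary squared length:

```python
import numpy as np

# A non-orthonormal 2D basis (columns of E); the numbers are illustrative.
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# Reciprocal basis vectors are the rows of E^{-1}: they satisfy e^i . e_j = delta^i_j.
E_recip = np.linalg.inv(E)
assert np.allclose(E_recip @ E, np.eye(2))

V = np.array([1.0, 2.0])           # a vector in Cartesian components
V_contra = np.linalg.solve(E, V)   # components along the original basis
V_cov = E.T @ V                    # projections onto the original basis vectors

# Contracting the two kinds of components recovers the ordinary squared length,
# the analogue of ds^2 = dx*dx' + dy*dy' above.
assert np.isclose(V_contra @ V_cov, V @ V)
```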

I'm no expert, just a hobbyist, but that's the best I can offer.

4. Oct 23, 2016

Justintruth

One other thing. I am pretty sure that co- and contravariance are properties of the transformation. Calling a vector covariant just says how you plan to transform it. If you declare all the contravariant vectors in a basis covariant, and all the covariant vectors contravariant, you get the same scalars.

5. Oct 23, 2016

Orodruin

Staff Emeritus
While this view works well in Euclidean (or Minkowski) space, it no longer does in a curved space. Generally, without a metric there is no way of taking an inner product, and in order to map a vector to a scalar linearly you need an element of the dual vector space. The metric provides a linear map from the tangent vectors to their dual space, and therefore an inner product (after noting that this map has the right properties, which follow from the properties of the metric).

Given a coordinate basis for the tangent vector space, a tangent vector's components transform contravariantly (because the basis transforms covariantly). Correspondingly, the dual vector components must transform covariantly in order for the contraction with a tangent vector to be a scalar, and the dual coordinate basis transforms contravariantly.
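A minimal NumPy sketch of that statement (the change-of-basis matrix and components are random illustrative data of my choosing): the basis transforms with A, vector components with the inverse of A, dual components with the transpose of A, and both the geometric vector and the scalar contraction come out unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

E = np.eye(3)                    # an initial basis, columns are basis vectors
v = rng.standard_normal(3)       # contravariant components of a fixed vector
w = rng.standard_normal(3)       # covariant components of a fixed dual vector

A = rng.standard_normal((3, 3))  # change-of-basis matrix (invertible for a generic draw)

E_new = E @ A                    # basis vectors transform covariantly (with A)
v_new = np.linalg.solve(A, v)    # vector components transform contravariantly (A^-1)
w_new = A.T @ w                  # dual components transform covariantly

# The geometric vector itself is unchanged by the change of basis...
assert np.allclose(E_new @ v_new, E @ v)
# ...and the contraction of dual vector with vector is a scalar invariant.
assert np.isclose(w_new @ v_new, w @ v)
```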

This read very strangely to me the first time. To clarify: co- and contravariance are properties of how the components transform under coordinate transformations; they are not properties of the coordinate transformation itself.

6. Oct 23, 2016

vanhees71

Let's look at vectors first. By definition contravariant components $V^{\mu}$ of a vector transform as the differentials of the general coordinates $\mathrm{d} q^{\mu}$, i.e., changing to new coordinates $\bar{q}^{\mu}$ you have
$$\mathrm{d} \bar{q}^{\mu} = \frac{\partial \bar{q}^{\mu}}{\partial q^{\nu}} \mathrm{d} q^{\nu} ={T^{\mu}}_{\nu} \mathrm{d} q^{\nu}.$$
Covariant components by definition transform like the gradient of a scalar field, i.e., with $V_{\mu}=\partial_{\mu} \phi$ you get
$$\bar{V}_{\mu}=\frac{\partial \phi}{\partial \bar{q}^{\mu}} = \frac{\partial \phi}{\partial q^{\nu}} \frac{\partial q^{\nu}}{\partial \bar{q}^{\mu}}= \frac{\partial q^{\nu}}{\partial \bar{q}^{\mu}} V_{\nu}=:V_{\nu} {U^{\nu}}_{\mu}.$$
Now we have on the one hand
$$\frac{\partial \bar{q}^{\mu}}{\partial \bar{q}^{\nu}}=\delta_{\nu}^{\mu}$$
and on the other hand
$$\frac{\partial \bar{q}^{\mu}}{\partial \bar{q}^{\nu}}=\frac{\partial \bar{q}^{\mu}}{\partial q^{\rho}} \frac{\partial q^{\rho}}{\partial \bar{q}^{\nu}} = {T^{\mu}}_{\rho} {U^{\rho}}_{\nu},$$
and thus
$${T^{\mu}}_{\rho} {U^{\rho}}_{\nu} =\delta_{\nu}^{\mu},$$
i.e., $\hat{U}=\hat{T}^{-1}$, which means that the covariant components transform contragrediently to the contravariant components.
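The relation $\hat{U}=\hat{T}^{-1}$ can be checked for a concrete coordinate change; the sketch below uses a Cartesian-to-polar change of coordinates at an arbitrary sample point (my choice of example, not from the post):

```python
import numpy as np

# Sample point in Cartesian coordinates q = (x, y); new coordinates qbar = (r, theta).
x, y = 1.2, 0.7
r = np.hypot(x, y)
theta = np.arctan2(y, x)

# T^mu_nu = d(qbar^mu)/d(q^nu): Jacobian of (r, theta) with respect to (x, y).
T = np.array([[ x / r,     y / r   ],
              [-y / r**2,  x / r**2]])

# U^nu_mu = d(q^nu)/d(qbar^mu): Jacobian of (x, y) with respect to (r, theta).
U = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])

# The chain rule forces T @ U to be the identity, i.e. U = T^{-1}.
assert np.allclose(T @ U, np.eye(2))
```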

For tensor components of higher rank, each index transforms according to the rule for co- or contravariant vector components, depending on whether it is a lower or an upper index.
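As a sketch of the higher-rank rule (the tensors and the transformation matrix here are random illustrative data), each upper index picks up a factor of $\hat{T}$ and each lower index a factor of $\hat{U}=\hat{T}^{-1}$, so a full contraction stays invariant:

```python
import numpy as np

rng = np.random.default_rng(1)

T = rng.standard_normal((3, 3))  # transformation matrix T^mu_nu (invertible for a generic draw)
U = np.linalg.inv(T)             # its inverse, U^nu_mu

A = rng.standard_normal((3, 3))  # a rank-2 contravariant tensor A^{rho sigma}
B = rng.standard_normal((3, 3))  # a rank-2 covariant tensor B_{rho sigma}

# One factor of T per upper index, one factor of U per lower index:
A_bar = np.einsum('mr,ns,rs->mn', T, T, A)  # Abar^{mu nu} = T^mu_rho T^nu_sigma A^{rho sigma}
B_bar = np.einsum('rm,sn,rs->mn', U, U, B)  # Bbar_{mu nu} = U^rho_mu U^sigma_nu B_{rho sigma}

# The full contraction A^{mu nu} B_{mu nu} is a scalar and hence invariant.
assert np.isclose(np.einsum('mn,mn->', A_bar, B_bar),
                  np.einsum('mn,mn->', A, B))
```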