Trying to understand covariant tensors


Discussion Overview

The discussion revolves around understanding covariant tensors in the context of tensor calculus, particularly as it relates to general relativity (GR). Participants explore the definitions, transformations, and visualizations of covariant and contravariant components, as well as their implications in different coordinate systems.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant expresses difficulty in visualizing covariant tensors and seeks clarification on the projection of components onto normal lines to coordinate hyperplanes.
  • Another participant explains that the ##k##th covariant component is the projection onto the normal space of the hyperplane spanned by the basis excluding the ##k##th vector, noting that this is straightforward in orthonormal systems.
  • A different participant introduces the concept of a reciprocal basis, explaining how it relates to covariant and contravariant vectors, and discusses the transformation properties of scalars and vectors.
  • Some participants assert that co- and contravariance are properties of how components transform under coordinate transformations, rather than properties of the transformations themselves.
  • One participant provides a mathematical definition of the transformation of contravariant and covariant components, illustrating the relationship between them through tensor transformation rules.

Areas of Agreement / Disagreement

Participants generally agree on the definitions and transformation properties of covariant and contravariant components, but there is ongoing exploration and clarification of these concepts. Some points remain contested, particularly regarding the implications of these transformations in different coordinate systems.

Contextual Notes

There are references to specific mathematical properties and transformations that may depend on the choice of basis, including orthonormal and non-orthonormal systems. The discussion also highlights the complexity introduced in curved spaces where metrics play a crucial role.

member 606890
I am taking a course on GR and trying to understand Tensor calculus. I think I understand contravariant tensor (transformation of objects such as a vector from one frame to another) but I am having a hard time with covariant tensors.

I looked into the Wikipedia page (https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors) and came across this figure: (https://upload.wikimedia.org/wikipedia/commons/b/b2/Basis.svg)


The contravariant part makes sense. Vector A is decomposed into components along e1 and e2. However, I do not understand what is meant by "The covariant components are obtained by projecting onto the normal lines to the coordinate hyperplanes." In which direction are these normal lines pointing. I'd appreciate if anyone can help me visualize what is happening.
 
The ##k##th covariant component of a vector under a basis ##B=\{\vec e_1,...,\vec e_n\}## is its projection onto the 1D normal space of the hyperplane spanned by ##B - \{\vec e_k\}##. In an orthonormal coordinate system (one in which all coordinate vectors are mutually perpendicular and have unit magnitude) that will just be the projection onto the 1D space spanned by ##\vec e_k##, but that won't be the case otherwise.

To visualise, I find it helpful to consider the 2D case where the basis vectors are (1,0) and (1,1). The covariant (reciprocal) basis vectors are perpendicular to those two: each one is normal to the other original basis vector.
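To make that 2D example concrete, here is a minimal numerical sketch (my own illustration, not from the thread; it assumes numpy and the basis ##\vec e_1 = (1,0)##, ##\vec e_2 = (1,1)## mentioned above). It expands a vector in the basis to get the contravariant components, takes dot products with the basis vectors for the covariant components, and checks that the contravariant components are the projections onto the reciprocal (normal) directions.

```python
import numpy as np

# Basis from the example above: e1 = (1, 0), e2 = (1, 1)
e1, e2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
E = np.column_stack([e1, e2])       # columns are the basis vectors
g = E.T @ E                         # metric g_ij = e_i . e_j

A = np.array([2.0, 1.0])            # an arbitrary vector, written in Cartesian components

A_contra = np.linalg.solve(E, A)    # contravariant components: A = A^1 e1 + A^2 e2
A_co = np.array([A @ e1, A @ e2])   # covariant components: A_i = A . e_i

# Reciprocal (dual) basis: e^k is normal to the hyperplane spanned by the other basis vectors
E_dual = np.linalg.inv(E).T         # columns are e^1 = (1, -1) and e^2 = (0, 1)

print(A_contra, E_dual.T @ A)       # contravariant components = projections onto the dual directions
print(A_co, g @ A_contra)           # covariant components = metric acting on the contravariant ones
```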
 
You should look at the concept of a reciprocal basis.

If you have a basis that is neither orthogonal nor made of unit vectors, you can construct a second basis, called a reciprocal basis, in which each new vector is orthogonal to all of the original basis vectors except its counterpart and has unit dot product with that counterpart (so, for an orthogonal basis, its magnitude is the inverse of the magnitude of the original vector).

Then take some reciprocal basis and see how you would have to transform the original basis as well as the reciprocal basis for the two to remain reciprocal to each other. From that you can see how co- and contravariant vectors are motivated.

In the simplest case imagine a scalar, say 4. Its reciprocal is 1/4. Now I transform the 4 by multiplying by 2 and get 8. But if I use the same transformation on the reciprocal I get 1/2, which is not the reciprocal of 8. If I divide instead of multiply, I do get the reciprocal. So there are two transformations required, and these end up defining co- and contravariance.

For an orthonormal basis the Pythagorean theorem yields ##ds^2 = dx^2 + dy^2##. For a non-orthonormal basis that is not true; instead ##ds^2 = dx\,dx' + dy\,dy'##, where ##dx', dy'## are the components with respect to the reciprocal basis of ##dx, dy##. So by doing all this you are able to transform two vectors in one system to two in another and still be able to use the vectors to produce scalars.
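A rough numerical check of that last point (my own sketch, assuming numpy and an arbitrarily chosen non-orthonormal basis): the squared length of a displacement comes out right when the contravariant components ##dx, dy## are paired with the covariant ones ##dx', dy'## built from the reciprocal basis.

```python
import numpy as np

# A basis that is neither orthogonal nor made of unit vectors
e1, e2 = np.array([2.0, 0.0]), np.array([1.0, 1.0])
E = np.column_stack([e1, e2])
E_dual = np.linalg.inv(E).T              # reciprocal basis e^1, e^2

ds_vec = np.array([0.3, -0.7])           # a displacement, in Cartesian components
d_contra = E_dual.T @ ds_vec             # dx, dy   (contravariant components)
d_co = E.T @ ds_vec                      # dx', dy' (covariant components)

print(ds_vec @ ds_vec)                   # ds^2 computed directly: 0.58
print(d_contra @ d_co)                   # ds^2 = dx*dx' + dy*dy': also 0.58
```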

I'm no expert, just a hobbyist, but that's the best I can offer.
 
kosmonavt5 said:
I am taking a course on GR and trying to understand Tensor calculus. I think I understand contravariant tensor (transformation of objects such as a vector from one frame to another) but I am having a hard time with covariant tensors.

I looked into the Wikipedia page (https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors) and came across this figure: (https://upload.wikimedia.org/wikipedia/commons/b/b2/Basis.svg)


The contravariant part makes sense. Vector A is decomposed into components along e1 and e2. However, I do not understand what is meant by "The covariant components are obtained by projecting onto the normal lines to the coordinate hyperplanes." In which direction are these normal lines pointing. I'd appreciate if anyone can help me visualize what is happening.

One other thing: I am pretty sure that co- and contravariance are properties of the transformation. Calling a vector covariant just says how you plan to transform it. If you declare all the contravariant vectors in a basis covariant and then declare all the covariant vectors contravariant, you get the same scalars.
 
Justintruth said:
You should look at the concept of a recirocal basis.

If you have one basis that is not orthogonal nor are its vectors unit vectors then you can construct a second basis called a reciprocal basis where each vector of the new basis is orthogonal to a corresponding vector in the original basis and has a magnitude equal to the inverse of the magnitude of original vector.

Then take some reciprocal basis and see how you would have to transform the original basis as well as the reciprocal basis for the two vectors to remain reciprocal to each other. From that you can see how co and contra variant vectors are motivated.
While this view works well in Euclidean (or Minkowski) space, it no longer does in a curved space. Generally, without a metric there is no way of taking an inner product and in order to map a vector to a scalar linearly you need an element of the dual vector space. The metric provides a linear map from the tangent vectors to their dual space and therefore an inner product (after noting it has the right properties due to the properties of the metric).

Given a coordinate basis for the tangent vector space, a tangent vector's components transform contravariantly (because the basis itself transforms covariantly). Correspondingly, the dual vector components must transform covariantly in order for the contraction with a tangent vector to be a scalar, and the dual coordinate basis transforms contravariantly.
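As a concrete (if merely curvilinear) illustration of the metric doing this job, here is a small sketch of lowering and raising an index with the plane-polar metric ##g = \mathrm{diag}(1, r^2)##; the numbers are arbitrary and the example is mine, not from the thread.

```python
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])        # metric g_{mu nu} in (r, phi) coordinates
g_inv = np.linalg.inv(g)        # inverse metric g^{mu nu}

V_up = np.array([0.5, 0.3])     # contravariant components V^mu of a tangent vector
V_down = g @ V_up               # the metric maps it to the dual space: V_mu = g_{mu nu} V^nu

print(V_down)                   # [0.5, 1.2]
print(g_inv @ V_down)           # raising the index recovers V^mu
print(V_up @ V_down)            # the contraction V^mu V_mu is the coordinate-independent length squared
```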

Justintruth said:
One other thin. I am pretty sure that co and contravariance are properties of the transformation.
This read very strangely to me the first time. To clarify, co- and contravariance are properties of how the components transform under coordinate transformations; they are not properties of the coordinate transformation itself.
 
Let's look at vectors first. By definition contravariant components ##V^{\mu}## of a vector transform as the differentials of the general coordinates ##\mathrm{d} q^{\mu}##, i.e., changing to new coordinates ##\bar{q}^{\mu}## you have
$$\mathrm{d} \bar{q}^{\mu} = \frac{\partial \bar{q}^{\mu}}{\partial q^{\nu}} \mathrm{d} q^{\nu} ={T^{\mu}}_{\nu} \mathrm{d} q^{\nu}.$$
Covariant components by definition transform like the gradient of a scalar field, i.e., with ##V_{\mu}=\partial_{\mu} \phi## you get
$$\bar{V}_{\mu}=\frac{\partial \phi}{\partial \bar{q}^{\mu}} = \frac{\partial \phi}{\partial q^{\nu}} \frac{\partial q^{\nu}}{\partial \bar{q}^{\mu}}= \frac{\partial q^{\nu}}{\partial \bar{q}^{\mu}} V_{\nu}=:V_{\nu} {U^{\nu}}_{\mu}.$$
Now we have on the one hand
$$\frac{\partial \bar{q}^{\mu}}{\partial \bar{q}^{\nu}}=\delta_{\nu}^{\mu}$$
and on the other hand
$$\frac{\partial \bar{q}^{\mu}}{\partial \bar{q}^{\nu}}=\frac{\partial \bar{q}^{\mu}}{\partial q^{\rho}} \frac{\partial q^{\rho}}{\partial \bar{q}^{\nu}} = {T^{\mu}}_{\rho} {U^{\rho}}_{\nu},$$
and thus
$$ {T^{\mu}}_{\rho} {U^{\rho}}_{\nu} =\delta_{\nu}^{\mu},$$
i.e., ##\hat{U}=\hat{T}^{-1}##, which means that the covariant components transform contragrediently to the contravariant components.

For components of tensors of higher rank, each index transforms according to the rule for co- or contravariant vector components, depending on whether it is a lower or an upper index.
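To see the contragredience numerically, here is a small sketch (my own, assuming numpy) using the Cartesian-to-polar change of coordinates: the Jacobian ##{T^{\mu}}_{\nu}## acting on contravariant components and the inverse Jacobian ##{U^{\nu}}_{\mu}## acting on covariant components multiply to the identity, and contractions stay invariant.

```python
import numpy as np

x, y = 1.0, 2.0
r, phi = np.hypot(x, y), np.arctan2(y, x)

# T^mu_nu = d(qbar^mu)/d(q^nu) for qbar = (r, phi), q = (x, y)
T = np.array([[ x / r,     y / r   ],
              [-y / r**2,  x / r**2]])

# U^nu_mu = d(q^nu)/d(qbar^mu)
U = np.array([[np.cos(phi), -r * np.sin(phi)],
              [np.sin(phi),  r * np.cos(phi)]])

print(T @ U)                        # identity matrix: U = T^{-1}

V_up = np.array([3.0, -1.0])        # contravariant components in Cartesian coordinates
W_down = np.array([0.5, 2.0])       # covariant components in Cartesian coordinates
print(V_up @ W_down)                # the contraction V^mu W_mu ...
print((T @ V_up) @ (W_down @ U))    # ... is unchanged when both are transformed to polar components
```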
 
