Covariant and Contravariant

  • #1
TimeRip496
I am reading notes on tensors and came across the following passage, which the notes did not elaborate on. As a result I don't quite understand it.

Here it is : " Note that we mark the covariant basis vectors with an upper index and the contravariant basis vectors with a lower index. This may sounds counter-intuitive ('did we not decide to use upper indices for contravariant vectors?') but this is precisely what we mean with the 'different meaning of the indices' here: this time they label the vectors and do not denote their components. "

I can follow everything except the last sentence, and I do not understand why. Can anyone enlighten me?
 
  • #2
Using the Einstein summation convention, we have [itex] \vec A=A^i e_i=A^1 e_1+A^2 e_2+A^3 e_3 [/itex]. As you know, the components of the vector ([itex]A^i[/itex]) are scalars (they are functions of the spatial coordinates). So what makes [itex] \vec A [/itex] a vector? It's the basis vectors [itex] e_i [/itex]. They really are vectors, but basis vectors, which means only a distinguished set of linearly independent vectors.
 
  • #3
Shyan said:
Using the Einstein summation convention, we have [itex] \vec A=A^i e_i=A^1 e_1+A^2 e_2+A^3 e_3 [/itex]. As you know, the components of the vector ([itex]A^i[/itex]) are scalars (they are functions of the spatial coordinates). So what makes [itex] \vec A [/itex] a vector? It's the basis vectors [itex] e_i [/itex]. They really are vectors, but basis vectors, which means only a distinguished set of linearly independent vectors.
Do you mean that the upper indices assigned to contravariant vectors and the lower indices assigned to contravariant basis vectors are just a means to distinguish them from each other? Sorry if I didn't follow you.
 
  • #4
TimeRip496 said:
Do you mean that the upper indices assigned to contravariant vectors and the lower indices assigned to contravariant basis vectors are just a means to distinguish them from each other? Sorry if I didn't follow you.
Yeah, it's basically a convention, so you shouldn't look for reasons here.
But what you should understand is that [itex] A^i [/itex] is not a vector. It's only a general term referring to one of the components of the vector [itex] \vec A [/itex], and so it's a scalar. It's just that people work with the components of vectors.
 
  • #5
Shyan said:
Yeah, it's basically a convention, so you shouldn't look for reasons here.
But what you should understand is that [itex] A^i [/itex] is not a vector. It's only a general term referring to one of the components of the vector [itex] \vec A [/itex], and so it's a scalar. It's just that people work with the components of vectors.
Ok thanks a lot!
 
  • #6
TimeRip496 said:
I am reading notes on tensors and came across the following passage, which the notes did not elaborate on. As a result I don't quite understand it.

Here it is : " Note that we mark the covariant basis vectors with an upper index and the contravariant basis vectors with a lower index. This may sounds counter-intuitive ('did we not decide to use upper indices for contravariant vectors?') but this is precisely what we mean with the 'different meaning of the indices' here: this time they label the vectors and do not denote their components. "

I can follow everything except the last sentence, and I do not understand why. Can anyone enlighten me?

If you've taken vector calculus, you have probably seen a 2-D vector [itex]\vec{A}[/itex] written in the form [itex]A^x \hat{x} + A^y \hat{y}[/itex]. In that notation, [itex]\hat{x}[/itex] means a "unit vector" in the x-direction, while the coefficient [itex]A^x[/itex] means the component of [itex]\vec{A}[/itex] in that direction. When you get to relativity, the notion of a "unit vector" is no longer well-defined, so the more general notion is a "basis vector". You would write an arbitrary vector [itex]\vec{A}[/itex] in the form [itex]\sum_\mu A^\mu e_\mu[/itex], where the sum ranges over all basis vectors (there are 4 in SR: 3 spatial directions and one time direction). By convention, people leave off the [itex]\sum_\mu[/itex], and it's assumed that if an index appears in both lowered and raised forms, then it is summed over. So people would just write a vector as [itex]A^\mu e_\mu[/itex].

Now, although the components [itex]A^\mu[/itex] are different in different coordinate systems (which is why people say that the vector "transforms" when you change coordinates), the combination [itex]A^\mu e_\mu[/itex] is actually coordinate-independent. The vector has the same value, as a vector, in every coordinate system. What that means is that if you change coordinates from [itex]x^\mu[/itex] to some new coordinates [itex]x^\alpha[/itex], the value of [itex]\vec{A}[/itex] doesn't change:

[itex]A^\mu e_\mu = A^\alpha e_\alpha[/itex]

The components [itex]A^\mu[/itex] change, and the basis vectors [itex]e_\mu[/itex] change, but the combination remains the same.

We can relate the old and new components through a matrix [itex]L^\alpha_\mu[/itex]:

[itex]A^\alpha = L^\alpha_\mu A^\mu[/itex]

If we use this matrix to rewrite [itex]A^\alpha[/itex] in our equation relating the two vectors, we see:

[itex]A^\mu e_\mu = L^\alpha_\mu A^\mu e_\alpha = A^\mu (L^\alpha_\mu e_\alpha)[/itex]

Note that since this equation holds for any vector [itex]\vec{A}[/itex], it must mean that

[itex]e_\mu = L^\alpha_\mu e_\alpha[/itex]

or if we let [itex](L^{-1})^\mu_\alpha[/itex] be the inverse matrix, we can apply it to both sides to get:

[itex](L^{-1})^\mu_\alpha e_\mu = e_\alpha[/itex]

So we have the pair of transformation equations:
  1. [itex]A^\alpha = L^\alpha_\mu A^\mu[/itex]
  2. [itex]e_\alpha = (L^{-1})^\mu_\alpha e_\mu[/itex]
The basis vectors [itex]e_\mu[/itex] transform in the opposite way from the components [itex]A^\mu[/itex], so that the combination [itex]A^\mu e_\mu[/itex] has the same value in every coordinate system.
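Here is a quick numerical sketch of that last statement (my own illustration; the matrix [itex]L[/itex], the basis, and the components below are all made-up values), written in Python with numpy:

[code]
import numpy as np

# Arbitrary invertible change-of-basis matrix L (made-up values)
L = np.array([[2.0, 1.0],
              [1.0, 1.0]])
L_inv = np.linalg.inv(L)

# Old basis vectors: row mu of e holds e_mu, written in some fixed
# background coordinates
e = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# Arbitrary components of A in the old basis
A = np.array([3.0, -2.0])

# The transformation pair: components with L, basis vectors with L^{-1}
A_new = L @ A        # A^alpha = L^alpha_mu A^mu
e_new = L_inv.T @ e  # e_alpha = (L^{-1})^mu_alpha e_mu

# The combination A^mu e_mu is the same in both bases
print(np.allclose(A @ e, A_new @ e_new))  # True
[/code]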
 
  • #7
TimeRip496 said:
Do you mean that the upper indices assigned for contravariant vector while the lower indices assigned for contravariant basis vector is just a mean to distinguished them from each other? Sorry if I didnt follow you.

Just another point: the index on a basis vector [itex]e_\mu[/itex] indicates which basis vector, rather than which component of a vector. But since a basis vector is, after all, a vector, you can actually ask "what are the components of the basis vector [itex]e_\mu[/itex]?" The answer is pretty trivial:

[itex](e_\mu)^\mu = 1[/itex] (In this case, [itex]\mu[/itex] is NOT summed over)

All other components are zero. This can be summarized using the delta-notation:

[itex](e_\mu)^\nu = \delta^\nu_\mu[/itex]
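In code, this just says that the component matrix of the basis, expressed in its own basis, is the identity; a trivial sketch:

[code]
import numpy as np

# Row mu holds the components (e_mu)^nu, which is exactly delta^nu_mu
e_components = np.eye(4)
print(e_components[1])  # components of e_1: [0. 1. 0. 0.]
[/code]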
 
  • #8
Shyan said:
Using the Einstein summation convention, we have [itex] \vec A=A^i e_i=A^1 e_1+A^2 e_2+A^3 e_3 [/itex]. As you know, the components of the vector ([itex]A^i[/itex]) are scalars (they are functions of the spatial coordinates). So what makes [itex] \vec A [/itex] a vector? It's the basis vectors [itex] e_i [/itex]. They really are vectors, but basis vectors, which means only a distinguished set of linearly independent vectors.

I wouldn't refer to the components of a vector as scalars. I would define a scalar as something that doesn't change under a change of coordinates, i.e., a rank-0 tensor.
 
  • #9
bcrowell said:
I wouldn't refer to the components of a vector as scalars. I would define a scalar as something that doesn't change under a change of coordinates, i.e., a rank-0 tensor.
Oh... yeah, sorry. I should have made clear that I didn't mean the strict meaning of the word.
So... what should we call them? Just "components of a vector"?

EDIT: But actually, in the context of linear algebra, they are scalars. So we have two conflicting definitions of the word scalar.
 
  • #10
bcrowell said:
I wouldn't refer to the components of a vector as scalars. I would define a scalar as something that doesn't change under a change of coordinates, i.e., a rank-0 tensor.

I guess it's a matter of taste, but I don't like that way of describing things. If [itex]\vec{A}[/itex] and [itex]\vec{B}[/itex] are vectors, then wouldn't you say that [itex]\vec{A} \cdot \vec{B}[/itex] is a scalar? But in the special case where [itex]\vec{B}[/itex] is the basis vector [itex]e_\mu[/itex], we have:

[itex]\vec{A} \cdot e_\mu = A_\mu[/itex]

So it is simultaneously true that [itex]A_\mu[/itex] is a scalar (it is the result of taking the scalar product of two vectors), and it is also a component of a covector.
 
  • #11
TimeRip496 said:
Here it is : " Note that we mark the covariant basis vectors with an upper index and the contravariant basis vectors with a lower index. This may sounds counter-intuitive ('did we not decide to use upper indices for contravariant vectors?') but this is precisely what we mean with the 'different meaning of the indices' here: this time they label the vectors and do not denote their components. "

I can follow everything except the last sentence, and I do not understand why. Can anyone enlighten me?
The components of a vector ##v## with respect to an ordered basis ##(e_1,\dots,e_n)## are the unique real numbers ##v^1,\dots,v^n## such that ##v=\sum_{i=1}^n v^i e_i##.

I will elaborate a bit...

Let ##V## be an n-dimensional vector space over ##\mathbb R##. Let ##V^*## be the set of linear functions from ##V## to ##\mathbb R##. Define addition and scalar multiplication on ##V^*## by ##(f+g)(v)=f(v)+g(v)## and ##(af)(v)=a(f(v))## for all ##v\in V## and ##a\in\mathbb R##. These definitions turn ##V^*## into a vector space. The ##V^*## defined this way is called the dual space of ##V##.

Let ##(e_i)_{i=1}^n## be an ordered basis for ##V##. (The notation denotes the n-tuple ##(e_1,\dots,e_n)##). It's conventional to put these indices downstairs, and to put the indices on components of vectors in ##V## upstairs. For example, if ##v\in V##, then we write ##v=v^i e_i##. I'm using the summation convention here, so the right-hand side really means ##\sum_{i=1}^n v^i e_i##.

For each ##i\in\{1,\dots,n\}##, we define ##e^i\in V^*## by ##e^i(e_j)=\delta^i_j##. It's not hard to show that ##(e^i)_{i=1}^n## is an ordered basis for ##V^*##. The ordered basis ##(e^i)_{i=1}^n## is called the dual basis of ##(e_i)_{i=1}^n##. It's conventional to put the indices on components of vectors in ##V^*## downstairs. For example, if ##f\in V^*##, then we write ##f=f_ie^i##.
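To make the dual basis concrete, here is a small numerical sketch (my own; the basis below is made up). If we store the basis vectors ##e_i## as the rows of a matrix, and represent each ##e^i## as a row vector acting through the dot product, then the defining condition ##e^i(e_j)=\delta^i_j## pins the dual basis down as a matrix inverse:

[code]
import numpy as np

# Rows of E are the basis vectors e_1, e_2, e_3 of R^3,
# expressed in the standard basis (made-up values)
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Rows of D are the dual basis covectors e^i, acting by the dot product.
# The condition e^i(e_j) = delta^i_j reads D @ E.T == I, so:
D = np.linalg.inv(E.T)

print(np.allclose(D @ E.T, np.eye(3)))  # True: e^i(e_j) = delta^i_j
[/code]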

Exercise: Find an interesting way to rewrite each of the following expressions:

a) ##e^i(v)##
b) ##f(e_i)##
 
  • #12
stevendaryl said:
I guess it's a matter of taste, but I don't like that way of describing things. If [itex]\vec{A}[/itex] and [itex]\vec{B}[/itex] are vectors, then wouldn't you say that [itex]\vec{A} \cdot \vec{B}[/itex] is a scalar? But in the special case where [itex]\vec{B}[/itex] is the basis vector [itex]e_\mu[/itex], we have:

[itex]\vec{A} \cdot e_\mu = A_\mu[/itex]

So it is simultaneously true that [itex]A_\mu[/itex] is a scalar (it is the result of taking the scalar product of two vectors), and it is also a component of a covector.

I suppose this would depend on whether you use the convention that a basis vector like [itex]e_\mu[/itex] transforms, or doesn't transform. I would take the Greek index to mean that it's a concrete index rather than an abstract index, and I would then assume that it was to be kept fixed under a change of coordinates. In reality, I think this would usually be clear from context.
 
  • #13
bcrowell said:
I suppose this would depend on whether you use the convention that a basis vector like [itex]e_\mu[/itex] transforms, or doesn't transform. I would take the Greek index to mean that it's a concrete index rather than an abstract index, and I would then assume that it was to be kept fixed under a change of coordinates. In reality, I think this would usually be clear from context.

The way that I think of things "transforming under coordinate changes" is this:

Vectors are fixed things (in differential geometry, they can be identified with tangents to parametrized paths). Components of a vector are projections of the vector onto a basis (a set of 4 independent vectors). If I have 4 independent vectors [itex]\vec{A}, \vec{B}, \vec{C}, \vec{D}[/itex], and then I have another vector [itex]\vec{V}[/itex], I can, as Fredrik said, write [itex]\vec{V}[/itex] as a linear combination of my basis: [itex]\vec{V} = V^1 \vec{A} + V^2 \vec{B} + V^3 \vec{C} + V^4 \vec{D}[/itex]. [itex]V^1, ..., V^4[/itex] are just 4 real numbers that happen to express the relationship between [itex]\vec{V}[/itex] and my four basis vectors, [itex]\vec{A}, \vec{B}, \vec{C}, \vec{D}[/itex]. At this point, nothing has been said about a coordinate system. All 5 vectors, [itex]\vec{V},\vec{A}, ..., \vec{D}[/itex], have an identity that is independent of any coordinate system.

But if I want to use a different set of vectors as my basis, say, [itex]\vec{A'}, \vec{B'}, ..., \vec{D'}[/itex], then I can also write the same vector [itex]\vec{V}[/itex] in terms of this new basis: [itex]\vec{V} = (V^1)' \vec{A'} + ... + (V^4)' \vec{D'}[/itex]. I haven't transformed [itex]\vec{V}[/itex], I've just written it as a different linear combination.
 
  • #14
stevendaryl said:
The way that I think of things "transforming under coordinate changes" is this:

Vectors are fixed things (in differential geometry, they can be identified with tangents to parametrized paths). Components of a vector are projections of the vector onto a basis (a set of 4 independent vectors). If I have 4 independent vectors [itex]\vec{A}, \vec{B}, \vec{C}, \vec{D}[/itex], and then I have another vector [itex]\vec{V}[/itex], I can, as Fredrik said, write [itex]\vec{V}[/itex] as a linear combination of my basis: [itex]\vec{V} = V^1 \vec{A} + V^2 \vec{B} + V^3 \vec{C} + V^4 \vec{D}[/itex]. [itex]V^1, ..., V^4[/itex] are just 4 real numbers that happen to express the relationship between [itex]\vec{V}[/itex] and my four basis vectors, [itex]\vec{A}, \vec{B}, \vec{C}, \vec{D}[/itex]. At this point, nothing has been said about a coordinate system. All 5 vectors, [itex]\vec{V},\vec{A}, ..., \vec{D}[/itex], have an identity that is independent of any coordinate system.

But if I want to use a different set of vectors as my basis, say, [itex]\vec{A'}, \vec{B'}, ..., \vec{D'}[/itex], then I can also write the same vector [itex]\vec{V}[/itex] in terms of this new basis: [itex]\vec{V} = (V^1)' \vec{A'} + ... + (V^4)' \vec{D'}[/itex]. I haven't transformed [itex]\vec{V}[/itex], I've just written it as a different linear combination.
Here you're talking about linear algebra. I'm wondering how the two viewpoints can be reconciled!
Mmm... it seems to me that in linear algebra we never use different coordinates; in fact, we never define such things. We just pick different sets of linearly independent vectors as bases. So, yeah, what you're talking about here is just the tangent space at a point. But in differential geometry, where we use different coordinates, we're doing things in a much less local manner than being only at a point.
 
  • #15
In differential geometry, we have the tangent and cotangent spaces at a point, but we also usually have some additional structure, such as the connection / fibre bundle that defines a map from the tangent space at one point to the tangent space at another point, given a curve connecting the two points.
 
  • #16
Shyan said:
Here you're talking about linear algebra. I'm wondering how the two viewpoints can be reconciled!
Mmm... it seems to me that in linear algebra we never use different coordinates; in fact, we never define such things. We just pick different sets of linearly independent vectors as bases. So, yeah, what you're talking about here is just the tangent space at a point. But in differential geometry, where we use different coordinates, we're doing things in a much less local manner than being only at a point.

At any point ##P## on a manifold, the tangent and cotangent spaces are simply linear vector spaces; that's why we "talk about linear algebra" even when we are talking about differential geometry. However, there are "special" sets of basis vectors ##e_i,~~i=1,...,n##, called "coordinate basis vectors", which satisfy ##[e_i,e_j]=0,~~\forall i,j##. When we transform from one set of coordinate basis vectors to another, the components of vectors or one-forms change in the usual fashion ##A^{i'}=\frac{\partial x^{i'}}{\partial x^j}A^j##.
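As a concrete sketch of that transformation law (my own example; the point and components are made up): going from Cartesian ##(x,y)## to polar ##(r,\phi)## coordinate-basis components is just the Jacobian acting on the component tuple:

[code]
import numpy as np

# Transform vector components from Cartesian (x, y) to polar (r, phi)
# at a point, using A^{i'} = (partial x^{i'} / partial x^j) A^j
x, y = 1.0, 1.0
r = np.hypot(x, y)

# Jacobian of (r, phi) with respect to (x, y):
# dr/dx = x/r, dr/dy = y/r, dphi/dx = -y/r^2, dphi/dy = x/r^2
J = np.array([[x / r,     y / r],
              [-y / r**2, x / r**2]])

A_cart = np.array([1.0, 0.0])  # arbitrary Cartesian-basis components
A_polar = J @ A_cart           # components in the polar coordinate basis
print(A_polar)                 # approx [0.7071, -0.5]
[/code]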
 
  • #17
Matterwave said:
At any point ##P## on a manifold, the tangent and cotangent spaces are simply linear vector spaces; that's why we "talk about linear algebra" even when we are talking about differential geometry. However, there are "special" sets of basis vectors ##e_i,~~i=1,...,n##, called "coordinate basis vectors", which satisfy ##[e_i,e_j]=0,~~\forall i,j##. When we transform from one set of coordinate basis vectors to another, the components of vectors or one-forms change in the usual fashion ##A^{i'}=\frac{\partial x^{i'}}{\partial x^j}A^j##.

Yeah, I know. But here, the question is: can we call the components of a vector scalars?
As bcrowell said, it seems wrong, because the components change when we do a coordinate transformation and so aren't invariant under coordinate transformations, as scalars should be!
But as stevendaryl said, [itex] \vec A \cdot \vec B [/itex] is a scalar, and if we put [itex] \vec B=\hat e_i [/itex], we get [itex] \vec A \cdot \hat e_i=A^i [/itex], and it seems the components of vectors are actually scalars.
(Maybe they actually settled the issue but I didn't understand!)
 
  • #18
Shyan said:
Yeah, I know. But here, the question is: can we call the components of a vector scalars?
As bcrowell said, it seems wrong, because the components change when we do a coordinate transformation and so aren't invariant under coordinate transformations, as scalars should be!
But as stevendaryl said, [itex] \vec A \cdot \vec B [/itex] is a scalar, and if we put [itex] \vec B=\hat e_i [/itex], we get [itex] \vec A \cdot \hat e_i=A^i [/itex], and it seems the components of vectors are actually scalars.
(Maybe they actually settled the issue but I didn't understand!)

This is a matter of terminology. The number ##A^i \equiv \vec{A}\cdot\vec{e_i}## is a scalar field certainly; however, if we view ##A^i## as "the i'th component of the vector ##\vec{A}##" then certainly it is a component of a vector and not a scalar. In other words, it depends on how you want to view the quantity ##A^i##. If you view it as "the i'th component of A in THIS PARTICULAR basis" then it is a scalar, if you view it as "the i'th component of A in SOME basis" then it is not a scalar.

Perhaps it's easier if we give a concrete example. Say we have a vector ##\vec{A}=(3,2,0)##; then ##A^1=3##. The number 3 is of course a scalar, but ##A^1##, which we use to denote what is in the first slot of ##\vec{A}=(A^1,A^2,A^3)##, is the component of a vector.
 
  • #19
Shyan said:
Here you're talking about linear algebra. I'm wondering how the two view points can be reconciled!
mmm...It seems to me that in linear algebra we never use different coordinates, in fact we never define such things. We just pick different sets of linearly independent vectors as bases. So...Yeah, what you're talking here, is just in the tangent space of a point. But in differential geometry where we use different coordinates, we're doing things in a much less local manner than being only at a point.

Right. In a coordinate basis, we're picking the basis vectors in a way that relates to the coordinates: [itex]e_\mu[/itex] is that unique vector so that [itex](e_\mu \cdot \nabla) \Phi = \partial_\mu \Phi[/itex] for all scalar fields [itex]\Phi[/itex]. But that's just a particular (very convenient) way of picking a basis. The basis doesn't have to have anything to do with coordinates. (Of course, to be useful, you need some continuous way to pick a basis at every point.)
 
  • #20
Matterwave said:
This is a matter of terminology. The number ##A^i \equiv \vec{A}\cdot\vec{e_i}## is a scalar field certainly; however, if we view ##A^i## as "the i'th component of the vector ##\vec{A}##" then certainly it is a component of a vector and not a scalar. In other words, it depends on how you want to view the quantity ##A^i##. If you view it as "the i'th component of A in THIS PARTICULAR basis" then it is a scalar, if you view it as "the i'th component of A in SOME basis" then it is not a scalar.

Yeah, physics discussions (and mathematics discussions often aren't much better) sometimes run into confusion when it's not clear whether someone is talking about a tensor (or matrix, or vector, or whatever), or about a component of a tensor (with arbitrary indices).

For example, [itex]g_{\mu \nu}[/itex] might mean the metric tensor, or they might mean a particular component of the metric tensor.

There is a similar ambiguity when people talk about functions: Does [itex]f(x)[/itex] mean a function, or does it mean the value of the function at some point [itex]x[/itex]? Many people try to use different alphabets, or different fonts, or something to distinguish between a variable and a constant with an arbitrary value, so they might write [itex]f(x)[/itex] to mean the function and [itex]f(a)[/itex] to mean the value at point [itex]a[/itex]. But it's hard to be consistent about such conventions, and not everybody uses the same ones.

You can disambiguate by using lambda notation (or some equivalent "binding" mechanism):

[itex]\lambda x . f(x)[/itex] means the function, while [itex]f(x)[/itex] means its value at point [itex]x[/itex]. But it's a pain to make everything explicit that way.
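The same distinction shows up in programming languages, where a function object and a function call at a point are syntactically distinct; a tiny Python illustration:

[code]
# "square" names the function itself (the analogue of lambda x . f(x)),
# while "square(3)" is its value at the point 3
square = lambda x: x ** 2

print(square)     # <function <lambda> at 0x...> -- the function as an object
print(square(3))  # 9 -- the value at a point
[/code]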
 
  • #21
The use of lowered indices to indicate basis vectors, such as [itex]e_\mu[/itex], is a little more profound than simply keeping track of indices to apply the Einstein summation convention. It's also the case that in a change of basis, the basis vectors and the components of a single vector transform in opposite ways:

[itex]A^\alpha = L^\alpha_\mu A^\mu[/itex]
[itex]e_\alpha = (L^{-1})^\mu_\alpha e_\mu[/itex]

For a covector [itex]B = B_\alpha e^\alpha[/itex], where [itex]e^\alpha[/itex] is a basis of covectors, it works out the opposite way:

[itex]B_\alpha = (L^{-1})^\mu_\alpha B_\mu[/itex]
[itex]e^\alpha = L^\alpha_\mu e^\mu[/itex]

I'm not sure I know of a pithy way to see that an indexed collection of basis vectors should transform like the components of a covector, and an indexed collection of basis covectors should transform like the components of a vector. It has to work out that way in order for the Einstein summation convention to produce an object that is basis-independent, but I don't know a satisfying explanation for why it should work that way.
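One can at least check numerically that these opposite transformation laws make the contraction [itex]B_\mu A^\mu[/itex] basis-independent; a small sketch with made-up values:

[code]
import numpy as np

L = np.array([[1.0, 2.0],
              [0.0, 1.0]])
L_inv = np.linalg.inv(L)

A = np.array([1.0, 4.0])   # vector components A^mu (arbitrary)
B = np.array([2.0, -3.0])  # covector components B_mu (arbitrary)

A_new = L @ A        # A^alpha = L^alpha_mu A^mu
B_new = L_inv.T @ B  # B_alpha = (L^{-1})^mu_alpha B_mu

# The contraction B_mu A^mu is the same number in both bases
print(np.isclose(B @ A, B_new @ A_new))  # True
[/code]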
 
  • #22
stevendaryl said:
The use of lowered indices to indicate basis vectors, such as [itex]e_\mu[/itex], is a little more profound than simply keeping track of indices to apply the Einstein summation convention. It's also the case that in a change of basis, the basis vectors and the components of a single vector transform in opposite ways:

[itex]A^\alpha = L^\alpha_\mu A^\mu[/itex]
[itex]e_\alpha = (L^{-1})^\mu_\alpha e_\mu[/itex]

For a covector [itex]B = B_\alpha e^\alpha[/itex], where [itex]e^\alpha[/itex] is a basis of covectors, it works out the opposite way:

[itex]B_\alpha = (L^{-1})^\mu_\alpha B_\mu[/itex]
[itex]e^\alpha = L^\alpha_\mu e^\mu[/itex]

I'm not sure I know of a pithy way to see that an indexed collection of basis vectors should transform like the components of a covector, and an indexed collection of basis covectors should transform like the components of a vector. It has to work out that way in order for the Einstein summation convention to produce an object that is basis-independent, but I don't know a satisfying explanation for why it should work that way.

Schutz has a pretty good explanation of this.
You just have to postulate that the basis vectors transform linearly, and then you can show that the components transform via the inverse matrix in order to preserve the vector/covector.
I guess it all comes from the fact that the vectors/covectors are geometrical objects? Not sure if everyone would find that a satisfying motivation.
 
  • #23
HomogenousCow said:
Schutz has a pretty good explanation of this.
You just have to postulate that the basis vectors transform linearly,
It's not a postulate. If ##(e_i)_{i=1}^n## and ##(e_i')_{i=1}^n## are ordered bases for ##V##, then for all ##i##, there must exist numbers ##M_i^j## such that ##e_i'=M_i^j e_j##. (In other words, we can always write the new basis vectors as linear combinations of the old).
 
  • #24
I will elaborate on what I said in post #23 here, in order to answer the question of how things transform under a change of ordered basis. Some of this was already worked out by stevendaryl in post #6. ##V## denotes an arbitrary n-dimensional vector space over ##\mathbb R##. ##V^*## denotes its dual space. (See post #11 if that term is unfamiliar).

Now let ##M## be the matrix such that for all ##i,j##, the component on row ##i##, column ##j## is ##M^i_j##. Recall that the definition of matrix multiplication is ##(AB)^i_j=A^i_k B^k_j##. Let ##v\in V## be arbitrary. We have
$$v=v^j e_j=v^i{}' e_i{}' =v^i{}' M^j_i e_j,$$ and therefore ##v^j=v^i{}' M^j_i##. This implies that
$$(M^{-1})^k_j v^j =v^i{}' (M^{-1})^k_j M^j_i =v^i{}' (M^{-1}M)^k_i =v^i{}' \delta^k_i =v^k{}'.$$ So the n-tuple of components ##(v^1,\dots,v^n)## transforms according to
$$v^i{}'= (M^{-1})^i_j v^j.$$ The fact that the matrix that appears here is ##M^{-1}## rather than ##M## is the reason why an n-tuple of components of an element of ##V## is said to transform contravariantly. The terms "covariant" and "contravariant" should be interpreted respectively as "the same way as the ordered basis" and "the opposite way from the ordered basis".

It's easy to see that the dual basis transforms contravariantly. Let ##N## be the matrix such that ##e^i{}' =N^i_j e^j##. We have
$$\delta^i_j =e^i{}'(e_j{}')=N^i_k e^k (M_j^l e_l) = N^i_k M_j^l e^k{}(e_l{}) =N^i_k M_j^l \delta^k_l =N^i_k M_j^k =(NM)^i_j.$$ This implies that ##N=M^{-1}##. So we have
$$e^i{}' =(M^{-1})^i_j e^j.$$ Now we can easily see that an n-tuple of components of an arbitrary ##f\in V^*## transforms covariantly. We can prove it in a way that's very similar to how we determined the transformation properties of the n-tuple of components of ##v##, but the simplest way is to use the formula ##f_i=f(e_i)##, which I left as an easy exercise in post #11.
$$f_i{}' =f(e_i{}')=f(M_i^j e_j) =M_i^j f(e_j)= M_i^j f_j.$$ Note that what's "transforming" under a change of ordered basis in these examples are n-tuples of real numbers or n-tuples of vectors (in ##V## or ##V^*##). In the case of a tensor of type ##(k,l)##, what's transforming isn't the tensor, but its ##n^{k+l}##-tuple of components with respect to the ordered basis ##(e_i)_{i=1}^n##.

Of course, one can take the point of view that these ##n##-tuples or ##n^{k+l}##-tuples are the tensors, or rather, that the function that associates tuples with ordered bases is what should be called a tensor. I'm not a fan of that view myself. I consider it inferior and obsolete. However, there isn't anything fundamentally wrong with it. The real problem is that it's so hard to find an explanation of this view that isn't unbelievably bad.
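Here is a small numpy sketch (values made up) that runs through these laws and checks both the dual-basis condition and the invariance of ##f(v)##. I store basis vectors and dual covectors as rows of matrices, which fixes the row/column conventions noted in the comments:

[code]
import numpy as np

# Convention: e_i' = M^j_i e_j, with M[j, i] the entry M^j_i
# on row j, column i (as in the post above)
M = np.array([[1.0, 1.0],
              [0.0, 2.0]])
M_inv = np.linalg.inv(M)

E = np.eye(2)  # rows are the old basis vectors e_j
D = np.eye(2)  # rows are the old dual basis covectors e^j

E_new = M.T @ E    # e_i' = M^j_i e_j
D_new = M_inv @ D  # e^i' = (M^{-1})^i_j e^j

# The dual-basis condition e^i'(e_j') = delta^i_j still holds:
print(np.allclose(D_new @ E_new.T, np.eye(2)))  # True

# Components transform oppositely to their bases:
v = np.array([2.0, 5.0])   # components v^j in the old basis
f = np.array([1.0, -1.0])  # components f_j in the old basis
v_new = M_inv @ v          # v^i' = (M^{-1})^i_j v^j (contravariant)
f_new = M.T @ f            # f_i' = M_i^j f_j (covariant)
print(np.isclose(f @ v, f_new @ v_new))  # True: f(v) is invariant
[/code]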
 
  • #25
Fredrik said:
It's not a postulate. If ##(e_i)_{i=1}^n## and ##(e_i')_{i=1}^n## are ordered bases for ##V##, then for all ##i##, there must exist numbers ##M_i^j## such that ##e_i'=M_i^j e_j##. (In other words, we can always write the new basis vectors as linear combinations of the old).
I worded that badly.
 
  • #26
Shyan said:
Here you're talking about linear algebra. I'm wondering how the two viewpoints can be reconciled!
Some tuples transform in a certain way under a change of ordered basis. That's just multilinear algebra. In differential geometry we want to talk about how tuples transform under a change of coordinate system, not under a change of ordered basis. These things are reconciled by associating the ordered basis ##\big(\frac{\partial}{\partial x^i}\big|_p\big)_{i=1}^n## for ##T_pM## with each coordinate system ##x## with ##p## in its domain. Now a change of coordinate system induces a change of ordered basis.

stevendaryl said:
The basis doesn't have to have anything to do with coordinates. (Of course, to be useful, you need some continuous way to pick a basis at every point.)
I suppose that the assignment ##x\mapsto \big(\frac{\partial}{\partial x^i}\big|_p\big)_{i=1}^n## of an ordered basis for ##T_pM## to each coordinate system with ##p## in its domain can be viewed as somewhat arbitrary, but when we're talking about tensors, then we have to use an ordered basis that's completely determined by the coordinate system. Otherwise it wouldn't make any sense to talk about how a tuple transforms under a change of coordinates. It's not the change of coordinate system that changes the tuple. It's the change of ordered basis. So unless the change of coordinate system induces a change of ordered basis, then the tuples can't be said to transform under a change of coordinate system.
 
  • #27
Fredrik said:
I suppose that the assignment ##x\mapsto \big(\frac{\partial}{\partial x^i}\big|_p\big)_{i=1}^n## of an ordered basis for ##T_pM## to each coordinate system with ##p## in its domain can be viewed as somewhat arbitrary, but when we're talking about tensors, then we have to use an ordered basis that's completely determined by the coordinate system. Otherwise it wouldn't make any sense to talk about how a tuple transforms under a change of coordinates. It's not the change of coordinate system that changes the tuple. It's the change of ordered basis. So unless the change of coordinate system induces a change of ordered basis, then the tuples can't be said to transform under a change of coordinate system.
But my limited knowledge of vielbeins tells me that they are actually ordered bases for the tangent spaces that aren't associated with any coordinate system, and people seem to be happy with them in GR!
 
  • #28
Fredrik said:
I suppose that the assignment ##x\mapsto \big(\frac{\partial}{\partial x^i}\big|_p\big)_{i=1}^n## of an ordered basis for ##T_pM## to each coordinate system with ##p## in its domain can be viewed as somewhat arbitrary, but when we're talking about tensors, then we have to use an ordered basis that's completely determined by the coordinate system.

That's sort of my point--that the more general concept is not how things change under a coordinate change, but how coefficients change when you change basis.
 
  • #29
Fredrik said:
It's not a postulate. If ##(e_i)_{i=1}^n## and ##(e_i')_{i=1}^n## are ordered bases for ##V##, then for all ##i##, there must exist numbers ##M_i^j## such that ##e_i'=M_i^j e_j##. (In other words, we can always write the new basis vectors as linear combinations of the old).

It's not the fact that there is a linear relationship that is surprising, it's that the linear relationship between basis vectors is described by the same matrix as the relationship between components of a covector.
 

1. What is the difference between covariant and contravariant?

Covariant and contravariant refer to how the components of a mathematical object change under a change of coordinates (or of basis). In simple terms, covariant means that the components change in the same way as the basis vectors, while contravariant means that they change in the opposite way.

2. How are covariant and contravariant vectors related to each other?

Covariant and contravariant vectors are related through the metric tensor, which converts one type into the other: the metric ##g_{ij}## lowers an index (turning contravariant components into covariant ones), and its inverse ##g^{ij}## raises one.

3. What is the importance of understanding covariant and contravariant in physics?

In physics, understanding covariant and contravariant is crucial for correctly formulating and solving problems in relativity, quantum mechanics, and other branches of theoretical physics. These concepts are used to describe physical quantities that are dependent on the observer's frame of reference.

4. How does the covariant derivative differ from the ordinary derivative?

The covariant derivative is a generalization of the ordinary derivative to curved spaces and curvilinear coordinate systems. Unlike the ordinary derivative, the covariant derivative takes into account the point-to-point variation of the basis vectors, so that the result transforms as a tensor.

5. Can you give an example of a covariant and contravariant object?

An example of a contravariant object is the displacement (or velocity) vector of a particle, whose components transform oppositely to the basis vectors. An example of a covariant object is the gradient of a scalar field, whose components transform in the same way as the basis vectors. Both of these examples are commonly used in relativity and other branches of physics.
