# Is Covariant Derivative Notation Misleading in Vector Calculus?

In summary, the usual notation ##\nabla_\mu V^\nu## can mislead because it looks as if the derivative operates on the component ##V^\nu## alone. The alternative ##(\nabla_\mu V)^\nu## makes the order of operations explicit: first take the covariant derivative of the vector ##V##, then take component ##\nu## of the result. The discussion below turns on whether that result is best viewed as a vector (a directional derivative along a basis vector) or as a (1, 1) tensor.
stevendaryl
Staff Emeritus
[Moderator's note: Thread spun off from previous thread due to topic change.]

This thread brings a pet peeve I have with the notation for covariant derivatives. When people write

##\nabla_\mu V^\nu##

what it looks like is the result of operating on the component ##V^\nu##. But the components of a vector are just scalars, so there's no difference between a covariant derivative and a partial derivative.

My preferred notation (which I don't think anybody but me uses) is:

##(\nabla_\mu V)^\nu##

The meaning of the parentheses is this: First you take a covariant derivative of the vector ##V##. The result is another vector. Then you take component ##\nu## of that vector.

Then with this notation, you can substitute ##V = V^\nu e_\nu## to get ##V## in terms of basis vectors, and you get:

##\nabla_\mu V = \nabla_\mu (V^\nu e_\nu) = (\nabla_\mu V^\nu) e_\nu + V^\nu (\nabla_\mu e_\nu)##

With my notation, the expression ##(\nabla_\mu V^\nu)## is just ##\partial_\mu V^\nu##. So we have (after relabeling the dummy index ##\nu## to ##\sigma## on the last expression)

##\nabla_\mu V = (\partial_\mu V^\nu) e_\nu + V^\sigma (\nabla_\mu e_\sigma)##

Taking components gives:

##(\nabla_\mu V)^\nu = \partial_\mu V^\nu + V^\sigma \Gamma^\nu_{\mu \sigma}##

where ##\Gamma^\nu_{\mu \sigma}## is just defined to be equal to ##(\nabla_\mu e_\sigma)^\nu## (component ##\nu## of the vector ##\nabla_\mu e_\sigma##).

With the usual notation, you would have something like:

##\nabla_\mu (e_\nu)^\sigma = \Gamma^\sigma_{\mu \nu}##

which is, to me, clunky and weird.
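To make the derivation above concrete, here is a quick symbolic check (my own example, not from the thread) in 2-D polar coordinates, where the nonzero connection coefficients are ##\Gamma^r_{\theta\theta} = -r## and ##\Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r##. It evaluates ##(\nabla_\mu V)^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu\sigma} V^\sigma## for a sample vector field:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

# Connection coefficients Gamma[nu][mu][sigma] for the 2-D polar metric diag(1, r^2)
Gamma = [[[0, 0], [0, -r]],          # Gamma^r_{mu sigma}
         [[0, 1/r], [1/r, 0]]]       # Gamma^theta_{mu sigma}

# Sample vector field V^nu (components chosen arbitrarily for illustration)
V = [r**2, sp.sin(th)]

def cov_deriv(mu, nu):
    """(nabla_mu V)^nu = partial_mu V^nu + Gamma^nu_{mu sigma} V^sigma."""
    return sp.simplify(sp.diff(V[nu], coords[mu])
                       + sum(Gamma[nu][mu][s] * V[s] for s in range(2)))

for mu in range(2):
    for nu in range(2):
        print(f'(nabla_{mu} V)^{nu} =', cov_deriv(mu, nu))
```

For ##V = (r^2, \sin\theta)## this gives, e.g., ##(\nabla_\theta V)^\theta = \cos\theta + r##, where the extra ##r## comes entirely from the ##V^\sigma (\nabla_\mu e_\sigma)## term, i.e., from the basis vectors changing from point to point.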

stevendaryl said:
First you take a covariant derivative of the vector ##V##. The result is another vector.

No, it isn't, it's a (1, 1) tensor--one upper index, one lower index. The ##\nu## index is the upper index. The confusion, which your notation does at least try to alleviate, is that the upper index is not "the same" as the upper index of the vector whose covariant derivative is being taken. To completely alleviate confusion, one could write ##\left( \nabla V \right)_\mu{}^\nu##, since putting the lower index on the ##\nabla## can also invite confusion (particularly if you start bringing in basis vectors that have indexes, but whose indexes don't refer to components but to which basis vector).

@PeterDonis can't the connection also be thought of as a map from two vector fields to a vector field? That's to say ##\nabla(X,Y) := \nabla_{X} Y##, i.e. the covariant derivative of ##Y## along ##X##, is also a vector field on the manifold? And similarly for ## \nabla_{e_{\nu}} X \overset{\text{abbrev.}}{\equiv} \nabla_{\nu} X##.

Although I do agree that if you only feed the ##\nabla## a single vector field, e.g. ##\nabla X##, then that is a ##(1,1)## tensor field.

etotheipi said:
can't the connection also be thought of as a map from two vector fields to a vector field?

By "the connection" you appear to mean the operator ##\nabla## by itself. By your description, that operator would be a (1, 2) object--two lower indexes, one upper index. What I called ##\nabla V## would be the result of feeding one vector to this operator, contracting away one lower index; what you are calling ##\nabla_X V## would be the result of feeding two vectors to this operator, contracting away both lower indexes.

The only real problem with taking this viewpoint is that this ##\nabla## is not a tensor. If you think about it, you will see that it is just another way of describing the Christoffel symbols ##\Gamma##, which do not form a tensor. This ##\nabla## just happens to yield tensors when you feed it one or two vectors: feed it one vector and you get a (1, 1) tensor, feed it two vectors and you get a (1, 0) tensor, i.e., a vector.

To me this is one of those cases where practicality trumps cumbersome notation. I can agree that there is no a priori clear indication as to whether ##\nabla_\nu V^\mu## means ##(\nabla_\nu V)^\mu## or ##\nabla_\nu(V^\mu)##. However, we already have the notation ##\partial_\nu V^\mu## for ##\nabla_\nu (V^\mu)##, so we do not really need another one. With this in mind, the notation is pretty consistent.

As for the basis ##e_\mu##, the connection coefficient would be most easily expressed as ##\nabla_\mu e_\nu = \Gamma_{\mu\nu}^\lambda e_\lambda## - I don't really see a problem with this. Alternatively, ##\Gamma_{\mu\nu}^\lambda = e^\lambda(\nabla_\mu e_\nu)##, where ##e^\lambda## is the dual basis. The only notational rule is that when you have an expression of the form ##\nabla_\nu T^{\ldots}_{\ldots}##, the indices represented by ##\ldots## refer to the additional indices of the expression ##\nabla_\nu T##, i.e., it is understood as ##(\nabla_\nu T)_\ldots^\ldots##. I do not really see any possible misinterpretation here.

Regardless, I think this entire discussion is distracting from the OP's inquiry. It should perhaps be moved to a thread of its own?

PeterDonis said:
No, it isn't, it's a (1, 1) tensor--one upper index, one lower index.

This is something that drives me crazy about physics notation, which is that the notation doesn't distinguish between a vector and a component of a vector, and doesn't distinguish between a tensor and a component of a tensor.

If ##U## and ##V## are vectors, then ##\nabla_U V## is another vector. In the special case where ##U## is the basis vector ##e_\mu##, then you have that ##\nabla_{(e_\mu)} V## is a vector. That vector has components

##(\nabla_{(e_\mu)} V)^\nu = \nabla_\mu V^\nu## (or ##(\nabla_\mu V)^\nu## in my preferred notation).

Yes, there is a (1, 1) tensor ##T## whose components are given by ##T^\nu_\mu = \nabla_\mu V^\nu##. But it's also true that if you fix ##\mu##, then there is also a vector ##V'## whose components are given by ##V'^\nu = \nabla_\mu V^\nu##.

Orodruin said:
As for the basis ##e_\mu##, the connection coefficient would be most easily expressed as ##\nabla_\mu e_\nu = \Gamma_{\mu\nu}^\lambda e_\lambda## - I don't really see a problem with this.

But it's inconsistent notation, because for every vector ##V## other than basis vectors, you write the covariant derivative as ##\nabla_\mu V^\nu##. So to be consistent with that notation, you should write ##\nabla_\mu (e_\sigma)^\nu##

stevendaryl said:
But it's inconsistent notation, because for every vector ##V## other than basis vectors, you write the covariant derivative as ##\nabla_\mu V^\nu##. So to be consistent with that notation, you should write ##\nabla_\mu (e_\sigma)^\nu##
I do not agree with this. The notation ##\nabla_\mu V^\nu## technically refers to the components of ##\nabla_{e_\mu} V##. It is perfectly consistent to write ##\nabla_\mu e_\sigma##.

I guess my point is to emphasize that connection coefficients ##\Gamma^\mu_{\nu \lambda}## arise from taking the covariant derivative of the basis vectors. The original poster didn't seem to realize that basis vectors change from point to point.

Orodruin said:
I do not agree with this. The notation ##\nabla_\mu V^\nu## technically refers to the components of ##\nabla_{e_\mu} V##. It is perfectly consistent to write ##\nabla_\mu e_\sigma##.

Yes, I know that in ##\nabla_\mu V^\nu##, the ##\nu## refers to components of a vector, but the notation obscures that. You might say that once you understand what's going on, it's perfectly clear, but I think the notation makes it more difficult to get to that point.

In the definition of covariant derivative:

##\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu \lambda} V^\lambda##

the term ##\nabla_\mu V^\nu## really means ##(\nabla_\mu V)^\nu##, while the seemingly analogous term ##\partial_\mu V^\nu## does not mean ##(\partial_\mu V)^\nu##.

stevendaryl said:
This is something that drives me crazy about physics notation, which is that the notation doesn't distinguish between a vector and a component of a vector, and doesn't distinguish between a tensor and a component of a tensor.

I agree this can be an issue, but the confusion you are showing is a different one, between the directional derivative of a vector along another vector, which is a vector, and the covariant derivative of a vector, which is a (1, 1) tensor. Unfortunately, I don't think your suggested notation helps with this confusion.

stevendaryl said:
If ##U## and ##V## are vectors, then ##\nabla_U V## is another vector. In the special case where ##U## is the basis vector ##e_\mu##, then you have that ##\nabla_{(e_\mu)} V## is a vector. That vector has components

##(\nabla_{(e_\mu)} V)^\nu = \nabla_\mu V^\nu## (or ##(\nabla_\mu V)^\nu## in my preferred notation).

But nobody else that I've ever seen uses the notation ##\nabla_\mu V^\nu## to refer to the directional derivative of the vector ##V^\nu## along the basis vector ##e_\mu##. Everyone else uses that notation to refer to the (1, 1) tensor that you get when you apply the covariant derivative operator ##\nabla_\mu## to the vector ##V^\nu##.

stevendaryl said:
Yes, there is a (1, 1) tensor ##T## whose components are given by ##T^\nu_\mu = \nabla_\mu V^\nu##. But it's also true that if you fix ##\mu##, then there is also a vector ##V'## whose components are given by ##V'^\nu = \nabla_\mu V^\nu##.

This makes no sense. You can't "fix ##\mu##" on the covariant derivative operator to get another operator. The covariant derivative operator ##\nabla## takes any (p, q) tensor and produces a (p, q + 1) tensor. The directional derivative operator ##\nabla_X## is not formed by "fixing ##\mu##" on ##\nabla##; it's formed by contracting ##\nabla## with ##X##, i.e., it's the operator ##X^\mu \nabla_\mu##. This operator takes a (p, q) tensor and produces another (p, q) tensor.

Orodruin said:
The notation ##\nabla_\mu V^\nu## technically refers to the components of ##\nabla_{e_\mu} V##.

I don't agree. The ##\mu## on ##e_\mu## is not a component index; it's a "which basis vector" index. So the operator ##\nabla_{e_\mu}## is a directional derivative operator along ##e_\mu##, and we really should pick a different set of index letters for "which basis vector" indexes. For example, we could say that the basis vector is ##e_a##, and then we have an operator ##\nabla_{e_a} = \left( e_a \right)^\mu \nabla_\mu## that is the directional derivative operator along ##e_a##. In other words, as I said in response to @stevendaryl just now, the directional derivative operator is formed by contracting the covariant derivative operator ##\nabla## with the vector along which we want to take the directional derivative. The notation ##\nabla_\mu V^\nu## does not denote such a contraction.

PeterDonis said:
I don't agree. The ##\mu## on ##e_\mu## is not a component index; it's a "which basis vector" index. So the operator ##\nabla_{e_\mu}## is a directional derivative operator along ##e_\mu##, and we really should pick a different set of index letters for "which basis vector" indexes. For example, we could say that the basis vector is ##e_a##, and then we have an operator ##\nabla_{e_a} = \left( e_a \right)^\mu \nabla_\mu## that is the directional derivative operator along ##e_a##. In other words, as I said in response to @stevendaryl just now, the directional derivative operator is formed by contracting the covariant derivative operator ##\nabla## with the vector along which we want to take the directional derivative. The notation ##\nabla_\mu V^\nu## does not denote such a contraction.
I will agree in the general case, but it really does not matter as long as we are dealing with holonomic bases. Since the components are ##(e_\mu)^\nu = \delta_\mu^\nu##, it is indeed the case that ##\nabla_{e_\nu} = \delta^\mu_\nu \nabla_\mu = \nabla_\nu##.

Edit: In fact, I would argue that this is the essence of extracting components out of any tensor. Assuming ##T## to be a ##(n,m)## tensor, its components are always given by
$$T^{a_1\ldots a_n}_{b_1\ldots b_m} = T(e^{a_1},\ldots, e^{a_n},e_{b_1},\ldots, e_{b_m}).$$
Hence, considering ##\nabla V## as a (1,1) tensor, the components can be extracted by inserting the tangent and dual bases, i.e.,
$$\nabla_\mu V^\nu = e^\nu(\nabla_{e_\mu} V)$$
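In a finite-dimensional coordinate basis this extraction rule reduces to plain index picking. A small numerical sketch (array values are arbitrary) of feeding a (1,1) tensor one basis covector and one basis vector:

```python
import numpy as np

# A (1,1) tensor stored with the upper index first: T[a][b] = T^a_b
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def apply_tensor(T, covector, vector):
    """Evaluate T(omega, v) = omega_a T^a_b v^b for a (1,1) tensor."""
    return covector @ T @ vector

# Coordinate basis: (e_b)^c = delta_b^c, so both bases are rows of the identity
basis = np.eye(2)       # basis[b] is e_b
dual = np.eye(2)        # dual[a] is e^a

# Inserting basis elements into the slots reproduces the stored entries
for a in range(2):
    for b in range(2):
        assert apply_tensor(T, dual[a], basis[b]) == T[a, b]
print('component extraction matches matrix entries')
```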

Orodruin said:
considering ##\nabla V## as a (1,1) tensor, the components can be extracted by inserting the tangent and dual bases, i.e.,
$$\nabla_\mu V^\nu = e^\nu(\nabla_{e_\mu} V)$$
$$\nabla_\mu V^\nu = e^\nu(\nabla_{e_\mu} V)$$

This doesn't seem right. If ##\nabla V## is a (1, 1) tensor, then we should have ##\left( \nabla V \right)_\mu{}^\nu = \nabla V \left( e_\mu, f^\nu \right)##, where ##e_\mu## is a vector in the basis and ##f^\nu## is a covector in the dual basis. But ##\nabla_{e_\mu} V## is a vector, whose components are given by ##\left( \nabla_{e_\mu} V \right)^\nu = \nabla_{e_\mu} V \left( f^\nu \right)##. If we want to capture the fact that ##\nabla_{e_\mu}## is a contraction of ##e_\mu## with ##\nabla##, we would write ##\nabla_{e_\mu} = e_\mu \cdot \nabla##, and the components would be ##e_\mu \cdot \nabla \left( f^\nu \right)##. None of these seem to be what ##e^\nu(\nabla_{e_\mu} V)## is saying: that seems to be saying that we form a dyad with the vector ##e^\nu## (which is really a covector, what I have been calling ##f^\nu##) and ##\nabla_{e_\mu} V##. But that's not what ##\nabla_\mu V^\nu## is.

However, given a vector ##v## and a dual vector ##\omega##, you have ##v(\omega) = v^{\mu} e_{\mu} (\omega_{\nu} e^{\nu}) = v^{\mu} \omega_{\nu} \delta^{\nu}_{\mu} = v^{\mu}\omega_{\mu}## but also ##\omega(v) = \omega_{\mu} e^{\mu}(v^{\nu} e_{\nu}) = \omega_{\mu} v^{\nu} \delta ^{\mu}_{\nu} = \omega_{\mu} v^{\mu}##, so ##\omega(v) = v(\omega)##. So wouldn't ##\nabla_{e_{\mu}} V(e^{\nu}) = e^{\nu}(\nabla_{e_{\mu}} V)##?
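Numerically, both orders of evaluation reduce to the same contraction ##\omega_\mu v^\mu##; a two-line check with arbitrarily chosen sample components:

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])      # vector components v^mu
omega = np.array([0.5, 4.0, -2.0])  # dual vector components omega_mu

# omega(v) and v(omega) are the same contraction omega_mu v^mu
assert np.isclose(omega @ v, v @ omega)
print('omega(v) = v(omega) =', omega @ v)
```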

Orodruin said:
Regardless, I think this entire discussion is distracting from the OP's inquiry. It should perhaps be moved to a thread of its own?

I have now done this--as you will note, this post and the one quoted above now appear in the new thread. I will post a link in the old thread for reference.

PeterDonis said:
This doesn't seem right. If ##\nabla V## is a (1, 1) tensor, then we should have ##\left( \nabla V \right)_\mu{}^\nu = \nabla V \left( e_\mu, f^\nu \right)##, where ##e_\mu## is a vector in the basis and ##f^\nu## is a covector in the dual basis. But ##\nabla_{e_\mu} V## is a vector, whose components are given by ##\left( \nabla_{e_\mu} V \right)^\nu = \nabla_{e_\mu} V \left( f^\nu \right)##. If we want to capture the fact that ##\nabla_{e_\mu}## is a contraction of ##e_\mu## with ##\nabla##, we would write ##\nabla_{e_\mu} = e_\mu \cdot \nabla##, and the components would be ##e_\mu \cdot \nabla \left( f^\nu \right)##. None of these seem to be what ##e^\nu(\nabla_{e_\mu} V)## is saying: that seems to be saying that we form a dyad with the vector ##e^\nu## (which is really a covector, what I have been calling ##f^\nu##) and ##\nabla_{e_\mu} V##. But that's not what ##\nabla_\mu V^\nu## is.
It is definitely not a dyad. By definition, the dual space is the space of linear functions from the tangent space to real numbers. Hence, being a dual vector, ##e^\nu## is a function from the tangent vectors to real numbers to which we pass the argument ##\nabla_{e_\mu}V##, which is a tangent vector and therefore a valid argument. I would call this pretty standard notation.

stevendaryl said:
the components of a vector are just scalars

No, they aren't; they don't transform like scalars on a change of coordinates.

stevendaryl said:
connection coefficients ##\Gamma^\mu_{\nu \lambda}## arise from taking the covariant derivative of the basis vectors.

No, they don't. The covariant derivative of a vector is a (1, 1) tensor. The connection coefficients are (1, 2) objects which, as I noted in an earlier post in this thread, are not even tensors.

I understand what you mean (or what you should mean) with the above expressions, but sloppiness in terminology is a pet peeve of mine just as potentially confusing notation is a pet peeve of yours.

Orodruin said:
It is definitely not a dyad.

"Dyad" is strictly speaking not a correct term for a (1, 1) object formed by "multiplying" a vector and a covector, no. But such an object is still a (1, 1) object, and is not the same as the (1, 1) object formed by taking the covariant derivative of a vector.

Orodruin said:
being a dual vector, ##e^\nu## is a function from the tangent vectors to real numbers to which we pass the argument ##\nabla_{e_\mu}V##

Ok, but the result of that operation is a scalar, not a (1, 1) tensor. In other words, the notation ##e^\nu \left( \nabla_{e_\mu} V \right)##, interpreted as "take the covector ##e^\nu## and feed it the vector ##\nabla_{e_\mu} V##", should give a scalar (another way of describing this scalar would be the contraction of the covector ##e^\nu## with the vector ##\nabla_{e_\mu} V##, which would be denoted ##\left( e^\nu \right)_\alpha \left( \nabla_{e_\mu} V \right)^\alpha## if we were really being careful about distinguishing component indexes from "which vector or covector" indexes). But the covariant derivative ##\nabla_\mu V^\nu##, which is what you were trying to capture with the notation ##e^\nu \left( \nabla_{e_\mu} V \right)##, is a (1, 1) tensor.

PeterDonis said:
But nobody else that I've ever seen uses the notation ##\nabla_\mu V^\nu## to refer to the directional derivative of the vector ##V^\nu## along the basis vector ##e_\mu##. Everyone else uses that notation to refer to the (1, 1) tensor that you get when you apply the covariant derivative operator ##\nabla_\mu## to the vector ##V^\nu##.

I don't at all understand the distinction you're making.

Do you agree that for any vector ##U##, there is a corresponding operator ##\nabla_U## that takes a vector and returns a vector?

Do you agree that basis vectors are vectors?

Then it follows that for each basis vector ##e_\mu##, there is a corresponding operator ##\nabla_{e_\mu}## that takes a vector and returns a vector. Do you agree with that?

So if ##V## is another vector, do you agree that ##\nabla_{e_\mu} V## is a vector? Do you agree that the components are given by ##(\nabla_{e_\mu} V)^\nu = (\nabla V)^\nu_\mu##? Do you agree that that is equal to ##(\nabla_\mu V)^\nu##?

So ##\nabla_{e_\mu} V## is a vector whose components are ##(\nabla_\mu V)^\nu##. So I don't understand why you would not want to say that ##\nabla_\mu V## is a vector with components ##(\nabla_\mu V)^\nu##.

PeterDonis said:
This makes no sense. You can't "fix ##\mu##" on the covariant derivative operator to get another operator.

"Fixing ##\mu##" means the same thing as contracting with ##e_\mu##.

PeterDonis said:
The covariant derivative operator ##\nabla## takes any (p, q) tensor and produces a (p, q + 1) tensor. The directional derivative operator ##\nabla_X## is not formed by "fixing ##\mu##" on ##\nabla##; it's formed by contracting ##\nabla## with ##X##, i.e., it's the operator ##X^\mu \nabla_\mu##. This operator takes a (p, q) tensor and produces another (p, q) tensor.

I have no idea why you think there is a distinction between those two. ##\nabla_\mu## means the same thing as ##\nabla_{e_\mu}##.

stevendaryl said:
"Fixing ##\mu##" means the same thing as contracting with ##e_\mu##.

As I have already pointed out, you are mixing up two different kinds of indexes that should really be kept separate. There are component indexes and there are "which vector or covector" indexes, and they are not the same thing.

stevendaryl said:
I have no idea why you think there is a distinction between those two. ##\nabla_\mu## means the same thing as ##\nabla_{e_\mu}##.

As @Orodruin has pointed out in post #13, you can get away with this in a coordinate basis. But not otherwise.

PeterDonis said:
But the covariant derivative ##\nabla_\mu V^\nu##, which is what you were trying to capture with the notation ##e^\nu \left( \nabla_{e_\mu} V \right)##, is a (1, 1) tensor.

But this is again just notational trouble, because from the context when @Orodruin wrote ##\nabla_{\mu} V^{\nu}## I think it should not be interpreted as ##{(\nabla V)_{\mu}}^{\nu}## but rather as ##(\nabla_{\mu} V)^{\nu}##, i.e. you take the vector field ##\nabla(e_{\mu}, V) = \nabla_{e_{\mu}} V## and you act it upon the basis vector ##e^{\nu}## to extract the ##\nu^{\mathrm{th}}## component of the vector field.

I was under the impression that the more common convention in relativity was indeed to take ##\nabla_{\mu} V^{\nu}## to be a vector component, e.g. as in post #5?

PeterDonis said:
No, they aren't; they don't transform like scalars on a change of coordinates.

I think that your way of looking at things is very confusing. The components of a vector are SCALARS.
If I am using 2-D Cartesian coordinates and a coordinate basis, I have basis vectors ##e_x## and ##e_y##, and I can make a new vector by taking linear combinations:

##V = 3 e_x + 4 e_y##

3 and 4 are just ordinary numbers. They are scalars. They also happen to be the components of vector ##V## in the basis ##e_x, e_y##.

When I transform to another basis, I'm re-expressing ##V## as a linear combination of different vectors. I'm not changing the value of 3 or 4.

A basis is a set of linearly independent vectors ##e_\mu##. There are corresponding co-vectors (type (0,1) tensors that take a vector and return a scalar) ##e^\lambda## satisfying ##e^\lambda(e_\mu) = \delta^\lambda_\mu##. Then in terms of those co-vectors, the components of an arbitrary vector ##V## are given by:

##V^\mu = e^\mu(V)##

That definitely is a scalar.
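As a numerical sketch of ##V^\mu = e^\mu(V)## (example numbers mine): store the basis vectors as the columns of a matrix ##E##; the dual basis covectors are then the rows of ##E^{-1}##, and applying them to ##V## returns its components as plain numbers:

```python
import numpy as np

# A (non-orthogonal) basis for R^2, stored as the columns of E
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The dual basis covectors e^mu are the rows of E^{-1},
# since E^{-1} E = I encodes e^lambda(e_mu) = delta^lambda_mu
dual = np.linalg.inv(E)

V = np.array([5.0, 4.0])               # V in the ambient (Cartesian) description

components = dual @ V                  # V^mu = e^mu(V), one scalar per covector
assert np.allclose(E @ components, V)  # V = V^mu e_mu reassembles the vector
print('components of V in this basis:', components)
```

Each entry of `components` is a single number obtained by feeding ##V## to one fixed covector, which is exactly the sense in which a component, in a fixed basis, is a scalar.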

etotheipi said:
I was under the impression that the more common convention in relativity was indeed to take ##\nabla_{\mu} V^{\nu}## to be a vector component?

I am under the opposite impression, that the usual meaning of ##\nabla_{\mu} V^{\nu}##, at least if one is dealing with a source that takes proper account of the issue at all (which many don't--see further comments below), is to denote the (1, 1) tensor obtained by taking the covariant derivative of ##V##. In other words, both indexes are to be interpreted as component indexes.

Wald's abstract index notation is intended to try to address this issue; his abstract indexes refer to what MTW call "slots" on a general tensorial object. So in Wald's notation, the covariant derivative of a vector, the (1, 1) tensor, would be ##\nabla_a V^b##, indicating a (1, 1) object (in MTW's terminology, one "upper" slot that takes a covector argument, and one "lower" slot that takes a vector argument). Then you would extract the components of this object in a particular basis by inserting the appropriate basis vectors or covectors into the appropriate slots. Most other sources don't really discuss this, but just use component notation more or less the way Wald uses abstract index notation, so that ##\nabla_{\mu} V^{\nu}## means more or less the same thing as Wald's ##\nabla_a V^b##.

Since, as @Orodruin has pointed out, in a coordinate basis the index ##\mu## on ##\nabla_\mu## can be interpreted either way (as a tensor component index or as a "which vector or covector" index), a source that implicitly assumes a coordinate basis, and doesn't take proper account of this issue, would use the notation ##\nabla_{\mu} V^{\nu}## to mean both things--the ##\mu## index would be both a component index and a "which vector or covector" index, because the source is simply ignoring the distinction between the two. But I do not think that is a valid argument for claiming that the "which vector or covector" index interpretation of ##\nabla_\mu## is standard. I think it's just an illustration of how sloppy many sources in the literature are.

stevendaryl said:
The components of a vector are SCALARS.

If you interpret "the components of a vector" to mean "contractions of the vector with the four basis vectors in some fixed basis", then yes, those things are scalars.

But on that interpretation of "the components of a vector" (as opposed to the more specific "the components of a vector in this particular fixed basis"), the components of a vector do not change when you change coordinates, because scalars don't change when you change coordinates. But I think the phrase "the components of a vector", without any qualifiers, indicates things that do change when you change coordinates; just look at any discussion of coordinate transformations in a GR textbook, which will be all about how the components of vectors and other tensorial objects change when you change coordinates. But numbers that change when you change coordinates are not scalars.

stevendaryl said:
When I transform to another basis, I'm re-expressing ##V## as a linear combination of different vectors.

Yes, and, as above, the way any discussion of coordinate transformations in a GR textbook will describe this process is "transforming the components of ##V##". But on your interpretation, the components of ##V## are ##3## and ##4##, period. They don't change when I transform to another basis. Is that the interpretation you are defending?

PeterDonis said:
But on that interpretation of "the components of a vector" (as opposed to the more specific "the components of a vector in this particular fixed basis"), the components of a vector do not change when you change coordinates, because scalars don't change when you change coordinates. But I think the phrase "the components of a vector", without any qualifiers, indicates things that do change when you change coordinates; just look at any discussion of coordinate transformations in a GR textbook, which will be all about how the components of vectors and other tensorial objects change when you change coordinates. But numbers that change when you change coordinates are not scalars.

I'm not certain about this, but I think the components of a vector field are actually scalars, so long as you define a scalar as per usual as a function on a manifold [satisfying ##\varphi(p) = \varphi'(p)##]. But the key is that a coordinate transformation amounts to mapping from one tuple of scalars/functions ##(v^{\mu})## to a different tuple of scalars/functions ##(\bar{v}^{\mu})##, i.e. by ##\bar{v}^{\mu} = {T^{\mu}}_{\nu} v^{\nu}##.

It's to say ##v^{\mu}(p)## and ##\bar{v}^{\mu}(p)##, when considering ##v^{\mu}## and ##\bar{v}^{\mu}## as functions, both have invariant meanings!

stevendaryl said:
With the usual notation, you would have something like:

##\nabla_\mu (e_\nu)^\sigma = \Gamma^\sigma_{\mu \nu}##

which is, to me, clunky and weird.
What is the derivation one should follow to compute ##\nabla_\mu (e_\nu)^\sigma ##?

PeterDonis said:
If you interpret "the components of a vector" to mean "contractions of the vector with the four basis vectors in some fixed basis", then yes, those things are scalars.

But on that interpretation of "the components of a vector" (as opposed to the more specific "the components of a vector in this particular fixed basis"), the components of a vector do not change when you change coordinates, because scalars don't change when you change coordinates.

I think that's the way it should be. Instead of talking about "transforming" a vector, you say that a vector is a fixed object that is independent of any basis or coordinate system. But you can express a vector as a linear combination of other vectors. Of course, the coefficients depend on which other vectors you are talking about.

PeterDonis said:
Yes, and, as above, the way any discussion of coordinate transformations in a GR textbook will describe this process is "transforming the components of ##V##". But on your interpretation, the components of ##V## are ##3## and ##4##, period. They don't change when I transform to another basis. Is that the interpretation you are defending?

Absolutely. I think that it's much better to think in terms of re-expressing a vector as a linear combination of different basis vectors, rather than to think of the components transforming.

etotheipi said:
I think the components of a vector field are actually scalars

The components of a vector in a specific, fixed basis are scalars, yes, because they are simply contractions (or perhaps "inner products" would be a better term) of vectors. But "the components of a vector" without specifying a basis is, strictly speaking, not even a well-defined expression, although most sources use it in a somewhat sloppy fashion when talking about coordinate transformations. @stevendaryl is correct that the proper way to describe what a coordinate transformation does is to change which set of basis vectors you contract a given vector with to get components. But most sources don't describe it that way and don't really make it clear that that is what is going on.

stevendaryl said:
I think that it's much better to think in terms of re-expressing a vector as a linear combination of different basis vectors, rather than to think of the components tranforming.

That's fine, but it still doesn't make "the components of a vector are scalars" correct without qualification, because on the interpretation you are defending (which is the same one I described in my response to @etotheipi just now), the expression "the components of a vector" is not even well-defined since it doesn't specify a basis. The numbers ##3## and ##4## in your previous example are not "the components of ##V##", they are "the components of ##V## in the ##e_x##, ##e_y## basis".

PeterDonis said:
The components of a vector in a specific, fixed basis are scalars, yes, because they are simply contractions (or perhaps "inner products" would be a better term) of vectors. But "the components of a vector" without specifying a basis is, strictly speaking, not even a well-defined expression, although most sources use it in a somewhat sloppy fashion when talking about coordinate transformations. @stevendaryl is correct that the proper way to describe what a coordinate transformation does is to change which set of basis vectors you contract a given vector with to get components. But most sources don't describe it that way and don't really make it clear that that is what is going on.
Sure, from linear algebra theory the isomorphism ##\varphi_{\mathcal{B}} : V \longrightarrow \mathbf{R}^n## taking ##v \in V## to its coordinate vector ##[v]_{\mathcal{B}}## with respect to a basis ##\mathcal{B}## of ##V## of course depends on ##\mathcal{B}##, and due to the linearity is defined totally by its action on the basis e.g. ##\varphi_{\mathcal{B}}(e_i) = (0, \dots, 1, \dots, 0)## with the ##1## in the ##i^\mathrm{th}## position. And transformation from ##\mathcal{A}##-coordinates to ##\mathcal{B}##-coordinates is just application of a transition function ##\varphi_{\mathcal{B}} \circ \varphi_{\mathcal{A}}^{-1}##, where here this function can be found explicitly by considering ##v = v^{\mu} e_{\mu} = \bar{v}^{\mu} \bar{e}_{\mu}##, so it follows ##v^{\nu} = v^{\mu} \delta^{\nu}_{\mu} = e^{\nu}(v^{\mu} e_{\mu}) = \bar{v}^{\mu} e^{\nu}(\bar{e}_{\mu})## and is defined completely by the numbers ##{T^{\nu}}_{\mu} := e^{\nu}(\bar{e}_{\mu})##.

It doesn't even make sense to mention vector components without specifying a basis!

PeterDonis said:
But on that interpretation of "the components of a vector" (as opposed to the more specific "the components of a vector in this particular fixed basis"), the components of a vector do not change when you change coordinates, because scalars don't change when you change coordinates.

Just saying "the components of a vector" implicitly leaves out the important qualifier "with respect to basis A". The components relative to a different basis B are taken with respect to a different set of basis vectors and will in general differ from those relative to basis A. Components are useless without knowing which basis they refer to.

PeterDonis said:
Yes, and, as above, the way any discussion of coordinate transformations in a GR textbook will describe this process is "transforming the components of V". But on your interpretation, the components of V are 3 and 4, period. They don't change when I transform to another basis. Is that the interpretation you are defending?
They do change, because the components depend on which (dual) basis vectors you insert into the tensor.
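The point about the numbers ##3## and ##4## can be made concrete with a quick numpy sketch (the 30-degree rotation is an arbitrary choice for illustration):

```python
import numpy as np

# V has components (3, 4) in the standard basis {e_x, e_y}.
V = np.array([3.0, 4.0])

# A rotated basis (rotation by 30 degrees); columns are the new basis
# vectors written in {e_x, e_y} components.
theta = np.pi / 6
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Components of the *same* vector in the rotated basis: solve B @ V_new = V.
V_new = np.linalg.solve(B, V)

# The geometric vector is unchanged ...
assert np.allclose(B @ V_new, V)
# ... but its components are not: they depend on the basis.
assert not np.allclose(V_new, V)
```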

Jufa said:
What is the derivation one should follow to compute ##\nabla_\mu (e_\nu)^\sigma ##?

That's just a different notation for the connection coefficient ##\Gamma^\sigma_{\mu \nu}##.

There are two different ways to compute ##\Gamma^\sigma_{\mu \nu}##. If you know the components of the metric tensor, ##g_{\mu \nu}##, then ##\Gamma^\sigma_{\mu \nu} = \dfrac{1}{2} g^{\sigma \lambda}(\partial_\mu g_{\nu \lambda} + \partial_\nu g_{\lambda \mu} - \partial_\lambda g_{\nu \mu})##
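The metric formula can be checked symbolically with sympy. Here is a sketch for the flat 2-D polar metric ##g = \mathrm{diag}(1, r^2)## (a standard example; the helper function below is written for this thread, not a library API):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)   # flat 2-D metric in polar coordinates
ginv = g.inv()

def Gamma(s, m, n):
    """Gamma^s_{mn} from the metric formula above."""
    return sp.Rational(1, 2) * sum(
        ginv[s, l] * (sp.diff(g[n, l], coords[m])
                      + sp.diff(g[l, m], coords[n])
                      - sp.diff(g[n, m], coords[l]))
        for l in range(2))

# Indices: 0 = r, 1 = theta.
assert sp.simplify(Gamma(1, 0, 1) - 1/r) == 0   # Gamma^theta_{r theta} = 1/r
assert sp.simplify(Gamma(0, 1, 1) + r) == 0     # Gamma^r_{theta theta} = -r
```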

Or, if you know the relationship between your basis vectors ##e_\mu## and a set of local Cartesian basis vectors ##e_j##, then you can use:

##\nabla_\mu e_\nu = \Gamma^\sigma_{\mu \nu} e_\sigma##

To compute ##\nabla_\mu e_\nu## you can re-express ##e_\nu## in terms of Cartesian basis:

##e_\nu = L^j{}_\nu\, e_j##

where the ##L^j{}_\nu## are the coefficients of the transformation (note that ##j## is summed over the Cartesian basis). Then you can use the fact that (by definition) Cartesian basis vectors are covariantly constant to write:
##\nabla_\mu e_\nu = \nabla_\mu (L^j{}_\nu\, e_j) = (\partial_\mu L^j{}_\nu)\, e_j##

Let me work out an example with 2-D polar coordinates:

##e_r = \dfrac{\partial x}{\partial r} e_x + \dfrac{\partial y}{\partial r} e_y = \cos(\theta)\, e_x + \sin(\theta)\, e_y##
##e_\theta = \dfrac{\partial x}{\partial \theta} e_x + \dfrac{\partial y}{\partial \theta} e_y = - r \sin(\theta)\, e_x + r \cos(\theta)\, e_y##

##\nabla_r e_r = 0##
##\nabla_\theta e_r = -\sin(\theta)\, e_x + \cos(\theta)\, e_y = \dfrac{1}{r} e_\theta##
##\nabla_r e_\theta = -\sin(\theta)\, e_x + \cos(\theta)\, e_y = \dfrac{1}{r} e_\theta##
##\nabla_\theta e_\theta = -r \cos(\theta)\, e_x - r \sin(\theta)\, e_y = -r\, e_r##

So ##\Gamma^\theta_{r \theta} = \Gamma^\theta_{\theta r} = 1/r##
##\Gamma^r_{\theta \theta} = -r##

All the other coefficients are zero.
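The basis-vector method above can also be checked with sympy: differentiate the Cartesian components of ##e_r## and ##e_\theta## (since Cartesian basis vectors are covariantly constant) and re-expand the result in the polar basis. This is a sketch written for this thread, not library code:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

# Polar coordinate basis vectors in Cartesian components (as derived above).
e_r  = sp.Matrix([sp.cos(th), sp.sin(th)])
e_th = sp.Matrix([-r*sp.sin(th), r*sp.cos(th)])
E = e_r.row_join(e_th)   # columns: e_r, e_theta in the Cartesian basis

def nabla_components(mu, e_nu):
    """nabla_mu e_nu is the plain partial derivative of the Cartesian
    components; solving E x = nabla_mu e_nu re-expands the result in
    {e_r, e_theta}, giving the column of Gamma^sigma_{mu nu}."""
    return sp.simplify(E.solve(sp.diff(e_nu, coords[mu])))

# Indices: 0 = r, 1 = theta.  Matches the coefficients computed above:
assert nabla_components(0, e_r) == sp.zeros(2, 1)                             # nabla_r e_r = 0
assert sp.simplify(nabla_components(1, e_r)  - sp.Matrix([0, 1/r])) == sp.zeros(2, 1)
assert sp.simplify(nabla_components(0, e_th) - sp.Matrix([0, 1/r])) == sp.zeros(2, 1)
assert sp.simplify(nabla_components(1, e_th) - sp.Matrix([-r, 0])) == sp.zeros(2, 1)
```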

stevendaryl said:
That's just a different notation for the connection coefficient ##\Gamma^\sigma_{\mu \nu}##. [...]

Many thanks, I think I understand it now.

PeterDonis said:
The expression "the components of a vector" is not even well-defined since it doesn't specify a basis. The numbers ##3## and ##4## in your previous example are not "the components of ##V##", they are "the components of ##V## in the ##e_x##, ##e_y## basis".

Yes, it should always be "The components of ##V## in the basis ##e_\mu##"
