Is Covariant Derivative Notation Misleading in Vector Calculus?

Summary
The discussion centers on the notation for covariant derivatives in vector calculus, with a focus on the potential confusion arising from the representation of vectors and their components. The original poster argues that the notation ##\nabla_\mu V^\nu## misleadingly suggests a direct operation on the components of a vector, while they propose a clearer alternative, ##(\nabla_\mu V)^\nu##. Participants highlight that the covariant derivative is a (1, 1) tensor, and there is debate over whether the notation adequately distinguishes between vectors and their components. The conversation also touches on the implications of using different notations for directional derivatives and covariant derivatives. Overall, the thread emphasizes the need for clarity in mathematical notation to avoid misinterpretation.
  • #61
etotheipi said:
It's just notation!

Yes, but since there are multiple different, contradictory conventions for notation, and since it's often not clear which convention a particular person is using, it's at least worth trying to describe the different conventions and have some debate about their pros and cons.
 
  • Like
Likes etotheipi
  • #62
PeterDonis said:
Yes, but since there are multiple different, contradictory conventions for notation, and since it's often not clear which convention a particular person is using, it's at least worth trying to describe the different conventions and have some debate about their pros and cons.

Very true. Good notation can get you a long way in a problem!

On the plus side, I'm happy to see Penrose's notation getting some love in this thread. It seems that some people like to bash it, but I actually think it's quite nice. It's helpful conceptually and it looks pretty :smile:
 
  • #63
etotheipi said:
I actually think it's quite nice.

So do I. It's one thing I greatly prefer about Wald as compared to, say, MTW. MTW's boldface notation for tensors, while it has some nice features, completely obscures the "slot" information that Wald's abstract index notation makes clear. The only alternative notation MTW uses is component notation, which invites other confusions, even though in their usage it does make the slot information clear: they use it to serve both functions (the Wald abstract index function and the "components in an unspecified basis" function), and in their usage components are never scalars, since they always write an explicit contraction when they want to obtain a scalar.
 
  • Like
Likes etotheipi
  • #64
PeterDonis said:
We've already been around this merry-go-round once. I'm sorry, but I simply don't see the point of trying to gerrymander the interpretation of individual indexes of the same kind on expressions the way you are doing.

If ##V^\nu## does not mean a component of vector ##V##, then how do you indicate the components of vector ##V##? If ##\nabla_\mu V^\nu## doesn't mean the ##^{\nu}_\mu## component of tensor ##\nabla V##, then how do you indicate that component?

I think you're defending a convention that causes no end of confusion.

PeterDonis said:
To me, the whole point of having indexes on expressions is to denote what kind of object the thing is.

Doesn't calling it a ##(1,1)## tensor already indicate that?

PeterDonis said:
Then you would be writing a different expression, which at least would give some indication that you intended the ##x## index to mean something different from the other ones (although even then the "directional derivative" interpretation is not the one that I would intuitively assign, nor, I think, would most physics readers--we would intuitively think you mean the ##x## component of the (1, 1) tensor).

You keep making a distinction where there is no distinction. If ##T## is a (1,1) tensor, then it is true BOTH that ##T^x_y## is a component of the (1,1) tensor ##T##, and ALSO that ##T^x_y## is the x-component of the vector formed by contracting ##T## with the basis vector ##e_y##. So you're making a distinction that doesn't exist.
 
  • #65
pervect said:
The only difference between index notation and abstract index notation is that if we require a specific basis (for instance, a coordinate basis), then we use greek letters in the subscripts and superscripts as a warning of this requirement.

I think it is unambiguous, except that in practice, people don't always stick to greek letters for names of components. They often use ##i, j, k## instead if it's meant to be a Cartesian basis. The notation ##V^a## looks like it's talking about a component of vector ##V##.
 
  • #66
So I think that the two sides can be reconciled if people do always use roman letters for abstract indices, and use greek letters for concrete indices. So under that convention, ##\nabla_\mu V^\nu## is talking about components of a tensor. If you want to talk about the tensor itself, you would use ##\nabla_a V^b##.
 
  • #67
stevendaryl said:
If ##V^\nu## does not mean a component of vector ##V##

It does. That's not the issue. The issue is that, while you're fine with using the Greek index ##\nu## to indicate a component, you insist on using the Greek index ##\mu## to indicate something else, like a directional derivative.

stevendaryl said:
I think you're defending a convention that causes no end of confusion.

I think you're mistaken about what I am saying. See above.

stevendaryl said:
Doesn't calling it a ##(1,1)## tensor already indicate that?

Sure, if you don't mind writing "##\nabla V##, a (1, 1) tensor" every time. But the whole point of having a notation like ##\nabla_a V^b## is to not have to write all that extra stuff every time, because what the object is is obvious from the notation.

stevendaryl said:
You keep making a distinction where there is no distinction.

No, you keep ignoring the fact that the double meaning you are trying to get away with only works if you are using a coordinate basis. We've already been around this merry-go-round as well.
 
  • #68
stevendaryl said:
I think that the two sides can be reconciled if people do always use roman letters for abstract indices, and use greek letters for concrete indices.

If by "concrete indices" you mean "component indices", then yes, that works. That's basically the convention Wald uses.

However, you have been using Greek letters for things that aren't component indices, like directional derivatives. You have used ##\nabla_\mu V^\nu## to mean, not the ##\mu##, ##\nu## component of the (1, 1) tensor ##\nabla_a V^b##, but the directional derivative in the ##\mu## direction of the vector ##V^\nu##, or more precisely the directional derivative in the direction of the vector ##e_\mu## in some fixed basis of the scalar obtained by taking the ##\nu## component of the vector ##V## in the same basis. The fact that, if the fixed basis chosen is a coordinate basis, those two things turn out to be the same, does not mean they are the same thing in general or that the same notation can always be used to designate both.

A slight abuse of Wald's convention would write what I just described above as ##\left[ \left( e_\mu \right)^a \nabla_a V^b \right]^\nu##. Notice how this makes it clear that the two Greek letters are serving different functions: ##\nu## is a component and ##\mu## is a "which vector" label that designates the direction in which we are taking the directional derivative (and this latter use is not what Wald normally uses Greek indices for, which is why I say this is a slight abuse of his convention). An even more explicit way to write it would be ##\left( e_\mu \right)^a \nabla_a V^b \left( e^\nu \right)_b##, which makes it clear what "taking the ##\nu## component" actually means. (Notice that this last expression is clearly a scalar, making it clear why "components of vectors in a fixed basis are scalars" is true.)
 
  • #69
PeterDonis said:
It does. That's not the issue.

Then ##T^\mu_\nu## is not a tensor or a vector. It's just a number that happened to be formed from a tensor ##T##. ##\nabla_\mu V^\nu## is, for a particular choice of ##\mu## and ##\nu##, just a number.

PeterDonis said:
The issue is that, while you're fine with using the Greek index ##\nu## to indicate a component, you insist on using the Greek index ##\mu## to indicate something else, like a directional derivative.

The index tells you which one. ##V^\nu## tells you which component of ##V##, ##e_\mu## tells you which basis vector, and ##\nabla_\mu## tells you which directional derivative (the one in the direction of basis vector ##e_\mu##).

PeterDonis said:
No, you keep ignoring the fact that the double meaning you are trying to get away with only works if you are using a coordinate basis. We've already been around this merry-go-round as well.

How does it depend on a coordinate basis?

My claim is that if you have a (1,1) tensor ##T##, then it is true both that ##T^\nu_\mu## is the ##\nu##, ##\mu## component of ##T##, and also that it is the ##\nu## component of the vector formed by contracting ##T## with the vector ##e_\mu##. How does that equivalence depend on whether the ##e_\mu## form a coordinate basis?
 
  • #70
stevendaryl said:
##\nabla_\mu V^\nu## is, for a particular choice of ##\mu## and ##\nu##, just a number.

But the notation ##\nabla_\mu V^\nu##, by itself, does not say whether you mean the particular number you get from making a particular choice of values for the indices, or the abstract object that is the (1, 1) tensor itself. You, yourself, complained about that very ambiguity when you said, correctly, that physics notation doesn't make it clear whether you are talking about a vector or the components of a vector. But now you suddenly turn around and say that that notation always means the components? Why are you shifting your ground?

stevendaryl said:
The index tells you which one.

I understand quite well that your choice of index tells you perfectly clearly which one. My point is that it doesn't tell me which one--or most other physics readers. As I noted above, you complained before about physics notation not clearly distinguishing between vectors and their components; the notation you are using here fails to clearly distinguish between components and directional derivatives. I'm not arguing that the usual physics notation is clear; indeed, I have posted several times now describing the advantages of Wald's abstract index notation over the usual physics notation. I just don't see how your preferred notation is an improvement. I don't see how it helps to exchange one confusion for another.

stevendaryl said:
How does it depend on a coordinate basis?

I addressed this a while back, and so did @Orodruin. See posts #13 and #21.
 
  • #71
PeterDonis said:
But the notation ##\nabla_\mu V^\nu##, by itself, does not say whether you mean the particular number you get from making a particular choice of values for the indices, or the abstract object that is the (1, 1) tensor itself.


I thought you said that it always means components of a tensor, rather than the tensor itself. Once again, if it doesn't mean components, then how do you indicate the components of that tensor?

PeterDonis said:
You, yourself, complained about that very ambiguity when you said, correctly, that physics notation doesn't make it clear whether you are talking about a vector or the components of a vector.

It's not an ambiguity if it always means components. Rather, it means one element of an indexed collection of objects.

PeterDonis said:
But now you suddenly turn around and say that that notation always means the components? Why are you shifting your ground?

I'm not shifting my ground.

PeterDonis said:
I understand quite well that your choice of index tells you perfectly clearly which one. My point is that it doesn't tell me which one--or most other physics readers. As I noted above, you complained before about physics notation not clearly distinguishing between vectors and their components; the notation you are using here fails to clearly distinguish between components and directional derivatives.

If there are indices, then you're always talking about one element of an indexed collection of objects. There is a directional derivative for each basis vector.
 
  • #72
PeterDonis said:
I addressed this a while back, and so did @Orodruin. See posts #13 and #21.

I looked at those posts, and they don't give an example of how the equivalence that I'm talking about fails in a non-coordinate basis.

I'm willing to be proved wrong. If ##T## is a (1,1) vector, then ##T(e_\mu)## is a vector. When is it the case that
##T^\nu_\mu \neq (T(e_\mu))^\nu##?
 
Last edited:
  • #73
stevendaryl said:
I thought you said that it always means components of a tensor, rather than the tensor itself.

Where did I say that?
 
  • #74
stevendaryl said:
If there are indices, then you're always talking about one element of an indexed collection of objects.

But if the indices are component indices, the objects in the indexed collection aren't basis vectors. They're components.

stevendaryl said:
There is a directional derivative for each basis vector.

Which doesn't help if the indexes aren't indexes that denote basis vectors.
 
  • #75
stevendaryl said:
If ##T## is a (1,1) ~~vector~~ tensor, then ##T(e_\mu)## is a vector.

See correction above. With the correction, I agree.

stevendaryl said:
When is it the case that
##T^\nu_\mu \neq (T(e_\mu))^\nu##?

Take Schwarzschild spacetime in Schwarzschild coordinates. In the coordinate basis, we have ##e_0 = (1, 0, 0, 0)##. So for a general (1, 1) tensor ##T##, ##T ( e_0 )##, in matrix multiplication form, looks like:

$$
\begin{bmatrix}
T_{00} & T_{01} & T_{02} & T_{03} \\
T_{10} & T_{11} & T_{12} & T_{13} \\
T_{20} & T_{21} & T_{22} & T_{23} \\
T_{30} & T_{31} & T_{32} & T_{33}
\end{bmatrix}
\begin{bmatrix}
1 \\
0 \\
0 \\
0
\end{bmatrix}
=
\begin{bmatrix}
T_{00} \\
T_{10} \\
T_{20} \\
T_{30}
\end{bmatrix}
$$

Or, in the notation @Orodruin used in post #13, we have ##\left( e_0 \right)^\nu = \delta^\nu_0##, so ##\left[ T ( e_0 ) \right]^\nu = T_0{}^\nu##.

But in a non-coordinate, orthonormal basis, we have

$$
\hat{e}_0 = \left( \frac{1}{\sqrt{1 - 2M / r}}, 0, 0, 0 \right)
$$

So ##T ( \hat{e}_0 )## in matrix multiplication form now looks like this:

$$
\begin{bmatrix}
T_{00} & T_{01} & T_{02} & T_{03} \\
T_{10} & T_{11} & T_{12} & T_{13} \\
T_{20} & T_{21} & T_{22} & T_{23} \\
T_{30} & T_{31} & T_{32} & T_{33}
\end{bmatrix}
\begin{bmatrix}
\frac{1}{\sqrt{1 - 2M / r}} \\
0 \\
0 \\
0
\end{bmatrix}
= \frac{1}{\sqrt{1 - 2M / r}}
\begin{bmatrix}
T_{00} \\
T_{10} \\
T_{20} \\
T_{30}
\end{bmatrix}
$$

In other words, we have ##\left[ T ( \hat{e}_0 ) \right]^\nu \neq T_0{}^\nu##. The extra factor in ##\hat{e}_0## makes the two unequal.
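A minimal numerical sketch of this contraction (assuming arbitrary sample values for the matrix of ##T## and taking ##2M/r = 1/2## just for illustration, so the normalization factor is ##\sqrt 2##):

```python
import numpy as np

# Matrix of the (1, 1) tensor T in the coordinate basis (arbitrary sample values)
T = np.arange(16.0).reshape(4, 4)

factor = 1.0 / np.sqrt(1.0 - 0.5)   # 1 / sqrt(1 - 2M/r), with 2M/r = 1/2 assumed for illustration

e0_coord = np.array([1.0, 0.0, 0.0, 0.0])   # coordinate basis vector e_0
e0_hat   = factor * e0_coord                # orthonormal basis vector e-hat_0

# Contract T with each basis vector: [T(e)]^nu = T^nu_mu e^mu
T_e0_coord = T @ e0_coord
T_e0_hat   = T @ e0_hat

print(np.allclose(T_e0_coord, T[:, 0]))         # True: picks out the mu = 0 column of T
print(np.allclose(T_e0_hat, T[:, 0]))           # False: off by the factor 1/sqrt(1 - 2M/r)
print(np.allclose(T_e0_hat, factor * T[:, 0]))  # True
```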
 
  • #76
PeterDonis said:
In other words, we have ##\left[ T ( \hat{e}_0 ) \right]^\nu \neq T_0{}^\nu##. The extra factor in ##\hat{e}_0## makes the two unequal.
This is true only if you are looking for the components of ##T## in the coordinate basis. If you instead wanted the components in the non-coordinate basis, then that is what you need to insert. It makes little sense to insert different bases unless you for some reason need to use mixed bases to express your tensor.
 
  • #77
Orodruin said:
If you instead wanted the components in the non-coordinate basis

In post #13, you said:

Orodruin said:
I will agree in the general case, but it really does not matter as long as we are dealing with holonomic bases. Since the components are ##(e_\mu)^\nu = \delta_\mu^\nu##, it is indeed the case that ##\nabla_{e_\nu} = \delta^\mu_\nu \nabla_\mu = \nabla_\nu##.

The non-coordinate basis is not holonomic. Are you now disagreeing with yourself?
 
  • Like
Likes vanhees71
  • #78
Orodruin said:
This is true only if you are looking for the components of T in the coordinate basis. If you instead wanted the components in the non-coordinate basis, then that is what you need to insert.

What you are calling "the components in the non-coordinate basis" are components in local inertial coordinates, where the coordinate basis vectors are orthonormal. But those coordinates are only local, and in them covariant derivatives are identical to partial derivatives so none of the issues discussed in this thread even arise.
 
  • Like
Likes vanhees71
  • #79
PeterDonis said:
In post #13, you said:
The non-coordinate basis is not holonomic. Are you now disagreeing with yourself?
In that post I believe I assumed that the index was referring to the coordinate basis. If the index instead refers to a general basis, the argument works regardless of the basis. All you really need is a basis of the tangent space ##e_a## and its dual ##e^a##. The components ##V^a## of the tangent vector ##V## are then defined through the relation ##V = V^a e_a##, which also means that ##e^a(V) = e^a(V^b e_b) = V^b e^a(e_b) = V^b \delta^a_b = V^a##, and so we can extract the components of ##V## by passing ##V## as the argument of ##e^a## (with the appropriate generalisation to any type of tensor). Whether ##e_a## is a coordinate basis or not is not relevant to this argument.
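As a small numeric sketch of that component-extraction recipe (assuming an arbitrary non-orthonormal basis of ##\mathbb R^3##, chosen only for illustration): the dual basis is obtained by inverting the matrix whose columns are the ##e_a##, and applying ##e^a## to ##V## returns exactly the coefficients in ##V = V^a e_a##.

```python
import numpy as np

# Columns are the basis vectors e_a of R^3 (arbitrary, non-orthonormal, assumed for illustration)
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

# Rows of E_dual are the dual basis covectors e^a, fixed by e^a(e_b) = delta^a_b
E_dual = np.linalg.inv(E)

V = np.array([2.0, -1.0, 4.0])   # some vector, given in the ambient components

components = E_dual @ V          # V^a = e^a(V)

print(np.allclose(E_dual @ E, np.eye(3)))   # True: e^a(e_b) = delta^a_b
print(np.allclose(E @ components, V))       # True: V = V^a e_a reconstructs the vector
```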

PeterDonis said:
What you are calling "the components in the non-coordinate basis" are components in local inertial coordinates, where the coordinate basis vectors are orthonormal.
No, not necessarily. It is true for any basis, not only coordinate bases or even normalised or orthogonal bases (now, why you would pick such a basis is a different question). It is perfectly possible to find basis fields that are not the bases of any local coordinate system (e.g., by picking linearly independent but non-commutative fields as the basis fields).

Edit:
A good example of an orthonormal non-coordinate basis would be the normalised polar basis in ##\mathbb R^2##. We would have
$$
e_r = \partial_r, \qquad e_\theta = \frac 1r \partial_\theta
$$
leading to ##[e_r,e_\theta] = [\partial_r , (1/r) \partial_\theta] = -\frac{1}{r^2} \partial_\theta \neq 0##. The corresponding dual would be
$$
e^r = dr, \qquad e^\theta = r\, d\theta.
$$
Since ##e_r## and ##e_\theta## do not commute, they cannot be the tangent fields of local coordinate functions. In this case clearly also ##\nabla_{e_a} e_b \neq 0## in general.
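A quick symbolic check of that commutator (a sketch using sympy, applying both fields to an arbitrary test function ##f(r, \theta)##):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
f = sp.Function('f')(r, theta)

# e_r = d/dr and e_theta = (1/r) d/dtheta, acting on a test function
e_r     = lambda g: sp.diff(g, r)
e_theta = lambda g: sp.diff(g, theta) / r

commutator = sp.simplify(e_r(e_theta(f)) - e_theta(e_r(f)))
print(commutator)                                         # -Derivative(f(r, theta), theta)/r**2
print(sp.simplify(commutator + sp.diff(f, theta)/r**2))   # 0, i.e. [e_r, e_theta] = -(1/r^2) d/dtheta
```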
 
Last edited:
  • Like
Likes etotheipi
  • #80
Orodruin said:
It is perfectly possible to find basis fields that are not the bases of any local coordinate system (e.g., by picking linearly independent but non-commutative fields as the basis fields).

The orthonormal basis in Schwarzschild spacetime that I used is such a non-holonomic (i.e., the basis vector fields don't commute) basis. That was my point in using it.

Perhaps I should have explicitly included all of the basis vector fields, although they can be read off by inspection from the standard Schwarzschild line element so I had assumed it was clear which ones I was referring to:

$$
\hat{e}_0 = \frac{1}{\sqrt{1 - 2M / r}} \partial_t
$$

$$
\hat{e}_1 = \sqrt{1 - \frac{2M}{r}} \partial_r
$$

$$
\hat{e}_2 = \frac{1}{r} \partial_\theta
$$

$$
\hat{e}_3 = \frac{1}{r \sin \theta} \partial_\varphi
$$
 
  • #81
Indeed, and it has a corresponding dual basis ##\hat e^a##; together they can be used to extract the components of any tensor in that basis. This is not restricted to local inertial coordinates.
 
  • Like
Likes cianfa72
  • #82
Orodruin said:
Since ##e_r## and ##e_\theta## do not commute, they cannot be the tangent fields of local coordinate functions.

Orodruin said:
Indeed, and it has a corresponding dual basis ##\hat e^a## that together can be used to extract the components of any tensor in that basis. It is not restricted to local inertial coordinates.

Hm. I think I see what was confusing me. The required commutation property is not that the basis vectors have to commute, but that the covariant derivative has to commute with contraction, since the contraction operation is what "extracting the components of the tensor" involves. AFAIK that commutation property is always true.

I'll elaborate (sorry if this is belaboring the obvious) by restating the original issue: we have a notation ##\nabla_\mu V^\nu## that can have at least two different meanings. Using Wald's abstract index notation, the two meanings are:

(1) ##\left[ \left( e_\mu \right)^a \nabla_a V^b \right]^\nu##, i.e., the ##\nu## component of the vector obtained by taking the directional derivative of the vector ##V## in the direction of the vector ##e_\mu##;

(2) ##\left( \nabla_a V^b \right)_\mu{}^\nu##, i.e., the ##\mu##, ##\nu## component of the (1, 1) tensor obtained by taking the covariant derivative of the vector ##V##.

Writing out the "taking the component" operations explicitly, we have:

(1) ##\left[ \left( e_\mu \right)^a \nabla_a V^b \right] \left[ \left( e^\nu \right)_b \right]##

(2) ##\left( \nabla_a V^b \right) \left[ \left( e_\mu \right)^a \left( e^\nu \right)_b \right]##

As long as the ##\nabla## operator commutes with contraction, these will be equal, since we just have to swap the contraction with ##e_\mu## and the ##\nabla## operation on ##V##.
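A small numeric sketch of that equality (assuming arbitrary sample values for the components of ##\nabla_a V^b##, ##e_\mu## and ##e^\nu## in some basis): leaving aside the subtler point about ##\nabla## commuting with contraction, the two final contractions are just sums and can be grouped either way.

```python
import numpy as np

rng = np.random.default_rng(0)

gradV = rng.normal(size=(4, 4))   # stand-in for the components of the (1, 1) tensor nabla V
e_mu  = rng.normal(size=4)        # the vector e_mu we differentiate along
e_nu  = rng.normal(size=4)        # the dual vector e^nu used to take the nu component

# (1): contract with e_mu first (directional derivative), then take the nu component
way1 = (e_mu @ gradV) @ e_nu
# (2): contract the (1, 1) tensor with e_mu and e^nu in one go
way2 = np.einsum('a,ab,b->', e_mu, gradV, e_nu)

print(np.isclose(way1, way2))     # True: the order in which the contractions are done does not matter
```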
 
  • Like
Likes vanhees71
  • #83
Orodruin said:
Since ##e_r## and ##e_\theta## do not commute, they cannot be the tangent fields of local coordinate functions.

This does raise another question. I understand that there is a 1-1 correspondence between tangent vectors and directional derivatives (MTW, for example, discusses this in some detail in a fairly early chapter). But doesn't that require that the tangent vectors be tangent fields of local coordinate functions? If so, how would we interpret directional derivatives in the direction of a vector that is part of a non-holonomic set of basis vector fields?
 
  • #84
PeterDonis said:
But doesn't that require that the tangent vectors be tangent fields of local coordinate functions?
Not really. The tangent space is spanned by linear combinations of those tangent fields (the holonomic basis), but nothing stops you from introducing a different set of linear combinations of those fields that spans the tangent space at each point yet is not the holonomic basis of any set of local coordinates. For any given vector field, you can of course find a coordinate system where it is the tangent of a local coordinate function.

PeterDonis said:
If so, how would we interpret directional derivatives in the direction of a vector that is part of a non-holonomic set of basis vector fields?
So, if I understand the question, you're asking how we should interpret something like ##e_a \phi##, where ##e_a## is a basis vector of some set of basis vectors on the tangent space (not necessarily holonomic). If we make things easier for ourselves and just consider the case where this vector is of the form ##f \partial_a##, where ##f## is some scalar function, then ##e_a \phi = f \partial_a \phi## would be the rate of change in ##\phi## if you go in the direction ##e_a##, which is the same direction as specified by the coordinate direction ##\partial_a##, but a factor ##f## faster.

To take a more concrete example, consider ##e_\theta## of the polar coordinates on ##\mathbb R^2##. While ##\partial_\theta \phi## represents the change in ##\phi## per change in the coordinate ##\theta##, ##e_\theta\phi## represents the change in ##\phi## per physical distance in the coordinate direction (since ##e_\theta## is normalised), but generally nothing stops you from defining any direction.
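As a concrete sketch (assuming a sample scalar field ##\phi = r^2 \sin\theta## on ##\mathbb R^2##, chosen only for illustration):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
phi = r**2 * sp.sin(theta)               # sample scalar field, assumed only for illustration

d_theta_phi = sp.diff(phi, theta)        # partial_theta phi: change per unit of the coordinate theta
e_theta_phi = sp.diff(phi, theta) / r    # e_theta phi = (1/r) partial_theta phi: change per unit distance

print(d_theta_phi)   # r**2*cos(theta)
print(e_theta_phi)   # r*cos(theta)
```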

It should also be noted that any single field can be made into a coordinate direction (just take the flow lines of that field and label them with ##n-1## other coordinates), but that a full set of basis fields cannot necessarily form a holonomic basis together.
 
  • Like
Likes cianfa72 and PeterDonis
  • #85
PeterDonis said:
Yes, that's correct; the common notation ##\nabla_\mu## is really a shorthand for saying that ##\nabla## is an operator that takes a (p, q) tensor and produces a (p, q + 1) tensor, i.e., it "adds" one lower index. As you note, it makes more sense to put the indexes on the entire expression ##\nabla V## instead of on the ##\nabla## and the ##V## separately.
But with such purism all the "magic" of the index calculus is lost. It's just convenient notation, and I don't think that it's very problematic.
 
  • #86
stevendaryl said:
That might be true. I think that the usual notation is pretty screwed up.
That doesn't make any sense to me. To me, ##\nabla V## is a 1,1 tensor, and ##\nabla_\mu V^\nu## is not a tensor at all, but a COMPONENT of a tensor.

To me, ##V^\nu## is not a vector, it is a component of a vector.
##\nabla V## is a (1,1) tensor and ##\nabla_{\mu} V^{\nu}## are the tensor components. In the context we discuss here, these are the components with respect to the holonomic coordinate basis and its dual basis, though some posters also discuss non-holonomic bases, which of course have their merits too (particularly when using orthonormal tetrads).
 
  • Like
Likes dextercioby
  • #87
In this thread we talked about Wald's abstract index notation and Penrose's abstract index notation. Are they actually the same?

etotheipi said:
I think the point being made here is that the horizontal positioning is important [well, for any not totally-symmetric tensor :wink:] and of course cannot be ignored. In slot notation, the covariant derivative of a type ##(k,l)## tensor ##T## along a vector ##U## is $$\nabla_{U} T := \nabla T(\, \cdot \, , \dots, \, \cdot \, , U)$$ with the ##U## in the final slot.

Is it a usual convention to "add" the slot to be filled with the vector field ##U## at the end of the tensor map ##\nabla T(\, \cdot \, , \dots, \, \cdot \, , \cdot \,)##?
In other words, in slot notation do we list first the set of covector slots (instances of ##V^*##) and then the set of vector slots (instances of ##V##)?
 
Last edited:
  • #88
I think I know this notation from Misner, Thorne, Wheeler, Gravitation.
 
  • #89
vanhees71 said:
I think I know this notation from Misner, Thorne, Wheeler, Gravitation.
Does your statement apply to the following part of my previous post?
cianfa72 said:
Is it a usual convention to "add" the slot to be filled with the vector field ##U## at the end of the tensor map ##\nabla T(\, \cdot \, , \dots, \, \cdot \, , \cdot \,)##?
In other words, in slot notation do we list first the set of covector slots (instances of ##V^*##) and then the set of vector slots (instances of ##V##)?
 
  • #90
Yes! MTW is pretty good at explaining the abstract index-free notation to physicists.
 
