Questions about the Riemann Tensor

  • #1
space-time
We know how the curvature of a vector V or a manifold is depicted by the following formula:

[itex]dx^\mu dx^\nu [\nabla_\nu , \nabla_\mu] V[/itex]

Now we know that the commutator is simply the Riemann tensor. My question here is:

How do you actually apply that vector V to the Riemann tensor? Here is an example of what I mean:

I know that the covariant derivative [itex]\nabla_j (V^i e_i) = \left[ \frac{\partial V^i}{\partial x^j} + \Gamma^i_{kj} V^k \right] e_i[/itex]

Now having established this, would I distribute the vector [itex]V^i[/itex] to the terms of my Riemann tensor, or would I multiply the terms of the tensor by [itex]V^k[/itex]?

I also have one more question:

What extra information does the Riemann tensor tell you that the Ricci tensor does not (from a physical standpoint)? I know that the Ricci is a contraction of the Riemann and that the Riemann has a lot more elements than the Ricci, but what physical meaning do all of those other terms possess that is not present in the Ricci tensor?
 
  • #2
space-time said:
We know how the curvature of a vector V or a manifold is depicted by the following formula:

[itex]dx^\mu dx^\nu [\nabla_\nu , \nabla_\mu] V[/itex]

Now we know that the commutator is simply the Riemann tensor. My question here is:

How do you actually apply that vector V to the Riemann tensor? Here is an example of what I mean:

I know that the covariant derivative [itex]\nabla_j (V^i e_i) = \left[ \frac{\partial V^i}{\partial x^j} + \Gamma^i_{kj} V^k \right] e_i[/itex]

Now having established this, would I distribute the vector [itex]V^i[/itex] to the terms of my Riemann tensor, or would I multiply the terms of the tensor by [itex]V^k[/itex]?


I also have one more question:

What extra information does the Riemann tensor tell you that the Ricci tensor does not (from a physical standpoint)? I know that the Ricci is a contraction of the Riemann and that the Riemann has a lot more elements than the Ricci, but what physical meaning do all of those other terms possess that is not present in the Ricci tensor?

I'm not sure if this is the answer you're looking for, but the full Riemann tensor [itex]R^i_{jkl}[/itex] tells us the result of parallel-transporting a vector around a closed loop: If you make a parallelogram by having one side described by the displacement vector [itex]U^i[/itex] and the other side described by the displacement vector [itex]V^i[/itex], and you transport a third vector [itex]W^i[/itex] around the loop, it will be altered slightly by the trip, by an amount:

[itex]\delta W^i \propto R^i_{jkl} U^j V^k W^l[/itex]
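To make this concrete, here is a minimal sympy sketch (my own toy example, not from the original post): on the unit 2-sphere the only independent mixed-index Riemann component is ##R^\theta{}_{\phi\theta\phi} = \sin^2\theta##, and contracting with two small displacement vectors and the vector being transported gives ##\delta W##. The exact sign and index ordering depend on conventions; the formula above only claims proportionality.

[code]
import sympy as sp

theta, phi = sp.symbols('theta phi')
eps = sp.Rational(1, 100)            # edge length of the small loop

# Unit 2-sphere in (theta, phi). Hard-coded Riemann components R^i_{jkl};
# the only independent one is R^theta_{phi theta phi} = sin^2(theta),
# the others follow from antisymmetry in the last index pair.
n = 2
R = [[[[sp.Integer(0)]*n for _ in range(n)] for _ in range(n)] for _ in range(n)]
R[0][1][0][1] = sp.sin(theta)**2     # R^theta_{phi theta phi}
R[0][1][1][0] = -sp.sin(theta)**2
R[1][0][1][0] = sp.Integer(1)        # R^phi_{theta phi theta}
R[1][0][0][1] = -sp.Integer(1)

U = [eps, 0]     # one side of the loop: displacement along theta
V = [0, eps]     # other side of the loop: displacement along phi
W = [1, 0]       # vector carried around the loop (points along theta)

# delta W^i ~ R^i_{jkl} U^j V^k W^l  (sign/ordering are convention-dependent)
dW = [sp.simplify(sum(R[i][j][k][l]*U[j]*V[k]*W[l]
                      for j in range(n) for k in range(n) for l in range(n)))
      for i in range(n)]
print(dW)        # [0, 1/10000]: W picks up a small phi-component
[/code]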
 
  • #3
stevendaryl said:
I'm not sure if this is the answer you're looking for, but the full Riemann tensor [itex]R^i_{jkl}[/itex] tells us the result of parallel-transporting a vector around a closed loop: If you make a parallelogram by having one side described by the displacement vector [itex]U^i[/itex] and the other side described by the displacement vector [itex]V^i[/itex], and you transport a third vector [itex]W^i[/itex] around the loop, it will be altered slightly by the trip, by an amount:

[itex]\delta W^i \propto R^i_{jkl} U^j V^k W^l[/itex]

Thank you, but this was not quite what I meant. Perhaps the attached Word document will better explain what I am asking; please take a look at it if you will.


View attachment riemann tensor question.docx
 
  • #4
space-time said:
but what physical meaning do all of those other terms possess that is not present in the Ricci tensor?

Google for "Ricci decomposition"
 
  • #5
If the Ricci tensor vanishes for D>3, the spacetime can still support gravitational waves, because the Riemann tensor does not necessarily vanish (but for D=3 it does, see e.g. Carlip's book on 2+1 QG). If the Riemann tensor vanishes, this is no longer true.
 
  • #6
space-time said:
Thank you, but this was not quite what I meant. Perhaps the attached Word document will better explain what I am asking; please take a look at it if you will.


View attachment 72369

OpenOffice isn't opening this document (though I haven't updated it in a while), so I'm not sure how to view it.

Without the document (which I can't easily open), I can't really make heads or tails of your original question either. I don't believe a vector can be said to curve (at least not intrinsically); certainly a line can't, since it doesn't have enough dimensions.

As far as what the Riemann tells you that the Ricci doesn't - suppose you are in empty space some ways away from a massive body. The Ricci will be zero, so it will basically only tell you you are in a vacuum. This tells you nothing about the local tidal forces, which are the measurable aspects of gravity at a point. Basically the Ricci tells you nothing about the gravity due to a distant source.

The "electric" components of the Riemann, ##R_{xtxt}, R_{ytyt}, R_{ztzt}## will give you the tidal forces at your location. These are generally the most significant components of the Riemann. For instance ignoring time dilation and integrating the "tidal forcess" is one way you might get the Newtonian gravity - though it's only accurate to first order in (M/r), because time dilation effects are of second order.

The list isn't complete unless you do a principal axis transformation - the electric components of the Riemann form a 3x3 symmetric matrix, which you can diagonalize by a principal axis transformation.

The "magnetic" components of the Riemann, ##R_{yzxt}, R_{xyyt}, R_{xyzt}## will tell you about frame dragging effects due to rotation of the massive body. The list should have 8 elements to be complete, it's a 3x3 traceless matrix. Note that we've taken the "dual" of the front part of the Riemann, i.e. the dual of {xt} is {yz}. Modulo sign issues, which I might have messed up.

There's a last component of the Riemann, the topogravitic component, ##R_{xyxy}, R_{xzxz}, R_{yzyz}##, which tells you about "spatial curvature". Here we take the dual of both the front and the back of the Riemann. It's another symmetric 3x3 matrix. I believe that in a vacuum the topogravitic tensor is equal to the electrogravitic tensor, but I'd have to double check to be positive.


This is known as the "Bel decomposition" and by various other names. There's a sketchy amount of info on it on Wikipedia, but not a lot of other info on the web I've found. MTW's "Gravitation" has a long section on it, but not under the name "Bel decomposition" - if you look for "electric part of the Riemann" in MTW, you might have more luck.
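As a concrete, low-tech check of the tidal-force paragraph above, here is a minimal sympy sketch of the Newtonian side only: the tidal tensor of the point-mass potential has the stretch/squeeze/traceless pattern described. The statement that this matches the leading-order electric part of the Schwarzschild Riemann is the standard weak-field result; the code itself is just my illustration, not from the thread.

[code]
import sympy as sp

G, M, x, y, z = sp.symbols('G M x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = -G*M/r                       # Newtonian potential of a point mass

coords = (x, y, z)
# Newtonian tidal tensor T_ij = d^2 Phi / dx^i dx^j; the relative
# acceleration of nearby free particles is  a^i = -T_ij (delta x)^j
T = sp.Matrix(3, 3, lambda i, j: sp.diff(Phi, coords[i], coords[j]))

# Evaluate at a point on the x-axis, so x is the radial direction
T_radial = T.subs({y: 0, z: 0}).applyfunc(sp.simplify)
print(T_radial)
# diag(-2*G*M/x**3, G*M/x**3, G*M/x**3): radial stretching, transverse
# squeezing, and traceless -- the same pattern as the "electric" (tidal)
# part of the Riemann seen by a static observer outside a mass.
[/code]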
 
  • #7
space-time said:
Thank you, but this was not quite what I meant. Perhaps the attached Word document will better explain what I am asking; please take a look at it if you will.

I think you're just asking about how to join the indices when you do repeated applications of the covariant derivative. It's kind of a mess and hard to do without making mistakes. But here's what I think is the correct answer:

[itex]\nabla_\nu V[/itex] produces another vector. Call it [itex]W[/itex]. Then we can write out the components of [itex]W[/itex] as follows:

[itex]W^\alpha = \partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta[/itex]

We're going to need [itex]W^\beta[/itex] as well, so I'm going to rewrite some indices to get:

[itex]W^\beta= \partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda[/itex]

Now, we want to compute [itex]\nabla_\mu W[/itex]. Call that [itex]Z[/itex]. Its components are given by:

[itex]Z^\alpha = \partial_\mu W^\alpha + \Gamma^\alpha_{\mu \beta} W^\beta[/itex]
[itex] = \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta)
+ \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)[/itex]

So we have:

[itex](\nabla_\mu \nabla_\nu V)^\alpha = \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta)
+ \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)[/itex]

If we did it in the opposite order, we would have gotten:

[itex](\nabla_\nu \nabla_\mu V)^\alpha = \partial_\nu (\partial_\mu V^\alpha + \Gamma^\alpha_{\mu \beta} V^\beta)
+ \Gamma^\alpha_{\nu \beta} (\partial_\mu V^\beta+ \Gamma^\beta_{\mu \lambda} V^\lambda)[/itex]

Then we subtract the two to get
[itex]([\nabla_\mu,\nabla_\nu] V)^\alpha = ...[/itex]

Hopefully, all the derivatives of [itex]V[/itex] cancel, because the result is supposed to be a tensor operating on [itex]V[/itex].
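For what it's worth, here is a minimal sympy sketch (my own check on the unit 2-sphere, not part of the original post) that carries out exactly this bookkeeping for an arbitrary vector field and confirms that every derivative of ##V## drops out, leaving ##([\nabla_\mu , \nabla_\nu] V)^\alpha = R^\alpha{}_{\beta\mu\nu} V^\beta##:

[code]
import sympy as sp

# Check on the unit 2-sphere that ([nabla_mu, nabla_nu] V)^a = R^a_{b mu nu} V^b
# for an arbitrary vector field V.
theta, phi = sp.symbols('theta phi')
x = [theta, phi]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])    # metric of the unit 2-sphere
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Arbitrary vector field V^a(theta, phi)
V = [sp.Function('V0')(theta, phi), sp.Function('V1')(theta, phi)]

# First covariant derivative: T[a][b] = (nabla_b V)^a
T = [[sp.diff(V[a], x[b]) + sum(Gamma[a][b][c]*V[c] for c in range(n))
      for b in range(n)] for a in range(n)]

# (nabla_mu nabla_nu V)^a, treating T as the (1,1) tensor it really is.
# The last correction term is symmetric in (mu, nu), so it drops out of the
# commutator; that is why skipping it, as in the post, changes nothing there.
def DDV(a, nu, mu):
    return (sp.diff(T[a][nu], x[mu])
            + sum(Gamma[a][mu][c]*T[c][nu] for c in range(n))
            - sum(Gamma[c][mu][nu]*T[a][c] for c in range(n)))

# Riemann tensor R^a_{b mu nu} built from the Christoffel symbols
def Riem(a, b, mu, nu):
    return (sp.diff(Gamma[a][nu][b], x[mu]) - sp.diff(Gamma[a][mu][b], x[nu])
            + sum(Gamma[a][mu][c]*Gamma[c][nu][b]
                  - Gamma[a][nu][c]*Gamma[c][mu][b] for c in range(n)))

for a in range(n):
    for mu in range(n):
        for nu in range(n):
            # ([nabla_mu, nabla_nu] V)^a  versus  R^a_{b mu nu} V^b
            commutator = DDV(a, nu, mu) - DDV(a, mu, nu)
            contraction = sum(Riem(a, b, mu, nu)*V[b] for b in range(n))
            assert sp.simplify(sp.expand(commutator - contraction)) == 0

print("All partial derivatives of V cancel; the commutator is R^a_{b mu nu} V^b")
[/code]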
 
  • #8
stevendaryl said:
I think you're just asking about how to join the indices when you do repeated applications of the covariant derivative. It's kind of a mess and hard to do without making mistakes. But here's what I think is the correct answer:

[itex]\nabla_\nu V[/itex] produces another vector. Call it [itex]W[/itex]. Then we can write out the components of [itex]W[/itex] as follows:

[itex]W^\alpha = \partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta[/itex]

We're going to need [itex]W^\beta[/itex] as well, so I'm going to rewrite some indices to get:

[itex]W^\beta= \partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda[/itex]

Now, we want to compute [itex]\nabla_\mu W[/itex]. Call that [itex]Z[/itex]. Its components are given by:

[itex]Z^\alpha = \partial_\mu W^\alpha + \Gamma^\alpha_{\mu \beta} W^\beta[/itex]
[itex] = \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta)
+ \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)[/itex]

So we have:

[itex](\nabla_\mu \nabla_\nu V)^\alpha = \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta)
+ \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)[/itex]

If we did it in the opposite order, we would have gotten:

[itex](\nabla_\nu \nabla_\mu V)^\alpha = \partial_\nu (\partial_\mu V^\alpha + \Gamma^\alpha_{\mu \beta} V^\beta)
+ \Gamma^\alpha_{\nu \beta} (\partial_\mu V^\beta+ \Gamma^\beta_{\mu \lambda} V^\lambda)[/itex]

Then we subtract the two to get
[itex]([\nabla_\mu,\nabla_\nu] V)^\alpha = ...[/itex]

Hopefully, all the derivatives of [itex]V[/itex] cancel, because the result is supposed to be a tensor operating on [itex]V[/itex]

Usually, if we are going to use (slightly abused) index notation like this (e.g. ##\nabla_\nu##), the result of a covariant derivative ##\nabla_\mu## of a vector is a (1,1) tensor. As you can see, you have a free index ##\nu## on the right-hand side of your first equation, but you don't have that index on the left-hand side.
 
  • #9
Matterwave said:
Usually, if we are going to use (slightly abused) index notation like this (e.g. ##\nabla_\nu##), the result of a covariant derivative ##\nabla_\mu## of a vector is a (1,1) tensor. As you can see, you have a free index ##\nu## on the right-hand side of your first equation, but you don't have that index on the left-hand side.

As far as the notation, I was just using the original poster's notation. But for a specific index [itex]\nu[/itex], the operator [itex]\nabla_\nu[/itex] takes vectors to vectors. It means the same thing as [itex]\nabla_{e_\nu}[/itex]. I don't think that's an abuse of notation.

People often write [itex]\nabla_U(V)[/itex], so [itex]\nabla_\nu[/itex] is the special case with [itex]U = e_\nu[/itex], a basis vector.
 
  • #10
stevendaryl said:
As far as the notation, I was just using the original poster's notation. But for a specific index [itex]\nu[/itex], the operator [itex]\nabla_\nu[/itex] takes vectors to vectors. It means the same thing as [itex]\nabla_{e_\nu}[/itex]. I don't think that's an abuse of notation.

People often write [itex]\nabla_U(V)[/itex], so [itex]\nabla_\nu[/itex] is the special case with [itex]U = e_\nu[/itex], a basis vector.

My own personal opinion is that it's confusing to let [itex]V^\alpha[/itex] mean a 4-vector, as opposed to a component of a 4-vector. To me, if you mean the 4-vector, then it's clearer to either just use [itex]V[/itex] (no indices at all--but then it's hard to distinguish between a vector, a scalar or a tensor), or to write it as [itex]V^\alpha e_\alpha[/itex].
 
  • #11
stevendaryl said:
As far as the notation, I was just using the original poster's notation. But for a specific index [itex]\nu[/itex], the operator [itex]\nabla_\nu[/itex] takes vectors to vectors. It means the same thing as [itex]\nabla_{e_\nu}[/itex]. I don't think that's an abuse of notation.

People often write [itex]\nabla_U(V)[/itex], so [itex]\nabla_\nu[/itex] is the special case with [itex]U = e_\nu[/itex], a basis vector.

This seems like some pretty nonstandard notation to me, but maybe I just haven't read the references you've read. :)

From the notation that I'm familiar with ##\nabla_\nu V^\alpha## would be a (1,1) tensor, it would just be a more convenient way of writing ##(\nabla V)_\nu^{~~\alpha}##.
 
  • #12
Matterwave said:
This seems like some pretty nonstandard notation to me, but maybe I just haven't read the references you've read. :)

From the notation that I'm familiar with ##\nabla_\nu V^\alpha## would be a (1,1) tensor, it would just be a more convenient way of writing ##(\nabla V)_\nu^{~~\alpha}##.

I would not say that [itex](\nabla V)_\nu^{~~\alpha}[/itex] is a (1,1) tensor, I would say it is a component of a (1,1) tensor. And it is exactly the same thing as what I was writing as [itex](\nabla_\nu V)^\alpha[/itex]

(The parenthesis reminds me that it's the [itex]\alpha[/itex] component of the vector [itex]\nabla_\nu V[/itex] rather than the results of letting [itex]\nabla_\nu[/itex] operate on [itex]V^\alpha[/itex])

[itex]\nabla V[/itex] (no indices) is a (1,1) tensor. But then if you contract it with a vector [itex]U[/itex] it becomes a vector. That contraction is written [itex]\nabla_U V[/itex]. In the specific case where [itex]U = e_\nu[/itex], a basis vector, we can write it as:

[itex]\nabla_\nu V[/itex]

In components,

[itex](\nabla_\nu V)^\alpha = (\nabla V)_\nu^{~~\alpha}[/itex]

So I don't think that we're really using different notations. We're using different conventions. A lot of physicists use the convention that an expression such as [itex]T_\beta^{~~\alpha}[/itex] means a tensor. I think that's sloppy. It's a component of a tensor [itex]T[/itex], but it is also a component of a vector (the result of contracting [itex]T[/itex] with the basis vector [itex]e_\beta[/itex]).
 
  • #13
stevendaryl said:
I would not say that [itex](\nabla V)_\nu^{~~\alpha}[/itex] is a (1,1) tensor, I would say it is a component of a (1,1) tensor.

Matterwave is using abstract index notation, not component notation, so what he said is perfectly correct. I would suggest reading chapter 2 of Wald for an explanation of abstract index notation. But to be fair, Matterwave should have kept to Wald's convention for abstract indices, which distinguishes them from component indices; most authors don't do this either, so I agree with you that outside of Wald's book the notation can be very confusing.

stevendaryl said:
In the specific case where [itex]U = e_\nu[/itex], a basis vector, we can write it as:

[itex]\nabla_\nu V[/itex]

No you can't. I'm going to use Latin for tetrad indices and Greek for Lorentz indices. Then what you've written is ##\nabla_{e_a}V = (e_a)^{\nu}\nabla_{\nu}V##, which is definitely not the same thing as ##\nabla_{\nu}V##.

The latter is a (1,1) 4-tensor whereas the former is simply a 4-vector.

stevendaryl said:
It's a component of a tensor [itex]T[/itex], but it is also a component of a vector (the result of contracting [itex]T[/itex] with the basis vector [itex]e_\beta[/itex]).

Again, ##\nabla_{\nu}V## is a (1,1) 4-tensor, it is not a 4-vector. It is not the result of taking the covariant derivative of ##V## along a basis vector ##e_a##, which would instead give a 4-vector. You're confusing yourself because you're using Lorentz indices where you should be using tetrad indices.
 
  • #14
WannabeNewton said:
Again, ##\nabla_{\nu}V## is a (1,1) 4-tensor, it is not a 4-vector. It is not the result of taking the covariant derivative of ##V## along a basis vector ##e_a##, which would instead give a 4-vector. You're confusing yourself because you're using Lorentz indices where you should be using tetrad indices.

I'm not convinced that you're right here. I didn't say anything about tetrads. [itex]e_\nu[/itex] means a basis vector, where [itex]\nu[/itex] ranges from 0 to 3. My reference is Misner, Thorne and Wheeler's "Gravitation", where it is clearly written (page 258 of my edition, equation 10.12):

[itex]\nabla_{e_\beta} = \nabla_\beta[/itex] (def of [itex]\nabla_\beta[/itex])

Which is exactly what I said (with [itex]\beta[/itex] replaced by [itex]\nu[/itex]).

That to me seems to be saying "[itex]\nabla_\nu V[/itex] is the result of taking the covariant derivative of [itex]V[/itex] along a basis vector [itex]e_\nu[/itex]".

My inclination is to believe MTW. It's possible that notation has changed in recent years--MTW is an old book.
 
  • #15
WannabeNewton said:
Again, ##\nabla_{\nu}V## is a (1,1) 4-tensor, it is not a 4-vector.

After consulting with MTW, I'm 95% sure that, at least as of the 70s, or whenever the book was written, [itex]\nabla_\nu V[/itex] meant [itex]\nabla_{e_\nu} V[/itex], which is a vector. So I'm not exactly sure what you mean by saying [itex]\nabla_\nu V[/itex] is a (1,1) tensor. I would say that [itex]\nabla V[/itex] is a (1,1) tensor, since it takes a vector [itex]U[/itex] and returns another vector, [itex]\nabla_U V[/itex], which is the change of [itex]V[/itex] along the vector [itex]U[/itex].
 
  • #16
stevendaryl said:
After consulting with MTW, I'm 95% sure that, at least as of the 70s, or whenever the book was written, [itex]\nabla_\nu V[/itex] meant [itex]\nabla_{e_\nu} V[/itex], which is a vector.

MTW tend not to use ##\nabla## when they are writing component equations (i.e., equations with indexes); they use it when they are writing "abstract geometric" equations. In that context, yes, a subscript on ##\nabla## refers to a specific vector along which the derivative is being taken. But I think notation has indeed changed since then (for example, I haven't seen other authors pick up on MTW's "abstract geometric" notation).

In modern component notation (the notation WBN is using--for example, Wald uses this notation, IIRC), the object just referred to above would be written as ##U^{\mu} \nabla_{\mu} V^{\nu}##, where the index on the ##\nabla## is a component index (and is contracted with the index on ##U##, so this is the directional derivative of ##V## along ##U##). MTW would use the "semicolon" notation to write this in components, so it would be ##U^{\mu} V^{\nu}{}_{; \mu}##.

stevendaryl said:
So I'm not exactly sure what you mean by saying [itex]\nabla_\nu V[/itex] is a (1,1) tensor.

In modern component notation, this object (the 1, 1 tensor) would be written ##\nabla_{\mu} V^{\nu}## (notice that there is no index contracting with the index on the ##\nabla##, so both component indexes are free). In MTW's "semicolon" component notation, it would be ##V^{\nu}{}_{; \mu}## (again, with no index contracting the ##\mu## so it is a free index).

stevendaryl said:
I would say that [itex]\nabla V[/itex] is a (1,1) tensor

This is how the object I just wrote above in the two different component notations (modern and MTW "semicolon") would be written in MTW's "abstract geometric" notation.
 
  • #17
I haven't really had a chance to read this thread, and I am on my way out right now, but it seems that I am with stevendaryl on this. For example, even if non-standard, for me ##\nabla_\mu V^\nu## is the scalar ##e_\mu \left( V^\nu \right)##. Here, ##e_\mu## could be part of any basis: orthonormal, coordinate, or other. For a coordinate basis, ##e_\mu \left( V^\nu \right)## is a partial derivative of a scalar,

$$e_\mu \left( V^\nu \right) = \frac{\partial V^\nu}{\partial x^\mu} .$$

Again, it is a personal preference, but I have never liked Penrose's abstract index notation.
 
  • #18
Using abstract index notation, ala Wald (who explains the notation if it's not familiar - it's basically component notation except the philosophy is that the equations are true in any basis)

##\nabla_a V^b## is a rank (1,1) tensor. The covariant derivative always adds a rank to a tensor.

##\nabla_V V## is a vector (I'm not sure what notation that is), equivalent to ##V^a \nabla_a V^b## in index notation. It's the vector you get back when you feed the vector V into the (1,1) tensor.
 
  • #19
All right, after thinking about this a while, I've come to the conclusion that the disagreement is not about notation, exactly. It's about conventions for what the meaning of a free variable is. People in both mathematics and physics are completely inconsistent about it.

To me, it's much clearer to use a "free" index to mean an arbitrary component. So when I write [itex]V^\alpha[/itex], I don't mean the vector [itex]V[/itex], I mean a component of vector [itex]V[/itex], an arbitrary component. It's the same use of free variables as when someone says in a proof:

Let x be a number in the range [0,1]...

In that case, x is an arbitrary number in the range.

To me, if you want to mean the vector, and not just one component of the vector, then you either leave off the indices, and just write [itex]V[/itex], or else you write [itex]V^\alpha e_\alpha[/itex], with the convention that one raised index and one lowered index implies a summation, and where [itex]e_\alpha[/itex] means a basis vector in the direction [itex]\alpha[/itex]. To me, that's infinitely clearer.

So that's the difference:

Some people use [itex]V^\alpha[/itex] to mean a vector, while others (me, for example) would use it to mean a component of a vector, and I would use [itex]V^\alpha e_\alpha[/itex] to mean the vector itself (written as a combination of basis vectors).

Similarly, some people use [itex]\nabla_\nu V^\alpha[/itex] to mean the (1,1) tensor. I would say it's a single component of that tensor, and the full tensor should be written as:

[itex]\nabla_\nu V^\alpha e^\nu \otimes e_\alpha[/itex]

(where [itex]e_\alpha[/itex] is a basis of vectors, and [itex]e^\nu[/itex] is the corresponding basis of covectors).

To me, this is much more flexible, since you can talk about components of a tensor, and you can talk about the tensor itself. I understand that most expressions a physicist is likely to write down, will be a vector or a tensor, and not just a single component, so the extra flexibility may not be wanted.
 
  • #20
pervect said:
Using abstract index notation, ala Wald (who explains the notation if it's not familiar - it's basically component notation except the philosophy is that the equations are true in any basis)

##\nabla_a V^b## is a rank (1,1) tensor. The covariant derivative always adds a rank to a tensor.

I understand the convention, but I dislike it immensely. If [itex]\nabla_a V^b[/itex] is the tensor, then how do you talk about specific components of that tensor? I understand that people don't tend to do that in equations, but people do in their reasoning. In the same way that people say: "Let x be a number in the range [0,1]...", it's sometimes useful in reasoning to talk about an arbitrary component of a vector.

To me, it's infinitely clearer to say that [itex]\nabla_a V^b[/itex] is a component of a (1,1) tensor in an arbitrary basis. Then if you mean the tensor itself, you can write it as:

[itex]\nabla_a V^b \ e^a \otimes e_b[/itex]

where [itex]e_b[/itex] is a basis of vectors, and [itex]e^a[/itex] is the corresponding basis of co-vectors.

Mentioning the basis vectors doesn't spoil the nice property that whatever is proved must be true for any basis, as long as you use an arbitrary basis. If you don't assume anything about the basis, then anything proved about it is true for any basis.
 
  • #21
stevendaryl said:
I understand the convention, but I dislike it immensely. If [itex]\nabla_a V^b[/itex] is the tensor, then how do you talk about specific components of that tensor? I understand that people don't tend to do that in equations, but people do in their reasoning. In the same way that people say: "Let x be a number in the range [0,1]...", it's sometimes useful in reasoning to talk about an arbitrary component of a vector.

To me, it's infinitely clearer to say that [itex]\nabla_a V^b[/itex] is a component of a (1,1) tensor in an arbitrary basis. Then if you mean the tensor itself, you can write it as:

[itex]\nabla_a V^b \ e^a \otimes e_b[/itex]

where [itex]e_b[/itex] is a basis of vectors, and [itex]e^a[/itex] is the corresponding basis of co-vectors.

Mentioning the basis vectors doesn't spoil the nice property that whatever is proved must be true for any basis, as long as you use an arbitrary basis. If you don't assume anything about the basis, then anything proved about it is true for any basis.

In Wald's convention, one would use Latin indices to be abstract indices, and Greek indices to be component indices.

So ##V^a## would be a vector while ##V^\mu## would be a particular component of a vector.

However, the point I was making was not about whether to use abstract notation or not. In any notation, the use of ##\nabla_\mu V^\nu## is a slight abuse of notation. In Wald's abstract notation, it should be properly written as ##(\nabla V)_a^{~~b}##, but of course it is much more convenient to write ##\nabla_a V^b## so we tend to allow that, since we know what it means. In this notation ##\nabla_a V^b## is a (1,1) tensor, and ##\nabla_\mu V^\nu## are the components of a (1,1) tensor.

A mathematician would probably write the notation as ##\nabla V## without any indices, but of course, then we might become confused on the nature of the objects we are dealing with. In addition, this notation makes it quite annoying to take contractions. For example, try writing down the equation ##T^c_{~~~a}=g_{ab} T^{cb}## in a non-(abstract) index notation. What would that look like in an index free notation?

But the above is all fluff when it comes to my original point. Let's just use MTW's convention of defining ##\nabla_{e_\alpha} \equiv \nabla_\alpha##, since this is what you seem to want to do. MTW can get away with doing this because they denote every vector with boldface. E.g. you will see expressions like ##\nabla_{\alpha} \bf{V}##. In this context (and really only after reading their book), one might understand that the definition ##\nabla_{e_\alpha} \equiv \nabla_\alpha## has been made. In the context of your original post, where you wrote something like ##\nabla_\alpha V^\beta##, I don't think anyone will assume you mean ##(\nabla_{e_\alpha} V)^\beta##. I would think that most people would assume you are using the "old timey physicists' convention" in which ##\nabla_\alpha V^\beta## IS a (1,1) tensor (in abused notation). In that case, you definitely NEED to match the indices on the left-hand side and the right-hand side of an equation.
 
  • #22
Matterwave said:
In Wald's convention, one would use Latin indices to be abstract indices, and Greek indices to be component indices.

So ##V^a## would be a vector while ##V^\mu## would be a particular component of a vector.

What Matterwave said.
 
  • #23
stevendaryl said:
I'm not convinced that you're right here. I didn't say anything about tetrads. [itex]e_\nu[/itex] means a basis vector, where [itex]\nu[/itex] ranges from 0 to 3. My reference is Misner, Thorne and Wheeler's "Gravitation", where it is clearly written (page 258 of my edition, equation 10.12):

?? A tetrad is just a set of basis vectors. MTW has bad notation in that regard. In modern notation one uses Latin for tetrad indices and Greek for Lorentz indices, keeping the two separate.

stevendaryl said:
That to me seems to be saying "[itex]\nabla_\nu V[/itex] is the result of taking the covariant derivative of [itex]V[/itex] along a basis vector [itex]e_\nu[/itex]".

My inclination is to believe MTW. It's possible that notation has changed in recent years--MTW is an old book.

This is again because of MTW's terrible notation. No one uses it anymore. Greek indices are Lorentz. If you want to write down the covariant derivative along a basis vector, then you use tetrad indices. What you've written is a Lorentz index for a covariant derivative. It only holds for a coordinate (holonomic) basis. Again, I suggest reading chapter 2 of Wald.
 
  • #24
WannabeNewton said:
?? A tetrad is just a set of basis vectors.

On an unrelated note, I thought a tetrad was a set of specifically 4 ortho-normal basis vectors (i.e. a vierbein or frame field)? Or is it used often to denote any set of 4 basis vectors?
 
  • #25
Matterwave said:
On an unrelated note, I thought a tetrad was a set of specifically 4 ortho-normal basis vectors (i.e. a vierbein or frame field)? Or is it used often to denote any set of 4 basis vectors?

Almost always the former in GR literature.
 
  • #26
Matterwave said:
In Wald's convention, one would use Latin indices to be abstract indices, and Greek indices to be component indices.

So ##V^a## would be a vector while ##V^\mu## would be a particular component of a vector.

Okay, I completely missed that distinction between Latin and Greek indices. I prefer not to have notation rely on such conventions, though, because in calculations, I sometimes run out of good indices to use.

Matterwave said:
However, the point I was making was not about whether to use abstract notation or not. In any notation, the use of ##\nabla_\mu V^\nu## is a slight abuse of notation. In Wald's abstract notation, it should be properly written as ##(\nabla V)_a^{~~b}##,

I prefer that as well, because to me [itex]\nabla_\mu V^\nu[/itex] looks like it is a differential operator acting on the component [itex]V^\nu[/itex], when it actually is acting on the entire vector [itex]V[/itex]. But to me, [itex]\nabla_\mu[/itex] is a perfectly meaningful operator, so I don't consider it an abuse of notation to write [itex](\nabla_\mu V)^\nu[/itex], which is half-way between the two.

Matterwave said:
A mathematician would probably write the notation as ##\nabla V## without any indices, but of course, then we might become confused on the nature of the objects we are dealing with. In addition, this notation makes it quite annoying to take contractions.

Yes, I definitely agree.

Matterwave said:
But the above is all fluff when it comes to my original point. Let's just use MTW's convention of defining ##\nabla_{e_\alpha} \equiv \nabla_\alpha##, since this is what you seem to want to do. MTW can get away with doing this because they denote every vector with boldface. E.g. you will see expressions like ##\nabla_{\alpha} \bf{V}##. In this context (and really only after reading their book), one might understand that the definition ##\nabla_{e_\alpha} \equiv \nabla_\alpha## has been made. In the context of your original post, where you wrote something like ##\nabla_\alpha V^\beta##,

I try to be careful NOT to write anything like that. I don't think I did. Instead, I try to write

[itex](\nabla_\alpha V)^\beta[/itex]

to mean "component [itex]\beta[/itex] of the VECTOR [itex]\nabla_\alpha V[/itex]".

If you understand that [itex]\nabla_\alpha[/itex] is an operator in its own right (meaning the covariant derivative in the direction [itex]e_\alpha[/itex]), then [itex]\nabla_\alpha V[/itex] is a vector, not a tensor.

What I did do was to write [itex]\partial_\alpha V^\beta[/itex]. That's not an abuse of notation, because I literally meant the operator [itex]\partial_\alpha[/itex] acting on the component [itex]V^\beta[/itex].
 
  • #27
WannabeNewton said:
This is again because of MTW's terrible notation. No one uses it anymore. Greek indices are Lorentz. If you want to write down the covariant derivative along a basis vector, then you use tetrad indices. What you've written is a Lorentz index for a covariant derivative. It only holds for a coordinate (holonomic) basis. Again, I suggest reading chapter 2 of Wald.

Okay, I was not aware of the Greek versus Latin distinction as being a formal part of the notation. I thought it was along the lines of ("In the following, we will use latin indices to mean... and greek indices to mean..."). That is, I thought it was a local convention, just for the purposes of whatever paragraphs were being discussed.

The first point of substance that I see is this point about something only being valid in a coordinate basis. I pretty much always use a coordinate basis, so I tend to forget about when something breaks down for a non-coordinate basis. That's a very good criticism.
 
  • #28
By the way, Steven, I hope I am not coming off as devil's advocate here. Recently I made a thread where in the end my confusion turned out to be that the author was using Greek indices for coordinate indices whereas I, being used to mostly coordinate-free calculations, thought they were abstract indices. The distinction made all the difference, as George pointed out to me in said thread. Outside of Wald, the lack of distinction between coordinate and abstract indices is definitely frustrating at times. So I agree with you that the practice is not perfect, but unfortunately it is infinitely more elegant and slick to use when doing calculations than the mathematicians' notation. It's too bad no one really sticks to Wald's conventions.
 
  • #29
WannabeNewton said:
By the way, Steven, I hope I am not coming off as devil's advocate here. Recently I made a thread where in the end my confusion turned out to be that the author was using Greek indices for coordinate indices whereas I, being used to mostly coordinate-free calculations, thought they were abstract indices. The distinction made all the difference, as George pointed out to me in said thread. Outside of Wald, the lack of distinction between coordinate and abstract indices is definitely frustrating at times. So I agree with you that the practice is not perfect, but unfortunately it is infinitely more elegant and slick to use when doing calculations than the mathematicians' notation. It's too bad no one really sticks to Wald's conventions.

It was a good exchange for me. I was pretty sure that I was right, but then I learned two ways that I was wrong: (1) I didn't appreciate the Latin vs Greek distinction in indices. (2) I forgot about non-holonomic bases.

I'm still not 100% convinced that Wald's notation is the best, but at least I understand the issues better. For coordinate bases, I think that I would prefer my approach of explicitly mentioning basis vectors, but since I don't know the implications for non-holonomic bases, I have to admit defeat on that basis (no pun intended).
 

1. What is the Riemann Tensor?

The Riemann Tensor, also known as the Riemann Curvature Tensor, is a mathematical object used to describe the curvature of a manifold in differential geometry. It is a tensor field that contains information about the local curvature of a space at every point.

2. How is the Riemann Tensor calculated?

The Riemann Tensor is calculated from the Christoffel symbols, which are derived from the metric tensor of a space. It can also be obtained from the commutator of covariant derivatives.
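To illustrate the first recipe, here is a minimal sympy sketch (assuming nothing beyond the standard coordinate formulas for the Christoffel symbols and the Riemann tensor; the hyperbolic half-plane is used purely as a small worked example):

[code]
import sympy as sp

def christoffel(g, coords):
    """Gamma^a_{bc} of the Levi-Civita connection for metric g (sympy Matrix)."""
    n, ginv = len(coords), g.inv()
    return [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], coords[b])
                                          + sp.diff(g[d, b], coords[c])
                                          - sp.diff(g[b, c], coords[d]))
                              for d in range(n))/2)
              for c in range(n)] for b in range(n)] for a in range(n)]

def riemann(g, coords):
    """R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb} + Gamma*Gamma terms."""
    n, Gam = len(coords), christoffel(g, coords)
    return [[[[sp.simplify(sp.diff(Gam[a][d][b], coords[c])
                           - sp.diff(Gam[a][c][b], coords[d])
                           + sum(Gam[a][c][e]*Gam[e][d][b]
                                 - Gam[a][d][e]*Gam[e][c][b] for e in range(n)))
               for d in range(n)] for c in range(n)] for b in range(n)] for a in range(n)]

# Worked example: the hyperbolic half-plane, ds^2 = (dx^2 + dy^2)/y^2
x, y = sp.symbols('x y', positive=True)
g = sp.Matrix([[1/y**2, 0], [0, 1/y**2]])
R = riemann(g, (x, y))
for a in range(2):
    for b in range(2):
        for c in range(2):
            for d in range(2):
                if R[a][b][c][d] != 0:
                    print(f"R^{a}_{{{b}{c}{d}}} =", R[a][b][c][d])
# The nonzero components come out as +-1/y**2; together with the metric
# they give constant Gaussian curvature K = -1, as expected for this space.
[/code]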

3. What is the significance of the Riemann Tensor?

The Riemann Tensor plays a crucial role in general relativity, as it is used to describe the curvature of spacetime. It is also used in other areas of physics and mathematics, such as in the study of differential geometry and in the formulation of other theories of gravity.

4. How is the Riemann Tensor related to the Ricci Tensor and Scalar Curvature?

The Ricci Tensor is obtained by contracting the Riemann Tensor, and the Scalar Curvature is obtained by contracting the Ricci Tensor with the metric. The Riemann Tensor carries the full curvature information; the Ricci Tensor and the Scalar Curvature are successively coarser summaries of it.
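For reference, the standard contractions are ##R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu}## and ##R = g^{\mu\nu} R_{\mu\nu}##. The contraction discards information: the part of the Riemann tensor not captured by the Ricci tensor is the Weyl tensor, which is what the "Ricci decomposition" mentioned earlier in the thread separates out.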

5. Can the Riemann Tensor be visualized?

Not directly - the Riemann Tensor is a mathematical object rather than something that can be pictured as a single image. However, its effects can be illustrated with diagrams (for example, a vector being parallel-transported around a closed loop), which helps in understanding its properties and its relationships with other tensors.
