# Questions about curvature

1. Aug 21, 2014

### space-time

We know that the curvature of a manifold, as it acts on a vector V, is captured by the following formula:

$dx^\mu dx^\nu [\nabla_\nu, \nabla_\mu] V$

Now we know that the commutator is simply the Riemann tensor. My question here is:

How do you actually apply that vector V to the Riemann tensor? Here is an example of what I mean:

I know that the covariant derivative is $\nabla_j(V^i e_i) = \left[ \dfrac{\partial V^i}{\partial x^j} + \Gamma^i_{kj} V^k \right] e_i$

Now having established this, would I distribute the vector $V^i$ to the terms of my Riemann tensor, or would I multiply the terms of the tensor by $V^k$?

I also have one more question:

What extra information does the Riemann tensor tell you that the Ricci tensor does not (from a physical standpoint)? I know that the Ricci is a contraction of the Riemann and that the Riemann has a lot more elements than the Ricci, but what physical meaning do all of those other terms possess that is not present in the Ricci tensor?

2. Aug 22, 2014

### stevendaryl

Staff Emeritus
I'm not sure if this is the answer you're looking for, but the full Riemann tensor $R^i_{jkl}$ tells us the result of parallel-transporting a vector around a closed loop: If you make a parallelogram by having one side described by the displacement vector $U^i$ and the other side described by the displacement vector $V^i$, and you transport a third vector $W^i$ around the loop, it will be altered slightly by the trip, by an amount:

$\delta W^i \propto R^i_{jkl} U^j V^k W^l$
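To make the contraction concrete, here is a small numerical sketch (not from the thread itself) that evaluates the stated formula on the unit 2-sphere, using its standard textbook Riemann components. The index ordering and overall sign are convention-dependent, so treat it as illustrative only:

```python
import numpy as np

# Riemann tensor of the unit 2-sphere at the equator (theta = pi/2),
# coordinates (theta, phi) -> indices (0, 1). Nonzero components in one
# common convention (sign/index conventions vary between references):
#   R^theta_{phi theta phi} = sin^2(theta) = 1,  R^phi_{theta phi theta} = 1,
# plus the entries required by antisymmetry in the last index pair.
R = np.zeros((2, 2, 2, 2))
R[0, 1, 0, 1] = 1.0    # R^theta_{phi theta phi}
R[0, 1, 1, 0] = -1.0
R[1, 0, 1, 0] = 1.0    # R^phi_{theta phi theta}
R[1, 0, 0, 1] = -1.0

U = np.array([1.0, 0.0])   # one side of the loop, along e_theta
V = np.array([0.0, 1.0])   # other side of the loop, along e_phi
W = np.array([1.0, 0.0])   # the vector carried around the loop

# delta W^i proportional to R^i_{jkl} U^j V^k W^l
dW = np.einsum('ijkl,j,k,l->i', R, U, V, W)
print(dW)   # the transported vector picks up a phi-component
```

Transporting $W = e_\theta$ around a small $\theta$-$\phi$ loop rotates it toward $e_\phi$, which is the holonomy you expect on a sphere.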

3. Aug 22, 2014

### space-time

Thank you, but this was not quite what I meant. Perhaps the following Word document will better explain what I am asking. Please take a look at it if you will. Thank you.

View attachment riemann tensor question.docx

4. Aug 22, 2014

### Staff: Mentor

Google for "Ricci decomposition"

5. Aug 22, 2014

### haushofer

If the Ricci tensor vanishes for D>3, the spacetime can still support gravitational waves, because the Riemann tensor does not necessarily vanish (but for D=3 it does, see e.g. Carlip's book on 2+1 QG). If the Riemann tensor vanishes, this is no longer true.

6. Aug 22, 2014

### pervect

Staff Emeritus
OpenOffice isn't opening this document (though I haven't updated it in a while), so I'm not sure how to view it.

I can't really make heads or tails of your original question either without the document, which I can't easily open. I don't believe a vector can be said to curve (at least not intrinsically). Certainly a line can't; it doesn't have enough dimensions.

As far as what the Riemann tells you that the Ricci doesn't - suppose you are in empty space some ways away from a massive body. The Ricci will be zero, so it will basically only tell you you are in a vacuum. This tells you nothing about the local tidal forces, which are the measurable aspects of gravity at a point. Basically the Ricci tells you nothing about the gravity due to a distant source.

The "electric" components of the Riemann, $R_{xtxt}, R_{ytyt}, R_{ztzt}$, will give you the tidal forces at your location. These are generally the most significant components of the Riemann. For instance, ignoring time dilation and integrating the tidal forces is one way you might recover Newtonian gravity - though it's only accurate to first order in (M/r), because time dilation effects are of second order.

The list above isn't complete in general - the electric components of the Riemann form a 3x3 symmetric matrix, which you can diagonalize by a principal-axis transformation.

The "magnetic" components of the Riemann, $R_{yzxt}, R_{xyyt}, R_{xyzt}$, will tell you about frame-dragging effects due to rotation of the massive body. The complete list has 8 independent elements, since it's a 3x3 traceless matrix. Note that we've taken the "dual" of the front part of the Riemann, i.e. the dual of {xt} is {yz} - modulo sign issues, which I might have messed up.

There's a last component of the Riemann, the topogravitic component, $R_{xyxy}, R_{xzxz}, R_{yzyz}$, which tells you about "spatial curvature". Here we take the dual of both the front and the back of the Riemann. It's another symmetric 3x3 matrix. I believe that in a vacuum the topogravitic tensor is equal to the electrogravitic tensor, but I'd have to double check to be positive.

This is known as the "Bel decomposition", and by various other names. There's a sketchy amount of info on it on Wikipedia, but I haven't found much else on the web. MTW's "Gravitation" has a long section on it, but not under the name "Bel decomposition" - if you look for "electric part of the Riemann" in MTW, you might have more luck.
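As an illustrative sketch of the electric (tidal) part described above, here is a small numerical example using the standard textbook values of the Schwarzschild Riemann components in a static orthonormal frame (geometric units; overall sign conventions vary between references):

```python
import numpy as np

# Electric ("tidal") part of the Riemann tensor for Schwarzschild,
# in a static orthonormal frame: E_ij = R_{i t j t}.
# Diagonal values are the standard textbook ones (radial, theta, phi);
# the overall sign depends on the reference's conventions.
M, r = 1.0, 10.0
E = np.diag([-2 * M / r**3, M / r**3, M / r**3])

# Vacuum: the tidal tensor is traceless (Ricci = 0), even though the
# tidal forces themselves are nonzero.
print(np.trace(E))

# Geodesic-deviation sketch: the relative tidal acceleration of a nearby
# particle separated by xi is a = -E @ xi (again, sign-convention dependent).
xi = np.array([1.0, 0.0, 0.0])   # radial separation
a = -E @ xi                      # radial stretching, transverse squeezing
print(a)
```

The vanishing trace is exactly the "Ricci only tells you you're in vacuum" point above: all the measurable tidal information lives in the trace-free part, which the Ricci tensor cannot see.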

Last edited: Aug 22, 2014
7. Aug 22, 2014

### stevendaryl

Staff Emeritus
I think you're just asking about how to join the indices when you do repeated applications of the covariant derivative. It's kind of a mess and hard to do without making mistakes. But here's what I think is the correct answer:

$\nabla_\nu V$ produces another vector. Call it $W$. Then we can write out the components of $W$ as follows:

$W^\alpha = \partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta$

We're going to need $W^\beta$ as well, so I'm going to rewrite some indices to get:

$W^\beta= \partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda$

Now, we want to compute $\nabla_\mu W$. Call that $Z$. Its components are given by:

$Z^\alpha = \partial_\mu W^\alpha + \Gamma^\alpha_{\mu \beta} W^\beta$
$= \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta) + \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)$

So we have:

$(\nabla_\mu \nabla_\nu V)^\alpha = \partial_\mu (\partial_\nu V^\alpha + \Gamma^\alpha_{\nu \beta} V^\beta) + \Gamma^\alpha_{\mu \beta} (\partial_\nu V^\beta+ \Gamma^\beta_{\nu \lambda} V^\lambda)$

If we did it in the opposite order, we would have gotten:

$(\nabla_\nu \nabla_\mu V)^\alpha = \partial_\nu (\partial_\mu V^\alpha + \Gamma^\alpha_{\mu \beta} V^\beta) + \Gamma^\alpha_{\nu \beta} (\partial_\mu V^\beta+ \Gamma^\beta_{\mu \lambda} V^\lambda)$

Then we subtract the two to get
$([\nabla_\mu,\nabla_\nu] V)^\alpha = ...$

Hopefully, all the derivatives of $V$ cancel, because the result is supposed to be a tensor operating on $V$.
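The cancellation can be checked symbolically. The sketch below (assuming the unit 2-sphere as an example manifold, with its standard Christoffel symbols) applies the two directional derivatives in each order, exactly as in the derivation above; all derivatives of $V$ drop out of the difference, leaving only Riemann contracted with $V$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
x = [th, ph]

# Christoffel symbols Gam[alpha][mu][beta] = Gamma^alpha_{mu beta}
# for the unit 2-sphere (standard textbook values).
Gam = [[[0] * 2 for _ in range(2)] for _ in range(2)]
Gam[0][1][1] = -sp.sin(th) * sp.cos(th)   # Gamma^theta_{phi phi}
Gam[1][0][1] = sp.cos(th) / sp.sin(th)    # Gamma^phi_{theta phi}
Gam[1][1][0] = sp.cos(th) / sp.sin(th)    # Gamma^phi_{phi theta}

# An arbitrary vector field V^alpha(theta, phi)
V = [sp.Function('V0')(th, ph), sp.Function('V1')(th, ph)]

def cov(mu, W):
    """Directional covariant derivative (nabla_mu W)^alpha in a coordinate basis."""
    return [sp.diff(W[a], x[mu]) + sum(Gam[a][mu][b] * W[b] for b in range(2))
            for a in range(2)]

mu, nu = 0, 1   # the commutator [nabla_theta, nabla_phi]
Z1 = cov(mu, cov(nu, V))
Z2 = cov(nu, cov(mu, V))
comm = [sp.simplify(Z1[a] - Z2[a]) for a in range(2)]
print(comm)
```

For the unit sphere, $R^\theta{}_{\phi\theta\phi} = \sin^2\theta$ and $R^\phi{}_{\theta\theta\phi} = -1$, and the commutator components come out to exactly $\sin^2\theta \, V^\phi$ and $-V^\theta$: Riemann contracted with $V$, with no stray derivative terms.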

8. Aug 22, 2014

### Matterwave

Usually, if we are going to use (slightly abused) index notation like this (e.g. $\nabla_\nu$), the result of a covariant derivative $\nabla_\mu$ of a vector is a (1,1) tensor. As you can see, you have a free index $\nu$ on the right-hand side of your first equation, but you don't have that index on the left-hand side.

9. Aug 23, 2014

### stevendaryl

Staff Emeritus
As far as the notation, I was just using the original poster's notation. But for a specific index $\nu$, the operator $\nabla_\nu$ takes vectors to vectors. It means the same thing as $\nabla_{e_\nu}$. I don't think that's an abuse of notation.

People often write $\nabla_U(V)$, so $\nabla_\nu$ is the special case with $U = e_\nu$, a basis vector.

10. Aug 23, 2014

### stevendaryl

Staff Emeritus
My own personal opinion is that it's confusing to let $V^\alpha$ mean a 4-vector, as opposed to a component of a 4-vector. To me, if you mean the 4-vector, then it's clearer to either just use $V$ (no indices at all--but then it's hard to distinguish between a vector, a scalar or a tensor), or to write it as $V^\alpha e_\alpha$.

11. Aug 24, 2014

### Matterwave

This seems like some pretty nonstandard notation to me, but maybe I just haven't read the references you've read. :)

In the notation that I'm familiar with, $\nabla_\nu V^\alpha$ would be a (1,1) tensor; it would just be a more convenient way of writing $(\nabla V)_\nu^{~~\alpha}$.

12. Aug 24, 2014

### stevendaryl

Staff Emeritus
I would not say that $(\nabla V)_\nu^{~~\alpha}$ is a (1,1) tensor, I would say it is a component of a (1,1) tensor. And it is exactly the same thing as what I was writing as $(\nabla_\nu V)^\alpha$

(The parentheses remind me that it's the $\alpha$ component of the vector $\nabla_\nu V$, rather than the result of letting $\nabla_\nu$ operate on $V^\alpha$.)

$\nabla V$ (no indices) is a (1,1) tensor. But then if you contract it with a vector $U$ it becomes a vector. That contraction is written $\nabla_U V$. In the specific case where $U = e_\nu$, a basis vector, we can write it as:

$\nabla_\nu V$

In components,

$(\nabla_\nu V)^\alpha = (\nabla V)_\nu^{~~\alpha}$

So I don't think that we're really using different notations. We're using different conventions. A lot of physicists use the convention that an expression such as $T_\beta^{~~\alpha}$ means a tensor. I think that's sloppy. It's a component of a tensor $T$, but it is also a component of a vector (the result of contracting $T$ with the basis vector $e_\beta$).
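The component identity $(\nabla_\nu V)^\alpha = (\nabla V)_\nu^{~~\alpha}$ is just the statement that fixing one slot of a (1,1) tensor by contracting with a basis vector leaves a vector. A toy numerical sketch (the array `T` stands in for the components of some $\nabla V$; the values are arbitrary placeholders):

```python
import numpy as np

# T[nu, alpha] stands in for the components (nabla V)_nu^alpha of a
# (1,1) tensor in some basis; the values here are arbitrary placeholders.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Contracting with the basis vector e_nu (here nu = 1) fixes one slot,
# leaving a vector: (nabla_{e_nu} V)^alpha = T[nu, alpha].
e1 = np.array([0.0, 1.0])
contracted = np.einsum('na,n->a', T, e1)   # U^nu (nabla V)_nu^alpha with U = e_1

print(contracted)   # same components as the nu = 1 "row" of T
print(T[1])
```

Whether one calls `T[1]` "a vector" or "the components of a vector in this basis" is exactly the convention being debated in this thread; the arithmetic is the same either way.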

13. Aug 24, 2014

### WannabeNewton

Matterwave is using abstract index notation, not component notation, so what he said is perfectly correct. I would suggest reading chapter 2 of Wald for an explanation of abstract index notation. To be fair, Matterwave should have kept to Wald's convention for abstract indices in order to distinguish them from component indices. Most authors don't do this either, so I agree with you that outside of Wald's book the notation can be very confusing.

No, you can't. I'm going to use Latin for tetrad indices and Greek for Lorentz indices. Then what you've written is $\nabla_{e_a}V = (e_a)^{\nu}\nabla_{\nu}V$, which is definitely not the same thing as $\nabla_{\nu}V$.

The latter is a (1,1) 4-tensor whereas the former is simply a 4-vector.

Again, $\nabla_{\nu}V$ is a (1,1) 4-tensor, it is not a 4-vector. It is not the result of taking the covariant derivative of $V$ along a basis vector $e_a$, which would instead give a 4-vector. You're confusing yourself because you're using Lorentz indices where you should be using tetrad indices.

Last edited: Aug 24, 2014
14. Aug 24, 2014

### stevendaryl

Staff Emeritus
I'm not convinced that you're right here. I didn't say anything about tetrads. $e_\nu$ means a basis vector, where $\nu$ ranges over 0 to 3. My reference is Misner, Thorne and Wheeler's "Gravitation", and there it is clearly written (page 258 of my edition, equation 10.12):

Which is exactly what I said (with $\beta$ replaced by $\nu$).

That to me seems to be saying "$\nabla_\nu V$ is the result of taking the covariant derivative of $V$ along a basis vector $e_\nu$".

My inclination is to believe MTW. It's possible that notation has changed in recent years--MTW is an old book.

15. Aug 24, 2014

### stevendaryl

Staff Emeritus
After consulting with MTW, I'm 95% sure that, at least as of the 70s, or whenever the book was written, $\nabla_\nu V$ meant $\nabla_{e_\nu} V$, which is a vector. So I'm not exactly sure what you mean by saying $\nabla_\nu V$ is a (1,1) tensor. I would say that $\nabla V$ is a (1,1) tensor, since it takes a vector $U$ and returns another vector, $\nabla_U V$, which is the change of $V$ along the vector $U$.

16. Aug 24, 2014

### Staff: Mentor

MTW tend not to use $\nabla$ when they are writing component equations (i.e., equations with indexes); they use it when they are writing "abstract geometric" equations. In that context, yes, a subscript on $\nabla$ refers to a specific vector along which the derivative is being taken. But I think notation has indeed changed since then (for example, I haven't seen other authors pick up on MTW's "abstract geometric" notation).

In modern component notation (the notation WBN is using--for example, Wald uses this notation, IIRC), the object just referred to above would be written as $U^{\mu} \nabla_{\mu} V^{\nu}$, where the index on the $\nabla$ is a component index (and is contracted with the index on $U$, so this is the directional derivative of $V$ along $U$). MTW would use the "semicolon" notation to write this in components, so it would be $U^{\mu} V^{\nu}{}_{; \mu}$.

In modern component notation, this object (the 1, 1 tensor) would be written $\nabla_{\mu} V^{\nu}$ (notice that there is no index contracting with the index on the $\nabla$, so both component indexes are free). In MTW's "semicolon" component notation, it would be $V^{\nu}{}_{; \mu}$ (again, with no index contracting the $\mu$ so it is a free index).

This is how the object I just wrote above in the two different component notations (modern and MTW "semicolon") would be written in MTW's "abstract geometric" notation.

17. Aug 24, 2014

### George Jones

Staff Emeritus
I haven't really had a chance to read this thread, and I am on my way out right now, but it seems that I am with stevendaryl on this. For example, even if non-standard, for me $\nabla_\mu V^\nu$ is the scalar $e_\mu \left( V^\nu \right)$. Here, $e_\mu$ could be part of any basis: orthonormal, coordinate, or other. For a coordinate basis, $e_\mu \left( V^\nu \right)$ is a partial derivative of a scalar,

$$e_\mu \left( V^\nu \right) = \frac{\partial V^\nu}{\partial x^\mu} .$$

Again, it is a personal preference, but I have never liked Penrose's abstract index notation.

18. Aug 24, 2014

### pervect

Staff Emeritus
Using abstract index notation, à la Wald (who explains the notation if it's not familiar - it's basically component notation, except the philosophy is that the equations are true in any basis):

$\nabla_a V^b$ is a rank (1,1) tensor. The covariant derivative always adds a covariant rank to a tensor.

$\nabla_V V$ is a vector (I'm not sure what notation that is), equivalent to $V^a \nabla_a V^b$ in index notation. It's the vector you get back when you feed the vector V into the (1,1) tensor.

19. Aug 24, 2014

### stevendaryl

Staff Emeritus
All right, after thinking about this a while, I've come to the conclusion that the disagreement is not about notation, exactly. It's about conventions for what the meaning of a free variable is. People in both mathematics and physics are completely inconsistent about it.

To me, it's much clearer to use a "free" index to mean an arbitrary component. So when I write $V^\alpha$, I don't mean the vector $V$, I mean a component of vector $V$, an arbitrary component. It's the same use of free variables as when someone says in a proof:

Let x be a number in the range [0,1]...

In that case, x is an arbitrary number in the range.

To me, if you want to mean the vector, and not just one component of the vector, then you either leave off the indices, and just write $V$, or else you write $V^\alpha e_\alpha$, with the convention that one raised index and one lowered index implies a summation, and where $e_\alpha$ means a basis vector in the direction $\alpha$. To me, that's infinitely clearer.

So that's the difference:

Some people use $V^\alpha$ to mean a vector, while others (me, for example) would use it to mean a component of a vector, and I would use $V^\alpha e_\alpha$ to mean the vector itself (written as a combination of basis vectors).

Similarly, some people use $\nabla_\nu V^\alpha$ to mean the (1,1) tensor. I would say it's a single component of that tensor, and the full tensor should be written as:

$\nabla_\nu V^\alpha e^\nu \otimes e_\alpha$

(where $e_\alpha$ is a basis of vectors, and $e^\nu$ is the corresponding basis of covectors).

To me, this is much more flexible, since you can talk about components of a tensor as well as the tensor itself. I understand that most expressions a physicist is likely to write down will be a vector or a tensor, and not just a single component, so the extra flexibility may not be wanted.

Last edited: Aug 25, 2014
20. Aug 24, 2014

### stevendaryl

Staff Emeritus
I understand the convention, but I dislike it immensely. If $\nabla_a V^b$ is the tensor, then how do you talk about specific components of that tensor? I understand that people don't tend to do that in equations, but people do in their reasoning. In the same way that people say: "Let x be a number in the range [0,1]...", it's sometimes useful in reasoning to talk about an arbitrary component of a vector.

To me, it's infinitely clearer to say that $\nabla_a V^b$ is a component of a (1,1) tensor in an arbitrary basis. Then if you mean the tensor itself, you can write it as:

$\nabla_a V^b \ e^a \otimes e_b$

where $e_b$ is a basis of vectors, and $e^a$ is the corresponding basis of co-vectors.

Mentioning the basis vectors doesn't spoil the nice property that whatever is proved must be true for any basis, as long as you use an arbitrary basis. If you don't assume anything about the basis, then anything proved about it is true for any basis.
