Metric tensor in spherical coordinates

  • #51
^
For example, we say that, intuitively, the Gammas represent how the metric tensor g_{mn} varies from place to place. So in this sense, I have some intuition for the Gammas. The Riemann tensor is made of second derivatives of g_{mn} and products of the Gammas; so, in this sense, I have a slightly shady view of what the Riemann tensor really is; and the Ricci tensor is made up of a sum of Riemann tensor components. So what's the intuition here?
 
  • #53
If not, then I would recommend watching this lecture:
http://www.youtube.com/watch?v=Pm5RO...feature=relmfu

That's Susskind! Haha! Yeah, I was following these lectures. I have managed to understand the first five but am stuck on this 6th lecture. I'll give it a try today.
 
  • #54
OK, that is a good starting point. If you get stuck somewhere in the lecture, please let me know and we can go from there.
 
  • #55
^
OK, I'll definitely let you know. Thanks!
 
  • #56
GRstudent said:
Ok, let's take T^{32}. It is the flow of the z-component of momentum in the y direction.

This part is ok.

GRstudent said:
So I have \dfrac{d(mv_{z})}{A^y d t}. Also, the flow of some quantity is given by \dfrac{\Delta Q}{A \Delta t}, where Q is the quantity, A is the area through which Q flows, and \Delta t is the time interval.

You're assuming that momentum = mv, which is only true in the non-relativistic limit. You're also assuming that whatever you are measuring the stress-energy of can be assigned a rest mass m, but in other components you assign an energy density rho instead. For something like a fluid, for example, rho is well-defined, but how would you define m? And what about the stress-energy tensor for an electromagnetic field, which doesn't even have a rest mass to begin with?

I would agree with DaleSpam's recommendation to work this from the other direction: look at some actual SETs for different kinds of things and develop an intuition for what the components mean from that.
 
  • #57
OK, that is a good starting point. If you get stuck somewhere in the lecture, please let me know and we can go from there.

Ok, I listened to the first link that you gave (lec 6). The main point of the lecture was parallel transport and curvature in a general sense. So the formula was d\theta = R\, dA. Basically, the smaller the sphere is, the larger the value of the curvature is. And this makes sense, because if the Earth were smaller we would feel that it is more curved, and if the Earth were larger we would more and more perceive it as flat. Therefore, this formula makes a lot of sense.
 
  • #58
Do you feel like you have a good understanding of what parallel transport is and how parallel transport along closed loops gets you back to the same vector in flat space but not in curved space?
 
  • #59
Do you feel like you have a good understanding of what parallel transport is and how parallel transport along closed loops gets you back to the same vector in flat space but not in curved space?

Sort of. I understand why, in parallel transport around a closed loop, the vector in curved space doesn't come back to its initial form. Susskind, in Lec 6, gave the tip of a cone as an example. Around a loop on the cone that doesn't enclose the tip, the final and initial vectors are the same; however, around the tip, the final vector deviates by \theta from its original form, where \theta is the deficit angle. In this case d\theta=R\, dA, so R=\dfrac{d\theta}{dA}. Here I treat R as a sort of curvature (not the curvature tensor!) value at some point on the surface of a particular object. Parallel transport is just keeping the vector parallel to itself, and it is a test to find out whether the space has curvature or not. The mathematics is d V^{m}=-\Gamma^m_{np} V^{n}\, dx^p. It is derived from \dfrac{dV^m}{ds}+\Gamma^m_{np} V^n \dfrac{dx^p}{ds}=0. So we have \dfrac{d V^{m}}{ds}=-\Gamma^m_{np} V^{n} \dfrac{dx^p}{ds}, and we multiply both sides by ds to get my equation.
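The deficit-angle picture can be checked numerically. Below is a sketch of my own (not from the lecture): it parallel-transports a vector around a circle of latitude \theta_0 on the unit sphere, where the integrated curvature enclosed is the solid angle of the cap, 2\pi(1-\cos\theta_0), and checks that the holonomy angle matches d\theta = R\, dA with R = 1.

```python
import math

def transport_around_latitude(theta0, n=200_000):
    """Parallel-transport V = e_theta around the circle theta = theta0
    on the unit sphere (ds^2 = dtheta^2 + sin^2(theta) dphi^2) and
    return the angle by which V has rotated when it comes back."""
    cot = math.cos(theta0) / math.sin(theta0)
    sc = math.sin(theta0) * math.cos(theta0)
    v_th, v_ph = 1.0, 0.0          # start with V = (1, 0) in coordinates
    h = 2 * math.pi / n            # step in phi
    for _ in range(n):
        # dV^theta = -Gamma^theta_{phi phi} V^phi dphi =  sin cos V^phi dphi
        # dV^phi   = -Gamma^phi_{theta phi} V^theta dphi = -cot  V^theta dphi
        v_th, v_ph = v_th + h * sc * v_ph, v_ph - h * cot * v_th
    # read off the rotation angle in the orthonormal frame (e_theta, e_phi/sin)
    return math.atan2(math.sin(theta0) * v_ph, v_th) % (2 * math.pi)

theta0 = 1.0
holonomy = transport_around_latitude(theta0)
cap_area = 2 * math.pi * (1 - math.cos(theta0))   # solid angle of the cap
print(holonomy, cap_area)  # the two should agree
```

The holonomy angle equals the enclosed area because the Gaussian curvature of the unit sphere is exactly 1.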
 
  • #60
GRstudent said:
I understand why, in parallel transport around a closed loop, the vector in curved space doesn't come back to its initial form.
OK, good. That is the basis of defining the Riemann curvature tensor. Susskind will give a much better and more detailed explanation in the 7th lecture, but basically, the Riemann curvature tensor describes how much and what direction a vector deviates when it is parallel transported around an infinitesimal loop in a particular orientation. You need a total of four indices to describe that completely.
 
  • #61
R=\dfrac{d\theta}{dA}: here I treat R as a sort of curvature (not the curvature tensor!) value at some point on the surface of a particular object.

Is this right?
 
  • #62
OK, in lec 7, Susskind talks mainly about this formula: \delta V^\mu = R^{\mu}_{\sigma \nu \tau} dx^\nu dx^\tau V^\sigma. The dx^\nu and dx^\tau span the little area around whose perimeter the vector was transported; V^\sigma is the initial vector. Now, I have a question: what is the difference between \delta V^\mu and dV^\mu? And secondly, why does the Riemann tensor have products of Gammas in it? The presence of derivatives of Gammas makes some sense, but the products don't.

The Ricci tensor, as defined in the lecture, is R_{\mu\nu}=R^\alpha_{\mu \alpha \nu}= R^1_{\mu 1 \nu}+R^2_{\mu 2 \nu}+R^3_{\mu 3 \nu}+\dots (as many terms as there are dimensions of space). So, as I understand it, R_{\mu\nu}= \sum_{i=1}^{D} R^i_{\mu i \nu}.

Correct me if I am wrong.
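That contraction is easy to carry out explicitly once you have the Riemann components. Here is a small sketch of my own, assuming the standard unit-sphere components R^\theta_{\phi\theta\phi} = \sin^2\theta and R^\phi_{\theta\phi\theta} = 1 (antisymmetric in the last two indices):

```python
import math

def riemann_component(a, b, c, d, theta):
    """Mixed Riemann components R^a_{b c d} on the unit sphere,
    indices 0 = theta, 1 = phi.  Only R^th_{ph th ph} = sin^2(theta)
    and R^ph_{th ph th} = 1 (plus antisymmetry in c, d) are nonzero."""
    if (a, b) == (0, 1) and (c, d) == (0, 1):
        return math.sin(theta) ** 2
    if (a, b) == (0, 1) and (c, d) == (1, 0):
        return -math.sin(theta) ** 2
    if (a, b) == (1, 0) and (c, d) == (1, 0):
        return 1.0
    if (a, b) == (1, 0) and (c, d) == (0, 1):
        return -1.0
    return 0.0

def ricci(m, n, theta):
    # R_{mn} = sum_i R^i_{m i n}
    return sum(riemann_component(i, m, i, n, theta) for i in range(2))

theta = 1.0
print(ricci(0, 0, theta))   # expect 1
print(ricci(1, 1, theta))   # expect sin^2(theta)
print(ricci(0, 1, theta))   # expect 0
```

This reproduces the known Ricci tensor of the unit sphere, R_{\mu\nu} = g_{\mu\nu}.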
 
  • #63
GRstudent said:
Is this right?
Yes, but don't spend too much effort on that. As you mention, that R is not a tensor, it is just motivation for using the word "curvature" to describe something that has to do with parallel transport around small loops.
 
  • #64
^
Please look at my post above.
 
  • #65
GRstudent said:
what is the difference between \delta V^\mu and dV^\mu?
\delta V^\mu is a small but finite deviation from V. dV^\mu is either part of a derivative or the rather informal notion of an infinitesimal deviation from V.

GRstudent said:
And secondly, why does the Riemann tensor have the products of Gammas in it? The presence of derivatives of Gammas make some sense but products don't.
You get some Christoffel symbols from the vector that is being parallel transported and some Christoffel symbols from the loop around which it is being transported. The net result is a product of Christoffel symbols.

GRstudent said:
So, as I understand it, R_{\mu\nu}= \sum_{i=1}^{D} R^i_{\mu i \nu}.

Correct me if I am wrong.
That is correct.
 
  • #66
You get some Christoffel symbols from the vector that is being parallel transported and some Christoffel symbols from the loop around which it is being transported. The net result is a product of Christoffel symbols.

This point is somewhat unclear to me. Also, can you give me an application (an example) of this concept: \delta V^\mu = R^{\mu}_{\sigma \nu \tau} dx^\nu dx^\tau V^\sigma? Susskind said that he himself used a sphere and a cone as examples to better understand these concepts.
 
  • #67
OK, so for simplicity let's use the metric on the unit sphere: ds^2=d\theta^2+\sin^2(\theta)\, d\phi^2. And let's start with a vector V^{\sigma}=(1,0) and the area given by dx^{\nu}=(0,.001) and dx^{\tau}=(.001,0); then we have \delta V^{\mu}=(0,10^{-6}), meaning that the vector V would roughly change from (1,0) to (1,10^{-6}) after being transported around that little loop. This is an approximation that would become exact as the dx vectors went to 0.
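Those numbers can be reproduced by brute force. The sketch below is my own code (not DaleSpam's Mathematica): it transports V = (1,0) around the small coordinate rectangle d\theta \times d\phi = 0.001 \times 0.001 on the unit sphere using dV^m = -\Gamma^m_{np} V^n dx^p in many small steps, and finds \delta V \approx (0, 10^{-6}), as the Riemann-tensor formula predicts.

```python
import math

# Christoffel symbols of the unit sphere, coordinates (theta, phi):
#   Gamma^theta_{phi phi}  = -sin(theta) cos(theta)
#   Gamma^phi_{theta phi}  =  Gamma^phi_{phi theta} = cos(theta)/sin(theta)

def step(theta, v, d_theta, d_phi):
    """One Euler step of dV^m = -Gamma^m_{np} V^n dx^p."""
    v_th, v_ph = v
    cot = math.cos(theta) / math.sin(theta)
    sc = math.sin(theta) * math.cos(theta)
    dv_th = sc * v_ph * d_phi
    dv_ph = -cot * (v_th * d_phi + v_ph * d_theta)
    return v_th + dv_th, v_ph + dv_ph

def transport_loop(theta0, eps=1e-3, n=1000):
    """Transport V = (1, 0) around the rectangle
    +d theta -> +d phi -> -d theta -> -d phi, starting at theta0."""
    theta, v = theta0, (1.0, 0.0)
    for d_th, d_ph in [(eps, 0.0), (0.0, eps), (-eps, 0.0), (0.0, -eps)]:
        h_th, h_ph = d_th / n, d_ph / n
        for _ in range(n):
            v = step(theta, v, h_th, h_ph)
            theta += h_th
    return v[0] - 1.0, v[1]        # delta V

d_th, d_ph = transport_loop(1.0)
print(d_th, d_ph)   # |delta V^phi| is about 1e-6; delta V^theta is tiny
```

The sign of \delta V^\phi depends on the orientation of the loop; the magnitude is what the formula \delta V^\mu = R^{\mu}_{\sigma \nu \tau} dx^\nu dx^\tau V^\sigma gives.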
 
  • #68
Your example is very good; however, I didn't understand why you have not included the Riemann tensor in your equation. And secondly, I am still waiting for some clarification about the products of Gammas in the Riemann tensor definition. Also, how can I derive the Riemann tensor?
 
  • #69
Anyone?
 
  • #70
GRstudent said:
Your example is very good; however, I didn't understand why you have not included the Riemann tensor in your equation.
I used the Riemann tensor components in the calculation, but just didn't write them out explicitly because they are such a pain to write.
GRstudent said:
And secondly, I am still waiting to have some clarification about having the products of Gammas in the Riemann tensor definition.
I'm sorry, you are going to have to give me more to go on. Why does this confuse you, and what is it about my previous answer that didn't click? Maybe you could explain why you would expect there not to be any products of Christoffel symbols?
 
  • #71
Actually, I think I can do a little better than that, although it would still help if you could explain why you would expect there to be no products of Christoffel symbols.

So, the Riemann curvature tensor can be defined in terms of covariant derivatives of covariant derivatives of vectors. The covariant derivative, in turn, involves a term with an ordinary derivative and a term with Christoffel symbols. So the covariant derivative of a covariant derivative will necessarily involve ordinary derivatives of Christoffel symbols and products of Christoffel symbols.
 
  • #72
So, the Riemann curvature tensor can be defined in terms of covariant derivatives of covariant derivatives of vectors. The covariant derivative, in turn, involves a term with an ordinary derivative and a term with Christoffel symbols. So the covariant derivative of a covariant derivative will necessarily involve ordinary derivatives of Christoffel symbols and products of Christoffel symbols.

This is a pretty good idea. So the covariant derivative of the vector is

\nabla_{n} V_{m} = \dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}

The covariant derivative of the vector is a vector so we can take the covariant derivative again:

\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r})

I feel that there is something wrong in the calculation. Please check the indices and finish off this equation.

Thank you.

I understand that it is a pain to write out the Riemann tensor for the previous example; however, if there is a chance to write even some components of it, that would be much appreciated.
 
  • #73
GRstudent said:
I feel that there is something wrong in the calculation. Please do check the indexes and finish off this equation.
It is pretty good. The only thing I could see is that when you are taking a covariant derivative of a lower index, the sign of the Christoffel-symbol term is negative. What you have done is clearly enough to show that you get products of Christoffel symbols when you take second covariant derivatives.

GRstudent said:
I understand that it is a pain to write out the Riemann tensor for the previous example; however, if there is a chance to write even some components of it, that would be much appreciated.
Here are some sources:
http://www.physics.usu.edu/Wheeler/GenRel/Lectures/2Sphere.pdf
http://math.ucr.edu/home/baez/gr/oz1.html
http://www.blau.itp.unibe.ch/lecturesGR.pdf (page 130)

Also, if you have Mathematica then I would be glad to post the code I used.
 
  • #74
^
Yes, I do have Mathematica.
 
  • #76
^
I couldn't install the package, but your file opened anyway.

Going back to my equation, can I solve it further?

Also, the covariant derivative is how the tangent vector changes?
 
  • #77
GRstudent said:
Your example is very good; however, I didn't understand why you have not included the Riemann tensor in your equation. And secondly, I am still waiting for some clarification about the products of Gammas in the Riemann tensor definition. Also, how can I derive the Riemann tensor?

Since reading your question, I spent about two days (not full-time, I do have a life) trying to derive the Riemann tensor in terms of the connection coefficients \Gamma^{\mu}_{\nu\lambda}, and I managed to convince myself that I know how to do it, but it is a mess.

Conceptually, it works this way:
  1. Start with three vectors U, V and W.
    Let U_1 = U, V_1 = V and W_1 = W.
  2. Parallel-transport all three vectors along the vector U_1. This gives you new vectors U_2, V_2 and W_2.
  3. Parallel-transport all three vectors along the vector V_2. This gives you new vectors U_3, V_3 and W_3.
  4. Parallel-transport V_3 and W_3 (we don't need to bother with U_3) along the vector -U_3. This gives you new vectors V_4 and W_4.
  5. Parallel-transport W_4 (we don't need the other ones) along the vector -V_4. This gives you a new vector W_5.
  6. Now define a final vector Q to be the vector that gets you back to where you started: Q = -U_1 - V_2 + U_3 + V_4
  7. Now, finally parallel-transport W_5 along Q to get a final vector W_6.
  8. Let \delta W = W_6 - W_1
  9. Then, to second order in the vectors U and V, the Riemann tensor R is defined by R(U,V,W) = \delta W. Or in terms of components, R^{\alpha}_{\beta\gamma\mu} U^{\beta} V^{\gamma}W^{\mu} = \delta W^{\alpha} (where \beta, \gamma and \mu are summed over).

This prescription still doesn't tell you how to compute Riemann in terms of connection coefficients. I don't know the most elegant way to do it, but the following approach works (although it's a bit of a mess).

Here's a fact about parallel transport that I have not seen written down anywhere, but I've convinced myself is true. Suppose you have two vectors A and B and you want to parallel-transport A along B to get a new vector \tilde{A}. How do the components of \tilde{A} relate to the components of A? This is a coordinate-dependent question, so it involves connection coefficients. To first order, the answer is:

\tilde{A}^{\mu} = A^{\mu} - \Gamma^{\mu}_{\nu \lambda} B^{\nu} A^{\lambda}

The following is bad notation, but since it's possible to deduce what the indices have to be, we can write this more simply as

\tilde{A} = A - \Gamma B A

That doesn't look too bad. Unfortunately, for computing the Riemann tensor, we need \tilde{A}^{\mu} to second-order in B. I'm pretty sure the answer is, in terms of components:

\tilde{A}^{\mu} = A^{\mu} - \Gamma^{\mu}_{\nu \lambda} B^{\nu} A^{\lambda} - \frac{1}{2} \partial_{\tau}\Gamma^{\mu}_{\nu \lambda} B^{\tau} B^{\nu} A^{\lambda} + \frac{1}{2} \Gamma^{\mu}_{\tau \nu} B^{\tau} \Gamma^{\nu}_{\lambda \alpha} B^{\lambda}A^{\alpha}

Using my bad, compact notation:

\tilde{A} = A - \Gamma B A - \frac{1}{2} (\partial\Gamma) B B A + \frac{1}{2}\Gamma B (\Gamma B A)

Also, the connection coefficients themselves change after moving along B. Afterwards, the new value of the connection coefficients \tilde{\Gamma}^{\mu}_{\nu \lambda} is given by:

\tilde{\Gamma}^{\mu}_{\nu \lambda} = \Gamma^{\mu}_{\nu \lambda} + \partial_{\alpha} \Gamma^{\mu}_{\nu \lambda} B^{\alpha}

In my bad compact notation:
\tilde{\Gamma} = \Gamma + (\partial \Gamma) B

If you use these formulas repeatedly to compute how the components of U, V, W and \Gamma change around the loop, only keeping the first- and second-order terms, then you will get an expression for \delta W^\mu in terms of U^\alpha, V^\beta, W^\gamma, \Gamma^{\mu}_{\nu \tau} and \partial_{\lambda}\Gamma^{\mu}_{\nu \tau}.

[Attachment: polygon.jpg]
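That second-order transport formula can be sanity-checked numerically. Below is my own sketch (not part of the post above): on the unit sphere, parallel transport of A = e_\phi along a pure \theta step B = (\epsilon, 0) has the exact solution \tilde{A}^\phi = \sin\theta_0 / \sin(\theta_0+\epsilon), because dV^\phi/d\theta = -\cot\theta\, V^\phi. The second-order formula should match it to O(\epsilon^3), much better than the first-order truncation.

```python
import math

def second_order_transport(theta0, eps):
    """The second-order formula, specialized to the unit sphere,
    A = (0, 1) (i.e. A = e_phi) and B = (eps, 0) (a step in theta).
    Only Gamma^phi_{theta phi} = cot(theta) and its theta-derivative enter."""
    cot = math.cos(theta0) / math.sin(theta0)
    dcot = -1.0 / math.sin(theta0) ** 2      # d/dtheta of cot(theta)
    # A~ = A - Gamma B A - (1/2)(dGamma) B B A + (1/2) Gamma B (Gamma B A)
    return 1.0 - cot * eps - 0.5 * dcot * eps ** 2 + 0.5 * cot * cot * eps ** 2

def exact_transport(theta0, eps):
    # exact parallel transport of e_phi along theta:
    # V^phi(theta) = sin(theta0) / sin(theta)
    return math.sin(theta0) / math.sin(theta0 + eps)

theta0, eps = 1.0, 1e-2
err2 = abs(second_order_transport(theta0, eps) - exact_transport(theta0, eps))
err1 = abs((1.0 - (math.cos(theta0) / math.sin(theta0)) * eps)
           - exact_transport(theta0, eps))
print(err2, err1)   # the second-order error is much smaller than first-order
```

With \epsilon = 10^{-2}, the second-order formula is off by O(\epsilon^3) while the first-order one is off by O(\epsilon^2), which is exactly the behavior the formula claims.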
 
  • #78
Why not just consider [\nabla_{\alpha}, \nabla_{\beta}] acting on a vector like usual? It's arduous and probably won't give you insight into anything, but it's extremely straightforward and still follows the idea of transporting a vector around a closed loop.
 
  • #79
GRstudent said:
Going back to my equation, can I solve it further?
I am not sure what you would want to solve it for nor why.


GRstudent said:
Also, the covariant derivative is how the tangent vector changes?
The covariant derivative is how any tensor changes. It serves the same purpose as the partial derivative, but just in a covariant fashion for tensors.
 
  • #80
^
I appreciate your efforts to derive the Riemann tensor; however, unless your final conclusion is the following, it cannot really be considered a "derivation".

R^{\alpha }_{\beta \mu \nu } = \dfrac{\partial \Gamma ^{\alpha }_{\beta \nu } }{\partial x^{\mu }} - \dfrac{\partial \Gamma ^{\alpha }_{\beta \mu } }{\partial x^{\nu }} + \Gamma ^{\alpha }_{\sigma \mu }\Gamma ^{\sigma }_{\beta \nu} - \Gamma ^{\alpha }_{\sigma \nu }\Gamma ^{\sigma }_{\beta \mu }

And in any case, your derivation is a real mess. I think there is a better way to do it; I just don't really understand whether it is mathematically possible. If we covariantly differentiate \Gamma, then it would make some sense. Still, it is very unclear. I have some intuition about this, yet I cannot see it mathematically proven to be correct.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If the Riemann tensor really has to do with curvature, then it would be natural to expect the definition to contain derivatives of the Gammas (how the Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma minus derivative of Gamma).
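For what it's worth, the formula above can be checked mechanically. Here is a sketch of my own, in plain Python, with a central finite difference standing in for the \partial\Gamma terms: it evaluates R^\theta_{\phi\theta\phi} on the unit sphere, where the known answer is \sin^2\theta.

```python
import math

def gamma(theta):
    """Christoffel symbols of the unit sphere as g[a][b][c] = Gamma^a_{bc},
    indices 0 = theta, 1 = phi.  They depend only on theta."""
    cot = math.cos(theta) / math.sin(theta)
    g = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
    g[0][1][1] = -math.sin(theta) * math.cos(theta)   # Gamma^th_{ph ph}
    g[1][0][1] = g[1][1][0] = cot                     # Gamma^ph_{th ph}
    return g

def riemann(a, b, m, n, theta, h=1e-5):
    """R^a_{b m n} = d_m Gamma^a_{b n} - d_n Gamma^a_{b m}
                     + Gamma^a_{s m} Gamma^s_{b n} - Gamma^a_{s n} Gamma^s_{b m},
    with d_theta taken by central differences and d_phi = 0."""
    gp, gm, g = gamma(theta + h), gamma(theta - h), gamma(theta)
    def d(mu, i, j, k):       # partial_mu Gamma^i_{jk}
        return (gp[i][j][k] - gm[i][j][k]) / (2 * h) if mu == 0 else 0.0
    return (d(m, a, b, n) - d(n, a, b, m)
            + sum(g[a][s][m] * g[s][b][n] - g[a][s][n] * g[s][b][m]
                  for s in range(2)))

theta = 1.0
print(riemann(0, 1, 0, 1, theta), math.sin(theta) ** 2)  # should agree
```

Note that both minus signs are essential: drop either one and the derivative terms no longer combine to give \sin^2\theta (nor a tensor at all).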
 
  • #81
Why not just consider [\nabla_{\alpha}, \nabla_{\beta}] acting on a vector like usual.

I agree with this. I read somewhere that [\nabla_{\alpha}, \nabla_{\beta}] V^{\mu}=R^{\mu}_{\nu \alpha \beta} V^{\nu}.

When written out explicitly, what does [\nabla_{\alpha}, \nabla_{\beta}] look like? It seems like a covariant derivative operator to me.
 
  • #82
GRstudent said:
I think there is a better way to do it; I just don't really understand whether it is mathematically possible. If we covariantly differentiate \Gamma, then it would make some sense. Still, it is very unclear. I have got some intuition on this yet I cannot see it mathematically proven to be correct.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If the Riemann tensor really has to do with curvature, then it would be natural to expect the definition to contain derivatives of the Gammas (how the Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma minus derivative of Gamma).

As DaleSpam already said, just consider the commutator of covariant derivatives acting on a vector. There is nothing "shady" about the minus signs; they come right out of the action of that commutator on the vector you are transporting around the loop. Also, the Christoffel symbols don't transform like a tensor, so it doesn't really make sense to apply the covariant derivative to them.
 
  • #83
I am not sure what you would want to solve it for nor why.

\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r})

I was asking about this equation; whether I can solve it further.
 
  • #84
\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial^2 V_{m}}{\partial (x^n)^2} + \dfrac{\partial (\Gamma^{r}_{nm} V_{r})}{\partial x^n} + \Gamma^{r}_{nm}\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} \Gamma^{r}_{nm} V_{r}
 
  • #85
GRstudent said:
^
I appreciate your efforts to derive the Riemann tensor; however, unless your final conclusion is the following, it cannot really be considered a "derivation".

R^{\alpha }_{\beta \mu \nu } = \dfrac{\partial \Gamma ^{\alpha }_{\beta \nu } }{\partial x^{\mu }} - \dfrac{\partial \Gamma ^{\alpha }_{\beta \mu } }{\partial x^{\nu }} + \Gamma ^{\alpha }_{\sigma \mu }\Gamma ^{\sigma }_{\beta \nu} - \Gamma ^{\alpha }_{\sigma \nu }\Gamma ^{\sigma }_{\beta \mu }

That is what you get.

And in any case, your derivation is a real mess.

I agree, but I think any way you do it is going to be a similar mess.
 
  • #86
That is what you get.

I couldn't see it in your derivation. I guess the exact derivation has to do with covariant derivatives, yet, as I said, I cannot grasp it at this stage.

I agree, but I think any way you do it is going to be a similar mess.

That's true. However, in the case of a derivation involving covariant derivatives it would be a mess of equations (for me, if there is no way to avoid messy derivations, I would prefer a mess of equations, because those are easier for me to understand).
 
  • #87
I'll write it out again:

\nabla_{n} V_{m} = \dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}

\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} V_{r}) = \dfrac{\partial^2 V_{m}}{\partial (x^n)^2} + \dfrac{\partial (\Gamma^{r}_{nm} V_{r})}{\partial x^n} + \Gamma^{r}_{nm}\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm} \Gamma^{r}_{nm} V_{r}
 
  • #88
GRstudent said:
If we covariantly differentiate \Gamma, then it would make some sense.

No, that's wrong. There is no such thing as "covariant differentiation of \Gamma". \Gamma is not a tensor, so you can't take the covariant derivative of it.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive.

Those minus signs are completely intuitive: When you go around a closed loop, you first travel in direction U and then in direction V and then in direction -U and then in direction -V.

Along the first segment, \delta W picks up a contribution
-\Gamma U W
along the second segment it picks up a contribution
-\Gamma V W +\Gamma V (\Gamma U W) - \partial \Gamma U V W
along the third segment it picks up a contribution
+\Gamma U W -\Gamma U (\Gamma V W) + \partial \Gamma V U W
along the fourth segment it picks up a contribution
+\Gamma V W

There are a bunch of other terms, but they cancel out.
 
  • #89
GRstudent said:
however, unless your final conclusion isn't the following, it cannot really be considered a "derivation".
I would make a similar comment about your intuition. Unless your intuition leads you to that formula then it cannot really be considered intuition about curvature.

GRstudent said:
Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If Riemann tensor really has to do with curvature, then it would be natural to expect that the definition will contain derivatives of Gammas (how Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma-derivative of Gamma).
This "expectation" or "intuition" is really strange and strangely specific. What on Earth would lead you to expect the sign of the Christoffel symbols to be positive? I can see no reason or justification for this assertion, it seems completely random, like the quantum collapse of some educational wavefunction.
 
  • #90
GRstudent said:
I was asking about this equation; whether I can solve it further.
Solve it for what? Are you trying to solve for V? (If so, you cannot solve for V with that equation, it is true for any V). I don't know what you mean by solving that equation.
 
  • #91
DaleSpam said:
I can see no reason or justification for this assertion, it seems completely random, like the quantum collapse of some educational wavefunction.

I lol'ed at that haha.
 
  • #92
GRstudent said:
I couldn't see it in your derivation. I guess, the exact derivation has to do with covariant derivatives yet, as I said, I cannot grasp it at this stage.

I'm ambivalent about the definition of Riemann in terms of covariant derivatives. It's kind of weird, since to compute R(U,V,W) we have to pretend that U, V and W are vector fields, and then do the calculation, and then in the end, nothing matters except U, V and W at a single point.

But the definition in terms of covariant derivatives is pretty succinct:

R(U,V,W) = \nabla_V (\nabla_U W) - \nabla_U (\nabla_V W)

Then in terms of components:

(\nabla_U W)^{\mu} = \partial_{\nu} W^{\mu} U^{\nu} + \Gamma^{\mu}_{\nu \lambda} U^{\nu} W^{\lambda}

(\nabla_V (\nabla_U W))^{\mu} = \partial_{\alpha} \partial_{\nu} W^{\mu} U^{\nu} V^{\alpha} + \partial_{\alpha} (\Gamma^{\mu}_{\nu \lambda} U^{\nu} W^{\lambda}) V^{\alpha} + \Gamma^{\mu}_{\alpha \beta} (\partial_{\nu} W^{\beta}) U^{\nu} V^{\alpha} + \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} U^{\nu} W^{\lambda} V^{\alpha}

(\nabla_U (\nabla_V W))^{\mu} = \partial_{\alpha} \partial_{\nu} W^{\mu} V^{\nu} U^{\alpha} + \partial_{\alpha} (\Gamma^{\mu}_{\nu \lambda} V^{\nu} W^{\lambda}) U^{\alpha} + \Gamma^{\mu}_{\alpha \beta} (\partial_{\nu} W^{\beta}) V^{\nu} U^{\alpha} + \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} V^{\nu} W^{\lambda} U^{\alpha}

Subtract them to get:

(\nabla_V (\nabla_U W) - \nabla_U (\nabla_V W))^{\mu} = (\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda}) U^{\nu} W^{\lambda} V^{\alpha} - (\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda}) V^{\nu} W^{\lambda} U^{\alpha} + \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} U^{\nu} W^{\lambda} V^{\alpha} - \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} V^{\nu} W^{\lambda} U^{\alpha}

Note the miracle that all the derivatives of W, U and V cancel out. (I guess it's not a miracle, since the result has to be a tensor, so those cancellations must happen.)

Rename some dummy indices to factor out U^{\nu}, V^{\alpha} and W^{\lambda} to get:

(\nabla_V (\nabla_U W) - \nabla_U (\nabla_V W))^{\mu} = ((\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda}) - (\partial_{\nu} \Gamma^{\mu}_{\alpha \lambda}) + \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} - \Gamma^{\mu}_{\nu \beta} \Gamma^{\beta}_{\alpha \lambda}) U^{\nu} V^{\alpha} W^{\lambda}
 
  • #93
It becomes more clear in this way. I appreciate your efforts! Thanks!
 
  • #94
stevendaryl said:
I'm ambivalent about the definition of Riemann in terms of covariant derivatives. It's kind of weird

Just to expand on my complaint; the definition of R in terms of covariant derivatives is very succinct, but it's a little mysterious why it gives the right answer for parallel transport of a vector around a loop.
 
  • #95
Actually, the derivation of the Riemann tensor can be done in a much easier way. This was discussed in the 8th lecture on GR by Susskind.

It is important to realize that the operator of the covariant derivative is the following:

\nabla_\mu=\dfrac{\partial}{\partial x^\mu} + \Gamma_\mu

Then, the idea of a commutator comes out:

AB-BA=[A,B]

[\dfrac{\partial}{\partial x}, f(x)] = \dfrac{\partial f}{\partial x}

[\nabla_\nu , \nabla_\mu] is basically the Riemann tensor:

(\partial_\nu + \Gamma_\nu)(\partial_\mu +\Gamma_\mu)-(\partial_\mu+\Gamma_\mu)(\partial_\nu+\Gamma_\nu)

When we expand this equation, we are left with

\partial_\nu \partial_\mu+\partial_\nu \Gamma_\mu + \Gamma_\nu \partial_\mu + \Gamma_\nu \Gamma_\mu - \partial_\mu \partial_\nu - \partial_\mu \Gamma_\nu - \Gamma_\mu \partial_\nu - \Gamma_\mu \Gamma_\nu

The two products of partial-derivative operators commute, so \partial_\nu \partial_\mu and \partial_\mu \partial_\nu cancel each other out.

Then we have

\partial_\nu \Gamma_\mu + \Gamma_\nu \partial_\mu + \Gamma_\nu \Gamma_\mu - \partial_\mu \Gamma_\nu - \Gamma_\mu \partial_\nu - \Gamma_\mu \Gamma_\nu

Now, we can notice that there are two commutators written out explicitly in that equation, so we put them together to have:

-[\partial_\mu, \Gamma_\nu] + [\partial_\nu, \Gamma_\mu]+\Gamma_\nu \Gamma_\mu- \Gamma_\mu \Gamma_\nu

Using formula for commutator, we have that:

\dfrac{\partial \Gamma_\mu}{\partial x^\nu}-\dfrac{\partial \Gamma_\nu}{\partial x^\mu} + \Gamma_\nu \Gamma_\mu- \Gamma_\mu \Gamma_\nu

^
Which is exactly the definition of the Riemann tensor (with the vector indices of the \Gamma matrices suppressed).
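This operator bookkeeping can be reproduced concretely by treating each (\Gamma_\mu)^a{}_b = \Gamma^a_{\mu b} as a literal 2x2 matrix. Below is a sketch of my own for the unit sphere: the matrix \partial_\theta \Gamma_\phi - \partial_\phi \Gamma_\theta + \Gamma_\theta \Gamma_\phi - \Gamma_\phi \Gamma_\theta should come out with (R)^\theta{}_\phi = \sin^2\theta, matching R^\theta_{\phi\theta\phi}.

```python
import math

def gammas(theta):
    """(Gamma_mu)^a_b = Gamma^a_{mu b} as 2x2 matrices, indices 0=theta, 1=phi."""
    cot = math.cos(theta) / math.sin(theta)
    sc = math.sin(theta) * math.cos(theta)
    g_theta = [[0.0, 0.0], [0.0, cot]]     # only Gamma^ph_{th ph} = cot
    g_phi = [[0.0, -sc], [cot, 0.0]]       # Gamma^th_{ph ph}, Gamma^ph_{ph th}
    return g_theta, g_phi

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def riemann_matrix(theta, h=1e-5):
    """partial_theta Gamma_phi - partial_phi Gamma_theta
       + Gamma_theta Gamma_phi - Gamma_phi Gamma_theta   (partial_phi = 0 here),
    with the theta-derivative taken by central differences."""
    g_th, g_ph = gammas(theta)
    _, g_ph_p = gammas(theta + h)
    _, g_ph_m = gammas(theta - h)
    d_ph = [[(g_ph_p[i][j] - g_ph_m[i][j]) / (2 * h) for j in range(2)]
            for i in range(2)]
    ab, ba = matmul(g_th, g_ph), matmul(g_ph, g_th)
    return [[d_ph[i][j] + ab[i][j] - ba[i][j] for j in range(2)]
            for i in range(2)]

R = riemann_matrix(1.0)
print(R[0][1], math.sin(1.0) ** 2)   # (R)^theta_phi should equal sin^2(theta)
```

Note how the matrix products do not commute, which is exactly why the \Gamma \Gamma terms survive in the commutator.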
 
  • #96
GRstudent said:
Actually, the derivation of the Riemann tensor can be done in a much easier way. This was discussed in the 8th lecture on GR by Susskind.

Isn't that basically the same derivation as the one in post #92?
 
  • #97
stevendaryl said:
Isn't that basically the same derivation as the one in post #92?

I guess not, in the sense that considering \partial and \Gamma as operators and computing the commutator is "cleaner" than sticking in vectors to operate on. Conceptually, it's the same thing, whether you write:

[\nabla_\mu, \nabla_\nu]

or

\nabla_U (\nabla_V W) - \nabla_V (\nabla_U W)
 
  • #98
^
Correct.

So basically, the covariant derivative shows how the tangent vector varies along the curve?
 
  • #99
Some of the Gammas in polar coordinates are \dfrac{1}{r}; what does that really mean?
 
  • #100
GRstudent said:
The Gammas in polar coordinates are \dfrac{1}{r}, what does it really mean?

The coefficient \Gamma^{i}_{jk} is defined implicitly via the equation:

\nabla_{j} e_{k} = \Gamma^{i}_{jk} e_{i}

where e_{i} is the ith basis vector. To illustrate, let's make it simpler, and use 2D polar coordinates. In rectangular coordinates, the coordinates are x and y, with basis vectors e_x and e_y. Now we can do a coordinate change to coordinates r and \theta defined by:
x = r cos(\theta), y = r sin(\theta) with corresponding basis vectors e_r = cos(\theta) e_x + sin(\theta) e_y and e_{\theta} = - r sin(\theta) e_x + r cos(\theta) e_y. We compute derivatives:

\nabla_{r} e_{r} = 0

So, \Gamma^{r}_{rr} = \Gamma^{\theta}_{rr} = 0

\nabla_{\theta} e_{r} = (\partial_{\theta} cos(\theta)) e_x + (\partial_{\theta} sin(\theta)) e_y
= - sin(\theta) e_x + cos(\theta) e_y
= e_{\theta}/r

So, \Gamma^{r}_{\theta r} = 0, and \Gamma^{\theta}_{\theta r} = 1/r

\nabla_{r} e_{\theta} = (\partial_{r} (-r sin(\theta))) e_x + (\partial_{r} (r cos(\theta))) e_y
= - sin(\theta) e_x + cos(\theta) e_y
= e_\theta/r

So, \Gamma^{r}_{r \theta} = 0, and \Gamma^{\theta}_{r \theta} = 1/r

\nabla_{\theta} e_{\theta} = (\partial_{\theta} (-r sin(\theta))) e_x + (\partial_{\theta} (r cos(\theta))) e_y
= - r cos(\theta) e_x - r sin(\theta) e_y
= -r e_{r}

So, \Gamma^{r}_{\theta \theta} = -r, and \Gamma^{\theta}_{\theta \theta} = 0
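These coefficients can be double-checked numerically: write the polar basis vectors out in Cartesian components, differentiate them with finite differences, and re-express the result in the (e_r, e_\theta) basis. Everything below is my own sketch of that check.

```python
import math

def e_r(r, th):
    return (math.cos(th), math.sin(th))

def e_th(r, th):
    return (-r * math.sin(th), r * math.cos(th))

def in_basis(vec, r, th):
    """Solve vec = c_r e_r + c_th e_th for (c_r, c_th) via 2x2 Cramer's rule."""
    (a, c), (b, d) = e_r(r, th), e_th(r, th)   # columns of the basis matrix
    det = a * d - b * c
    return ((vec[0] * d - b * vec[1]) / det,
            (a * vec[1] - vec[0] * c) / det)

def d_dth(f, r, th, h=1e-6):
    """Central-difference theta-derivative of a vector-valued function."""
    fp, fm = f(r, th + h), f(r, th - h)
    return ((fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h))

r, th = 2.0, 0.7
# Gamma^i_{theta r}: coefficients of nabla_theta e_r in the basis
c_r, c_th = in_basis(d_dth(e_r, r, th), r, th)
print(c_r, c_th, 1 / r)    # expect c_r = 0 and c_th = 1/r

# Gamma^i_{theta theta}: coefficients of nabla_theta e_theta in the basis
c_r2, c_th2 = in_basis(d_dth(e_th, r, th), r, th)
print(c_r2, c_th2, -r)     # expect c_r2 = -r and c_th2 = 0
```

So the 1/r really is just the statement that, as you swing the basis vectors through an angle d\theta, e_r tips into the e_\theta direction by the fraction 1/r of that (coordinate-length-r) basis vector.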
 