Metric tensor in spherical coordinates

In summary: the original poster asked whether there was something wrong in their thinking. Outside the planet the Einstein tensor is zero, but that does not mean the curvature of space-time is zero there; inside the planet the stress-energy tensor is nonzero and contributes to the curvature of space-time. The Schwarzschild solution only describes the space-time outside the planet, where there is no matter present. The conversation discusses the flat space-time metric and the Schwarzschild metric, which describes the gravitational field outside a non-spinning, spherically symmetric planet; far from the planet, the Schwarzschild metric approaches the flat space-time metric.
  • #71
Actually, I think I can do a little better than that, although it would still help if you could explain why you would expect there to be no products of Christoffel symbols.

So, the Riemann curvature tensor can be defined in terms of covariant derivatives of covariant derivatives of vectors. The covariant derivative, in turn, involves a term with an ordinary derivative and a term with Christoffel symbols. So the covariant derivative of a covariant derivative will necessarily involve ordinary derivatives of Christoffel symbols and products of Christoffel symbols.
 
  • #72
So, the Riemann curvature tensor can be defined in terms of covariant derivatives of covariant derivatives of vectors. The covariant derivative, in turn, involves a term with an ordinary derivative and a term with Christoffel symbols. So the covariant derivative of a covariant derivative will necessarily involve ordinary derivatives of Christoffel symbols and products of Christoffel symbols.

This is a pretty good idea. So the covariant derivative of the vector is

[itex]\nabla_{n} V_{m} = \dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}[/itex]

The covariant derivative of the vector is a vector so we can take the covariant derivative again:

[itex]\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r})[/itex]

I feel that there is something wrong in the calculation. Please do check the indexes and finish off this equation.

Thank you.

I understand that it is a pain to write the Riemann tensor from the previous example; however, if there is a chance to write even some components of it, that would be much appreciated.
 
  • #73
GRstudent said:
I feel that there is something wrong in the calculation. Please do check the indexes and finish off this equation.
It is pretty good. The only thing I could see is that when you take the covariant derivative of a lower-index (covariant) vector, the sign of the Christoffel symbol term is negative. What you have done is clearly enough to show that you get products of Christoffel symbols when you take second covariant derivatives.

GRstudent said:
I understand that it is a pain to write the Riemann tensor from the previous example; however, if there is a chance to write even some components of it, that would be much appreciated.
Here are some sources:
http://www.physics.usu.edu/Wheeler/GenRel/Lectures/2Sphere.pdf
http://math.ucr.edu/home/baez/gr/oz1.html
http://www.blau.itp.unibe.ch/lecturesGR.pdf (page 130)

Also, if you have Mathematica then I would be glad to post the code I used.
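In the meantime, here is a minimal Python/SymPy sketch along the same lines (this is not the Mathematica code referred to above, just a hedged example assuming the unit 2-sphere of the first link): it builds the Christoffel symbols from the metric and then the Riemann components from the usual formula.

[code]
# Hedged sketch (not the Mathematica notebook mentioned above): Christoffel
# symbols and Riemann components of the unit 2-sphere, computed with SymPy.
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]                                 # coordinates
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric of the unit 2-sphere
ginv = g.inv()
n = 2

# Gamma[a][b][c] = Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(
            sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) +
                              sp.diff(g[d, b], x[c]) -
                              sp.diff(g[b, c], x[d])) for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# R^a_{b mu nu} = d_mu Gamma^a_{b nu} - d_nu Gamma^a_{b mu}
#               + Gamma^a_{s mu} Gamma^s_{b nu} - Gamma^a_{s nu} Gamma^s_{b mu}
def riemann(a, b, mu, nu):
    expr = sp.diff(Gamma[a][b][nu], x[mu]) - sp.diff(Gamma[a][b][mu], x[nu])
    expr += sum(Gamma[a][s][mu] * Gamma[s][b][nu] -
                Gamma[a][s][nu] * Gamma[s][b][mu] for s in range(n))
    return sp.simplify(expr)

print("Gamma^theta_{phi phi}   =", Gamma[0][1][1])        # -sin(theta)*cos(theta)
print("Gamma^phi_{theta phi}   =", Gamma[1][0][1])        # cos(theta)/sin(theta)
print("R^theta_{phi theta phi} =", riemann(0, 1, 0, 1))   # sin(theta)**2
[/code]

Swapping in another metric only changes the two lines defining x and g.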
 
  • #74
^
Yes, I do have Mathematica.
 
  • #76
^
I couldn't install the package, but your file opened anyway.

Going back to my equation, can I solve it further?

Also, is the covariant derivative how the tangent vector changes?
 
  • #77
GRstudent said:
Your example is very good; however, I didn't understand why you have not included the Riemann tensor in your equation. And secondly, I am still waiting for some clarification about the products of Gammas in the Riemann tensor definition. Also, how can I derive the Riemann tensor?

Since reading your question, I spent about two days (not full-time, I do have a life) trying to derive the Riemann tensor in terms of the connection coefficients [itex]\Gamma^{\mu}_{\nu\lambda}[/itex], and I managed to convince myself that I know how to do it, but it is a mess.

Conceptually, it works this way:
  1. Start with three vectors [itex]U[/itex], [itex]V[/itex] and [itex]W[/itex].
    Let [itex]U_1 = U[/itex], [itex]V_1 = V[/itex] and [itex]W_1 = W[/itex].
  2. Parallel-transport all three vectors along the vector [itex]U_1[/itex]. This gives you new vectors [itex]U_2[/itex], [itex]V_2[/itex] and [itex]W_2[/itex].
  3. Parallel-transport all three vectors along the vector [itex]V_2[/itex]. This gives you new vectors [itex]U_3[/itex], [itex]V_3[/itex] and [itex]W_3[/itex].
  4. Parallel-transport [itex]V_3[/itex] and [itex]W_3[/itex] (we don't need to bother with [itex]U_3[/itex]) along the vector [itex]-U_3[/itex]. This gives you new vectors [itex]V_4[/itex] and [itex]W_4[/itex].
  5. Parallel-transport [itex]W_4[/itex] (we don't need the other ones) along the vector [itex]-V_4[/itex]. This gives you a new vector [itex]W_5[/itex].
  6. Now define a final vector [itex]Q[/itex] to be the vector that gets you back to where you started: [itex]Q = -U_1 - V_2 + U_3 + V_4[/itex]
  7. Now, finally parallel-transport [itex]W_5[/itex] along [itex]Q[/itex] to get a final vector [itex]W_6[/itex].
  8. Let [itex]\delta W = W_6 - W_1[/itex]
  9. Then, to second order in the vectors [itex]U[/itex] and [itex]V[/itex], the Riemann tensor [itex]R[/itex] is defined by [itex]R(U,V,W) = \delta W[/itex]. Or in terms of components, [itex]R^{\alpha}_{\beta\gamma\mu} U^{\beta} V^{\gamma}W^{\mu} = \delta W^{\alpha}[/itex] (where [itex]\beta[/itex], [itex]\gamma[/itex] and [itex]\mu[/itex] are summed over).

This prescription still doesn't tell you how to compute Riemann in terms of connection coefficients. I don't know the most elegant way to do it, but the following approach works (although it's a bit of a mess).

Here's a fact about parallel transport that I have not seen written down anywhere, but I've convinced myself is true. Suppose you have two vectors [itex]A[/itex] and [itex]B[/itex] and you want to parallel-transport [itex]A[/itex] along [itex]B[/itex] to get a new vector [itex]\tilde{A}[/itex]. How do the components of [itex]\tilde{A}[/itex] relate to the components of [itex]A[/itex]? This is a coordinate-dependent question, so it involves connection coefficients. To first order, the answer is:

[itex]\tilde{A}^{\mu} = A^{\mu} - \Gamma^{\mu}_{\nu \lambda} B^{\nu} A^{\lambda}[/itex]

The following is bad notation, but since it's possible to deduce what the indices have to be, we can write this more simply as

[itex]\tilde{A} = A - \Gamma B A[/itex]

That doesn't look too bad. Unfortunately, for computing the Riemann tensor, we need [itex]\tilde{A}^{\mu}[/itex] to second-order in [itex]B[/itex]. I'm pretty sure the answer is, in terms of components:

[itex]\tilde{A}^{\mu} = A^{\mu} - \Gamma^{\mu}_{\nu \lambda} B^{\nu} A^{\lambda} - \frac{1}{2} \partial_{\tau}\Gamma^{\mu}_{\nu \lambda} B^{\tau} B^{\nu} A^{\lambda} + \frac{1}{2} \Gamma^{\mu}_{\tau \nu} B^{\tau} \Gamma^{\nu}_{\lambda \alpha} B^{\lambda}A^{\alpha}[/itex]

Using my bad, compact notation:

[itex]\tilde{A} = A - \Gamma B A - \frac{1}{2} (\partial\Gamma) B B A + \frac{1}{2}\Gamma B (\Gamma B A)[/itex]

Also, the connection coefficients themselves change after moving along [itex]B[/itex]. Afterwards, the new value of the connection coefficients [itex]\tilde{\Gamma}^{\mu}_{\nu \lambda}[/itex] is given by:

[itex]\tilde{\Gamma}^{\mu}_{\nu \lambda} = \Gamma^{\mu}_{\nu \lambda} + \partial_{\alpha} \Gamma^{\mu}_{\nu \lambda} B^{\alpha}[/itex]

In my bad compact notation:
[itex]\tilde{\Gamma} = \Gamma + (\partial \Gamma) B[/itex]

If you use these formulas repeatedly to compute how the components of [itex]U[/itex], [itex]V[/itex], [itex]W[/itex] and [itex]\Gamma[/itex] change around the loop, only keeping the first- and second-order terms, then you will get an expression for [itex]\delta W^\mu[/itex] in terms of [itex]U^\alpha[/itex], [itex]V^\beta[/itex], [itex]W^\gamma[/itex], [itex]\Gamma^{\mu}_{\nu \tau}[/itex] and [itex]\partial_{\lambda}\Gamma^{\mu}_{\nu \tau}[/itex].

[Attached image: polygon.jpg]
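Here is a rough numerical illustration of steps 1-9 above (a hedged sketch of my own, not a derivation: it assumes the unit 2-sphere, transports the vector with many small first-order steps instead of the second-order formula, and uses the sign bookkeeping that appears later in the thread). It carries [itex]W = e_\theta[/itex] around a small coordinate square with legs [itex]U = \epsilon\, \partial_\theta[/itex] and [itex]V = \epsilon\, \partial_\phi[/itex] and compares [itex]\delta W[/itex] with the curvature prediction.

[code]
# Hedged numerical sketch of the loop construction above, on the unit 2-sphere
# (coordinates theta, phi).  W is parallel-transported in many small steps
# around the square +U, +V, -U, -V and delta W = W_final - W_initial is
# compared with the O(eps^2) curvature prediction.
import math

def christoffel(theta):
    """Nonzero Christoffel symbols of the unit 2-sphere, G[a][mu][b] = Gamma^a_{mu b}."""
    G = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
    G[0][1][1] = -math.sin(theta) * math.cos(theta)              # Gamma^theta_{phi phi}
    G[1][0][1] = G[1][1][0] = math.cos(theta) / math.sin(theta)  # Gamma^phi_{theta phi}
    return G

def transport(point, leg, W, substeps=200):
    """Parallel-transport W along the coordinate displacement `leg`,
    using many first-order updates dW^a = -Gamma^a_{mu b} dx^mu W^b."""
    theta, phi = point
    dth, dph = leg[0] / substeps, leg[1] / substeps
    for _ in range(substeps):
        G = christoffel(theta)
        dW = [-sum(G[a][mu][b] * (dth, dph)[mu] * W[b]
                   for mu in range(2) for b in range(2)) for a in range(2)]
        W = [W[a] + dW[a] for a in range(2)]
        theta, phi = theta + dth, phi + dph
    return (theta, phi), W

eps = 1e-3
p = (1.0, 0.0)        # start at theta = 1 rad, phi = 0
W0 = [1.0, 0.0]       # W = e_theta

p, W = transport(p, (eps, 0.0), W0)    # leg 1: +U (along theta)
p, W = transport(p, (0.0, eps), W)     # leg 2: +V (along phi)
p, W = transport(p, (-eps, 0.0), W)    # leg 3: -U
p, W = transport(p, (0.0, -eps), W)    # leg 4: -V

dW = [W[a] - W0[a] for a in range(2)]
# The only curvature component that enters here is R^phi_{theta theta phi} = -1
# on the unit sphere, which for this loop predicts delta W ~ (0, +eps^2).
print("numerical dW =", dW)
print("predicted dW = (0, %.3e)" % eps**2)
[/code]

The printed [itex]\delta W^\phi[/itex] comes out close to [itex]+\epsilon^2 = 10^{-6}[/itex]; geometrically, the transported vector has been rotated by an angle equal to the Gaussian curvature times the area of the loop.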
 
  • #78
Why not just consider [itex][\triangledown _{\alpha }, \triangledown _{\beta }][/itex] acting on a vector, as usual? It's arduous and probably won't give you insight into anything, but it's extremely straightforward and still follows the idea of transporting a vector around a closed loop.
 
  • #79
GRstudent said:
Going back to my equation, can I solve it further?
I am not sure what you would want to solve it for, nor why.


GRstudent said:
Also, is the covariant derivative how the tangent vector changes?
The covariant derivative is how any tensor changes. It serves the same purpose as the partial derivative, but just in a covariant fashion for tensors.
 
  • #80
^
I appreciate your efforts to derive the Riemann tensor; however, unless your final conclusion is the following, it cannot really be considered a "derivation".

[itex]R^{\alpha }_{\beta \mu \nu } = \dfrac{\partial \Gamma ^{\alpha }_{\beta \nu } }{\partial x^{\mu }} - \dfrac{\partial \Gamma ^{\alpha }_{\beta \mu } }{\partial x^{\nu }} + \Gamma ^{\alpha }_{\sigma \mu }\Gamma ^{\sigma }_{\beta \nu} - \Gamma ^{\alpha }_{\sigma \nu }\Gamma ^{\sigma }_{\beta \mu }[/itex]

In any case, your derivation is a real mess. I think there is a better way to do it; I just don't really understand whether it is mathematically possible. If we covariantly differentiate [itex]\Gamma[/itex], then it would make some sense. Still, it is very unclear. I have got some intuition on this, yet I cannot see it mathematically proven to be correct.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If the Riemann tensor really has to do with curvature, then it would be natural to expect the definition to contain derivatives of the Gammas (how the Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma minus derivative of Gamma).
 
  • #81
Why not just consider [itex][\triangledown _{\alpha }, \triangledown _{\beta }][/itex] acting on a vector, as usual?
I agree with this. I read somewhere that [itex][\triangledown _{\alpha }, \triangledown _{\beta }] V^{\mu} = R^{\mu}_{\nu \alpha \beta} V^{\nu}[/itex].

When written out explicitly, what does [itex][\triangledown _{\alpha }, \triangledown _{\beta }][/itex] look like? Seems like a covariant derivative operator to me.
 
  • #82
GRstudent said:
I think there is a better way to do it; I just don't really understand whether it is mathematically possible. If we covariantly differentiate [itex]\Gamma[/itex], then it would make some sense. Still, it is very unclear. I have got some intuition on this, yet I cannot see it mathematically proven to be correct.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If the Riemann tensor really has to do with curvature, then it would be natural to expect the definition to contain derivatives of the Gammas (how the Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma minus derivative of Gamma).

As DaleSpam already said a while back, just consider the commutator of the covariant derivatives acting on a vector. There is nothing "shady" about the minus signs: they come right out of the action of that commutator on the vector you are transporting around the loop. Also, the Christoffel symbols don't transform like a tensor, so it doesn't really make sense to apply the covariant derivative to them.
 
  • #83
I am not sure what you would want to solve it for, nor why.

[itex]\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r})[/itex]

I was asking about this equation: whether I can solve it further.
 
  • #84
[itex]\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial^2 V_{m}}{\partial (x^n)^2} + \dfrac{\partial \Gamma^{r}_{nm}{} V_{r}}{\partial x^n} + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}}) + \Gamma^{r}_{nm} \Gamma^{r}_{nm}{} V_{r} [/itex]
 
  • #85
GRstudent said:
^
I appreciate your efforts to derive the Riemann tensor; however, unless your final conclusion is the following, it cannot really be considered a "derivation".

[itex]R^{\alpha }_{\beta \mu \nu } = \dfrac{\partial \Gamma ^{\alpha }_{\beta \nu } }{\partial x^{\mu }} - \dfrac{\partial \Gamma ^{\alpha }_{\beta \mu } }{\partial x^{\nu }} + \Gamma ^{\alpha }_{\sigma \mu }\Gamma ^{\sigma }_{\beta \nu} - \Gamma ^{\alpha }_{\sigma \nu }\Gamma ^{\sigma }_{\beta \mu }[/itex]

That is what you get.

In any case, your derivation is a real mess.

I agree, but I think any way you do it is going to be a similar mess.
 
  • #86
That is what you get.

I couldn't see it in your derivation. I guess the exact derivation has to do with covariant derivatives, yet, as I said, I cannot grasp it at this stage.

I agree, but I think any way you do it is going to be a similar mess.

That's true. However, in the case of a derivation involving covariant derivatives it would be a mess of equations (for me, if there is no way to avoid messy derivations, I would prefer a mess of equations because they are easier for me to understand).
 
  • #87
I'll write it out again:

[itex]\nabla_{n} V_{m} = \dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}[/itex]

[itex]\nabla_{n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial}{\partial x^n} (\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}} + \Gamma^{r}_{nm}{} V_{r}) = \dfrac{\partial^2 V_{m}}{\partial (x^n)^2} + \dfrac{\partial \Gamma^{r}_{nm}{} V_{r}}{\partial x^n} + \Gamma^{r}_{nm}(\dfrac{\partial V_{m}}{\partial x^{n}}) + \Gamma^{r}_{nm} \Gamma^{r}_{nm}{} V_{r}[/itex]
 
  • #88
GRstudent said:
If we covariantly differentiate [itex]\Gamma[/itex], then it would make some sense.

No, that's wrong. There is no such thing as "covariant differentiation of [itex]\Gamma[/itex]". [itex]\Gamma[/itex] is not a tensor, so you can't take the covariant derivative of it.

Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive.

Those minus signs are completely intuitive: When you go around a closed loop, you first travel in direction [itex]U[/itex] and then in direction [itex]V[/itex] and then in direction [itex]-U[/itex] and then in direction [itex]-V[/itex].

Along the first segment, [itex]\delta W[/itex] picks up a contribution
[itex]-\Gamma U W[/itex]
along the second segment it picks up a contribution
[itex]-\Gamma V W +\Gamma V (\Gamma U W) - \partial \Gamma U V W[/itex]
along the third segment it picks up a contribution
[itex]+\Gamma U W -\Gamma U (\Gamma V W) + \partial \Gamma V U W[/itex]
along the fourth segment it picks up a contribution
[itex]+\Gamma V W[/itex]

There are a bunch of other terms, but they cancel out.
 
  • #89
GRstudent said:
however, unless your final conclusion is the following, it cannot really be considered a "derivation".
I would make a similar comment about your intuition. Unless your intuition leads you to that formula, it cannot really be considered intuition about curvature.

GRstudent said:
Also, what is shady here is the fact that the Riemann tensor definition has two minus signs in it; it's very non-intuitive. If the Riemann tensor really has to do with curvature, then it would be natural to expect the definition to contain derivatives of the Gammas (how the Gammas vary from place to place); on the contrary, you would never expect that we have to subtract them (derivative of Gamma minus derivative of Gamma).
This "expectation" or "intuition" is really strange and strangely specific. What on Earth would lead you to expect the sign of the Christoffel symbols to be positive? I can see no reason or justification for this assertion, it seems completely random, like the quantum collapse of some educational wavefunction.
 
  • #90
GRstudent said:
I was asking about this equation; whether I can solve it further.
Solve it for what? Are you trying to solve for V? (If so, you cannot solve for V with that equation; it is true for any V.) I don't know what you mean by solving that equation.
 
  • #91
DaleSpam said:
I can see no reason or justification for this assertion, it seems completely random, like the quantum collapse of some educational wavefunction.

I lol'ed at that haha.
 
  • #92
GRstudent said:
I couldn't see it in your derivation. I guess the exact derivation has to do with covariant derivatives, yet, as I said, I cannot grasp it at this stage.

I'm ambivalent about the definition of Riemann in terms of covariant derivatives. It's kind of weird, since to compute [itex]R(U,V,W)[/itex] we have to pretend that [itex]U[/itex], [itex]V[/itex] and [itex]W[/itex] are vector fields, and then do the calculation, and then in the end, nothing matters except [itex]U[/itex], [itex]V[/itex] and [itex]W[/itex] at a single point.

But the definition in terms of covariant derivatives is pretty succinct:

[itex] R(U,V,W) = \nabla_V (\nabla_U W) - \nabla_U (\nabla_V W)[/itex]

Then in terms of components:

[itex](\nabla_U W)^{\mu} = \partial_{\nu} W^{\mu} U^{\nu} + \Gamma^{\mu}_{\nu \lambda} U^{\nu} W^{\lambda}[/itex]

[itex](\nabla_V (\nabla_U W))^{\mu}
= \partial_{\alpha} \partial_{\nu} W^{\mu} U^{\nu} V^{\alpha}
+ \partial_{\alpha} (\Gamma^{\mu}_{\nu \lambda} U^{\nu} W^{\lambda}) V^{\alpha}
+ \Gamma^{\mu}_{\alpha \beta} (\partial_{\nu} W^{\beta}) U^{\nu} V^{\alpha}
+ \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} U^{\nu} W^{\lambda} V^{\alpha}
[/itex]

[itex](\nabla_U (\nabla_V W))^{\mu}
= \partial_{\alpha} \partial_{\nu} W^{\mu} V^{\nu} U^{\alpha}
+ \partial_{\alpha} (\Gamma^{\mu}_{\nu \lambda} V^{\nu} W^{\lambda}) U^{\alpha}
+ \Gamma^{\mu}_{\alpha \beta} (\partial_{\nu} W^{\beta}) V^{\nu} U^{\alpha}
+ \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} V^{\nu} W^{\lambda} U^{\alpha}
[/itex]

Subtract them to get:
[itex](\nabla_V (\nabla_U W) - \nabla_U (\nabla_V W))^{\mu}
= (\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda}) U^{\nu} W^{\lambda} V^{\alpha}
- (\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda}) V^{\nu} W^{\lambda} U^{\alpha}
+ \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} U^{\nu} W^{\lambda} V^{\alpha}
- \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda} V^{\nu} W^{\lambda} U^{\alpha}[/itex]

Note the miracle that all the derivatives of [itex]W[/itex], [itex]U[/itex] and [itex]V[/itex] cancel out. (I guess it's not a miracle, since the result has to be a tensor, so those cancellations must happen.)

Rename some dummy indices to factor out [itex]U^{\nu}[/itex], [itex]V^{\alpha}[/itex] and [itex]W^{\lambda}[/itex] to get:
[itex](\nabla_V (\nabla_U W) - \nabla_U (\nabla_V W))^{\mu}
= ((\partial_{\alpha} \Gamma^{\mu}_{\nu \lambda})
- (\partial_{\nu} \Gamma^{\mu}_{\alpha \lambda})
+ \Gamma^{\mu}_{\alpha \beta} \Gamma^{\beta}_{\nu \lambda}
- \Gamma^{\mu}_{\nu\beta} \Gamma^{\beta}_{\alpha \lambda})U^{\nu}V^{\alpha} W^{\lambda}[/itex]
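If you want to see that cancellation happen explicitly, here is a hedged SymPy sketch (my own check, in index form, assuming the unit 2-sphere as a test case): it builds the second covariant derivatives of an arbitrary vector field [itex]W[/itex], antisymmetrizes, and confirms that every derivative of [itex]W[/itex] drops out, leaving exactly the Riemann components contracted with [itex]W[/itex].

[code]
# Hedged check on the unit 2-sphere: [nabla_mu, nabla_nu] W^a = R^a_{b mu nu} W^b,
# with W an arbitrary vector field, so all dW terms must cancel.
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
n = 2

# Christoffel symbols of the unit 2-sphere, Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.Integer(0) for _ in range(n)] for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -sp.sin(theta) * sp.cos(theta)                  # Gamma^theta_{phi phi}
Gamma[1][0][1] = Gamma[1][1][0] = sp.cos(theta) / sp.sin(theta)  # Gamma^phi_{theta phi}

# An arbitrary vector field W^a(theta, phi)
W = [sp.Function('W0')(theta, phi), sp.Function('W1')(theta, phi)]

# First covariant derivative: T[a][b] = (nabla_b W)^a = d_b W^a + Gamma^a_{bc} W^c
T = [[sp.diff(W[a], x[b]) + sum(Gamma[a][b][c] * W[c] for c in range(n))
      for b in range(n)] for a in range(n)]

# Second covariant derivative of the (1,1) tensor T:
# nabla2(a, b, c) = (nabla_c T)^a_b = d_c T^a_b + Gamma^a_{cd} T^d_b - Gamma^d_{cb} T^a_d
def nabla2(a, b, c):
    return (sp.diff(T[a][b], x[c])
            + sum(Gamma[a][c][d] * T[d][b] for d in range(n))
            - sum(Gamma[d][c][b] * T[a][d] for d in range(n)))

# Riemann components from the Gamma formula quoted earlier in the thread
def riemann(a, b, mu, nu):
    expr = sp.diff(Gamma[a][b][nu], x[mu]) - sp.diff(Gamma[a][b][mu], x[nu])
    expr += sum(Gamma[a][s][mu] * Gamma[s][b][nu] -
                Gamma[a][s][nu] * Gamma[s][b][mu] for s in range(n))
    return expr

# Every entry below prints 0: the derivatives of W have cancelled.
for a in range(n):
    for mu in range(n):
        for nu in range(n):
            commutator = nabla2(a, nu, mu) - nabla2(a, mu, nu)  # nabla_mu nabla_nu - nabla_nu nabla_mu
            predicted = sum(riemann(a, b, mu, nu) * W[b] for b in range(n))
            print(a, mu, nu, sp.simplify(commutator - predicted))
[/code]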
 
  • #93
It becomes clearer this way. I appreciate your efforts! Thanks!
 
  • #94
stevendaryl said:
I'm ambivalent about the definition of Riemann in terms of covariant derivatives. It's kind of weird

Just to expand on my complaint: the definition of R in terms of covariant derivatives is very succinct, but it's a little mysterious why it gives the right answer for parallel transport of a vector around a loop.
 
  • #95
Actually, the derivation of the Riemann tensor can be done in a much easier way. This was discussed in the 8th lecture on GR by Susskind.

It is important to realize that the operator of the covariant derivative is the following:

[itex]\nabla_\mu[/itex][itex]=[/itex][itex]\dfrac{\partial}{\partial x^\mu} + \Gamma_\mu [/itex]

Then, the idea of a commutator comes out:

[itex]AB-BA=[A,B][/itex]

[itex][\dfrac{\partial}{\partial x}, f(x)] = \dfrac{\partial f}{\partial x}[/itex]

[itex] [\nabla_\nu , \nabla_\mu][/itex] is basically the Riemann tensor:

[itex](\partial_\nu + \Gamma_\nu)(\partial_\mu +\Gamma_\mu)-(\partial_\mu+\Gamma_\mu)(\partial_\nu+\Gamma_\nu)[/itex]

When we expand this equation, we are left with

[itex]\partial_\nu \partial_\mu+\partial_\nu \Gamma_\mu + \Gamma_\nu \partial_\mu + \Gamma_\nu \Gamma_\mu - \partial_\mu \partial_\nu - \partial_\mu \Gamma_\nu - \Gamma_\mu \partial_\nu - \Gamma_\mu \Gamma_\nu[/itex]

The partial derivative operators commute, so the [itex]\partial_\nu \partial_\mu[/itex] and [itex]\partial_\mu \partial_\nu[/itex] terms cancel each other out.

Then we have

[itex]\partial_\nu \Gamma_\mu + \Gamma_\nu \partial_\mu + \Gamma_\nu \Gamma_\mu - \partial_\mu \Gamma_\nu - \Gamma_\mu \partial_\nu - \Gamma_\mu \Gamma_\nu[/itex]

Now, we can notice that there are two commutators written out explicitly in that equation, so we put them together to have:

[itex]-[\partial_\mu, \Gamma_\nu] + [\partial_\nu, \Gamma_\mu]+\Gamma_\nu \Gamma_\mu- \Gamma_\mu \Gamma_\nu[/itex]

Using the formula for the commutator, we have:

[itex]\dfrac{\partial \Gamma_\mu}{\partial x^\nu}-\dfrac{\partial \Gamma_\nu}{\partial x^\mu} + \Gamma_\nu \Gamma_\mu- \Gamma_\mu \Gamma_\nu[/itex]

^
This is exactly the definition of the Riemann tensor (with the vector indices of the [itex]\Gamma_\mu[/itex] matrices suppressed).
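A hedged SymPy sketch of exactly this bookkeeping, assuming the unit 2-sphere as a test case: each [itex]\Gamma_\mu[/itex] becomes a matrix [itex](\Gamma_\mu)^a{}_b = \Gamma^a_{\mu b}[/itex], and the commutator expression above is evaluated directly as a matrix.

[code]
# Hedged sketch: the "Gamma_mu as matrices" form of the commutator, checked on
# the unit 2-sphere.  (Gamma_mu)^a_b = Gamma^a_{mu b}.
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]

Gamma_theta = sp.Matrix([[0, 0],
                         [0, sp.cos(theta) / sp.sin(theta)]])
Gamma_phi   = sp.Matrix([[0, -sp.sin(theta) * sp.cos(theta)],
                         [sp.cos(theta) / sp.sin(theta), 0]])
Gam = [Gamma_theta, Gamma_phi]

# [nabla_nu, nabla_mu] -> d_nu Gamma_mu - d_mu Gamma_nu + Gamma_nu Gamma_mu - Gamma_mu Gamma_nu
def F(nu, mu):
    return sp.simplify(sp.diff(Gam[mu], x[nu]) - sp.diff(Gam[nu], x[mu])
                       + Gam[nu] * Gam[mu] - Gam[mu] * Gam[nu])

# With nu = theta, mu = phi the entries are the mixed components R^a_{b theta phi};
# the result is Matrix([[0, sin(theta)**2], [-1, 0]]).
print(F(0, 1))
[/code]

The two free matrix indices are the [itex]\alpha[/itex] and [itex]\beta[/itex] of [itex]R^{\alpha}_{\beta\mu\nu}[/itex]; the two "loop" indices [itex]\mu, \nu[/itex] label which commutator you took.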
 
  • #96
GRstudent said:
Actually, the derivation of the Riemann tensor can be done in much easier way. This was discussed in 8th lecture on GR by Susskind.

Isn't that basically the same derivation as the one in post #92?
 
  • #97
stevendaryl said:
Isn't that basically the same derivation as the one in post #92?

I guess not, in the sense that considering [itex]\partial[/itex] and [itex]\Gamma[/itex] as operators and computing the commutator is "cleaner" than sticking in vectors to operate on. Conceptually, it's the same thing, whether you write:

[itex][\nabla_\mu, \nabla_\nu][/itex]

or

[itex]\nabla_U (\nabla_V W) - \nabla_V (\nabla_U W)[/itex]
 
  • #98
^
Correct.

So basically, the covariant derivative shows how the tangent vector varies along the curve?
 
  • #99
The Gammas in polar coordinates are [itex]\dfrac{1}{r}[/itex]; what does that really mean?
 
  • #100
GRstudent said:
The Gammas in polar coordinates are [itex]\dfrac{1}{r}[/itex]; what does that really mean?

The coefficient [itex]\Gamma^{i}_{jk}[/itex] is defined implicitly via the equation:

[itex]\nabla_{j} e_{k} = \Gamma^{i}_{jk} e_{i}[/itex]

where [itex]e_{i}[/itex] is the ith basis vector. To illustrate, let's make it simpler, and use 2D polar coordinates. In rectangular coordinates, the coordinates are [itex]x[/itex] and [itex]y[/itex], with basis vectors [itex]e_x[/itex] and [itex]e_y[/itex]. Now we can do a coordinate change to coordinates [itex]r[/itex] and [itex]\theta[/itex] defined by:
[itex]x = r cos(\theta), y = r sin(\theta)[/itex] with corresponding basis vectors [itex]e_r = cos(\theta) e_x + sin(\theta) e_y[/itex] and [itex]e_{\theta} = - r sin(\theta) e_x + r cos(\theta) e_y[/itex]. We compute derivatives:

[itex]\nabla_{r} e_{r} = 0[/itex]

So, [itex]\Gamma^{r}_{rr} = \Gamma^{\theta}_{rr} = 0[/itex]

[itex]\nabla_{\theta} e_{r} = (\partial_{\theta} cos(\theta)) e_x + (\partial_{\theta} sin(\theta)) e_y[/itex]
= [itex]- sin(\theta) e_x + cos(\theta) e_y[/itex]
= [itex]e_{\theta}/r[/itex]

So, [itex]\Gamma^{r}_{\theta r} = 0[/itex], and [itex]\Gamma^{\theta}_{\theta r} = 1/r[/itex]

[itex]\nabla_{r} e_{\theta} = (\partial_{r} (-r sin(\theta))) e_x + (\partial_{r} (r cos(\theta))) e_y [/itex]
= [itex]- sin(\theta) e_x + cos(\theta) e_y [/itex]
= [itex] e_\theta/r[/itex]

So, [itex]\Gamma^{r}_{r \theta} = 0[/itex], and [itex]\Gamma^{\theta}_{r \theta} = 1/r[/itex]

[itex]\nabla_{\theta} e_{\theta} = (\partial_{\theta} (-r sin(\theta))) e_x + (\partial_{\theta} (r cos(\theta))) e_y[/itex]
= [itex]- r cos(\theta) e_x - r sin(\theta) e_y[/itex]
= [itex]-r e_{r}[/itex]

So, [itex]\Gamma^{r}_{\theta \theta} = -r[/itex], and [itex]\Gamma^{\theta}_{\theta \theta} = 0[/itex]
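A hedged SymPy cross-check of these values (my own sketch, not part of the derivation above): the same [itex]\Gamma[/itex]'s can be obtained from the flat metric in polar coordinates, [itex]ds^2 = dr^2 + r^2 d\theta^2[/itex], via the standard formula [itex]\Gamma^{a}_{bc} = \frac{1}{2} g^{ad}(\partial_b g_{dc} + \partial_c g_{db} - \partial_d g_{bc})[/itex].

[code]
# Hedged cross-check: Christoffel symbols of the flat plane in polar coordinates,
# computed from the metric rather than from the basis vectors.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()
n = 2

# Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
def christoffel(a, b, c):
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) +
                                         sp.diff(g[d, b], x[c]) -
                                         sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)

print("Gamma^theta_{theta r} =", christoffel(1, 1, 0))   # 1/r
print("Gamma^theta_{r theta} =", christoffel(1, 0, 1))   # 1/r
print("Gamma^r_{theta theta} =", christoffel(0, 1, 1))   # -r
print("Gamma^r_{r r}         =", christoffel(0, 0, 0))   # 0
[/code]

These match the values obtained above from differentiating the basis vectors.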
 
  • #101
stevendaryl,

One of the components of the Riemann tensor in Schwarzschild solution is

[itex]R^{r}_{\theta \theta r}{} = \dfrac{M}{R}[/itex]

What does this component mean in a real sense? Susskind said that, here, the two [itex] \theta \theta [/itex] indices (which are downstairs) have something to do with two vectors which define a plane at some point in space. What do the other two r indices represent? Basically, the Riemann tensor takes four vectors as input and outputs the curvature value, right?

Thank you.

Is this a true picture of basis vectors? http://www.springerimages.com/Images/RSS/1-10.1007_978-1-4614-0706-5_6-0
 
  • #102
GRstudent said:
[itex]R^{r}_{\theta \theta r}{} = \dfrac{M}{R}[/itex]

What does this component mean in a real sense?
I am not completely certain that I have this right, but I believe that it means that if you move a [itex]dr[/itex] vector around a closed differential [itex]d\theta[/itex], [itex]d\theta[/itex] loop then it will change by a differential amount M/R in the [itex]dr[/itex] direction.
 
  • #103
^
This makes some sense.
 
  • #104
DaleSpam said:
I am not completely certain that I have this right, but I believe that it means that if you move a [itex]dr[/itex] vector around a closed differential [itex]d\theta[/itex], [itex]d\theta[/itex] loop then it will change by a differential amount M/R in the [itex]dr[/itex] direction.

I'm not sure about the convention for the order of the indices, but if you have a loop (parallelogram) in which the two sides are parallel (both [itex]d\theta[/itex]), then the loop has area 0, and so the parallel transport gives 0.

I think that the correct interpretation is this: Make a parallelogram with one side equal to [itex]dr\ e_r[/itex] and another side equal to [itex]d\theta \ e_\theta[/itex]. March the vector [itex]V = e_\theta[/itex] around the parallelogram to get a new vector [itex]V'[/itex] that will be slightly different from [itex]e_\theta[/itex]. Then letting [itex]\delta V = V' - V[/itex],

[itex]\delta V^r = R^r_{\theta \theta r} dr d\theta[/itex]
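To see what that Schwarzschild component actually is, here is a hedged SymPy sketch (my own check, in units with [itex]G = c = 1[/itex] and with the index convention of the [itex]R^{\alpha}_{\beta\mu\nu}[/itex] formula quoted earlier in the thread). In these conventions it comes out to [itex]M/r[/itex], consistent with the value quoted above if the capital [itex]R[/itex] there is read as the radial coordinate [itex]r[/itex].

[code]
# Hedged sketch: compute R^r_{theta theta r} for the Schwarzschild metric
# (G = c = 1, signature -+++), using the same Gamma and Riemann formulas as above.
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # Schwarzschild metric
ginv = g.inv()
n = 4

def christoffel(a, b, c):
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) +
                                         sp.diff(g[d, b], x[c]) -
                                         sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)

Gamma = [[[christoffel(a, b, c) for c in range(n)] for b in range(n)]
         for a in range(n)]

def riemann(a, b, mu, nu):
    expr = sp.diff(Gamma[a][b][nu], x[mu]) - sp.diff(Gamma[a][b][mu], x[nu])
    expr += sum(Gamma[a][s][mu] * Gamma[s][b][nu] -
                Gamma[a][s][nu] * Gamma[s][b][mu] for s in range(n))
    return sp.simplify(expr)

# coordinate order: 0 = t, 1 = r, 2 = theta, 3 = phi
print("R^r_{theta theta r} =", riemann(1, 2, 2, 1))   # -> M/r
[/code]

Per the interpretation above, [itex]M/r[/itex] is then the change [itex]\delta V^r[/itex] per unit [itex]dr\, d\theta[/itex] of the parallelogram.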
 