Help deriving the Geodesic equation

  • #1
Hamiltonian
I was following David Tong's notes on GR. Right after deriving the Euler-Lagrange equation, he jumps into writing down the Lagrangian of a free particle and applying the EL equation to it. He brings in curved spaces by specifying the infinitesimal distance between any two points, ##x^i## and ##x^i + dx^i##, as the line element ##ds^2 = g_{ij}(x)dx^i dx^j##,
and he writes the Lagrangian for the free particle as:
$$\mathcal{L} = \frac{1}{2}mg_{ij}(x)\dot{x^j}\dot{x^i}$$
I don't understand why he has introduced the metric tensor here. He doesn't really explain how he has written the above equations, and I feel a bit lost.
His handouts state clearly that you don't need previous experience with GR to follow them, so am I missing something obvious?

I also don't understand how he takes the derivatives of the Lagrangian and then puts everything together to get the geodesic equation.
[Attached screenshot of the relevant derivation from the notes]

 

Answers and Replies

  • #2
MathematicalPhysicist
What is equation (1.2)? Or else, please give a link to the specific handout.
 
  • #4
MathematicalPhysicist
Ah yes, sorry for being thick.

Anyway, if you interchange the dummy indices ##j\leftrightarrow k##, you get ##\frac{\partial g_{ik}}{\partial x^j}\dot{x}^j\dot{x}^k=\frac{\partial g_{ij}}{\partial x^k}\dot{x}^k\dot{x}^j##, since ##\dot{x}^k\dot{x}^j=\dot{x}^j\dot{x}^k##. (It would be interesting if these coordinates were anti-commuting, like in a Grassmann algebra.)
 
  • #5
MathematicalPhysicist
As for the derivatives, use the fact that ##\frac{\partial \dot{x}^k}{\partial \dot{x}^j}=\delta_{jk}## (just as ##\frac{\partial x^k}{\partial x^j}=\delta_{jk}##), where this delta is the Kronecker delta symbol.
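To see that fact in action, here is a tiny worked example (my own toy illustration, not from the notes or the thread): write out the kinetic term for just two coordinates and differentiate term by term, letting the delta kill every term that doesn't contain the velocity you are differentiating with respect to,
$$\frac{\partial}{\partial \dot{x}^1}\left(g_{11}\dot{x}^1\dot{x}^1 + g_{12}\dot{x}^1\dot{x}^2 + g_{21}\dot{x}^2\dot{x}^1 + g_{22}\dot{x}^2\dot{x}^2\right) = 2g_{11}\dot{x}^1 + g_{12}\dot{x}^2 + g_{21}\dot{x}^2 = 2g_{1j}\dot{x}^j,$$
where the last step uses ##g_{12}=g_{21}##. This is exactly the ##k=1## case of ##\frac{\partial}{\partial \dot{x}^k}\left(g_{ij}\dot{x}^i\dot{x}^j\right) = 2g_{kj}\dot{x}^j##, which is also why the factor of ##\frac{1}{2}## sits in front of the Lagrangian.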
 
  • #6
Hamiltonian
Actually, I didn't understand the step just before the index interchange:

$$\frac{\partial \mathcal{L}}{\partial \dot{x}^k} = \frac{\partial}{\partial \dot{x}^k}\left( \frac{1}{2}mg_{ij}(x)\dot{x}^i\dot{x}^j\right) = mg_{ik}\dot{x}^i$$

I mean, I can see how ##\frac{\partial}{\partial \dot{x}^i}\left(\frac{1}{2}mg_{ij}(\dot{x}^i)^2\right) = mg_{ij}\dot{x}^i## would work,
but I don't see why ##\mathcal{L}## has been written in terms of ##\dot{x}^i\dot{x}^j## instead of just ##(\dot{x}^i)^2##. This goes back to my first question: how was ##\mathcal{L}## written in terms of the metric tensor and ##\dot{x}^i\dot{x}^j## in the first place?
 
  • #7
Ibix
In Cartesian coordinates on a Euclidean plane, the length-squared of a vector is the sum of the squares of the components, right? Somewhat sloppily, that's ##V^iV^i##.

That doesn't work in a non-Cartesian basis, though - try re-writing a Cartesian vector in polar coordinates and see. But the metric tensor, ##g_{ij}##, deals with that. The length-squared of a vector in an arbitrary basis is ##g_{ij}V^iV^j##. And that's why ##\left(\dot{x}^i\right)^2## is written explicitly as ##g_{ij}\dot{x}^i\dot{x}^j##.

Going back to the Cartesian case, it's in fact still true that the length squared of a vector is ##g_{ij}V^iV^j##. It's just that the metric of Euclidean space expressed in Cartesian coordinates is ##\delta_{ij}##, so you can get away with pretending it's not there.
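To make the polar-coordinates suggestion concrete, here is a short worked example (mine, not from the thread): take a vector in the plane with components ##(V^r, V^\theta)## in the polar coordinate basis. Converting to Cartesian components,
$$V^x = V^r\cos\theta - rV^\theta\sin\theta, \qquad V^y = V^r\sin\theta + rV^\theta\cos\theta,$$
and summing the squares gives
$$(V^x)^2 + (V^y)^2 = (V^r)^2 + r^2(V^\theta)^2 \neq (V^r)^2 + (V^\theta)^2 .$$
So the naive sum of squares of the polar components is wrong, but ##g_{ij}V^iV^j## with ##g_{rr}=1##, ##g_{\theta\theta}=r^2##, ##g_{r\theta}=0## (i.e. the polar metric ##\mathrm{diag}(1,r^2)##) gives the right answer.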
 
  • #8
Nugatory
I don't see why the ##\mathcal{L}## has been written in terms of ##\dot{x^j}\dot{x^i}## instead of just ##(\dot{x^i})^2##, which goes back to my first question how was the ##\mathcal{L}## written in the first place in terms of the Metric tensor and ##\dot{x^i}\dot{x^j}##?
##g_{ij}\dot{x}^j\dot{x}^i## is the magnitude squared of the velocity, ##\vec{v}\cdot\vec{v} = |v|^2##. The more familiar ##(\dot{x}^i)^2## (summed over ##i##) is the special case in which the metric is the identity: diagonal components equal to one and off-diagonal components zero.
 
  • #9
Hamiltonian
This makes sense, but I am still a bit confused as to why the length squared of a vector in an arbitrary basis is ##g_{ij}V^iV^j##.
The notes also start by mentioning (out of left field) that the most general form of the line element (the infinitesimal distance between any two points, ##x^i## and ##x^i + dx^i##) is:
$$ds^2 = g_{ij}(x)dx^i dx^j$$
I don't understand where they get this from.
 
  • #10
ergospherical
I think most of your confusion is just unfamiliarity with index notation. Let me fill in the gaps (using slightly different index labels, and dropping the overall factor of ##m##, which cancels from the Euler-Lagrange equations anyway). ##L = \frac{1}{2}g_{ij}(x) \dot{x}^i \dot{x}^j##\begin{align*}
\dfrac{\partial L}{\partial x^k} = \dfrac{1}{2} \dfrac{\partial g_{ij}}{\partial x^k} \dot{x}^i \dot{x}^j
\end{align*}Note ##\partial \dot{x}^i / \partial \dot{x}^j = \delta^i_j##, so \begin{align*}
\frac{\partial L}{\partial \dot{x}^k} = \frac{1}{2} g_{ij}(x) (\dot{x}^i \delta^j_k + \delta^i_k \dot{x}^j )
\end{align*}##i## and ##j## are dummy indices (summed over), so ##g_{ij}(x) \delta^i_k \dot{x}^j = g_{ji}(x) \delta^j_k \dot{x}^i##. Also ##g_{ij}(x) = g_{ji}(x)## is symmetric, so ##g_{ji}(x) \delta^j_k \dot{x}^i = g_{ij}(x) \delta^j_k \dot{x}^i##. Then\begin{align*}
\frac{\partial L}{\partial \dot{x}^k} &= g_{ij}(x) \delta^j_k \dot{x}^i = g_{ik}(x) \dot{x}^i \\
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}^k} &= \dot{g}_{ik}(x)\dot{x}^i + g_{ik}(x) \ddot{x}^i = \frac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j + g_{ik}(x) \ddot{x}^i
\end{align*}Euler-Lagrange:\begin{align*}
\dfrac{1}{2} \dfrac{\partial g_{ij}}{\partial x^k} \dot{x}^i \dot{x}^j &= \frac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j + g_{ik}(x) \ddot{x}^i \\
\dfrac{1}{2} \dfrac{\partial g_{ij}}{\partial x^k} \dot{x}^i \dot{x}^j - \frac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j &= g_{ik}(x) \ddot{x}^i
\end{align*}again by symmetry, \begin{align*}
\dfrac{\partial g_{ik}}{\partial x^j} &= \dfrac{1}{2} \dfrac{\partial g_{ik}}{\partial x^j} + \dfrac{1}{2} \dfrac{\partial g_{ki}}{\partial x^j} \\
\dfrac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j &= \dfrac{1}{2} \dfrac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j + \dfrac{1}{2} \dfrac{\partial g_{ki}}{\partial x^j} \dot{x}^i \dot{x}^j \\
&= \dfrac{1}{2} \dfrac{\partial g_{ik}}{\partial x^j} \dot{x}^i \dot{x}^j + \dfrac{1}{2} \dfrac{\partial g_{kj}}{\partial x^i} \dot{x}^i \dot{x}^j
\end{align*}where in the last line we re-labelled the dummy indices ##\dfrac{\partial g_{ki}}{\partial x^j} \dot{x}^i \dot{x}^j = \dfrac{\partial g_{kj}}{\partial x^i} \dot{x}^j \dot{x}^i = \dfrac{\partial g_{kj}}{\partial x^i} \dot{x}^i \dot{x}^j##.
 
  • #11
ergospherical
There is a non-degenerate symmetric bilinear structure called a metric which characterises a scalar product. Recall a simple manifold ##\mathbf{R}^3## equipped with a Euclidean metric (i.e. characterising the usual dot product) to help visualise, although everything generalises. If you have two close points separated by a vector ##d\mathbf{l} = dx^1 \mathbf{e}_1 + dx^2 \mathbf{e}_2 + dx^3 \mathbf{e}_3 = dx^i \mathbf{e}_i## (with the ##x^i## arbitrary, not necessarily orthogonal, coordinates), then ##dl^2 = d\mathbf{l} \cdot d\mathbf{l} = g(d\mathbf{l}, d\mathbf{l}) = g(dx^i \mathbf{e}_i, dx^j \mathbf{e}_j) = dx^i dx^j g(\mathbf{e}_i, \mathbf{e}_j)## by virtue of the bilinearity, and ##g_{ij} \equiv g(\mathbf{e}_i, \mathbf{e}_j)## is the definition of tensor components.
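If it helps to see ##g_{ij} \equiv g(\mathbf{e}_i, \mathbf{e}_j)## computed explicitly, here is a small SymPy sketch (my own illustration, not from the thread) for plane polar coordinates: it builds the coordinate basis vectors ##\mathbf{e}_i = \partial\mathbf{x}/\partial x^i## and takes their dot products, reproducing ##ds^2 = dr^2 + r^2\,d\theta^2##.
```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Position in the plane, written in Cartesian components but
# parametrised by the polar coordinates (r, theta).
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

# Coordinate basis vectors e_i = d(pos)/dx^i.
basis = [pos.diff(r), pos.diff(th)]

# Metric components g_ij = e_i . e_j (the Euclidean dot product
# plays the role of the bilinear form g here).
g = sp.Matrix(2, 2, lambda i, j: sp.simplify(basis[i].dot(basis[j])))

print(g)  # Matrix([[1, 0], [0, r**2]])  ->  ds^2 = dr^2 + r^2 dtheta^2
```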
 
  • #12
PeroK
If these notes are too advanced, you might try the MIT lectures on GR by Professor Hughes.

 
  • #13
Hamiltonian
This was really helpful! It filled in all the gaps between the steps given in the notes. My only question is: why weren't all these intermediate steps included in the notes in the first place? Could it be that we are expected to fill in the gaps ourselves, or are these notes meant for someone with some prior experience with GR rather than a complete novice (such as myself)? 😭

Edit: I guess I'll watch the MIT lectures!
 
  • #14
PeroK
The notes are designed for graduate students. Generally, in both mathematics and physics, as the material gets more advanced you are expected to know more, be able to fill in more gaps yourself and be able to understand material without necessarily filling in every gap.
 
  • #15
Ibix
Regarding why the length squared of a vector in an arbitrary basis is ##g_{ij}V^iV^j##: the ##V^i## are only the components of a vector. The vector itself is those components multiplied by the basis vectors. Presumably you've seen Cartesian three-vectors written as both ##(v^x,v^y,v^z)## and ##v^x\hat{\vec{x}}+v^y\hat{\vec{y}}+v^z\hat{\vec{z}}##, right? That actually works fine for non-Cartesian bases as well. So the square of a vector is actually$$\left(v^x\hat{\vec{x}}+v^y\hat{\vec{y}}+v^z\hat{\vec{z}}\right)^2$$OK so far? Now just expand the brackets.$$\begin{eqnarray*}
&&\left(v^x\hat{\vec{x}}+v^y\hat{\vec{y}}+v^z\hat{\vec{z}}\right)^2\\
&=&\left(v^x\hat{\vec{x}}\right)^2+\left(v^y\hat{\vec{y}}\right)^2+\left(v^z\hat{\vec{z}}\right)^2\\
&&+\left(v^xv^y\hat{\vec{x}}\hat{\vec{y}}\right)+\left(v^yv^x\hat{\vec{y}}\hat{\vec{x}}\right)\\
&&+\left(v^xv^z\hat{\vec{x}}\hat{\vec{z}}\right)+\left(v^zv^x\hat{\vec{z}}\hat{\vec{x}}\right)\\
&&+\left(v^yv^z\hat{\vec{y}}\hat{\vec{z}}\right)+\left(v^zv^y\hat{\vec{z}}\hat{\vec{y}}\right)
\end{eqnarray*}$$Note that I haven't defined what I mean by multiplying vectors in this context. Leaving that aside for a moment, you can verify that this can be written as a matrix expression:$$
\left(\begin{array}{ccc}v^x&v^y&v^z\end{array}\right)
\left(\begin{array}{ccc}
\hat{\vec{x}}\hat{\vec{x}}&\hat{\vec{x}}\hat{\vec{y}}&\hat{\vec{x}}\hat{\vec{z}}\\
\hat{\vec{y}}\hat{\vec{x}}&\hat{\vec{y}}\hat{\vec{y}}&\hat{\vec{y}}\hat{\vec{z}}\\
\hat{\vec{z}}\hat{\vec{x}}&\hat{\vec{z}}\hat{\vec{y}}&\hat{\vec{z}}\hat{\vec{z}}
\end{array}\right)
\left(\begin{array}{c}v^x\\v^y\\v^z\end{array}\right)$$and that if we identify the matrix in the middle with ##g##, we've got ##g_{ij}V^iV^j##.

Finally, I just need to work out what the elements of ##g## are. If I'd done all this with two vectors ##u## and ##v## and insisted that I was talking about the inner product (you can generalise what I wrote easily enough) then you'd see that the components are just the inner products of the basis vectors. So, essentially, the metric is a statement of what we want the inner products of the basis vectors to be, and it turns out to be useful for calculating inner products of arbitrary vectors as well.

When we are working in Cartesian coordinates in Euclidean space, we want the inner products of our basis vectors to be ##\delta_{ij}##, and so ##g## is the identity matrix and we can pretend it's not there. In Minkowski spacetime, we want the squared length of the timelike basis vector to have the opposite sign to the rest, but all the basis vectors to be mutually orthogonal - so the metric is ##\mathrm{diag}(1,-1,-1,-1)##. In general, you specify the symmetries of spacetime, set up the stress-energy tensor, and solve the Einstein field equations to get ##g##.
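And once ##g## is known in some coordinate system, turning the crank of the derivation in post #10 is purely mechanical. As a sketch (again mine, not from the thread), here is SymPy computing the Christoffel symbols ##\Gamma^l{}_{ij} = \frac{1}{2}g^{lk}\left(\partial_i g_{jk} + \partial_j g_{ik} - \partial_k g_{ij}\right)## for the plane-polar metric ##\mathrm{diag}(1, r^2)##; these are exactly the coefficients that appear when the Euler-Lagrange equation is rearranged into the geodesic equation.
```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

# Euclidean plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2.
g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()
n = len(coords)

def christoffel(l, i, j):
    """Gamma^l_{ij} = 1/2 g^{lk} (d_i g_{jk} + d_j g_{ik} - d_k g_{ij})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[l, k] * (
            sp.diff(g[j, k], coords[i])
            + sp.diff(g[i, k], coords[j])
            - sp.diff(g[i, j], coords[k]))
        for k in range(n)))

# Print the nonzero symbols; for this metric they should be
# Gamma^r_{theta theta} = -r and Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r.
for l in range(n):
    for i in range(n):
        for j in range(n):
            G = christoffel(l, i, j)
            if G != 0:
                print(f"Gamma^{coords[l]}_({coords[i]},{coords[j]}) = {G}")
```
For this metric the resulting geodesic equations, ##\ddot{r} - r\dot{\theta}^2 = 0## and ##\ddot{\theta} + \frac{2}{r}\dot{r}\dot{\theta} = 0##, are just straight lines in the plane written in polar coordinates.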
 
  • #16
MathematicalPhysicist
Usually these notes are meant for students taking this course.
As for self-study, I warn you that learning things by yourself from books etc. is quite daunting, and you can be led astray in the paths you take.

Good Luck in your quest!
 
  • #18
MathematicalPhysicist
That's what this place is for, though.
Yes, I am still waiting for a QCD expert to chime in on my post in the HEP forum.
 
  • #19
ergospherical
Do keep in mind that these are the notes for a Part III (Masters) course, whilst suffix/index notation is introduced in the first year of undergrad and is assumed as a prerequisite.
 