Help with Weinberg p. 72 -- time dt for a photon to travel a distance d⃗x

In summary, Weinberg's equation for the time required for light to travel a distance is given as $$0 = g_{00}dt^2 + 2g_{i0}dx^i dt + g_{ij}dx^i dx^j.$$
  • #1
Kostik
Hi all, and thanks in advance. I am an old guy learning GR for fun. Reading Weinberg's "Gravitation and Cosmology". PhD in math 1998, so I read all books like I read math books: every character, every word, every line, every page extremely carefully.

I am stuck on the stupidest thing. On p.72, he writes out a quadratic equation for the time ##dt## for a photon to travel a distance ##d\vec{x}##:
$$0 = g_{00}dt^2 + 2g_{i0}dx^i dt + g_{ij}dx^i dx^j$$
(##i=1,2,3##), which follows immediately from (3.2.9) or (3.2.6). He then gives the solution in (3.2.10) using the quadratic formula.

What stumps me is that he has used the minus (-) square root instead of the (+) square root. How does he know to do that? If you test it using the simplest coordinate transformation ##x^α=ξ^α## and hence the metric ##g_{μν}=η_{μν}##, then of course he IS right, because ##g_{00} = -1##. But the ##g_{μν}## could be anything, so how does he justify taking the negative square root?
 
  • #2
Please write out the equations you are referring to. Not everybody has a copy of Weinberg easily accessible.
 
  • #3
Orodruin said:
Please write out the equations you are referring to. Not everybody has a copy of Weinberg easily accessible.
Sure. (3.2.6) is simply the statement of proper time ##dτ^2##:
$$dτ^2 = -g_{μν}dx^μdx^ν.$$
From this, setting ##dτ = 0## for a photon and splitting the sum over ##μ,ν## into time (##0##) and space (##i##) parts, follows the equation for the time ##dt## for a photon to travel a distance ##d\vec{x}##:
$$0 = g_{00}dt^2 + 2g_{i0}dx^i dt + g_{ij}dx^i dx^j$$
Weinberg says, "the solution is":
$$dt = \frac1 {g_{00}} \left[ -g_{i0}dx^i - \sqrt { (g_{i0}g_{j0} - g_{ij}g_{00}) dx^i dx^j } \right]$$
What stumps me is how he justifies taking the negative square root, given that
$$g_{μν} = \frac {∂ξ^α} {∂x^μ} \frac {∂ξ^β} {∂x^ν} η_{αβ}$$ can be anything. Note Weinberg specifically remarks on the previous page that the coordinate system ##x^μ## can be "a Cartesian coordinate system at rest in the laboratory, but also may be curvilinear, accelerated, rotating, or what we will."
 
Last edited:
  • #4
Solving the eqn in @Kostik's post #1 as a quadratic in ##dt##, we get: $$ dt ~=~ \frac{-2g_{i0}dx^i \pm \sqrt{(4g_{i0}g_{j0} - 4 g_{00} g_{ij}) dx^i dx^j}}{2 g_{00}} ~=~ \frac{-g_{i0}dx^i \pm \sqrt{(g_{i0}g_{j0} - g_{00} g_{ij}) dx^i dx^j}}{g_{00}} ~,~~~~~~ [3.2.10]$$ IIUC, the "##\pm##" merely expresses the 2 possibilities that light could travel forward in time or backward in time, and still cover a spatial displacement given by ##dx^i##.

The context of this is that Weinberg is considering the "time required for light to travel along any path". So he's arbitrarily picking 1 of the 2 possible directions along the path.
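Just to make the two roots concrete, here is a quick numerical sanity check (the metric and displacement below are made-up numbers, chosen only so that ##g_{00}<0## and ##g_{i0}\neq 0##; they are not from Weinberg). The minus-sign root is the one Weinberg's (3.2.10) reproduces, and for this metric it is the positive (future-directed) one:

```python
import numpy as np

# Made-up metric at a point with signature (-,+,+,+) and a nonzero g_{i0},
# purely for illustration.
g = np.array([[-1.0, 0.3, 0.0, 0.0],
              [ 0.3, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])
dx = np.array([0.1, -0.2, 0.05])      # an arbitrary spatial displacement dx^i

g00, gi0, gij = g[0, 0], g[1:, 0], g[1:, 1:]

# 0 = g00 dt^2 + 2 g_{i0} dx^i dt + g_{ij} dx^i dx^j
roots = np.roots([g00, 2 * gi0 @ dx, dx @ gij @ dx])
print(sorted(roots))                  # one negative root, one positive root

# Weinberg's (3.2.10), i.e. the minus sign in front of the square root:
disc = np.outer(gi0, gi0) - g00 * gij
dt = (-gi0 @ dx - np.sqrt(dx @ disc @ dx)) / g00
print(dt)                             # matches the positive root
```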
 
  • #5
So what is the point of this exercise? The coordinate time generally does not have the interpretation of a time difference, in particular not for a geodesic with changing spatial coordinates. Which coordinate is called time and which are called spatial coordinates is an arbitrary convention.
 
  • #6
Orodruin said:
So what is the point of this exercise?
The section is titled "Gravitational Forces". He's basically just working through an exercise to show that "all effects of gravitation are comprised in ##\Gamma^\lambda_{~\mu\nu}## and ##g_{\mu\nu}##." (See para at end of that section, on p73.) In the next section, he goes on to develop the relation between metric and connection...

However, the para in question begins with "Incidentally..." so he could also be taking a somewhat self-indulgent Weinberg-esque detour. o_O
 
  • #7
strangerep said:
Solving the eqn in @Kostik's post #1 as a quadratic in ##dt##, we get: $$ dt ~=~ \frac{-2g_{i0}dx^i \pm \sqrt{(4g_{i0}g_{j0} - 4 g_{00} g_{ij}) dx^i dx^j}}{2 g_{00}} ~=~ \frac{-g_{i0}dx^i \pm \sqrt{(g_{i0}g_{j0} - g_{00} g_{ij}) dx^i dx^j}}{g_{00}} ~,~~~~~~ [3.2.10]$$ IIUC, the "##\pm##" merely expresses the 2 possibilities that light could travel forward in time or backward in time, and still cover a spatial displacement given by ##dx^i##.

The context of this is that Weinberg is considering the "time required for light to travel along any path". So he's arbitrarily picking 1 of the 2 possible directions along the path.
strangerep: the two values of ##dt## will NOT in general be equal in magnitude and opposite in sign. In fact, they could both be positive. All I can see is that, if I choose the simplest metric
$$g=\begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$$
then it's true that you need to take the minus sign to get ##dt = \sqrt {d\vec x^2}##.
I'm guessing there must be some constraints among the components of ##g## that require the minus sign. Not seeing it.
 
  • #8
Kostik said:
strangerep; in this solution, the two values of ##dt## won't be equal in magnitude and opposite in sign.
If the metric has ##g_{i0}=0##, then they will be. But a nonzero ##g_{i0}## means there's some counter-intuitive interplay occurring between space and time. Nevertheless, there will still be 2 directions along any path.

BTW, I will say that I've never found any of Weinberg's textbooks to be good as introductions. They tend to be better suited as more advanced texts, where the reader already knows something about the subject.

You might consider switching to Wald's textbook on "General Relativity".
 
  • #9
strangerep said:
The section is titled "Gravitational Forces". He's basically just working through an exercise to show that "all effects of gravitation are comprised in ##\Gamma^\lambda_{~\mu\nu}## and ##g_{\mu\nu}##." (See para at end of that section, on p73.) In the next section, he goes on to develop the relation between metric and connection...

However, the para in question begins with "Incidentally..." so he could also be taking a somewhat self-indulgent Weinberg-esque detour. o_O
I still do not see the point of computing the differential of an arbitrarily chosen timelike coordinate. It simply has no physical meaning.
 
  • #10
Orodruin said:
I still do not see the point of computing the differential of an arbitrarily chosen timelike coordinate. It simply has no physical meaning.
The section is showing (among other things) that metric and connection (in an arbitrary coord system) provide enough information to determine locally inertial coords in the neighborhood of a point.

But perhaps this should wait until you have a copy of Weinberg at hand? Otherwise, I'll end up re-typing the whole section. :oldruck:
 
  • #11
strangerep said:
If the metric has ##g_{i0}=0##, then they will be. But a nonzero ##g_{i0}## means there's some counter-intuitive interplay occurring between space and time. Nevertheless, there will still be 2 directions along any path.

BTW, I will say that I've never found any of Weiberg's textbooks to be good as introductions. They tend to be better suited as more advanced texts, where the reader already knows something about the subject.

You might consider switching to Wald's textbook on "General Relativity".
Right. But ##g_{i0}=0## is not true in general. And even when it is, you would choose the minus sign if ##g_{00} < 0## and the plus sign if ##g_{00} > 0##. I think I'm missing something.

I liked Weinberg's summary of S.R. in Chapter 2; but admittedly, I knew a little about it already. The derivation of the Lorentz transformations as the only ones that preserve proper time (hence the constancy of the speed of light) is very simple and elegant, and so much better than pictures of trains and flashbulbs. I would like to try to stick it out with this book if I can.
 
  • #12
Kostik said:
Right. But ##g_{i0}=0## is not true in general. And even when it is, you would choose the minus sign if ##g_{00} < 0## and the plus sign if ##g_{00} > 0##. I think I'm missing something.
Your choice would correspond to a choice of which direction is "forward in time".

Maybe also try to think of it in terms of finding locally inertial coords near a point, in which the metric is (approximately) diagonal.

Tbh, I wouldn't get too hung up on this point. It's just saying that given a quadratic expression in ##dt##, you can integrate along any given path in 2 possible directions.
 
  • #13
strangerep said:
The section is showing (among other things) that metric and connection (in an arbitrary coord system) provide enough information to determine locally inertial coords in the neighborhood of a point.

But perhaps this should wait until you have a copy of Weinberg at hand? Otherwise, I'll end up re-typing the whole section. :oldruck:
It is not what the section says I am interested in. It is the motivation for calling this arbitrary coordinate differential "time" in the first place. The construction of a locally inertial coordinate system from the metric is rather trivial. Just take a local orthonormal basis and base the coordinate system on the geodesics.
 
  • #14
Orodruin said:
It is not what the section says I am interested in.
In that case, it sounds like a thread fork is in order (since this thread was about something in that section of Weinberg).

It is the motivation for calling this arbitrary coordinate differential "time" in the first place.
An excellent subject for discussion in a different thread: "What is time?" :oldbiggrin:
 
  • #15
strangerep said:
In that case, it sounds like a thread fork is in order (since this thread was about something in that section of Weinberg).

An excellent subject for discussion in a different thread: "What is time?" :oldbiggrin:
You still misunderstand me - I know the answer to this question already (and you have been around long enough to know that I do). I am questioning the validity of calling this arbitrary coordinate differential dt "time" without additional qualifiers. Now I can guess that the t is going to be the time coordinate of Weinberg's constructed local inertial frame but really I have no way of knowing for sure. If so, please just say so instead of going off on a tangent.

Edit: So looking at that section, t is just an arbitrary timelike coordinate in an arbitrary coordinate system. It does not necessarily have the interpretation of an actual physical time. This answers my question - it is just a coordinate time.
 
Last edited:
  • #16
Kostik said:
...you would choose the minus sign if ##g_{00} < 0## and the plus sign if ##g_{00} > 0##.
The sign of ##g_{00}## determines whether ##x^0## is timelike or spacelike. I'm assuming it has already been specified that ##t## is timelike, therefore ##g_{00} < 0##.

So he's choosing the solution that gives the maximum value of ##dt##, on the further assumption that coordinate ##t## increases with time.
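To spell that out (assuming, as Weinberg does not state explicitly here, that ##g_{00}<0## and that a purely spatial displacement is spacelike, ##g_{ij}dx^i dx^j > 0##): the two roots differ by $$dt_- - dt_+ = \frac{-2\sqrt{(g_{i0}g_{j0}-g_{00}g_{ij})dx^i dx^j}}{g_{00}} = \frac{2\sqrt{(g_{i0}g_{j0}-g_{00}g_{ij})dx^i dx^j}}{|g_{00}|} \geq 0,$$ so the minus-sign root is always the later of the two. It is also positive: ##dt_- > 0## is equivalent to ##\sqrt{(g_{i0}g_{j0}-g_{00}g_{ij})dx^i dx^j} > -g_{i0}dx^i##, and when the right-hand side is positive, squaring reduces this to ##-g_{00}\,g_{ij}dx^i dx^j > 0##, which holds under the stated assumptions.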
 
  • #17
Orodruin said:
[...] If so, please just say so instead of going off on a tangent.
Tbh, I perceived that you were the one drifting off-topic, and I was puzzled since I know that your GR knowledge exceeds mine by a considerable margin. I will leave it at that.
 
  • #18
While I don't have a copy of Weinberg, from the quoted sections I see nothing forcing the coordinate called ##t## to be timelike. That would depend on the sign of ##g_{00}##. The derivation would not hold for a lightlike ##t## coordinate (##g_{00} = 0##), but it would for ##t## being spacelike, in which case you have solved for the displacement along a funny spatial coordinate that light needs, given the other coordinate differentials. Note that those other coordinates could all be lightlike; the derivation only breaks down if the zeroth coordinate happens to be lightlike.

[edit: I see dr. Greg just made part of this point.]
 
  • #19
PAllen said:
I see nothing forcing the coordinate called ##t## to be timelike. That would depend on the sign of ##g_{00}##.
This is true. I just assumed this since Weinberg calls it "time". If it is not timelike, calling it that makes even less sense.

Weinberg mentions that the coordinates could be "Cartesian coordinates in the lab frame" but generally states that he accepts any coordinate system. He might be implicitly expecting the reader to take a system with a timelike time coordinate. I find "Cartesian coordinates in the lab frame" quite imprecise anyway.
 
  • #20
strangerep, thanks. I won't get hung up on this. Can I ask another question from the section that follows? This is quite interesting where he shows that in a neighborhood of a point ##X##, you can find the correct locally inertial coordinates ##\xi^\alpha## (up to a Lorentz transformation) by expanding in a Taylor series - eqn (3.2.12). What I don't understand is why eqn (3.2.14) "determines the ##b^\alpha_{~\lambda}## up to a Lorentz transformation". Certainly eqn (3.2.14) remains correct if you transform the ##b^\alpha_{~\lambda}## by a Lorentz transformation. But (3.2.14) is 10 quadratic equations in 16 variables (10 because ##g_{\mu\nu}## is symmetric). How do we know that (3.2.14) COMPLETELY determines the ##b^\alpha_{~\lambda}## up to a Lorentz transformation?
 
  • #21
Kostik said:
What I don't understand is why eqn (3.2.14) "determines the ##b^\alpha_{~\lambda}## up to a Lorentz transformation".
Was that a typo? In my copy, he says that eq(3.2.13) determines the ##b^\alpha_{~\lambda}## up to a Lorentz transformation.

I'm not sure if this will help, but a Lorentz transformation has 6 degrees of freedom (being an exponentiation involving 3 rotation generators and 3 boost generators). Perhaps this resolves the apparent discrepancy of "6" that you've noticed?
 
Last edited:
  • #22
strangerep: yes the book says (3.2.13) but I believe THAT is a typo, it should be (3.2.14). I'm sure the missing six is due to the arbitrary "up to a Lorentz transformation". I just wish I could see how that follows algebraically from (3.2.14). (PS thanks I appreciate your help.)
 
  • #23
It does follow directly from (3.2.14). Just use the fact that the Lorentz transformation is defined as the set of coordinate transformations that leave the Minkowski metric invariant. You will find that if ##b^\alpha_{\ \mu}## satisfies the equation, then so does ##\Lambda^\alpha_{\ \beta} b^\beta_{\ \mu}##, telling you that the equation only determines ##b## up to an arbitrary six-parameter transformation.
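A quick numerical illustration of this (the matrix ##b## below is just made-up numbers with no significance, and the Lorentz transformation is a boost along ##x##):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A made-up b^alpha_mu; via (3.2.14) it defines g_{mu nu} = eta_{ab} b^a_mu b^b_nu
b = np.array([[1.1, 0.2, 0.0, 0.0],
              [0.1, 0.9, 0.3, 0.0],
              [0.0, 0.1, 1.0, 0.2],
              [0.0, 0.0, 0.1, 1.0]])
g = b.T @ eta @ b

# A boost along x with v = 0.6, which satisfies Lambda^T eta Lambda = eta
v = 0.6
gam = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[gam, gam*v, 0.0, 0.0],
                [gam*v, gam, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

print(np.allclose(Lam.T @ eta @ Lam, eta))            # True: Lambda is Lorentz
print(np.allclose((Lam @ b).T @ eta @ (Lam @ b), g))  # True: Lambda b also solves (3.2.14)
```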
 
  • #24
Kostik said:
strangerep: yes the book says (3.2.13) but I believe THAT is a typo, it should be (3.2.14).
After I posted my last message, I began to think the same thing. :olduhh:

I'm sure the missing six is due to the arbitrary "up to a Lorentz transformation". I just wish I could see how that follows algebraically from (3.2.14).
Well, a Lorentz transformation leaves the LHS of (3.2.14) form-invariant, since, e.g., if we have a Lorentz transformation which turns ##b^\alpha_{~\mu}## into ##\Lambda^\alpha_{~\beta} b^\beta_{~\mu}##, then the LHS of (3.2.14) is unchanged, since $$\eta_{\alpha\beta} b^\alpha_{~\mu} b^\beta_{~\nu} ~\to~ \eta_{\alpha\beta} \Lambda^\alpha_{~\rho} b^\rho_{~\mu} \Lambda^\beta_{~\sigma} b^\sigma_{~\nu} ~=~ \eta_{\rho\sigma} b^\rho_{~\mu} b^\sigma_{~\nu} ~,$$ which is the same as the original after a renaming of the dummy summation indices ##\rho, \sigma##.

[Edit: I see Orodruin got in first...]
 
  • #25
Gents: yes, I saw that, it was easy to verify that (3.2.14) remains unchanged if a Lorentz transformation is applied. I wasn't asking whether "up to a Lorentz transformation" follows from (3.2.14)--clearly it does. What isn't clear is why (3.2.14) is enough to *determine* a solution ##b^\beta_{~\mu}##. As I said, ten *quadratic* equations in 16 variables sounds right knowing that the solution is only good up to any Lorentz transformation, but why is it clear that (3.2.14) *is* sufficient to determine ##b^\beta_{~\mu}##?

To put it differently, can you show a constructive algebraic solution that *produces* any ##b^\beta_{~\mu}## (which of course can be multiplied by any Lorentz transformation)?

Eqn. (3.2.14) is necessary, but is it sufficient to determine a single solution to all 16 ##b^\beta_{~\mu}##?

Just for fun, pretend we're in only two dimensions, so ##\{\mu, \nu\} \in \{0, 1\}##. Then writing out (3.2.14) in detail, using ##\eta = diag(-1,1)##:
$$-b^0_{~\mu} b^0_{~\nu} + b^1_{~\mu} b^1_{~\nu} = g_{\mu\nu}, ~~~ \{\mu, \nu\} \in \{0, 1\}$$ and the three equations corresponding to ##\mu, \nu = (0,0), (1,1), (0,1)## are:
$$\begin{align}
-{b^0_0}^2 + {b^1_0}^2 &= g_{00}\\
-{b^0_1}^2 + {b^1_1}^2 &= g_{11}\\
-b^0_0 b^0_1 + b^1_0 b^1_1 &= g_{01}
\end{align}$$
It's not so obvious that there is a unique solution to these equations, defined up to a Lorentz transformation. (Although, again, it *is* clear that if ##b^\alpha_{~\mu}## is a solution, then so is the same under a Lorentz transformation.)

Weinberg starts this discussion in the middle of p. 72 by saying "The values of the metric tensor ##g_{\mu\nu}## and the affine connection ##Γ^\lambda_{\mu\nu}## at a point X in an arbitrary coordinate system ##x^\mu## PROVIDE ENOUGH INFORMATION TO DETERMINE THE LOCALLY INERTIAL COORDINATES ##\xi^\alpha## in a neighborhood of X." *This* is what I am not yet convinced of.

Thanks again for any help.
 
Last edited:
  • #26
The vierbein is always completely determined, because it does not really have 16 independent components.

There aren't 3 equations in your set, there are 4, and the 4th one (which you haven't written) is, upon a simple rearrangement of terms in the product, equal to the 3rd if and only if the RHS is symmetric. Therefore the vierbein definition is consistent: of the apparently 16 independent components of the vierbein matrix, only 10 equations constrain them, so 6 components of the vierbein are algebraically redundant, because the (curved/flat) metric has only 10 independent components. By going to local Lorentz coordinates, you keep the number of independent metric components (10 for the curved metric map to 10 for the flat one) iff the vierbein also has 10 independent components.
Another way to see it: the spin of gravity is 2, and you can track this 2 to the 10 independent components of the curved spacetime metric (easy: consider weak gravity and expand ##g## to first order, ##g = \eta + \lambda h##; ##h## is symmetric, the so-called Pauli-Fierz field). Now linearize the vierbein as well (the "both indices down" linearized vierbein need not be symmetric) and write down the linearized Einstein-Hilbert action in terms of it, after getting rid of the linearized spin connection, thus moving from the Palatini to the 2nd-order formalism. You will discover that the 6 independent components of the antisymmetric part of the linearized vierbein are completely eliminated from the action. Physics "lives" in the 10 independent components of the vierbein. Calculations are on pages 23 to 25 here: https://arxiv.org/abs/0704.2321

Added in proof: assume there are 16 independent vierbein components. Pure gravity has pure spin-2 content. It would mean that, by going to local Lorentz coordinates, you add spin 1 (electromagnetism) and spin 0 (a scalar field) to the theory, thus you'd be adding matter fields where there were none.
 
Last edited:
  • #27
Dexter, as I noted before, there are clearly 10 equations, not 16, because the metric tensor is symmetric. It is still not clear that these equations specify a unique solution up to a Lorentz transformation. I don't understand the rest of your comments.
 
  • #28
The quadratic system of equations is mathematically sufficiently determined (16 nonlinear equations for 16 vierbein components), thus its solution is unique.
Add physics, i.e. assume both metric tensors are symmetric. Your apparent indeterminacy (10 independent equations for 16 components of the vierbein), which would mathematically lead to the loss of a unique solution, starts off from the incorrect premise that all 16 components of the vierbein are algebraically independent of one another. They aren't; only 10 are. Otherwise, a gravitational wave would transport photons and Higgs particles, not only gravitons.
 
Last edited:
  • #29
Dexter: I know there are only 10 independent equations. Absolutely obvious since the metric tensor is symmetric. It is not clear that those 10 quadratic (not linear!) equations determine a unique solution up to a Lorentz transformation. Can you provide a constructive solution?
 
  • #30
Ah, silly me... :doh:

It's just an application of eigendecomposition, i.e.,$$A ~=~ V D V^T ~,$$where ##D## is diagonal (with entries being the eigenvalues of ##A##), and the columns of ##V## are the eigenvectors of a (real, symmetric) matrix ##A##. In our case: $$g ~=~ b \eta b^T ~,$$ and the "constructive" solution for ##b## is obtained by finding eigenvectors of ##g##.
:blushing:
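In code, a rough sketch of that construction (made-up metric values, and assuming ##g## has Lorentzian signature so that exactly one eigenvalue is negative and can be lined up with the ##-1## slot of ##\eta##):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A made-up symmetric g with signature (-,+,+,+), for illustration only
g = np.array([[-1.2, 0.3, 0.1, 0.0],
              [ 0.3, 1.1, 0.2, 0.0],
              [ 0.1, 0.2, 0.9, 0.1],
              [ 0.0, 0.0, 0.1, 1.0]])

# Real-symmetric eigendecomposition: g = V diag(lam) V^T, eigenvalues ascending,
# so the single negative eigenvalue sits first, matching eta's -1.
lam, V = np.linalg.eigh(g)

# One particular solution of g = b eta b^T: absorb sqrt(|lam|) into the eigenvectors
b = V @ np.diag(np.sqrt(np.abs(lam)))
print(np.allclose(b @ eta @ b.T, g))       # True

# ...and it is only determined up to a Lorentz transformation, e.g. a boost along x
v = 0.6
gam = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[gam, gam*v, 0.0, 0.0],
                [gam*v, gam, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])
print(np.allclose((b @ Lam) @ eta @ (b @ Lam).T, g))   # True as well
```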
 
Last edited:
  • #31
Thanks a lot strangerep. The decomposition appears to be ##A = VDV^{-1}##, not ##VDV^T##, so it doesn't quite match up with our equation ##g=b^T\eta b##. Of course, ##A^{-1}=A^T## for orthogonal matrices, but is ##b## orthogonal?*

It is interesting to note that the definition of a Lorentz transformation is ##Λ^T\eta Λ=\eta##, so we can see that when ##g=\eta## then ##b=Λ##, which agrees with the known fact that ##b## is only defined up to a Lorentz transformation.

I'll try to tidy this up ... I think it still isn't clear enough to me ... but it does seem that Weinberg made a little bit of a jump here.

*Edit: Yes. Since g is symmetric, in fact ##g=VDV^T## where D is the diagonal matrix of eigenvalues of g, and V is the matrix of g's [orthogonal] eigenvectors. But our equation is ##g=b^T\eta b##, so ##b=V^T##. And again here it's obvious that b (or V) is valid up to a Lorentz transformation because ##Λ^T\eta Λ=\eta##.

Again many thanks. Your last post really helped me crack this.
 
Last edited:
  • #32
Kostik said:
The decomposition appears to be ##A = VDV^{-1}##, not ##VDV^T##, so it doesn't quite match up with our equation ##g=b^T\eta b##. Of course, ##A^{-1}=A^T## for orthogonal matrices, but is ##b## orthogonal?*
As you note in your edit, ##A = VDV^T## is correct if ##g## is real symmetric. Additional discussion here.

[...] it does seem that Weinberg made a little bit of a jump here.
Yes -- that's an example of what I meant when I said that Weinberg tends to be more suitable as an advanced text. He requires rather more of his readers, and sometimes this isn't obvious -- as in the present case.
 
  • #33
strangerep said:
It's just an application of eigendecomposition, i.e.,$$A ~=~ V D V^T ~,$$where ##D## is diagonal (with entries being the eigenvalues of ##A##), and the columns of ##V## are the eigenvectors of a (real, symmetric) matrix ##A##. In our case: $$g ~=~ b \eta b^T ~,$$ and the "constructive" solution for ##b## is obtained by finding eigenvectors of ##g##.
Note that in general, ##~g ~=~ b \eta b^T ~## is not an eigendecomposition of ##g##. The diagonal elements of ##\eta## are not, in general, the eigenvalues of ##g## and the columns of ##b## are not, in general, eigenvectors of ##g## .

As an example, suppose ##g =
\left( \begin{array}{cc}
7 & 4 \\
4 & 1 \\
\end{array} \right)##.

Then you can show that ##g = b \eta b^T## where ##\eta =
\left( \begin{array}{cc}
-1 &0 \\
0 & 1 \\
\end{array} \right)## and ##b =
\left( \begin{array}{cc}
3 & 4 \\
0 & 1 \\
\end{array} \right)##

But the eigendecomposition of ##g## is ##g = VDV^T ## where ##V =
\left( \begin{array}{cc}
1/\sqrt{5} &2/\sqrt{5} \\
-2/\sqrt{5} & 1/\sqrt{5} \\
\end{array} \right)## and ##D =
\left( \begin{array}{cc}
-1 &0 \\
0 & 9 \\
\end{array} \right)##
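For what it's worth, a quick numerical check of both factorizations above (nothing here beyond the matrices already written out):

```python
import numpy as np

g   = np.array([[7.0, 4.0], [4.0, 1.0]])
eta = np.diag([-1.0, 1.0])
b   = np.array([[3.0, 4.0], [0.0, 1.0]])

print(np.allclose(b @ eta @ b.T, g))              # True: g = b eta b^T

lam, V = np.linalg.eigh(g)                        # eigenvalues in ascending order
print(lam)                                        # [-1.  9.]
print(np.allclose(V @ np.diag(lam) @ V.T, g))     # True: g = V D V^T
# eta's diagonal (-1, 1) is not the eigenvalue list (-1, 9), and the columns
# of b are not eigenvectors of g; the two decompositions are different.
```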
 
Last edited:
  • #34
TSny said:
Note that in general, ##~g ~=~ b \eta b^T ~## is not an eigendecomposition of ##g##. [...]
Heh, I was wondering whether someone would mention that.

In GR, the equivalence principle motivates an assumption that we have a metric field with signature (-,+,+,+) at each point of spacetime. Indeed, Weinberg actually starts from ##\eta## and transforms to ##g## -- see p71. His ##b##'s are then also expressed as coord transformation coefficients -- see eqn(3.2.13).

IOW, we're not really starting from an arbitrary ##g##, but one with the right eigenvalues.
 
Last edited:
  • #35
Strangerep, I agree. Weinberg's discussion starts from the equivalence principle. This implies that (3.2.14) must have a solution where the ##b##'s are associated with the coordinate transformation as in (3.2.13).

Still, the eigenvalues of ##g(X)## need not be the diagonal elements of ##\eta##.

But, as you say, you can't just start from any symmetric matrix ##g##. For example, taking the determinant of each side of (3.2.14) requires that the determinant of ##g## be negative.
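Explicitly, writing (3.2.14) in matrix form as ##g = b^T\eta\, b## (and assuming ##b## is invertible, as it must be for a genuine coordinate transformation): $$\det g = (\det b)^2\,\det\eta = -(\det b)^2 < 0.$$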
 
