physlosopher
Edit: I'm leaving the original post as is, but after discussion I'm no longer confused about coordinate time having a physical meaning in general. I was confused about a particular use of a coordinate time difference to solve a problem, in which a coordinate time interval for a particular choice of coordinates is taken to have a physical meaning that I found problematic.
I'm reading through Collier's A Most Incomprehensible Thing, and am getting tripped up thinking through the physical meaning of coordinate time between two spatially separated events. Any help would be much appreciated!
In the chapter on black holes, Collier asks what it would be like to watch something fall into a black hole from far away. To do this he uses the Schwarzschild metric to calculate the coordinate time separating two events: a signal emitted at ##r_{1}## and the signal being received by an observer at ##r_{2}##, where ##r## is the radial Schwarzschild coordinate, not a proper distance. He considers a signal along a radial line, so that ##d\theta = d\phi = 0## and the interval is $$ds^2 = \left(1 - \frac{R_{s}}{r}\right)c^2dt^2 - \frac{dr^2}{1 - \frac{R_{s}}{r}},$$ ##R_{s}## being the Schwarzschild radius, which corresponds to the event horizon. For a beam of light ##ds = 0##, so this reduces to $$0 = \left(1 - \frac{R_{s}}{r}\right)c^2dt^2 - \frac{dr^2}{1 - \frac{R_{s}}{r}}.$$ Integrating leads to the relationship $$t_{2} - t_{1} = \Delta t = \frac{r_{2}-r_{1}}{c} + \frac{R_{s}}{c} \ln\left(\frac{r_{2}-R_{s}}{r_{1}-R_{s}}\right),$$ which gives the coordinate time separating the event at ##r_{1}## and the event at ##r_{2}##.
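To convince myself the integration is right, I did a quick numerical sanity check (a throwaway Python sketch, not anything from Collier; the sample values ##R_{s} = 1##, ##r_{1} = 2##, ##r_{2} = 10## and units with ##c = 1## are just my choices):

```python
import numpy as np
from scipy.integrate import quad

c = 1.0              # units with c = 1 (my choice)
Rs = 1.0             # Schwarzschild radius (sample value)
r1, r2 = 2.0, 10.0   # emission and reception radii (sample values)

# dt/dr along an outgoing radial null geodesic, from setting ds^2 = 0
dt_dr = lambda r: 1.0 / (c * (1.0 - Rs / r))

# numerically integrate dt/dr from r1 to r2
dt_numeric, _ = quad(dt_dr, r1, r2)

# closed-form result of the same integral
dt_closed = (r2 - r1) / c + (Rs / c) * np.log((r2 - Rs) / (r1 - Rs))

print(dt_numeric, dt_closed)  # the two agree
```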
Collier lets ##r_{1}## approach ##R_{s}## to show that the coordinate time separating the emission of light at the event horizon of a black hole and the reception of that signal far away is infinite.
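Numerically the divergence shows up as expected: evaluating the same ##\Delta t## expression while letting ##r_{1}## creep toward ##R_{s}## (again just my own sample values, not Collier's), the logarithm blows up.

```python
import numpy as np

c, Rs, r2 = 1.0, 1.0, 10.0  # units with c = 1; sample values

def delta_t(r1, r2):
    """Coordinate time for an outgoing radial light signal from r1 to r2."""
    return (r2 - r1) / c + (Rs / c) * np.log((r2 - Rs) / (r1 - Rs))

# let the emission radius approach the horizon from outside
for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(f"r1 = Rs + {eps:g}: dt = {delta_t(Rs + eps, r2):.3f}")
# dt grows without bound (logarithmically in eps) as r1 -> Rs
```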
I understand that this indicates that a signal sent from the event horizon will never reach a point far away (nor, the math seems to suggest, any point with ##r_{2} > r_{1}##?). But I'm having trouble wrapping my head around the physical meaning of a coordinate time difference between two events that are separated in space. Am I correct in thinking that even though for points far from ##r = 0## the proper time will agree with coordinate time, here the coordinate time difference does not correspond to a proper time difference far away?
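To be explicit about what I mean by that (unless I'm misreading the metric): for a static observer, with ##dr = d\theta = d\phi = 0##, the metric above gives $$d\tau = \sqrt{1 - \frac{R_{s}}{r}}\, dt,$$ so ##d\tau \to dt## as ##r \to \infty##, which is why I'd expect the faraway observer's proper time to track coordinate time.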
My reasoning is that it doesn't seem to make physical sense to think about a proper time difference between an event at ##r_{1}## and an event at ##r_{2}##, with ##r_{1} \neq r_{2}##, for an observer sitting at ##r_{2}##. If we want to talk about an actual physical period of time between these events, since they're spatially separated, wouldn't we need to talk about the interval of proper time along a world line that connects them? And then isn't the argument that ##\Delta t \to \infty## subtly different from "it takes the faraway observer an infinitely long time to receive the signal"? Collier writes, "if we can find the coordinate time taken for a light signal, or for a change in position of the freely falling object, we have automatically found the distant observer's proper time measurement for those two events." But I'm having trouble squaring this with the way I'm thinking about the problem. Any advice?
I'm thinking that one sort of remedy might be to modify the thought experiment a bit so that I'm thinking about events that aren't spatially separated. For example, I could look for the coordinate time difference between an object in free fall emitting a photon near the event horizon and then emitting one at the event horizon. But because coordinate time corresponds to proper time far away, wouldn't this coordinate time difference be the same as the proper time difference between the faraway observer receiving each of those signals?
Thanks in advance for any assistance!