Curved Space-time and Relative Velocity

The discussion centers on the concept of relative velocity between moving points in curved space-time, questioning its validity within general relativity. It highlights that calculating relative velocity requires parallel transport of velocity vectors to a common point, which can yield different results depending on the transport path taken. This non-uniqueness complicates the definition of relative velocity, suggesting it may not be meaningful in certain scenarios. Examples involving parallel transport on curved surfaces illustrate that vectors can change orientation, further challenging the concept. Ultimately, the conversation underscores the complexities of defining relative motion in the context of curved space-time and its implications for understanding physical observations.
  • #271
DaleSpam said:
I only singled out momentum because it was the first one you mentioned, not because it was conceptually different from the others. The same thing I said about momentum applies to distances, times, and velocities. Describe the experimental set up of your measurement and all frames will agree on the result.

Okay, the experimental setup for measurement in Alex's frame is that Alex looks, or takes a picture, or videotapes the events. The experimental setup for measurement in Barbara's frames is that Barbara looks, or takes a picture, or videotapes the events.

The end result is that Alex and Barbara disagree on times, distances, and velocities.
 
  • #272
JDoolin said:
Okay, the experimental setup for measurement in Alex's frame is that Alex looks, or takes a picture, or videotapes the events. The experimental setup for measurement in Barbara's frames is that Barbara looks, or takes a picture, or videotapes the events.
That is two different measurements, not one measurement in two different frames. The first postulate does not say that different measurements will produce the same result, only that the same measurement will produce the same result in different frames.

So, if the measure is that Alex uses a pinhole camera to take a digital picture of some bright object and then counts the number of pixels illuminated then both Alex's frame and Barbara's frame will agree on the number of pixels illuminated.
 
  • #273
DaleSpam said:
That is two different measurements, not one measurement in two different frames. The first postulate does not say that different measurements will produce the same result, only that the same measurement will produce the same result in different frames.

So, if the measure is that Alex uses a pinhole camera to take a digital picture of some bright object and then counts the number of pixels illuminated then both Alex's frame and Barbara's frame will agree on the number of pixels illuminated.

Of course it is two different measurements!

It has never been my intention to claim that "the same measurement" would result in different results. The different results come from the fact that the different observers are forced to make different measurements, from their own positions and from their own reference frames.

My other point was that Dolby and Gull's method does little or nothing to actually represent what Barbara sees with her own eyes and her own instruments.

If Alex shows Barbara what she filmed with her pinhole camera, Barbara will of course agree and say "Yes, Alex, I'm sure that is what you saw."

But if Alex tries to use Dolby and Gull's Radar Time and says, "Okay, Barbara, this is what you saw, right?"

http://www.wiu.edu/users/jdd109/stuff/img/dolbygull.jpg

Barbara will say to Alex:

"No, silly, that is not what I saw at all. That's just what you saw with some arbitrary lines of simultaneity through it. What I saw was for half of the trip, your image was contracted, moving away from me at less than half the speed of light and you were moving in slow-motion, then when I turned around your image shot away from me, then as I was coming back, you were moving in fast motion, and the image was elongated, and coming toward me at faster than the speed of light."
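Barbara's qualitative description can be checked against the relativistic Doppler factor and the apparent ("light-delayed") image speeds. A minimal Python sketch, not from the thread, assuming a hypothetical cruise speed of 0.6c (the thread never fixes a number):

```python
# A sketch, not from the thread: checking Barbara's claims with the
# relativistic Doppler factor and the apparent (light-delayed) image
# speeds. The cruise speed beta = 0.6 is a hypothetical choice.

def doppler_k(beta):
    """Bondi k-factor: received tick interval / emitted tick interval, for recession."""
    return ((1 + beta) / (1 - beta)) ** 0.5

def apparent_speed(beta, approaching):
    """Rate at which the *image* of the object moves, in units of c."""
    return beta / (1 - beta) if approaching else beta / (1 + beta)

beta = 0.6
print(apparent_speed(beta, approaching=False))  # ~0.375: image recedes at less than c/2
print(apparent_speed(beta, approaching=True))   # ~1.5: image approaches faster than c
print(doppler_k(beta))                          # ~2.0: outbound, Alex appears in half-speed slow motion
```

The two apparent speeds bracket c exactly as Barbara describes: receding images crawl, approaching images appear superluminal, with no contradiction since no object actually moves faster than light.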
 
  • #274
JDoolin said:
Of course it is two different measurements!

It has never been my intention to claim that "the same measurement" would result in different results. The different results come from the fact that the different observers are forced to make different measurements, from their own positions and from their own reference frames.
Do you agree with the following: There is nothing whatsoever that forces you to use a reference frame where a specific measuring device is at rest. All reference frames will agree on the number that device produces for a specific measurement regardless of the device's velocity in that frame.

If you agree, then I do not understand in what sense you mean that an observer is forced to make a measurement from their reference frame.

JDoolin said:
My other point was that Dolby and Gull's method does little or nothing to actually represent what Barbara sees with her own eyes and her own instruments.
So what? Alex's inertial frame doesn't represent what Alex sees with his own eyes and his own instruments either. That is not what coordinate systems are for.

However, you can perform the analysis in any reference frame to determine what Alex or Barbara saw with their own eyes and their own instruments. You are guaranteed to get the same results.
 
  • #275
DaleSpam said:
Do you agree with the following: There is nothing whatsoever that forces you to use a reference frame where a specific measuring device is at rest. All reference frames will agree on the number that device produces for a specific measurement regardless of the device's velocity in that frame.

If you agree, then I do not understand in what sense you mean that an observer is forced to make a measurement from their reference frame.

So what? Alex's inertial frame doesn't represent what Alex sees with his own eyes and his own instruments either. That is not what coordinate systems are for.

However, you can perform the analysis in any reference frame to determine what Alex or Barbara saw with their own eyes and their own instruments. You are guaranteed to get the same results.

I am not sure what you are still bothered about. Of course an instrument can only gather data in the reference frame that it is in. Everyone is going to agree on whatever data the equipment gathered.

You can map from one reference frame to another, but the distances between events, times between events, and velocities of objects will not agree in the different reference frames.

Is there something you still disagree with?
 
  • #276
JDoolin said:
Is there something you still disagree with?
Yes. You are being self-contradictory here:

JDoolin said:
an instrument can only gather data in the reference frame that it is in.
and
JDoolin said:
Everyone is going to agree on whatever data the equipment gathered.
The first statement violates the first postulate of relativity and contradicts the second statement.

If you do not see these two statements as self-contradictory then you really need to explain what you mean for an object to be "in" a reference frame. Despite repeated queries from multiple people you have still not given a clear definition of what you mean by that, and in post 239 you explicitly disagreed with the typical usage of the term.
 
  • #277
Suppose I have a coil, an electron moving at .8c to the right, and another electron moving at .99c to the left, aimed to come near the first electron, both being well within the magnetic field of the coil. I have a cloud chamber to capture the electron paths. What frame of reference is anything 'in'?! No matter what frame I choose, to determine what will happen in the cloud chamber I have to deal with fast-moving e/m fields. I can't separate anything into independent interactions: from either particle's 'point of view' I have a fast-moving coil and a fast-moving 'current'. From the cloud chamber I have two fast-moving currents interacting with each other and the coil field. This is a conceptually straightforward problem that can be analyzed in any frame; none will be much simpler than any other. How can you talk about anything being 'forced' to be analyzed in 'their frame'?
 
  • #278
PAllen said:
Suppose I have a coil, an electron moving at .8c to the right, and another electron moving at .99c to the left, aimed to come near the first electron, both being well within the magnetic field of the coil. I have a cloud chamber to capture the electron paths. What frame of reference is anything 'in'?! No matter what frame I choose, to determine what will happen in the cloud chamber I have to deal with fast-moving e/m fields. I can't separate anything into independent interactions: from either particle's 'point of view' I have a fast-moving coil and a fast-moving 'current'. From the cloud chamber I have two fast-moving currents interacting with each other and the coil field. This is a conceptually straightforward problem that can be analyzed in any frame; none will be much simpler than any other. How can you talk about anything being 'forced' to be analyzed in 'their frame'?

There is an implicit reference frame as soon as you say that there is an electron moving .8c to the right.

You ask, ".8c relative to what?" The answer to that question tells you whose or what's reference frame you're in.

In most cases, it is the frame of whatever apparatus you are using to measure the location of the electron. You are not forced to analyze the data from any particular frame, but you are forced to collect the data from a particular frame.
 
  • #279
DaleSpam said:
Yes. You are being self-contradictory here:

The first statement violates the first postulate of relativity and contradicts the second statement.

If you do not see these two statements as self-contradictory then you really need to explain what you mean for an object to be "in" a reference frame. Despite repeated queries from multiple people you have still not given a clear definition of what you mean by that, and in post 239 you explicitly disagreed with the typical usage of the term.

My use of the word reference frame is quite typical:

http://en.wikipedia.org/wiki/Frame_of_reference

"A frame of reference in physics, may refer to a coordinate system or set of axes within which to measure the position, orientation, and other properties of objects in it, or it may refer to an observational reference frame tied to the state of motion of an observer. It may also refer to both an observational reference frame and an attached coordinate system, as a unit."

Example:
If I am driving down the highway at 55 miles per hour, and a truck is traveling at 55 miles per hour in the opposite direction, how fast is the truck going in my reference frame? 110 miles per hour. How fast am I going in the truck's reference frame? 110 miles per hour. How fast are we going in the Earth's reference frame? 55 miles per hour.
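At highway speeds the Galilean sum and the relativistic velocity-addition formula agree to better than a part in 10^13, which is why 110 miles per hour is the right everyday answer. A quick sketch, not from the thread (the mph value of c is approximate):

```python
# A sketch, not from the thread: at highway speeds the relativistic
# velocity-addition formula reproduces the Galilean answer of 110 mph
# to within about a trillionth of a mile per hour.

C_MPH = 670_616_629.0  # speed of light in miles per hour (approximate)

def add_velocities(u, v, c=C_MPH):
    """Relativistic composition of two collinear speeds u and v."""
    return (u + v) / (1 + u * v / c**2)

galilean = 55.0 + 55.0
relativistic = add_velocities(55.0, 55.0)
print(galilean - relativistic)        # a discrepancy of order 1e-12 mph
print(add_velocities(0.5, 0.5, c=1))  # 0.8: half of c plus half of c is 0.8c
```

The second print shows where the formula matters: two speeds of 0.5c compose to 0.8c, not c.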
 
  • #280
JDoolin said:
There is an implicit reference frame as soon as you say that there is an electron moving .8c to the right.

You ask, ".8c relative to what?" The answer to that question tells you whose or what's reference frame you're in.

In most cases, it is the frame of whatever apparatus you are using to measure the location of the electron. You are not forced to analyze the data from any particular frame, but you are forced to collect the data from a particular frame.

Yes, I was describing things from the point of view of the coil. Let me try this one more way:

What a detector/observer measures/sees is determined by its world line. This is an invariant, physical fact, and can even be dealt with without coordinates. The world line can be described and analyzed from any number of frames, each with any number of coordinate labeling choices (e.g. polar vs rectilinear coordinates). Everything except the world line (and the intrinsic geometry and surrounding fields, etc.) is convention, not physics, and affects only the ease of calculation; what is easiest depends on what calculation you want to do.

The cloud chamber has a world line - that is intrinsic, determines what it detects. The cloud chamber has a frame of reference only by convention. Saying the cloud chamber has a frame of reference is shorthand for: it is convenient for some purpose to label events by building a coordinate patch whose origin is some position along a world line, and, usually, whose time coordinate is proper time along the world line from a chosen origin.
 
  • #281
JDoolin said:
My use of the word reference frame is quite typical
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

JDoolin said:
Example:
If I am driving down the highway at 55 miles per hour, and a truck is traveling at 55 miles per hour in the opposite direction, how fast is the truck going in my reference frame? 110 miles per hour. How fast am I going in the truck's reference frame? 110 miles per hour. How fast are we going in the Earth's reference frame? 55 miles per hour.
This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.
 
  • #282
DaleSpam said:
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.


I have not been as clear as I thought. For what I am referring to, it is not sufficient just to say "the reference frame I am in," because, indeed I am in every reference frame. Mea Culpa.

You may assume that every time I have said "the reference frame someone is in" I actually meant "the reference frame in which someone is momentarily at rest."

If that helps, I still disagree on the issue of whether an observer is "forced" to use the reference frame where it is momentarily at rest.

Let me try to make my main point in as simple a way as I can. I have asked several people the following question: Imagine you are in a truck, driving in a soft snowfall. To you, it seems that the snow is moving almost horizontally, toward you. Which way is the snow "really" moving?

Everyone I have asked this question answers, "straight down." Of course, this is a good Aristotelian answer, but relativistically speaking there is no correct answer, because there is no ether by which one could determine how the snow is "really" moving.

On the other hand, if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest. In the film, it will appear that the snow is traveling almost horizontally, straight toward the camera.

Even if you stop the truck, or throw the camera out the window, the camera still films everything in such a way that the camera is always momentarily at rest in its own reference frame. It is effectively forced to film things this way; not as a matter of convention, but as a matter of physical reality.

It is also the same with Barbara, who on her trip accelerates and turns around--what she sees is not a matter of convention, but a matter of physical fact.

Now, there is also the matter of stellar aberration. In general, the common view is that the actual positions of stars are stationary, but it is only some optical illusion which causes them to move up to 20 arcseconds in the sky over the course of the year. The nature of this question is similar to the snowflake question. Is the light coming from the direction that the light appears to be coming from? If you point toward the image of the star, are you pointing toward the star? Are you pointing toward the event which created the light you are now seeing?

I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
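The "20 arcseconds" figure for annual stellar aberration quoted above follows directly from Earth's orbital speed. A small sketch, not from the thread (the orbital speed is the standard approximate value):

```python
import math

# A sketch, not from the thread: the ~20 arcsecond annual aberration
# follows from Earth's orbital speed. For a star perpendicular to the
# orbital velocity, the apparent tilt is approximately beta radians.

V_ORBIT = 29.78e3   # Earth's mean orbital speed in m/s (approximate)
C = 299_792_458.0   # speed of light in m/s

beta = V_ORBIT / C
aberration_arcsec = math.degrees(beta) * 3600.0  # small-angle approximation
print(round(aberration_arcsec, 1))  # about 20.5 arcseconds
```

Over a year the velocity vector sweeps around, so the apparent position traces a small ellipse of this angular radius, just as the snowfall tilts toward the moving truck.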
 
  • #283
JDoolin said:
In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me:
JDoolin said:
if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest.
The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.
 
  • #284
DaleSpam said:
If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame.
I am interested, do you have some good references (e.g. books or significant papers) to light cone coordinates DaleSpam?
 
  • #285
I would probably start with this one:
http://ysfine.com/articles/dircone.pdf
 
  • #286
DaleSpam said:
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me: The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.

Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?

JDoolin said:
Barbara will say to Alex:

"... What I saw was for half of the trip, your image was contracted, moving away from me at less than half the speed of light and you were moving in slow-motion, then when I turned around your image shot away from me, then as I was coming back, you were moving in fast motion, and the image was elongated, and coming toward me at faster than the speed of light."
 
  • #287
JDoolin said:
Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.
 
  • #288
DaleSpam said:
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.

Let me first make clear that I do like the article about light cone coordinates, although I think I jumped the gun in saying that I was using the light-cone coordinates. (I was not.) What I was doing was considering the locus of events that are in the observer's past light cone. Unfortunately, I went by the name of the article and the context of what I thought we were talking about, and didn't spend the time to grok what the article was actually about.

This "Dirac's Light Cone Coordinates" appears to be a pretty good pedagogical method, as it turns the Lorentz Transform into a scaling and inverse scaling on the u and v axes, simply by rotating 45 degrees so that the x = ct and x = -ct lines become vertical and horizontal:

This is another way of writing equation (2) from the article you referenced.

\left( \begin{array}{c} u \\ v \end{array} \right) = \left( \begin{array}{cc} \cos (45) & \sin (45) \\ -\sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{c} t \\ z \end{array} \right)
I used almost identical reasoning when I derived this (in thread: https://www.physicsforums.com/showthread.php?t=424618):


\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta\gamma \\ -\beta\gamma & \gamma \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \begin{pmatrix} \cosh(\theta) & -\sinh(\theta) \\ -\sinh(\theta) & \cosh(\theta) \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \begin{pmatrix} \frac{1+s}{2} & \frac{1-s}{2} \\ \frac{1-s}{2} & \frac{1+s}{2} \end{pmatrix} \begin{pmatrix} \frac{s^{-1}+1}{2} & \frac{s^{-1}-1}{2} \\ \frac{s^{-1}-1}{2} & \frac{s^{-1}+1}{2} \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}

It's not immediately clear that the last two matrices represent scaling on the x = ct axis and the x = -ct axis. The article (http://ysfine.com/articles/dircone.pdf) has made the transformation much more elegant (though I may have a sign or two wrong somewhere):

\left( \begin{array}{c} ct' \\ z' \end{array} \right) = \left( \begin{array}{cc} \cos (45) & -\sin (45) \\ \sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{cc} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{array} \right) \left( \begin{array}{cc} \cos (45) & \sin (45) \\ -\sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{c} ct \\ z \end{array} \right)
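The rotate-scale-rotate decomposition can be checked numerically. This sketch is not from the thread; the exponent signs are flipped relative to the displayed equation so that the product matches the boost matrix with positive off-diagonal minus signs (consistent with the "sign or two wrong" caveat), and the rapidity value is arbitrary:

```python
import numpy as np

# A numerical check, not from the thread: a 45-degree rotation, a pure
# scaling of the light-cone axes, and a rotation back reproduce the
# Lorentz boost matrix. The exponent signs here are one convention
# (the post itself flags a possible sign ambiguity); eta is arbitrary.

eta = 0.7  # rapidity for the check

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

boost = np.array([[np.cosh(eta), -np.sinh(eta)],
                  [-np.sinh(eta), np.cosh(eta)]])

scale = np.diag([np.exp(-eta), np.exp(eta)])  # stretch one null direction, shrink the other
decomposed = rot(45) @ scale @ rot(-45)

print(np.allclose(decomposed, boost))  # True
```

In light-cone variables the boost really is just a reciprocal stretch of the two null axes, which is what makes the diagram-reading so clean.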

I'm not sure how Dolby and Gull's Radar time relates to Dirac's light-cone coordinates. It appears to me that Dirac's light-cone coordinates are simply an aid to performing the Lorentz Transformations. These light-cone coordinates of Dirac's don't claim to show another frame; they simply rotate the Minkowski diagram 45 degrees.

My point is really this: whatever coordinate system you use, you should be imagining Barbara and what she is seeing. If your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
 
  • #289
JDoolin said:
I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.

The statement has a verb tense problem; should read:

The phenomena they are seeing are not optical illusions, but are true representations of what was happening in the reference frames where they are momentarily at rest.

The past-light cone of an event is the locus of events which are currently being seen by the camera. It is not what is happening, but what was happening.
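The last remark can be made concrete: the event "currently being seen" is found by intersecting the source's worldline with the camera's past light cone. A sketch, not from the thread, with c = 1 and hypothetical numbers:

```python
# A sketch, not from the thread: intersecting a source worldline with the
# observer's past light cone. Units with c = 1; the observer sits at the
# spatial origin and the source moves along x(t) = x0 + v*t on the
# positive x-axis. All numbers are hypothetical.

def emission_event(x0, v, t_obs):
    """Return the event (t_e, x_e) whose light reaches x = 0 at time t_obs."""
    t_e = (t_obs - x0) / (1.0 + v)  # solves x0 + v*t_e = t_obs - t_e
    return t_e, x0 + v * t_e

t_e, x_e = emission_event(x0=10.0, v=0.5, t_obs=0.0)
print(t_e, x_e)  # roughly -6.67 and 6.67: what is seen "now" happened then and there
```

The consistency check is that the spatial distance x_e equals the elapsed light-travel time t_obs - t_e: what the camera records "now" is what was happening on its past light cone, exactly as the post says.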
 
  • #290
JDoolin said:
My point is really, whatever coordinate system you use, you should be imagining Barbara, and what she is seeing, and if your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?
 
  • #291
DaleSpam said:
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?

I remain agnostic about the usefulness of accelerated reference frames. I think Rindler coordinates may have some potential, but "radar time" seems rather too arbitrary to me. I found an article by Antony Eagle that raises some of the same criticisms:

http://arxiv.org/abs/physics/0411008

I also found the "Debs and Redhead" article referenced:

http://chaos.swarthmore.edu/courses/PDG/AJP000384.pdf

It concludes: "Perhaps the method discussed in this paper, the conventionality of simultaneity applied to depicting the relative progress of two travelers in Minkowski space-time, will settle the issue of the twin paradox, one which has been almost continuously discussed since Langevin's 1911 paper."

If I correctly understand their meaning, the "relative progress" of a traveler in Minkowski spacetime is simulated here:

http://www.wiu.edu/users/jdd109/stuff/relativity/LT.html
 
  • #292
JDoolin said:
But "radar time" seems rather too arbitrary to me.
Radar time for an inertial observer is the Einstein synchronization convention. It is arbitrary, but certainly no more nor less arbitrary than the usual convention. And even more arbitrary conventions will work.
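For reference, radar time for an inertial observer reduces to a two-line computation. A sketch, not from the thread, with hypothetical clock readings and c = 1:

```python
# A sketch, not from the thread, of the Einstein/radar convention: a
# signal leaves the observer at proper time t_send, reflects off a
# distant event, and returns at t_receive. Clock readings are
# hypothetical; c = 1.

def radar_coordinates(t_send, t_receive):
    """Radar time and radar distance assigned to the reflection event."""
    t = (t_send + t_receive) / 2.0  # event deemed simultaneous with the round-trip midpoint
    d = (t_receive - t_send) / 2.0  # half the round-trip light travel time
    return t, d

print(radar_coordinates(3.0, 9.0))  # (6.0, 3.0)
```

The convention enters in assigning the reflection to the midpoint of the round trip; Dolby and Gull's construction applies exactly this rule along a non-inertial worldline.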

The Debs and Redhead article supports my position that the choice of simultaneity is a matter of convenience (they use the word convention).

The Eagle article explicitly admits in the third paragraph that the Dolby and Gull article is mathematically correct. Eagle's point is not that Dolby and Gull are wrong, just that their approach is not necessary. I fully agree, you can use any coordinate system you choose.
 
  • #293
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition. However, one issue I have with it in a cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of a distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang, anyone?)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must, to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. A receiving observer can then identify the original frequency by the line pattern and compensate for red/blue shift, obtaining the intensity that would be received from a hypothetically non-shifted source (whether such could actually exist in the cosmology is not relevant to the operational definition). Then, comparing this normalized received intensity to the assumed original intensity and applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.

As a simpler stand-in for this model, I have thought about the following, which might be equivalent. Imagine two light rays emitted from a distant event at an infinitesimal angle to each other. Taking the limit, as the angle goes to zero, of their separation in the receiver's frame over the angle in the sender's frame would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.
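Under the simplest attenuation model, an inverse-square law in flat space, the recipe above reduces to a short computation. A sketch, not from the thread, with hypothetical luminosity, flux, and reception time, and c = 1:

```python
import math

# A sketch, not from the thread, of the intensity-based recipe under the
# simplest attenuation model: an inverse-square law in flat space.
# Luminosity, flux, and reception time are hypothetical; c = 1.

def conventional_distance(luminosity, flux):
    """Distance inferred from flux = luminosity / (4 * pi * d**2)."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

def simultaneous_time(t_receive, luminosity, flux):
    """Receiver-clock time deemed simultaneous with the emission event."""
    return t_receive - conventional_distance(luminosity, flux)

L0 = 4.0 * math.pi * 100.0  # assumed intrinsic luminosity of the source
print(conventional_distance(L0, flux=1.0))   # distance of about 10
print(simultaneous_time(0.0, L0, flux=1.0))  # emission deemed simultaneous with t = -10
```

In an expanding cosmology the "standard attenuation model" would replace the bare inverse-square law, which is where this convention would start to differ from radar time.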

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
 
  • #294
PAllen said:
I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
If you skip the "compensate for red/blue shift" part, you get the definition of "luminosity distance" (http://en.wikipedia.org/wiki/Luminosity_distance).
 
  • #295
PAllen said:
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition.

Can you give me more detail on just what is involved in the Einstein Convention?

However, one issue I have with it in a cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of a distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang, anyone?)

In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity, and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. Then a receiving observer can identify the original frequency by the line pattern, compensate for red/blue shift, getting the intensity that would be received from a hypothetically non-shifted source (whether such could actually exist in the cosmology is not relevant to the operational definition). Then comparing this normalized received intensity to the assumed original intensity, applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.

As a simpler stand-in for this model, I have thought about the following, which might be equivalent. Imagine two light rays emitted from a distant event at an infinitesimal angle to each other. Taking the limit, as the angle goes to zero, of their separation in the receiver's frame over the angle in the sender's frame would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?

I think that apparent distance can be estimated by relating apparent size to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point). An alternative would be to use the positions of the two ends of the object and the angle they would subtend at the position of a point observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
 
  • #296
Ich said:
If you skip the "compensate for red/blue shift" part, you get the definition of "luminosity distance" (http://en.wikipedia.org/wiki/Luminosity_distance).

I thought this must be a standard astronomy technique. Actually, the wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and it seems these are very intertwined). So that is the definition I am looking for. So then, I am looking for what sort of coordinate system that imposes on, e.g., a Friedmann model, compared to other coordinate systems.
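For concreteness, here is the standard luminosity-distance definition next to the redshift-compensated variant discussed earlier in the thread; a brief sketch, with the (1+z)^2 flux correction as the assumed attenuation compensation (the function names are my own):

```python
import math

def luminosity_distance(luminosity, flux):
    """Standard definition: D_L = sqrt(L / (4*pi*F)), using the raw
    measured flux with no redshift compensation."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

def redshift_compensated_distance(luminosity, flux, z):
    """The convention from the earlier posts: brighten the measured flux
    by (1+z)^2 to undo redshift losses, then apply the inverse-square
    law. Algebraically this equals D_L / (1 + z)."""
    return luminosity_distance(luminosity, flux * (1.0 + z) ** 2)
```

So the two conventions agree at z = 0 and differ by a factor of (1+z) otherwise, which is the source of the "deliberate deviation" discussed below.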
 
  • #297
JDoolin said:
Can you give me more detail on just what is involved in the Einstein Convention?
It's the same as the radar time you've been discussing with DaleSpam. You imagine a signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line to the past light cone of the distant event.
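The radar convention just described amounts to two one-line formulas; a minimal sketch (the function name is my own):

```python
def radar_coordinates(t_send, t_receive, c=1.0):
    """Einstein (radar) convention: assign the distant reflection event
    the time midway between emission and reception on the observer's own
    clock, and a distance of c times half the round-trip time."""
    t_event = 0.5 * (t_send + t_receive)
    distance = 0.5 * c * (t_receive - t_send)
    return t_event, distance
```

For example, a signal sent at t = 2 and received back at t = 8 (with c = 1) assigns the event time 5 and distance 3.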
JDoolin said:
In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)
I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?
JDoolin said:
I think that apparent distance can be estimated by apparent size related to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point.) An alternative would be to use the positions of two ends of the object; and what angle they would be seen in the position of a point-observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
A relation between apparent angular size in my frame and the size of the object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in a cosmology model to astronomical measurements that are actually possible.
 
  • #298
JDoolin said:
Actually, the wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and it seems these are very intertwined).
Yeah, this article claims a lot of strange things. Anyway, from the formula you can see that no such corrections are applied. They want to keep the distance as close to the measured data as possible, at the expense of deliberately deviating from the most reasonable definition when there is redshift.
JDoolin said:
So then, I am looking for what sort of coordinate system that imposes on, e.g., a Friedmann model, compared to other coordinate systems.
If you correct time for light travel time and distance for redshift? Minkowskian in the vicinity, and then something like reduced-circumference coordinates, with more or less static slices. Like the Schwarzschild r-coordinate, I guess.
 
  • #299
PAllen said:
It's the same as the radar time you've been discussing with DaleSpam. You imagine a signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line to the past light cone of the distant event.

I have to say I doubt the wisdom of that technique. It works fine in an inertial frame, but it shouldn't be used while you are accelerating. By the time the signal comes back to you, you will not have the same lines of simultaneity as when you sent the signal.

Say I was trying to determine what the y-coordinate of an object was on a graph as I was rotating. I figure out what the y-coordinate is, and a moment later, after I've rotated 30 degrees, I find the y-coordinate again. Would it be valid in ANY way for me to just take the average of those two y-coordinates and claim it as the "radar y-coordinate"?

Edit: Also, unless you are accelerating dead-on straight toward your target, the signal that you send toward it is more than likely going to miss (unless you calculate its trajectory in your momentarily comoving frame), and it certainly won't reflect straight back at you after you accelerate!

I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?

Not sure exactly what you're asking about a strongly curved model, but to get inflation, you just apply a Lorentz Transformation around some event later than the Big Bang event in Minkowski space. The Big Bang gets moved further into the past, and voila... inflation.

A relation between apparent angular size in my frame and the size of the object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in a cosmology model to astronomical measurements that are actually possible.

Hmmm, there is "your distance from them," which I think is philosophically anti-relativistic, and there is "their distance from you," which is philosophically in tune with relativity. The difference is that relativity is based on the view of the observer (at least in Special Relativity; that philosophy may have changed in General Relativity). Can you clarify which one you are interested in?
 
  • #300
PAllen said:
A relation between apparent angular size in my frame and the size of the object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in a cosmology model to astronomical measurements that are actually possible.

I missed the comparison of the "distant frame" and "my frame." I gather you are assuming there is some different spatial scale for distant objects than for nearby ones. My assumption would be that there is no such spatial-scale difference.
 