Curved Space-time and Relative Velocity

  • #251
JDoolin said:
Two people traveling at different speeds that are co-located will see exactly the same events, but they will see them at different places.
JesseM said:
By "see" do you mean visual appearances,
JDoolin said:
Yes
JesseM said:
or do you mean calculations of distances and times in his own rest frame?
JDoolin said:
Yes.
So you mean your statement to apply to both apparent visual distance and also to distance in each observer's rest frame? But in terms of apparent visual distance, it's not true that "two people traveling at different speeds that are co-located will see exactly the same events, but they will see them at different places"--if they are co-located, both will see exactly the same thing, so naturally the apparent visual distance of different objects (i.e. their visual size) and their visual arrangement relative to one another will be identical for both of the co-located observers at that instant.
JesseM said:
do you think they will be unable to perform measurements that determine the position and times of events in a frame other than their rest frame,
JDoolin said:
Yes. (If it is a fast moving frame.)
Why would the speed of the frame affect how hard it is to determine the position and time of an event? (which might be on the worldline of an object moving fast or slow relative to yourself)
JDoolin said:
It depends on how fast the objects are moving. If you happen to be in a system where another planet is going by at, say, a rapidity of 10, that planet will appear so suddenly in your view, and then be so time-dilated after it passes by, that there would be absolutely no use whatsoever in defining your coordinates in terms of that planet's coordinate system.
What do you mean "no use"? The problem of the object appearing suddenly in your view is a visual issue which applies regardless of what coordinate system you use. I don't see why for any given object, whatever its visual appearance as it passes within range of your instruments, it should be harder to assign position and time coordinates in one imaginary coordinate grid than in another imaginary coordinate grid. Can you explain further, give a numerical example or something?
JDoolin said:
I can tell you what it would look like: It would look like one big diagonal smear
What would look like a diagonal smear? The visual appearance of the object passing by you, or your drawing of the object's worldline in a diagram which uses the frame moving rapidly relative to the Earth, or the drawing of the Earth's own worldline in a diagram in that frame?
JDoolin said:
with each second on Earth stretching out for cosh(10)=11,013 seconds on the diagram.
How does that make it hard to calculate the coordinates of any events? You just did the calculation yourself, showing that two events on the worldline of a clock on Earth which have 1 second of proper time between them must have occurred 11,013 seconds apart in the coordinate time of this frame. Piece of cake!
JDoolin said:
but rounding errors would creep up very quickly because everything in your diagram for Earth would be just a smidgeon off the line x=c*t.
What do "rounding errors" have to do with the speed of the Earth in this frame? Again, a frame is a purely imaginary thing, we can define the frame to be the one where the Earth is moving at some precisely specified velocity if we wish, rather than defining it in some other way and then trying to measure the speed of the Earth in this frame. And even if we do define the frame in some other way that requires us to measure the speed of the Earth in this frame, this is a practical issue, not a theoretical argument for why we must use a frame with low velocity relative to ourselves regardless of the precision of our measuring instruments.
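As a quick numerical check of the cosh(10) figure above, here is a minimal sketch (my own illustration, not from the original posts; units with c = 1). It boosts two Earth-clock events by rapidity 10 and also shows how close the Earth's worldline sits to the line x = c*t in that frame:

```python
import math

def boost(t, x, rapidity):
    """Lorentz-boost the event (t, x) by the given rapidity (units c = 1)."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return ch * t + sh * x, sh * t + ch * x

# Two events on an Earth clock's worldline, 1 second of proper time apart.
t0, x0 = boost(0.0, 0.0, 10.0)
t1, x1 = boost(1.0, 0.0, 10.0)

print(t1 - t0)                # coordinate-time separation: cosh(10), about 11013 s
print((x1 - x0) / (t1 - t0))  # Earth's speed in this frame: tanh(10), just under 1
```

The second number is the "smidgeon off the line x = c*t": the Earth's speed in this frame differs from c only in the ninth decimal place.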
 
  • #252
JDoolin said:
Let's see if we're on the same page at all.

http://www.wiu.edu/users/jdd109/stuff/img/dolbygull.jpg

Dolby and Gull have drawn lines of simultaneity in three different reference frames.

These lines of simultaneity are valid
  • in Region P for Barbara's outbound trip,
  • in Region F for Barbara's return trip,
  • in Regions I and II, where the lines of simultaneity are drawn for Alex's reference frame.
No, you are completely misunderstanding the figure and the point of the D&G paper. Those are the lines of simultaneity in Barbara's reference frame. This is a single frame which covers the whole spacetime from Barbara's perspective. The bends in the lines are because Barbara is a non-inertial observer, but they are all Barbara's lines of simultaneity. Alex's lines of simultaneity are not drawn on this figure, and note that Barbara's lines of simultaneity are 1-to-1 as required.

JDoolin said:
Now, that's how the lines of simultaneity are drawn, but if you look at the whole graph, and the way it is laid out on the page, the whole thing is actually drawn in Alex's reference frame. No real attempt is made to draw it in any of Barbara's frames.
Obviously. These are Barbara's lines of simultaneity plotted in Alex's frame; Barbara's lines of simultaneity in Barbara's frame would just be horizontal lines, no need to draw such a figure. For Alex's lines of simultaneity plotted in Barbara's frame see Figure 9. Note also the asymmetry between Barbara's path in Alex's frame and Alex's path in Barbara's frame by comparing figures 5 and 9.
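For an inertial observer, Dolby-Gull radar coordinates simply reduce to the ordinary rest-frame coordinates, which is easy to verify directly; a minimal sketch (my own, units c = 1, observer at rest at x = 0):

```python
def radar_coordinates(t, x):
    """Dolby-Gull radar coordinates of event (t, x) for an inertial observer
    at x = 0 (units c = 1): a radar pulse sent at t_send = t - |x| reflects
    off the event and returns at t_receive = t + |x|."""
    t_send = t - abs(x)
    t_receive = t + abs(x)
    radar_time = (t_receive + t_send) / 2      # recovers t
    radar_distance = (t_receive - t_send) / 2  # recovers |x|
    return radar_time, radar_distance

print(radar_coordinates(5.0, 3.0))  # (5.0, 3.0)
```

For the accelerating Barbara, the emission and reception times must instead be read off her bent worldline, which is what produces the kinked simultaneity surfaces in the figure.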
 
  • #253
JDoolin said:
Mathematically, too many significant figures. Physically, just a useless diagram. We may not be forced to use our own reference frame, but we are forced to use a reference frame with a small rapidity relative to our own.
This is completely wrong. In fact, on an almost daily basis, when people are designing experiments in particle accelerators they will use reference frames with very large rapidities relative to the lab frame. Your assertions are not only theoretically completely unfounded, they are directly contradicted from any practical standpoint also.

Tell me, in your own words, what does the first postulate of relativity mean in practice?
 
  • #254
An attempt at mutual understanding of these 'frame' issues:

1) Physical setup:

a) planet here, and two cameras instantaneously almost colocated there, one camera at small velocity relative to the planet, the other at huge velocity away from the planet.

b) a charge here, and two sets of iron filings instantaneously almost colocated there, one stationary relative to the charge, the other moving rapidly.

2) The physics is frame and coordinate independent.

a) One camera will take a normal picture of the planet, the other camera will take a very reddened picture. Both detect the same photons, but the difference in motion of the detectors (film or ccd) will cause a different result to be recorded.

b) One set of iron filings will sit inert, the other will line up with the magnetic field resulting from the relative motion between detector and charge.

3) Complete freedom of choice in how to describe and calculate the physics. In (a) I can analyze everything from a coordinate system in which the planet is stationary at the origin, or either camera is stationary at the origin, or any other arbitrary coordinate system. Whichever I choose, if I do it correctly, I come to the exact same conclusions as to the physics. Same for situation (b).

In what sense is anything 'in a frame of reference' beyond linguistic sloppiness?
 
  • #255
DaleSpam said:
This is completely wrong. In fact, on an almost daily basis, when people are designing experiments in particle accelerators they will use reference frames with very large rapidities relative to the lab frame. Your assertions are not only theoretically completely unfounded, they are directly contradicted from any practical standpoint also.

Tell me, in your own words, what does the first postulate of relativity mean in practice?
Here is an example where high-rapidity frames can actually be an advantage, as you need fewer time slices.

http://www.lbl.gov/cs/Archive/news080910.html
 
  • #256
Passionflower said:
Here is an example where high-rapidity frames can actually be an advantage, as you need fewer time slices.

http://www.lbl.gov/cs/Archive/news080910.html

Yes. When the events are close enough together in time and space, transferring to a high rapidity frame can be very useful. I was thinking of ordinary day-to-day life, like going to the store, or trying to find the directions on a map.

In these cases, by changing to a different reference frame, some events which are far apart in time and space will come closer together, while others which are close together in time and space will move far apart.

This may be exactly what you are looking for if you are working with particle accelerators.
 
  • #257
PAllen said:
An attempt at mutual understanding of these 'frame' issues:

1) Physical setup:

a) planet here, and two cameras instantaneously almost colocated there, one camera at small velocity relative to the planet, the other at huge velocity away from the planet.

b) a charge here, and two sets of iron filings instantaneously almost colocated there, one stationary relative to the charge, the other moving rapidly.

2) The physics is frame and coordinate independent.

a) One camera will take a normal picture of the planet, the other camera will take a very reddened picture. Both detect the same photons, but the difference in motion of the detectors (film or ccd) will cause a different result to be recorded.

b) One set of iron filings will sit inert, the other will line up with the magnetic field resulting from the relative motion between detector and charge.

3) Complete freedom of choice in how to describe and calculate the physics. In (a) I can analyze everything from a coordinate system in which the planet is stationary at the origin, or either camera is stationary at the origin, or any other arbitrary coordinate system. Whichever I choose, if I do it correctly, I come to the exact same conclusions as to the physics. Same for situation (b).

In what sense is anything 'in a frame of reference' beyond linguistic sloppiness?

You said it yourself.

You can analyze everything
(1) from a coordinate system in which the planet is stationary at the origin
(2) either camera is stationary at the origin
(3) any other arbitrary coordinate system​

Each of these is an analysis "in a frame of reference."

In each of these frames, you DO get all the same events, and you DO get the same "space-time intervals" between the events, but you don't get the same locations or the same times. The distances are different; the times are different; the ANGLES are different.

Each analysis is in a single reference frame. And that makes these different frames physically significant.

Edit: I realized that this statement applies to Jesse's post as well:
JesseM said:
So you mean your statement to apply to both apparent visual distance and also to distance in each observer's rest frame? But in terms of apparent visual distance, it's not true that "two people traveling at different speeds that are co-located will see exactly the same events, but they will see them at different places"--if they are co-located, both will see exactly the same thing, so naturally the apparent visual distance of different objects (i.e. their visual size) and their visual arrangement relative to one another will be identical for both of the co-located observers at that instant.

Again: in each of these frames, you DO get all the same events, and you DO get the same "space-time intervals" between the events, but you don't get the same locations or the same times. The distances are different; the times are different; the ANGLES are different.
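That point can be checked numerically: under a boost, the coordinate separations Δt and Δx between two events change, while Δt² − Δx² does not. A small sketch (my own numbers, units c = 1):

```python
import math

def boost(t, x, v):
    """Standard Lorentz boost of the event (t, x) into a frame moving at v."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# Two timelike-separated events; interval dt^2 - dx^2 = 25 - 9 = 16 in every frame.
for v in (0.0, 0.6, -0.9):
    t1, x1 = boost(0.0, 0.0, v)
    t2, x2 = boost(5.0, 3.0, v)
    dt, dx = t2 - t1, x2 - x1
    print(f"v={v:+.1f}: dt={dt:.3f}, dx={dx:.3f}, dt^2-dx^2={dt*dt - dx*dx:.3f}")
```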
 
  • #258
JesseM said:
What do you mean "no use"? The problem of the object appearing suddenly in your view is a visual issue which applies regardless of what coordinate system you use. I don't see why for any given object, whatever its visual appearance as it passes within range of your instruments, it should be harder to assign position and time coordinates in one imaginary coordinate grid than in another imaginary coordinate grid. Can you explain further, give a numerical example or something?


What would look like a diagonal smear? The visual appearance of the object passing by you, or your drawing of the object's worldline in a diagram which uses the frame moving rapidly relative to the Earth, or the drawing of the Earth's own worldline in a diagram in that frame?

How does that make it hard to calculate the coordinates of any events? You just did the calculation yourself, showing that two events on the worldline of a clock on Earth which have 1 second of proper time between them must have occurred 11,013 seconds apart in the coordinate time of this frame. Piece of cake!

What do "rounding errors" have to do with the speed of the Earth in this frame? Again, a frame is a purely imaginary thing, we can define the frame to be the one where the Earth is moving at some precisely specified velocity if we wish, rather than defining it in some other way and then trying to measure the speed of the Earth in this frame. And even if we do define the frame in some other way that requires us to measure the speed of the Earth in this frame, this is a practical issue, not a theoretical argument for why we must use a frame with low velocity relative to ourselves regardless of the precision of our measuring instruments.


Perhaps you're right.

Let's see. Imagine two null (zero) spacetime intervals. One of them is a signal from the Earth to the Moon, and the other is a signal from your keyboard to your CPU. Let's just say these two beams happen to come from the same place at the same time, and travel in exactly opposite directions.

It is possible using the Lorentz Transformations to find a frame where the distance between the Earth-Moon events is negligible, while the distance between the keyboard-CPU events is as far as you like. In such a frame, the Earth and Moon would be length-contracted to the point where the distances between cities would be negligible. This is where I thought you would need many significant figures, but that would only happen if you added on the x = v*t + (displacement). The v*t would give a very large number, while the Lorentz-contracted displacement would give a very small number. (This is what I meant by the diagonal smear.)

But I think you are right; there's not really any particular conceptual difficulty to adjusting to that; at least not too much more than adjusting to the notion of oncoming traffic.
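The two null signals can be pushed through a boost explicitly; a quick sketch (my own numbers, units c = 1). A rightward null interval scales by γ(1 − v) and a leftward one by γ(1 + v), so a large boost shrinks the Earth-Moon separation while stretching the opposite-direction one, with both intervals staying exactly zero:

```python
import math

def boost(t, x, v):
    """Standard Lorentz boost of the event separation (t, x) at velocity v."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v = 0.999
earth_moon = (1.28, 1.28)   # rightward null signal, ~1.28 light-seconds
remote_tv = (1e-8, -1e-8)   # leftward null signal, a few nanoseconds

for name, (dt, dx) in [("Earth-Moon", earth_moon), ("remote-TV", remote_tv)]:
    dtp, dxp = boost(dt, dx, v)
    print(name, dxp, dtp * dtp - dxp * dxp)  # the interval stays 0
```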
 
  • #259
JDoolin said:
Let's see. Imagine two 0-space-time intervals. One of them is a signal from the Earth to the moon, and another is a signal from your keyboard to your CPU.
I can understand that a light signal from the Earth to the Moon has a zero spacetime interval, but why do you say that a signal from the keyboard to the CPU has a zero spacetime interval?

JDoolin said:
It is possible using the Lorentz Transformations to find a frame where the distance between the earth-moon events is negligible, while the distance between the keyboard-CPU events is as far as you like.
You mean the proper time interval right?
The distance between two events is the same for all frames right?
 
  • #260
Passionflower said:
I can understand that a light signal from the Earth to the Moon has a zero spacetime interval, but why do you say that a signal from the keyboard to the CPU has a zero spacetime interval?


Sorry. Use instead the signal from your remote control to your TV set.


You mean the proper time interval right?
The distance between two events is the same for all frames right?

The proper time interval (of time-like separated events) is the same as the space-time interval.

If you have two events that are connected by a photon, like the signal from Earth to moon, and the signal from my remote to my TV, those both have zero proper time between them.

But you can easily see that the distance to the TV and the distance to the moon are not zero, nor are they the same. But if they are in opposite directions, it is fairly trivial to come up with a Lorentz Transformation that would make the distances between the events the same.

Picture the Earth moving to the right very fast, and the moon following along behind it. The signal from the Earth to the moon barely has to move, because the moon catches up. On the other hand, the TV is moving really fast to the right, and it takes some time for the light from the remote to reach it.

Here, I attached a couple of diagrams of the same events in two different reference frames. I posted the source-code in my blog.
 

Attachments

  • FlyingRight.jpg
  • FlyingLeft.jpg
  • #261
JDoolin said:
The proper time interval (of time-like separated events) is the same as the space-time interval.
Well yes of course but that is not what I was referring to.

JDoolin said:
If you have two events that are connected by a photon, like the signal from Earth to moon, and the signal from my remote to my TV, those both have zero proper time between them.

But you can easily see that the distance to the TV and the distance to the moon are not zero, nor are they the same. But if they are in opposite directions, it is fairly trivial to come up with a Lorentz Transformation that would make the distances between the events the same.
What I was referring to is your usage of "the distance between two events". When I think of the distance between two events I think of the spacetime distance, which of course is identical for all frames of reference. But it seems that you refer to spatial or temporal distance between two events from a frame of reference right? I think it is just a terminology issue.

By the way I really enjoy your pictures and graphs, please keep doing that :)
 
  • #262
JDoolin said:
Yes. When the events are close enough together in time and space, transferring to a high rapidity frame can be very useful.
Yes, because the laws of physics are the same we can pick any frame that is useful or convenient. It seems like you now understand that, so I believe you are in agreement now.
 
  • #263
JDoolin said:
Each of these are analysis "in a frame of reference."
Saying that an analysis is "in a frame of reference" is entirely different from saying that an object is "in a frame of reference". The former is just a colloquial way of saying that a particular frame was used during the analysis, the latter is nonsense. As long as you understand that there is nothing which requires us to use any specific frame then the rest of what we have been discussing follows.
 
  • #264
DaleSpam said:
Saying that an analysis is "in a frame of reference" is entirely different from saying that an object is "in a frame of reference". The former is just a colloquial way of saying that a particular frame was used during the analysis, the latter is nonsense. As long as you understand that there is nothing which requires us to use any specific frame then the rest of what we have been discussing follows.

There's still a rather vital nitpick remaining. If the "object" happens to be an "observer" such as Barbara, all of the data she collects to use for her analysis will come from whatever frame she is in.

Unless she's been dropping off probes along the way, there's no way for her to collect the data from any other frame except for the frame in which she is momentarily at rest.

You can analyze the data in whatever reference frame you like, but the nature of cameras and lab equipment requires you to collect the data in whatever reference frames are comoving with each piece of equipment.
 
  • #265
Passionflower said:
Well yes of course but that is not what I was referring to.


What I was referring to is your usage of "the distance between two events". When I think of the distance between two events I think of the spacetime distance, which of course is identical for all frames of reference. But it seems that you refer to spatial or temporal distance between two events from a frame of reference right? I think it is just a terminology issue.

Might be a terminology issue. I never use "distance" to mean space-time interval.

Except for the mathematical fact that it is preserved under Lorentz Transformation, just as distance is preserved under rotation, the two bear little in the way of conceptual similarity.

Coordinates move along circular paths in Euclidean space when the observer rotates. Events move along hyperbolic paths in spacetime when an observer accelerates.

Increasing a distance is like moving outward along concentric circles. But there's not really any analog for concentric hyperbolic arcs.

Also, after a rotation, the observer has the choice to stay put and rotate back the same way. But you can't stay put in spacetime. You have to move forward in time.

I'm probably just telling you what you already know. But I've always found the use of the word "distance" to describe s^2=c^2 t^2 - x^2 very confusing.
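The analogy can be made concrete: a rotation preserves t² + x² (points move along circles), while a boost preserves t² − x² (events move along hyperbolas). A brief sketch (my own):

```python
import math

def rotate(t, x, theta):
    """Euclidean rotation by angle theta."""
    return (math.cos(theta) * t - math.sin(theta) * x,
            math.sin(theta) * t + math.cos(theta) * x)

def boost(t, x, phi):
    """Lorentz boost by rapidity phi (units c = 1)."""
    return (math.cosh(phi) * t - math.sinh(phi) * x,
            math.cosh(phi) * x - math.sinh(phi) * t)

t, x = 5.0, 3.0
tr, xr = rotate(t, x, 0.7)
tb, xb = boost(t, x, 0.7)

print(tr * tr + xr * xr)  # radius^2 preserved by rotation: 34
print(tb * tb - xb * xb)  # interval preserved by boost: 16
```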

By the way I really enjoy your pictures and graphs, please keep doing that :)

Time permitting, whenever I think of something good, I will. As long as one person appreciates it, it's worth it. Thank you.
 
  • #266
JDoolin said:
There's still a rather vital nitpick remaining. If the "object" happens to be an "observer" such as Barbara, all of the data she collects to use for her analysis will come from whatever frame she is in.

Unless she's been dropping off probes along the way, there's no way for her to collect the data from any other frame except for the frame in which she is momentarily at rest.

You can analyze the data in whatever reference frame you like, but the nature of cameras and lab equipment requires you to collect the data in whatever reference frames are comoving with each piece of equipment.
This is simply not true. All frames will agree on the predicted result of any measurement that Barbara makes.

You still seem to not understand the first postulate of relativity.
 
  • #267
DaleSpam said:
This is simply not true. All frames will agree on the predicted result of any measurement that Barbara makes.

You still seem to not understand the first postulate of relativity.

The first postulate of relativity is, according to Wikipedia:
The Principle of Relativity – The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.

Do you think the laws being the same mean the measurements are the same?

What about momentum? In Barbara's frame, Barbara has no momentum. In other frames Barbara has momentum.

What about velocity? In Barbara's frame, Barbara has no velocity. In other frames Barbara has velocity.

Also, distances between events change, times between events change. The measurements are all different.

Only the laws are all the same. But when you collect the data, it's completely different data.
 
  • #268
JDoolin said:
Do you think the laws being the same mean the measurements are the same?
Yes. The laws are what determine the measurements. The first implies the second.

JDoolin said:
What about momentum?
Momentum is not a measurement. Describe your physical device and procedure for measuring the momentum of some given object. That is what a measurement is, and all frames will agree on the result measured since all frames agree on the laws which govern the measuring device.

Not all frames will agree that the measured result actually represents the momentum of the object, but they will agree on the result of the measurement. That is required by the first postulate.
 
  • #269
DaleSpam said:
Yes. The laws are what determine the measurements. The first implies the second.

Momentum is not a measurement. Describe your physical device and procedure for measuring the momentum of some given object. That is what a measurement is, and all frames will agree on the result measured since all frames agree on the laws which govern the measuring device.

Not all frames will agree that the measured result actually represents the momentum of the object, but they will agree on the result of the measurement. That is required by the first postulate.

Okay, but you agree that the distances and times and velocities are different, right? They all disagree on results of measurement.
 
  • #270
JDoolin said:
Okay, but you agree that the distances and times and velocities are different, right? They all disagree on results of measurement.
I only singled out momentum because it was the first one you mentioned, not because it was conceptually different from the others. The same thing I said about momentum applies to distances, times, and velocities. Describe the experimental set up of your measurement and all frames will agree on the result.
 
  • #271
DaleSpam said:
I only singled out momentum because it was the first one you mentioned, not because it was conceptually different from the others. The same thing I said about momentum applies to distances, times, and velocities. Describe the experimental set up of your measurement and all frames will agree on the result.

Okay, the experimental setup for measurement in Alex's frame is that Alex looks, or takes a picture, or videotapes the events. The experimental setup for measurement in Barbara's frames is that Barbara looks, or takes a picture, or videotapes the events.

The end result is that Alex and Barbara disagree on times, distances, and velocities.
 
  • #272
JDoolin said:
Okay, the experimental setup for measurement in Alex's frame is that Alex looks, or takes a picture, or videotapes the events. The experimental setup for measurement in Barbara's frames is that Barbara looks, or takes a picture, or videotapes the events.
That is two different measurements, not one measurement in two different frames. The first postulate does not say that different measurements will produce the same result, only that the same measurement will produce the same result in different frames.

So, if the measurement is that Alex uses a pinhole camera to take a digital picture of some bright object and then counts the number of pixels illuminated, then both Alex's frame and Barbara's frame will agree on the number of pixels illuminated.
 
  • #273
DaleSpam said:
That is two different measurements, not one measurement in two different frames. The first postulate does not say that different measurements will produce the same result, only that the same measurement will produce the same result in different frames.

So, if the measurement is that Alex uses a pinhole camera to take a digital picture of some bright object and then counts the number of pixels illuminated, then both Alex's frame and Barbara's frame will agree on the number of pixels illuminated.

Of course it is two different measurements!

It has never been my intention to claim that "the same measurement" would result in different results. The different results come from the fact that the different observers are forced to make different measurements, from their own positions and from their own reference frames.

My other point was that Dolby and Gull's method does little or nothing to actually represent what Barbara sees with her own eyes and her own instruments.

If Alex shows Barbara what she filmed with her pinhole camera, Barbara will of course agree and say "Yes, Alex, I'm sure that is what you saw."

But if Alex tries to use Dolby and Gull's Radar Time and says, "Okay, Barbara, this is what you saw, right?"

http://www.wiu.edu/users/jdd109/stuff/img/dolbygull.jpg

Barbara will say to Alex:

"No, silly, that is not what I saw at all. That's just what you saw with some arbitrary lines of simultaneity through it. What I saw was for half of the trip, your image was contracted, moving away from me at less than half the speed of light and you were moving in slow-motion, then when I turned around your image shot away from me, then as I was coming back, you were moving in fast motion, and the image was elongated, and coming toward me at faster than the speed of light."
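Barbara's description matches the standard light-delay calculation; a quick sketch (my own, units c = 1) of the apparent image speeds and the relativistic Doppler factor, using v = 0.8 as a hypothetical cruise speed:

```python
import math

def apparent_speeds(v):
    """Apparent (image) speeds for an object moving radially at speed v
    (units c = 1), once light-travel delay is included."""
    receding = v / (1 + v)     # always less than 1/2
    approaching = v / (1 - v)  # can exceed the speed of light
    return receding, approaching

def doppler_factor(v):
    """Relativistic Doppler factor k = sqrt((1+v)/(1-v)) for recession at v."""
    return math.sqrt((1 + v) / (1 - v))

rec, app = apparent_speeds(0.8)
print(rec, app)            # image recedes at 4/9 c, approaches at 4 c
print(doppler_factor(0.8)) # 3.0: outbound images run at 1/3 speed, inbound at 3x
```

This is why Barbara sees Alex receding at less than half the speed of light in slow motion, and approaching at faster than the speed of light in fast motion.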
 
  • #274
JDoolin said:
Of course it is two different measurements!

It has never been my intention to claim that "the same measurement" would result in different results. The different results come from the fact that the different observers are forced to make different measurements, from their own positions and from their own reference frames.
Do you agree with the following: There is nothing whatsoever that forces you to use a reference frame where a specific measuring device is at rest. All reference frames will agree on the number that device produces for a specific measurement regardless of the device's velocity in that frame.

If you agree, then I do not understand in what sense you mean that an observer is forced to make a measurement from their reference frame.

JDoolin said:
My other point was that Dolby and Gull's method does little or nothing to actually represent what Barbara sees with her own eyes and her own instruments.
So what? Alex's inertial frame doesn't represent what Alex sees with his own eyes and his own instruments either. That is not what coordinate systems are for.

However, you can perform the analysis in any reference frame to determine what Alex or Barbara saw with their own eyes and their own instruments. You are guaranteed to get the same results.
 
  • #275
DaleSpam said:
Do you agree with the following: There is nothing whatsoever that forces you to use a reference frame where a specific measuring device is at rest. All reference frames will agree on the number that device produces for a specific measurement regardless of the device's velocity in that frame.

If you agree, then I do not understand in what sense you mean that an observer is forced to make a measurement from their reference frame.

So what? Alex's inertial frame doesn't represent what Alex sees with his own eyes and his own instruments either. That is not what coordinate systems are for.

However, you can perform the analysis in any reference frame to determine what Alex or Barbara saw with their own eyes and their own instruments. You are guaranteed to get the same results.

I am not sure what you are still bothered about. Of course an instrument can only gather data in the reference frame that it is in. Everyone is going to agree on whatever data the equipment gathered.

You can map from one reference frame to another, but the distances between events, times between events, and velocities of objects will not agree in the different reference frames.

Is there something you still disagree with?
 
  • #276
JDoolin said:
Is there something you still disagree with?
Yes. You are being self-contradictory here:

JDoolin said:
an instrument can only gather data in the reference frame that it is in.
and
JDoolin said:
Everyone is going to agree on whatever data the equipment gathered.
The first statement violates the first postulate of relativity and contradicts the second statement.

If you do not see these two statements as self-contradictory then you really need to explain what you mean for an object to be "in" a reference frame. Despite repeated queries from multiple people you have still not given a clear definition of what you mean by that, and in post 239 you explicitly disagreed with the typical usage of the term.
 
  • #277
Suppose I have a coil, an electron moving at .8c to the right, and another electron moving at .99c to the left, aimed to come near the first electron, both being well within the magnetic field of the coil. I have a cloud chamber to capture the electron paths. What frame of reference is anything 'in'?! No matter what frame I choose, to determine what will happen in the cloud chamber I have to deal with fast-moving e/m fields. I can't separate anything into independent interactions: from either particle's 'point of view' I have a fast-moving coil and a fast-moving 'current'. From the cloud chamber's frame I have two fast-moving currents interacting with each other and with the coil's field. This is a conceptually straightforward problem that can be analyzed in any frame; none will be much simpler than any other. How can you talk about anything being 'forced' to be analyzed in 'their frame'?
 
Last edited:
  • #278
PAllen said:
Suppose I have a coil, an electron moving at .8c to the right, and another electron moving at .99c to the left, aimed to pass near the first electron, both well within the magnetic field of the coil. I have a cloud chamber to capture the electron paths. What frame of reference is anything 'in'?! No matter what frame I choose, to determine what will happen in the cloud chamber, I have to deal with fast-moving e/m fields. I can't separate anything into independent interactions: from either particle's 'point of view' I have a fast-moving coil and a fast-moving 'current'. From the cloud chamber's frame I have two fast-moving currents interacting with each other and with the coil's field. This is a conceptually straightforward problem that can be analyzed in any frame; none will be much simpler than any other. How can you talk about anything being 'forced' to be analyzed in 'their frame'?

There is an implicit reference frame as soon as you say that there is an electron moving .8c to the right.

You ask, ".8c relative to what?" The answer to that question tells you whose or what's reference frame you're in.

In most cases, it is the frame of whatever apparatus you are using to measure the location of the electron. You are not forced to analyze the data from any particular frame, but you are forced to collect the data from a particular frame.
 
Last edited:
  • #279
DaleSpam said:
Yes. You are being self-contradictory here:

The first statement violates the first postulate of relativity and contradicts the second statement.

If you do not see these two statements as self-contradictory then you really need to explain what you mean for an object to be "in" a reference frame. Despite repeated queries from multiple people you have still not given a clear definition of what you mean by that, and in post 239 you explicitly disagreed with the typical usage of the term.

My use of the word reference frame is quite typical:

http://en.wikipedia.org/wiki/Frame_of_reference

"A frame of reference in physics, may refer to a coordinate system or set of axes within which to measure the position, orientation, and other properties of objects in it, or it may refer to an observational reference frame tied to the state of motion of an observer. It may also refer to both an observational reference frame and an attached coordinate system, as a unit."

Example:
If I am driving down the highway at 55 miles per hour, and a truck is traveling at 55 miles per hour, how fast is the truck going in my reference frame? 110 miles per hour. How fast am I going in the truck's reference frame? 110 miles per hour. How fast are we going in the Earth's reference frame? 55 miles per hour.
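The figures in this example use ordinary Galilean velocity addition (assuming the truck approaches head-on). A quick numerical check, with these same illustrative numbers, shows why that is harmless at highway speeds: the relativistic velocity-addition formula w = (u + v)/(1 + uv/c²) differs from 110 mph only at the thirteenth decimal place.

```python
import math

# Closing speed of two vehicles approaching head-on, each at 55 mph.
# Compare Galilean addition with the relativistic velocity-addition
# formula; at highway speeds the relativistic correction is negligible.

C = 670_616_629.0  # speed of light in mph

def galilean(u, v):
    return u + v

def relativistic(u, v, c=C):
    return (u + v) / (1 + u * v / c**2)

u = v = 55.0
print(galilean(u, v))      # 110.0
print(relativistic(u, v))  # a hair below 110.0, indistinguishable in practice
```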
 
  • #280
JDoolin said:
There is an implicit reference frame as soon as you say that there is an electron moving .8c to the right.

You ask, ".8c relative to what?" The answer to that question tells you whose or what's reference frame you're in.

In most cases, it is the frame of whatever apparatus you are using to measure the location of the electron. You are not forced to analyze the data from any particular frame, but you are forced to collect the data from a particular frame.

Yes, I was describing things from the point of view of the coil. Let me try this one more way:

What a detector/observer measures/sees is determined by its world line. This is an invariant, physical fact, and can even be dealt with without coordinates. The world line can be described and analyzed from any number of frames, each with any number of coordinate labeling choices (e.g. polar vs. rectilinear coordinates). Everything except the world line (and the intrinsic geometry and surrounding fields, etc.) is convention, not physics, and affects only the ease of calculation; what is easiest depends on what calculation you want to do.

The cloud chamber has a world line - that is intrinsic, and it determines what the chamber detects. The cloud chamber has a frame of reference only by convention. Saying the cloud chamber has a frame of reference is shorthand for: it is convenient for some purpose to label events by building a coordinate patch whose origin is some position along a world line, and, usually, whose time coordinate is proper time along the world line from a chosen origin.
 
  • #281
JDoolin said:
My use of the word reference frame is quite typical
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

JDoolin said:
Example:
If I am driving down the highway at 55 miles per hour, and a truck is traveling at 55 miles per hour, how fast is the truck going in my reference frame? 110 miles per hour. How fast am I going in the truck's reference frame? 110 miles per hour. How fast are we going in the Earth's reference frame? 55 miles per hour.
This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.
 
  • #282
DaleSpam said:
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.


I have not been as clear as I thought. For what I am referring to, it is not sufficient just to say "the reference frame I am in," because, indeed I am in every reference frame. Mea Culpa.

You may assume that every time I have said "the reference frame someone is in" I actually meant "the reference frame in which someone is momentarily at rest."

If that helps, I still disagree on the issue of whether an observer is "forced" to use the reference frame where it is momentarily at rest.

Let me try to make my main point in as simple a way as I can. I have asked several people the following question: Imagine you are in a truck, driving in a soft snowfall. To you, it seems that the snow is moving almost horizontally, toward you. Which way is the snow "really" moving?

Everyone I have asked this question answers, "straight down." Of course, this is a good Aristotelian answer, but relativistically speaking there is no correct answer, because there is no ether by which one could determine how the snow is "really" moving.

On the other hand, if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest. In the film, it will appear that the snow is traveling almost horizontally, straight toward the camera.

Even if you stop the truck, or throw the camera out the window, the camera still films everything in such a way that the camera is always momentarily at rest in its own reference frame. It is effectively forced to film things this way; not as a matter of convention, but as a matter of physical reality.

It is also the same with Barbara, who on her trip accelerates and turns around--what she sees is not a matter of convention, but a matter of physical fact.

Now, there is also the matter of stellar aberration. In general, the common view is that the actual positions of stars are stationary, but it is only some optical illusion which causes them to move up to 20 arcseconds in the sky over the course of the year. The nature of this question is similar to the snowflake question. Is the light coming from the direction that the light appears to be coming from? If you point toward the image of the star, are you pointing toward the star? Are you pointing toward the event which created the light you are now seeing?

I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
 
  • #283
JDoolin said:
In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me:
JDoolin said:
if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest.
The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.
 
  • #284
DaleSpam said:
If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame.
I am interested, do you have some good references (e.g. books or significant papers) to light cone coordinates DaleSpam?
 
  • #285
I would probably start with this one:
http://ysfine.com/articles/dircone.pdf
 
Last edited by a moderator:
  • #286
DaleSpam said:
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me:The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.

Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?

JDoolin said:
Barbara will say to Alex:

"... What I saw was for half of the trip, your image was contracted, moving away from me at less than half the speed of light and you were moving in slow-motion, then when I turned around your image shot away from me, then as I was coming back, you were moving in fast motion, and the image was elongated, and coming toward me at faster than the speed of light."
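The lurching images Barbara describes can be checked numerically with the relativistic Doppler factor and the light-travel-time "apparent velocity" formulas β/(1 ± β). This is an illustrative sketch, assuming a trip speed of β = 0.8 (the value is hypothetical; the qualitative behavior holds for any β < 1):

```python
import math

def doppler_factor(beta):
    """Observed clock rate of a source receding at speed beta (units of c)."""
    return math.sqrt((1 - beta) / (1 + beta))

def apparent_recession(beta):
    """Visual rate at which a receding source's image recedes, in units of c."""
    return beta / (1 + beta)

def apparent_approach(beta):
    """Visual closing speed of an approaching source's image, in units of c."""
    return beta / (1 - beta)

beta = 0.8  # hypothetical trip speed
print(doppler_factor(beta))      # 1/3: Alex's image runs in slow motion outbound
print(1 / doppler_factor(beta))  # 3: fast motion on the return leg
print(apparent_recession(beta))  # 0.444...c: always below c/2, as described
print(apparent_approach(beta))   # 4.0c: image approaches faster than light
```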
 
  • #287
JDoolin said:
Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.
 
  • #288
DaleSpam said:
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.

Let me first make clear that I do like the article about light cone coordinates, although I think I jumped the gun in saying that I was using the light-cone coordinates. (I was not.) What I was doing was considering the locus of events in the observer's past light cone. Unfortunately, I went by the name of the article and the context of what I thought we were talking about, and didn't spend the time to grok what the article was actually about.

This "Dirac's Light Cone Coordinates" appears to be a pretty good pedagogical method, as it turns the Lorentz Transform into a scaling and inverse scaling on the u and v axes, simply by rotating 45 degrees, so the x=ct line and x=-ct lines are vertical and horizontal:

This is another way of writing equation (2) from the article you referenced.

\left( \begin{array}{c} u \\ v \end{array} \right) = \left( \begin{array}{cc} \cos (45) & \sin (45) \\ -\sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{c} t \\ z \end{array} \right)
I used almost identical reasoning when I derived this (in thread: https://www.physicsforums.com/showthread.php?t=424618):


\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta\gamma \\ -\beta\gamma & \gamma \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \begin{pmatrix} \cosh(\theta) & -\sinh(\theta) \\ -\sinh(\theta) & \cosh(\theta) \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \begin{pmatrix} \frac{1+s}{2} & \frac{1-s}{2} \\ \frac{1-s}{2} & \frac{1+s}{2} \end{pmatrix} \begin{pmatrix} \frac{s^{-1}+1}{2} & \frac{s^{-1}-1}{2} \\ \frac{s^{-1}-1}{2} & \frac{s^{-1}+1}{2} \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}

It's not immediately clear that the last two matrices represent scaling on the x = ct axis and the x = -ct axis. The article (http://ysfine.com/articles/dircone.pdf) has made the transformation much more elegant (though I may have a sign or two wrong somewhere):

\left( \begin{array}{c} ct' \\ z' \end{array} \right) = \left( \begin{array}{cc} \cos (45) & -\sin (45) \\ \sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{cc} e^{\eta} & 0 \\ 0 & e^{-\eta} \end{array} \right) \left( \begin{array}{cc} \cos (45) & \sin (45) \\ -\sin (45) & \cos (45) \end{array} \right) \left( \begin{array}{c} ct \\ z \end{array} \right)
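The decomposition being discussed (rotate into light-cone axes, squeeze by e^{±η}, rotate back) can be verified numerically against the standard boost matrix. A minimal sketch; note that in the sign convention used here u = (ct + z)/√2 contracts by e^{-η} and v = (z - ct)/√2 stretches by e^{+η}, which may differ from the article's:

```python
import math

def boost(eta):
    """Standard Lorentz boost in (ct, z) with rapidity eta."""
    return [[math.cosh(eta), -math.sinh(eta)],
            [-math.sinh(eta), math.cosh(eta)]]

def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c = s = math.sqrt(0.5)          # cos(45 deg) = sin(45 deg)
R = [[c, s], [-s, c]]           # rotate (ct, z) onto light-cone axes (u, v)
R_inv = [[c, -s], [s, c]]       # rotate back

eta = 0.5                       # an arbitrary test rapidity
S = [[math.exp(-eta), 0.0],     # u-axis contracts by e^(-eta)
     [0.0, math.exp(eta)]]      # v-axis stretches by e^(+eta)

composed = matmul(R_inv, matmul(S, R))
max_err = max(abs(composed[i][j] - boost(eta)[i][j])
              for i in range(2) for j in range(2))
print(max_err)  # on the order of machine epsilon
```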

I'm not sure how Dolby and Gull's Radar time relates to Dirac's light-cone coordinates. It appears to me that Dirac's light-cone coordinates are simply an aid to performing the Lorentz Transformations. These light-cone coordinates of Dirac's don't claim to show another frame; they simply rotate the Minkowski diagram 45 degrees.

My point is really, whatever coordinate system you use, you should be imagining Barbara, and what she is seeing, and if your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
 
Last edited by a moderator:
  • #289
JDoolin said:
I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.

The statement has a verb tense problem; should read:

The phenomena they are seeing are not optical illusions, but are true representations of what was happening in the reference frames where they are momentarily at rest.

The past-light cone of an event is the locus of events which are currently being seen by the camera. It is not what is happening, but what was happening.
 
  • #290
JDoolin said:
My point is really, whatever coordinate system you use, you should be imagining Barbara, and what she is seeing, and if your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?
 
  • #291
DaleSpam said:
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?

I remain agnostic about the usefulness of accelerated reference frames. I think Rindler coordinates may have some potential. But "radar time" seems rather too arbitrary to me. I found an article by Antony Eagle that has some of my same criticisms:

http://arxiv.org/abs/physics/0411008

I also found the "Debs and Redhead" article referenced:

http://chaos.swarthmore.edu/courses/PDG/AJP000384.pdf

It concludes: "Perhaps the method discussed in this paper, the conventionality of simultaneity applied to depicting the relative progress of two travelers in Minkowski space-time, will settle the issue of the twin paradox, one which has been almost continuously discussed since Langevin's 1911 paper."

If I correctly understand their meaning, the "relative progress" of a traveler in Minkowski spacetime is simulated here:

http://www.wiu.edu/users/jdd109/stuff/relativity/LT.html
 
Last edited by a moderator:
  • #292
JDoolin said:
But "radar time" seems rather too arbitrary to me.
Radar time for an inertial observer is the Einstein synchronization convention. It is arbitrary, but certainly no more nor less arbitrary than the usual convention. And even more arbitrary conventions will work.

The Debs and Redhead article supports my position that the choice of simultaneity is a matter of convenience (they use the word convention).

The Eagle article explicitly admits in the third paragraph that the Dolby and Gull article is mathematically correct. Eagle's point is not that Dolby and Gull are wrong, just that their approach is not necessary. I fully agree, you can use any coordinate system you choose.
 
Last edited:
  • #293
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition. However, one issue I have with it in cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang anyone?)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity, and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. Then a receiving observer can identify the original frequency by the line pattern, compensate for red/blue shift, getting the intensity that would be received from a hypothetically non-shifted source (whether such could actually exist in the cosmology is not relevant to the operational definition). Then comparing this normalized received intensity to the assumed original intensity, applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.

As a simpler stand-in for this model, I have thought about the following, which might be equivalent. Imagine two light rays emitted from a distant event at an infinitesimal angle to each other. Taking the limit, as the angle goes to zero, of their separation in the receiver's frame over the angle in the sender's frame would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.
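The flux-based convention described above can be sketched numerically. Assuming a source of known intrinsic luminosity and a simple Euclidean inverse-square attenuation law (the numbers below, the solar luminosity and the flux at Earth, are purely illustrative), the recovered distance divided by c gives the time offset this convention would call simultaneous. Without the red/blue-shift compensation step, this quantity is essentially the astronomers' luminosity distance:

```python
import math

# Invert the inverse-square attenuation law F = L / (4*pi*d^2) to get
# a conventional distance from a measured flux and an assumed intrinsic
# luminosity. Euclidean attenuation is an illustrative assumption.

def distance_from_flux(luminosity_W, flux_W_per_m2):
    return math.sqrt(luminosity_W / (4 * math.pi * flux_W_per_m2))

L_sun = 3.828e26     # W, solar luminosity (known-source assumption)
F_earth = 1361.0     # W/m^2, measured solar constant at Earth
c = 299_792_458.0    # m/s

d = distance_from_flux(L_sun, F_earth)
print(d)        # ~1.5e11 m, about one astronomical unit
print(d / c)    # ~499 s: the simultaneity offset under this convention
```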

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
 
  • #294
I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
If you skip the "compensate for red/blue shift" part, you get the definition of luminosity distance (http://en.wikipedia.org/wiki/Luminosity_distance).
 
Last edited by a moderator:
  • #295
PAllen said:
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition.

Can you give me more detail on just what is involved in the Einstein Convention?

However, one issue I have with it in cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang anyone?)

In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity, and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. Then a receiving observer can identify the original frequency by the line pattern, compensate for red/blue shift, getting the intensity that would be received from a hypothetically non-shifted source (whether such could actually exist in the cosmology is not relevant to the operational definition). Then comparing this normalized received intensity to the assumed original intensity, applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.

As a simpler stand in for this model, I have thought about the following, which might be equivalent. Imagine a two light rays emitted from a distant event at infinitesimal angle to each other. Taking the limit, as angle goes to zero, of their separation in the receiver's frame over the angle in the sender's would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?

I think that apparent distance can be estimated by apparent size related to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point.) An alternative would be to use the positions of two ends of the object; and what angle they would be seen in the position of a point-observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
 
  • #296
Ich said:
If you skip the "compensate for red/blue shift" part, you get the definition of luminosity distance (http://en.wikipedia.org/wiki/Luminosity_distance).

I thought this must be a standard astronomy technique. Actually, the Wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and it seems these are very intertwined). So that is the definition I am looking for. So then, I am looking for what sort of coordinate system that imposes on, e.g., a Friedmann model compared to other coordinate systems.
 
Last edited by a moderator:
  • #297
JDoolin said:
Can you give me more detail on just what is involved in the Einstein Convention?
It's the same as the radar time you've been discussing with DaleSpam. You imagine a signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line to the past light cone of the distant event.
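For a concrete picture of this convention, here is a minimal sketch with made-up send/receive times: the event is assigned the midpoint of the send and receive times, and half the round-trip light time as its distance.

```python
# Radar (Einstein) coordinates for an inertial observer: bounce a light
# signal off a distant event; the event's time coordinate is the midpoint
# of the send/receive times, its distance is half the round-trip time
# multiplied by c. The send/receive times below are hypothetical.

c = 299_792_458.0  # m/s

def radar_coordinates(t_send, t_receive):
    t_event = (t_send + t_receive) / 2      # simultaneity assignment
    d_event = c * (t_receive - t_send) / 2  # radar distance
    return t_event, d_event

t_event, d_event = radar_coordinates(1.0, 9.0)  # send at t=1 s, echo at t=9 s
print(t_event)  # 5.0 s
print(d_event)  # 4 light-seconds, about 1.2e9 m
```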
JDoolin said:
In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)
I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?
JDoolin said:
I think that apparent distance can be estimated by apparent size related to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point.) An alternative would be to use the positions of two ends of the object; and what angle they would be seen in the position of a point-observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
A measure relation of apparent angular size in my frame with size of object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in cosmology model to actually possible astronomic measurements.
 
  • #298
Actually, the wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and, it seems these are very intertwined).
Yeah, this article claims a lot of strange things. Anyway, from the formula you can see that no such corrections are applied. They want to keep the distance as close to the measured data as possible, at the expense of deliberately deviating from the most reasonable definition when there is redshift.
So then, I am looking for what sort of coordinate system that imposes on, e.g. a Friedman model compared to other coordinate systems.
If you correct time for light travel time and distance for redshift? Minkowskian in the vicinity, and then something like reduced-circumference coordinates, with more or less static slices. Like the Schwarzschild r-coordinate, I guess.
 
  • #299
PAllen said:
It's the same as the radar time you've been discussing with DaleSpam. You imagine a signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line to the past light cone of the distant event.

I have to say I doubt the wisdom of that technique. It works fine in an inertial frame, but it shouldn't be used while you are accelerating. By the time the signal comes back to you, you will not have the same lines of simultaneity as when you sent the signal.

Say I was trying to determine what the y-coordinate of an object was on a graph as I was rotating. I figure out what the y-coordinate is, and a moment later, after I've rotated 30 degrees, I find what the y-coordinate is again. Would it be valid in ANY way for me to just take the average of those two y-coordinates and claim it as the "radar y-coordinate"?

Edit: Also unless you are accelerating dead-on straight toward your target, the signal that you send toward it is more-than-likely going to miss (unless you calculate its trajectory in your momentarily comoving frame), and certainly won't reflect straight back at you after you accelerate!

I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?

Not sure exactly what you're asking about a strongly curved model, but to get inflation, you just apply a Lorentz Transformation around some event later than the Big Bang event in Minkowski space. The Big Bang gets moved further into the past, and voila... inflation.

A measure relation of apparent angular size in my frame with size of object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in cosmology model to actually possible astronomic measurements.

Hmmm, there is "your distance from them," which is something I think is philosophically anti-relativistic, and there is "their distance from you," which is philosophically in tune with relativity. The difference is that relativity is based on the view of the observer. (At least in Special Relativity it is; that philosophy may have changed in General Relativity.) Can you clarify which one you are interested in?
 
Last edited:
  • #300
PAllen said:
A measure relation of apparent angular size in my frame with size of object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in cosmology model to actually possible astronomic measurements.

I missed the comparison of the "distant frame" and "my frame." I gather you are assuming there is some different spatial scale to the distant objects than the nearby objects. My assumption would be that there is no such spatial scale difference.
 