Here's my question first: If radio waves are transmitted from earth, are they actually redshifted, or is it that they appear redshifted when measured by an identical clock in space?

Here's where I'm coming from. I'm trying to reconcile things I know. I think I know that (1) clocks further from earth will run faster, as proven experimentally, (2) the GPS clocks in the satellites were pre-adjusted (slowed) so that the frequency at earth will appear correct.

So I could explain these if the clocks themselves always run faster at higher potential, while the radio waves are unchanged as they move through different potentials. On the other hand, if I have both effects, the GPS scheme doesn't make sense: the clocks on the spacecraft will run faster than their setting on the ground, but if I also have the waves blueshift as they come down, I'd be "double booking" the effect. I'm fairly certain the corrections to GPS were only one application of the redshift equation, not the equation squared.

Does my question make sense?
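For what it's worth, the single-application claim in (2) can be sanity-checked with rough numbers. This is only a back-of-the-envelope sketch; the orbital radius and the weak-field, circular-orbit formulas are assumptions for illustration, not anything established in the thread:

```python
import math

# Back-of-the-envelope check that the GPS pre-adjustment is a single
# application of the shift. All values are rough, assumed numbers.
GM = 3.986004418e14   # Earth's GM, m^3/s^2
c  = 2.99792458e8     # m/s
R_earth = 6.371e6     # m
r_gps   = 2.6561e7    # m, ~20,200 km altitude (assumed)

# Gravitational part: the satellite clock runs fast relative to ground.
grav = GM / c**2 * (1/R_earth - 1/r_gps)

# Special-relativistic part: orbital speed slows the satellite clock.
v2 = GM / r_gps       # v^2 for a circular orbit
sr = v2 / (2 * c**2)

net = grav - sr
# Fractional offsets converted to microseconds per day:
print(grav * 86400e6, sr * 86400e6, net * 86400e6)
```

The gravitational term comes out around +46 microseconds/day and the velocity term around -7, consistent with the shift equation being applied once, not squared.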

I think you are not recognizing the distinction between the two cases. The clock's periodicity determines the interval between timed signals, but it is distinct from the mechanism that transmits those signals. The frequency of any individual signal depends purely on the resonance frequencies of the emitting electron. Here the clock and the transmitter are the same mechanism, so there is a single shift, dependent only on the relative frequencies of the emitting and receiving electrons.

There is no difference between "actually" redshifted and "appearing" redshifted. If something was emitting at 500 nm and is observed at 600 nm, is this "actual" or "apparent" red shift?

The fact that clocks run faster at higher potentials is the same as the fact that light is redshifted as it rises out of a potential well. If you like, you can think of gravitational time dilation as causing the redshift.

I knew my confusion was hard to explain, and I obviously did a bad job. Suppose I have two identical, perfect clocks and I take one into space. The one in space uses its clock as a reference and radiates RF down to earth. The clock in space is fast by something of the form √(1-2GM/(Rc^2)), I think. What does the guy on the ground measure? Does he measure that offset? If so, there is no additional blueshift during propagation. If there is a blueshift in propagation, wouldn't I have to apply the factor again (once for the difference in clock rates, once for the blueshift in propagation)?

Thanks for the help.
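The question above can be made concrete with a few lines of arithmetic. A minimal sketch, assuming a GPS-like radius and the static-clock rate √(1-2GM/(rc²)); the specific numbers are illustrative only:

```python
import math

# Rough, assumed values for illustration.
G = 6.674e-11          # gravitational constant
M = 5.972e24           # Earth mass, kg
c = 2.99792458e8       # m/s
r_ground = 6.371e6     # m
r_space  = 2.6561e7    # m, GPS-like orbital radius (assumed)

def clock_rate(r):
    """Rate of a static clock at radius r relative to coordinate time."""
    return math.sqrt(1 - 2*G*M/(r*c**2))

# A signal emitted at proper frequency f0 in space is received on the
# ground at f0 * clock_rate(r_space) / clock_rate(r_ground): each clock's
# rate enters once -- there is no second, separate propagation factor.
f0 = 1e6  # Hz, as measured by the space clock
f_received = f0 * clock_rate(r_space) / clock_rate(r_ground)
print(f_received - f0)  # positive: a single blueshift
```

The ratio of the two rate factors appears once; squaring it would double-count the effect, which matches the "not the equation squared" point in the original question.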


Clocks do not radiate RF EM. So you really mean you have two perfect radios.

But the answer is there will be only a single shift varying by the difference in potential.

Some people like to interpret this as occurring only at the source and receiver, due to the difference in the dilation factor with potential. No change in the signal during transit.

Other people like to attribute this to change of the signal during propagation.

But all agree there is only a single shift that agrees with the difference in potential.

I think the former, but that's just MHO.

Clocks do not radiate RF EM. So you really mean you have two perfect radios.

I said the clock was used as the reference. But no matter.

But the answer is there will be only a single shift varying by the difference in potential.

Some people like to interpret this as occurring only at the source and receiver, due to the difference in the dilation factor with potential. No change in the signal during transit.

Other people like to attribute this to change of the signal during propagation.

But all agree there is only a single shift that agrees with the difference in potential.

I think the former, but that's just MHO.

Isn't only the former possible, since the clock is actually going faster? If we brought it back down we'd find that it was ahead in time compared to the one on the ground.

mfb
Mentor
Let's say your clocks tick with a frequency of 1 MHz, and emit signals with that frequency (peaks of radio waves, pulses of visible light - does not matter). Now, you lift one clock to space - when viewed from earth, it runs faster, like 1.000001 MHz (probably some "0" missing). As seen from earth, the emission rate is 1.000001 MHz, and as no signals are lost, the received rate on earth is still 1.000001 MHz.

Viewed from space, the emission rate is 1 MHz, and it is still the same on earth. However, the clocks on earth run slower, so more signals arrive per earth-second; therefore earth will measure a rate of 1.000001 MHz.

Your signal, measured in space, has a frequency of 1 MHz, and if you follow it to earth, its frequency will increase (blueshift) to 1.000001 MHz.

You have only one red/blueshift, and that is consistent for all observers.
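The two bookkeepings above can be checked against each other in a few lines; the rate ratio here is mfb's illustrative number, not a computed one:

```python
import math

ratio = 1.000001       # space-clock rate / earth-clock rate (illustrative)
f_proper = 1.0e6       # Hz, emission frequency by the space clock

# Earth's bookkeeping: the space clock already ticks fast, and no
# signals are lost in transit.
f_earth_view = f_proper * ratio

# Space bookkeeping: emitted at 1 MHz, but each earth-second is longer
# than a space-second, so more wave peaks arrive per earth-second.
f_space_view = f_proper / (1.0 / ratio)

assert math.isclose(f_earth_view, f_space_view, rel_tol=1e-12)
print(f_earth_view)  # same single blueshift either way
```

Both viewpoints apply the one rate ratio exactly once, which is the "only one red/blueshift" statement in numbers.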

Viewed from space, the emission rate is 1 MHz, and it is still the same on earth. However, the clocks on earth run slower, so more signals arrive per earth-second; therefore earth will measure a rate of 1.000001 MHz.

This is true if the space person measures using his own clock as truth. He could as easily say "I know all about relativity, my clock sped up from coming up here and is now running at 1.00001 MHz." With that mindset, there is also no shift during propagation down. It just transmits, and both observers see the transmit clock as being faster. Is there something wrong with this view? The reason I like it is that everyone can also agree on the amount of absolute time difference between clocks if the space clock came back down. The change in frequency during propagation view doesn't account for this. At least I don't see how it does.

This is true if the space person measures using his own clock as truth. He could as easily say "I know all about relativity, my clock sped up from coming up here and is now running at 1.00001 MHz." With that mindset, there is also no shift during propagation down. It just transmits, and both observers see the transmit clock as being faster. Is there something wrong with this view? The reason I like it is that everyone can also agree on the amount of absolute time difference between clocks if the space clock came back down. The change in frequency during propagation view doesn't account for this. At least I don't see how it does.
I would say there is nothing wrong with your view and, as I indicated, I personally share it.

I was simply telling you that there are knowledgeable people who choose to view it as a change occurring during transit. There is no difference in the end results. Everyone agrees on the quantitative results, so there is no doubling of the effect. Those who view the change as occurring in transit do not then add on a difference for potential.
I completely understand your question, as I pondered and asked exactly the same things, so I am sorry if I have been less than clear in my responses.

All the above relates to photons, not timed signals. I don't think anybody thinks that the periodicity of timed signals changes in transit.


I said the clock was used as the reference. But no matter.

Isn't only the former possible, since the clock is actually going faster? If we brought it back down we'd find that it was ahead in time compared to the one on the ground.
There is a difference, as I mentioned before, between timed signals and photon shifts.

My remarks have all been addressed to the frequency shift of photons. In this case there is no residual, cumulative effect. You bring an emitter down and it behaves just like the local ones.

Think of it this way. You are in space and emit pulses exactly once every second according to your clock, which then travel to a point on the Earth. You and the Earth are stationary with respect to each other, the Earth non-rotating for simplicity, all observers static. The first pulse is emitted and travels along some path - perhaps slower than c within the well according to you, curved or otherwise, even zig-zagging, it doesn't matter - and arrives at the point on Earth in some time t according to you.

The next pulse is emitted and travels along an identical path. Whatever the first photon did, the next does also, and every photon thereafter. So all photons emitted take an identical time t to travel an identical path to the point on the Earth, and so each photon arrives exactly one second after the last one according to you.

So the only reason the Earth observers won't measure the frequency of the received pulses to be one per second is the local gravitational time dilation: they observe their own clocks to tick slower, and so measure a greater reception frequency, in direct inverse proportion to the local clock tick rate - a blueshift. And of course, for the same reason you would measure a redshift for photons emitted from the Earth to you, since the Earth observers emit their pulses at the same rate that their clock ticks, which is slower according to you.
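The pulse argument above can be played out as a toy simulation; the transit time and rate ratio below are arbitrary stand-in values, not physical ones:

```python
# Toy version of the pulse argument: a constant transit time means the
# arrival spacing (in the emitter's time) equals the emission spacing,
# so the measured blueshift comes entirely from the slower local clock.
n_pulses = 5
transit = 0.07                # same for every pulse, emitter's time units
rate_ratio = 1.000001         # space-clock rate / ground-clock rate

emit_times = [n * 1.0 for n in range(n_pulses)]     # one per space-second
arrive_times = [t + transit for t in emit_times]

spacings = [b - a for a, b in zip(arrive_times, arrive_times[1:])]
assert all(abs(s - 1.0) < 1e-12 for s in spacings)  # still one per space-second

# Between consecutive arrivals, the slower ground clock advances less:
ground_spacing = 1.0 / rate_ratio
f_measured = 1.0 / ground_spacing                   # pulses per ground second
print(f_measured)  # slightly more than one per ground second: a blueshift
```

Changing the (constant) transit time has no effect on `f_measured`, which is the point: the path only delays the pulses, it does not alter their spacing.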

mfb
Mentor
All the above relates to photons, not timed signals. I don't think anybody thinks that the periodicity of timed signals changes in transit.
Where is the difference?
If you send a short pulse every 10 oscillations of your wave, and the frequency of that wave increases (with your favourite point of view), the pulse frequency increases, too.
It stays constant for every fixed observer - but that is true for photons, too.

Here's my question first: If radio waves are transmitted from earth, are they actually redshifted, or is it that they appear redshifted when measured by an identical clock in space?

Here's where I'm coming from. I'm trying to reconcile things I know. I think I know that (1) clocks further from earth will run faster, as proven experimentally, (2) the GPS clocks in the satellites were pre-adjusted (slowed) so that the frequency at earth will appear correct.

So I could explain these by the clocks themselves in all cases running faster at higher potential, but radio waves being fixed as they move through different potentials. Conversely, if I have both effects, the GPS thing doesn't make sense: The clocks on the spacecraft will run faster than their setting on the ground, but if I also have the waves blueshift as they come down, I'd be "double booking" the effect. I'm fairly certain the corrections to GPS were only one application of the redshift equation, not the equation squared.

Does my question make sense?
Yes, exactly - you're evidently talking about gravitational redshift only, not including effects due to acceleration. That the "falling photon blueshift" way of describing the observations is misleading was discussed in an article by Okun in the AJP.
- http://arxiv.org/abs/physics/9907017

As a matter of fact, what you conclude here was already explained by Einstein in 1911, on the assumption of classical EM, by a simple continuity requirement: the same number of cycles must be received as were emitted in a given time (by either reckoning), since no wave cycles can be lost or created in vacuum. Although that should have settled the issue, earlier in that same paper he made a different calculation, based on potential energy, that also gave the right answer but was inconsistent with that logical conclusion - and perhaps that caused the later confusion.

PS I forgot that I did not really answer your question, but only what I think you had in mind! Redshift refers to observation, it's simply the difference between the locally received and emitted frequencies. And that is actual according to both explanations.

pervect
Staff Emeritus
This is true if the space person measures using his own clock as truth. He could as easily say "I know all about relativity, my clock sped up from coming up here and is now running at 1.00001 MHz." With that mindset, there is also no shift during propagation down. It just transmits, and both observers see the transmit clock as being faster. Is there something wrong with this view? The reason I like it is that everyone can also agree on the amount of absolute time difference between clocks if the space clock came back down. The change in frequency during propagation view doesn't account for this. At least I don't see how it does.

I feel I need to warn you that according to relativity, there isn't any such thing as an "absolute time difference", and you'll wind up getting very confused if you try to believe otherwise. If you add moving observers into the picture, this becomes obvious because of relativistic time dilation due to velocity: no notion of absolute time can explain what everyone observes.

However, there are two sorts of time, so perhaps the problem here is only a problem of word choice.

The two sorts of time that do exist are coordinate time and proper time. Proper time is the time measured by a clock carried between two events along its own worldline - it's "wristwatch" time.

Coordinate time assigns time coordinates to every event - on Earth, atomic time or TAI time is an example of coordinate time.

So you can define a set of points that correspond to, say "noon", on and above the Earth's surface, and a set of points that are "1 pm".

However, the proper time interval between "noon" and "1 pm" as measured by your local clocks won't be the same everywhere. The convention is to make them the same at the Earth's surface. Thus, if you are on a mountain top, you'll find that there is more than one hour of proper time (measured by your wristwatch) between the event with coordinate "noon" and the event with coordinate "1 pm", which can be ascribed to the effects of gravitational time dilation.
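The mountain-top remark can be put in numbers. A sketch assuming a static, non-rotating Earth and an Everest-height mountain; the formula and values are simplifying assumptions:

```python
import math

# Between the coordinate events "noon" and "1 pm" (defined so that
# sea-level clocks read exactly one hour), a higher static clock
# accumulates slightly more proper time. Values are illustrative.
GM = 3.986004418e14        # Earth's GM, m^3/s^2
c  = 2.99792458e8          # m/s
R_sea = 6.371e6            # m
R_mtn = R_sea + 8848.0     # Everest-height mountain top

def rate(r):
    """Proper-time rate of a static clock at radius r."""
    return math.sqrt(1 - 2*GM/(r*c**2))

hour = 3600.0
# Coordinate duration chosen so the sea-level clock reads one hour:
dt_coord = hour / rate(R_sea)
tau_mountain = rate(R_mtn) * dt_coord
print(tau_mountain - hour)  # positive: a bit more than an hour
```

The excess is a few nanoseconds per hour, tiny but in the direction pervect describes: the wristwatch on the mountain runs ahead of the coordinate "hour".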

To quote wiki http://en.wikipedia.org/w/index.php?title=International_Atomic_Time&oldid=511460967

International Atomic Time (TAI, from the French name Temps atomique international)[1] is a high-precision atomic coordinate[2] time standard ...

Wiki talks a bit about coordinate time as well, see for instance:

http://en.wikipedia.org/w/index.php?title=Coordinate_time&oldid=510905210

Yes, exactly - you're evidently talking about gravitational redshift only, not including effects due to acceleration. That the "falling photon blueshift" way of describing the observations is misleading was discussed in an article by Okun in the AJP.
- http://arxiv.org/abs/physics/9907017

This is great. This paper answers my exact question. Thanks.

You're welcome.

Note that his criticism there is a bit incomplete (and as a result overly mild), as he did not mention Einstein's continuity argument.
