I understand the idea that, given two frames of reference and a time T defined as the interval between two events, T measured in the moving frame will be less than T measured in the stationary frame by a factor of (1 - v^2/c^2)^(1/2).

In 1971 an experiment was done (the Hafele-Keating experiment) with two atomic clocks: one on a plane that was flown around the world, and one kept at a US Naval Observatory. After its round trip, the clock on the plane showed a smaller reading, by about 3×10⁻⁷, which agrees with the predicted time dilation (within error).

My question is: why do we assume the plane is the moving frame and the naval observatory the stationary frame? What if, instead of seeing the plane as moving, we imagine the following:

1. The plane is suspended above the naval observatory.
2. An observer in space is watching the Earth spin around its axis.
3. At some point the observer grabs hold of the plane, so that the Earth continues to spin "underneath" the plane while the plane itself is held "in place" with respect to the observer.
4. After 24 hours, the plane will be back in the position where it started.

In this case, can't we argue that the Earth is the moving frame, and thus the atomic clock at the naval observatory should show the smaller reading? And wouldn't this involve essentially the same mechanics as the 1971 experiment, which ended with the plane's atomic clock showing the smaller time?

It seems to me that which frame is chosen as the moving frame is arbitrary, so the claim that the "moving" frame's measured time should be less than the "stationary" frame's time doesn't seem right, since the "moving" and "stationary" labels are themselves arbitrary. Or aren't they?
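For scale, here is a minimal sketch of the size of the velocity-only (special-relativistic) part of the effect. It ignores the gravitational (altitude) term that the real 1971 analysis also had to include, and the cruise speed and duration are assumed round numbers, not the actual flight parameters:

```python
import math

c = 299_792_458.0   # speed of light in m/s (exact by definition)
v = 250.0           # assumed cruise speed, m/s (~900 km/h) -- not the real flight data
T = 24 * 3600.0     # assumed elapsed time: 24 hours, in seconds

# Moving clock runs slow by the factor sqrt(1 - v^2/c^2).
gamma_inv = math.sqrt(1.0 - (v / c) ** 2)

# Difference between the stationary clock's reading and the moving clock's reading.
delta = T * (1.0 - gamma_inv)

# For v << c this is well approximated by T * v^2 / (2 c^2).
delta_approx = T * v**2 / (2.0 * c**2)

print(f"exact:  {delta:.2e} s")
print(f"approx: {delta_approx:.2e} s")  # both come out around 3e-8 s, i.e. tens of ns per day
```

So the purely velocity-based effect for a jet over a day is of order tens of nanoseconds, consistent in magnitude with the quoted result once the other contributions are folded in.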