Terrell Revisited: The Invisibility of the Lorentz Contraction

  • #51
harrylin said:
PS. I see that PAllen elaborates in post #49 the first argument I made in post #41.
Yes that was the way I was thinking originally as well, that you could easily "see" that the rod was length contracted. But then I realized what Terrell meant, which is that a shortened rod still looks like a rod-- it's not distorted, so you only know it's contracted if you know how far away it is from the CCD. I agree with JDoolin that this does not constitute good use of the concept of "invisibility", because seeing always involves some inclusion of additional information to make sense of the image, I'm just saying that Terrell's meaning of invisibility is only about the non-distortion of small shapes. That's what I was struggling with before, I couldn't see how Terrell was missing such an obvious point, but now I see he just had an odd interpretation of the words.
 
  • #52
Ken G said:
Yes that was the way I was thinking originally as well, that you could easily "see" that the rod was length contracted. But then I realized what Terrell meant, which is that a shortened rod still looks like a rod-- it's not distorted, so you only know it's contracted if you know how far away it is from the CCD. I agree with JDoolin that this does not constitute good use of the concept of "invisibility", because seeing always involves some inclusion of additional information to make sense of the image, I'm just saying that Terrell's meaning of invisibility is only about the non-distortion of small shapes. That's what I was struggling with before, I couldn't see how Terrell was missing such an obvious point, but now I see he just had an odd interpretation of the words.
I think there is more to it. Terrell was (I think) modeling an idealized camera, not a shadow cast image such as Harrylin and I mentioned. In the latter, shape change is trivially visible - a moving circle becomes an oval (as does a moving sphere).

Yet another point is that the effect of light delays on an idealized camera would distort shapes more if it weren't for length contraction (a sphere would be elongated if it weren't for length contraction). Thus the absence of many types of shape distortion is direct evidence of length contraction!

Finally, other sources derive that shapes change anyway - a rectangle can become a curved parallelogram.

So I really think there is no substantive way in which the title of the paper is defensible.
 
  • #53
PAllen said:
I think there is more to it. Terrell was (I think) modeling an idealized camera, not a shadow cast image such as Harrylin and I mentioned. In the latter, shape change is trivially visible - a moving circle becomes an oval (as does a moving sphere).
I'm not sure the moving sphere would look squashed, even in its shadow, since to make a shadow the sphere must scatter away the light, but light moving as the sphere goes by is going to scatter at multiple places around the sphere. A flat disk I can see, but then if you see a squashed flat disk, it can look rotated rather than squashed. But if you know it's all at the same distance, because you know something about the setup, you can include that knowledge in what you are calling the "image." I think Terrell's point is you will always need to include that knowledge, it's not in the "raw" image. But I admit I'm still unclear on just what the claim is.
Yet another point is that the effect of light delays on an idealized camera would distort shapes more if it weren't for length contraction (a sphere would be elongated if it weren't for length contraction). Thus the absence of many types of shape distortion is direct evidence of length contraction!
But that's all right, Terrell knows you can infer length contraction from what you see, he is only claiming you can't "see it" without some analysis.
Finally, other sources derive that shapes change anyway - a rectangle can become a curved parallelogram.

So I really think there is no substantive way in which the title of the paper is defensible.
But if that's true, it's not just the title-- it's essentially every word in the abstract that is wrong. That requires a flaw in the mathematics, does it not?
 
  • #54
harrylin said:
I cannot follow that argument at all; in my analysis of SR, space is homogeneous. The aberration of light from an LED with velocity v at x=x1 that shines towards a CCD element at x=x1 must be equal to the aberration of light from an LED with velocity v at position x=x2 that shines towards a CCD element at x=x2.

You can call it angle of reception. :smile:

PS. I see that PAllen elaborates in post #49 the first argument I made in post #41.

Is it the angle of reception?

I may be misunderstanding the equation for aberration but look at the following diagram

[Attached diagram: 2015-05-01-Relativistic-Aberration.png]


Now there's nothing wrong with the math here, insofar as it goes:
"the source is moving with speed v at an angle \theta_s relative to the vector from the observer to the source at the time when the light is emitted. Then the following formula, which was derived by Einstein in 1905, describes the aberration of the light source, \theta_o, measured by the observer:"

I think that the hardest thing to do is to figure out what these angles mean verbally and intuitively. For instance, the light that goes along that "measured observed angle" never actually hits the observer along the vector between the observer and the source. It's just where the light passes through the observer's reference frame.

Now, if you're sophisticated enough in it that you've thought through all this, more power to you. But as for me, I find the idea of finding the location of the object according to the intersection of past-light-cones with the worldlines of the object much more intuitive.

Rather than figuring out where a particular aimed vector of light passes through your reference frame, it figures out the locus of events being seen from a particular point in space and time.
 
  • #55
Ken G said:
I'm not sure the moving sphere would look squashed, even in its shadow, since to make a shadow the sphere must scatter away the light, but light moving as the sphere goes by is going to scatter at multiple places around the sphere. A flat disk I can see, but then if you see a squashed flat disk, it can look rotated rather than squashed. But if you know it's all at the same distance, because you know something about the setup, you can include that knowledge in what you are calling the "image." I think Terrell's point is you will always need to include that knowledge, it's not in the "raw" image. But I admit I'm still unclear on just what the claim is.
No scattering is needed for shadow casting. Imagine all light striking the body is absorbed. Then a moving sphere clearly casts an oval shadow. As for distance, you assume it is nearly touching the film for shadow casting. Terrell was simply not analyzing this scenario. I don't know why you are trying to defend a different case than Terrell analyzed. It really is trivial that shape change from length contraction is visible via shadow casting (given a perfect plane wave of near zero duration). It is a perfect measure of simultaneity for the frame generating the plane wave flash. In another frame, different elements of the flash are generated at different times, so the explanation of the shape distortion is frame dependent, but not the fact of the shape distortion.
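The oval-shadow claim is easy to sketch numerically. A minimal Python check (c = 1; the function name and sampling scheme are mine): the shadow cast at one film-frame instant is just the silhouette of the length-contracted sphere, i.e. the rest-frame outline squeezed by 1/gamma along the motion.

```python
import math

def shadow_extents(beta, R=1.0, n=1000):
    """Outline of the shadow cast on the film by a sphere of rest radius R moving
    along x at speed beta (c = 1), lit by a flash simultaneous in the film frame:
    every rest-frame outline point (X, Y) sits at lab position (X/gamma, Y)."""
    gamma = 1.0 / math.sqrt(1 - beta**2)
    xs, ys = [], []
    for i in range(n):
        th = 2 * math.pi * i / n             # sample the sphere's equatorial outline
        xs.append(R * math.cos(th) / gamma)  # length-contracted along the motion
        ys.append(R * math.sin(th))          # unchanged transverse to the motion
    return max(xs) - min(xs), max(ys) - min(ys)

width, height = shadow_extents(0.8)
# width / height = 1/gamma = 0.6 at beta = 0.8: the circular outline casts an oval
```

In another frame the flash elements fire at different times, as noted above, but the oval itself is frame-independent.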

Ken G said:
But that's all right, Terrell knows you can infer length contraction from what you see, he is only claiming you can't "see it" without some analysis.

But if that's true, it's not just the title-- it's essentially every word in the abstract that is wrong. That requires a flaw in the mathematics, does it not?
Yes, it would, and on this I don't know for sure who is right. I have never done a complete ray tracing for a complex shape from first principles on my own. I do know there are many videos such as A.T. has linked that show even the same object changing shape as it approaches, passes, and recedes. Unless these are all wrong, then even a limited claim of shape preservation is wrong.
 
  • #56
On carefully reading Terrell's abstract I can see how my detailed analysis of the rod could be considered consistent with it. The increased space between ruler marks on the approaching part, and the decrease on the receding part, could be consistent with an interpretation of the ruler being rotated rather than contracted. However, as Penrose noted in his book, it would be easy to establish that this is physically the wrong interpretation - imagine the rod as having wheels, moving on a stationary track. You would never see the wheels leave the track. Therefore, seeing this, you would be forced to interpret the image as contracted, with stretching and compression of the ruler lines.

As for the discrepancy between parts of the abstract and various ray traced videos, it is possible the video cases exceed his 'small subtended angle' restriction.
 
  • #57
PAllen said:
No scattering is needed for shadow casting. Imagine all light striking the body is absorbed. Then a moving sphere clearly casts an oval shadow. As for distance, you assume it is nearly touching the film for shadow casting. Terrell was simply not analyzing this scenario. I don't know why you are trying to defend a different case than Terrell analyzed. It really is trivial that shape change from length contraction is visible via shadow casting (given a perfect plane wave of near zero duration). It is a perfect measure of simultaneity for the frame generating the plane wave flash. In another frame, different elements of the flash are generated at different times, so the explanation of the shape distortion is frame dependent, but not the fact of the shape distortion.
Actually, if you use my proposal from #49, the distant flash will produce what is interpreted as a plane wave pulse in all frames. The simultaneity detection comes from the sheet of film. If the image interaction is simultaneous across the sheet in one frame, it will not be simultaneous in a different frame, and that will explain the shape change per that frame.
 
  • #58
PAllen said:
On carefully reading Terrell's abstract I can see how my detailed analysis of the rod could be considered consistent with it. The increased space between ruler marks on the approaching part, and the decrease on the receding part, could be consistent with an interpretation of the ruler being rotated rather than contracted. However, as Penrose noted in his book, it would be easy to establish that this is physically the wrong interpretation - imagine the rod as having wheels, moving on a stationary track. You would never see the wheels leave the track. Therefore, seeing this, you would be forced to interpret the image as contracted, with stretching and compression of the ruler lines.
Terrell might say you are not allowed to compare the ruler lines, as then the object is not "small" in the way Terrell means. He is apparently arguing that if you allow yourself to compare different places in the image, you must make additional assumptions about what you are looking at in order to "connect the dots", and that could subject you to illusions that don't count as "seeing." This is the tricky part of his language. Terrell certainly knows that if we are allowed to include analytical details about the situation, especially time of flight information, we can correctly infer there is length contraction, that's how length contraction was discovered. So he is using a very restricted idea of what things "look like"-- he is comparing photographs made by two observers in relative motion, and saying the shapes of small things in photographs taken at the same time and place look the same. So he must say that your shadow analysis, done close to the film, subtends a solid angle that is too large to count for what he is talking about. In some sense he seems to be claiming that a shadow analysis is not what things look like, it is an analytical tool for saying what they are actually doing-- akin to using time-of-flight corrections to do the same thing.

So I think it all comes down to what is meant by saying a shape "looks no different". Maybe the explanation by Baez in the link PeterDonis provided will shed light on this:
"Now let's consider the object: say, a galaxy. In passing from his snapshot to hers, the image of the galaxy slides up the sphere, keeping the same face to us. In this sense, it has rotated. Its apparent size will also change, but not its shape (to a first approximation)."

But the more I think about what Baez is saying there, I just don't get it. Surely a camera moving at the same velocity as a "plus sign" of rods will see the symmetric plus sign, and the camera that sees the plus sign as moving can take an image of something apparently at closest approach, which will look distorted. A distorted image looks different, no matter which images you choose to match up to make the comparison. It doesn't seem to matter if you can attribute the distortion to rotation or length contraction, Baez claimed the images will have the same shape, and I don't see how that could be.
 
  • #59
How do you derive the aberration equation?

\cos \theta_o=\frac{\cos \theta_s-\frac{v}{c}}{1-\frac{v}{c} \cos \theta_s}

You'll see I posted a quote from the wikipedia article about it from above... But the more I think about it, I start to think this might be the source of the problem in Terrell's paper.

From Wikipedia: "the source is moving with speed v at an angle \theta_s relative to the vector from the observer to the source at the time when the light is emitted. Then the following formula, which was derived by Einstein in 1905, describes the aberration of the light source, \theta_o, measured by the observer:"

Now my reading of this is that the light is emitted along a "tube" that is aimed directly toward the observer in the reference frame of the observer when the source is at the given point.

The trouble is that if the "tube" is aimed directly toward the observer, in the reference frame of the observer, you're looking at the situation Post-Lorentz-Contraction. That is \theta_s is not the angle of the tube in the source's reference frame, but the angle of the tube in the observer's reference frame. So this equation is not relating a difference between appearances in the source's reference frame and the observer's reference frame.

Rather, it is relating a difference between two different angles measured in the observer's reference frame.

If I were to try to confirm this, I would probably try to set up a diagram similar to the one I gave in post 54, and do some vector and trigonometric calculations, dividing the velocities into well-chosen x and y components, setting the final speed of the photon through the moving tube at c, and see if I could reproduce the aberration equation from scratch.

My point is, I don't think you would find any evidence of Lorentz Contraction in the aberration equation, because the aberration equation may simply be figuring out the direction at which rays travel from already lorentz-contracted tubes.
 
  • #60
JDoolin said:
My point is, I don't think you would find any evidence of Lorentz Contraction in the aberration equation, because the aberration equation may simply be figuring out the direction at which rays travel from already lorentz-contracted tubes.
That aberration formula is just one of three basic definitions, using cos, sin and tan. It just happens that gamma cancels out in the cos definition. Reference. See for example equation (2) which contains gamma explicitly.
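That cancellation is quick to verify numerically. A Python sketch (c = 1; function names are mine): the cos form, where gamma has cancelled, and the tan form, where gamma is explicit, give the same observed angle.

```python
import math

def aberrate_cos(theta_s, beta):
    """cos form of the aberration formula: gamma has cancelled out."""
    return (math.cos(theta_s) - beta) / (1 - beta * math.cos(theta_s))

def aberrate_tan(theta_s, beta):
    """tan form: gamma appears explicitly in the denominator."""
    gamma = 1.0 / math.sqrt(1 - beta**2)
    return math.atan2(math.sin(theta_s), gamma * (math.cos(theta_s) - beta))

beta, theta_s = 0.6, 1.0
theta_o = math.acos(aberrate_cos(theta_s, beta))
# the two forms agree: theta_o equals aberrate_tan(theta_s, beta) to rounding error
```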
 
  • #61
JDoolin said:
How do you derive the aberration equation?

\cos \theta_o=\frac{\cos \theta_s-\frac{v}{c}}{1-\frac{v}{c} \cos \theta_s}

You'll see I posted a quote from the wikipedia article about it from above... But the more I think about it, I start to think this might be the source of the problem in Terrell's paper.

From Wikipedia: "the source is moving with speed v at an angle \theta_s relative to the vector from the observer to the source at the time when the light is emitted. Then the following formula, which was derived by Einstein in 1905, describes the aberration of the light source, \theta_o, measured by the observer:"

Now my reading of this is that the light is emitted along a "tube" that is aimed directly toward the observer in the reference frame of the observer when the source is at the given point.

The trouble is that if the "tube" is aimed directly toward the observer, in the reference frame of the observer, you're looking at the situation Post-Lorentz-Contraction. That is \theta_s is not the angle of the tube in the source's reference frame, but the angle of the tube in the observer's reference frame. So this equation is not relating a difference between appearances in the source's reference frame and the observer's reference frame.

Rather, it is relating a difference between two different angles measured in the observer's reference frame.

If I were to try to confirm this, I would probably try to set up a diagram similar to the one I gave in post 54, and do some vector and trigonometric calculations, dividing the velocities into well-chosen x and y components, setting the final speed of the photon through the moving tube at c, and see if I could reproduce the aberration equation from scratch.

My point is, I don't think you would find any evidence of Lorentz Contraction in the aberration equation, because the aberration equation may simply be figuring out the direction at which rays travel from already lorentz-contracted tubes.
The wikipedia description is poor. The 's' angle is measured in one reference frame, the 'o' angle is measured in the other. The discussion in the mathpages link is much clearer.
 
  • #62
JDoolin said:
Is it the angle of reception?

I may be misunderstanding the equation for aberration but look at the following diagram
[..] Now there's nothing wrong with the math here, insofar as it goes: [..] Now, if you're sophisticated enough in it that you've thought through all this, more power to you. [..]
Sorry, in the past I was sophisticated enough to do that, but this time I imagined a simple set-up with identical emitter-receiver pairs that utilizes a basic physical principle - the laws of nature (including aberration) do not depend on position.
No math or drawings are needed (OK, a mental sketch is useful) to know that if one LED shines at a certain angle, then an identical LED in an identical state must shine at the same angle, because the calculations and drawings are identical.
In the setup that I considered, with identical LEDs and matching CCDs, only anti-SR space anisotropy could produce a different outcome.
 
  • #63
PAllen said:
The wikipedia description is poor. The 's' angle is measured in one reference frame, the 'o' angle is measured in the other. The discussion in the mathpages link is much clearer.

Since that's a rather long page, I thought it might be helpful to focus in on what I think is the most relevant part.

[Attached: 2015-05-02-RelativisticAberrationFormula01.PNG, 2015-05-02-RelativisticAberrationFormula02.PNG]

I've lost my link to Terrell's paper, but I'm trying to imagine how I could use this equation to determine the shape of a relativistically passing object.

If I had an extended source with a length L along its velocity vector, then it is not an "object at point A", because a point cannot have a length L.

• The angle \alpha at one end of length L, would be different from the angle \alpha at the other end of length L.
• The angle \theta_s would be different at the two ends of the object.
• The time t_1 would be different at the two ends of the object.

If there is enough L there to measure length contraction, you have an object that is not located wholly at the origin, so it would become difficult, if not impossible, to use any form of the aberration equation derived for an object at the origin.
 

  • #64
An important thing to notice about aberration is that it is an effect that appears at order v/c, so it is primarily a simple time-of-flight effect, similar to what happens when you have directional hearing of sound waves. All we are concerned with are Lorentzian effects, i.e., that which is different in Lorentzian relativity versus Galilean relativity. Has anyone tried to calculate what a moving "plus sign" would "look like" in Galilean relativity, and compare it? I'm sure things would look pretty weird in either relativity, but we can only claim length contraction is "invisible" if what we see looks the same in both forms of relativity.
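One way to quantify this is to compare the relativistic aberration formula against a simple Galilean model (light moving at speed c in the source frame, with plain velocity addition), which is the same formula with the gamma factor removed. A Python sketch (c = 1; the function names and the choice of Galilean model are mine):

```python
import math

def theta_o_galilean(theta_s, beta):
    """Plain velocity-addition (time-of-flight) aberration: no gamma anywhere."""
    return math.atan2(math.sin(theta_s), math.cos(theta_s) - beta)

def theta_o_lorentzian(theta_s, beta):
    """Relativistic aberration: the same formula with the extra gamma factor."""
    gamma = 1.0 / math.sqrt(1 - beta**2)
    return math.atan2(math.sin(theta_s), gamma * (math.cos(theta_s) - beta))

theta_s = 1.2
diffs = [abs(theta_o_lorentzian(theta_s, b) - theta_o_galilean(theta_s, b))
         for b in (0.001, 0.01, 0.1)]
# each tenfold increase in beta grows the discrepancy roughly a hundredfold:
# the specifically Lorentzian part of aberration is second order in v/c
```

This bears out the point above: to first order in v/c the two agree, so only the second-order residue can carry any signature of length contraction.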

This also raises the key question: what did Terrell actually show to be true? Baez thinks he showed something really interesting to be true, and he seemed to be saying that small shapes would look the same to two observers in relative motion, but that does not seem to be true for the plus sign photographed at the instant that it appears to be at the point of closest approach for the stationary observer, because certainly the observer moving with the plus sign will never see anything but a fully symmetric plus sign. So what did Terrell prove, and did both he and Baez draw erroneous conclusions from what was actually shown?

The one thing that gives me pause is that I can't help wondering if maybe the aberration that makes the plus sign appear to be somewhere other than where it actually was when it emitted that light, means that when the stationary observer sees the light emitted when the plus sign really was at closest approach, and also sees it skewed to be shorter along the direction along its motion, aberration will make it look like it is not yet at the point of closest approach-- so they might think "oh, it's skewed because I'm seeing it from an angle that is rotated by its lateral position." Then it wouldn't "look" length contracted, it would just look rotated in a perfectly normal way and nothing relativistic would be apparent (if it was small enough).

But that doesn't sound like what Baez is saying at all-- he is saying the two photographs taken through shutters at the same time and place would photograph the same shape, so would have to be a symmetric plus sign in both, and I just can't see how that could be true. But I hesitate to conclude that something Baez has thought about this much is wrong!
 
  • #65
PAllen said:
The wikipedia description is poor. The 's' angle is measured in one reference frame, the 'o' angle is measured in the other. The discussion in the mathpages link is much clearer.

You know? I was able to confirm the equations from the mathpages link, once I understood the definitions of all the variables. It's a fairly straightforward application of the Lorentz Transformation on the vector between two events.

\begin{pmatrix} t'\\ x'\\ y' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta \gamma & 0 \\ -\beta \gamma & \gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} t_1\\ t_1 \cos \alpha\\ t_1 \sin \alpha \end{pmatrix}

Then the velocity angles can be calculated from x'/t', and y'/t'.
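That matrix route can be checked numerically against Einstein's formula: boost the null vector of a light ray and read the new angle off the transformed velocity components. A short Python sketch (c = 1, boost along x; function name mine):

```python
import math

def boost_light_ray(theta_s, beta):
    """Boost the null 4-vector (t, x, y) = (1, cos(theta_s), sin(theta_s)) of a
    light ray along x (c = 1) and recover the angle from x'/t' and y'/t'."""
    gamma = 1.0 / math.sqrt(1 - beta**2)
    t, x, y = 1.0, math.cos(theta_s), math.sin(theta_s)
    tp = gamma * (t - beta * x)
    xp = gamma * (x - beta * t)
    yp = y
    return math.atan2(yp / tp, xp / tp)

beta, theta_s = 0.6, 0.8
theta_o = boost_light_ray(theta_s, beta)
# cos(theta_o) matches (cos(theta_s) - beta) / (1 - beta*cos(theta_s))
```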
JDoolin said:
Now there's nothing wrong with the math here, insofar as it goes:

"the source is moving with speed v at an angle \theta_s relative to the vector from the observer to the source at the time when the light is emitted. Then the following formula, which was derived by Einstein in 1905, describes the aberration of the light source, \theta_o, measured by the observer:"

Although I said, before, that there is nothing wrong with the math--I should point out that it would have been incredibly difficult to guess the meaning of the 's' angle from the description given in the wikipedia article.

The angle between the source and the observer and the velocity vector "at the time the light is emitted" is NOT \theta_s.

\theta_s is the angle between the observer and the source and the velocity vector "at the time the light is received by the observer" in the source's reference frame.
 
  • #66
Ken G said:
The one thing that gives me pause is that I can't help wondering if maybe the aberration that makes the plus sign appear to be somewhere other than where it actually was when it emitted that light, means that when the stationary observer sees the light emitted when the plus sign really was at closest approach, and also sees it skewed to be shorter along the direction along its motion, aberration will make it look like it is not yet at the point of closest approach-- so they might think "oh, it's skewed because I'm seeing it from an angle that is rotated by its lateral position." Then it wouldn't "look" length contracted, it would just look rotated in a perfectly normal way and nothing relativistic would be apparent (if it was small enough).

I have been thinking today about modeling an asterisk-shaped object: a set of eight or more tubes that would show the light paths as they come out of it, as well as the Lorentz-contracted moving structure.

What I'd want to show is an animation of the paths of the light following the paths predicted by the aberration equation:

[Attached: 2015-05-02-RelativisticAberrationFormula03.PNG]

(Image from http://mathpages.com/rr/s2-05/2-05.htm )

But at the same time as it shows those paths of light, it should show the overlaid, simply Lorentz-contracted structure of the object.
 
  • #67
Ken G said:
maybe the aberration that makes the plus sign appear to be somewhere other than where it actually was when it emitted that light
The camera always sees the light coming from where it was emitted, in the rest frame of the camera.
 
  • #68
JDoolin said:
I have been thinking today about modeling an asterisk-shaped object: a set of eight or more tubes that would show the light paths as they come out of it, as well as the Lorentz-contracted moving structure.

What I'd want to show is an animation of the paths of the light following the paths predicted by the aberration equation:

(Image from http://mathpages.com/rr/s2-05/2-05.htm )

But at the same time as it shows those paths of light, it should show the overlaid, simply Lorentz-contracted structure of the object.
Can you contrast a similar picture for Galilean and Lorentzian relativity? I'm wondering if the Lorentz contraction cancels out the Lorentzian modification to the aberration equation.
 
  • #69
A.T. said:
The camera always sees the light coming from where it was emitted, in the rest frame of the camera.
I'm not sure that's true-- wouldn't the camera see the image in the same direction that a tube would need to be pointed to accept a stream of photons from the source, not along the path of any single one of those photons? In other words, imagine a helicopter flying along a straight path, firing straight-line bullets to try to hit a single point on the ground (so they have to be aimed to account for the motion of the helicopter). It seems to me the stream of bullets will arrive, at any moment, along a line that does not track the actual trajectories of the individual bullets that are coming in. If we wanted to point a tube to accept those bullets, you would have to keep the tube rotating to track the incoming bullets, and at any instant the tube would not point along the trajectory of the bullets that are hitting the bottom of the tube at that moment. So I think if the bullets are photons, the eye will see the apparent image along the direction the tube is pointing instantaneously as the photons hit the bottom, not along the direction of motion of the photons. If one takes a wavefront picture, this must have to do with how the wavefronts are turned by the phase variations coming from the movement of the source, such that we cannot expect the arriving plane wave to be perpendicular to the line from the point where the light was emitted. Is that not what aberration is?
 
  • #70
Ken G said:
I'm not sure that's true-- wouldn't the camera see the image in the same direction that a tube would need to be pointed to accept a stream of photons from the source, not along the path of any single one of those photons? In other words, imagine a helicopter flying along a straight path, firing straight-line bullets to try to hit a single point on the ground (so they have to be aimed to account for the motion of the helicopter). It seems to me the stream of bullets will arrive, at any moment, along a line that does not track the actual trajectories of the individual bullets that are coming in.

The bullets would arrive along many lines, each tracking the actual trajectories of the individual bullets that are coming in.

If we wanted to point a tube to accept those bullets, you would have to keep the tube rotating to track the incoming bullets, and at any instant the tube would not point along the trajectory of the bullets that are hitting the bottom of the tube at that moment.

That's a good point. The tube would have to be rotating even as it was receiving the light. If the tube were narrow enough, and the passing object were moving fast enough, you'd have to rotate the tube so fast that the photons would hit the side of the tube before they made it into the camera.

So I think if the bullets are photons, the eye will see the apparent image along the direction the tube is pointing instantaneously as the photons hit the bottom, not along the direction of motion of the photons.

[Attached: 2015-05-03-RelativisticAberrationFormula04.PNG]
Check the thumbnail. If the top of the tube is rotating to stay aligned with the incoming "bullets" the bullet arriving at the bottom is not necessarily traveling along the direction the tube is oriented.

If one takes a wavefront picture, this must have to do with how the wavefronts are turned by the phase variations coming from the movement of the source, such that we cannot expect the arriving plane wave to be perpendicular to the line from the point where the light was emitted. Is that not what aberration is?

Well, would that be consistent with the derivation I copied from http://mathpages.com/rr/s2-05/2-05.htm in post number #63? The derivation there uses the Lorentz Transformation of two events in a pair of reference frames orthogonal to the relative velocity vector of a source and an observer.

What is the principle by which we know that the Lorentz Transformation works? It is the fact that the LT is the unique transformation that preserves light-cones, while reducing to the Galilean transformation at low velocities. But at what point during all that did anyone ever say "We cannot expect the arriving plane wave to be perpendicular to the line from the point where the light was emitted?" Never. Quite the contrary, the Lorentz Transformation absolutely preserves the principle that spherical wave-fronts create images of objects at their center. That seems to me one of the many selling-points of having a transformation which preserves the light-cone.

Now, I don't know what other people have said about the aberration equation, but, according to the derivation, I would say, yes, we CAN expect the arriving plane wave to be perpendicular to the line from the point where the light was emitted.
 
  • #71
Ken G said:
Can you contrast a similar picture for Galilean and Lorentzian relativity? I'm wondering if the Lorentz contraction cancels out the Lorentzian modification to the aberration equation.

JDoolin said:
I have been thinking today about modeling an asterisk-shaped model. A set of eight or more tubes that would show the light paths as they came out of it, as well as the Lorentz contracted moving structure.

What I'd want to show is an animation of the paths of the light following the paths predicted by the aberration equation:

View attachment 82988
(Image from http://mathpages.com/rr/s2-05/2-05.htm )

But at the same time as it shows those paths of light, it should show, overlaid, the simply Lorentz-contracted structure of the object.

Here, I just made this video showing the concept... showing how the tubes of the source can be pointed one way in the observer's reference frame, while the actual photon paths can point in an entirely different direction. It's pretty sloppy, but I think, at least, it gets the idea across.
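For reference, the aberration map behind the asterisk model can be sketched numerically. This is one consistent sign convention (c = 1, angles measured from the direction of motion), not necessarily the one used on the mathpages figure:

```python
import math

def aberrate(theta, beta):
    """Relativistic aberration: map an emission angle theta (measured from
    the direction of motion, in the source frame) to the observer frame.
    Convention: beta > 0 tilts rays toward the direction of motion."""
    c = (math.cos(theta) + beta) / (1.0 + beta * math.cos(theta))
    return math.acos(c)

# Tubes arranged isotropically in the source frame crowd forward
# in the observer frame:
beta = 0.8
for theta_deg in (0, 45, 90, 135, 180):
    t = math.radians(theta_deg)
    print(theta_deg, "->", round(math.degrees(aberrate(t, beta)), 1))
```

A tube that is perpendicular to the motion in the source frame (90 degrees) carries light that arrives at roughly 37 degrees from the forward direction at beta = 0.8, which is the mismatch between tube orientation and photon path the video tries to show.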

 
  • #72
Ken G said:
if the bullets are photons
Light is not like bullets. If we are in flat space time and the rest frame of the camera is inertial, then light is propagating isotropically in all directions from each emission point in the rest frame of the camera, so the wave fronts reaching the static camera are perpendicular to the straight line between the camera and the emission point.

Ken G said:
Is that not what aberration is?
There is no aberration in the rest frame of the camera.
 
  • #73
All right, thanks to everyone for clearing that up for me, I was definitely making aberration too difficult. My bad.

Anyway, I think I see that Baez is right, though Terrell's "invisibility" claim is still a bit of a stretch-- two cameras in relative motion that image the same object at the same place and time will always image the same shape, it will just appear to be in a different direction, and it can also have a different total angular size.

If how that could be is still as unclear to anyone else as it was to me, consider Baez' two pinhole cameras, taking a picture when at the same place and time, but this time let's put the "plus sign" a little ahead of the camera that is tracking its motion, such that when the two cameras coincide, the stationary camera takes an image of the plus sign apparently at its point of closest approach. The plus sign is riding on a string through its horizontal piece, and the moving camera is trailing it on a parallel string that passes through the stationary camera.

So if the plus sign is moving left to right, this of course means the plus sign is really a bit to the right of the point of closest approach at the moment the cameras coincide and snap their photos. We know that the image from the stationary camera will show a length contracted horizontal piece, because we agree that there we can correctly reckon that it is length contracted. The moving camera, on the other hand, sees the plus sign as being a little rotated, because it is trailing it a bit, so the photo in the moving camera will also have a shortened horizontal piece. Baez is saying that the amount it will be shortened in this example is exactly the Lorentz factor, such that the shapes of the plus signs will be the same in the two photos. So to get the moving camera to coincide with the stationary one when the stationary one needs to snap this photo, the moving camera must trail the plus sign by exactly the angle needed to make the plus sign look Lorentz contracted. That the two images look the same is the basis for saying length contraction is "invisible"-- it's an ambiguity between whether the visible length contraction is real as for the stationary camera, or due to rotation as for the moving camera, when just looking at the "literal" images. This would seem to be a special feature of Lorentz contraction, perhaps an equivalent way to assert the postulates of relativity.
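The ambiguity described above, between real contraction and apparent rotation, rests on a simple identity: a rod rotated by the angle alpha with sin(alpha) = v/c projects to exactly the Lorentz-contracted width, since cos(alpha) = 1/gamma. A minimal numeric check (the value of beta is just an example):

```python
import math

beta = 0.6                        # v/c, an arbitrary example value
gamma = 1.0 / math.sqrt(1.0 - beta**2)

L = 1.0                           # rest length of the horizontal arm
contracted = L / gamma            # width the stationary camera reckons

alpha = math.asin(beta)           # Terrell rotation angle: sin(alpha) = beta
rotated = L * math.cos(alpha)     # projected width of a rod rotated by alpha

print(contracted, rotated)        # identical: contraction mimics rotation
```

This is why the two photographs can agree: cos(asin(beta)) = sqrt(1 - beta^2) = 1/gamma for every beta, not just this example.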
 
  • #74
A.T. said:
Light is not like bullets. If we are in flat space time and the rest frame of the camera is inertial, then light is propagating isotropically in all directions from each emission point in the rest frame of the camera, so the wave fronts reaching the static camera are perpendicular to the straight line between the camera and the emission point.

I would just change the word "point" to "event"

There is no aberration in the rest frame of the camera.

That is, there are no obliquely traveling wave-fronts of light from any event.
 
  • #75
Ken G said:
All right, thanks to everyone for clearing that up for me, I was definitely making aberration too difficult. My bad.

Anyway, I think I see that Baez is right, though Terrell's "invisibility" claim is still a bit of a stretch-- two cameras in relative motion that image the same object at the same place and time will always image the same shape, it will just appear to be in a different direction, and it can also have a different total angular size.

If how that could be is still as unclear to anyone else as it was to me, consider Baez' two pinhole cameras, taking a picture when at the same place and time, but this time let's put the "plus sign" a little ahead of the camera that is tracking its motion, such that when the two cameras coincide, the stationary camera takes an image of the plus sign apparently at its point of closest approach. The plus sign is riding on a string through its horizontal piece, and the moving camera is trailing it on a parallel string that passes through the stationary camera.

So if the plus sign is moving left to right, this of course means the plus sign is really a bit to the right of the point of closest approach at the moment the cameras coincide and snap their photos. We know that the image from the stationary camera will show a length contracted horizontal piece, because we agree that there we can correctly reckon that it is length contracted. The moving camera, on the other hand, sees the plus sign as being a little rotated, because it is trailing it a bit, so the photo in the moving camera will also have a shortened horizontal piece. Baez is saying that the amount it will be shortened in this example is exactly the Lorentz factor, such that the shapes of the plus signs will be the same in the two photos. So to get the moving camera to coincide with the stationary one when the stationary one needs to snap this photo, the moving camera must trail the plus sign by exactly the angle needed to make the plus sign look Lorentz contracted. That the two images look the same is the basis for saying length contraction is "invisible"-- it's an ambiguity between whether the visible length contraction is real as for the stationary camera, or due to rotation as for the moving camera, when just looking at the "literal" images. This would seem to be a special feature of Lorentz contraction, perhaps an equivalent way to assert the postulates of relativity.

Are you really coming back to the conclusion that you can't "see" Lorentz Contraction here? I hope I have helped you to understand aberration a bit better, but it was definitely not my goal to get you to come to that conclusion!
 
  • #76
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!
 
  • #77
JDoolin said:
Are you really coming back to the conclusion that you can't "see" Lorentz Contraction here? I hope I have helped you to understand aberration a bit better, but it was definitely not my goal to get you to come to that conclusion!
No, I agree that the standard meaning of "see" is much broader than the limited meaning applied by Terrell. And I appreciate your efforts to elucidate all the various factors here!

What I'm actually saying is that it is the conclusion of Baez that appears to be correct, and I did not see that before. Baez' claim is that two cameras taking a picture at the same place and time will always photograph small shapes the same. The shapes may appear at different places in the visual field if one of the cameras is subject to aberration, and there can also be some changes in total angular size relating to similar issues, but the two shapes will be the same, i.e., a plus sign seen as having a given contrast in the lengths of its pieces in one photograph will have that same contrast in the other photograph as well, and that would not be true in Galilean relativity it seems. How to express that fact in words is a bit tricky!
 
  • Like
Likes JDoolin
  • #78
m4r35n357 said:
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!

Here is another potential one:

http://www.visus.uni-stuttgart.de/u...vistic_Visualization_by_Local_Ray_Tracing.pdf
As future work, we plan to extend our software to a freely available tool usable for teaching in the context of Special Relativity. We want to allow the user to interactively explore relativistic effects by supporting import of arbitrary 3D models from common file formats and graphical interaction with the relevant visualization parameters, e.g., observer’s position, directions of motion, speed, and the different visual effects shown (geometric only, Doppler shift, and searchlight effect).

Maybe they have made it available already or would if enough people ask them. I think it would be a great tool.
 
  • #79
JDoolin said:
I would just change the word "point" to "event"
I said "point" because I meant a spatial point, not a point in space-time. Light is propagating isotropically in all directions of space, not of space-time.
 
  • #80
A.T. said:
Maybe they have made it available already or would if enough people ask them. I think it would be a great tool.
I'd like to think so too but the paper is from 2010 and they don't even give the program a name to search for. At first glance at least some of the emphasis of their approach is on numerical "fudging" for efficiency. I think the approach taken by Real Time Relativity is "purer" mathematically. Section 10 of the primer deals with rendering via stereographic projection, and builds on the work of Penrose.
[UPDATE]
No sooner had I posted than I found a recent program by one of the authors called GeoVis here. Unfortunately it appears to be unavailable to the general public, and with a license that I can't be bothered to even read. Shame as apparently it's a Linux program, and that's one of my things . . .
[UPDATE 2] Try here.
 
Last edited:
  • #81
A.T. said:
I said "point" because a meant a spatial point, not a point in space-time. Light is propagating isotropically in all directions of space, not of space-time.

By point, did you mean a stationary point in the observer's reference frame, or a point attached to an object which may or may not be moving in the observer's reference frame?

Because I think we've pretty well established that if the light from a point (attached to an object) is isotropic in one reference frame, it is not isotropic if you are moving fast with respect to that object. That's what the diagram in post 66 is showing, and what I tried to explain in more detail in the video in post 71 (how an isotropic arrangement of beams in one reference frame leads to a non-isotropic arrangement of beams in another reference frame).

Maybe I'm misunderstanding your meaning of the word isotropic here. In the diagram in post 66, you see that the intensity of the light must be much greater coming off the front side of the source than from the back end. But the speed of light is the same in all directions. So if by isotropic you mean "the same speed" I'd agree with you, but if by isotropic, you mean "the same intensity" I'd have to disagree with you.
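The intensity asymmetry described here (the "headlight" effect) follows directly from the aberration map: rays emitted isotropically in the source frame crowd into the observer's forward hemisphere. A small sketch, assuming the standard aberration formula with c = 1: every source-frame ray with angle theta < arccos(-beta) from the motion direction lands in the forward hemisphere, and for an isotropic distribution the fraction of rays inside angle theta_max is (1 - cos(theta_max)) / 2.

```python
import math

def forward_fraction(beta):
    """Fraction of isotropically emitted rays (source frame) that arrive
    in the observer's forward hemisphere, by the aberration map."""
    theta_max = math.acos(-beta)           # source-frame cutoff angle
    return (1.0 - math.cos(theta_max)) / 2.0

for beta in (0.0, 0.5, 0.8, 0.99):
    print(beta, forward_fraction(beta))    # 0.5, 0.75, 0.9, 0.995
```

The fraction works out to (1 + beta)/2, so at beta = 0.8 fully 90% of the light goes forward: same speed in every direction, very different intensity.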
 
Last edited:
  • #82
JDoolin said:
By point, did you mean a stationary point in the observer's reference frame
This.
 
  • Like
Likes JDoolin
  • #83
Sorry about that. I have a tendency to compulsively edit my posts for a few minutes after posting. I may have added about three paragraphs since your response.
 
  • #84
m4r35n357 said:
I think the approach taken by Real Time Relativity is "purer" mathematically.
After reading this:
http://people.physics.anu.edu.au/~cms130/RTR/Physicist.html
"The 2D screen image is created using the computer graphics technique known as environment mapping, which renders the 3D virtual world onto a 2D cube map."

I'm not sure if this accounts for differential signal delays, which are key to the visual effects for close passing by objects discussed here. It depends how it "renders the 3D virtual world onto a 2D cube map". The 4D-raytracing approach seems to be the most general to me.
 
  • #85
JDoolin said:
So if by isotropic you mean "the same speed"
This.
 
  • Like
Likes JDoolin
  • #86
Ken G said:
No, I agree that the standard meaning of "see" is much broader than the limited meaning applied by Terrell. And I appreciate your efforts to elucidate all the various factors here!

What I'm actually saying is that it is the conclusion of Baez that appears to be correct, and I did not see that before. Baez' claim is that two cameras taking a picture at the same place and time will always photograph small shapes the same. The shapes may appear at different places in the visual field if one of the cameras is subject to aberration, and there can also be some changes in total angular size relating to similar issues, but the two shapes will be the same, i.e., a plus sign seen as having a given contrast in the lengths of its pieces in one photograph will have that same contrast in the other photograph as well, and that would not be true in Galilean relativity it seems. How to express that fact in words is a bit tricky!

And I claim that is false and provable from my detailed analysis of the moving rod. If you had a cross (with equal arms in its rest frame) with one arm in the direction of motion, then there would be one moment when the length of one arm would be shorter by gamma than the other arm. A camera at rest with respect to the cross at the same time would not see this. This is actually in agreement with Terrell (but not Baez, if you quote him correctly). Terrell would say that the moving cross looks rotated such that one arm does have a shorter angular span than the other. My analysis agrees with rotation (as one possible visual interpretation) in that the markings on the shorter arm would be stretched on one side and compressed on the other in a way that precisely matches rotation. However, if you imagined this arm parallel to the motion as hollow, moving along a rigid rod at rest with respect to the camera, you would be forced to re-interpret the same image as contraction with stretching and compression, because rotation could no longer be sustained as a visual interpretation. Penrose actually makes this point in his book "Road to Reality": the rotation interpretation would be interfered with if you introduce other objects moving at different speeds (he mentioned a track rather than the hollowed-out arm I proposed).

[I used brute force ray tracing in my analysis with no prior assumptions. I posted the resulting formulas, but not the derivation. If I have time at some point, I may post the derivation (it is actually only a page long on my hand written sheet). ]

[Edit1: One caveat is that I have not analyzed the image for the camera co-moving with the cross, located at the same place and time as the camera taking the image I described above. It is possible that such an analysis could vindicate Baez as follows: in the frame of this camera co-moving with the cross, the cross is not being viewed head on, but substantially displaced; that is, at rest but far from head on in the direction of one arm. Then that arm would subtend less angle, and show distortion consistent with rotation. If that is the case, then Baez is vindicated (in the sense that both cameras would see a distorted cross). Again, if I have time in the future I will try to check whether this is what occurs. At this moment, I am suspecting that it does, and Baez is right, but in a different sense than Ken G. implies above.]

[Edit2: Further, if I am right about how Baez is correct in a certain sense, then if you start from a camera at rest with respect to the cross looking head on, and ask about a camera passing by at high speed snapping at that moment, what it would see is a non-distorted cross shifted a good distance forward, such that the light-delay-induced stretching compensates for the length contraction. Thus, a big part of this is simply that what one camera sees as 'head on' the other [momentarily colocated camera] sees as displaced in such a way that the combination of effects (displacement, light delay, contraction) preserves the shape.

So, while this is all interesting, it remains true for watching the cross go by:

1) There will be a time when one of its arms is shorter by gamma (and, at this time, it will look like it is being viewed head on - equidistant on either side of your line of sight). It is really hard for me to accept any definition of 'seeing' that doesn't call this directly seeing length contraction (despite the somewhat perverse way Baez's claim may remain true, as outlined in edit1).

2) At all times, length contraction is visible in the obvious way that if you account for light delays without assuming length contraction, you predict the wrong image. Thus what you see is at all times directly seeing what is expected by length contraction, and not what you would see without it.

]
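Point 2 above, that you must combine light delay with length contraction to predict the image, can be sketched with a tiny retarded-time calculation. This is my own illustration, not the poster's unpublished formulas: a contracted rod slides past a pinhole camera at the origin, and each endpoint is imaged where it was when its light left it (c = 1; all numbers are arbitrary example values).

```python
import math

def emission_time(x0, beta, d):
    """Retarded time t_e < 0 at which light leaving the point whose worldline
    is x(t) = x0 + beta*t, y = d reaches the origin at t = 0 (c = 1).
    Solves (x0 + beta*t)^2 + d^2 = t^2."""
    a = beta**2 - 1.0
    b = 2.0 * beta * x0
    c = x0**2 + d**2
    disc = math.sqrt(b * b - 4.0 * a * c)
    # a < 0 and disc > |b|, so this root is the negative (past) one:
    return (-b + disc) / (2.0 * a)

# Example: rod of rest length 1, speed 0.8, passing at distance 10,
# centered at the origin's closest-approach point at t = 0.
beta, d, L = 0.8, 10.0, 1.0
gamma = 1.0 / math.sqrt(1.0 - beta**2)
x_front, x_back = L / (2 * gamma), -L / (2 * gamma)   # contracted at t = 0

# Apparent x-positions: where each end *was* when its light left it.
xs = [x + beta * emission_time(x, beta, d) for x in (x_front, x_back)]
apparent_width = xs[0] - xs[1]
print(apparent_width)   # wider than L: light delay stretches the approaching rod
```

In this snapshot the camera photographs the rod well before closest approach (the apparent positions are far to the left), and the trailing-end delay stretches the image beyond even the rest length; drop the 1/gamma factor from the endpoint positions and you predict a different, wrong image.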
 
Last edited:
  • Like
Likes mattt and JDoolin
  • #87
m4r35n357 said:
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!

I just now played a slower speed of light, on my machine... Although my graphics card is woefully insufficient for smooth graphics, I was able to get through the game at lowest resolution and some choppy graphics. I thought it was quite nice; artistically done, and enjoyable and probably a lot more fun on a gaming computer. There was a question I have related to the graphics though.

When I got up to 30, 40, 50 percent of the speed of light, I was happy to find, as I would expect, that when I accelerated toward objects, they immediately receded into the background, and when I backed up, the objects sprang forward. That is totally what I would have expected from aberration.

However, I was trying hard to watch the cross-sections of objects against the ground. While the ground in front of me stretched out significantly, the actual mushroom and hut cross-sections against the ground did not seem to be stretched. So I'm wondering: am I actually seeing the phenomenon that Penrose, Terrell and/or Baez is talking about? Or did the programmers shortcut the rendering of the huts, and just render them as circular huts and circular mushrooms? If the demonstration represents an accurate rendering, I'll have to eat my words... but I can't imagine how the ground could appear stretched while the objects along the ground do not appear stretched by the same ratio.

I couldn't see any aberration in the shapes of individual objects until the very end, when I collected the final watermelon. Then the aberration in the yz-plane (away and vertical) became plainly visible. However, I still don't think I saw aberration in the xy-plane. (away and horizontal.)
 
Last edited:
  • #88
PAllen said:
And I claim that is false and provable from my detailed analysis of the moving rod. If you had a cross (with equal arms in its rest frame) with one arm in the direction of motion, then there would be one moment where the length of one arm would be shorter by gamma than the other arm.
Yes, I originally thought that had to make Baez wrong. But it's not something he would likely get wrong, so it's very odd. I agree that if you put the moving camera right across from the moving cross, it has to see a symmetric cross at all times. But it certainly doesn't seem like the stationary camera will see a symmetric cross when the two cameras coincide in that case (though I suggested a different case where it seems like they might both see a contracted horizontal arm), because in this case, the image at that moment won't look like it is at the point of closest approach, it will look like it hasn't gotten there yet. But that should still make it look asymmetric. So we seem to have a case where one image looks symmetric, and the other doesn't. But this is in stark contrast to the conclusion of both Terrell and Baez, who cite high-powered mathematics and seem to understand exactly what they are saying.
What Baez says is:
"First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit. "
This seems to also be said by Terrell:
" Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio, irrespective of their velocity relative to the meter stick."
Terrell would say that the moving cross looks rotated such that one arm does have shorter angular span than the other.
Terrell does at one point say it won't look contracted, it will look rotated, but a rotated cross does not look like "precisely the same picture".
My analysis agrees with rotation (as one possible visual interpretation) in that the markings on the shorter arm would be stretched on one side and compressed on the other in a way that precisely matches rotation.
I don't understand that; wouldn't rotation contract all the tickmarks on the horizontal arm? But more importantly, we cannot even compare different tickmark lengths, because both Terrell and Baez are talking about an effect that is first order in smallness (the "infinitesimally small limit"), so no second-order terms like a gradient in the stretching.
However, if you imagined this arm parallel to motion as hollow, moving along rigid rod at rest with respect to the camera, you would be forced to re-interpret the same image as contraction with stretching and compression, because rotation could no longer be sustained as a visual interpretation. Penrose actually makes this point in his book "Road to Reality" that the rotation interpretation would interfered with if you introduce other objects moving at different speeds (he mentioned a track rather than hollowed out arm I proposed).
But note that does not say the stationary camera would not also see something it could interpret as a rotation, so this doesn't speak to the issue of differences between the images.
It is possible that such an analysis could vindicate Baez as follows: in the frame of this camera co-moving with the cross, the cross is not being viewed head on, but substantially displaced; that is, at rest but far from head on in the direction of one arm. Then that arm would subtend less angle, and show distortion consistent with rotation. If that is the case, then Baez is vindicated (in the sense that both cameras would see a distorted cross). Again, if I have time in the future I will try to check whether this is what occurs. At this moment, I am suspecting that it does, and Baez is right, but in a different sense than Ken G. implies above.]
Actually, the scenario you describe here is exactly the one I described above. (Ignore the false turn on aberration, you are right that aberration only appears for a camera we regard as moving.) I figured that's what made Baez right, but what about the case where the moving camera is directly across from the cross, where it will always see a symmetric cross-- how could the stationary camera see that when they coincide? It doesn't seem like the stationary camera would ever see that, but if it ever does, then Baez must be right. If not, then I'm confused about what they mean by preserving the "shape of an object"-- if I take a cross in my hand, and rotate it, is that the same shape only rotated, or a different shape?
 
Last edited:
  • #89
JDoolin said:
If the demonstration represents an accurate rendering, I'll have to eat my words... but I can't imagine how the ground could appear stretched while the objects along the ground do not appear stretched by the same ratio.
Could it be the difference between first-order-small effects that don't show any difference, and larger-solid-angle pictures where you start to see the distortions? Terrell only ever claimed that small shapes appeared the same, larger images require some type of cobbling together that might involve bringing in "non-literal" information, analogous to how all local frames in GR are Minkowski but the equivalence principle breaks down on larger scales.
 
Last edited:
  • #90
I'll comment more later, for now, just this.
Ken G said:
Terrell does at one point say it won't look contracted, it will look rotated, but a rotated cross does not look like "precisely the same picture". I don't understand that; wouldn't rotation contract all the tickmarks on the horizontal arm?
Not at all. One side gets closer to you, the other side further away. The angle subtended by markings closer to you will be greater than those further away. The result I got for this effect precisely matches rotation, so I find it very hard to believe this is not what Terrell is referring to. Penrose also describes this effect.
Ken G said:
But more importantly, we cannot even compare different tickmark lengths, because both Terrell and Baez are talking about an effect that is first order in smallness (the "infinitesimally small limit"), so no second-order terms like a gradient in the stretching.
I don't necessarily think their results are that limited. A small object can still have markings on it.
Ken G said:
Actually, the scenario you describe here is exactly the one I described above. (Ignore the false turn on aberration, you are right that aberration only appears for a camera we regard as moving.) I figured that's what made Baez right, but what about the case where the moving camera is directly across from the cross, where it will always see a symmetric cross-- how could the stationary camera see that when they coincide? It doesn't seem like the stationary camera would ever see that, but if it ever does, then Baez must be right. If not, then I'm confused about what they mean by preserving the "shape of an object"-- if I take a cross in my hand, and rotate it, is that the same shape only rotated, or a different shape?

When what you call the stationary camera coincides and sees a symmetric cross, in that camera's frame its viewing is NOT head on at that event. Aberration will have changed the incoming light angle such that the image appears to still be approaching, and the light-delay stretching will compensate for the length contraction such that it produces a symmetric photograph. The one case I can't quite resolve is that a rapidly approaching cross still far away will have the parallel (to motion) arm greatly stretched by light delay (by much more than length contraction can compensate, when it is far away). I can't find an all-stationary analog of this case.

[edit: I think I resolved this last case, so there are no discrepancies between my understanding and Terrell (Baez?), except for describing any of this as not seeing length contraction.

The resolution for approach is to consider that a camera stationary relative to the cross, but displaced, sees some subtended angle for the shorter arm, e.g. 3 degrees, with a viewing angle of, say, 40 degrees to the left. Then a moving camera approaching the cross, momentarily coinciding with this camera, sees the same 3-degree subtended angle, but the viewing angle is interpreted as much more than 40 degrees off head-on. Thus, compared to a similar cross stationary with respect to this 'moving camera', at the same viewing angle, the moving cross will appear to have one arm very elongated.

Properly accounting for frame dependence of viewing angle appears to resolve all remaining anomalies, as I see it.]
 
Last edited:
  • #91
PAllen said:
Not at all. One side gets closer to you, the other side further away.
Not in the limit of infinitesimally small images; in that limit, a rotation will contract uniformly along the horizontal direction. It has to be a linear transformation.
The angle subtended by markings closer to you will be greater than those further away. The result I got for this effect precisely matches rotation, so I find it very hard to believe this is not what Terrell is referring to. Penrose also describes this effect.
I don't understand how you can get that it exactly matches rotation; does not the scale of the effect you describe depend on the ratio of how wide the cross is to how far away it is? But that ratio doesn't appear in the analysis; it is a limit.
I don't necessarily think their results are that limited. A small object can still have markings on it.
Yes, but all transformations on those markings must be linear, so no gradients in what happens to them.
When what you call the stationary camera coincides and sees a symmetric cross, in that camera's frame its viewing is NOT head on at that event.
This is the scenario that is not clear to me-- I can't see if there will ever be a time when the stationary camera sees a symmetric cross. But for Terrell and Baez to be right, there must be such a time, and it must be when the moving camera directly opposite the cross coincides with the stationary camera.
Aberration will have change the incoming light angle such that the image appears to still be approaching, and the light delay stretching will compensate for the length contraction such that it produces a symmetric photograph.
When the moving camera is directly opposite the cross, we can certainly agree the cross will appear symmetric. You are explaining how that is reckoned in the frame of the stationary camera, but in any event we know it must be true.
The one case I can't quite resolve is that a rapidly approaching cross still far away will have the parallel (to motion) arm greatly stretched by light delay (by much more than length contraction can compensate, when it is far away). I can't find an all stationary analog of this case.
But are you also including the rotation effect, not just length contraction? Since none of the locations we could put the moving camera will ever see a horizontal arm that is wider than the vertical arm, it must hold that the stationary camera never sees that either.

But I think your point that the horizontal arm is stretched by time delay effects is the crucial reason that there is a moment when the cross looks symmetric to the stationary camera. I believe that moment will also be when the camera directly opposite from the cross passes the stationary camera. That is, it is the moment when the cross is actually at closest approach. A moment like that would make Baez right. Note that for any orientation of the moving camera, relative to the comoving cross, there is only one moment when the stationary camera needs to see the same thing-- the moment when the two cameras are coincident. If we imagine a whole string of moving cameras, then the stationary camera will always see what the moving camera sees that is at the same place as the stationary camera-- but at no other time do they need to see the same thing.

If so, this means it is very easy to tell what shape the stationary camera will photograph-- simply ask what the moving camera would see that is at the same place when the stationary camera takes its picture, and what the moving camera sees just depends on its location relative to the comoving object. You just have to un-contract the string of moving cameras, and measure their angle to the object, and that's the angle of rotation the stationary observer will see at that moment.
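For concreteness, the aberration step in this recipe can be sketched numerically. This is only an illustrative sketch, not anything from the posts above: the function name and the sign convention (angles measured from the direction of relative motion, camera moving toward the scene) are my own choices.

```python
import math

def aberrated_angle(theta_rest, beta):
    """Map a light ray's arrival angle in the object's rest frame (theta_rest,
    in radians, measured from the direction of relative motion) to the arrival
    angle recorded by a camera moving at speed beta (in units of c) along that
    direction."""
    cos_prime = (math.cos(theta_rest) + beta) / (1.0 + beta * math.cos(theta_rest))
    return math.acos(cos_prime)

# A ray arriving broadside (90 degrees) in the rest frame is beamed forward
# to about 60 degrees for a camera moving at beta = 0.5.
print(math.degrees(aberrated_angle(math.pi / 2, 0.5)))
```

The "un-contract the camera string and read off the angle" prescription amounts to evaluating this map for the ray joining the comoving camera to the object.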
 
Last edited:
  • #92
Ken G said:
Not in the limit of infinitesimally small images; in that limit, a rotation will contract uniformly along the horizontal direction. It has to be a linear transformation.
I disagree on how restrictive the Terrell/Baez conclusion is. It may only be exact in some limit, but it is good for 'reasonably small' objects.
Ken G said:
I don't understand how you can get that it matches rotation, does not the scale of the effect you describe depend on the ratio of how wide the cross is, to how far it is away? But that ratio doesn't appear in the analysis, it is a limit.
No, I disagree. Moving a tilted ruler further away linearly scales the image, but does not change the ratio of subtended angle at one end compared to the other for e.g. centimeter markings. Per my computation, the effect does match that produced by rotation. [edit: well, as long as you are not too close. Once you are far enough that further distance is linear shrinkage, just imagine the tilted ruler against a non-tilted ruler. The whole image scales, thus preserving the ratio of subtended angle between a closer inch and a further inch, on the tilted ruler] [edit 2: OK, I see that if you allow the angular span of a tilted ruler to go to zero with distance, the ratio of angles subtended by ruler lines goes to 1. However, if you fix the angular span of a tilted ruler (e.g. 2 degrees), then distance doesn't matter and the ratio of front and back ruler lines remains constant. This is what I was actually modeling when comparing to rotation - all angles. I remain convinced that the rotation model is quite accurate for small, finite spans, e.g. several degrees.]
Ken G said:
Yes, but all transformations on those markings must be linear, so no gradients in what happens to them.
I disagree. I claim the result goes beyond this.
Ken G said:
This is the scenario that is not clear to me-- I can't see if there will ever be a time when the stationary camera sees a symmetric cross. But for Terrell and Baez to be right, there must be such a time, and it must be when the moving camera directly opposite the cross coincides with the stationary camera.
When the moving camera is directly opposite the cross, we can certainly agree the cross will appear symmetric. You are explaining how that is reckoned in the frame of the stationary camera, but of course we know it must be true.
I think this is the crucial issue-- it is that stretching that allows the cross to have a moment when it looks symmetric to the stationary camera, and I believe that moment will also be when the camera directly across from the cross passes the stationary camera. That would make Baez right.

I agree, and I thought that's what I was explaining in my last few posts.

Where I continue to disagree (with you, but I think agree with Terrell and Penrose) is that I think there is more to image rotation than you want to admit. Consider it from another angle, so to speak. Imagine a camera and cross stationary with respect to each other, but with the cross displaced from head on. One arm will subtend less angle than the other, and it would look (exactly) rotated relative to a colocated stationary cross turned head on to the camera. To be concrete, let us imagine the displacement is to the left. Now add a camera moving left to right, past this stationary camera. It will see a viewing angle (for the appropriate set up) of 'head on' due to aberration of viewing angle. The image seen by the stationary camera will have moved to a perpendicular viewing angle, but otherwise essentially unchanged. Thus it will see a rotated image in the head on viewing angle, with the rotation producing the contraction and explaining the distribution of ruler lines on this cross arm. Then, my final comment is that this is only one way to interpret the image. If you introduce another element that establishes rotation could not have occurred, you change your interpretation to contraction and distortion - that happens to match rotation.
 
Last edited:
  • #93
I will attempt another summary, similar to #49, that includes a full understanding of Terrell/Penrose (I haven't looked as much at Baez), explaining my view that, while accurate when properly understood, common statements of these results are inaccurate.

1) A common sense definition of 'seeing length contraction' means seeing it with knowledge of the object's rest characteristics. It is only relative to that knowledge that there is any meaning to 'contraction'.

2) There are obvious ways to directly measure/see any changes in cross section implied by the coordinate description. Simply have the object pass very close to a sheet of film, moving along it (not towards or away) and have a bright flash from very far away so you get as close as you want to a plane wave. Then circles [and spheres] becoming ovals, and every other aspect of the coordinate description, will be visible. Note that in a frame co-moving with the object, the plane wave reaching the film will be considered angled, and the exposure non-simultaneous. It is precisely because this method directly measures simultaneity across a surface in a given frame that it directly detects the coordinate description of length contraction.

3) The impact of light delays on idealized camera image formation has nothing to do with SR. However, it combines with SR in such a way that, under a common sense definition of 'see', length contraction is always visible (if it occurs; it does not, e.g., for objects fully embedded in the plane perpendicular to their motion). That is, if you establish what you would see from light delay under the assumption that the object didn't contract, and compare to what you would see given the contraction, they are different. You have thus seen (the effect of, and verified) length contraction.

4) To my mind, a correct description of the Terrell/Penrose result is that they have described a much more computationally elegant way (compared to ray tracing) to arrive at the image detected by any idealized camera, one that often allows a qualitative result with no computation at all.

A) Instead of ray tracing based on world tube representation of the object, simply represent the image in terms of angles for a camera at rest with respect to the object at the detection event of interest. Then apply aberration to get the angular displacement of all detected rays in a camera moving in any way at this same event. This method is completely general and exact, up to having a frame in which you can ignore the object's motion (e.g. for a swirling gas cloud where you care about the details, there is no small collection of rest frames you can use). Given the static nature of the analysis before applying aberration, this is a huge simplification.

B) For objects of smallish size (not just infinitesimal objects; size defined by subtended angle), the result of (A) is (to good approximation) to shift the stationary (with respect to object) camera image to a different viewing position (with some scaling as well). This implies apparent visual rotation in a substantive sense. Viewing a sphere with continents on it, from a moving camera, the apparent hemisphere seen will correspond to a different viewing angle than the one you are sighting along. The markings on a rod will appear distorted (relative to what is expected for the viewing angle of the moving camera) as if rotated by the change in viewing angle between the stationary and moving cameras. All of these results can be had, much more laboriously, by direct ray tracing in the frame of the moving camera, with the object properly represented as a world tube.

C) Summarizing A) and B) as "invisibility of length contraction" is physically absurd, not just because of the logical point made in (3), but also because, if additional elements are introduced into the visual scene that are stationary with respect to the camera considered moving in the 4)A) analysis, you will see that the apparent rotation of the image of the moving object is illusory, and must be replaced by an alternate interpretation of the same image - that 'actual' contraction plus light delay is the only interpretation consistent with the whole scene.
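To make point 3 above concrete, here is a minimal numerical sketch for the simplest head-on geometry (the function name and the sample numbers are illustrative, not from the thread). The light-delay stretch factor 1/(1 - beta) applies whether or not the rod contracts, so the photograph distinguishes a contracted rod from an uncontracted one.

```python
import math

def apparent_length(rest_length, beta, apply_contraction=True):
    """Photographed length of a rod approaching the camera head-on along the
    line of sight.  Light from the far end left earlier, when that end was
    farther away, stretching the image by 1/(1 - beta); Lorentz contraction
    (if applied) first shrinks the rod by 1/gamma."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    length = rest_length / gamma if apply_contraction else rest_length
    return length / (1.0 - beta)

beta = 0.8
with_sr = apparent_length(1.0, beta)            # about 3: stretched, but less
without_sr = apparent_length(1.0, beta, False)  # about 5: no-contraction case
print(with_sr, without_sr)
```

Both images are longer than the rest length, but they differ by the factor gamma, which is the sense in which the contraction has been "seen."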
 
Last edited:
  • #94
PAllen said:
I will attempt another summary, similar to #49, that includes a full understanding of Terrell/Penrose (I haven't looked as much at Baez), explaining my view that, while accurate when properly understood, common statements of these results are inaccurate.

1) A common sense definition of 'seeing length contraction' means seeing it with knowledge of the object's rest characteristics. It is only relative to that knowledge that there is any meaning to 'contraction'.
...
...
C) Summarizing A) and B) as "invisibility of length contraction" is physically absurd, not just because of the logical point made in (3), but also because, if additional elements are introduced into the visual scene that are stationary with respect to the camera considered moving in the 4)A) analysis, you will see that the apparent rotation of the image of the moving object is illusory, and must be replaced by an alternate interpretation of the same image - that 'actual' contraction plus light delay is the only interpretation consistent with the whole scene.
This seems well argued, but I have always had a problem with 'actual contraction'. If you mean what I think you mean, I don't see how the object can have a different 'actual contraction' for different observers. I understand that different observers might 'measure' a contracted length, but it is a frame-dependent measurement. In its rest frame the object does not experience contraction.
 
  • #95
PAllen said:
[edit 2: OK, I see that if you allow the angular span of a tilted ruler to go to zero with distance, the ratio of angles subtended by ruler lines goes to 1. However, if you fix the angular span of a tilted ruler (e.g. 2 degrees), then distance doesn't matter and the ratio of front and back ruler lines remains constant. This is what I was actually modeling when comparing to rotation - all angles. I remain convinced that the rotation model is quite accurate for small, finite spans, e.g. several degrees.]
This is the crux of the matter; it is what I find confusing about the language relating to "rotation." A rotation looks different at different angular sizes, because of how it makes some parts get closer and other parts farther away. Is that being included, or just the first-order foreshortening? And what angular scales count as "sufficiently small"? Baez said:
"Well-known facts from complex analysis now tell us two things. First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit."

I interpreted that to mean the shapes are only preserved in the infinitesimally small limit, i.e., for the Lorentz-contracted cross to look like a rotated cross, it has to be infinitesimally small, so this would not include how the forward-tilted arm can look longer than the backward-tilted arm on a large enough angular scale. You are saying I am overinterpreting Baez here, and what's more, your own investigation shows a connection between that longer forward arm and what Lorentzian relativity actually does. So perhaps Baez missed that, or did not mean to imply what I thought he implied.

This is what Terrell says in his abstract:
"if the apparent directions of objects are plotted as points on a sphere surrounding the observer, the Lorentz transformation corresponds to a conformal transformation on the surface of this sphere. Thus, for sufficiently small subtended solid angle, an object will appear-- optically-- the same shape to all observers."

The answer must lie in the meaning of having a conformal transformation on the sphere of apparent directions. If we use JDoolin's asterisk, instead of a cross, we can see that a rotation will foreshorten the angles of the diagonal arms, and it is clear that a conformal transformation will keep those angles fixed, so certainly Terrell is saying that the Lorentz contraction will foreshorten the angles in exactly the same way. But what about the contrast in the apparent lengths of the arms tilted toward us and the arms tilted away, is that contrast also preserved in the conformal transformation? You are saying that it is, and that seems to be the key issue. We do have one more clue from Terrell's abstract:
"Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio"
So the word "precisely" says a lot, but what is meant by this change in scale, and is that change in scale uniform or only locally determined? You are saying that it looks precisely like a rotation, including the contrast between the fore and aft distortions, not just the first-order foreshortening effect.

A sphere with continents on it might be a good case to answer this. We all agree the sphere still looks like a sphere, and in some sense it looks rotated because we see different continents than we might have expected. But the key question that remains open is, do the continents in the apparent forward regions of the sphere appear larger than the continents in the most distant parts of the sphere, or is that element not preserved in the conformal transformation between the moving and stationary cameras? I agree Terrell's key result is essentially that it is easier to predict what you will see for small shapes by using the comoving camera at the same place and time as the stationary camera, but what we are wondering about is over what angular scale, and what types of detail, we should expect the two photos to agree on. The mapping between the two cameras is conformal, but it is not the identity mapping, so can we conclude the continents will look the same size in both photos? Certainly distortions on the surfaces of large spheres should look different between the two photos, but even large spheres will still look like spheres.
 
Last edited:
  • #96
Mentz114 said:
This seems well argued but I have always had a problem with 'actual contraction'. If you mean what I think you mean, I don't see how the object can have a different 'actual contraction' for different observers. I understand that different observers might 'measure' a contracted length, but it is a frame dependent measurement. In its rest frame the object does not experience contraction.
Substitute "actual contraction per some camera's frame of reference", if you prefer. The contraction is actual up to inclusion of interaction fields, e.g. an EM model of an object moving in some frame will have the EM field represented such that equilibrium distances of moving charges will be closer than modeled in a frame where the charges are not moving.
 
  • #97
Ken G said:
Could it be the difference between first-order-small effects that don't show any difference, and larger-solid-angle pictures where you start to see the distortions? Terrell only ever claimed that small shapes appeared the same; larger images require some type of cobbling together that might involve bringing in "non-literal" information, analogous to how all local frames in GR are Minkowski but the equivalence principle breaks down on larger scales.

Actually, I'm starting to think maybe the game designers rendered some of the objects in the game with the full aberration effect, and other objects in the game without it.

Here are four screen-captures from the promotional video at http://gamelab.mit.edu/games/a-slower-speed-of-light/

2015-05-04-RelativisticAberrationFormula-screenshots.PNG

This is a very short part of the promotional video, but it captures several things. For instance, the distance between the two poles increases when the observer moves to the right, and it shrinks when the observer moves to the left (so long as the poles are on the right side of the observer's view).

Looking at the warping of this one structure in the game, it seems like they attempted to get the shapes right. The circles on the ground don't look quite circular, but rather like flattened ovals, as I think they should.

2015-05-04-RelativisticAberrationFormula-screenshots02.PNG
 
  • #98
Ken G said:
This is the crux of the matter; it is what I find confusing about the language relating to "rotation." A rotation looks different at different angular sizes, because of how it makes some parts get closer and other parts farther away. Is that being included, or just the first-order foreshortening? And what angular scales count as "sufficiently small"? Baez said:
"Well-known facts from complex analysis now tell us two things. First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit."

I interpreted that to mean the shapes are only preserved in the infinitesimally small limit, i.e., for the Lorentz-contracted cross to look like a rotated cross, it has to be infinitesimally small, so this would not include how the forward-tilted arm can look longer than the backward-tilted arm on a large enough angular scale. You are saying I am overinterpreting Baez here, and what's more, your own investigation shows a connection between that longer forward arm and what Lorentzian relativity actually does. So perhaps Baez missed that, or did not mean to imply what I thought he implied.

This is what Terrell says in his abstract:
"if the apparent directions of objects are plotted as points on a sphere surrounding the observer, the Lorentz transformation corresponds to a conformal transformation on the surface of this sphere. Thus, for sufficiently small subtended solid angle, an object will appear-- optically-- the same shape to all observers."

The answer must lie in the meaning of having a conformal transformation on the sphere of apparent directions. If we use JDoolin's asterisk, instead of a cross, we can see that a rotation will foreshorten the angles of the diagonal arms, and it is clear that a conformal transformation will keep those angles fixed, so certainly Terrell is saying that the Lorentz contraction will foreshorten the angles in exactly the same way. But what about the contrast in the apparent lengths of the arms tilted toward us and the arms tilted away, is that contrast also preserved in the conformal transformation? You are saying that it is, and that seems to be the key issue. We do have one more clue from Terrell's abstract:
"Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio"
So the word "precisely" says a lot, but what is meant by this change in scale, and is that change in scale uniform or only locally determined? You are saying that it looks precisely like a rotation, including the contrast between the fore and aft distortions, not just the first-order foreshortening effect.

Focusing on Terrell's statement above, and on my description of the exact method of 4)A) in post #93, the key is that the conformal transform is applied to the image from a different viewing angle - thus it preserves the angular distortion produced by the overall shift in viewing angle of the smallish object. What is conformally mapped is the image from the camera stationary with respect to the object. But this image, even to first order, is rotated by the overall change in viewing angle for the moving camera, compared to what the moving camera would expect at its apparent viewing angle.
 
  • #99
PAllen said:
Focusing on Terrell's statement above, and on my description of the exact method of 4)A) in post #93, the key is that the conformal transform is applied to the image from a different viewing angle - thus it preserves the angular distortion produced by the overall shift in viewing angle of the smallish object. What is conformally mapped is the image from the camera stationary with respect to the object. But this image, even to first order, is rotated by the overall change in viewing angle for the moving camera, compared to what the moving camera would expect at its apparent viewing angle.
Yes, the globe with continents will show different continents from what would be expected if the globe was not in relative motion. The question is, will the continents look larger in the forward parts, as a static image would give, or will their relative sizes be distorted from that? In other words, the conformal transformation maps spheres to spheres, but it need not be the identity mapping on the surfaces of those spheres, so distortions can appear between the two photographs if the spheres are not small. It is not clear that aspect of what a "rotation" does is intended to be taken literally in Terrell-Penrose rotation, it might just be the fact that you see the different continents and no more than that can be relied on in general. It seems to me what is crucial is that a cross seen by a comoving camera directly across from it will look symmetric, so a stationary camera that sees the cross as moving must at the appropriate moment also see the cross as symmetric, that's the first-order "invisibility" of the length contraction. We are wondering if there is also a higher-order effect, where you can take contrasts in the fore and aft parts of the rotated object as part of that "invisibility" as well.
 
  • #100
Ken G said:
Yes, the globe with continents will show different continents from what would be expected if the globe was not in relative motion. The question is, will the continents look larger in the forward parts, as a static image would give, or will their relative sizes be distorted from that? In other words, the conformal transformation maps spheres to spheres, but it need not be the identity mapping on the surfaces of those spheres, so distortions can appear between the two photographs if the spheres are not small. It is not clear that aspect of what a "rotation" does is intended to be taken literally in Terrell-Penrose rotation, it might just be the fact that you see the different continents and no more.
I am not sure how to convince you. The aberration is applied to the rays forming an image at viewing angle x. To first order, for modest subtended angle, it rotates all the rays by the change in viewing angle. This produces a distortion in the positions of ruler lines that I independently verified with direct ray-tracing computation. Perhaps I overstated 'precise' - my computational comparison was numerical to 4 significant digits, for a two-degree subtended ruler.

Consider, for example, the camera stationary with respect to a ruler viewed off to the left. Suppose the angle between 1 cm markings is .02 degrees on one side and .01 degrees on the other side. If all of these rays are rotated by the overall aberration change in viewing angle, these angles are preserved.
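This claim is easy to check numerically. The following is only an illustrative sketch: the sign convention for the aberration map and the sample numbers (which mimic the .02/.01 degree markings) are my own choices, not from the posts.

```python
import math

def aberrated_angle(theta, beta):
    # Rest-frame arrival angle (radians, from the direction of relative
    # motion) -> arrival angle for a camera moving at speed beta (units of c).
    return math.acos((math.cos(theta) + beta) / (1.0 + beta * math.cos(theta)))

beta = 0.6
# Three rays bounding two adjacent ruler intervals, subtending 0.02 and 0.01
# degrees near a 60-degree viewing angle (hypothetical numbers).
rays = [math.radians(60.0 + d) for d in (0.0, 0.02, 0.03)]
ratio_rest = (rays[1] - rays[0]) / (rays[2] - rays[1])  # exactly 2
mapped = [aberrated_angle(t, beta) for t in rays]
ratio_moving = (mapped[1] - mapped[0]) / (mapped[2] - mapped[1])
print(ratio_rest, ratio_moving)  # the 2:1 ratio survives to several digits
```

Each interval shrinks by the local derivative of the aberration map, so for spans this small the ratio of subtended angles is preserved to high accuracy, which is the sense in which the image matches a rotation.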
 
