# Lorentz Transformations and Photon Delay

For fun, I'm writing a simple special relativity simulator with a much smaller speed of light so that relativistic effects are clear even at low speeds. I already have time dilation and speed of light delay working. However, right now, the speed of light does NOT always appear to be the same for all observers.

The main problem is computing where something appears to be from the perspective of the user.

The speed of light is not infinite. So when you look at something, you see it where it used to be. Ignoring special relativity, but still including a finite speed of light, let's compute where you see it. Let p(T) be the location of the object at time T. Let u(T) be your location at time T.

Let T be how "late" the image of the object is: right now, you see the object where it was at time (now-T).

Let c be the speed of light in meters per second. Assume that all distance/location units are in meters.

Light travels at speed c. Therefore, the light you are seeing now was emitted T seconds ago at a point c*T meters away:

distance( p(now - T), u(now) ) = c * T

Assuming that the object in question never moves faster than the speed of light, this equation always has a unique solution T: the distance term changes by less than c per second of T, while the right-hand side grows at exactly c. For my purposes, this is enough to compute T.
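As a minimal sketch of how T could be found numerically (the reduced value C = 10 m/s, the function names, and 2D positions here are illustrative assumptions, not the simulator's actual code), bisection works because the error function is strictly decreasing:

```python
import math

C = 10.0  # reduced speed of light for the simulation, in m/s (assumed value)

def light_delay(p, u_now, now, t_max=1e6, iters=60):
    """Solve distance(p(now - T), u(now)) = C * T for the delay T.

    p: function from a global time to the object's (x, y) position.
    u_now: the user's (x, y) position at time `now`.

    f(T) = distance - C*T is positive at T = 0 (unless the object sits
    on top of the user) and strictly decreasing, since the distance term
    changes by less than C per second while C*T grows at exactly C, so
    bisection converges to the unique root.
    """
    def f(T):
        px, py = p(now - T)
        ux, uy = u_now
        return math.hypot(px - ux, py - uy) - C * T

    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid  # distance still exceeds C*T: light needs more travel time
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For a stationary object 20 m away with c = 10 m/s, this returns T ≈ 2 s, as expected.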

Suppose you're in a spaceship moving near the speed of light and you emit a flash of light in all directions around you. Under special relativity, from your perspective, the light beams should always look like they are centered around you, even as you move, because that's what it would look like if you were stationary. However, in the simulation, they do what you would expect in Newtonian mechanics: they form a sphere around the location that you emitted them from. So it's not right yet.

----

I compute everything relative to a nonaccelerating global reference frame. When I want to draw something, I compute where it should appear to be in the reference frame of the user. Similarly, I alter the time-step of the program so that when the user is travelling close to the speed of light, stationary clocks appear to move faster.

Let's have a short thought experiment to defend this model. Suppose there's a spaceship A, a spaceship B, and an observer O. A and B are both moving near the speed of light. Suppose we want to know where B appears to be from A's perspective. All we have to do is use O's information about A's and B's locations over time: the time at which A sees the image of B is the same time at which O sees A seeing the image of B. To convert that time into A's perspective, all we have to do is compute how much time A has observed passing since the simulation began.

Keep in mind that all of that holds even if A and B are accelerating.

If you have an event E that happens at time T and location x,y,z in the global reference frame, and the user's velocity (as observed by the global reference frame) is Vx, Vy, Vz, then I understand that you apply the Lorentz transformation to get T', x', y', z', the location of E in spacetime according to the user's reference frame.
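A sketch of that transformation for an arbitrary velocity direction (assuming a reduced c of 10 m/s; the function name is illustrative). The position is split into components parallel and perpendicular to v; only the parallel component and the time change:

```python
import math

C = 10.0  # reduced speed of light (assumed simulation value)

def lorentz_boost(t, x, y, z, vx, vy, vz):
    """Transform event (t, x, y, z) from the global frame into the frame
    of an observer moving with velocity (vx, vy, vz) in the global frame.

    Uses the general boost formulas:
        t' = gamma * (t - (r . v) / c^2)
        r' = r + ((gamma - 1) * (r . v) / v^2 - gamma * t) * v
    """
    v2 = vx*vx + vy*vy + vz*vz
    if v2 == 0.0:
        return t, x, y, z
    gamma = 1.0 / math.sqrt(1.0 - v2 / C**2)
    r_dot_v = x*vx + y*vy + z*vz
    t_p = gamma * (t - r_dot_v / C**2)
    k = (gamma - 1.0) * (r_dot_v / v2) - gamma * t
    return t_p, x + k*vx, y + k*vy, z + k*vz
```

For a boost along x at 0.6c this reduces to the familiar x' = gamma * (x - v*t), t' = gamma * (t - v*x/c^2).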

So here's the question: given the positions of A and B for all T up to now (in the global reference frame), where does B appear to be from A's perspective in A's reference frame, taking both special relativity and speed of light delay into account? Does the Lorentz transformation hold for an accelerating observer?

ghwellsjr
Gold Member
Can you show us what you've got so far? I'm not sure your approach is the best way to go. The Lorentz Transformation only works for Frames of Reference moving at a constant relative speed. Why don't you just do it for your global reference frame and then you won't have to worry about any problems with acceleration?

I will post source code if you insist, but it's fairly long, and I don't want to muddle questions about the correctness of the algorithm with coding or efficiency issues. Besides, pseudocode and descriptions of algorithms are much easier to read than source code.

On each frame, I compute how much global time should pass. This amount of time is (1/60) / sqrt(1 - v^2/c^2), where v is the user's speed.
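A minimal sketch of that time-step computation, assuming a reduced c of 10 m/s, a 60 Hz frame rate, and 2D velocities (all illustrative values):

```python
import math

C = 10.0           # reduced speed of light (assumed)
FRAME_RATE = 60.0  # frames per second (assumed)

def global_time_step(vx, vy):
    """Global-frame time elapsing during one rendered frame.

    The user's clock advances 1/60 s per frame; the global frame's
    clock advances gamma times faster, so stationary clocks appear
    sped up when the user moves quickly.
    """
    v2 = vx*vx + vy*vy
    gamma = 1.0 / math.sqrt(1.0 - v2 / C**2)
    return (1.0 / FRAME_RATE) * gamma
```

At rest this is exactly 1/60 s; at 0.6c it is 1.25/60 s of global time per frame.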

For each ship, if it is accelerating, I use F=ma to compute its new velocity, while keeping in mind that mass is not constant during acceleration. If the speed of any object is > .99c, I scale it down to .99c.
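One standard way to avoid the hard clamp at .99c (this is an alternative to the scaling step described above, not the simulator's actual code) is to integrate the relativistic momentum p = gamma*m*v with dp/dt = F, then recover the velocity from the momentum; the speed then approaches c asymptotically on its own. A sketch, assuming a reduced c of 10 m/s and 2D motion:

```python
import math

C = 10.0  # reduced speed of light (assumed)

def step_velocity(vx, vy, fx, fy, m, dt):
    """Advance a ship's velocity one step using relativistic dynamics.

    Update the momentum p = gamma*m*v with dp/dt = F, then recover
    the velocity via v = p / sqrt(m^2 + |p|^2 / c^2), which is always
    strictly below c for finite momentum.
    """
    v2 = vx*vx + vy*vy
    gamma = 1.0 / math.sqrt(1.0 - v2 / C**2)
    px = gamma * m * vx + fx * dt
    py = gamma * m * vy + fy * dt
    p = math.hypot(px, py)
    denom = math.sqrt(m*m + (p / C)**2)
    return px / denom, py / denom
```

Even an enormous force only pushes the returned speed toward c, never past it, so no clamp is needed.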

For each ship, on each frame, I store its current location, rotation, velocity, global seconds passed since the simulation began, and local seconds passed since the simulation began.

To determine where to draw an object, I compute the "light time" of the object relative to the user, then draw it where it was at that time. The light time is now - T, where T minimizes

|distance(objectLocation(now-T), userLocation(now)) - c * T|

I compute this error term for all recorded times to find T.
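A sketch of that search over the recorded history (function and variable names are illustrative; a reduced c of 10 m/s and 2D positions are assumed):

```python
import math

C = 10.0  # reduced speed of light (assumed)

def apparent_position(history, user_pos, now):
    """Find where an object appears to be to the user, given light delay.

    history: list of (time, (x, y)) samples of the object's global-frame
    position, oldest first. Returns the recorded position whose time best
    satisfies distance(objectLocation(now - T), userLocation(now)) = C * T.
    """
    ux, uy = user_pos
    best_pos, best_err = None, float("inf")
    for t, (x, y) in history:
        T = now - t  # how long ago this sample was recorded
        err = abs(math.hypot(x - ux, y - uy) - C * T)
        if err < best_err:
            best_err, best_pos = err, (x, y)
    return best_pos
```

A linear scan is the simplest version; since the error term has a single minimum along the history, a binary search over the samples would also work.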

> Why don't you just do it for your global reference frame and then you won't have to worry about any problems with acceleration?

Do what from the global reference frame?

I store all of the data (location, velocity, etc.) in the global reference frame. This makes things significantly easier than storing everything in the reference frame of the user. Currently, when the user accelerates, I only have to update data about the user. If I stored everything in the user's reference frame, I'd have to update every object whenever the user accelerates.

Furthermore, even ignoring this problem, I'd eventually like to be able to support multiple users with different reference frames. Eventually, I'm going to have to solve the problem I described.

Also, while the global reference frame does not accelerate, objects in the world DO accelerate relative to it, so acceleration is an inherent problem to the system.

ghwellsjr
> So here's the question: given the positions of A and B for all T up to now (in the global reference frame), where does B appear to be from A's perspective in A's reference frame, taking both special relativity and speed of light delay into account? Does the Lorentz transformation hold for an accelerating observer?
I thought your problem was being able to transform events from your global reference frame to an accelerating reference frame. Did I misunderstand?

I don't want to see any source code but I thought you had parts of your simulation working. I thought your simulation was going to display something and since I have no idea what to expect, I thought it might be helpful to see the display of what you have working so far.

I got the impression that you were planning on a display that would show us what an observer would see, is that correct? If so, that won't really show us what Special Relativity is all about since SR allows us to "see" things that observers cannot see and that's why I made my suggestion. Also, if you just use your one global reference frame, you won't have to concern yourself with the fact that the Lorentz Transform does not work on accelerating frames.

> I thought your problem was being able to transform events from your global reference frame to an accelerating reference frame. Did I misunderstand?

That's part of it. The other part is deciding what event needs to be translated. Is it the emission of light from the observed object? From when? Or is it the arrival of the light emitted from the observed object, and again, from when?

Here's a screenshot: http://imgur.com/DdGVF (link broken). The triangle on the left is moving up; the one in the middle is the user, who is moving down; the one on the right is stationary. Each has a crude clock drawn over it, though they're a little hard to see. Before taking the screenshot, the user flew far up and then came back, so the triangle on the right has observed more time passing than the user.

The dim triangle to the left is where the triangle on the left "actually" is, rather than where it appears to be to the user.

> I got the impression that you were planning on a display that would show us what an observer would see, is that correct? If so, that won't really show us what Special Relativity is all about since SR allows us to "see" things that observers cannot see and that's why I made my suggestion. Also, if you just use your one global reference frame, you won't have to concern yourself with the fact that the Lorentz Transform does not work on accelerating frames.

Then maybe special relativity isn't sufficient for what I want. I'd like to show what the user would actually see. The "camera", so to speak, has to follow the user. I don't want the user to be able to see events before the light from the event would actually reach them.

ghwellsjr
When you say you'd like to show what the user would actually see, I think of the kind of video game where you never see yourself, the display depicts what you would actually see with your eyes, as if you were wearing a videocam on your forehead.

But that's not what you presented in your screenshot. That depiction is like some of the really old video games, like Asteroids, where you saw yourself and your surroundings, and if you fired a missile at something, you could watch its trajectory and easily determine how fast it approached the target and how close it was getting. But that's not what a user would see watching a missile: he would only see the missile in the middle of his field of view getting smaller and smaller until it impacted the target.

But I think I see what you are trying to do. Let's say you want to illustrate the Twin Paradox. You would show the stationary twin as the "user", with a clock running at normal speed. Then you would show two images of his traveling twin: one showing where he actually is, with a clock showing his dilated time, and another showing where he appears to be to the user, with a clock showing his relativistic Doppler time. When the twin reaches his destination, he turns around and heads back home; the first image shows this immediately, but the other one lags behind. Not until the first image gets most of the way home does the second image finally reach the turnaround point and then rush back home, its clock now running way faster, until the two images converge at the home position with the same time on both clocks (for the twin) while the user's clock shows a much more advanced time.

Is this the sort of thing you have in mind?

Yes! That's exactly what I meant.

I'm keeping things in two (spatial) dimensions for simplicity. When I said "what the user would actually see" what I meant was that you see the same information that the user would actually see*, although it would be presented in a different way than two-dimensional eyes might present it to a two-dimensional brain. If and when I switch to 3D, I'll probably switch over to either a "forehead" camera or a following camera.

*ignoring the fact that you can't see through solid objects, etc