How do we simulate time flow in video games? EE/PHYS

AI Thread Summary
The discussion centers on how time flows in video games and simulations, particularly in relation to relativistic physics and computer processing speeds. A processor clocked at 4.2 GHz executes 4.2 billion cycles per second, not instructions, and modern CPUs can have many instructions in flight simultaneously thanks to pipelining and multi-core architectures. This yields enormous throughput, especially in graphics routines, which parallelize well across cores. Physics matters to chip designers: the electromagnetic signals that carry data propagate at a large fraction of light speed, which constrains chip layout, and quantum-mechanical effects such as tunneling grow as transistor sizes shrink, affecting performance and efficiency. For programmers and users, however, the signals operate in normal time, with screens updating at standard refresh rates, so relativity does not bridge any time gap between the computer and the user. The impressive speed of processors is due to clock rate and good programming rather than relativistic effects.
Israel Eydelson
Messages
2
Reaction score
0
Hi Everyone,

I am very ignorant and uneducated but I have a few questions about a difficult thought experiment.

'How does time flow in a video game/simulation or even in our imagination in relativistic terms?'

Let's say a processor operates at 4.2 GHz (4.2 billion instructions per second), with the electricity moving at 50% to 99% of the speed of light, generating 60 images per second on a 60 Hz monitor.

Do the generated images flow at a speed relative to our time? Is the electricity used to execute the code moving slower in time, since it is moving much closer to the speed of light through space?

Do we use relativity to bridge the time gap between the computer and us?

Sorry if this seems like a complete waste of time caused by my ignorance. I don't really have a good concept of the speed at which electricity moves through the system between the time code is compiled, executed, and the output generated, which probably explains my misconceptions. Can anyone with a strong understanding answer these questions?

Thank you,
 
Well, you seem to have some misconceptions :-)

For one thing, your processor is executing approximately 4.2 billion cycles per second (base rate), not instructions. An instruction can take anywhere from a single cycle to dozens. On the other hand, with pipelining, a CPU can have quite a few instructions in flight at once, of different types (addition, multiplication, integer, float, etc.), if you write the software to take full advantage.
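
To make that concrete, here's a little C sketch of my own (not from the thread): summing an array through a single dependency chain, where every add must wait for the previous one, versus four independent accumulators the pipeline can overlap. Compiled with plain gcc -O2 (no -ffast-math, so the compiler can't reorder the serial chain itself), the second loop is usually noticeably faster.

Code:
/* Pipelining sketch: one long dependency chain vs. four independent
   accumulators.  The four-accumulator loop lets the CPU keep several
   additions in flight at once. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)                      /* 16M floats, ~64 MB */

int main(void) {
    float *x = malloc(N * sizeof *x);
    if (!x) return 1;
    for (long i = 0; i < N; i++) x[i] = 1.0f;

    /* Version 1: every add waits for the previous one to finish. */
    clock_t t0 = clock();
    float s = 0.0f;
    for (long i = 0; i < N; i++) s += x[i];
    double chain = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Version 2: four independent chains the pipeline can overlap. */
    t0 = clock();
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (long i = 0; i < N; i += 4) {
        s0 += x[i];      s1 += x[i + 1];
        s2 += x[i + 2];  s3 += x[i + 3];
    }
    double split = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("one chain:   %.3f s  sum=%g\n", chain, s);
    printf("four chains: %.3f s  sum=%g\n", split, s0 + s1 + s2 + s3);
    free(x);
    return 0;
}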

Also important for speed: all modern processors have multiple cores. A machine with the specs you give undoubtedly has at least 4. Mine has 6, with hyper-threading making it 12 logical cores, although the hyper-threads aren't very effective for CPU-bound algorithms such as real-time math routines (very important in video games).

Bottom line: a machine like that is actually capable of quite a bit more than 4.2 billion instructions per second; it's impossible to nail down just how many. For video games in particular the throughput is humongous, because graphics routines lend themselves very well to parallelization, allowing the multiple cores to work very efficiently (you assign each core to a separate area of the screen; see the sketch below). For instance, I've taken somebody else's video-game routines that executed in 100 ms down to 5 ms with appropriate multi-threading and pipelining techniques. (Actually, cache handling is even more important, but let's not get into that.)
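
Here's a minimal POSIX-threads sketch of that screen-area split (the names and the toy shading function are mine, not the routines from the post): each thread fills its own horizontal band of a grayscale frame, and since the bands don't overlap, no locking is needed. Build with something like gcc -O2 -pthread.

Code:
/* Multicore sketch: give each thread its own horizontal band of the
   frame, the way cores get assigned to separate areas of the screen. */
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>

#define W 1920
#define H 1080
#define NTHREADS 4

static uint8_t frame[H][W];           /* one gray byte per pixel */

struct band { int y0, y1; };          /* half-open row range */

static uint8_t shade(int x, int y) {  /* stand-in for real per-pixel work */
    return (uint8_t)((x * x + y * y) & 0xFF);
}

static void *render_band(void *arg) {
    struct band *b = arg;
    for (int y = b->y0; y < b->y1; y++)
        for (int x = 0; x < W; x++)
            frame[y][x] = shade(x, y);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    struct band bands[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        bands[i].y0 = H * i / NTHREADS;       /* bands never overlap,    */
        bands[i].y1 = H * (i + 1) / NTHREADS; /* so no locking is needed */
        pthread_create(&tid[i], NULL, render_band, &bands[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("center pixel: %d\n", frame[H / 2][W / 2]);
    return 0;
}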

The point is, you get unbelievable speed out of these things, but it has nothing to do with relativity, only good programming.

Relativity is somewhat relevant for the chip designers, however, in the sense that no signal can outrun light. The electrons themselves actually drift quite slowly; what matters is the EM signal they carry, which propagates at a good fraction of light speed. A major goal of chip designers is to make buses as short as possible, because even near-light-speed signals are too slow (!) compared to a CPU clocking at 4.2 GHz. In one clock cycle, about 0.24 nanoseconds, light travels only about 7 centimeters, and on-chip signals cover even less.
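
For the record, that per-cycle distance is just light speed divided by the clock rate:

$$t_{\text{cycle}} = \frac{1}{4.2\times 10^{9}\ \text{Hz}} \approx 2.4\times 10^{-10}\ \text{s},
\qquad
d = c\, t_{\text{cycle}} \approx (3\times 10^{8}\ \text{m/s})(2.4\times 10^{-10}\ \text{s}) \approx 7\ \text{cm}$$

On-chip interconnect carries signals at only a fraction of c, so the usable distance per cycle is a few centimeters at best, which is why wire lengths matter so much in layout.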

Quantum mechanics is much more important for chip designers. At 22 nm, Intel's Ivy Bridge (the last one I studied, from 2011) is very vulnerable to QM effects such as tunneling: electrons leak right through insulating layers that are only a few atoms thick. They even have to round off the typical 90-degree turns made by the buses, because current crowds at sharp corners and wears the metal out. Today, I suppose feature sizes are even smaller than 22 nm; Google tells me IBM has an experimental chip at 7 nm scaling.

What I want to get across is that physics (relativity, QM, EM, silicon and metals science, etc.) is very important for chip designers, but not for programmers or users. From our perspective, the signals all flow in normal time; screens are updated every 1/60 of a second (sometimes twice as often) according to your normal time; and relativity is NOT used to bridge any time gap between the computer and us, except insofar as it must be taken into account when designing the chip.
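
And since the thread asks how games handle time: a game engine doesn't invoke relativity at all; it just measures elapsed wall-clock time each frame and advances its simulated world by that delta. A minimal C sketch of my own, using POSIX timing calls:

Code:
/* Minimal game-loop sketch: game time is just accumulated wall-clock
   time, advanced frame by frame.  No relativity anywhere. */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);
    double x = 0.0, v = 5.0;                  /* toy object: position, m/s */

    for (int frame = 0; frame < 300; frame++) {    /* ~5 s at 60 fps */
        clock_gettime(CLOCK_MONOTONIC, &now);
        double dt = (now.tv_sec - prev.tv_sec)
                  + (now.tv_nsec - prev.tv_nsec) / 1e9;
        prev = now;

        x += v * dt;            /* advance the simulation by real time */

        /* stand-in for rendering plus waiting on the 60 Hz vsync */
        struct timespec pause = {0, 16666667};     /* ~1/60 s */
        nanosleep(&pause, NULL);
    }
    printf("simulated position after ~5 s: %.2f m\n", x);
    return 0;
}

The simulated position tracks real elapsed time regardless of frame rate; that delta-time bookkeeping is the whole trick.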

Basically the processor screams because of that 4.2 GHz clock, supported by all the systems and techniques I've briefly touched on. It makes programming them an awful lot of fun; really, more fun than playing the video game, if you're into it.

Hope you find this information as fascinating as I do!
 
secur said:
Well, you seem to have some misconceptions :-) [...]

Thank you so much, that was a brilliant explanation.
 