How do we simulate time flow in video games? EE/PHYS

  1. Apr 3, 2016 #1
    Hi Everyone,

    I am very ignorant and uneducated but I have a few questions about a difficult thought experiment.

    'How does time flow in a video game/simulation or even in our imagination in relativistic terms?'

    Let's say a processor operating at 4.2 GHz, executing roughly 4.2 billion instructions per second, with its electrical signals moving at 50%–99% of the speed of light, generating 60 images per second on a 60 Hz monitor.

    Do the generated images flow at a rate relative to our time? Is the electricity used to execute the code moving more slowly through time, since it is moving much closer to the speed of light through space?

    Do we use relativity to bridge the time gap between the computer and us?

    Sorry if this seems like a complete waste of time caused by my ignorance. I don't really have a good concept of the speed at which electricity moves through the system between the time the code is compiled and executed and the images are generated, which probably explains my misconceptions. Can anyone with a strong understanding answer these questions?

    Thank you,
     
    Last edited: Apr 3, 2016
  3. Apr 3, 2016 #2
    Well, you seem to have some misconceptions :-)

    For one thing, your processor is executing approximately 4.2 billion cycles per second (base rate) - not instructions. A single instruction can take anywhere from one cycle to a dozen or more. On the other hand, with pipelining and superscalar execution, CPUs can keep quite a few instructions in flight at once - of different types: addition, multiplication, integer, float, etc. - if you write the software to take full advantage.
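
    To make "write the software to take full advantage" a bit more concrete, here is a toy C++ sketch of my own (nothing from a real engine; the names are made up) that sums an array two ways:

    Code (C++):
        #include <cstddef>

        // One long dependency chain: every add has to wait for the previous
        // sum, so throughput is limited by the add instruction's latency.
        double dependent_sum(const double* a, std::size_t n) {
            double s = 0.0;
            for (std::size_t i = 0; i < n; ++i)
                s += a[i];
            return s;
        }

        // Four independent accumulators: the pipeline can keep several adds
        // in flight at the same time, and we combine them at the end.
        double unrolled_sum(const double* a, std::size_t n) {
            double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
            std::size_t i = 0;
            for (; i + 4 <= n; i += 4) {
                s0 += a[i];
                s1 += a[i + 1];
                s2 += a[i + 2];
                s3 += a[i + 3];
            }
            for (; i < n; ++i)   // leftover elements
                s0 += a[i];
            return (s0 + s1) + (s2 + s3);
        }

    For large arrays the second version usually runs noticeably faster on a modern CPU, simply because the additions overlap - no relativity involved.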

    Also important for speed: all modern processors have multiple cores. A machine with the specs you give undoubtedly has at least 4. Mine has 6, with hyper-threading making it 12 ... although the hyper-threads aren't very effective for CPU-intensive algorithms such as real-time math routines (very important in video games).

    Bottom line, a machine like that is actually capable of quite a bit more than 4.2 billion instructions per second; it's impossible to nail down just how many. For video games in particular the gain is humongous, because graphics routines lend themselves very well to parallelization, allowing the multiple cores to work very efficiently. (You assign each core to a separate area of the screen - there's a sketch of that below.) For instance, I've gotten somebody else's video-game routines that executed in 100 ms down to 5 ms by appropriate multi-threading and pipelining techniques. (Actually, cache handling is even more important, but let's not get into that.)
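
    Here is a rough sketch of that "one core per area of the screen" idea - again just my own simplified illustration (the test pattern and the function names are invented, and a real engine would reuse a thread pool instead of spawning threads every frame):

    Code (C++):
        #include <algorithm>
        #include <cstdint>
        #include <thread>
        #include <vector>

        // Fill rows [y0, y1) of an RGBA framebuffer with a cheap test
        // pattern; a real renderer would do its shading work here instead.
        void render_rows(std::uint32_t* pixels, int width, int y0, int y1) {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x)
                    pixels[y * width + x] =
                        static_cast<std::uint32_t>((x ^ y) * 2654435761u);
        }

        // Split the frame into horizontal strips, one per hardware thread.
        void render_frame(std::uint32_t* pixels, int width, int height) {
            int n = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
            int rows_per = (height + n - 1) / n;     // ceiling division
            std::vector<std::thread> workers;
            for (int t = 0; t < n; ++t) {
                int y0 = t * rows_per;
                int y1 = std::min(height, y0 + rows_per);
                if (y0 >= y1) break;
                workers.emplace_back(render_rows, pixels, width, y0, y1);
            }
            for (auto& w : workers) w.join();        // wait for every strip
        }

        int main() {
            std::vector<std::uint32_t> frame(1920 * 1080);
            render_frame(frame.data(), 1920, 1080);
        }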

    The point is, you get unbelievable speed out of these things, but it has nothing to do with relativity - only good programming.

    Relativity is somewhat relevant for the chip designers, however. The individual electrons in the wires actually drift along quite slowly; what matters is the electromagnetic signal, which propagates at a sizeable fraction of light speed. A major goal of chip designers is to make buses as short as possible, because even a speed-of-light signal is traveling too slow (!) compared to a CPU clocking at 4.2 gig: it covers only about 7 centimeters per clock cycle (quick arithmetic below), and real on-chip signals move slower still.
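
    For the record, the back-of-the-envelope arithmetic behind that number, using round values:

    $$t_{\text{cycle}} = \frac{1}{4.2 \times 10^{9}\ \text{Hz}} \approx 2.4 \times 10^{-10}\ \text{s}, \qquad d = c\,t_{\text{cycle}} \approx (3 \times 10^{8}\ \text{m/s})(2.4 \times 10^{-10}\ \text{s}) \approx 7\ \text{cm}$$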

    Quantum mechanics is much more important for chip designers. At 22 nm, Intel's Ivy Bridge (the last one I studied - 2011) is very vulnerable to QM effects such as tunneling. Designers even round off the corners of the typical 90-degree turns made by the buses, because sharp corners cause trouble for the signal at that scale. Today, I suppose they're even smaller than 22 nm; Google tells me IBM has an experimental chip at a 7 nm scale.

    What I want to get across is that physics - relativity and QM, EM, silicon and metals science, etc. - is very important for chip designers, but not for programmers or users. From our perspective the signals all flow in normal time; the screen is updated every 1/60 of a second (sometimes twice as often) according to your normal time; and relativity is NOT used to bridge any time gap between the computer and us, except insofar as it must be taken into account when designing the chip. (A sketch of a typical game loop is below.)
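
    To tie this back to the thread title: game time is driven by the ordinary wall clock, nothing more exotic. A minimal fixed-timestep game loop looks roughly like this (a simplified sketch of my own; update() and render() are hypothetical stand-ins for a real engine's work):

    Code (C++):
        #include <chrono>
        #include <thread>

        // Hypothetical stand-ins: a real engine would advance physics, AI and
        // animation by dt seconds in update(), then draw the result in render().
        void update(double dt) { (void)dt; }
        void render() {}

        int main() {
            using clock = std::chrono::steady_clock;
            const double dt = 1.0 / 60.0;               // 60 simulation steps per second
            auto next_frame = clock::now();

            for (int frame = 0; frame < 600; ++frame) { // ~10 seconds of game time
                update(dt);   // game time advances in fixed, ordinary-clock steps
                render();     // one image per step -> 60 images per second
                next_frame += std::chrono::duration_cast<clock::duration>(
                                  std::chrono::duration<double>(dt));
                std::this_thread::sleep_until(next_frame); // pace with the wall clock
            }
        }

    The loop just reads the same clock you and I live by, and the monitor shows one finished image every 1/60 of a second.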

    Basically the processor screams because of that 4.2 gig clock, supported by all the systems and techniques I've briefly touched on. It makes programming these machines an awful lot of fun - really, more fun than playing the video game, if you're into it.

    Hope you find this information as fascinating as I do!
     
    Last edited: Apr 3, 2016
  4. Apr 3, 2016 #3
    Thank you so much, that was a brilliant explanation.
     