## Physical limitations of computer speed

We can agree that there are limitations on computer speed originating from physical principles (constants).
On many sites I've read a common statement referring to increasing CPU frequency: as we raise the CPU's clock frequency, we have to locate memory (RAM) closer to the CPU, because the signal path has to be shorter than the wavelength corresponding to that frequency. Why is that?
Then there's the problem of the constant speed of light…

What are your thoughts on this subject? How do you think we'll try to get past these barriers?


 Quote by jhirlo …as we increase CPU's working frequency we have to locate memory (RAM) closer to CPU because this length has to be smaller than the wave length of given frequency, why ?
Taylor and Wheeler (1992) explore this problem in one of the exercises in Spacetime Physics. In their book, Taylor and Wheeler assume that each instruction involves the transmission of data from the memory to the processor, where the computation is carried out, followed by transmission of the result back to the memory. If the average distance between the processor and the memory is $\ell$, then the distance covered by the signal during one instruction is $2\ell$. Assuming the signal propagates at the maximum possible speed, $c$, the time taken to carry out one instruction is $2\ell/c$. Today's computers are capable of performing billions of instructions per second: a one-gigahertz processor may carry out up to one billion sequential instructions per second, so the duration of each instruction is $2\ell/c = 10^{-9}\,\mathrm{s}$. Solving for $\ell$ gives $\ell = c \times 10^{-9}\,\mathrm{s}/2 \approx 15\,\mathrm{cm}$. It can be seen from the equation that if the time for one instruction decreases, then $\ell$ must decrease in proportion.
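To make the numbers concrete, here's a minimal sketch of the round-trip model described above (the clock rates chosen are just illustrative):

```python
# Taylor and Wheeler's round-trip signal model: one instruction takes
# t = 2*l/c, so the maximum average processor-memory distance is l = c*t/2.

C = 3.0e8  # speed of light, m/s (rounded)

def max_distance(instructions_per_second):
    """Maximum average processor-memory distance, in metres."""
    t = 1.0 / instructions_per_second  # duration of one instruction, s
    return C * t / 2.0

for rate in (1e9, 4e9, 10e9):
    print(f"{rate/1e9:5.0f} GHz -> l = {max_distance(rate)*100:.2f} cm")
```

At 1 GHz this gives the 15 cm figure above; pushing to 10 GHz would shrink the allowed distance to 1.5 cm.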

 Quote by jhirlo Than there’s the problem with constant for speed of light….
The speed of light is a limitation, but mainly for data communication, since fiber optics has not yet been used as a technology for computer busses and CPU data paths. In the context of data communication, the constant c introduces propagation delays, but these are usually negligible; the important thing is the bandwidth (data rate) that the link can carry.
Physical limits on the bandwidth of electrical data communication are implied by Nyquist's theorem (on the rate at which hardware can change signals) and by Shannon's theorem (on the effect of noise on the achievable data rate).
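As an illustration of Shannon's result, the capacity of a noisy channel, $C = B \log_2(1 + S/N)$, can be computed directly (the bandwidth and signal-to-noise figures here are hypothetical, not any real link's specs):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity in bits/s per the Shannon-Hartley theorem:
    C = B * log2(1 + S/N), with S/N given as a linear power ratio."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Hypothetical link: 1 MHz of bandwidth, 30 dB signal-to-noise ratio.
snr = 10 ** (30 / 10)  # 30 dB -> power ratio of 1000
cap = shannon_capacity(1e6, snr)
print(f"Capacity: {cap/1e6:.2f} Mbit/s")  # roughly 9.97 Mbit/s
```

Note that no amount of clever hardware gets past this bound; only more bandwidth or a better signal-to-noise ratio does.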


We may not use fiber optics, but the signals on an integrated circuit still travel at a speed slower than 'c'.

The wiring traces on an IC can be modelled as a lossy transmission line. (The substrate serves as a ground plane, and the glass insulation serves as the dielectric.) The speed of propagation of electrical signals along this transmission line is (of course) less than 'c'.
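As a rough sketch of that slowdown, assuming a simple lossless transmission-line model with a non-magnetic silicon-dioxide dielectric (relative permittivity of about 3.9 is a typical textbook value, not a specific process's number):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_speed(eps_r):
    """Signal speed on a lossless line: v = c / sqrt(eps_r),
    assuming non-magnetic materials (mu_r = 1)."""
    return C / math.sqrt(eps_r)

v = propagation_speed(3.9)  # SiO2 dielectric, eps_r ~ 3.9
print(f"v = {v:.3e} m/s  ({v / C:.2f} c)")  # about half the speed of light
```

So even before any resistive losses, an on-chip signal crawls at roughly half of 'c'.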

Note though that not all memory accesses have to occur at the operating speed of the processor, because of a technique known as "cache". The most commonly used memory locations are kept close to the CPU in very fast memory. Nowadays some portion of the CPU die is usually dedicated to cache memory (level 1 cache), and there's often some external cache as well (level 2).
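The benefit of a cache can be sketched with the standard average-memory-access-time formula (the hit time, miss rate, and miss penalty below are made-up illustrative figures, not measurements of any particular CPU):

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: AMAT = hit_time + miss_rate * miss_penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical numbers: 1 ns L1 hit, 5% miss rate, 60 ns to reach main memory.
print(f"AMAT = {amat(1.0, 0.05, 60.0):.1f} ns")  # 4.0 ns, vs 60 ns with no cache
```

Even a small, fast memory close to the CPU hides most of the distance penalty, which is exactly why caches sidestep the round-trip argument above for the common case.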

Parallel processing is another way around the speed-of-light limit, as are other tricks such as out-of-order execution. A lot of these tricks are already in use, so the typical marketing specification of a processor's "speed" bears only a vague resemblance to the actual clock rate of the CPU.

 We haven't even come close to reaching the basic physical limitations of computers. Take networking, for instance: physically, those Cat5 cables can carry way more than they do. We just have trouble making the parts that push them to the limit, so to speak.
 We're starting to get close to the limits of standard silicon, even if we aren't close to the theoretical limits of computation. Of course, you may not consider 20 more years of growth at the current rate "close", depending on your viewpoint. The 20-year figure comes from an article.
 Does Nyquist's theorem regarding the rate of signal change also imply limitations on the speed of microprocessors? Does it have any relationship with the limitation imposed by the constant c?