Physical limitations of computer speed


Discussion Overview

The discussion centers around the physical limitations affecting computer speed, particularly in relation to CPU frequency, memory proximity, and the speed of light. Participants explore theoretical and practical implications of these limitations, including data transmission and communication technologies.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that as CPU frequency increases, memory must be located closer to the CPU to maintain efficiency, since the processor-memory distance must be smaller than the wavelength corresponding to the clock frequency.
  • One participant references Taylor and Wheeler's work, suggesting that the time taken for instructions is influenced by the distance between the processor and memory, implying a relationship between instruction duration and physical distance.
  • Another participant discusses the limitations imposed by the speed of light on data communication, noting that while fiber optics are not commonly used in computer buses, propagation delays are often negligible compared to bandwidth considerations.
  • It is noted that electrical signals on integrated circuits travel slower than the speed of light, and that the design of integrated circuits can affect signal propagation speed.
  • Some participants mention techniques like cache memory and parallel processing as methods to mitigate speed limitations, indicating that not all memory accesses need to occur at the processor's operating speed.
  • One participant argues that while physical limitations exist, current technology has not fully utilized the potential of networking capabilities, suggesting that existing hardware could support higher data rates than currently achieved.
  • Another participant expresses that while we may be nearing the limits of silicon technology, theoretical limits of computation remain far from being reached.
  • A question is raised about the implications of Nyquist's Theorem on microprocessor speed and its relationship to the speed of light limitation.

Areas of Agreement / Disagreement

Participants generally agree that physical principles impose limitations on computer speed, but there are multiple competing views regarding the extent of these limitations and the effectiveness of current technologies and techniques to overcome them. The discussion remains unresolved with respect to specific implications and future directions.

Contextual Notes

Participants express uncertainty regarding the exact relationships between various physical constants, the implications of theoretical limits versus practical limitations, and the effectiveness of current technologies in overcoming these barriers.

Who May Find This Useful

This discussion may be of interest to those studying computer engineering, physics, and data communications, as well as professionals involved in hardware design and optimization.

jhirlo
We can agree that there are limitations on computer speed originating from physical principles (constants).
On many sites I read, a common statement refers to increasing CPU frequency: it's said that as we increase the CPU's working frequency we have to locate memory (RAM) closer to the CPU, because this distance has to be smaller than the wavelength of the given frequency. Why?
Then there's the problem with the constant for the speed of light…

What are your thoughts on this subject? How do you think we'll try to get past these barriers?
 
jhirlo said:
We can agree that there are limitations on computer speed originating from physical principles (constants).
On many sites I read, a common statement refers to increasing CPU frequency: it's said that as we increase the CPU's working frequency we have to locate memory (RAM) closer to the CPU, because this distance has to be smaller than the wavelength of the given frequency. Why?
Then there's the problem with the constant for the speed of light…

What are your thoughts on this subject? How do you think we'll try to get past these barriers?

Taylor and Wheeler (1992) explore this problem in one of the exercises in
Spacetime Physics. In their book, Taylor and Wheeler assume that each instruction involves the transmission of data from the memory to the processor where the computation is carried out, followed by transmission of the result back to the memory. If the average distance between the processor and the memory is \ell, then the distance covered by the signal during one instruction is 2\ell. Assuming the signal propagates at the maximum possible celerity c, the time taken to carry out one instruction is 2\ell/c. Today's computers are capable of performing billions of instructions per second. A one gigaflop computer may carry out up to 1 billion sequential instructions per second, so the duration of each instruction is 2\ell/c = 10^{-9}s. This allows us to calculate the value of \ell. It can be seen from the equation that if the time for one instruction decreases, then the value of \ell must decrease to keep the equation balanced.
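
Carrying that calculation one step further (a quick completion of the arithmetic, using c \approx 3\times10^{8} m/s): solving 2\ell/c = 10^{-9}s gives \ell = c\cdot10^{-9}/2 \approx 0.15 m. So for a gigahertz-class machine the average processor-memory distance already has to be on the order of 15 cm or less, and it shrinks in direct proportion as the instruction time shrinks.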
 
jhirlo said:
Then there's the problem with the constant for the speed of light…
The speed of light is a limitation, but mainly for data communication, since fiber optics has not yet been used as a technology for computer busses and CPU data paths. In the context of data communications, the constant c shows up as propagation delay, which is usually negligible. The important thing is the bandwidth (data rate) carried by the light.
Physical limits on the bandwidth of electrical data communications are implied by Nyquist's theorem (on the rate at which hardware can change signals) and by Shannon's law (on the effect of noise on data rate).
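
To get a concrete feel for those two bounds, here is a minimal sketch in C (the 100 MHz bandwidth, 4 signal levels, and 30 dB SNR are made-up illustration values, not figures from this thread; compile with -lm):

#include <math.h>
#include <stdio.h>

/* Nyquist limit for a noiseless channel: 2 * B * log2(M) bits/s,
 * where M is the number of distinct signal levels. */
double nyquist_max_rate(double bandwidth_hz, double levels)
{
    return 2.0 * bandwidth_hz * log2(levels);
}

/* Shannon capacity of a noisy channel: B * log2(1 + S/N) bits/s,
 * where S/N is the signal-to-noise ratio as a plain ratio, not in dB. */
double shannon_capacity(double bandwidth_hz, double snr_linear)
{
    return bandwidth_hz * log2(1.0 + snr_linear);
}

int main(void)
{
    double B = 100e6;   /* hypothetical 100 MHz channel */
    printf("Nyquist (4 levels):  %.3g bits/s\n", nyquist_max_rate(B, 4.0));
    printf("Shannon (30 dB SNR): %.3g bits/s\n", shannon_capacity(B, 1000.0));
    return 0;
}

No matter how clever the signalling gets, a real channel's data rate stays below the Shannon figure, which is why bandwidth and noise, rather than propagation delay, dominate the engineering of data links.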
 
We may not use fiber optics, but the signals on an integrated circuit still travel at a speed slower than 'c'.

The wiring traces on an IC can be modeled as a lossy transmission line. (The substrate serves as a ground plane, and the glass insulation serves as the dielectric.) The speed of propagation of electrical signals along this transmission line is (of course) less than 'c'.
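
To put a rough number on that (the dielectric constant below is the standard textbook value for silicon dioxide, not a figure from this thread): for a line in a uniform dielectric of relative permittivity \epsilon_r, the signal velocity is roughly v = c/\sqrt{\epsilon_r}. With \epsilon_r \approx 3.9 for SiO2, v \approx c/2, so on-chip signals travel at only about half the vacuum speed of light, and the resistance and capacitance of real interconnect slow the effective edge rates down further still.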

Note though that not all memory accesses have to occur at the operating speed of the processor, because of a technique known as "cache". The most commonly used memory locations are kept close to the CPU in very fast memory. Usually some portion of the CPU die is dedicated to cache memory nowadays (level 1 cache). There's also often some external cache as well (level 2).
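
As a toy demonstration of why that locality matters, here is a minimal sketch in C (the array size and stride are arbitrary illustration choices): both functions add up exactly the same numbers, but the sequential walk reuses every cache line it fetches, while the strided walk misses in cache on almost every access and is typically several times slower on commodity hardware.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      (1 << 24)   /* 16M ints (~64 MB), far larger than any cache */
#define STRIDE 4096        /* jump far enough to defeat spatial locality   */

/* Sequential walk: each cache line fetched from memory is fully used. */
static long long sum_sequential(const int *a)
{
    long long total = 0;
    for (size_t i = 0; i < N; i++)
        total += a[i];
    return total;
}

/* Strided walk: same elements, but one per distant line, so most
 * accesses miss in cache and have to wait on main memory. */
static long long sum_strided(const int *a)
{
    long long total = 0;
    for (size_t start = 0; start < STRIDE; start++)
        for (size_t i = start; i < N; i += STRIDE)
            total += a[i];
    return total;
}

int main(void)
{
    int *a = malloc(N * sizeof *a);
    for (size_t i = 0; i < N; i++)
        a[i] = 1;

    clock_t t0 = clock();
    long long s1 = sum_sequential(a);
    clock_t t1 = clock();
    long long s2 = sum_strided(a);
    clock_t t2 = clock();

    printf("sequential: %lld in %.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("strided:    %lld in %.3fs\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}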

Parallel processing is another way around the speed-of-light limit, as are other tricks such as out-of-order execution. A lot of these tricks are already being used; the typical marketing specification of a processor's "speed" has only a vague resemblance to the actual clock rate of the CPU.
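
As a sketch of the parallel-processing idea (a minimal pthreads example with arbitrary sizes and thread count, not anyone's specific design; compile with -pthread): the total amount of memory traffic is unchanged, but splitting the work across cores lets several memory accesses be in flight at once, so overall throughput rises even though no individual signal travels any faster.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N        (1 << 24)
#define NTHREADS 4

static int *data;

struct chunk { size_t begin, end; long long partial; };

/* Each thread sums its own slice of the array independently. */
static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    long long total = 0;
    for (size_t i = c->begin; i < c->end; i++)
        total += data[i];
    c->partial = total;
    return NULL;
}

int main(void)
{
    data = malloc(N * sizeof *data);
    for (size_t i = 0; i < N; i++)
        data[i] = 1;

    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].begin = (size_t)t * N / NTHREADS;
        chunks[t].end   = (size_t)(t + 1) * N / NTHREADS;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    long long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].partial;
    }
    printf("total = %lld\n", total);   /* expect N */
    free(data);
    return 0;
}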
 
We haven't even come close to reaching the basic physical limitations of computers. Take networking, for instance: physically, those cat5 cables can carry way more than they do. However, we just have trouble making the parts that push them to the limits, so to speak.
 
We're starting to get close to the limits of standard silicon, even if we aren't close to the theoretical limits of computation. Of course you may not consider 20 more years of growth at the current rate "close", depending on your viewpoint. The 20-year figure comes from this article.
 
Does Nyquist's theorem regarding the rate of signal change also imply limitations on the speed of microprocessors? Does it have any relationship with the limitation imposed by the constant c?
 
