# Why do computer clocks hardly go above ~5 GHz?

1. Mar 19, 2009

### MTd2

What is the reason chips won't go above 5 GHz nowadays, independently of the microarchitecture? Special cooling pushes them a little above that, but it's always around this scale.

Until about 2002, computers increased in frequency with every generation of products, but that has mostly stopped since then. Why?

2. Mar 19, 2009

### mgb_phys

Ye cannae change the laws of physics, captain.

It's mainly to do with capacitance. To change the state of a gate you have to transfer charge. Current is the rate of flow of charge, so the less time you take to move the charge, the higher the current; and to get a high current to flow you need a high voltage or a low resistance. But to reduce heat you want to lower the voltage (which is why chips run at 3.3 V instead of 5 V), and to reduce the charge you try to make the components smaller - but that increases the resistance.
There are also limits to making the parts smaller: it's difficult to print features around 1/10 the wavelength of the light used, charge leakage increases as the parts get smaller and closer together, and ultimately the dopant atoms in the silicon simply diffuse into the surrounding parts. There are attempts to fix all of these, such as super-low-k dielectrics and silicon-on-insulator.
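The capacitance argument above can be put into numbers with the standard dynamic-power relation P = α·N·C·V²·f. As a quick back-of-envelope sketch (all the inputs here are illustrative guesses, not datasheet values for any real chip):

```python
# Dynamic switching power sketch: P = activity * N * C * V^2 * f.
# Every input below is an illustrative assumption, not a measured value.

def dynamic_power_watts(n_transistors, c_farads, v_volts, f_hz, activity=0.1):
    """Total switching power for N gates of capacitance C at voltage V and clock f,
    with the given fraction of gates switching each cycle."""
    return activity * n_transistors * c_farads * v_volts**2 * f_hz

# ~1e8 transistors, ~1 fF per gate, 1.2 V supply, 3 GHz clock, 10% activity
p = dynamic_power_watts(1e8, 1e-15, 1.2, 3e9, 0.1)
print(round(p, 1), "W")  # ~43.2 W with these guesses
```

Note that power scales linearly with frequency but quadratically with voltage, which is why lowering the supply voltage has been the main lever for keeping heat in check.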

Ultimately it looks like 5 GHz is about where Si is going to top out for most general applications.
So you can either reinvent the world in GaAs or start getting cleverer about how we design software.

Last edited: Mar 19, 2009
3. Mar 19, 2009

### MTd2

Really? Do you have any source to support this claim? When transistors are tested alone, they run at hundreds of GHz.

4. Mar 19, 2009

### Staff: Mentor

You provided it! Clock speeds increased rapidly until 2003 or so; now they don't. That implies (or rather, is the definition of) a plateau. Whether it is permanent remains to be seen, but for it not to be permanent will require a radical change in technology.
As mgb correctly stated, the biggest enemy is heat. A single transistor switching at 100 GHz doesn't put out anywhere near as much heat as a billion of them at 5 GHz.

5. Mar 19, 2009

### Topher925

In a nutshell, current technology has reached a physical barrier on processor speed. As mgb_phys pointed out, it's basically a thermal problem: you can't remove heat fast enough. That's why processor manufacturers now make multiple cores and do fancy things with software such as clustering.

6. Mar 19, 2009

### signerror

Working out the power densities was eye-opening for me. Assuming a 0.75 mm wafer thickness, take the 130 W (max) Pentium D, which has a 140 mm^2 die:

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Wafers

http://www.techpowerup.com/cpudb/323/Intel_Pentium_D_955_EE.html

$$\frac{130 \mbox{ W}}{0.75 \mbox{ mm} \times 140 \mbox{ mm}^2} = 1.2 \mbox{ MW/L}$$

At full power, it heats up with ten times the power density of a nuclear reactor! It helps that it's under a millimeter thick, but it still gives you some perspective. You could not, for instance, make a 3D block of transistors, say by stacking hundreds of wafers: it would be impossible to cool.
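The arithmetic behind that figure is easy to check. Using the numbers quoted above (130 W, 140 mm² die, 0.75 mm thickness):

```python
# Reproduce the volumetric power density quoted above.
power_w = 130.0          # Pentium D max dissipation
die_area_mm2 = 140.0     # die area
thickness_mm = 0.75      # assumed wafer thickness

volume_mm3 = die_area_mm2 * thickness_mm   # 105 mm^3
volume_l = volume_mm3 * 1e-6               # 1 mm^3 = 1e-6 litre
density_mw_per_l = power_w / volume_l / 1e6  # W/L -> MW/L
print(round(density_mw_per_l, 2), "MW/L")    # -> 1.24 MW/L
```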

Last edited: Mar 19, 2009
7. Mar 19, 2009

### MTd2

I see, it is linked to transistor density and heat dissipation, optimized for a given architecture. A typical GPU (graphics processing unit) on a 65 nm process seems to have about the same transistor density as an AMD or Intel processor on a 45 nm process, which would explain why video cards rarely go beyond 900 MHz. GPUs rely more on parallel processing, so they do not require clocks as high as those of CPUs.

But the point is that when I asked for a source for that claim, I wanted to see some calculation that supports that value.

8. Mar 19, 2009

### xxChrisxx

On the technical side, it's heat that stops clock speeds from increasing.

From a more general point of view, it's simply not cost-effective to develop and implement cooling methods for chips of ever-increasing clock speed (diminishing returns). It's more cost-effective to put research into parallel processing and multicore technology. In the future, when this avenue is exhausted, you may very well see companies going back to researching chips with higher clock speeds.

9. Mar 19, 2009

### mheslep

Yes, I saw where someone had plotted the power-density history of CPU development; the extrapolated curve would have overtaken the power density at the surface of the sun within a few years, a tough heat-transfer challenge.

10. Mar 19, 2009

### Staff: Mentor

I've seen that graph too...crazy.

11. Mar 19, 2009

### Staff: Mentor

You don't need a theoretical calculation; you already have the data! Why bother calculating what the power dissipation should be (probably a pretty difficult, if not impossible, thing to calculate from scratch) when we already know what it is?

Here's a graph. Note that all the curves are exponential, following Moore's law (2^x), though the last is plotted on a logarithmic scale. The graph doesn't extrapolate, it just shows the history, but the hard ceiling we hit is obvious: above 140 watts or so, air cooling isn't enough anymore, and switching to water (or refrigerant!) would add a lot to the cost and complexity of a PC. http://www.spectrum.ieee.org/apr08/6106/CPU

Article it was from: http://www.spectrum.ieee.org/apr08/6106

Last edited by a moderator: May 4, 2017
12. Mar 19, 2009

### mgb_phys

A Pentium D is 130 W in a 140 mm^2 die ≈ 1 MW/m^2.
The Sun: 3.8×10^26 W / 6.08×10^18 m^2 ≈ 62 MW/m^2.
So not quite our Sun, but certainly more than a red giant.
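Both of those surface power densities follow directly from the numbers quoted, and are easy to verify:

```python
# Check the two surface power densities quoted above.
chip = 130.0 / (140e-6)    # W/m^2: 130 W over 140 mm^2 (140e-6 m^2)
sun = 3.8e26 / 6.08e18     # W/m^2: solar luminosity over solar surface area

print(round(chip / 1e6, 2), "MW/m^2")  # -> 0.93 MW/m^2 (~1 MW/m^2)
print(round(sun / 1e6, 1), "MW/m^2")   # -> 62.5 MW/m^2
```

So the chip is about a factor of 60-70 below the Sun's surface in areal power density, even though (as computed earlier in the thread) its volumetric power density is extreme because the die is so thin.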

The limit with the Pentium family is that they have SiO2 gate insulators, and on a 45 nm process the insulator would be only 2-3 atoms thick. The high-k hafnium gates on the Core 2 family mean they can go to 32 nm without too many problems.

13. Mar 22, 2009

### bassplayer142

The smaller the size of the transistors (e.g. 45 nm), the smaller the voltage; and the smaller the voltage, the less of the heat that destroys chips. Also, at small sizes different phenomena start to occur within the chip, like quantum tunneling.

14. Mar 22, 2009

### MTd2

I didn't really want to know the reason why it doesn't get above 5 GHz, but to see a calculation that yields specifically that number. Even an order-of-magnitude estimate for that frequency would be nice.

15. Mar 22, 2009

### Staff: Mentor

I'm not sure that such a calculation exists, but if you can find the power dissipation for the switching of a transistor of a certain size, you could multiply it by the number of transistors and the frequency to find the total wattage.
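Running that suggestion in reverse gives a feel for the numbers involved. Taking the Pentium D 955 EE figures from earlier in the thread (130 W, 3.46 GHz clock; the ~376 million transistor count and the naive assumption that every transistor switches every cycle are my own rough inputs, so treat the result as an upper-level estimate only):

```python
# Work backwards from a known chip's wattage to the implied energy
# per transistor per clock cycle. Transistor count is an assumption,
# and "every transistor switches every cycle" is a deliberate oversimplification.

power_w = 130.0            # Pentium D max dissipation (from the thread)
n_transistors = 376e6      # rough Pentium D 955 transistor count (assumption)
f_hz = 3.46e9              # Pentium D 955 EE clock (from the thread's cpudb link)

energy_per_switch_j = power_w / (n_transistors * f_hz)
print("{:.1e} J per transistor per cycle".format(energy_per_switch_j))
```

With these inputs the result comes out around 10^-16 J per transistor per cycle; multiplying back up, total power grows linearly with both transistor count and clock, which is the squeeze the thread describes.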

16. Mar 22, 2009

### mgb_phys

It's also a question of economics: you can take a stock 4 GHz CPU and run it at 8-10 GHz if you don't mind spending 10x as much on the cooling system as on the chip, and the chip having a lifetime of weeks instead of years.

17. Mar 22, 2009

### MTd2

Yes, that's what I am looking for. Over several generations, despite the different materials used to fabricate a transistor, and across different companies, the upper limit seems almost constant. That's why I am looking for an estimating calculation.

18. Mar 22, 2009

### Staff: Mentor

You could work backwards from the wattage of an existing chip, or pick a handful of chips and plot the power per transistor switch.

19. Apr 1, 2009

### MTd2

20. Apr 1, 2009

### mgb_phys

Don't hold your breath; GaAs has been "the next breakthrough" in high-speed chips since the 70s. 250 GHz GaAs transistors have been around for years, but nobody has made them into chips. Graphene transistors might be useful in a few specialist applications that currently use GaAs.

Then you hit the next limit: the speed of light. An electrical signal can't travel faster than about 1 ft/ns, so a 1 THz chip would have to wait ~1000 clock cycles for each new bit of memory.
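The propagation limit is worth a quick sanity check. At 1 THz the clock period is 1 ps, and even at the vacuum speed of light (on-chip signals are slower) a signal barely moves in that time:

```python
# How far can a signal travel in one clock period at 1 THz?
# Uses the vacuum speed of light; real on-chip propagation is slower.

c = 3e8                               # m/s, speed of light in vacuum
period_s = 1.0 / 1e12                 # 1 THz clock -> 1 ps period
reach_mm = round(c * period_s * 1000, 6)
print(reach_mm, "mm per clock cycle at 1 THz")  # -> 0.3 mm
```

At 0.3 mm per cycle, memory sitting 1 ft (~0.3 m) away is roughly 1000 clock periods distant one-way, which matches the figure in the post.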