Why do computer clocks hardly go above ~5 GHz?

  • Thread starter: MTd2
  • Tags: Clocks, Computer

Discussion Overview

The discussion centers on the reasons why computer clock speeds have plateaued around 5GHz, exploring the physical, technical, and economic limitations that contribute to this phenomenon. Participants examine factors such as heat dissipation, transistor density, and the evolution of chip architecture, as well as the implications for future technology development.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants suggest that the primary limitation on clock speeds is related to heat dissipation, as increasing clock speeds leads to higher power densities that are difficult to manage thermally.
  • Others argue that while individual transistors can operate at much higher frequencies, the collective heat generated by billions of transistors at 5GHz creates significant thermal challenges.
  • A participant points out that the trend of increasing clock speeds halted around 2003, indicating a potential plateau that may require radical technological changes to overcome.
  • Some contributions highlight the economic considerations, noting that developing advanced cooling solutions for higher clock speeds may not be cost-effective compared to pursuing multicore architectures.
  • There are discussions about the physical limits of silicon technology, including issues related to capacitance, resistance, and the challenges of miniaturizing components further.
  • A participant requests calculations or empirical data to support claims about the 5GHz limit, emphasizing the need for quantitative analysis in the discussion.
  • Some participants mention alternative materials and architectures, such as GaAs, as potential avenues for future exploration beyond silicon limitations.

Areas of Agreement / Disagreement

Participants generally agree that heat dissipation is a significant barrier to increasing clock speeds, but there is no consensus on whether the 5GHz limit is permanent or what specific calculations could definitively support this claim. Multiple competing views regarding the implications of these limitations and potential future directions remain present.

Contextual Notes

Limitations include unresolved mathematical steps related to power dissipation calculations and the dependence on specific definitions of performance metrics. The discussion does not resolve the complexities of transitioning to alternative materials or architectures.

Who May Find This Useful

This discussion may be of interest to those studying semiconductor physics, computer engineering, and thermal management in electronics, as well as professionals involved in CPU design and development.

MTd2
Gold Member
What is the reason chips won't go above 5 GHz nowadays, independently of the microarchitecture? Special cooling pushes them a little above that, but it's always around this scale.

Until 2002, computers increased in frequency with every generation of products, but that has mostly stopped since then. Why?
 
Ye cannae change the laws of physics captain.

It's mainly to do with capacitance. To change the state of a clock line you have to transfer charge. Current is the rate of flow of charge, so the less time you take to move the charge, the higher the current; to get a high current to flow you need a high voltage or a low resistance. But to reduce heat you want to lower the voltage (which is why chips run at 3.3 V instead of 5 V), and to reduce the charge you try to make the components smaller - but that increases the resistance.
There are also limits to making the parts smaller: it's difficult to print features that are 1/10 the wavelength of light, charge leakage increases as the parts get smaller and closer together, and ultimately the doping atoms in the silicon simply diffuse into the surrounding parts. There are attempts to fix all of these, like super-low-k dielectrics and silicon-on-insulator.

Ultimately it looks like 5 GHz is about where Si is going to top out for most general applications.
So you can either reinvent the world in GaAs or start getting cleverer about how we design software.
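The capacitance argument above is the textbook first-order CMOS dynamic-power relation, P ≈ a·C·V²·f: power grows linearly with clock frequency and quadratically with supply voltage. A minimal sketch, where the capacitance, voltage, and activity factor are illustrative assumptions rather than measurements of any real chip:

```python
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """First-order CMOS dynamic power: P = a * C * V^2 * f."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

# Assumed example: 100 nF of total switched capacitance, 1.2 V core
# voltage, 10% of the gates toggling on any given cycle.
for f_ghz in (3, 5, 8):
    p = dynamic_power(0.1, 100e-9, 1.2, f_ghz * 1e9)
    print(f"{f_ghz} GHz -> {p:.0f} W")  # 3 GHz -> 43 W, 5 GHz -> 72 W, 8 GHz -> 115 W
```

The linear dependence on f actually understates the problem: running faster usually also requires raising V, so real power climbs faster than linearly with clock speed.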
 
Last edited:
mgb_phys said:
Ultimately it looks like 5 GHz is about where Si is going to top out for most general applications.
Really? Do you have any source to support this claim? When transistors are tested alone, they reach hundreds of GHz.
 
MTd2 said:
Really? Do you have any source to support this claim?
You provided it! Clock speeds increased rapidly until 2003 or so; now they don't. That implies (or rather, is the definition of) a plateau. Whether it is permanent remains to be seen, but for it not to be permanent will require a radical change in technology.
When transistors are tested alone, they reach hundreds of GHz.
As mgb correctly stated, the biggest enemy is heat. A single transistor switching at 100 GHz doesn't put out anywhere near as much heat as a billion of them at 5 GHz.
 
In a nutshell, current technology has reached the physical barrier of processor speed. As mgb_phys pointed out, it's basically a thermal problem: you can't remove enough heat fast enough. That's why processor manufacturers now make multiple cores and do fancy things with software, such as clustering.
 
Working out the power densities was eye-opening for me. See: assuming 0.75mm wafer thickness, take the 130 W(max) Pentium D, which has 140mm^2 die size:

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Wafers

http://www.techpowerup.com/cpudb/323/Intel_Pentium_D_955_EE.html

[tex]\frac{130 \mbox{ W}}{0.75 \mbox{ mm} \times 140 \mbox{ mm}^2} = 1.2 \mbox{ MW/L}[/tex]

At full power, it heats up with ten times the power density of a nuclear reactor! It helps that it's under a millimeter thick, but still it gives you some perspective. You could not, for instance, make a 3D block of transistors, say by stacking hundreds of wafers. It would be impossible to cool.
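The arithmetic above can be checked directly, using the same assumed 0.75 mm wafer thickness and 140 mm² die:

```python
# Volumetric power density of a 130 W Pentium D (140 mm^2 die,
# assumed 0.75 mm thick).
power_w = 130.0
die_area_mm2 = 140.0
thickness_mm = 0.75

volume_l = die_area_mm2 * thickness_mm * 1e-6  # 1 L = 1e6 mm^3
density_mw_per_l = power_w / volume_l / 1e6    # W/L -> MW/L
print(f"{density_mw_per_l:.2f} MW/L")          # prints "1.24 MW/L"
```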
 
Last edited:
I see, it is linked to transistor density and heat dissipation optimized for a given architecture. It seems that a typical GPU (graphics processing unit) on a 65 nm process has about the same transistor density as an AMD or Intel processor on a 45 nm process, which would explain why video cards rarely go beyond 900 MHz. GPUs rely more on parallel processing, so they do not require clocks as high as those of CPUs.

But the point is that, when I asked about that claim, I wanted to see some calculation that supports that value.
 
On the technical side, it's heat that stops clock speeds from increasing.

From a more general point of view, it's simply not cost-effective to develop and implement cooling methods for chips of ever-increasing clock speed (diminishing returns). It's more cost-effective to research parallel processing and multicore technology. In the future, when this avenue is exhausted, you may very well see companies going back to researching chips with higher clock speeds.
 
signerror said:
Working out the power densities was eye-opening for me. See: assuming 0.75mm wafer thickness, take the 130 W(max) Pentium D, which has 140mm^2 die size:

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Wafers

http://www.techpowerup.com/cpudb/323/Intel_Pentium_D_955_EE.html

[tex]\frac{130 \mbox{ W}}{0.75 \mbox{ mm} \times 140 \mbox{ mm}^2} = 1.2 \mbox{ MW/L}[/tex]

At full power, it heats up with ten times the power density of a nuclear reactor! It helps that it's under a millimeter thick, but still it gives you some perspective...
Yes I saw where someone had plotted the power density history of CPU development; the curve would have overtaken the power density on the surface of the sun within a few years, a tough heat transfer challenge.
 
  • #10
mheslep said:
Yes I saw where someone had plotted the power density history of CPU development; the curve would have overtaken the power density on the surface of the sun within a few years, a tough heat transfer challenge.
I've seen that graph too...crazy.
 
  • #11
MTd2 said:
But the point is that, when I asked for that claim, I would like to see some calculation that supported that value.
You don't need a theoretical calculation, you already have the data! Why bother calculating what the power dissipation should be (probably a pretty difficult if not impossible thing to calculate from scratch) when we already know what it is?

Here's a graph. Note that all the curves are exponential, following Moore's law (doubling at a roughly fixed rate), though the last is plotted on a logarithmic scale. The graph doesn't extrapolate, it just shows the history, but the hard ceiling we hit is obvious: above 140 watts or so, air cooling isn't enough anymore, and switching to water (or refrigerant!) would add a lot to the cost and complexity of a PC. http://www.spectrum.ieee.org/apr08/6106/CPU

Article it was from: http://www.spectrum.ieee.org/apr08/6106
 
Last edited by a moderator:
  • #12
A Pentium D is 130 W in a 140 mm^2 die ≈ 1 MW/m^2
Sun = 3.8×10^26 W / 6.08×10^18 m^2 ≈ 62 MW/m^2
So not quite our sun, but within a couple of orders of magnitude of a stellar surface.

The limit with the Pentium family is that they have SiO2 insulators, and on a 45 nm process the insulator would be only 2-3 atoms thick. The high-k hafnium gates on the Core 2 family mean they can go to 32 nm without too many problems.
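Both surface power densities quoted above check out, using the solar luminosity (3.8×10^26 W) and surface area (6.08×10^18 m²) figures from the post:

```python
cpu_w_per_m2 = 130.0 / 140e-6    # 130 W spread over a 140 mm^2 die
sun_w_per_m2 = 3.8e26 / 6.08e18  # solar luminosity / surface area

print(f"CPU: {cpu_w_per_m2 / 1e6:.2f} MW/m^2")  # ~0.93 MW/m^2
print(f"Sun: {sun_w_per_m2 / 1e6:.1f} MW/m^2")  # ~62.5 MW/m^2
```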
 
  • #13
The smaller the transistors (e.g. 45 nm), the smaller the voltage, and the smaller the voltage, the less heat there is to destroy the chip. Also, at small sizes different phenomena start to occur within the chip, like quantum tunneling.
 
  • #14
I didn't really want to know the reason why it doesn't get above 5 GHz, but rather to see a calculation that produces that specific number. An order-of-magnitude estimate of that frequency would be fine.
 
  • #15
I'm not sure that such a calculation exists, but if you can find a power dissipation for the switching of a transistor of a certain size, you could multiply by the number of transistors and frequency to find total wattage.
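That estimate can be sketched directly. The per-switch energy, transistor count, and activity factor below are assumed round numbers for illustration, not datasheet values:

```python
def total_power_w(energy_per_switch_j, transistors, activity, freq_hz):
    """Total wattage = energy per toggle x toggles per second."""
    return energy_per_switch_j * transistors * activity * freq_hz

# Assumptions: ~1 fJ per transistor toggle, 300 million transistors,
# 10% of them toggling each cycle.
for f_ghz in (1, 3, 5, 10):
    w = total_power_w(1e-15, 300e6, 0.1, f_ghz * 1e9)
    print(f"{f_ghz:>2} GHz -> {w:.0f} W")
```

With these assumptions, 5 GHz comes out to 150 W, right around the ~140 W air-cooling ceiling mentioned earlier in the thread, which suggests why real chips topped out where they did.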
 
  • #16
It's also a question of economics: you can take a stock 4 GHz CPU and run it at 8-10 GHz if you don't mind spending 10x as much on the cooling system as on the chip, and it having a lifetime of weeks instead of years.
 
  • #17
russ_watters said:
I'm not sure that such a calculation exists, but if you can find a power dissipation for the switching of a transistor of a certain size, you could multiply by the number of transistors and frequency to find total wattage.

Yes, that's what I am looking for. For several generations, despite the different materials used to print a transistor, and across different companies, the upper limit seems almost constant. That's why I am looking for an estimate.
 
  • #18
You could work backwards from the wattage of an existing chip or pick a handful of chips and plot the power per transistor switch.
 
  • #19
I found this:

http://www.nordichardware.com/news,9025.html

MIT tests experimental chip. Does anyone know more details?
 
  • #20
Tom Palacios, professor at MIT, believes that we will see graphene-based chips in 1-2 years.

Don't hold your breath; GaAs has been the next breakthrough in high-speed chips since the 70s. 250 GHz GaAs transistors have been around for years, but nobody has made them into chips. The graphene transistors might be useful in a few specialist applications that currently use GaAs.

Then you hit the next limit: the speed of light. The electrical signal can't travel faster than about 1 ft/ns, so a 1 THz chip would have to wait 1000 clock cycles for each new bit of memory.
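The wire-delay arithmetic in the last paragraph, sketched: at roughly 1 ft per nanosecond, the number of clock cycles spent waiting on a signal scales directly with clock rate.

```python
def cycles_per_foot(clock_ghz, signal_ns_per_ft=1.0):
    """Clock cycles elapsed while a signal travels one foot (~30 cm)."""
    period_ns = 1.0 / clock_ghz
    return signal_ns_per_ft / period_ns

for ghz in (5, 100, 1000):
    print(f"{ghz:>4} GHz -> {cycles_per_foot(ghz):.0f} cycles per foot")
# At 1000 GHz (1 THz) that is ~1000 cycles, the wait described above.
```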
 
  • #21
But that was a record for an isolated transistor. When transistors are packed together, the frequency has to be much lower, because the more of them there are, the harder the system is to cool. This time it was a whole chip.
 
  • #22
What if you cooled the chip and made the materials superconductive? Would that increase the speed the signal is capable of?
 
  • #23
Lancelot59 said:
What if you cooled the chip and made the materials superconductive? Would that increase the speed the signal is capable of?

Nope. The speed of light isn't just a good idea - it's the law.
 
  • #24
Light speed chips, interesting.

Well yes, for obvious reasons, such as the infinite energy needed to get infinite time dilation, mass, and length contraction. However, I pose this question to you:

You have a pencil one light-year long. You push the back end of the pencil and turn a flashlight on at the same time. Does the opposing end of the pencil move first, or does the light cover that distance before the end moves?

Here is an interesting thing: I have heard of scientists being able to artificially slow down photons to almost a crawl. Inside the apparatus where the light was slowed, would relativistic velocities not be much lower? In the opposite situation, if we could theoretically increase the speed of light, could we create faster chips?

Also, for the heat situation: perhaps we are making things TOO small for them to be viable, like phones so small that you need tweezers to hit the buttons. What if there were spaces left inside a chip, and the empty space were filled with a thermally conductive fluid that would not cause short circuits or degrade the transistors? Such a setup exists already, but it involves filling the entire computer case with oil. Why not directly cool the transistors instead of having the heat disperse through a casing and then to a fan or water block? Have the cooling liquid move through the circuit itself? Such a system would be bigger but easier to manage, so we may not be able to make faster processors, but we could stack more cores inside the case.
 
Last edited:
  • #25
Well, light can carry information, so having chips operating on light wouldn't be pointless. I saw it on some Discovery programme where they slowed light at Harvard using lasers.

We can't increase the speed of light, but you can slow it down, encode something onto it, speed it up, and then slow it down again at the receiving end. It was interesting stuff, but I didn't fully get it.
 
  • #26
That method actually makes sense to me. As technology progresses we'll have to slow down the light less and less.

I think, however, we've reached the limit of what we can create mechanically. Perhaps we should look to organic solutions? Our brain operates many, many, MANY times faster than any single machine we could conceivably build. Why not make an organic computer? One human brain easily dwarfs every machine on this planet combined. We could finally do the crazy stuff we've always wanted to do, like create sentient AI.
 
  • #27
Lancelot59 said:
You have a pencil, and a pencil one lightyear long. You push the back end of the pencil, and turn the flashlight on at the same time. Does the opposing end of the pencil move first? Or does the light reach the same distance before the end moves?
Common question - no, the push travels through the pencil at the speed of sound in wood.
It's just that the speed of sound is fast enough in things like metal that we see it as instant.
If you work in explosives you have to consider how long it takes the bang to go through the side of a tank.

Here is an interesing thing. I have heard of scientists being able to artifically slow down photons to almost a crawl. Inside the apparatus where the light was slowed would not relativistic velocities be much lower? In an opposing situation, theoretically if we could increase the speed of light could we create faster chips?
Relativity is only concerned with the speed of light in vacuum - a photon only goes 40% as fast in diamond but that doesn't mean engagement rings do time travel.

Our brain operates many many MANY times faster than any single machine we could conceivably build.
No it doesn't - it works differently.
Try doing 1.23 * 4.45 in your head - a graphics card can do a trillion of these a second.
Your brain is very slow; its software is just much better at doing correlations between things than a typical computer design.

What if there were spaces left inside a chip, and the empty space filled with a thermally conductive fluid that would not cause short circuits or degrade the transistors.
The best thing to fill the spaces up with is a solid thermal conductive material.
Oil only really works well if it is flowing - on the scale of chips the features are much smaller than an oil molecule so it wouldn't flow well.
There are advances in making the base silicon a better heat conductor to get the heat out to the package more easily. Interestingly, the best material to use for this would be diamond: it has the highest ratio of heat conduction to electrical insulation.
 
  • #28
mgb_phys said:
Common question - no, the push travels through the pencil at the speed of sound in wood.
It's just that the speed of sound is fast enough in things like metal that we see it as instant.
If you work in explosives you have to consider how long it takes the bang to go through the side of a tank.
OK, besides a pencil: just any solid object. Does energy have to obey the light-speed limit?

mgb_phys said:
No it doesn't - it works differently.
Try doing 1.23 * 4.45 in your head - a graphics card can do a trillion of these a second.
Your brain is very slow; its software is just much better at doing correlations between things than a typical computer design.
We are slower at doing calculations because we have to consciously conceptualize everything. The actual hardware and autonomous functions are amazingly fast.

mgb_phys said:
The best thing to fill the spaces up with is a solid thermal conductive material.
Oil only really works well if it is flowing - on the scale of chips the features are much smaller than an oil molecule so it wouldn't flow well.
There are advances in making the base silicon a better heat conductor to get the heat out to the package more easily. Interestingly the best material to use for this would be diamond - it has the highest ratio of heat conduction to electrical insulation

That's what I mean. Instead of having a "water-block" sit on top turn the whole package into a giant water block.
 
  • #29
mgb_phys said:
a photon only goes 40% as fast in diamond but that doesn't mean engagement rings do time travel.
I am adding that to my all time favorite quotes.
 
  • #30
Actually, if there was anything able to move around in the diamond, and it moved at 0.4c in the diamond relative to a point outside the diamond, then relativity would apply, would it not?
 
