Why do computer clocks hardly go above ~5 GHz?

In summary: the limit is tied to transistor density and heat dissipation for a given architecture. A typical GPU (graphics processing unit) on a 65 nm process has about the same transistor density as an AMD or Intel processor on a 45 nm process, which would explain why video cards rarely go beyond 900 MHz. GPUs rely more on parallel processing, so they do not require clocks as high as CPUs. This is a general trend: higher clock speeds require more power, so the power density on a chip scales with clock speed.
  • #1
MTd2
What is the reason chips won't go above 5 GHz nowadays, independently of the microarchitecture? Special cooling makes them go a little above that, but it's always around this scale.

Until 2002, computers increased in frequency with every generation of products, but that has mostly stopped since then. Why?
 
  • #2
Ye cannae change the laws of physics, captain.

It's mainly to do with capacitance. To change the state of a clock line you have to transfer charge. Current is the rate of flow of charge, so the less time you take to move the charge, the higher the current; and to get a high current to flow you need a high voltage or a low resistance. But to reduce heat you want to lower the voltage (which is why chips run at 3.3 V instead of 5 V), and to reduce the charge you try to make the components smaller - but that increases the resistance.
There are also limits to making the parts smaller: it's difficult to print parts that are 1/10 the wavelength of light, charge leakage increases as the parts get smaller and closer together, and ultimately the dopant atoms in the silicon simply diffuse into the surrounding parts. There are attempts to fix all of these, like super-low-k dielectrics and silicon-on-insulator.

Ultimately it looks like 5 GHz is about where Si is going to top out for most general applications.
So you can either reinvent the world in GaAs or start getting cleverer about how we design software.
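
To put rough numbers on the capacitance argument above, here is a minimal sketch of the standard CMOS dynamic-power relation P ≈ αCV²f. All the inputs are illustrative assumptions, not figures for any particular chip:

[code]
# Dynamic (switching) power of a CMOS chip: P ~ alpha * C * V^2 * f.
# All inputs are illustrative assumptions, not datasheet values.

alpha = 0.1    # activity factor: fraction of the chip switching per cycle (assumed)
C = 100e-9     # total switched capacitance, ~100 nF (assumed)
V = 1.2        # supply voltage, volts

for f_ghz in (1, 3, 5, 10):
    P = alpha * C * V**2 * f_ghz * 1e9
    print(f"{f_ghz:>2} GHz -> {P:5.1f} W")

# Power grows linearly with f at fixed V; in practice higher f also needs
# higher V, so real power rises even faster with clock speed.
[/code]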
 
  • #3
mgb_phys said:
Ultimately it looks like 5 GHz is about where Si is going to top out for most general applications.
Really? Do you have any source to support this claim? When transistors are tested alone, they reach hundreds of GHz.
 
  • #4
MTd2 said:
Really? Do you have any source to support this claim?
You provided it! Clock speeds increased rapidly until 2003 or so; now they aren't increasing. That implies (or rather, is the definition of) a plateau. Whether it is permanent remains to be seen, but for it not to be permanent will require a radical change in technology.
When transistors are tested alone, they reach hundreds of GHz.
As mgb correctly stated, the biggest enemy is heat. A single transistor switching at 100 GHz doesn't put out anywhere near as much heat as a billion of them at 5 GHz.
 
  • #5
In a nutshell, current technology has reached the physical barrier of processor speed. As mgb_phys pointed out, it's basically a thermal problem: you can't remove heat fast enough. That's why processor manufacturers now make multiple cores and do fancy things with software such as clustering.
 
  • #6
Working out the power densities was eye-opening for me. See the sources below: assuming a 0.75 mm wafer thickness, take the 130 W (max) Pentium D, which has a 140 mm^2 die size:

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Wafers

http://www.techpowerup.com/cpudb/323/Intel_Pentium_D_955_EE.html

[tex]\frac{130 \mbox{ W}}{0.75 \mbox{ mm} \times 140 \mbox{ mm}^2} = 1.2 \mbox{ MW/L}[/tex]

At full power, it heats up with ten times the power density of a nuclear reactor! It helps that it's under a millimeter thick, but it still gives you some perspective. You could not, for instance, make a 3D block of transistors, say by stacking hundreds of wafers - it would be impossible to cool.
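
For anyone who wants to check the arithmetic, a quick sketch with the same numbers:

[code]
# Power density of the Pentium D die, using the figures above.
P = 130.0             # max power, watts
thickness_mm = 0.75   # wafer thickness, mm
area_mm2 = 140.0      # die area, mm^2

volume_L = thickness_mm * area_mm2 * 1e-6   # 1 L = 1e6 mm^3
print(f"{P / volume_L / 1e6:.2f} MW/L")     # ~1.24 MW/L
[/code]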
 
  • #7
I see, it is linked to transistor density and heat dissipation optimized for a given architecture. It seems that a typical GPU (graphics processing unit) on a 65 nm process has about the same transistor density as an AMD or Intel processor on a 45 nm process, which would explain why video cards rarely go beyond 900 MHz. GPUs rely more on parallel processing, so they do not require clocks as high as those of CPUs.

But the point is that, when I asked about that claim, I wanted to see some calculation that supported that value.
 
  • #8
On the technical side, it's heat that stops clock speeds from increasing.

From a more general point of view, it's simply not cost-effective to develop and implement methods for cooling chips at ever-higher clock speeds (diminishing returns). It's more cost-effective to put the research into parallel processing and multicore technology. In the future, when this avenue is exhausted, you may very well see companies going back to researching chips with higher clock speeds.
 
  • #9
signerror said:
Working out the power densities was eye-opening for me. See the sources below: assuming a 0.75 mm wafer thickness, take the 130 W (max) Pentium D, which has a 140 mm^2 die size:

http://en.wikipedia.org/wiki/Semiconductor_device_fabrication#Wafers

http://www.techpowerup.com/cpudb/323/Intel_Pentium_D_955_EE.html

[tex]\frac{130 \mbox{ W}}{0.75 \mbox{ mm} \times 140 \mbox{ mm}^2} = 1.2 \mbox{ MW/L}[/tex]

At full power, it heats up with ten times the power density of a nuclear reactor! It helps that it's under a millimeter thick, but it still gives you some perspective...
Yes, I saw where someone had plotted the power density history of CPU development; the curve would have overtaken the power density at the surface of the Sun within a few years - a tough heat-transfer challenge.
 
  • #10
mheslep said:
Yes, I saw where someone had plotted the power density history of CPU development; the curve would have overtaken the power density at the surface of the Sun within a few years - a tough heat-transfer challenge.
I've seen that graph too...crazy.
 
  • #11
MTd2 said:
But the point is that, when I asked about that claim, I wanted to see some calculation that supported that value.
You don't need a theoretical calculation; you already have the data! Why bother calculating what the power dissipation should be (probably a pretty difficult, if not impossible, thing to calculate from scratch) when we already know what it is?

Here's a graph. Note that the curves are exponential, following Moore's law (2^x), though the last is plotted on a logarithmic scale. The graph doesn't extrapolate; it just shows the history, but the hard ceiling we hit is obvious: above 140 watts or so, air cooling isn't enough anymore, and switching to water (or refrigerant!) would add a lot to the cost and complexity of a PC. http://www.spectrum.ieee.org/apr08/6106/CPU

Article it was from: http://www.spectrum.ieee.org/apr08/6106
 
  • #12
A Pentium D is 130 W in a 140 mm^2 die ≈ 1 MW/m^2.
Sun = 3.8×10^26 W / 6.08×10^18 m^2 = 62 MW/m^2.
So not quite our Sun, but certainly more than a red giant.
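
A quick sketch checking both densities (the solar luminosity and radius are standard textbook values):

[code]
import math

# CPU: 130 W spread over a 140 mm^2 die.
cpu = 130.0 / 140e-6                          # ~0.93 MW/m^2

# Sun: luminosity over surface area, with R_sun ~ 6.96e8 m.
sun = 3.8e26 / (4 * math.pi * (6.96e8)**2)    # ~62 MW/m^2

print(f"CPU: {cpu/1e6:.2f} MW/m^2   Sun: {sun/1e6:.0f} MW/m^2")
[/code]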

The limit with the Pentium family is that they have SiO2 gate insulators, and at a 45 nm process the insulator would be only 2-3 atoms thick. The high-k hafnium gates on the Core 2 family mean they can go to 32 nm without too many problems.
 
  • #13
The smaller the transistors ("45 nm"), the lower the voltage; and the lower the voltage, the less of the heat that destroys chips. Also, at small sizes, different phenomena start to occur within the chip, like quantum tunneling.
 
  • #14
I didn't really want to know the reason why it doesn't get above 5 GHz; rather, I wanted a calculation that would show that specific number. An order-of-magnitude estimate for that frequency would be nice.
 
  • #15
I'm not sure that such a calculation exists, but if you can find the power dissipation for the switching of a transistor of a certain size, you could multiply by the number of transistors and the frequency to find the total wattage.
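
A minimal sketch of that estimate, taking the switching energy per transistor as roughly E ≈ C·V². The gate capacitance, activity factor, and transistor count below are assumed, illustrative values:

[code]
# Rough total power: P ~ N * alpha * E_switch * f, with E_switch ~ C_gate * V^2.
# All inputs are illustrative assumptions, not measured values.

N = 300e6       # transistor count (assumed, typical mid-2000s CPU)
alpha = 0.1     # fraction of transistors switching each cycle (assumed)
C_gate = 1e-15  # capacitance switched per transistor, ~1 fF (assumed)
V = 1.2         # supply voltage, volts
f = 5e9         # clock frequency, 5 GHz

P = N * alpha * (C_gate * V**2) * f
print(f"{P:.0f} W")   # ~216 W: already past the ~140 W air-cooling ceiling
[/code]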
 
  • #16
It's also a question of economics: you can take a stock 4 GHz CPU and run it at 8-10 GHz if you don't mind spending 10x as much on the cooling system as on the chip, and accepting a lifetime of weeks instead of years.
 
  • #17
russ_watters said:
I'm not sure that such a calculation exists, but if you can find the power dissipation for the switching of a transistor of a certain size, you could multiply by the number of transistors and the frequency to find the total wattage.

Yes, that's what I am looking for. For several generations, despite the different kinds of materials used to print a transistor, and across different companies, the upper limit has seemed almost constant. That's why I am looking for an order-of-magnitude calculation.
 
  • #18
You could work backwards from the wattage of an existing chip or pick a handful of chips and plot the power per transistor switch.
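
For example, working backwards from the Pentium D figures quoted earlier (130 W at 3.46 GHz; the transistor count here is an assumed round number), a minimal sketch:

[code]
# Back out the average energy per transistor per clock cycle.
P = 130.0    # watts
f = 3.46e9   # clock, Hz
N = 376e6    # transistor count (assumed ballpark for a Pentium D)

print(f"{P / (N * f):.1e} J")   # ~1e-16 J, i.e. ~0.1 fJ per transistor per cycle
[/code]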
 
  • #19
I found this:

http://www.nordichardware.com/news,9025.html

MIT tests experimental chip. Does anyone know more details?
 
  • #20
Tom Palacios, a professor at MIT, believes that we will see graphene-based chips in 1-2 years.

Don't hold your breath; GaAs has been the next breakthrough in high-speed chips since the '70s. 250 GHz GaAs transistors have been around for years, but nobody has made them into chips. The graphene transistors might be useful in a few specialist applications that currently use GaAs.

Then you hit the next limit, the speed of light: the electrical signal can't go faster than about 1 ft/ns. So a 1 THz chip would have to wait 1000 clock cycles for each new bit of memory.
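
To put numbers on that: a light-speed signal covers only about 0.3 mm per cycle at 1 THz, so memory sitting an assumed 30 cm away really is ~1000 cycles distant. A quick sketch:

[code]
# Clock cycles spent waiting for a light-speed signal to cross a distance.
c = 3e8          # speed of light, m/s (signals in wires are somewhat slower)
f = 1e12         # 1 THz clock
distance = 0.30  # metres to memory (assumed; a round trip would double it)

print(f"{distance / (c / f):.0f} cycles")   # ~1000 clock cycles
[/code]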
 
  • #21
But that was a record for an isolated transistor. When they are packed together the frequency is much lower, because the more transistors there are, the harder the system is to cool. This time it was a whole chip.
 
  • #22
What if you cooled the chip and made the materials superconductive? Would that increase the speed the signal is capable of?
 
  • #23
Lancelot59 said:
What if you cooled the chip and made the materials superconductive? Would that increase the speed the signal is capable of?

Nope. The speed of light isn't just a good idea - it's the law.
 
  • #24
Light speed chips, interesting.

Well yes, for obvious reasons, such as the infinite energy needed for infinite time dilation, mass increase, and length contraction. However, I pose this question to you:

You have a flashlight, and a pencil one light-year long. You push the back end of the pencil and turn the flashlight on at the same time. Does the opposing end of the pencil move first, or does the light reach that distance before the end moves?

Here is an interesting thing. I have heard of scientists being able to artificially slow down photons to almost a crawl. Inside the apparatus where the light was slowed, would not relativistic velocities be much lower? In the opposite situation, theoretically, if we could increase the speed of light, could we create faster chips?

Also, for the heat situation, perhaps we are making things TOO small for them to be viable, such as phones so small that you need tweezers to hit the buttons. What if there were spaces left inside a chip, and the empty space were filled with a thermally conductive fluid that would not cause short circuits or degrade the transistors? Such a setup is done already, but it involves the entire computer case being filled with oil. Why not directly cool the transistors, instead of having the heat disperse through a casing and then to a fan or water block? Have the cooling liquid move through the circuit itself? Such a system would be bigger but easier to manage, so we may not be able to make faster processors, but we could stack more cores inside the case.
 
  • #25
Well, light can carry information, so having chips operating at light speed wouldn't be pointless. I saw it on some Discovery programme where they slowed light at Harvard using lasers.

We can't increase the speed of light, but you can slow it down, encode something onto it, speed it up, and then slow it down again at the receiving end. It was interesting stuff, but I didn't fully get it.
 
  • #26
That method actually makes sense to me. As technology progresses, we'll have to slow down the light less and less.

I think, however, that we've reached the limit of what we can create mechanically. Perhaps we should look to organic solutions? Our brain operates many many MANY times faster than any single machine we could conceivably build. Why not make an organic computer? One human brain easily dwarfs every machine on this planet combined. We could finally do the crazy stuff we've always wanted to do, like create sentient AI.
 
  • #27
Lancelot59 said:
You have a flashlight, and a pencil one light-year long. You push the back end of the pencil and turn the flashlight on at the same time. Does the opposing end of the pencil move first, or does the light reach that distance before the end moves?
Common question - no, the push travels through the pencil at the speed of sound in wood.
It's just that the speed of sound is fast enough in things like metal that we see it as instant.
If you work with explosives, you have to consider how long it takes the bang to go through the side of a tank.

Here is an interesting thing. I have heard of scientists being able to artificially slow down photons to almost a crawl. Inside the apparatus where the light was slowed, would not relativistic velocities be much lower? In the opposite situation, theoretically, if we could increase the speed of light, could we create faster chips?
Relativity is only concerned with the speed of light in vacuum - a photon goes only 40% as fast in diamond, but that doesn't mean engagement rings do time travel.

Our brain operates many many MANY times faster than any single machine we could conceivably build.
No, it doesn't - it works differently.
Try doing 1.23 * 4.45 in your head - a graphics card can do a trillion of these a second.
Your brain is very slow; its software is much better at doing correlations between things than a typical computer design.

What if there were spaces left inside a chip, and the empty space were filled with a thermally conductive fluid that would not cause short circuits or degrade the transistors?
The best thing to fill the spaces up with is a solid, thermally conductive material.
Oil only really works well if it is flowing - on the scale of chip features, the gaps are only a few molecules across, so it wouldn't flow well.
There are advances in making the base silicon a better heat conductor to get the heat out to the package more easily. Interestingly, the best material to use for this would be diamond - it has the highest ratio of heat conduction to electrical insulation.
 
  • #28
mgb_phys said:
Common question - no, the push travels through the pencil at the speed of sound in wood.
It's just that the speed of sound is fast enough in things like metal that we see it as instant.
If you work with explosives, you have to consider how long it takes the bang to go through the side of a tank.
OK, besides a pencil - just any solid object. Does energy have to obey the light-speed limit?

mgb_phys said:
No, it doesn't - it works differently.
Try doing 1.23 * 4.45 in your head - a graphics card can do a trillion of these a second.
Your brain is very slow; its software is much better at doing correlations between things than a typical computer design.
We are slower at doing calculations because we have to consciously conceptualize everything. The actual hardware and autonomous functions are amazingly fast.

mgb_phys said:
The best thing to fill the spaces up with is a solid, thermally conductive material.
Oil only really works well if it is flowing - on the scale of chip features, the gaps are only a few molecules across, so it wouldn't flow well.
There are advances in making the base silicon a better heat conductor to get the heat out to the package more easily. Interestingly, the best material to use for this would be diamond - it has the highest ratio of heat conduction to electrical insulation.

That's what I mean: instead of having a water block sit on top, turn the whole package into a giant water block.
 
  • #29
mgb_phys said:
a photon goes only 40% as fast in diamond, but that doesn't mean engagement rings do time travel.
I am adding that to my all-time favorite quotes.
 
  • #30
Actually, if there were anything able to move around in the diamond, and it moved at 0.4c in the diamond relative to a point outside the diamond, then relativity would come into play, would it not?
 
  • #31
It means you can go about 2.5 times the speed of light in a diamond (since light there travels at only ~40% of c).
This does happen - the blue glow from a reactor is due to particles traveling faster than light in water (n = 1.33). The limit is only c, the speed of light in vacuum.
 
  • #32
Ah, ok. I only have a basic understanding of relativity from Grade 11 Physics. :/
 
  • #33
Lancelot59 said:
OK, besides a pencil - just any solid object. Does energy have to obey the light-speed limit?

In any solid object, the "push" is a pressure wave and propagates from one end to the other at the speed of sound in that object's material.
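
For scale, a quick sketch with an assumed speed of sound in wood of ~4 km/s shows just how slow that push is over a light-year:

[code]
# Time for the compression wave to traverse a one-light-year pencil.
light_year_m = 9.46e15   # metres in one light year
v_sound = 4000.0         # m/s, assumed ballpark for sound in wood

years = light_year_m / v_sound / 3.15e7   # ~3.15e7 seconds per year
print(f"{years:.0f} years")               # ~75,000 years before the far end moves
[/code]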
 
  • #34
That makes sense.
 

1. Why can't computer clocks go above ~5 GHz?

Computer clocks are limited by power dissipation. Dynamic power scales with clock frequency (and with the square of the supply voltage), so as clock speed increases the chip produces more heat than practical cooling can remove, leading to errors and instability. This barrier is often called the "power wall".

2. What happens if a computer clock goes above ~5 GHz?

If a computer clock is pushed above ~5 GHz, the chip can overheat, leading to errors and instability in the system. This can cause crashes, data corruption, and other issues that degrade the performance and reliability of the computer.

3. Are there any benefits to having a computer clock above ~5 GHz?

While a higher clock speed may seem desirable for faster processing, the benefits are limited. Most modern processors are designed to be efficient at lower clock speeds and rely on other factors, such as multiple cores and cache memory, to improve performance.

4. Can computer clocks be improved to go above ~5 GHz?

There have been efforts to break through the power wall, but these attempts have met challenges and limitations. As technology advances, new materials and techniques may allow for higher clock speeds, but it is a complex and ongoing process.

5. Is there a maximum limit for computer clock speeds?

Current technology has plateaued at around ~5 GHz, and it is difficult to predict whether this will be the absolute maximum. As technology and materials continue to advance, higher clock speeds may be achieved, but a dramatic increase beyond this point looks unlikely.
