Will CPU stock clock speeds ever exceed 4.0 GHz?


by The_Absolute
Tags: clock, exceed, speeds, stock
The_Absolute
#1
Oct16-08, 02:24 PM
P: 182
The fastest stock clock speed I have ever seen on a CPU that is currently available to the public is a 4.0 GHz Intel Core 2 Quad (it may have been overclocked). For the past few years, clock speeds on CPUs never seemed to get substantially higher. Is there some kind of thermal limit to how fast you can run electricity through a circuit without it melting or catching fire? Manufacturers seemed to just add more CPU cores on one die working in tandem. I remember when the Pentium 4 was released; its design was based almost solely on high clock speeds.

I recently read something on the internet about a dual-socket, quad-core computer (eight cores) with each core running at 6.0 GHz (48.0 GHz combined). I know that if you tried to overclock a Pentium 4 to 6.0 GHz, it would immediately melt and/or catch fire, even with liquid nitrogen cooling. I'm not an electrical engineer or a computer scientist, but I was wondering if in five years or so we will see systems with 8, 16, 32 or more CPU cores, each running at maybe 6.0-12.0 GHz.
mgb_phys
#2
Oct16-08, 02:42 PM
Sci Advisor
HW Helper
P: 8,961
Yes, the problem is mostly thermal. Power in a switching transistor goes roughly as frequency squared, so going from 4 GHz to 6 GHz means (6/4)^2 = 225% as much power to remove. The problem is made worse by the smaller design rules needed to get more features on a chip: smaller features suffer thermal damage more easily (thermal migration of dopants, changes in crystal structure, etc.).
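The 225% figure above can be checked with a one-line calculation. This is just a toy sketch that takes the stated power-vs-frequency-squared relationship at face value:

```python
# Toy check of the power-scaling claim above: if switching power
# scales roughly as frequency squared, going from 4 GHz to 6 GHz
# means (6/4)^2 = 2.25x as much heat to remove.
def relative_power(f_old_ghz, f_new_ghz, exponent=2):
    """Ratio of new to old switching power, assuming P ~ f^exponent."""
    return (f_new_ghz / f_old_ghz) ** exponent

ratio = relative_power(4.0, 6.0)
print(f"{ratio:.2f}x the power, i.e. {ratio * 100:.0f}%")  # 2.25x, 225%
```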

You can run at higher speeds, but you have to do a much better job of cooling to get the extra heat out, and it's more difficult to get the required amount of power in. Multiple cores, more efficient processor instructions, and larger caches are generally a better-value way of improving performance.

Another possibility is asynchronous designs, where instead of every transistor running at the full clock frequency, the different parts run at different speeds, only clocking when they have work to do and going idle while waiting for results from another part. This is used in some simple low-power embedded designs but is tricky for a full PC-class CPU.
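A crude way to see the appeal of the "only clock when there is work" idea: a globally clocked design pays for every unit on every cycle, while a gated design pays only for busy units. This toy model is purely illustrative; the unit names and numbers are made up, not taken from any real design:

```python
# Toy model of clock gating: a globally clocked design spends energy
# on every unit every cycle; a gated design only spends it when a
# unit actually has work. All numbers here are illustrative.
def clocked_energy(cycles, units, energy_per_tick=1):
    """Every unit is clocked every cycle."""
    return cycles * units * energy_per_tick

def gated_energy(work_schedule, energy_per_tick=1):
    """work_schedule: for each cycle, the list of units with work."""
    return sum(len(busy) for busy in work_schedule) * energy_per_tick

schedule = [["alu"], ["alu", "fpu"], [], ["fpu"]]  # 4 cycles, 2 units
print(clocked_energy(4, 2))    # 8 energy ticks: everything always clocks
print(gated_energy(schedule))  # 4 energy ticks: only busy units clock
```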
Redbelly98
#3
Oct16-08, 02:55 PM
Mentor
P: 11,989
I've wondered about this myself. It's like Moore's law just hit a wall a few years ago, as far as PC processor speed is concerned.

Greg Bernhardt
#4
Oct16-08, 03:00 PM
Admin
P: 8,542

Quote by The_Absolute:
Will CPU stock clock speeds ever exceed 4.0 GHz?
Gaming has always moved the industry. In gaming your graphics card is much more important than your CPU. So most of the advancements are in graphics cards.
mgb_phys
#5
Oct16-08, 03:04 PM
Sci Advisor
HW Helper
P: 8,961
Strictly speaking, Moore's law said that the most cost-effective number of transistors to fit on a single chip would increase exponentially, and that's still true.
It said nothing about clock speeds or performance.
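Moore's law in this form (transistor count doubling roughly every two years) is easy to sketch. The starting figure below is illustrative: the Pentium 4 of 2000 had about 42 million transistors.

```python
# Moore's law, as stated above, is about transistor counts, not clock
# speed: roughly a doubling every two years.
def transistors(start_count, start_year, year, doubling_years=2):
    """Projected transistor count, assuming doubling every doubling_years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Illustrative: a ~42 million transistor chip in 2000 projects to
# ~672 million by 2008 -- while clock speeds stayed roughly flat.
print(f"{transistors(42e6, 2000, 2008):.0f}")  # 672000000
```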

The real breakthrough recently has been in using the massive power of your graphics card to perform the sort of calculations where you need to process lots of data in parallel. A $500 graphics card with something like Nvidia's CUDA almost matches a supercomputer of a few years ago for these sorts of applications.
Some cards can operate at nearly CPU clock speeds, but on 128 or 256 pieces of data at the same time!
Redbelly98
#6
Oct16-08, 03:22 PM
Mentor
P: 11,989
Thanks for the replies. I had a vague suspicion that computer performance was being improved in other ways.
turbo
#7
Oct16-08, 03:56 PM
PF Gold
P: 7,367
Intel's upcoming quad-core Core i7 runs at up to 3.2 GHz stock and is designed to be overclocked, and with the memory controller integrated on the CPU, it should be pretty darned fast. This may be the first Intel CPU specifically designed to be overclocked.
waht
#8
Oct16-08, 04:04 PM
P: 1,636
The only problem with a graphics card's GPU is that it processes in single precision, unlike a regular CPU, which handles double precision.

It seems the industry is now moving to multicore designs rather than raw speed. But small embedded devices could be the first in the near future to run in the 2 to 10 GHz range.
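The single- vs. double-precision gap is easy to demonstrate: a 32-bit float keeps about 7 significant decimal digits, a 64-bit double about 15-16. This sketch round-trips a value through single precision using only the standard library:

```python
import struct

# Demonstrate the single- vs double-precision gap: round-trip a value
# through a 32-bit float and see how much precision survives.
def to_float32(x):
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 / 3.0
print(f"double:  {x:.17f}")              # ~16 significant digits
print(f"float32: {to_float32(x):.17f}")  # only ~7 digits survive
print(to_float32(x) != x)                # True: precision was lost
```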
B. Elliott
#9
Oct16-08, 06:37 PM
PF Gold
P: 386
Quote by waht:
The only problem with a graphics card's GPU is that it processes in single precision, unlike a regular CPU, which handles double precision.

It seems the industry is now moving to multicore designs rather than raw speed. But small embedded devices could be the first in the near future to run in the 2 to 10 GHz range.
The typical single-core GPU has been gone for a while now. Take a look at Nvidia's G80, which was released about two years ago. (I also bought the 768MB 8800GTX G80 the day it was released, btw.)

The G80 GPU was revolutionary in that it incorporated 'stream processors'...

http://techreport.com/r.x/geforce-8800/block-g80.gif

Nvidia's GeForce 8800 graphics processor
The G80 has eight groups of 16 SPs, for a total of 128 stream processors. These aren't vertex or pixel shaders, but generalized floating-point processors capable of operating on vertices, pixels, or any manner of data. Most GPUs operate on pixel data in vector fashion, issuing instructions to operate concurrently on the multiple color components of a pixel (such as red, green, blue and alpha), but the G80's stream processors are scalar—each SP handles one component. SPs can also be retasked to handle vertex data (or other things) dynamically, according to demand.
http://techreport.com/articles.x/11211

When it comes to processing large amounts of data arranged into small packets, streaming processors rule.
mgb_phys
#10
Oct16-08, 06:44 PM
Sci Advisor
HW Helper
P: 8,961
The GeForce 8xxx series only handles single-precision floats; the GT200 series will do doubles.
turbo
#11
Oct16-08, 06:47 PM
PF Gold
P: 7,367
With RISC-style programming and faster chips (with on-chip memory control), it could be time for another revolution in PCs. Intel's Core i7, with fast-bus access to RAM, could revolutionize PC performance if we get RISC-friendly OSes and applications. Very fast on-chip processing combined with instruction sets that favor lots of very short, simple operations can speed up computers greatly. The PC industry is wading through molasses, preserving the status quo and protecting market share.
B. Elliott
#12
Oct16-08, 06:59 PM
PF Gold
P: 386
And to answer the OP's question, I don't see processor speeds exceeding 4 GHz for quite some time. Going back to what mgb_phys was saying, chips undergo cycles of die shrink. When they shrink the die, the chip runs cooler (which can allow greater overclocking), it (typically) processes data more efficiently, and they can pack in larger caches (L1/L2).

With that increase in efficiency (faster processing), there isn't as much need for a frequency increase, since more work is being done in the same number of clock cycles, so they keep the frequency low (less heat, less power consumption).
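The trade-off above is the standard performance identity: performance is roughly instructions-per-cycle (IPC) times clock frequency, so a wider, more efficient core at a lower clock can beat a faster, narrower one. The numbers below are made up for illustration, not real chip specs:

```python
# Performance ~ IPC x clock frequency: a more efficient core at a
# lower clock can outrun a faster, less efficient one.
# The IPC and frequency values below are illustrative only.
def perf(ipc, freq_ghz):
    """Billions of instructions per second."""
    return ipc * freq_ghz

old_chip = perf(ipc=1.0, freq_ghz=3.8)  # high clock, low IPC
new_chip = perf(ipc=2.0, freq_ghz=2.4)  # die-shrunk, wider core
print(old_chip, new_chip)  # 3.8 vs 4.8: slower clock, faster chip
```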

What you'll continue to see is what has been happening for years now: a die shrink coupled with an overall decrease in operating frequency. Frequencies will slowly creep up (in the higher-end models), and then when the next-generation chip is released you'll see another drop in frequency and heat output, along with an increase in available cache (thanks to the added die space).

But keep in mind this is generic, since Intel offers different styles of chip within a family. For example, the Core 2s: Conroe, Allendale, Wolfdale, Kentsfield, Yorkfield, Conroe XE, Kentsfield XE, Yorkfield XE.
rcgldr
#13
Oct16-08, 07:01 PM
HW Helper
P: 6,931
The issue, as I understand it, is the voltage-to-size ratio it takes to get transistors to switch at a 4 GHz or faster CPU rate, and the amount of heat generated in a relatively tiny area. Scaling down the size and voltage isn't helping, because the ratio of voltage to size has to remain about the same to achieve the same switching rates. I read somewhere that using a very small process technology but spreading it out could help, but it wouldn't be cost-effective (relatively low density for a given process size).

Note that Intel mentioned 4.0 GHz as a limit a few years ago, even before they started making hyperthreaded and multi-core CPUs.
B. Elliott
#14
Oct16-08, 07:10 PM
PF Gold
P: 386
Required voltage and operating frequency are intimately linked.

With Core 2 Duo processors, I've been able to overclock them considerably before having to raise the voltage. The only time I did have to increase the voltage was when I was already getting close to the point of diminishing returns.
mgb_phys
#15
Oct16-08, 07:15 PM
Sci Advisor
HW Helper
P: 8,961
As you make the parts smaller, they can switch faster, since the layers are thinner and the capacitance lower; but the electric field is larger, which causes dopants to move out of the depletion layer. Heat makes this worse.
The current record for regular silicon is around 50 GHz; you can do roughly double that with extra guard dopants like fluorine to stop the boron diffusing out.
Then there is leakage current, which wastes power whether the transistor is being switched or not; the latest 45 nm processes use fancy metal gates to reduce that.

But inevitably a 4 GHz CPU is going to burn a couple of hundred watts in a square centimeter of silicon.
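To put "a couple of hundred watts in a square centimeter" in perspective, here is a back-of-envelope comparison against a household hot plate. The hot-plate wattage and diameter are rough assumptions for illustration:

```python
import math

# Back-of-envelope power density: ~200 W through ~1 cm^2 of silicon,
# versus an assumed ~1500 W hot plate ~18 cm across.
def power_density(watts, area_cm2):
    """Watts per square centimeter."""
    return watts / area_cm2

cpu = power_density(200, 1.0)
hotplate = power_density(1500, math.pi * (18 / 2) ** 2)  # ~254 cm^2
print(f"CPU: {cpu:.0f} W/cm^2, hot plate: {hotplate:.1f} W/cm^2")
```

The CPU comes out tens of times denser than the hot plate, which is why cooling dominates the design.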
The_Absolute
#16
Oct16-08, 07:47 PM
P: 182
I have a few more PC questions to ask.

1. I'm not an electrician, but if I were to build a PC that required a 1100-1500+ watt power supply, is there enough juice coming through the wall outlet to meet the PC's power demands? Would the outlet burn out? Do I need a special outlet?

2. Approximately how many volts are there in 1200 watts? And how does that compare to the amount of electricity needed to run a television, a vacuum cleaner, or a washing machine, for example?

3. The Nvidia GeForce GTX 280 is allegedly capable of almost 1 teraflop of computing muscle. IBM's 400+ teraflop Blue Gene was the world's fastest supercomputer until it was recently overtaken by IBM's petaflop Roadrunner. About seven years ago, the world's fastest supercomputer was roughly equal in computational power to today's latest GPUs. In the distant future, will there be GPUs rivaling, or topping, the fastest supercomputers of today? And why are GPUs dwarfing the computing power of CPUs?

4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially or fully utilize multiple GPUs? I'm guessing that running extremely demanding programs like the video game Crysis on its maximum settings would call for more GPUs? Will the GTX 280 do the job?

5. How does the computational power of the human brain compare to today's fastest supercomputers? If the world's fastest computers dwarf the human mind, is it possible for there to be a super-AI that completely rivals all human thought and intellect? Allegedly, the human brain is the most complex object in the known universe (I don't know if that is true).

6. I've heard about a chip invented as part of a highly classified government project that has the power of 100,000 CRAY-5 supercomputers, and that there is a massive underground bunker full of racks and racks of these chips running in tandem as far as the eye can see. (I can't imagine what they are used for... maybe trying to set a new Crysis benchmark, lol!) Is this true? How powerful is a CRAY-5?

7. What will computers, especially personal computers, be like in the years 2050-2100, by your estimate?

8. How much more powerful is an Nvidia GeForce GTX 280 than an ATI Radeon X800 Pro (my old card in my now-broken PC)? Oh, and Nvidia is coming out with the GeForce GTX 350, which uses 2 GB of GDDR5 RAM with a single core.
mgb_phys
#17
Oct16-08, 08:14 PM
Sci Advisor
HW Helper
P: 8,961
Quote by The_Absolute:
1. I'm not an electrician, but if I were to build a PC that required a 1100-1500+ watt power supply, is there enough juice coming through the wall outlet to meet the PC's power demands? Would the outlet burn out? Do I need a special outlet?
Not quite yet; a typical outlet can supply around 220 V x 13 A ≈ 2900 W (Europe) or 110 V x 15 A ≈ 1650 W (US).

Quote by The_Absolute:
2. Approximately how many volts are there in 1200 watts? And how does that compare to the amount of electricity needed to run a television, a vacuum cleaner, or a washing machine, for example?
Divide by your voltage: 220 V (Europe) or 110 V (US).
A TV is a few 100 W; a washing machine can be 2-3 kW - that's why you have special 220 V outlets for them in the US.
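The outlet arithmetic above can be wrapped in a small check: an outlet supplies roughly volts times amps watts, and the PSU has to fit under that. (Circuit ratings vary; 15 A at 110 V is typical for a US outlet, and continuous loads should really be derated, but this mirrors the rough figures above.)

```python
# Rough check of whether a power supply fits on a wall circuit:
# available watts ~ volts x amps. Figures mirror the post above.
def outlet_watts(volts, amps):
    """Approximate watts available from an outlet."""
    return volts * amps

def psu_fits(psu_watts, volts, amps):
    """True if the PSU's rated draw fits the circuit."""
    return psu_watts <= outlet_watts(volts, amps)

print(outlet_watts(110, 15))   # 1650 W on a US 15 A circuit
print(psu_fits(1500, 110, 15)) # a 1500 W PSU just squeezes in
print(psu_fits(1500, 220, 13)) # easy headroom on a European outlet
```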

Quote by The_Absolute:
3. The Nvidia GeForce GTX 280 is allegedly capable of almost 1 teraflop of computing muscle. IBM's 400+ teraflop Blue Gene was the world's fastest supercomputer until it was recently overtaken by IBM's petaflop Roadrunner. About seven years ago, the world's fastest supercomputer was roughly equal in computational power to today's latest GPUs. In the distant future, will there be GPUs rivaling, or topping, the fastest supercomputers of today? And why are GPUs dwarfing the computing power of CPUs?
These are ideal-case figures: they assume you want to do a very small class of operations on 128/256 numbers in parallel and don't need much memory or bandwidth. But for these sorts of applications GPUs are amazing; Nvidia has a library called CUDA that lets you use your GPU for general calculations.

Quote by The_Absolute:
4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially or fully utilize multiple GPUs? I'm guessing that running extremely demanding programs like the video game Crysis on its maximum settings would call for more GPUs? Will the GTX 280 do the job?
There are a few apps that can use one card to do the physics and the other to display the game.
Redbelly98
#18
Oct16-08, 08:33 PM
Mentor
P: 11,989
Quote by mgb_phys:
A TV is a few 100 W ...
Jeez, I need to get out more or something. I was reading that as "a Teravolt is a few 100 W"

