Will CPU stock clock speeds ever exceed 4.0 GHz?

  1. The fastest stock clock speed I have ever seen on a CPU that is currently available to the public is a 4.0 GHz Intel Core 2 Quad (and that one may have been overclocked). For the past few years, clock speeds on CPUs never seemed to get substantially higher. Is there some kind of thermal limit to how fast you can run electricity through a circuit without it melting or catching fire? Manufacturers seemed to just add more CPU cores on one die working in tandem. I remember when the Pentium 4 was released; its design was based almost entirely on pushing clock speed.

    I read something on the internet recently about a computer with two quad-core CPUs (eight cores), each core running at 6.0 GHz (which the article added up to 48.0 GHz). I know that if you tried to overclock a Pentium 4 to 6.0 GHz, it would immediately melt and/or catch fire, even with liquid nitrogen cooling it. I'm not an electrical engineer or a computer scientist, but I was wondering whether in five years or so we will see systems with 8, 16, 32 or more CPU cores, each running at maybe 6.0-12.0 GHz.
     
  3. mgb_phys

    mgb_phys 8,952
    Science Advisor
    Homework Helper

    Yes, the problem is mostly thermal. Power in a switching transistor goes roughly as frequency squared, so going from 4 GHz to 6 GHz means about 225% as much power to remove. The problem is made worse by the smaller design rules needed to fit more features on a chip: smaller features suffer thermal damage more easily (thermal migration of dopants, changes in crystal structure, etc.).
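
    To put rough numbers on that rule of thumb, here's a quick back-of-envelope sketch (a simplification - real dynamic power goes roughly as C·V²·f, and the voltage usually has to rise with frequency, which is where the roughly-squared behaviour comes from):

    Code (Python):
        # Rough sketch: relative switching power if power scales ~ frequency^2
        # (rule-of-thumb only; real power is ~ C * V^2 * f, with V rising with f)
        def relative_power(f_old_ghz, f_new_ghz):
            return (f_new_ghz / f_old_ghz) ** 2

        print(relative_power(4.0, 6.0))  # -> 2.25, i.e. ~225% of the heat to remove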

    You can run at higher speeds, but you have to do a much better job of cooling to get the extra heat out, and it's harder to get the required amount of power in. Multiple cores, more efficient processor instructions, and larger caches are generally a better-value way of improving performance.

    Another possibility is asynchronous design, where instead of every transistor running at the full clock frequency, different parts run at different speeds, clocking only when they have work to do and going idle while waiting for results from another part. This is used in some simple low-power embedded designs but is tricky for a full PC-class CPU.
     
    Last edited: Oct 16, 2008
  4. Redbelly98

    Redbelly98 12,051
    Staff Emeritus
    Science Advisor
    Homework Helper

    I've wondered about this myself. It's like Moore's law just hit a wall a few years ago, as far as PC processor speed is concerned.
     
  5. Greg Bernhardt

    Staff: Admin

    Gaming has long driven the industry, and in gaming your graphics card matters much more than your CPU, so most of the advancement is happening in graphics cards.
     
  6. mgb_phys

    mgb_phys 8,952
    Science Advisor
    Homework Helper

    Strictly speaking, Moore's law said that the most cost-effective number of transistors to fit on a single chip would increase exponentially - and that's still true.
    It didn't say anything about clock speeds or performance.

    The real breakthrough recently has been using the massive power of your graphics card to perform the sorts of calculations where you need to process lots of data in parallel. A $500 graphics card with something like Nvidia's CUDA can nearly match a supercomputer from a few years ago for these kinds of applications.
    Some cards can operate at close to CPU clock speeds, but on 128 or 256 pieces of data at the same time!
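
    As a toy illustration of that data-parallel idea (this is just NumPy running on the CPU, not actual CUDA code, but the pattern is the same): instead of looping over elements one at a time, you apply one operation to a whole block of numbers at once, which is exactly what the GPU's parallel units are built for.

    Code (Python):
        import numpy as np

        # One operation applied to many elements at once (the stream-processor pattern),
        # versus a scalar loop that touches one element per step.
        data = np.arange(256, dtype=np.float32)

        # Scalar style: 256 separate multiply-adds
        scalar_result = [2.0 * x + 1.0 for x in data]

        # Data-parallel style: one vectorised multiply-add over all 256 elements
        parallel_result = 2.0 * data + 1.0

        print(np.allclose(scalar_result, parallel_result))  # True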
     
    Last edited: Oct 16, 2008
  7. Redbelly98

    Redbelly98 12,051
    Staff Emeritus
    Science Advisor
    Homework Helper

    Thanks for the replies. I had a vague suspicion that computer performance was being improved in other ways.
     
  8. turbo

    turbo 7,366
    Gold Member

    Intel's upcoming quad-core Core i7 is designed to be overclocked to 3.2 GHz, and with the memory controller integrated on the CPU, it should be pretty darned fast. This may be the first Intel CPU specifically designed to be overclocked.
     
  9. The only problem with a graphics card's GPU is that it processes in single precision, unlike a regular CPU, which handles double precision.

    It seems the industry is now moving to multicore designs rather than raw speed, but small embedded devices could be among the first to run in the 2 to 10 GHz range in the near future.
     
  10. The typical single-core GPU has been gone for a while now. Take a look at Nvidia's G80, which was released about two years ago. (I also bought the 768MB 8800GTX G80 the day it was released, by the way.)

    The G80 GPU was revolutionary in that it incorporated 'stream processors'...

    http://techreport.com/r.x/geforce-8800/block-g80.gif

    Nvidia's GeForce 8800 graphics processor
    http://techreport.com/articles.x/11211

    When it comes to processing large amounts of data arranged into small packets, streaming processors rule.
     
  11. mgb_phys

    mgb_phys 8,952
    Science Advisor
    Homework Helper

    The GeForce 8xxx series only handles single-precision floats; the GT200 series will do doubles.
     
  12. turbo

    turbo 7,366
    Gold Member

    With RISC-style programming and faster chips (with on-chip memory control) it could be time for another revolution in PCs. Intel's Core i7, with fast-bus access to RAM, could transform PC performance if we get RISC-friendly OSes and applications. Very fast on-chip processing combined with instruction sets that favor lots of very short, simple operations can speed up computers greatly. The PC industry is wading through molasses, preserving the status quo and protecting market share.
     
  13. And to answer the OP's question, I don't see processor speeds exceeding 4 GHz for quite some time. Going back to what mgb_phys was saying, the chips go through cycles of die shrinks. When they shrink the die, the chip runs cooler (which can allow greater overclocking), the chip (typically) processes data more efficiently, and they can pack in larger caches (L1/L2).

    With that increase in efficiency (faster processing) there isn't as much need for a frequency increase, since more work is being done in the same number of clock cycles, so they keep the frequency low (less heat, less power consumption).
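
    A crude way to see why (the numbers here are made up, purely for illustration): useful performance is roughly instructions-per-clock times clock frequency, so a die-shrunk chip that does more work per cycle can match an older, higher-clocked chip while running cooler.

    Code (Python):
        # Toy comparison: performance ~ IPC * frequency (illustrative numbers only)
        old_chip = {"ipc": 1.0, "freq_ghz": 3.6}  # older design, higher clock
        new_chip = {"ipc": 1.3, "freq_ghz": 2.8}  # die shrink, more work per cycle

        perf = lambda chip: chip["ipc"] * chip["freq_ghz"]
        print(perf(old_chip), perf(new_chip))  # 3.6 vs 3.64 - similar speed, lower clock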

    What you'll see continue to happen is what has been happening for years now: a die shrink coupled with an overall decrease in operating frequency. Frequencies then slowly creep up (on the higher-end models), and when the next-generation chip is released you see another drop in frequency and heat output, along with an increase in efficiency and available cache (thanks to the extra die space).

    But keep in mind this is generic, since Intel offers different styles of chip within a family. For example, the Core 2s: Conroe, Allendale, Wolfdale, Kentsfield, Yorkfield, Conroe XE, Kentsfield XE, Yorkfield XE.
     
  14. rcgldr

    rcgldr 7,606
    Homework Helper

    The issue as I understand it is the voltage-to-size ratio it takes to get transistors to switch at a 4 GHz or faster CPU rate, and the amount of heat generated in a relatively tiny area. Scaling down the size and voltage isn't helping, because the ratio of voltage to size has to stay about the same to achieve the same switching rates. I read somewhere that using a very small process but spreading the circuitry out could help, but it wouldn't be cost effective (relatively low density for a given process size).

    Note that Intel mentioned 4.0 GHz as a limit a few years ago, even before they started making hyper-threading and multi-core CPUs.
     
  15. Required voltage and operating frequency are intimately linked.

    With the Core 2 Duo processors, I've been able to overclock them considerably before having to make a voltage increase. The only time I did have to increase the voltage was when I was already getting close to the point of diminishing returns anyway.
     
  16. mgb_phys

    mgb_phys 8,952
    Science Advisor
    Homework Helper

    As you make the parts smaller they can switch faster, since the layers are thinner and the capacitance is lower, but the electric field is larger, which causes dopants to move out of the depletion layer. Heat makes this worse.
    The current record for regular silicon is around 50 GHz; you can roughly double that with extra guard dopants such as fluorine to stop the boron diffusing out.
    Then there is leakage current, which just wastes power whether the transistor is being switched or not - the latest 45 nm processes use fancy metal gates to reduce that.

    But inevitably a 4 GHz CPU is going to burn a couple of hundred watts in a square centimetre of silicon.
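
    To get a feel for what that means for the cooling, here's a rough sketch with illustrative numbers (assuming ~200 W from ~1 cm² of die and a maximum silicon temperature of about 100 °C):

    Code (Python):
        # Back-of-envelope cooling requirement: R_thermal <= (T_max - T_ambient) / Power
        power_w = 200.0         # heat from the hypothetical 4 GHz CPU (assumed)
        area_cm2 = 1.0          # die area (assumed)
        t_silicon_max = 100.0   # rough maximum junction temperature, deg C
        t_ambient = 25.0        # room air, deg C

        heat_flux = power_w / area_cm2                    # 200 W/cm^2
        r_needed = (t_silicon_max - t_ambient) / power_w  # 0.375 deg C per watt

        print(heat_flux, r_needed)
        # The entire heatsink + fan path would have to stay under ~0.4 C/W in total.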
     
  17. I have a few more PC questions to ask.

    1. I'm not an electrician, but if I were to build a PC that required an 1100-1500+ watt power supply, is there enough juice coming through a wall outlet to meet the PC's power demands? Would the outlet burn out? Do I need a special outlet?

    2. Approximately how much current does a 1200-watt load draw? And how does that compare to the amount of electricity needed to run a television, a vacuum cleaner, or a washing machine, for example?

    3. The NVidia GeForce GTX 280 is allegedly capable of almost 2 teraflops of computing muscle. IBM's 400+ teraflop "Blue Gene" was the world's fastest supercomputer until it was overtaken very recently by IBM's petaflop-class "Roadrunner". So the world's fastest supercomputer from about seven years ago had computational power roughly comparable to today's latest GPUs. In the distant future, will there be GPUs rivaling or topping the fastest supercomputers of today? And why are GPUs dwarfing the computing power of CPUs?

    4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially or fully utilize multiple GPUs? I'm guessing that running extremely demanding programs like the video game "Crysis" on its maximum settings would call for more GPUs? Will the GTX 280 do the job?

    5. How does the computational power of the human brain compare to today's fastest supercomputers? If the world's fastest computers dwarf the human mind, is it possible for there to be a super-AI that completely rivals all human thought and intellect? Allegedly, the human brain is the most complex object in the known universe (I don't know whether that is true).

    6. I've heard about a chip that was supposedly invented as part of a highly classified government project, with the power of 100,000 CRAY-5 supercomputers, and that there is a massive underground bunker full of racks and racks of these chips running in tandem for as far as the eye can see. (I can't imagine what they are used for - maybe trying to set a new Crysis benchmark, lol!) Is this true? How powerful is a CRAY-5?

    7. What will computers, especially personal computers, be like in the years 2050-2100, by your estimate?

    8. How much more powerful is an NVidia GeForce GTX 280 than an ATI Radeon X800 Pro (my old card in my now-broken PC)? Oh, and NVidia is reportedly coming out with the GeForce GTX 350, which uses 2 GB of GDDR5 RAM with a single core.
     
  18. mgb_phys

    mgb_phys 8,952
    Science Advisor
    Homework Helper

    Not quite yet - a typical outlet can supply around 220 V × 13 A ≈ 2900 W (Europe) or 110 V × 15 A ≈ 1650 W (US).

    Divide the watts by your mains voltage, 220 V (Europe) or 110 V (USA), to get the current in amps.
    A TV is a few hundred watts, while a washing machine can be 2-3 kW - that's why you have special 220 V outlets for them in the US.
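
    For question 2, the same arithmetic in a couple of lines (the wattages are just illustrative; current = watts / volts):

    Code (Python):
        # Current drawn from the wall: amps = watts / volts
        mains_volts = 110.0  # US mains; use 220.0 for Europe

        loads_w = {"1200 W PC power supply": 1200.0,
                   "TV": 300.0,
                   "vacuum cleaner": 1000.0,
                   "washing machine": 2500.0}

        for name, watts in loads_w.items():
            print(f"{name}: {watts / mains_volts:.1f} A")
        # The 2500 W load draws ~23 A - more than a standard 15 A US outlet supplies.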

    These are ideal-case figures - they assume you want to do a very small class of operations on 128/256 numbers in parallel and don't need much memory or bandwidth. But for these sorts of applications GPUs are amazing - NVidia has a library called CUDA for using your GPU for general calculations.

    There are a few apps that can use one card to do the physics and the other to display the game.
     
  19. Redbelly98

    Redbelly98 12,051
    Staff Emeritus
    Science Advisor
    Homework Helper

    Jeez, I need to get out more or something. I was reading that as "a Teravolt is a few 100 W" :confused:
     
    What about the NVidia "Tesla" GPU? I've heard about it but know absolutely nothing about it.
     
  21. I realise this is almost a year later now, but I'm running at about 4.2 GHz on an Intel Wolfdale 3.33 GHz Core 2 Duo, and I'm expecting to get it much higher once I incorporate liquid cooling. As well, my GPU's memory is running at 3.6 Gbps (900 MHz), which I haven't overclocked yet, and my core clock is running at 850 MHz with 800 stream processing units. If you want to take a look at it, it's called the XFX HD-489X-ZSFC Radeon HD 4890 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP Ready CrossFire Supported Video Card. Also, if you guys want to look at some very impressive technology, go onto Apple's website and build a Mac Pro with maxed-out GPUs and CPUs. It runs two Intel Xeon Nehalem quad-cores, each at about 3.33 GHz stock, so you could easily overclock them to about 4.1 GHz apiece - eight cores at 4.1 GHz each - plus up to four GPUs to take care of any graphics situation you could possibly throw at it. All in all, if you max this machine out, it costs about $13,000.
     