Will CPU stock clock speeds ever exceed 4.0 GHz?

  • Thread starter: The_Absolute
  • Tags: clock, cpu
In summary, the fastest stock clock speed I have ever seen on a CPU that is currently available to the public is the Intel Core 2 Quad at 4.0 GHz, which may be overclocked. For the past few years, clocks on CPUs never seemed to get substantially higher. Is there some kind of thermal limit to how fast you can run electricity through a circuit without it melting or catching fire? Manufacturers seem to just add more CPU cores on one die working in tandem. I remember when the Pentium 4 was released, which was marketed solely on high clock speeds.
  • #1
The_Absolute
The fastest stock clock speed I have ever seen on a CPU that is currently available to the public is the Intel Core 2 Quad at 4.0 GHz (it may be overclocked). For the past few years, clocks on CPUs never seemed to get substantially higher. Is there some kind of thermal limit to how fast you can run electricity through a circuit without it melting or catching fire? Manufacturers seem to just add more CPU cores on one die working in tandem. I remember when the Pentium 4 was released, which was marketed solely on high clock speeds.

I read something on the internet recently about a dual quad-core CPU computer (eight cores) with each core running at 6.0 GHz (48.0 GHz in total). I know that if you tried to overclock a Pentium 4 to 6.0 GHz, it would immediately melt and/or catch fire, even with liquid nitrogen cooling it. I'm not an electrical engineer, nor a computer scientist, but I was wondering if maybe in five years or so we will see 8-, 16-, or 32+ core systems with each core running at maybe 6.0-12.0 GHz.
 
  • #2
Yes, the problem is mostly thermal. Power dissipated in a switching transistor grows roughly as frequency squared, so going from 4 GHz to 6 GHz means about 225% as much power to remove. The problem is made worse by the smaller design rules needed to get more features on a chip; smaller features suffer from thermal damage more easily (thermal migration of dopants, changes in crystal structure, etc.).
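
For a back-of-the-envelope feel for what that scaling means, here is a quick Python sketch. The 100 W baseline is a made-up figure for illustration, and the simple f² rule stands in for the full dynamic-power relation P = C·V²·f, where the supply voltage typically has to rise along with the frequency.

Code:
# Back-of-envelope scaling of switching power with clock frequency,
# assuming the rough P ~ f^2 relationship described above.

def relative_power(f_new_ghz, f_old_ghz):
    """Relative dynamic power when the clock goes from f_old to f_new."""
    return (f_new_ghz / f_old_ghz) ** 2

base_watts = 100.0  # hypothetical 4 GHz CPU dissipating 100 W
for target_ghz in (4.0, 5.0, 6.0):
    scale = relative_power(target_ghz, 4.0)
    print(f"{target_ghz:.1f} GHz: {scale:.0%} of the 4 GHz power, "
          f"about {base_watts * scale:.0f} W to remove")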

You can run at higher speeds, but you have to do a much better job of cooling to get the extra heat out, and it's more difficult to get the required amount of power in. Multiple cores, more efficient processor instructions, and larger caches are generally a better-value way of improving performance.

Another possibility is asynchronous designs, where instead of every transistor running at the full clock frequency, the different parts run at different speeds, only clocking when they have work to do and going idle while waiting for results from another part. This is used in some simple low-power embedded designs but is tricky for a full PC-type CPU.
 
  • #3
I've wondered about this myself. It's like Moore's law just hit a wall a few years ago, as far as PC processor speed is concerned.
 
  • #4
Gaming has always moved the industry. In gaming your graphics card is much more important than your CPU. So most of the advancements are in graphics cards.
 
  • #5
Strictly speaking, Moore's law was that the most cost-effective number of transistors to fit on a single chip would increase exponentially - and that's still true.
It didn't say anything about clock speeds or performance.

The real breakthrough recently has been in using the massive power of your graphics card to perform the sort of calculations where you need to process lots of data in parallel. A $500 graphics card with something like Nvidia's CUDA almost beats a supercomputer of a few years ago for these sorts of applications.
Some cards can operate at nearly CPU speeds but on 128 or 256 pieces of data at the same time!
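
To get a feel for the data-parallel style these cards are built for, here is a minimal Python/NumPy sketch. It runs on the CPU, not the GPU, and the array size and the multiply-add operation are arbitrary choices for illustration; it only shows the programming model of doing the same arithmetic on many elements at once.

Code:
# Data-parallel style: the same arithmetic applied to many elements at once,
# instead of one element per loop iteration. NumPy on the CPU stands in for
# the GPU here purely to illustrate the model.
import numpy as np

n = 256                      # think "one element per stream processor"
a = 2.5
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# Scalar version: what a single CPU core would loop over, one element at a time
out_scalar = np.empty_like(y)
for i in range(n):
    out_scalar[i] = a * x[i] + y[i]

# Vectorized version: conceptually all 256 multiply-adds happen "at once"
out_parallel = a * x + y

assert np.allclose(out_scalar, out_parallel)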
 
  • #6
Thanks for the replies. I had a vague suspicion that computer performance was being improved in other ways.
 
  • #7
Intel's upcoming quad-core Core i7 is designed to be overclocked to 3.2 GHz, and with the memory controller integrated on the CPU, it should be pretty darned fast. This may be the first Intel CPU specifically designed to be overclocked.
 
  • #8
The only problem with graphics-card GPUs is that they process in single precision, unlike a regular CPU, which handles double precision.

It seems now that the industry is moving to multicore designs rather than raw speed. But small embedded devices in the near future could be the first to run in the 2 to 10 GHz range.
 
  • #9
waht said:
The only problem with graphics-card GPUs is that they process in single precision, unlike a regular CPU, which handles double precision.

It seems now that the industry is moving to multicore designs rather than raw speed. But small embedded devices in the near future could be the first to run in the 2 to 10 GHz range.

The typical single-core GPU has been gone for a while now. Take a look at Nvidia's G80, which was released about two years ago now. (I also bought the 768 MB 8800 GTX G80 the day it was released, by the way.)

The G80 GPU was revolutionary in that it incorporated 'stream processors'...

http://techreport.com/r.x/geforce-8800/block-g80.gif

Nvidia's GeForce 8800 graphics processor
The G80 has eight groups of 16 SPs, for a total of 128 stream processors. These aren't vertex or pixel shaders, but generalized floating-point processors capable of operating on vertices, pixels, or any manner of data. Most GPUs operate on pixel data in vector fashion, issuing instructions to operate concurrently on the multiple color components of a pixel (such as red, green, blue and alpha), but the G80's stream processors are scalar—each SP handles one component. SPs can also be retasked to handle vertex data (or other things) dynamically, according to demand.
http://techreport.com/articles.x/11211

When it comes to processing large amounts of data arranged into small packets, streaming processors rule.
 
  • #10
The GeForce 8 series only handles single-precision floats; the GT200 series will do doubles.
 
  • #11
With RISC programming and faster chips (with on-chip memory controllers) it could be time for another revolution in PCs. Intel's Core i7 CPU, with fast-bus access to RAM, could revolutionize PC performance if we get RISC-friendly OSs and applications. Very fast on-chip processing combined with instruction sets that tend toward lots of very short, simple operations can speed up computers greatly. The PC industry is wading through molasses, preserving the status quo and protecting market share.
 
  • #12
And to answer the OP's question, I don't see processor speeds exceeding 4 GHz for quite some time. Going back to what mgb_phys was saying, the chips undergo cycles of die shrinks. When they shrink the die, the chip runs cooler (which can allow greater overclocking), the chip (typically) processes data more efficiently, and they can pack in larger caches (L1/L2).

With that increase in efficiency (faster processing) there isn't that much of a need for a frequency increase, since more work is being done in the same number of clock cycles, so they keep the frequency low (less heat, less power consumption).

What you'll see continue to happen is what has been happening for years now: a die shrink coupled with an overall decrease in operating frequency. The frequencies will slowly creep up (in the higher-end models), and then when the next-generation chip is released, you'll see another drop in frequency and heat output, along with an increase in efficiency and available cache (due to the added die space).

But keep in mind this is being generic, since Intel offers different styles of chip within a family. For example, the Core 2s: Conroe, Allendale, Wolfdale, Kentsfield, Yorkfield, Conroe XE, Kentsfield XE, Yorkfield XE.
 
  • #13
The issue, as I understand it, is the voltage-to-size ratio it takes to get transistors to switch at a 4 GHz or faster CPU rate, and the amount of heat generated in a relatively tiny area. Scaling down the size and voltage isn't helping, because the ratio of voltage to size remains about the same in order to achieve the same switching rates. I read somewhere that using a very small process but spreading it out could help, but it wouldn't be cost-effective (relatively low density for a given size of technology).

Note that Intel mentioned 4.0 GHz as a limit a few years ago, even before they started making Hyper-Threading and multi-core CPUs.
 
  • #14
Required voltage and operating frequency are intimately linked.

With the Core 2 Duo processors, I've been able to overclock them considerably before having to make a voltage increase. The only time I did have to increase the voltage was when I was getting close to the point of diminishing returns anyway.
 
  • #15
As you make the parts smaller they can switch faster, since the layers are thinner and the capacitance lower, but the electric field is larger, which causes dopants to move out of the depletion layer. Heat makes this worse.
The current record for regular silicon is around 50 GHz; you can do about double that with extra guard dopants like fluorine to stop the boron diffusing out.
Then there is leakage current that just wastes power whether the transistor is being switched or not - the latest 45 nm processes use fancy metal gates to reduce that.

But inevitably a 4 GHz CPU is going to burn a couple of hundred watts in a square centimetre of silicon.
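
As a quick sanity check on that last figure, here is a tiny Python sketch; the dynamic/leakage split and the die area are illustrative guesses, not measurements of any particular chip.

Code:
# Rough power-density check for the "couple of hundred watts per square cm"
# remark. Total power is dynamic switching power plus static leakage.
# All numbers are illustrative guesses, not data for a specific chip.

dynamic_w = 150.0    # assumed switching power at ~4 GHz
leakage_w = 50.0     # assumed static leakage
die_area_cm2 = 1.0   # roughly a square centimetre of silicon

density_w_per_cm2 = (dynamic_w + leakage_w) / die_area_cm2
print(f"~{density_w_per_cm2:.0f} W/cm^2")  # far denser than a typical stove plate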
 
  • #16
I have a few more PC questions to ask.

1. I'm not an electrician, but if I were to build a PC that required a 1100-1500+ watt power supply, is there enough juice going through the wall outlet to meet the power demands for the PC? Would the outlet burn out? Do I need a special outlet?

2. Approximately how many volts are there in 1200 watts? And how is that comparable to the amount of electricity to run a television, a vacuum cleaner, or a washing machine for example?

3. The Nvidia GeForce GTX 280 is allegedly capable of almost 2 teraflops of computing muscle. IBM's 400+ teraflop "Blue Gene" was formerly the world's fastest supercomputer, until it was made obsolete by IBM's 1-petaflop "Roadrunner" very recently. About seven years ago or so, the world's fastest supercomputer's power was equal to the computational power of today's latest GPUs. In the distant future, will there be GPUs rivaling, or topping, the fastest supercomputers of today? And how come GPUs are dwarfing the computing power of CPUs?

4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially/fully utilize multiple GPUs? I'm guessing that running extremely performance-demanding programs like the video game Crysis on its maximum settings would call for more GPUs? Will the GTX 280 do the job?

5. How does the computational power of the human brain compare to today's fastest supercomputers? If the world's fastest computers dwarf the human mind, then is it possible for there to be a super-AI that completely rivals all human thought and intellect? Allegedly, the human brain is the most complex object in the known universe. (I don't know if that is true.)

6. I've heard about a chip that was invented as part of a highly classified government project that has the power of 100,000 CRAY-5 supercomputers, and that there is a massive underground bunker full of racks and racks of these chips running in tandem for as far as the eye can see. (I can't imagine what they are used for - maybe trying to set a new Crysis benchmark, lol!) Is this true? How powerful is a CRAY-5?

7. What will computers, especially personal computers, be like in the years 2050-2100, by your estimate?

8. How much more powerful is an Nvidia GeForce GTX 280 than an ATI Radeon X800 Pro (my old card in my now-broken PC)? Oh, and Nvidia is coming out with the GeForce GTX 350, which uses 2 GB of GDDR5 RAM with a single core.
 
  • #17
The_Absolute said:
1. I'm not an electrician, but if I were to build a PC that required a 1100-1500+ watt power supply, is there enough juice going through the wall outlet to meet the power demands for the PC? Would the outlet burn out? Do I need a special outlet?
Not quite yet - a typical outlet can supply around 220 V × 13 A ≈ 3000 W (Europe) or 110 V × 15 A ≈ 1650 W (US).

2. Approximately how many volts are there in 1200 watts? And how is that comparable to the amount of electricity to run a television, a vacuum cleaner, or a washing machine for example?
Watts and volts measure different things - to get the current draw, divide the power by your mains voltage, 220 V (Europe) or 110 V (USA).
A TV is a few 100 W, a washing machine can be 2-3 kW - that's why you have special 220 V outlets for them in the US.
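
To make that arithmetic concrete, here is a short Python sketch; the 1200 W figure and the outlet ratings are just example numbers, and PSU efficiency losses are ignored.

Code:
# Rough current draw of a PC power supply at the wall, using P = V * I.
# The wattage and outlet ratings below are example figures only, and
# efficiency losses in the power supply are ignored.

def current_amps(power_watts, mains_volts):
    """Current drawn from the outlet for a given power and voltage."""
    return power_watts / mains_volts

psu_watts = 1200.0
for volts, outlet_amps, region in [(110.0, 15.0, "US"), (220.0, 13.0, "Europe")]:
    amps = current_amps(psu_watts, volts)
    print(f"{region}: {psu_watts:.0f} W at {volts:.0f} V draws about "
          f"{amps:.1f} A (outlet rated ~{outlet_amps:.0f} A)")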

3. The Nvidia GeForce GTX 280 is allegedly capable of almost 2 teraflops of computing muscle. IBM's 400+ teraflop "Blue Gene" was formerly the world's fastest supercomputer, until it was made obsolete by IBM's 1-petaflop "Roadrunner" very recently. About seven years ago or so, the world's fastest supercomputer's power was equal to the computational power of today's latest GPUs. In the distant future, will there be GPUs rivaling, or topping, the fastest supercomputers of today? And how come GPUs are dwarfing the computing power of CPUs?
These are ideal-case figures - they assume you want to apply a very small class of operations to 128/256 numbers in parallel and don't need much memory or bandwidth. But for these sorts of applications GPUs are amazing - Nvidia has a library called CUDA for using your GPU for general calculations.

4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially/fully utilize multiple GPUs? I'm guessing that running extremely performance-demanding programs like the video game Crysis on its maximum settings would call for more GPUs? Will the GTX 280 do the job?
There are a few apps that can use one card to do the physics and the other to display the game.
 
  • #18
mgb_phys said:
A TV is a few 100 W ...

Jeez, I need to get out more or something. I was reading that as "a Teravolt is a few 100 W" :confused:
 
  • #19
What about the Nvidia "Tesla" GPU? I've heard about it but know absolutely nothing about it.
 
  • #20
I realize this is almost a year later now, but I am running at about 4.2 GHz on an Intel Wolfdale 3.33 GHz Core 2 Duo, and I am expecting to get it much higher once I incorporate liquid cooling. As well, my GPU's memory is running at 3.6 Gbps (900 MHz), which I haven't overclocked yet, and my core clock is running at 850 MHz with 800 stream processing units. If you want to take a look at it, it's called the XFX HD-489X-ZSFC Radeon HD 4890 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP Ready CrossFire Supported Video Card. Also, if you guys want to look at some very incredible technology, go onto Apple's website and build a Mac Pro with maxed-out GPUs and CPUs: it runs two Intel Xeon Nehalem quad-cores, each at about 3.33 GHz stock, so you could easily overclock them to about 4.1 GHz apiece - 4.1 GHz on 8 cores - as well as 4 GPUs max, to take care of any graphics situation you could possibly throw at it. All in all, if you max this machine out, it costs about $13,000.
 
  • #21
The_Absolute said:
And how come GPUs are dwarfing the computing power of CPUs?

Because of the high number of stream processors. Think of a stream processor as a tiny, simple CPU for your video card, except that there are over 100 of them (240 in the 285 GTX). When it comes to performing simple math (which drawing shapes for video games really is anyway), nothing beats stream processors.

http://en.wikipedia.org/wiki/Stream_processing
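
For a rough sense of where the headline FLOPS numbers come from, here is a back-of-the-envelope Python sketch. The shader clock and the "3 floating-point operations per SP per cycle" figure are the commonly quoted theoretical-peak assumptions for this GPU generation, not measured values.

Code:
# Back-of-envelope theoretical peak throughput of a GPU:
# (stream processors) x (shader clock) x (FLOPs per SP per cycle).
# The figures below are commonly quoted theoretical-peak assumptions,
# not measurements of a specific card.

def peak_gflops(stream_processors, shader_clock_ghz, flops_per_cycle=3):
    return stream_processors * shader_clock_ghz * flops_per_cycle

# Example: a GTX 280-class card with 240 SPs at roughly a 1.3 GHz shader clock
print(f"~{peak_gflops(240, 1.296):.0f} GFLOPS single-precision peak")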

4. I was told that having multiple graphics cards (SLI) doesn't give you more graphics muscle in "99%" of all applications. Which applications partially/fully utilize multiple GPUs? I'm guessing that running extremely performance-demanding programs like the video game Crysis on its maximum settings would call for more GPUs? Will the GTX 280 do the job?

SLI definitely gives you more 'muscle' in quite a few apps - no doubt more than just 1%. For example, SETI@home and other UC Berkeley BOINC apps utilize multiple GPUs for number crunching. When it comes to PC games, even though I don't like to freely throw around percentages, I'd say closer to 20% of the games out there utilize SLI. A 280 GTX is just about the second-fastest single-GPU video card, right under the 285 GTX. It will do the job nicely.

5. How does the computational power of the human brain compare to today's fastest supercomputers? If the world's fastest computers dwarf the human mind, then is it possible for there to be a super-AI that completely rivals all human thought and intellect? Allegedly, the human brain is the most complex object in the known universe. (I don't know if that is true.)

No clue.

6. I've heard about a chip that was invented as part of a highly classified government project that has the power of 100,000 CRAY-5 supercomputers, and that there is a massive underground bunker full of racks and racks of these chips running in tandem for as far as the eye can see. (I can't imagine what they are used for - maybe trying to set a new Crysis benchmark, lol!) Is this true? How powerful is a CRAY-5?

I've never heard anything about this.

7. What will computers, especially personal computers, be like in the years 2050-2100, by your estimate?

The way computer technology changes, there's honestly no way to predict it. I can tell you this, though: whatever prediction someone makes about the far future of computer technology, they will be way off.

8. How much more powerful is an Nvidia GeForce GTX 280 than an ATI Radeon X800 Pro (my old card in my now-broken PC)? Oh, and Nvidia is coming out with the GeForce GTX 350, which uses 2 GB of GDDR5 RAM with a single core.

A 280 GTX is faster than an X800 by at least an order of magnitude! lol. An X800 is somewhat comparable in performance to an X1650, which is again comparable to Nvidia's 7600 GS - a major slouch compared to the current 2xx series.
 
  • #22
turbo-1 said:
With RISC programming and faster chips (with on-chip memory controllers) it could be time for another revolution in PCs. Intel's Core i7 CPU, with fast-bus access to RAM, could revolutionize PC performance if we get RISC-friendly OSs and applications...
The i7 is by no definition a RISC processor; even modern "RISC" processors themselves are difficult to qualify as such. The reality is, there is no longer a CISC/RISC distinction between processors, merely somewhat different philosophies, neither of which can be said to be "reduced".

As for revolutionizing performance, if CPUs are to accomplish this they would need to become GPUs, which is somewhat superfluous.
 
  • #23
I saw this question:

"5. How does the computational power of the human brain compare to today's fastest supercomputers? If the world's fastest computers dwarf the human mind, then is it possible for there to be a super-AI that completely rivals all human thought and intellect? Allegedly, the human brain is the most complex object in the known universe. (I don't know if that is true.)"

I was recently discussing this question with some of my colleagues. There is a great misconception here: the human brain does not operate the same way as computers, so the performance cannot be compared easily. Computers are great at doing repetitive tasks, especially floating-point arithmetic; that is what they excel at. Human brains are significantly more powerful than computers in many other areas. Even less intelligent animals such as dogs and cats are significantly more powerful than computers and supercomputers at many different tasks.

For example, computers are at least 100 years away (assuming they ever get there) from processing reality and understanding dimensional space. When a human or even a dog walks into a room, you can make sense of and understand all the objects in the room; you can immediately recognize the shortest path to a point in the room and how to get there. You immediately know the shape, the color, the texture of objects, etc. If, say, a table were missing a leg, you would know for a fact that it is a table missing one leg. You know this information almost instantly. A modern supercomputer would require hours to recognize these patterns and process them to figure out what these objects are. There is simply no comparison between a computer and a human here. Another example: say your friend is singing a song out of tune but has almost all the lyrics correct. If you know the song and have heard it before, you can immediately tell the artist and the title of the song; you can even correct your friend so that he sings in the right tune. This would be an almost impossible task for a modern computer to accomplish within a reasonable time.

Again, computers are great at many things, but their power and capability cannot be compared to the power of the human brain. In the future, perhaps. However, there is a school of thought that claims that there is a non-computable aspect to the human brain; as such, no computer will ever be comparable. Of course, this is open for debate, and AI research is divided between these schools of thought.

As far as the human brain being the most complex object in the known universe, there is no way to be sure of that now. There might be ET civilizations out there whose minds are much more complex and capable of so much more. Our civilization is still in its infancy and our understanding is still very limited. But we are learning...
 
  • #24
sciadvisor said:
The human brain does not operate the same way as computers so the performance cannot be compared easily.
Not easily, but it can be compared. By creating several levels of abstraction, the computational requirements to run each abstraction can be determined.

http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf [Broken]

The unknown element is which level of abstraction can properly describe a functional brain. Something between spiking networks and Hodgkin-Huxley neurons is most probable, which gives an estimate of upwards of tens of petaflops of compute and very roughly an exabyte of memory. Building large-scale systems with various approaches will ultimately produce the proper model.
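
To show how such estimates are typically assembled, here is a rough Python sketch. The neuron and synapse counts, firing rate, and per-event costs below are commonly cited ballpark figures chosen for illustration, not values taken from the roadmap report linked above.

Code:
# Order-of-magnitude sketch of a brain-emulation compute estimate.
# All figures are ballpark numbers for illustration only.

neurons = 1e11                  # ~100 billion neurons
synapses_per_neuron = 1e4       # ~10,000 synapses per neuron
avg_firing_rate_hz = 1          # assumed average spike rate per neuron
flops_per_synaptic_event = 10   # assumed cost to update one synapse per spike
bytes_per_synapse = 100         # assumed state stored per synapse

synapses = neurons * synapses_per_neuron
compute_flops = synapses * avg_firing_rate_hz * flops_per_synaptic_event
memory_bytes = synapses * bytes_per_synapse

print(f"~{compute_flops / 1e15:.0f} petaflops of compute")
print(f"~{memory_bytes / 1e15:.0f} petabytes of memory")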

sciadvisor said:
Computers are great at doing repetitive tasks especially floating point arithmetic. That is what they excel at. Human brains are significantly more powerful than computers in many other areas.
Computers are good at executing computable functions; "repetitive tasks" and "floating point" have nothing to do with computation theory - these are among an infinitude of applications of universal logic. The only possible way for a brain to not be computable is if it is a non-causal or "illogical" system, and there isn't the faintest clue that the brain, or any physical system, is in such a category.

sciadvisor said:
Again, computers are great at many things but their power and capability cannot be compared to the power of the human brain.
Their power and capability are very often compared to that of a human brain, whether in market prediction, operating a power grid, pattern recognition, or general Bayesian inference. It's the 21st century; there is no more room in science for metaphysical mysticism. The brain is a system which can be analyzed, mimicked, and compared like any other.

sciadvisor said:
there is a school of thought that claims that there is a non-computable aspect to the human brain.
It's a school of something perhaps, but not thought.

sciadvisor said:
As far as the human brain being the most complex object in the known universe, there is no way to be sure of that now.
If you have ever seen the GNU OS source code, you will have no doubt about what the most complex object in the known universe is.
 
  • #25
I was simply referring to the complexity of tasks and cited a simple example. My post was not intended as a description of the theory of computation. Of course, there is more to it than floating point.

As far as the schools of thought on AI regarding the non-computable aspect of the human brain go: there are serious scientists, all of them widely regarded as leading scientists in their field, who claim and firmly believe that there is a non-computable aspect to the human brain and to consciousness. My post was not intended to take any side either. There are arguments for and there are arguments against this point of view.

I was not familiar with the document you linked in your post. Very interesting. I will read it. Thank you.
 
  • #26
sciadvisor said:
There are serious scientists, all of them widely regarded as leading scientists in their field, who claim and firmly believe that there is a non-computable aspect to the human brain

Scientists are not invulnerable to saying incredibly stupid things and holding the most unreasonable of misconceptions. An idea only holds as much weight as its basis, regardless of whom it might belong to. If God himself said it, I would demand a coherent explanation just the same. I'm keen on seeing someone make this claim followed by a thorough explanation of the process which led to this conclusion. From all prior experience, however, this claim and thorough explanations have been mutually exclusive.
 
  • #27
The_Absolute said:
I have a few more PC questions to ask.
6. I've heard about a chip that was invented as part of a highly classified government project that has the power of 100,000 CRAY-5 supercomputers, and that there is a massive underground bunker full of racks and racks of these chips running in tandem for as far as the eye can see. (I can't imagine what they are used for - maybe trying to set a new Crysis benchmark, lol!) Is this true? How powerful is a CRAY-5?

The materials and manufacturing for something like this would not go unnoticed in the commercial market; it would be too hard to keep secret. Unless the 'government' also had its own underground silicon mine, its own fabrication facilities, enough trustworthy people to run all of this, and the programmers to properly divide the workload of whatever the heck they are trying to calculate, I do not see this being feasible while retaining that conspiracy-theory level of secrecy.
 

FAQ: Will CPU stock clock speeds ever exceed 4.0 GHz?

1. Will advancements in technology allow for CPU stock clock speeds to exceed 4.0 GHz in the future?

It is very likely that advancements in technology will allow for CPU stock clock speeds to exceed 4.0 GHz in the future. As technology continues to evolve and improve, we can expect to see faster and more powerful CPUs being developed.

2. What factors contribute to the limitation of CPU stock clock speeds?

There are several factors that contribute to the limitation of CPU stock clock speeds. These include thermal constraints, power consumption, and the physical limitations of semiconductor materials.

3. Why have CPU stock clock speeds not increased significantly in recent years?

In recent years, the focus of CPU development has shifted more towards increasing the number of cores rather than increasing clock speeds. This allows for better multi-tasking and overall performance, without the added heat and power consumption that comes with higher clock speeds.

4. Can overclocking be used to achieve clock speeds higher than 4.0 GHz?

Yes, overclocking is a method of pushing a CPU beyond its stock clock speeds. However, it is important to note that overclocking can also lead to increased heat and power consumption, and can potentially damage the CPU if not done properly.

5. How do CPU manufacturers determine the stock clock speed for their processors?

CPU manufacturers determine stock clock speeds based on a variety of factors such as the intended use of the CPU, the target market, and the capabilities of the manufacturing process. They also conduct extensive testing to ensure stability and reliability at the designated stock clock speed.
