# Speed of processors, does it have a max?

Summary:
The speed of light is too slow?
I had heard that computer processors are reaching the speed of light. Is this true, and if it is, how do we combat this cap?

## Answers and Replies

jim mcnamara
Mentor
I am not sure what you mean by 'reaching the speed of light' in processors. Where did you encounter this idea?
As stated, I don't think I could give any kind of meaningful answer.

I am not sure what you mean by 'reaching the speed of light' in processors. Where did you encounter this idea?
As stated, I don't think I could give any kind of meaningful answer.

I forget where I heard it, but basically, the internal clock of the CPU can only tick so fast. And if that speed reaches the speed of light it cannot get faster.

from some website:
So to make computers faster, their components must become smaller. At current rates of miniaturization, the behavior of computer components will hit the atomic scale in a few decades. At the atomic scale, the speed at which information can be processed is limited by Heisenberg's uncertainty principle

Baluncore
I had heard that computer processors are reaching the speed of light. Is this true, and if it is, how do we combat this cap?
The signals in a computer travel at close to the speed of light; that will not change.

The size of the gates has been reducing, so the distances have been reduced, so speed has been increasing.

Building multiple parallel processors will process more data without needing to make things smaller.

BvU
Homework Helper
Do some googling... "Moore's law" quickly takes you to MOSFET, and under section 9, Scaling (hey, same picture!), and so on.

MIT has a nice article on fundamental limits -- dated 2010
Intel roadmap to 2029 (1.4 nm is 12 silicon atoms across)

ITRS, IRDS, and on and on ...


anorlunda
Staff Emeritus
Never underestimate human ingenuity and innovation. We simply make computers do more useful things per tick of the clock.

Ever since Moore's Law was proposed in 1965, not a month has gone by without someone predicting that it will come to an end real soon now. So far, they have all been wrong.

Mark44
Mentor
Ever since Moore's Law was proposed in 1965, not a month has gone by without someone predicting that it will come to an end real soon now. So far, they have all been wrong.
I don't agree.
To some extent, Moore's Law has ended. The problem with ever-decreasing transistor sizes is that as the distance between features gets smaller, quantum tunneling effects increase, not to mention that the heat generated increases. To combat the increased heat, CPU vendors have decreased the voltage, but there are limits on how low the voltage can go.

Moore's Law was in effect from the late '70s/early '80s into the early 2000s, with the number of transistors on a chip and clock speeds doubling about every 18 months. At the moment, the top clock rate I'm aware of is about 5 GHz, on one of the AMD Ryzen models, I believe. For the past 10 - 15 years, CPU vendors have been able to make CPUs more productive not by increasing clock speed, but by increasing the number of cores in a processor.
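The 18-month doubling can be sketched numerically. This is a rough back-of-the-envelope: the function name and the 1980 starting count are illustrative assumptions, not data from the thread.

```python
# Hedged sketch: project transistor counts under the classic Moore's-law
# assumption of doubling every 18 months.
def moore_projection(start_count, start_year, end_year, doubling_months=18):
    """Return the projected transistor count at end_year."""
    months = (end_year - start_year) * 12
    return start_count * 2 ** (months / doubling_months)

# Illustrative starting point: ~29,000 transistors (8086-era, 1980).
count_2000 = moore_projection(29_000, 1980, 2000)
print(f"{count_2000:,.0f}")  # ~3e8: four orders of magnitude in 20 years
```

The point of the sketch is just how steep exponential doubling is; real transistor counts tracked this curve only approximately.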

atyy, CalcNerd, russ_watters and 2 others
Staff Emeritus
I'm with @Mark44. Moore's Law is over, partly because of Moore's Second Law, which says that the cost of a CPU factory doubles every four years. Right now, only a half-dozen companies are even able to make a modern CPU. You have Intel, GlobalFoundries, TSMC, and a few others, mostly in China.

Here's Wikipedia's plot on Moore's Law:

My take on this is that the slope is below where it was in, say, 1992-2004, and most of the top performing CPUs are a) heavily multicore and b) expensive. The Epyc Rome line tops out at 64 cores and $7000. The K5 was in the $100 ballpark.

CalcNerd, russ_watters and jim mcnamara
f95toli
Gold Member
Indeed, single core speed has not increased significantly in the past few years. More cores is great but not all problems can be parallelised efficiently.

I suspect the "speed of light" comment refers not to clock speed but to the latency when different parts of a processor or a computer need to communicate. This is not(?) yet so much of an issue for the internal processing inside a processor (because processors are relatively small), but it is an issue for bigger systems with multiple processors/sub-systems.
Communication latency is one reason why scaling up by simply adding more processors does not always work; and the speed of light does of course limit how quickly different parts of a system can communicate.
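The limit on parallelisation mentioned above is usually quantified with Amdahl's law. A minimal sketch (the function name and example numbers are illustrative):

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# can be parallelised; the serial fraction (1 - p) caps the speedup.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 64 cores, a 5% serial fraction keeps the speedup below 20x.
print(amdahl_speedup(0.95, 64))
```

As n grows without bound the speedup approaches 1 / (1 - p), which is why adding more processors does not always work.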

atyy and russ_watters
Mark44
Mentor
The Epyc Rome line tops out at 64 cores and $7000. The Intel Xeon Phi 7290 processor (now discontinued) can still be found for $8000+, and has 72 cores and a clock rate of 1.5 GHz, with a turbo rate of 1.7 GHz.

My 7-year-old HP computer, running a Quad Duo-Core (8 cores) processor at 3.4 GHz, has a clock rate that is twice as fast, but lots fewer cores.

russ_watters
rcgldr
Homework Helper
... decreasing transistor sizes ... the heat generated increases. To combat the increased heat, CPU vendors have decreased the voltage
The speed of a transistor is mostly due to the relative voltage versus the relative size. For typical chips as seen in home or office computers, as sizes decrease, the density increases, reducing the area available for heat dissipation. In the case of the faster Intel processors, there was a 4 GHz "barrier", dating back to 2012 or 2013 for some Core i7 processors. If not using all cores, then speeds faster than 4 GHz were possible, up to 5 GHz. Liquid cooling allows some processors to run overclocked at up to 8 GHz, but I don't know if these speeds are reliable speeds or just done to set speed records.

The liquid-cooled IBM zEC12, released in 2012, could run at 5.5 GHz. According to Wikipedia, that is the fastest base clock rate of any processor offered for commercial sale.

https://en.wikipedia.org/wiki/IBM_zEC12_(microprocessor)

Mark44
Mentor
I had heard that computer processors are reaching the speed of light.
Already dealt with, but there's enough confusion in the statement above to warrant some more clarification. When people talk about processor speed, they're talking about the clock rate, which is how fast a particular crystal oscillator vibrates, and which has nothing to do with the speed of light. The electrons travelling inside a CPU move at about 1/2 the speed of light (see the Scientific American article "Computers are becoming faster and faster, but their speed is still limited by the physical restrictions of an electron moving through matter. What technologies are emerging to break through this speed barrier?").
And if that speed reaches the speed of light it cannot get faster.
Again, you are confusing the clock speed with how fast electrons can move through the processor circuitry.

Baluncore
The electrons travelling inside a CPU move at about 1/2 the speed of light...
And you are now confusing the speed an EM wave propagates through a dielectric, with the diffusion of electrons through a conductor.
The signals travelling inside a CPU move at about 1/2 the speed of light...

atyy and BvU
Mark44
Mentor
And you are now confusing the speed an EM wave propagates through a dielectric, with the diffusion of electrons through a conductor.
The signals travelling inside a CPU move at about 1/2 the speed of light...
Yes, that's what I meant -- signals, not electrons. I appreciate the correction.

atyy and Baluncore
...latency ... is not(?) yet so much of an issue for the internal processing inside a processor(because they are relatively small)
As far as I know, latency is not an issue within the cores themselves, but for the internal buses (connecting the cores and other parts together) it already is.
Especially for the newly emerged 'chiplet' designs.

Staff Emeritus
Not so sure about that. At 4 GHz, your maximum one-clock distance is about 40 mm. That's a really, really big chip. The EPYC we were discussing earlier has an edge of 11 mm. How exactly does this work?
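The 40 mm figure is easy to check with a back-of-the-envelope calculation, assuming signals propagate at roughly half the speed of light (the 0.5 factor is an assumption carried over from earlier in the thread):

```python
# One-clock distance: how far a signal travelling at ~0.5 c
# covers in a single clock period.
C = 299_792_458          # speed of light in vacuum, m/s
SIGNAL_FRACTION = 0.5    # assumed on-chip propagation speed as a fraction of c

def one_clock_distance_mm(clock_hz):
    return SIGNAL_FRACTION * C / clock_hz * 1000

print(f"{one_clock_distance_mm(4e9):.1f} mm")  # 37.5 mm, i.e. about 40 mm
```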

I think you need to consider the harmonics of signal edges too.

Baluncore
I think you need to consider the harmonics of signal edges too.
No, the transmission line will propagate a step and will be impedance matched to prevent ringing.
The 40 mm is the distance between successive edges travelling along the line.
There is no reason why you cannot have many bits on the line at one time, so long as you can receive the data asynchronously.
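The "many bits on the line" idea can be made concrete with a rough estimate. The line length and the 0.5 c propagation factor here are assumptions for illustration:

```python
# Bits "in flight" on a transmission line: line length divided by the
# spacing between successive signal edges at the given clock rate.
C = 299_792_458          # speed of light in vacuum, m/s
SIGNAL_FRACTION = 0.5    # assumed propagation speed as a fraction of c

def bits_in_flight(line_length_m, clock_hz):
    edge_spacing = SIGNAL_FRACTION * C / clock_hz  # metres between edges
    return line_length_m / edge_spacing

# A 150 mm trace clocked at 4 GHz holds about 4 bits simultaneously.
print(f"{bits_in_flight(0.150, 4e9):.1f}")
```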

the transmission line...
But it's not simply about transmission; it's about having an area where the state changes are in sync (in sync enough to get consistent data out of a bus).

Baluncore
But it's not simply about transmission; it's about having an area where the state changes are in sync (in sync enough to get consistent data out of a bus).
If you transmit the data bits and a register load clock signal, along parallel paths, then the clock will be delayed by the same propagation time as the data. The transfer will be synchronous within itself, but asynchronous with respect to some defined master clock.
All things are relatively local in a systolic processor or messaging system.

The transfer will be synchronous within itself, but asynchronous with respect to some defined master clock.
That's exactly the problem we are discussing: delay-induced asynchronicity between parts of a CPU.

Baluncore
That's exactly the problem we are discussing: delay-induced asynchronicity between parts of a CPU.
Why do the CPU modules need to be globally synchronous?

Why do the CPU modules need to be globally synchronous?
It's rather 'they cannot be' instead of 'not need to be'.
latency ... regarding the cores it's not an issue, but for the internal buses (connecting the cores and other parts together) and such, it already is.

That's all.

Svein
If you transmit the data bits and a register load clock signal, along parallel paths, then the clock will be delayed by the same propagation time as the data. The transfer will be synchronous within itself, but asynchronous with respect to some defined master clock.
That assumes that clock and data lines have the same length. This is even important on a PCB layout - one of my last designs incorporated a DRAM, and the guy doing the layout had to measure all lines between the DRAM and the processor and adjust them to be within 1 mm of each other. That was in 2005!
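The 1 mm tolerance translates into a timing skew that is easy to estimate. A rough sketch, assuming signals on a PCB trace travel at about half the speed of light (the 0.5 factor is an assumption):

```python
# Timing skew from a length mismatch between clock and data traces,
# at an assumed propagation speed of ~0.5 c.
C = 299_792_458          # speed of light in vacuum, m/s
SIGNAL_FRACTION = 0.5    # assumed trace propagation speed as a fraction of c

def skew_ps(mismatch_mm):
    return mismatch_mm / 1000 / (SIGNAL_FRACTION * C) * 1e12

print(f"{skew_ps(1.0):.2f} ps")  # ~6.7 ps per mm of mismatch
```

A few picoseconds per millimetre is why high-speed DRAM routing needs such tight length matching.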

berkeman
That assumes that clock and data lines have the same length. This is even important on a PCB layout - one of my last designs incorporated a DRAM, and the guy doing the layout had to measure all lines between the DRAM and the processor and adjust them to be within 1 mm of each other. That was in 2005!

We need a delay. Can you just run a wire around the inside of the case?

If I haven't erred on the order of magnitude, light travels 30 cm in one nanosecond, which corresponds to the clock period of a 1 GHz CPU. For a 5 GHz CPU that would be 6 centimeters. Knowing the size of a motherboard, this makes it all too clear how valuable it is to have cache memory located physically on the processor die.
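Those figures check out; here is a quick sanity check, using full c in vacuum as in the estimate above:

```python
# Distance light travels in one clock period, at full c in vacuum.
C = 299_792_458  # speed of light, m/s

def light_distance_cm(clock_hz):
    return C / clock_hz * 100

print(f"{light_distance_cm(1e9):.0f} cm")  # 30 cm at 1 GHz
print(f"{light_distance_cm(5e9):.0f} cm")  # 6 cm at 5 GHz
```

Signals on a real board travel slower than c, so the practical distances are smaller still, strengthening the case for on-die cache.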