Why are computers not getting faster?

  • Thread starter: moe darklight
  • Tags: Computers
In summary, the microprocessor industry is still growing, but at a slower rate than in the past. Processors are getting faster, but memory and other components are not keeping up. New technologies are being developed, but they are expensive and not necessarily needed by most users.
  • #1
moe darklight
It seems like, for the past two or so years, computers have stopped getting faster at the rate they used to. I mean, I remember back in 2000 or so, they were getting faster and faster so quickly that your brand-new computer was pretty much obsolete by the time you finished your car ride home from the computer store.

It seems like now they're getting maybe a bit faster... a lot of it is the dual-core and multi-processor aspect... but it seems like processor clock speeds have been stuck at 3-point-something GHz tops for a while now. Memory has increased and we have 64-bit and all that, but what happened?

Is there any new, incredibly fast processing technology on the way that I'm not aware of? Or am I just wrong, and they are getting faster like they were back then? I don't know much about hardware, I must admit.
 
  • #2
There are theoretical and practical limits to how fast a processor can go, not the least of which is power (heat) dissipation, which is proportional to both clock speed and transistor count. If the trend had continued, we'd be ducting room-sized air conditioners to our PCs by now! One defense against that is shrinking the transistors (heat is inversely proportional to transistor size), but that in turn is limited by the size of atoms and by quantum mechanics (electrons will spontaneously tunnel between wires if they are too small or too close together).
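To put rough numbers on that, here's a minimal sketch using the standard first-order CMOS dynamic-power relation P = a·C·V²·f (activity × switched capacitance × voltage² × frequency); the parameter values below are illustrative assumptions, not measurements of any real chip:

Code:
# First-order CMOS dynamic-power model: P = a * C * V^2 * f.
# All parameter values are illustrative assumptions, chosen only
# to give plausible desktop-CPU numbers.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Switching power (watts) of a CMOS chip, to first order."""
    return activity * capacitance_f * voltage_v**2 * freq_hz

base = dynamic_power(0.2, 1e-7, 1.2, 3e9)  # 3 GHz core at 1.2 V
fast = dynamic_power(0.2, 1e-7, 1.4, 6e9)  # 6 GHz needs a higher voltage
print(f"3 GHz @ 1.2 V: {base:.0f} W    6 GHz @ 1.4 V: {fast:.0f} W")

Doubling the clock while nudging the voltage up takes this hypothetical chip from about 86 W to about 235 W, which is exactly the air-conditioner problem.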
 
  • #3
Chip complexity has continued to grow at roughly the same rate since the integrated circuit was invented: transistor counts are increasing at the same rate they always have. Moore's law is alive and well.

Speed is a much less concrete way to judge the progress of the microprocessor industry than transistor count. Clock speed, in particular, has only a weak correlation with the number of instructions per second a processor executes. There are many techniques in play (everything from pipelining and superscalar execution up to multiple cores) that can make a new 3 GHz processor much, much faster than last year's 3 GHz processor.
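As a hedged illustration of that point: throughput is roughly IPC (instructions per clock) times clock rate, so two chips with the same clock can differ widely. Both IPC figures here are invented for the example:

Code:
# Throughput ~= IPC * clock. Both IPC figures are invented for illustration.

def mips(ipc, clock_hz):
    """Millions of instructions per second for a given IPC and clock."""
    return ipc * clock_hz / 1e6

old = mips(ipc=0.8, clock_hz=3e9)  # narrow, deeply pipelined design
new = mips(ipc=2.5, clock_hz=3e9)  # wide superscalar design, same clock
print(f"old 3 GHz chip: {old:,.0f} MIPS    new 3 GHz chip: {new:,.0f} MIPS")

Same 3 GHz label, roughly a threefold difference in instructions per second.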

Russ is right that heat dissipation is a major concern (and it grows superlinearly with clock speed, since the supply voltage has to rise along with it), so manufacturers are shifting their focus from ridiculous clock speeds to other approaches, like simply putting more transistors on a chip.

You also need to consider that DRAM and other motherboard components have not increased in speed at anywhere near the rate that processors themselves have. It does little good to develop a 20 GHz processor and strap it to a 1 GHz DRAM array. Physical limitations will continue to constrain the speed of system-level (chip-to-chip) communications.
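A minimal back-of-the-envelope sketch of that mismatch, assuming an illustrative cache-miss rate and DRAM latency (neither is taken from any real system):

Code:
# Average time per instruction once DRAM stalls are folded in.
# The miss rate and DRAM latency below are illustrative assumptions.

def ns_per_instruction(core_ghz, miss_rate, dram_latency_ns):
    cycle_ns = 1.0 / core_ghz
    return cycle_ns + miss_rate * dram_latency_ns

today = ns_per_instruction(3, 0.02, 60)   # 3 GHz core
dream = ns_per_instruction(20, 0.02, 60)  # hypothetical 20 GHz core
print(f"3 GHz: {today:.2f} ns/instr    20 GHz: {dream:.2f} ns/instr")

Under these assumptions the nearly-seven-times-faster core finishes instructions only about 20% sooner, because DRAM stalls dominate the total time.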

I've also said before that most users do not need (and will not even notice) a faster processor. Computer manufacturers are shifting their focus to the improvement of components like memories, hard drives, and I/O bridges. These components have a much larger impact on the user's perception of speed than does the processor itself. If you were the chief designer for Dell, say, why would you want to push your customers into paying for a much more expensive, fast processor when they won't even notice any improvement? That wouldn't make economic sense.

- Warren
 
  • #4
Why haven't we moved on to 3D implementations of CPUs (e.g. spherical or cubical)? With the clock at the center, wouldn't we gain some performance due to decreased signal propagation time?
 
  • #5
How exactly would you propose we assemble such a device, -Job-?

- Warren
 
  • #6
By layers, in much the same way 3D printing is done today. But I've never heard of any attempts or research in this direction, and that's why I'm asking.
 
  • #7
Printing on a spherical surface? That's a hard problem even for T-shirt shops. Just the thought of all those edge effects makes me cringe.

Keep in mind that most IC "printing" is done by vapor deposition, which requires very high voltages. I can't even imagine a machine that could do CVD on a hemisphere in such a way as to keep the fields normal to the surface in every direction.

It would certainly be so expensive an alternative, if it's possible at all, as to put it out of the running economically.

- Warren
 
  • #8
Intel and AMD both announced years ago that 4 GHz would be a difficult limit to overcome. Intel reached 3.8 GHz on only a few processors. The main issue is that the ratio of voltage to transistor size has to be fairly large to get switching rates up to 4 GHz, and this presents a localized heating problem. To get around it, significant space would be required to allow sufficient cooling surfaces between high-speed transistors, greatly reducing transistor density on a chip. I'm not sure how much effort is being put into >4 GHz processors. Liquid cooling would be another solution, but I'm not sure it will ever become mainstream on home computers.
 
  • #10
chroot said:
Printing on a spherical surface? That's a hard problem even for T-shirt shops.

I think you're misunderstanding. I didn't mean to print on the surface, but instead to have the chip embedded inside the sphere, in a three-dimensional fashion.
 
  • #11
-Job- said:
I think you're misunderstanding. I didn't mean to print on the surface, but instead to have the chip embedded inside the sphere, in a three-dimensional fashion.

The early Cray designs were pretty much like that, except the geometry was cylindrical not spherical.

But since the Cray CPUs were built out of SSI ECL-logic chips, which were the fastest available at the time, the result was a few orders of magnitude bigger than what you are thinking of. A 1 m diameter CPU running at 80 MHz and consuming about 20 kW of power was state of the art in 1980, but not any more!
 
  • #12
As a side note, more gigahertz does not always mean a faster processor. For example, AMD's last generation, Socket 939, beat Intel's offerings hands down and was the CPU of choice (I built several computers around AMD's CPUs). At the time, Intel's CPUs were rated at just under 4 GHz, while AMD's equivalents never broke 2.7 GHz.
 
  • #13
In response to the OP's question, I think one has to also consider cost of production [is it too expensive?], hardware requirements [does it run too hot?], and the computing market [will enough people pay for it? can my competition do it?].

Certainly, new materials and production processes will help with raw CPU power.
I'm sure there's new stuff in the pipeline.

It seems clusters of multi-core CPUs are the trend now. We could probably use better software to take advantage of this.
 
  • #14
Why focus only on the CPU? There are other parts of the computer that are getting faster day by day, such as memory and I/O systems, not to mention video cards (stronger GPUs, more RAM). As far as CPUs go, I think we will see more and more multi-core designs.

Btw: how fast is fast enough? I have a P4 1.6 GHz with 512 MB RAM and it does the job superbly; I can do everything I want (Internet, some light-to-medium gaming, and programming). Frankly, I don't see the need for faster desktop hardware.
 
  • #15
Thanks for the explanations... though I can't say I really understand much :smile: , I think I get the gist.

For the average user, I guess there's not much difference. But I use my computer to work with film editing and music too. It's amazing to me that I can now do with an iMac what would have required a professional-grade computer a few years ago... and when you're starting out and working indie, the difference in $ is the difference between being able to afford a camera, or a sound/light set, and not.
For users like me, the difference is definitely noticeable: in render times, playback, etc.
 
  • #16
moe darklight said:
Thanks for the explanations... though I can't say I really understand much :smile: , I think I get the gist.

The gist is that a processor's clock speed is a very poor way to judge its overall performance, especially at this point in their technological development. It was a pretty reliable metric in the early '90s, but it's almost meaningless today.

It's amazing to me that I can now do with an iMac what would have required a professional-grade computer a few years ago...

Wait, weren't you the person who started this thread by asserting that computers weren't getting any faster?

- Warren
 
  • #17
chroot said:
Wait, weren't you the person who started this thread by asserting that computers weren't getting any faster?

- Warren

:smile: Ok, that does sound like it makes no sense... I was saying that processors weren't getting faster at the same rate as they were before. I obviously don't know much about computer hardware :biggrin:. What I meant is that, at the rate they were going before, I was expecting something like a 5 GHz computer by 2007.
 
  • #18
It doesn't seem like they're getting faster because now we run even more complicated programs. More complex programs mean more lines of code to process. Try the hardware of today, but on Windows 98: you'll feel like Windows 98 took steroids, speed, and meth all at the same time. ;)

Although I must note Windows 98 won't recognize 4 GB of RAM or a TB of hard drive space. But you'll still feel the difference tremendously.

(Sorry if I have spelling errors; too lazy to download and install the spell checker, lol, although I should know how to spell anyway :/ ... computers, the beginning and the end of humanity. That totally didn't make sense.)
 
  • #19
kruptworld said:
It doesn't seem like they're getting faster because now we run even more complicated programs. More complex programs mean more lines of code to process. Try the hardware of today, but on Windows 98: you'll feel like Windows 98 took steroids, speed, and meth all at the same time. ;)

Exactly. My former Win98 box would snap into Corel Draw 5 in a few seconds.
Nowadays it seems both the OS and the programs I run are so bloated that it takes a LONG time.
Part of this extended time, however, is due to my running "real-time" AV/Spyware scans.
 
  • #20
One of the problems is that, while the theoretical maximum number of operations a CPU is capable of has increased significantly in recent years, many pieces of software are unable to utilise this, because the increase in performance comes from putting more cores onto a chip rather than from increasing the clock rate of a single core. Because of the heat dissipation/voltage/stability issues, clock rate has plateaued at around 3 GHz, so most improvements have come in the form of (1) carrying out more operations per clock cycle per core, and (2) putting more cores in there.

This sounds great, but as it happens, coding for 4-6 cores (or double that, for the Intel processors with "hyperthreading") rather than one presents many difficulties (in particular to do with memory access), so a lot of software will still only use one core. Despite the increase in transistors, without the right software things won't get much quicker.
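A rough way to quantify that last point is Amdahl's law: if only a fraction p of a program can run in parallel, the speedup on n cores is bounded by 1/((1 - p) + p/n). Here's a minimal sketch, with an assumed 50% parallel fraction chosen purely for illustration:

Code:
# Amdahl's law: bound on speedup when only fraction p is parallelizable.

def amdahl_speedup(p, n_cores):
    """Upper bound on speedup for parallel fraction p across n_cores."""
    return 1.0 / ((1.0 - p) + p / n_cores)

for n in (1, 2, 4, 8):
    print(f"{n} core(s), 50% parallel: {amdahl_speedup(0.5, n):.2f}x")

Even with 8 cores, half-serial code tops out below 2x, so the extra transistors sit idle unless the software is rewritten.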
 
  • #21
Where on Earth did you find this dinosaur of a thread?

Amusingly, performance per dollar has gone up eight-fold since the original posting. :)
 

FAQ: Why are computers not getting faster?

1. Why are computers not getting faster?

There are several reasons why computers are not getting faster at the rate they used to. One is the physical limitations of hardware components, such as the size of transistors and the speed at which signals can propagate. Another is that software programs are becoming more complex, requiring more processing power to run. Additionally, there may be little incentive for companies to invest in research and development for faster processors when most users won't notice the difference.

2. Is Moore's Law still applicable?

Moore's Law, which states that the number of transistors on a computer chip doubles roughly every two years, still holds, but it is becoming increasingly difficult to sustain. This is due to the physical limitations mentioned earlier, as well as the increasing cost and complexity of developing smaller, more powerful chips.

3. Will quantum computing make traditional computers obsolete?

Quantum computing has the potential to solve certain problems much faster than traditional computers, but it is not a replacement for general-purpose computers. Traditional computers will still be necessary for everyday tasks and will continue to improve in speed and capabilities.

4. Can software optimization improve computer speed?

Yes, software optimization can improve computer speed by making programs more efficient and reducing the amount of processing power needed to run them. However, there is a limit to how much optimization can be done, and it may not be enough to keep up with the increasing demands of modern software.

5. What other advancements can make computers faster?

There is ongoing research and development in areas such as artificial intelligence, quantum computing, and new materials for computer hardware that may lead to faster computers in the future. Additionally, improving computer architecture and finding new ways to parallelize tasks can also contribute to increasing computer speed.
