Ratio of Computation power over time: Consumer PCs vs. Supercomputers?

  • Thread starter: Algr

Discussion Overview

The discussion revolves around the comparative advancement of computation power in consumer PCs versus supercomputers over time. Participants explore whether these categories of computers are evolving at similar rates and how this can be measured, considering various factors such as cost, technology, and application areas.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants suggest that while all computers are becoming faster, the rate of advancement may differ significantly between consumer PCs and supercomputers, with consumer PCs potentially advancing more rapidly.
  • One participant recalls that in the 1990s, computers were not available for as low as $1000, proposing that comparing average-to-maximum performance might yield more useful insights.
  • Another viewpoint emphasizes that supercomputers focus on maximizing calculations per second for complex modeling, while consumer electronics prioritize features, battery life, and cost-effectiveness.
  • A participant notes that many supercomputers utilize similar processors to consumer PCs, suggesting that performance differences may not be as pronounced as expected.
  • Concerns are raised about the limitations of network speed in supercomputers, which can affect overall performance and efficiency.
  • Some participants speculate that advancements in quantum computing could lead to significant leaps in supercomputer capabilities, though the timeline for consumer-level quantum computing remains uncertain.
  • There is discussion about the potential for quantum computers to serve as coprocessors, enhancing existing computational architectures rather than replacing them.
  • One participant expresses skepticism about the practical utility of quantum computers for general computations, suggesting their primary application may be in simulating quantum systems.

Areas of Agreement / Disagreement

Participants express a range of views on the advancement of computation power, with no clear consensus on whether consumer PCs or supercomputers are advancing more rapidly. There are competing perspectives on the implications of quantum computing and the specific challenges faced by supercomputers.

Contextual Notes

Participants highlight various factors influencing performance, such as power consumption, cooling requirements, and the nature of computational tasks. The discussion also reflects on historical trends and technological shifts without resolving the complexities involved.

Algr
Of course all computers are getting faster over time. But are computers at different cost levels changing at the same rate? I've read some things indicating that high-end supercomputers aren't advancing as quickly as consumer-level PCs. Could this be measured objectively over time?

I find Google struggles with this kind of question, as it isn't clear what to call this. It's hard to search for something you don't have a name for.

What I'm hoping someone has done is something like this:
For each year:
- How powerful is a $1000 computer?
- How powerful is a $10 million computer?
- Calculate the ratio, and plot on a graph.

So in 1985, adjusted for inflation, you would be comparing a Cray-2 supercomputer (1.9 GFLOPS) to perhaps a Commodore 64.
Today you would compare a typical PC to something like Frontier, perhaps?

Has a study like this been done?
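Whether or not such a study exists, the bookkeeping is easy to sketch. Below is a minimal Python example of the proposed calculation; every GFLOPS value except the Cray-2 figure quoted above is an invented placeholder, so only the structure of the ratio-per-year plot matters, not the numbers.

```python
# Sketch of the proposed comparison: performance of a ~$1000 consumer machine
# vs. a ~$10M supercomputer, year by year, plotted as a ratio. All figures
# except the ~1.9 GFLOPS Cray-2 number are made-up placeholders.
import matplotlib.pyplot as plt

# year -> (consumer-PC GFLOPS, supercomputer GFLOPS)
data = {
    1985: (1e-4, 1.9),   # e.g. Commodore 64-class machine vs. Cray-2
    2005: (1e1, 1e5),    # placeholder values
    2025: (1e3, 1e9),    # placeholder values
}

years = sorted(data)
ratios = [data[y][1] / data[y][0] for y in years]  # supercomputer / consumer PC

plt.semilogy(years, ratios, marker="o")
plt.xlabel("Year")
plt.ylabel("Supercomputer GFLOPS / consumer-PC GFLOPS")
plt.title("Top-end vs. consumer compute (placeholder data)")
plt.show()
```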
 
I recall a time in the 1990s when no one sold a computer for as low as $1000. Maybe comparing average-to-maximum would be more useful.
 
I don't think there's a good correlation here.

Supercomputers are always striving for speed, running more calculations per second to model vast systems of points and particles.

Consumer electronics is driven by features, battery life, compactness, economy and speed.

That's why we have ARM processors to address compactness, battery life and features, versus other CPUs designed for general computing, versus ML chips. All are playing with the mix of capacity vs. speed vs. compactness vs. cost.
 
Likes: phinds and FactChecker
Supercomputers in the last 30 years or so have used PC processors. About half are x86, and the remainder is some mix of ARM, POWER and GPUs. So there can't be a huge difference in CPU performance, because they use the same chips.

These machines have thousands of nodes and a fast network connecting them. Network speed is a limitation: at some point it's faster for a node to do a calculation itself than to wait for another node to send it an answer.
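A rough back-of-the-envelope sketch of that trade-off, with assumed (not measured) numbers for node throughput, link latency and bandwidth: a node only benefits from waiting on a neighbour if the wait is shorter than the time to redo the work itself.

```python
# Back-of-the-envelope: when is it faster to wait for a remote result than to
# recompute it locally? All numbers here are assumptions for illustration.
NODE_FLOPS = 50e12           # assumed node throughput: 50 TFLOPS
LINK_LATENCY_S = 2e-6        # assumed network latency: 2 microseconds
LINK_BANDWIDTH_BPS = 100e9   # assumed link bandwidth: 100 Gb/s

def time_to_receive(bytes_needed: float) -> float:
    """Time to get a result of `bytes_needed` bytes from another node."""
    return LINK_LATENCY_S + (8 * bytes_needed) / LINK_BANDWIDTH_BPS

def flops_forgone(bytes_needed: float) -> float:
    """Floating-point operations the node could have done in that time."""
    return NODE_FLOPS * time_to_receive(bytes_needed)

# Even fetching a single 8-byte double costs ~2 us of latency, during which
# the node could have done on the order of 1e8 floating-point operations.
print(f"{flops_forgone(8):.2e} FLOPs forgone per remote double")
```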

Power (and its offspring, heat removal) is a factor. The original IBM PC had a 65 W power supply and the CPU was cooled by air without even a heat sink. Today's PCs can have kilowatt supplies and water-cooled chips. About a third (and increasing) of the cost of a supercomputer is electricity.

The same forces that drove computing to the desktop are driving supercomputers toward smaller "mini-supercomputers". If I can get an allocation of 0.1% of a million-core machine, wouldn't it be better to get exclusive use of a thousand-core machine? So the true supercomputers are becoming more and more specialized for workflows that rely more on interprocess communication and less on raw CPU power.
 
Likes: russ_watters, fluidistic and jedishrfu
If a class of supercomputers uses a new technology, we might expect it to make a great leap in speed and then stay there for a while. In the past, supercomputers had 64-bit processors that were significantly faster than the contemporary PCs. Eventually, home PCs got 64-bit processors and supercomputers just combined a lot more of them. In the near future, quantum computers may be the new supercomputer; it might take a while for that to be available in a home computer. IMO, the progress of home PC speeds may depend on the need for more speed. If video games are the driver of home PC speeds, that driving force might eventually be satisfied. Your guess is as good as mine.
 
Likes: berkeman and jedishrfu
One reason for supercomputer miniaturization is to get the lines between components shorter for faster signalling. The problem then is that you have to keep the CPU cool, either by minimizing the current or by using external cooling methods to get rid of excess heat.

It becomes a complex balancing act to build a supercomputer compactly so that it can run at the fastest speeds. As @Vanadium 50 has mentioned, designers also try to refactor problems into parallel calculations and run the pieces across a network to improve the speed of computation.
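As a toy illustration of that refactoring, here is a minimal Python sketch that splits an embarrassingly parallel sum into independent chunks handled by worker processes; a real HPC code would distribute the chunks over a network (e.g. with MPI) rather than over local processes.

```python
# Toy illustration of refactoring a problem into independent pieces that can
# run in parallel. A supercomputer would spread the chunks over networked
# nodes (e.g. via MPI); here we just use local worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(k * step, (k + 1) * step) for k in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial sum, computed in parallel pieces
```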

Consumer computers benefit from these advances, and the aim there is a marketable (i.e. cheaper) computer that can run more complex video games at speeds acceptable to players, i.e. with no apparent lag in play. Faster hardware gives software designers more room to add time-consuming graphical features without crossing the roughly 1/10-second threshold at which things start to feel slow.

GUI systems, for example, wanted responses to button presses within about 1/10 of a second; any longer and the user felt the machine was too slow. I once worked on a new OS called Taligent that was over-optimized for OO design (everything was an object, and objects had deep hierarchies of at least 10 levels of parent components). The end result was a GUI so slow that a button press could take 5 seconds on a machine that would normally respond in 1/10 of a second. Needless to say, the OS never made it out of development hell and was eventually abandoned by Apple (where it was known as the Pink OS) and IBM.
 
I look at quantum computers as the new math coprocessor for an existing computer. In the early days of the 8080-style chips, a math coprocessor was added to eliminate the need for software-based floating-point math routines. Eventually, though, it became indispensable and was merged into the CPU chip.

The CELL chip design had several satellite processors on-chip, controlled by a general-purpose processor. Parallel calculations could be farmed out to one or more satellite processors, for example to do graphics work. One weakness was that a satellite processor ran very fast except when a jump was required, at which point its pipeline of queued instructions (the secret to its speed) had to be flushed. Typically the CELL chip could run at twice the speed of other CPU chips. There were plans to use it in mainframes (many networked chips), minis (several chips), desktops (a few chips) and gaming consoles (one chip): a kind of Lego-building-blocks approach to computing. Intel's multi-core chips made it obsolete in the marketplace pretty quickly.

If and when we do get quantum computing power for consumer machines, it will likely be a coprocessor-style design where the general CPU programs the QC n times and then retrieves and aggregates the results. Basically, wrapping a digital blanket around the quantum results.
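A minimal sketch of that coprocessor-style loop, assuming a hypothetical run_quantum_sampler() call standing in for whatever vendor interface actually drives the hardware: the classical CPU submits the same program n times and aggregates the sampled results.

```python
# Sketch of the "digital blanket" loop: submit the same quantum program many
# times, then aggregate the measured bitstrings classically.
# run_quantum_sampler() is a hypothetical stand-in, not a real API.
from collections import Counter
import random

def run_quantum_sampler(program: str) -> str:
    # Placeholder: pretend the QC returns a 3-bit measurement outcome.
    return format(random.getrandbits(3), "03b")

def run_with_digital_blanket(program: str, shots: int = 1000) -> Counter:
    """Program the QC `shots` times and aggregate the outcomes."""
    return Counter(run_quantum_sampler(program) for _ in range(shots))

if __name__ == "__main__":
    histogram = run_with_digital_blanket("hypothetical_circuit", shots=1000)
    # The classical side then post-processes the histogram, e.g. takes the
    # most frequent outcomes as the answer.
    print(histogram.most_common(3))
```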
 
I'm not sure we will see quantum supercomputers any time soon. We don't yet even have a quantum abacus.

Further, US government supercomputers are intended to solve a very, very specific class of problems. You can guess what problem this is by looking to see who has them. Quantum computers are not so well-suited for this problem.

I have run full-machine jobs on some of the largest systems and I can tell you the limitation: it's the networking. Crudely, suppose you have 100 Gb/s links between nodes. Pretty fast, right? But if this is shared among 100,000 nodes, each node gets 1 Mb/s. The '80s called and they want their networking back.

If you want to have code that runs at scale, you can use the interconnects to coordinate work, but not to do much work.
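The arithmetic behind that crude per-node figure, spelled out (the link speed and node count are the illustrative values from the post, not measurements):

```python
# Per-node share of a shared 100 Gb/s fabric, as in the crude example above.
link_gbps = 100       # 100 Gb/s links
nodes = 100_000       # 100,000 nodes sharing the aggregate bandwidth

per_node_mbps = link_gbps * 1_000 / nodes    # convert Gb/s to Mb/s, then divide
print(f"{per_node_mbps:.0f} Mb/s per node")  # -> 1 Mb/s: 1980s-class networking
```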

I think the things you are going to see more of are:
  • On-die GPUs like the Intel Xe
  • Maybe FPGAs so operations other than FMA can be sped up
  • CPU clocks controlled by the network. (If I am not ready for the results of a node's calculation for a while, I might save heat by slowing it down.) Today's throttling isn't very predictable, much less optimal.
 
It seems to me that quantum computers will never be helpful for ordinary computations. Their main use could be the simulation of quantum systems in physics and chemistry.
 
