Ratio of Computation power over time: Consumer PCs vs. Supercomputers?

  • Thread starter Algr
  • Start date
  • #1
Algr
Of course all computers are getting faster over time. But are computers at different cost levels changing at the same rate? I've read some things indicating that high end supercomputers aren't advancing as quickly as PCs on the consumer level. Could this be measured objectively over time?

I find Google struggles with this kind of question, as it isn't clear what to call this. It's hard to search for something you don't have a name for.

What I'm hoping someone has done is something like this:
For each year:
- How powerful is a $1000 computer?
- How powerful is a $10 million computer?
- Calculate the ratio, and plot on a graph.

So in 1985, adjusted for inflation, you would be comparing a Cray 2 supercomputer (1.9 GFLOPS) to perhaps a Commodore 64.
Today you would compare a typical PC to something from Frontier, perhaps?
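If nobody has done it, the calculation itself is simple once the data is collected. Here is a minimal Python sketch of the ratio-over-time plot described above; all the numbers are illustrative placeholders except the Cray-2 figure quoted in this thread, and would need to be replaced with real inflation-adjusted price points and benchmark results (e.g. from the TOP500 list).

```python
# Minimal sketch of the comparison described above: for each year, take the
# peak FLOPS of a ~$1000 (inflation-adjusted) consumer machine and of a top
# supercomputer, then plot the ratio. All numbers below are illustrative
# placeholders except the Cray-2 figure quoted in this thread.
import matplotlib.pyplot as plt

# year: (consumer_flops, supercomputer_flops)
data = {
    1985: (1e5, 1.9e9),   # placeholder home micro vs. Cray-2 (1.9 GFLOPS)
    2000: (1e9, 1e12),    # placeholder values
    2020: (1e12, 1e17),   # placeholder values
}

years = sorted(data)
ratios = [data[y][1] / data[y][0] for y in years]

plt.semilogy(years, ratios, marker="o")
plt.xlabel("Year")
plt.ylabel("supercomputer FLOPS / consumer FLOPS")
plt.title("Top supercomputer vs. ~$1000 PC (illustrative placeholders)")
plt.show()
```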

Has a study like this been done?
 
  • #2
I recall a time in the 1990s when no one sold a computer for as low as $1000. Maybe comparing average to maximum would be more useful.
 
  • #3
I don't think there's a good correlation here.

Supercomputers are always striving for speed, running more calculations per second to model vast systems of points and particles.

Consumer electronics are driven by features, battery life, compactness, economy, and speed.

This is why we have ARM processors to address compactness, battery life, and features, versus other CPUs designed for general computing, versus ML chips. All are playing with the mix of capacity vs. speed vs. compactness vs. cost.
 
  • #4
Supercomputers in the last 30 years or so use PC processors. About half are x86, and the remainder are some mix of ARM, POWER, and GPUs. So there can't be a huge difference in CPU performance, because they use the same chips.

These machines have thousands of nodes and a fast network connecting them. Network speed is a limitation, as at some point it's faster for a node to do a calculation itself than to wait for another node to send it an answer.

Power (and its offspring, heat removal) is a factor. The original IBM PC had a 65 W power supply, and the CPU was cooled by air without even a heat sink. Today's PCs can have kilowatt supplies and water-cooled chips. About 1/3 (and increasing) of the cost of a supercomputer is electricity.

The same forces that drove computing to the desktop are driving supercomputers toward smaller "mini-supercomputers". If I can get an allocation of 0.1% of a million-core machine, wouldn't it be better to get exclusive use of a thousand-core machine? So the true supercomputers are becoming more and more specialized for a workflow that relies more on interprocess communication and less on raw CPU power.
 
  • #5
If a class of supercomputers uses a new technology, we might expect it to make a great leap in speed and then stay there for a while. In the past, supercomputers had 64-bit processors that were significantly faster than the contemporary PCs. Eventually, home PCs got 64-bit processors and supercomputers just combined a lot more of them. In the near future, quantum computers may be the new supercomputer. It might take a while for that to be available in a home computer. IMO, the progress of home PC speeds may depend on the need for more speed. If video games are the driver of home PC speeds, that driving force might eventually be satisfied. Your guess is as good as mine.
 
  • #6
One reason for supercomputer miniaturization is to make the lines between components shorter for faster signalling. The problem is then that you have to keep the CPU cool, either by minimizing the current or by using external cooling methods to get rid of excess heat.

It becomes a complex supercomputer balancing act to build things compactly that can run at the fastest of speeds. As @Vanadium 50 has mentioned, designers also try to refactor problems for parallel calculations and run pieces of it across a network to improve the speed of computation.

Consumer computers benefit from these advances and try to make a marketable (i.e., cheaper) computer that can run more complex video games at speeds acceptable to players, i.e., so that they experience no apparent lag in play. Faster speeds give software designers more room to add time-consuming graphical features that don't go past the 1/10-second threshold of perceived slowness.

GUI systems, for example, wanted responses within 1/10 of a second for button presses; any longer and the user felt the machine was too slow. I once worked on a new OS called Taligent that was over-optimized for OO design (everything was an object, and objects had deep hierarchies of at least 10 levels of parent components). The end result was a GUI so slow that a button press could take 5 seconds on a machine that would normally respond in 1/10 of a second. Needless to say, the OS never made it out of development hell and became a victim of abandonment by Apple (where it was known as the Pink OS) and IBM.
 
  • #7
I look at quantum computers as the new math coprocessor of an existing computer. In the early days of the 8080-style chips, a math coprocessor was added to eliminate the need for software-based floating-point math routines. Eventually, though, it became indispensable and was merged onto the CPU chip.

The CELL chip design had several satellite processors on chip that were controlled by a general-purpose processor. Parallel calculations could be farmed out to one or more satellite processors to do some graphics calculation. One weakness was that the satellite processors worked very fast except when a jump was required, at which point the pipeline of stacked instructions (the secret to their speed) had to be flushed. Typically the CELL chip could run at twice the speed of other CPU chips. There were plans to use it in mainframes (many networked chips), minis (several chips), desktops (a few chips), and gaming consoles (one chip): a kind of Lego building-blocks approach to computing. Intel's multi-core chips made it obsolete in the marketplace pretty quickly.

If and when we do get quantum computing power for consumer machines, it will likely be a coprocessor-style design where the general-purpose CPU programs the QC n times, then retrieves and aggregates the results: basically wrapping a digital blanket around the quantum results.
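As a rough sketch of that coprocessor pattern (not any real vendor API; `run_quantum_circuit` below is a hypothetical stand-in, mocked classically), the classical side just repeats the probabilistic quantum run and tallies the outcomes:

```python
# Sketch of the QC-as-coprocessor pattern: submit the same quantum program n
# times, then aggregate the probabilistic results classically ("a digital
# blanket around the quantum results"). run_quantum_circuit is a hypothetical
# stand-in, mocked here with a classical random bitstring.
from collections import Counter
import random

def run_quantum_circuit(program: str) -> str:
    # Hypothetical call into a quantum coprocessor; the mock returns a 4-bit string.
    return "".join(random.choice("01") for _ in range(4))

def sample_and_aggregate(program: str, shots: int = 1000) -> Counter:
    # The classical CPU repeats the run and tallies the outcomes.
    return Counter(run_quantum_circuit(program) for _ in range(shots))

counts = sample_and_aggregate("hypothetical-circuit")
print(counts.most_common(3))  # the CPU then picks the most likely answer(s)
```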
 
  • #8
I'm not sure we will see quantum supercomputers any time soon. We don't yet even have a quantum abacus.

Further, US government supercomputers are intended to solve a very, very specific class of problems. You can guess what problem this is by looking to see who has them. Quantum computers are not so well-suited for this problem.

I have run full-machine jobs on some of the largest systems, and I can tell you the limitation: it's the networking. Crudely, suppose you have 100 Gb/s links between nodes. Pretty fast, right? But if this is shared among 100,000 nodes, each node gets 1 Mb/s. The '80s called and they want their networking back.
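A quick back-of-envelope check of that figure, assuming the 100 Gb/s link is shared evenly among the nodes:

```python
# Divide one shared 100 Gb/s link evenly among 100,000 nodes.
link_bps = 100e9        # 100 Gb/s
nodes = 100_000
print(f"{link_bps / nodes / 1e6:.1f} Mb/s per node")  # -> 1.0 Mb/s per node
```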

If you want to have code that runs at scale, you can use the interconnects to coordinate work, but not to do much work.

I think the things you are going to see more of are:
  • On-die GPUs like the Intel Xe
  • Maybe FPGAs so operations other than FMA can be sped up
  • CPU clocks controlled by the network. (If I am not ready for the results of a node's calculation for a while, I might save heat by slowing it down) Today's throttling isn't very predictable, much less optimal.
 
  • #9
It seems to me that quantum computers will never be helpful for ordinary computations. The main use could be the simulation of quantum systems in physics and chemistry.
 

1. How has the computation power of consumer PCs evolved over time compared to supercomputers?

The computation power of both consumer PCs and supercomputers has increased exponentially over time, largely following Moore's Law, which predicts a doubling of transistors on a microchip (and hence computational power) approximately every two years. While supercomputers have historically led in computational capabilities, the gap in raw power between consumer PCs and supercomputers has widened, although consumer PCs have also become significantly more powerful and capable of handling complex tasks that were once the domain of supercomputers.
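As a small illustration of that doubling cadence (treating "doubling every two years" as a rough rule of thumb rather than an exact law):

```python
# Performance multiplier after `years`, assuming a doubling period of two years.
def moore_multiplier(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(moore_multiplier(20))  # ~1024x over two decades, under this assumption
```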

2. What metrics are used to compare computation power between consumer PCs and supercomputers?

Computation power is often measured in FLOPS (floating-point operations per second), which quantifies how many operations a computer can perform in a second. For consumer PCs, benchmarks like Cinebench or Geekbench are also used, providing a more relatable measure of performance through tasks like image rendering or mobile performance. Supercomputers, on the other hand, are often evaluated based on their performance in specialized tests, such as those from the TOP500 list, which ranks supercomputers based on their LINPACK benchmark scores.

3. What technological advancements have driven the increase in computation power in supercomputers?

Several key advancements have driven the increase in computation power in supercomputers, including parallel processing, where many calculations are carried out simultaneously, and the development of highly efficient and specialized processors like GPUs and TPUs. Advances in cooling technology, energy efficiency, and architectural innovations have also played critical roles in enhancing the capabilities of supercomputers.

4. Are there specific applications or tasks where the gap in computation power between consumer PCs and supercomputers is particularly evident?

The gap in computation power is most evident in tasks that require massive parallel processing capabilities, such as climate modeling, cryptographic analysis, and large-scale simulations (e.g., nuclear simulations, astronomical phenomena). These tasks involve handling vast amounts of data and computations that are beyond the practical reach of consumer PCs, necessitating the unique capabilities of supercomputers.

5. How might the future trajectory of consumer PCs and supercomputers differ?

The future trajectory of consumer PCs and supercomputers might diverge further with supercomputers likely focusing more on specialized, high-performance tasks that require extraordinary computational abilities. Consumer PCs might continue to become more integrated with everyday devices and focus on energy efficiency, portability, and cost-effectiveness, incorporating sufficient power to perform everyday tasks and some level of advanced computing, possibly supported by cloud computing resources for more demanding tasks.
