Why do GPUs dwarf the computational power of CPUs?

Summary
The ATI Radeon HD 6990 demonstrates significantly higher computational power compared to top-tier CPUs, achieving around 5 teraflops versus CPUs' 250 gigaflops, primarily due to GPUs' ability to massively parallelize tasks. This architecture allows GPUs to excel in specific applications like graphics rendering and complex calculations, while CPUs are optimized for sequential processing. The discussion highlights the potential for future processors to reach exaflop capabilities, although current limitations include heat dissipation challenges and the need for innovative cooling solutions. The conversation also touches on the increasing use of GPUs in hacking due to their processing power, and the evolution of CPU designs to incorporate more cores and parallel processing capabilities. As technology progresses, the feasibility of stacking transistors in multi-layer designs is debated, with concerns about heat management remaining a critical issue. Overall, advancements in GPU technology and parallel processing are reshaping computing power dynamics, suggesting a future where exaflop processors could become commonplace in personal computing.
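The teraflop-versus-gigaflop gap in the summary comes straight from throughput arithmetic: peak FLOPS is roughly cores × clock × FLOPs per cycle. A minimal sketch, using approximate public 2011-era figures (the core counts, clocks, and FLOPs-per-cycle below are assumptions, not numbers from the thread):

```python
# Back-of-envelope peak throughput: FLOPS ~ cores * clock * FLOPs/cycle.
# Hardware figures are approximate 2011-era numbers, assumed for illustration.

def peak_flops(cores, clock_hz, flops_per_cycle):
    return cores * clock_hz * flops_per_cycle

# ATI Radeon HD 6990: ~3072 stream processors (two GPUs), ~830 MHz,
# 2 FLOPs/cycle via fused multiply-add -> roughly 5.1 TFLOPS single precision.
gpu = peak_flops(3072, 830e6, 2)

# Contemporary quad-core desktop CPU: 4 cores, ~3.4 GHz,
# ~8 single-precision FLOPs/cycle with AVX -> roughly 109 GFLOPS.
cpu = peak_flops(4, 3.4e9, 8)

print(f"GPU ~{gpu / 1e12:.1f} TFLOPS, CPU ~{cpu / 1e9:.0f} GFLOPS, "
      f"ratio ~{gpu / cpu:.0f}x")
```

The ratio lands in the tens, not thousands: the GPU's edge is almost entirely the core count, which is why it only pays off for workloads that parallelize.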
  • #31
KrisOhn said:
Although I'm not entirely sure I understand everything that was said there, damn does it sound cool.

I have a question, though: how would the AT1000, made out of 35 billion circa-1975 GPUs, stack up against something like this:
http://en.wikipedia.org/wiki/Tianhe-1A


It is becoming very difficult for electronic engineers to significantly increase the performance and efficiency of microprocessors. The next step for PC hardware is 28 nm chips. Moore's law might become utterly obsolete at around 11 nm.

That's when they will have to really put their heads together and think of something revolutionary if they want to keep making faster and faster hardware. There are some ideas on the slate, but most of them are purely theoretical/experimental.
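The 28 nm → 11 nm timeline above can be sketched as a toy projection: each full node step scales linear dimensions by roughly 0.7x (doubling transistor density), on a roughly two-year cadence. The starting year and the idealized scaling factor are assumptions for illustration, not claims from the post:

```python
# Toy Moore's-law projection: ~0.7x linear shrink per node, one node
# every ~2 years. Starting point (28 nm in 2011) is an assumption.

node_nm = 28.0
year = 2011
while node_nm > 11:
    node_nm *= 0.7  # linear dimensions shrink ~0.7x -> density doubles
    year += 2
print(f"~{node_nm:.0f} nm around {year}")
```

Under these idealized assumptions the sub-11 nm wall arrives within three node steps, which is why the post treats it as a near-term problem rather than a distant one.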
 
  • #32
KrisOhn:

I had a reply that I tried to enter day-before-yesterday but there was a glitch and the post was lost.

However the gist was this:

The AT1000 and its associated GPUs contain adaptive electronics, much more sophisticated than the GPUs of the day; each GPU can be configured electronically by arrays similar to PEELs (Programmable Electrically Erasable Logic). I figure that the Tianhe-1A may just be adequate to load the P7E BIOS into the AT1000 unit. My original estimate for loading and compiling the P7E BIOS on CRAY computers circa 1990 was about 3.38 years of computer time; my estimate for the Tianhe-1A is about six days or more.

FishmanGeertz:

I believe the upper limit on computer clock speed is set by EM effects beyond 10 GHz; it might be necessary to couple devices with waveguides. Imagine using X-rays to clock a computer circuit: very, very difficult, but not necessarily impossible.

As for the heating problem, immersing the chip in a suitable non-polar refrigerant coupled to a suitable heat sink built into the chip carrier should solve it. Liquid coolant allows maximum junction heat transfer over the entire surface of the device, not just through the substrate.
 
  • #33
Eimacman said:
I believe the upper limit on computer clock speed is set by EM effects beyond 10 GHz; it might be necessary to couple devices with waveguides. [...]

Do you think we'll ever see EXAFLOP processors in home computers?
 
  • #34
FishmanGeertz said:
Do you think we'll ever see EXAFLOP processors in home computers?

You might, but your computer still won't be able to play the latest games.
 
  • #35
jhae2.718 said:
You might, but your computer still won't be able to play the latest games.

An EXAFLOP GPU would absolutely butcher Crysis: 2560x1600, 32x MSAA, everything else set to max, across six monitors, and you would still get over 100 FPS! I'm not sure how it would perform in 2030-2035 PC games.
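For scale, the claim above can be turned into a per-pixel compute budget: resolution, monitor count, and frame rate are taken from the post, while the exaflop figure is just 10^18 FLOPs/s. A rough sketch:

```python
# Per-pixel compute budget for an exaflop GPU at the settings above.
# Display settings are from the post; the budget math is illustrative only.

width, height = 2560, 1600
monitors, fps = 6, 100

pixels_per_second = width * height * monitors * fps  # ~2.46e9 pixels/s
exaflop = 1e18                                       # 10^18 FLOPs/s

flops_per_pixel = exaflop / pixels_per_second
print(f"~{flops_per_pixel:.2e} FLOPs available per pixel per frame")
```

That works out to hundreds of millions of floating-point operations per pixel per frame, which is why even an extreme multi-monitor setting would not come close to saturating such a part.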
 
