Why do GPUs dwarf the computational power of CPUs?

Discussion Overview

The discussion revolves around the comparative computational power of GPUs and CPUs, exploring the reasons behind the significant performance differences. Participants delve into aspects such as parallelization, memory bandwidth, and potential future developments in processing technology.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Exploratory

Main Points Raised

  • Some participants note that GPUs, like the ATI Radeon HD 6990, can achieve about 5 teraflops, while top CPUs from Intel and AMD reach around 250 gigaflops, highlighting the disparity in computational power.
  • It is mentioned that GPUs have a significantly higher number of cores (e.g., 3072 cores at 0.83 GHz) compared to CPUs (e.g., dual-core CPUs at 1.6 GHz), which allows for massive parallelization.
  • Participants discuss the memory bandwidth differences, with GPUs offering around 320 GB/sec compared to CPUs at approximately 24 GB/sec, suggesting that this contributes to the GPUs' ability to handle large data sets efficiently.
  • Some argue that CPUs are optimized for sequential processing, while GPUs excel in parallel tasks, particularly in graphics rendering and certain computational tasks.
  • There are mentions of the potential for future CPUs to incorporate more cores and parallel processing capabilities, with speculation about the development of exaflop-capable PCs in the coming years.
  • Concerns are raised about the use of GPUs in hacking, particularly for brute force attacks on encryption, indicating a shift in how computational power is being utilized.
  • Participants discuss the limitations of CPUs and GPUs, including heat dissipation issues and the challenges of optimizing algorithms for parallel processing.
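The core-count and clock figures quoted in these points can be turned into a quick back-of-the-envelope check. The sketch below computes theoretical peak throughput as cores × clock × FLOPs per core per cycle; the per-core FLOPs-per-cycle values are illustrative assumptions (fused multiply-add for the GPU, an assumed SIMD width for the CPU), not vendor specifications.

```python
# Back-of-the-envelope peak-throughput comparison using the figures
# quoted in the thread. FLOPs-per-cycle values are assumptions for
# illustration, not vendor specs.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores x clock (GHz) x FLOPs per core per cycle."""
    return cores * clock_ghz * flops_per_cycle

# GPU: 3072 stream processors at 0.83 GHz, 2 FLOPs/cycle (fused multiply-add)
gpu = peak_gflops(3072, 0.83, 2)

# CPU: 2 cores at 1.6 GHz, 8 FLOPs/cycle (assumed SIMD width)
cpu = peak_gflops(2, 1.6, 8)

print(f"GPU peak: {gpu:.0f} GFLOPS")  # ~5100 GFLOPS, i.e. ~5 TFLOPS
print(f"CPU peak: {cpu:.0f} GFLOPS")  # ~26 GFLOPS
print(f"Ratio:    {gpu / cpu:.0f}x")
```

Note that under these assumptions the ~200x gap comes almost entirely from core count; per core, the CPU is actually faster (12.8 vs 1.66 GFLOPS), which is consistent with the point above that CPUs are optimized for sequential work and GPUs for parallel work.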

Areas of Agreement / Disagreement

Participants express a range of views on the advantages and limitations of GPUs versus CPUs, with no clear consensus on the superiority of one over the other. The discussion includes both supportive and critical perspectives on the capabilities of each type of processor.

Contextual Notes

Some participants highlight the importance of optimizations in CPU and GPU design, noting that the specific applications and tasks can influence performance outcomes. There are also references to the evolving nature of processing technology and the potential for future advancements.

Who May Find This Useful

This discussion may be of interest to individuals involved in computer science, hardware engineering, cybersecurity, and those curious about advancements in processing technology and their implications.

  • #31
KrisOhn said:
Although I'm not entirely sure I understand everything that was said there, damn does it sound cool.

I have a question, though: how would the AT1000 made out of 35 billion circa-1975 GPUs stack up against something like this:
http://en.wikipedia.org/wiki/Tianhe-1A


It is becoming very difficult for electronic engineers to significantly increase the performance and efficiency of microprocessors. The next step for PC hardware is 28 nm chips, and Moore's law may break down entirely at around 11 nm.

That's when they will have to really put their heads together and think of something revolutionary if they want to keep making faster and faster hardware. There are some ideas on the slate, but most of them are purely theoretical/experimental.
 
  • #32
KrisOhn:

I had a reply that I tried to enter the day before yesterday, but there was a glitch and the post was lost.

However the gist was this:

The AT1000 and its associated GPUs contain adaptive electronics, much more sophisticated than the GPUs of the day; each GPU can be configured electronically by arrays similar to PEELs (Programmable Electrically Erasable Logic). I figure that the Tianhe-1A may just be adequate to load the P7E BIOS into the AT1000 unit. My original estimate for loading and compiling the P7E BIOS on circa-1990 CRAY computers was about 3.38 years of computer time. My estimate for the Tianhe-1A is about six or more days.

FishmanGeertz:

I believe that the upper limit on computer clock speed is set by EM effects beyond 10 GHz; it might become necessary to couple devices with waveguides. Imagine using X-rays to clock a computer circuit: very, very difficult, but not necessarily impossible.

As for the heating problem, immersing the chip in a suitable non-polar refrigerant coupled to a heat sink built into the chip carrier should solve it. Liquid coolant allows maximum junction heat transfer over the entire surface of the device, not just through its substrate.
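The 10 GHz figure above can be sanity-checked with a one-line wavelength calculation: at that frequency the free-space wavelength is already down to circuit-board dimensions, which is why plain traces start behaving like transmission lines and waveguide-style coupling becomes relevant. A minimal sketch (the 10 GHz threshold itself is the poster's estimate, not a hard physical limit):

```python
# Free-space wavelength at a given clock frequency. When the wavelength
# approaches board/package dimensions, signal traces must be treated as
# transmission lines rather than ideal wires.

C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_cm(freq_ghz):
    """Free-space wavelength in centimetres for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 100

print(f"{wavelength_cm(10):.1f} cm")  # ~3.0 cm at 10 GHz: board scale
```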
 
  • #33
Eimacman said:
KrisOhn:

I had a reply that I tried to enter the day before yesterday, but there was a glitch and the post was lost.

However the gist was this:

The AT1000 and its associated GPUs contain adaptive electronics, much more sophisticated than the GPUs of the day; each GPU can be configured electronically by arrays similar to PEELs (Programmable Electrically Erasable Logic). I figure that the Tianhe-1A may just be adequate to load the P7E BIOS into the AT1000 unit. My original estimate for loading and compiling the P7E BIOS on circa-1990 CRAY computers was about 3.38 years of computer time. My estimate for the Tianhe-1A is about six or more days.

FishmanGeertz:

I believe that the upper limit on computer clock speed is set by EM effects beyond 10 GHz; it might become necessary to couple devices with waveguides. Imagine using X-rays to clock a computer circuit: very, very difficult, but not necessarily impossible.

As for the heating problem, immersing the chip in a suitable non-polar refrigerant coupled to a heat sink built into the chip carrier should solve it. Liquid coolant allows maximum junction heat transfer over the entire surface of the device, not just through its substrate.

Do you think we'll ever see EXAFLOP processors in home computers?
 
  • #34
FishmanGeertz said:
Do you think we'll ever see EXAFLOP processors in home computers?

You might, but your computer still won't be able to play the latest games.
 
  • #35
jhae2.718 said:
You might, but your computer still won't be able to play the latest games.

An EXAFLOP GPU would absolutely butcher Crysis: 2560x1600, 32x MSAA, everything else maxed, across six monitors, and you'd still get over 100 FPS! I'm not sure how it would perform in 2030-2035 PC games.
 
