
GPU Supercomputers

  1. Nov 22, 2008 #1
  2. jcsd
  3. Nov 22, 2008 #2



    Staff: Mentor

    People have been making supercomputers by mounting standard desktop PC chips in one cabinet for at least a decade. What is innovative about this one is that it uses GPUs, which are specialized processors with limited instruction sets: they are restricted in what they can do, but they are very fast at it.

    For certain types of tasks, clustering (and this can be done over a network too) works well, but for others it doesn't. The drawback is that the processors aren't really collaborating on the same task; the task is broken up into sub-tasks that they all work on individually. If a problem can't be broken up into pieces, it won't necessarily work well on a cluster. Digital animation, however, is an application well suited to this kind of technology and has been done this way for a long time. Toy Story was made that way in 1995.
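
    To make the "break it into independent pieces" point concrete, here is a minimal sketch in CUDA (my own illustration; the thread doesn't name any particular toolkit) of an embarrassingly parallel job: every pixel of a frame is its own sub-task, so each GPU thread can shade one pixel with no communication with its neighbours. The shading rule itself is invented purely for illustration.

    [code]
    // Each thread computes one pixel of a frame, independently of all the others.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void shade(float *out, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        // Toy shading rule: brightness falls off with distance from the frame centre.
        float dx = x - 0.5f * width;
        float dy = y - 0.5f * height;
        out[y * width + x] = 1.0f / (1.0f + sqrtf(dx * dx + dy * dy));
    }

    int main() {
        const int W = 1920, H = 1080;
        float *frame;
        cudaMallocManaged(&frame, W * H * sizeof(float));   // memory visible to both CPU and GPU
        dim3 block(16, 16);
        dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
        shade<<<grid, block>>>(frame, W, H);
        cudaDeviceSynchronize();
        printf("centre pixel = %f\n", frame[(H / 2) * W + W / 2]);
        cudaFree(frame);
        return 0;
    }
    [/code]

    A render farm works the same way one level up: each node gets a range of frames instead of a range of pixels, and nothing has to talk to anything else until the results are stitched together.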
  4. Nov 24, 2008 #3
    Define supercomputer. What I'm running at home now (dual core 3 GHz with 4 GB of RAM and a 1/2 TB hard disk) would probably have been considered a supercomputer about 10 years ago. Nowadays we have quad core processors in standard desktops. We have SLI or Crossfire running multiple graphics cards in one rig. We have 64-bit systems addressing tens of gigabytes of RAM and terabyte hard disks. All of this is commercially available to anyone with a good budget.
  5. Nov 24, 2008 #4
    I had no idea. So what's the next thing on the horizon?

    I can't believe Toy Story was made that way! I remember the graphics in that were really good. Another movie, Final Fantasy (1998, I think), had amazing graphics, especially when everyone was running Pentium II boxes, and it had me wondering how they did that. I remember thinking that they probably used supercomputers! Obviously the definition of supercomputing changes (unless there's an industry definition which takes into account the ever-increasing processing speeds available to the market). This is going to sound corny, but say, a computer which can do what no other can? :rolleyes:
  6. Dec 1, 2008 #5
    GPGPUs (General Purpose Graphics Processing Units) such as the NVIDIA Tesla are able to do anything a CPU can do, but better. They're the future of Supercomputers IMO, and eventually consumer-grade computers.
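
    As a minimal illustration of "general purpose" work on a GPU (a hedged sketch of my own, assuming CUDA; nothing here is specific to the Tesla boards), here is a kernel that just adds two arrays of numbers, a task with no graphics content at all:

    [code]
    // Plain arithmetic on the GPU: c[i] = a[i] + b[i] over a million elements.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();
        printf("c[100] = %.0f (expect 300)\n", c[100]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
    [/code]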
    I'm sad to say I've never had a chance to use a supercomputer, but hey, how many people my age (15) have? The thought of using one excites me though. Even the word "Cray" gets me excited. :)

    In addition to GPGPUs, botnets will likely play a large role in supercomputing in the future too.
  7. Dec 6, 2008 #6
  8. Dec 6, 2008 #7


    Science Advisor
    Homework Helper

    No, it really can't.

    First of all, the company reports its performance in terms of single-precision calculations rather than double-precision, exaggerating its performance for bignum math and the like by a factor of > 4. So its floating-point performance isn't nearly as great as claimed. Second, its integer performance is far worse than its floating-point, making it useless for the number theory that I'm trying to crunch.

    For graphics, ray tracing, and similar tasks these general-purpose GPUs look really great. For other stuff... they have a long way to go to catch up with conventional CPUs.
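
    To see the single- vs. double-precision gap yourself, the sketch below (my own illustration, assuming CUDA; the ratio you measure depends entirely on the card and says nothing about any particular vendor's figures) times the same multiply-add loop instantiated for float and for double:

    [code]
    // Time the same arithmetic kernel in single and in double precision.
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void fmaLoop(T *out, int iters) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        T x = (T)i * (T)1e-6;
        for (int k = 0; k < iters; ++k)
            x = x * (T)1.0000001 + (T)0.0000001;   // one dependent multiply-add per pass
        out[i] = x;                                // write the result so the loop isn't optimized away
    }

    template <typename T>
    float timeKernel(int n, int iters) {
        T *out;
        cudaMalloc((void **)&out, n * sizeof(T));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        fmaLoop<T><<<n / 256, 256>>>(out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(out);
        return ms;
    }

    int main() {
        const int n = 1 << 20, iters = 10000;
        printf("float : %6.1f ms\n", timeKernel<float>(n, iters));
        printf("double: %6.1f ms\n", timeKernel<double>(n, iters));
        return 0;
    }
    [/code]

    On consumer cards of that era the double-precision units were far scarcer than the single-precision ones, which is why the two timings can differ by much more than a factor of two.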
  9. Dec 6, 2008 #8


    Gold Member

    Remember the fantastic Vorlon ships and other great graphics in Babylon 5? The graphics were done here in Maine on an array of Amigas equipped with the "Video Toaster".
  10. Dec 12, 2008 #9
    GPU-computing seems to be rapidly becoming a big deal. Two years ago no one I knew ever mentioned it; now I can't seem to avoid hearing about it every day.

    From what I understand, the bottleneck with GPU computing is memory access rather than processing time. Massive parallelization lets you get an order of magnitude or two of speedup over conventional computers, but only on certain tasks. Each GPU processor has a small local memory buffer it can access very quickly, but it can reach main memory only much less often. So depending on the application... it might be extremely useful, or it might not. If you are doing something where you can define a difference between "local" and "global" information, and the global information needs to be accessed much less frequently than the local information, then you should be able to get a dramatic speedup with a GPU.

    For an example of something that GPUs are good at: a guy in my lab was able to use a single GPU and beat our entire cluster in calculating a convolution.
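
    A hedged sketch of both points, in CUDA terms (my own sizes and filter, certainly not the actual code from the lab in question): each block copies its tile of the signal from slow global memory into fast on-chip shared memory once, and every thread then reads its whole neighbourhood from that "local" buffer while computing a 1D convolution.

    [code]
    // Tiled 1D convolution: stage data in shared memory, then reuse it heavily.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define RADIUS 4
    #define TILE   256

    __global__ void conv1d(const float *in, float *out, const float *filt, int n) {
        __shared__ float tile[TILE + 2 * RADIUS];    // fast on-chip "local" buffer for this block
        int gid = blockIdx.x * TILE + threadIdx.x;   // index of this thread's output sample

        // Copy this block's tile, plus a halo of RADIUS samples on each side, from global memory.
        int load = gid - RADIUS;
        tile[threadIdx.x] = (load >= 0 && load < n) ? in[load] : 0.0f;
        if (threadIdx.x < 2 * RADIUS) {
            int load2 = gid + TILE - RADIUS;
            tile[threadIdx.x + TILE] = (load2 >= 0 && load2 < n) ? in[load2] : 0.0f;
        }
        __syncthreads();

        if (gid < n) {
            float acc = 0.0f;
            for (int k = -RADIUS; k <= RADIUS; ++k)   // signal reads now come from shared memory
                acc += filt[k + RADIUS] * tile[threadIdx.x + RADIUS + k];
            out[gid] = acc;
        }
    }

    int main() {
        const int n = 1 << 16;
        float *in, *out, *filt;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        cudaMallocManaged(&filt, (2 * RADIUS + 1) * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = (i % 64 == 0) ? 1.0f : 0.0f;            // sparse spikes
        for (int k = 0; k < 2 * RADIUS + 1; ++k) filt[k] = 1.0f / (2 * RADIUS + 1); // box filter
        conv1d<<<(n + TILE - 1) / TILE, TILE>>>(in, out, filt, n);
        cudaDeviceSynchronize();
        printf("out[64] = %f (expect %f)\n", out[64], 1.0f / (2 * RADIUS + 1));
        cudaFree(in); cudaFree(out); cudaFree(filt);
        return 0;
    }
    [/code]

    The win comes from each input sample being fetched from global memory once instead of (2*RADIUS + 1) times; the same tiling idea carries over to 2D image convolution.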
  11. Apr 13, 2011 #10
    Sounds very cool.

    Does anyone know if it is possible to build a computer with only a GPU and no CPU?

    Yes, it might not be optimal for everything; I am just curious about whether it is possible with current hardware. For example, do any motherboards exist that can take a GPU (or a graphics card) as a CPU?
  12. Apr 13, 2011 #11


    Science Advisor
    Homework Helper

    Seymour Cray had a simple definition: A supercomputer is the fastest machine you can sell for $20m and make a profit. The logic was that back in the 1980s, $20m was about the limit on the funds his customers (mostly the military and national research labs) had available for single projects that needed a lot of computer power.

    The original Cray-1 had an 80 MHz (not GHz!) clock speed, with a maximum possible throughput of about 132 Mflops, and IIRC 64 MB (not GB) of RAM.

    You couldn't even boot most 21st-century bloatware OSes on something that small and slow!

    My first encounter with Unix was logging onto a Cray-2 interactively and then figuring out how to port some software onto it from IBM's OS/360...
  13. Apr 22, 2011 #12
    I don't think so, since they use different sockets. Graphics cards use PCI Express slots (or AGP on older computers), which are completely different from the Socket 1155 used by the latest Intel chips or the AM3 socket used by their AMD counterparts. The GPU itself sits under the card's cooler, so I don't know exactly how it is attached, but I'd guess it is soldered straight onto the board. Sometimes, as with the AMD Radeon 6950 cards, the manufacturer uses 6970 cores on which part of the die has been damaged and therefore deactivated, and sometimes units are deactivated anyway because it makes manufacturing cheaper. That makes it possible to flash a 6950 into a 6970 by installing the 6970 BIOS on the 6950, which pretty much gives a £200 card the power of a £300 card just by changing the software.

    Anyway, back to the thread and my example: they might use sockets to make swapping cores between different GPUs possible. Also, it wouldn't run unless you wrote your own OS, since existing operating systems are written for CPUs, not GPUs. But hey, maybe in a few years the ol' CPU might have caught up with the raw speed of the GPU =D
  14. Apr 14, 2012 #13


    Gold Member

    No, there is no such thing. Some computers can be faster at some types of operations than others, but basic computer theory says that ANYTHING that can be done by one computer can be done by any other computer (just not necessarily as fast, although it could be faster).

    Alan Turing proved this about 75 years ago. Google "Turing machine".
  15. Apr 15, 2012 #14
    Hm. Absolutes never work out (and yes, I realize the irony of that statement). Would this rule apply to quantum computers, for example?
  16. Apr 15, 2012 #15


    Gold Member

    Yes. The point is that the algorithms that any computer runs can be run by any other computer.
  17. Apr 24, 2012 #16
    In supercomputing, the software is where the magic happens.
  18. May 22, 2012 #17
    Care to elaborate?

    I mean, how is supercomputing any different than "regular" computing? Are you saying that software is just as relevant (i.e. highly emphasized) in any computing environment, supercomputing or otherwise?

    On a side-note: software can emulate hardware, correct (e.g. software transform and lighting)? But can hardware emulate software? If so, what are its implications and limitations?
  19. May 23, 2012 #18
    You surely won't be shoving x86 code through any supercomputer.

    The software has to be written for the hardware.

    This is why you used to get many different versions of Windows NT.

    It was written for a few different platforms.

    Most "supercomputing" uses parallel rather than linear processing so the software has to be written for parallel processing.
  20. Jul 30, 2012 #19
    I wanted to address emulation.

    Emulation is a band-aid that reduces performance vs. a rig running its native code.

    It's great for versatility, but speed is not one of its virtues.
  21. Jul 30, 2012 #20
    GPU and GPGPU computing are the current big thing because the technology is maturing, while some are saying reconfigurable computing is the next big thing. That's where the software can physically adapt the hardware to whatever need it has on the fly. For example, with memristors what is memory can become transistors and vice versa allowing a small number of parts to do the job of many. They allow for computing massive recursive functions and other things that would either be impossible or impractical using conventional technology.

    Just to give you some idea of how powerful the technology can be, IBM's goal for their new neuromorphic chip that incorporates memristors is to have the equivalent of a cat or human brain's neurons on a single chip sometime within the next ten years. That's immensely compact functionality, and it appears the experimentalists might soon leave the theorists in the dust, scratching their heads and trying to figure out how best to leverage the technology.