Univ. of Antwerp Creates $4000 GPU Supercomputer

  • Thread starter chaoseverlasting
  • Start date
  • Tags
    Gpu
In summary: the use of GPUs in supercomputing is becoming more prevalent and has the potential to greatly increase processing speed, but it is not suitable for all types of tasks. GPUs are designed to handle highly parallel workloads such as graphics and video processing, while CPUs handle general computing tasks. It is possible in principle to build a computer with only a GPU and no CPU, but it would not be practical for everyday use.
  • #2
People have been making supercomputers by mounting standard desktop PC chips in one cabinet for at least a decade. What is innovative about this one is that it utilizes GPUs, which are specialized and have limited instruction sets and are therefore limited in what they can do, but are very fast at it.

For certain types of tasks, clustering (and this can be done in a network too) works well, but for others it doesn't. The drawback is that the processors aren't really collaborating on the same task; the task is broken up into sub-tasks that they each work on individually. If a problem can't be broken up into pieces, it won't necessarily work well on a cluster. Digital animation, however, is an application well suited to this kind of technology and has been done this way for a long time. Toy Story was made that way in 1995.
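To make the "break it into sub-tasks" idea concrete, here's a minimal CUDA sketch (just a toy, nothing to do with the Antwerp machine; the kernel, image size, and brightness formula are made up for illustration): every thread shades one pixel on its own, with no communication between threads, which is exactly the kind of work that splits cleanly across a cluster or a GPU.

Code:
// Minimal sketch of "embarrassingly parallel" work: every thread shades one
// pixel with no communication between threads. The brightness formula is a
// stand-in for real rendering work.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shade(float* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Each pixel depends only on its own coordinates, so the work
    // splits perfectly across however many processors are available.
    image[y * width + x] = 0.5f * (sinf(0.1f * x) + cosf(0.1f * y));
}

int main()
{
    const int W = 1920, H = 1080;
    float* d_image;
    cudaMalloc(&d_image, W * H * sizeof(float));

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    shade<<<grid, block>>>(d_image, W, H);
    cudaDeviceSynchronize();

    printf("rendered %d independent pixels\n", W * H);
    cudaFree(d_image);
    return 0;
}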
 
  • #3
Define supercomputer. What I'm running at home now (dual core 3 GHz with 4 GB of RAM and a 1/2 TB hard disk) was probably considered a supercomputer about 10 years ago. Nowadays we have quad-core processors in standard desktops. We have SLI or Crossfire running multiple graphics cards in one rig. We have 64-bit systems addressing tens of gigabytes of RAM and terabyte hard disks. All of this is commercially available to anyone with a good budget.
 
  • #4
russ_watters said:
People have been making supercomputers by mounting standard desktop PC chips in one cabinet for at least a decade. What is innovative about this one is that it utilizes GPUs, which are specialized and have limited instruction sets and are therefore limited in what they can do, but are very fast at it.

For certain types of tasks, clustering (and this can be done in a network too) works well, but for others it doesn't. The drawback is that the processors aren't really collaborating on the same task; the task is broken up into sub-tasks that they each work on individually. If a problem can't be broken up into pieces, it won't necessarily work well on a cluster. Digital animation, however, is an application well suited to this kind of technology and has been done this way for a long time. Toy Story was made that way in 1995.

I had no idea. So what's the next thing on the horizon?

I can't believe Toy Story was made that way! I remember the graphics in that were really good. Another movie, Final Fantasy (1998, I think) had amazing graphics, especially when everyone was running Pentium 2 boxes, and it had me wondering how they did that. I remember thinking that they probably used supercomputers! Obviously the definition of supercomputing changes (unless there's an industry definition for it which takes into account the ever-increasing processing speeds available on the market). This is going to sound corny, but say, a computer which can do what no other can? :rolleyes:
 
  • #5
GPGPUs (General Purpose Graphics Processing Units) such as the NVIDIA Tesla are able to do anything a CPU can do, but better. They're the future of supercomputers IMO, and eventually consumer-grade computers.
I'm sad to say I've never had a chance to use a supercomputer, but hey, how many people my age (15) have? The thought of using one excites me though. Even the word "Cray" gets me excited. :)

In addition to GPGPUs, botnets will likely play a large role in supercomputing in the future too.
 
  • #7
Gr!dl0cK said:
GPGPUs (General Purpose Graphics Processing Units) such as the NVIDIA Tesla are able to do anything a CPU can do, but better.

No, it really can't.

First of all, the company reports its performance in terms of single-precision calculations rather than double-precision, exaggerating its performance for bignum math and the like by a factor of > 4. So its floating-point performance isn't nearly as great as claimed. Second, its integer performance is far worse than its floating-point, making it useless for the number theory that I'm trying to crunch.

For graphics, raytracing, and similar tasks these general-purpose GPUs look really great. For other stuff... they have a long way to go to catch up with conventional CPUs.
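To see what the single- vs double-precision gap looks like in practice, here is a rough CUDA micro-benchmark sketch (my own toy code; the kernel, launch sizes, and iteration count are made up for illustration, and it says nothing about any particular vendor's numbers). The same chained fused-multiply-add kernel is compiled for float and for double; on many consumer GPUs the double run takes several times longer.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Same arithmetic, two precisions: a chain of dependent fused multiply-adds
// per thread, so the kernel is compute-bound rather than memory-bound.
template <typename T>
__global__ void fma_chain(int iters, T* out)
{
    T x = (T)0.5;
    const T a = (T)1.000001, b = (T)1e-7;
    for (int i = 0; i < iters; ++i)
        x = a * x + b;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;   // keep the result alive
}

template <typename T>
void bench(const char* label)
{
    const int blocks = 1024, threads = 256, iters = 1 << 16;
    T* out;
    cudaMalloc(&out, blocks * threads * sizeof(T));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    fma_chain<T><<<blocks, threads>>>(iters, out);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("%s: %.2f ms\n", label, ms);
    cudaFree(out);
}

int main()
{
    bench<float>("single precision");
    bench<double>("double precision");
    return 0;
}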
 
  • #8
chaoseverlasting said:
I can't believe Toy Story was made that way!
Remember the fantastic Vorlon ships and other great graphics in Babylon 5? The graphics were done here in Maine on an array of Amigas equipped with Video Toasters.
 
  • #9
GPU computing seems to be rapidly becoming a big deal. Two years ago no one I knew ever mentioned it; now I can't seem to avoid hearing about it every day.

From what I understand, the bottleneck with GPU computing is memory access rather than processing time. Massive parallelization lets you get an order of magnitude or two of speedup over conventional computers, but only on certain tasks. Each GPU processor has a small local memory buffer it can access quickly and often, but it can reach the main memory only much less frequently. So depending on the application... it might be extremely useful, or it might not. If you are doing something where you can draw a line between "local" and "global" information, and the global information needs to be accessed much less often than the local information, then you should be able to get a dramatic speedup with a GPU.

For an example of something that GPUs are good at: a guy in my lab was able to use a single GPU and beat our entire cluster in calculating a convolution.
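Here's a rough CUDA sketch of that local-vs-global idea (toy sizes and a made-up 3-point stencil, not his actual convolution code): each block stages its slice of the input into fast on-chip shared memory once, then does all of its repeated reads from there instead of from the slow global memory.

Code:
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256
#define RADIUS 1   // 3-point stencil

__global__ void stencil1d(const float* in, float* out, int n)
{
    __shared__ float tile[TILE + 2 * RADIUS];   // "local" on-chip buffer

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + RADIUS;

    // One global-memory read per thread to fill the tile (plus halo cells).
    tile[lid] = (gid < n) ? in[gid] : 0.0f;
    if (threadIdx.x < RADIUS) {
        int left  = gid - RADIUS;
        int right = gid + blockDim.x;
        tile[lid - RADIUS]     = (left >= 0) ? in[left]  : 0.0f;
        tile[lid + blockDim.x] = (right < n) ? in[right] : 0.0f;
    }
    __syncthreads();

    // All further reads hit shared memory, not global memory.
    if (gid < n)
        out[gid] = 0.25f * tile[lid - 1] + 0.5f * tile[lid] + 0.25f * tile[lid + 1];
}

int main()
{
    const int n = 1 << 20;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    stencil1d<<<(n + TILE - 1) / TILE, TILE>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}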
 
  • #10
Sounds very cool.

Does anyone know if it is possible to build a computer with only a GPU and no CPU?

Yes, it might not be optimal for everything; I am just curious about whether it is possible with current hardware. For example, do any motherboards exist that can take a GPU (or a graphics card) as a CPU?
 
  • #11
redargon said:
Define supercomputer.
Seymour Cray had a simple definition: A supercomputer is the fastest machine you can sell for $20m and make a profit. The logic was that back in the 1980s, $20m was about the limit on the funds his customers (mostly the military and national research labs) had available for single projects that needed a lot of computer power.

What I'm running at home now (dual core 3 GHz with 4 GB of RAM and a 1/2 TB hard disk) was probably considered a supercomputer about 10 years ago.
The original Cray-1 had an 80 MHz (not GHz!) clock speed, with a maximum possible throughput of about 132 Mflops, and IIRC 64 MB (not GB) of RAM.

You couldn't even boot most 21st-century bloatware OSs on something that small and slow!

My first encounter with Unix was logging onto a Cray-2 interactively and then figuring out how to port some software onto it from IBM's OS/360...
 
  • #12
jjoensuu said:
Sounds very cool.

Does anyone know if it is possible to build a computer with only a GPU and no CPU?

Yes, it might not be optimal for everything; I am just curious about whether it is possible with current hardware. For example, do any motherboards exist that can take a GPU (or a graphics card) as a CPU?

I don't think so, since they use different sockets. Graphics cards plug into PCI Express slots (or AGP on older computers), which are completely different from the Socket 1155 used by the latest Intel chips or the AM3 socket used by their AMD counterparts.

However, the GPU chip itself sits on the graphics card, and I don't know exactly how it is attached there; I'd guess it is usually soldered straight onto the board. That said, AMD Radeon 6950 cards sometimes use 6970 cores in which part of the die has been damaged and therefore deactivated, and sometimes perfectly good units are deactivated anyway because it makes manufacturing cheaper. That makes it possible to flash a 6950 into a 6970 by installing the 6970 BIOS on the 6950, which pretty much gives a £200 card the power of a £300 card just by changing the software.

Anyway, back to the thread and my example: a board could conceivably have sockets to allow swapping cores between different GPUs, but the machine still wouldn't run unless you wrote your own OS, because existing operating systems are written for CPUs, not GPUs. But hey, maybe in a few years the ol' CPU will have caught up with the raw throughput of the GPU =D
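For what it's worth, even when you program a GPU today the CPU stays in charge. Here is a minimal CUDA sketch (a toy example; the square kernel and sizes are invented) showing the division of labour: the host code running on the CPU owns memory allocation, data transfer, and kernel launches, while the GPU only executes the kernel it is handed, which is roughly why a GPU can't simply be dropped into a CPU socket and boot an OS.

Code:
// Rough illustration of the host/device split: the CPU runs main(), owns the
// OS, allocates memory, copies data, and launches kernels; the GPU only ever
// executes the __global__ function it is handed.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square(float* data, int n)        // runs on the GPU
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main()                                        // runs on the CPU
{
    const int n = 8;
    float host[n] = {0, 1, 2, 3, 4, 5, 6, 7};

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));                         // CPU asks the driver for GPU memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    square<<<1, n>>>(dev, n);                                    // CPU launches the GPU kernel
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) printf("%g ", host[i]);
    printf("\n");
    cudaFree(dev);
    return 0;
}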
 
  • #13
chaoseverlasting said:
... say a computer which can do what no other can:rolleyes:?

No, there is no such thing. Some computers can be faster at some types of operations than others, but basic computer theory says that ANYTHING that can be done by one computer can be done by any other computer (just not necessarily as fast, although it could be faster).

Alan Turing proved this about 75 years ago. Google "Turing machine".
 
  • #14
phinds said:
No, there is no such thing. Some computers can be faster at some types of operations than others, but basic computer theory says that ANYTHING that can be done by one computer can be done by any other computer (just not necessarily as fast, although it could be faster).

Hm. Absolutes never work out (and yes, I realize the irony of that statement). Would this rule apply to quantum computers, for example?
 
  • #15
Hobin said:
Hm. Absolutes never work out (and yes, I realize the irony of that statement). Would this rule apply to quantum computers, for example?

Yes. The point is that the algorithms that any computer runs can be run by any other computer.
 
  • #16
In supercomputing, the software is where the magic happens.
 
  • #17
HowlerMonkey said:
In supercomputing, the software is where the magic happens.
Care to elaborate?

I mean, how is supercomputing any different than "regular" computing? Are you saying that software is just as relevant (i.e. highly emphasized) in any computing environment, supercomputing or otherwise?

On a side-note: software can emulate hardware, correct (e.g. software transform and lighting)? But can hardware emulate software? If so, what are its implications and limitations?
 
  • #18
You surely won't be shoving x86 code through any supercomputer.

The software has to be written for the hardware.

This is why you used to get many different versions of Windows NT.

It was written for a few different platforms.

Most "supercomputing" uses parallel rather than linear processing so the software has to be written for parallel processing.
 
  • #19
I wanted to address emulation.

Emulation is a band-aid that reduces performance vs. a rig running its native code.

It's great for versatility, but speed is not one of its virtues.
 
  • #20
chaoseverlasting said:
I had no idea. So what's the next thing on the horizon?

I can't believe Toy Story was made that way! I remember the graphics in that were really good. Another movie, Final Fantasy (1998, I think) had amazing graphics, especially when everyone was running Pentium 2 boxes, and it had me wondering how they did that. I remember thinking that they probably used supercomputers! Obviously the definition of supercomputing changes (unless there's an industry definition for it which takes into account the ever-increasing processing speeds available on the market). This is going to sound corny, but say, a computer which can do what no other can? :rolleyes:

GPU and GPGPU computing are the current big thing because the technology is maturing, while some are saying reconfigurable computing is the next big thing. That's where the software can physically adapt the hardware, on the fly, to whatever it needs. For example, with memristors, what is memory can become transistors and vice versa, allowing a small number of parts to do the job of many. They allow for computing massive recursive functions and other things that would be either impossible or impractical using conventional technology.

Just to give you some idea of how powerful the technology could be, IBM's goal for their new neuromorphic chip that incorporates memristors is to have the equivalent of a cat or human brain's neurons on a single chip sometime within the next ten years. That's immensely compact functionality, and it appears the experimentalists might soon leave the theorists in the dust, scratching their heads and trying to figure out how best to leverage the technology.
 
  • #21
HowlerMonkey said:
In supercomputing, the software is where the magic happens.

Bingo.

The algorithms, and the development of them for particular architectures like GPUs and GPGPUs, are the more important aspect of computing, not the hardware per se.

If you don't think this is an issue, ask a theoretical computer scientist what it would mean if many NP-hard problems were moved into a lower complexity class, and whether that would be preferred over a single 10x increase in computing throughput. The answer won't be in the least bit surprising.
 
  • #22
chiro said:
Bingo.

The algorithms, and the development of them for particular architectures like GPUs and GPGPUs, are the more important aspect of computing, not the hardware per se.

If you don't think this is an issue, ask a theoretical computer scientist what it would mean if many NP-hard problems were moved into a lower complexity class, and whether that would be preferred over a single 10x increase in computing throughput. The answer won't be in the least bit surprising.

IBM's neuromorphic chip isn't programmed. It's an adaptive system that learns from experience.
 
  • #23
wuliheron said:
IBM's neuromorphic chip isn't programmed. It's an adaptive system that learns from experience.

So what's the point in reference to the response on algorithms? Do you agree/disagree or have any specific comments? I don't know what you are trying to get at.
 
  • #24
This neuromorphic stuff from IBM seems very similar to the Ni1000 (Nestor/Intel) neural network hardware of 1994.
 
  • #25
HowlerMonkey said:
This neuromorphic stuff from IBM seems very similar to the Ni1000 (Nestor/Intel) neural network hardware of 1994.

People have been imitating neurons in software and circuitry for decades; however, the Ni1000 only had about 3 million transistors, while memristors are in the 5 nm range, allowing for something like 100 GB per square centimeter, and even more if they go three-dimensional, which is entirely possible. There's just no comparison. In addition, IBM used the latest brain-scanning technology to study how the neurons in a cat's frontal lobes are networked. It's a step into wonderland where all our theories start to break down and we don't even have existing supercomputers that can crunch the numbers.

The "Super Turing Model" is also being pursued at the same time so we will hopefully see some convergence between the experimental and theoretical approaches some time in the next decade or so.
 
  • #26
It's called progress.

It has been 19 years since the Ni1000.
 
  • #27
HowlerMonkey said:
It's called progress.

It has been 19 years since the Ni1000.

Past a certain point progress attains the status of unexplored territory.
 

1. What is a GPU supercomputer?

A GPU supercomputer is a high-performance computing system that uses graphics processing units (GPUs) as its primary computational engine. GPUs are designed to handle large amounts of data in parallel, making them well-suited for complex calculations and simulations.

2. How does the University of Antwerp's GPU supercomputer differ from traditional supercomputers?

The University of Antwerp's GPU supercomputer differs from traditional supercomputers in that it uses GPUs as its main processing units instead of CPUs. Because a GPU can execute many arithmetic operations in parallel, this allows for faster and more efficient computation on suitable workloads. Additionally, the university's supercomputer is significantly more affordable, making it accessible to a wider range of researchers and students.

3. What are the potential applications of this GPU supercomputer?

This GPU supercomputer has a wide range of potential applications, including scientific research, data analysis, machine learning, and artificial intelligence. It can also be used for simulations in fields such as physics, chemistry, and engineering. The supercomputer's high processing power and affordability make it a valuable tool for a variety of research and educational purposes.

4. How does the creation of this GPU supercomputer contribute to the field of high-performance computing?

The creation of this GPU supercomputer by the University of Antwerp is a significant advancement in the field of high-performance computing. It demonstrates the potential for using GPUs as the primary computational engine in supercomputers, which can lead to faster and more efficient computing. Additionally, the affordability of this supercomputer makes it more accessible to researchers and students, potentially driving further innovation and advancements in the field.

5. What does the $4000 price tag include for the University of Antwerp's GPU supercomputer?

The $4000 price tag for the University of Antwerp's GPU supercomputer includes the GPU processors, the necessary hardware and software, and the assembly and installation of the supercomputer. The university has also made the design and specifications of the supercomputer open source, allowing others to replicate and potentially improve upon the design.
