What Are The Functions Of A GPU?


Discussion Overview

The discussion revolves around the functions and operations of a GPU, exploring its technical capabilities, applications in graphics and AI, and the underlying mathematical operations involved. Participants seek a deeper understanding beyond general statements about GPUs speeding up graphics.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants describe GPUs as capable of performing vector and matrix operations, emphasizing their parallel processing abilities.
  • One participant highlights the importance of multiply-accumulate (MAC) operations and matrix inversion in solving linear algebra equations.
  • Another participant mentions that GPUs are used for graphics operations like rotation, scaling, and translation, and also for AI applications such as projections and change of basis.
  • Historical context is provided regarding the role of GPUs in early AI developments, with references to significant advancements in neural networks.
  • Some participants discuss programming frameworks such as CUDA and OpenCL for using GPUs in general calculations, noting the performance benefits over CPUs.
  • There are mentions of the evolution of graphics APIs, with some participants questioning the relevance of OpenGL compared to newer technologies like Vulkan.

Areas of Agreement / Disagreement

Participants express various viewpoints on the functions of GPUs, with no clear consensus on specific operations or the best frameworks for GPU programming. Disagreements arise regarding the relevance of certain technologies and the historical context of GPU use in AI.

Contextual Notes

Some discussions include assumptions about the definitions of terms like "vector" and the conditions under which GPUs outperform CPUs. Limitations in the discussion include unresolved details about specific operations and the scope of GPU applications.

Who May Find This Useful

This discussion may be useful for individuals interested in the technical aspects of GPU operations, applications in graphics and AI, and those exploring programming frameworks for parallel processing.

Hornbein
What are the operations of a GPU? I tried to look it up online, and all I could find was "they speed up graphics" and long lists of brand names. They do "vector operations." I want a more technical source of information. Is that linear algebra? If so, which operations?
 
It comes down to the two different meanings of the term "vector", which in computing can refer either to a 1D array of many elements, or to a mathematical vector (an arrow or phasor, representable by a single complex number).

A GPU is used to speed up graphics, where rows, columns or blocks of pixels need to be processed in parallel, to get a high throughput.

A "vector processor" performs the same arithmetic operation on every element of a vector (a 1D array). A GPU can do that on thousands of elements at the same time, in parallel.

A GPU can also do "vector rotations" using complex numbers, if that is what you require, again on thousands of array elements in parallel.
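Both senses of "vector" above can be sketched in NumPy. This is a CPU illustration of the operations a GPU would apply across thousands of lanes at once; the array contents are arbitrary examples:

```python
import numpy as np

# Sense 1: a 1D array -- the same arithmetic operation applied to every element.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])
elementwise_sum = a + b  # one add per element; a GPU does these in parallel

# Sense 2: geometric vectors as complex numbers, rotated by multiplying
# each one with the unit phasor exp(i*theta) -- again one multiply per element.
points = np.array([1 + 0j, 0 + 1j, 1 + 1j])
theta = np.pi / 2
rotated = points * np.exp(1j * theta)  # rotate every point by 90 degrees
```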
 
I'm no expert, but it's basically vector/matrix operations. MAC (multiply-accumulate) and inversion are the common benchmark operations; these are crucial to solving linear-algebra equations. That covers things like coordinate transformations, rotations, scaling, etc. The emphasis is on efficient operations on large data sets. If I had to pick one thing to highlight, it would be multiplying two large matrices efficiently.
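The multiply-accumulate pattern behind matrix multiplication can be written out explicitly. This is a plain-Python sketch for illustration only; on a GPU, the inner MAC loop for each output element typically runs on its own thread:

```python
import numpy as np

def matmul_mac(A, B):
    """Matrix product written as explicit multiply-accumulate (MAC) loops."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):        # on a GPU, the (i, j) loops disappear:
        for j in range(m):    # one thread computes each output element
            acc = 0.0
            for p in range(k):
                acc += A[i, p] * B[p, j]  # the MAC: multiply, then accumulate
            C[i, j] = acc
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = matmul_mac(A, B)
```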
 
Multiplying matrices. So for graphics it's rotation and scaling. Translation would be matrix addition. I presume that the matrices are of the same dimension.

For AI I suppose projections are important, and maybe change of basis. I'm interested because AI took off when they figured out how to use GPUs to train their models. I read that this happened gradually, so no one person gets credit. GPUs are also used for mining Bitcoin.
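The graphics transforms mentioned in the posts above can be sketched on a small set of 2D points. Following the description here, rotation and scaling are matrix multiplications and translation is an addition; the point coordinates are arbitrary examples:

```python
import numpy as np

points = np.array([[1.0, 0.0],
                   [0.0, 1.0]])   # one 2D point per row

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation by theta
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 3.0])                          # scale x by 2, y by 3

rotated    = points @ R.T                        # rotate every point 90 degrees
scaled     = points @ S                          # scale every point
translated = points + np.array([10.0, -5.0])     # translation is an addition
```

On a GPU, the same small matrix is applied to millions of vertices at once, which is exactly the data-parallel pattern discussed in this thread.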
 
I read somewhere that Michael Bay was the first person to fry a GPU on a supercomputer during a CGI render. It would probably have burned out anyway, though. But it's typical of Michael Bay and the amount of CGI he uses in his films.
 
Hornbein said:
What are the operations of a GPU?
It may also be instructive for you (and others) to look into how one would use a GPU to speed up calculations, for example by writing a so-called kernel using CUDA, OpenCL, OpenGL, or a similar framework. An example on the OpenCL wiki page shows how a simple matrix multiplication can be done. Another example, the original purpose of GPUs, is speeding up the rendering of on-screen computer graphics, e.g. rendering a 3D scene in real time.

Typically the GPU is set up by (application) code to repeat the same set of processing pipelines on data that changes. For example, if a pipeline can process an image in some way (e.g. contrast enhancement), then the same pipeline can be used to process a sequence of images (i.e. video), possibly in real time.

And in case you didn't notice: the reason all this is done is to get a massive performance boost compared to letting the CPU carry out the calculations, even when that CPU has a lot of cores and there is overhead associated with setting up the GPU and transferring data to it. So as long as you need to process a lot of data in parallel, the GPU usually wins performance-wise.
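The pipeline idea above can be sketched as follows. This is plain Python/NumPy standing in for a GPU kernel; `contrast_enhance` is a made-up example operation, and on a real GPU each pixel would be handled by its own thread after the frames were transferred to device memory:

```python
import numpy as np

def contrast_enhance(frame):
    """Kernel-style operation: the same arithmetic applied to every pixel."""
    mean = frame.mean()
    # Stretch each pixel away from the mean, clamped to the valid range.
    return np.clip(mean + 1.5 * (frame - mean), 0.0, 255.0)

# A "video": a sequence of 30 frames, each an 8x8 grayscale image.
rng = np.random.default_rng(0)
video = rng.uniform(0.0, 255.0, size=(30, 8, 8))

# Set the pipeline up once, then run it over a stream of changing data --
# reusing it like this is what amortizes the cost of setting up the GPU
# and transferring the data.
processed = [contrast_enhance(frame) for frame in video]
```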
 
Isn't OpenGL technically obsolete by now?

EDIT: Nah, sorry. Development has slowed and it is being superseded by Vulkan.

EDIT2: As they say in biology: functionally extinct.
 
Also, as someone who's tried both OpenGL and DirectX, I can't say I'm sorry.
 
Filip Larsen said:
It may also be instructive for you (and others) to look into how one would use a GPU to speed up calculations ...

Yes. Obviously a supercomputer is basically just a ton of GPUs working together. In parallel or serially, I don't know; I'd suspect the former.
 
