Exploring the Power of Parallel Computing with CUDA Programming and GPUs

In summary, the conversation discusses using CUDA on NVIDIA GPUs to write highly performant code. The original poster shares their experience writing parallel code with CUDA, including the use of arrays in host and device memory. The thread also covers the benefits of GPUs for parallel processing, the option of running multiple GPUs for even more number-crunching capability, the cost of GPUs, and the importance of organizing algorithms to take advantage of parallel computing.
  • #1
Not a question - I just got started at writing CUDA code to run on an NVIDIA card I bought a couple of months ago. I've been interested in highly performant code for a long time and have spent some time in the last few years tinkering with the technologies featured in Intel and AMD CPUs. These include code that directly accesses the floating point unit (FPU) and advances of the last 10 or so years, such as MMX, SIMD, AVX, and a whole alphabet soup of other technologies.

The CUDA stuff offered by NVIDIA is pretty cool, making it possible to write some really fast code via massive parallelism.

Here's a very simple example to give you a little of the flavor of what's going on. There's a lot that's not shown, but there's enough here to show the power that's available.

The first snippet is "host" code, code that is mostly plain old C or C++ that will run on the CPU. Here we have two 50,000 element arrays being initialized. The "h" prefixes are reminders that these are arrays in the host memory.

C:
// Host code
int numElements = 50000;
for (int i = 0; i < numElements; ++i)
{
    h_A[i] = .0625 * i;          // h_A and h_B are host arrays allocated earlier (not shown)
    h_B[i] = h_A[i] + .0625;
}
...
vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);
...
The part that isn't plain old C code is the line just above, which includes a CUDA extension to C that calls "device" code, code that runs on the graphics processing unit (GPU). A part that I omitted was the business of copying the two arrays on the host to corresponding arrays on the device. These latter arrays have "d" prefixes.
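
For anyone curious, here's roughly what that omitted copying step looks like. This is just a sketch (error checking left out, and the size variable is my shorthand), but cudaMalloc and cudaMemcpy are the actual CUDA runtime calls involved:

C:
// Host code (sketch of the part omitted above)
size_t size = numElements * sizeof(float);

// Allocate the device arrays
float *d_A = NULL, *d_B = NULL, *d_C = NULL;
cudaMalloc((void **)&d_A, size);
cudaMalloc((void **)&d_B, size);
cudaMalloc((void **)&d_C, size);

// Copy the host arrays to the corresponding device arrays
cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);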

Below is the code for the function that adds two vectors. A close look suggests that it is adding a component of one array to the corresponding component of another array, and storing this value in the component with the same index of a third array. In short, this code is just adding two numbers together, and storing their sum in another number. I thought we were adding two 50,000 element arrays. What gives here?
C:
// Device code
/* Vector addition: C = A + B.
 * This sample implements element-by-element vector addition.
 */
__global__ void vectorAdd(const float *A, const float *B, float *C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}
What's happening under the hood is that the CUDA library code is launching lots (hundreds or thousands) of threads, all running concurrently. So instead of iterating through the two arrays sequentially, the additions are done in parallel, in very little time, an order of magnitude faster than even the latest CPUs can do this work.
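
In case you're wondering where blocksPerGrid and threadsPerBlock come from, here's a sketch of a typical launch configuration. The 256 is just a common choice, nothing magic, and the rounding-up is also why the kernel needs the if (i < N) guard: the last block usually has a few extra threads past the end of the arrays.

C:
// Host code: choose a launch configuration with at least one thread per element
int threadsPerBlock = 256;
int blocksPerGrid = (numElements + threadsPerBlock - 1) / threadsPerBlock;  // round up

vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);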

Anyway, I'm pretty excited about this and thought I'd share what I've learned so far (which isn't very much - I'm still pretty much a n00b).
 
  • #2
Programmers looking to enhance floating point performance without spending a lot of money on fancy CPUs are using the parallel processing features of GPUs to boost throughput. It's not unheard of in custom rigs to have multiple GPUs running for even more number crunching capability. nVidia competitor ATI announced similar plans several years ago for using its GPU in non-graphics number crunching.

http://xcorr.net/2013/12/25/gpu-computing-which-card-should-you-get/
 
  • #3
Look into parallel processing more, too. Some problems, like fractal generation, benefit from parallel computation, while other, more interconnected problems don't, like numerically solving a system of partial differential equations, where the value at a given point depends on the values at nearby points. That means the thread computing one point must wait for, or coordinate with, the threads computing the neighboring points.
 
  • #4
SteamKing said:
Programmers looking to enhance floating point performance without spending a lot of money on fancy CPUs are using the parallel processing features of GPUs to boost throughput. It's not unheard of in custom rigs to have multiple GPUs running for even more number crunching capability. nVidia competitor ATI announced similar plans several years ago for using its GPU in non-graphics number crunching.

http://xcorr.net/2013/12/25/gpu-computing-which-card-should-you-get/
My nVidia card (GTX 750) was pretty reasonable at around $100. It has 4 multiprocessors with 128 CUDA cores in each one. Some of the higher-end nVidia cards (like the GTX 980 in one of the links in the page you linked to above) have over 2000 CUDA cores, but sell for between $550 and $600, and I guess people who are really into it can drop $4000 on one of these cards (e.g., Tesla K40). I picked a card that had close to the highest "compute capability" (5.0) at a reasonable cost.

A lot of people get these kinds of cards for the computer games they support, but since Solitaire suits me just fine, I don't care at all about the game performance.
 
  • #5
jedishrfu said:
Look into parallel processing more, too. Some problems, like fractal generation, benefit from parallel computation, while other, more interconnected problems don't, like numerically solving a system of partial differential equations, where the value at a given point depends on the values at nearby points. That means the thread computing one point must wait for, or coordinate with, the threads computing the neighboring points.
Right. There's a real trick to organizing your algorithm to take advantage of parallel computing. Some of the samples that are provided in the CUDA toolkit deal with synchronizing threads in situations where one thread has to wait on another. In the sample code I have in my post, the additions are completely independent of one another, so they can be done in parallel.
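
As a toy illustration of that coordination (my own sketch, not one of the toolkit samples, and neighborSum is just a made-up name), here's a kernel where each output depends on a neighboring input, so the threads in a block have to call __syncthreads() after loading shared memory and before reading it:

C:
// Device code: each output depends on a neighboring input, so the threads
// in a block must synchronize before reading shared memory.
__global__ void neighborSum(const float *in, float *out, int N)
{
    __shared__ float tile[256];   // assumes the kernel is launched with <= 256 threads per block
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i < N)
        tile[threadIdx.x] = in[i];

    __syncthreads();              // every thread waits here until the whole tile is loaded

    if (i < N)
    {
        // Add our own value and our right-hand neighbor's, if the neighbor is in this tile
        float right = (threadIdx.x + 1 < blockDim.x && i + 1 < N) ? tile[threadIdx.x + 1] : 0.0f;
        out[i] = tile[threadIdx.x] + right;
    }
}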
 

1. What is CUDA programming and how does it differ from traditional programming?

CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA. It allows developers to use the power of a GPU (Graphics Processing Unit) for general purpose computing tasks, rather than just graphics rendering. This is different from traditional programming, which typically uses the CPU (Central Processing Unit) for all tasks.

2. What are the benefits of using a GPU for programming?

GPUs have a highly parallel architecture, meaning they can perform many calculations simultaneously. This makes them well-suited for tasks that require a lot of data processing, such as scientific simulations, image and video processing, and machine learning. Using a GPU can significantly speed up these types of tasks compared to using a CPU alone.

3. What are the basic components of a CUDA program?

A CUDA program consists of host code, which runs on the CPU, and a kernel function, which runs on the GPU. The host code typically handles tasks such as data input and output, while the kernel function performs the actual computations. The program also includes code for memory management and error handling.
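
As a rough illustration of the error handling part, CUDA runtime calls return a cudaError_t status rather than throwing exceptions, so host code typically checks each call. The sketch below reuses the vectorAdd names from the thread above and assumes the earlier allocation snippet:

C:
// Host code (continuing the vectorAdd example): every runtime call returns a status
cudaError_t err = cudaMalloc((void **)&d_A, size);
if (err != cudaSuccess)
{
    fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
    exit(EXIT_FAILURE);
}

// A kernel launch doesn't return a status directly; check for launch errors afterwards
vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, numElements);
err = cudaGetLastError();
if (err != cudaSuccess)
{
    fprintf(stderr, "kernel launch failed: %s\n", cudaGetErrorString(err));
    exit(EXIT_FAILURE);
}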

4. How does data transfer between the CPU and GPU work in CUDA programming?

Data transfer between the CPU and GPU in CUDA programming is done through memory allocation and copying functions. The host code allocates memory on the GPU and then copies data from the CPU to the GPU. The kernel function then operates on the data on the GPU, and the results are copied back to the CPU when the computation is finished.
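
For example, continuing the vectorAdd sketch from the thread, the return trip and cleanup might look like this (h_C is a hypothetical host-side result array mirroring h_A and h_B; error checks omitted):

C:
// Host code: after the kernel finishes, copy the result back and release device memory
cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);   // also waits for the kernel to finish

cudaFree(d_A);
cudaFree(d_B);
cudaFree(d_C);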

5. What are some common applications of CUDA programming?

CUDA programming is commonly used in various fields such as scientific computing, machine learning, and video and image processing. It is also used in industries such as finance, oil and gas, and healthcare for tasks such as data analysis, simulations, and image reconstruction. Additionally, CUDA is used for developing video games and virtual reality applications that require advanced graphics processing.
