It may also be instructive for you (and others) to look into
how one would use a GPU to speed up calculations, for example by writing a so-called kernel using
CUDA, OpenCL, OpenGL or a similar framework. An example on the OpenCL wiki page shows how a simple matrix multiplication can be done. Another example, which was the original purpose of GPUs, is to speed up the rendering of on-screen computer graphics, e.g. rendering a 3D scene in real time.
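Just as a rough illustration (this is not the code from the linked wiki page), a minimal CUDA sketch of such a matrix-multiplication kernel could look like the following, where each GPU thread computes one element of the result. The matrix size, the all-ones test data and the function names are placeholders I made up for the example:

    // Minimal sketch: each thread computes one element of C = A * B
    // for square N x N matrices stored in row-major order.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void matMul(const float* A, const float* B, float* C, int N)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < N && col < N) {
            float sum = 0.0f;
            for (int k = 0; k < N; ++k)
                sum += A[row * N + k] * B[k * N + col];
            C[row * N + col] = sum;
        }
    }

    int main()
    {
        const int N = 512;
        size_t bytes = N * N * sizeof(float);

        // Allocate and fill host matrices (all ones, so every element of C should be N).
        float *hA = (float*)malloc(bytes), *hB = (float*)malloc(bytes), *hC = (float*)malloc(bytes);
        for (int i = 0; i < N * N; ++i) { hA[i] = 1.0f; hB[i] = 1.0f; }

        // Allocate device memory and copy the inputs to the GPU.
        float *dA, *dB, *dC;
        cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per output element.
        dim3 block(16, 16);
        dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
        matMul<<<grid, block>>>(dA, dB, dC, N);

        // Copy the result back and spot-check one element.
        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
        printf("C[0] = %f (expected %d)\n", hC[0], N);

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(hA); free(hB); free(hC);
        return 0;
    }

A real implementation would check the CUDA API return values and use shared-memory tiling (or just call cuBLAS), but the sketch shows the basic pattern: copy data in, launch many threads, copy the result back.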
Typically the GPU is set up by (application) code to repeat the same processing pipeline on data that changes. For example, if a pipeline can process an image in some way (e.g. contrast enhancement), then that same pipeline can be used to process a sequence of images (i.e. video), possibly in real time, as sketched below.
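Here is a hedged CUDA sketch of that idea: a hypothetical contrastStretch kernel (the pipeline stage, set up once) and a processFrame helper that re-runs the very same kernel on every incoming frame. The gain value and the function names are invented for the illustration:

    // Hypothetical pipeline stage: a simple contrast stretch applied frame after frame.
    #include <cuda_runtime.h>

    __global__ void contrastStretch(unsigned char* pixels, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // Stretch around the mid-grey value of 128 and clamp to [0, 255].
            float v = (pixels[i] - 128.0f) * gain + 128.0f;
            pixels[i] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);
        }
    }

    // Called once per video frame: copy the frame in, run the same kernel, copy it back.
    void processFrame(unsigned char* hostFrame, unsigned char* deviceFrame, int n)
    {
        cudaMemcpy(deviceFrame, hostFrame, n, cudaMemcpyHostToDevice);
        int block = 256;
        int grid = (n + block - 1) / block;
        contrastStretch<<<grid, block>>>(deviceFrame, n, 1.5f);
        cudaMemcpy(hostFrame, deviceFrame, n, cudaMemcpyDeviceToHost);
    }

The point is that the kernel is compiled and configured once, and only the frame data changes from call to call; in a real video pipeline you would also keep the frames on the GPU between stages to avoid the copies.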
And just in case you didn't notice: the reason all this is done is to get a massive performance boost compared to letting the CPU carry out the calculations, even when that CPU has many cores and there is overhead associated with setting up the GPU and transferring data to it. So as long as you need to process a lot of data in parallel, the GPU usually wins performance-wise.