
Learning parallel programming

  1. Feb 12, 2010 #1
    Hello folks. I just finished my PhD in theoretical chemistry, and my work thus far hasn't involved any parallel programming. In the future I'm sure I'll be expected to know and use parallel methods, so I'm trying to learn them now. The books I'm reading begin by describing the different cluster architectures and hardware details, then cover multi-threading and so on, before ever getting to actually writing parallel code.

    For those who use parallel programming often, how many of these details do I need to know? My impression of the people I work with is that they just sort of use parallel computing by including a few extra lines in their code and knowing that this will make the calculation faster.

    Am I wasting my time learning it from a "computer scientist's" point of view, or is this the only way to go?

    Thanks for any advice.
     
  3. Feb 12, 2010 #2

    rcgldr

    Homework Helper

    Link to a zip of an example multi-threaded C++ DOS console program that copies a file. Mutexes, semaphores, WaitForMultipleObjects(), and linked-list FIFO "messaging" between threads are demonstrated.

    http://jeffareid.net/misc/mtcopy.zip

    Support for MMX and SSE instructions in Visual Studio:
    http://msdn.microsoft.com/en-us/library/y0dh78ez.aspx

    There are also math library packages for multi-core CPUs and GPUs.

    Intel:
    http://software.intel.com/en-us/intel-mkl

    AMD:
    http://developer.amd.com/cpu/Libraries/acml/Pages/default.aspx [Broken]

    ATI GPU:
    http://developer.amd.com/gpu/acmlgpu/pages/default.aspx [Broken]

    Nvidia GPU (CULA):
    http://www.culatools.com
     
    Last edited by a moderator: May 4, 2017
  4. Feb 12, 2010 #3
    You should have a general understanding of different architectures. The sole purpose of parallel programming is to get stuff done faster than it would be in a single thread, and different architectures require different approaches. It's one thing to code for a Core 2 Quad (four processors, lots of fast shared and local memory). It's different to code for a GPU (two hundred processors, very limited local memory, serialized access to shared memory). It's something else entirely to code for a cluster of computers connected to each other with Ethernet cables.

    A good starting point would be to understand basic methods of threading and synchronization on a multicore machine, using pthreads (Unix) or the Windows threading APIs.
     