How Much Architecture Knowledge Is Needed for Parallel Programming?

AI Thread Summary
Understanding parallel programming is essential for improving computational efficiency, especially in fields like theoretical chemistry. While some practitioners may only need to add a few lines of code to take advantage of parallel computing, a deeper grasp of cluster architectures and hardware can significantly improve coding practices and outcomes. Familiarity with different architectures, such as multi-core CPUs and GPUs, is crucial, since each requires a distinct programming approach. Basic threading and synchronization methods, such as pthreads on Unix or the Windows threading APIs, are a natural starting point. Learning these concepts not only prepares you for future expectations in the field but also enables more effective use of parallel computing resources.
Einstein Mcfly
Hello folks. I just finished my PhD in theoretical chemistry, and my work thus far hasn't involved any parallel programming. In the future I'm sure I'll be expected to know and use parallel methods, so I'm trying to learn them now. The books I'm reading begin by describing the different cluster architectures and hardware details, then multi-threading and so on, before ever getting to actually writing parallel code.

For those who use parallel programming often, how much of these details do I need to know? My impression of the people I work with is that they just sort of use parallel computing by including a few extra lines in their code and knowing that this will make the calculation run faster.

Am I wasting my time learning it from a "computer scientist's" point of view, or is this the only way to go?

Thanks for any advice.
 
Link to a zip of an example multi-threaded C++ DOS console program that copies a file. Mutexes, semaphores, WaitForMultipleObjects(), and linked-list FIFO "messaging" between threads are demonstrated.

http://jeffareid.net/misc/mtcopy.zip
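For a rough idea of the FIFO-messaging pattern that example demonstrates, here is a minimal sketch (not the code in the zip, which uses the Win32 primitives named above): one thread queues blocks of data behind a mutex and signals a second thread, which dequeues and "writes" them. All names and the block contents are illustrative.

// Minimal producer/consumer sketch: one thread "reads" blocks and queues them,
// another thread "writes" them out. Illustrative names only.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> fifo;   // FIFO "messages" between the two threads
std::mutex m;                   // protects the queue
std::condition_variable cv;     // signals the consumer that data is ready
bool done = false;              // set by the producer when it is finished

void producer() {
    for (int i = 0; i < 5; ++i) {
        std::string block = "block " + std::to_string(i);
        {
            std::lock_guard<std::mutex> lock(m);
            fifo.push(block);
        }
        cv.notify_one();        // wake the consumer
    }
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
}

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    while (true) {
        cv.wait(lock, [] { return !fifo.empty() || done; });
        while (!fifo.empty()) {
            std::cout << "writing " << fifo.front() << "\n";
            fifo.pop();
        }
        if (done) break;
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}

In a Win32 version, an event or semaphore together with WaitForMultipleObjects() plays the role that the condition variable plays here.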

Support for MMX and SSE instructions in Visual Studio:
http://msdn.microsoft.com/en-us/library/y0dh78ez.aspx

There are also math library packages for multi-core CPUs and GPUs (a small example of calling one through the standard BLAS interface follows the links below):

Intel:
http://software.intel.com/en-us/intel-mkl

AMD:
http://developer.amd.com/cpu/Libraries/acml/Pages/default.aspx

ATI GPU:
http://developer.amd.com/gpu/acmlgpu/pages/default.aspx

Nvidia GPU (CULA):
http://www.culatools.com
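These packages expose the standard BLAS/LAPACK routines, so the calling code looks much the same regardless of vendor and the library handles the threading internally. As a rough sketch (header names and link flags vary by package; this assumes a CBLAS header such as the one MKL ships), a small matrix multiply looks like:

// Sketch: 2x2 matrix multiply C = A*B through the standard CBLAS interface.
// Header name and link flags depend on the vendor (e.g. mkl.h and -lmkl for MKL).
#include <cblas.h>
#include <iostream>

int main() {
    const int n = 2;
    double A[] = {1.0, 2.0,
                  3.0, 4.0};
    double B[] = {5.0, 6.0,
                  7.0, 8.0};
    double C[] = {0.0, 0.0,
                  0.0, 0.0};

    // C = 1.0 * A * B + 0.0 * C, all matrices stored row-major
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    std::cout << C[0] << " " << C[1] << "\n"
              << C[2] << " " << C[3] << "\n";   // expect 19 22 / 43 50
}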
 
You should have a general understanding of different architectures. The sole purpose of parallel programming is to get stuff done faster than it would get done in a single thread, and different architectures require different approaches. It's one thing to code for a Core 2 Quad (four processors, lots of fast shared and local memory). It's different to code for a GPU (two hundred processors, very limited local memory, serialized access to shared memory). It's something else entirely to code for a cluster of computers connected to each other with Ethernet cables.

A good starting point would be to understand basic methods of threading and synchronization on a multicore machine, using pthread (Unix) or Windows threading APIs.
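As a minimal sketch of that (assuming a POSIX system with pthreads; the array, the two-way split, and all names are just for illustration), the following divides a sum over an array between two worker threads and joins them:

// Minimal pthread sketch: two threads each sum half of an array, the main
// thread joins them and combines the partial results. POSIX only; build
// with something like: g++ sum.cpp -pthread
#include <pthread.h>
#include <iostream>
#include <vector>

struct Task {
    const double* data;   // start of this thread's chunk
    long count;           // number of elements in the chunk
    double partial;       // result written by the worker
};

void* sum_chunk(void* arg) {
    Task* t = static_cast<Task*>(arg);
    t->partial = 0.0;
    for (long i = 0; i < t->count; ++i)
        t->partial += t->data[i];
    return nullptr;
}

int main() {
    std::vector<double> x(1000000, 1.0);
    long half = static_cast<long>(x.size()) / 2;

    Task tasks[2] = {
        { x.data(),        half,                                0.0 },
        { x.data() + half, static_cast<long>(x.size()) - half,  0.0 }
    };

    pthread_t threads[2];
    for (int i = 0; i < 2; ++i)
        pthread_create(&threads[i], nullptr, sum_chunk, &tasks[i]);
    for (int i = 0; i < 2; ++i)
        pthread_join(threads[i], nullptr);   // wait for both workers

    std::cout << "sum = " << tasks[0].partial + tasks[1].partial << "\n";
}

Because each thread writes only to its own Task, no locking is needed here; once you have threads updating shared data, you add a mutex (or the Windows equivalent) around those updates.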
 