Start Parallel Processing: Learn with 16 Computers

SUMMARY

The discussion focuses on getting started with parallel processing using a network of 16 computers. A notable tool mentioned is DMAKE, a distributed make utility designed for managing large source projects by coordinating multiple compile servers. DMAKE schedules the compilation of individual source modules across those servers and then links the results into libraries or executables. Additionally, distributing satellite-photo analysis across multiple computers is highlighted as a practical application of parallel processing.

PREREQUISITES
  • Understanding of parallel processing concepts
  • Familiarity with network configuration for multiple computers
  • Knowledge of compilation processes and tools
  • Basic experience with DMAKE or similar build automation tools
NEXT STEPS
  • Research how to set up a network for parallel processing
  • Learn about DMAKE and its implementation for distributed builds
  • Explore parallel algorithms for data processing, such as image analysis
  • Investigate modern alternatives to DMAKE for current development environments
USEFUL FOR

This discussion is beneficial for software developers, systems engineers, and anyone interested in optimizing build processes and leveraging parallel computing for large-scale projects.

welatiger
I graduated from the Faculty of Science, Department of Physics, and I want to learn about parallel processing. Where do I start? I also have 16 computers; how can I network them together to do parallel processing?
One somewhat cool tool that never seemed to make it out of Microsoft's in-house development was DMAKE, a distributed make utility for large multiple-source projects. The network contained a computer with the source files, another computer with the DMAKE app and associated tools, and multiple computers acting as compile servers. DMAKE would send off messages to each of the compile servers to start compiling individual source modules. When a compile server signaled completion, DMAKE would assign it yet another module to compile, until all modules were compiled. Once the compile steps for a library were completed, DMAKE would start a link server to link all the just-produced objects into a library, or directly into an executable if it was a single-library project. I saw it demoed while taking a device driver class for Windows NT at Microsoft University (back in the 1990s). The compile and link servers were kept efficiently busy until all the libraries were made and then linked to create a completed project. I don't recall whether DMAKE kept a history of compile times from previous builds in order to optimize the scheduling. Computers are so fast these days that it would take a huge project for something like DMAKE to make sense now, but it was impressive at the time.
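The scheduling idea described above can be sketched with a work queue: a coordinator hands one module to each free server and assigns the next module the moment a server finishes, then links only after every compile is done. This is only a local simulation under assumed names (the module list, the 4-server count, and the `compile_module`/`link` stand-ins are all made up); the real DMAKE protocol was internal to Microsoft.

```python
# Sketch of DMAKE-style scheduling: a fixed pool of "compile servers"
# (worker threads here, remote machines in the real tool) drains a queue
# of source modules, and a link step runs once all objects exist.
from concurrent.futures import ThreadPoolExecutor, as_completed

MODULES = ["parser", "lexer", "codegen", "optimizer", "runtime", "io"]  # made-up names

def compile_module(name):
    """Stand-in for a remote compile server: turns a source module into an object."""
    return f"{name}.o"

def link(objects):
    """Stand-in for the link server: combines all objects into one library."""
    return "project.lib(" + ", ".join(sorted(objects)) + ")"

def distributed_build(modules, servers=4):
    objects = []
    # The pool models the farm of compile servers: at most `servers` jobs run
    # at once, and a finished server immediately picks up the next module.
    with ThreadPoolExecutor(max_workers=servers) as pool:
        futures = [pool.submit(compile_module, m) for m in modules]
        for fut in as_completed(futures):
            objects.append(fut.result())
    # Linking starts only after every compile has finished.
    return link(objects)

print(distributed_build(MODULES))
```

The same shape scales to real machines by replacing the thread pool with job dispatch over the network; the key property is that no server sits idle while modules remain in the queue.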


Another example in the parallel algorithm thread was distributing the analysis of satellite photos by breaking them up into multiple parts and sending each part to a different computer to operate on.
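The photo-splitting idea can be sketched the same way: divide the image into strips, process each strip independently, and merge the per-strip results. Everything here is a placeholder (a plain 2-D list stands in for real imagery, and counting bright pixels stands in for whatever the actual analysis was); a local pool stands in for the 16 networked computers.

```python
# Data-parallel sketch: split an "image" (2-D list of brightness values)
# into horizontal strips, analyze each strip in parallel, sum the results.
from multiprocessing.pool import ThreadPool

def analyze_strip(strip):
    """Placeholder analysis: count pixels brighter than a threshold."""
    return sum(1 for row in strip for pixel in row if pixel > 128)

def split_into_strips(image, n_parts):
    """Divide the image's rows into at most n_parts roughly equal strips."""
    step = (len(image) + n_parts - 1) // n_parts
    return [image[i:i + step] for i in range(0, len(image), step)]

def parallel_count_bright(image, n_computers=16):
    strips = split_into_strips(image, n_computers)
    # Each strip would go to a different machine; a local pool stands in here.
    with ThreadPool(min(n_computers, len(strips))) as pool:
        return sum(pool.map(analyze_strip, strips))

# Synthetic 64x64 test image with a deterministic brightness pattern.
image = [[(r * 31 + c * 17) % 256 for c in range(64)] for r in range(64)]
print(parallel_count_bright(image))
```

Because each pixel is examined exactly once regardless of how the rows are partitioned, the parallel result matches a serial pass over the whole image; this embarrassingly parallel structure is what makes image analysis such a natural first project for a 16-machine cluster.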
