Do MPI collective operations involve multiple hops?

SUMMARY

The discussion centers on how MPI collective operations, specifically the reduce operation, move data. It clarifies that nodes need not send data directly to the root: implementations may reduce through intermediate nodes, and a similar pattern can be built explicitly by grouping processes and using MPI_ALLREDUCE, which delivers the result to every member of a group. The MPI Forum documentation is referenced for the definitions of MPI_ALLREDUCE and MPI_REDUCE.

PREREQUISITES
  • Understanding of MPI (Message Passing Interface) concepts
  • Familiarity with collective operations in parallel computing
  • Knowledge of MPI_ALLREDUCE and MPI_REDUCE functions
  • Basic comprehension of intracommunicators in MPI
NEXT STEPS
  • Research the differences between MPI_ALLREDUCE and MPI_REDUCE
  • Explore the MPI Forum documentation for detailed explanations of collective operations
  • Learn about optimizing data communication patterns in MPI
  • Investigate the performance implications of using intermediate nodes in MPI operations
USEFUL FOR

Researchers, developers, and engineers working with parallel computing and MPI who need to optimize collective operations and data reduction strategies.

ektrules
Consider the reduce operation for example. Do all nodes send data directly to the root? Or is there some structure where a node will receive data from a few other nodes, perform a reduction, then pass the intermediate results to other nodes?
ektrules said:
Or is there some structure where a node will receive data from a few other nodes, perform a reduction, then pass the intermediate results to other nodes?

Something like that could be achieved in two steps, by grouping the processes appropriately and using MPI_ALLREDUCE (instead of MPI_REDUCE) for the first step.

Please see:
http://www.mpi-forum.org/docs/mpi22-report/node109.htm#Node109
If comm is an intracommunicator, MPI_ALLREDUCE behaves the same as MPI_REDUCE except that the result appears in the receive buffer of all the group members.

and
http://www.mpi-forum.org/docs/mpi22-report/node87.htm#Node87
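The two-step grouping idea could look roughly like the sketch below. This is only an illustration, not tested against a particular MPI implementation; the subgroup size of 4 and the use of MPI_Comm_split to form the groups are my own choices:

```c
#include <mpi.h>
#include <stdio.h>

/* Two-step reduction via grouping:
 * 1) split COMM_WORLD into subgroups and MPI_Allreduce within each,
 *    so every member of a subgroup holds that subgroup's partial sum;
 * 2) one leader per subgroup feeds the partials into a final MPI_Reduce. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = rank + 1;  /* each rank's contribution */

    /* Step 1: subgroups of 4 consecutive ranks. */
    MPI_Comm group;
    MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &group);

    int partial;
    MPI_Allreduce(&value, &partial, 1, MPI_INT, MPI_SUM, group);

    /* Step 2: subgroup leaders (local rank 0) combine the partials;
     * MPI_UNDEFINED keeps non-leaders out of the leader communicator. */
    int grank;
    MPI_Comm_rank(group, &grank);
    MPI_Comm leaders;
    MPI_Comm_split(MPI_COMM_WORLD, grank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leaders);

    if (leaders != MPI_COMM_NULL) {
        int total;
        MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, leaders);
        if (rank == 0)
            printf("global sum: %d\n", total);
        MPI_Comm_free(&leaders);
    }

    MPI_Comm_free(&group);
    MPI_Finalize();
    return 0;
}
```

After step 1, every rank already holds its subgroup's partial result (that is what distinguishes MPI_ALLREDUCE from MPI_REDUCE), which is useful if the intermediate sums are needed. Keep in mind that MPI libraries typically already implement MPI_Reduce with a tree internally, so doing this by hand is mainly worthwhile when you want those intermediates explicitly.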
 