Why Is MPI_COMM_RANK Not Working Correctly on Mac OSX?

  • Context: Fortran
  • Thread starter: natski
  • Tags: Fortran, Mac
SUMMARY

The issue arises when running Fortran MPI code on Mac OSX: after compiling with mpif90 and launching with mpirun -np 4 mpihello, all four processes print "node 0", meaning MPI_COMM_RANK returns 0 in every process instead of distinct ranks 0 through 3. The same code runs correctly on Linux, which points to a problem with the MPI installation on the Mac rather than with the code itself. Activity Monitor confirms that four instances of the program are running, so the processes do start, but rank retrieval is flawed.

PREREQUISITES
  • Familiarity with Fortran programming and MPI (Message Passing Interface)
  • Understanding of parallel computing concepts
  • Basic knowledge of Mac OSX system configurations
  • Experience with compiling and running Fortran code using mpif90
NEXT STEPS
  • Investigate MPI implementation differences between Mac OSX and Linux
  • Explore troubleshooting techniques for MPI_COMM_RANK issues on Mac
  • Learn about alternative MPI libraries compatible with Mac OSX
  • Review the Mac Research guide on compiling MPI for Fortran support
USEFUL FOR

This discussion is beneficial for Fortran developers, researchers in parallel computing, and anyone facing MPI-related issues on Mac OSX.

natski
Hi all,

I have been running MPI Fortran code on Linux-based systems without any trouble for some time, but making the same code run on a Mac is causing me some headaches. I installed MPI using the guide at: http://www.macresearch.org/compiling-mpi-f90-support-snow-leopard

I am trying to run some simple "hello world" code:

! Fortran example
program hello
  include 'mpif.h'
  integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)

  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  print*, 'node', rank, ': Hello world'
  call MPI_FINALIZE(ierror)
end

Compiling with mpif90 -o mpihello mpihello.f90 and then executing with mpirun -np 4 mpihello, I get:

node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world

So essentially I am calling node 0 four times rather than actually parallelizing anything. Has anyone else encountered this problem? The same code runs fine on Linux, FYI. Any tips or suggestions would be greatly appreciated!

Natski
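[Editor's note] One quick diagnostic, not from the thread itself: also print the value returned by MPI_COMM_SIZE. If every process reports size 1 as well as rank 0, each one has initialized as an independent single-process MPI job rather than joining one four-process job. A minimal sketch (the program name hello_diag is illustrative):

```fortran
! Variant of the hello program that also prints the communicator size.
! If each of the four processes prints "size 1, rank 0", they are not
! part of a single MPI job.
program hello_diag
  implicit none
  include 'mpif.h'
  integer rank, nprocs, ierror

  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  print*, 'size', nprocs, ', rank', rank, ': Hello world'
  call MPI_FINALIZE(ierror)
end program hello_diag
```

On a healthy installation, mpirun -np 4 should print size 4 with ranks 0 through 3 in some order.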
 
By looping the inner code and re-running, I notice in Activity Monitor that four parallel instances of the code are in fact running. So the problem seems to be simply the rank returned by MPI_COMM_RANK. Is this a known issue on Macs?
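[Editor's note] A common cause of exactly this symptom, not confirmed in the thread but worth checking, is a mismatch between the mpirun that launches the job and the MPI library that mpif90 linked against: when the launcher belongs to a different MPI installation, each process starts as an independent singleton and reports rank 0. The two paths below are hypothetical placeholders; on a real machine they would come from `command -v mpirun` and `command -v mpif90`.

```shell
# Check whether the launcher and the compiler wrapper come from the same
# MPI installation. Hypothetical paths for illustration; on a real system
# use: mpirun_path=$(command -v mpirun) and mpif90_path=$(command -v mpif90).
mpirun_path=/usr/bin/mpirun                  # e.g. a pre-installed launcher
mpif90_path=/usr/local/openmpi/bin/mpif90    # e.g. the wrapper built from the guide

if [ "$(dirname "$mpirun_path")" = "$(dirname "$mpif90_path")" ]; then
  echo "OK: mpirun and mpif90 share one MPI installation"
else
  echo "MISMATCH: mpirun and mpif90 come from different MPI installations"
fi
```

If the directories differ, invoking the launcher from the same directory as the wrapper (here the hypothetical /usr/local/openmpi/bin/mpirun -np 4 mpihello), or putting that directory first in PATH, typically restores correct rank assignment.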
 
