Discussion Overview
The discussion concerns the behavior of MCNP6 when run in MPI mode across multiple nodes, specifically the generation of multiple runtpe files and output files. Participants explore the implications of different commands and configurations in a parallel computing environment.
Discussion Character
- Technical explanation
- Debate/contested
- Exploratory
Main Points Raised
- One participant notes that increasing the number of nodes results in multiple runtpe files and outputs, questioning if this is normal behavior for MCNP in MPI mode.
- Another participant suggests that the observed behavior is typical for MCNP running in MPI mode, but emphasizes the importance of understanding the variations in outputs.
- A participant expresses concern about the necessity of using both srun and mpirun in the command, suggesting that srun may replace mpirun.
- One participant explains that srun is the cluster scheduler's (Slurm's) launch command, used to configure nodes and CPUs, while noting that the MCNP manual specifies mpirun for runs on multiple CPUs.
- Another participant speculates that the command structure might lead to separate instances of MCNP running on each node, which could explain the multiple outputs.
- A participant unfamiliar with MPI suggests that having multiple output files seems unhelpful and questions the configuration being used.
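The command-structure issue raised above can be sketched as a Slurm batch script. This is a minimal, hedged illustration, not a command line taken from the discussion: the executable name `mcnp6.mpi` and the input file `input.inp` are assumptions, and the exact launcher depends on how the local MPI library was built.

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=mcnp6_mpi

# Suspect pattern from the discussion: nesting the two launchers.
# Wrapping mpirun inside srun can start one independent mpirun (and
# hence one standalone MCNP instance) per Slurm task, each writing its
# own runtpe and output files:
#   srun mpirun mcnp6.mpi i=input.inp

# Alternative 1: invoke mpirun once; Slurm-aware MPI builds read the
# node/task allocation from the environment, so a single MPI job spans
# all allocated nodes and produces one set of output files.
mpirun -np "$SLURM_NTASKS" mcnp6.mpi i=input.inp

# Alternative 2: use srun itself as the MPI launcher (requires an MPI
# library built with Slurm/PMI support), replacing mpirun entirely:
#   srun mcnp6.mpi i=input.inp
```

Which alternative is correct depends on the site's MPI installation, which is part of why the discussion remains unresolved.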
Areas of Agreement / Disagreement
Participants express differing views on whether both srun and mpirun are needed, and on whether generating multiple outputs is expected behavior or a sign of a configuration issue. The discussion remains unresolved regarding the optimal command structure for running MCNP6 in MPI mode.
Contextual Notes
There are uncertainties regarding the specific configurations required for optimal performance on a Slurm system, and the implications of using srun and mpirun together are not fully clarified.