
Commit 4b584d1

Merge pull request #1239 from Libensemble/formatting/openmpi
hyphen in openmpi
2 parents: 723cc3e + 5ab70ad

5 files changed: 16 additions & 16 deletions

File tree

- CHANGELOG.rst
- docs/known_issues.rst
- docs/platforms/improv.rst
- libensemble/executors/mpi_runner.py
- libensemble/tests/regression_tests/test_with_app_persistent_aposmm_tao_nm.py

CHANGELOG.rst

Lines changed: 9 additions & 9 deletions
@@ -120,7 +120,7 @@ Output changes:
 
 Bug fixes:
 
-* Resolved PETSc/OpenMPI issue (when using the Executor). #1064
+* Resolved PETSc/Open-MPI issue (when using the Executor). #1064
 * Prevent `mpi4py` validation running during local comms (when using OO interface). #1065
 
 Performance changes:
@@ -380,7 +380,7 @@ Documentation:
 
 :Known issues:
 
-* OpenMPI does not work with direct MPI job launches in ``mpi4py`` comms mode,
+* Open-MPI does not work with direct MPI job launches in ``mpi4py`` comms mode,
   since it does not support nested MPI launches.
   (Either use local mode or the Balsam Executor.)
 * See known issues section in the documentation for more issues.
@@ -444,7 +444,7 @@ Other functionality changes:
 
 :Known issues:
 
-* OpenMPI does not work with direct MPI job launches in ``mpi4py`` comms mode,
+* Open-MPI does not work with direct MPI job launches in ``mpi4py`` comms mode,
   since it does not support nested MPI launches.
   (Either use local mode or the Balsam Executor.)
 * See known issues section in the documentation for more issues.
@@ -492,7 +492,7 @@ Documentation:
 
 :Known issues:
 
-* OpenMPI does not work with direct MPI job launches in ``mpi4py`` comms mode, since it does not support nested MPI launches
+* Open-MPI does not work with direct MPI job launches in ``mpi4py`` comms mode, since it does not support nested MPI launches
   (Either use local mode or Balsam Executor).
 * See known issues section in the documentation for more issues.
 
@@ -540,7 +540,7 @@ Documentation:
 :Known issues:
 
 * We currently recommend running in Central mode on Bridges, as distributed runs are experiencing hangs.
-* OpenMPI does not work with direct MPI job launches in mpi4py comms mode, since it does not support nested MPI launches
+* Open-MPI does not work with direct MPI job launches in mpi4py comms mode, since it does not support nested MPI launches
   (Either use local mode or Balsam Executor).
 * See known issues section in the documentation for more issues.
 
@@ -696,7 +696,7 @@ Release 0.5.0
 
 :Known issues:
 
-* OpenMPI does not work with direct MPI job launches in mpi4py comms mode, since it does not support nested MPI launches
+* Open-MPI does not work with direct MPI job launches in mpi4py comms mode, since it does not support nested MPI launches
   (Either use local mode or Balsam job controller).
 * Local comms mode (multiprocessing) may fail if MPI is initialized before forking processors. This is thought to be responsible for issues combining with PETSc.
 * Remote detection of logical cores via LSB_HOSTS (e.g., Summit) returns number of physical cores since SMT info not available.
@@ -728,7 +728,7 @@ Release 0.4.0
 
 :Known issues:
 
-* OpenMPI is not supported with direct MPI launches since nested MPI launches are not supported.
+* Open-MPI is not supported with direct MPI launches since nested MPI launches are not supported.
 
 Release 0.3.0
 -------------
@@ -749,7 +749,7 @@ Release 0.3.0
 
 :Known issues:
 
-* OpenMPI is not supported with direct MPI launches since nested MPI launches are not supported.
+* Open-MPI is not supported with direct MPI launches since nested MPI launches are not supported.
 
 Release 0.2.0
 -------------
@@ -765,7 +765,7 @@ Release 0.2.0
 :Known issues:
 
 * Killing MPI jobs does not work correctly on some systems (including Cray XC40 and CS400). In these cases, libEnsemble continues, but processes remain running.
-* OpenMPI does not work correctly with direct launches (and has not been tested with Balsam).
+* Open-MPI does not work correctly with direct launches (and has not been tested with Balsam).
 
 Release 0.1.0
 -------------

docs/known_issues.rst

Lines changed: 2 additions & 2 deletions
@@ -13,8 +13,8 @@ may occur when using libEnsemble.
   ``srun: error: CPU binding outside of job step allocation, allocated`` in
   the application's standard error. This is being investigated. If this happens
   we recommend using ``local`` comms in place of ``mpi4py``.
-* When using the Executor: OpenMPI does not work with direct MPI task
-  submissions in mpi4py comms mode, since OpenMPI does not support nested MPI
+* When using the Executor: Open-MPI does not work with direct MPI task
+  submissions in mpi4py comms mode, since Open-MPI does not support nested MPI
   executions. Use either ``local`` mode or the Balsam Executor instead.
 * Local comms mode (multiprocessing) may fail if MPI is initialized before
   forking processors. This is thought to be responsible for issues combining
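
For context, a minimal sketch (not from this commit) of the ``local`` comms workaround named above: with ``"comms": "local"`` the workers are forked processes rather than MPI ranks, so MPI applications launched through the Executor are not nested inside an outer ``mpirun``. The sim/gen functions here are stock libEnsemble examples chosen purely for illustration::

    import numpy as np

    from libensemble.libE import libE
    from libensemble.gen_funcs.sampling import uniform_random_sample
    from libensemble.sim_funcs.six_hump_camel import six_hump_camel
    from libensemble.tools import add_unique_random_streams

    # Workers are forked processes, so any mpirun issued by the
    # Executor is a top-level launch, not a nested one.
    libE_specs = {"comms": "local", "nworkers": 4}

    sim_specs = {"sim_f": six_hump_camel, "in": ["x"], "out": [("f", float)]}
    gen_specs = {
        "gen_f": uniform_random_sample,
        "out": [("x", float, (2,))],
        "user": {
            "gen_batch_size": 50,
            "lb": np.array([-3.0, -2.0]),
            "ub": np.array([3.0, 2.0]),
        },
    }

    persis_info = add_unique_random_streams({}, 5)
    H, persis_info, flag = libE(
        sim_specs, gen_specs, {"sim_max": 100}, persis_info, libE_specs=libE_specs
    )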

docs/platforms/improv.rst

Lines changed: 2 additions & 2 deletions
@@ -61,11 +61,11 @@ Now run forces with five workers (one for generator and four for simulations)::
 mpi4py comms
 ============
 
-You can install mpi4py as usual having installed the openmpi module::
+You can install mpi4py as usual having installed the Open-MPI module::
 
     pip install mpi4py
 
-Note if using ``mpi4py`` comms with openmpi, you may need to set ``export OMPI_MCA_coll_hcoll_enable=0``
+Note if using ``mpi4py`` comms with Open-MPI, you may need to set ``export OMPI_MCA_coll_hcoll_enable=0``
 to prevent HCOLL warnings.
 
 .. _Improv: https://www.lcrc.anl.gov/for-users/using-lcrc/running-jobs/running-jobs-on-improv/
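
As a supplementary note (an assumption, not part of this commit): the same MCA parameter can also be set from inside the Python script, provided it happens before ``mpi4py`` is imported, since Open-MPI reads ``OMPI_MCA_*`` environment variables when ``MPI_Init`` runs::

    import os

    # Must be set before mpi4py is imported: importing mpi4py calls
    # MPI_Init, which is when Open-MPI picks up MCA parameters.
    os.environ["OMPI_MCA_coll_hcoll_enable"] = "0"

    from mpi4py import MPI  # noqa: E402

    comm = MPI.COMM_WORLD
    print(f"rank {comm.Get_rank()} of {comm.Get_size()}")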

libensemble/executors/mpi_runner.py

Lines changed: 1 addition & 1 deletion
@@ -364,7 +364,7 @@ def express_spec(self, task, nprocs, nnodes, ppn, machinefile, hyperthreads, ext
         """
         hostlist = None
         machinefile = None
-        # Use machine files for OpenMPI
+        # Use machine files for Open-MPI
         # as "-host" requires entry for every rank
 
         machinefile = "machinefile_autogen"
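
To illustrate the comment in this hunk (a sketch only, not libEnsemble's actual implementation): Open-MPI's ``-host`` option needs one entry per rank, so writing a machine file that repeats each host once per local process is the simpler route. The helper name below is hypothetical::

    # Hypothetical helper: one hostname line per rank, each host
    # repeated ppn (processes-per-node) times.
    def write_machinefile(hosts, ppn, path="machinefile_autogen"):
        with open(path, "w") as f:
            for host in hosts:
                f.write((host + "\n") * ppn)
        return path

    # Then, e.g.: mpirun -np 8 -machinefile machinefile_autogen ./my_app
    write_machinefile(["node001", "node002"], ppn=4)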

libensemble/tests/regression_tests/test_with_app_persistent_aposmm_tao_nm.py

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 Runs libEnsemble with APOSMM with a PETSc/TAO local optimizer and using
 the executor to run an application.
 
-This is to test the scenario, where OpenMPI will fail due to nested MPI, if
+This is to test the scenario, where Open-MPI will fail due to nested MPI, if
 PETSc is imported at global level.
 
 Execute via one of the following commands (e.g., 3 workers):
@@ -35,7 +35,7 @@
 from libensemble.sim_funcs.var_resources import multi_points_with_variable_resources as sim_f
 from libensemble.tools import add_unique_random_streams, parse_args
 
-# For OpenMPI the following lines cannot be used, thus allowing PETSc to import.
+# For Open-MPI the following lines cannot be used, thus allowing PETSc to import.
 # import libensemble.gen_funcs
 # libensemble.gen_funcs.rc.aposmm_optimizers = "petsc"
 
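For reference, a hedged sketch (not from this commit) of the deferred-import pattern this test exercises: keep ``petsc4py`` out of module scope so that importing the script does not initialize PETSc, and hence MPI, before workers launch their tasks. The function below is hypothetical::

    def local_opt_eval(x0):
        # Importing petsc4py initializes PETSc (and MPI), so defer the
        # import until we are already inside the worker's call.
        from petsc4py import PETSc

        vec = PETSc.Vec().createSeq(len(x0))
        vec.setArray(x0)
        return vec.norm()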