@@ -24,8 +24,12 @@ generator and simulator functions. Many :doc:`examples<examples/examples_index>`
are available.

There are currently three communication options for libEnsemble (determining how
-the Manager and Workers communicate). These are ``mpi``, ``local``, ``tcp``.
-The default is ``mpi``.
+the Manager and Workers communicate). These are ``local``, ``mpi``, ``tcp``.
+The default is ``local`` if ``nworkers`` is specified, otherwise ``mpi``.
+
+Note that ``local`` comms can be used on multi-node systems, where
+the ``MPI executor`` is used to distribute MPI applications across the nodes.
+Indeed, this is the most commonly used option, even on large supercomputers.
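+
+For example, a simulator function running on a worker can submit an MPI
+application through the ``MPI executor`` while libEnsemble itself uses
+``local`` comms. A minimal sketch (the application path, name, and process
+count below are illustrative)::
+
+    from libensemble.executors.mpi_executor import MPIExecutor
+
+    exctr = MPIExecutor()
+    exctr.register_app(full_path="/path/to/my_app", app_name="my_app")
+
+    # Inside a sim_f: distribute the app across nodes and wait for it
+    task = exctr.submit(app_name="my_app", num_procs=8)
+    task.wait()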

.. note::
    You do not need the ``mpi`` communication mode to use the
@@ -34,38 +38,12 @@ The default is ``mpi``.

.. tab-set::

-    .. tab-item:: MPI Comms
-
-        This option uses mpi4py_ for the Manager/Worker communication. It is used automatically if
-        you run your libEnsemble calling script with an MPI runner such as::
-
-            mpirun -np N python myscript.py
-
-        where ``N`` is the number of processes. This will launch one manager and
-        ``N-1`` workers.
-
-        This option requires ``mpi4py`` to be installed to interface with the MPI on your system.
-        It works on a standalone system, and with both
-        :doc:`central and distributed modes<platforms/platforms_index>` of running libEnsemble on
-        multi-node systems.
-
-        It also potentially scales the best when running with many workers on HPC systems.
-
-        **Limitations of MPI mode**
-
-        If launching MPI applications from workers, then MPI is nested. **This is not
-        supported with Open MPI**. This can be overcome by using a proxy launcher
-        (see :doc:`Balsam<executor/balsam_2_executor>`). This nesting does work
-        with MPICH_ and its derivative MPI implementations.
-
-        It is also unsuitable to use this mode when running on the **launch** nodes of
-        three-tier systems (e.g., Summit). In that case ``local`` mode is recommended.
-
    .. tab-item:: Local Comms

        Uses Python's built-in multiprocessing_ module.
        The ``comms`` type ``local`` and number of workers ``nworkers`` may
        be provided in :ref:`libE_specs<datastruct-libe-specs>`.
+
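+        For instance, a minimal sketch of these settings, assuming the
+        dictionary form of the specs (the worker count of four is
+        illustrative)::
+
+            libE_specs = {"comms": "local", "nworkers": 4}
+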
        Then run::

            python myscript.py
@@ -78,6 +56,10 @@ The default is ``mpi``.

        This will launch one manager and ``N`` workers.

+        The following abbreviated line is equivalent to the above::
+
+            python myscript.py -n N
+
        libEnsemble will run on **one node** in this scenario. To
        :doc:`disallow this node<platforms/platforms_index>`
        from app-launches (if running libEnsemble on a compute node),
@@ -97,6 +79,33 @@ The default is ``mpi``.
        - In some scenarios, any import of ``mpi4py`` will cause this to break.
        - Does not have the potential scaling of MPI mode, but is sufficient for most users.

+    .. tab-item:: MPI Comms
+
+        This option uses mpi4py_ for the Manager/Worker communication. It is used automatically if
+        you run your libEnsemble calling script with an MPI runner such as::
+
+            mpirun -np N python myscript.py
+
+        where ``N`` is the number of processes. This will launch one manager and
+        ``N-1`` workers.
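+
+        The ``comms`` mode can also be requested explicitly rather than relying
+        on runner detection. A sketch, assuming the dictionary form of
+        ``libE_specs``::
+
+            libE_specs = {"comms": "mpi"}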
+
+        This option requires ``mpi4py`` to be installed to interface with the MPI on your system.
+        It works on a standalone system, and with both
+        :doc:`central and distributed modes<platforms/platforms_index>` of running libEnsemble on
+        multi-node systems.
+
+        It also potentially scales the best when running with many workers on HPC systems.
+
+        **Limitations of MPI mode**
+
+        If launching MPI applications from workers, then MPI is nested. **This is not
+        supported with Open MPI**. This can be overcome by using a proxy launcher
+        (see :doc:`Balsam<executor/balsam_2_executor>`). This nesting does work
+        with MPICH_ and its derivative MPI implementations.
+
+        This mode is also unsuitable when running on the **launch** nodes of
+        three-tier systems (e.g., Summit); in that case ``local`` mode is recommended.
+
    .. tab-item:: TCP Comms

        Run the Manager on one system and launch workers to remote