Distributed Running
--------------------

In the **distributed** approach, libEnsemble can be run using the **mpi4py**
communicator, with workers distributed across nodes. This is most often used
when workers run simulation code directly, via a Python interface, and may be
launched with an MPI runner. For example, using an `mpich`-based MPI, the
following runs a manager and three workers, one rank per node::

    mpirun -np 4 -ppn 1 python myscript.py

The distributed approach can also be used with the executor, to co-locate
workers with the applications they submit. Ensuring that workers are placed
as required in this case requires careful MPI rank placement.

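To reason about where each worker lands, the block placement applied by an
``mpirun -ppn`` style launch can be sketched as follows. This is a hedged
illustration only; ``rank_to_node`` is a hypothetical helper written for this
example and is not part of libEnsemble or MPI.

```python
# Illustrative sketch (an assumption, not libEnsemble API): compute which
# node hosts each MPI rank under a block placement with ppn ranks per
# node, wrapping around the allocation when there are more ranks than
# node slots (as when a manager shares a node with a worker).

def rank_to_node(rank, ppn, num_nodes):
    """Return the node index hosting `rank` for a block placement."""
    return (rank // ppn) % num_nodes

# 4 ranks (manager + 3 workers), one per node, on a 3-node allocation:
placement = [rank_to_node(r, ppn=1, num_nodes=3) for r in range(4)]
print(placement)  # [0, 1, 2, 0] -- rank 3 wraps onto node 0
```

Under this placement the manager (rank 0) shares a node with the last worker,
which is one reason careful rank placement matters when co-locating workers
with their applications.
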
.. image:: ../images/distributed_new_detailed.png
   :alt: distributed
   :scale: 30
   :align: center

This allows the libEnsemble worker to read files produced by the application on
local node storage.
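A worker-side routine for this pattern can be sketched as follows. This is a
hedged illustration, not libEnsemble API: the file name ``app_output.txt`` and
the ``energy: <value>`` line format are invented for this example.

```python
# Illustrative sketch (assumption, not libEnsemble API): a worker-side
# helper that parses a result from a file the application wrote to
# node-local storage. The file name "app_output.txt" and the
# "energy: <value>" line format are invented for this example.

import os
import tempfile

def read_local_output(workdir, filename="app_output.txt"):
    """Return the energy value parsed from an application output file."""
    path = os.path.join(workdir, filename)
    with open(path) as f:
        for line in f:
            if line.startswith("energy:"):
                return float(line.split(":", 1)[1])
    raise ValueError(f"no 'energy:' line found in {path}")

# Example: write a fake output file, then read it back as a worker would.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "app_output.txt"), "w") as f:
        f.write("steps: 100\nenergy: -1.25\n")
    print(read_local_output(tmp))  # -1.25
```

Because the worker runs on the same node as the application it launched, such
reads hit node-local storage rather than a shared filesystem.
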

Configuring the Run
-------------------