@@ -24,35 +24,21 @@ simulation worker, and libEnsemble will distribute user applications across the
 node allocation. This is the **most common approach** where each simulation
 runs an MPI application.

-The generator will run on a worker by default, but if running a single generator,
-the :ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager** is recommended,
-which runs the generator on the manager (using a thread) as below.
+.. image:: ../images/centralized_gen_on_manager.png
+    :alt: centralized
+    :scale: 55

-.. list-table::
-    :widths: 60 40
+A SLURM batch script may include:

-    * - .. image:: ../images/centralized_gen_on_manager.png
-            :alt: centralized
-            :scale: 55
+.. code-block:: bash

-      - In calling script:
+    #SBATCH --nodes 3

-        .. code-block:: python
-            :linenos:
+    python run_libe_forces.py --nworkers 3

-            ensemble.libE_specs = LibeSpecs(
-                gen_on_manager=True,
-            )
-
-A SLURM batch script may include:
-
-.. code-block:: bash
-
-    #SBATCH --nodes 3
-
-    python run_libe_forces.py --nworkers 3
-
-When using **gen_on_manager**, set ``nworkers`` to the number of workers desired for running simulations.
+If running multiple generators instead, set the
+:ref:`libE_specs<datastruct-libe-specs>` option **gen_on_worker** so that each
+generator instance runs on its own worker process.

 Dedicated Mode
 ^^^^^^^^^^^^^^
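One practical effect of where the generator runs is how many of the ``--nworkers`` processes remain free for simulations. A minimal sketch in plain Python (illustrative only; this is arithmetic about the setup, not part of the libEnsemble API):

```python
def sim_workers(nworkers, gens_on_workers):
    """Workers left free to run simulations.

    gens_on_workers: number of generator instances occupying worker
    processes (0 when a single generator runs on the manager).
    """
    return nworkers - gens_on_workers

# Generator on the manager: all three workers run simulations.
print(sim_workers(3, 0))

# One generator on a worker: two workers remain for simulations.
print(sim_workers(3, 1))
```

This is why, when the generator runs on the manager, ``nworkers`` can be set to exactly the number of desired concurrent simulations.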
@@ -62,32 +48,29 @@ True, the MPI executor will not launch applications on nodes where libEnsemble P
 processes (manager and workers) are running. Workers launch applications onto the
 remaining nodes in the allocation.

-.. list-table::
-    :widths: 60 40
-
-    * - .. image:: ../images/centralized_dedicated.png
-            :alt: centralized dedicated mode
-            :scale: 30

-      - In calling script:
+.. image:: ../images/centralized_dedicated.png
+    :alt: centralized dedicated mode
+    :scale: 30

-        .. code-block:: python
-            :linenos:
+In calling script:

-            ensemble.libE_specs = LibeSpecs(
-                num_resource_sets=2,
-                dedicated_mode=True,
-            )
+.. code-block:: python
+    :linenos:

-A SLURM batch script may include:
+    ensemble.libE_specs = LibeSpecs(
+        gen_on_worker=True,
+        num_resource_sets=2,
+        dedicated_mode=True,
+    )

-.. code-block:: bash
+A SLURM batch script may include:

-    #SBATCH --nodes 3
+.. code-block:: bash

-    python run_libe_forces.py --nworkers 3
+    #SBATCH --nodes 3

-Note that **gen_on_manager** is not set in the above example.
+    python run_libe_forces.py --nworkers 3

 Distributed Running
 -------------------
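Dedicated mode, as configured in the hunk above, amounts to partitioning the allocation into libEnsemble nodes and application nodes. A rough sketch with made-up node names (not how libEnsemble actually detects nodes):

```python
# Hypothetical node names standing in for a 3-node SLURM allocation.
allocation = ["nid001", "nid002", "nid003"]

# libEnsemble processes (manager and workers) co-located on one node.
libe_nodes = {"nid001"}

# Dedicated mode: the MPI executor launches applications only on the
# remaining nodes of the allocation.
app_nodes = [node for node in allocation if node not in libe_nodes]
print(app_nodes)
```

With three nodes allocated and one reserved for libEnsemble processes, applications run on the remaining two nodes.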
@@ -137,8 +120,7 @@ Zero-resource workers
 ---------------------

 Users with persistent ``gen_f`` functions may notice that the persistent workers
-are still automatically assigned system resources. This can be resolved by using
-the ``gen_on_manager`` option or by
+are still automatically assigned system resources. This can be resolved by
 :ref:`fixing the number of resource sets<zero_resource_workers>`.

 Assigning GPUs
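The fix referenced above works by handing out fewer resource sets than there are workers, so the persistent generator worker gets none. A back-of-envelope sketch (illustrative arithmetic, a simplification of libEnsemble's resource-set assignment):

```python
nworkers = 3           # worker processes started with --nworkers 3
num_resource_sets = 2  # fixed via libE_specs

# With fewer resource sets than workers, one worker (the persistent
# generator) is left with no resources, freeing them all for simulations.
zero_resource_workers = nworkers - num_resource_sets
print(zero_resource_workers)
```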