
Commit 4feafc3

Update platform tutorials
1 parent 5b9433b commit 4feafc3

4 files changed

Lines changed: 35 additions & 16 deletions

docs/platforms/frontier.rst

Lines changed: 4 additions & 1 deletion
@@ -64,7 +64,10 @@ Now grab an interactive session on one node::
 
 Then in the session run::
 
-    python run_libe_forces.py --comms local --nworkers 8
+    python run_libe_forces.py --comms local --nworkers 9
+
+This places the generator on the first worker and runs simulations on the
+others (each simulation using one GPU).
 
 To see GPU usage, ssh into the node you are on in another window and run::
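A Frontier node exposes eight GPUs (four MI250X, two GCDs each), so ``--nworkers 9`` gives one generator worker plus eight simulation workers, each with one GPU. For orientation, a minimal sketch (assumed names, not the tutorial's exact ``forces_simf.py``) of how each simulation can be launched on its single GPU through libEnsemble's MPIExecutor:

.. code-block:: python

    from libensemble.executors.executor import Executor

    def run_forces_sketch(H, persis_info, sim_specs, libE_info):
        exctr = Executor.executor              # executor registered in the calling script
        task = exctr.submit(
            app_name="forces",                 # app registered via exctr.register_app(...)
            app_args=str(int(H["x"][0][0])),   # assumed input field
            num_procs=1,
            num_gpus=1,                        # one GPU per simulation
        )
        task.wait()                            # block until the launched run completes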

docs/platforms/perlmutter.rst

Lines changed: 27 additions & 3 deletions
@@ -96,12 +96,36 @@ Now grab an interactive session on one node::
 Then in the session run::
 
     export LIBE_PLATFORM="perlmutter_g"
-    python run_libe_forces.py --comms local --nworkers 4
+    python run_libe_forces.py -n 5
+
+This places the generator on the first worker and runs simulations on the
+others (each simulation using one GPU).
 
 To see GPU usage, ssh into the node you are on in another window and run::
 
     watch -n 0.1 nvidia-smi
 
+Running generator on the manager
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An alternative is to run the generator on a thread on the manager. The
+number of workers can then be set to the number of simulation workers.
+
+Change the libE_specs in `run_libe_forces.py` as follows.
+
+.. code-block:: python
+
+    nsim_workers = ensemble.nworkers
+
+    # Persistent gen does not need resources
+    ensemble.libE_specs = LibeSpecs(
+        gen_on_manager=True,
+
+and run with::
+
+    python run_libe_forces.py -n 4
+
 To watch video
 ^^^^^^^^^^^^^^
 
@@ -128,10 +152,10 @@ to use ``mpi4py``, you should install and run as follows::
 
 This line will build ``mpi4py`` on top of a CUDA-aware Cray MPICH.
 
-To run using 4 workers (one manager)::
+To run using 5 workers (one manager)::
 
     export SLURM_EXACT=1
-    srun -n 5 python my_script.py
+    srun -n 6 python my_script.py
 
 More information on using Python and ``mpi4py`` on Perlmutter can be found
 in the `Python on Perlmutter`_ documentation.
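The snippet added above stops at ``gen_on_manager=True,``. A minimal sketch of how that specs block could look when completed, assuming ``ensemble`` is the ``Ensemble`` object created earlier in ``run_libe_forces.py``; fields the real script may set beyond ``gen_on_manager`` are omitted:

.. code-block:: python

    from libensemble.specs import LibeSpecs

    # "ensemble" is the Ensemble object created earlier in run_libe_forces.py
    nsim_workers = ensemble.nworkers   # every worker now runs simulations

    # Persistent gen runs on a manager thread, so it claims no worker resources
    ensemble.libE_specs = LibeSpecs(
        gen_on_manager=True,
    )

With one simulation worker per A100 GPU on a Perlmutter GPU node, the run command drops from ``-n 5`` to ``-n 4``.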

docs/platforms/polaris.rst

Lines changed: 0 additions & 11 deletions
@@ -39,17 +39,6 @@ environment (if you need ``conda install``). More details at `Python for Polaris
 See :doc:`here<../advanced_installation>` for more information on advanced options
 for installing libEnsemble, including using Spack.
 
-Ensuring use of mpiexec
------------------------
-
-Prior to libE v 0.10.0, when using the :doc:`MPIExecutor<../executor/mpi_executor>` it
-is necessary to manually tell libEnsemble to use``mpiexec`` instead of ``aprun``.
-When setting up the executor use::
-
-    from libensemble.executors.mpi_executor import MPIExecutor
-    exctr = MPIExecutor(custom_info={'mpi_runner':'mpich', 'runner_name':'mpiexec'})
-
-From version 0.10.0, this is not necessary.
 
 Job Submission
 --------------
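The removed section is obsolete because, from libEnsemble v0.10.0, the MPI runner no longer has to be set by hand on Polaris. A minimal sketch of the simpler setup that replaces it (the application path is a placeholder):

.. code-block:: python

    from libensemble.executors.mpi_executor import MPIExecutor

    exctr = MPIExecutor()   # no custom_info needed from v0.10.0 onward
    exctr.register_app(full_path="/path/to/forces.x", app_name="forces")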

docs/platforms/spock_crusher.rst

Lines changed: 4 additions & 1 deletion
@@ -67,7 +67,10 @@ Now grab an interactive session on one node::
 
 Then in the session run::
 
-    python run_libe_forces.py --comms local --nworkers 4
+    python run_libe_forces.py -n 5
+
+This places the generator on the first worker and runs simulations on the
+others (each simulation using one GPU).
 
 To see GPU usage, ssh into the node you are on in another window and run::
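Here ``-n`` appears as the short form of the earlier ``--nworkers`` option. A minimal sketch, assuming the ``Ensemble`` class interface used by ``run_libe_forces.py``, of how the script picks this up from the command line:

.. code-block:: python

    from libensemble.ensemble import Ensemble

    # parse_args reads options such as -n/--nworkers and --comms from sys.argv
    ensemble = Ensemble(parse_args=True)
    print(ensemble.nworkers)   # 5 for "python run_libe_forces.py -n 5"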
