@@ -96,12 +96,36 @@ Now grab an interactive session on one node::
Then in the session run::

    export LIBE_PLATFORM="perlmutter_g"
-    python run_libe_forces.py --comms local --nworkers 4
+    python run_libe_forces.py -n 5
+
+This places the generator on the first worker and runs simulations on the
+others (each simulation using one GPU).
+
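+A minimal sketch of the simulation-side launch (the app name follows the
+``forces`` example; the exact ``submit`` keywords are an assumption based on
+recent libEnsemble executor options):
+
+.. code-block:: python
+
+    # Inside the sim function: launch one run of the registered app,
+    # asking libEnsemble to assign a single GPU to it.
+    from libensemble.executors import Executor
+
+    exctr = Executor.executor  # executor registered in the calling script
+    task = exctr.submit(
+        app_name="forces",
+        app_args="1024 1000",  # assumed example arguments
+        num_procs=1,           # one MPI rank per simulation
+        num_gpus=1,            # one GPU per simulation
+    )
+    task.wait()
+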
To see GPU usage, ssh into the node you are on in another window and run::

    watch -n 0.1 nvidia-smi

+Running generator on the manager
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An alternative is to run the generator on a thread on the manager. The
+number of workers can then be set to the number of simulation workers.
+
+Change the ``libE_specs`` in ``run_libe_forces.py`` as follows.
+
+.. code-block:: python
+
+    nsim_workers = ensemble.nworkers
+
+    # Persistent gen does not need resources
+    ensemble.libE_specs = LibeSpecs(
+        gen_on_manager=True,
+    )
+
+and run with::
+
+    python run_libe_forces.py -n 4
+
+As the generator no longer occupies a worker, the four workers map directly
+to the four GPUs on the node.
+
To watch video
^^^^^^^^^^^^^^

@@ -128,10 +152,10 @@ to use ``mpi4py``, you should install and run as follows::

This line will build ``mpi4py`` on top of a CUDA-aware Cray MPICH.

-To run using 4 workers (one manager)::
+To run using 5 workers (plus one manager)::

    export SLURM_EXACT=1
-    srun -n 5 python my_script.py
+    srun -n 6 python my_script.py
+
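+With MPI comms, rank 0 runs the libEnsemble manager and each remaining rank
+runs a worker, so six ranks give five workers. A standalone sketch of this
+rank arithmetic, using plain ``mpi4py`` rather than libEnsemble's API:
+
+.. code-block:: python
+
+    # Run with, e.g.: srun -n 6 python rank_layout.py (hypothetical file)
+    from mpi4py import MPI
+
+    comm = MPI.COMM_WORLD
+    rank = comm.Get_rank()
+    nworkers = comm.Get_size() - 1  # rank 0 acts as the manager
+
+    if rank == 0:
+        print(f"Manager on rank 0, with {nworkers} workers")
+    else:
+        print(f"Worker {rank} of {nworkers}")
+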
More information on using Python and ``mpi4py`` on Perlmutter can be found
in the `Python on Perlmutter`_ documentation.