Commit 3d8c3ba

Merge pull request #1535 from Libensemble/examples/abbrev_local
Examples/abbrev local
2 parents: 16bf770 + 162afaa

89 files changed: 97 additions & 97 deletions

Some content is hidden: large commits do not show every changed file by default; only a subset of the changed files appears below.

docs/FAQ.rst

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ Common Errors

 .. dropdown:: **PETSc and MPI errors with "[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59"**

-   with ``python [test with PETSc].py --comms local --nworkers 4``
+   with ``python [test with PETSc].py --nworkers 4``

    This error occurs on some platforms when using PETSc with libEnsemble
    in ``local`` (multiprocessing) mode. We believe this is due to PETSc initializing MPI

docs/platforms/bebop.rst

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ Once in the interactive session, you may need to reload your modules::

 Now run your script with four workers (one for generator and three for simulations)::

-    python my_libe_script.py --comms local --nworkers 4
+    python my_libe_script.py --nworkers 4

 ``mpirun`` should also work. This line launches libEnsemble with a manager and
 **four** workers to one allocated compute node, with three nodes available for

docs/platforms/frontier.rst

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ Now grab an interactive session on one node::

 Then in the session run::

-    python run_libe_forces.py --comms local --nworkers 9
+    python run_libe_forces.py --nworkers 9

 This places the generator on the first worker and runs simulations on the
 others (each simulation using one GPU).
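
The per-simulation GPU assignment is handled inside the simulation function
via the Executor. A minimal sketch of such a ``sim_f`` follows; the ``forces``
app name and the ``x``/``energy`` field names are assumptions for illustration,
not the exact code of the forces example::

    import numpy as np

    from libensemble.executors.executor import Executor

    def run_forces(H, persis_info, sim_specs, libE_info):
        exctr = Executor.executor  # executor registered in the calling script

        task = exctr.submit(
            app_name="forces",         # assumed registered app name
            app_args=str(H["x"][0][0]),
            num_procs=1,               # one MPI rank per simulation
            auto_assign_gpus=True,     # libEnsemble picks the GPU(s)
            match_procs_to_gpus=True,  # one rank per assigned GPU
        )
        task.wait()                    # block until the run completes

        out = np.zeros(1, dtype=sim_specs["out"])
        out["energy"] = 0.0            # placeholder: parse the app output here
        return out, persis_info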

docs/platforms/improv.rst

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ Once in the interactive session, you may need to reload the modules::

 Now run forces with five workers (one for generator and four for simulations)::

-    python run_libe_forces.py --comms local --nworkers 5
+    python run_libe_forces.py --nworkers 5

 mpi4py comms
 ============

docs/platforms/platforms_index.rst

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ of the allocation::

 or::

-    python myscript.py --comms local --nworkers 4
+    python myscript.py --nworkers 4

 Either of these will run libEnsemble (inc. manager and 4 workers) on the first node. The remaining
 4 nodes will be divided among the workers for submitted applications. If the same run was

docs/platforms/polaris.rst

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ A simple example batch script for a libEnsemble use case that runs 5 workers

     cd $PBS_O_WORKDIR

-    python run_libe_forces.py --comms local --nworkers 5
+    python run_libe_forces.py --nworkers 5

 The script can be run with::

docs/resource_manager/overview.rst

Lines changed: 1 addition & 1 deletion
@@ -276,7 +276,7 @@ Also, this can be set on the command line as a convenience.

 .. code-block:: bash

-    python run_ensemble.py --comms local --nworkers 5 --nresource_sets 8
+    python run_ensemble.py --nworkers 5 --nresource_sets 8

 .. _persistent_sampling_var_resources.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/gen_funcs/persistent_sampling_var_resources.py
 .. _test_GPU_variable_resources.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_GPU_variable_resources.py
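
The same option can be set in the calling script instead of on the command
line. A minimal sketch, assuming the ``libE_specs`` dictionary returned by
:meth:`parse_args()<tools.parse_args>`::

    from libensemble.tools import parse_args

    nworkers, is_manager, libE_specs, _ = parse_args()

    # Equivalent of passing --nresource_sets 8 on the command line
    libE_specs["num_resource_sets"] = 8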

docs/resource_manager/zero_resource_workers.rst

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ only run ``gen_f`` functions in-place (i.e., they do not use the Executor
 to submit applications to allocated nodes). Suppose the user is using the
 :meth:`parse_args()<tools.parse_args>` function and runs::

-    python run_ensemble_persistent_gen.py --comms local --nworkers 3
+    python run_ensemble_persistent_gen.py --nworkers 3

 If three nodes are available in the node allocation, the result may look like the
 following.

@@ -21,7 +21,7 @@ following.

 To avoid the wasted node above, add an extra worker::

-    python run_ensemble_persistent_gen.py --comms local --nworkers 4
+    python run_ensemble_persistent_gen.py --nworkers 4

 and in the calling script (*run_ensemble_persistent_gen.py*), explicitly set the
 number of resource sets to the number of workers that will be running simulations.
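
A sketch of that calling-script setting, using :meth:`parse_args()<tools.parse_args>`
as above (the ``nworkers - 1`` arithmetic assumes exactly one persistent generator)::

    from libensemble.tools import parse_args

    nworkers, is_manager, libE_specs, _ = parse_args()

    # The persistent generator runs in-place on one worker, so only the
    # remaining workers need resource sets for launching simulations.
    libE_specs["num_resource_sets"] = nworkers - 1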

docs/running_libE.rst

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ supercomputers.
 or an :class:`Ensemble<libensemble.ensemble.Ensemble>` object with ``Ensemble(parse_args=True)``,
 you can specify these on the command line::

-    python myscript.py --comms local --nworkers N
+    python myscript.py --nworkers N

 This will launch one manager and ``N`` workers.
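
For reference, a minimal calling script in the style of libEnsemble's simple
sampling tutorial might look like the following. This is a sketch rather than
the exact tutorial code; the stock ``uniform_random_sample`` generator and
``six_hump_camel`` simulator stand in for the user's own functions::

    import numpy as np

    from libensemble.libE import libE
    from libensemble.gen_funcs.sampling import uniform_random_sample
    from libensemble.sim_funcs.six_hump_camel import six_hump_camel
    from libensemble.tools import add_unique_random_streams, parse_args

    # Reads --nworkers (and related options) from the command line
    nworkers, is_manager, libE_specs, _ = parse_args()

    sim_specs = {"sim_f": six_hump_camel, "in": ["x"], "out": [("f", float)]}
    gen_specs = {
        "gen_f": uniform_random_sample,
        "out": [("x", float, (2,))],
        "user": {
            "gen_batch_size": 50,
            "lb": np.array([-3.0, -2.0]),  # lower bounds of sampling domain
            "ub": np.array([3.0, 2.0]),    # upper bounds
        },
    }

    persis_info = add_unique_random_streams({}, nworkers + 1)
    exit_criteria = {"sim_max": 100}

    H, persis_info, flag = libE(
        sim_specs, gen_specs, exit_criteria,
        persis_info=persis_info, libE_specs=libE_specs
    )

Running ``python myscript.py --nworkers 4`` would then start one manager and
four workers on the local machine.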

docs/tutorials/aposmm_tutorial.rst

Lines changed: 2 additions & 2 deletions
@@ -117,7 +117,7 @@ busy.
 In practice, since a single worker becomes "persistent" for APOSMM, users
 should initiate one more worker than the number of parallel simulations::

-    python my_aposmm_routine.py --comms local --nworkers 4
+    python my_aposmm_routine.py --nworkers 4

 results in three workers running simulations and one running APOSMM.

@@ -265,7 +265,7 @@ optimization method::

 Finally, run this libEnsemble / APOSMM optimization routine with the following::

-    python my_first_aposmm.py --comms local --nworkers 4
+    python my_first_aposmm.py --nworkers 4

 Please note that one worker will be "persistent" for APOSMM for the duration of
 the routine.
