@@ -5,22 +5,51 @@ Running on HPC Systems
 
 libEnsemble has been tested on systems of highly varying scales, from laptops to
 thousands of compute nodes. On multi-node systems, there are a few alternative
-ways of configuring libEnsemble to run and launch tasks (i.e., user applications) on
-the available nodes.
+ways of configuring libEnsemble to run and launch tasks (i.e., user applications)
+on the available nodes.
 
 The :doc:`Forces tutorial <../../tutorials/executor_forces_tutorial>` gives an
 example with a simple MPI application.
 
+Note that while the diagrams below show one application being run per node,
+configurations with **multiple nodes per worker** or **multiple workers per node**
+are both common use cases.
+
 Centralized Running
 -------------------
 
 The default communications scheme places the manager and workers on the first node.
 The :doc:`MPI Executor<../executor/mpi_executor>` can then be invoked by each
 simulation worker, and libEnsemble will distribute user applications across the
-node allocation. This is the most common approach where each simulation
+node allocation. This is the **most common approach** where each simulation
 runs an MPI application.
 
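+A rough sketch of this pattern is below, using the MPI Executor's
+``register_app`` and ``submit`` calls; the application path, name, and
+process count are placeholders:
+
+.. code-block:: python
+
+    from libensemble.executors import MPIExecutor
+
+    # In the calling script: register the user application (placeholder path/name)
+    exctr = MPIExecutor()
+    exctr.register_app(full_path="/path/to/my_app.x", app_name="my_app")
+
+    # In the simulation function: submit a run; libEnsemble assigns
+    # nodes/cores from the resources allocated to the worker.
+    task = exctr.submit(app_name="my_app", num_procs=8)
+    task.wait()
+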
-The generator will run on one worker by default, but if running a single generator,
+The generator will run on a worker by default, but if running a single generator,
 the :ref:`libE_specs<datastruct-libe-specs>` option **gen_on_manager** is recommended,
 which runs the generator on the manager (using a thread) as below.
 
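+A minimal sketch of enabling this option (dictionary form shown; the
+``LibeSpecs`` class accepts the same field):
+
+.. code-block:: python
+
+    libE_specs = {
+        "gen_on_manager": True,  # run the generator on a manager thread
+    }
+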
@@ -89,9 +118,6 @@ remaining nodes in the allocation.
 
 Note that **gen_on_manager** is not set in the above example.
 
-Note that while these diagrams show one application being run per node, configurations
-with **multiple nodes per worker** or **multiple workers per node** are both common use cases.
-
 Distributed Running
 --------------------
 