This repository supports replication of the study on topology and scale effects on performance and energy in μBench microservices. The layout is aligned with the paper structure:
- Section 3: Experiment Planning — Topology definitions, workmodels, and service-graph choices. These are not in a separate directory; they are documented below and live inside `4_experiment_execution/`.
- Section 4: Experiment Execution — `4_experiment_execution/`
- Section 5: Results and Analysis — `5_results_data/` (data) and `5_results_analysis/` (notebooks and outputs)
If this replication package helps your research, consider citing it as follows:
```bibtex
@misc{ReplicationPackage,
  title     = {{An Empirical Study on How Architectural Topology Affects Microservice Performance and Energy Usage}},
  author    = {Ristova, Irena and Stoico, Vincenzo},
  year      = {2026},
  publisher = {GreenArch 2026: The First International Workshop on Software Architecture for Green Sustainable Carbon-aware Software Systems},
  url       = {https://github.com/S2-group/greenarch26-topology-impact},
  note      = {Accessed: 2026-03-27}
}
```
(Update the `note` field with your date of access when citing.)
The material that corresponds to Section 3 of the paper (experiment planning: topologies, workmodels, service graphs) is contained in `4_experiment_execution/`:
- Workmodel definitions: `4_experiment_execution/muBench/Examples/` — the 18 workmodel JSON files used in the experiment (e.g. `workmodel-serial-{5,10,20}services.json`, `workmodel-parallel-*`, `workmodelA*`, `workmodelC*`, `workmodelC-multi*`, `workmodelD*`). These define topology patterns and system sizes.
- Topology–workmodel mapping: `4_experiment_execution/experiment-runner/examples/mubench-benchmarking/RunnerConfig.py` — the function `get_workmodel_path()` maps each (topology, size) pair to a workmodel file, as sketched below. The factors (6 topologies × 3 sizes) and design are documented in that folder’s `README.md` and `EXPERIMENT_ARCHITECTURE.md`.
- Deployment configuration: `4_experiment_execution/muBench/Configs/` — configs used by the deployer (e.g. `K8sParameters.json`).

There is no `3_experiment_planning/` directory; the planning artifacts above are used directly by the execution in Section 4.
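For orientation, the mapping follows the shape below. This is a minimal sketch, not the actual code in `RunnerConfig.py`: only the serial filenames are spelled out in this README, so the remaining dictionary entries are placeholders.

```python
from pathlib import Path

EXAMPLES_DIR = Path("4_experiment_execution/muBench/Examples")

# Illustrative lookup only. The serial filenames are listed in this README;
# the parallel/A/C/C-multi/D entries follow their own naming schemes
# (workmodel-parallel-*, workmodelA*, ...). The authoritative mapping is
# get_workmodel_path() in RunnerConfig.py.
WORKMODELS = {
    ("serial", 5): "workmodel-serial-5services.json",
    ("serial", 10): "workmodel-serial-10services.json",
    ("serial", 20): "workmodel-serial-20services.json",
    # ... the remaining (topology, size) pairs of the 6 x 3 design
}

def get_workmodel_path(topology: str, size: int) -> Path:
    """Resolve a (topology, size) factor combination to its workmodel JSON."""
    try:
        return EXAMPLES_DIR / WORKMODELS[(topology, size)]
    except KeyError:
        raise ValueError(f"no workmodel for topology={topology!r}, size={size}") from None
```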
Paper section: Section 4 (Experiment Execution).
This directory contains everything needed to re-run the experiment: Experiment Runner, a trimmed copy of muBench, and the experiment definition (`RunnerConfig.py`). It includes the workmodels and scripts used for deployment, load generation (Locust), and measurement (Prometheus, EnergiBridge).
- Contents: `experiment-runner/`, `muBench/`, and the mubench-benchmarking example (`RunnerConfig.py`, `EXPERIMENT_ARCHITECTURE.md`).
- Documentation: `4_experiment_execution/README.md`, `4_experiment_execution/HOW_TO_RUN_THE_EXPERIMENT.md`.
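To give a feel for the load-generation side, a minimal Locust user for a muBench deployment looks roughly like the sketch below. The class name, endpoint (`/s0`), and pacing are hypothetical; the actual load profile is part of the setup in this directory.

```python
from locust import HttpUser, task, between

class MuBenchUser(HttpUser):
    """Hypothetical sketch of a Locust load profile for a muBench deployment."""

    wait_time = between(1, 2)  # placeholder pacing between requests

    @task
    def call_entry_service(self):
        # Hit the service graph's entry service; muBench fans the request
        # out to downstream services according to the deployed workmodel.
        self.client.get("/s0")  # "/s0" is a placeholder endpoint
```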
Paper section: Section 5 (Results and Analysis — data).
Contains the experiment outputs: `run_table.csv` (aggregated metrics per run), `raw_runs/` (per-run Locust, Prometheus, and EnergiBridge outputs), and `RUN_TABLE_COLUMNS_EXPLANATION.md`. One raw EnergiBridge file (`run_15_repetition_5/energibridge_output.csv`) exceeds GitHub’s 100 MB limit and is omitted from the repo; its aggregated values are in `run_table.csv`, and the raw file can be reproduced by re-running that configuration (see `4_experiment_execution/HOW_TO_RUN_THE_EXPERIMENT.md`).
- Documentation: `5_results_data/README.md`.
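For a first look at the data (a minimal sketch assuming a standard pandas installation; column semantics are documented in `RUN_TABLE_COLUMNS_EXPLANATION.md`):

```python
import pandas as pd

# Aggregated per-run metrics: one row per run.
runs = pd.read_csv("5_results_data/run_table.csv")

print(runs.shape)             # (number of runs, number of columns)
print(runs.columns.tolist())  # names explained in RUN_TABLE_COLUMNS_EXPLANATION.md
print(runs.head())
```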
Paper section: Section 5 (Results and Analysis — figures and tables).
Contains Jupyter notebooks that produce the paper’s figures and tables from `5_results_data/run_table.csv`. Outputs are written to `5_results_analysis/figures/` and `5_results_analysis/tables/`.
- Documentation: `5_results_analysis/README.md`.
To reproduce the paper’s tables and figures from the provided data without re-running the experiment, run the Jupyter notebooks in `5_results_analysis/` (`Boxplots`, `Descriptive_Statistics`, `Structural_Groups_Analysis`). Input: `5_results_data/run_table.csv`. Outputs: `5_results_analysis/figures/`, `5_results_analysis/tables/`.
See `5_results_analysis/README.md` for how to activate the virtualenv and start Jupyter.
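For a quick plot outside the notebooks, the general pattern is the sketch below; `topology` and `energy_j` are hypothetical column names, so substitute the real ones from `RUN_TABLE_COLUMNS_EXPLANATION.md`.

```python
import pandas as pd
import matplotlib.pyplot as plt

runs = pd.read_csv("5_results_data/run_table.csv")

# "topology" and "energy_j" are hypothetical column names; use the real
# ones documented in 5_results_data/RUN_TABLE_COLUMNS_EXPLANATION.md.
runs.boxplot(column="energy_j", by="topology")
plt.suptitle("")  # drop the automatic grouping suptitle
plt.title("Energy per run by topology (sketch)")
plt.ylabel("energy_j")
plt.tight_layout()
plt.savefig("example_boxplot.png", dpi=150)
```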
- Section 4 — Experiment execution: follow `4_experiment_execution/HOW_TO_RUN_THE_EXPERIMENT.md` to install prerequisites, set up the server, start tunnels, and run the Experiment Runner. Results are produced under the runner’s output path; the package’s `5_results_data/` is a snapshot of that output.
- Section 5 — Analysis: use the run table (from step 1 or the provided `5_results_data/run_table.csv`) and run the analysis notebooks as described in `5_results_analysis/README.md`.
Replication package for the study on topology and scale effects on performance and energy in μBench microservices (Experiment Runner, muBench, Locust, Prometheus, EnergiBridge).