# Testing overview

This directory contains two layers of tests:

1. **Shell-based functional tests** under `testing/tests/`.
2. **Integration/boot tests** driven by `testing/testing-*` scripts plus kickstart configs in `testing/*.cfg`.

The notes below focus on what a developer needs to add a new test without describing each existing one.

## 1) Shell test suite (`testing/tests/`)

### Runner
- The runner is `testing/tests/run`.
- It accepts one or more test suite directories as arguments. Example usage from repo root:

```sh
./testing/tests/run testing/tests
```

- For each suite directory, it walks all subdirectories named `ts*` (e.g. `ts0010-put-file`).
- It executes each test’s `run` script and captures **stdout+stderr** into `ts*/output`.
- It compares the output with `ts*/expect` and prints a diff on mismatch.
- On success, `output` is removed.
- If `expect.in` exists, it is expanded into `expect` before the comparison (currently unused in the tree, but supported by the runner).
- If the test exits with code `127`, the test is **skipped** and `output` is removed.
- If a file named `known-failure` exists in the test directory, failures in that test do not fail the overall suite (see the sketch below).

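To make these rules concrete, here is a minimal sketch of the per-test logic they describe. It is not the actual `testing/tests/run` code (which also handles test discovery and `expect.in` expansion); `$t` stands for one `ts*` test directory.

```sh
# Sketch only: per-test handling for a single test directory "$t".
# Details such as the working directory are illustrative, not taken from the runner.
( cd "$t" && ./run ) >"$t/output" 2>&1; rc=$?

if [ "$rc" -eq 127 ]; then
    rm -f "$t/output"                  # exit code 127: the test is skipped
elif cmp -s "$t/expect" "$t/output"; then
    rm -f "$t/output"                  # output matches expect: the test passes
elif [ -e "$t/known-failure" ]; then
    echo "known failure: $t"           # mismatch tolerated; the suite still passes
else
    diff -u "$t/expect" "$t/output"    # print the diff; output is kept for inspection
    suite_failed=1                     # mark the overall suite as failed
fi
```
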
### Test directory layout
A typical test directory looks like:

```
testing/tests/tsNNNN-some-name/
  run       # executable script
  expect    # exact expected output
  data*     # optional input data files (varies per test)
  src/      # optional sources to build (varies per test)
```

Patterns in existing tests:
- **`put-file` tests** use helper functions in `testing/tests/put-file-sh-functions` and often build small sources in a local `src/` directory.
- **`sort-services` tests** are data-driven: the `run` script feeds the `data*` files to `tools/sort-services`, and the captured output is compared against `expect` (see the sketch below).
- **`kernel-compare` tests** run `tools/kernel-compare` and compare the collected output to `expect`.

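For illustration, a data-driven `run` script in this style might look roughly like the sketch below. It is not copied from an existing test: the exact way `tools/sort-services` is invoked (file argument vs. stdin) and the path back to the repository root are assumptions, so mirror a real `ts*` directory before reusing it.

```sh
#!/bin/sh -eu
# Sketch of a data-driven test script: run the tool under test over each
# data file and print everything to stdout; the runner captures it and
# compares the result against ./expect.
cwd="$(cd "${0%/*}" && pwd)"      # directory of this test
topdir="$cwd/../../.."            # assumed repository root

for f in "$cwd"/data*; do
    printf '== %s ==\n' "${f##*/}"
    "$topdir/tools/sort-services" "$f"   # invocation style is an assumption
done
```
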
### Adding a new shell test
1. Choose the next `tsNNNN-name` directory name under `testing/tests/`.
2. Create an executable `run` script. Keep it hermetic, use `set -e`/`-u` as appropriate, and write
   all output to stdout/stderr (the runner captures it).
3. Create an `expect` file with the **exact** expected output. A common workflow:

```sh
# From repo root
cd testing/tests/tsNNNN-name
./run > expect
```

4. If the test depends on external tools (e.g. `tools/*`), ensure those tools are built/available before running the suite; alternatively, a test can exit with code `127` to skip itself when a dependency is missing (see the snippet below).
5. If you need a permanent known failure, add an empty `known-failure` file in the test directory.
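
A possible shape for such a skip guard, placed near the top of a `run` script (the relative path back to `tools/` is illustrative and assumes the script runs from its own test directory):

```sh
# Skip this test (exit 127) if the tool it exercises is not available.
tool="../../../tools/sort-services"   # illustrative path from the test directory to the repo tree
[ -x "$tool" ] || exit 127
```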

Notes:
- The runner does not order tests by number; it uses shell glob order, so keep naming consistent.
- Use relative paths inside tests when possible, and prefer the `$cwd` patterns shown in existing tests (see the sketch below).
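
One common way to derive such a `$cwd` variable is sketched below; the exact idiom is an assumption, so mirror whatever the existing tests in the tree actually do:

```sh
# Resolve the directory containing this run script, so data files can be
# addressed relative to the test itself rather than the caller's cwd.
cwd="$(cd "${0%/*}" && pwd)"
```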

## 2) Integration/boot tests (`testing/testing-*` and `testing/*.cfg`)

These are higher-level tests that build images and boot them under QEMU for different distros (e.g.
Fedora, Ubuntu, Gentoo). The driver scripts are:

- `testing/testing-altlinux`
- `testing/testing-fedora`
- `testing/testing-gentoo`
- `testing/testing-gentoo_musl`
- `testing/testing-ubuntu`

Common helpers and inputs:
- `testing/sh-functions` contains shared logic (QEMU setup, kickstart generation, status tracking).
- `testing/packages-*` define per-distro package sets.
- Kickstart fragments:
  - `testing/ks-*-sysimage.cfg`
  - `testing/ks-*-initrd.cfg`
  - `testing/ks-*-done.cfg`
  - `testing/ks-sysimage.cfg` / `testing/ks-initrd.cfg` (fallbacks)
- Test case configs like `testing/test-root-*.cfg` define partition layouts and parameters.

### How test case configs are wired
When you pass a test name to a `testing/testing-*` script, it reads `testing/<TESTNAME>.cfg` and
extracts any `# param ...` lines embedded in it. Example header:

```
# param KICKSTART_DISKS=4
# param BOOT_DISKS=4
# param BOOT_CMDLINE="..."
```

`prepare_testsuite` in `testing/sh-functions` combines the kickstart fragments and the test case config
into `testing/cache/<vendor>/<test>/ks.cfg`, then runs the requested steps (image build, sysimage
pack, kickstart, boot, etc.). Logs go under `testing/logs/<vendor>/<test>`, and status is tracked
under `testing/status/`.

### Adding a new integration test
1. Add a new kickstart test config in `testing/` (e.g. `testing/test-root-myfeature.cfg`).
2. Include any `# param` lines needed for the test (disks, cmdline, etc.).
3. Run the appropriate distro script with your test name and steps, for example:

```sh
./testing/testing-fedora test-root-myfeature create-images build-sources build-sysimage build-kickstart run-boot
```

4. Inspect `testing/logs/<vendor>/<test>` and `testing/status/` for results (see the example below).
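
For example, with the Fedora driver and the hypothetical test name above (the exact `<vendor>` directory name is an assumption), results could be checked like this:

```sh
# Where to look after a run (paths follow the layout described above;
# "fedora" and the test name are illustrative).
ls testing/status/
ls testing/logs/fedora/test-root-myfeature/
cat testing/cache/fedora/test-root-myfeature/ks.cfg   # the combined kickstart
```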

## Quick reference
- Shell suite runner: `testing/tests/run`
- Shell tests live under: `testing/tests/ts*`
- Integration test drivers: `testing/testing-*`
- Integration test cases: `testing/test-root-*.cfg`
- Shared helper functions: `testing/sh-functions`