This repo has three distinct validation layers:
- local environment/runtime checks
- unit and policy tests
- AWS-backed end-to-end validation
The supported checkout entry point is:
```bash
source ./activate
```

Quick sanity checks:

```bash
daylily-ec version
daylily-ec runtime status
aws --version
pcluster version
session-manager-plugin
```

If that shell is wrong, fix the environment first. Do not debug AWS behavior from a broken local toolchain.
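The sanity checks above can be scripted as a PATH check before any AWS call is attempted. A minimal sketch, assuming only that these binaries should be resolvable on PATH (this is not a supported repo helper):

```python
import shutil

# Tools the local-toolchain checks above rely on.
REQUIRED_TOOLS = [
    "daylily-ec",
    "aws",
    "pcluster",
    "session-manager-plugin",
]

def missing_tools(tools):
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        print("fix the local env first; missing:", ", ".join(missing))
    else:
        print("local toolchain looks complete")
```

If anything is reported missing, stop and repair the environment rather than debugging AWS behavior.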
Useful targeted tests:
```bash
pytest -q tests/test_supported_no_pem_refs.py
pytest -q tests/test_environment_contract.py
pytest -q tests/test_ssm.py
pytest -q tests/test_script_entrypoints.py
pytest -q tests/test_ssm_e2e_runner.py
pytest -q tests/test_workflow.py
```

When you are editing a specific module, keep the test loop narrow. Run the whole suite later.
Typical broader runs:
```bash
pytest -q
pytest --maxfail=1 -q
pytest -q tests/test_ssm.py tests/test_workflow.py tests/test_ssm_e2e_runner.py
```

Two repo policy tests matter for the supported doc/runtime contract:

```bash
pytest -q tests/test_supported_no_pem_refs.py
pytest -q tests/test_environment_contract.py
```

They guard:
- supported docs and scripts staying free of retired key-file guidance
- the `environment.yaml` and `pyproject.toml` contract
- legacy env files staying archived/quarantined
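The intent of the first policy test can be sketched as a scan for retired key-file guidance across supported docs. This is an illustration of the idea only, not the actual test's implementation, and the patterns are assumptions:

```python
import re
from pathlib import Path

# Hypothetical patterns for retired key-file guidance (.pem paths, ssh -i usage).
RETIRED_PATTERNS = [re.compile(r"\.pem\b"), re.compile(r"ssh\s+-i\s")]

def offending_lines(text):
    """Return (line_number, line) pairs mentioning retired key-file guidance."""
    hits = []
    for n, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in RETIRED_PATTERNS):
            hits.append((n, line))
    return hits

def scan_tree(root):
    """Scan markdown docs under root and map each offending file to its hits."""
    results = {}
    for path in Path(root).rglob("*.md"):
        hits = offending_lines(path.read_text(encoding="utf-8"))
        if hits:
            results[str(path)] = hits
    return results
```

A guard like this fails loudly at the line level, which makes stale guidance easy to locate and remove.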
The repo ships a real AWS-backed acceptance runner:
```bash
python -m daylily_ec.ssh_to_ssm_e2e_runner --help
```

It exercises the supported lifecycle through the actual CLI/helpers:
- `daylily-ec preflight`
- `daylily-ec create`
- `daylily-ec headnode connect`
- `daylily-ec samples stage`
- `daylily-ec workflow launch`
- `daylily-ec export`
- optionally `daylily-ec delete`
Reuse an existing cluster:

```bash
AWS_PROFILE="$AWS_PROFILE" python -m daylily_ec.ssh_to_ssm_e2e_runner \
  --profile "$AWS_PROFILE" \
  --region "$REGION" \
  --cluster-name "$CLUSTER_NAME" \
  --reuse-existing-cluster \
  --reference-bucket "$REF_BUCKET" \
  --analysis-samples "$ANALYSIS_SAMPLES" \
  --workflow-live \
  --output-json "$PWD/tmp-e2e-results/$CLUSTER_NAME.json"
```

Create from a config:

```bash
AWS_PROFILE="$AWS_PROFILE" python -m daylily_ec.ssh_to_ssm_e2e_runner \
  --profile "$AWS_PROFILE" \
  --region "$REGION" \
  --region-az "$REGION_AZ" \
  --config "$DAY_EX_CFG" \
  --reference-bucket "$REF_BUCKET" \
  --analysis-samples "$ANALYSIS_SAMPLES" \
  --workflow-live \
  --output-json "$PWD/tmp-e2e-results/run.json"
```

Important runner flags:
- `--reuse-existing-cluster`
- `--cluster-name`
- `--workflow-live`
- `--workflow-timeout-minutes`
- `--interactive-session-smoke`
- `--skip-export`
- `--delete-cluster`
- `--allow-destroy`
The runner is non-destructive by default. Deletion requires both `--delete-cluster` and `--allow-destroy`.
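The two-flag delete guard can be sketched as follows. The argument names mirror the runner flags, but the function itself is illustrative, not the runner's code:

```python
def destroy_permitted(delete_cluster: bool, allow_destroy: bool) -> bool:
    """Deletion is permitted only when both delete flags are set."""
    return delete_cluster and allow_destroy

def maybe_delete(delete_cluster: bool, allow_destroy: bool) -> str:
    # Non-destructive by default: any missing flag leaves the cluster alone.
    if destroy_permitted(delete_cluster, allow_destroy):
        return "deleting cluster"
    return "skipping delete (need both --delete-cluster and --allow-destroy)"
```

Requiring two explicit flags makes an accidental teardown from a copy-pasted command much less likely.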
Start here:
```bash
daylily-ec preflight --profile "$AWS_PROFILE" --region-az "$REGION_AZ" --config "$DAY_EX_CFG" --debug
daylily-ec info
daylily-ec runtime explain
```

Then inspect:
- the terminal output from preflight/create
- the Daylily state/config directories reported by `daylily-ec info`

Cluster state:

```bash
pcluster describe-cluster --region "$REGION" -n "$CLUSTER_NAME"
```
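When scripting around `pcluster describe-cluster`, the status field can be pulled out of its JSON output. A sketch assuming the output carries a top-level `clusterStatus` key (verify the output shape for your ParallelCluster version):

```python
import json

def cluster_status(describe_output: str) -> str:
    """Extract clusterStatus from `pcluster describe-cluster` JSON output."""
    return json.loads(describe_output).get("clusterStatus", "UNKNOWN")

# Illustrative, abbreviated output shape.
sample = '{"clusterName": "demo", "clusterStatus": "CREATE_COMPLETE"}'
print(cluster_status(sample))  # CREATE_COMPLETE
```

Anything other than a `*_COMPLETE` status is worth investigating before blaming higher layers.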
Check:
```bash
aws ssm get-document \
  --name SSM-SessionManagerRunShell \
  --document-format JSON \
  --query Content \
  --output text \
  --region "$REGION" \
  --profile "$AWS_PROFILE"
```

Also verify the local plugin:

```bash
session-manager-plugin
```

Try:
```bash
daylily-ec headnode configure \
  --profile "$AWS_PROFILE" \
  --region "$REGION" \
  --cluster "$CLUSTER_NAME"
```

Then reconnect and check:

```bash
whoami
command -v day-clone
```

Inspect on the headnode:
```bash
daylily-ec headnode jobs --profile "$AWS_PROFILE" --region "$REGION" --cluster "$CLUSTER_NAME"
daylily-ec --json workflow status --profile "$AWS_PROFILE" --region "$REGION" --cluster "$CLUSTER_NAME" --session <session>
daylily-ec workflow logs --profile "$AWS_PROFILE" --region "$REGION" --cluster "$CLUSTER_NAME" --session <session> --lines 100
```

Inspect:

```bash
cat "$EXPORT_DIR/fsx_export.yaml"
```

That file is the first place to look because it records success/error and the target path.
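Since that file is small, a short script can surface the recorded outcome without eyeballing it. A sketch that parses a flat `key: value` subset of YAML with no extra dependencies; the `status`/`error` field names are assumptions for illustration, not the file's documented schema:

```python
def parse_flat_yaml(text):
    """Parse a flat `key: value` YAML subset (no nesting, no lists)."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        out[key.strip()] = value.strip()
    return out

# Hypothetical example content for the export record.
sample = """\
status: error
error: fsx export task timed out
"""

record = parse_flat_yaml(sample)
if record.get("status") != "success":
    print("export failed:", record.get("error", "<no error recorded>"))
```

For real nested YAML, reach for a proper parser instead of a subset like this.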
When the failure is ambiguous, use this order:
1. `source ./activate`
2. `daylily-ec runtime status`
3. `daylily-ec preflight --debug ...`
4. `daylily-ec cluster list ...`
5. `daylily-ec headnode connect ...`
6. inspect `/home/ubuntu/daylily-runs/<session>/`
That sequence is usually faster than jumping directly into AWS console tabs.