Uptime monitoring system for Ethora instances.
Each check can be marked as:
- critical: affects instance rollup status (failures can make the instance red, missing data can make it amber)
- optional: never affects instance rollup status (still recorded + visible)
In config you can set:

```yaml
severity: critical | optional
```
By default, journey checks should be configured as optional.
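For illustration, a check entry could look like the sketch below. All keys shown (`id`, `type`, `intervalSeconds`, `severity`, `enabled`) appear elsewhere in this document, but the exact nesting is an assumption; follow your actual config schema:

```yaml
checks:
  - id: journey            # journey checks default to optional
    type: journey
    intervalSeconds: 300
    severity: optional     # critical | optional
    enabled: true
```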
```shell
docker compose up --build
```

Open:

- http://localhost:8099 (dashboard)
- http://localhost:8099/api/summary (JSON)
- `/` shows a wallboard view (per-instance rollup + critical checks)
- Optional checks are under “Optional checks” (expandable)
- Journey checks have a Run button (manual regression run)
- Click any check name to open history: `/history.html#<checkId>`
- `GET /api/summary`: rollup view for the UI
- `GET /api/history?checkId=<id>&sinceMinutes=1440`: time-series points from the DB
- `POST /api/run-check` with `{ "checkId": "instanceId:checkId" }`: run a check now and record the result
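For scripting, a small helper can assemble the run-check request; this is an illustrative sketch (the helper name is made up here, and the base URL assumes the default dashboard port):

```javascript
// Build a POST /api/run-check request for a given check.
// The composite id format "instanceId:checkId" matches the API described above.
function buildRunCheckRequest(instanceId, checkId, base = "http://localhost:8099") {
  return {
    url: `${base}/api/run-check`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ checkId: `${instanceId}:${checkId}` }),
    },
  };
}

const req = buildRunCheckRequest("local", "journey");
console.log(req.url);          // http://localhost:8099/api/run-check
console.log(req.options.body); // {"checkId":"local:journey"}
// To actually trigger the check: await fetch(req.url, req.options)
```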
The deploy template defines two instance types:
| Instance | Name suffix | What it checks |
|---|---|---|
| local | (base name, e.g. "Astro Test") | Internal connectivity: uptime container → host services via host.docker.internal (API, MinIO) and → XMPP container via xmpp:5280. Use this to verify the stack works from inside Docker. |
| public | _Public (e.g. "Astro Test_Public") | External connectivity: internet → your server via public URLs (https://api..., https://xmpp...). Use this to verify TLS, Nginx, and DNS. |
- Local red, Public green: the uptime container cannot reach host services (e.g. `host.docker.internal` not resolving on Linux, or the backend not listening); external access works.
- Local green, Public red: the internal stack is fine, but external access is broken (DNS, firewall, SSL).
If you don't want a check to be scheduled, set:

```yaml
enabled: false
```
You can still run it from the UI “Run” button (or via POST /api/run-check).
There are three supported journey levels (configured via `checks[].id`):

- `journey` → basic flow (app + users + 1 chat + add member)
- `journey_advanced` → comprehensive flow (2 chats, membership changes, XMPP delivery, file upload)
- `journey_b2b` → tenant/B2B admin flow (create app, app token, users, chat, membership changes, cleanup)
- `ETHORA_API_BASE` (e.g. `http://host.docker.internal:8080`)
- `ETHORA_BASE_DOMAIN_NAME` (base app domain slug)
- `ETHORA_ADMIN_EMAIL`
- `ETHORA_ADMIN_PASSWORD`
Optional:
- `ETHORA_APP_NAME_PREFIX`
- `ETHORA_USERS_COUNT`
Advanced mode requires XMPP websocket connectivity to validate message delivery:
- `ETHORA_XMPP_SERVICE` (e.g. `ws://xmpp:5280/ws`)
- `ETHORA_XMPP_HOST` (e.g. `localhost` or your XMPP domain)
- `ETHORA_XMPP_MUC_SERVICE` (optional; defaults to `conference.<XMPP_HOST>`)
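The documented MUC default amounts to the following fallback; this is a sketch of the described behaviour, not the actual implementation:

```javascript
// ETHORA_XMPP_MUC_SERVICE falls back to conference.<XMPP_HOST> when unset.
function resolveMucService(env) {
  return env.ETHORA_XMPP_MUC_SERVICE || `conference.${env.ETHORA_XMPP_HOST}`;
}

console.log(resolveMucService({ ETHORA_XMPP_HOST: "localhost" }));
// conference.localhost
```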
The B2B journey signs a server token locally and exercises the tenant/admin API surface:
- `ETHORA_B2B_APP_ID`
- `ETHORA_B2B_APP_SECRET`
Compatibility fallback: `ETHORA_CHAT_APP_ID` and `ETHORA_CHAT_APP_SECRET` are used if the `ETHORA_B2B_*` variables are not set.
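The fallback described above can be sketched as follows (the helper name is illustrative; the real resolution may differ in detail):

```javascript
// Prefer ETHORA_B2B_* credentials, falling back to the legacy ETHORA_CHAT_* names.
function resolveB2bCredentials(env) {
  return {
    appId: env.ETHORA_B2B_APP_ID || env.ETHORA_CHAT_APP_ID,
    appSecret: env.ETHORA_B2B_APP_SECRET || env.ETHORA_CHAT_APP_SECRET,
  };
}

console.log(resolveB2bCredentials({ ETHORA_CHAT_APP_ID: "app1", ETHORA_CHAT_APP_SECRET: "s1" }));
// { appId: 'app1', appSecret: 's1' }
```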
There is an optional check type:

```yaml
type: push_validate
```
It logs into the Ethora API using the same env vars as journeys (`ETHORA_API_BASE`, `ETHORA_BASE_DOMAIN_NAME`, `ETHORA_ADMIN_EMAIL`, `ETHORA_ADMIN_PASSWORD`) and calls `POST /v1/push/validate/{appId}` to perform a Firebase dry-run validation.
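A check entry using this type might look like the sketch below; the `appId` key is an assumption (the endpoint takes an app id, but the exact config key is not documented here), so consult the actual schema:

```yaml
checks:
  - id: push_firebase
    type: push_validate
    severity: optional
    # appId: <your-app-id>   # assumption: target app for /v1/push/validate/{appId}
```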
By default, journey runs create their own temporary chats and operate there (so membership/removal tests are isolated).
If you want to watch a journey run live in an existing chat room, you can provide an observer room and the journey will stream high-level progress updates into that room (best-effort).
- From the UI: click Run on a journey check and enter an Observer room JID / name when prompted.
- From env (default for all runs): set `ETHORA_JOURNEY_OBSERVER_ROOM` to a room name or full room JID.
Notes:
- The observer room is not used for the journey’s membership tests; it’s only an operator “log stream”.
- You can paste either:
  - a full room JID like `APPID_operator@conference.xmpp.example.com`, or
  - a short room name / suffix like `operator` (it will be prefixed as `APPID_operator` for multi-tenant ejabberd).
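The prefixing behaviour above can be sketched as a hypothetical helper (the real normalization may differ in detail):

```javascript
// Normalize an operator-supplied observer room into a full MUC JID.
// Short names get the app-id prefix used by multi-tenant ejabberd.
function resolveObserverRoom(input, appId, mucDomain) {
  if (input.includes("@")) return input; // already a full JID
  const name = input.startsWith(`${appId}_`) ? input : `${appId}_${input}`;
  return `${name}@${mucDomain}`;
}

console.log(resolveObserverRoom("operator", "APPID", "conference.xmpp.example.com"));
// APPID_operator@conference.xmpp.example.com
```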
Run the server continuously; it schedules each check by `intervalSeconds` and records results.
If you prefer running checks via cron:
- Build once: `npm run build`
- Run one tick and exit: `node dist/runOnce.js`
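For example, a crontab entry that runs one tick every five minutes might look like this (install path and log path are illustrative):

```
*/5 * * * * cd /opt/ethora-uptime && node dist/runOnce.js >> /var/log/ethora-uptime.log 2>&1
```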
- This project is intended to be used both as a standalone public repo and as a monoserver module (git submodule).