A free, forkable, mobile-first analytics web app that turns Orderbird POS transactions into banking-grade growth metrics (cohorts, retention, LTV) for non-technical restaurant owners.
V1 tenant: one ramen shop (the founder's friend). Architecture is multi-tenant-ready from day 1 so any restaurant owner can fork or self-host.
- Frontend: SvelteKit 2 + Svelte 5 runes, Cloudflare Pages (`adapter-cloudflare`)
- Backend: Supabase Postgres + Edge Functions + `pg_cron`
- Auth: `@supabase/ssr` (cookie-based SSR). Never `@supabase/auth-helpers-sveltekit`.
- Extraction: Python 3.12 + Playwright, hosted on GitHub Actions cron
- Insights: Claude API via Supabase Edge Function, triggered by `pg_cron` → `pg_net`
See CLAUDE.md for the full tech-stack rationale and "What NOT to Use" list.
- Fork and clone this repo.
- Create two Supabase projects: `rba-dev` and `rba-test`.
- Copy `.env.test.example` → `.env.test` and fill in the TEST project's URL, anon key, and service-role key.
- Run `npm install`.
- Apply migrations to DEV: `supabase login && supabase link --project-ref <dev-ref> && supabase db push`
- Apply migrations to TEST: repeat `supabase link` + `supabase db push` against the TEST project ref.
- In both Supabase projects: Authentication → Hooks → Custom Access Token Hook → select `public.custom_access_token_hook`. See `docs/reference/auth-hook-registration.md` for the exact dashboard steps. Without this step, RLS will deny every query silently.
- Run `npx vitest run` — all Phase 1 integration tests should go green against the TEST project.
- Run `bash scripts/ci-guards.sh` — all four CI guards should exit 0.
- Create your first user via the Supabase Dashboard → Authentication → Users, and insert a row into `public.memberships` linking that user to the seeded restaurant (see `supabase/migrations/0005_seed_tenant.sql`).
- Push to a branch on GitHub; the `CI Guards`, `Tests`, and `DB Migrations (DEV)` workflows run automatically.
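If every query comes back empty after setup, one quick sanity check is decoding the session's access token and confirming the hook actually injected a tenant claim. The sketch below assumes the claim key is `restaurant_id`; check the `custom_access_token_hook` migration for the real key it sets.

```typescript
// Decode a JWT payload WITHOUT verifying the signature (debug use only)
// and check whether the custom access token hook added the tenant claim.
// ASSUMPTION: the claim key is `restaurant_id` (verify against your hook SQL).
function decodeJwtPayload(jwt: string): Record<string, unknown> {
  const payload = jwt.split(".")[1];
  // base64url → base64, then decode to a JSON string
  const b64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

function hasTenantClaim(jwt: string): boolean {
  return "restaurant_id" in decodeJwtPayload(jwt);
}
```

If the claim is missing, the hook registration step above was skipped or applied to the wrong project.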
The v1 ingest is CSV-driven. No Orderbird scraper yet.
- Produce a pre-joined CSV using Claude co-working (see `orderbird_data/README.md`) OR drop an existing one at: `orderbird_data/5-JOINED_DATA_YYYY-MM-DD/ramen_bones_order_items.csv`
- Provision your tenant row (v1 has no signup UI per D-19):

  ```sql
  INSERT INTO public.restaurants (id, name, timezone)
  VALUES (gen_random_uuid(), 'Your Restaurant', 'Europe/Berlin')
  RETURNING id;
  ```

  Copy the returned UUID — you'll need it for memberships and `RESTAURANT_ID` in `.env`.
- Stage the CSV into Supabase Storage and run the loader:

  ```bash
  npx tsx scripts/ingest/upload-csv.ts <path-to-csv> orderbird-raw dev/ramen_bones_order_items.csv
  npm run ingest
  ```

- Verify: `psql "$SUPABASE_DB_URL" -c "SELECT COUNT(*) FROM public.transactions;"` returns > 0.
- Apply migrations 0010–0013 (cohort_mv, kpi_daily_mv, leaf views, refresh cron): `supabase db push`
- Enable the `pg_cron` extension via Supabase Dashboard → Database → Extensions → pg_cron → Enable.
- Verify the nightly refresh job is scheduled:

  ```sql
  SELECT jobname, schedule FROM cron.job WHERE jobname = 'refresh-analytics-mvs';
  -- expect: 0 3 * * *
  ```

- Force a first refresh: `SELECT refresh_analytics_mvs();`
- Fork this repo, clone locally.
- Connect the forked repo to Cloudflare Pages:
  - Cloudflare dashboard → Workers & Pages → Create → Pages → Connect to Git
  - Build command: `npm run build`
  - Build output: `.svelte-kit/cloudflare`
- In the CF Pages project, Settings → Environment variables, set every var in `.env.example` under `# --- destination: cf pages project env ---`.
- Push to `main` — CF Pages builds and deploys.
- Create your user via Supabase Dashboard → Authentication → Users.
- Link user to tenant via memberships row (adjust IDs):

  ```sql
  INSERT INTO public.memberships (user_id, restaurant_id, role)
  VALUES ('<auth-user-id>', '<restaurant-id>', 'owner');
  ```

- Visit your CF Pages URL, sign in, see the dashboard.
- Get an Anthropic API key: https://console.anthropic.com/settings/keys
- Set the Supabase secrets:

  ```bash
  supabase secrets set ANTHROPIC_API_KEY="sk-ant-..."
  supabase secrets set SUPABASE_SERVICE_ROLE_KEY="<your-service-role-jwt>"
  ```

- Deploy the Edge Function: `supabase functions deploy generate-insight --no-verify-jwt`
- Note the function URL printed by `deploy` — it looks like `https://<ref>.supabase.co/functions/v1/generate-insight`.
- Store URL + bearer in Supabase Vault (Dashboard → Database → Vault):
  - Secret `generate_insight_url` = full function URL
  - Secret `generate_insight_bearer` = your service_role JWT (same as `SUPABASE_SERVICE_ROLE_KEY`)
- Apply migrations 0016 + 0017: `supabase db push`
- Verify the insights cron is scheduled:

  ```sql
  SELECT jobname, schedule FROM cron.job WHERE jobname = 'generate-insights';
  -- expect: 15 3 * * *
  ```

- Smoke-test the function manually (optional — the cron will fire at 03:15 UTC):

  ```bash
  curl -X POST "<function-url>" \
    -H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY" \
    -H "Content-Type: application/json" \
    -d '{}'
  ```

- Verify: `SELECT business_date, substring(headline, 1, 40), fallback_used FROM public.insights ORDER BY generated_at DESC LIMIT 1;` returns a row.
- Reload your dashboard — the insight card renders at the top.
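If you prefer scripting the smoke test over curl, the same request can be built in TypeScript (Node 18+ with global `fetch`). This is a sketch; the function URL and bearer are your own values, not defaults shipped by the repo.

```typescript
// Build the POST request the Edge Function expects: bearer auth, JSON body.
// Mirrors the curl smoke test; `bearer` is your service_role JWT.
function buildInsightRequest(bearer: string): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${bearer}`,
      "Content-Type": "application/json",
    },
    body: "{}",
  };
}

// usage (URL and env var are your own deployment's values):
// const res = await fetch(functionUrl, buildInsightRequest(process.env.SUPABASE_SERVICE_ROLE_KEY!));
// console.log(res.status);
```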
- LICENSE: this repo ships MIT by default (see `LICENSE`). Keep it or replace with your org's license.
- Flip your fork to public on GitHub if it isn't already.
- Add topics: `analytics`, `restaurant-analytics`, `sveltekit`, `supabase`, `cloudflare-pages`, `forkable`, `pos-integration`.
- Add a one-line description: "Free, forkable, mobile-first analytics for restaurant owners. Turns Orderbird POS transactions into banking-grade cohort/retention/LTV metrics."
- Run the dry-run smoke check: `./scripts/fork-dryrun.sh`. Exit 0 means every required file, migration, and env-var documentation is in place.
Phase 1 validates session persistence at the supabase-js setSession layer only.
Phase 4 copies the reference files in docs/reference/ into src/ and re-validates
FND-06 end-to-end through an actual browser refresh via @supabase/ssr cookie
hydration:
- `docs/reference/hooks.server.ts.example` → `src/hooks.server.ts`
- `docs/reference/+layout.server.ts.example` → `src/routes/+layout.server.ts`
- `docs/reference/login/` → `src/routes/login/`
- Dashboard UI (Phase 4)
- Orderbird scraper (Phase 2)
- Analytics SQL — cohorts, retention, LTV (Phase 3)
- Claude nightly insights (Phase 5)
Phase 1 is pure infrastructure: tenancy schema, auth hook, RLS, materialized-view wrapper template, CI guards, and the integration test harness.
Phase 2 ships a CSV loader that reads an Orderbird export from Supabase Storage and upserts it into public.stg_orderbird_order_items (raw line items) + public.transactions (deduped, card-hash-scoped customer rows).
Env vars (see .env.example):
- `SUPABASE_URL` — DEV or PROD Supabase project URL
- `SUPABASE_SERVICE_ROLE_KEY` — service-role key (server-only, never commit)
- `RESTAURANT_ID` — tenant UUID from `supabase/migrations/0005_seed_tenant.sql`
- `ORDERBIRD_CSV_BUCKET` — typically `orderbird-raw`
- `ORDERBIRD_CSV_OBJECT` — object path inside the bucket, e.g. `dev/ramen_bones_order_items.csv`
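A fail-fast check before running the loader can save a confusing half-run. This is a sketch, not code from the repo; the variable names match `.env.example` above.

```typescript
// Verify every loader env var is set and non-blank before doing any work.
const REQUIRED = [
  "SUPABASE_URL",
  "SUPABASE_SERVICE_ROLE_KEY",
  "RESTAURANT_ID",
  "ORDERBIRD_CSV_BUCKET",
  "ORDERBIRD_CSV_OBJECT",
] as const;

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED.filter((k) => !env[k] || env[k]!.trim() === "");
}

// usage:
// const missing = missingEnvVars(process.env);
// if (missing.length) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```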
PII note: the CSV contains card PANs and must never hit the repo. See docs/reference/pii-columns.txt for the hashed-only columns the loader persists.
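To illustrate the hashed-only principle, here is one way a stable customer key could be derived from a card PAN so the raw PAN never persists. This is a sketch only; the loader's actual scheme (algorithm, salting) lives in the repo, and the per-restaurant salt shown here is an assumption.

```typescript
import { createHash } from "node:crypto";

// Derive a deterministic, non-reversible customer key from a card PAN.
// ASSUMPTION: a per-restaurant salt scopes hashes to one tenant; check the
// repo's loader for the real salting scheme before relying on this.
function cardHash(pan: string, restaurantSalt: string): string {
  const normalized = pan.replace(/\s+/g, ""); // strip spaces from formatted PANs
  return createHash("sha256").update(`${restaurantSalt}:${normalized}`).digest("hex");
}
```

Same PAN plus same salt always yields the same 64-character hex key, which is what makes repeat-visit tracking possible without storing the PAN.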
```bash
# Stage CSV into Supabase Storage (one-off)
npx tsx scripts/ingest/upload-csv.ts ./orderbird_data/.../ramen_bones_order_items.csv orderbird-raw dev/ramen_bones_order_items.csv

# Dry-run: prints the report without touching DB
npm run ingest -- --dry-run

# Write mode: upserts staging + transactions
npm run ingest
```

The loader emits a single JSON line on stdout:
| Field | Meaning |
|---|---|
| `rows_read` | Raw CSV lines parsed (one per Orderbird line item) |
| `invoices_deduped` | Unique positive-total invoices destined for `transactions` |
| `staging_upserted` | Rows written to `stg_orderbird_order_items` (= `rows_read` in write mode) |
| `transactions_new` | Net-new invoice rows inserted this run |
| `transactions_updated` | Existing invoices touched by the upsert path |
| `cash_rows_excluded` | Cash line items excluded from card-hash customer tracking |
| `missing_worldline_rows` | Card rows where the Orderbird ↔ Worldline join failed — monitor this |
| `errors` | Parse/upsert errors (should always be 0) |
If `missing_worldline_rows` grows meaningfully run-over-run, Worldline is silently dropping card references. The founder should investigate the POS export before we trust customer cohorts.
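One way to make "grows meaningfully" concrete: parse the loader's JSON report line and compare the missing-row rate between runs. A sketch under assumptions, not repo code; the 10% relative-growth threshold is arbitrary, so tune it to your volume.

```typescript
// Compare the missing_worldline_rows rate between two ingest runs.
// Field names match the loader's report table; only the two fields used
// here are typed. The `maxGrowth` threshold is an assumption.
type IngestReport = { rows_read: number; missing_worldline_rows: number };

function worldlineJoinDegraded(
  prev: IngestReport,
  curr: IngestReport,
  maxGrowth = 0.1,
): boolean {
  const prevRate = prev.missing_worldline_rows / Math.max(prev.rows_read, 1);
  const currRate = curr.missing_worldline_rows / Math.max(curr.rows_read, 1);
  // Flag only when the rate grew beyond the threshold AND the absolute
  // count also increased (guards against tiny-denominator noise).
  return (
    currRate > prevRate * (1 + maxGrowth) &&
    curr.missing_worldline_rows > prev.missing_worldline_rows
  );
}

// usage: const report: IngestReport = JSON.parse(stdoutLine);
```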
Re-running `npm run ingest` on the same CSV is a no-op at the row-count level: `transactions_new=0`, physical row counts unchanged. The natural key `(restaurant_id, source_tx_id)` plus a 2-day overlap window drives the upsert. See `.planning/phases/02-ingestion/02-04-REAL-RUN.md` for a verified real-data run.
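The idempotency property in miniature: dedupe by the natural key, so replaying the same rows inserts nothing new. This is a pure-data sketch of what the SQL upsert guarantees, not the loader itself.

```typescript
// Upsert keyed on (restaurant_id, source_tx_id): a replay of identical
// rows produces inserted=0 and leaves the row count unchanged.
type TxRow = { restaurant_id: string; source_tx_id: string; total_cents: number };

function upsertTransactions(
  existing: Map<string, TxRow>,
  incoming: TxRow[],
): { inserted: number; updated: number } {
  let inserted = 0;
  let updated = 0;
  for (const row of incoming) {
    const key = `${row.restaurant_id}:${row.source_tx_id}`; // natural key
    if (existing.has(key)) updated++;
    else inserted++;
    existing.set(key, row);
  }
  return { inserted, updated };
}
```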
tests/ingest/fixtures/README.md documents the 11 semantic scenarios the loader handles (split bills, negative invoices, missing worldline joins, etc.) and is the source of truth for the founder-facing interpretation.
- `.planning/PROJECT.md` — vision and non-negotiables
- `.planning/REQUIREMENTS.md` — FND-01..FND-08 acceptance criteria
- `.planning/ROADMAP.md` — five-phase roadmap
- `CLAUDE.md` — tech-stack rationale and forbidden patterns