diff --git a/.claude/agents/api-documenter.md b/.claude/agents/api-documenter.md
new file mode 100644
index 0000000..4e332e5
--- /dev/null
+++ b/.claude/agents/api-documenter.md
@@ -0,0 +1,22 @@
+---
+name: api-documenter
+description: Generate REST API documentation by tracing route handlers
+---
+
+# API Documenter
+
+Analyze the API routes and handlers to generate endpoint documentation.
+
+## Instructions
+
+1. Read `src/index.ts` to identify all registered routes
+2. Read each handler in `src/handlers/` to understand request parameters and response shapes
+3. Read helpers in `src/helpers/` for business logic details where relevant
+4. Read types in `src/types/` for data structures
+5. For each endpoint, document:
+   - HTTP method and path
+   - URL parameters (if any)
+   - Query parameters (if any)
+   - Response format and shape
+   - Example response values
+6. Output the documentation in a clear format
diff --git a/.claude/settings.json b/.claude/settings.json
new file mode 100644
index 0000000..a82736f
--- /dev/null
+++ b/.claude/settings.json
@@ -0,0 +1,26 @@
+{
+	"hooks": {
+		"PreToolUse": [
+			{
+				"matcher": "Edit|Write",
+				"hooks": [
+					{
+						"type": "command",
+						"command": "FILE=$(jq -r '.tool_input.file_path') && echo \"$FILE\" | grep -qE '(package-lock\\.json|src/database/migrations/)' && echo 'BLOCK: Do not edit generated files (lock files, migrations)' >&2 && exit 2 || exit 0"
+					}
+				]
+			}
+		],
+		"PostToolUse": [
+			{
+				"matcher": "Edit|Write",
+				"hooks": [
+					{
+						"type": "command",
+						"command": "jq -r '.tool_input.file_path' | xargs npx prettier --write 2>/dev/null || true"
+					}
+				]
+			}
+		]
+	}
+}
diff --git a/.claude/skills/db-migrate/SKILL.md b/.claude/skills/db-migrate/SKILL.md
new file mode 100644
index 0000000..bc5bf94
--- /dev/null
+++ b/.claude/skills/db-migrate/SKILL.md
@@ -0,0 +1,17 @@
+---
+name: db-migrate
+description: Generate and review Drizzle ORM database migrations after schema changes
+disable-model-invocation: true
+---
+
+# Database Migration Workflow
+
+Run the full Drizzle ORM migration workflow after schema changes.
+
+## Steps
+
+1. Review changes in `src/database/schema.ts` to understand what changed
+2. Run `npm run db:generate` to generate a new migration SQL file
+3. Read the newly generated SQL file in `src/database/migrations/` and review it for correctness
+4. Report the migration SQL to the user for confirmation before proceeding
+5. Only run `npm run db:migrate` if the user explicitly confirms
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 0000000..3c28dbd
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1,42 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+## Commands
+
+- `npm run dev` — Local dev server (staging env, port 8787)
+- `npm run build` — Dry-run deploy to generate dist
+- `npm run format` — Prettier format all files
+- `npm run lint` — ESLint check (config at `.github/linters/eslint.config.mjs`)
+- `npm run lint:fix` — ESLint autofix
+- `npm run db:generate` — Generate Drizzle migration from schema changes
+- `npm run db:migrate` — Apply pending migrations to the database
+- `npm run db:seed` — Seed reference data (denoms, operation types)
+
+## Architecture
+
+Cloudflare Worker for the cheqd blockchain network. Two entry points in `src/index.ts`:
+
+1. **`fetch`** — HTTP API via itty-router. Serves supply data, account balances, and identity analytics.
+2. **`scheduled`** — Hourly cron trigger that updates cached circulating supply balances and syncs identity data from BigDipper GraphQL into PostgreSQL.
+
+### Data flow
+
+- **Supply/balance endpoints** — Handlers in `src/handlers/` call external APIs (`src/api/`) via helpers. The Cosmos SDK REST API (`REST_API`) provides account data; BigDipper GraphQL (`GRAPHQL_API`) provides total supply and identity transactions.
+- **Circulating supply** — Watchlist addresses are stored in Cloudflare KV, grouped by `group_N:` prefix. The hourly cron processes one group per hour (24 groups = 24 hours), updating each address's cached balance breakdown. The circulating supply endpoint subtracts all watchlist balances from total supply.
+- **Identity analytics sync** — `SyncService` in `src/helpers/identity.ts` incrementally syncs DID and resource transactions from BigDipper into PostgreSQL (via Hyperdrive). It tracks the last synced block height to avoid re-processing, with composite-key deduplication (txHash + operationType + entityId).
+- **Analytics queries** — `src/handlers/analytics.ts` queries the PostgreSQL tables with filtering (date range, operation type, denom, feePayer, didId, success) and pagination. Supports CSV export.
+
+### Database
+
+PostgreSQL accessed through Cloudflare Hyperdrive. Schema in `src/database/schema.ts` mirrors tables for mainnet and testnet (e.g., `did_mainnet`/`did_testnet`, `resource_mainnet`/`resource_testnet`). Each network has its own enum types, denom lookup table, and operation types lookup table. The `TABLES` map in `src/helpers/identity.ts` selects the correct table set by network.
+
+### Environment
+
+All env vars and bindings are typed in `src/worker-types.d.ts` as the global `Env` interface. Key bindings: `HYPERDRIVE` (PostgreSQL connection pooler), `CIRCULATING_SUPPLY_WATCHLIST` (KV namespace). Secrets (`WEBHOOK_URL`) are set via `wrangler secret put`, not in config files.
+
+## Conventions
+
+- Prettier: tabs, single quotes, 120 char width, trailing commas (es5)
+- Conventional commits — semantic-release automates versioning and changelogs
+- All token amounts are converted from the lowest denom (`ncheq`) to the main denom (`CHEQ`) using `TOKEN_EXPONENT` (10^9)
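
The PreToolUse guard can be exercised locally by piping a sample tool-call payload (the JSON Claude Code writes to the hook's stdin) into the same command. The sketch below assumes `jq` is installed and uses exit code 2 with the message on stderr, since that is how Claude Code PreToolUse hooks signal a block; the file paths are illustrative.

```shell
# Guard logic from the PreToolUse hook; exit code 2 signals a block to Claude Code.
GUARD='FILE=$(jq -r ".tool_input.file_path") && echo "$FILE" | grep -qE "(package-lock\.json|src/database/migrations/)" && echo "BLOCK: Do not edit generated files (lock files, migrations)" >&2 && exit 2 || exit 0'

# Helper: simulate the hook invocation for a given file path.
run_guard() {
	if echo "{\"tool_input\":{\"file_path\":\"$1\"}}" | sh -c "$GUARD" 2>/dev/null; then
		echo "$1: allowed"
	else
		echo "$1: blocked"
	fi
}

run_guard "package-lock.json"        # package-lock.json: blocked
run_guard "src/handlers/supply.ts"   # src/handlers/supply.ts: allowed
```

Note the `|| exit 0` fallback: without it, a non-matching `grep -qE` would leave the pipeline with a non-zero status, which Claude Code would report as a hook error on every ordinary edit.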