Merged
22 changes: 22 additions & 0 deletions .claude/agents/api-documenter.md
@@ -0,0 +1,22 @@
---
name: api-documenter
description: Generate REST API documentation by tracing route handlers
---

# API Documenter

Analyze the API routes and handlers to generate endpoint documentation.

## Instructions

1. Read `src/index.ts` to identify all registered routes
2. Read each handler in `src/handlers/` to understand request parameters and response shapes
3. Read helpers in `src/helpers/` for business logic details where relevant
4. Read types in `src/types/` for data structures
5. For each endpoint, document:
- HTTP method and path
- URL parameters (if any)
- Query parameters (if any)
- Response format and shape
- Example response values
6. Output the documentation in a clear format
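
One possible rendering of a step-5 entry (illustrative only; the endpoint and fields shown are invented to demonstrate the format, not taken from the codebase):

```markdown
### GET /balances/:address

- **URL parameters:** `address` — cheqd account address
- **Query parameters:** none
- **Response:** `200 OK`, JSON
- **Example:** `{ "balance": "123.456", "denom": "CHEQ" }`
```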
26 changes: 26 additions & 0 deletions .claude/settings.json
@@ -0,0 +1,26 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "FILE=$(jq -r '.tool_input.file_path') && echo \"$FILE\" | grep -qE '(package-lock\\.json|src/database/migrations/)' && echo 'BLOCK: Do not edit generated files (lock files, migrations)' && exit 1 || exit 0"
}
]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "jq -r '.tool_input.file_path' | xargs npx prettier --write 2>/dev/null || true"
}
]
}
]
}
}
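
The PreToolUse command receives the tool call as JSON on stdin; its path-matching logic can be exercised locally (a sketch assuming `jq` is installed — the `is_blocked` helper is for illustration, not part of the config):

```shell
#!/bin/sh
# Simulate the JSON payload Claude Code pipes to the hook on stdin,
# and check whether the file path matches the blocked-path pattern.
is_blocked() {
	FILE=$(echo "$1" | jq -r '.tool_input.file_path')
	echo "$FILE" | grep -qE '(package-lock\.json|src/database/migrations/)'
}

is_blocked '{"tool_input":{"file_path":"package-lock.json"}}' && echo 'blocked'
is_blocked '{"tool_input":{"file_path":"src/index.ts"}}' || echo 'allowed'
```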
17 changes: 17 additions & 0 deletions .claude/skills/db-migrate/SKILL.md
@@ -0,0 +1,17 @@
---
name: db-migrate
description: Generate and review Drizzle ORM database migrations after schema changes
disable-model-invocation: true
---

# Database Migration Workflow

Run the full Drizzle ORM migration workflow after schema changes.

## Steps

1. Review changes in `src/database/schema.ts` to understand what changed
2. Run `npm run db:generate` to generate a new migration SQL file
3. Read the newly generated SQL file in `src/database/migrations/` and review it for correctness
4. Report the migration SQL to the user for confirmation before proceeding
5. Only run `npm run db:migrate` if the user explicitly confirms
42 changes: 42 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,42 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Commands

- `npm run dev` — Local dev server (staging env, port 8787)
- `npm run build` — Dry-run deploy to generate dist
- `npm run format` — Prettier format all files
- `npm run lint` — ESLint check (config at `.github/linters/eslint.config.mjs`)
- `npm run lint:fix` — ESLint autofix
- `npm run db:generate` — Generate Drizzle migration from schema changes
- `npm run db:migrate` — Push schema to database
- `npm run db:seed` — Seed reference data (denoms, operation types)

## Architecture

Cloudflare Worker for the cheqd blockchain network. Two entry points in `src/index.ts`:

1. **`fetch`** — HTTP API via itty-router. Serves supply data, account balances, and identity analytics.
2. **`scheduled`** — Hourly cron trigger that updates cached circulating supply balances and syncs identity data from BigDipper GraphQL into PostgreSQL.
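
In module-Worker terms, the two entry points follow the standard export shape — a generic sketch, not the repo's actual code (`Env` here is a placeholder; the real interface lives in `src/worker-types.d.ts`):

```typescript
// Generic sketch of a Worker module exposing both entry points (placeholder types)
type Env = Record<string, unknown>;

const worker = {
	async fetch(request: Request, env: Env): Promise<Response> {
		// In the real worker, itty-router dispatches to handlers in src/handlers/
		return new Response('ok');
	},
	async scheduled(controller: { cron: string }, env: Env): Promise<void> {
		// Hourly cron: update cached circulating-supply balances, sync identity data
	},
};
```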

### Data flow

- **Supply/balance endpoints** — Handlers in `src/handlers/` call external APIs (`src/api/`) via helpers. The Cosmos SDK REST API (`REST_API`) provides account data; BigDipper GraphQL (`GRAPHQL_API`) provides total supply and identity transactions.
- **Circulating supply** — Watchlist addresses are stored in Cloudflare KV, grouped by `group_N:` prefix. The hourly cron processes one group per hour (24 groups = 24 hours), updating each address's cached balance breakdown. The circulating supply endpoint subtracts all watchlist balances from total supply.
- **Identity analytics sync** — `SyncService` in `src/helpers/identity.ts` incrementally syncs DID and resource transactions from BigDipper into PostgreSQL (via Hyperdrive). It tracks the last block height to avoid re-processing, with composite key deduplication (txHash + operationType + entityId).
- **Analytics queries** — `src/handlers/analytics.ts` queries the PostgreSQL tables with filtering (date range, operation type, denom, feePayer, didId, success) and pagination. Supports CSV export.
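
The group-per-hour rotation and the composite deduplication key described above can be sketched as follows (helper names are illustrative, not taken from the repo):

```typescript
// Illustrative sketch of the hourly watchlist rotation (names invented)
const GROUP_COUNT = 24; // one KV group per hour of the day

function groupPrefixForHour(date: Date): string {
	// Addresses are stored under `group_N:` prefixes; pick this hour's group
	return `group_${date.getUTCHours() % GROUP_COUNT}:`;
}

// Composite deduplication key for synced identity transactions
function dedupKey(txHash: string, operationType: string, entityId: string): string {
	return `${txHash}|${operationType}|${entityId}`;
}

console.log(groupPrefixForHour(new Date(Date.UTC(2025, 0, 1, 7)))); // group_7:
```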

### Database

PostgreSQL accessed through Cloudflare Hyperdrive. Schema in `src/database/schema.ts` mirrors tables for mainnet and testnet (e.g., `did_mainnet`/`did_testnet`, `resource_mainnet`/`resource_testnet`). Each network has its own enum types, denom lookup table, and operation types lookup table. The `TABLES` map in `src/helpers/identity.ts` selects the correct table set by network.

### Environment

All env vars and bindings are typed in `src/worker-types.d.ts` as the global `Env` interface. Key bindings: `HYPERDRIVE` (PostgreSQL connection pooler), `CIRCULATING_SUPPLY_WATCHLIST` (KV namespace). Secrets (`WEBHOOK_URL`) are set via `wrangler secret put`, not in config files.
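
A hedged sketch of what that `Env` interface might look like, with minimal stand-in binding types so the snippet is self-contained (the real types come from `@cloudflare/workers-types`, and the actual shape may differ):

```typescript
// Minimal stand-ins for the Cloudflare binding types (sketch only)
type Hyperdrive = { connectionString: string };
type KVNamespace = { get(key: string): Promise<string | null> };

// Hypothetical shape of the global Env interface described above
interface Env {
	HYPERDRIVE: Hyperdrive; // PostgreSQL connection pooler
	CIRCULATING_SUPPLY_WATCHLIST: KVNamespace; // watchlist addresses in KV
	WEBHOOK_URL: string; // secret, set via `wrangler secret put`
}

const example: Env = {
	HYPERDRIVE: { connectionString: 'postgres://localhost:5432/db' },
	CIRCULATING_SUPPLY_WATCHLIST: { get: async () => null },
	WEBHOOK_URL: 'https://example.invalid/hook',
};
```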

## Conventions

- Prettier: tabs, single quotes, 120 char width, trailing commas (es5)
- Conventional commits — semantic-release automates versioning and changelogs
- All token amounts are converted from the lowest denom (`ncheq`) to the main denom (`CHEQ`) using `TOKEN_EXPONENT` (10^9)
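
As a sketch of that conversion (assuming `TOKEN_EXPONENT` names the exponent 9, per the 10^9 factor above — the constant's actual value in the repo may differ):

```typescript
// Sketch of the lowest-denom to main-denom conversion (assumed constant value)
const TOKEN_EXPONENT = 9; // 10^9 ncheq per CHEQ

function ncheqToCheq(amount: bigint): number {
	return Number(amount) / 10 ** TOKEN_EXPONENT;
}

console.log(ncheqToCheq(2_500_000_000n)); // 2.5
```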