This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Never chain Bash commands with `&&`, `;`, or `cd ... &&`. Use separate Bash calls instead.
- `bun dev` - Start hot-reloading development server (watches TypeScript files)
- `bun start` - Start production server
- `bun test` - Run TypeScript tests (uses `bun:test`, files in `platform/test/`)
- `bun py <service> --input <input.json>` - Run a Python service directly via entry.py (bypasses HTTP, same execution path as the server)
- `poetry install` - Install main Python dependencies (creates `.venv` in project)
- `poetry install --with ft` - Also install finetuning dependencies (large models)
- `poetry add <module>` - Add a new Python dependency
Hybrid TypeScript/Python platform providing AI and data services for the OpenFn platform. A Bun + Elysia server routes HTTP/WebSocket/SSE requests to Python (or TypeScript) service modules.
- Entry: `platform/src/index.ts` → `platform/src/server.ts`
- Framework: Elysia on the Bun runtime
- Bridge: `platform/src/bridge.ts` - Spawns Python as child processes, manages temp files in `tmp/data/`, captures stdout for log/event routing
- Service discovery: `platform/src/util/describe-modules.ts` - Auto-mounts any `services/<name>/` directory not starting with `_`. Detects service type by checking for a `<name>.py` (Python) or `<name>.ts` (TypeScript) index file.
Each service lives in `services/<name>/` with an index file `services/<name>/<name>.py` (or `.ts`) exporting a `main()` function.
- Python: `main(data_dict: dict) -> dict` — see `.claude/rules/python-services.md` for details on entry.py, imports, and code quality
- TypeScript: `export default (port, payload, onLog?) => Promise<any>`
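The Python contract above can be illustrated with a minimal sketch. This mirrors what the echo service does (return its input); the actual implementation in the repo may differ:

```python
# Minimal sketch of a Python service following the main(data_dict) -> dict
# contract. A real service would live at services/<name>/<name>.py; this
# one simply echoes its payload back to the caller.
def main(data_dict: dict) -> dict:
    return data_dict


print(main({"message": "hello"}))
```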
Every mounted service gets three endpoints automatically:
- `POST /services/<name>` - Synchronous JSON request/response
- `POST /services/<name>/stream` - SSE streaming (events: `log`, `complete`, `error`, plus custom event types)
- `WS /services/<name>` - WebSocket with `start`/`log`/`complete` events
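A client consuming the SSE endpoint receives a stream of typed events. The sketch below parses such a stream, assuming standard SSE framing (`event:` line, then a `data:` line with a JSON body, blank-line separated); the exact event payloads shown are illustrative, not taken from the real services:

```python
import json


def parse_sse(raw: str) -> list[tuple[str, dict]]:
    """Parse an SSE stream into (event_type, payload) pairs.

    Assumes standard SSE framing: an `event:` line followed by a
    `data:` line carrying a JSON body, events separated by blank lines.
    """
    events = []
    event_type = None
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:") and event_type:
            events.append((event_type, json.loads(line[len("data:"):].strip())))
            event_type = None
    return events


stream = (
    "event: log\n"
    'data: {"message": "starting"}\n'
    "\n"
    "event: complete\n"
    'data: {"result": 42}\n'
    "\n"
)
print(parse_sse(stream))
```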
- `create_logger(name)` - Logger whose output streams to WebSocket/SSE clients (use `print()` for private/debug logging only)
- `ApolloError(code, message, type, details)` - Dataclass exception; returned errors with a `code` field get mapped to HTTP status codes by the bridge
- `apollo(name, payload)` - Call another Apollo service via HTTP (for inter-service communication)
- `DictObj(dict)` - Dot-accessible dictionary wrapper
- `AdaptorSpecifier(str)` - Parses adaptor strings like `"@openfn/language-http@3.1.11"` or `"http@3.1.11"`
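The adaptor-string parsing that `AdaptorSpecifier` performs can be sketched as below. This is a hypothetical standalone function, not the real class, and the real class may expose different attribute names:

```python
# Hypothetical sketch of adaptor-specifier parsing. Accepts both the long
# form "@openfn/language-http@3.1.11" and the short form "http@3.1.11".
def parse_adaptor(spec: str) -> dict:
    s = spec
    # Strip the optional "@openfn/language-" prefix.
    if s.startswith("@openfn/language-"):
        s = s[len("@openfn/language-"):]
    # Split the remaining "name@version"; version is optional.
    name, _, version = s.partition("@")
    return {"name": name, "version": version or None}


print(parse_adaptor("@openfn/language-http@3.1.11"))
print(parse_adaptor("http@3.1.11"))
```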
StreamManager emits Anthropic-formatted SSE events (`message_start`, `content_block_start`/`delta`/`stop`, `message_delta`, `message_stop`) through the `EVENT:type:json` protocol that `bridge.ts` captures from stdout and forwards as SSE to clients.
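The stdout wire format can be sketched as follows. Only the `EVENT:type:json` line shape is taken from the description above; the helper names and the exact parsing done by `bridge.ts` are assumptions:

```python
import json


# Sketch of the EVENT:type:json stdout protocol: one event per line,
# "EVENT:" prefix, event type, then a JSON body.
def emit_event(event_type: str, payload: dict) -> str:
    return f"EVENT:{event_type}:{json.dumps(payload)}"


def parse_event(line: str):
    # Split on the first two colons only; the JSON body may itself contain colons.
    prefix, event_type, body = line.split(":", 2)
    if prefix != "EVENT":
        return None
    return event_type, json.loads(body)


line = emit_event("message_start", {"role": "assistant"})
print(line)
print(parse_event(line))
```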
- `global_chat/` - Orchestrator service and single entry point for OpenFn AI chat. Routes requests via a RouterAgent (Haiku) to specialized subagents, or escalates to a PlannerAgent (Sonnet) that coordinates multi-step tasks using tool calls. Depends on `job_chat`, `workflow_chat`, and `search_docsite`.
- `job_chat/` - AI chat service for OpenFn job code assistance. Supports conversational help and a code suggestions mode with auto-patching. Uses RAG via `search_docsite` and injects adaptor API docs. Streams responses.
- `workflow_chat/` - AI chat service for generating and editing OpenFn workflow YAML. Preserves job code and IDs during edits, validates adaptors, and retries on parse failures. Streams responses.
- `search_docsite/` - Searches OpenFn docs using Pinecone vector store (used by `job_chat` and `global_chat` for dynamic context)
- `embed_docsite/` - Indexes OpenFn documentation for search
- `embeddings/` - Vector embeddings with Pinecone (production index: "apollo-mappings")
- `vocab_mapper/` - Maps medical vocabularies (LOINC/SNOMED) using embeddings
- `echo/` - Test service that returns its input; useful for verifying the server pipeline
- Python 3.11 exactly (recommend asdf with python plugin)
- Poetry with in-project `.venv` (configured in `poetry.toml`)
- `.env` file at root for API keys (OpenAI, Pinecone, Sentry DSN, POSTGRES_URL)
- Sentry integration in entry.py with environment-based trace sampling
- Vector store: Pinecone index "apollo-mappings" with namespace-based collections