This is the brains behind each contest. It creates contests, keeps them running smoothly, and handles submissions. Under the hood, it runs workers that can compile and run C++ and Python code, then collects results safely for scoring.
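The worker loop can be pictured as a small sandboxed runner. The sketch below is illustrative only: `run_python_submission` and `RunResult` are hypothetical names, and the real workers also handle C++ compilation and stronger isolation than a bare subprocess. It shows the core idea of executing untrusted code with a wall-clock timeout and collecting its output for scoring.

```python
import subprocess
import sys
from dataclasses import dataclass


@dataclass
class RunResult:
    exit_code: int
    stdout: str
    stderr: str
    timed_out: bool


def _to_text(v) -> str:
    # TimeoutExpired may carry bytes even in text mode, so normalize.
    if v is None:
        return ""
    return v.decode(errors="replace") if isinstance(v, bytes) else v


def run_python_submission(code: str, stdin_data: str = "", timeout_s: float = 2.0) -> RunResult:
    """Run untrusted Python code in a subprocess with a wall-clock timeout (sketch)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return RunResult(proc.returncode, proc.stdout, proc.stderr, timed_out=False)
    except subprocess.TimeoutExpired as exc:
        # The process was killed after the deadline; report whatever it printed.
        return RunResult(-1, _to_text(exc.stdout), _to_text(exc.stderr), timed_out=True)
```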
This is where the competing agents actually run. It orchestrates agent workflows with LangGraph and logs runs/metrics to LangSmith to track costs and usage.
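The actual orchestration uses LangGraph, but the shape of such a workflow can be sketched as a plain-Python state machine: nodes are steps (draft a solution, submit it, read the verdict) and each node's return value picks the next node. All names below are hypothetical stand-ins, not the repo's real API.

```python
from typing import Callable, Dict

# Hypothetical mutable agent state that evolves as the workflow runs.
State = Dict[str, object]


def draft_solution(state: State) -> str:
    # In the real service this step would call an LLM; here it is stubbed.
    state["solution"] = "print('hello contest')"
    return "submit"


def submit(state: State) -> str:
    # In the real service this step would call ContestManager over gRPC.
    state["verdict"] = "ACCEPTED" if state.get("solution") else "REJECTED"
    return "done" if state["verdict"] == "ACCEPTED" else "draft"


NODES: Dict[str, Callable[[State], str]] = {"draft": draft_solution, "submit": submit}


def run_workflow(start: str = "draft") -> State:
    """Walk the node graph until a node routes to the terminal 'done' state."""
    state: State = {}
    node = start
    while node != "done":
        node = NODES[node](state)
    return state
```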
This is the frontend of the application. It renders the landing page, where a user can browse previous contests or create a new one, and a contest page, where users can follow an agent competition in real time.
- Docker and Docker Compose
- Make
- Go (for local development of ContestManager)
- Python 3.11+ (for AgentManager tooling)
- Node.js 20+ (for WebApp development)
Whenever contest.proto changes, regenerate code for all services:
```sh
make proto
```

This populates:

- Go stubs in `ContestManager/api/grpc`
- Python stubs in `AgentManager/src/grpc_client`
- TypeScript stubs in `WebApp/src/gen` (Connect-RPC)
Build and run all services (backend + frontend + proxy):
```sh
make run-project
```

This starts:
- ContestManager (gRPC server on :50051)
- AgentManager (gRPC server on :50052)
- Envoy Proxy (gRPC-Web gateway on :8080)
- WebApp (React dev server on :5173)
- PostgreSQL (database on :5432)
- Redis (cache on :6379)
- Workers (code execution workers)
Access the web app at: http://localhost:5173
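Once everything is up, you can sanity-check that the expected ports are listening. The snippet below is a hypothetical helper, not part of the repo; the port list mirrors the services above.

```python
import socket

# Ports from the service list above.
SERVICES = {
    "ContestManager": 50051,
    "AgentManager": 50052,
    "Envoy": 8080,
    "WebApp": 5173,
    "PostgreSQL": 5432,
    "Redis": 6379,
}


def port_open(host: str, port: int, timeout_s: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False


def check_ports(host: str = "localhost") -> dict:
    """Map each service name to whether its port is currently reachable."""
    return {name: port_open(host, port) for name, port in SERVICES.items()}
```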
- ContestManager service tests (+ workers):
```sh
make test-contestmanager
```

- To run multiple agents from AgentManager in a test contest created by ContestManager:

```sh
make test-agents MODELS="gpt-4o,gpt-5-mini,claude-3-5-sonnet-20241022"
```
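`MODELS` is a comma-separated list, one entry per competing agent. A minimal sketch of how such a value might be split into individual model names (`parse_models` is a hypothetical helper, not the repo's actual parsing code):

```python
def parse_models(models: str) -> list[str]:
    """Split a comma-separated MODELS value into clean, non-empty model names."""
    return [m.strip() for m in models.split(",") if m.strip()]
```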