How It Works
API Server ──enqueue──> Redis ──dequeue──> Executor Service ──runs──> Sandbox Containers
                                           (has docker.sock)
Implementation Steps
1. Add Redis and executor to docker-compose
Files: docker-compose.yml, docker-compose.dev.yml
redis service: redis:7-alpine, port 6379, healthcheck via redis-cli ping
executor service: build ./executor, mount /var/run/docker.sock:/var/run/docker.sock (executor only, NOT the backend)
Env vars for the executor: REDIS_URL, DATABASE_URL, WORKER_COUNT, EXECUTION_TIMEOUT, SANDBOX_MEMORY_LIMIT
depends_on: redis (healthy), database (healthy)
Add REDIS_URL to .env.example (redis://redis:6379); the backend needs REDIS_URL too (for enqueuing)
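The step above can be sketched as a compose fragment. Service names (redis, executor, database), the ./executor build context, and the healthcheck timings are assumptions consistent with this plan; adjust to the repo's actual layout:

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  executor:
    build: ./executor            # build context is an assumption; adjust to your layout
    environment:
      REDIS_URL: redis://redis:6379
      DATABASE_URL: ${DATABASE_URL}
      WORKER_COUNT: "4"
      EXECUTION_TIMEOUT: 10s
      SANDBOX_MEMORY_LIMIT: 256m
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # executor only, NOT the backend
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy
```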
2. Build sandbox Docker images
Files: docker/Dockerfile.python
Start with Python only; more languages can be added later:
python:3.12-slim, no pip, no extras. Entrypoint runs /code/solution.py with stdin piped in.
Tag the image codesce-sandbox-python:latest
Each image should:
Accept source code mounted/copied to /code/
Accept test input via stdin
Print output to stdout
Exit with code 0 on success, non-zero on error
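A minimal Dockerfile.python matching these requirements might look like the following; the /code/solution.py entrypoint path is this plan's convention:

```dockerfile
# docker/Dockerfile.python: minimal sandbox image (sketch)
FROM python:3.12-slim
# No pip installs, no extras: a smaller image means a smaller attack surface.
# Source code is bind-mounted read-only to /code at run time, so nothing is copied here.
# Test input arrives on stdin; Python exits non-zero on an uncaught exception.
ENTRYPOINT ["python", "/code/solution.py"]
```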
3. Scaffold the executor service
File: cmd/runner/main.go
main.go: connect to Redis, connect to Postgres, start worker pool, block on signal
Config via env vars: REDIS_URL, DATABASE_URL, WORKER_COUNT (default 4), EXECUTION_TIMEOUT (default 10s), SANDBOX_MEMORY_LIMIT (default 256m)
Graceful shutdown: on SIGINT/SIGTERM, stop accepting new jobs, drain in-flight workers, exit
4. Implement the sandbox runner
File: internal/execution/sandbox.go
This is the core — it runs code in a Docker container and captures output. No DB logic, no queue logic, just Docker.
Run(ctx context.Context, language string, sourceCode string, stdin string) (stdout string, stderr string, exitCode int, err error)
Create a temp dir, write source code to it
docker run with:
--rm (auto-cleanup)
--network none (no internet)
--read-only (immutable filesystem)
--memory 256m (configurable)
--cpus 0.5
--tmpfs /tmp:rw,noexec,nosuid,size=64m (writable scratch space)
Bind-mount temp dir to /code:ro
Pipe stdin to container
Use context.WithTimeout (from the passed ctx) to enforce hard timeout
Capture stdout/stderr from container
Clean up temp dir after execution
Use the Docker SDK (github.com/docker/docker/client) or shell out to docker CLI — SDK is preferred
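The flag list above can be captured in a small argument builder. The plan prefers the Docker SDK; this sketch assumes shelling out to the docker CLI instead, for brevity, and generalizing the image tag to codesce-sandbox-&lt;language&gt;:latest is an assumption beyond the Python image named in step 2:

```go
package main

import "fmt"

// sandboxArgs builds the `docker run` argument list for one execution,
// mirroring the hardening flags listed above.
func sandboxArgs(language, codeDir, memoryLimit string) []string {
	return []string{
		"run", "--rm", "-i", // -i keeps stdin open so input can be piped in
		"--network", "none",
		"--read-only",
		"--memory", memoryLimit,
		"--cpus", "0.5",
		"--tmpfs", "/tmp:rw,noexec,nosuid,size=64m",
		"-v", codeDir + ":/code:ro", // bind-mount the temp dir read-only
		"codesce-sandbox-" + language + ":latest",
	}
}

func main() {
	fmt.Println(sandboxArgs("python", "/tmp/job-123", "256m"))
}
```

A Run implementation would then call exec.CommandContext(ctx, "docker", args...), set cmd.Stdin to the test input, and capture stdout/stderr into buffers; the context deadline enforces the hard timeout.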
5. Implement the Redis queue consumer (worker)
File: internal/execution/worker.go
Workers loop: BRPOP from Redis list codesce:submissions (or use Redis Streams with consumer groups for better reliability)
Job payload (JSON): { "submission_id": 123, "question_id": 456, "language": "python", "source_code": "...", "test_cases": [...] }
For each job:
Update submissions row: status = 'running'
For each test case: call sandbox.Run(ctx, language, sourceCode, testCase.Input)
Compare stdout (trimmed) to testCase.ExpectedOutput
Insert submission_results rows
Compute score_awarded = sum of scores of passed test cases
Update submissions row: status (passed/failed/runtime_error/compile_error), stdout, stderr, execution_ms, score_awarded
Worker accesses Postgres directly (same DATABASE_URL as backend) — raw pgx queries are fine, no need for the repo layer
6. Add enqueue helper to the API server
File: internal/execution/enqueue.go (in the main backend)
Enqueue(ctx, redisClient, submission): LPUSH the job JSON onto codesce:submissions
In the CreateSubmission handler: insert the DB row with status=queued, then enqueue, then return 202
Testing locally
docker compose up --build — builds everything (sandbox images, executor, backend, frontend) and starts all services
The executor should log "connected to Redis" and "N workers started"
To test the sandbox runner in isolation, write a small internal/execution/sandbox_test.go that runs a hello-world Python script and asserts stdout
File tree when done
cmd/
server/main.go <-- existing API server
runner/main.go <-- new runner service entrypoint
internal/
execution/
sandbox.go <-- Docker sandbox runner
sandbox_test.go
worker.go <-- Redis queue consumer + worker pool
enqueue.go <-- enqueue helper (used by API server)
docker/
Dockerfile.python