
Getting started with the Docker execution service #11

@adarshm11

Description


How It Works

API Server  ──enqueue──>  Redis  ──dequeue──>  Executor Service  ──runs──>  Sandbox Containers
                                                (has docker.sock)

Implementation Steps

1. Add Redis and executor to docker-compose

Files: docker-compose.yml, docker-compose.dev.yml

  • Add a redis service (redis:7-alpine, port 6379, healthcheck via redis-cli ping)
  • Add an executor service:
    • Build context: ./executor
    • Mount /var/run/docker.sock:/var/run/docker.sock (executor only, NOT the backend)
    • Env vars: REDIS_URL, DATABASE_URL, WORKER_COUNT, EXECUTION_TIMEOUT, SANDBOX_MEMORY_LIMIT
    • depends_on: redis (healthy), database (healthy)
  • Add REDIS_URL to .env.example (redis://redis:6379)
  • Backend service gets REDIS_URL too (for enqueuing)
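A minimal sketch of the compose additions, assuming the database service is named `database` and the executor lives under `./executor` (align names with the real compose files):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  executor:
    build: ./executor
    volumes:
      # Only the executor gets the Docker socket -- never the backend.
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      REDIS_URL: redis://redis:6379
      DATABASE_URL: ${DATABASE_URL}
      WORKER_COUNT: "4"
      EXECUTION_TIMEOUT: 10s
      SANDBOX_MEMORY_LIMIT: 256m
    depends_on:
      redis:
        condition: service_healthy
      database:
        condition: service_healthy
```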

2. Build sandbox Docker images

Files: docker/Dockerfile.python

Start with Python only; more languages can be added later:

  • Python image: python:3.12-slim, no pip, no extras. Entrypoint runs /code/solution.py with stdin piped in.
  • Tag as codesce-sandbox-python:latest

Each image should:

  • Accept source code mounted/copied to /code/
  • Accept test input via stdin
  • Print output to stdout
  • Exit with code 0 on success, non-zero on error
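A possible docker/Dockerfile.python satisfying the contract above (the unprivileged user is an extra hardening assumption, not something this issue requires):

```dockerfile
# Minimal Python sandbox: slim base, no pip installs, no extras.
FROM python:3.12-slim

# Run as an unprivileged user inside the sandbox.
RUN useradd --create-home runner
USER runner

WORKDIR /code

# Source is mounted/copied to /code at run time; test input arrives on
# stdin, output goes to stdout, and Python's own exit code propagates.
ENTRYPOINT ["python", "/code/solution.py"]
```

Something like `docker build -f docker/Dockerfile.python -t codesce-sandbox-python:latest docker` produces the expected tag.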

3. Scaffold the executor service

File: cmd/runner/main.go

  • main.go: connect to Redis, connect to Postgres, start worker pool, block on signal
  • Config via env vars: REDIS_URL, DATABASE_URL, WORKER_COUNT (default 4), EXECUTION_TIMEOUT (default 10s), SANDBOX_MEMORY_LIMIT (default 256m)
  • Graceful shutdown: on SIGINT/SIGTERM, stop accepting new jobs, drain in-flight workers, exit

4. Implement the sandbox runner

File: internal/execution/sandbox.go

This is the core — it runs code in a Docker container and captures output. No DB logic, no queue logic, just Docker.

  • Run(ctx context.Context, language string, sourceCode string, stdin string) (stdout string, stderr string, exitCode int, err error)
  • Create a temp dir, write source code to it
  • docker run with:
    • --rm (auto-cleanup)
    • --network none (no internet)
    • --read-only (immutable filesystem)
    • --memory 256m (configurable)
    • --cpus 0.5
    • --tmpfs /tmp:rw,noexec,nosuid,size=64m (writable scratch space)
    • Bind-mount temp dir to /code:ro
    • Pipe stdin to container
  • Use context.WithTimeout (from the passed ctx) to enforce hard timeout
  • Capture stdout/stderr from container
  • Clean up temp dir after execution
  • Use the Docker SDK (github.com/docker/docker/client) or shell out to docker CLI — SDK is preferred

5. Implement the Redis queue consumer (worker)

File: internal/execution/worker.go

  • Workers loop: BRPOP from Redis list codesce:submissions (or use Redis Streams with consumer groups for better reliability)
  • Job payload (JSON): { "submission_id": 123, "question_id": 456, "language": "python", "source_code": "...", "test_cases": [...] }
  • For each job:
    1. Update submissions row: status = 'running'
    2. For each test case: call sandbox.Run(ctx, language, sourceCode, testCase.Input)
    3. Compare stdout (trimmed) to testCase.ExpectedOutput
    4. Insert submission_results rows
    5. Compute score_awarded = sum of scores of passed test cases
    6. Update submissions row: status (passed/failed/runtime_error/compile_error), stdout, stderr, execution_ms, score_awarded
  • Worker accesses Postgres directly (same DATABASE_URL as backend) — raw pgx queries are fine, no need for the repo layer
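The payload decoding and the trimmed-output comparison from the loop above can be sketched with stdlib types (the JSON field names are assumptions; match whatever the backend actually marshals):

```go
package main

import (
	"encoding/json"
	"strings"
)

// TestCase and Job mirror the JSON payload described above.
type TestCase struct {
	Input          string `json:"input"`
	ExpectedOutput string `json:"expected_output"`
	Score          int    `json:"score"`
}

type Job struct {
	SubmissionID int        `json:"submission_id"`
	QuestionID   int        `json:"question_id"`
	Language     string     `json:"language"`
	SourceCode   string     `json:"source_code"`
	TestCases    []TestCase `json:"test_cases"`
}

// decodeJob parses a payload popped off codesce:submissions.
func decodeJob(payload []byte) (Job, error) {
	var job Job
	err := json.Unmarshal(payload, &job)
	return job, err
}

// passed implements the trimmed stdout comparison from sub-step 3.
func passed(stdout, expected string) bool {
	return strings.TrimSpace(stdout) == strings.TrimSpace(expected)
}
```

The surrounding loop would block-pop with e.g. `rdb.BRPop(ctx, 0, "codesce:submissions")` from github.com/redis/go-redis/v9 and hand each payload to decodeJob.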

6. Add enqueue helper to the API server

File: internal/execution/enqueue.go (in the main backend)

  • Enqueue(ctx, redisClient, submission): LPUSH the job JSON onto codesce:submissions
  • Used by the CreateSubmission handler: insert DB row with status=queued, then enqueue, return 202
  • If Redis is down, return 503

Testing locally

  1. docker compose up --build — builds everything (sandbox images, executor, backend, frontend) and starts all services
  2. The executor should log "connected to Redis" and "N workers started"
  3. To test the sandbox runner in isolation, write a small internal/execution/sandbox_test.go that runs a hello-world Python script and asserts stdout

File tree when done

cmd/
  server/main.go                         <-- existing API server
  runner/main.go                         <-- new runner service entrypoint
internal/
  execution/
    sandbox.go                           <-- Docker sandbox runner
    sandbox_test.go
    worker.go                            <-- Redis queue consumer + worker pool
    enqueue.go                           <-- enqueue helper (used by API server)
docker/
  Dockerfile.python
