# The Blob

Self-hosted, agent-native PaaS. Stand in a folder, run `blob deploy`, get a public HTTPS URL.

This monorepo is The Blob's runtime. Live deployment: https://blob.irrigate.cc (control-plane API). Deployed apps land at `https://<name>.irrigate.cc`.

## Quick start

```sh
# 1. Install blobctl (macOS / Linux, amd64 or arm64)
curl -fsSL https://raw.githubusercontent.com/darvell/blob/main/scripts/install.sh | sh

# 2. Authenticate
blob login --endpoint https://blob.irrigate.cc --token $YOUR_TOKEN

# 3. In any project folder
blob init                 # auto-detects: Dockerfile, Compose, static, package.json build
blob deploy
```

That's it. The CLI tarballs the folder and ships it to blobd, which builds the image on the host (so the architecture matches), pushes it to the platform registry, and schedules a Nomad job; Traefik picks up the route, and ACME issues the cert automatically on first hit.

## Live dogfooded apps

These are real side projects pulled out of `~/code` and deployed with no edits beyond `blob init`:

Plus an example with a custom domain attached: https://static.darv.ai/ (same backing app as pong).

### Imported via v0.9 importers

The same platform host runs these via `blob import compose|procfile|fly` followed by `blob deploy`, with no manual `blob.yaml` edits:

## What runs today

| Capability | Status |
| --- | --- |
| One-command deploy from any folder | shipped |
| Auto-detect Dockerfile / Compose / `index.html` / build script | shipped |
| Static sites via `form: static` (Caddy serves a folder) | shipped |
| `web-service`, `function`, `daemon`, `job`, `cronjob` workload forms | shipped |
| Kata microVM isolation via `isolation: kata` or `blob deploy --isolation kata` on nodes bootstrapped with `ENABLE_KATA=1` | shipped |
| Multi-component App manifest (web + worker + cron) | shipped |
| Bundle sidecars (co-scheduled tasks sharing the netns) | shipped |
| Per-component command override | shipped |
| Secrets: AES-256-GCM at rest, per-environment, env injection | shipped |
| Environments (prod, staging, pr-1234, …) | shipped |
| Managed Postgres with `services:` env injection (`DATABASE_URL`) | shipped |
| Per-project Postgres users (`services: [<instance>.<project>]`) with isolated role + database + per-project `statement_timeout` | shipped |
| Postgres backups (`blob postgres backup/backups/restore`) | shipped |
| Off-host backup shipping to S3-compatible stores + scheduled cron + retention | shipped |
| Observability: managed Loki + Grafana + Promtail; `blob logs --since/--grep/--follow` queries Loki when registered, falls back to `nomad alloc` tail | shipped |
| Importers: `blob import compose\|procfile\|fly\|nextjs\|netlify\|render\|vercel\|nix\|helm\|kubernetes\|cloudflare-workers` | shipped |
| Preview environments: `blob preview create <app> --branch <name>` for ephemeral per-branch deploys at `<app>-<branch>.<base>`; multi-component preview ships in v0.13; GitHub webhook auto-create on PR open/synchronize/close in v0.13 | shipped |
| Object storage: managed S3-compatible (`blob storage create <name>`); `services: [<storage>]` injects `S3_ENDPOINT`/`S3_BUCKET`/`S3_ACCESS_KEY`/`S3_SECRET_KEY` (+ `AWS_*` aliases) | shipped |
| Managed MySQL (`blob mysql create`); `services: [<mysql>]` injects `MYSQL_URL`/`MYSQL_HOST`/`MYSQL_PORT`/`MYSQL_USER`/`MYSQL_PASSWORD`/`MYSQL_DATABASE` | shipped |
| Managed ClickHouse (`blob clickhouse create`); `services: [<ch>]` injects `CLICKHOUSE_URL` (native) + `CLICKHOUSE_HTTP_URL` + standard ports/creds | shipped |
| Messaging: managed NATS with JetStream (`services: [<nats>]` injects `NATS_URL`) | shipped |
| Tracing: managed Tempo (OTLP gRPC); blobd auto-exports deploy spans when a Tempo is registered; Grafana provisioned with Tempo datasource | shipped |
| Metrics: managed Prometheus + Nomad service discovery + blobd `/metrics`; Grafana provisioned with Prometheus datasource | shipped |
| Autoscaling: per-app horizontal autoscaler (cpu/memory/http_qps/raw PromQL) with min/max + cooldowns; `blob autoscale set <app>` | shipped |
| Service rollup: `blob services list` shows postgres/valkey/loki/grafana/promtail/nats/tempo/prometheus in one table | shipped |
| Managed Valkey (Redis-compatible) with `services:` env injection (`REDIS_URL`) | shipped |
| Custom domains with `blob domains attach` (auto-HTTPS) | shipped |
| Public TCP services with `exposure: tcp` + `blob tcp add` | shipped |
| Multiple hostnames per app | shipped |
| Scaling (`blob scale`) | shipped |
| Restart (`blob restart`) | shipped |
| Releases / rollback (`blob releases`, `blob rollback`) | shipped |
| Exec into a running allocation (`blob exec`) | shipped |
| Open in browser (`blob open`) | shipped |
| Volumes: per-app Docker named volumes | shipped |
| Nodes: list, drain, undrain, generate join script | shipped |
| Resource graph + placement preflight: persisted Nomad node/allocation capacity, `blob nodes recommend`, and impossible-deploy refusal before Nomad scheduling | shipped |
| Status pages: public HTML + JSON with route health, monitors, incidents, and sanitized doctor issues | shipped |
| Uptime monitors: persisted HTTP checks with optional alert webhooks and status-page integration | shipped |
| Audit log: append-only hash-chained events for authenticated mutating API actions | shipped |
| Identity/RBAC: scoped service tokens with per-token grants | shipped |
| Cost rollups: `blob costs summary/apps/nodes` reports reserved resources and optional monthly estimates | shipped |
| Web console: server-rendered `/ui` operator views for apps, nodes, costs, doctor, status, audit, and identity | shipped |
| Deploy plugins: per-app pre/post deploy hooks with bounded output and timeout | shipped |
| Doctor drift / orphan / liveness checks | shipped |
| Manifest projection hashes: deploy records intended job projection and `blob doctor` detects live/on-disk drift | shipped |
| Bootstrap script for turning a fresh server into a Blob | shipped |
| Phase-timed deploys | shipped |
| `/blob` Claude Code skill | shipped |

## What the spec describes that's not yet here

The full v1 spec (`docs/the-blob-spec.md`) is the destination. The runtime ships the deploy core, operability surfaces, and the first managed services. Honest gap list:

- Blebs warm pool, hot journal volumes, rewind
- ~~Tempo/Prometheus~~ (shipped in v0.10; see the managed services above)
- Multi-region active-passive failover
- GPU/confidential compute

## Setting up your own Blob

Three short docs:

- `docs/host-setup.md` — turn a fresh server into a Blob
- `docs/joining-nodes.md` — add a node to an existing Blob
- `docs/operator.md` — day-2 ops runbook

## `blob.yaml`

`blob.yaml` is the canonical authoring file. Everything is optional except `name`. `blob init` auto-detects a sensible starting point.

### Static site

```yaml
name: my-site
form: static
root: .              # or "dist", "build", "out", "_site", "public"
spa: false           # true = SPA fallback to index.html for any unmatched path
not_found: /404.html # optional: serve this for 404s
```

For React/Vite/Next/etc. with a build step:

```yaml
name: my-spa
form: static
build: "pnpm install && pnpm run build"
root: dist
spa: true
```

### Single web service

```yaml
name: hello
form: web-service
port: 8080
domain: hello.example.com
domains:
  - hello-alt.example.com
secrets:
  - env: API_TOKEN
    name: hello-api-token
volumes:
  - name: data
    path: /var/lib/hello
```

### Function

```yaml
name: hello-fn
form: function
handler: index.mjs # default auto-detects index.mjs/index.js/function.mjs/function.js
runtime: nodejs
```

The handler exports `default` or `handler` and receives an HTTP event with `method`, `path`, `query`, `headers`, `body`, and `rawBody`. Blob wraps it in a tiny Node HTTP server on port 8080 and publishes the same HTTPS route as a web service.
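A minimal handler sketch. The event fields follow the contract above; the response shape (`status`, `headers`, `body`) is an assumption for illustration, not confirmed by the docs:

```javascript
// index.mjs — a hypothetical handler for The Blob's `function` form.
// Event fields (method, path, query, headers, body, rawBody) match the
// contract above; the {status, headers, body} return shape is assumed.
function handler(event) {
  if (event.method === "GET" && event.path === "/hello") {
    const name = (event.query && event.query.name) || "world";
    return {
      status: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ greeting: `hello, ${name}` }),
    };
  }
  // Anything else falls through to a plain 404.
  return { status: 404, headers: {}, body: "not found" };
}
// In index.mjs this would be exported as: export default handler;
```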

### Public TCP daemon

```yaml
name: tcp-echo
form: daemon
image: hashicorp/http-echo:1.0.0
command: ["/http-echo", "-listen=:5678", "-text=tcp-ok"]
port: 5678
exposure: tcp
```

After deploy, run `blob tcp add tcp-echo` to allocate a public port.

### Cronjob

```yaml
name: nightly-backup
form: cronjob
schedule: "0 3 * * *"
```

### Bundle (sidecar)

```yaml
name: bundle
form: web-service
port: 8080
sidecars:
  - name: tunnel
    image: cloudflare/cloudflared:latest
    args: ["tunnel", "run"]
    cpu: 50
    memory: 64
```

### Multi-component App

```yaml
name: my-app
environment: prod
components:
  - name: web
    form: web-service
    port: 8080
    command: ["node", "web.js"]
    secrets:
      - env: DATABASE_URL
        name: my-app-db
  - name: worker
    form: daemon
    command: ["node", "worker.js"]
    secrets:
      - env: DATABASE_URL
        name: my-app-db
  - name: nightly
    form: cronjob
    schedule: "0 3 * * *"
    command: ["node", "nightly.js"]
```

Each component becomes its own Nomad job named `<app>-<component>` (e.g. `my-app-web`, `my-app-worker`).

## CLI reference

```
blob init [--name N] [--port P] [--domain D] [--form F] [--root D]
blob import <compose|procfile|fly|nextjs|netlify|render|vercel|nix|helm|kubernetes|cloudflare-workers> <path>
blob login --endpoint URL [--token T]
blob deploy [--name N] [--port P] [--domain D] [--image IMG] [--env ENV] [--cpu C] [--memory M] [--replicas N]
blob deploy --function [--handler FILE]
blob deploy --from <compose|procfile|fly|nextjs|netlify|render|vercel|nix|helm|kubernetes|cloudflare-workers> <path>
blob list
blob status <app>
blob logs <app> [-n 200]
blob scale <app> <replicas>
blob restart <app>
blob releases <app>
blob rollback <app> <revision> [--yes]
blob open <app>
blob exec <app> -- <cmd ...>
blob destroy <app> [--yes]

blob domains attach <app> <host> [--mode MODE]

blob audit list [--limit N]
blob audit show <id>

blob identity tokens list
blob identity tokens create <name>
blob identity tokens revoke <id> [--yes]
blob identity grants list [--token ID]
blob identity grants add <id> <scope>
blob identity grants revoke <id> <scope> [--yes]

blob costs summary [--monthly-usd N]
blob costs apps [--monthly-usd N]
blob costs nodes [--monthly-usd N]

blob status-pages enable <app>
blob status-pages list
blob status-pages show <app>
blob status-pages disable <app> [--yes]

blob monitors add <app> [--path P] [--interval S] [--webhook URL]
blob monitors list
blob monitors show <name>
blob monitors remove <name> [--yes]

blob tcp add <app> [--public-port P] [--target-port P]
blob tcp list
blob tcp show <public-port>
blob tcp remove <public-port> [--yes]

blob secrets list [--env ENV]
blob secrets set <name> [--env ENV] [--from FILE | --value V]
blob secrets unset <name> [--env ENV]

blob postgres list
blob postgres create <name> [--version V] [--database D]
blob postgres url <name>
blob postgres connect <name>
blob postgres backup <name>
blob postgres backups <name>
blob postgres restore <name> [path|latest] [--force]
blob postgres destroy <name> [--yes]

blob postgres project list <instance>
blob postgres project create <instance> <project> [--timeout 30s]
blob postgres project url <instance> <project>
blob postgres project timeout <instance> <project> <duration>
blob postgres project destroy <instance> <project> [--yes]

blob valkey list
blob valkey create <name> [--version V]
blob valkey url <name>
blob valkey destroy <name> [--yes]

blob nodes list
blob nodes drain <id>
blob nodes undrain <id>
blob nodes join          # prints a one-liner shell script for a new server

blob volumes list

blob doctor

blob whoami
blob version
```

Environment variables: `BLOB_HOST`, `BLOB_TOKEN`. They override config-file values.

## Architecture

```
laptop                                    platform host(s)
─────                                     ─────────────
blobctl ──tar.gz──> /v1/sources/<app> ──> /srv/blob/sources/<app>
        ──json───>  /v1/deploy[-app]
                      │
                      ├── resolve secrets (AES-256-GCM at rest in /srv/blob/secrets)
                      ├── docker login <registry>
                      ├── if form=static: synthesize Caddyfile + Dockerfile.blob-static
                      ├── docker build  -t <registry>/<app>:<tag>
                      ├── docker push   <registry>/<app>:<tag>
                      ├── render Nomad HCL → /srv/blob/jobs/<id>.nomad
                      │   plus meta.json (form, env, domain, image)
                      ├── nomad job run
                      └── poll /v1/job/<id>/allocations until running
                                                │
                                                ▼
                                      Traefik (Nomad provider)
                                                │
                                                ▼
                                      https://<id>.<base-domain>
```

Multi-node fleets: any number of additional Nomad clients can register with `blob nodes join`. Workloads place across the fleet automatically based on capacity.

## Repository layout

```
cmd/
  blobctl/                 # the CLI
  blobd/                   # the control-plane daemon
internal/
  api/                     # request/response types, shared between client and server
  client/                  # HTTP client
  config/                  # blobctl config
  detect/                  # auto-detect project type for `blob init`
  manifest/                # blob.yaml parser and validator
  secrets/                 # at-rest-encrypted secret store
  server/                  # blobd: routes, deploy phases, Nomad job rendering
                           # nodes.go: nodes/join/volumes/restart/exec/domains
                           # static.go: form=static (Caddy) build path
                           # doctor: drift/orphan checks
  tarball/                 # deterministic tar.gz packer
skills/blob/               # /blob Claude Code skill
docs/
  the-blob-spec.md         # full Business Requirements + Technical Spec
  host-setup.md            # turn a fresh server into a Blob
  joining-nodes.md         # add a node to an existing Blob
  operator.md              # day-2 ops runbook
scripts/
  install.sh               # blobctl installer (curl | sh)
  bootstrap-host.sh        # one-shot: Docker + Nomad + Traefik + registry
  install-blobd.sh         # systemd unit installer
  blobd.service            # systemd unit
  blobd-edge.nomad         # Nomad job that exposes blobd through Traefik
```

## Building from source

```sh
go build ./...
go build -o /usr/local/bin/blob ./cmd/blobctl
go build -o /usr/local/bin/blobd ./cmd/blobd
go test ./...
```

## License

MIT.
