Production deployment strategies for photon applications.
- Overview
- Standalone Binary
- Docker Deployment
- Cloudflare Workers
- AWS Lambda
- Systemd Service
- Environment Variables
- Health Checks
- Monitoring
## Overview

Photons can be deployed in multiple ways depending on your needs:
| Target | Best For | Scaling |
|---|---|---|
| Standalone Binary | Zero-dependency distribution, air-gapped envs | Single binary per platform |
| Docker | Self-hosted, full control | Horizontal with orchestrator |
| Cloudflare Workers | Edge computing, global low latency | Automatic |
| AWS Lambda | Serverless, pay-per-use | Automatic |
| Systemd | Traditional VPS, always-on services | Manual/VM autoscaling |
## Standalone Binary

Compile any photon into a self-contained executable. The target machine needs no Node.js, no npm, no Photon runtime — just the binary.
```bash
photon build my-tool                  # Binary for current platform
photon build my-tool -o my-tool-bin   # Custom output name
photon build my-tool -t bun-linux-x64 # Cross-compile for Linux x64
photon build my-tool --with-app       # Embed Beam UI for desktop app mode
```

The binary bundles:

- The photon source and all `@dependencies`
- Transitive `@photon` dependencies (resolved recursively)
- The embedded Photon runtime
- Beam frontend assets (with `--with-app`)
| Target | Platform |
|---|---|
| `bun-darwin-arm64` | macOS Apple Silicon |
| `bun-darwin-x64` | macOS Intel |
| `bun-linux-x64` | Linux x64 |
| `bun-linux-arm64` | Linux ARM64 |
Limitations:

- `@mcp` dependencies (external MCP servers) cannot be bundled — a warning is emitted
- `@cli` dependencies (system binaries like `ffmpeg`) must be present on the target machine
- Requires Bun installed on the build machine
The resulting binary is fully portable:

```bash
# Build on macOS, deploy to Linux server
photon build my-tool -t bun-linux-x64
scp my-tool user@server:/usr/local/bin/
ssh user@server my-tool sse --port 3000
```

## Docker Deployment

Run a single photon as an MCP server over SSE:

```dockerfile
FROM node:22-alpine
WORKDIR /app

# Install photon CLI
RUN npm install -g @portel/photon

# Copy your photon files
COPY *.photon.ts ./

# Expose MCP SSE port
EXPOSE 3000

# Run as MCP server with SSE transport
CMD ["photon", "sse", "my-photon"]
```

Or serve multiple photons through the Beam UI:

```dockerfile
FROM node:22-alpine
WORKDIR /app

# Install photon CLI
RUN npm install -g @portel/photon

# Copy all photons
COPY *.photon.ts ./

# Run Beam UI (serves multiple photons)
EXPOSE 3000
CMD ["photon", "beam", "--port", "3000"]
```

With Docker Compose:

```yaml
version: '3.8'
services:
  photon:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - LOG_LEVEL=info
    volumes:
      - photon-data:/app/.photon
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  photon-data:
```

Best practices:

- Use multi-stage builds to minimize image size
- Pin dependencies with a lock file
- Run as non-root user for security
- Mount volumes for persistent data (e.g., SQLite databases)
- Set memory limits appropriate for your workload
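The first and third tips combined might look like the following multi-stage sketch. This is an assumption-laden example, not an official image: the copied paths rely on npm's default global install locations in the official Node image, and the final stage drops to the unprivileged `node` user that ships with that image.

```dockerfile
# Build stage: run npm here so its cache and intermediate layers are discarded
FROM node:22-alpine AS build
RUN npm install -g @portel/photon

# Runtime stage: copy only the installed CLI and the photon sources
FROM node:22-alpine
WORKDIR /app
COPY --from=build /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=build /usr/local/bin /usr/local/bin
COPY *.photon.ts ./

# Run as the unprivileged user provided by the official Node image
USER node
EXPOSE 3000
CMD ["photon", "sse", "my-photon"]
```

Copying both `/usr/local/lib/node_modules` and `/usr/local/bin` preserves the symlink npm creates for the `photon` executable.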
## Cloudflare Workers

Photons can be deployed to Cloudflare Workers for edge computing.

```bash
photon host deploy my-photon --target cloudflare
```

To deploy manually:

- Create a `wrangler.toml`:
```toml
name = "my-photon-worker"
main = "dist/worker.js"
compatibility_date = "2024-01-01"

[vars]
ENVIRONMENT = "production"

# For KV storage
[[kv_namespaces]]
binding = "PHOTON_KV"
id = "your-kv-id"
```

- Build the worker bundle:
```bash
photon host build my-photon --target cloudflare --output dist/worker.js
```

- Deploy:
```bash
npx wrangler deploy
```

Limitations:

- No filesystem access (use KV or R2 for storage)
- 50 ms CPU limit on the free tier
- Bundle size limit of 1 MB compressed
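Because Workers have no filesystem, persistent state goes through bindings such as the `PHOTON_KV` namespace configured above. A minimal sketch of KV-backed caching, where `cacheResult` is a hypothetical helper (not a Photon API) and `KVNamespace` is a pared-down subset of the real Workers interface:

```typescript
// Minimal shape of a Workers KV binding (a subset of the real interface).
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Hypothetical helper: cache a computed result in KV instead of on disk.
async function cacheResult(
  kv: KVNamespace,
  key: string,
  compute: () => Promise<string>,
): Promise<string> {
  const hit = await kv.get(key);
  if (hit !== null) return hit; // served from the edge cache
  const value = await compute();
  await kv.put(key, value, { expirationTtl: 3600 }); // expire after one hour
  return value;
}
```

In a worker, the binding arrives on the `env` argument of the `fetch` handler (here as `env.PHOTON_KV`).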
## AWS Lambda

```bash
photon host deploy my-photon --target lambda
```

To deploy manually with AWS SAM:

- Create `template.yaml`:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  PhotonFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.handler
      Runtime: nodejs20.x
      Timeout: 30
      MemorySize: 256
      Events:
        Api:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
```

- Build and deploy:
```bash
sam build
sam deploy --guided
```

Best practices:

- Cold start optimization: Keep bundles small, minimize dependencies
- Connection reuse: Use keep-alive for database connections
- Provisioned concurrency: For consistent latency
- Layers: Share dependencies across functions
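Connection reuse works because Lambda keeps the module alive between warm invocations, so anything cached at module scope survives. A minimal sketch of the pattern — the `connect` stand-in, the counter, and the event shape are all hypothetical, not a Photon API:

```typescript
// Stand-in connection type; in practice this would be a database client
// created with keep-alive enabled.
type Client = { query: (sql: string) => Promise<string> };

let connectCount = 0; // instrumentation for this sketch only

function connect(): Client {
  connectCount += 1;
  return { query: async (sql) => `ok: ${sql}` };
}

// Module scope: initialized once per container, reused across warm invocations.
let client: Client | undefined;

export async function handler(event: { sql: string }): Promise<string> {
  client ??= connect(); // pay the connection cost only on cold start
  return client.query(event.sql);
}
```

On a cold start `connect()` runs once; every warm invocation after that reuses the same client.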
## Systemd Service

Use systemd for always-on deployment on Linux servers. Create `/etc/systemd/system/photon.service`:
```ini
[Unit]
Description=Photon MCP Server
After=network.target

[Service]
Type=simple
User=photon
Group=photon
WorkingDirectory=/opt/photon
Environment=NODE_ENV=production
Environment=LOG_LEVEL=info
ExecStart=/usr/bin/node /usr/local/bin/photon sse my-photon --port 3000
Restart=always
RestartSec=10

# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ReadWritePaths=/opt/photon/.photon

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable photon
sudo systemctl start photon
```

Follow the logs:

```bash
sudo journalctl -u photon -f
```

## Environment Variables

Photons support configuration via environment variables.
| Variable | Description | Default |
|---|---|---|
| `NODE_ENV` | Environment mode | `development` |
| `LOG_LEVEL` | Log verbosity (`error`/`warn`/`info`/`debug`) | `info` |
| `PHOTON_DIR` | Data directory | `~/.photon` |
Constructor parameters can be injected via environment variables:
```typescript
export default class MyPhoton {
  constructor(
    /** @env MY_API_KEY */
    private apiKey: string,
    /** @env MY_TIMEOUT */
    private timeout: number = 30000
  ) {}
}
```

Set via environment:
```bash
export MY_API_KEY=sk-xxx
export MY_TIMEOUT=60000
```

## Health Checks

Photon servers expose health endpoints for monitoring.
```bash
curl http://localhost:3000/health
```

Response:
```json
{
  "status": "ok",
  "uptime": 3600,
  "photons": 5
}
```

## Monitoring

Enable JSON logs for log aggregation:
```bash
photon sse my-photon --json-logs
```

Output format:
```json
{"level":"info","message":"Tool executed","tool":"search","duration":152,"timestamp":"2024-01-01T00:00:00.000Z"}
```

For production monitoring, consider:
- Prometheus: Expose a `/metrics` endpoint
- Datadog: Use structured logs with trace IDs
- CloudWatch: For AWS deployments
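One advantage of the structured format is that it can be filtered mechanically before shipping to any of these backends. A sketch, assuming one JSON object per line with the fields shown in the sample log line (`slowToolCalls` is a hypothetical helper, not part of Photon):

```typescript
// Sketch: parse newline-delimited JSON logs and keep slow tool executions.
// Field names follow the sample log line shown above.
interface LogRecord {
  level: string;
  message: string;
  tool?: string;
  duration?: number; // milliseconds
  timestamp: string;
}

function slowToolCalls(ndjson: string, thresholdMs: number): LogRecord[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as LogRecord)
    .filter((rec) => (rec.duration ?? 0) > thresholdMs);
}
```

The same record shape works for deriving trace-ID fields or per-tool latency counters before shipping.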
Set up alerts for:
- High error rates (>1% of requests)
- Slow tool execution (>5s p99)
- Memory usage (>80% of limit)
- Connection failures to external services
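The first two thresholds are straightforward to compute from a sliding window of request records. A sketch with a hypothetical record shape (not a Photon API), using the nearest-rank method for the percentile:

```typescript
// Sketch: evaluate the alert conditions above over a window of requests.
interface RequestRecord {
  ok: boolean;        // false when the request errored
  durationMs: number; // total execution time
}

function errorRate(window: RequestRecord[]): number {
  return window.filter((r) => !r.ok).length / window.length;
}

function p99(window: RequestRecord[]): number {
  const durations = window.map((r) => r.durationMs).sort((a, b) => a - b);
  // Nearest-rank index of the 99th percentile
  const idx = Math.max(0, Math.ceil(0.99 * durations.length) - 1);
  return durations[idx];
}

function shouldAlert(window: RequestRecord[]): boolean {
  return errorRate(window) > 0.01 || p99(window) > 5000;
}
```

In practice the window would be fed from the structured logs or from your metrics backend rather than computed in-process.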
See also:

- `SECURITY.md` - Security hardening guide
- `GUIDE.md` - Development guide
- `ADVANCED.md` - Advanced patterns