- OS: Linux (tested on RHEL/CentOS 9)
- Python: 3.9+ (uses `Optional[]` syntax instead of `X | None`)
- RAM: 4GB minimum (8GB+ recommended for transcription)
- Storage: NAS or large local storage for video files
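The `Optional[]` note refers to pre-3.10 type-annotation syntax; a minimal illustration (not VLog code, the function is hypothetical) of why the distinction matters on 3.9:

```python
from typing import Optional

# On Python 3.9, `Optional[int]` works everywhere; the newer `int | None`
# annotation syntax requires 3.10+ (or `from __future__ import annotations`).
def find_port(config: dict, key: str) -> Optional[int]:
    """Return a port from config as an int, or None if the key is absent."""
    value = config.get(key)
    return int(value) if value is not None else None

print(find_port({"public_port": "9000"}, "public_port"))  # 9000
print(find_port({}, "admin_port"))                        # None
```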
# Python 3.9+
python3 --version
# ffmpeg with libx264 and aac
ffmpeg -version
# Optional: yt-dlp for YouTube downloads
pip install yt-dlp

git clone https://github.com/filthyrake/vlog.git
cd vlog
python3 -m venv venv
source venv/bin/activate
pip install -e .  # Install package in development mode

# For NAS setup
sudo mkdir -p /mnt/nas/vlog-storage/{videos,uploads,archive}
sudo chown $USER:$USER /mnt/nas/vlog-storage
# Or for local storage, set environment variable
export VLOG_STORAGE_PATH=$HOME/vlog-storage

# Install PostgreSQL
sudo dnf install postgresql-server postgresql # RHEL/Rocky
# OR
sudo apt install postgresql postgresql-contrib # Debian/Ubuntu
# Initialize and start PostgreSQL
sudo postgresql-setup --initdb
sudo systemctl enable --now postgresql
# Create database and user
sudo -u postgres psql << EOF
CREATE USER vlog WITH PASSWORD 'vlog_password';
CREATE DATABASE vlog OWNER vlog;
GRANT ALL PRIVILEGES ON DATABASE vlog TO vlog;
EOF
# Enable local password authentication (edit pg_hba.conf)
# Change 'ident' to 'md5' for local connections:
# local all all md5
# host all all 127.0.0.1/32 md5
sudo vim /var/lib/pgsql/data/pg_hba.conf
sudo systemctl restart postgresql

# Initialize the database schema
python api/database.py

# Start all services
./start.sh
# Or start individually
./start-public.sh # Port 9000
./start-admin.sh # Port 9001
./start-worker.sh # Transcoding
./start-transcription.sh # Transcription (optional)

# /etc/fstab entry (replace <NAS_IP> and <YOUR_USER> with your values)
//<NAS_IP>/share/vlog-storage /mnt/nas/vlog-storage cifs credentials=/etc/samba/credentials,uid=<YOUR_USER>,gid=<YOUR_USER>,file_mode=0644,dir_mode=0755 0 0

sudo tee /etc/samba/credentials << EOF
username=your_nas_user
password=your_nas_password
EOF
sudo chmod 600 /etc/samba/credentials

# Install and configure PostgreSQL (see Development Setup for details)
sudo dnf install postgresql-server postgresql
sudo postgresql-setup --initdb
sudo systemctl enable --now postgresql
# Create database
sudo -u postgres createuser vlog -P # Enter password when prompted
sudo -u postgres createdb -O vlog vlog

Enable Redis for instant job dispatch and real-time progress updates.
Option 1: Docker Container (recommended)
Use the provided systemd service file which runs Redis in a Docker container with password authentication:
# Set up Redis password
sudo mkdir -p /etc/vlog
sudo cp systemd/vlog-redis.env.example /etc/vlog/redis.env
sudo chmod 600 /etc/vlog/redis.env
# Generate and set a strong password
REDIS_PASS=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
sudo sed -i "s/CHANGE_ME_TO_A_SECURE_PASSWORD/$REDIS_PASS/" /etc/vlog/redis.env
echo "Redis password: $REDIS_PASS" # Save this!
# Install and start Redis container service
sudo cp systemd/vlog-redis.service.template /etc/systemd/system/vlog-redis.service
sudo systemctl daemon-reload
sudo systemctl enable --now vlog-redis
# Verify (use password from above)
docker exec vlog-redis redis-cli --no-auth-warning -a "$REDIS_PASS" ping # Should return PONG
# Configure VLog to use Redis (include password in URL)
# VLOG_REDIS_URL=redis://:YOUR_REDIS_PASSWORD@localhost:6379
# VLOG_JOB_QUEUE_MODE=hybrid

Option 2: System Redis
# Install Redis
sudo dnf install redis # RHEL/Rocky
# OR
sudo apt install redis-server # Debian/Ubuntu
# Configure password authentication (edit config file directly)
# Find and uncomment/add the requirepass line:
sudo nano /etc/redis.conf # or /etc/redis/redis.conf on Debian/Ubuntu
# Add or update: requirepass YOUR_STRONG_PASSWORD
# Enable and start
sudo systemctl restart redis
# Verify
redis-cli --no-auth-warning -a YOUR_STRONG_PASSWORD ping # Should return PONG
# Configure VLog to use Redis (include password in URL)
# VLOG_REDIS_URL=redis://:YOUR_STRONG_PASSWORD@localhost:6379
# VLOG_JOB_QUEUE_MODE=hybrid

Template service files are provided in the systemd/ directory. Copy and customize them:
# Copy template files
for f in systemd/*.template; do
sudo cp "$f" "/etc/systemd/system/$(basename "$f" .template)"
done
sudo cp systemd/vlog.target /etc/systemd/system/
# Edit each service file to set your paths and username
sudo nano /etc/systemd/system/vlog-public.service
# ... repeat for other services
sudo systemctl daemon-reload

The service files include:
- Security hardening - PrivateTmp, ProtectSystem, NoNewPrivileges
- Resource limits - Memory caps, file descriptor limits
- Restart policies - Automatic restart on failure with rate limiting
- Venv Python - Uses the project's virtual environment Python directly
Note: The service files in systemd/ use hardcoded paths. Before deploying, edit them to match your installation:
- Replace `/home/damen/vlog` with your installation path
- Replace `User=damen` and `Group=damen` with your user
- Replace `/mnt/nas/vlog-storage` with your storage path
[Unit]
Description=VLog Public API
After=network.target mnt-nas.mount
Wants=mnt-nas.mount
[Service]
Type=simple
User=<YOUR_USER>
Group=<YOUR_USER>
WorkingDirectory=/path/to/vlog
ExecStart=/path/to/vlog/venv/bin/python -m uvicorn api.public:app --host 0.0.0.0 --port 9000 --proxy-headers --forwarded-allow-ips='127.0.0.1,<PROXY_IP>'
# Security hardening
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
NoNewPrivileges=true
CapabilityBoundingSet=
AmbientCapabilities=
# Allowed paths
ReadWritePaths=/path/to/vlog /mnt/nas/vlog-storage
# Resource limits
LimitNOFILE=65535
MemoryMax=2G
# Restart policy
Restart=on-failure
RestartSec=5
StartLimitIntervalSec=300
StartLimitBurst=5
[Install]
WantedBy=multi-user.target

[Unit]
Description=VLog Admin API
After=network.target mnt-nas.mount
Wants=mnt-nas.mount
[Service]
Type=simple
User=<YOUR_USER>
Group=<YOUR_USER>
WorkingDirectory=/path/to/vlog
ExecStart=/path/to/vlog/venv/bin/python -m uvicorn api.admin:app --host 0.0.0.0 --port 9001
# Security hardening
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
NoNewPrivileges=true
# Allowed paths
ReadWritePaths=/path/to/vlog /mnt/nas/vlog-storage
# Resource limits
LimitNOFILE=65535
MemoryMax=2G
# Restart policy
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

[Unit]
Description=VLog Transcoding Worker
After=network.target mnt-nas.mount
Wants=mnt-nas.mount
[Service]
Type=simple
User=<YOUR_USER>
Group=<YOUR_USER>
WorkingDirectory=/path/to/vlog
Environment=PYTHONUNBUFFERED=1
ExecStart=/path/to/vlog/venv/bin/python worker/transcoder.py
# Security hardening
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
NoNewPrivileges=true
# Allowed paths
ReadWritePaths=/path/to/vlog /mnt/nas/vlog-storage
# Resource limits (higher for transcoding)
LimitNOFILE=65535
MemoryMax=8G
# Restart policy
Restart=on-failure
RestartSec=30
# Timeouts (longer for transcoding jobs)
TimeoutStopSec=120
[Install]
WantedBy=multi-user.target

[Unit]
Description=VLog Transcription Worker
After=network.target mnt-nas.mount
Wants=mnt-nas.mount
[Service]
Type=simple
User=<YOUR_USER>
Group=<YOUR_USER>
WorkingDirectory=/path/to/vlog
Environment=PYTHONUNBUFFERED=1
ExecStart=/path/to/vlog/venv/bin/python worker/transcription.py
# Security hardening
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
NoNewPrivileges=true
# Allowed paths
ReadWritePaths=/path/to/vlog /mnt/nas/vlog-storage
# Resource limits (higher for whisper model)
LimitNOFILE=65535
MemoryMax=8G
# Restart policy
Restart=on-failure
RestartSec=30
# Timeouts (longer for transcription jobs)
TimeoutStartSec=60
TimeoutStopSec=120
[Install]
WantedBy=multi-user.target

[Unit]
Description=VLog Worker API
After=network.target
Requires=network.target
[Service]
Type=simple
User=<YOUR_USER>
Group=<YOUR_USER>
WorkingDirectory=/path/to/vlog
ExecStart=/path/to/vlog/venv/bin/python -m uvicorn api.worker_api:app --host 0.0.0.0 --port 9002
# Security hardening
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
NoNewPrivileges=true
# Allowed paths
ReadWritePaths=/path/to/vlog /mnt/nas/vlog-storage
# Resource limits
LimitNOFILE=65535
MemoryMax=2G
# Restart policy
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

[Unit]
Description=VLog Video Platform
Wants=vlog-public.service vlog-admin.service vlog-worker.service vlog-transcription.service vlog-worker-api.service
[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable vlog.target
sudo systemctl start vlog.target
# Check status
sudo systemctl status vlog-public vlog-admin vlog-worker vlog-transcription

Create /etc/nginx/conf.d/vlog.conf:
# Public site
server {
    listen 80;
    server_name videos.yourdomain.com;

    # Increase timeouts for long videos
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Video segments need larger buffer
    location /videos/ {
        proxy_pass http://127.0.0.1:9000;
        proxy_buffering off;
        proxy_set_header Host $host;
    }
}

# Admin panel (internal only - restrict access!)
server {
    listen 9001;
    listen [::]:9001;

    # Only allow internal IPs
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    allow 127.0.0.1;
    deny all;

    client_max_body_size 50G;  # For large video uploads

    location / {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;

        # Upload timeout for large files
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
}

# Public site
sudo firewall-cmd --permanent --add-port=9000/tcp
# Worker API (for remote workers)
sudo firewall-cmd --permanent --add-port=9002/tcp
# Admin (only if needed externally - NOT recommended)
# sudo firewall-cmd --permanent --add-port=9001/tcp
sudo firewall-cmd --reload

For horizontal scaling, deploy containerized GPU workers to Kubernetes.
The GPU worker container is based on Rocky Linux 10:
- FFmpeg 7.1.2 from RPM Fusion (nvenc, vaapi, qsv encoders pre-built)
- intel-media-driver 25.2.6 (Battlemage/Arc B580 support)
- Python 3.12
Local registry: localhost:9003/vlog-worker-gpu:rocky10
cd /path/to/vlog
# Build the GPU-enabled image (Rocky Linux 10 based)
docker build -f Dockerfile.worker.gpu -t vlog-worker-gpu:rocky10 .
# Tag as latest
docker tag vlog-worker-gpu:rocky10 vlog-worker-gpu:latest
# Push to local registry (port 9003)
docker push localhost:9003/vlog-worker-gpu:rocky10
# For k3s with containerd, import directly
docker save vlog-worker-gpu:rocky10 | sudo k3s ctr images import -
# For multi-node clusters, import on each node
docker save vlog-worker-gpu:rocky10 | ssh node2 'sudo k3s ctr images import -'

NVIDIA NVENC:
- nvidia-container-toolkit installed on nodes
- nvidia device plugin daemonset
- RuntimeClass `nvidia` configured
Intel VAAPI (Arc/Battlemage):
- Node Feature Discovery (NFD)
- Intel GPU device plugin
# Install Intel GPU support
kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=main'
kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref=main'

# Register a worker via CLI
vlog worker register --name "k8s-worker-1"
# Output: API Key: vlog_xxxxxxxx...
# Save this key - it cannot be retrieved again!
# Or via curl
curl -X POST http://localhost:9002/api/worker/register \
-H "Content-Type: application/json" \
-d '{"worker_name": "k8s-worker-1", "worker_type": "remote"}'

# Create namespace
kubectl apply -f k8s/namespace.yaml
# Create secret with API key
kubectl create secret generic vlog-worker-secret -n vlog \
--from-literal=api-key='YOUR_API_KEY_HERE'
# Create configmap with API URL
kubectl create configmap vlog-worker-config -n vlog \
--from-literal=api-url='http://YOUR_SERVER_IP:9002'
# Deploy workers
kubectl apply -f k8s/deployment.yaml

See k8s/ directory for full manifests. Key examples:
NVIDIA GPU Worker (k8s/worker-deployment-nvidia.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vlog-worker-nvidia
  namespace: vlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vlog-worker
      app.kubernetes.io/component: nvidia
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vlog-worker
        app.kubernetes.io/component: nvidia
    spec:
      runtimeClassName: nvidia  # Required for GPU access
      containers:
        - name: worker
          image: vlog-worker-gpu:rocky10
          imagePullPolicy: Never
          env:
            - name: VLOG_WORKER_API_URL
              valueFrom:
                configMapKeyRef:
                  name: vlog-worker-config
                  key: api-url
            - name: VLOG_WORKER_API_KEY
              valueFrom:
                secretKeyRef:
                  name: vlog-worker-secret
                  key: nvidia-api-key
            - name: VLOG_HWACCEL_TYPE
              value: "nvidia"
          resources:
            limits:
              nvidia.com/gpu: 1
              memory: "4Gi"

Intel Arc/Battlemage Worker (k8s/worker-deployment-intel.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vlog-worker-intel
  namespace: vlog
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vlog-worker
      app.kubernetes.io/component: intel
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vlog-worker
        app.kubernetes.io/component: intel
    spec:
      containers:
        - name: worker
          image: vlog-worker-gpu:rocky10
          imagePullPolicy: Never
          env:
            - name: VLOG_WORKER_API_URL
              valueFrom:
                configMapKeyRef:
                  name: vlog-worker-config
                  key: api-url
            - name: VLOG_WORKER_API_KEY
              valueFrom:
                secretKeyRef:
                  name: vlog-worker-secret
                  key: intel-api-key
            - name: VLOG_HWACCEL_TYPE
              value: "intel"
          resources:
            limits:
              gpu.intel.com/xe: 1
              memory: "4Gi"

CPU-Only Worker (k8s/worker-deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vlog-worker
  namespace: vlog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vlog-worker
  template:
    metadata:
      labels:
        app: vlog-worker
    spec:
      containers:
        - name: worker
          image: vlog-worker-gpu:rocky10
          imagePullPolicy: Never
          env:
            - name: VLOG_WORKER_API_URL
              valueFrom:
                configMapKeyRef:
                  name: vlog-worker-config
                  key: api-url
            - name: VLOG_WORKER_API_KEY
              valueFrom:
                secretKeyRef:
                  name: vlog-worker-secret
                  key: api-key
            - name: VLOG_HWACCEL_TYPE
              value: "none"
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "4"

# Check worker status via CLI
vlog worker status
# View worker logs
kubectl logs -n vlog -l app=vlog-worker -f
# List registered workers
vlog worker list
# Check job status
kubectl exec -n vlog deployment/vlog-worker -- ps aux

# Scale workers manually
kubectl scale deployment/vlog-worker -n vlog --replicas=4
# Or use HPA for auto-scaling
kubectl autoscale deployment/vlog-worker -n vlog \
  --min=1 --max=10 --cpu-percent=70

# Check if workers are connecting
vlog worker status
# View detailed logs
kubectl logs -n vlog -l app=vlog-worker --tail=100
# Check for pending jobs
psql -U vlog -d vlog -c "SELECT id, video_id, current_step, worker_id FROM transcoding_jobs WHERE completed_at IS NULL"
# Reset stuck jobs
psql -U vlog -d vlog -c "UPDATE transcoding_jobs SET worker_id = NULL WHERE completed_at IS NULL"
psql -U vlog -d vlog -c "UPDATE videos SET status = 'pending' WHERE status = 'processing'"

If SELinux is enforcing:
# Allow nginx to proxy
sudo setsebool -P httpd_can_network_connect 1
# Allow Python to bind to ports
sudo semanage port -a -t http_port_t -p tcp 9000
sudo semanage port -a -t http_port_t -p tcp 9001
sudo semanage port -a -t http_port_t -p tcp 9002

# All services
sudo journalctl -u vlog-public -u vlog-admin -u vlog-worker -f
# Specific service
sudo journalctl -u vlog-worker -f
# Since last boot
sudo journalctl -u vlog-public -b

Logs are managed by journald. Configure retention in /etc/systemd/journald.conf:
[Journal]
SystemMaxUse=1G
MaxRetentionSec=30days

# Backup database
pg_dump -U vlog vlog > /backup/vlog-$(date +%Y%m%d).sql
# Backup with compression
pg_dump -U vlog -Fc vlog > /backup/vlog-$(date +%Y%m%d).dump
# Restore from backup
pg_restore -U vlog -d vlog /backup/vlog.dump

Video files on NAS should be backed up according to your NAS backup strategy.
# Stop services
sudo systemctl stop vlog.target
# Backup database
pg_dump -U vlog vlog > /backup/vlog-pre-upgrade-$(date +%Y%m%d).sql
# Pull latest code
cd /path/to/vlog
git pull origin main
# Update dependencies
source venv/bin/activate
pip install -e .
# Run database migrations
alembic upgrade head
# Start services
sudo systemctl start vlog.target

If upgrading from a version before the database-backed settings system:
- First startup will auto-seed: On first startup after the upgrade, VLog automatically detects a fresh settings table and seeds it from your current environment variables.
- Or migrate manually:
# After upgrade, migrate settings from environment to database
vlog settings migrate-from-env
# Verify settings were migrated
vlog settings list
# The command outputs which env vars are now "safe to remove"
# You can keep them as fallbacks or remove them from your environment

- Update configuration approach:
Before (environment variables):
export VLOG_HLS_SEGMENT_DURATION=6
export VLOG_WATERMARK_ENABLED=true

After (database via CLI):
vlog settings set transcoding.hls_segment_duration 6
vlog settings set watermark.enabled true

Or via Admin UI: Navigate to Settings tab in the admin interface.
- Environment variables still work: For backwards compatibility, environment variables continue to work as fallbacks if a setting isn't found in the database.
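The fallback order can be pictured with a short sketch (a hypothetical helper, not VLog's actual implementation): the database value wins when present, otherwise the corresponding environment variable is consulted.

```python
import os

def get_setting(db: dict, key: str, env_var: str, default=None):
    """Resolve a setting: database value first, env var as fallback."""
    if key in db:
        return db[key]
    return os.environ.get(env_var, default)

# Env var is used only when the database has no value for the key.
os.environ["VLOG_WATERMARK_ENABLED"] = "true"
print(get_setting({}, "watermark.enabled", "VLOG_WATERMARK_ENABLED"))    # true
print(get_setting({"watermark.enabled": "false"}, "watermark.enabled",
                  "VLOG_WATERMARK_ENABLED"))                             # false
```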
| Aspect | Before | After |
|---|---|---|
| Configuration changes | Edit env vars, restart service | Update via UI/CLI, no restart |
| Settings visibility | Check `.env` files | View in admin UI |
| Audit trail | None | All changes logged |
| Per-setting control | All or nothing | Individual settings |
| Cache behavior | Immediate | Up to 60 seconds delay |
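The "up to 60 seconds delay" row reflects read-through caching of settings; a sketch of the idea (class name, TTL, and injected clock are illustrative, not VLog's code):

```python
import time

class SettingsCache:
    """Cache settings reads for a TTL, so changes appear within that window."""
    def __init__(self, loader, ttl=60.0, clock=time.monotonic):
        self.loader, self.ttl, self.clock = loader, ttl, clock
        self._cache = {}  # key -> (value, fetched_at)

    def get(self, key):
        hit = self._cache.get(key)
        if hit and self.clock() - hit[1] < self.ttl:
            return hit[0]  # still fresh: may be up to `ttl` seconds stale
        value = self.loader(key)
        self._cache[key] = (value, self.clock())
        return value

backing = {"hls_segment_duration": 6}
now = [0.0]
cache = SettingsCache(backing.get, ttl=60, clock=lambda: now[0])
print(cache.get("hls_segment_duration"))  # 6
backing["hls_segment_duration"] = 4       # setting changed in the database...
print(cache.get("hls_segment_duration"))  # 6 (stale, still within TTL)
now[0] = 61.0
print(cache.get("hls_segment_duration"))  # 4 (refreshed after TTL expiry)
```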
These settings cannot be changed at runtime and still require environment variables:
- `VLOG_DATABASE_URL`
- `VLOG_STORAGE_PATH`
- `VLOG_PUBLIC_PORT`, `VLOG_ADMIN_PORT`, `VLOG_WORKER_API_PORT`
- `VLOG_SESSION_SECRET_KEY` (required for user authentication)
- `VLOG_WORKER_ADMIN_SECRET`
# Check status and logs
sudo systemctl status vlog-public
sudo journalctl -u vlog-public -n 50
# Common issues:
# - PYTHONPATH not set correctly
# - NAS not mounted
# - Port already in use

# Check worker logs
sudo journalctl -u vlog-worker -f
# Common issues:
# - ffmpeg not installed or missing codecs
# - Disk space full
# - NAS connection issues

PostgreSQL connection problems:
# Check PostgreSQL is running
sudo systemctl status postgresql
# Check connections
psql -U vlog -d vlog -c "SELECT * FROM pg_stat_activity WHERE datname = 'vlog';"
# Restart PostgreSQL if needed
sudo systemctl restart postgresql
# Restart services
sudo systemctl restart vlog.target

- Check MIME types in nginx (should be `video/mp2t` for `.ts` files)
- Verify CORS headers in browser dev tools
- Check that master.m3u8 exists and references correct files
# Public API
curl -s http://localhost:9000/health
# Admin API
curl -s http://localhost:9001/health
# Worker API
curl -s http://localhost:9002/api/health
# Check all services
sudo systemctl status vlog-public vlog-admin vlog-worker vlog-worker-api vlog-transcription
# Check remote workers
vlog worker status

# CPU/Memory during transcoding
top -p $(pgrep -f transcoder)
# Disk usage
df -h /mnt/nas/vlog-storage

VLog exposes Prometheus metrics for comprehensive monitoring:
# Admin API metrics
curl -s http://localhost:9001/metrics
# Worker API metrics
curl -s http://localhost:9002/api/metrics

Add to your prometheus.yml:
scrape_configs:
  - job_name: 'vlog'
    static_configs:
      - targets: ['your-vlog-server:9001', 'your-vlog-server:9002']
    metrics_path: /metrics
    scrape_interval: 15s

See MONITORING.md for complete metrics documentation and Grafana dashboards.
VLog supports serving video content through a CDN for improved performance and reduced origin server load.
CDN settings are managed via the database-backed settings system:
# Enable CDN via CLI
vlog settings set cdn.enabled true
vlog settings set cdn.base_url https://cdn.yourdomain.com

Or via the Admin UI: Settings > CDN Configuration.
Your CDN should be configured to:
- Origin: Point to your VLog server (port 9000)
- Cache Rules:
  - Cache video segments (`.ts`, `.m4s`) for long periods (1 year)
  - Cache manifests (`.m3u8`, `.mpd`) for short periods (10 seconds)
  - Cache thumbnails (`.jpg`) for medium periods (1 day)
- Headers: Preserve CORS headers from origin
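Those cache rules boil down to a TTL per file type; a small sketch of the mapping (the function and dictionary are ours, values mirror the list above):

```python
# TTLs in seconds, per asset type, matching the cache rules above.
CACHE_TTLS = {
    ".ts": 365 * 24 * 3600,   # video segments: 1 year
    ".m4s": 365 * 24 * 3600,
    ".m3u8": 10,              # manifests: 10 seconds
    ".mpd": 10,
    ".jpg": 24 * 3600,        # thumbnails: 1 day
}

def cache_ttl(path: str, default: int = 0) -> int:
    """Return the cache TTL in seconds for a video asset path."""
    for ext, ttl in CACHE_TTLS.items():
        if path.endswith(ext):
            return ttl
    return default

print(cache_ttl("/videos/123/seg_0001.ts"))  # 31536000
print(cache_ttl("/videos/123/master.m3u8"))  # 10
```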
Page Rule: cdn.yourdomain.com/videos/*.ts
- Cache Level: Cache Everything
- Edge Cache TTL: 1 month
- Browser Cache TTL: 1 year
Page Rule: cdn.yourdomain.com/videos/*.m3u8
- Cache Level: Cache Everything
- Edge Cache TTL: 10 seconds
- Browser Cache TTL: 10 seconds
For a self-hosted CDN/caching layer:
# /etc/nginx/conf.d/vlog-cdn.conf
proxy_cache_path /var/cache/nginx/vlog levels=1:2 keys_zone=vlog_cache:100m max_size=50g inactive=60d;

server {
    listen 80;
    server_name cdn.yourdomain.com;

    location /videos/ {
        proxy_pass http://127.0.0.1:9000;
        proxy_cache vlog_cache;

        # Cache video segments for 1 year
        proxy_cache_valid 200 365d;

        # Cache manifests for 10 seconds
        location ~ \.(m3u8|mpd)$ {
            proxy_pass http://127.0.0.1:9000;
            proxy_cache vlog_cache;
            proxy_cache_valid 200 10s;
        }

        add_header X-Cache-Status $upstream_cache_status;
    }
}

# Create compressed backup
pg_dump -U vlog -Fc vlog > /backup/vlog-$(date +%Y%m%d).dump
# Verify backup
pg_restore --list /backup/vlog-*.dump | head
# Restore from backup
pg_restore -U vlog -d vlog --clean /backup/vlog.dump

For Kubernetes deployments, use the provided CronJob:
# Create backup credentials secret
kubectl create secret generic postgres-backup-credentials \
--namespace vlog \
--from-literal=PGHOST=your-postgres-host \
--from-literal=PGPORT=5432 \
--from-literal=PGDATABASE=vlog \
--from-literal=PGUSER=vlog \
--from-literal=PGPASSWORD=your-password
# Deploy backup CronJob
kubectl apply -f k8s/backup-cronjob.yaml

The CronJob:
- Runs daily at 2:00 AM UTC
- Creates compressed dumps using `pg_dump --format=custom`
- Verifies backup integrity
- Retains 7 days of backups
- Stores backups on NAS (`/mnt/nas/vlog-storage/backups/`)
For systemd deployments, create a backup script:
#!/bin/bash
# /usr/local/bin/vlog-backup.sh
BACKUP_DIR=/mnt/nas/vlog-storage/backups
RETENTION_DAYS=7
DATE=$(date +%Y-%m-%d-%H%M%S)
# Create backup
pg_dump -U vlog -Fc vlog > "${BACKUP_DIR}/vlog-${DATE}.dump"
# Verify backup
if ! pg_restore --list "${BACKUP_DIR}/vlog-${DATE}.dump" > /dev/null 2>&1; then
echo "Backup verification failed!"
rm -f "${BACKUP_DIR}/vlog-${DATE}.dump"
exit 1
fi
# Clean old backups
find "${BACKUP_DIR}" -name "vlog-*.dump" -mtime +${RETENTION_DAYS} -delete
echo "Backup completed: vlog-${DATE}.dump"

Add to crontab:
0 2 * * * /usr/local/bin/vlog-backup.sh >> /var/log/vlog-backup.log 2>&1Video files on NAS should be backed up using your NAS's backup features:
- RAID for redundancy
- Periodic snapshots
- Off-site replication for disaster recovery
Important: Video files can be regenerated from source files, but source files in uploads/ are deleted after transcoding. Consider keeping source files if re-encoding might be needed.
VLog logs security-relevant operations for compliance and troubleshooting.
By default: /var/log/vlog/audit.log
Configure via environment variables:
VLOG_AUDIT_LOG_ENABLED=true
VLOG_AUDIT_LOG_PATH=/var/log/vlog/audit.log
VLOG_AUDIT_LOG_LEVEL=INFO

| Event | Description |
|---|---|
| `auth.login` | Admin login attempts |
| `auth.logout` | Admin logout |
| `video.upload` | Video upload initiated |
| `video.delete` | Video deleted |
| `video.restore` | Video restored from archive |
| `settings.update` | Runtime setting changed |
| `worker.register` | New worker registered |
| `worker.revoke` | Worker API key revoked |
{
  "timestamp": "2025-12-27T10:30:00Z",
  "event": "video.delete",
  "user": "admin",
  "ip": "192.168.1.100",
  "details": {
    "video_id": 123,
    "video_title": "Example Video"
  }
}

Audit logs use RotatingFileHandler with automatic rotation:
VLOG_AUDIT_LOG_MAX_BYTES=10485760 # 10 MB per file
VLOG_AUDIT_LOG_BACKUP_COUNT=5  # Keep 5 backup files

For systemd/journald, logs are automatically managed. Configure retention in /etc/systemd/journald.conf:
[Journal]
SystemMaxUse=1G
MaxRetentionSec=90days

# View recent audit entries
tail -f /var/log/vlog/audit.log | jq .
# Search for specific events
grep '"event":"settings.update"' /var/log/vlog/audit.log | jq .
# Filter by date
grep '2025-12-27' /var/log/vlog/audit.log | jq .

Before going to production, verify:
- Admin API (9001) not exposed to internet
- `VLOG_SESSION_SECRET_KEY` is set (required for auth)
- Initial admin account created via setup wizard
- `VLOG_REGISTRATION_MODE` configured (invite, open, or disabled)
- `VLOG_WORKER_ADMIN_SECRET` is set
- `VLOG_SECURE_COOKIES=true` (when using HTTPS)
- HTTPS enabled via reverse proxy
- Rate limiting enabled
- Firewall rules configured
- PostgreSQL backups configured
- Log rotation configured
- Health checks responding
- Monitoring/alerting set up
- Redis enabled for job queue
- CDN configured (if needed)
- GPU workers deployed (if available)
- NAS storage adequate
- Systemd services enabled
- Prometheus scraping configured
- Runbooks documented
- On-call procedures defined
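The secret values on the checklist (`VLOG_SESSION_SECRET_KEY`, `VLOG_WORKER_ADMIN_SECRET`) should be long random strings. One way to generate them, mirroring the Redis password step earlier; each value carries 32 bytes of entropy:

```python
import secrets

# URL-safe random secrets, 32 bytes of entropy each.
session_key = secrets.token_urlsafe(32)
worker_secret = secrets.token_urlsafe(32)

print(f"VLOG_SESSION_SECRET_KEY={session_key}")
print(f"VLOG_WORKER_ADMIN_SECRET={worker_secret}")
```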