This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is the Solarpunk Gift Economy Mesh Network - a production application implementing a fully-distributed, offline-first gift economy on DTN-based mesh networks. This is a WORKING APPLICATION using proven multi-agent patterns, OpenSpec workflows, and ValueFlows economic coordination.
This is NOT a meta-framework or template system. It's the actual implementation of a solarpunk resistance infrastructure for mutual aid, economic withdrawal, sanctuary networks, and community resilience.
Key Features:
- DTN (Delay-Tolerant Networking) mesh infrastructure with WiFi Direct/BATMAN-adv
- ValueFlows v1.0 economic coordination (offers, needs, exchanges, commitments)
- Web of Trust with vouch chains and trust scoring
- 14 AI agents for matchmaking, scheduling, governance, and resource optimization
- End-to-end encrypted messaging with panic features (duress codes, secure wipe)
- Sanctuary network coordination for people at risk
- Rapid response system for emergency situations
- OpenSpec workflow for proposal management and quality gates
- OpenSpec Workflow System (`openspec/`, `coordination_templates/`) - Structured spec-driven development with a proposal → approval → implementation → validation → archive lifecycle
  - Replaces traditional roadmap files with structured requirements (SHALL/MUST, WHEN/THEN scenarios)
  - Three directories: `specs/` (living requirements), `changes/` (active proposals), `archive/` (completed work)
- Agent Definitions (`agents/*.md`) - Specialized agents with domain expertise (orchestrator, architect, feature-implementer, validators, researchers, skeptics)
  - Agent memory system (`agents/memories/*.json`) for persistent context across sessions
  - Quality gates: architect approval → PM validation → test validation
- MCP Hot-Reload Proxy System (`mcp_proxy_system/`) - 94% context savings (3 proxy tools vs 50+ individual tools)
  - Hot-reload MCP servers without restarting Claude Code
  - Programmatic tool orchestration (loops, conditionals, batch operations)
  - Dynamic installation from git repositories
- NATS Event Streaming (`NATS_*.md`, `scripts/nats_helpers.sh`) - Project-namespaced event streaming for multi-agent coordination
  - Shared server at `nats://34.185.163.86:4222` (GCP Europe-West3)
  - Each project MUST use a unique namespace: `{PROJECT_NAME}_{STREAM_NAME}` format
- Autonomous Worker Templates (`autonomous_worker_templates/`) - Scheduled autonomous Claude Code execution (hourly/daily)
  - Systemd service/timer templates for VM deployment
  - Pattern: pull code → process work queues → fix bugs → create PRs → log metrics
- Solarpunk Node Spec (`solarpunk_node_full_spec.md`) - Full specification for building DTN-based mesh networks on Android phones
  - Multi-AP islands + bridge nodes + ValueFlows economic coordination
  - Not yet implemented - this is the target system specification
```bash
# Start all services
./run_all_services.sh

# Services started:
# - DTN Bundle System (port 8000)
# - ValueFlows Node (port 8001)
# - Discovery & Search
# - File Chunking
# - Bridge Management (port 8002)

# Stop all services
./stop_all_services.sh

# Run tests
source venv/bin/activate
pytest tests/ -v
```

```bash
# Source environment (always do this first)
source .env

# Helper functions from scripts/nats_helpers.sh
source scripts/nats_helpers.sh

# Create project-namespaced stream
nats_create_stream "errors" "errors.>" "workqueue"
# Creates: ${PROJECT_NAME}_ERRORS with subject ${project_name}.errors.>

# List streams for this project
nats_list_project_streams

# Publish to namespaced subject
nats_publish "errors.critical" '{"error": "..."}'

# Subscribe to subjects
nats_subscribe "errors.>"

# Direct NATS CLI usage
nats stream list --context=gcp-orchestrator
nats pub "${NATS_NAMESPACE,,}.errors" '{"error": "..."}' --context=gcp-orchestrator
nats sub "${NATS_NAMESPACE,,}.errors.>" --context=gcp-orchestrator
```

```python
# Load MCP server dynamically (no restart!)
load_mcp_server_dynamically("server-name")

# Call tools programmatically
call_dynamic_server_tool("server-name", "tool_name", {"param": "value"})

# See what's loaded
get_loaded_servers()

# Install from git and load
install_and_load_mcp_server("https://github.com/user/mcp-server")

# Reload after code changes
reload_mcp_server("server-name")
```

```bash
# Copy template to project
cp autonomous_worker_templates/autonomous-worker.sh /path/to/project/
chmod +x /path/to/project/autonomous-worker.sh
# Customize the TASK_EOF section in autonomous-worker.sh for your project

# Test manually
cd /path/to/project
./autonomous-worker.sh

# Set up systemd timer (on VM)
sudo cp autonomous_worker_templates/autonomous-worker.service.template \
  /etc/systemd/system/autonomous-worker.service
sudo cp autonomous_worker_templates/autonomous-worker.timer.template \
  /etc/systemd/system/autonomous-worker.timer

# Edit templates (replace {{VARIABLES}})
sudo nano /etc/systemd/system/autonomous-worker.service

# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable autonomous-worker.timer
sudo systemctl start autonomous-worker.timer

# Check status
systemctl status autonomous-worker.timer
systemctl list-timers autonomous-worker.timer
journalctl -u autonomous-worker.service -n 50
```

```javascript
// Orchestrator - coordinates multi-agent workflows
Task({ subagent_type: "orchestrator", description: "...", prompt: "..." })

// Architect - validates proposals, maintains roadmap, archives completed work
Task({ subagent_type: "architect", description: "...", prompt: "..." })

// Feature Implementer - executes approved proposals
Task({ subagent_type: "feature-implementer", description: "...", prompt: "..." })

// PM Validator - verifies requirements met
Task({ subagent_type: "pm-validator", description: "...", prompt: "..." })

// Test Validator - enforces quality gates (no placeholders, tests pass)
Task({ subagent_type: "test-validator", description: "...", prompt: "..." })

// Research agents
Task({ subagent_type: "super-alignment-researcher", description: "...", prompt: "..." })
Task({ subagent_type: "research-skeptic", description: "...", prompt: "..." })
Task({ subagent_type: "architecture-skeptic", description: "...", prompt: "..." })
```
1. Create Proposal: Any agent creates a proposal in `openspec/changes/{feature-name}/`
   - Required files: `proposal.md`, `tasks.md`
   - Status: Draft
2. Architect Review: Architect validates proposal quality
   - Checks: clear requirements, reasonable scope, proper SHALL/MUST format
   - Status: Draft → Approved or Needs Revision
3. Implementation: Feature implementer executes the approved proposal
   - Follows `tasks.md` breakdown
   - Status: Approved → In Progress
4. Validation: PM and Test validators check quality gates
   - PM validator: requirements met, scenarios pass
   - Test validator: tests exist and pass, no placeholders
   - Status: In Progress → Validation
5. Archive: Architect moves the proposal to `openspec/archive/` with a timestamp
   - Updates changelog
   - Status: Validation → Completed → Archived
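The status lifecycle can be sketched as a small state machine. This is a hypothetical illustration - the statuses come from the workflow steps above, but no such helper exists in this repo:

```python
# Hypothetical sketch of the proposal status lifecycle - not repo code.
# Each key maps a status to the set of statuses it may move to next.
TRANSITIONS = {
    "Draft": {"Approved", "Needs Revision"},
    "Needs Revision": {"Draft"},
    "Approved": {"In Progress"},
    "In Progress": {"Validation"},
    "Validation": {"Completed"},
    "Completed": {"Archived"},
}

def advance(status: str, new_status: str) -> str:
    """Move a proposal to new_status, rejecting illegal jumps."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

status = "Draft"
for step in ["Approved", "In Progress", "Validation", "Completed", "Archived"]:
    status = advance(status, step)
print(status)  # Archived
```

A proposal cannot skip straight from Draft to Archived; every step is forced through review and validation.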
This NATS server is shared across multiple projects. ALWAYS use project namespacing:
```python
# Python
import os
namespace = os.getenv('NATS_NAMESPACE', 'default')
stream_name = f"{namespace.upper()}_ERROR_REPORTS"
subject = f"{namespace.lower()}.errors.production"
```

```bash
# Shell
source .env
STREAM_NAME="${NATS_NAMESPACE^^}_ERROR_REPORTS"
SUBJECT="${NATS_NAMESPACE,,}.errors.production"
```

Never create streams without a namespace prefix!
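These conventions can be wrapped in a helper that fails fast when the namespace is missing or out of sync with `PROJECT_NAME`, instead of silently falling back to `default`. A sketch - the function name `namespaced` is made up for illustration:

```python
import os

def namespaced(stream: str, subject: str) -> tuple:
    """Build a stream name and subject under the project namespace.

    Hypothetical helper: raises instead of defaulting, since unprefixed
    streams collide with other projects on the shared server.
    """
    ns = os.environ.get("NATS_NAMESPACE")
    project = os.environ.get("PROJECT_NAME")
    if not ns:
        raise RuntimeError("NATS_NAMESPACE is not set - source .env first")
    if project and ns != project:
        raise RuntimeError("NATS_NAMESPACE must match PROJECT_NAME")
    return f"{ns.upper()}_{stream}", f"{ns.lower()}.{subject}"

os.environ["NATS_NAMESPACE"] = "myproject"
os.environ["PROJECT_NAME"] = "myproject"
print(namespaced("ERROR_REPORTS", "errors.production"))
# ('MYPROJECT_ERROR_REPORTS', 'myproject.errors.production')
```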
All proposals MUST pass:
- Architect approval (before implementation)
- PM validation (requirements met)
- Test validation (tests exist, pass, no placeholders)
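The gate ordering matters: a later gate never runs if an earlier one fails. A hypothetical sketch of that short-circuit chain (none of these functions exist in the repo; the checks are stand-ins):

```python
# Hypothetical sketch of the quality-gate chain - each gate must pass
# before the next one runs. The lambda checks are placeholders.
def run_gates(proposal, gates):
    for name, check in gates:
        if not check(proposal):
            print(f"{name} rejected {proposal}")
            return False
    return True

gates = [
    ("architect approval", lambda p: True),
    ("PM validation", lambda p: True),
    ("test validation", lambda p: True),
]
print(run_gates("feature-x", gates))  # True
```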
Agents use persistent memory stored in `agents/memories/{agent-name}-memory.json`:
- Context preserved across sessions
- Learning from past decisions
- Consistency maintenance
- MCP-based storage and retrieval
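A minimal sketch of reading and updating such a memory file directly (hypothetical helper and schema - actual storage and retrieval go through MCP, as noted above):

```python
import json
from pathlib import Path

# Path convention from this document; the schema below is illustrative.
MEMORY_DIR = Path("agents/memories")

def load_memory(agent: str) -> dict:
    """Load an agent's memory, starting empty on first run."""
    path = MEMORY_DIR / f"{agent}-memory.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"decisions": [], "context": {}}

def remember(agent: str, decision: str) -> None:
    """Append a decision and persist it for the next session."""
    memory = load_memory(agent)
    memory["decisions"].append(decision)
    path = MEMORY_DIR / f"{agent}-memory.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(memory, indent=2))

remember("architect", "approved feature-x proposal")
print(load_memory("architect")["decisions"][-1])  # approved feature-x proposal
```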
| Component | Location | Purpose |
|---|---|---|
| Agent definitions | `agents/*.md` | Agent personas and workflows |
| Agent memories | `agents/memories/*.json` | Persistent agent context |
| OpenSpec specs | `openspec/specs/` | Living requirements |
| Active proposals | `openspec/changes/` | Work in progress |
| Completed work | `openspec/archive/` | Historical record |
| NATS helpers | `scripts/nats_helpers.sh` | Shell functions for NATS |
| MCP proxy | `mcp_proxy_system/` | Hot-reload proxy system |
| Autonomous worker | `autonomous_worker_templates/` | Scheduled worker templates |
| Documentation | Root `*.md` files | Integration guides |
Required `.env` variables for projects using this framework:
```bash
# Project identification (REQUIRED)
PROJECT_NAME=your_project_name
NATS_NAMESPACE=your_project_name  # Must match PROJECT_NAME

# NATS configuration
NATS_URL=nats://34.185.163.86:4222
NATS_USER=orchestrator
NATS_PASSWORD=f3LJamuke3FMecv0JYNBhf8z
NATS_CONTEXT=gcp-orchestrator

# Matrix (optional - for agent coordination)
MATRIX_HOMESERVER=https://matrix.org
MATRIX_USER=@your-bot:matrix.org
MATRIX_PASSWORD=your-password

# Agent memory
AGENT_MEMORY_PATH=./agents/memories
```

Add the proxy to `.mcp.json` in the target project:

```json
{
  "mcpServers": {
    "yourproject-proxy": {
      "command": "python",
      "args": ["-m", "mcp_proxy_system.servers.proxy_server"],
      "cwd": "/absolute/path/to/project",
      "env": {
        "PYTHONPATH": "/absolute/path/to/project",
        "DATABASE_URL": "your-database-url"
      }
    }
  }
}
```

- Create a new proposal in `openspec/changes/{feature-name}/`
- Add `proposal.md` with requirements (SHALL/MUST statements)
- Add `tasks.md` with implementation breakdown
- Request architect review and approval
- Implement the feature following the task breakdown
- Write tests (unit + integration + E2E as needed)
- Run the full test suite: `pytest tests/ -v`
- Update proposal status to "Implemented"
- Commit changes with a descriptive message
- Archive the completed proposal to `openspec/archive/`
- Create `agents/new-agent.md` following the existing agent format
- Define persona, expertise, responsibilities, constraints
- Specify decision-making patterns and quality gates
- Create a memory file: `agents/memories/new-agent-memory.json`
- Invoke via the Task tool: `Task({ subagent_type: "new-agent", ... })`
- Run `mkdir -p openspec/changes/feature-name`
- Create `proposal.md` with SHALL/MUST requirements and WHEN/THEN scenarios
- Create `tasks.md` with implementation breakdown
- Set Status: Draft
- Request architect review
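For orientation, a `proposal.md` skeleton consistent with the format described above (illustrative content only - see `openspec/archive/` for real examples):

```markdown
# Proposal: feature-name

Status: Draft

## Requirements
- The system SHALL expose a health endpoint on each service.
- The health endpoint MUST respond within 500 ms.

## Scenarios
- WHEN a service is healthy, THEN `/health` returns 200.
- WHEN a dependency is down, THEN `/health` returns 503 with a reason.
```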
- Make changes to the MCP server code
- Call `reload_mcp_server("server-name")` (or `load_mcp_server_dynamically` if it's the first load)
- Test immediately - no Claude Code restart needed
- Iterate rapidly
Never do the following:
- Create NATS streams without a project namespace prefix
- Commit `.env` files (use `.env.example` templates)
- Skip quality gates (architect → PM → test validation)
- Use this repo as application code (it's a template/pattern repository)
- Assume this repo has tests or build commands (it's documentation-heavy)
- Always use NATS namespacing: `{PROJECT_NAME}_{STREAM_NAME}`
- Follow the OpenSpec workflow for all significant changes
- Use agent memory system for context preservation
- Test autonomous workers manually before scheduling
- Review all autonomous PRs before merging
- Set budget limits on autonomous workers
- Daily + haiku: ~$15/month (recommended starting point)
- Hourly + haiku: ~$360/month (scale up after proven stable)
- Hourly + sonnet: ~$3,600/month (expensive, use sparingly)
- MCP proxy: no ongoing costs - it just saves context and speeds up responses
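For reference, the daily schedule in a timer template typically reduces to something like this (a sketch - the real templates with their `{{VARIABLES}}` placeholders live in `autonomous_worker_templates/`):

```ini
# autonomous-worker.timer (sketch of a daily schedule)
[Unit]
Description=Daily autonomous Claude Code worker

[Timer]
OnCalendar=daily
Persistent=true   # catch up if the VM was down at the scheduled time

[Install]
WantedBy=timers.target
```

Switching `OnCalendar=daily` to `OnCalendar=hourly` is the scale-up step the cost figures above refer to.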
Patterns migrated from these production projects:
- AI Tutor: `/Users/annhoward/src/ai_tutor/` - education platform with DevOps agent and NATS error processing
- Super Alignment to Utopia: `/Users/annhoward/src/super_alignment_to_utopia/` - research simulation with quality gates
- Multiverse School: `/Users/annhoward/src/themultiverse.school/` - multi-tenant platform with MCP hot-reload
Refer to these for working examples of patterns in action.
Read these files for deep dives:
- `INSTANTIATION_GUIDE.md` - Complete setup walkthrough
- `INTEGRATION.md` - Integration into existing projects
- `MIGRATED_PATTERNS.md` - Production patterns and best practices
- `NATS_INTEGRATION.md` - NATS event streaming setup
- `NATS_NAMESPACING.md` - Critical namespacing requirements
- `MCP_PROXY_USAGE.md` - Hot-reload proxy guide
- `openspec/AGENTS.md` - Detailed agent workflow
- `openspec/WORKFLOW_SUMMARY.md` - Workflow overview
- `autonomous_worker_templates/README.md` - Worker setup
- `mcp_proxy_system/README.md` - MCP proxy architecture
- `solarpunk_node_full_spec.md` - DTN mesh network specification
This is a meta-framework that evolves through:
- Extraction: Proven patterns from production projects
- Generalization: Remove project-specific details
- Documentation: Comprehensive guides and templates
- Reuse: Instantiate into new projects
- Iteration: Improve based on real-world usage
Continuous improvement through abstraction.