This document outlines recent progress and the immediate next steps for the devloop project.
Implemented env capture and cascade for `shell_files` and sequential commands in `executeNow()`.
- Env Capture Once: Shell file env vars are captured once before the command loop via `captureShellFileEnv()` (runs `source ... && env -0`, diffs against `os.Environ()`). This avoids re-executing non-idempotent scripts (e.g., token fetch) per command.
- Env Cascade: By default (`reset_env: false`), env vars exported by cmd1 are visible to cmd2. Non-last commands append `&& env -0 > tmpfile`, and the temp file is parsed after `cmd.Wait()` to update `currentEnv` for the next command.
- reset_env: Added `bool reset_env = 15` to Settings and `optional bool reset_env = 21` to Rule in the proto. Rule overrides global. When true, each command gets only the shell_files base env (no cascade).
- Shell files still sourced per-command: For functions/aliases (idempotent). Env vars are injected via `cmd.Env`.
- New functions: `parseEnvOutput()`, `envDiff()`, `envSliceToMap()`, `captureShellFileEnv()`, `parseEnvFile()` in `agent/common.go`
- YAML parsing: Added `reset_env` to `LoadConfig()` for both settings and rules
- Tests: 15 new unit tests + 5 integration tests exercising the full `executeNow()` flow via the orchestrator (cascade, reset_env, shell file capture-once, functions, rule override)
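The capture-and-diff flow above can be sketched as follows. The helper names mirror those listed for `agent/common.go`, but the bodies here are an illustrative sketch, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvOutput parses the NUL-separated KEY=VALUE records produced by
// `env -0`. Sketch of the helper named above; body is illustrative.
func parseEnvOutput(out string) map[string]string {
	env := make(map[string]string)
	for _, entry := range strings.Split(out, "\x00") {
		if entry == "" {
			continue
		}
		// Split only on the first '=' so values containing '=' survive intact.
		if k, v, ok := strings.Cut(entry, "="); ok {
			env[k] = v
		}
	}
	return env
}

// envDiff returns entries that are new or changed in after relative to base,
// i.e. what the sourced shell files exported.
func envDiff(base, after map[string]string) map[string]string {
	diff := make(map[string]string)
	for k, v := range after {
		if old, ok := base[k]; !ok || old != v {
			diff[k] = v
		}
	}
	return diff
}

func main() {
	base := map[string]string{"PATH": "/usr/bin"}
	after := parseEnvOutput("PATH=/usr/bin\x00TOKEN=abc123\x00")
	fmt.Println(envDiff(base, after)) // only TOKEN differs from the base env
}
```

Diffing against the pre-source snapshot is what lets the token-fetch script run once: only the exported delta is injected into each command's `cmd.Env`.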
Issue #3: Added a `disabled` boolean field to the Rule configuration. Users can now toggle rules off with `disabled: true` in YAML without commenting out config blocks.
- Added `bool disabled = 19` to the Rule proto message
- Config parsing maps the YAML `disabled` field to proto
- Orchestrator skips disabled rules during initialization (no RuleRunner, no file watcher, zero overhead)
- `TriggerRule` returns a "rule is disabled" error for disabled rules
- `GetRuleStatus` returns DISABLED status for disabled rules
- CLI `devloop status` shows a "(disabled)" label and "Disabled" status
- Service `GetRule` includes the `Disabled` field in its response
- Added `TestConfigDisabled` and `TestDisabledRulesSkipped` tests
- Added a pre-push git hook (`scripts/pre-push`) and a `make install-hooks` target
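In config, the new field might read like this. Only `disabled` comes from this change; the rule name and the elided keys are illustrative, not devloop's verified schema:

```yaml
rules:
  - name: docs-build        # hypothetical rule; only `disabled` is the new field
    disabled: true          # skipped at init: no RuleRunner, no file watcher
    # ... patterns / commands as usual ...
```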
Design doc: docs/streaming-logs-design.md
All three tasks completed:
- Single-Poller-Per-Rule with Fan-Out (completed 2026-03-05) -- Replaced per-client file polling with one `LogBroadcaster` per rule that reads new lines once and fans out to all subscribers. Added a `LogSource` interface, `FileLogSource` with truncation detection, `RingBuffer` for `last_n_lines` history replay, and a `last_n_lines` field to the `StreamLogsRequest` proto. Fixed the `finishedRules` bug where rules were stuck in the finished state after a re-run.
- Configurable Truncate vs Append (completed 2026-03-05) -- Added `append_on_restarts` per-rule config (default: `false` = truncate). When `true`, consecutive runs append to the same log file with a separator line. Proto field: `Rule.append_on_restarts`. YAML field: `append_on_restarts: true`. File writing, YAML loading, and RuleRunner integration all completed. Fixed the broadcaster to use `SignalNewRun(appendMode)` so append mode preserves ring buffer history and the source offset across restarts instead of resetting.
- Structured Log Events (completed 2026-03-05) -- Added typed `LogEvent` messages (RUN_STARTED, RUN_COMPLETED, RUN_FAILED, TIMEOUT) to `StreamLogsResponse`, replacing plain-text control messages with structured events. `RUN_STARTED` includes a `truncated` flag so clients can decide to clear their view. `SignalFinished` now accepts success/error info so `RUN_FAILED` events carry error messages. The CLI `devloop logs` formats events as `--- [rule] Run completed ---` lines. Moved the `SignalFinished` call into the RuleRunner defer block so it fires on all exit paths (success or failure).
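The `last_n_lines` replay rests on a bounded history buffer. A minimal sketch of that idea (the real `RingBuffer` lives in the agent package; this type and its methods are illustrative):

```go
package main

import "fmt"

// ringBuffer keeps only the most recent max lines, so a new subscriber can
// be served recent history without re-reading the whole log file.
type ringBuffer struct {
	lines []string
	max   int
}

func (r *ringBuffer) Add(line string) {
	r.lines = append(r.lines, line)
	if len(r.lines) > r.max {
		// Drop the oldest lines once capacity is exceeded.
		r.lines = append([]string(nil), r.lines[len(r.lines)-r.max:]...)
	}
}

// LastN returns up to n of the most recent lines, oldest first.
func (r *ringBuffer) LastN(n int) []string {
	if n > len(r.lines) {
		n = len(r.lines)
	}
	return r.lines[len(r.lines)-n:]
}

func main() {
	rb := &ringBuffer{max: 3}
	for _, l := range []string{"a", "b", "c", "d"} {
		rb.Add(l)
	}
	fmt.Println(rb.LastN(2)) // the two newest lines
}
```

The single poller appends each new line once; every subscriber then reads from the shared buffer instead of polling the file itself.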
Fixed 6 bugs across agent/rule_runner.go, agent/graceful_shutdown_test.go, and agent/e2e_test.go. Several were pre-existing issues on main (TestSemaphoreLogging, TestMediumPatternMatching), others were test issues (synchronous Start calls, timing-dependent assertions). All tests now pass.
- Fix: `RuleRunner.Stop()` hang when eventLoop never started (added a `started` atomic flag)
- Fix: Inverted success/failure status in the `executeNow()` defer block
- Fix: `orchestrator.Start()` called synchronously in shutdown tests (blocks forever)
- Fix: Double-close panic risk on `stopChan` (added a `stopped` atomic with Swap)
- Fix: Debounce acting as a rate limiter instead of a proper debounce (defer to ticker)
- Fix: `TestRelativePathPatterns` fragile sleep replaced with `assert.Eventually` polling
- ✅ Architecture Simplification - Unified WorkerPool:
- Problem: Dual-execution (LROManager vs WorkerPool) overcomplicated the system for the target use case (< 10 tasks)
- Solution: Unified all job execution through single WorkerPool with intelligent process management
- Simplifications Made:
- Removed LRO Components: Deleted
lro_manager.go,lro_manager_test.go, and LRO-specific logic (~500 lines removed) - Unified Job Routing: All rules route through WorkerPool regardless of duration
- Smart Process Killing: WorkerPool handles process termination with debounce-aware logic
- Removed Configuration Complexity: No more
lro: true/falseflags needed
- Removed LRO Components: Deleted
- Architecture Benefits:
- Simpler Mental Model: "DevLoop runs your tasks and restarts them when files change"
- Unified Execution Path: Single code path for all job types eliminates complexity
- Global Worker Pool: Users configure `max_parallel_rules` once for all jobs
- Debounce-Aware Killing: Manual triggers restart immediately; file changes respect the debounce window
- Developer Experience: Much cleaner configuration without LRO concepts to understand
- Result: Same functionality with significantly reduced complexity
- Impact: Alignment with core philosophy - devloop as a simple, powerful live-reloader
- ✅ Event-Driven Architecture Implementation (now simplified):
- Original dual-execution architecture with LROManager and WorkerPool separation
- Clean TriggerEvent messaging and status callback system (retained)
- Comprehensive test suite with 57% coverage (updated for unified architecture)
- ✅ Per-Rule File Watchers Implementation:
- Problem: Single shared watcher with union policy caused rule conflicts and debugging difficulties
- Root Cause: Complex union logic where one rule's exclude patterns could prevent other rules from watching needed directories
- Solution: Implemented independent fsnotify.Watcher instance per rule for complete isolation
- Implementation:
- New Watcher Class: Created
agent/watcher.gowith dedicated file watching logic - RuleRunner Integration: Each RuleRunner now owns and manages its file watcher
- Resource Analysis: Benchmarked watcher overhead (~2-5KB per watcher, minimal impact)
- Removed Union Policy: Eliminated complex directory watching union logic from Orchestrator
- Enhanced Logging: Added detailed rule/pattern information to "Skipping directory" messages
- New Watcher Class: Created
- Features Delivered:
- Independent file watching per rule - no more cross-rule interference
- Better debugging with rule-specific directory skipping messages
- Cleaner configuration - rules can't interfere with each other's watching
- Improved resource efficiency - only watch directories each rule actually needs
- Eliminated union policy confusion and conflicts
- Result: Frontend rules can watch web/ directories while backend rules exclude them
- Impact: Major architecture improvement - eliminates rule watching conflicts and enables better debugging
- ✅ Project Initialization Command (`devloop init`):
- Problem: New users needed to manually create `.devloop.yaml` configuration files, often with boilerplate or incorrect syntax
- Solution: Added a `devloop init` command with predefined project profiles for common development scenarios
- Implementation:
- Profile-Based Architecture: Created a `cmd/profiles/` directory with embedded YAML templates using `//go:embed`
- Multiple Profile Support: Added profiles for `golang` (Go projects), `typescript` (Node.js/TS), and `python` (Flask) development
- Flexible Profile Selection: Support for multiple profiles (`devloop init go ts py`) and convenient aliases
- Smart Configuration Generation: Default "Hello World" configuration when no profiles are specified
- CLI Integration: Full Cobra integration with help documentation, output customization, and force-overwrite options
- Error Handling: Graceful handling of invalid profiles with warnings, preventing accidental file overwrites
- Features Delivered:
- Quick project bootstrapping with `devloop init` for instant setup
- Pre-configured templates for common project types (Go, TypeScript, Python Flask)
- Profile aliases for convenience (`go` for `golang`, `ts` for `typescript`, `py` for `python`)
- Multiple profile combinations in a single command (`devloop init golang ts python`)
- Customizable output location (`-o` flag) and force overwrite (`-f` flag)
- Comprehensive help documentation (`devloop init --help`)
- Embedded profile assets for zero external dependencies
- Architecture Benefits:
- Easy extensibility - new profiles are added by creating YAML files in `cmd/profiles/`
- Maintainable templates - each profile is a separate, readable YAML file
- Embedded deployment - profiles are bundled in the binary with no external file dependencies
- Usage Examples:
- `devloop init` - Basic configuration with a file watching example
- `devloop init golang` - Go project with build and run commands
- `devloop init ts python` - Multi-service TypeScript and Python setup
- `devloop init --force --output custom.yaml go` - Custom output with overwrite
- Result: New users can instantly bootstrap devloop configurations without manual YAML creation
- Impact: Significantly improved onboarding experience and reduced time-to-first-success for new devloop users
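The alias handling above can be sketched as a simple lookup before template selection. The alias table mirrors the documented aliases; the function name is illustrative, not devloop's actual code:

```go
package main

import "fmt"

// profileAliases maps short names to full profile names, per the docs above.
var profileAliases = map[string]string{
	"go": "golang",
	"ts": "typescript",
	"py": "python",
}

// resolveProfiles expands aliases and passes full names through untouched.
func resolveProfiles(args []string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		if full, ok := profileAliases[a]; ok {
			a = full
		}
		out = append(out, a)
	}
	return out
}

func main() {
	fmt.Println(resolveProfiles([]string{"go", "ts", "python"}))
	// [golang typescript python]
}
```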
- ✅ Subprocess Color Preservation Fix:
- Problem: Devloop was suppressing colors from subprocess output (like `npm build`, `go test`, etc.) while correctly coloring its own prefixes
- Root Cause: Devloop was globally setting `color.NoColor = true/false`, which affected all subprocesses using the same color detection logic
- Solution: Decoupled devloop's color control from subprocess color control
- Implementation:
- ColorManager Enhancement: Removed global `color.NoColor` control; subprocesses now decide their own color support
- ColoredPrefixWriter Update: Enhanced ANSI stripping with a proper regex; colors are preserved for terminal output and stripped for file logs
- Environment Variables: Added `FORCE_COLOR=1`, `CLICOLOR_FORCE=1`, and `COLORTERM=truecolor` for subprocess color detection
- Configuration Option: Added a `suppress_subprocess_colors: bool` setting (defaults to `false`)
- Comprehensive Testing: Added tests for ANSI color preservation and stripping functionality
- Features Delivered:
- Subprocess tools (npm, go test, etc.) now output their native colors in terminal
- Devloop prefixes remain properly colored (`[frontend]` in blue, `[backend]` in red)
- Log files receive clean output without ANSI codes
- Users can disable subprocess colors via `suppress_subprocess_colors: true` if needed
- Backwards compatible - existing configs continue working unchanged
- Result: `npm build` errors show in red and success messages in green, while devloop prefixes maintain their assigned colors
- Impact: Significantly improved developer experience with full color preservation from all tools while maintaining clean log files
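The strip-for-files, keep-for-terminal split comes down to removing ANSI escape sequences on the file path only. A sketch of that approach (the regex here illustrates the technique; it is not devloop's exact pattern):

```go
package main

import (
	"fmt"
	"regexp"
)

// ansiEscape matches CSI escape sequences (colors, cursor moves, etc.).
var ansiEscape = regexp.MustCompile(`\x1b\[[0-9;]*[A-Za-z]`)

// stripANSI removes escape sequences so file logs stay clean, while the
// terminal stream passes through untouched.
func stripANSI(s string) string {
	return ansiEscape.ReplaceAllString(s, "")
}

func main() {
	colored := "\x1b[31merror:\x1b[0m build failed"
	fmt.Println(stripANSI(colored)) // error: build failed
}
```

On the subprocess side, forcing color is just env injection, e.g. `cmd.Env = append(os.Environ(), "FORCE_COLOR=1", "CLICOLOR_FORCE=1", "COLORTERM=truecolor")` before `cmd.Start()`.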
- ✅ Color Scheme Prefix Fix:
- Problem: Color schemes were not working on rule prefixes despite a `color_logs: true` configuration
- Root Cause: Two issues prevented color functionality:
- YAML Parsing Issue: The `color_logs`, `color_scheme`, and `custom_colors` fields were not being parsed correctly from YAML into protobuf structs
- TTY Detection Issue: The `fatih/color` library was checking TTY status in subprocesses instead of letting devloop control color decisions globally
- Solution: Fixed both configuration parsing and TTY management
- Implementation:
- Enhanced LoadConfig(): Added proper YAML-to-protobuf parsing for color configuration fields in `agent/common.go`
- Fixed ColorManager: Modified TTY detection to let devloop control `color.NoColor` globally instead of making per-subprocess checks
- Global Color Control: Set `color.NoColor` once at startup based on devloop's TTY status and user configuration
- Features Delivered:
- Rule prefixes now display in configured colors (red, blue, etc.)
- Auto-assigned colors from palette for rules without explicit color configuration
- Proper TTY detection respects NO_COLOR environment variable and terminal capabilities
- Colors work correctly in interactive terminals while being disabled in non-TTY environments (CI, logs, pipes)
- Result: `[test-red]` displays in red and `[test-blue]` in blue, with proper ANSI color codes
- Impact: Enhanced developer experience with color-coded rule output for better visual distinction
-
✅ Non-Blocking Auto-Restart Fix:
- Problem: Auto-restart was blocking file watching, preventing responses to file changes during startup
- Root Cause: Orchestrator.Start() was waiting synchronously for all rules to complete startup with retry logic before starting file watching
- Solution: Moved retry logic from orchestrator startup to background rule execution
- Implementation:
- Modified RuleRunner.Start(): Made non-blocking by moving retry logic to background goroutine
- Added startWithRetry(): Background method that runs initialization retry logic using debounced execution
- Added triggerDebouncedWithRetry(): Specialized debounced trigger that includes retry logic for startup
- Enhanced Orchestrator.Start(): Starts file watching immediately without waiting for rule initialization
- Critical Failure Channel: Added a `criticalFailure` channel for rules with `exit_on_failed_init: true`
- Test Fix: Corrected `TestDebouncing` to use the proper `skip_run_on_init: true` instead of the invalid `run_on_init: false`
- Features Delivered:
- File watching starts immediately on orchestrator startup
- Rules initialize in background with full retry logic preserved
- File changes trigger during startup, enabling continuous development workflow
- Critical rules can still exit devloop via background channel communication
- All existing retry configuration and logging preserved
- Result: File changes now trigger during startup while rules retry initialization in background
- Impact: Eliminates blocking behavior - developers can respond to file changes immediately
-
✅ Startup Resilience & Exponential Backoff Retry Logic:
- Problem: When devloop starts, if any rule fails the first time, devloop quits entirely - preventing development from continuing even during transient failures
- Solution: Comprehensive startup retry system with exponential backoff that allows devloop to continue running while retrying failed rules
- Implementation:
- New Rule Configuration Fields:
- `exit_on_failed_init: bool` (default: false) - Controls whether devloop exits when this rule fails startup
- `max_init_retries: uint32` (default: 10) - Maximum retry attempts for a failed startup
- `init_retry_backoff_base: uint64` (default: 3000ms) - Base backoff duration for exponential backoff
- Enhanced RuleRunner.Start(): Added an `executeWithRetry()` method with configurable exponential backoff
- Modified Orchestrator.Start(): Collects startup failures instead of exiting immediately; only exits if critical rules fail
- Comprehensive Logging: Detailed retry attempt logging with next retry time and success notifications
- New Rule Configuration Fields:
- Features Delivered:
- Exponential backoff retry logic (3s, 6s, 12s, 24s, etc.) with configurable base duration
- Independent rule failure handling - rules fail independently without stopping devloop
- Configurable exit behavior for critical rules via `exit_on_failed_init: true`
- Graceful degradation - failed rules can still be triggered manually later
- Backward compatibility - default behavior allows devloop to continue running despite startup failures
- Result: Users can fix errors while devloop continues looping and watching for file changes
- Impact: Major usability improvement - eliminates the frustration of devloop quitting on transient startup failures
-
✅ Complete Cycle Detection Implementation:
- Problem: Devloop rules could create infinite cycles by watching files they modify, causing runaway resource consumption
- Solution: Comprehensive cycle detection system with static validation and dynamic protection
- Implementation:
- Phase 1 - Static Validation: Added startup validation to detect self-referential patterns in rule configurations
- Phase 2 - Dynamic Rate Limiting: Implemented TriggerTracker with frequency monitoring and exponential backoff
- Phase 3 - Advanced Dynamic Detection: Added cross-rule cycle detection, file thrashing detection, and emergency breaks
- Config Parser Fix: Fixed critical bug where YAML cycle_detection settings weren't being parsed into protobuf structs
- Features Delivered:
- Static self-reference detection with pattern overlap analysis relative to rule workdir
- Rate limiting with configurable max_triggers_per_minute and exponential backoff
- Cross-rule cycle detection using TriggerChain tracking with max_chain_depth limits
- File thrashing detection with sliding window frequency analysis
- Emergency cycle breaking with rule disabling and cycle resolution suggestions
- Comprehensive configuration options in cycle_detection settings block
- Result: Rules are prevented from creating infinite cycles while maintaining normal operation
- Impact: Critical reliability improvement - prevents runaway processes and resource exhaustion
-
✅ Watcher Robustness & Pattern Resolution Fix:
- Problem: Three critical watcher issues affecting reliability and intuitive behavior
- Patterns resolved relative to project root instead of rule's working directory
- Relative paths in patterns not properly honored
- Watcher flakiness when directories are added/removed
- Solution: Complete overhaul of pattern resolution and directory watching logic
- Implementation:
- Modified `LoadConfig()` to preserve relative patterns instead of resolving them to absolute paths
- Created a `resolvePattern()` helper for dynamic pattern resolution relative to a rule's `workdir`
- Enhanced `shouldWatchDirectory()` with pattern-based logic instead of hard-coded exclusions
- Added dynamic directory watching for CREATE/DELETE events
- Updated `RuleMatches()` to use runtime pattern resolution
- Result: Patterns now work intuitively relative to each rule's working directory
- Impact: Rules with different workdirs properly isolate their pattern matching, watcher handles dynamic filesystem changes
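The core of workdir-relative resolution can be sketched as: absolute patterns pass through, relative ones are joined to the rule's workdir at match time. The function name comes from the change list above; the body is illustrative:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolvePattern resolves a watch pattern relative to a rule's workdir.
func resolvePattern(workdir, pattern string) string {
	if filepath.IsAbs(pattern) {
		return pattern // absolute patterns are used as-is
	}
	return filepath.Join(workdir, pattern) // relative to the rule's workdir
}

func main() {
	fmt.Println(resolvePattern("/repo/web", "src/**/*.ts")) // /repo/web/src/**/*.ts
	fmt.Println(resolvePattern("/repo/web", "/etc/hosts"))  // /etc/hosts
}
```

Resolving at match time (rather than once at load) is what lets two rules with different workdirs interpret the same relative pattern independently.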
- ✅ CLI Restructuring with Cobra Framework:
- Problem: Monolithic main.go with basic flag parsing and poor user experience
- Solution: Complete refactoring to modern Cobra-based CLI with subcommands
- Implementation:
- Created a `cmd/` package with root, server, config, status, trigger, paths, and convert commands
- Extracted server logic to a `server/` package with proper signal handling
- Added a `client/` package for gRPC client utilities
- Default behavior: `devloop` starts the server; subcommands act as a client
- Result: Professional CLI experience with help, subcommands, and client functionality
- Impact: Users can now interact with running devloop servers via CLI commands
-
✅ Architecture Simplification:
- Problem: Dual orchestrator implementations (V1/V2) created complexity
- Solution: Simplified to single orchestrator implementation with Agent Service wrapper
- Implementation: Removed factory pattern, consolidated to single agent/orchestrator.go
- Result: Cleaner codebase, easier maintenance, and simplified testing
- Impact: All tests passing with single implementation
-
✅ Agent Service Integration:
- Problem: Complex gateway integration and API management
- Solution: Created dedicated Agent Service (agent/service.go) providing gRPC API access
- Implementation: Clean separation between file watching (Orchestrator) and API access (Agent Service)
- Result: Better modularity and preparation for grpcrouter-based gateway
- API: Provides GetConfig, GetRule, ListWatchedPaths, TriggerRule, StreamLogs endpoints
-
✅ MCP Integration Simplification:
- Root Cause: Complex MCP mode with separate service management was hard to maintain
- Solution: Simplified MCP to HTTP handler using existing Agent Service
- Implementation: MCP runs on the `/mcp` endpoint in the startHttpServer method
- Result: MCP is enabled with the `--enable-mcp` flag and uses the same AgentServiceServer
- Benefits: Easier maintenance, better integration with core functionality
- Compatibility: Still uses StreamableHTTP transport (MCP 2025-03-26 spec) for universal compatibility
-
✅ Default Port Configuration & Auto-Discovery:
- Problem: Default ports 8080 (HTTP) and 50051 (gRPC) cause frequent conflicts with other services
- Solution: Updated defaults to 19080 (HTTP) and 5555 (gRPC) - less common ports
- Auto-Discovery: Added an `--auto-ports` flag for automatic port conflict resolution
- Implementation: Port discovery logic in `runOrchestrator()` with a fallback search
- User Experience: `devloop` now starts without port conflicts in most scenarios
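A fallback port search of the kind `--auto-ports` implies can be sketched as: try the preferred port, then scan upward until one binds. Illustrative only; devloop's actual discovery logic lives in `runOrchestrator()`:

```go
package main

import (
	"fmt"
	"net"
)

// findFreePort returns the first bindable TCP port at or above preferred,
// giving up after the stated number of attempts.
func findFreePort(preferred, attempts int) (int, error) {
	for p := preferred; p < preferred+attempts; p++ {
		ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", p))
		if err == nil {
			ln.Close() // port is free; release it for the real listener
			return p, nil
		}
	}
	return 0, fmt.Errorf("no free port in %d-%d", preferred, preferred+attempts-1)
}

func main() {
	port, err := findFreePort(19080, 20) // 19080 is the new HTTP default
	fmt.Println(port, err)
}
```

Note the probe-then-close approach has a small race (another process can grab the port before the real listener binds); a production version would bind once and hand the listener over.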
-
✅ Complete Orchestrator Simplification:
- Deleted legacy dual orchestrator implementations and factory pattern
- Simplified codebase to single orchestrator implementation
- Updated all tests to use simplified architecture without version switching
- Preserved modern process management and rule execution features
-
✅ Critical Rule Matching Bug Fix:
- Root Cause: The Orchestrator was ignoring the `Action` field in matcher patterns
- Impact: Exclude patterns matched but still triggered rule execution
- Solution: Implemented proper action-based filtering logic
- Result: The SDL project's `web/**` exclusions now work correctly
-
✅ Enhanced Configuration System:
- Added a `default_action` field to the Rule struct for per-rule defaults
- Added a `default_watch_action` field to the Settings struct for global defaults
- Implemented proper precedence: rule-specific → global → hardcoded fallback
-
✅ Architecture Refactoring:
- Separated file watching (Orchestrator) from command execution (RuleRunner)
- Implemented OrchestratorV2 with cleaner separation of concerns
- Each rule now has its own RuleRunner managing its lifecycle
- Fixed various race conditions in process management
-
✅ Process Management Improvements:
- Fixed zombie process issues when devloop is killed
- Implemented platform-specific process management (Pdeathsig on Linux, Setpgid on Darwin)
- Commands now execute sequentially with proper failure propagation (like GNU Make)
- Added process existence checks before termination to avoid errors
-
✅ Cross-Platform Command Execution:
- Fixed hardcoded `bash -c` commands that failed on Windows
- Implemented `createCrossPlatformCommand()` using `cmd /c` on Windows and `bash -c` on Unix
- Added a fallback to `sh -c` for POSIX compatibility when bash is unavailable
- Fixed WorkDir behavior to default to the config file's directory instead of the current working directory
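The shell selection described above can be sketched as follows. The function name is from the change list; the body is an illustrative sketch, not devloop's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// createCrossPlatformCommand wraps a shell script in the platform's shell:
// cmd /c on Windows, bash -c elsewhere, with sh -c as the POSIX fallback.
func createCrossPlatformCommand(script string) *exec.Cmd {
	if runtime.GOOS == "windows" {
		return exec.Command("cmd", "/c", script)
	}
	if _, err := exec.LookPath("bash"); err == nil {
		return exec.Command("bash", "-c", script)
	}
	return exec.Command("sh", "-c", script)
}

func main() {
	out, err := createCrossPlatformCommand("echo hello").Output()
	fmt.Printf("%s %v\n", out, err)
}
```

A real version would also set `cmd.Dir` to the config file's directory, per the WorkDir fix above.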
-
✅ Color-Coded Rule Output:
- Added comprehensive color support using the `github.com/fatih/color` library
- Implemented configurable color schemes (auto, dark, light, custom)
- Hash-based consistent color assignment ensures the same rule always gets the same color
- Separate handling for terminal (colored) vs file (plain) output
- Configuration options: `color_logs`, `color_scheme`, `custom_colors`
- Per-rule color overrides via the `color` field in rule configuration
- Automatic terminal detection with sensible defaults
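Hash-based assignment means hashing the rule name into a fixed palette, so the same name lands on the same color every run. A sketch of the technique; the palette and function name are illustrative, not devloop's actual ones:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// colorForRule deterministically maps a rule name to a palette entry.
func colorForRule(name string, palette []string) string {
	h := fnv.New32a()
	h.Write([]byte(name)) // FNV-1a: cheap, stable hash of the rule name
	return palette[h.Sum32()%uint32(len(palette))]
}

func main() {
	palette := []string{"red", "green", "blue", "magenta", "cyan"}
	fmt.Println(colorForRule("backend", palette))
	fmt.Println(colorForRule("backend", palette)) // always the same color
}
```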
-
✅ Testing Infrastructure:
- Created factory pattern for testing both orchestrator implementations
- Environment variable `DEVLOOP_ORCHESTRATOR_VERSION` selects v1 or v2
- Added make targets: `testv1`, `testv2`, and `test` (runs both)
- All tests pass for both implementations
-
✅ Configuration Enhancements:
- Added rule-specific configuration options:
- `debounce_delay: 500ms` - Per-rule debounce settings
- `verbose: true` - Per-rule verbose logging
- `color: "blue"` - Per-rule color overrides
- Added global defaults in settings:
- `default_debounce_delay: 200ms`
- `verbose: false`
- `color_logs: true` - Enables colored output
- `color_scheme: "auto"` - Auto-detects the terminal theme
- Rule-specific settings override global defaults
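Put together, the global/per-rule split might look like this in config. Only the option keys come from the list above; the rule name and top-level layout are illustrative:

```yaml
settings:
  default_debounce_delay: 200ms
  verbose: false
  color_logs: true
  color_scheme: "auto"

rules:
  - name: backend            # hypothetical rule
    debounce_delay: 500ms    # overrides default_debounce_delay for this rule
    verbose: true            # overrides the global verbose setting
    color: "blue"            # skips hash-based auto-assignment
```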
- ✅ Fixed All Failing Tests: Successfully resolved all test failures
- ✅ Improved Glob Pattern Matching: Switched to the `bmatcuk/doublestar` library
- ✅ Code Reorganization: Moved files into the `agent/` directory
-
✅ Complete OrchestratorV2 Gateway Integration:
- ✅ Port missing gateway communication methods from v1 to v2
- ✅ Implement handleGatewayStreamRecv methods (GetConfig, GetRuleStatus, TriggerRule, etc.)
- ✅ Test gateway integration with both orchestrator versions (all tests passing)
-
✅ Production Readiness:
- ✅ Switch default orchestrator to v2 after thorough testing
- Add migration guide for users
- Performance benchmarking between v1 and v2
-
✅ MCP (Model Context Protocol) Integration Simplification:
- ✅ Simplified MCP as HTTP Handler: Changed from separate mode to simple HTTP endpoint integration
- ✅ Auto-generated MCP tools from protobuf definitions using protoc-gen-go-mcp
- ✅ StreamableHTTP Transport: Uses modern MCP 2025-03-26 specification for universal compatibility
- ✅ Stateless Design: No sessionId requirement for seamless Claude Code integration
- ✅ Agent Service Integration: MCP uses same Agent Service as gRPC API for consistency
- ✅ Enhanced protobuf documentation with comprehensive field descriptions and usage examples
- ✅ Core MCP tools: GetConfig, GetRule, ListWatchedPaths, TriggerRule, StreamLogs
- ✅ Updated integration approach using existing Agent Service architecture
- ✅ Manual project ID configuration for consistent AI tool identification
-
Implement grpcrouter-based Gateway Mode:
- Research and integrate grpcrouter library for automatic gateway/proxy/reverse tunnel functionality
- Implement simplified gateway mode using grpcrouter instead of custom implementation
- Add client logic for agents to connect to grpcrouter-based gateway
- Add comprehensive tests for distributed mode operations with new architecture
-
Add gRPC API Tests: Create a new suite of tests specifically for the gRPC and HTTP endpoints to ensure the API is robust and reliable
-
MCP Enhancement Opportunities:
- Add streaming log support to MCP tools for real-time build/test monitoring
- Consider implementing MCP resources for project files and configurations
- Explore MCP prompts for common development workflows
- Enhance MCP integration with additional Agent Service features
-
Performance Optimization:
- Profile the file watching and command execution pipeline
- Optimize debouncing logic for large numbers of file changes
- Consider implementing command queuing for better resource management
-
Enhanced Configuration:
- Add support for environment-specific configurations
- Implement configuration validation and better error messages
- Consider adding a configuration wizard for new users
-
Documentation: Update all project documentation (README, etc.) to reflect:
- The simplified single-orchestrator architecture with Agent Service integration
- The three core operating modes (standalone, agent, gateway) with MCP as HTTP handler
- The glob pattern behavior with doublestar and proper action-based filtering
- The new `agent/` directory structure and Agent Service architecture
- Simplified MCP integration as an HTTP handler using the Agent Service
-
User Experience:
- Refine the CLI flags and output for the new modes to ensure they are clear and intuitive
- Add better progress indicators for long-running commands
- Improve error messages and debugging information