
feat(v0.1.5): Moon Console — interactive data client #73

Merged
pilotspacex-byte merged 1 commit into main from feat/clients-connects on Apr 12, 2026

Conversation

pilotspacex-byte (Contributor) commented Apr 12, 2026

Summary

  • Moon Console — embedded interactive web client with HTTP/WebSocket/SSE gateway, KV browser, query console, 3D vector/graph visualization, and real-time dashboard. Served at /ui/ from the Moon binary with zero additional deployment.
  • 47 commits across 9 phases (128-136): 7 core feature phases + 2 gap-closure phases (frontend tests, live integration). 63/63 requirements satisfied per /gsd:audit-milestone (status: passed).
  • New Rust endpoints (feature-gated --features console): /api/v1/* (REST command dispatch via SPSC — no TCP self-connect), /ws (RESP3-over-JSON bridge), /events (SSE metrics at 1 Hz), /ui/* (rust-embed static serving), /api/v1/memory/treemap (server-side namespace aggregation), /api/v1/hnsw/trace.
  • React frontend (~7.8K LOC, 77 files): Vite 8 + React 19 + Tailwind 4 + shadcn/ui + Zustand + Monaco + Three.js/R3F + d3-force-3d + umap-js. 56 Vitest unit tests + 9 Playwright E2E specs + built-in Help view.
  • 4 real bugs fixed via live user verification: execCommand result unwrapping (type badges showed [object Object]), WebSocket protocol mismatch (Console hung on Cmd+Enter), server dropping id on error responses, multi-line paste → ERR syntax error (now Cmd+Enter runs current line, Cmd+Shift+Enter runs batch).
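
The WebSocket bug list above hinges on request/response correlation by id. A minimal TypeScript sketch of that correlation (names like `pending` and `track` are illustrative, not the console's actual code) shows why a server that drops `id` on error responses leaves the caller hanging:

```typescript
// Hypothetical sketch of the request/response correlation the /ws bridge relies on.
type WsResponse = { id?: number; type: string; data?: unknown; error?: string };

const pending = new Map<number, (resp: WsResponse) => void>();
let nextId = 0;

// Register a resolver and get the id to attach to the outgoing request.
function track(resolve: (resp: WsResponse) => void): number {
  const id = nextId++;
  pending.set(id, resolve);
  return id;
}

// Match an incoming message to its waiting request.
function correlate(resp: WsResponse): boolean {
  // If the server drops `id` on error responses, the matching resolver is
  // never found and the caller hangs -- the bug fixed in this PR.
  if (resp.id === undefined || !pending.has(resp.id)) return false;
  const resolve = pending.get(resp.id)!;
  pending.delete(resp.id);
  resolve(resp);
  return true;
}
```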

Test plan

  • cargo check (default features) — clean
  • cargo check --features console — clean on Linux (OrbStack moon-dev) and macOS
  • cargo test --features console — 45 unit tests pass (26 gateway + 19 Phase 136 endpoints)
  • cargo clippy -- -D warnings — zero warnings on default, console, and tokio+jemalloc feature sets
  • cd console && npx tsc --noEmit — clean
  • cd console && pnpm test — 56 Vitest tests pass
  • cd console && pnpm build — produces dist/ embedded by rust-embed
  • Live verification against running Moon (--shards 1 --admin-port 9100):
    • All 6 views render in Chrome via agent-browser
    • SSE stream fires live server_stats events at 1 Hz
    • Dashboard ops/sec chart shows spike during redis-benchmark -n 100000
    • KV Browser renders all type badges (STRING/HASH/LIST/SET/ZSET/STREAM) with correct colors
    • TTL live countdown works (set EXPIRE k 3600 → UI shows 59m 43s ticking)
    • Hash editor opens populated fields on key click
    • Memory treemap returns real namespace tree via /api/v1/memory/treemap
    • Console executes RESP commands via Cmd+Enter (current line) and Cmd+Shift+Enter (batch)
  • Push-verified CI (will run on this PR)
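
As a worked example of the TTL countdown above, a small formatter can turn a TTL in seconds into the "59m 43s" style display. The function name and exact format are assumptions, not the Browser view's actual code:

```typescript
// Hypothetical formatter for the KV Browser's live TTL countdown.
function formatTtl(seconds: number): string {
  if (seconds < 0) return "no expiry"; // Redis TTL returns -1 for persistent keys
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = seconds % 60;
  if (h > 0) return `${h}h ${m}m ${s}s`;
  if (m > 0) return `${m}m ${s}s`;
  return `${s}s`;
}
```

With `EXPIRE k 3600`, re-rendering this once per second yields the ticking "59m 43s" seen in the live verification.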

Known limitations (documented, not introduced by this PR)

  • FLUSHALL / DBSIZE / FLUSHDB / DEBUG / MEMORY USAGE return ERR unknown command — pre-existing Moon dispatch-table gap. Memory treemap shows 0 B for byte sizes as a result. Deferred to a v0.1.6 core-commands-gap-closure milestone.
  • REST SCAN queries shard 0 only (by design for keyless commands). Run with --shards 1 for a unified keyspace in the UI, or use hash tags to co-locate keys.
  • Playwright baked-in smoke specs from Phase 135 have a baseURL path-resolution quirk: page.goto("/vectors") resolves to /vectors instead of /ui/vectors. Ad-hoc tests with absolute URLs pass; the trivial one-line fix is deferred.
  • Playwright E2E harness from Phase 136 requires a running Moon + Linux CI runner. Harness script + workflow are in place; live CI execution is v0.2 ops work.
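
For the hash-tag workaround above: Redis-style hash tags make keys that share a `{tag}` hash to the same slot, so related keys land on one shard. A hedged sketch of extracting the effective hash key (the CRC16 slot computation itself is omitted, and Moon's actual routing may differ):

```typescript
// Extract the Redis-cluster-style effective hash key from a key name.
// "user:{42}:name" and "user:{42}:email" share the tag "42" and therefore
// the same shard; keys without a tag hash on the whole name.
function effectiveHashKey(key: string): string {
  const open = key.indexOf("{");
  if (open === -1) return key;
  const close = key.indexOf("}", open + 1);
  // Redis ignores an empty tag "{}" and falls back to the whole key.
  if (close === -1 || close === open + 1) return key;
  return key.slice(open + 1, close);
}
```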

Planning artifacts

  • .planning/ROADMAP.md — 9 phases with goals, requirements, and success criteria
  • .planning/REQUIREMENTS.md — 63 requirements across 9 categories (GW/DB/KV/QC/VE/GE/MO/TEST/INT)
  • .planning/v0.1.5-MILESTONE-AUDIT.md — final audit with status: passed
  • .planning/phases/{128..136}-*/ — per-phase CONTEXT / PLAN / SUMMARY / VERIFICATION / VALIDATION
  • .planning/research/{STACK,FEATURES,ARCHITECTURE,PITFALLS,SUMMARY}.md — domain research from /gsd:new-milestone

Summary by CodeRabbit

Release Notes

  • New Features

    • Added Moon Console—an interactive web-based admin UI accessible at /ui/ featuring a real-time dashboard, key-value browser with filtering and TTL management, multi-language query editor with history, vector/graph explorers with visualization, and memory analysis tools
    • Enabled real-time metrics streaming via SSE with live charts and server statistics
    • Added Bearer token authentication and CORS origin allowlist for console access control
    • Implemented per-IP rate limiting to protect admin endpoints
  • Bug Fixes

    • Fixed WebSocket responses to include client request ID in error messages
    • Corrected console type badge rendering
    • Improved multi-line query handling in console
  • Documentation

    • Added console setup and usage guide
  • Tests

    • Added comprehensive end-to-end and integration test coverage

coderabbitai Bot commented Apr 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Introduces Moon Console, a feature-gated interactive React 19 admin UI with WebSocket and REST backends, enabling real-time key browsing, query execution, vector visualization, and graph exploration. Includes admin authentication, CORS policies, rate limiting, new server commands, and comprehensive integration testing infrastructure.

Changes

Cohort / File(s) / Summary

  • Console Frontend Application (console/src/App.tsx, console/src/main.tsx, console/src/app.css, console/index.html, console/src/views/*, console/src/components/*): Complete React 19 application with routing, layout system, and multi-view support (Dashboard, Browser, Console, Graph, Vector, Memory). Includes dark-mode theming via Tailwind, Monaco editor integration, and real-time data visualization.
  • Console State Management (console/src/stores/*): Zustand-backed stores for browser (key scanning, selection, filtering), console (tab management, history, language detection), graph (node/edge rendering, force layout), vector (UMAP projection, KNN search, lasso selection), and memory (treemap navigation) with persisted tab state and history.
  • Console Type Definitions & Utilities (console/src/types/*, console/src/lib/utils.ts, console/src/lib/toast.ts, console/src/lib/monarch-*.ts): TypeScript interfaces for metrics, query results, vector indexes, and graph data; utility helpers for class merging, toast notifications, and Monaco Monarch syntax highlighters for RESP and Cypher.
  • Console API & Communication (console/src/lib/api.ts, console/src/lib/ws.ts, console/src/lib/sse.ts, console/src/lib/graph-api.ts, console/src/lib/vector-api.ts, console/src/lib/completions.ts): REST/WebSocket/SSE client layers for key operations (SCAN, GET, SET, TTL, memory), metrics streaming, graph/vector APIs, command autocomplete with 200+ Redis commands, and WebSocket-based multi-line query execution.
  • Console Workers & Build (console/src/workers/*, console/vite.config.ts, console/vitest.config.ts, console/tsconfig*.json, console/package.json): Web Workers for force-directed graph layout (D3) and UMAP dimensionality reduction; Vite configuration with React plugin, Tailwind integration, dev API proxying, and chunking for large libraries (Monaco, Three.js, Recharts, TanStack).
  • Console Testing (console/tests/e2e/*.spec.ts, console/tests/unit/**/*.test.ts*, console/playwright.config.ts): Playwright E2E smoke tests (dashboard, browser, console, vectors, graph, memory) with performance benchmarks; Vitest unit tests for stores, components, and Monaco completion/syntax; jsdom/React Testing Library setup.
  • Backend Console Gateway & API Handlers (src/admin/console_gateway.rs, src/admin/http_server.rs, src/admin/sse_stream.rs, src/admin/ws_bridge.rs, src/admin/scan_fanout.rs): SPSC ring-buffer-based gateway routing commands to shards; REST/WS/SSE request dispatching; multi-shard SCAN fan-out with composite cursors; RESP-to-JSON frame conversion and command result streaming.
  • Backend Admin Security & Middleware (src/admin/auth.rs, src/admin/cors.rs, src/admin/rate_limit.rs, src/admin/middleware.rs, src/admin/http_server_support.rs): Bearer token auth via HMAC-SHA256; CORS origin allowlist (wildcard rejected when auth is required); per-IP token-bucket rate limiting; middleware pipeline for preflight, auth exemptions, rate-limit enforcement, and header attachment.
  • Backend Admin API Handlers (src/admin/hnsw_trace.rs, src/admin/memory_treemap.rs, src/admin/static_files.rs): REST endpoints for the hierarchical memory treemap (key scanning + type/memory aggregation), HNSW vector search tracing (base64 query vectors), and embedded SPA fallback serving from rust-embed.
  • Server Admin Commands (src/command/server_admin.rs, src/command/key.rs): New FLUSHDB/FLUSHALL, DEBUG (OBJECT/SLEEP/HELP subcommands), MEMORY (USAGE/STATS/DOCTOR/HELP subcommands), and a read-only dbsize variant; includes serialized-length estimation and per-command access control.
  • Configuration & Main (src/config.rs, src/main.rs, src/admin/metrics_setup.rs, src/admin/mod.rs): CLI flags for console auth (required, secret), CORS origins, and rate limits; ConsoleGateway initialization and per-shard admin channel wiring; metrics publisher spawning for SSE events.
  • Vector Search Enhancement (src/vector/hnsw/search.rs, src/command/vector_search/mod.rs): HNSW search tracing (per-layer visited/selected counts) and comma-separated float query vector support alongside the binary format.
  • Command & Build Configuration (Cargo.toml, src/command/metadata.rs, src/command/mod.rs, src/shard/mesh.rs): New optional/feature-gated dependencies (hyper-tungstenite, rust-embed, monaco-editor, three.js, zustand, etc.); console feature aggregation; per-shard admin SPSC channel creation; AOF write-command verification.
  • Integration Testing (scripts/test-integration.sh, scripts/seed-console-fixtures.py, tests/console_gateway_test.rs, tests/admin_auth_cors_ratelimit.rs, tests/cmd_flush_dbsize_debug_memory.rs, tests/scan_fanout_multishard.rs, .github/workflows/console-integration.yml): Full integration harness spawning moon with the console feature, seeding KV/vector/graph fixtures, and running the Playwright suite; HTTP/WebSocket/HMAC/CORS/rate-limit validation; FLUSH/DEBUG/MEMORY command testing; multi-shard SCAN validation.
  • Documentation & Metadata (console/README.md, console/.gitignore, .gitignore, .planning, CHANGELOG.md, Dockerfile, .github/workflows/release.yml): Console development/build/test guide; TypeScript build artifacts ignored; planning submodule pointer updated; version 0.1.5 changelog with console feature, hardening changes, and validation checklist; Rust version bump to 1.94.
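
The per-IP token-bucket rate limiting listed under Backend Admin Security & Middleware can be sketched as follows. Moon's implementation lives in Rust (src/admin/rate_limit.rs), so this TypeScript version, its field names, and its refill policy are purely illustrative:

```typescript
// Minimal token-bucket sketch: each IP gets a bucket with `capacity` tokens
// that refills at `refillPerSec`; a request is allowed if a token is available.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number, now = 0) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // `now` is a monotonic clock in seconds. Returns true if the request passes.
  tryAcquire(now: number): boolean {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A middleware would keep a `Map<ip, TokenBucket>` and reject with 429 when `tryAcquire` fails.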

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Browser as Console Browser
    participant React as React Frontend
    participant SSE as SSE Stream
    participant REST as REST API
    participant WS as WebSocket
    participant Gateway as ConsoleGateway
    participant Shard as Shard Worker

    User->>Browser: Navigate to /ui/
    Browser->>React: Load App.tsx
    React->>SSE: connectSSE()
    SSE-->>React: MetricEvent stream (ops/sec, memory, clients)
    React->>React: useMetricsStore.pushMetric()

    User->>Browser: Browse keys
    Browser->>REST: GET /api/v1/keys?pattern=*&count=20
    REST->>Gateway: scanKeys(cursor, pattern, count)
    Gateway->>Shard: SCAN 0 MATCH * COUNT 20
    Shard-->>Gateway: [cursor, [keys]]
    Gateway-->>REST: {cursor, keys}
    REST-->>Browser: JSON response

    User->>Browser: View key details
    Browser->>REST: GET /api/v1/key/mykey
    REST->>Gateway: executeCommand("GET", ["mykey"])
    Gateway->>Shard: GET mykey
    Shard-->>Gateway: value (string/hash/list/set/zset/stream)
    Gateway->>Gateway: frameToJson(Frame)
    Gateway-->>REST: {type, value}
    REST-->>Browser: JSON response

    User->>Browser: Execute RESP query
    Browser->>WS: send({cmd: "SET", args: ["key", "val"]})
    WS->>Gateway: executeCommand("SET", ["key", "val"])
    Gateway->>Shard: SET key val
    Shard-->>Gateway: OK
    Gateway-->>WS: {type: "simple_string", data: "OK"}
    WS-->>Browser: JSON response

    Browser->>REST: GET /api/v1/info
    REST->>Gateway: executeCommand("INFO", [])
    Gateway->>Shard: INFO
    Shard-->>Gateway: info (bulk string)
    Gateway-->>REST: {type, data}
    REST-->>Browser: Server info

    React->>React: useGraphStore
    React->>REST: POST /api/v1/command (GRAPH.QUERY cypher)
    REST->>Gateway: executeCommand("GRAPH.QUERY", [cypher])
    Gateway->>Shard: GRAPH.QUERY cypher
    Shard-->>Gateway: nodes, edges
    Gateway-->>REST: {nodes, edges, ...}
    REST-->>React: GraphData

    React->>React: useVectorStore
    React->>REST: GET /api/v1/indexes
    REST->>Gateway: executeCommand("FT._LIST", [])
    Gateway->>Shard: FT._LIST
    Shard-->>Gateway: [index_names]
    Gateway-->>REST: {indexes}
    React->>React: runUmap() via Worker
    React->>Browser: Render 3D point cloud

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested labels

enhancement, feature

Poem

🐰 A leap through the UI, a web of pure delight,
React and Rust dancing in the midnight light,
WebSockets and metrics flow—a real-time gleam,
Console keys unlocking your data dream! ✨


@qodo-code-review

Review Summary by Qodo

Moon Console — Interactive Data Client with REST/WebSocket Gateway and 3D Visualization

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• **Moon Console** — comprehensive embedded interactive web client with HTTP/WebSocket/SSE gateway,
  KV browser, query console, 3D vector/graph visualization, and real-time dashboard served at /ui/
  from the Moon binary with zero additional deployment
• **Rust backend** (feature-gated --features console): New REST API endpoints (/api/v1/*) for
  command dispatch via SPSC channels, WebSocket bridge at /ws for RESP3-over-JSON, SSE metrics
  stream at /events (1 Hz), static file serving at /ui/*, memory treemap aggregation, and HNSW
  trace visualization
• **Console gateway** — translates REST/WS requests to RESP Frame commands, routes via SPSC to
  appropriate shards based on key hashing, converts RESP responses to JSON with base64 encoding for
  binary data
• **React frontend** (~7.8K LOC, 77 files): Vite 8 + React 19 + Tailwind 4 + shadcn/ui + Zustand +
  Monaco + Three.js/R3F + d3-force-3d + umap-js with 56 Vitest unit tests and 9 Playwright E2E specs
• **Six interactive views**: Dashboard (ops/sec chart, memory, hit ratio), KV Browser (virtualized
  key list with type badges, TTL countdown, inline editors for all types), Console (tabbed RESP/Cypher
  editor with history), Vector Explorer (3D point cloud with KNN search and lasso selection), Graph
  Explorer (force-directed 3D visualization with Cypher queries), Memory (treemap drill-down and
  command statistics)
• **Real bugs fixed**: execCommand result unwrapping (type badges showed [object Object]),
  WebSocket protocol mismatch (Console hung on Cmd+Enter), server dropping id on error responses,
  multi-line paste handling (Cmd+Enter runs current line, Cmd+Shift+Enter runs batch)
• **Integration testing**: Bash harness with deterministic fixture seeding (2K KV keys, 50K vectors,
  10K graph nodes), live verification against running Moon, Playwright E2E specs, and 45 unit tests
  (26 gateway + 19 endpoint tests)
Diagram
flowchart LR
  Client["Client Browser"]
  REST["REST API<br/>/api/v1/*"]
  WS["WebSocket<br/>/ws"]
  SSE["SSE Stream<br/>/events"]
  Static["Static Files<br/>/ui/*"]
  Gateway["Console Gateway<br/>SPSC Dispatcher"]
  Shards["Shard Event Loops<br/>RESP Command Execution"]
  
  Client -->|HTTP| REST
  Client -->|WS| WS
  Client -->|EventSource| SSE
  Client -->|GET| Static
  
  REST --> Gateway
  WS --> Gateway
  SSE -.->|Metrics| Shards
  
  Gateway -->|SPSC Channel| Shards
  Shards -->|RESP Frame| Gateway
  Gateway -->|JSON Response| Client


File Changes

1. src/admin/http_server.rs ✨ Enhancement +673/-22

REST API gateway and WebSocket bridge for console

• Refactored HTTP response types to use BoxBody for unified handling of streaming and static
 responses
• Added comprehensive REST API routing for /api/v1/* endpoints (command execution, key CRUD, TTL
 management, INFO, memory treemap, HNSW trace)
• Implemented WebSocket upgrade handler at /ws with configurable buffer limits and error handling
• Added SSE endpoint at /events for real-time metric streaming
• Integrated static file serving with SPA fallback for the console frontend
• Upgraded hyper connection builder to support WebSocket upgrades via auto::Builder

src/admin/http_server.rs


2. src/admin/console_gateway.rs ✨ Enhancement +435/-0

Console gateway SPSC dispatcher and Frame-to-JSON converter

• New module implementing the console gateway: translates REST/WS requests into RESP Frame commands
• Routes commands to appropriate shards via SPSC channels based on key hashing or keyless command
 detection
• Converts RESP Frame responses to JSON with support for all frame types (strings, arrays, maps,
 binary data as base64)
• Provides global singleton access via OnceLock for initialization during startup
• Comprehensive unit tests for frame-to-JSON conversion and command building

src/admin/console_gateway.rs


3. src/admin/memory_treemap.rs ✨ Enhancement +404/-0

Server-side memory treemap aggregation endpoint

• Implements GET /api/v1/memory/treemap endpoint for server-side keyspace memory aggregation
• Builds namespace tree by iterating SCAN batches and folding TYPE + MEMORY USAGE into hierarchical
 structure
• Yields every 200 keys to prevent starving other admin-http tasks on single-threaded runtime
• Supports prefix filtering and configurable scan limits (default 10K, max 200K keys)
• Serializes tree with children sorted by bytes descending for UI visualization

src/admin/memory_treemap.rs
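
The namespace aggregation described above folds (key, bytes) pairs into a tree keyed on ":"-delimited segments, with every ancestor accumulating descendant sizes. A hedged TypeScript sketch of that folding (the real endpoint is Rust and its schema may differ):

```typescript
// Illustrative treemap node: name, aggregated bytes, and child namespaces.
interface TreeNode {
  name: string;
  bytes: number;
  children: Map<string, TreeNode>;
}

function newNode(name: string): TreeNode {
  return { name, bytes: 0, children: new Map() };
}

// Fold one scanned key (with its MEMORY USAGE result) into the tree.
function insertKey(root: TreeNode, key: string, bytes: number): void {
  let node = root;
  node.bytes += bytes;
  for (const seg of key.split(":")) {
    let child = node.children.get(seg);
    if (!child) {
      child = newNode(seg);
      node.children.set(seg, child);
    }
    child.bytes += bytes; // every ancestor accumulates descendant sizes
    node = child;
  }
}
```

Serializing with children sorted by `bytes` descending gives the UI its drill-down order.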


4. src/vector/hnsw/search.rs ✨ Enhancement +218/-0

HNSW search trace collection for console visualization

• Added hnsw_search_with_trace function for debug-only per-layer HNSW search path visualization
• Returns HnswSearchTrace containing layer-level visited/selected counters and search results
• Walks graph top-down from entry point, collecting neighbor counts at each layer for animation
• Includes comprehensive tests for empty graphs, non-empty graphs, and deterministic trace output

src/vector/hnsw/search.rs


5. src/admin/hnsw_trace.rs ✨ Enhancement +216/-0

HNSW trace HTTP endpoint with request validation

• Implements GET /api/v1/hnsw/trace HTTP handler for debug HNSW search visualization
• Validates request parameters: base64-encoded query vector, k, ef_search with clamping
• Verifies index existence via FT.INFO before responding
• Returns stub response with trace_not_implemented note (full wiring deferred to future phase)
• Comprehensive error handling for malformed requests and missing indices

src/admin/hnsw_trace.rs


6. src/admin/ws_bridge.rs ✨ Enhancement +233/-0

WebSocket RESP3 bridge with session state management

• WebSocket-to-RESP3 bridge for interactive command sessions on /ws
• Handles JSON message protocol with optional id field for request correlation
• Implements local SELECT command handling to maintain per-connection database state
• Soft backpressure via MAX_SEND_QUEUE (256 messages) with hard limit via tungstenite buffer
• Converts RESP frames to JSON responses with type information

src/admin/ws_bridge.rs


7. src/admin/http_server_support.rs ✨ Enhancement +191/-0

Shared HTTP utilities for console REST endpoints

• Extracted shared HTTP helpers for console REST handlers: json_response, add_cors_headers,
 percent_decode, parse_query_params
• Implements percent-decoding for URL-encoded key names with fallback for invalid escapes
• Query parameter parsing with automatic percent-decoding of values
• CORS header attachment for cross-origin console requests
• Comprehensive unit tests for all utility functions

src/admin/http_server_support.rs


8. tests/console_gateway_test.rs 🧪 Tests +141/-0

Integration tests for console REST API

• Integration test suite for console REST API endpoints
• Tests SET/GET commands, key CRUD operations, TTL management, SCAN, INFO, and OPTIONS preflight
• Spawns release-mode moon binary with console feature and validates HTTP responses
• Includes server startup detection and curl-based endpoint verification

tests/console_gateway_test.rs


9. src/admin/sse_stream.rs ✨ Enhancement +92/-0

SSE metric streaming with broadcast fan-out

• Server-Sent Events (SSE) streaming for real-time metric delivery at /events
• Uses tokio::sync::broadcast for fan-out to multiple clients with automatic lag handling
• Defines MetricEvent struct with ops/sec, memory, connected clients, uptime, and key count
• Slow consumers receive Lagged errors and skip to current data without backpressure

src/admin/sse_stream.rs


10. src/admin/static_files.rs ✨ Enhancement +66/-0

Embedded static file serving with SPA fallback

• Serves embedded console frontend assets via rust-embed at /ui/*
• Implements SPA fallback: unknown paths serve index.html for client-side routing
• Applies appropriate cache headers: no-cache for index.html, immutable for hashed assets
• Detects MIME types automatically from file extensions

src/admin/static_files.rs


11. src/admin/metrics_setup.rs ✨ Enhancement +45/-0

Background metrics publisher for SSE stream

• Added spawn_metrics_publisher task that publishes MetricEvent to SSE broadcast at ~1 Hz
• Reads from existing atomic counters (total commands, connected clients, RSS memory)
• Runs on admin thread's single-threaded tokio runtime
• Fire-and-forget publishing: dropped if no SSE clients connected

src/admin/metrics_setup.rs


12. src/main.rs ✨ Enhancement +27/-1

Console gateway initialization in main startup

• Created admin SPSC channels for console gateway (one per shard) during initialization
• Instantiated global ConsoleGateway with producers and shard notifiers
• Appended admin consumer to each shard's consumer list for command dispatch
• Gated behind console feature flag

src/main.rs


13. src/shard/mesh.rs ✨ Enhancement +19/-0

Admin SPSC channel factory for gateway-to-shard communication

• Added create_admin_channels function to create N SPSC channels for admin/console gateway
• Returns (producers, consumers) tuple for wiring between admin thread and shards
• Producers given to admin thread, consumers given to shard event loops

src/shard/mesh.rs


14. src/admin/mod.rs ⚙️ Configuration changes +17/-0

Console module registration and feature-gating

• Registered new console-gated modules: console_gateway, sse_stream, static_files, ws_bridge
• Registered internal modules: hnsw_trace, http_server_support, memory_treemap
• All console modules feature-gated behind console flag

src/admin/mod.rs


15. scripts/test-integration.sh 🧪 Tests +81/-0

Integration test harness for console E2E validation

• Bash harness for INT-03 live integration testing
• Builds moon with --features console, seeds KV/vector/graph fixtures
• Starts moon server, waits for readiness, runs Playwright E2E tests
• Includes deterministic teardown with signal handling

scripts/test-integration.sh


16. console/src/components/dashboard/OpsChart.tsx ✨ Enhancement +73/-0

Dashboard operations per second chart component

• React component for real-time operations/sec chart using Recharts
• Displays current ops/sec in large font alongside area chart of historical data
• Formats large numbers (M/K suffixes) and timestamps (MM:SS)
• Integrates with metrics store for live data updates

console/src/components/dashboard/OpsChart.tsx


17. scripts/gen-verification.sh 📝 Documentation +163/-0

Retroactive verification document generator for phase audits

• Bash script that generates retroactive VERIFICATION.md files for phases 128-134 by parsing PLAN
 and SUMMARY frontmatter
• Extracts requirements, must-have truths, and created files from phase documentation
• Produces template-based verification documents with goal-backward evidence and integration
 commands
• Idempotent and re-runnable after documentation edits

scripts/gen-verification.sh


18. scripts/seed-console-fixtures.py 🧪 Tests +131/-0

Deterministic fixture seeder for console integration testing

• Python script for deterministic fixture seeding (KV, vectors, graph) using redis-py and HTTP
 REST API
• Populates 2000 KV keys across namespaces with TTL, 50K vector embeddings via HNSW index, and 10K
 graph nodes with edges
• Uses seeded numpy RNG (0xC0DE) for reproducibility; idempotent via FT.CREATE and GRAPH.QUERY
• Supports configurable counts and ports via CLI arguments

scripts/seed-console-fixtures.py


19. console/src/lib/completions.ts ✨ Enhancement +325/-0

RESP command metadata and Monaco autocomplete provider

• Exports COMMANDS array with 206 Moon-supported RESP commands grouped by category (string, hash,
 list, set, zset, key, server, pubsub, stream, script, transaction, graph, search, hyperloglog, geo,
 bitmap, acl, cluster)
• Each command includes name, summary, parameter hints, and group classification
• Provides createCompletionProvider() factory for Monaco editor integration with autocomplete
 filtering

console/src/lib/completions.ts


20. console/src/views/Help.tsx 📝 Documentation +424/-0

Interactive help guide for Moon Console users

• Comprehensive built-in help view covering quick start, per-view workflows, command cheatsheet,
 keyboard shortcuts, query examples, and FAQs
• Includes 6 view cards (Dashboard, Browser, Console, Vectors, Graph, Memory) with descriptions and
 tips
• Provides command groups (Strings & Keys, Collections, Vectors & Graph) and shortcut reference
• Addresses common gotchas (sharding, MEMORY USAGE, network binding, production safety)

console/src/views/Help.tsx


21. console/src/components/console/ResultPanel.tsx ✨ Enhancement +298/-0

Multi-view result display panel for console queries

• React component for displaying query results in three modes: table (for array of objects), JSON
 (expandable tree), and raw (text)
• Auto-detects optimal view mode; allows manual override via buttons
• Includes table with row numbers and column headers, JSON tree with depth limit and syntax
 coloring, and execution timing
• Handles loading state, errors, and empty results

console/src/components/console/ResultPanel.tsx


22. console/src/components/graph/GraphScene.tsx ✨ Enhancement +218/-0

3D graph visualization with instanced node and edge rendering

• Three.js/R3F scene for 3D force-directed graph visualization with instanced rendering
• GraphNodes component renders nodes as colored spheres with label-based palette, selection/hover
 highlighting, and click interaction
• GraphEdges component renders line segments between visible nodes with edge type filtering and
 connection highlighting
• Includes OrbitControls for camera interaction and deselection on empty space click

console/src/components/graph/GraphScene.tsx


23. console/src/lib/api.ts ✨ Enhancement +217/-0

REST API client for Moon admin gateway endpoints

• REST API client functions for server info, command execution, key scanning, TTL management, memory
 usage, slowlog, and command stats
• execCommand() unwraps RESP responses and throws on errors; scanKeys() supports
 pattern/count/type filtering
• fetchMemoryTreemap() builds hierarchical namespace tree from key metadata via SCAN + MEMORY
 USAGE
• fetchSlowlog() and fetchCommandStats() parse INFO output into structured arrays

console/src/lib/api.ts


24. console/src/lib/ws.ts ✨ Enhancement +256/-0

WebSocket RESP bridge with command parsing and auto-reconnect

• WebSocket bridge for RESP command execution with auto-reconnect (3s delay), request/response
 correlation via ID, and fallback to REST API
• parseCommandLine() tokenizes input respecting double-quoted strings for multi-word values
• sendCommand() handles multi-line batch execution (one command per line, skipping comments);
 sendSingleCommand() for Cypher/multi-line queries
• onMessage() listener registration for non-request messages; connection state queries

console/src/lib/ws.ts
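
The quoted-string tokenization described for parseCommandLine() can be sketched as below; escape handling is simplified relative to whatever ws.ts actually implements:

```typescript
// Tokenize a command line on whitespace, keeping double-quoted strings
// intact so multi-word values survive: SET greeting "hello world".
function parseCommandLine(line: string): string[] {
  const tokens: string[] = [];
  let i = 0;
  while (i < line.length) {
    while (i < line.length && line[i] === " ") i++; // skip separators
    if (i >= line.length) break;
    if (line[i] === '"') {
      // Quoted token: take everything up to the closing quote
      // (or end of line if the quote is unterminated).
      const end = line.indexOf('"', i + 1);
      const stop = end === -1 ? line.length : end;
      tokens.push(line.slice(i + 1, stop));
      i = stop + 1;
    } else {
      // Bare token: run to the next space.
      let j = i;
      while (j < line.length && line[j] !== " ") j++;
      tokens.push(line.slice(i, j));
      i = j;
    }
  }
  return tokens;
}
```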


25. console/src/components/memory/SlowlogPanel.tsx ✨ Enhancement +208/-0

Slowlog viewer with latency distribution histogram

• React component displaying slowlog entries in sortable table with latency histogram distribution
• Buckets entries by duration ranges (<1ms, 1-10ms, 10-100ms, 100ms-1s, >1s) with color-coded bar
 chart
• Sortable columns (ID, timestamp, duration, command) with ascending/descending toggle; duration
 badges with severity variants
• Shows command name and first 3 args with overflow indicator

console/src/components/memory/SlowlogPanel.tsx
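
The histogram bucketing described above maps slowlog entry durations (in microseconds) into the listed ranges. A minimal sketch, with bucket labels assumed rather than taken from the component:

```typescript
// Duration buckets mirroring the ranges listed above (<1ms ... >1s).
const BUCKETS = [
  { label: "<1ms", maxUs: 1_000 },
  { label: "1-10ms", maxUs: 10_000 },
  { label: "10-100ms", maxUs: 100_000 },
  { label: "100ms-1s", maxUs: 1_000_000 },
  { label: ">1s", maxUs: Infinity },
];

// Return the label of the first bucket whose upper bound exceeds the duration.
function bucketLabel(durationUs: number): string {
  return BUCKETS.find((b) => durationUs < b.maxUs)!.label;
}
```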


26. console/src/stores/browser.ts ✨ Enhancement +203/-0

Zustand store for key browser pagination and filtering

• Zustand store for key browser state: key list, cursor pagination, selection (single/multi),
 filters (pattern/type/TTL)
• loadKeys() fetches paginated results via SCAN or SCAN+TYPE; enrichKey() fetches
 type/TTL/memory metadata
• buildTree() constructs namespace hierarchy from colon-delimited key names for prefix-based
 navigation
• deleteSelected() removes selected keys and updates UI; setActivePrefix() filters by namespace
 prefix

console/src/stores/browser.ts


27. console/src/views/VectorExplorer.tsx ✨ Enhancement +194/-0

Vector Explorer view with 3D visualization and metadata

• New vector index explorer view with 3D point cloud visualization and interactive controls
• Displays index metadata (dimensions, metric, documents, segments) in right sidebar
• Includes toolbar for index selection, HNSW edge toggle, lasso selection, and color-by controls
• Integrates PointCloudScene, UmapProgress, PointInspector, KnnSearchPanel, LassoSelect, and
 ClusterStats components

console/src/views/VectorExplorer.tsx


28. console/src/components/vector/PointCloudScene.tsx ✨ Enhancement +191/-0

3D point cloud scene with interactive selection

• Three.js-based 3D point cloud renderer using React Three Fiber and custom shaders
• Implements raycasting for point selection with dynamic coloring (segment/label/highlight modes)
• Supports KNN result highlighting in cyan and lasso selection in gold
• Includes OrbitControls for camera manipulation and adaptive point sizing

console/src/components/vector/PointCloudScene.tsx


29. console/src/components/browser/KeyList.tsx ✨ Enhancement +176/-0

Virtualized key list with type/TTL/memory display

• Virtual scrolling list of Redis keys with lazy loading and infinite scroll
• Displays key type badges, TTL countdown, and memory size for each entry
• Multi-select checkbox support with enrichment of visible keys (type/TTL/memory)
• Responsive row height (36px) with header and footer metadata

console/src/components/browser/KeyList.tsx


30. console/src/components/memory/MemoryTreemap.tsx ✨ Enhancement +194/-0

Memory distribution treemap with drill-down navigation

• Interactive treemap visualization of Redis keyspace memory distribution by namespace and type
• Drill-down navigation with breadcrumb trail and back button
• Custom rendering with type-based color coding (string/hash/list/set/zset/stream)
• Responsive sizing with byte formatting and loading state

console/src/components/memory/MemoryTreemap.tsx
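The byte formatting mentioned above typically follows a divide-by-1024 loop. A minimal illustrative version (`formatBytes` is a hypothetical name, not the component's actual helper):

```typescript
// Hypothetical byte formatter for treemap cell labels.
function formatBytes(n: number): string {
  const units = ["B", "KiB", "MiB", "GiB"];
  let v = n;
  let i = 0;
  while (v >= 1024 && i < units.length - 1) {
    v /= 1024;
    i++;
  }
  // Whole bytes need no decimals; scaled values keep one.
  return `${v.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}
```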


31. console/src/components/memory/CommandStatsTable.tsx ✨ Enhancement +169/-0

Command statistics table with sorting and highlighting

• Sortable table of command statistics (calls, latency, CPU time, rejected/failed counts)
• Dynamic column visibility based on data presence (rejected/failed columns)
• Top-3 commands by CPU time highlighted with amber background
• Microsecond formatting with color-coded severity badges

console/src/components/memory/CommandStatsTable.tsx


32. console/src/stores/vector.ts ✨ Enhancement +190/-0

Vector store with UMAP and KNN search

• Zustand store managing vector index state (indexes, selected index, points, projections)
• UMAP dimensionality reduction via Web Worker with progress tracking
• KNN search functionality with point-to-point and manual vector queries
• Lasso selection and HNSW edge visualization state management

console/src/stores/vector.ts


33. console/src/components/browser/editors/ZSetEditor.tsx ✨ Enhancement +150/-0

Sorted set editor with inline score editing

• Inline editor for sorted set members with score editing and deletion
• Rank-ordered display with click-to-edit score fields
• Add new member form with automatic sorting by score
• Executes ZADD/ZREM commands via API

console/src/components/browser/editors/ZSetEditor.tsx
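The automatic re-sorting after an add or score edit can be done as replace-then-sort over the local member list. A hypothetical sketch of that behavior (names and shapes are illustrative):

```typescript
// Illustrative: keep sorted-set members rank-ordered after ZADD-style updates.
interface ZMember {
  member: string;
  score: number;
}

function upsertSorted(members: ZMember[], entry: ZMember): ZMember[] {
  // ZADD semantics: an existing member gets its score replaced, not duplicated.
  const next = members.filter((m) => m.member !== entry.member).concat(entry);
  // Rank order: ascending score, then lexicographic member (Redis tie-break).
  next.sort((a, b) => a.score - b.score || a.member.localeCompare(b.member));
  return next;
}
```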


34. console/src/stores/graph.ts ✨ Enhancement +194/-0

Graph store with force layout and Cypher queries

• Zustand store for graph visualization state (nodes, edges, layout progress)
• Force-directed layout computation via Web Worker
• Cypher query execution and result parsing
• Label and relationship type filtering with visibility toggles

console/src/stores/graph.ts


35. console/src/stores/console.ts ✨ Enhancement +188/-0

Console store with tabs, history, and persistence

• Zustand store with persistence for console tabs and query history
• Per-tab language detection (RESP/Cypher/FT.Search) and execution state
• History navigation (up/down) and search functionality
• Tab management (add/close/switch) with at-least-one-tab guarantee

console/src/stores/console.ts


36. console/src/components/console/Editor.tsx ✨ Enhancement +183/-0

Monaco editor with RESP/Cypher languages and shortcuts

• Monaco editor with custom RESP and Cypher language definitions
• Cmd/Ctrl+Enter executes current line (RESP) or entire query (Cypher)
• Cmd/Ctrl+Shift+Enter executes all lines in batch mode
• Lazy-loaded MonacoEditor with custom theme and completion providers

console/src/components/console/Editor.tsx
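The line-vs-batch shortcut split can be illustrated with a small scope-selection helper (hypothetical, not the component's actual code): Cmd+Enter takes only the line under the cursor, Cmd+Shift+Enter takes every non-empty line.

```typescript
// Illustrative: choose which RESP commands a keystroke should execute.
function commandsToRun(text: string, cursorLine: number, batch: boolean): string[] {
  const lines = text.split("\n");
  // Batch mode runs everything; otherwise only the cursor's line.
  const candidates = batch ? lines : [lines[cursorLine] ?? ""];
  return candidates.map((l) => l.trim()).filter((l) => l.length > 0);
}
```

This is also the behavior behind the multi-line paste fix described in the PR summary: a pasted block no longer gets sent as one malformed command.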


37. console/src/components/browser/editors/HashEditor.tsx ✨ Enhancement +139/-0

Hash editor with inline field editing

• Inline editor for hash fields with click-to-edit values
• Add new field form with Enter key submission
• Executes HSET/HDEL commands via API with loading indicators
• Grid layout with field, value, and delete action columns

console/src/components/browser/editors/HashEditor.tsx


38. console/src/components/browser/KeyToolbar.tsx ✨ Enhancement +133/-0

Key browser toolbar with filtering and bulk actions

• Search bar with pattern filtering and refresh button
• Type and TTL status filter dropdowns
• Bulk selection with select-all and clear actions
• Delete confirmation dialog for selected keys

console/src/components/browser/KeyToolbar.tsx


39. console/src/components/vector/IndexMetadataPanel.tsx ✨ Enhancement +148/-0

Vector index metadata sidebar panel

• Right sidebar panel displaying vector index metadata and segments
• Index selector dropdown with loading state
• Color-by mode toggle (segment/label/none)
• Segment table with type indicators and vector counts

console/src/components/vector/IndexMetadataPanel.tsx


40. console/src/components/vector/KnnSearchPanel.tsx ✨ Enhancement +132/-0

KNN search panel with manual and point-based queries

• KNN search interface with K parameter input
• Search from selected point or manual vector input
• Results list with distance scores and point highlighting
• Clear results button and query point display

console/src/components/vector/KnnSearchPanel.tsx


41. console/src/components/console/HistoryPanel.tsx ✨ Enhancement +133/-0

Query history panel with search and restore

• Searchable query history sidebar with language badges and timestamps
• Relative time formatting (e.g., "5m ago")
• Virtual scrolling with 50-item visible window and 10-item buffer
• Click to restore query to editor

console/src/components/console/HistoryPanel.tsx
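The "5m ago" labels can be produced by a small threshold-based formatter along these lines (illustrative; the real component's helper may differ):

```typescript
// Hypothetical relative-time formatter for history timestamps.
function formatRelative(tsMs: number, nowMs: number): string {
  const s = Math.max(0, Math.floor((nowMs - tsMs) / 1000));
  if (s < 60) return `${s}s ago`;
  const m = Math.floor(s / 60);
  if (m < 60) return `${m}m ago`;
  const h = Math.floor(m / 60);
  if (h < 24) return `${h}h ago`;
  return `${Math.floor(h / 24)}d ago`;
}
```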


42. console/src/views/Console.tsx ✨ Enhancement +149/-0

Console view with tabs, results, and history

• Main console view with tabbed editor, result panel, and optional history sidebar
• Resizable result panel via drag handle (10-80% height range)
• Global keyboard shortcuts (Cmd+T new tab, Cmd+W close, Cmd+H toggle history)
• WebSocket lifecycle management for command execution

console/src/views/Console.tsx


43. console/src/lib/graph-api.ts ✨ Enhancement +140/-0

Graph API with Cypher query parsing

• API layer for GRAPH.INFO and GRAPH.QUERY commands
• Parses graph results into nodes and edges with support for array and object encodings
• Handles label counts and relationship type counts extraction
• Builds node/edge maps to deduplicate results

console/src/lib/graph-api.ts
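The dedup-by-map approach can be sketched as follows: repeated rows from multiple result paths collapse because nodes and edges are keyed by their graph-entity ids. The `GNode`/`GEdge` shapes below are simplified placeholders, not the real result types.

```typescript
// Illustrative: deduplicate graph query results with id-keyed maps.
interface GNode {
  id: number;
  labels: string[];
}
interface GEdge {
  id: number;
  src: number;
  dst: number;
  type: string;
}

function dedupe(nodes: GNode[], edges: GEdge[]): { nodes: GNode[]; edges: GEdge[] } {
  const nmap = new Map<number, GNode>();
  const emap = new Map<number, GEdge>();
  for (const n of nodes) nmap.set(n.id, n); // last occurrence wins
  for (const e of edges) emap.set(e.id, e);
  return { nodes: [...nmap.values()], edges: [...emap.values()] };
}
```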


44. console/src/components/browser/editors/StreamViewer.tsx ✨ Enhancement +130/-0

Stream viewer with consumer groups and entries

• Stream entry viewer with expandable timeline display
• Consumer group summary with consumer count and pending count
• Loads entries via XRANGE and groups via XINFO
• Field-value pairs displayed in expanded entries

console/src/components/browser/editors/StreamViewer.tsx


45. console/src/components/vector/PointInspector.tsx ✨ Enhancement +73/-0

Vector point inspector overlay card

• Floating inspector card for hovered vector point details
• Displays dimensions, segment type, KNN distance (if applicable)
• Payload fields in scrollable section
• Quick-action button to search KNN from selected point

console/src/components/vector/PointInspector.tsx


46. .github/workflows/console-integration.yml Additional files +35/-0

...

.github/workflows/console-integration.yml


47. .planning Additional files +1/-1

...

.planning


48. Cargo.toml Additional files +7/-1

...

Cargo.toml


49. console/README.md Additional files +92/-0

...

console/README.md


50. console/dist/index.html Additional files +13/-0

...

console/dist/index.html


51. console/index.html Additional files +12/-0

...

console/index.html


52. console/package.json Additional files +64/-0

...

console/package.json


53. console/playwright.config.ts Additional files +29/-0

...

console/playwright.config.ts


54. console/pnpm-lock.yaml Additional files +5071/-0

...

console/pnpm-lock.yaml


55. console/src/App.tsx Additional files +63/-0

...

console/src/App.tsx


56. console/src/app.css Additional files +19/-0

...

console/src/app.css


57. console/src/components/browser/KeyMetadata.tsx Additional files +41/-0

...

console/src/components/browser/KeyMetadata.tsx


58. console/src/components/browser/NamespaceTree.tsx Additional files +112/-0

...

console/src/components/browser/NamespaceTree.tsx


59. console/src/components/browser/TtlManager.tsx Additional files +113/-0

...

console/src/components/browser/TtlManager.tsx


60. console/src/components/browser/ValuePanel.tsx Additional files +72/-0

...

console/src/components/browser/ValuePanel.tsx


61. console/src/components/browser/editors/ListEditor.tsx Additional files +90/-0

...

console/src/components/browser/editors/ListEditor.tsx


62. console/src/components/browser/editors/SetEditor.tsx Additional files +69/-0

...

console/src/components/browser/editors/SetEditor.tsx


63. console/src/components/browser/editors/StringEditor.tsx Additional files +100/-0

...

console/src/components/browser/editors/StringEditor.tsx


64. console/src/components/console/TabBar.tsx Additional files +63/-0

...

console/src/components/console/TabBar.tsx


65. console/src/components/dashboard/ClientsCard.tsx Additional files +37/-0

...

console/src/components/dashboard/ClientsCard.tsx


66. console/src/components/dashboard/HitRatioCard.tsx Additional files +62/-0

...

console/src/components/dashboard/HitRatioCard.tsx


67. console/src/components/dashboard/InfoCards.tsx Additional files +60/-0

...

console/src/components/dashboard/InfoCards.tsx


68. console/src/components/dashboard/KeyspaceCard.tsx Additional files +74/-0

...

console/src/components/dashboard/KeyspaceCard.tsx


69. console/src/components/dashboard/MemoryChart.tsx Additional files +74/-0

...

console/src/components/dashboard/MemoryChart.tsx


70. console/src/components/dashboard/SlowlogTable.tsx Additional files +122/-0

...

console/src/components/dashboard/SlowlogTable.tsx


71. console/src/components/graph/CypherInput.tsx Additional files +53/-0

...

console/src/components/graph/CypherInput.tsx


72. console/src/components/graph/GraphInfoPanel.tsx Additional files +80/-0

...

console/src/components/graph/GraphInfoPanel.tsx


73. console/src/components/graph/LabelFilter.tsx Additional files +125/-0

...

console/src/components/graph/LabelFilter.tsx


74. console/src/components/graph/NodeInspector.tsx Additional files +122/-0

...

console/src/components/graph/NodeInspector.tsx


75. console/src/components/layout/AppShell.tsx Additional files +13/-0

...

console/src/components/layout/AppShell.tsx


76. console/src/components/layout/Sidebar.tsx Additional files +74/-0

...

console/src/components/layout/Sidebar.tsx


77. console/src/components/ui/badge.tsx Additional files +30/-0

...

console/src/components/ui/badge.tsx


78. console/src/components/ui/card.tsx Additional files +64/-0

...

console/src/components/ui/card.tsx


79. console/src/components/vector/ClusterStats.tsx Additional files +90/-0

...

console/src/components/vector/ClusterStats.tsx


80. console/src/components/vector/ColorByControls.tsx Additional files +27/-0

...

console/src/components/vector/ColorByControls.tsx


81. console/src/components/vector/HnswOverlay.tsx Additional files +74/-0

...

console/src/components/vector/HnswOverlay.tsx


82. console/src/components/vector/LassoSelect.tsx Additional files +125/-0

...

console/src/components/vector/LassoSelect.tsx


83. console/src/components/vector/UmapProgress.tsx Additional files +23/-0

...

console/src/components/vector/UmapProgress.tsx


84. console/src/lib/monarch-cypher.ts Additional files +47/-0

...

console/src/lib/monarch-cypher.ts


85. console/src/lib/monarch-resp.ts Additional files +82/-0

...

console/src/lib/monarch-resp.ts


86. console/src/lib/sse.ts Additional files +57/-0

...

console/src/lib/sse.ts


87. console/src/lib/utils.ts Additional files +6/-0

...

console/src/lib/utils.ts


88. console/src/lib/vector-api.ts Additional files +141/-0

...

console/src/lib/vector-api.ts


89. console/src/main.tsx Additional files +10/-0

...

console/src/main.tsx


90. console/src/stores/memory.ts Additional files +90/-0

...

console/src/stores/memory.ts


91. console/src/stores/metrics.ts Additional files +57/-0

...

console/src/stores/metrics.ts


92. console/src/types/browser.ts Additional files +54/-0

...

console/src/types/browser.ts


93. console/src/types/console.ts Additional files +38/-0

...

console/src/types/console.ts


94. console/src/types/d3-force-3d.d.ts Additional files +74/-0

...

console/src/types/d3-force-3d.d.ts


95. console/src/types/graph.ts Additional files +40/-0

...

console/src/types/graph.ts


96. console/src/types/memory.ts Additional files +24/-0

...

console/src/types/memory.ts


97. console/src/types/metrics.ts Additional files +40/-0

...

console/src/types/metrics.ts


98. console/src/types/vector.ts Additional files +40/-0

...

console/src/types/vector.ts


99. console/src/views/Browser.tsx Additional files +36/-0

...

console/src/views/Browser.tsx


100. console/src/views/Dashboard.tsx Additional files +54/-0

...

console/src/views/Dashboard.tsx


101. console/src/views/GraphExplorer.tsx Additional files +70/-0

...

console/src/views/GraphExplorer.tsx


102. console/src/views/MemoryView.tsx Additional files +36/-0

...

console/src/views/MemoryView.tsx


103. console/src/vite-env.d.ts Additional files +1/-0

...

console/src/vite-env.d.ts


104. console/src/workers/force-layout.worker.ts Additional files +62/-0

...

console/src/workers/force-layout.worker.ts


105. console/src/workers/umap.worker.ts Additional files +46/-0

...

console/src/workers/umap.worker.ts


106. console/tests/e2e/benchmarks.spec.ts Additional files +41/-0

...

console/tests/e2e/benchmarks.spec.ts


107. console/tests/e2e/browser.spec.ts Additional files +10/-0

...

console/tests/e2e/browser.spec.ts


108. console/tests/e2e/console.spec.ts Additional files +12/-0

...

console/tests/e2e/console.spec.ts


109. console/tests/e2e/dashboard.spec.ts Additional files +17/-0

...

console/tests/e2e/dashboard.spec.ts


110. console/tests/e2e/fixtures.ts Additional files +22/-0

...

console/tests/e2e/fixtures.ts


111. console/tests/e2e/graph.spec.ts Additional files +12/-0

...

console/tests/e2e/graph.spec.ts


112. console/tests/e2e/integration.spec.ts Additional files +44/-0

...

console/tests/e2e/integration.spec.ts


113. console/tests/e2e/memory.spec.ts Additional files +15/-0

...

console/tests/e2e/memory.spec.ts


114. console/tests/e2e/vectors.spec.ts Additional files +14/-0

...

console/tests/e2e/vectors.spec.ts


115. console/tests/unit/components/NamespaceTree.test.tsx Additional files +57/-0

...

console/tests/unit/components/NamespaceTree.test.tsx


116. console/tests/unit/lib/completions.test.ts Additional files +128/-0

...

console/tests/unit/lib/completions.test.ts


117. console/tests/unit/lib/monarch-cypher.test.ts Additional files +42/-0

...

console/tests/unit/lib/monarch-cypher.test.ts


118. console/tests/unit/lib/monarch-resp.test.ts Additional files +43/-0

...

console/tests/unit/lib/monarch-resp.test.ts


119. console/tests/unit/setup.ts Additional files +7/-0

...

console/tests/unit/setup.ts


120. console/tests/unit/stores/browser.test.ts Additional files +87/-0

...

console/tests/unit/stores/browser.test.ts


121. console/tests/unit/stores/console.test.ts Additional files +104/-0

...

console/tests/unit/stores/console.test.ts


122. console/tests/unit/stores/graph.test.ts Additional files +102/-0

...

console/tests/unit/stores/graph.test.ts


123. console/tests/unit/stores/memory.test.ts Additional files +85/-0

...

console/tests/unit/stores/memory.test.ts


124. console/tests/unit/stores/metrics.test.ts Additional files +93/-0

...

console/tests/unit/stores/metrics.test.ts


125. console/tests/unit/stores/vector.test.ts Additional files +70/-0

...

console/tests/unit/stores/vector.test.ts


126. console/tsconfig.app.json Additional files +25/-0

...

console/tsconfig.app.json


127. console/tsconfig.json Additional files +7/-0

...

console/tsconfig.json


128. console/tsconfig.node.json Additional files +19/-0

...

console/tsconfig.node.json


129. console/tsconfig.test.json Additional files +8/-0

...

console/tsconfig.test.json


130. console/vite.config.ts Additional files +24/-0

...


@qodo-code-review

qodo-code-review Bot commented Apr 12, 2026

Code Review by Qodo

🐞 Bugs (4): Correctness (2), Reliability (1), Security (1)
📘 Rule violations (4): Performance (1), Testability (3)
📎 Requirement gaps (0)   🎨 UX Issues (0)



Action required

1. ConsoleGateway locks SPSC producer 📘
Description
The console gateway uses parking_lot::Mutex to lock a per-shard SPSC producer on every command
dispatch instead of using flume::Sender/Receiver for lock-free stage handoff. This violates the
hot-path pipeline communication requirement and can add avoidable synchronization overhead per
message.
Code

src/admin/console_gateway.rs[R143-148]

+        // Push to the SPSC ring buffer. If full, the shard is overloaded.
+        {
+            let mut prod = self.shard_producers[shard_id].lock();
+            prod.try_push(msg)
+                .map_err(|_| "shard SPSC buffer full (overloaded)".to_string())?;
+        }
Evidence
PR Compliance ID 209930 requires lock-free flume channels for per-item producer/consumer messaging;
the new console pipeline uses a mutex-guarded HeapProd and locks it for each dispatched
ShardMessage push, and channel creation uses ringbuf rather than flume.

Rule 209930: Use lock-free flume channels for hot-path pipeline communication
src/admin/console_gateway.rs[33-38]
src/admin/console_gateway.rs[143-148]
src/shard/mesh.rs[176-193]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The console gateway dispatch path locks `Mutex<HeapProd<ShardMessage>>` on every command push and uses ringbuf SPSC plumbing instead of `flume::Sender`/`Receiver`, violating the rule requiring flume channels for hot-path pipeline communication.

## Issue Context
This affects the admin/console request pipeline (REST/WS → shard dispatch). The compliance requirement specifically calls for flume-based producer/consumer messaging instead of per-item locking.

## Fix Focus Areas
- src/admin/console_gateway.rs[33-38]
- src/admin/console_gateway.rs[143-148]
- src/shard/mesh.rs[176-193]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Admin tests outside mod.rs 📘
Description
New unit tests were added in non-mod.rs files under the split src/admin/ module tree. This
violates the requirement that split-module unit tests live in the module’s mod.rs only, to keep
tests centralized and coordinated.
Code

src/admin/console_gateway.rs[R252-256]

+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
Evidence
PR Compliance ID 302093 requires unit tests for split modules to live only in the directory’s
mod.rs, but the PR adds #[cfg(test)] mod tests blocks in multiple src/admin/*.rs leaf modules.

Rule 302093: Keep test code for split Rust modules in mod.rs
src/admin/mod.rs[6-25]
src/admin/console_gateway.rs[252-256]
src/admin/memory_treemap.rs[295-300]
src/admin/hnsw_trace.rs[175-179]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Unit tests were added to leaf module files under `src/admin/`, but the compliance rule requires tests for split modules to be centralized in `src/admin/mod.rs`.

## Issue Context
`src/admin/` is a split module directory with a `mod.rs` that declares submodules. Tests should be declared only in `mod.rs` (e.g., `#[cfg(test)] mod tests { ... }`) and import the items they test.

## Fix Focus Areas
- src/admin/mod.rs[6-25]
- src/admin/console_gateway.rs[252-256]
- src/admin/memory_treemap.rs[295-300]
- src/admin/hnsw_trace.rs[175-179]



3. HNSW trace tests outside mod.rs 📘
Description
New console-feature tests were added in src/vector/hnsw/search.rs, which is a leaf file in a split
module directory that already has src/vector/hnsw/mod.rs. This violates the rule that unit tests
for split modules must live in mod.rs rather than sibling leaf files.
Code

src/vector/hnsw/search.rs[R1409-1416]

+    // ── Trace tests (console feature) ────────────────────────────────
+
+    #[cfg(feature = "console")]
+    #[test]
+    fn trace_empty_graph_returns_empty() {
+        distance::init();
+        let collection = CollectionMetadata::new(
+            1,
Evidence
PR Compliance ID 302093 prohibits unit tests in non-mod.rs files for split modules; the PR adds a
new trace test section in src/vector/hnsw/search.rs while the module is split under
src/vector/hnsw/mod.rs.

Rule 302093: Keep test code for split Rust modules in mod.rs
src/vector/hnsw/mod.rs[1-10]
src/vector/hnsw/search.rs[1409-1473]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Unit tests were added to `src/vector/hnsw/search.rs`, but the compliance rule requires split-module unit tests to live in `src/vector/hnsw/mod.rs`.

## Issue Context
`src/vector/hnsw/` is a split module directory (it has `mod.rs` plus multiple sibling leaf files). Tests should be centralized in `mod.rs` and import functions/types from `search`.

## Fix Focus Areas
- src/vector/hnsw/mod.rs[1-10]
- src/vector/hnsw/search.rs[1409-1473]



4. Unauthenticated admin command exec 🐞
Description
/api/v1/command and /ws execute arbitrary commands via ConsoleGateway without any
authentication/ACL enforcement, while API responses set permissive CORS
(Access-Control-Allow-Origin: *) enabling cross-origin browser access. If the admin port is
reachable (e.g., --bind 0.0.0.0), this is remote RCE-on-data: attackers can read/modify/delete DB
contents via HTTP/WS.
Code

src/admin/http_server.rs[R279-336]

+async fn handle_command(
+    req: Request<Incoming>,
+    gw: &crate::admin::console_gateway::ConsoleGateway,
+) -> Response<BoxBody<Bytes, Infallible>> {
+    let body = match read_body(req).await {
+        Ok(b) => b,
+        Err(resp) => return resp,
+    };
+
+    let parsed: serde_json::Value = match serde_json::from_slice(&body) {
+        Ok(v) => v,
+        Err(e) => {
+            return json_response(
+                StatusCode::BAD_REQUEST,
+                &serde_json::json!({"error": format!("Invalid JSON: {}", e)}),
+            );
+        }
+    };
+
+    let cmd = match parsed["cmd"].as_str() {
+        Some(c) => c,
+        None => {
+            return json_response(
+                StatusCode::BAD_REQUEST,
+                &serde_json::json!({"error": "Missing 'cmd' field"}),
+            );
+        }
+    };
+
+    let args: Vec<Bytes> = parsed["args"]
+        .as_array()
+        .map(|arr| {
+            arr.iter()
+                .map(|v| match v.as_str() {
+                    Some(s) => Bytes::from(s.to_string()),
+                    None => Bytes::from(v.to_string()),
+                })
+                .collect()
+        })
+        .unwrap_or_default();
+
+    let db_index = parsed["db"].as_u64().unwrap_or(0) as usize;
+
+    match gw.execute_command(db_index, cmd, &args).await {
+        Ok(frame) => {
+            let result = crate::admin::console_gateway::ConsoleGateway::frame_to_json(&frame);
+            let type_name = frame_type_name(&frame);
+            json_response(
+                StatusCode::OK,
+                &serde_json::json!({"result": result, "type": type_name}),
+            )
+        }
+        Err(e) => json_response(
+            StatusCode::INTERNAL_SERVER_ERROR,
+            &serde_json::json!({"error": e}),
+        ),
+    }
+}
Evidence
The REST command handler directly forwards user-provided cmd/args/db into gw.execute_command (no
auth gates). CORS is globally permissive for JSON responses and SSE, and protected-mode checks are
only enforced on the Redis/TLS listeners, not the admin HTTP server, so console endpoints are
exposed whenever the admin port is bound/exposed.

src/admin/http_server.rs[167-173]
src/admin/http_server.rs[276-344]
src/admin/http_server_support.rs[32-54]
src/admin/sse_stream.rs[78-85]
src/admin/metrics_setup.rs[37-68]
src/server/listener.rs[279-317]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Console REST/WS/SSE endpoints on the admin HTTP port allow unauthenticated arbitrary command execution, and JSON/SSE responses set `Access-Control-Allow-Origin: *`, enabling cross-origin browser access.

### Issue Context
- `/api/v1/command` forwards user JSON directly into `ConsoleGateway::execute_command`.
- `json_response()` attaches permissive CORS headers.
- `/ws` upgrade path has no Origin validation.
- Protected mode currently guards Redis/TLS listeners only.

### Fix Focus Areas
- Add an explicit authentication/authorization gate for all console endpoints (REST + WS + SSE), ideally reusing existing auth config (`requirepass`/ACL) or introducing a dedicated admin token.
- Default to **no CORS** (or same-origin only) in production builds; only allow `*` in explicit dev mode.
- For WebSocket: validate `Origin`/`Host` and reject cross-site origins by default.
- Consider enforcing that admin HTTP (when `console` is enabled) only binds to loopback unless explicitly opted out.

### Fix Focus Areas (code locations)
- src/admin/http_server.rs[114-191]
- src/admin/http_server.rs[203-277]
- src/admin/http_server.rs[276-344]
- src/admin/http_server_support.rs[32-54]
- src/admin/sse_stream.rs[78-85]
- src/admin/ws_bridge.rs[31-93]
- src/admin/metrics_setup.rs[37-68]
- src/config.rs[8-20]



5. SPA fallback breaks missing assets 🐞
Description
serve_static_file() serves index.html for *any* unknown path (including missing JS/CSS under
/assets/*), so when referenced assets aren’t embedded/present the browser receives HTML instead of
JS/CSS and the console fails to load. This is exacerbated because console/dist/index.html
references hashed /assets/... files, but the Rust embed serves only what exists under
console/dist/.
Code

src/admin/static_files.rs[R21-64]

+pub fn serve_static_file(path: &str) -> Response<Full<Bytes>> {
+    let path = path.trim_start_matches('/');
+    let path = if path.is_empty() { "index.html" } else { path };
+
+    match ConsoleAssets::get(path) {
+        Some(file) => {
+            let mime = mime_guess::from_path(path).first_or_text_plain();
+            let cache_control = if path == "index.html" {
+                // index.html must not be aggressively cached (SPA entry point)
+                "no-cache"
+            } else {
+                // Hashed assets (JS, CSS) can be cached immutably
+                "public, max-age=31536000, immutable"
+            };
+
+            Response::builder()
+                .status(StatusCode::OK)
+                .header("content-type", mime.as_ref())
+                .header("cache-control", cache_control)
+                .body(Full::new(Bytes::from(file.data.into_owned())))
+                .unwrap_or_else(|_| {
+                    Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
+                })
+        }
+        None => {
+            // SPA fallback: serve index.html for client-side routing
+            match ConsoleAssets::get("index.html") {
+                Some(file) => Response::builder()
+                    .status(StatusCode::OK)
+                    .header("content-type", "text/html; charset=utf-8")
+                    .header("cache-control", "no-cache")
+                    .body(Full::new(Bytes::from(file.data.into_owned())))
+                    .unwrap_or_else(|_| {
+                        Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
+                    }),
+                None => Response::builder()
+                    .status(StatusCode::NOT_FOUND)
+                    .header("content-type", "text/plain")
+                    .body(Full::new(Bytes::from_static(b"Console not available")))
+                    .unwrap_or_else(|_| {
+                        Response::new(Full::new(Bytes::from_static(b"Not Found")))
+                    }),
+            }
+        }
Evidence
The embedded index.html references /assets/... bundles. The static file server falls back to
returning index.html (200) for any missing file, rather than returning 404 for missing assets, which
will cause asset requests to succeed with HTML content and break the client at runtime. The README
indicates dist/ is an input to the Rust build via rust-embed, but Cargo build does not build the
frontend, making it easy to embed incomplete/stale output.

console/dist/index.html[1-9]
src/admin/static_files.rs[16-63]
console/README.md[24-32]
build.rs[1-20]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The embedded console can fail to load because `serve_static_file()` returns `index.html` for any unknown path, including missing `/assets/*` bundles referenced by `dist/index.html`.

### Issue Context
- `console/dist/index.html` references hashed JS/CSS under `/assets/...`.
- Rust static serving uses rust-embed from `console/dist/`.
- The server returns index.html on any miss, which masks missing bundles and produces HTML responses to JS/CSS requests.
- The Rust build does not automatically run `pnpm build`, so it’s easy to ship a binary with incomplete/stale `dist/`.

### Fix Focus Areas
- Change SPA fallback to only apply to likely client-routes (e.g., paths without a file extension) and/or only under `/ui/*`.
- Return **404** for missing static assets (especially under `/assets/`), rather than serving index.html.
- Add a build-time guarantee that `console/dist` contains the expected assets when `--features console` is enabled:
 - Option A: check in built assets (if that’s intended).
 - Option B: add a build step (e.g., in `build.rs` gated by `CARGO_FEATURE_CONSOLE`) that runs `pnpm build` or fails fast with a clear error if assets are missing.

### Fix Focus Areas (code locations)
- src/admin/static_files.rs[16-65]
- console/dist/index.html[1-9]
- console/README.md[24-32]
- build.rs[1-20]




Remediation recommended

6. No fuzz target for WS parser 📘
Description
The PR introduces new parsing/deserialization entry points for untrusted WebSocket JSON messages but
does not add or wire a corresponding cargo-fuzz target. This reduces coverage for malformed-input
handling in the new parser logic.
Code

src/admin/ws_bridge.rs[R106-118]

+async fn process_ws_message(
+    text: &str,
+    gateway: &ConsoleGateway,
+    selected_db: &mut usize,
+) -> serde_json::Value {
+    let parsed: serde_json::Value = match serde_json::from_str(text) {
+        Ok(v) => v,
+        Err(e) => {
+            return serde_json::json!({
+                "error": format!("Invalid JSON: {}", e)
+            });
+        }
+    };
Evidence
PR Compliance ID 302085 requires new parsers/deserializers to have corresponding fuzz targets under
fuzz/fuzz_targets/ and wired in fuzz/Cargo.toml; the new process_ws_message JSON parsing code
has no associated fuzz target listed.

Rule 302085: Fuzz new parsers and deserializers with cargo-fuzz
src/admin/ws_bridge.rs[95-188]
fuzz/Cargo.toml[22-62]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new JSON parsing/deserialization entry point (`process_ws_message`) was added, but no corresponding cargo-fuzz target was added/registered.

## Issue Context
The fuzz harness lives under `fuzz/` and enumerates targets in `fuzz/Cargo.toml`. Add a new fuzz target that feeds arbitrary `&[u8]` (or UTF-8 `&str`) into the parsing layer used by `process_ws_message` (or a thin wrapper that shares the same parsing/validation logic).

## Fix Focus Areas
- src/admin/ws_bridge.rs[95-188]
- fuzz/Cargo.toml[22-62]



7. Unbounded REST request body 🐞
Description
read_body() collects the full HTTP request body without any size cap, so a large POST/PUT to
console endpoints can exhaust memory and crash or destabilize the admin runtime. This is especially
risky because the endpoints are otherwise open and easy to hit.
Code

src/admin/http_server.rs[R55-63]

+async fn read_body(req: Request<Incoming>) -> Result<Bytes, Response<BoxBody<Bytes, Infallible>>> {
+    match req.collect().await {
+        Ok(collected) => Ok(collected.to_bytes()),
+        Err(_) => Err(json_response(
+            StatusCode::BAD_REQUEST,
+            &serde_json::json!({"error": "Failed to read request body"}),
+        )),
+    }
+}
Evidence
read_body() calls req.collect().await and converts to Bytes without checking Content-Length
or enforcing a maximum, so the server will buffer arbitrarily large payloads in memory.

src/admin/http_server.rs[54-63]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Console REST handlers read request bodies with no maximum size, allowing memory exhaustion via large requests.

### Issue Context
`read_body()` uses `req.collect().await` and `to_bytes()` with no cap.

### Fix Focus Areas
- Implement a maximum request body size (e.g., 1 MiB or 10 MiB) for `/api/v1/*` JSON endpoints.
- Enforce via:
 - Rejecting if `Content-Length` exceeds the limit, and/or
 - Streaming frames and stopping once the limit is exceeded.
- Return `413 Payload Too Large` on overflow.

### Fix Focus Areas (code locations)
- src/admin/http_server.rs[54-63]
- src/admin/http_server.rs[279-336]
- src/admin/http_server.rs[489-536]
- src/admin/http_server.rs[580-639]
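
As a rough sketch of the two-layer guard described above (the limit value, function names, and frame-based accumulation are illustrative, not Moon's actual API):

```rust
// Illustrative cap; the real value would come from config or a constant
// agreed for the /api/v1/* JSON endpoints.
const MAX_BODY_BYTES: u64 = 1024 * 1024; // 1 MiB

/// Early rejection based on the declared Content-Length, when present.
fn content_length_ok(declared: Option<u64>) -> bool {
    declared.map_or(true, |len| len <= MAX_BODY_BYTES)
}

/// Streaming accumulator: fails (→ 413 Payload Too Large) as soon as the
/// cap is crossed, so an absent or lying Content-Length cannot bypass it.
fn accumulate(frames: impl IntoIterator<Item = Vec<u8>>) -> Result<Vec<u8>, &'static str> {
    let mut buf = Vec::new();
    for frame in frames {
        if (buf.len() + frame.len()) as u64 > MAX_BODY_BYTES {
            return Err("413 Payload Too Large");
        }
        buf.extend_from_slice(&frame);
    }
    Ok(buf)
}
```

If the server already uses hyper with `http-body-util`, its `Limited` body wrapper provides the streaming half of this out of the box.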



8. DB index silently clamped 🐞
Description
WebSocket SELECT and REST db accept out-of-range indices and report success, but shard execution
clamps db_index to the last DB, causing commands to run against an unexpected database. This
diverges from the real SELECT command semantics which return ERR DB index is out of range.
Code

src/admin/ws_bridge.rs[R138-175]

+    // Handle SELECT locally (changes session db).
+    if cmd == "SELECT" {
+        if let Some(db) = parsed
+            .get("args")
+            .and_then(|a| a.as_array())
+            .and_then(|a| a.first())
+            .and_then(|v| v.as_str().or_else(|| v.as_u64().map(|_| "")))
+        {
+            // Try parsing from string representation or direct integer
+            let db_num = if db.is_empty() {
+                parsed
+                    .get("args")
+                    .and_then(|a| a.as_array())
+                    .and_then(|a| a.first())
+                    .and_then(|v| v.as_u64())
+                    .map(|n| n as usize)
+            } else {
+                db.parse::<usize>().ok()
+            };
+
+            if let Some(db_num) = db_num {
+                *selected_db = db_num;
+                let mut resp = serde_json::json!({"result": "OK", "type": "simple_string"});
+                if let Some(id) = request_id {
+                    resp["id"] = id;
+                }
+                return resp;
+            }
+        }
+    }
+
+    // If request specifies db, use that; otherwise use session db.
+    let db_index = parsed
+        .get("db")
+        .and_then(|v| v.as_u64())
+        .map(|v| v as usize)
+        .unwrap_or(*selected_db);
+
Evidence
The WS bridge implements SELECT locally by assigning selected_db without validating bounds. The
shard SPSC handler clamps db_index via min(db_count - 1), so out-of-range indices silently map
to the last DB. The canonical SELECT handler validates and errors when index >= db_count.

src/admin/ws_bridge.rs[138-175]
src/shard/spsc_handler.rs[195-205]
src/command/connection.rs[59-97]
src/admin/http_server.rs[320-333]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Console WS/REST allow selecting/using an out-of-range DB index and return OK, but the shard layer clamps to the last DB, leading to operations executing on the wrong database.

### Issue Context
- `ws_bridge` handles `SELECT` locally and sets `selected_db` without checking configured DB count.
- `/api/v1/command` accepts `db` and forwards it as `usize`.
- Shard execution clamps `db_index` to `db_count - 1`.
- Core `SELECT` command returns an explicit out-of-range error.

### Fix Focus Areas
- Plumb configured `db_count` (from `ServerConfig.databases`) into `ConsoleGateway` and validate `db_index < db_count`.
- In WS `SELECT`: return an error frame/message matching server semantics when out of range.
- In REST `/api/v1/command`: return `400` for invalid/out-of-range `db`.

### Fix Focus Areas (code locations)
- src/admin/ws_bridge.rs[138-175]
- src/admin/http_server.rs[279-336]
- src/admin/console_gateway.rs[83-159]
- src/main.rs[248-259]
- src/command/connection.rs[59-97]
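
The core of the fix is one shared bounds check (names illustrative) that the WS SELECT handler, the REST `db` field, and any other console entry point call before dispatching, so the shard layer never needs to clamp:

```rust
/// Validate a console-supplied DB index against the configured DB count.
/// Mirrors the canonical SELECT error instead of silently clamping.
fn validate_db_index(db_index: usize, db_count: usize) -> Result<usize, String> {
    if db_index < db_count {
        Ok(db_index)
    } else {
        Err("ERR DB index is out of range".to_string())
    }
}
```

On the WS side the `Err` becomes an error frame (with the request `id` preserved); on REST it becomes a `400`.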




Comment on lines +143 to +148
// Push to the SPSC ring buffer. If full, the shard is overloaded.
{
let mut prod = self.shard_producers[shard_id].lock();
prod.try_push(msg)
.map_err(|_| "shard SPSC buffer full (overloaded)".to_string())?;
}

Action required

1. ConsoleGateway locks SPSC producer 📘 Rule violation ➹ Performance

The console gateway uses parking_lot::Mutex to lock a per-shard SPSC producer on every command
dispatch instead of using flume::Sender/Receiver for lock-free stage handoff. This violates the
hot-path pipeline communication requirement and can add avoidable synchronization overhead per
message.
Agent Prompt
## Issue description
The console gateway dispatch path locks `Mutex<HeapProd<ShardMessage>>` on every command push and uses ringbuf SPSC plumbing instead of `flume::Sender`/`Receiver`, violating the rule requiring flume channels for hot-path pipeline communication.

## Issue Context
This affects the admin/console request pipeline (REST/WS → shard dispatch). The compliance requirement specifically calls for flume-based producer/consumer messaging instead of per-item locking.

## Fix Focus Areas
- src/admin/console_gateway.rs[33-38]
- src/admin/console_gateway.rs[143-148]
- src/shard/mesh.rs[176-193]
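
The shape the rule asks for is a bounded channel whose sender needs no `Mutex`. The sketch below uses std's `sync_channel` so it stays self-contained; the real fix would swap in `flume::bounded`, whose `Sender` has the same `try_send` shape and is cloneable across dispatch tasks:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

/// Push a message to a shard without holding a lock on the dispatch path;
/// a full buffer surfaces as an overload error instead of blocking.
fn dispatch(prod: &SyncSender<String>, msg: String) -> Result<(), String> {
    prod.try_send(msg).map_err(|e| match e {
        TrySendError::Full(_) => "shard buffer full (overloaded)".to_string(),
        TrySendError::Disconnected(_) => "shard channel closed".to_string(),
    })
}
```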


Comment on lines +252 to +256
#[cfg(test)]
mod tests {
use super::*;

#[test]

Action required

2. Admin tests outside mod.rs 📘 Rule violation ▣ Testability

New unit tests were added in non-mod.rs files under the split src/admin/ module tree. This
violates the requirement that split-module unit tests live in the module’s mod.rs only, to keep
tests centralized and coordinated.
Agent Prompt
## Issue description
Unit tests were added to leaf module files under `src/admin/`, but the compliance rule requires tests for split modules to be centralized in `src/admin/mod.rs`.

## Issue Context
`src/admin/` is a split module directory with a `mod.rs` that declares submodules. Tests should be declared only in `mod.rs` (e.g., `#[cfg(test)] mod tests { ... }`) and import the items they test.

## Fix Focus Areas
- src/admin/mod.rs[6-25]
- src/admin/console_gateway.rs[252-256]
- src/admin/memory_treemap.rs[295-300]
- src/admin/hnsw_trace.rs[175-179]
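
The layout the rule requires looks roughly like this (module and function names are illustrative, not the real admin API): leaf files only export items, and the single `#[cfg(test)]` block in `mod.rs` imports what it tests.

```rust
// In the real tree, `console_gateway` would be src/admin/console_gateway.rs
// and the tests module below would live in src/admin/mod.rs.
mod console_gateway {
    pub fn frame_type_name(is_error: bool) -> &'static str {
        if is_error { "error" } else { "simple_string" }
    }
}

#[cfg(test)]
mod tests {
    use super::console_gateway::frame_type_name;

    #[test]
    fn type_names() {
        assert_eq!(frame_type_name(true), "error");
        assert_eq!(frame_type_name(false), "simple_string");
    }
}
```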


Comment thread src/vector/hnsw/search.rs
Comment on lines +1409 to +1416
// ── Trace tests (console feature) ────────────────────────────────

#[cfg(feature = "console")]
#[test]
fn trace_empty_graph_returns_empty() {
distance::init();
let collection = CollectionMetadata::new(
1,

Action required

3. HNSW trace tests outside mod.rs 📘 Rule violation ▣ Testability

New console-feature tests were added in src/vector/hnsw/search.rs, which is a leaf file in a split
module directory that already has src/vector/hnsw/mod.rs. This violates the rule that unit tests
for split modules must live in mod.rs rather than sibling leaf files.
Agent Prompt
## Issue description
Unit tests were added to `src/vector/hnsw/search.rs`, but the compliance rule requires split-module unit tests to live in `src/vector/hnsw/mod.rs`.

## Issue Context
`src/vector/hnsw/` is a split module directory (it has `mod.rs` plus multiple sibling leaf files). Tests should be centralized in `mod.rs` and import functions/types from `search`.

## Fix Focus Areas
- src/vector/hnsw/mod.rs[1-10]
- src/vector/hnsw/search.rs[1409-1473]


Comment thread src/admin/http_server.rs
Comment on lines +279 to +336
async fn handle_command(
req: Request<Incoming>,
gw: &crate::admin::console_gateway::ConsoleGateway,
) -> Response<BoxBody<Bytes, Infallible>> {
let body = match read_body(req).await {
Ok(b) => b,
Err(resp) => return resp,
};

let parsed: serde_json::Value = match serde_json::from_slice(&body) {
Ok(v) => v,
Err(e) => {
return json_response(
StatusCode::BAD_REQUEST,
&serde_json::json!({"error": format!("Invalid JSON: {}", e)}),
);
}
};

let cmd = match parsed["cmd"].as_str() {
Some(c) => c,
None => {
return json_response(
StatusCode::BAD_REQUEST,
&serde_json::json!({"error": "Missing 'cmd' field"}),
);
}
};

let args: Vec<Bytes> = parsed["args"]
.as_array()
.map(|arr| {
arr.iter()
.map(|v| match v.as_str() {
Some(s) => Bytes::from(s.to_string()),
None => Bytes::from(v.to_string()),
})
.collect()
})
.unwrap_or_default();

let db_index = parsed["db"].as_u64().unwrap_or(0) as usize;

match gw.execute_command(db_index, cmd, &args).await {
Ok(frame) => {
let result = crate::admin::console_gateway::ConsoleGateway::frame_to_json(&frame);
let type_name = frame_type_name(&frame);
json_response(
StatusCode::OK,
&serde_json::json!({"result": result, "type": type_name}),
)
}
Err(e) => json_response(
StatusCode::INTERNAL_SERVER_ERROR,
&serde_json::json!({"error": e}),
),
}
}

Action required

4. Unauthenticated admin command exec 🐞 Bug ⛨ Security

/api/v1/command and /ws execute arbitrary commands via ConsoleGateway without any
authentication/ACL enforcement, while API responses set permissive CORS
(Access-Control-Allow-Origin: *), enabling cross-origin browser access. If the admin port is
reachable (e.g., --bind 0.0.0.0), this amounts to remote, unauthenticated control of the data
plane: attackers can read, modify, or delete DB contents via HTTP/WS.
Agent Prompt
### Issue description
Console REST/WS/SSE endpoints on the admin HTTP port allow unauthenticated arbitrary command execution, and JSON/SSE responses set `Access-Control-Allow-Origin: *`, enabling cross-origin browser access.

### Issue Context
- `/api/v1/command` forwards user JSON directly into `ConsoleGateway::execute_command`.
- `json_response()` attaches permissive CORS headers.
- `/ws` upgrade path has no Origin validation.
- Protected mode currently guards Redis/TLS listeners only.

### Fix Focus Areas
- Add an explicit authentication/authorization gate for all console endpoints (REST + WS + SSE), ideally reusing existing auth config (`requirepass`/ACL) or introducing a dedicated admin token.
- Default to **no CORS** (or same-origin only) in production builds; only allow `*` in explicit dev mode.
- For WebSocket: validate `Origin`/`Host` and reject cross-site origins by default.
- Consider enforcing that admin HTTP (when `console` is enabled) only binds to loopback unless explicitly opted out.

### Fix Focus Areas (code locations)
- src/admin/http_server.rs[114-191]
- src/admin/http_server.rs[203-277]
- src/admin/http_server.rs[276-344]
- src/admin/http_server_support.rs[32-54]
- src/admin/sse_stream.rs[78-85]
- src/admin/ws_bridge.rs[31-93]
- src/admin/metrics_setup.rs[37-68]
- src/config.rs[8-20]
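
Sketch of the two gates, reduced to pure predicates (the header names are standard; the token scheme and allowlist are hypothetical, and a production check would compare tokens in constant time):

```rust
/// Bearer-token gate for REST/SSE; `expected` would come from existing auth
/// config (requirepass) or a dedicated admin token.
fn authorized(auth_header: Option<&str>, expected: &str) -> bool {
    matches!(auth_header, Some(h) if h.strip_prefix("Bearer ") == Some(expected))
}

/// Origin gate for the /ws upgrade: non-browser clients send no Origin and
/// pass; browser origins must match the allowlist exactly.
fn origin_allowed(origin: Option<&str>, allowed: &[&str]) -> bool {
    match origin {
        None => true,
        Some(o) => allowed.iter().any(|a| *a == o),
    }
}
```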


Comment thread src/admin/static_files.rs
Comment on lines +21 to +64
pub fn serve_static_file(path: &str) -> Response<Full<Bytes>> {
let path = path.trim_start_matches('/');
let path = if path.is_empty() { "index.html" } else { path };

match ConsoleAssets::get(path) {
Some(file) => {
let mime = mime_guess::from_path(path).first_or_text_plain();
let cache_control = if path == "index.html" {
// index.html must not be aggressively cached (SPA entry point)
"no-cache"
} else {
// Hashed assets (JS, CSS) can be cached immutably
"public, max-age=31536000, immutable"
};

Response::builder()
.status(StatusCode::OK)
.header("content-type", mime.as_ref())
.header("cache-control", cache_control)
.body(Full::new(Bytes::from(file.data.into_owned())))
.unwrap_or_else(|_| {
Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
})
}
None => {
// SPA fallback: serve index.html for client-side routing
match ConsoleAssets::get("index.html") {
Some(file) => Response::builder()
.status(StatusCode::OK)
.header("content-type", "text/html; charset=utf-8")
.header("cache-control", "no-cache")
.body(Full::new(Bytes::from(file.data.into_owned())))
.unwrap_or_else(|_| {
Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
}),
None => Response::builder()
.status(StatusCode::NOT_FOUND)
.header("content-type", "text/plain")
.body(Full::new(Bytes::from_static(b"Console not available")))
.unwrap_or_else(|_| {
Response::new(Full::new(Bytes::from_static(b"Not Found")))
}),
}
}

Action required

5. SPA fallback breaks missing assets 🐞 Bug ≡ Correctness

serve_static_file() serves index.html for *any* unknown path (including missing JS/CSS under
/assets/*), so when referenced assets aren’t embedded/present the browser receives HTML instead of
JS/CSS and the console fails to load. This is exacerbated because console/dist/index.html
references hashed /assets/... files, but the Rust embed serves only what exists under
console/dist/.
Agent Prompt
### Issue description
The embedded console can fail to load because `serve_static_file()` returns `index.html` for any unknown path, including missing `/assets/*` bundles referenced by `dist/index.html`.

### Issue Context
- `console/dist/index.html` references hashed JS/CSS under `/assets/...`.
- Rust static serving uses rust-embed from `console/dist/`.
- The server returns index.html on any miss, which masks missing bundles and produces HTML responses to JS/CSS requests.
- The Rust build does not automatically run `pnpm build`, so it’s easy to ship a binary with incomplete/stale `dist/`.

### Fix Focus Areas
- Change SPA fallback to only apply to likely client-routes (e.g., paths without a file extension) and/or only under `/ui/*`.
- Return **404** for missing static assets (especially under `/assets/`), rather than serving index.html.
- Add a build-time guarantee that `console/dist` contains the expected assets when `--features console` is enabled:
  - Option A: check in built assets (if that’s intended).
  - Option B: add a build step (e.g., in `build.rs` gated by `CARGO_FEATURE_CONSOLE`) that runs `pnpm build` or fails fast with a clear error if assets are missing.

### Fix Focus Areas (code locations)
- src/admin/static_files.rs[16-65]
- console/dist/index.html[1-9]
- console/README.md[24-32]
- build.rs[1-20]
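
A narrower fallback predicate could look like this (a sketch, not the actual `static_files.rs` API; the test paths are illustrative): only extensionless paths outside `/assets/` fall back to `index.html`, and anything file-like that misses the embed map becomes a plain 404.

```rust
/// Decide whether an unmatched path should get the SPA index.html fallback.
fn should_spa_fallback(path: &str) -> bool {
    let path = path.trim_start_matches('/');
    let last_segment = path.rsplit('/').next().unwrap_or("");
    // Asset misses and file-like paths (with an extension) must 404 so a
    // stale or incomplete dist/ fails loudly instead of serving HTML as JS.
    !path.starts_with("assets/") && !last_segment.contains('.')
}
```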



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟡 Minor comments (24)
scripts/gen-verification.sh-115-115 (1)

115-115: ⚠️ Potential issue | 🟡 Minor

Quote nested expansion in prefix stripping to avoid glob-pattern mismatch.

Line 115 uses unquoted nested expansion in parameter removal. When REPO_ROOT contains glob metacharacters like [, the pattern matching fails and the prefix is not stripped.

Suggested fix
-      rel="${f#${REPO_ROOT}/}"
+      rel="${f#"$REPO_ROOT"/}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/gen-verification.sh` at line 115, The prefix-stripping uses an
unquoted nested expansion rel="${f#${REPO_ROOT}/}" which can be treated as a
glob pattern if REPO_ROOT contains metacharacters; update the code to quote the
nested expansion to prevent pathname/glob expansion, e.g. set a quoted prefix or
use rel="${f#"$REPO_ROOT"/}" (or prefix="${REPO_ROOT}/"; rel="${f#"$prefix"}")
so f, REPO_ROOT and rel are handled safely.
src/vector/hnsw/search.rs-870-875 (1)

870-875: ⚠️ Potential issue | 🟡 Minor

Base-layer selected cap does not match the documented contract.

Line 874 uses k_cap.max(ef_cap), but ef_cap is already max(ef_search, k) (Line 845), so this effectively caps by ef_search. The docs/comments state layer 0 should be capped by k.

Proposed fix
-        selected: l0_visited.min(k_cap.max(ef_cap)),
+        selected: l0_visited.min(k_cap),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/hnsw/search.rs` around lines 870 - 875, The HnswTraceLayer for
layer 0 currently sets selected using k_cap.max(ef_cap), which because ef_cap is
already max(ef_search, k) ends up effectively capping by ef_search rather than
k; change the selected computation in the HnswTraceLayer construction (the
layer: 0 entry where HnswTraceLayer is created) to use k_cap as the cap (e.g.,
selected should be l0_visited.min(k_cap)) so layer 0 follows the documented
contract of being capped by k.
src/admin/console_gateway.rs-22-24 (1)

22-24: ⚠️ Potential issue | 🟡 Minor

Don't silently ignore duplicate gateway initialization.

A second set_global_gateway() currently fails with no signal and leaves the stale gateway installed. That makes bootstrap and test reinitialization bugs much harder to diagnose.

🩹 Suggested change
-pub fn set_global_gateway(gw: Arc<ConsoleGateway>) {
-    let _ = CONSOLE_GATEWAY.set(gw);
+pub fn set_global_gateway(gw: Arc<ConsoleGateway>) -> Result<(), Arc<ConsoleGateway>> {
+    CONSOLE_GATEWAY.set(gw)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/console_gateway.rs` around lines 22 - 24, Currently
set_global_gateway(gw: Arc<ConsoleGateway>) silently ignores a second
initialization because CONSOLE_GATEWAY.set is swallowed; change it to detect
duplicate initialization (use CONSOLE_GATEWAY.try_set or check the Result from
set) and surface an error instead of ignoring it — either return a Result from
set_global_gateway (e.g., Result<(), InitError>) or log/panic with a clear
message including the new gw and the already-installed gateway; update call
sites to handle the new Result if you choose that route and reference
set_global_gateway and CONSOLE_GATEWAY when making the change.
console/src/components/vector/UmapProgress.tsx-8-19 (1)

8-19: ⚠️ Potential issue | 🟡 Minor

Clamp progress percentage before rendering width.

pct can exceed bounds when progress reports transient values outside [0, total], which can produce misleading UI.

🐛 Proposed fix
-  const pct = progress.total > 0 ? (progress.epoch / progress.total) * 100 : 0;
+  const rawPct = progress.total > 0 ? (progress.epoch / progress.total) * 100 : 0;
+  const pct = Math.max(0, Math.min(100, rawPct));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/vector/UmapProgress.tsx` around lines 8 - 19, The
computed pct in UmapProgress (currently const pct = progress.total > 0 ?
(progress.epoch / progress.total) * 100 : 0) may fall outside 0–100; clamp pct
to that range before using it in the inline style. Update the pct computation
(or add a clamp step) so pct = Math.min(100, Math.max(0, calculatedPct)) and use
that clamped value in the style width for the inner progress div to prevent
overflow/negative widths when progress.epoch is out of bounds.
console/src/lib/monarch-cypher.ts-32-34 (1)

32-34: ⚠️ Potential issue | 🟡 Minor

Separate arrow operators from arithmetic minus to fix semantic tokenization.

Line 32 currently classifies bare - as operator.arrow, causing expressions like 1-2 to be mis-tokenized. The hyphen in Cypher has different semantic roles: relationship arrows (->, <-) versus arithmetic minus, and should be tokenized separately.

Suggested fix
-      [/->|<-|-/, "operator.arrow"],
+      [/->|<-/, "operator.arrow"],
+      [/=~|<>|<=|>=|=|<|>|\+|-|\*|\/|%/, "operator"],
tests/console_gateway_test.rs-13-18 (1)

13-18: ⚠️ Potential issue | 🟡 Minor

Hardcoded ports conflict across test files and could cause CI flakiness when tests run together.

Tests in both jepsen_lite.rs (port 16399) and crash_matrix.rs (port 16400) use the same hardcoded ports as this test. While all tests are marked #[ignore] and won't run automatically, if invoked together (manually or by a test harness), they will fail due to port collisions. Consider using environment variables or a test coordination mechanism to allocate distinct ports for each test, or ensure the test suite enforces serial execution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/console_gateway_test.rs` around lines 13 - 18, The test uses hardcoded
ports in the start_server() helper
(Command::new("./target/release/moon").args(["--port", "16399", "--admin-port",
"16400", "--shards", "2"])) which can collide with other tests; modify
start_server to accept ports from environment variables (e.g., read PORT and
ADMIN_PORT) or generate randomized free ports at runtime and pass them into
Command::args, or add a coordination mechanism to ensure unique ports per test;
update any callers of start_server to provide/propagate the chosen ports and
ensure the spawned server logs the actual ports used for easier debugging.
console/src/components/browser/editors/ListEditor.tsx-9-21 (1)

9-21: ⚠️ Potential issue | 🟡 Minor

Add error handling for LPUSH/RPUSH operations.

Similar to HashEditor, these operations perform optimistic state updates without catching errors. Network or server failures would leave the UI out of sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/ListEditor.tsx` around lines 9 - 21,
The LPUSH/RPUSH handlers (handleLPush and handleRPush) do optimistic UI updates
without error handling, so wrap the execCommand call in try/catch and only
update state (setItems and setNewValue) after a successful execCommand; on
failure log or surface the error (e.g., via processLogger/console/error toast)
and avoid mutating items. Also consider disabling the input while the async call
is pending (use a local isSaving flag) to prevent duplicate requests and ensure
execCommand is awaited before clearing newValue.
console/src/components/browser/editors/HashEditor.tsx-46-57 (1)

46-57: ⚠️ Potential issue | 🟡 Minor

Add error handling for delete and add operations.

handleDelete and handleAdd perform optimistic state updates without catching errors. If HDEL or HSET fails, the UI will show stale state (item removed/added locally but not on server).

🛡️ Proposed error handling
   const handleDelete = useCallback(async (field: string) => {
-    await execCommand("HDEL", [keyName, field]);
-    setFields((prev) => prev.filter((f) => f.field !== field));
+    try {
+      await execCommand("HDEL", [keyName, field]);
+      setFields((prev) => prev.filter((f) => f.field !== field));
+    } catch (err) {
+      console.error("Failed to delete field:", err);
+      // Optionally show user feedback
+    }
   }, [keyName]);
 
   const handleAdd = async () => {
     if (!newField.trim()) return;
-    await execCommand("HSET", [keyName, newField, newValue]);
-    setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
-    setNewField("");
-    setNewValue("");
+    try {
+      await execCommand("HSET", [keyName, newField, newValue]);
+      setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
+      setNewField("");
+      setNewValue("");
+    } catch (err) {
+      console.error("Failed to add field:", err);
+    }
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/HashEditor.tsx` around lines 46 - 57,
handleDelete and handleAdd optimistically update state without handling
execCommand failures; wrap each execCommand call in try/catch and only mutate
state after a successful response (or revert on error). For handleDelete, call
execCommand("HDEL", ...) inside try and only call setFields(...) after success
(or store the removed item and restore it in catch), and surface/log the error;
for handleAdd, perform execCommand("HSET", ...) inside try before calling
setFields([...]), setNewField("") and setNewValue(""), and handle/log failures
in catch (or remove the newly added entry on failure). Use the existing symbols
execCommand, handleDelete, handleAdd, setFields, setNewField and setNewValue
when making these changes.
console/src/components/dashboard/HitRatioCard.tsx-17-20 (1)

17-20: ⚠️ Potential issue | 🟡 Minor

Non-deterministic Math.random() inside useMemo.

Using Math.random() inside useMemo makes the computation impure—the sparkline will regenerate with different random values each time serverInfo changes, causing visual jumpiness. If jitter is desired, consider seeding it deterministically (e.g., based on index and hit rate) or generating it once outside the memo.

🔧 Deterministic jitter alternative
     // Generate synthetic sparkline from ratio (single-value sparkline)
     const spark = Array.from({ length: 10 }, (_, i) => ({
       idx: i,
-      value: rate + (Math.random() - 0.5) * 2, // slight jitter for visual
+      value: rate + ((i % 2 === 0 ? 1 : -1) * (i * 0.2)), // deterministic wave
     }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/dashboard/HitRatioCard.tsx` around lines 17 - 20, The
sparkline generation in the useMemo (variable spark) is non-deterministic
because it uses Math.random(), causing visual jumpiness when serverInfo changes;
replace the impure randomness by either (a) computing a deterministic jitter
based on stable inputs (e.g., derive a pseudo-random value from rate and index
using a small hash or Math.sin(index + rate * constant)) inside the same useMemo
for spark, or (b) generate the jitter once outside the memo (on component mount)
and store it in state so spark stays stable; update the code that builds spark
(referencing spark, rate, and useMemo) to use the deterministic function or
stored jitter instead of Math.random().
console/src/components/browser/editors/HashEditor.tsx-84-85 (1)

84-85: ⚠️ Potential issue | 🟡 Minor

Potential double-save when pressing Enter.

When Enter is pressed, handleSave is called via onKeyDown. This sets editing: false, which unmounts the input and triggers onBlur, calling handleSave again. This could result in two HSET commands for the same edit.

🔧 Prevent double-save by checking editing state or using a flag
-              onKeyDown={(e) => e.key === "Enter" && handleSave(f.field, f.editValue)}
-              onBlur={() => handleSave(f.field, f.editValue)}
+              onKeyDown={(e) => {
+                if (e.key === "Enter") {
+                  e.currentTarget.blur(); // Let blur handler do the save
+                }
+              }}
+              onBlur={() => handleSave(f.field, f.editValue)}

Or alternatively, track a "saving in progress" flag per field to skip redundant calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/HashEditor.tsx` around lines 84 - 85,
Pressing Enter triggers handleSave via onKeyDown and then onBlur again when the
input unmounts, causing duplicate saves; update the logic so handleSave is
idempotent or guarded: add a per-field "saving" boolean or check the field's
editing state inside handleSave (reference handleSave, onKeyDown, onBlur,
editing, f.field, f.editValue) and early-return if a save is already in progress
or editing is false, set the saving flag before starting the HSET call and clear
it after completion so the second trigger is ignored.
console/src/components/browser/ValuePanel.tsx-63-68 (1)

63-68: ⚠️ Potential issue | 🟡 Minor

Add a fallback for unsupported/unknown key types.

If res.type is unexpected, the body renders nothing. Show an explicit unsupported-type state instead of a blank panel.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/ValuePanel.tsx` around lines 63 - 68, The
panel currently renders nothing for unexpected key types (res.type -> keyType) —
update the ValuePanel component to add a clear fallback branch that renders an
explicit "Unsupported key type" state (e.g., simple message or Placeholder
component) when keyType is not "string", "hash", "list", "set", "zset", or
"stream"; modify the conditional rendering around keyType (the lines that choose
StringEditor, HashEditor, ListEditor, SetEditor, ZSetEditor, StreamViewer) to
include this default case so users see a visible unsupported-type message
instead of a blank panel.
console/src/components/graph/NodeInspector.tsx-29-29 (1)

29-29: ⚠️ Potential issue | 🟡 Minor

Guard node.properties before Object.entries.

If properties is null/undefined, this throws at runtime. A safe fallback avoids inspector crashes on partial data.

Suggested fix
-const propertyEntries = Object.entries(node.properties);
+const propertyEntries = Object.entries(node.properties ?? {});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/graph/NodeInspector.tsx` at line 29, In NodeInspector,
the line creating propertyEntries uses Object.entries(node.properties) which
will throw if node.properties is null/undefined; update the property extraction
(where propertyEntries is defined) to guard against missing properties by using
a safe fallback such as Object.entries(node.properties ?? {}) or
Object.entries(node.properties || {}) so the inspector won't crash on partial
data (refer to the propertyEntries variable and NodeInspector component when
making the change).
console/src/components/browser/TtlManager.tsx-48-59 (1)

48-59: ⚠️ Potential issue | 🟡 Minor

API calls lack error handling.

Both handleSave and handlePersist don't handle API failures. Users receive no feedback if setKeyTtl fails, and the local state would become inconsistent with the server.

🛡️ Proposed fix with error handling
  const handleSave = async () => {
    const seconds = parseInt(inputValue, 10);
    if (isNaN(seconds) || seconds <= 0) return;
-   await setKeyTtl(keyName, seconds);
-   setTtl(seconds);
-   setEditing(false);
+   try {
+     await setKeyTtl(keyName, seconds);
+     setTtl(seconds);
+     setEditing(false);
+   } catch (err) {
+     console.error("Failed to set TTL:", err);
+     // Optionally show toast/notification to user
+   }
  };

  const handlePersist = async () => {
-   await setKeyTtl(keyName, -1);
-   setTtl(-1);
+   try {
+     await setKeyTtl(keyName, -1);
+     setTtl(-1);
+   } catch (err) {
+     console.error("Failed to persist key:", err);
+   }
  };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/TtlManager.tsx` around lines 48 - 59,
handleSave and handlePersist call setKeyTtl without handling API errors, which
can leave local state inconsistent; wrap the setKeyTtl calls in try/catch inside
handleSave and handlePersist, validate the parseInt result before calling, and
only call setTtl (and setEditing(false)) on success, while logging or surfacing
the error (e.g., via a passed showError/toast) when the API throws; reference
the functions handleSave, handlePersist, setKeyTtl, setTtl, setEditing,
inputValue and keyName to locate where to add the try/catch and conditional
state updates.
console/src/components/browser/TtlManager.tsx-35-44 (1)

35-44: ⚠️ Potential issue | 🟡 Minor

Interval effect dependency may cause stale closures or multiple intervals.

The dependency [ttl !== null && ttl > 0] with the eslint-disable is unusual. The intent is to start/stop the interval based on TTL positivity, but it creates a problem:

  1. The interval callback only captures setTtl, which is stable, so stale closures are not the issue here.
  2. When keyName changes (a new key is selected), a new TTL is fetched, but the interval from the previous key may keep running because the boolean expression can remain true.

Consider including keyName in dependencies or restructuring:

🐛 Proposed fix to properly reset interval on key change
  // Live countdown
  useEffect(() => {
+   if (ttl === null || ttl <= 0) return;
+   
-   if (ttl !== null && ttl > 0) {
-     intervalRef.current = setInterval(() => {
-       setTtl((prev) => (prev !== null && prev > 0 ? prev - 1 : prev));
-     }, 1000);
-   }
+   intervalRef.current = setInterval(() => {
+     setTtl((prev) => (prev !== null && prev > 0 ? prev - 1 : prev));
+   }, 1000);
+   
    return () => {
      if (intervalRef.current) clearInterval(intervalRef.current);
    };
-  }, [ttl !== null && ttl > 0]); // eslint-disable-line react-hooks/exhaustive-deps
+  }, [ttl, keyName]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/TtlManager.tsx` around lines 35 - 44, The
effect that manages the countdown interval (useEffect using intervalRef and
setTtl) currently only depends on the boolean expression ttl !== null && ttl > 0
which can leave an interval running when a new key is selected; update the
effect to depend on both ttl and keyName (or key identifier prop) so the
interval is reset whenever the selected key changes, and inside the effect
always clear any existing intervalRef.current before creating a new setInterval
and on cleanup; remove the eslint-disable and ensure you only start the interval
when ttl > 0 and setTtl is used in the interval callback.
console/README.md-80-83 (1)

80-83: ⚠️ Potential issue | 🟡 Minor

E2E suite counts appear stale.

Lines 80-83 say “7 tests across 6 spec files,” but this PR’s stated scope references 9 Playwright specs. Please sync the README count (or avoid hardcoded totals) to prevent confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/README.md` around lines 80 - 83, The README line "The suite contains
7 tests across 6 spec files — one per view (Dashboard, Browser, Console,
Vectors, Graph, Memory)." is stale; either update the hardcoded totals to match
the current 9 Playwright specs mentioned in the PR or remove the numeric counts
entirely to avoid future drift. Locate that sentence and replace it with a
neutral phrasing like "The suite contains Playwright specs covering the views:
Dashboard, Browser, Console, Vectors, Graph, Memory, ..." or update the numbers
to the correct counts so the README and the PR scope stay in sync.
console/src/lib/completions.ts-248-252 (1)

248-252: ⚠️ Potential issue | 🟡 Minor

Duplicate command entries will cause duplicate suggestions in Monaco.

OBJECT HELP appears on both line 248 and line 281. SRANDMEMBER appears on both line 71 and line 283. These duplicates will produce duplicate completion items.

🧹 Remove duplicates from the Misc section
   // ── Misc (1) ──
   { name: "SUBSTR", summary: "Get substring (deprecated, use GETRANGE)", args: "key start end", group: "string" },
-  { name: "OBJECT HELP", summary: "Show OBJECT subcommands", args: "", group: "key" },
   { name: "LCS", summary: "Find longest common subsequence", args: "key1 key2 [LEN] [IDX] [MINMATCHLEN len] [WITHMATCHLEN]", group: "string" },
-  { name: "SRANDMEMBER", summary: "Get random member(s) from set", args: "key [count]", group: "set" },
   { name: "WAITAOF", summary: "Wait for AOF flush on replicas", args: "numlocal numreplicas timeout", group: "server" },
 ];

Also applies to: 281-284

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/lib/completions.ts` around lines 248 - 252, The completions list
contains duplicate entries (e.g., "OBJECT HELP" and "SRANDMEMBER") which produce
duplicate Monaco suggestions; open console/src/lib/completions.ts and remove the
redundant duplicate objects (or implement a dedupe step) so each command name
appears only once — locate the literal entries "OBJECT HELP" and "SRANDMEMBER"
in the array and keep a single canonical entry for each (or filter the array by
unique name before exporting) to eliminate duplicate completion items.
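Beyond deleting the duplicate entries by hand, the array could be deduplicated by name before export so future duplicates are harmless; a sketch, assuming the entry shape shown in the diff above:

```typescript
// CommandDef mirrors the entry shape shown in the review diff; this helper
// keeps the first occurrence of each name and drops later duplicates.
interface CommandDef {
  name: string;
  summary: string;
  args: string;
  group: string;
}

function dedupeByName(defs: CommandDef[]): CommandDef[] {
  const seen = new Set<string>();
  return defs.filter((d) => {
    if (seen.has(d.name)) return false;
    seen.add(d.name);
    return true;
  });
}
```

Exporting dedupeByName(COMMANDS) instead of the raw array would make Monaco suggestions unique by construction.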
scripts/test-integration.sh-50-62 (1)

50-62: ⚠️ Potential issue | 🟡 Minor

Readiness loop doesn't fail explicitly after timeout.

If the /readyz endpoint never becomes available within 30 seconds, the loop exits silently and the script continues to seed fixtures against a potentially unhealthy server. Add an explicit failure after the loop completes without success.

🛡️ Add explicit timeout failure
 echo "[integration] waiting for /readyz"
+ready=false
 for i in $(seq 1 30); do
   if curl -fsS "http://127.0.0.1:${ADMIN_PORT}/readyz" >/dev/null 2>&1; then
     echo "[integration] moon ready after ${i}s"
+    ready=true
     break
   fi
   if ! kill -0 "${MOON_PID}" 2>/dev/null; then
     echo "[integration] moon exited during startup; log:"
     tail -n 80 "${MOON_LOG}" || true
     exit 1
   fi
   sleep 1
 done
+
+if [ "$ready" = false ]; then
+  echo "[integration] moon did not become ready within 30s; log:"
+  tail -n 80 "${MOON_LOG}" || true
+  exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test-integration.sh` around lines 50 - 62, The readiness loop may
exit after 30s without detecting /readyz but currently continues; modify
scripts/test-integration.sh to detect that timeout and fail explicitly: after
the for loop, perform a final check (e.g., attempt curl -fsS
"http://127.0.0.1:${ADMIN_PORT}/readyz" or test a flag set on success) and if it
still fails, print an explicit error message, dump recent logs from ${MOON_LOG}
(tail -n 80), and exit 1 so seeding does not proceed against an unhealthy
server; reference ADMIN_PORT, MOON_PID, MOON_LOG and the /readyz readiness check
to locate and implement this change.
console/src/components/browser/KeyList.tsx-88-97 (1)

88-97: ⚠️ Potential issue | 🟡 Minor

Many concurrent requests when enriching visible keys on scroll.

The effect re-triggers on scroll since virtualItems changes, and calls enrichKey for each entry with type === null. While enrichKey includes a guard (if (entry.type !== null) return;) to prevent re-enriching, if many visible keys are unenriched, you'll still fire numerous concurrent requests (e.g., 50 visible items × 3 API calls each = 150 requests per scroll).

Consider batching enrichment requests by key name, or tracking in-flight requests locally in the effect to avoid redundant function invocations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/KeyList.tsx` around lines 88 - 97, The effect
over virtualItems in KeyList.tsx causes many concurrent enrichKey calls on
scroll; change the effect to deduplicate and/or batch requests by collecting
visible key names (from virtualItems -> keys[vItem.index].name) into a Set and
then either call a new batch helper (e.g., enrichKeysBatch(names: string[])) or
call enrichKey only once per name while tracking an in-effect inFlight Set to
avoid duplicate invocations; ensure you reference virtualizer.getVirtualItems(),
virtualItems, keys, and enrichKey when implementing the dedupe/batch logic so
repeated scroll updates don't spawn redundant requests.
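A concrete shape for the batching suggestion: a hypothetical helper (not code from this PR) that deduplicates names and runs one fixed-size batch at a time, capping concurrent requests per scroll burst:

```typescript
// Hypothetical helper, not code from this PR: enrich visible keys in
// fixed-size batches so a scroll burst fires at most batchSize concurrent
// requests, and never enriches the same key twice per pass.
async function enrichInBatches<T>(
  keys: string[],
  batchSize: number,
  enrichOne: (key: string) => Promise<T>,
): Promise<T[]> {
  const unique = [...new Set(keys)]; // dedupe repeated virtual items
  const out: T[] = [];
  for (let i = 0; i < unique.length; i += batchSize) {
    const batch = unique.slice(i, i + batchSize);
    // Keys within a batch run concurrently; batches run sequentially.
    out.push(...(await Promise.all(batch.map(enrichOne))));
  }
  return out;
}
```

The effect would collect visible unenriched key names and hand them to this helper, rather than invoking enrichKey once per virtual item.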
console/src/workers/umap.worker.ts-13-18 (1)

13-18: ⚠️ Potential issue | 🟡 Minor

Guard against degenerate cases where count < 2.

When count is 0 or 1, nNeighbors becomes negative or zero (Math.min(15, count - 1) is -1 for count = 0 and 0 for count = 1), which may cause undefined behavior in umap-js. UMAP requires at least 2 points to compute a meaningful embedding.

🛡️ Add early return for degenerate input
 self.onmessage = (e: MessageEvent<UmapWorkerRequest>) => {
   const { vectors, dims, count, nNeighbors = 15, minDist = 0.1 } = e.data;
 
+  // UMAP requires at least 2 points
+  if (count < 2) {
+    const positions = new Float32Array(count * 3);
+    const msg: UmapWorkerResponse = { type: "complete", positions };
+    self.postMessage(msg, { transfer: [positions.buffer] });
+    return;
+  }
+
   // Convert flattened Float32Array to array-of-arrays for umap-js
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/workers/umap.worker.ts` around lines 13 - 18, Guard against
degenerate inputs before constructing the UMAP instance: if the point count
(variable count) is less than 2, return early (or resolve with a sensible
default) to avoid passing negative or zero nNeighbors into new UMAP. Update the
code around the UMAP construction (the block that creates const umap = new
UMAP({...}) and related nNeighbors/nEpochs calculations) to check count < 2
first and handle it (e.g., return an empty embedding or reject) so UMAP is only
created when count >= 2.
console/src/types/graph.ts-5-8 (1)

5-8: ⚠️ Potential issue | 🟡 Minor

Make x/y/z optional, or keep them out of GraphNode.

The store keeps layout output in a separate Float32Array, and the worker API never writes coordinates back onto GraphNode. Requiring x, y, and z here doesn't match the current data flow and usually leads to dummy values or as-casts around queryGraph().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/types/graph.ts` around lines 5 - 8, GraphNode currently requires
numeric x, y, z which conflicts with the worker/store design; make x, y, z
optional on GraphNode (change x: number; y: number; z: number; to x?: number;
y?: number; z?: number;) or remove them from GraphNode and create a separate
LayoutCoordinates type used only where coordinates are produced/consumed (e.g.,
the force worker and layout store). Update any call sites of
queryGraph()/GraphNode consumers to handle optional coordinates (or cast to
LayoutCoordinates where appropriate) and keep the GraphNode interface free of
mandatory layout fields.
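A sketch of the optional-coordinate shape; the fields beyond x/y/z, and the hasLayout/LayoutCoordinates names, are illustrative assumptions rather than code from this PR:

```typescript
// Sketch only: id/labels/properties are assumed from how the inspector and
// label filter use nodes elsewhere in this PR.
interface GraphNode {
  id: string;
  labels: string[];
  properties: Record<string, unknown>;
  x?: number; // layout output, absent until the force worker runs
  y?: number;
  z?: number;
}

type LayoutCoordinates = Required<Pick<GraphNode, "x" | "y" | "z">>;

// Narrowing guard so consumers can branch instead of as-casting.
function hasLayout(n: GraphNode): n is GraphNode & LayoutCoordinates {
  return n.x !== undefined && n.y !== undefined && n.z !== undefined;
}
```

Call sites around queryGraph() could then branch on hasLayout(node) instead of carrying dummy coordinates.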
console/src/components/browser/editors/StringEditor.tsx-27-31 (1)

27-31: ⚠️ Potential issue | 🟡 Minor

Reset mode when keyName/value changes.

mode is only derived from the first value. After switching from a raw string to a JSON string (or back), this effect refreshes editValue but keeps the previous key's mode, so the new key can open in the wrong view.

Suggested fix
   useEffect(() => {
-    const formatted = detectedJson ? JSON.stringify(JSON.parse(value), null, 2) : value;
-    setEditValue(mode === "json" && detectedJson ? formatted : value);
+    const nextIsJson = isValidJson(value);
+    const nextMode: ViewMode = nextIsJson ? "json" : "raw";
+    setMode(nextMode);
+    setEditValue(
+      nextMode === "json" ? JSON.stringify(JSON.parse(value), null, 2) : value,
+    );
     setDirty(false);
-  }, [keyName, value]); // eslint-disable-line react-hooks/exhaustive-deps
+  }, [keyName, value]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/StringEditor.tsx` around lines 27 -
31, When keyName or value changes, reset the editor mode so the new key opens in
the correct view: inside the useEffect that currently computes formatted and
calls setEditValue/setDirty, also call setMode(detectedJson ? "json" : "text")
(use the existing detectedJson, mode, setEditValue, setDirty, and setMode
identifiers) before setting editValue; ensure the effect depends on keyName and
value (and/or detectedJson) so mode is updated whenever the key/value changes.
console/src/components/vector/LassoSelect.tsx-31-55 (1)

31-55: ⚠️ Potential issue | 🟡 Minor

Capture the pointer outside the SVG during an active lasso.

Releasing the mouse outside this <svg> never triggers onMouseUp, so drawing stays true and the next re-entry keeps appending to the old polygon until Escape is pressed. Use pointer capture or window-level move/up listeners for the active drag.

Also applies to: 106-112

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/vector/LassoSelect.tsx` around lines 31 - 55, The
lasso stays stuck because mouseup outside the SVG doesn't fire; update
handleMouseDown/handleMouseUp/handleMouseMove to capture the pointer or add
global window listeners: in handleMouseDown (where you call svgRef and
setDrawing/setLassoPath) call element.setPointerCapture(e.pointerId) or register
window.addEventListener('pointermove'/'pointerup') and in handleMouseUp release
capture (element.releasePointerCapture) or remove the window listeners and call
setDrawing(false) and finalize/reset setLassoPath; ensure svgRef.current is used
to capture/release and cleanup listeners in the same scope so drawing is always
cleared even if pointer is released outside the SVG.
src/admin/ws_bridge.rs-139-167 (1)

139-167: ⚠️ Potential issue | 🟡 Minor

Keep all SELECT handling local, including failures.

When args[0] is missing or unparsable, this falls through into execute_command. That breaks the per-WebSocket session contract in this module and makes invalid SELECT behavior depend on backend command handling instead of returning a deterministic client error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/ws_bridge.rs` around lines 139 - 167, The SELECT branch (when cmd
== "SELECT") must handle missing or unparsable args[0] locally instead of
falling through to execute_command; if parsed.get("args").first() is missing or
cannot be parsed into a usize, immediately build and return an error response
JSON (e.g., {"error": "ERR invalid DB index", "type":"error"}) and include
request_id if present, rather than allowing execute_command to handle it. Update
the block around parsed / db / db_num / selected_db and ensure the failure path
returns the error response (referencing cmd == "SELECT", parsed, selected_db,
request_id, and execute_command) so SELECT always yields a deterministic
per-WebSocket error on invalid input.
console/src/components/console/ResultPanel.tsx-233-237 (1)

233-237: ⚠️ Potential issue | 🟡 Minor

Actually reset the view override when the result changes.

The comment says this should happen, but override is never cleared. After a new query, the footer can highlight the previous manual mode while the body renders a different fallback view.

Suggested fix
-import { useState, useMemo, useCallback } from "react";
+import { useState, useMemo, useCallback, useEffect } from "react";
@@
 export function ResultPanel({ result, executing }: ResultPanelProps) {
   const autoView = useMemo(() => (result ? detectView(result.data) : "raw"), [result]);
   const [override, setOverride] = useState<ViewMode | null>(null);
 
-  // Reset override when result changes
+  useEffect(() => {
+    setOverride(null);
+  }, [result]);
+
   const currentView = override ?? autoView;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/console/ResultPanel.tsx` around lines 233 - 237, The
override state is never cleared so the manual view remains highlighted after a
new result; update the component to reset override when result (or result.data)
changes by using an effect that calls setOverride(null) whenever the result
changes (the computed autoView from useMemo should remain). Target the
override/setOverride state declared with useState and the currentView
calculation (override ?? autoView) — add a useEffect that depends on result (or
result.data) to clear override on new results.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fc582ae2-264e-4b53-9968-5fbc918d430e

📥 Commits

Reviewing files that changed from the base of the PR and between d7d10da and 2437cf1.

⛔ Files ignored due to path filters (3)
  • Cargo.lock is excluded by !**/*.lock
  • console/dist/index.html is excluded by !**/dist/**
  • console/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (131)
  • .github/workflows/console-integration.yml
  • .gitignore
  • .planning
  • Cargo.toml
  • console/.gitignore
  • console/README.md
  • console/index.html
  • console/package.json
  • console/playwright.config.ts
  • console/src/App.tsx
  • console/src/app.css
  • console/src/components/browser/KeyList.tsx
  • console/src/components/browser/KeyMetadata.tsx
  • console/src/components/browser/KeyToolbar.tsx
  • console/src/components/browser/NamespaceTree.tsx
  • console/src/components/browser/TtlManager.tsx
  • console/src/components/browser/ValuePanel.tsx
  • console/src/components/browser/editors/HashEditor.tsx
  • console/src/components/browser/editors/ListEditor.tsx
  • console/src/components/browser/editors/SetEditor.tsx
  • console/src/components/browser/editors/StreamViewer.tsx
  • console/src/components/browser/editors/StringEditor.tsx
  • console/src/components/browser/editors/ZSetEditor.tsx
  • console/src/components/console/Editor.tsx
  • console/src/components/console/HistoryPanel.tsx
  • console/src/components/console/ResultPanel.tsx
  • console/src/components/console/TabBar.tsx
  • console/src/components/dashboard/ClientsCard.tsx
  • console/src/components/dashboard/HitRatioCard.tsx
  • console/src/components/dashboard/InfoCards.tsx
  • console/src/components/dashboard/KeyspaceCard.tsx
  • console/src/components/dashboard/MemoryChart.tsx
  • console/src/components/dashboard/OpsChart.tsx
  • console/src/components/dashboard/SlowlogTable.tsx
  • console/src/components/graph/CypherInput.tsx
  • console/src/components/graph/GraphInfoPanel.tsx
  • console/src/components/graph/GraphScene.tsx
  • console/src/components/graph/LabelFilter.tsx
  • console/src/components/graph/NodeInspector.tsx
  • console/src/components/layout/AppShell.tsx
  • console/src/components/layout/Sidebar.tsx
  • console/src/components/memory/CommandStatsTable.tsx
  • console/src/components/memory/MemoryTreemap.tsx
  • console/src/components/memory/SlowlogPanel.tsx
  • console/src/components/ui/badge.tsx
  • console/src/components/ui/card.tsx
  • console/src/components/vector/ClusterStats.tsx
  • console/src/components/vector/ColorByControls.tsx
  • console/src/components/vector/HnswOverlay.tsx
  • console/src/components/vector/IndexMetadataPanel.tsx
  • console/src/components/vector/KnnSearchPanel.tsx
  • console/src/components/vector/LassoSelect.tsx
  • console/src/components/vector/PointCloudScene.tsx
  • console/src/components/vector/PointInspector.tsx
  • console/src/components/vector/UmapProgress.tsx
  • console/src/lib/api.ts
  • console/src/lib/completions.ts
  • console/src/lib/graph-api.ts
  • console/src/lib/monarch-cypher.ts
  • console/src/lib/monarch-resp.ts
  • console/src/lib/sse.ts
  • console/src/lib/utils.ts
  • console/src/lib/vector-api.ts
  • console/src/lib/ws.ts
  • console/src/main.tsx
  • console/src/stores/browser.ts
  • console/src/stores/console.ts
  • console/src/stores/graph.ts
  • console/src/stores/memory.ts
  • console/src/stores/metrics.ts
  • console/src/stores/vector.ts
  • console/src/types/browser.ts
  • console/src/types/console.ts
  • console/src/types/d3-force-3d.d.ts
  • console/src/types/graph.ts
  • console/src/types/memory.ts
  • console/src/types/metrics.ts
  • console/src/types/vector.ts
  • console/src/views/Browser.tsx
  • console/src/views/Console.tsx
  • console/src/views/Dashboard.tsx
  • console/src/views/GraphExplorer.tsx
  • console/src/views/Help.tsx
  • console/src/views/MemoryView.tsx
  • console/src/views/VectorExplorer.tsx
  • console/src/vite-env.d.ts
  • console/src/workers/force-layout.worker.ts
  • console/src/workers/umap.worker.ts
  • console/tests/e2e/benchmarks.spec.ts
  • console/tests/e2e/browser.spec.ts
  • console/tests/e2e/console.spec.ts
  • console/tests/e2e/dashboard.spec.ts
  • console/tests/e2e/fixtures.ts
  • console/tests/e2e/graph.spec.ts
  • console/tests/e2e/integration.spec.ts
  • console/tests/e2e/memory.spec.ts
  • console/tests/e2e/vectors.spec.ts
  • console/tests/unit/components/NamespaceTree.test.tsx
  • console/tests/unit/lib/completions.test.ts
  • console/tests/unit/lib/monarch-cypher.test.ts
  • console/tests/unit/lib/monarch-resp.test.ts
  • console/tests/unit/setup.ts
  • console/tests/unit/stores/browser.test.ts
  • console/tests/unit/stores/console.test.ts
  • console/tests/unit/stores/graph.test.ts
  • console/tests/unit/stores/memory.test.ts
  • console/tests/unit/stores/metrics.test.ts
  • console/tests/unit/stores/vector.test.ts
  • console/tsconfig.app.json
  • console/tsconfig.json
  • console/tsconfig.node.json
  • console/tsconfig.test.json
  • console/vite.config.ts
  • console/vitest.config.ts
  • scripts/gen-verification.sh
  • scripts/seed-console-fixtures.py
  • scripts/test-integration.sh
  • src/admin/console_gateway.rs
  • src/admin/hnsw_trace.rs
  • src/admin/http_server.rs
  • src/admin/http_server_support.rs
  • src/admin/memory_treemap.rs
  • src/admin/metrics_setup.rs
  • src/admin/mod.rs
  • src/admin/sse_stream.rs
  • src/admin/static_files.rs
  • src/admin/ws_bridge.rs
  • src/main.rs
  • src/shard/mesh.rs
  • src/vector/hnsw/search.rs
  • tests/console_gateway_test.rs

Comment thread console/src/lib/api.ts
Comment on lines +60 to +67
const vectorStr = queryVector.join(",");
const result = await execCommand("FT.SEARCH", [
indexName,
`*=>[KNN ${k} @vector $BLOB]`,
"PARAMS", "2", "BLOB", vectorStr,
"SORTBY", "__vector_score",
"LIMIT", "0", String(k),
]);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n console/src/lib/vector-api.ts | head -90

Repository: pilotspace/moon

Length of output: 3449


🏁 Script executed:

# Search for DIALECT usage in the codebase
rg "DIALECT" --type ts --type js -i -B 2 -A 2

Repository: pilotspace/moon

Length of output: 351


🏁 Script executed:

# Check if there are any RediSearch protocol definitions or documentation
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.md" \) | xargs rg "FT\.SEARCH|VECTOR|KNN" -l | head -20

Repository: pilotspace/moon

Length of output: 477


🌐 Web query:

RediSearch FT.SEARCH vector KNN DIALECT parameter syntax requirements

💡 Result:

For RediSearch FT.SEARCH with vector KNN queries, the DIALECT parameter is required and must be set to 2 (or a higher dialect such as 3 or 4 where supported); dialect 1 does not support vector search. Syntax requirements: append DIALECT 2 (or higher) at the end of the FT.SEARCH command; the KNN clause is "*=>[KNN <num_neighbours> @<vector_field> $vec]", or "(<filter>)=>[KNN <num_neighbours> @<vector_field> $vec]" with pre-filtering; provide the query vector with PARAMS 2 vec <binary_vector_blob>; typically sort by distance with SORTBY <distance_field> ASC. Full example: FT.SEARCH <index> "*=>[KNN 10 @vector_field $vec AS dist]" PARAMS 2 vec <blob> SORTBY dist DIALECT 2. Dialect 2 introduced vector search support in RediSearch 2.4; higher dialects add enhancements, but 2 is the minimum for vectors. The default dialect is 1 when not specified.



Fix FT.SEARCH vector query syntax: add DIALECT 2 and encode vector as binary blob.

The RediSearch protocol requires DIALECT 2 for vector search support (dialect 1 is the default and does not support vectors). Additionally, the vector parameter must be encoded as a binary blob, not a comma-separated string. The code at lines 60–67 omits DIALECT and sends vectorStr as CSV text, which violates the protocol and will cause the query to fail or be misinterpreted.

Reference example in console/src/views/Help.tsx correctly shows the expected format: FT.SEARCH embeddings "*=>[KNN 10 @v $query]" PARAMS 2 query "..." DIALECT 2 with binary vector encoding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/lib/vector-api.ts` around lines 60 - 67, The FT.SEARCH call is
using a CSV string (vectorStr) and missing DIALECT 2; convert queryVector (the
numeric array) into a binary Float32 buffer and pass that buffer as the PARAMS
value (instead of vectorStr), and add "DIALECT", "2" to the argument list for
execCommand; update the call site where execCommand("FT.SEARCH", [...]) is built
(symbols: queryVector, vectorStr, k, indexName, execCommand) so the param key
(e.g., "BLOB" or "query") receives the binary blob and the command arguments
include DIALECT 2.
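The binary-blob half of the fix can be sketched as a small encoder. How execCommand carries raw bytes over the gateway is an assumption here (a latin-1 string is one common choice for JSON transports), so treat this as illustrative:

```typescript
// Illustrative encoder, not code from this PR: pack the numeric query vector
// into little-endian IEEE-754 float32 bytes. Carrying the bytes as a latin-1
// string is an assumption; a binary-safe client would pass a Uint8Array.
function float32ToBlob(vec: number[]): string {
  const buf = new ArrayBuffer(vec.length * 4);
  const view = new DataView(buf);
  vec.forEach((v, i) => view.setFloat32(i * 4, v, true)); // true = little-endian
  let out = "";
  for (const b of new Uint8Array(buf)) out += String.fromCharCode(b);
  return out;
}
```

The call site would then pass float32ToBlob(queryVector) as the PARAMS value in place of vectorStr and append "DIALECT", "2" to the argument list.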


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/command/mod.rs (1)

524-595: ⚠️ Potential issue | 🔴 Critical

Test script entries are incomplete — 6 commands lack coverage.

Commands are missing test entries in scripts/test-commands.sh and scripts/test-consistency.sh: FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE, SLOWLOG, and ZMSCORE. Per project rules, every new command requires at least one entry in each test script. Ensure these are added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/command/mod.rs` around lines 524 - 595, Add missing test-script entries
for the six commands introduced in mod.rs: server_admin::flushdb (FLUSHDB),
connection::healthz (HEALTHZ), hll::pfcount (PFCOUNT), hll::pfmerge (PFMERGE),
crate::admin::slowlog::handle_slowlog/global_slowlog (SLOWLOG), and
sorted_set::zmscore (ZMSCORE). Update scripts/test-commands.sh and
scripts/test-consistency.sh to include at least one test-line per command
(invoke the command with minimal valid args and assert expected response/exit
code), using the exact command names (FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE,
SLOWLOG, ZMSCORE) so CI covers these code paths. Ensure test entries follow
existing script conventions (same formatting, invocation helpers, and cleanup)
to avoid flakiness.
🧹 Nitpick comments (2)
console/src/lib/api.ts (2)

104-139: Sequential deletion is fine for UI context but consider batching for large selections.

The sequential loop enables per-key failure tracking, which is good for UX. For bulk operations (100+ keys), consider using a batched DELETE endpoint if available, or Promise.allSettled with a concurrency limit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/lib/api.ts` around lines 104 - 139, The current deleteKeys(keys:
string[]) implementation performs sequential per-key DELETEs which is fine for
small sets but will be slow for large selections; change deleteKeys to use a
batched approach: if a bulk DELETE endpoint exists on API_BASE (e.g.,
POST/DELETE /keys with body of keys) call that and parse total/failed, otherwise
run parallel requests with Promise.allSettled and a concurrency limiter
(preserving per-key success/failure tracking and building the failed array) so
UX toasts still reflect total deleted and failed keys; ensure the function still
returns the total deleted and retains the same toast/error message logic.

240-265: Consider batching key enrichment for better throughput.

The current loop processes one key at a time despite using Promise.all for TYPE + MEMORY. For large key sets, you could batch the enrichment (e.g., 50 keys at a time) to improve throughput while still respecting the server.

♻️ Optional: Batch enrichment example
   do {
     const result = await scanKeys(cursor, "*", 500);
     cursor = result.cursor;
-    for (const key of result.keys) {
-      if (keys.length >= maxKeys) {
-        cursor = "0";
-        break;
-      }
-      const [typeResult, memResult] = await Promise.all([
-        execCommand("TYPE", [key]),
-        execCommand("MEMORY", ["USAGE", key]),
-      ]);
-      keys.push({
-        key,
-        type: String(typeResult).toLowerCase(),
-        bytes: typeof memResult === "number" ? memResult : 0,
-      });
-    }
+    const batch = result.keys.slice(0, maxKeys - keys.length);
+    if (batch.length === 0) { cursor = "0"; break; }
+    const enriched = await Promise.all(
+      batch.map(async (key) => {
+        const [typeResult, memResult] = await Promise.all([
+          execCommand("TYPE", [key]),
+          execCommand("MEMORY", ["USAGE", key]),
+        ]);
+        return {
+          key,
+          type: String(typeResult).toLowerCase(),
+          bytes: typeof memResult === "number" ? memResult : 0,
+        };
+      })
+    );
+    keys.push(...enriched);
+    if (keys.length >= maxKeys) cursor = "0";
   } while (cursor !== "0");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/lib/api.ts` around lines 240 - 265, fetchMemoryTreemap currently
enriches keys one-by-one using execCommand inside the scan loop which is slow
for large sets; change it to batch enrichment: while scanning collect raw key
names into a buffer (up to a chosen batch size, e.g., 50 or remaining to reach
maxKeys), then call Promise.all (or Promise.allSettled) on mapped
execCommand("TYPE", key) and execCommand("MEMORY", ["USAGE", key]) pairs for
that batch, transform the results into the same {key,type,bytes} objects and
push them into the keys array, and continue scanning until cursor="0" or
keys.length >= maxKeys; update flow in fetchMemoryTreemap and keep calling
buildTreemapFromKeys(keys) at the end.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/admin/auth.rs`:
- Around line 77-83: verify_header currently uses token.split_once('.') which
splits at the first '.' and will truncate user IDs containing dots; change the
split to use the right-most separator (e.g. token.rsplit_once('.') in
verify_header) so the tuple (user, b64) captures the full user portion and the
signature portion correctly for HMAC verification.

In `@src/admin/rate_limit.rs`:
- Around line 57-75: The cleanup task branch uses Tokio APIs (tokio::spawn,
tokio::runtime::Handle::try_current, tokio::time::interval) unguarded; wrap the
entire async cleanup spawn logic inside a #[cfg(feature = "runtime-tokio")]
block (or provide an alternative no-op for non-Tokio builds) so the module can
compile under other runtimes; locate the code around the enabled branch where
Arc::downgrade(&limiter) and the weak.upgrade() loop call
l.cleanup(Duration::from_secs(300)) and apply the cfg guard or add a conditional
stub function that compiles when runtime-tokio is disabled.

In `@src/admin/scan_fanout.rs`:
- Around line 123-126: The code currently treats a non-Array Frame for
keys_frame by returning an empty Vec (ks), silently dropping keys; change this
to surface a protocol error instead: in the match on keys_frame (the branch
handling Frame::Array(ks) / _), return an Err(...) for the non-Array case
(similar to other malformed-reply handling in this module) rather than
Vec::new(), so that the caller (scan_fanout / surrounding function) receives an
error when as_bytes/frame parsing fails; reference the keys_frame match, the
as_bytes conversion used in the Frame::Array arm, and the ks variable to locate
and replace the fallback with an Err describing a malformed SCAN reply.
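The suggested shape of the fix, using a simplified stand-in for Moon's RESP `Frame` type (the real enum has more variants and carries `Bytes`):

```rust
// Simplified stand-in Frame: enough to show surfacing a protocol error
// instead of silently returning an empty key list on a non-Array reply.
#[derive(Debug)]
enum Frame {
    Array(Vec<Vec<u8>>),
    Simple(String),
}

fn keys_from_frame(keys_frame: Frame) -> Result<Vec<Vec<u8>>, String> {
    match keys_frame {
        Frame::Array(ks) => Ok(ks),
        other => Err(format!("malformed SCAN reply: expected array, got {:?}", other)),
    }
}
```

The caller in `scan_fanout` then propagates the error rather than treating a malformed shard reply as "no keys".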

In `@src/admin/sse_stream.rs`:
- Around line 79-85: The SSE response builder in Response::builder() is setting
a hardcoded header header("access-control-allow-origin", "*") which bypasses the
policy-driven CORS applied by the middleware; remove the explicit
access-control-allow-origin header from the SSE handler (the Response::builder()
call that constructs the StreamBody) so that CORS is solely enforced by the
existing middleware (src/admin/middleware.rs) and ensure no other code in
sse_stream.rs sets that header elsewhere.

In `@src/admin/static_files.rs`:
- Around line 45-61: The SPA fallback currently serves index.html for any
missing ConsoleAssets (ConsoleAssets::get returning None); change it to only
return index.html for route-like URLs and return a proper 404 for missing
asset-like requests. Detect this in the None branch by inspecting the requested
path (e.g., the path string passed into the static handler or the variable used
to lookup ConsoleAssets) and treating requests whose last path segment contains
a file extension (a dot like ".js", ".css", ".png", etc.) as assets — on those
return StatusCode::NOT_FOUND/plain text; for requests without an extension
(route URLs) return index.html as before. Update the logic around
ConsoleAssets::get and the fallback match so asset misses yield 404s while route
misses yield the SPA index.html.
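The extension heuristic is a one-liner. A sketch, assuming the handler sees the URL path as a `&str` (the helper name is illustrative):

```rust
// Asset-like request: the last path segment contains a dot (".js", ".css",
// ".png", ...). Route-like request: no extension, so the SPA router owns it.
fn is_asset_path(path: &str) -> bool {
    path.rsplit('/').next().is_some_and(|seg| seg.contains('.'))
}
```

In the `ConsoleAssets::get(...) == None` branch: `is_asset_path(path)` yields a 404, otherwise serve `index.html`.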

In `@src/admin/ws_bridge.rs`:
- Around line 53-68: The send_queue_depth counter in the receive/process/send
loop (send_queue_depth, MAX_SEND_QUEUE, ws_sender) cannot accumulate because
each send is awaited, so replace this false soft-limit with a bounded outbound
channel and dedicated writer task: create a bounded mpsc channel for outgoing
messages, spawn a writer task that pulls from the channel and calls
ws_sender.send(Message::text(...)).await, and in the main receive loop try_send
to the channel and drop/log the message if the channel is full (enforcing the
soft limit); alternatively implement gating by checking sink readiness before
reading. Update references in this module to push outgoing payloads into the new
channel and remove the send_queue_depth counter logic.
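The drop-when-full behavior can be illustrated with std's bounded `sync_channel`; the real bridge would use a Tokio mpsc channel plus a dedicated writer task, and the capacity and message type here are placeholders:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// try_send never blocks: when the bounded queue is full the message is
// dropped (the real bridge would also log it), enforcing the soft limit.
fn enqueue_or_drop(tx: &SyncSender<String>, msg: String) -> bool {
    match tx.try_send(msg) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) => false,
        Err(TrySendError::Disconnected(_)) => false,
    }
}
```

The writer task on the other end of the channel is the only place that awaits `ws_sender.send(...)`, so backpressure accumulates in the channel where it can actually be measured.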

In `@src/command/mod.rs`:
- Around line 790-797: DEBUG is marked only as ADMIN in the command metadata but
is routed through the read path (dispatch_read_inner,
is_dispatch_read_supported) and the handlers gate on !metadata::is_write(), so
the metadata and documented contract for dispatch_read() (which states it is
only called when metadata::is_read() is true) are inconsistent; fix by adding
the READONLY flag to DEBUG's metadata (making it READONLY|ADMIN like INFO or RA)
so metadata::is_read() returns true for DEBUG, or alternatively update the
documentation/comment at dispatch_read()/is_dispatch_read_supported() to reflect
that commands allowed on the read path may be ADMIN-only and the real gate is
metadata::is_write().
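A bitflag sketch of the first option (flag values and names are illustrative, not Moon's actual `metadata` constants):

```rust
// Command metadata as bitflags: DEBUG gains the READONLY bit in addition to
// ADMIN so metadata::is_read() holds on the read dispatch path.
const READONLY: u32 = 1 << 0;
const WRITE: u32 = 1 << 1;
const ADMIN: u32 = 1 << 2;

fn is_read(flags: u32) -> bool {
    flags & READONLY != 0
}

const DEBUG_FLAGS: u32 = READONLY | ADMIN; // was: ADMIN only
```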

In `@src/command/server_admin.rs`:
- Around line 297-301: memory_stats currently reports the current estimated
memory as "peak.allocated", which is misleading because memory() and
memory_readonly() call memory_stats with db.estimated_memory(); change this by
either (A) adding and maintaining a true peak counter (e.g., track a peak_memory
field that is updated in the logic that calls or computes db.estimated_memory()
and use that in memory_stats) or (B) renaming the exported key from
"peak.allocated" to a Moon-specific current field like "allocated" or
"current.allocated" and update all callers (memory(), memory_readonly(), and any
consumers) to use the new name; locate the helper function memory_stats and the
callers memory() and memory_readonly() as the places to implement the chosen fix
and ensure backward compatibility or documentation if renaming.
- Around line 130-133: Replace the heap-allocation via format!() when
constructing the error frame (the Err(Frame::Error(Bytes::from(format!(...))))
use) with a non-allocating stack-based format: prepare a fixed-size stack buffer
(e.g. [u8; 128] or appropriate), write the static prefix "ERR DEBUG subcommand
'" into it, append the sub slice (use String::from_utf8_lossy(sub).as_bytes()
only if that avoids allocation, otherwise validate/append sub bytes directly),
append the trailing "' not supported", and use itoa/fast integer writers if you
need to format numbers; then create the Bytes from the written buffer (e.g.
Bytes::copy_from_slice(&buf[..len])). Apply the same change to the other
occurrences flagged (the similar format! uses at the other noted locations) so
command-path code avoids allocations.
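A minimal sketch of the stack-buffer approach (message text follows the review comment; the 128-byte size and truncation policy are assumptions):

```rust
// Build "ERR DEBUG subcommand '<sub>' not supported" in a fixed stack buffer
// instead of heap-allocating with format!(). Returns (length, buffer);
// overlong subcommand names are truncated rather than overflowing.
fn debug_error_msg(sub: &[u8]) -> (usize, [u8; 128]) {
    const PREFIX: &[u8] = b"ERR DEBUG subcommand '";
    const SUFFIX: &[u8] = b"' not supported";
    let mut buf = [0u8; 128];
    let mut len = 0;
    for part in [PREFIX, sub, SUFFIX] {
        let take = part.len().min(128 - len);
        buf[len..len + take].copy_from_slice(&part[..take]);
        len += take;
    }
    (len, buf)
}
```

The caller then does one `Bytes::copy_from_slice(&buf[..len])`, which is a single allocation for the frame payload instead of `format!`'s intermediate `String` plus the `Bytes` conversion.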

In `@src/config.rs`:
- Around line 50-53: The docstring promises console_rate_burst defaults to 2x
console_rate_limit but the struct uses default_value_t = 2000.0, so change
console_rate_burst to an Option<f64> (remove default_value_t) and after parsing
compute the effective burst from console_rate_limit (e.g., if
config.console_rate_burst.is_none() set effective_burst = 2.0 *
config.console_rate_limit), storing or returning that derived value where the
limiter is constructed; update references that expect a f64 to use the derived
value and adjust the #[arg(...)] on console_rate_burst accordingly and the
docstring to match the new behavior (use symbols console_rate_burst and
console_rate_limit to locate and modify the code).
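The derivation itself is a one-liner once the field is optional. A sketch, with the 2x factor taken from the docstring:

```rust
// If the operator did not pass --console-rate-burst, derive the burst as
// 2x --console-rate-limit, matching the documented default.
fn effective_burst(console_rate_burst: Option<f64>, console_rate_limit: f64) -> f64 {
    console_rate_burst.unwrap_or(2.0 * console_rate_limit)
}
```

So `--console-rate-limit 25` with no explicit burst yields 50.0, as the docstring promises.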

In `@src/main.rs`:
- Around line 129-174: The console auth/CORS policy construction (the block that
computes (console_auth, console_cors) using config.console_auth_required,
moon::admin::auth::AuthPolicy::enabled, and moon::admin::cors::CorsPolicy::new)
runs after the early check config return (config.check_config) so validation is
skipped; move this construction into the config validation path (or execute it
before the `if config.check_config { return Ok(()) }` early return) so
wildcard-CORS/auth conflicts and auth-policy construction errors are detected
during `--check-config`.
- Around line 177-188: The admin server is started by
moon::admin::metrics_setup::init_metrics before the global gateway is
registered, allowing /api/v1/* and /ws to race with an unset gateway; fix by
ensuring the gateway is registered before starting the HTTP server or making the
HTTP layer return a deterministic 503 until registration completes: either move
the call to set_global_gateway(...) to run prior to calling init_metrics
(referencing set_global_gateway and init_metrics), or change init_metrics to
accept or expose a readiness flag (the readiness_flag returned now) that is only
flipped true after set_global_gateway runs so all handlers check that flag and
return 503 until the gateway is set.
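The readiness-flag variant can be sketched with an `AtomicBool`; status codes and names here are illustrative, and Moon's actual handlers live in `src/admin`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Handlers return a deterministic 503 until gateway registration flips
// the flag, instead of racing an unset global gateway at startup.
static GATEWAY_READY: AtomicBool = AtomicBool::new(false);

fn handle_api_request() -> u16 {
    if !GATEWAY_READY.load(Ordering::Acquire) {
        return 503; // "not ready yet", safe to retry
    }
    200
}

fn register_gateway() {
    // ... set_global_gateway(...) would run here ...
    GATEWAY_READY.store(true, Ordering::Release);
}
```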

In `@tests/admin_auth_cors_ratelimit.rs`:
- Around line 77-89: Remove the dead call to bin_path() in the timeout branch of
the loop in tests/admin_auth_cors_ratelimit.rs (the "let _ = bin_path();" on the
path where Instant::now() >= deadline) because it does nothing; either delete
that statement or replace it with a meaningful action (e.g., logging) if
intended, leaving the rest of the timeout handling to break out and return None
(so the Moon { child, port, admin } return path remains unchanged).

In `@tests/scan_fanout_multishard.rs`:
- Around line 17-18: The test uses hard-coded RESP_PORT and ADMIN_PORT and
spawns a `moon` child without guaranteeing cleanup, so reserve free ports
up-front (e.g., bind to port 0 or use a helper to claim available ports and
assign them to RESP_PORT/ADMIN_PORT variables) before calling `spawn` and
replace the constants with those reserved values, and wrap the spawned child
process in a drop guard (a scoped RAII guard around the child returned by the
`spawn_moon`/spawn code) that unconditionally kills and awaits the child on
panic or normal exit; also ensure `wait_for_port()` is called against the
reserved ports tied to that child so stale processes cannot be mistaken for the
test child.
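The port-reservation half of the fix can be sketched with a bind-to-zero probe; the drop-guard half wraps the spawned child in RAII. Both helpers below are illustrative, not the test's actual code:

```rust
use std::net::TcpListener;

// Ask the OS for a free port by binding to port 0, then release it for the
// child to bind. A small bind race remains, but it beats hard-coded ports.
fn reserve_port() -> u16 {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind ephemeral port");
    let port = listener.local_addr().expect("local addr").port();
    drop(listener);
    port
}

// RAII guard: kills and reaps the child even if the test body panics.
#[allow(dead_code)]
struct KillOnDrop(std::process::Child);

impl Drop for KillOnDrop {
    fn drop(&mut self) {
        let _ = self.0.kill();
        let _ = self.0.wait();
    }
}
```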

---

Outside diff comments:
In `@src/command/mod.rs`:
- Around line 524-595: Add missing test-script entries for the six commands
introduced in mod.rs: server_admin::flushdb (FLUSHDB), connection::healthz
(HEALTHZ), hll::pfcount (PFCOUNT), hll::pfmerge (PFMERGE),
crate::admin::slowlog::handle_slowlog/global_slowlog (SLOWLOG), and
sorted_set::zmscore (ZMSCORE). Update scripts/test-commands.sh and
scripts/test-consistency.sh to include at least one test-line per command
(invoke the command with minimal valid args and assert expected response/exit
code), using the exact command names (FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE,
SLOWLOG, ZMSCORE) so CI covers these code paths. Ensure test entries follow
existing script conventions (same formatting, invocation helpers, and cleanup)
to avoid flakiness.

---

Nitpick comments:
In `@console/src/lib/api.ts`:
- Around line 104-139: The current deleteKeys(keys: string[]) implementation
performs sequential per-key DELETEs which is fine for small sets but will be
slow for large selections; change deleteKeys to use a batched approach: if a
bulk DELETE endpoint exists on API_BASE (e.g., POST/DELETE /keys with body of
keys) call that and parse total/failed, otherwise run parallel requests with
Promise.allSettled and a concurrency limiter (preserving per-key success/failure
tracking and building the failed array) so UX toasts still reflect total deleted
and failed keys; ensure the function still returns the total deleted and retains
the same toast/error message logic.
- Around line 240-265: fetchMemoryTreemap currently enriches keys one-by-one
using execCommand inside the scan loop which is slow for large sets; change it
to batch enrichment: while scanning collect raw key names into a buffer (up to a
chosen batch size, e.g., 50 or remaining to reach maxKeys), then call
Promise.all (or Promise.allSettled) on mapped execCommand("TYPE", key) and
execCommand("MEMORY", ["USAGE", key]) pairs for that batch, transform the
results into the same {key,type,bytes} objects and push them into the keys
array, and continue scanning until cursor="0" or keys.length >= maxKeys; update
flow in fetchMemoryTreemap and keep calling buildTreemapFromKeys(keys) at the
end.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a0298d37-146d-4c81-8aa8-3b6d96e2c0fc

📥 Commits

Reviewing files that changed from the base of the PR and between 2437cf1 and 416248b.

⛔ Files ignored due to path filters (3)
  • Cargo.lock is excluded by !**/*.lock
  • console/dist/index.html is excluded by !**/dist/**
  • console/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (39)
  • .planning
  • Cargo.toml
  • console/.npmrc
  • console/package.json
  • console/src/lib/api.ts
  • console/src/lib/toast.ts
  • console/src/main.tsx
  • console/tests/e2e/benchmarks.spec.ts
  • console/tests/e2e/browser.spec.ts
  • console/tests/e2e/console.spec.ts
  • console/tests/e2e/dashboard.spec.ts
  • console/tests/e2e/fixtures.ts
  • console/tests/e2e/graph.spec.ts
  • console/tests/e2e/integration.spec.ts
  • console/tests/e2e/memory.spec.ts
  • console/tests/e2e/vectors.spec.ts
  • console/vite.config.ts
  • src/admin/auth.rs
  • src/admin/console_gateway.rs
  • src/admin/cors.rs
  • src/admin/http_server.rs
  • src/admin/http_server_support.rs
  • src/admin/metrics_setup.rs
  • src/admin/middleware.rs
  • src/admin/mod.rs
  • src/admin/rate_limit.rs
  • src/admin/scan_fanout.rs
  • src/admin/sse_stream.rs
  • src/admin/static_files.rs
  • src/admin/ws_bridge.rs
  • src/command/key.rs
  • src/command/metadata.rs
  • src/command/mod.rs
  • src/command/server_admin.rs
  • src/config.rs
  • src/main.rs
  • tests/admin_auth_cors_ratelimit.rs
  • tests/cmd_flush_dbsize_debug_memory.rs
  • tests/scan_fanout_multishard.rs
✅ Files skipped from review due to trivial changes (9)
  • console/.npmrc
  • console/tests/e2e/browser.spec.ts
  • .planning
  • console/tests/e2e/console.spec.ts
  • console/tests/e2e/graph.spec.ts
  • console/tests/e2e/dashboard.spec.ts
  • console/package.json
  • console/tests/e2e/fixtures.ts
  • tests/cmd_flush_dbsize_debug_memory.rs
🚧 Files skipped from review as they are similar to previous changes (8)
  • console/tests/e2e/vectors.spec.ts
  • console/src/main.tsx
  • console/vite.config.ts
  • console/tests/e2e/memory.spec.ts
  • console/tests/e2e/benchmarks.spec.ts
  • console/tests/e2e/integration.spec.ts
  • src/admin/metrics_setup.rs
  • src/admin/mod.rs

Comment thread src/admin/auth.rs
Comment thread src/admin/auth.rs Outdated
Comment thread src/admin/rate_limit.rs
Comment on lines +57 to +75
if enabled {
// Spawn cleanup on the current tokio runtime. The weak reference
// ensures the task exits automatically when the last Arc is dropped.
let weak = Arc::downgrade(&limiter);
// `tokio::spawn` requires a tokio runtime context. Callers in
// tests (`#[tokio::test]`) and in the admin server runtime both
// satisfy that. If invoked outside a runtime we silently skip.
if tokio::runtime::Handle::try_current().is_ok() {
tokio::spawn(async move {
let mut ticker = tokio::time::interval(Duration::from_secs(60));
loop {
ticker.tick().await;
let Some(l) = weak.upgrade() else {
return;
};
l.cleanup(Duration::from_secs(300));
}
});
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, explore the structure of src/admin/rate_limit.rs
head -100 src/admin/rate_limit.rs

Repository: pilotspace/moon

Length of output: 3842


🏁 Script executed:

# Check for cfg guards in the file
rg '\[cfg' src/admin/rate_limit.rs -A 2

Repository: pilotspace/moon

Length of output: 193


🏁 Script executed:

# Check if the entire admin module is cfg-gated
head -50 src/admin/mod.rs

Repository: pilotspace/moon

Length of output: 1100


🏁 Script executed:

# Look at Cargo.toml to understand feature structure
cat Cargo.toml | grep -A 20 '\[features\]'

Repository: pilotspace/moon

Length of output: 1325


🏁 Script executed:

# Search for runtime abstraction patterns in the codebase
rg 'cfg.*runtime-tokio|cfg.*runtime-monoio' src/ -l

Repository: pilotspace/moon

Length of output: 758


🏁 Script executed:

# Check if there are any documented constraints or feature conflicts
rg 'console.*runtime|runtime.*console' . --type toml --type md

Repository: pilotspace/moon

Length of output: 41


🏁 Script executed:

# Look for integration tests or examples that might show feature combinations
fd -e toml --type f | xargs rg 'console.*runtime|runtime.*console' 2>/dev/null

Repository: pilotspace/moon

Length of output: 231


🏁 Script executed:

# Check for any references to "console requires" or similar constraints
rg 'console.*tokio|console.*require' src/ --type rs -i

Repository: pilotspace/moon

Length of output: 85


🏁 Script executed:

# Verify if the issue is actually a problem by checking what happens with the feature combo
# Look at how hyper is used in console (hyper typically is runtime-agnostic but may have tokio deps)
rg 'hyper-tungstenite' . --type toml -A 3 -B 3

Repository: pilotspace/moon

Length of output: 1397


Cleanup task uses Tokio APIs without explicit runtime feature guard.

The code directly calls tokio::spawn, tokio::runtime::Handle::try_current(), and tokio::time::interval inside the enabled branch without a #[cfg(feature = "runtime-tokio")] guard or runtime abstraction. While the console feature depends on tokio-stream (making it implicitly Tokio-bound), there is no explicit constraint preventing the module from being compiled with only runtime-monoio.

Add a #[cfg(feature = "runtime-tokio")] guard around the cleanup task spawn, or provide a no-op stub for non-Tokio runtimes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/rate_limit.rs` around lines 57 - 75, The cleanup task branch uses
Tokio APIs (tokio::spawn, tokio::runtime::Handle::try_current,
tokio::time::interval) unguarded; wrap the entire async cleanup spawn logic
inside a #[cfg(feature = "runtime-tokio")] block (or provide an alternative
no-op for non-Tokio builds) so the module can compile under other runtimes;
locate the code around the enabled branch where Arc::downgrade(&limiter) and the
weak.upgrade() loop call l.cleanup(Duration::from_secs(300)) and apply the cfg
guard or add a conditional stub function that compiles when runtime-tokio is
disabled.

Comment thread src/admin/scan_fanout.rs
Comment thread src/admin/sse_stream.rs
Comment thread src/config.rs
Comment on lines +50 to +53
/// Token-bucket burst capacity for the admin rate limiter.
/// Default: 2x --console-rate-limit.
#[arg(long = "console-rate-burst", default_value_t = 2000.0)]
pub console_rate_burst: f64,

⚠️ Potential issue | 🟡 Minor

The documented burst default is not what this parser does.

With default_value_t = 2000.0, --console-rate-limit 25 still leaves burst at 2000, not 50 as the docstring promises. Make console_rate_burst optional and derive the effective default from console_rate_limit after parsing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config.rs` around lines 50 - 53, The docstring promises
console_rate_burst defaults to 2x console_rate_limit but the struct uses
default_value_t = 2000.0, so change console_rate_burst to an Option<f64> (remove
default_value_t) and after parsing compute the effective burst from
console_rate_limit (e.g., if config.console_rate_burst.is_none() set
effective_burst = 2.0 * config.console_rate_limit), storing or returning that
derived value where the limiter is constructed; update references that expect a
f64 to use the derived value and adjust the #[arg(...)] on console_rate_burst
accordingly and the docstring to match the new behavior (use symbols
console_rate_burst and console_rate_limit to locate and modify the code).

Comment thread src/main.rs
Comment on lines +129 to +174
// ── Admin/console hardening (HARD-01/02/03, Phase 137) ──────────
// Build the auth + CORS policies BEFORE the admin listener binds so
// misconfiguration (wildcard CORS + auth required, empty secret) fails
// fast and never opens a port that would satisfy probes while silently
// accepting unauthenticated requests.
//
// The rate limiter is constructed inside the admin runtime (its
// cleanup task needs `tokio::spawn`); we thread the raw rps/burst
// through to `spawn_admin_server`.
#[cfg(feature = "console")]
let (console_auth, console_cors) = {
let auth_policy = if config.console_auth_required {
let secret = if config.console_auth_secret.is_empty() {
// Operator did not supply a secret: generate an ephemeral
// 32-byte secret and warn that issued tokens won't survive
// restart.
let bytes: [u8; 32] = rand::random();
use base64::Engine;
let s = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes);
tracing::warn!(
"--console-auth-required set without --console-auth-secret; \
generated ephemeral secret (tokens will not survive restart). \
Set --console-auth-secret=... for reproducible deploys."
);
s
} else {
config.console_auth_secret.clone()
};
match moon::admin::auth::AuthPolicy::enabled(secret.as_bytes()) {
Ok(p) => std::sync::Arc::new(p),
Err(e) => return Err(anyhow::anyhow!("--console-auth-secret: {}", e)),
}
} else {
std::sync::Arc::new(moon::admin::auth::AuthPolicy::disabled())
};

let cors_policy = match moon::admin::cors::CorsPolicy::new(
&config.console_cors_origin,
config.console_auth_required,
) {
Ok(p) => std::sync::Arc::new(p),
Err(e) => return Err(anyhow::anyhow!(e)),
};

(auth_policy, cors_policy)
};

⚠️ Potential issue | 🟠 Major

--check-config still skips the new console validation.

This block is after the early if config.check_config { return Ok(()) }, so wildcard-CORS/auth conflicts and auth-policy construction errors are only caught on a real boot. Move the console policy construction into the validation path, or run it before the early return.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main.rs` around lines 129 - 174, The console auth/CORS policy
construction (the block that computes (console_auth, console_cors) using
config.console_auth_required, moon::admin::auth::AuthPolicy::enabled, and
moon::admin::cors::CorsPolicy::new) runs after the early check config return
(config.check_config) so validation is skipped; move this construction into the
config validation path (or execute it before the `if config.check_config {
return Ok(()) }` early return) so wildcard-CORS/auth conflicts and auth-policy
construction errors are detected during `--check-config`.

Comment thread src/main.rs
Comment on lines +177 to +188
let readiness_flag = moon::admin::metrics_setup::init_metrics(
config.admin_port,
&config.bind,
#[cfg(feature = "console")]
console_auth,
#[cfg(feature = "console")]
console_cors,
#[cfg(feature = "console")]
config.console_rate_limit,
#[cfg(feature = "console")]
config.console_rate_burst,
);

⚠️ Potential issue | 🟠 Major

The admin server can start before the console gateway is registered.

Per src/admin/metrics_setup.rs:41-85, init_metrics() spawns the admin HTTP server immediately, but set_global_gateway(...) happens much later. Early /api/v1/* or /ws requests can race with an unset gateway during startup. Either register the gateway first or have the HTTP layer return a deterministic 503 until registration completes.

Also applies to: 306-321

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main.rs` around lines 177 - 188, The admin server is started by
moon::admin::metrics_setup::init_metrics before the global gateway is
registered, allowing /api/v1/* and /ws to race with an unset gateway; fix by
ensuring the gateway is registered before starting the HTTP server or making the
HTTP layer return a deterministic 503 until registration completes: either move
the call to set_global_gateway(...) to run prior to calling init_metrics
(referencing set_global_gateway and init_metrics), or change init_metrics to
accept or expose a readiness flag (the readiness_flag returned now) that is only
flipped true after set_global_gateway runs so all handlers check that flag and
return 503 until the gateway is set.

Comment on lines +77 to +89
loop {
if Instant::now() >= deadline {
let _ = bin_path();
break;
}
let url = format!("http://127.0.0.1:{}/healthz", admin);
if let Ok(resp) = ureq::get(&url).timeout(Duration::from_millis(500)).call()
&& resp.status() == 200
{
return Some(Moon { child, port, admin });
}
thread::sleep(Duration::from_millis(100));
}

⚠️ Potential issue | 🟡 Minor

Dead code in timeout branch.

Line 79 calls bin_path() but discards the result. This appears to be leftover debug code or a copy-paste artifact.

🧹 Suggested fix
         if Instant::now() >= deadline {
-            let _ = bin_path();
             break;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/admin_auth_cors_ratelimit.rs` around lines 77 - 89, Remove the dead
call to bin_path() in the timeout branch of the loop in
tests/admin_auth_cors_ratelimit.rs (the "let _ = bin_path();" on the path where
Instant::now() >= deadline) because it does nothing; either delete that
statement or replace it with a meaningful action (e.g., logging) if intended,
leaving the rest of the timeout handling to break out and return None (so the
Moon { child, port, admin } return path remains unchanged).

Comment on lines +17 to +18
const RESP_PORT: u16 = 16499;
const ADMIN_PORT: u16 = 16500;

⚠️ Potential issue | 🟠 Major

Make the spawned server lifecycle failure-safe.

A panic before Lines 137-139 leaves moon running on the hard-coded ports, and the next run can then pass wait_for_port() against the stale process instead of the child from this test. Reserve free ports up front and wrap the child in a drop guard so cleanup is unconditional.

Also applies to: 73-139

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/scan_fanout_multishard.rs` around lines 17 - 18, The test uses
hard-coded RESP_PORT and ADMIN_PORT and spawns a `moon` child without
guaranteeing cleanup, so reserve free ports up-front (e.g., bind to port 0 or
use a helper to claim available ports and assign them to RESP_PORT/ADMIN_PORT
variables) before calling `spawn` and replace the constants with those reserved
values, and wrap the spawned child process in a drop guard (a scoped RAII guard
around the child returned by the `spawn_moon`/spawn code) that unconditionally
kills and awaits the child on panic or normal exit; also ensure
`wait_for_port()` is called against the reserved ports tied to that child so
stale processes cannot be mistaken for the test child.

TinDang97 added a commit that referenced this pull request Apr 12, 2026
Archived phases 128-137 to .planning/milestones/v0.1.5-phases/.
Full milestone artifacts at .planning/milestones/v0.1.5-*.
See PR #73 for the implementation.
TinDang97 added a commit that referenced this pull request Apr 12, 2026
- CHANGELOG.md: add v0.1.5 Moon Console section (unblocks Lint gate)
- console-integration.yml: pin dtolnay action to 1.94.1 and install
  rustfmt/clippy up-front to avoid rustup 'rustfmt-preview' vs
  'rustfmt' conflict when cargo syncs via rust-toolchain.toml
- Dockerfile: bump RUST_VERSION 1.88 → 1.94 (cargo-chef panicked
  because moon requires MSRV 1.94)
- release.yml: cargo-cyclonedx rejects --output-file; switch to
  --override-filename + mv to artifacts/ for deterministic paths
@pilotspacex-byte
Contributor Author

Review Response

Thanks for the thorough reviews. Below is a disposition for each finding.

Fixed in follow-up commits on this PR

Commits 6a6fd7d + cf592c1:

| Source | Finding | Commit |
| --- | --- | --- |
| CodeRabbit | src/vector/hnsw/search.rs:870 — layer-0 selected cap wrong | 6a6fd7d — cap by k_cap only |
| CodeRabbit | src/admin/console_gateway.rs:22 — silent set_global_gateway | 6a6fd7d + cf592c1 — returns Result; call site panics on double-init |
| CodeRabbit | scripts/gen-verification.sh:115 — unquoted nested expansion | 6a6fd7d — ${f#"$REPO_ROOT"/} |
| CodeRabbit | console/src/components/vector/UmapProgress.tsx — unclamped pct | 6a6fd7d — clamped to [0, 100] |
| CodeRabbit | console/src/components/dashboard/HitRatioCard.tsx — Math.random in useMemo | 6a6fd7d — deterministic sine jitter |
| CodeRabbit | console/src/lib/monarch-cypher.ts — arrow vs minus | 6a6fd7d — split into operator.arrow + operator |
| CodeRabbit | console/HashEditor.tsx — missing error handling + double-save | 6a6fd7d — try/catch + toastError + saving guard |
| CodeRabbit | console/ListEditor.tsx — missing error handling | 6a6fd7d — try/catch + toastError |

Plus CI unblockers in commits 2fa6fed, fc3c43e, 69751e0:

  • CHANGELOG.md — added v0.1.5 section
  • console-integration.yml — pinned dtolnay/rust-toolchain@1.94.1 + components: rustfmt, clippy
  • Dockerfile — RUST_VERSION 1.88 → 1.94 (cargo-chef was failing)
  • release.yml — cargo cyclonedx --output-file → --override-filename + mv
  • scripts/test-integration.sh — --persistence-dir → --dir (actual CLI flag)
  • console/.npmrc — removed machine-specific store-dir=/Users/tindang/...

Obsolete on current HEAD

Qodo Bug #4 — unauthenticated admin command exec. This finding is against commit 2437cf1 (Phase 136). Phase 137-03 (already on this branch) added the full middleware chain in src/admin/http_server.rs:93-130:

CORS preflight → Auth check → Rate limit

See src/admin/middleware.rs, src/admin/auth.rs, src/admin/cors.rs, src/admin/rate_limit.rs — HMAC-SHA256 Bearer tokens, per-IP token-bucket limiter, origin allowlist. /metrics is the only auth-exempt path (for Prometheus scrape).

Deferred with justification

Qodo Rule #1 — flume channels vs Mutex<HeapProd> in ConsoleGateway. The Mutex<HeapProd<ShardMessage>> serializes admin tokio tasks onto the SPSC producer. Admin traffic is low-rate (<100/s in interactive UI use), not a RESP hot path. Uncontended parking_lot::Mutex is ~10ns — negligible next to the actual shard dispatch + oneshot await. Converting to flume::Sender would require reshaping every shard event loop's consumer drain to be heterogeneous (ringbuf inter-shard + flume admin), which is a Phase-128-level change.

Flagging as v0.1.6 followup — appropriate scope is to migrate all admin-lane channels uniformly, not just console dispatch.

Qodo Rule #2/#3 — unit tests in leaf files instead of mod.rs. Codebase-wide audit:

| Location | Files with #[cfg(test)] in leaf .rs |
| --- | --- |
| src/vector/hnsw/ | 5 (graph.rs, neighbor_codec.rs, build.rs, search.rs, search_sq.rs) |
| src/graph/ | 23 (store, csr, cypher, traversal, segment, wal, recovery, …) |
| src/admin/ | 8 (the files flagged here) |

Moving only the 9 flagged files creates asymmetry, not compliance. The CLAUDE.md rule (“Test code stays in mod.rs”) was authored for the narrower src/command/<group>/ split pattern (hash_read + hash_write + shared helpers in mod.rs). Applying it to src/admin/ + src/vector/hnsw/ would be a multi-thousand-line churn across settled code — out of scope for v0.1.5.

Flagging as tooling followup — either (a) update the rule to reflect the observed convention, or (b) plan a sweep across all 36 affected files as a separate PR.

Low-priority CodeRabbit findings not addressed

  • tests/console_gateway_test.rs:13-18 hardcoded ports 16399/16400 — tests are #[ignore] (not run in CI); a port collision only matters under manual bulk invocation. Keeping the simple version over a defensive one.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

♻️ Duplicate comments (2)
src/admin/auth.rs (1)

66-67: ⚠️ Potential issue | 🟠 Major

Remove lint-suppressed unwrap() from non-test auth code.

Line 66 and Line 93 suppress clippy::unwrap_used and call .unwrap() in library code. This bypasses the repo ratchet and keeps a panic path in the auth flow.

Proposed minimal fix
-        #[allow(clippy::unwrap_used)] // invariant: secret verified non-empty in `enabled`
-        let mut mac = HmacSha256::new_from_slice(&self.secret).unwrap();
+        let mut mac = match HmacSha256::new_from_slice(&self.secret) {
+            Ok(mac) => mac,
+            Err(_) => panic!("AuthPolicy invariant violated: non-empty secret required"),
+        };
@@
-        #[allow(clippy::unwrap_used)] // invariant: secret non-empty
-        let mut mac = HmacSha256::new_from_slice(&self.secret).unwrap();
+        let mut mac = match HmacSha256::new_from_slice(&self.secret) {
+            Ok(mac) => mac,
+            Err(_) => return Err("invalid auth policy secret"),
+        };
#!/bin/bash
# Verify unwrap/expect and suppressions are removed from this library auth path.
rg -n --type=rust -C2 '#\[allow\(clippy::unwrap_used\)\]|\.unwrap\(|\.expect\(' src/admin/auth.rs

As per coding guidelines: "No unwrap() or expect() in library code outside tests. Use pattern matching and if let instead of unwrap."

Also applies to: 93-94

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/auth.rs` around lines 66 - 67, Replace the panic-prone unwraps and
the #[allow(clippy::unwrap_used)] around
HmacSha256::new_from_slice(&self.secret) (and the similar call later) with
proper error handling: call HmacSha256::new_from_slice(&self.secret) and match
on the Result, mapping the Err case into a suitable error (e.g., an AuthError or
boxed error) and returning it from the enclosing function (or using ? to
propagate) instead of unwrapping; remove the allow attribute; ensure the
enclosing functions (where HmacSha256::new_from_slice and the other unwrap
occur) return Result so you can propagate the failure with context rather than
panicking.
src/main.rs (1)

129-174: ⚠️ Potential issue | 🟠 Major

Hoist console policy validation above --check-config.

This block still runs after the early return at Lines 98-126, so --check-config can report success without ever validating --console-auth-secret or the console CORS/auth combination.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main.rs` around lines 129 - 174, The console auth/CORS validation
(creating console_auth and console_cors via config.console_auth_required,
config.console_auth_secret, moon::admin::auth::AuthPolicy::enabled and
moon::admin::cors::CorsPolicy::new) is executed after the early --check-config
return so checks can be skipped; move/hoist this entire #[cfg(feature =
"console")] block so it runs before the --check-config early-return (the code
that handles --check-config around the earlier lines), ensuring
--console-auth-secret and console CORS/auth combos are validated during
config-only checks.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 22e00cc7-06cb-4df4-9eda-dec11d86abd8

📥 Commits

Reviewing files that changed from the base of the PR and between fc3c43e and 58118c3.

📒 Files selected for processing (22)
  • console/src/components/browser/editors/HashEditor.tsx
  • console/src/components/browser/editors/ListEditor.tsx
  • console/src/components/dashboard/HitRatioCard.tsx
  • console/src/components/vector/UmapProgress.tsx
  • console/src/lib/api.ts
  • console/src/lib/monarch-cypher.ts
  • console/tests/e2e/benchmarks.spec.ts
  • console/tests/e2e/dashboard.spec.ts
  • console/tests/e2e/integration.spec.ts
  • console/tests/e2e/memory.spec.ts
  • console/tests/e2e/vectors.spec.ts
  • scripts/gen-verification.sh
  • scripts/test-integration.sh
  • src/admin/auth.rs
  • src/admin/console_gateway.rs
  • src/admin/scan_fanout.rs
  • src/admin/sse_stream.rs
  • src/admin/static_files.rs
  • src/command/server_admin.rs
  • src/command/vector_search/mod.rs
  • src/main.rs
  • src/vector/hnsw/search.rs
✅ Files skipped from review due to trivial changes (4)
  • console/tests/e2e/memory.spec.ts
  • console/tests/e2e/dashboard.spec.ts
  • console/tests/e2e/vectors.spec.ts
  • scripts/test-integration.sh
🚧 Files skipped from review as they are similar to previous changes (8)
  • console/src/components/vector/UmapProgress.tsx
  • console/tests/e2e/benchmarks.spec.ts
  • console/src/components/browser/editors/ListEditor.tsx
  • console/src/lib/monarch-cypher.ts
  • console/src/components/dashboard/HitRatioCard.tsx
  • console/tests/e2e/integration.spec.ts
  • scripts/gen-verification.sh
  • console/src/lib/api.ts

Comment on lines +27 to +31
export function HashEditor({ keyName, value }: { keyName: string; value: unknown }) {
const [fields, setFields] = useState<HashField[]>(() => parseHashArray(value));
const [newField, setNewField] = useState("");
const [newValue, setNewValue] = useState("");
const [saving, setSaving] = useState<string | null>(null);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether HashEditor is keyed/remounted by keyName in parent render paths.
rg -nP --type=tsx '<HashEditor\b' -C3
rg -nP --type=tsx '<HashEditor\b[^>]*\bkey=' -C2

Repository: pilotspace/moon

Length of output: 114


🏁 Script executed:

#!/bin/bash
# Find where HashEditor is imported and used
rg 'HashEditor' --type ts -A 2 -B 2

Repository: pilotspace/moon

Length of output: 1746


🏁 Script executed:

#!/bin/bash
# List the structure to understand the codebase
find console/src -name '*.tsx' -o -name '*.ts' | head -20

Repository: pilotspace/moon

Length of output: 1007


🏁 Script executed:

#!/bin/bash
# Search for HashEditor usage more broadly
rg 'HashEditor' -A 3 -B 3

Repository: pilotspace/moon

Length of output: 2351


🏁 Script executed:

#!/bin/bash
# Read ValuePanel to understand the component structure and key handling
cat -n console/src/components/browser/ValuePanel.tsx | head -100

Repository: pilotspace/moon

Length of output: 3333


🏁 Script executed:

#!/bin/bash
# Check if ValuePanel or its parents use key props for HashEditor or manage remounting
rg 'keyName|key=' console/src/components/browser/ValuePanel.tsx -B 3 -A 1

Repository: pilotspace/moon

Length of output: 1444


🏁 Script executed:

#!/bin/bash
# Read HashEditor to see handleAdd and handleSave logic
cat -n console/src/components/browser/editors/HashEditor.tsx | sed -n '30,80p'

Repository: pilotspace/moon

Length of output: 2423


Add useEffect to resync local state when keyName or value props change.

When switching between different hash keys in ValuePanel, HashEditor is reused without remounting. The useState initializer on line 28 only runs once, leaving stale hash fields displayed. This produces a confusing UI in which users can see and edit fields belonging to a previously viewed key.

Proposed fix
-import { useState, useCallback } from "react";
+import { useState, useCallback, useEffect } from "react";
 
 export function HashEditor({ keyName, value }: { keyName: string; value: unknown }) {
   const [fields, setFields] = useState<HashField[]>(() => parseHashArray(value));
   const [newField, setNewField] = useState("");
   const [newValue, setNewValue] = useState("");
   const [saving, setSaving] = useState<string | null>(null);
+
+  useEffect(() => {
+    setFields(parseHashArray(value));
+    setNewField("");
+    setNewValue("");
+    setSaving(null);
+  }, [keyName, value]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/HashEditor.tsx` around lines 27 - 31,
HashEditor reuses local state when props change (useState initializer
parseHashArray(value) only runs once), causing stale fields; add a useEffect in
the HashEditor component that watches keyName and value and resets local state
by calling setFields(parseHashArray(value)) and clearing newField/newValue
(setNewField(""), setNewValue("")) and optionally setSaving(null) so the editor
resyncs whenever a different key is selected; reference the HashEditor function
and the state setters setFields, setNewField, setNewValue, and setSaving to
locate where to add the effect.

Comment on lines +63 to +68
const handleAdd = async () => {
if (!newField.trim()) return;
try {
await execCommand("HSET", [keyName, newField, newValue]);
setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
setNewField("");

⚠️ Potential issue | 🟠 Major

handleAdd should upsert existing fields, not always append.

Line 67 always appends a new row after HSET. For an existing hash field, the backend value is replaced, but the UI shows a duplicated, inconsistent row.

Proposed fix
   const handleAdd = async () => {
-    if (!newField.trim()) return;
+    const fieldName = newField.trim();
+    if (!fieldName) return;
     try {
-      await execCommand("HSET", [keyName, newField, newValue]);
-      setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
+      await execCommand("HSET", [keyName, fieldName, newValue]);
+      setFields((prev) => {
+        const idx = prev.findIndex((f) => f.field === fieldName);
+        if (idx >= 0) {
+          return prev.map((f) =>
+            f.field === fieldName
+              ? { ...f, value: newValue, editValue: newValue, editing: false }
+              : f
+          );
+        }
+        return [...prev, { field: fieldName, value: newValue, editing: false, editValue: newValue }];
+      });
       setNewField("");
       setNewValue("");
     } catch (err) {
-      toastError(`HSET ${keyName} ${newField} failed: ${err instanceof Error ? err.message : String(err)}`);
+      toastError(`HSET ${keyName} ${fieldName} failed: ${err instanceof Error ? err.message : String(err)}`);
     }
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/HashEditor.tsx` around lines 63 - 68,
handleAdd currently always appends a new row after execCommand("HSET", ...),
causing duplicates when the field already exists; change the setFields update so
it upserts: after successful execCommand("HSET", [keyName, newField, newValue])
use setFields(prev => { if an entry with entry.field === newField exists, return
prev with that entry replaced (update value and editValue and editing=false),
otherwise return [...prev, newEntry] }); keep clearing setNewField("") as
before.

Comment on lines +106 to +121
<span
className="text-xs font-mono text-zinc-400 truncate cursor-pointer hover:text-zinc-100"
onClick={() =>
setFields((prev) =>
prev.map((x) => (x.field === f.field ? { ...x, editing: true } : x))
)
}
>
{f.value}
</span>
)}
<div className="flex items-center gap-1 justify-end">
{saving === f.field && <Loader2 className="h-3 w-3 animate-spin text-muted-foreground" />}
<button onClick={() => handleDelete(f.field)} className="text-muted-foreground/50 hover:text-destructive">
<Trash2 className="h-3 w-3" />
</button>

⚠️ Potential issue | 🟠 Major

Editing and icon actions are not fully accessible.

Line 106 uses a clickable <span> (not keyboard-focusable), and Lines 119/142 define icon-only buttons without accessible names. Keyboard and screen-reader users can miss core actions.

Proposed fix
-              <span
-                className="text-xs font-mono text-zinc-400 truncate cursor-pointer hover:text-zinc-100"
+              <button
+                type="button"
+                aria-label={`Edit value for field ${f.field}`}
+                className="text-left text-xs font-mono text-zinc-400 truncate hover:text-zinc-100"
                 onClick={() =>
                   setFields((prev) =>
                     prev.map((x) => (x.field === f.field ? { ...x, editing: true } : x))
                   )
                 }
               >
                 {f.value}
-              </span>
+              </button>
@@
-              <button onClick={() => handleDelete(f.field)} className="text-muted-foreground/50 hover:text-destructive">
+              <button
+                type="button"
+                aria-label={`Delete field ${f.field}`}
+                onClick={() => handleDelete(f.field)}
+                className="text-muted-foreground/50 hover:text-destructive"
+              >
                 <Trash2 className="h-3 w-3" />
               </button>
@@
-        <button
+        <button
+          type="button"
+          aria-label="Add hash field"
           onClick={handleAdd}
           className="flex items-center justify-center gap-1 text-xs text-primary hover:text-primary/80"
         >

Also applies to: 142-147

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@console/src/components/browser/editors/HashEditor.tsx` around lines 106 -
121, The clickable <span> used to trigger edit (where setFields(...) sets
editing: true) and the icon-only buttons (Trash2 and other icon buttons around
handleDelete/Loader2) are not keyboard- or screen-reader-accessible; replace the
span with a focusable element (preferably a <button> or an element with
role="button" and tabindex="0") and ensure it handles onKeyDown (Enter/Space) to
call the same setFields edit toggle, and add clear aria-label attributes to the
icon-only buttons (e.g., the button that calls handleDelete and any other icon
buttons) so screen readers announce their purpose; also ensure visible focus
styles are preserved when making these elements focusable and update any
tests/attributes that expect the previous span.

Comment on lines +48 to +85
/// Commands that do not operate on a specific key and should route to shard 0.
fn is_keyless_command(cmd: &str) -> bool {
matches!(
cmd,
"PING"
| "INFO"
| "DBSIZE"
| "COMMAND"
| "CONFIG"
| "CLIENT"
| "FLUSHALL"
| "FLUSHDB"
| "SLOWLOG"
| "SELECT"
| "SUBSCRIBE"
| "UNSUBSCRIBE"
| "PSUBSCRIBE"
| "PUNSUBSCRIBE"
| "MONITOR"
| "DEBUG"
| "CLUSTER"
| "MULTI"
| "EXEC"
| "DISCARD"
| "WAIT"
| "SAVE"
| "BGSAVE"
| "BGREWRITEAOF"
| "LASTSAVE"
| "TIME"
| "MEMORY"
| "LATENCY"
| "RANDOMKEY"
| "SCAN"
| "KEYS"
| "OBJECT"
)
}

⚠️ Potential issue | 🟠 Major

The command-routing heuristic will misroute real console commands.

This path either pins the command to shard 0 or hashes args[0]. src/shard/spsc_handler.rs:196-312 executes ShardMessage::Execute against one shard-local DB, so DBSIZE/KEYS/SCAN become partial shard-0 reads, while MEMORY USAGE key, OBJECT ENCODING key, MGET, EVAL, etc. can hit the wrong shard entirely. Please restrict this gateway to commands with known key specs, or add proper fanout/subcommand-aware routing.

Also applies to: 121-125

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/console_gateway.rs` around lines 48 - 85, is_keyless_command
currently treats many commands (e.g., DBSIZE, KEYS, SCAN, MEMORY, OBJECT, MGET,
EVAL, etc.) as shard-0/keyless which causes partial or incorrect shard-local
execution in ShardMessage::Execute; update is_keyless_command to only return
true for truly global commands with no key semantics (e.g., PING, INFO, COMMAND,
CLUSTER, SAVE/BGSAVE) and remove commands that are key- or subcommand-dependent,
and/or implement subcommand-aware routing: in the console gateway routing path
call a new helper that parses the command and its first subcommand/argument (use
the same parsing logic used by shard/spsc_handler.rs) to either (a) fan out the
command to all shards for global ops (DBSIZE, FLUSHALL), (b) route by the actual
key argument for keyed ops (MGET, MEMORY USAGE, OBJECT ENCODING, EVAL with
KEYS), or (c) pin to shard 0 only for truly console-only commands; reference
is_keyless_command and the ShardMessage::Execute path when making these changes.
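The fan-out / keyed / pinned split requested above can be sketched as a pure classifier. The `Route` enum and the command lists here are illustrative assumptions, not Moon's actual key specs, which would live in the command tables:

```rust
/// How a console command should be dispatched across shards (illustrative).
#[derive(Debug, PartialEq)]
pub enum Route {
    /// No key semantics at all: any single shard gives a correct answer.
    Global,
    /// Must run on every shard and merge the replies.
    FanOut,
    /// Hash the argument at `arg_index` to pick the owning shard.
    Keyed { arg_index: usize },
}

/// Classify a command, looking one level into subcommands where routing
/// depends on them. Command lists are examples, not an exhaustive spec.
pub fn classify(cmd: &str, args: &[&str]) -> Route {
    match cmd.to_ascii_uppercase().as_str() {
        "PING" | "INFO" | "COMMAND" | "TIME" | "SAVE" | "BGSAVE" => Route::Global,
        // Aggregates over the whole keyspace must touch every shard.
        "DBSIZE" | "FLUSHALL" | "FLUSHDB" | "SCAN" | "KEYS" | "RANDOMKEY" => Route::FanOut,
        // Subcommand-dependent: MEMORY USAGE <key> is keyed, MEMORY STATS is global.
        "MEMORY" => match args.first().map(|s| s.to_ascii_uppercase()) {
            Some(sub) if sub == "USAGE" => Route::Keyed { arg_index: 1 },
            _ => Route::Global,
        },
        // OBJECT ENCODING <key>, OBJECT FREQ <key>, ...: key is the second arg.
        "OBJECT" => Route::Keyed { arg_index: 1 },
        // Default: first argument is the key (GET, SET, HSET, ...).
        _ => Route::Keyed { arg_index: 0 },
    }
}
```

The gateway would then pin `Global` to shard 0, fan `FanOut` out with a reply merge (as scan_fanout.rs already does for SCAN), and hash the indicated argument for `Keyed`.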

Comment on lines +179 to +186
/// Convert a RESP `Frame` into a `serde_json::Value`.
///
/// Handles all Frame variants recursively:
/// - Strings attempt UTF-8, fall back to base64
/// - Errors produce `{"error": "..."}`
/// - Maps produce JSON objects (keys stringified)
/// - PreSerialized produces `{"raw": true}` (should not appear in console)
pub fn frame_to_json(frame: &Frame) -> serde_json::Value {

⚠️ Potential issue | 🟠 Major

PreSerialized replies lose the actual command result.

src/protocol/frame.rs:152-155 notes that Frame::PreSerialized is used for hot-path replies such as GET. Converting it to {"raw": true} drops the payload, so common console queries will render a placeholder instead of the real value.

Also applies to: 245-245

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/console_gateway.rs` around lines 179 - 186, The current
frame_to_json converts Frame::PreSerialized to a placeholder {"raw": true},
dropping the payload; instead, extract the PreSerialized payload bytes and
serialize them like other binary/string frames (attempt UTF-8 decode and
otherwise base64), returning the actual value (or object with {"raw":
<string_or_base64>} if you need to preserve metadata). Update frame_to_json's
match arm for Frame::PreSerialized to read the contained bytes and apply the
same UTF-8 / base64 logic used for Frame::BulkString or Frame::SimpleString, and
make the same change where PreSerialized is handled elsewhere in the codebase so
hot-path replies (e.g., GET) render real values rather than a placeholder.
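The UTF-8-then-fallback decode the fix calls for can be sketched as below. To stay dependency-free this sketch hex-encodes the binary fallback where the real gateway would use base64, and `bytes_to_json_text` is an illustrative name, not an existing function:

```rust
/// Decode reply payload bytes the way the console renders bulk strings:
/// valid UTF-8 passes through unchanged, anything else gets an encoded
/// fallback so no payload is silently dropped.
pub fn bytes_to_json_text(payload: &[u8]) -> String {
    match std::str::from_utf8(payload) {
        Ok(s) => s.to_string(),
        // Sketch only: hex instead of base64, and format! is fine off the
        // RESP hot path (console replies, not the TCP fast path).
        Err(_) => payload.iter().map(|b| format!("{b:02x}")).collect(),
    }
}
```

Applying this to the bytes inside `Frame::PreSerialized` (after stripping the RESP framing) would let hot-path replies such as GET render real values instead of `{"raw": true}`.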

Comment thread src/admin/scan_fanout.rs
Comment on lines +123 to +124
let ks: Vec<Bytes> = match keys_frame {
Frame::Array(ks) => ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(),

⚠️ Potential issue | 🟡 Minor

Don't silently drop malformed key entries.

filter_map(...).cloned() turns a bad SCAN element into missing data while still advancing the composite cursor. If any member of the keys array is not a string frame, this should fail like the other malformed-reply branches.

Suggested fix
-                Frame::Array(ks) => ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(),
+                Frame::Array(ks) => ks
+                    .iter()
+                    .map(|k| {
+                        as_bytes(k)
+                            .cloned()
+                            .ok_or_else(|| {
+                                "malformed SCAN reply: key element is not a string".to_string()
+                            })
+                    })
+                    .collect::<Result<Vec<_>, _>>()?,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/scan_fanout.rs` around lines 123 - 124, The code currently uses
keys_frame matched as Frame::Array(ks) and builds ks: Vec<Bytes> via
ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(), which silently drops
non-string entries; change this to explicitly validate each element (using
as_bytes on every k) and return an error on the first malformed entry instead of
skipping it. Locate the Frame::Array arm that constructs ks and replace the
filter_map/collect with an explicit iteration or try-collect that maps each k ->
as_bytes(k).cloned() and propagates a malformed-reply error if as_bytes returns
None, so malformed SCAN elements fail like the other branches.

Comment thread src/admin/static_files.rs Outdated
Comment on lines +49 to +55
let has_extension = path.contains('.') && !path.ends_with('/');
if has_extension {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.header("content-type", "text/plain")
.body(Full::new(Bytes::from_static(b"Not Found")))
.unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));

⚠️ Potential issue | 🟡 Minor

Use the last path segment for asset detection.

This heuristic treats any . anywhere in the URL as an asset miss, so route URLs with dotted params/segments will 404 instead of falling back to index.html. Restrict the check to the final path segment.

Suggested fix
-            let has_extension = path.contains('.') && !path.ends_with('/');
+            let has_extension = path
+                .rsplit('/')
+                .next()
+                .is_some_and(|segment| segment.contains('.'));
             if has_extension {
                 return Response::builder()
                     .status(StatusCode::NOT_FOUND)
                     .header("content-type", "text/plain")
                     .body(Full::new(Bytes::from_static(b"Not Found")))
                     .unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let has_extension = path.contains('.') && !path.ends_with('/');
if has_extension {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.header("content-type", "text/plain")
.body(Full::new(Bytes::from_static(b"Not Found")))
.unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
let has_extension = path
.rsplit('/')
.next()
.is_some_and(|segment| segment.contains('.'));
if has_extension {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.header("content-type", "text/plain")
.body(Full::new(Bytes::from_static(b"Not Found")))
.unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/admin/static_files.rs` around lines 49 - 55, The current asset-detection
uses path.contains('.') which treats any dot anywhere as an asset; change it to
inspect only the final path segment: compute the last segment from the path
(e.g. let last_segment = path.rsplit('/').next().unwrap_or(path)) and then set
has_extension based on last_segment.contains('.') (still account for trailing
'/' as before), replacing the existing has_extension logic in the code around
the variable `path` in static_files.rs so dotted route segments don't
incorrectly 404.

Comment on lines +476 to +481
let parsed: Vec<f32> = text
.split(',')
.filter(|s| !s.is_empty())
.filter_map(|s| s.trim().parse::<f32>().ok())
.collect();
if parsed.len() != dim {

⚠️ Potential issue | 🟠 Major

Reject malformed CSV query vectors; current parser silently drops invalid tokens.

filter_map(...parse().ok()) discards bad elements, so malformed input can still pass the dimension check (e.g., dim=3, "1,2,bad,3" becomes [1,2,3]). Parse incrementally and fail on the first invalid token; also stop early when element count exceeds dim.

Proposed fix
-        let parsed: Vec<f32> = text
-            .split(',')
-            .filter(|s| !s.is_empty())
-            .filter_map(|s| s.trim().parse::<f32>().ok())
-            .collect();
-        if parsed.len() != dim {
-            return Frame::Error(Bytes::from_static(
-                b"ERR query vector dimension mismatch",
-            ));
-        }
-        parsed
+        let mut parsed: SmallVec<[f32; 64]> = SmallVec::with_capacity(dim.min(64));
+        for token in text.split(',').map(str::trim).filter(|s| !s.is_empty()) {
+            let value = match token.parse::<f32>() {
+                Ok(v) => v,
+                Err(_) => {
+                    return Frame::Error(Bytes::from_static(
+                        b"ERR query vector dimension mismatch",
+                    ))
+                }
+            };
+            parsed.push(value);
+            if parsed.len() > dim {
+                return Frame::Error(Bytes::from_static(
+                    b"ERR query vector dimension mismatch",
+                ));
+            }
+        }
+        if parsed.len() != dim {
+            return Frame::Error(Bytes::from_static(
+                b"ERR query vector dimension mismatch",
+            ));
+        }
+        parsed.into_vec()

As per coding guidelines: src/{command,protocol}/**/*.rs: “Use SmallVec and itoa instead of allocating Vec and format!() on hot paths. Vec::with_capacity() is acceptable for result building at the end of a command path.”
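The failure mode and the fix can be illustrated with a plain-`Vec` sketch (the PR's version uses `SmallVec` per the hot-path guideline); the function name `parse_query_vector` and the `Result` return type are illustrative, not the PR's actual signature:

```rust
/// Strict CSV float parsing: reject the input on the first invalid token
/// instead of silently dropping it, and bail out early once the element
/// count exceeds the expected dimension.
fn parse_query_vector(text: &str, dim: usize) -> Result<Vec<f32>, &'static str> {
    let mut parsed = Vec::with_capacity(dim);
    for token in text.split(',').map(str::trim).filter(|s| !s.is_empty()) {
        let value: f32 = token
            .parse()
            .map_err(|_| "ERR query vector dimension mismatch")?;
        if parsed.len() == dim {
            // Already have `dim` elements; one more is a mismatch.
            return Err("ERR query vector dimension mismatch");
        }
        parsed.push(value);
    }
    if parsed.len() != dim {
        return Err("ERR query vector dimension mismatch");
    }
    Ok(parsed)
}

fn main() {
    // The lenient filter_map version turns "1,2,bad,3" into [1.0, 2.0, 3.0]
    // and passes a dim=3 length check; the strict parser rejects it.
    assert!(parse_query_vector("1,2,bad,3", 3).is_err());
    assert_eq!(parse_query_vector("1, 2, 3", 3).unwrap(), vec![1.0, 2.0, 3.0]);
    // Too many elements fails before parsing the whole input.
    assert!(parse_query_vector("1,2,3,4", 3).is_err());
}
```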

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/command/vector_search/mod.rs` around lines 476 - 481, the CSV vector parser builds `parsed` via `filter_map(parse().ok())`, which silently drops invalid tokens. Parse tokens one by one instead: trim each token, attempt `parse::<f32>()`, and return an error immediately on the first parse failure. Push valid values into a SmallVec (or `Vec::with_capacity(dim)` if this is not a hot path), and stop early with an error once the token count exceeds `dim`. After parsing succeeds, still check that the final length equals `dim` exactly.

@TinDang97 TinDang97 force-pushed the feat/clients-connects branch from 489846a to 3d6b3f1 Compare April 12, 2026 14:01
HTTP/WebSocket gateway with REST API, WS-to-RESP3 bridge, SSE metrics
streaming. React 19 console (7 views: Dashboard, Browser, Console,
Vector Explorer, Graph Explorer, Memory, Help) served at /ui/ via
rust-embed. 50.9 KB gzipped initial bundle.

- Real-time Dashboard: 7 widgets driven by SSE at 1 Hz
- KV Data Browser: namespace tree, virtual-scrolled keys, type editors
- Query Console: Monaco + RESP/Cypher syntax, 233-cmd autocomplete
- Vector 3D Explorer: UMAP projection, HNSW overlay, KNN search
- Graph 3D Explorer: force-directed layout, Cypher integration
- Memory: treemap, slowlog, command stats
- Built-in Help guide with seed examples

Server additions:
- FLUSHALL/FLUSHDB/DBSIZE/DEBUG/MEMORY USAGE commands
- Multi-shard SCAN fan-out with composite cursor
- Admin-port hardening: Bearer auth, CORS allowlist, rate limiting
- FT.SEARCH CSV float fallback for JSON bridge compatibility

CI/review fixes:
- CHANGELOG.md v0.1.5 entry
- Dockerfile RUST_VERSION 1.88→1.94
- console-integration.yml rustup fix
- release.yml cargo-cyclonedx flag fix
- Playwright E2E hardened for headless CI
- CodeRabbit/Qodo review findings addressed (fetchSlowlog, SSE CORS,
  auth token parsing, scan_fanout error, static_files SPA fallback,
  HNSW trace layer-0 cap, HashEditor/ListEditor error handling)

56 Vitest tests + 11 Playwright specs pass. Zero clippy warnings.
@TinDang97 TinDang97 force-pushed the feat/clients-connects branch from 3d6b3f1 to 077667d Compare April 12, 2026 14:08
@pilotspacex-byte pilotspacex-byte merged commit 08b1562 into main Apr 12, 2026
7 checks passed