
v0.1.3 Production Readiness — Phases 92-105 (gap closure)#65

Merged
TinDang97 merged 31 commits into main from v0.1.3-phases-92-100 on Apr 10, 2026

Conversation


@TinDang97 TinDang97 commented Apr 9, 2026

Summary

Phases 92-105 of the v0.1.3 Production Readiness milestone. Builds on PR #64 (Phases 87-91).

Score: 41/53 requirements satisfied (77%) — all correctness and security foundations complete. Remaining 12 items are architectural refactors, packaging infrastructure, or wall-clock gated (24h/7d/4w soak tests).

Phases delivered

| Phase | Deliverables |
| --- | --- |
| 92: Observability | Prometheus `/metrics` via `metrics-exporter-prometheus`, `SLOWLOG GET/LEN/RESET/HELP`, `--admin-port`, `--check-config`, `/healthz` + `/readyz` admin HTTP server, extended `INFO` (Server/Clients/Memory/Stats/CPU), tracing spans, command latency histograms wired into all 3 handlers |
| 93: Disk-Offload | Pre-completed (PR #43) |
| 94: Durability Proof | Crash-injection matrix, torn-write WAL v3 tests (4/4 CRC pass), Jepsen-lite linearizability harness, backup/restore workflow, disk-offload crash test |
| 95: Replication | PSYNC partial resync, full resync outside backlog, network partition recovery, kill-restart parity, replica promotion |
| 96: Compat Matrix | Client compat CI (redis-py, go-redis, ioredis, node-redis), docs/redis-compat.md, 24 Redis compat integration tests, vector client smoke script |
| 97: Perf Gate | Criterion regression gate CI with baseline caching, RSS-per-key memory bench script |
| 98: Security | deny.toml, SECURITY.md, docs/THREAT-MODEL.md (5 attacker classes), docs/security/lua-sandbox.md (full mlua audit), TLS cipher freeze, SBOM + cosign in release pipeline |
| 99: Release Eng | docs/versioning.md, 6 operator runbooks, CHANGELOG CI gate, release pipeline checksums |
| 101: Observability Wiring | `HEALTHZ` + `READYZ` commands in dispatch table, `record_command()` in all handlers, replication lag metric |
| 103: Durability | Jepsen-lite harness (4 writers, 3 kill cycles), full resync + partition tests |
| 104: Compat & Perf | tests/redis_compat.rs (24 tests), scripts/bench-memory.sh, scripts/test-vector-clients.sh |
| 105: Release Pipeline | User docs (3 guides), CHANGELOG gate, release checksums, upgrade test stub |

Key stats

  • 72 files changed, +6386/-172 lines
  • 26 commits (phases + PR review fixes + gap closure)
  • New dependencies: metrics 0.24, metrics-exporter-prometheus 0.16
  • New CI workflows: fuzz.yml (PR + nightly), compat.yml (client matrix), bench-gate.yml (regression), changelog-gate.yml

Deferred to v0.1.4 (12 items)

  • TRACE-01: tracing spans (invasive)
  • CONFIG-02: TLS SIGHUP reload
  • HYGIENE-02: ConnectionCore extraction (multi-day refactor)
  • HYGIENE-03: lib.rs allow reduction (27 files)
  • PERF-02/05: 24h HDR + 7-day soak (wall-clock)
  • PERF-04: x86_64 monoio accept-loop fix
  • REL-03/07: deb/rpm/Docker packaging + full release pipeline
  • GA-01/02: 4-week RC soak

Test plan

  • cargo clippy -- -D warnings (both feature sets)
  • cargo test --lib (1882+ unit tests)
  • cargo test --test durability_tests (torn-write CRC validation)
  • SLOWLOG: redis-cli SLOWLOG GET, SLOWLOG LEN, SLOWLOG RESET
  • Health: redis-cli HEALTHZ → +OK, redis-cli READYZ → +OK after startup
  • INFO: redis-cli INFO → parses with redis-py parse_info()
  • Metrics: curl http://localhost:6380/metrics (when --admin-port 6380)
  • Config: ./moon --check-config → exits 0 without binding


coderabbitai Bot commented Apr 9, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds an admin observability subsystem (Prometheus exporter, HTTP admin server, metrics helpers, global slowlog), CLI flags and startup hooks, widespread metrics/slowlog instrumentation, robustness/lint adjustments, new durability/replication tests, operational/security docs and policies, cargo-deny config, and new CI workflows including performance, compatibility, and changelog gates.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| CI Workflows | `.github/workflows/bench-gate.yml`, `.github/workflows/compat.yml`, `.github/workflows/changelog-gate.yml`, `.github/workflows/release.yml` | Add performance gating (Criterion benches/artifact), client compatibility matrix jobs, changelog enforcement, and SBOM + cosign signing in the release pipeline. |
| Security & Policy | `SECURITY.md`, `deny.toml` | Add project security policy and cargo-deny configuration (advisories, license rules, registry/git restrictions). |
| Cargo / Dependencies | `Cargo.toml` | Refine Tokio feature scoping; add metrics, Prometheus exporter, and HTTP/hyper-related deps. |
| Admin Observability | `src/admin/mod.rs`, `src/admin/metrics_setup.rs`, `src/admin/http_server.rs`, `src/admin/slowlog.rs` | New admin module: global metrics recorder, HTTP admin server (`/metrics`, `/healthz`, `/readyz`), metric recording helpers, and in-memory SLOWLOG implementation with a public API. |
| Config & Startup | `src/config.rs`, `src/main.rs`, `src/lib.rs` | New CLI flags (`--admin-port`, `--slowlog-*`, `--check-config`), early persistence dir check, init metrics/slowlog at startup, readiness flag set after shard recovery; export admin module. |
| Instrumentation | `src/command/mod.rs`, `src/server/conn/handler_sharded.rs` | Wrap dispatch/dispatch_read with timing/metrics and error counters, add SLOWLOG command routing, record connection open/close, sample slowlog on slow commands, and add tracing spans. |
| ACL / Lock Safety | `src/command/acl.rs`, `src/command/connection.rs`, `src/server/conn/shared.rs` | Replace many unconditional RwLock unwraps with poison-aware/fallible locking and return `ERR internal ...` on lock failures. |
| Protocol & Parsing | `src/protocol/parse.rs` | Add strict integer parsing (`strict_atoi`), tighten RESP3 Null/Boolean parsing, and related parsing tests. |
| Storage & Robustness | `src/storage/*`, `src/persistence/redis_rdb.rs`, `src/tls.rs` | Clippy allow annotations for targeted try_into/unwrap sites, safer float ordering, TLS cipher-suite provider use, and small safety refactors. |
| Shard / SPSC / Tracing | `src/shard/*`, `src/shard/spsc_handler.rs` | Add clippy allow attributes and tracing instrumentation on shard/SPSC handlers. |
| Durability & Replication Tests | `tests/durability/*`, `tests/replication_hardening.rs`, `tests/durability_tests.rs` | Add multiple ignored Unix/integration tests: backup/restore, crash matrix, torn-write/CRC checks, jepsen-lite, and replication hardening scenarios. |
| Docs & Runbooks | `docs/*`, `docs/runbooks/*` | Add threat model, Redis compatibility, versioning, Lua sandbox audit, and several operational runbooks (AOF recovery, WAL rotation/disk-full, OOM during snapshot, rolling restart, TLS cert rotation, etc.). |
| Scripts & Misc | `scripts/audit-unwrap.sh`, `.gitignore` | Tighten unwrap audit script logic and update gitignore to ignore fuzz and `shard-*/`. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client as Client (TCP)
  participant Server as Moon Server
  participant Dispatch as Dispatch Path
  participant Admin as Admin HTTP/metrics
  participant Prom as Prometheus

  rect rgba(200,230,255,0.5)
    Client->>Server: TCP command
    Server->>Dispatch: dispatch(cmd) / dispatch_read(cmd)
    Dispatch->>Dispatch: measure latency & result
    Dispatch->>Admin: record_command / record_command_error
    Dispatch->>Admin: maybe_record slowlog entry
    Server->>Admin: record_connection_opened / record_connection_closed
  end

  rect rgba(220,255,220,0.5)
    Prom->>Admin: scrape /metrics
    Admin->>Prom: return Prometheus metrics text
  end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested labels

enhancement, dependencies

🐰
I logged each hop and timed each chase,
Slowlogs nibble commands with careful grace,
Metrics bloom in Prometheus light,
Tests guard data through crash and night,
Carrots counted, exports bright — hop, hop, delight! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description is comprehensive with a detailed summary of phases delivered, key statistics, test plan, and objectives. However, it does not follow the provided template structure with the required sections (Summary, Checklist, Performance Impact, Notes). | Restructure the description to match the template: add an explicit Summary section, a Checklist with all four items, a Performance Impact section, and a Notes section. Current content can be reorganized into these sections. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Title check | ✅ Passed | The title clearly and concisely captures the main change: v0.1.3 Production Readiness covering Phases 92-105 (gap closure), which is the primary focus of the changeset. |



@qodo-code-review

Review Summary by Qodo

Phase 92-99: Observability, Durability Testing, Security Hardening

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add Prometheus metrics infrastructure with admin HTTP port
• Implement Redis-compatible SLOWLOG with per-shard ring buffers
• Add durability test matrix for crash recovery scenarios
• Create security documentation and threat model
• Add client compatibility CI workflows and runbooks
Diagram
```mermaid
flowchart LR
  A["Config Flags"] -->|admin_port, slowlog_*| B["Admin Module"]
  B -->|metrics_setup| C["Prometheus Exporter"]
  B -->|slowlog| D["SLOWLOG Commands"]
  E["Crash Injection"] -->|SIGKILL| F["Recovery Tests"]
  F -->|DBSIZE Parity| G["Durability Verification"]
  H["Security Docs"] -->|Threat Model| I["Risk Matrix"]
  J["CI Workflows"] -->|redis-py, go-redis| K["Compatibility Matrix"]
```


File Changes

1. src/admin/metrics_setup.rs ✨ Enhancement +186/-0

Prometheus metrics initialization and recording helpers

src/admin/metrics_setup.rs


2. src/admin/mod.rs ✨ Enhancement +7/-0

Admin HTTP server module structure

src/admin/mod.rs


3. src/admin/slowlog.rs ✨ Enhancement +254/-0

Redis-compatible SLOWLOG with per-shard buffers

src/admin/slowlog.rs


4. src/config.rs ⚙️ Configuration changes +16/-0

Add admin_port, slowlog, check-config flags

src/config.rs


5. src/lib.rs ✨ Enhancement +1/-0

Export new admin module

src/lib.rs


6. src/main.rs ✨ Enhancement +9/-0

Initialize metrics and handle check-config flag

src/main.rs


7. tests/durability/backup_restore.rs 🧪 Tests +118/-0

BGSAVE → copy → restore data parity test

tests/durability/backup_restore.rs


8. tests/durability/crash_matrix.rs 🧪 Tests +238/-0

Crash injection matrix for persistence modes

tests/durability/crash_matrix.rs


9. tests/durability/mod.rs 🧪 Tests +9/-0

Durability test infrastructure module

tests/durability/mod.rs


10. tests/durability/torn_write.rs 🧪 Tests +106/-0

WAL v3 torn-write detection via CRC32C

tests/durability/torn_write.rs


11. tests/durability_tests.rs 🧪 Tests +6/-0

Durability test suite entry point

tests/durability_tests.rs


12. tests/replication_hardening.rs 🧪 Tests +217/-0

Replication PSYNC2 hardening tests

tests/replication_hardening.rs


13. .github/workflows/bench-gate.yml ⚙️ Configuration changes +39/-0

Criterion regression gate for critical benchmarks

.github/workflows/bench-gate.yml


14. .github/workflows/compat.yml ⚙️ Configuration changes +113/-0

Client compatibility CI for redis-py and go-redis

.github/workflows/compat.yml


15. Cargo.toml Dependencies +2/-0

Add metrics and metrics-exporter-prometheus dependencies

Cargo.toml


16. SECURITY.md 📝 Documentation +55/-0

Vulnerability disclosure policy and security measures

SECURITY.md


17. deny.toml ⚙️ Configuration changes +38/-0

cargo-deny configuration for supply chain security

deny.toml


18. docs/THREAT-MODEL.md 📝 Documentation +128/-0

Threat model with attacker classes and risk matrix

docs/THREAT-MODEL.md


19. docs/redis-compat.md 📝 Documentation +77/-0

Redis protocol and command compatibility matrix

docs/redis-compat.md


20. docs/runbooks/corrupted-aof-recovery.md 📝 Documentation +61/-0

Operator runbook for corrupted AOF recovery

docs/runbooks/corrupted-aof-recovery.md


21. docs/runbooks/disk-full-during-wal-rotation.md 📝 Documentation +47/-0

Operator runbook for disk full during WAL rotation

docs/runbooks/disk-full-during-wal-rotation.md


22. docs/runbooks/oom-during-snapshot.md 📝 Documentation +47/-0

Operator runbook for OOM during BGSAVE

docs/runbooks/oom-during-snapshot.md


23. docs/runbooks/replica-fell-behind.md 📝 Documentation +56/-0

Operator runbook for replication lag recovery

docs/runbooks/replica-fell-behind.md


24. docs/security/lua-sandbox.md 📝 Documentation +104/-0

Lua sandbox audit and CVE review

docs/security/lua-sandbox.md


25. docs/versioning.md 📝 Documentation +57/-0

SemVer policy and format versioning guarantees

docs/versioning.md




qodo-code-review Bot commented Apr 9, 2026

Code Review by Qodo

🐞 Bugs (2)   📘 Rule violations (0)   📎 Requirement gaps (0)   🎨 UX Issues (0)
≡ Correctness (1) · ☼ Reliability (1)



Action required

1. libc::kill unsafe lacks SAFETY 📘
Description
A new unsafe block calls libc::kill without a nearby // SAFETY: justification, and it is not
#[cfg(target_os = "linux")]-guarded. This increases UB/audit risk and can break non-Linux builds
when compiling tests.
Code

tests/durability/crash_matrix.rs[R122-124]

+    unsafe {
+        libc::kill(server.id() as i32, libc::SIGKILL);
+    }
Evidence
PR Compliance ID 2 requires every unsafe block to have a nearby // SAFETY: comment explaining
invariants; the added unsafe { libc::kill(...) } has none. PR Compliance ID 9 requires
Linux-only/libc::-dependent code to be properly cfg-guarded for non-Linux compilation; this added
libc::kill call is unguarded.

CLAUDE.md
tests/durability/crash_matrix.rs[122-124]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`tests/durability/crash_matrix.rs` introduces an `unsafe { libc::kill(...) }` block without a `// SAFETY:` justification comment, and the `libc` call is not guarded for non-Linux builds.
## Issue Context
- Compliance requires every `unsafe` to document its invariants.
- The repo must compile on non-Linux hosts; direct `libc::kill` usage should be `#[cfg(target_os = "linux")]`-guarded with a compile-time fallback (e.g., `Child::kill()` or skipping the test/module).
## Fix Focus Areas
- tests/durability/crash_matrix.rs[122-124]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
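The suggested fix — a SAFETY comment plus a non-Linux fallback — can be sketched without pulling in `libc` at all, using std's `Child::kill` (which delivers SIGKILL on Unix). The helper name `hard_kill` is hypothetical; the real test may still prefer raw `libc::kill` behind a cfg guard:

```rust
use std::process::{Child, Command};

/// Hypothetical sketch of the reviewed fix: document the invariant a raw
/// libc::kill would rely on, but use the portable std API so the test also
/// compiles on non-Linux hosts.
fn hard_kill(child: &mut Child) -> std::io::Result<()> {
    // SAFETY (for a libc::kill variant): `child.id()` is the PID of a child
    // we spawned and have not yet waited on, so the PID cannot have been
    // recycled; SIGKILL takes no pointers. A real libc::kill call would sit
    // behind #[cfg(target_os = "linux")] with this fallback elsewhere.
    child.kill()
}

fn main() -> std::io::Result<()> {
    let mut child = Command::new("sleep").arg("30").spawn()?;
    hard_kill(&mut child)?;
    // On Unix, a SIGKILL-ed child reports an unsuccessful exit status.
    assert!(!child.wait()?.success());
    Ok(())
}
```

`Child::kill` already abstracts over the platform, so the cfg guard is only needed if the test truly requires signal-level control (e.g., a signal other than SIGKILL).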


2. SLOWLOG never dispatched 🐞
Description
SLOWLOG is declared in command metadata and a handler exists, but the main command dispatcher has no
SLOWLOG branch, so clients will receive an "unknown command" error and slowlog functionality is
unreachable.
Code

src/admin/slowlog.rs[R138-193]

+/// Handle the SLOWLOG command (GET/LEN/RESET/HELP).
+pub fn handle_slowlog(slowlog: &Slowlog, args: &[Frame]) -> Frame {
+    if args.is_empty() {
+        return Frame::Error(Bytes::from_static(
+            b"ERR wrong number of arguments for 'slowlog' command",
+        ));
+    }
+
+    let subcmd = match &args[0] {
+        Frame::BulkString(b) => b.to_ascii_uppercase(),
+        _ => {
+            return Frame::Error(Bytes::from_static(b"ERR invalid slowlog subcommand"));
+        }
+    };
+
+    match subcmd.as_slice() {
+        b"GET" => {
+            let count = if args.len() > 1 {
+                match &args[1] {
+                    Frame::BulkString(b) => atoi::atoi::<usize>(b),
+                    Frame::Integer(n) => Some(*n as usize),
+                    _ => None,
+                }
+            } else {
+                None
+            };
+
+            let entries = slowlog.get(count);
+            let frames: Vec<Frame> = entries.iter().map(entry_to_frame).collect();
+            Frame::Array(crate::protocol::FrameVec::from(frames))
+        }
+        b"LEN" => Frame::Integer(slowlog.len() as i64),
+        b"RESET" => {
+            slowlog.reset();
+            Frame::SimpleString(Bytes::from_static(b"OK"))
+        }
+        b"HELP" => {
+            let help = vec![
+                Frame::BulkString(Bytes::from_static(b"SLOWLOG GET [<count>]")),
+                Frame::BulkString(Bytes::from_static(
+                    b"    Return top <count> entries from the slowlog (default 10).",
+                )),
+                Frame::BulkString(Bytes::from_static(b"SLOWLOG LEN")),
+                Frame::BulkString(Bytes::from_static(
+                    b"    Return the number of entries in the slowlog.",
+                )),
+                Frame::BulkString(Bytes::from_static(b"SLOWLOG RESET")),
+                Frame::BulkString(Bytes::from_static(b"    Reset the slowlog.")),
+            ];
+            Frame::Array(crate::protocol::FrameVec::from(help))
+        }
+        _ => Frame::Error(Bytes::from_static(
+            b"ERR unknown slowlog subcommand. Try SLOWLOG HELP.",
+        )),
+    }
+}
Evidence
Command metadata includes SLOWLOG, but dispatch()’s 7-letter command section only handles
COMMAND/HGETALL/etc and does not include SLOWLOG; unhandled commands fall through to err_unknown().
No connection-level interceptor adds SLOWLOG handling either, so the new slowlog handler is never
invoked.

src/command/metadata.rs[300-308]
src/command/mod.rs[451-496]
src/command/mod.rs[621-633]
src/admin/slowlog.rs[138-193]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`SLOWLOG` is implemented (`handle_slowlog`) and advertised (metadata), but it is not reachable because it is not dispatched anywhere.
### Issue Context
- `src/command/metadata.rs` declares `SLOWLOG`.
- `src/command/mod.rs` does not dispatch `SLOWLOG` (7-letter bucket doesn’t include it), so it falls through to `err_unknown`.
### Fix Focus Areas
- Add a `SLOWLOG` dispatch path in the appropriate layer:
- If it is shard-local state: handle in connection handler(s) (like `BGSAVE`), routing to the shard’s slowlog.
- If it is pure command dispatch: add `SLOWLOG` to `crate::command::dispatch()`.
- Ensure `SLOWLOG GET` merges across shards if that’s the intended behavior.
### Fix Focus Areas (code references)
- src/command/mod.rs[451-496]
- src/command/mod.rs[621-633]
- src/admin/slowlog.rs[138-193]
- src/server/conn/handler_sharded.rs[1236-1278]

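The gap can be visualized with a toy length-bucketed dispatcher. The bucket layout mirrors the review's description of `dispatch()` in `src/command/mod.rs`, but every name and return value here is illustrative only:

```rust
/// Toy model of a length-bucketed dispatcher gaining a SLOWLOG arm. The real
/// dispatch() returns frames and routes to handle_slowlog; strings stand in
/// for handler results here.
fn dispatch(cmd: &[u8]) -> &'static str {
    match cmd.len() {
        7 => match cmd {
            b"COMMAND" => "command-docs",
            b"HGETALL" => "hash-get-all",
            // The missing arm the review flags: without it, SLOWLOG falls
            // through to the unknown-command error below even though it is
            // declared in command metadata.
            b"SLOWLOG" => "slowlog",
            _ => "ERR unknown command",
        },
        _ => "ERR unknown command",
    }
}

fn main() {
    assert_eq!(dispatch(b"SLOWLOG"), "slowlog");
    assert_eq!(dispatch(b"NOPE"), "ERR unknown command");
}
```

In a length-bucketed dispatcher, advertising a command in metadata does nothing by itself — each command needs an explicit arm in its length bucket, which is why the handler was unreachable.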


3. --check-config skips TLS validation 🐞
Description
The new --check-config path returns early before existing startup validation runs, so it can print
"Configuration is valid." even when required TLS flags are missing and normal startup would fail.
Code

src/main.rs[R37-41]

+    // --check-config: validate and exit without starting
+    if config.check_config {
+        info!("Configuration is valid.");
+        return Ok(());
+    }
Evidence
--check-config returns before the TLS config block that enforces --tls-cert-file and
--tls-key-file when --tls-port is set, so invalid TLS configurations are incorrectly reported as
valid.

src/main.rs[37-41]
src/main.rs[55-71]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`--check-config` exits before running existing validation (notably TLS required-argument checks), leading to false positives.
### Issue Context
Normal startup validates TLS inputs when `--tls-port > 0`. With `--check-config`, that code is skipped.
### Fix Focus Areas
- Move the `--check-config` early-return to *after* all validation-only logic (TLS config build, address parsing, size parsing if any), but *before* binding sockets / spawning runtimes.
- Alternatively, factor validation into a `validate_config(&ServerConfig) -> anyhow::Result<()>` and call it from both paths.
### Fix Focus Areas (code references)
- src/main.rs[37-41]
- src/main.rs[55-77]

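The second suggestion — factoring a `validate_config` shared by both paths — might look roughly like this; the `ServerConfig` fields are assumptions based on the flags named in the review:

```rust
/// Sketch of the suggested refactor: shared validation callable from both
/// normal startup and --check-config, before any socket is bound.
/// Field names are hypothetical.
#[derive(Default)]
struct ServerConfig {
    tls_port: u16,
    tls_cert_file: Option<String>,
    tls_key_file: Option<String>,
    check_config: bool,
}

fn validate_config(cfg: &ServerConfig) -> Result<(), String> {
    if cfg.tls_port > 0 {
        if cfg.tls_cert_file.is_none() {
            return Err("--tls-cert-file is required when --tls-port is set".into());
        }
        if cfg.tls_key_file.is_none() {
            return Err("--tls-key-file is required when --tls-port is set".into());
        }
    }
    Ok(())
}

fn main() {
    // With validation running before the --check-config early exit, a
    // missing cert is reported instead of "Configuration is valid."
    let cfg = ServerConfig { tls_port: 6443, check_config: true, ..Default::default() };
    assert!(validate_config(&cfg).is_err());
}
```

Calling `validate_config` first in both code paths guarantees the two can never disagree about what counts as a valid configuration.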


4. Slowlog max_len=0 grows unbounded 🐞
Description
Slowlog::maybe_record() still pushes entries when max_len is 0, causing unbounded growth of the
VecDeque instead of keeping zero entries.
Code

src/admin/slowlog.rs[R96-101]

+        let mut entries = self.entries.lock();
+        if entries.len() >= self.max_len {
+            entries.pop_back();
+        }
+        entries.push_front(entry);
+    }
Evidence
When max_len == 0, the condition entries.len() >= self.max_len is always true; pop_back() on
an empty deque does nothing and push_front() always inserts, so the buffer can grow without bound.

src/admin/slowlog.rs[42-50]
src/admin/slowlog.rs[96-101]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`Slowlog::maybe_record()` does not respect `max_len == 0` and will grow the log without bound.
### Issue Context
The current eviction logic only pops one entry when `len >= max_len`, which fails when `max_len` is zero.
### Fix Focus Areas
- Add an early return in `maybe_record()` when `self.max_len == 0`.
- Optionally also guard in `new()` (e.g., cap/normalize max_len).
### Fix Focus Areas (code references)
- src/admin/slowlog.rs[52-63]
- src/admin/slowlog.rs[96-101]

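A self-contained sketch of the fixed eviction logic (entries reduced to a plain `u64` for brevity; the real `Slowlog` stores structured entries and its `lock()` appears to be a non-poisoning mutex, unlike the std `Mutex` used here):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Simplified model of the slowlog ring buffer with the two fixes the review
/// asks for: max_len == 0 records nothing, and the deque is trimmed down to
/// max_len before inserting.
struct Slowlog {
    entries: Mutex<VecDeque<u64>>,
    max_len: usize,
}

impl Slowlog {
    fn new(max_len: usize) -> Self {
        Self { entries: Mutex::new(VecDeque::new()), max_len }
    }

    fn maybe_record(&self, entry: u64) {
        // Early return fixes the unbounded-growth bug: with max_len == 0 the
        // old code's pop_back() on an empty deque was a no-op while
        // push_front() still inserted.
        if self.max_len == 0 {
            return;
        }
        let mut entries = self.entries.lock().unwrap();
        while entries.len() >= self.max_len {
            entries.pop_back();
        }
        entries.push_front(entry);
    }

    fn len(&self) -> usize {
        self.entries.lock().unwrap().len()
    }
}

fn main() {
    let disabled = Slowlog::new(0);
    for i in 0..100 {
        disabled.maybe_record(i);
    }
    assert_eq!(disabled.len(), 0); // stays empty, never grows

    let capped = Slowlog::new(3);
    for i in 0..10 {
        capped.maybe_record(i);
    }
    assert_eq!(capped.len(), 3); // bounded at max_len
}
```

This mirrors Redis's own `slowlog-max-len 0` semantics, where a zero limit disables recording rather than making the log unbounded.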



Remediation recommended

5. Metrics init leaves flag true 🐞
Description
init_metrics() sets METRICS_INITIALIZED=true before install(); if install() fails, recording helpers
will still run (and allocate label strings) even though the exporter never started.
Code

src/admin/metrics_setup.rs[R26-48]

+    // Install as the global recorder — panics if called twice
+    if METRICS_INITIALIZED
+        .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
+        .is_ok()
+    {
+        match builder
+            .with_http_listener(addr.parse::<std::net::SocketAddr>().unwrap_or_else(|_| {
+                tracing::warn!(
+                    "Invalid admin bind address '{}', using 0.0.0.0:{}",
+                    addr,
+                    admin_port
+                );
+                std::net::SocketAddr::from(([0, 0, 0, 0], admin_port))
+            }))
+            .install()
+        {
+            Ok(()) => {
+                tracing::info!("Admin metrics server listening on {}", addr);
+            }
+            Err(e) => {
+                tracing::error!("Failed to start metrics exporter: {}", e);
+            }
+        }
Evidence
The initialized flag is flipped via compare_exchange before calling install(), and is never reset
on the error path; the record_* helpers gate only on this flag.

src/admin/metrics_setup.rs[26-48]
src/admin/metrics_setup.rs[56-63]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
If Prometheus exporter installation fails, `METRICS_INITIALIZED` remains true, so metric recording paths continue running even though there is no working recorder.
### Issue Context
`METRICS_INITIALIZED` is used as the sole fast-path gate in all record_* helpers.
### Fix Focus Areas
- Only set `METRICS_INITIALIZED = true` after `install()` returns `Ok(())`.
- Or reset the flag back to false on the `Err(e)` branch.
- Consider logging the actual bound SocketAddr (parsed/fallback) on success.
### Fix Focus Areas (code references)
- src/admin/metrics_setup.rs[26-48]
- src/admin/metrics_setup.rs[56-72]

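The fix amounts to flipping the flag only after `install()` succeeds. A minimal model with a stubbed installer (the real code would keep its `compare_exchange` double-init guard around this, and `install` here is a stand-in for the exporter's `install()`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static METRICS_INITIALIZED: AtomicBool = AtomicBool::new(false);

/// Sketch of the corrected ordering: only mark metrics as initialized after
/// the (stubbed) installer reports success, so the record_* fast path stays
/// disabled when the exporter never started.
fn init_metrics(install: impl FnOnce() -> Result<(), String>) {
    match install() {
        Ok(()) => {
            METRICS_INITIALIZED.store(true, Ordering::SeqCst);
        }
        Err(e) => {
            eprintln!("Failed to start metrics exporter: {e}");
            // Flag stays false: no label allocation on the hot path.
        }
    }
}

/// Returns whether a metric was actually recorded (true = recorder active).
fn record_command(cmd: &str) -> bool {
    if !METRICS_INITIALIZED.load(Ordering::Relaxed) {
        return false;
    }
    let _label = cmd.to_ascii_lowercase(); // stand-in for counter!/histogram!
    true
}

fn main() {
    init_metrics(|| Err("bind failed".into()));
    assert!(!record_command("GET")); // exporter failed: helpers short-circuit
    init_metrics(|| Ok(()));
    assert!(record_command("GET")); // exporter up: recording proceeds
}
```

The same shape also allows retrying initialization later, since a failed attempt leaves the flag in its initial state.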


6. Metrics label lowercasing allocates 🐞
Description
record_command() and record_command_error() allocate a new String via to_ascii_lowercase() (twice
per record_command call), increasing overhead on every recorded command when metrics are enabled.
Code

src/admin/metrics_setup.rs[R56-62]

+pub fn record_command(cmd: &str, latency_us: u64) {
+    if !METRICS_INITIALIZED.load(Ordering::Relaxed) {
+        return;
+    }
+    counter!("moon_commands_total", "cmd" => cmd.to_ascii_lowercase()).increment(1);
+    histogram!("moon_command_duration_microseconds", "cmd" => cmd.to_ascii_lowercase())
+        .record(latency_us as f64);
Evidence
The command label is produced by cmd.to_ascii_lowercase() directly inside the metrics macros; in
record_command this happens once for the counter and once for the histogram.

src/admin/metrics_setup.rs[54-72]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Per-command metrics recording allocates label Strings, including duplicating the lowercase conversion in `record_command()`.
### Issue Context
When metrics are enabled, this can add measurable overhead at high QPS.
### Fix Focus Areas
- Compute `let cmd_lc = cmd.to_ascii_lowercase();` once and reuse for both metrics in `record_command`.
- If feasible, avoid allocation entirely by using a pre-normalized `&'static str` for known commands or by recording without per-command labels.
### Fix Focus Areas (code references)
- src/admin/metrics_setup.rs[54-72]

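Both suggestions — compute the lowercase label once, and skip allocation entirely for known-hot commands — can be modeled like this. `command_label` is a hypothetical helper, and the metrics macro calls are left as comments since the `metrics` crate is not pulled in here:

```rust
use std::borrow::Cow;

/// Hypothetical label normalizer: borrows a pre-lowercased &'static str for
/// hot commands (zero allocation) and falls back to one allocation for the
/// long tail. The real code would cover many more commands.
fn command_label(cmd: &str) -> Cow<'static, str> {
    match cmd {
        "GET" | "get" => Cow::Borrowed("get"),
        "SET" | "set" => Cow::Borrowed("set"),
        _ => Cow::Owned(cmd.to_ascii_lowercase()),
    }
}

fn record_command(cmd: &str, latency_us: u64) {
    // Computed once, reused for both metrics — the review's core complaint
    // was two to_ascii_lowercase() allocations per call.
    let label = command_label(cmd);
    // counter!("moon_commands_total", "cmd" => label.clone()).increment(1);
    // histogram!("moon_command_duration_microseconds", "cmd" => label)
    //     .record(latency_us as f64);
    let _ = (label, latency_us);
}

fn main() {
    record_command("GET", 120);
    assert_eq!(command_label("SLOWLOG"), "slowlog");
    // Hot path borrows a static string rather than allocating:
    assert!(matches!(command_label("GET"), Cow::Borrowed(_)));
}
```

At high QPS the per-call saving is small but multiplied by every command, which is why the review flags it even though it is only a "remediation recommended" item.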



Comment thread: tests/durability/crash_matrix.rs (outdated)
Comment thread: src/admin/slowlog.rs, lines +138 to +193 — duplicate of Qodo finding 2 ("SLOWLOG never dispatched") above
Comment thread: src/main.rs (outdated)
Comment thread: src/admin/slowlog.rs

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

🟡 Minor comments (13)
tests/durability/torn_write.rs-63-70 (1)

63-70: ⚠️ Potential issue | 🟡 Minor

Strengthen the torn-write expectation to exact recovered records.

The current check (records.len() >= 2) can still pass if replay incorrectly accepts an extra corrupted/truncated tail record. Assert the exact recovered sequence instead.

💡 Proposed fix
-        assert!(
-            records.len() >= 2,
-            "Expected at least 2 records recovered, got {}",
-            records.len()
-        );
-        assert_eq!(records[0], 1);
-        assert_eq!(records[1], 2);
+        assert_eq!(
+            records,
+            vec![1, 2],
+            "Expected replay to stop at torn record boundary"
+        );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/torn_write.rs` around lines 63 - 70, Replace the loose check
that allows extra recovered data with a strict assertion of the exact expected
sequence: in the torn_write test where the local variable records is validated,
change the current assertions (records.len() >= 2 and subsequent element checks)
to assert the exact recovered vector (e.g., assert_eq!(records, vec![1, 2]) or
equivalent exact-length + element assertions) so the test fails if any
truncated/corrupted tail record is accepted.
<details>
<summary>tests/durability/crash_matrix.rs-186-189 (1)</summary><blockquote>

`186-189`: _⚠️ Potential issue_ | _🟡 Minor_

**Update the run command in the test comment.**

The documented target `durability_crash_matrix` does not match this module-based test layout; the command should reference the actual integration target to avoid operator confusion.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/crash_matrix.rs` around lines 186 - 189, Update the test
comment's run command so it references the actual integration test target name
(the file-derived test target) instead of "durability_crash_matrix": replace the
example command string "cargo test --test durability_crash_matrix -- --ignored"
with "cargo test --test crash_matrix -- --ignored" so it matches the
crash_matrix.rs integration test; edit the comment near the top of
tests/durability/crash_matrix.rs accordingly.
```

</details>

</blockquote></details>
<details>
<summary>docs/THREAT-MODEL.md-120-120 (1)</summary><blockquote>

`120-120`: _⚠️ Potential issue_ | _🟡 Minor_

**Risk matrix status appears stale for Lua audit.**

Line 120 states “Audit pending (SEC-04)”, but this PR’s Phase 98 summary says the Lua sandbox/bindings audit was completed. Please reconcile to avoid mixed security posture messaging.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/THREAT-MODEL.md` at line 120, Update the risk matrix row that currently
reads "| Lua sandbox escape | Low | Critical (RCE) | **High** | Audit pending
(SEC-04) |" to reflect the completed Lua sandbox/bindings audit: change "Audit
pending (SEC-04)" to the appropriate completed status (e.g., "Audit completed
(SEC-04)" or "Mitigated (SEC-04)"), and optionally add a brief date or Phase 98
reference to avoid mixed messaging.
```

</details>

</blockquote></details>
<details>
<summary>docs/redis-compat.md-27-27 (1)</summary><blockquote>

`27-27`: _⚠️ Potential issue_ | _🟡 Minor_

**go-redis note overstates current CI coverage.**

Line 27 says “hash, pipelines”, but the current go-redis smoke test only checks basic string/hash ops. Please update the note or extend the test to include pipeline assertions.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/redis-compat.md` at line 27, The docs entry for go-redis overstates CI
coverage; either remove "pipelines" from the table row in docs/redis-compat.md
or actually add pipeline checks to the go-redis smoke test. If you choose docs:
edit the table row for "go-redis" to read "Basic ops, hash" (remove
"pipelines"). If you choose tests: update the go-redis smoke test (the test
named like "go-redis smoke test" or TestGoRedisSmoke) to perform a simple
pipeline round-trip (e.g., queue two commands in a pipeline, Exec, and assert
both responses) so the documentation claim is accurate. Ensure any new
assertions are added to the same CI-tested smoke test file and run in CI.
```

</details>

</blockquote></details>
<details>
<summary>docs/versioning.md-29-31 (1)</summary><blockquote>

`29-31`: _⚠️ Potential issue_ | _🟡 Minor_

**Add a language tag to the fenced code block.**

The code fence at Line 29 is missing a language specifier (markdownlint MD040).

<details>
<summary>💡 Proposed fix</summary>

````diff
-```
+```text
 Error: RDB version 2 is not supported by this Moon build (max: 1). Upgrade Moon first.
````

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/versioning.md` around lines 29 - 31, The fenced code block containing
the error message "Error: RDB version 2 is not supported by this Moon build
(max: 1). Upgrade Moon first." should include a language tag to satisfy
markdownlint MD040; add a language specifier such as `text` after the opening
triple backticks, leaving the block contents unchanged, to fix the lint warning.
```


</details>

</blockquote></details>
<details>
<summary>docs/security/lua-sandbox.md-45-45 (1)</summary><blockquote>

`45-45`: _⚠️ Potential issue_ | _🟡 Minor_

**Verify redis.log sanitization claim.**

Line 45 states the message "is sanitized", but reviewing `src/scripting/sandbox.rs` (context snippet 2), the implementation simply passes the message directly to `tracing::info!`:

```rust
tracing::info!(target: "lua_script", "{}", msg);
```

This doesn't appear to perform explicit sanitization. While `tracing` handles the message safely (no injection risk), the "sanitized" claim could be misleading. Consider removing or clarifying this safety note.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/security/lua-sandbox.md` at line 45, Update the docs entry for
`redis.log(level, msg)` to remove or clarify the "Safe — message is sanitized"
claim; instead state that the message is forwarded to the Rust tracing layer
without explicit sanitization (see `src/scripting/sandbox.rs` where the message
is passed to `tracing::info!(target: "lua_script", "{}", msg)`), or explicitly
document what guarantees `tracing` provides and that no application-level
sanitization is performed; edit the line in docs/security/lua-sandbox.md
accordingly and keep the reference to the `redis.log` API name so readers can
map it to the implementation.
```

</details>

</blockquote></details>
<details>
<summary>docs/runbooks/replica-fell-behind.md-5-7 (1)</summary><blockquote>

`5-7`: _⚠️ Potential issue_ | _🟡 Minor_

**Remove or update the non-existent replication lag metric reference on line 7.**

The metric `moon_replication_lag_bytes` is referenced in the runbook but does not exist in the codebase. The replication information layer only exposes `INFO replication` fields (e.g., `repl_backlog_first_byte_offset`, `repl_backlog_size`) via the INFO command, not as a Prometheus metric gauge. Either implement this metric as part of the observability work or remove the reference from line 7 and replace it with the actual available metrics (e.g., monitor `repl_backlog_first_byte_offset` from `INFO replication`).
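If the metric route is taken, the core computation is small. A minimal std-only sketch (the gauge name comes from this runbook; the parameter names are illustrative, and the actual `gauge!` call is left as a comment because it depends on the `metrics` crate):

```rust
/// Compute replication lag in bytes from the master's current write offset
/// and the replica's last acknowledged offset. Saturates at zero so a
/// replica that is momentarily ahead never reports negative lag.
fn replication_lag_bytes(master_repl_offset: u64, replica_ack_offset: u64) -> u64 {
    master_repl_offset.saturating_sub(replica_ack_offset)
}

fn main() {
    let lag = replication_lag_bytes(10_240, 8_192);
    // In the observability layer this would feed the documented gauge, e.g.:
    // gauge!("moon_replication_lag_bytes").set(lag as f64);
    println!("{lag}"); // 2048
}
```

Until such a gauge exists, the runbook should point at the `INFO replication` fields named above.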

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/runbooks/replica-fell-behind.md` around lines 5 - 7, The runbook
references a non-existent Prometheus metric moon_replication_lag_bytes; update
docs/runbooks/replica-fell-behind.md to remove that metric and instead point
users to the actual replication info fields exposed by the server (use INFO
replication and the fields repl_backlog_first_byte_offset and repl_backlog_size)
or, if you prefer to keep a metric reference, implement and instrument a new
gauge named moon_replication_lag_bytes in the observability layer—either
remove/replace the line to reference INFO replication fields or add the metric
implementation and ensure its name matches the docs.
```

</details>

</blockquote></details>
<details>
<summary>docs/runbooks/disk-full-during-wal-rotation.md-3-7 (1)</summary><blockquote>

`3-7`: _⚠️ Potential issue_ | _🟡 Minor_

**Update symptom to match actual error logging.**

The documented log message on line 5 (`"Error: WAL segment rotation failed: No space left on device"`) does not appear in the codebase. When WAL rotation fails due to disk full, operators will actually see one of these messages instead:
- `"WAL v3 sync failed: No space left on device"` (from timers.rs, event_loop.rs)
- `"Checkpoint WAL flush failed: No space left on device"` (from persistence_tick.rs)

Update the symptom to reflect what operators will actually observe in logs, or add explicit logging in segment.rs's `rotate_segment()` error path to emit the documented message.
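If the second option is preferred, the documented message can be emitted from the rotation error path. A minimal sketch (the `rotate_segment()` name comes from this comment; a plain formatter stands in for the `tracing::error!` call so the sketch stays std-only):

```rust
use std::io;

/// Format the disk-full rotation error exactly as the runbook documents it.
/// A real implementation would emit this via `tracing::error!` inside
/// `rotate_segment()`'s error branch (function name per the review comment).
fn rotation_failure_message(err: &io::Error) -> String {
    format!("Error: WAL segment rotation failed: {err}")
}

fn main() {
    // ENOSPC (28) is what a full disk surfaces as on Linux.
    let err = io::Error::from_raw_os_error(28);
    println!("{}", rotation_failure_message(&err));
}
```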

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/runbooks/disk-full-during-wal-rotation.md` around lines 3 - 7, The doc's
symptom line is inaccurate: when WAL rotation fails due to disk-full the code
emits messages like "WAL v3 sync failed: No space left on device" (timers.rs /
event_loop.rs) or "Checkpoint WAL flush failed: No space left on device"
(persistence_tick.rs), not "Error: WAL segment rotation failed: No space left on
device"; either update the symptom text to list the actual log messages
operators will see, referencing the existing emitters (timers.rs, event_loop.rs,
persistence_tick.rs), or modify segment.rs's rotate_segment() error path to also
log the documented message so it appears in logs (add a processLogger/error!
call in rotate_segment() that includes the disk-full error text and context).
```

</details>

</blockquote></details>
<details>
<summary>docs/runbooks/corrupted-aof-recovery.md-40-43 (1)</summary><blockquote>

`40-43`: _⚠️ Potential issue_ | _🟡 Minor_

**Manual truncation may discard recoverable data.**

The `head -c <offset>` approach discards everything after the corruption point. However, Moon's automatic recovery attempts to skip past corrupted bytes to find valid RESP frames beyond the corruption. Consider adding a note that manual truncation is more aggressive than automatic recovery and may result in additional data loss compared to letting Moon's skip-recovery handle it.
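For context, the skip-recovery idea can be illustrated with a toy scan that looks for the next plausible RESP array header past the corruption point — this is only a sketch of the concept, not Moon's actual parser:

```rust
/// Find the next plausible RESP command frame at or after `from`: a '*'
/// array header followed by an ASCII digit. Illustrates why skip-recovery
/// can salvage frames that `head -c <offset>` truncation would discard.
fn next_frame_candidate(buf: &[u8], from: usize) -> Option<usize> {
    (from..buf.len().saturating_sub(1))
        .find(|&i| buf[i] == b'*' && buf[i + 1].is_ascii_digit())
}

fn main() {
    // A valid SET frame (bytes 0..27), 4 bytes of garbage, then a valid GET frame.
    let aof = b"*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n\xff\xff\xff\xff*2\r\n$3\r\nGET\r\n$1\r\nk\r\n";
    // Corruption detected at offset 27; the GET frame beyond it is recoverable.
    let resume = next_frame_candidate(aof, 27);
    println!("{resume:?}"); // Some(31)
}
```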

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/runbooks/corrupted-aof-recovery.md` around lines 40 - 43, Add a
cautionary note to the "Use redis-check-aof equivalent (if available) or
truncate manually" section explaining that using head -c <offset> to truncate
appendonly.aof is aggressive and may discard recoverable data, and recommend
considering Moon's automatic skip-recovery (which attempts to skip corrupted
bytes and find valid RESP frames) before performing manual truncation of
appendonly.aof; reference the head -c <offset> command and the appendonly.aof
filename and suggest preferring Moon's skip-recovery unless truncation is
explicitly required.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/metrics_setup.rs-15-17 (1)</summary><blockquote>

`15-17`: _⚠️ Potential issue_ | _🟡 Minor_

**Doc comment mentions unimplemented endpoints.**

The comment states this "Also responds to `/healthz` (liveness) and `/readyz` (readiness)" but the code only installs the Prometheus exporter, which provides `/metrics`. The healthz/readyz endpoints are not implemented here. Either remove this claim or implement the endpoints.
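If implementing rather than trimming the comment, the responses such endpoints return are simple. A minimal sketch (route names from the doc comment; the readiness check is a placeholder, and wiring into the exporter's HTTP server is left out):

```rust
/// Build the HTTP/1.1 response for the admin health endpoints the doc
/// comment describes. The `ready` flag is a placeholder: a real server
/// would check e.g. that shards finished WAL replay before reporting ready.
fn health_response(path: &str, ready: bool) -> String {
    let (status, body) = match path {
        "/healthz" => ("200 OK", "ok"),
        "/readyz" if ready => ("200 OK", "ready"),
        "/readyz" => ("503 Service Unavailable", "not ready"),
        _ => ("404 Not Found", "not found"),
    };
    format!(
        "HTTP/1.1 {status}\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{body}",
        body.len()
    )
}

fn main() {
    println!("{}", health_response("/healthz", true).lines().next().unwrap());
}
```

Liveness intentionally ignores `ready`: a server that is up but still replaying should not be restarted by its orchestrator.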

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/metrics_setup.rs` around lines 15 - 17, The doc comment in
src/admin/metrics_setup.rs claims the server "Also responds to `/healthz`
(liveness) and `/readyz` (readiness)" but the current implementation only
installs the Prometheus exporter (the code path that registers the `/metrics`
handler); either remove that sentence from the module/function doc comment or
implement simple health endpoints: add handlers for `/healthz` and `/readyz`
alongside the existing Prometheus exporter registration (in the function that
sets up the HTTP listener / installs the exporter), returning 200 OK (and
appropriate readiness logic if available), and wire those routes into the same
server so the comment matches the implementation.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/slowlog.rs-155-163 (1)</summary><blockquote>

`155-163`: _⚠️ Potential issue_ | _🟡 Minor_

**Negative count wraps to large value instead of being handled explicitly.**

When `args[1]` is `Frame::Integer(n)` with a negative value, `*n as usize` wraps to a very large number. While this accidentally behaves like Redis (negative returns all entries), it's not intentional and could cause issues if `get()` implementation changes.

<details>
<summary>🛡️ Proposed fix to handle negative explicitly</summary>

```diff
             let count = if args.len() > 1 {
                 match &args[1] {
                     Frame::BulkString(b) => atoi::atoi::<usize>(b),
-                    Frame::Integer(n) => Some(*n as usize),
+                    Frame::Integer(n) if *n >= 0 => Some(*n as usize),
+                    Frame::Integer(_) => None, // negative means all entries
                     _ => None,
                 }
             } else {
                 None
             };
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/slowlog.rs` around lines 155 - 163, The code currently casts
Frame::Integer(n) to usize which wraps negative values; update the match for
args[1] to explicitly handle negatives: for Frame::Integer(n) return Some(n as
usize) only if *n >= 0, otherwise treat it as None (meaning "all entries"); keep
the existing atoi::atoi::<usize>(b) handling for Frame::BulkString(b) and ensure
the variable name count and the match on Frame::Integer(n) are updated
accordingly so negative integers do not wrap to large usize values.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/slowlog.rs-1-4 (1)</summary><blockquote>

`1-4`: _⚠️ Potential issue_ | _🟡 Minor_

**Doc comment describes unimplemented cross-shard merging.**

The module comment states "SLOWLOG GET merges across shards sorted by timestamp," but the implementation operates on a single `Slowlog` instance with no cross-shard aggregation. Either implement the merging logic or update the comment to reflect actual behavior.
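If the merging route is chosen, the core of a timestamp-ordered merge is straightforward. A minimal sketch (the entry shape and the `merge_across_shards` name are illustrative, not Moon's actual `Slowlog` types):

```rust
/// One slowlog entry: (unix timestamp in micros, command name). The field
/// shape is an assumption; Moon's real entries carry more detail.
type Entry = (u64, &'static str);

/// Merge per-shard slowlogs into one list sorted newest-first, capped at
/// `count` entries — the behavior the module comment currently promises
/// for SLOWLOG GET.
fn merge_across_shards(shards: &[Vec<Entry>], count: usize) -> Vec<Entry> {
    let mut all: Vec<Entry> = shards.iter().flatten().copied().collect();
    all.sort_by(|a, b| b.0.cmp(&a.0)); // newest first
    all.truncate(count);
    all
}

fn main() {
    let shards = vec![
        vec![(100, "GET"), (300, "HSET")],
        vec![(200, "SET")],
    ];
    let merged = merge_across_shards(&shards, 2);
    println!("{merged:?}"); // [(300, "HSET"), (200, "SET")]
}
```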

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/slowlog.rs` around lines 1 - 4, Module doc comment is incorrect: it
claims "SLOWLOG GET merges across shards sorted by timestamp" but the code only
operates on a single Slowlog instance. Fix by updating the module-level
documentation to reflect the actual behavior (per-instance/per-shard slowlog
only) or, if you prefer to implement the feature, add a cross-shard aggregation
routine (e.g., a new function like merge_across_shards or
Slowlog::merge_from_shards) and update SLOWLOG GET to collect entries from all
shard Slowlog instances and perform a timestamp-based merge before returning
results; reference the Slowlog type and the SLOWLOG GET command in the
doc/update so the comment matches the implementation.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/slowlog.rs-110-113 (1)</summary><blockquote>

`110-113`: _⚠️ Potential issue_ | _🟡 Minor_

**Missing `is_empty()` method will trigger clippy warning.**

The Slowlog struct defines a public `len()` method at lines 111-113 but lacks a corresponding `is_empty()` method. Under CI's `clippy -- -D warnings` policy, the `len_without_is_empty` lint will fail the build. Add an `is_empty()` method:

<details>
<summary>🔧 Proposed fix</summary>

```diff
     /// Get the number of entries.
     pub fn len(&self) -> usize {
         self.entries.lock().len()
     }

+    /// Check if the slowlog is empty.
+    pub fn is_empty(&self) -> bool {
+        self.entries.lock().is_empty()
+    }
+
     /// Reset (clear) all entries.
     pub fn reset(&self) {
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/slowlog.rs` around lines 110 - 113, Add a public is_empty() method
to Slowlog that mirrors len() to satisfy the clippy lint: implement pub fn
is_empty(&self) -> bool { self.entries.lock().is_empty() } so callers can check
emptiness without calling len() == 0; update the impl block that contains pub fn
len(&self) -> usize to include this new pub fn is_empty(&self) -> bool and
reference the same entries field and lock usage.
```

</details>

</blockquote></details>

</blockquote></details>

<details>
<summary>🧹 Nitpick comments (7)</summary><blockquote>

<details>
<summary>.github/workflows/compat.yml (1)</summary><blockquote>

`21-24`: **Startup readiness check is flaky with fixed sleep.**

A fixed `sleep 2` can intermittently fail on loaded runners. Use a bounded readiness loop against `127.0.0.1:6399` before launching client checks.




Also applies to: 82-85

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/compat.yml around lines 21 - 24, The startup step "Start
Moon" currently uses a fixed "sleep 2" after launching "./target/release/moon
--port 6399 --shards 1 &", which is flaky; replace the fixed sleep with a
bounded readiness loop that polls 127.0.0.1:6399 (e.g., using nc, curl, or bash
TCP connect) with a short interval and a total timeout (e.g., 30s) and exit
non‑zero if the port never becomes ready; apply the same change to the other
identical block referenced around the second occurrence (lines similar to 82-85)
so both startup waits use the readiness loop instead of sleep.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/mod.rs (1)</summary><blockquote>

`1-7`: **Doc comments mention endpoints that are deferred.**

The module documentation references `/healthz` (liveness) and `/readyz` (readiness) endpoints, but according to the PR objectives, these are noted as deferred items. Consider updating the doc comments to reflect what's currently implemented (`/metrics` only) to avoid confusion.

<details>
<summary>📝 Suggested doc comment update</summary>

```diff
 //! Admin HTTP server for observability endpoints.
 //!
-//! Serves `/metrics` (Prometheus), `/healthz` (liveness), `/readyz` (readiness)
-//! on a separate port from the RESP data port.
+//! Serves `/metrics` (Prometheus) on a separate port from the RESP data port.
+//!
+//! Planned endpoints (not yet implemented): `/healthz` (liveness), `/readyz` (readiness).
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/mod.rs` around lines 1 - 7, Update the module doc comment at the
top of src/admin/mod.rs to accurately reflect current functionality: mention
that the Admin HTTP server currently serves only the /metrics (Prometheus)
endpoint and remove or mark /healthz and /readyz as deferred; reference the
existing module names (metrics_setup and slowlog) in the doc comment so readers
know what is implemented and what remains to be added.
```

</details>

</blockquote></details>
<details>
<summary>docs/security/lua-sandbox.md (2)</summary><blockquote>

`25-29`: **Clarify `os` library status — partially restricted, not fully blocked.**

Line 28 states `os` is "**Blocked**", but reviewing `src/scripting/sandbox.rs` (context snippet 1), the implementation actually restricts `os.*` to `clock()` only rather than blocking it entirely. Consider updating the table to reflect this nuance:

<details>
<summary>📝 Suggested clarification</summary>

```diff
-| `os` | **Blocked** | Command execution, env vars, file ops |
+| `os` | **Restricted** | Only `os.clock()` allowed; `execute`, `exit`, `getenv`, etc. blocked |
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/security/lua-sandbox.md` around lines 25 - 29, Update the table row for
the Lua `os` library to reflect that it is not fully blocked but partially
restricted: indicate `os` as "Partially blocked" (or similar) and note that only
`os.clock()` is allowed while other `os.*` functions are blocked by the sandbox
implementation (see the `os.*` restriction and `clock()` allowance in the
sandbox code such as the `sandbox` logic that permits `os.clock()`).
```

</details>

---

`68-79`: **Consider adding practical guidance for checking vendored Lua versions during mlua upgrades.**

The document mentions monitoring mlua releases (line 97) and re-auditing when upgrading (line 104), but doesn't explain *how* to verify which Lua version a new mlua release vendors. Adding a brief note—such as "Check the mlua crate's Cargo.toml or release notes to find the vendored Lua version before upgrading"—would help maintainers quickly verify compatibility during updates.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/security/lua-sandbox.md` around lines 68 - 79, Add a short practical
checklist to the mlua upgrade guidance that tells maintainers how to verify the
vendored Lua version: instruct them to inspect the mlua crate’s Cargo.toml and
release notes/CHANGELOG for the vendored-lua version, check the mlua GitHub
release/tag description and commit history (or build.rs) for the exact Lua
version string, and confirm in the vendored source or vendor README that the Lua
version matches expected CVE fixes before upgrading mlua.
```

</details>

</blockquote></details>
<details>
<summary>src/admin/metrics_setup.rs (2)</summary><blockquote>

`56-63`: **Duplicate `to_ascii_lowercase()` call on hot path.**

`cmd.to_ascii_lowercase()` is called twice, creating two separate allocations per command. Cache the result to avoid redundant work.

<details>
<summary>♻️ Proposed fix</summary>

```diff
 pub fn record_command(cmd: &str, latency_us: u64) {
     if !METRICS_INITIALIZED.load(Ordering::Relaxed) {
         return;
     }
-    counter!("moon_commands_total", "cmd" => cmd.to_ascii_lowercase()).increment(1);
-    histogram!("moon_command_duration_microseconds", "cmd" => cmd.to_ascii_lowercase())
-        .record(latency_us as f64);
+    let cmd_lower = cmd.to_ascii_lowercase();
+    counter!("moon_commands_total", "cmd" => cmd_lower.clone()).increment(1);
+    histogram!("moon_command_duration_microseconds", "cmd" => cmd_lower)
+        .record(latency_us as f64);
 }
```
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/metrics_setup.rs` around lines 56 - 63, In record_command, avoid
calling cmd.to_ascii_lowercase() twice on the hot path by computing let lower =
cmd.to_ascii_lowercase() once and reusing it for both metrics calls; keep the
METRICS_INITIALIZED check as-is and pass &lower (or lower.as_str()) into
counter!("moon_commands_total", "cmd" => ...) and
histogram!("moon_command_duration_microseconds", "cmd" => ...) before calling
increment and record respectively so you only allocate the lowercase string
once.
```

</details>

---

`147-155`: **Unnecessary String allocation on each shard drain metric.**

`itoa::Buffer::new().format(shard_id).to_string()` creates a heap allocation on every call. Consider reusing the buffer's `&str` directly or using an alternative approach.

<details>
<summary>♻️ Proposed fix</summary>

```diff
 pub fn record_spsc_drain(shard_id: usize, count: u64) {
     if !METRICS_INITIALIZED.load(Ordering::Relaxed) {
         return;
     }
-    let shard = itoa::Buffer::new().format(shard_id).to_string();
-    histogram!("moon_spsc_drain_batch_size", "shard" => shard).record(count as f64);
+    let mut buf = itoa::Buffer::new();
+    let shard = buf.format(shard_id);
+    histogram!("moon_spsc_drain_batch_size", "shard" => shard.to_owned()).record(count as f64);
 }
```

Note: If `metrics` crate accepts `&str` labels, you may avoid allocation entirely by passing `shard` directly.
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@src/admin/metrics_setup.rs` around lines 147 - 155, record_spsc_drain
currently allocates a String for the shard label every call; replace the
allocation by keeping the itoa::Buffer alive and passing the buffer's &str
directly to the histogram! macro (e.g. create a local itoa::Buffer variable,
call buffer.format(shard_id) and pass that &str as the "shard" label) so
record_spsc_drain and the histogram!("moon_spsc_drain_batch_size", "shard" =>
...) call avoid heap allocation.
```

</details>

</blockquote></details>
<details>
<summary>docs/runbooks/corrupted-aof-recovery.md (1)</summary><blockquote>

`5-6`: **Verify error message format matches actual output.**

The actual code in `src/persistence/aof.rs` logs `"AOF parse error at byte offset {} after {} commands: {}"` — slightly different from the symptom listed here. Consider updating to match the actual log format so operators can grep effectively.

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@docs/runbooks/corrupted-aof-recovery.md` around lines 5 - 6, The runbook's
symptom text doesn't match the actual log string emitted by the code; update
docs/runbooks/corrupted-aof-recovery.md to use the exact log message from
src/persistence/aof.rs ("AOF parse error at byte offset {} after {} commands:
{}") or include that exact literal as a grep-able symptom so operators can
reliably search logs; reference the log format string in the doc and optionally
keep the human-readable description alongside it.
```

</details>

</blockquote></details>

</blockquote></details>

<details>
<summary>🤖 Prompt for all review comments with AI agents</summary>

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/bench-gate.yml:

  • Around line 25-31: The workflow runs cargo bench and writes bench_results.txt
    but never compares results or fails on regressions; modify the "Run critical
    benchmarks" job to fetch a stored baseline artifact (previous
    bench_results.txt), run cargo bench as currently done producing
    bench_results.txt, then run a comparison step (e.g., bencher/benchcmp or a small
    script) to compute deltas between the new bench_results.txt and the baseline and
    exit non‑zero if any benchmark from get_hotpath/dispatch_baseline/resp_parsing
    regresses beyond an allowed threshold; ensure the step uploads the new
    bench_results.txt as an artifact and that the comparison step uses that artifact
    name and the baseline artifact name to decide pass/fail.

In @.github/workflows/compat.yml:

  • Around line 93-113: The file being created is named compat_test.go which go
    run refuses to execute; change the created filename (the here-doc target and the
    subsequent go run argument) from "compat_test.go" to a non-_test suffix like
    "compat_check.go" (so update the cat >> here-doc marker and the final "go run
    compat_test.go" to "go run compat_check.go"); no code changes inside main are
    needed—only rename the file references so go run can execute the program.

In @docs/runbooks/replica-fell-behind.md:

  • Around line 44-45: Remove Option A's unsupported CONFIG SET command: delete
    the "Option A: Increase replication backlog" section that suggests CONFIG SET repl-backlog-size 64mb (since repl-backlog-size is not configurable in Moon)
    and either remove Option A entirely or replace it with supported alternatives
    such as reducing write load, increasing network bandwidth, or recommending the
    BGSAVE approach from Option B; update any heading/text that references Option A
    accordingly.

In @tests/durability/backup_restore.rs:

  • Around line 71-80: Replace the fixed thread::sleep + conditional copy with a
    deterministic wait-for-file-with-timeout: after calling send_command("BGSAVE")
    use an Instant deadline and loop (sleeping short intervals) checking
    rdb_src.exists() until either it appears or the timeout elapses; then assert (or
    expect) that rdb_src.exists() before performing std::fs::copy(&rdb_src,
    &rdb_dst). Update the code around send_command, rdb_src and rdb_dst to use this
    wait-and-assert pattern so the test fails deterministically if dump.rdb never
    appears instead of silently skipping the copy.

In @tests/durability/crash_matrix.rs:

  • Around line 31-33: The test spawns a child process in
    tests/durability/crash_matrix.rs using .stdout(Stdio::piped()) and
    .stderr(Stdio::piped()) without readers, which can deadlock when OS pipe buffers
    fill; change the spawn to either inherit or redirect standard streams (e.g.
    .stdout(Stdio::null()) and .stderr(Stdio::null())) or implement background
    readers that drain the child's stdout/stderr immediately after spawn (attach
    threads or async tasks to read from the pipes returned by the child process) so
    the child cannot block; locate the Command construction where .stdout and
    .stderr are set and apply one of these fixes.
  • Around line 121-124: Add a // SAFETY: comment above the unsafe block that
    calls libc::kill(...) describing the invariants (we hold a valid PID from
    server.id() and are intentionally sending SIGKILL to that process), and check
    the return value of libc::kill instead of ignoring it: if it returns -1, read
    errno (e.g., via std::io::Error::last_os_error() or libc::errno) and fail
    the test (panic or assert) with a clear message so a failed syscall is not
    silently ignored; keep the cast server.id() as i32 and the libc::SIGKILL
    usage but handle and report errors from libc::kill and document safety in the
    // SAFETY: comment.

In @tests/replication_hardening.rs:

  • Around line 42-49: send_cmd currently stops reading on lines that start with
    '+', '-', or ':' but doesn't handle RESP bulk strings (lines starting with '$'),
    so GET replies hang until socket timeout; update send_cmd to detect a '$' header
    from the reader.lines() loop, parse the bulk length (handle "$-1" as nil), and
    when length >= 0 read exactly that many bytes plus the trailing CRLF from the
    stream (append them to resp) before breaking; use the same reader used for lines
    to consume the raw bytes after parsing the header so the function continues to
    return a complete RESP bulk reply.
  • Around line 106-107: Add a SAFETY comment above each unsafe libc::kill(...)
    call explaining that we are calling the platform syscall to forcibly terminate
    the test child process using the pid from replica.id() and SIGKILL, and that the
    pid is obtained from a valid child process handle; then replace the ignored
    return value with a checked result (e.g., let ret = unsafe {
    libc::kill(replica.id() as i32, libc::SIGKILL) }; assert_eq!(ret, 0, "kill
    failed: {:?}", std::io::Error::last_os_error()); ) so failures are detected
    before calling replica.wait(); apply this change to both occurrences that
    currently use unsafe { libc::kill(...) } and the subsequent let _ =
    replica.wait() pattern.

Minor comments:
In @docs/redis-compat.md:

  • Line 27: The docs entry for go-redis overstates CI coverage; either remove
    "pipelines" from the table row in docs/redis-compat.md or actually add pipeline
    checks to the go-redis smoke test. If you choose docs: edit the table row for
    "go-redis" to read "Basic ops, hash" (remove "pipelines"). If you choose tests:
    update the go-redis smoke test (the test named like "go-redis smoke test" or
    TestGoRedisSmoke) to perform a simple pipeline round-trip (e.g., queue two
    commands in a pipeline, Exec, and assert both responses) so the documentation
    claim is accurate. Ensure any new assertions are added to the same CI-tested
    smoke test file and run in CI.

In @docs/runbooks/corrupted-aof-recovery.md:

  • Around line 40-43: Add a cautionary note to the "Use redis-check-aof
    equivalent (if available) or truncate manually" section explaining that using
    head -c <offset> to truncate appendonly.aof is aggressive and may discard
    recoverable data, and recommend considering Moon's automatic skip-recovery
    (which attempts to skip corrupted bytes and find valid RESP frames) before
    performing manual truncation of appendonly.aof; reference the head -c <offset>
    command and the appendonly.aof filename and suggest preferring Moon's
    skip-recovery unless truncation is explicitly required.

In @docs/runbooks/disk-full-during-wal-rotation.md:

  • Around line 3-7: The doc's symptom line is inaccurate. When WAL rotation
    fails due to disk-full, the code emits messages like "WAL v3 sync failed:
    No space left on device" (timers.rs / event_loop.rs) or "Checkpoint WAL
    flush failed: No space left on device" (persistence_tick.rs), not "Error:
    WAL segment rotation failed: No space left on device". Either update the
    symptom text to list the actual log messages operators will see,
    referencing the existing emitters (timers.rs, event_loop.rs,
    persistence_tick.rs), or modify segment.rs's rotate_segment() error path
    to also log the documented message (add an error! call in rotate_segment()
    that includes the disk-full error text and context).

In @docs/runbooks/replica-fell-behind.md:

  • Around line 5-7: The runbook references a non-existent Prometheus metric,
    moon_replication_lag_bytes. Either update
    docs/runbooks/replica-fell-behind.md to point operators at the replication
    info the server actually exposes (INFO replication, using the
    repl_backlog_first_byte_offset and repl_backlog_size fields), or keep the
    metric reference and implement a moon_replication_lag_bytes gauge in the
    observability layer, ensuring the instrumented name matches the docs.

In @docs/security/lua-sandbox.md:

  • Line 45: Update the docs entry for redis.log(level, msg) to remove or
    clarify the "Safe — message is sanitized" claim. The message is forwarded
    to the Rust tracing layer without explicit sanitization (see
    src/scripting/sandbox.rs, where it is passed to
    tracing::info!(target: "lua_script", "{}", msg)); either state that no
    application-level sanitization is performed, or document exactly what
    guarantees tracing provides. Keep the redis.log API name in the line so
    readers can map it to the implementation.

In @docs/THREAT-MODEL.md:

  • Line 120: Update the risk matrix row that currently reads "| Lua sandbox
    escape | Low | Critical (RCE) | High | Audit pending (SEC-04) |" to reflect
    the completed Lua sandbox/bindings audit: change "Audit pending (SEC-04)" to the
    appropriate completed status (e.g., "Audit completed (SEC-04)" or "Mitigated
    (SEC-04)"), and optionally add a brief date or Phase 98 reference to avoid mixed
    messaging.

In @docs/versioning.md:

  • Around line 29-31: The fenced code block containing the error message "Error:
    RDB version 2 is not supported by this Moon build (max: 1). Upgrade Moon first."
    needs a language tag to satisfy markdownlint MD040; change the opening
    fence from ``` to ```text, leaving the block contents unchanged.

In @src/admin/metrics_setup.rs:

  • Around line 15-17: The doc comment in src/admin/metrics_setup.rs claims the
    server "Also responds to /healthz (liveness) and /readyz (readiness)", but
    the current implementation only installs the Prometheus exporter (the
    /metrics handler). Either remove that sentence from the doc comment, or
    implement the health endpoints: add /healthz and /readyz handlers
    alongside the existing exporter registration (in the function that sets up
    the HTTP listener), returning 200 OK (with appropriate readiness logic if
    available), and wire those routes into the same server so the comment
    matches the implementation.

In @src/admin/slowlog.rs:

  • Around line 155-163: The code casts Frame::Integer(n) to usize, which wraps
    negative values into huge counts. Update the match on args[1] to handle
    negatives explicitly: for Frame::Integer(n), return Some(n as usize) only
    if *n >= 0, and otherwise None (meaning "all entries"). Keep the existing
    atoi::atoi::<usize>(b) handling for Frame::BulkString(b), and adjust the
    count variable and the Frame::Integer(n) arm accordingly so negative
    integers no longer wrap to large usize values.
  • Around line 1-4: The module doc comment is incorrect: it claims "SLOWLOG GET
    merges across shards sorted by timestamp", but the code only operates on a
    single Slowlog instance. Either update the module-level documentation to
    reflect the actual per-instance/per-shard behavior, or implement the
    feature: add a cross-shard aggregation routine (e.g., merge_across_shards
    or Slowlog::merge_from_shards) and update SLOWLOG GET to collect entries
    from all shard Slowlog instances and perform a timestamp-based merge
    before returning results, so the comment matches the implementation.
  • Around line 110-113: Add a public is_empty() method to Slowlog to satisfy
    the clippy lint that fires when len() exists without it: implement
    pub fn is_empty(&self) -> bool { self.entries.lock().is_empty() } in the
    same impl block as pub fn len(&self) -> usize, using the same entries
    field and lock, so callers can check emptiness without writing len() == 0.
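The count-parsing fix from the first bullet can be sketched as follows. The Frame variants mirror the names used in the comment, and std parsing stands in for the atoi crate so the sketch is dependency-free; this is an illustration, not the file's actual code.

```rust
// Hypothetical stand-in for the real Frame type named in the review.
enum Frame {
    Integer(i64),
    BulkString(Vec<u8>),
}

// None means "return all entries"; negative integers no longer wrap.
fn parse_count(arg: &Frame) -> Option<usize> {
    match arg {
        // Only non-negative integers become a usize count.
        Frame::Integer(n) if *n >= 0 => Some(*n as usize),
        Frame::Integer(_) => None,
        // The real code keeps atoi::atoi::<usize>(b) here.
        Frame::BulkString(b) => std::str::from_utf8(b).ok()?.parse::<usize>().ok(),
    }
}

fn main() {
    assert_eq!(parse_count(&Frame::Integer(-1)), None); // previously wrapped to a huge count
    assert_eq!(parse_count(&Frame::Integer(10)), Some(10));
    println!("ok");
}
```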

In @tests/durability/crash_matrix.rs:

  • Around line 186-189: Update the test comment's run command so it references
    the actual integration test target name (the file-derived test target) instead
    of "durability_crash_matrix": replace the example command string "cargo test
    --test durability_crash_matrix -- --ignored" with "cargo test --test
    crash_matrix -- --ignored" so it matches the crash_matrix.rs integration test;
    edit the comment near the top of tests/durability/crash_matrix.rs accordingly.

In @tests/durability/torn_write.rs:

  • Around line 63-70: Replace the loose check that tolerates extra recovered
    data with a strict assertion of the exact expected sequence. Where the
    local variable records is validated, change the current assertions
    (records.len() >= 2 plus per-element checks) to an exact comparison such
    as assert_eq!(records, vec![1, 2]) (or equivalent exact-length + element
    assertions) so the test fails if any truncated or corrupted tail record is
    accepted.
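The tightening can be sketched as below; the expected [1, 2] sequence mirrors the review's example, and the recovery call itself is elided since only the assertion shape matters here.

```rust
// Stand-in check for the payloads recovered after a simulated torn write.
fn strict_ok(records: &[u64]) -> bool {
    // Exact length and elements: a corrupted tail such as [1, 2, 999]
    // now fails, where the loose `len() >= 2` check accepted it.
    records == [1u64, 2].as_slice()
}

fn main() {
    assert!(strict_ok(&[1, 2]));
    assert!(!strict_ok(&[1, 2, 999])); // tail record must be rejected
    println!("ok");
}
```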

Nitpick comments:
In @.github/workflows/compat.yml:

  • Around line 21-24: The startup step "Start Moon" currently uses a fixed "sleep
    2" after launching "./target/release/moon --port 6399 --shards 1 &", which is
    flaky; replace the fixed sleep with a bounded readiness loop that polls
    127.0.0.1:6399 (e.g., using nc, curl, or bash TCP connect) with a short interval
    and a total timeout (e.g., 30s) and exit non‑zero if the port never becomes
    ready; apply the same change to the other identical block referenced around the
    second occurrence (lines similar to 82-85) so both startup waits use the
    readiness loop instead of sleep.
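One way the bounded readiness loop could look, as a bash sketch assuming /dev/tcp support; the host, port, and timeout values are illustrative, not taken from the workflow.

```shell
# Bounded readiness poll to replace the fixed `sleep 2`; assumes bash
# with /dev/tcp support. Host, port, and timeout are illustrative.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}" waited=0
  # Retry a TCP connect once a second until it succeeds or we time out.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "error: ${host}:${port} not ready after ${timeout}s" >&2
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
}

# In the workflow step, after: ./target/release/moon --port 6399 --shards 1 &
# wait_for_port 127.0.0.1 6399 30
```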

In @docs/runbooks/corrupted-aof-recovery.md:

  • Around line 5-6: The runbook's symptom text doesn't match the actual log
    string emitted by the code; update docs/runbooks/corrupted-aof-recovery.md to
    use the exact log message from src/persistence/aof.rs ("AOF parse error at byte
    offset {} after {} commands: {}") or include that exact literal as a grep-able
    symptom so operators can reliably search logs; reference the log format string
    in the doc and optionally keep the human-readable description alongside it.

In @docs/security/lua-sandbox.md:

  • Around line 25-29: Update the table row for the Lua os library to reflect
    that it is partially restricted rather than fully blocked: mark os as
    "Partially blocked" (or similar) and note that only os.clock() is allowed
    while all other os.* functions are blocked by the sandbox implementation.
  • Around line 68-79: Add a short practical checklist to the mlua upgrade
    guidance that tells maintainers how to verify the vendored Lua version: instruct
    them to inspect the mlua crate’s Cargo.toml and release notes/CHANGELOG for the
    vendored-lua version, check the mlua GitHub release/tag description and commit
    history (or build.rs) for the exact Lua version string, and confirm in the
    vendored source or vendor README that the Lua version matches expected CVE fixes
    before upgrading mlua.

In @src/admin/metrics_setup.rs:

  • Around line 56-63: In record_command, avoid calling cmd.to_ascii_lowercase()
    twice on the hot path by computing let lower = cmd.to_ascii_lowercase() once and
    reusing it for both metrics calls; keep the METRICS_INITIALIZED check as-is and
    pass &lower (or lower.as_str()) into counter!("moon_commands_total", "cmd" =>
    ...) and histogram!("moon_command_duration_microseconds", "cmd" => ...) before
    calling increment and record respectively so you only allocate the lowercase
    string once.
  • Around line 147-155: record_spsc_drain currently allocates a String for the
    shard label every call; replace the allocation by keeping the itoa::Buffer alive
    and passing the buffer's &str directly to the histogram! macro (e.g. create a
    local itoa::Buffer variable, call buffer.format(shard_id) and pass that &str as
    the "shard" label) so record_spsc_drain and the
    histogram!("moon_spsc_drain_batch_size", "shard" => ...) call avoid heap
    allocation.
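Both allocation fixes above can be sketched together. This is dependency-free: the counter!/histogram! calls from the metrics crate are elided as comments, and a fixed stack buffer plays itoa's role; per the review, the real record_spsc_drain fix is simply to keep an itoa::Buffer in a named local and pass buffer.format(shard_id) as the label.

```rust
// Lowercase once and reuse for both metric calls instead of allocating twice.
fn record_command_label(cmd: &str) -> String {
    let lower = cmd.to_ascii_lowercase(); // single allocation on the hot path
    // counter!("moon_commands_total", "cmd" => lower.clone()).increment(1);
    // histogram!("moon_command_duration_microseconds", "cmd" => lower.clone())
    //     .record(elapsed_us as f64);
    lower
}

// Stack-format the shard id so no heap String is built per drain record.
fn shard_label(shard_id: u16, buf: &mut [u8; 5]) -> &str {
    let mut n = shard_id as usize;
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + (n % 10) as u8;
        n /= 10;
        if n == 0 {
            break;
        }
    }
    // Digits are ASCII, so UTF-8 validation cannot fail here.
    std::str::from_utf8(&buf[i..]).unwrap_or("")
}

fn main() {
    assert_eq!(record_command_label("HSET"), "hset");
    let mut buf = [0u8; 5];
    assert_eq!(shard_label(255, &mut buf), "255");
    println!("ok");
}
```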

In @src/admin/mod.rs:

  • Around line 1-7: Update the module doc comment at the top of src/admin/mod.rs
    to accurately reflect current functionality: mention that the Admin HTTP server
    currently serves only the /metrics (Prometheus) endpoint and remove or mark
    /healthz and /readyz as deferred; reference the existing module names
    (metrics_setup and slowlog) in the doc comment so readers know what is
    implemented and what remains to be added.

</details>


---

<details>
<summary>ℹ️ Review info</summary>

<details>
<summary>⚙️ Run configuration</summary>

**Configuration used**: defaults

**Review profile**: CHILL

**Plan**: Pro

**Run ID**: `0c4ba25a-a190-4f5f-980c-810688b5dad0`

</details>

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 71a99c41a15a9c88da85e0cef5f424664bd8cd6b and d4e33aacdf6eef4d8f81ce89fce18c235862890f.

</details>

<details>
<summary>⛔ Files ignored due to path filters (1)</summary>

* `Cargo.lock` is excluded by `!**/*.lock`

</details>

<details>
<summary>📒 Files selected for processing (25)</summary>

* `.github/workflows/bench-gate.yml`
* `.github/workflows/compat.yml`
* `Cargo.toml`
* `SECURITY.md`
* `deny.toml`
* `docs/THREAT-MODEL.md`
* `docs/redis-compat.md`
* `docs/runbooks/corrupted-aof-recovery.md`
* `docs/runbooks/disk-full-during-wal-rotation.md`
* `docs/runbooks/oom-during-snapshot.md`
* `docs/runbooks/replica-fell-behind.md`
* `docs/security/lua-sandbox.md`
* `docs/versioning.md`
* `src/admin/metrics_setup.rs`
* `src/admin/mod.rs`
* `src/admin/slowlog.rs`
* `src/config.rs`
* `src/lib.rs`
* `src/main.rs`
* `tests/durability/backup_restore.rs`
* `tests/durability/crash_matrix.rs`
* `tests/durability/mod.rs`
* `tests/durability/torn_write.rs`
* `tests/durability_tests.rs`
* `tests/replication_hardening.rs`

</details>

</details>



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/command/mod.rs`:
- Around line 726-730: The function signature for dispatch_read_inner (fn
dispatch_read_inner(db: &Database, cmd: &[u8], args: &[Frame], now_ms: u64,
...)) is misformatted and causing cargo fmt checks to fail; run cargo fmt (or
rustfmt) on src/command/mod.rs to reformat the extracted helper and ensure the
signature/parameter list follows standard Rust formatting (proper commas,
spacing, and line breaks) so cargo fmt --check passes.
- Around line 45-52: The dispatch hot path currently always computes elapsed
time and converts `cmd` to UTF-8 then calls
`metrics_setup::record_command`/`record_command_error`, causing allocations
because `record_command` lowercases labels; change this to first check a cheap
metrics-enabled flag (e.g. `metrics_setup::enabled()` or add one) and only then
do timing and UTF-8/normalization work; move label normalization/lowercasing out
of the hot path into `metrics_setup` (accept raw `&[u8]` or a pre-normalized
cached label) so `record_command(cmd, elapsed_us)` is a no-allocation call when
metrics are disabled, and update calls around `dispatch_inner`,
`DispatchResult`, and the error path (`record_command_error`) accordingly to
avoid UTF-8/formatting unless metrics are enabled.
- Around line 717-723: dispatch_read currently only records latency; inspect the
returned result and if it is DispatchResult::Response(Frame::Error(_)) increment
the command error metric so read-path errors count toward
moon_command_errors_total. After calling dispatch_read_inner and before
returning, convert cmd to cmd_str as already done, then match on result and when
it is DispatchResult::Response(Frame::Error(_)) call the same metrics increment
used by dispatch() (the metrics_setup helper that increments
moon_command_errors_total), then continue to record_command(cmd_str, elapsed_us)
and return result.

In `@src/server/conn/handler_sharded.rs`:
- Line 227: The unconditional call to
crate::admin::metrics_setup::record_connection_closed() should be moved behind a
check of the handler result so shard handoffs don't count as disconnects:
inspect the return value of handle_connection_sharded_inner() (the HandlerResult
enum) and only call record_connection_closed() when the result represents an
actual closed/terminated connection (not HandlerResult::MigrateConnection);
update the code around where handle_connection_sharded_inner() is
awaited/returned to conditionally invoke record_connection_closed() based on
that enum variant.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 02a13936-4272-489b-9a1b-afb297b0a318

📥 Commits

Reviewing files that changed from the base of the PR and between d4e33aa and 2086f49.

📒 Files selected for processing (2)
  • src/command/mod.rs
  • src/server/conn/handler_sharded.rs


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/release.yml (1)

61-62: ⚠️ Potential issue | 🔴 Critical

Add id-token: write permission for keyless cosign signing.

The release job uses cosign sign-blob with COSIGN_EXPERIMENTAL=1 but only grants contents: write permissions. Keyless signing requires GitHub OIDC token access, which is only available when id-token: write is granted. Without this permission, the signing step will fail.

Suggested fix
   release:
     name: Create Release
     needs: build
     runs-on: ubuntu-latest
     permissions:
       contents: write
+      id-token: write

Additionally, the SBOM generated at line 78 uses defaults (cargo cyclonedx --format json --output-file artifacts/moon-sbom.json) and does not match the feature/target-specific binaries produced by the build matrix. Consider passing --target and --features flags to match each binary's configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release.yml around lines 61 - 62, The GitHub Actions
release job currently grants only "contents: write" which prevents OIDC token
access needed for keyless cosign signing; update the permissions block to also
include "id-token: write" so COSIGN_EXPERIMENTAL keyless signing can obtain the
GitHub OIDC token. Also adjust the SBOM generation step that runs "cargo
cyclonedx --format json --output-file artifacts/moon-sbom.json" to generate
SBOMs that match each built binary by invoking cargo cyclonedx with the
appropriate --target and --features flags (or generate per-matrix entry) so the
SBOM aligns with the feature/target-specific artifacts produced by the build
matrix.
♻️ Duplicate comments (3)
tests/replication_hardening.rs (2)

106-107: ⚠️ Potential issue | 🔴 Critical

Unchecked libc::kill + missing // SAFETY: comments violate policy and hide failures.

All three SIGKILL call sites use unsafe without a // SAFETY: rationale, and each ignores the syscall result. If kill fails, the test can continue with invalid assumptions.

💡 Suggested fix pattern (apply to all three call sites)
-        unsafe { libc::kill(replica.id() as i32, libc::SIGKILL) };
+        let pid = replica.id() as i32;
+        // SAFETY: `pid` comes from a live `std::process::Child` handle created in this test.
+        // Sending SIGKILL is intentional for crash/restart replication scenarios.
+        let rc = unsafe { libc::kill(pid, libc::SIGKILL) };
+        assert_eq!(rc, 0, "kill({pid}, SIGKILL) failed: {}", std::io::Error::last_os_error());
         let _ = replica.wait();
#!/bin/bash
# Verify all libc::kill call sites and nearby SAFETY comments / return-value handling.
rg -n -C2 'libc::kill\(' tests/replication_hardening.rs
rg -n -C2 '// SAFETY:' tests/replication_hardening.rs

As per coding guidelines, **/*.rs: “Never introduce new unsafe blocks without explicit user approval. Every unsafe block MUST have a // SAFETY: comment explaining the invariant.”

Also applies to: 152-153, 253-254

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/replication_hardening.rs` around lines 106 - 107, The unsafe libc::kill
calls in tests/replication_hardening.rs (e.g., the call using replica.id()
before replica.wait()) lack a // SAFETY: rationale and ignore the syscall return
value; update each of the three call sites to (1) add a clear // SAFETY: comment
explaining why calling libc::kill with replica.id() is safe (e.g., valid PID,
owned process for test teardown), and (2) capture and check the return value
(use libc::kill(...) as i32 or check libc::errno) and propagate failure into the
test with an assert or bail (so the test fails instead of proceeding on error).
Ensure the changes are applied consistently to the calls around lines referenced
(the replica.id() SIGKILL sites) and keep replica.wait() semantics unchanged.

42-49: ⚠️ Potential issue | 🟠 Major

send_cmd still misses RESP bulk-string termination ($...) and can stall until timeout.

GET responses are bulk strings, so this loop keeps waiting for another line and typically exits only on read-timeout. That makes promotion checks slow/flaky.

💡 Suggested fix
-use std::io::{BufRead, BufReader, Write};
+use std::io::{BufRead, BufReader, Read, Write};
@@
-fn send_cmd(addr: &str, cmd: &str) -> String {
+fn send_cmd(addr: &str, cmd: &str) -> String {
     let Ok(mut stream) = TcpStream::connect(addr) else {
         return String::new();
     };
@@
-    let reader = BufReader::new(&stream);
+    let mut reader = BufReader::new(&stream);
     let mut resp = String::new();
-    for line in reader.lines() {
-        match line {
-            Ok(l) => {
-                resp.push_str(&l);
-                resp.push('\n');
-                if l.starts_with('+') || l.starts_with('-') || l.starts_with(':') {
-                    break;
-                }
-            }
-            Err(_) => break,
-        }
+    loop {
+        let mut line = String::new();
+        let Ok(n) = reader.read_line(&mut line) else { break };
+        if n == 0 {
+            break;
+        }
+        let l = line.trim_end_matches(['\r', '\n']);
+        resp.push_str(l);
+        resp.push('\n');
+
+        if l.starts_with('+') || l.starts_with('-') || l.starts_with(':') {
+            break;
+        }
+        if let Some(rest) = l.strip_prefix('$') {
+            let len = rest.trim().parse::<isize>().unwrap_or(-1);
+            if len >= 0 {
+                let mut buf = vec![0u8; len as usize + 2]; // payload + CRLF
+                if reader.read_exact(&mut buf).is_ok() {
+                    if let Ok(payload) = std::str::from_utf8(&buf[..len as usize]) {
+                        resp.push_str(payload);
+                        resp.push('\n');
+                    }
+                }
+            }
+            break;
+        }
     }
     resp
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/replication_hardening.rs` around lines 42 - 49, The loop in send_cmd
that reads response lines (the reader.lines() match in
tests/replication_hardening.rs) incorrectly treats GET bulk-string replies as
line-delimited and can hang until timeout; update send_cmd to parse RESP types:
read the first byte/token, and if it is '$' parse the following length, then
read exactly that many bytes plus the trailing CRLF (handle -1 as null bulk),
instead of waiting for another newline, so bulk strings are consumed correctly
and the test no longer stalls.
src/server/conn/handler_sharded.rs (1)

228-228: ⚠️ Potential issue | 🟠 Major

Don't count shard handoff as a disconnect.

record_connection_closed() still runs after HandlerResult::MigrateConnection, so a successful migration decrements the connected-client metric even though the socket stays alive on the destination shard.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/conn/handler_sharded.rs` at line 228, The metrics call
record_connection_closed() is executed even when the handler returns
HandlerResult::MigrateConnection, which incorrectly decrements the
connected-client metric for a migrated (still-alive) socket; modify the control
flow in handler_sharded.rs so record_connection_closed() is only invoked for
actual disconnects (e.g., when the handler returned CloseConnection or an
error), by either moving the call into the branch that handles CloseConnection
or by checking the HandlerResult enum and skipping record_connection_closed()
when the result is HandlerResult::MigrateConnection (and keep it for
CloseConnection/Err cases) so successful migrations do not decrement the metric.
🧹 Nitpick comments (3)
src/tls.rs (1)

67-68: Consider adding a comment to clarify the idempotent install pattern.

The let _ = ...install_default() intentionally ignores the result since it's safe to call multiple times. A brief comment would clarify this isn't an oversight, especially since builder_with_provider on line 121 uses an explicit provider anyway.

📝 Suggested clarification
-    // Install aws-lc-rs as default crypto provider
-    let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
+    // Install aws-lc-rs as default crypto provider (idempotent; ignored if already set).
+    // This is defensive for any code paths that might use ServerConfig::builder()
+    // without an explicit provider, though this function always uses builder_with_provider.
+    let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tls.rs` around lines 67 - 68, Add an explanatory comment above the
rustls::crypto::aws_lc_rs::default_provider().install_default() call clarifying
that the result is intentionally ignored because install_default() is idempotent
and safe to call multiple times; mention this prevents confusion with the
explicit provider used later by builder_with_provider so reviewers know it’s not
an oversight. Reference the install_default() call and builder_with_provider
usage for context.
tests/durability/jepsen_lite.rs (1)

44-78: RESP parsing may hang on multi-line or array responses.

The send_cmd parser doesn't break after reading the bulk string data line, potentially causing hangs on unexpected responses. For a test utility this is likely acceptable given the 5-second read timeout, but consider adding an explicit break after reading bulk string content.

Proposed improvement
                 // Bulk string: read the $N header then the data line
                 if l.starts_with('$') {
                     let len: i64 = l[1..].trim().parse().unwrap_or(-1);
                     if len < 0 {
                         break;
                     }
-                    // read the actual data line
-                    continue;
+                    // Read the actual data line and break
+                    if let Some(Ok(data)) = reader.lines().next() {
+                        resp.push_str(&data);
+                        resp.push('\n');
+                    }
+                    break;
                 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/jepsen_lite.rs` around lines 44 - 78, The send_cmd
function's RESP loop may continue after seeing a bulk string header ('$')
because it doesn't consume the following data line; modify the loop in send_cmd
so that when l.starts_with('$') and len >= 0 you explicitly read the next data
line from the same BufReader (append it to resp, with a trailing '\n' like other
lines) and then break out of the for loop to avoid hanging on multi-line or
array responses; reference the reader variable and the branch that checks
l.starts_with('$') to locate where to add the extra read-and-break logic.
src/storage/tiered/cold_tier.rs (1)

294-294: Use a total ordering here instead of unwrap_or(Ordering::Equal) for deterministic sorting.

At Line 294, mapping non-comparable floats to Equal can violate comparator semantics and create nondeterministic top-k ordering when NaNs appear. Since this function verifies "deterministic query vectors," use total_cmp() or explicit finite handling to order NaNs consistently.

♻️ Proposed fix
-        bf_dists.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap_or(std::cmp::Ordering::Equal));
+        bf_dists.sort_by(|a, b| a.0.total_cmp(&b.0));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/storage/tiered/cold_tier.rs` at line 294, The sort comparator for
bf_dists uses partial_cmp(...).unwrap_or(Ordering::Equal) which is not a total
order and can yield nondeterministic results when floats are NaN; change the
comparison to use a total ordering (e.g., f64::total_cmp) or explicitly handle
NaN so the comparator is total and deterministic—update the call that sorts
bf_dists (the closure comparing a.0 and b.0) to use total_cmp on the float keys
(or map floats to a deterministic wrapper like ordered_float::NotNan / a
canonical NaN position) so sorting of bf_dists is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/runbooks/rolling-restart.md`:
- Line 141: Replace the absolute sentence "No data loss occurs because the other
node was never stopped." with a qualified statement such as "Risk of data loss
is minimized when the other node remains running; zero-loss failover requires
synchronous replication, confirmation of committed transactions on the surviving
node, and no split-brain," and add a short bullet or parenthetical listing the
conditions required for zero-loss failover (synchronous replication enabled,
commit acknowledgements persisted on the surviving node, and a quorum/leader
election that prevents split-brain).
- Around line 14-20: The fenced code block containing the ASCII topology diagram
(the triple-backtick block that starts with "[Client] --> [LB / Sentinel]") is
missing a language tag and triggers markdownlint MD040; add a language tag
(e.g., ```text) to the opening fence so the block becomes a tagged code block
and linting passes.
- Around line 64-74: The current polling only checks master_link_status via the
STATUS variable and can allow promotion while the replica is still catching up;
change the gate to query and compare replication offsets from both master and
replica (e.g., read master_repl_offset on the master and
slave_repl_offset/master_repl_offset on the replica), verify
master_sync_in_progress is false and master_link_status is "up", and only break
when the offset difference is within an acceptable threshold (zero or one) so
replication lag is effectively zero before proceeding to Step 7.
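The offset gate from the last bullet can be sketched as a small parser plus polling loop. The field names follow Redis's INFO replication output; that Moon emits the same names, and the redis-cli invocation itself, are assumptions.

```shell
# Extract one field from INFO replication text, stripping the trailing CR.
info_field() {
  # usage: <INFO text on stdin> | info_field <field_name>
  awk -F: -v f="$1" '$1 == f { gsub(/\r/, ""); print $2 }'
}

# Gate before Step 7 (assumes redis-cli and standard INFO field names):
#   until
#     m=$(redis-cli -h "$MASTER"  INFO replication | info_field master_repl_offset)
#     r=$(redis-cli -h "$REPLICA" INFO replication | info_field slave_repl_offset)
#     link=$(redis-cli -h "$REPLICA" INFO replication | info_field master_link_status)
#     sync=$(redis-cli -h "$REPLICA" INFO replication | info_field master_sync_in_progress)
#     [ "$link" = up ] && [ "$sync" = 0 ] && [ $((m - r)) -le 1 ]
#   do sleep 1; done
```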

In `@scripts/audit-unwrap.sh`:
- Around line 48-53: The current check that skips files when a nearby allow
exists is too broad (it greps for '#\[allow') and can hide unrelated allows;
update the conditional that inspects the preceding 30 lines (the block using
start, lineno, file and the sed | grep pipeline) to only match clippy
unwrap/expect allows by changing the grep pattern to explicitly look for
clippy::unwrap_used or clippy::expect_used (e.g. match '#[allow' that contains
'clippy::unwrap_used' or 'clippy::expect_used') so the script only skips when
those specific lints are present.
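A narrowed pattern could look like this; the helper name is illustrative, and in the script the real change is just the grep pattern inside the existing sed | grep pipeline.

```shell
# Match only allow attributes that cover the unwrap/expect lints, so an
# unrelated `#[allow(dead_code)]` no longer suppresses the audit.
has_unwrap_allow() {
  grep -Eq '#\[allow\([^)]*clippy::(unwrap_used|expect_used)'
}
```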

In `@src/admin/http_server.rs`:
- Around line 25-30: The response() helper and the other spots that call
unwrap()/expect() (notably around lines 53-60, 82-89, and 129) must not panic;
change them to return Results and propagate errors instead of calling
unwrap/expect. For response(), replace the current signature to return
Result<Response<Full<Bytes>>, E> (or a crate-specific error type) and convert
the builder.body(...) unwrap into a mapped error (e.g., .body(...).map_err(|e|
...) ) so failures produce an Err you propagate; do the same for the runtime
creation (replace direct Runtime::new().unwrap() with a fallible creation that
returns Err on failure) and for thread spawn (use thread::Builder::spawn and
handle its Result, returning an error or an appropriate HTTP 500 response
instead of aborting). Ensure callers of response(), the runtime creation site,
and the thread spawn path handle the Result (propagate or convert to an HTTP
error) so no unwrap/expect remains in library code.
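The suggested shape, as a dependency-free sketch: HttpError and the string response stand in for the hyper builder types, and the real change maps the builder/body error the same way instead of unwrapping.

```rust
#[derive(Debug)]
struct HttpError(String);

// Fallible builder: propagate errors instead of unwrap()/expect().
fn response(status: u16, body: &str) -> Result<String, HttpError> {
    if !(100..=599).contains(&status) {
        // Where the real code calls builder.body(...).unwrap(), map the
        // error instead: .body(...).map_err(|e| HttpError(e.to_string()))
        return Err(HttpError(format!("invalid status {status}")));
    }
    Ok(format!("HTTP/1.1 {status}\r\n\r\n{body}"))
}

// Caller converts failure into a plain 500 rather than panicking.
fn serve(status: u16, body: &str) -> String {
    response(status, body).unwrap_or_else(|e| format!("HTTP/1.1 500\r\n\r\n{}", e.0))
}

fn main() {
    assert!(serve(200, "ok").starts_with("HTTP/1.1 200"));
    assert!(serve(42, "x").starts_with("HTTP/1.1 500"));
    println!("ok");
}
```

The same pattern applies to the runtime and thread-spawn sites: Runtime::new() and thread::Builder::spawn both return Results that can be propagated instead of unwrapped.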

In `@src/admin/slowlog.rs`:
- Around line 74-77: The guard treats threshold_us == 0 as “disabled,” which is
wrong because zero should mean “log everything”; change the representation so
enable/disable is separate from the numeric threshold. Either convert the field
threshold_us to an Option<u64> (use AtomicU64 + a companion AtomicBool or
replace with AtomicUsize/AtomicPtr to hold Option-like semantics) or add a
separate AtomicBool like slowlog_enabled and keep threshold_us where 0 means log
everything; then update the check in the slowlog path (the code that calls
self.threshold_us.load and the if that returns early) to first check the new
enabled flag (or Option::is_none) and only compare duration_us < threshold when
enabled and threshold > 0. Ensure to update any constructors/setters that touch
threshold_us or the new enabled flag (and use the same Ordering as before).
- Around line 169-178: The SLOWLOG GET branch currently uses
atoi::atoi::<usize>(b) and simply casts Frame::Integer to usize, which silently
accepts inputs like "foo" or negative numbers; replace this with strict parsing
and validation: for the b"GET" arm (handling args, Frame::BulkString and
Frame::Integer) convert bulk strings via std::str::from_utf8 and then parse into
a signed integer (e.g., i64 or isize) using parse(), or reject on UTF-8/parse
errors; for Frame::Integer check the value is >= 0 before converting to usize;
if parsing fails or the number is negative return a protocol/error reply instead
of treating it as “no count” so invalid counts are rejected.
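A minimal sketch of the strict parsing described above, assuming a hypothetical `parse_count` helper (the error strings echo Redis conventions but are illustrative):

```rust
// Reject non-numeric and negative counts instead of treating them as "no count".
fn parse_count(raw: &[u8]) -> Result<usize, &'static str> {
    let s = std::str::from_utf8(raw).map_err(|_| "ERR value is not an integer")?;
    // Parse as signed first so "-1" is caught as out-of-range, not a parse error.
    let n: i64 = s.trim().parse().map_err(|_| "ERR value is not an integer")?;
    if n < 0 {
        return Err("ERR value is out of range");
    }
    Ok(n as usize)
}

fn main() {
    assert_eq!(parse_count(b"10"), Ok(10));
    assert!(parse_count(b"foo").is_err());
    assert!(parse_count(b"-7").is_err());
}
```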

In `@src/command/acl.rs`:
- Around line 49-51: Replace uses of std::sync::RwLock with parking_lot::RwLock
across this file and remove all poison-aware Result handling (the Ok(...) else {
return ... } patterns) because parking_lot locks never poison. Concretely,
update the type of ACL and runtime config locks to parking_lot::RwLock and
simplify call sites such as acl_table.read(), runtime_config.read()/write(), and
any other .read()/.write() usages (remove Result matching like let Ok(table) =
acl_table.read() else { ... } and use the guard directly). Ensure imports are
updated to use parking_lot::RwLock and delete any branches that handle poison
errors (e.g., the error returns at lines referencing acl_table.read(),
runtime_config handling, and similar guards listed in the review).

In `@src/command/connection.rs`:
- Around line 215-219: The replication block currently hardcodes "role:master"
and "connected_slaves:0" in the INFO output (see the sections.push_str calls in
src/command/connection.rs); change it to read the actual replication state
instead of hardcoding: query the replication manager/struct (e.g., a Replication
or ServerState instance or functions like
get_replication_role()/replication.connected_slaves()) and write
"role:<actual_role>" and "connected_slaves:<actual_count>" only when that state
is available, or skip emitting the whole "# Replication" section if replication
state is not initialized; update the code paths that call sections.push_str("#
Replication...") to conditionally build the block from the real state.
- Around line 298-305: The ACL state is currently wrapped in std::sync::RwLock
which forces handling poisoned locks; change the acl_table type to use
parking_lot::RwLock (e.g. replace std::sync::Arc<std::sync::RwLock<...>> with
std::sync::Arc<parking_lot::RwLock<...>>) and update imports, then remove the
poison-handling branches that use let Ok(table) = acl_table.read() else { ... }
and replace them with direct reads (let table = acl_table.read();) at each usage
(notably around the authenticate call in connection.rs where the variable
acl_table is read and match table.authenticate("default", &password) is invoked
and the two other similar sites at the reported ranges), ensuring
signatures/parameter types that accept acl_table are updated to the
parking_lot::RwLock variant.
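The conditional "# Replication" block could look like the following sketch; `Role` and the tuple shape are assumptions standing in for the real replication state types:

```rust
enum Role {
    Master,
    Replica,
}

// Build the section from real state; emit nothing when replication
// state is not initialized.
fn replication_section(state: Option<(&Role, usize)>) -> String {
    let mut out = String::new();
    if let Some((role, connected)) = state {
        out.push_str("# Replication\r\n");
        let name = match role {
            Role::Master => "master",
            Role::Replica => "slave",
        };
        out.push_str(&format!("role:{name}\r\n"));
        out.push_str(&format!("connected_slaves:{connected}\r\n"));
    }
    out
}

fn main() {
    assert!(replication_section(None).is_empty());
    assert!(replication_section(Some((&Role::Master, 2))).contains("connected_slaves:2"));
}
```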

In `@src/main.rs`:
- Around line 37-40: The early return for config.check_config currently happens
before the actual semantic startup validations run, so extract or call the
existing validation logic (the TLS combination checks, cert/key file readability
checks, persistence directory validation and other startup invariants) and run
it when config.check_config is true before returning; specifically, move or
expose the startup validation code into a function (e.g.,
validate_startup_config or reuse the existing validation routine) and invoke it
just prior to the if config.check_config branch returning so that
config.check_config triggers the same TLS/cert/path/persistence checks the
normal startup performs, then return Ok(()) only if that validation succeeds.
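A sketch of the ordering fix, with `validate_startup_config`, `run`, and the `Config` fields as hypothetical names; the point is that `--check-config` exercises the same validation path as normal startup:

```rust
struct Config {
    check_config: bool,
    tls_cert: Option<String>,
    tls_key: Option<String>,
}

// Shared semantic validation, e.g. TLS cert and key must come together.
fn validate_startup_config(cfg: &Config) -> Result<(), String> {
    if cfg.tls_cert.is_some() != cfg.tls_key.is_some() {
        return Err("tls-cert and tls-key must both be set".into());
    }
    Ok(())
}

fn run(cfg: &Config) -> Result<(), String> {
    validate_startup_config(cfg)?; // runs for both modes
    if cfg.check_config {
        return Ok(()); // exit only after validation passed
    }
    // ... normal startup continues here ...
    Ok(())
}

fn main() {
    let bad = Config { check_config: true, tls_cert: Some("c.pem".into()), tls_key: None };
    assert!(run(&bad).is_err());
}
```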

In `@src/protocol/parse.rs`:
- Around line 428-436: In parse_frame_zerocopy's b'#' and b'_' branches,
validate the exact payload length before advancing pos: for b'#' ensure crlf -
*pos == 1 and the single byte is b't' or b'f' (otherwise return Frame::Null
without moving pos), and for b'_' ensure crlf - *pos == 0 (otherwise return
Frame::Null without moving pos); only when the length checks pass advance *pos =
crlf + 2 and construct Frame::Boolean or Frame::Null respectively.
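A standalone sketch of the length-validated boolean branch; `Frame` is a stand-in for the real enum and `pos`/`crlf` mirror the parser's cursor variables:

```rust
#[derive(Debug, PartialEq)]
enum Frame {
    Boolean(bool),
    Null,
}

// `pos` points at the payload byte (just past '#'); `crlf` is the index of '\r'.
fn parse_bool(buf: &[u8], pos: &mut usize, crlf: usize) -> Frame {
    // Exactly one payload byte, and it must be 't' or 'f'.
    if crlf != *pos + 1 || (buf[*pos] != b't' && buf[*pos] != b'f') {
        return Frame::Null; // degrade without moving pos
    }
    let val = buf[*pos] == b't';
    *pos = crlf + 2;
    Frame::Boolean(val)
}

fn main() {
    let mut pos = 1;
    assert_eq!(parse_bool(b"#t\r\n", &mut pos, 2), Frame::Boolean(true));
    let mut pos = 1;
    assert_eq!(parse_bool(b"#x\r\n", &mut pos, 2), Frame::Null);
    assert_eq!(pos, 1); // cursor untouched on failure
}
```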

In `@src/server/conn/handler_sharded.rs`:
- Around line 1405-1411: The slowlog call is currently passing empty client
address/name (b"") so entries lack metadata; while the frame and its args are in
scope, call
crate::admin::metrics_setup::global_slowlog().maybe_record(elapsed_us,
args.as_slice(), addr.as_bytes(), name.as_bytes()), first binding let addr =
peer_addr.as_ref().map(|a| a.to_string()).unwrap_or_default(); and let name =
client_name.clone().unwrap_or_default(); to locals so the byte slices outlive
the call (use the actual local variables peer_addr and client_name available in
this scope) so SlowlogEntry gets populated; apply the
same change at the second site (around lines 1483-1490) where maybe_record is
called with b"" values.

In `@src/server/conn/shared.rs`:
- Around line 48-56: The runtime_config variable and function signature
currently use std::sync::RwLock<RuntimeConfig>, which enforces poison-aware
handling; change the import and type annotation to use parking_lot::RwLock so
reads/writes no longer return PoisonError results — update the use/import of
RwLock to parking_lot::RwLock and adjust the signature/type of runtime_config
(and any place declaring RuntimeConfig locks) so calls like
runtime_config.read() and runtime_config.write() return the parking_lot guard
types; remove any existing poison-error handling that expects std::sync errors
accordingly.

In `@tests/durability/jepsen_lite.rs`:
- Around line 181-186: The assert! macro invocation using result.is_ok(), cycle,
and result.unwrap_err() is misformatted; run cargo fmt to reformat the assertion
or manually reformat the macro call so it matches rustfmt style (ensure
arguments are on correct lines and spacing around commas matches rustfmt), e.g.,
adjust the assert!( result.is_ok(), "Cycle {}: {}", cycle, result.unwrap_err() )
usage around the variables result and cycle or simply run `cargo fmt` to fix the
formatting in the test containing the assert! call.
- Around line 171-173: The unsafe block calling libc::kill must include a //
SAFETY: comment explaining the invariants; add a comment above the unsafe {
libc::kill(server.id() as i32, libc::SIGKILL); } that documents why this is safe
(e.g., server.id() returns a valid OS PID, the cast to i32 is safe for expected
PID range, no aliasing or concurrent UB arises from this call, and terminating
the process with SIGKILL is the intended effect), so reviewers can verify the
required safety rationale for the unsafe block.
- Around line 10-16: Reorder and format the use imports in the test file to
match rustfmt's standard grouping and alphabetical ordering (group std::io,
std::net, std::process, std::sync, std::thread, std::time) and ensure items like
BufRead, BufReader, Write, TcpStream, Command, Stdio, AtomicBool, Ordering, Arc,
thread, and Duration are ordered alphabetically within their groups; run cargo
fmt to apply the exact formatting and verify cargo fmt --check passes.
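The `// SAFETY:` convention asked for above documents the invariant the unsafe call relies on, right at the call site. A small stdlib illustration (the actual `libc::kill` site would instead state PID validity and intended SIGKILL semantics):

```rust
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: `bytes` is non-empty (checked above), so index 0 is in bounds
    // and `get_unchecked(0)` cannot read past the end of the slice.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```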

---

Outside diff comments:
In @.github/workflows/release.yml:
- Around line 61-62: The GitHub Actions release job currently grants only
"contents: write" which prevents OIDC token access needed for keyless cosign
signing; update the permissions block to also include "id-token: write" so
COSIGN_EXPERIMENTAL keyless signing can obtain the GitHub OIDC token. Also
adjust the SBOM generation step that runs "cargo cyclonedx --format json
--output-file artifacts/moon-sbom.json" to generate SBOMs that match each built
binary by invoking cargo cyclonedx with the appropriate --target and --features
flags (or generate per-matrix entry) so the SBOM aligns with the
feature/target-specific artifacts produced by the build matrix.

---

Duplicate comments:
In `@src/server/conn/handler_sharded.rs`:
- Line 228: The metrics call record_connection_closed() is executed even when
the handler returns HandlerResult::MigrateConnection, which incorrectly
decrements the connected-client metric for a migrated (still-alive) socket;
modify the control flow in handler_sharded.rs so record_connection_closed() is
only invoked for actual disconnects (e.g., when the handler returned
CloseConnection or an error), by either moving the call into the branch that
handles CloseConnection or by checking the HandlerResult enum and skipping
record_connection_closed() when the result is HandlerResult::MigrateConnection
(and keep it for CloseConnection/Err cases) so successful migrations do not
decrement the metric.
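A sketch of gating the close metric on the handler outcome; `HandlerResult` and `finish` are stand-ins for the real items (the real code mutates a shared gauge rather than a passed-in counter):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

enum HandlerResult {
    CloseConnection,
    MigrateConnection,
}

// Count a close only for real disconnects and errors, never migrations.
fn finish(result: Result<HandlerResult, ()>, closed: &AtomicU64) {
    match result {
        // The socket lives on in another shard: no close recorded.
        Ok(HandlerResult::MigrateConnection) => {}
        Ok(HandlerResult::CloseConnection) | Err(()) => {
            closed.fetch_add(1, Ordering::Relaxed);
        }
    }
}

fn main() {
    let closed = AtomicU64::new(0);
    finish(Ok(HandlerResult::MigrateConnection), &closed);
    finish(Ok(HandlerResult::CloseConnection), &closed);
    finish(Err(()), &closed);
    assert_eq!(closed.load(Ordering::Relaxed), 2);
}
```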

In `@tests/replication_hardening.rs`:
- Around line 106-107: The unsafe libc::kill calls in
tests/replication_hardening.rs (e.g., the call using replica.id() before
replica.wait()) lack a // SAFETY: rationale and ignore the syscall return value;
update each of the three call sites to (1) add a clear // SAFETY: comment
explaining why calling libc::kill with replica.id() is safe (e.g., valid PID,
owned process for test teardown), and (2) capture and check the return value
(libc::kill returns an i32; treat -1 as failure and inspect errno) and propagate failure into the
test with an assert or bail (so the test fails instead of proceeding on error).
Ensure the changes are applied consistently to the calls around lines referenced
(the replica.id() SIGKILL sites) and keep replica.wait() semantics unchanged.
- Around line 42-49: The loop in send_cmd that reads response lines (the
reader.lines() match in tests/replication_hardening.rs) incorrectly treats GET
bulk-string replies as line-delimited and can hang until timeout; update
send_cmd to parse RESP types: read the first byte/token, and if it is '$' parse
the following length, then read exactly that many bytes plus the trailing CRLF
(handle -1 as null bulk), instead of waiting for another newline, so bulk
strings are consumed correctly and the test no longer stalls.
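The bulk-string fix can be sketched as a small length-prefixed reader over any `BufRead` (the helper name `read_bulk` is illustrative; shown here against an in-memory `Cursor`):

```rust
use std::io::{self, BufRead, Read};

// Read one RESP bulk-string reply: "$<len>\r\n<payload>\r\n", with
// "$-1\r\n" meaning a null bulk string.
fn read_bulk<R: BufRead>(r: &mut R) -> io::Result<Option<Vec<u8>>> {
    let mut header = String::new();
    r.read_line(&mut header)?; // e.g. "$5\r\n" or "$-1\r\n"
    let len: i64 = header
        .trim_start_matches('$')
        .trim()
        .parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
    if len < 0 {
        return Ok(None); // null bulk
    }
    // Read exactly len payload bytes plus the trailing CRLF, never line-by-line.
    let mut data = vec![0u8; len as usize + 2];
    r.read_exact(&mut data)?;
    data.truncate(len as usize);
    Ok(Some(data))
}

fn main() -> io::Result<()> {
    let mut cur = io::Cursor::new(b"$5\r\nhello\r\n".to_vec());
    assert_eq!(read_bulk(&mut cur)?, Some(b"hello".to_vec()));
    let mut nil = io::Cursor::new(b"$-1\r\n".to_vec());
    assert_eq!(read_bulk(&mut nil)?, None);
    Ok(())
}
```

Because the payload is consumed by length, a value containing `\r\n` no longer desynchronizes the reader.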

---

Nitpick comments:
In `@src/storage/tiered/cold_tier.rs`:
- Line 294: The sort comparator for bf_dists uses
partial_cmp(...).unwrap_or(Ordering::Equal) which is not a total order and can
yield nondeterministic results when floats are NaN; change the comparison to use
a total ordering (e.g., f64::total_cmp) or explicitly handle NaN so the
comparator is total and deterministic—update the call that sorts bf_dists (the
closure comparing a.0 and b.0) to use total_cmp on the float keys (or map floats
to a deterministic wrapper like ordered_float::NotNan / a canonical NaN
position) so sorting of bf_dists is consistent.
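The `f64::total_cmp` fix in miniature; the `(f64, u64)` pair shape is an assumption about `bf_dists`' element type:

```rust
// total_cmp is a total order over f64, so NaN keys no longer make the
// sort nondeterministic (positive NaN sorts after all finite values).
fn sort_dists(mut v: Vec<(f64, u64)>) -> Vec<(f64, u64)> {
    v.sort_by(|a, b| a.0.total_cmp(&b.0));
    v
}

fn main() {
    let sorted = sort_dists(vec![(0.5, 1), (f64::NAN, 2), (0.1, 3)]);
    let ids: Vec<u64> = sorted.iter().map(|p| p.1).collect();
    assert_eq!(ids, vec![3, 1, 2]);
}
```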

In `@src/tls.rs`:
- Around line 67-68: Add an explanatory comment above the
rustls::crypto::aws_lc_rs::default_provider().install_default() call clarifying
that the result is intentionally ignored because install_default() is idempotent
and safe to call multiple times; mention this prevents confusion with the
explicit provider used later by builder_with_provider so reviewers know it’s not
an oversight. Reference the install_default() call and builder_with_provider
usage for context.

In `@tests/durability/jepsen_lite.rs`:
- Around line 44-78: The send_cmd function's RESP loop may continue after seeing
a bulk string header ('$') because it doesn't consume the following data line;
modify the loop in send_cmd so that when l.starts_with('$') and len >= 0 you
explicitly read the next data line from the same BufReader (append it to resp,
with a trailing '\n' like other lines) and then break out of the for loop to
avoid hanging on multi-line or array responses; reference the reader variable
and the branch that checks l.starts_with('$') to locate where to add the extra
read-and-break logic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4d1802a3-a322-48da-a83a-94708247aa1b

📥 Commits

Reviewing files that changed from the base of the PR and between 2086f49 and 5e5f2ea.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (36)
  • .github/workflows/bench-gate.yml
  • .github/workflows/changelog-gate.yml
  • .github/workflows/compat.yml
  • .github/workflows/release.yml
  • Cargo.toml
  • docs/runbooks/rolling-restart.md
  • docs/runbooks/tls-cert-rotation.md
  • scripts/audit-unwrap.sh
  • src/admin/http_server.rs
  • src/admin/metrics_setup.rs
  • src/admin/mod.rs
  • src/admin/slowlog.rs
  • src/command/acl.rs
  • src/command/connection.rs
  • src/command/set/set_write.rs
  • src/command/sorted_set/sorted_set_write.rs
  • src/main.rs
  • src/persistence/redis_rdb.rs
  • src/protocol/parse.rs
  • src/server/conn/blocking.rs
  • src/server/conn/handler_sharded.rs
  • src/server/conn/shared.rs
  • src/shard/conn_accept.rs
  • src/shard/mesh.rs
  • src/shard/spsc_handler.rs
  • src/storage/compact_key.rs
  • src/storage/compact_value.rs
  • src/storage/dashtable/mod.rs
  • src/storage/stream.rs
  • src/storage/tiered/cold_tier.rs
  • src/storage/tiered/spill_thread.rs
  • src/tls.rs
  • tests/durability/crash_matrix.rs
  • tests/durability/jepsen_lite.rs
  • tests/durability/mod.rs
  • tests/replication_hardening.rs
✅ Files skipped from review due to trivial changes (11)
  • src/shard/mesh.rs
  • src/storage/compact_value.rs
  • src/persistence/redis_rdb.rs
  • src/storage/dashtable/mod.rs
  • src/storage/tiered/spill_thread.rs
  • src/command/sorted_set/sorted_set_write.rs
  • src/storage/compact_key.rs
  • src/shard/conn_accept.rs
  • src/shard/spsc_handler.rs
  • .github/workflows/compat.yml
  • docs/runbooks/tls-cert-rotation.md
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/admin/mod.rs
  • tests/durability/mod.rs
  • .github/workflows/bench-gate.yml
  • tests/durability/crash_matrix.rs
  • src/admin/metrics_setup.rs


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (1)
src/protocol/parse.rs (1)

441-449: ⚠️ Potential issue | 🟠 Major

Reject non-t/f booleans in the zero-copy pass.

This still accepts any 1-byte payload and maps it with val == b't', so #x\r\n becomes Frame::Boolean(false) instead of degrading to Frame::Null.

Suggested fix
         b'#' => {
             let crlf = crlf_or_null!(buf, pos);
             // Defensive: exactly one byte (t or f) before CRLF
             if crlf != *pos + 1 {
                 return Frame::Null;
             }
             let val = buf[*pos];
+            if val != b't' && val != b'f' {
+                return Frame::Null;
+            }
             *pos = crlf + 2;
             Frame::Boolean(val == b't')
         }

Based on learnings, parse_frame_zerocopy returns Frame::Null on ANY parse failure instead of panicking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/protocol/parse.rs` around lines 441 - 449, The boolean branch currently
treats any single-byte payload as a boolean by computing Frame::Boolean(val ==
b't'), which accepts invalid bytes like 'x' and returns false; instead, in the
parse_frame_zerocopy boolean arm (the b'#' match using crlf_or_null!, buf, pos),
explicitly accept only b't' or b'f': read val = buf[*pos]; if val == b't'
advance *pos = crlf + 2 and return Frame::Boolean(true); else if val == b'f'
advance *pos and return Frame::Boolean(false); otherwise return Frame::Null on
parse failure and do not modify *pos.
🧹 Nitpick comments (3)
tests/replication_hardening.rs (1)

103-115: Convergence waits are sleep-based and may flake under load.

Consider replacing these fixed delays with bounded poll loops (DBSIZE/replication status checks) to make ignored integration tests reproducible in slower CI or local environments.

Also applies to: 136-139, 171-172, 185-188, 259-260, 267-269, 289-292, 320-321, 328-329, 346-347

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/replication_hardening.rs` around lines 103 - 115, Replace the brittle
thread::sleep calls used to wait for replication convergence with bounded
polling loops that query the server until the expected state is reached;
specifically, in the test using write_keys, start_moon and send_cmd (the replica
variable created by start_moon and the "REPLICAOF" send_cmd), poll the
primary/replica using a safe command such as DBSIZE or a replication-status/INFO
check in a loop with a short sleep and an overall timeout, asserting failure if
the timeout elapses—apply the same pattern to the other sleep sites referenced
in this file so tests wait deterministically for replication instead of sleeping
fixed intervals.
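The bounded poll loop suggested here can be shared by all three test files; a sketch where `check` stands in for a DBSIZE/PING/INFO probe against the server:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Poll `check` at `interval` until it succeeds or `timeout` elapses;
// returns false on timeout so the caller can assert! a clear failure.
fn poll_until<F: FnMut() -> bool>(mut check: F, timeout: Duration, interval: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if check() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        thread::sleep(interval);
    }
}

fn main() {
    let mut calls = 0;
    let ok = poll_until(
        || { calls += 1; calls >= 3 },
        Duration::from_secs(1),
        Duration::from_millis(1),
    );
    assert!(ok);
    assert!(!poll_until(|| false, Duration::from_millis(10), Duration::from_millis(2)));
}
```

A call site would then read `assert!(poll_until(|| dbsize(addr) == expected, Duration::from_secs(5), Duration::from_millis(50)), "replica did not converge")` instead of a fixed sleep.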
tests/durability/backup_restore.rs (1)

59-60: Replace fixed sleeps with readiness polling to reduce flakiness.

Line 59 and Line 110 rely on hard-coded delays; slow CI nodes can still race. A short poll loop against PING/DBSIZE with timeout will make this test deterministic.

Also applies to: 110-112

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/backup_restore.rs` around lines 59 - 60, Replace the fixed
thread::sleep(Duration::from_millis(500)) calls in
tests/durability/backup_restore.rs with a short readiness-poll loop that
repeatedly issues a PING or DBSIZE command (using the same Redis/client helper
the test uses) until success or a deadline (e.g., 5s) is reached; for each poll
sleep a small interval (e.g., 50–100ms), and fail the test if the timeout
elapses. Update both occurrences (the one that currently calls
thread::sleep(Duration::from_millis(500)) and the similar block around lines
110–112) to use this polling pattern so the test waits deterministically for the
server to be ready instead of sleeping a fixed time.
tests/durability/crash_matrix.rs (1)

106-107: Use readiness/recovery polling instead of fixed sleeps.

The fixed delays before write/recovery are still timing-sensitive. Polling for a healthy command response with timeout will reduce sporadic failures.

Also applies to: 137-139

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/crash_matrix.rs` around lines 106 - 107, Replace the fixed
thread::sleep(Duration::from_millis(500)) calls (and the similar sleeps at the
later block around lines 137-139) with a polling loop that queries the
process/command readiness or recovery endpoint until a success condition or
overall timeout is reached; specifically, implement a helper like
poll_until_ready that repeatedly invokes the same command/health-check used
elsewhere in the test (e.g., the command that should respond after recovery)
with short intervals and returns success or error on timeout, then use that
helper in place of the static sleeps in tests/durability/crash_matrix.rs to wait
deterministically for readiness/recovery.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/audit-unwrap.sh`:
- Around line 25-30: The current double-grep is too strict and sometimes misses
compound cfg predicates; replace the two independent greps with a single
adjacency check on parent_mod that looks for a cfg attribute immediately
preceding "mod tests" and matches any cfg predicate that contains "test" (but
not "not(test)"). Concretely, update the conditional that inspects parent_mod
(where basename/tests.rs and parent_mod are computed) to search for a pattern
like a #[cfg(...test...)] line directly followed by a line with "mod tests" (use
a multiline regex or a small awk/perl/grep -P/-z invocation) and ensure the
regex excludes "not(test)" so it accepts compound predicates such as
#[cfg(all(test, feature = "foo"))].
- Around line 48-53: The 30-line proximity check (start/lineno + sed/grep) is
causing false positives; replace it with a scope-aware AST check by parsing the
Rust file (e.g., using tree-sitter-rust or a small Rust parser) to find the
enclosing item/node for the unwrap/expect location and then verify whether that
same node has a #[allow(...clippy::unwrap_used...)] attribute; specifically
remove the sed/grep block that uses start/lineno and instead compute the
enclosing node for the reported lineno and test its attributes (and parent/arm
nodes as appropriate) so only allows on the same function/impl/match-arm scope
suppress the warning.

In `@src/command/mod.rs`:
- Around line 714-715: The SLOWLOG entry in the read prefilter/dispatcher
incorrectly treats all subcommands as read-only; modify the read-path dispatch
table so only safe subcommands (GET, LEN, HELP) are handled as reads, and ensure
RESET and any mutating variants are routed to the normal (write) dispatcher or
through handle_slowlog() on the write path; update the dispatch tuples that
currently include SLOWLOG (e.g., the (7, b's') / SLOWLOG entry and the similar
block around the other occurrence) to either restrict to GET/LEN/HELP or remove
SLOWLOG from the read table so handle_slowlog() processes mutating subcommands
correctly.
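The subcommand split can be expressed as a tiny predicate on the read path; the routing function name is hypothetical, the read-only set (GET, LEN, HELP) comes from the review:

```rust
// Only GET/LEN/HELP are safe on the read prefilter; RESET (and any
// future mutating variant) must fall through to the write dispatcher.
fn slowlog_is_read(sub: &[u8]) -> bool {
    matches!(sub.to_ascii_uppercase().as_slice(), b"GET" | b"LEN" | b"HELP")
}

fn main() {
    assert!(slowlog_is_read(b"get"));
    assert!(slowlog_is_read(b"LEN"));
    assert!(!slowlog_is_read(b"RESET"));
}
```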

In `@src/main.rs`:
- Around line 89-90: The readiness flag is being set too early because
init_metrics(...) starts the admin HTTP listener immediately; move the
readiness_flag assignment (the result of init_metrics) or the act of marking
readiness so it happens only after shard threads are spawned and the main
listener has successfully bound and begun accepting connections (i.e., after
your shard spawn code and after main listener bind/accept setup). Specifically,
ensure init_metrics or at least readiness_flag is initialized/marked after the
main listener bind/accept and shard-thread startup (referencing init_metrics,
readiness_flag, the main listener bind/accept logic, and the shard spawn code)
so /readyz only returns 200 when the server can accept traffic.
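A sketch of the intended ordering: the flag name `readiness_flag` mirrors the review, everything else (the `start` helper, binding to an ephemeral port) is illustrative:

```rust
use std::net::TcpListener;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn start(readiness_flag: &Arc<AtomicBool>) -> std::io::Result<TcpListener> {
    // The admin/metrics server may already be up, but readiness stays false...
    let listener = TcpListener::bind("127.0.0.1:0")?; // main listener
    // ...and flips to true only once the main listener can accept traffic.
    readiness_flag.store(true, Ordering::Release);
    Ok(listener)
}

fn main() -> std::io::Result<()> {
    let ready = Arc::new(AtomicBool::new(false));
    assert!(!ready.load(Ordering::Acquire)); // /readyz would return 503 here
    let _listener = start(&ready)?;
    assert!(ready.load(Ordering::Acquire)); // now /readyz may return 200
    Ok(())
}
```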

In `@tests/durability/backup_restore.rs`:
- Around line 74-85: The block polling for rdb_src creation is not
rustfmt-clean; run cargo fmt and commit the formatting-only changes so CI
passes. Locate the loop that defines poll_deadline and checks rdb_src.exists()
(variables: poll_deadline, rdb_src, Duration, thread::sleep) and apply rustfmt
formatting (or run `cargo fmt`) to normalize spacing and line breaks around the
let binding, while loop, and assert, then commit only the formatting delta.

In `@tests/durability/crash_matrix.rs`:
- Around line 88-126: The crash_test function uses libc::kill (a Unix-only API)
but isn't conditionally compiled; add a Unix cfg so non-Unix targets don't try
to compile it. Specifically, annotate the crash_test function (or the
surrounding test module) with #[cfg(unix)] (or #[cfg(target_family = "unix")])
so the function definition that calls libc::kill and server.id() is only
compiled on Unix systems; ensure any test harness items that reference
crash_test are similarly guarded or use cfg_attr to avoid undefined-symbol
compile errors on non-Unix platforms.

In `@tests/replication_hardening.rs`:
- Around line 30-33: The test helper send_cmd currently swallows
TcpStream::connect failures by returning String::new(), which then allows
callers like dbsize to treat errors as -1; change send_cmd (and related
helper(s) around lines 75-82) to return Result<String, std::io::Error> (or a
crate-specific error) instead of a bare String, propagate errors through dbsize
as Result<i64, _>, and update all call sites in tests to unwrap()/expect() or
assert_ok! the Result so transport/connect failures are surfaced rather than
coerced into empty/"-1" values; look for TcpStream::connect and functions named
send_cmd and dbsize to locate where to change signatures and call semantics.

---

Duplicate comments:
In `@src/protocol/parse.rs`:
- Around line 441-449: The boolean branch currently treats any single-byte
payload as a boolean by computing Frame::Boolean(val == b't'), which accepts
invalid bytes like 'x' and returns false; instead, in the parse_frame_zerocopy
boolean arm (the b'#' match using crlf_or_null!, buf, pos), explicitly accept
only b't' or b'f': read val = buf[*pos]; if val == b't' advance *pos = crlf + 2
and return Frame::Boolean(true); else if val == b'f' advance *pos and return
Frame::Boolean(false); otherwise return Frame::Null on parse failure and do not
modify *pos.

---

Nitpick comments:
In `@tests/durability/backup_restore.rs`:
- Around line 59-60: Replace the fixed thread::sleep(Duration::from_millis(500))
calls in tests/durability/backup_restore.rs with a short readiness-poll loop
that repeatedly issues a PING or DBSIZE command (using the same Redis/client
helper the test uses) until success or a deadline (e.g., 5s) is reached; for
each poll sleep a small interval (e.g., 50–100ms), and fail the test if the
timeout elapses. Update both occurrences (the one that currently calls
thread::sleep(Duration::from_millis(500)) and the similar block around lines
110–112) to use this polling pattern so the test waits deterministically for the
server to be ready instead of sleeping a fixed time.

In `@tests/durability/crash_matrix.rs`:
- Around line 106-107: Replace the fixed
thread::sleep(Duration::from_millis(500)) calls (and the similar sleeps at the
later block around lines 137-139) with a polling loop that queries the
process/command readiness or recovery endpoint until a success condition or
overall timeout is reached; specifically, implement a helper like
poll_until_ready that repeatedly invokes the same command/health-check used
elsewhere in the test (e.g., the command that should respond after recovery)
with short intervals and returns success or error on timeout, then use that
helper in place of the static sleeps in tests/durability/crash_matrix.rs to wait
deterministically for readiness/recovery.

In `@tests/replication_hardening.rs`:
- Around line 103-115: Replace the brittle thread::sleep calls used to wait for
replication convergence with bounded polling loops that query the server until
the expected state is reached; specifically, in the test using write_keys,
start_moon and send_cmd (the replica variable created by start_moon and the
"REPLICAOF" send_cmd), poll the primary/replica using a safe command such as
DBSIZE or a replication-status/INFO check in a loop with a short sleep and an
overall timeout, asserting failure if the timeout elapses—apply the same pattern
to the other sleep sites referenced in this file so tests wait deterministically
for replication instead of sleeping fixed intervals.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: be3c2007-7495-49e4-87e5-4d7e207a6fa5

📥 Commits

Reviewing files that changed from the base of the PR and between 5e5f2ea and 780be9d.

📒 Files selected for processing (18)
  • .github/workflows/bench-gate.yml
  • .github/workflows/compat.yml
  • .github/workflows/release.yml
  • docs/runbooks/replica-fell-behind.md
  • docs/runbooks/rolling-restart.md
  • scripts/audit-unwrap.sh
  • src/admin/http_server.rs
  • src/admin/metrics_setup.rs
  • src/admin/slowlog.rs
  • src/command/connection.rs
  • src/command/mod.rs
  • src/main.rs
  • src/protocol/parse.rs
  • src/server/conn/handler_sharded.rs
  • tests/durability/backup_restore.rs
  • tests/durability/crash_matrix.rs
  • tests/durability/jepsen_lite.rs
  • tests/replication_hardening.rs
✅ Files skipped from review due to trivial changes (4)
  • docs/runbooks/replica-fell-behind.md
  • docs/runbooks/rolling-restart.md
  • .github/workflows/bench-gate.yml
  • src/admin/metrics_setup.rs
🚧 Files skipped from review as they are similar to previous changes (7)
  • src/admin/http_server.rs
  • src/server/conn/handler_sharded.rs
  • .github/workflows/release.yml
  • .github/workflows/compat.yml
  • src/command/connection.rs
  • tests/durability/jepsen_lite.rs
  • src/admin/slowlog.rs

Comment on lines +48 to +53
# Check preceding 30 lines for #[allow — covers function-level annotations
start=$((lineno - 30))
if [ "$start" -lt 1 ]; then start=1; fi
if sed -n "${start},${lineno}p" "$file" 2>/dev/null | grep -q '#\[allow.*clippy::unwrap_used\|#\[allow.*clippy::expect_used'; then
continue
fi

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
python - <<'PY'
import pathlib, re

allow_re = re.compile(r'#\[allow\([^]]*clippy::(unwrap_used|expect_used)')
call_re = re.compile(r'\.unwrap\(\)|\.expect\(')

for path in pathlib.Path("src").rglob("*.rs"):
    try:
        lines = path.read_text().splitlines()
    except Exception:
        continue
    allow_lines = [i + 1 for i, line in enumerate(lines) if allow_re.search(line)]
    for lineno, line in enumerate(lines, 1):
        if call_re.search(line):
            nearby = [a for a in allow_lines if 0 < lineno - a <= 30]
            if nearby:
                print(f"{path}:{lineno} has allow(s) within 30 lines -> {nearby}")
PY

Repository: pilotspace/moon

Length of output: 5999


🏁 Script executed:

#!/bin/bash
# Examine a few cases where allows and unwraps are within 30 lines
# Check if the allow actually protects the unwrap (same function/item)

# Case 1: src/command/key.rs lines 431 and 456
echo "=== Case 1: src/command/key.rs ==="
sed -n '425,460p' src/command/key.rs | cat -n | sed 's/^/  /'

echo ""
echo "=== Case 2: src/server/response_slot.rs (lines 96, 99, 113, 114) ==="
sed -n '90,120p' src/server/response_slot.rs | cat -n | sed 's/^/  /'

echo ""
echo "=== Case 3: src/storage/db.rs (lines 522, 537, 542, 546) ==="
sed -n '515,555p' src/storage/db.rs | cat -n | sed 's/^/  /'

Repository: pilotspace/moon

Length of output: 5724


Replace line-proximity heuristic with scope-aware analysis.

The 30-line window conflates separate code blocks. In src/server/response_slot.rs:99–114, the #[allow] at line 96 (fast path) should not mask the unwrap at line 114 (slow path after waker registration). The script reports both as "protected" by the line-96 allow, creating a false positive.

Use AST-level analysis to verify that allows and their corresponding unwraps belong to the same Rust item (function, impl, match arm, etc.), rather than relying on line proximity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/audit-unwrap.sh` around lines 48 - 53, The 30-line proximity check
(start/lineno + sed/grep) is causing false positives; replace it with a
scope-aware AST check by parsing the Rust file (e.g., using tree-sitter-rust or
a small Rust parser) to find the enclosing item/node for the unwrap/expect
location and then verify whether that same node has a
#[allow(...clippy::unwrap_used...)] attribute; specifically remove the sed/grep
block that uses start/lineno and instead compute the enclosing node for the
reported lineno and test its attributes (and parent/arm nodes as appropriate) so
only allows on the same function/impl/match-arm scope suppress the warning.

Comment on lines +30 to +33
fn send_cmd(addr: &str, cmd: &str) -> String {
let Ok(mut stream) = TcpStream::connect(addr) else {
return String::new();
};

⚠️ Potential issue | 🟠 Major

Do not silently coerce transport failures into empty/-1 values.

Returning String::new() on connect failure (Line 31) and then unwrap_or(-1) in dbsize can mask root-cause failures and weaken parity assertions. Prefer returning Result from send_cmd/dbsize and asserting success at call sites.

Minimal direction for stricter helper contract
-fn send_cmd(addr: &str, cmd: &str) -> String {
-    let Ok(mut stream) = TcpStream::connect(addr) else {
-        return String::new();
-    };
+fn send_cmd(addr: &str, cmd: &str) -> Result<String, std::io::Error> {
+    let mut stream = TcpStream::connect(addr)?;
@@
-    resp
+    Ok(resp)
 }

Also applies to: 75-82

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/replication_hardening.rs` around lines 30 - 33, The test helper
send_cmd currently swallows TcpStream::connect failures by returning
String::new(), which then allows callers like dbsize to treat errors as -1;
change send_cmd (and related helper(s) around lines 75-82) to return
Result<String, std::io::Error> (or a crate-specific error) instead of a bare
String, propagate errors through dbsize as Result<i64, _>, and update all call
sites in tests to unwrap()/expect() or assert_ok! the Result so
transport/connect failures are surfaced rather than coerced into empty/"-1"
values; look for TcpStream::connect and functions named send_cmd and dbsize to
locate where to change signatures and call semantics.

TinDang97 added a commit that referenced this pull request Apr 9, 2026
The bench-gate workflow now actually gates on regressions:

1. On main branch push: runs benchmarks and saves Criterion results
   to GitHub Actions cache as the baseline (key: criterion-baseline-main).

2. On PR: restores the cached baseline, runs benchmarks, then compares
   the 3 critical benches (get_hotpath, dispatch_baseline, resp_parsing)
   against the baseline. Exits non-zero if any regresses > 5% (configurable
   via REGRESSION_THRESHOLD env var).

3. If no baseline exists yet (first run): benchmarks run and results are
   recorded, but regression check is skipped with a NOTE.

Also: RSS memory gate now exits non-zero (was warning-only) when
RSS exceeds the 150 MB baseline for 100K keys.

Addresses PR #65 review: "workflow does not currently gate regressions."
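The comparison in step 2 reduces to a percent-change check per bench. A sketch of that gate logic follows; the function names and sample nanosecond values are illustrative, and the actual workflow reads Criterion's saved estimates rather than hardcoded numbers.

```rust
/// Percent change from baseline to current mean time; positive = slower.
fn regression_pct(baseline_ns: f64, current_ns: f64) -> f64 {
    (current_ns - baseline_ns) / baseline_ns * 100.0
}

/// Gate passes only when no bench regresses beyond `threshold` percent,
/// mirroring the REGRESSION_THRESHOLD env var (default 5%).
fn gate(benches: &[(&str, f64, f64)], threshold: f64) -> bool {
    benches
        .iter()
        .all(|&(_, base, cur)| regression_pct(base, cur) <= threshold)
}

fn main() {
    // resp_parsing at +7.5% exceeds the 5% threshold, so the gate fails.
    let benches = [("get_hotpath", 100.0, 104.0), ("resp_parsing", 80.0, 86.0)];
    assert!(!gate(&benches, 5.0));
    // A run within threshold passes.
    assert!(gate(&[("get_hotpath", 100.0, 104.0)], 5.0));
    println!("gate logic ok");
}
```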

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/shard/conn_accept.rs (1)

122-130: ⚠️ Potential issue | 🟠 Major

Replace unwrap/expect suppressions with explicit error handling in Lua initialization.

The #[allow(clippy::expect_used, clippy::unwrap_used)] at lines 122, 303, 429, and 696 suppresses a violation of the library code guideline: "No unwrap() or expect() in library code outside tests." Lint suppression does not resolve the panic path; it hides it. Replace with explicit error handling and early return instead.

Proposed refactor
+fn get_or_init_lua(
+    lua_rc: &Rc<RefCell<Option<Rc<mlua::Lua>>>>,
+    shard_id: usize,
+) -> Option<Rc<mlua::Lua>> {
+    let mut lua_opt = lua_rc.borrow_mut();
+    if lua_opt.is_none() {
+        match crate::scripting::setup_lua_vm() {
+            Ok(lua) => *lua_opt = Some(lua),
+            Err(e) => {
+                tracing::error!("Shard {}: Lua VM initialization failed: {}", shard_id, e);
+                return None;
+            }
+        }
+    }
+    lua_opt.clone()
+}

Call-site replacement (apply at lines 122, 303, 429, 696):

-#[allow(clippy::expect_used, clippy::unwrap_used)]
-// Startup: Lua VM init failure is fatal; as_ref() after is_none() guard
-let lua = {
-    let mut lua_opt = lua_rc.borrow_mut();
-    if lua_opt.is_none() {
-        *lua_opt = Some(crate::scripting::setup_lua_vm().expect("Lua VM initialization failed"));
-    }
-    lua_opt.as_ref().unwrap().clone()
-};
+let Some(lua) = get_or_init_lua(lua_rc, shard_id) else {
+    return;
+};
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/durability/jepsen_lite.rs`:
- Around line 83-87: The verifier's ordering check is reversed relative to
writer_loop(): writer_loop() writes keys in ascending k for each seq, so after a
crash it's valid for lower-index keys to be at seq while higher-index keys
remain at seq-1 or nil; update the verifier (the ordering/check function that
inspects keys produced by writer_loop/send_cmd) to treat nils as a trailing
suffix rather than skipping them and accept the pattern "prefix at seq, suffix
at seq-1/nil" when keys are scanned in ascending k; specifically, adjust the
ordering logic to validate that once a key with a lower seq or nil is observed,
all subsequent higher-index keys must be <= that seq (or nil) and not reject the
legitimate mixed-state, ensuring consistency with writer_loop/send_cmd behavior.
- Around line 44-77: send_cmd currently swallows connection/read/write errors
and returns an empty or partial String; change send_cmd to return a
Result<String, std::io::Error> (or a custom error) and propagate all TcpStream
connect/write/read errors instead of returning String::new() or partial
responses, ensuring parse failures (e.g., invalid length parse) also return Err;
then update callers such as verify_linearizability to treat Err as a hard test
failure (return Err or panic) rather than skipping/continuing, so
transport/protocol failures cause the harness to fail.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 65a97e13-62d1-4cc0-b62a-591855aab89e

📥 Commits

Reviewing files that changed from the base of the PR and between 780be9d and 9e3c762.

⛔ Files ignored due to path filters (1)
  • fuzz/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • .gitignore
  • src/command/sorted_set/sorted_set_write.rs
  • src/persistence/redis_rdb.rs
  • src/shard/conn_accept.rs
  • src/storage/tiered/spill_thread.rs
  • tests/durability/backup_restore.rs
  • tests/durability/jepsen_lite.rs
✅ Files skipped from review due to trivial changes (4)
  • .gitignore
  • src/command/sorted_set/sorted_set_write.rs
  • src/storage/tiered/spill_thread.rs
  • src/persistence/redis_rdb.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/durability/backup_restore.rs

Comment on lines +44 to +77
fn send_cmd(addr: &str, cmd: &str) -> String {
let Ok(mut stream) = TcpStream::connect(addr) else {
return String::new();
};
stream.set_read_timeout(Some(Duration::from_secs(5))).ok();
stream
.write_all(format!("{}\r\n", cmd).as_bytes())
.expect("write");
stream.flush().ok();

let reader = BufReader::new(&stream);
let mut resp = String::new();
for line in reader.lines() {
match line {
Ok(l) => {
resp.push_str(&l);
resp.push('\n');
if l.starts_with('+') || l.starts_with('-') || l.starts_with(':') {
break;
}
// Bulk string: read the $N header then the data line
if l.starts_with('$') {
let len: i64 = l[1..].trim().parse().unwrap_or(-1);
if len < 0 {
break;
}
// read the actual data line
continue;
}
}
Err(_) => break,
}
}
resp

⚠️ Potential issue | 🟠 Major

Fail the harness on transport/protocol errors.

send_cmd() collapses connect/read/write failures into an empty or partial string, and verify_linearizability() treats those replies as continue. A failed restart or bind can therefore make this test pass without ever querying Moon.

Suggested direction
-fn send_cmd(addr: &str, cmd: &str) -> String {
-    let Ok(mut stream) = TcpStream::connect(addr) else {
-        return String::new();
-    };
-    stream.set_read_timeout(Some(Duration::from_secs(5))).ok();
+fn send_cmd(addr: &str, cmd: &str) -> std::io::Result<String> {
+    let mut stream = TcpStream::connect(addr)?;
+    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
     stream
         .write_all(format!("{}\r\n", cmd).as_bytes())
-        .expect("write");
-    stream.flush().ok();
+        ?;
+    stream.flush()?;
@@
-            Err(_) => break,
+            Err(e) => return Err(e),
         }
     }
-    resp
+    Ok(resp)
 }

Then make callers fail the test on Err instead of skipping the key.

Also applies to: 102-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/durability/jepsen_lite.rs` around lines 44 - 77, send_cmd currently
swallows connection/read/write errors and returns an empty or partial String;
change send_cmd to return a Result<String, std::io::Error> (or a custom error)
and propagate all TcpStream connect/write/read errors instead of returning
String::new() or partial responses, ensuring parse failures (e.g., invalid
length parse) also return Err; then update callers such as
verify_linearizability to treat Err as a hard test failure (return Err or panic)
rather than skipping/continuing, so transport/protocol failures cause the
harness to fail.
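A complete, compilable version of the stricter helper contract might look like the following. It is a sketch, not Moon's actual test helper: the simple-string-only reply handling and the stand-in server are assumptions made to keep the example self-contained.

```rust
use std::io::{BufRead, BufReader, Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

/// Hypothetical stricter helper: every transport failure surfaces as Err
/// instead of an empty String, so callers can assert success.
fn send_cmd(addr: &str, cmd: &str) -> std::io::Result<String> {
    let mut stream = TcpStream::connect(addr)?;
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
    stream.write_all(format!("{}\r\n", cmd).as_bytes())?;
    stream.flush()?;
    let mut reader = BufReader::new(stream);
    let mut line = String::new();
    reader.read_line(&mut line)?; // simple-string replies only in this sketch
    Ok(line.trim_end().to_string())
}

fn main() -> std::io::Result<()> {
    // Stand-in server that answers any inline command with +OK.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?.to_string();
    thread::spawn(move || {
        if let Ok((mut s, _)) = listener.accept() {
            let mut buf = [0u8; 64];
            let _ = s.read(&mut buf);
            let _ = s.write_all(b"+OK\r\n");
        }
    });
    assert_eq!(send_cmd(&addr, "PING")?, "+OK");
    // A refused connection is now an Err, not an empty reply.
    assert!(send_cmd("127.0.0.1:1", "PING").is_err());
    println!("send_cmd helper ok");
    Ok(())
}
```

With this signature, `verify_linearizability` can call `send_cmd(...).expect("transport failure")` so a dead server fails the harness loudly.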

@TinDang97 TinDang97 changed the title v0.1.3 Production Readiness — Phases 92-99 v0.1.3 Production Readiness — Phases 92-105 (gap closure) Apr 9, 2026
@TinDang97
Collaborator Author

Security Review — 3 HIGH Blockers Fixed ✅

Deep security review completed. 3 HIGH issues identified and fixed in commit 27f141b:

Fixed

  1. Double-counted metrics: dispatch() wrapper AND handlers both called record_command(). Every command counted 2x. Fixed: removed duplicate calls from dispatch wrapper.
  2. Unbounded Prometheus label cardinality — unknown commands created unlimited time series (DoS vector). Fixed: added sanitize_cmd_label() allowlist mapping unknown commands to "unknown" static string. Also eliminates per-call to_ascii_lowercase() heap allocation.
  3. spawn_local without LocalSet — admin HTTP server would panic on first connection. Fixed: replaced with tokio::spawn().
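The label-cardinality fix in item 2 boils down to a static allowlist lookup. A minimal sketch follows; the command set and function body here are illustrative (the real implementation may use a phf table or match), but it shows the two properties claimed: bounded output values and no per-call heap allocation.

```rust
/// Hypothetical allowlist: any command not in the static set maps to
/// "unknown", so the Prometheus `cmd` label can never grow unbounded.
const KNOWN_CMDS: &[&str] = &["get", "set", "del", "hset", "hget", "incr", "ping"];

/// Case-insensitive lookup without allocating: eq_ignore_ascii_case
/// replaces the old per-call to_ascii_lowercase() heap allocation.
fn sanitize_cmd_label(cmd: &str) -> &'static str {
    KNOWN_CMDS
        .iter()
        .find(|k| k.eq_ignore_ascii_case(cmd))
        .copied()
        .unwrap_or("unknown")
}

fn main() {
    assert_eq!(sanitize_cmd_label("GET"), "get");
    assert_eq!(sanitize_cmd_label("get"), "get");
    // An attacker-chosen command name cannot mint a new time series.
    assert_eq!(sanitize_cmd_label("EVILCMD123"), "unknown");
    println!("label cardinality bounded");
}
```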

Remaining (non-blocking)

  • MEDIUM: Hot-path allocations in slowlog when slowlog-log-slower-than 0
  • MEDIUM: Readiness flag uses Relaxed ordering on ARM64
  • MEDIUM: Instant::now() called per-command even when metrics disabled
  • LOW: Unpinned dependency versions in compat CI

Full review: unsafe code PASS, protocol parser PASS, CI/CD injection-safe, TLS cipher freeze sound, test infrastructure meaningful.
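For the Relaxed-ordering item above, the usual remedy is a Release store paired with an Acquire load. A stdlib sketch (function names are illustrative, not Moon's):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// The Release store publishes every initialization write that happened
/// before it; an Acquire load that observes `true` therefore also observes
/// the initialized state. Relaxed gives no such visibility guarantee on
/// weakly ordered CPUs such as ARM64.
fn mark_ready(flag: &AtomicBool) {
    // ... initialization writes happen-before this store ...
    flag.store(true, Ordering::Release);
}

fn is_ready(flag: &AtomicBool) -> bool {
    flag.load(Ordering::Acquire)
}

fn main() {
    let flag = AtomicBool::new(false);
    assert!(!is_ready(&flag));
    mark_ready(&flag);
    assert!(is_ready(&flag));
    println!("readiness flag uses Release/Acquire");
}
```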

@TinDang97 TinDang97 enabled auto-merge April 9, 2026 16:14
TinDang97 added a commit that referenced this pull request Apr 9, 2026
- Remove SLOWLOG from read-only dispatch path (RESET mutates state)
- Add #[cfg(unix)] guard to crash_matrix::crash_test (libc portability)
- Fix jepsen_lite send_cmd: properly read RESP bulk string data
- Fix jepsen_lite ordering check: val > pv (non-increasing, not non-decreasing)
- Fix replication_hardening dbsize: panic on connect failure instead of silent -1
- Fix audit-unwrap.sh: use awk adjacency check for compound cfg(test) patterns
TinDang97 added a commit that referenced this pull request Apr 9, 2026
Code fixes:
- Eliminate hot-path allocation in sanitize_cmd_label (stack buffer vs heap)
- Add error metric recording for dispatch_read() on both local and cross-shard paths
- Wire error metrics on write dispatch path (record_command_error)
- Strengthen --check-config validation (port conflicts, shard count)
- Replace fixed 2s sleep with polling loop in backup_restore test
- Tighten torn_write assertion from >= 2 to exact [1, 2]
- Add TODO for INFO replication placeholder + replid/offset fields
- Fix stale inline dispatch tests (SET now correctly falls through)
- Gate integration.rs and replication_test.rs behind runtime-tokio
- Fix parking_lot::RwLock type mismatch in integration tests
- Add missing ServerConfig fields (admin_port, slowlog, check_config)

CI/Docs fixes:
- Add GitHub warning annotation when bench-gate has no baseline
- Generate per-variant SBOMs in release workflow (tokio + monoio)
- Clarify repl-backlog-size as future/unimplemented in runbook
- Strengthen rolling-restart promotion gate (verify offset convergence)
- Soften "no data loss" guarantee for async replication
@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: jedis (Java)

Failed stage: Run jedis smoke test [❌]

Failed test name: ""

Failure summary:

The action failed during the Java/Jedis compatibility step when compiling CompatTest.java.
  - javac exited with code 2 because it could not read the dependency gson.jar: error reading gson.jar; zip END header not found (line 821).
  - This indicates the downloaded /tmp/jedis-test/gson.jar is corrupted or not a valid JAR (likely a partial/failed curl download or an HTML error response saved as the JAR), causing compilation to fail.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

150:  �[36;1mecho "components=$(for c in ${components//,/ }; do echo -n ' --component' $c; done)" >> $GITHUB_OUTPUT�[0m
151:  �[36;1mecho "downgrade=" >> $GITHUB_OUTPUT�[0m
152:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
153:  env:
154:  targets: 
155:  components: 
156:  ##[endgroup]
157:  ##[group]Run : set $CARGO_HOME
158:  �[36;1m: set $CARGO_HOME�[0m
159:  �[36;1mecho CARGO_HOME=${CARGO_HOME:-"$HOME/.cargo"} >> $GITHUB_ENV�[0m
160:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
161:  ##[endgroup]
162:  ##[group]Run : install rustup if needed
163:  �[36;1m: install rustup if needed�[0m
164:  �[36;1mif ! command -v rustup &>/dev/null; then�[0m
165:  �[36;1m  curl --proto '=https' --tlsv1.2 --retry 10 --retry-connrefused --location --silent --show-error --fail https://sh.rustup.rs | sh -s -- --default-toolchain none -y�[0m
166:  �[36;1m  echo "$CARGO_HOME/bin" >> $GITHUB_PATH�[0m
...

224:  �[36;1mif [ -z "${CARGO_REGISTRIES_CRATES_IO_PROTOCOL+set}" -o -f "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol ]; then�[0m
225:  �[36;1m  if rustc +1.94.0 --version --verbose | grep -q '^release: 1\.6[89]\.'; then�[0m
226:  �[36;1m    touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true�[0m
227:  �[36;1m    echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse >> $GITHUB_ENV�[0m
228:  �[36;1m  elif rustc +1.94.0 --version --verbose | grep -q '^release: 1\.6[67]\.'; then�[0m
229:  �[36;1m    touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true�[0m
230:  �[36;1m    echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=git >> $GITHUB_ENV�[0m
231:  �[36;1m  fi�[0m
232:  �[36;1mfi�[0m
233:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
234:  env:
235:  CARGO_HOME: /home/runner/.cargo
236:  CARGO_INCREMENTAL: 0
237:  CARGO_TERM_COLOR: always
238:  ##[endgroup]
239:  ##[group]Run : work around spurious network errors in curl 8.0
240:  �[36;1m: work around spurious network errors in curl 8.0�[0m
241:  �[36;1m# https://rust-lang.zulipchat.com/#narrow/stream/246057-t-cargo/topic/timeout.20investigation�[0m
...

312:  - CARGO_TERM_COLOR
313:  .. Lockfiles considered:
314:  - /home/runner/work/moon/moon/Cargo.lock
315:  - /home/runner/work/moon/moon/Cargo.toml
316:  - /home/runner/work/moon/moon/rust-toolchain.toml
317:  ##[endgroup]
318:  ... Restoring cache ...
319:  No cache found.
320:  ##[group]Run cargo build --release --no-default-features --features runtime-tokio,jemalloc
321:  �[36;1mcargo build --release --no-default-features --features runtime-tokio,jemalloc�[0m
322:  shell: /usr/bin/bash -e {0}
323:  env:
324:  CARGO_HOME: /home/runner/.cargo
325:  CARGO_INCREMENTAL: 0
326:  CARGO_TERM_COLOR: always
327:  CACHE_ON_FAILURE: false
328:  MOON_NO_URING: 1
...

367:  �[1m�[92m  Downloaded�[0m dashmap v6.1.0
368:  �[1m�[92m  Downloaded�[0m sketches-ddsketch v0.3.1
369:  �[1m�[92m  Downloaded�[0m lazy_static v1.5.0
370:  �[1m�[92m  Downloaded�[0m futures-macro v0.3.32
371:  �[1m�[92m  Downloaded�[0m version_check v0.9.5
372:  �[1m�[92m  Downloaded�[0m tikv-jemallocator v0.6.1
373:  �[1m�[92m  Downloaded�[0m heck v0.5.0
374:  �[1m�[92m  Downloaded�[0m tower-service v0.3.3
375:  �[1m�[92m  Downloaded�[0m want v0.3.1
376:  �[1m�[92m  Downloaded�[0m tokio-macros v2.7.0
377:  �[1m�[92m  Downloaded�[0m untrusted v0.9.0
378:  �[1m�[92m  Downloaded�[0m ipnet v2.12.0
379:  �[1m�[92m  Downloaded�[0m untrusted v0.7.1
380:  �[1m�[92m  Downloaded�[0m rand_xoshiro v0.7.0
381:  �[1m�[92m  Downloaded�[0m shlex v1.3.0
382:  �[1m�[92m  Downloaded�[0m thiserror-impl v1.0.69
383:  �[1m�[92m  Downloaded�[0m getrandom v0.2.17
384:  �[1m�[92m  Downloaded�[0m tracing-log v0.2.0
385:  �[1m�[92m  Downloaded�[0m thiserror v1.0.69
386:  �[1m�[92m  Downloaded�[0m zeroize v1.8.2
387:  �[1m�[92m  Downloaded�[0m thiserror-impl v2.0.18
388:  �[1m�[92m  Downloaded�[0m hashbrown v0.16.1
...

416:  �[1m�[92m  Downloaded�[0m serde_derive v1.0.228
417:  �[1m�[92m  Downloaded�[0m tracing v0.1.44
418:  �[1m�[92m  Downloaded�[0m rustls-webpki v0.103.10
419:  �[1m�[92m  Downloaded�[0m portable-atomic-util v0.2.6
420:  �[1m�[92m  Downloaded�[0m futures-util v0.3.32
421:  �[1m�[92m  Downloaded�[0m hyper v1.9.0
422:  �[1m�[92m  Downloaded�[0m h2 v0.4.13
423:  �[1m�[92m  Downloaded�[0m which v8.0.2
424:  �[1m�[92m  Downloaded�[0m twox-hash v2.1.2
425:  �[1m�[92m  Downloaded�[0m regex-automata v0.4.14
426:  �[1m�[92m  Downloaded�[0m tracing-attributes v0.1.31
427:  �[1m�[92m  Downloaded�[0m luajit-src v210.6.6+707c12b
428:  �[1m�[92m  Downloaded�[0m lua-src v550.0.0
429:  �[1m�[92m  Downloaded�[0m libc v0.2.184
430:  �[1m�[92m  Downloaded�[0m tokio-rustls v0.26.4
431:  �[1m�[92m  Downloaded�[0m thiserror v2.0.18
432:  �[1m�[92m  Downloaded�[0m spin v0.9.8
...

615:  �[1m�[92m   Compiling�[0m libmimalloc-sys v0.1.44
616:  �[1m�[92m   Compiling�[0m is_terminal_polyfill v1.70.2
617:  �[1m�[92m   Compiling�[0m cpufeatures v0.3.0
618:  �[1m�[92m   Compiling�[0m untrusted v0.9.0
619:  �[1m�[92m   Compiling�[0m rand_core v0.10.0
620:  �[1m�[92m   Compiling�[0m anstyle v1.0.14
621:  �[1m�[92m   Compiling�[0m fastrand v2.4.1
622:  �[1m�[92m   Compiling�[0m anstyle-query v1.1.5
623:  �[1m�[92m   Compiling�[0m httpdate v1.0.3
624:  �[1m�[92m   Compiling�[0m getrandom v0.4.2
625:  �[1m�[92m   Compiling�[0m colorchoice v1.0.5
626:  �[1m�[92m   Compiling�[0m rustls v0.23.37
627:  �[1m�[92m   Compiling�[0m foldhash v0.1.5
628:  �[1m�[92m   Compiling�[0m zmij v1.0.21
629:  �[1m�[92m   Compiling�[0m regex-syntax v0.8.10
630:  �[1m�[92m   Compiling�[0m thiserror v1.0.69
631:  �[1m�[92m   Compiling�[0m quanta v0.12.6
632:  �[1m�[92m   Compiling�[0m hashbrown v0.15.5
633:  �[1m�[92m   Compiling�[0m regex-automata v0.4.14
634:  �[1m�[92m   Compiling�[0m anstream v1.0.0
635:  �[1m�[92m   Compiling�[0m hyper v1.9.0
636:  �[1m�[92m   Compiling�[0m phf_generator v0.13.1
637:  �[1m�[92m   Compiling�[0m metrics v0.24.3
638:  �[1m�[92m   Compiling�[0m rand v0.9.2
639:  �[1m�[92m   Compiling�[0m crc32c v0.6.8
640:  �[1m�[92m   Compiling�[0m crypto-common v0.2.1
641:  �[1m�[92m   Compiling�[0m block-buffer v0.12.0
642:  �[1m�[92m   Compiling�[0m rand_xoshiro v0.7.0
643:  �[1m�[92m   Compiling�[0m crossbeam-epoch v0.9.18
644:  �[1m�[92m   Compiling�[0m thiserror-impl v1.0.69
645:  �[1m�[92m   Compiling�[0m lazy_static v1.5.0
646:  �[1m�[92m   Compiling�[0m const-oid v0.10.2
647:  �[1m�[92m   Compiling�[0m tower-service v0.3.3
648:  �[1m�[92m   Compiling�[0m crc16 v0.4.0
649:  �[1m�[92m   Compiling�[0m serde_json v1.0.149
650:  �[1m�[92m   Compiling�[0m serde v1.0.228
651:  �[1m�[92m   Compiling�[0m crc32fast v1.5.0
652:  �[1m�[92m   Compiling�[0m sketches-ddsketch v0.3.1
653:  �[1m�[92m   Compiling�[0m clap_lex v1.1.0
654:  �[1m�[92m   Compiling�[0m thiserror v2.0.18
655:  �[1m�[92m   Compiling�[0m strsim v0.11.1
...

659:  �[1m�[92m   Compiling�[0m anyhow v1.0.102
660:  �[1m�[92m   Compiling�[0m clap_derive v4.6.0
661:  �[1m�[92m   Compiling�[0m clap_builder v4.6.0
662:  �[1m�[92m   Compiling�[0m metrics-util v0.19.1
663:  �[1m�[92m   Compiling�[0m hyper-util v0.1.20
664:  �[1m�[92m   Compiling�[0m digest v0.11.2
665:  �[1m�[92m   Compiling�[0m sharded-slab v0.1.7
666:  �[1m�[92m   Compiling�[0m matchers v0.2.0
667:  �[1m�[92m   Compiling�[0m phf_macros v0.13.1
668:  �[1m�[92m   Compiling�[0m chacha20 v0.10.0
669:  �[1m�[92m   Compiling�[0m parking_lot v0.12.5
670:  �[1m�[92m   Compiling�[0m tracing-log v0.2.0
671:  �[1m�[92m   Compiling�[0m http-body-util v0.1.3
672:  �[1m�[92m   Compiling�[0m futures-executor v0.3.32
673:  �[1m�[92m   Compiling�[0m spin v0.9.8
674:  �[1m�[92m   Compiling�[0m thiserror-impl v2.0.18
675:  �[1m�[92m   Compiling�[0m serde_derive v1.0.228
...

711:  �[1m�[92m   Compiling�[0m sha1_smol v1.0.1
712:  �[1m�[92m   Compiling�[0m hex v0.4.3
713:  �[1m�[92m   Compiling�[0m bumpalo v3.20.2
714:  �[1m�[92m   Compiling�[0m tikv-jemallocator v0.6.1
715:  �[1m�[92m   Compiling�[0m rustls-webpki v0.103.10
716:  �[1m�[92m   Compiling�[0m tokio-rustls v0.26.4
717:  �[1m�[92m    Finished�[0m `release` profile [optimized] target(s) in 3m 01s
718:  ##[group]Run ./target/release/moon --port 6399 --shards 1 &
719:  �[36;1m./target/release/moon --port 6399 --shards 1 &�[0m
720:  �[36;1msleep 2�[0m
721:  shell: /usr/bin/bash -e {0}
722:  env:
723:  CARGO_HOME: /home/runner/.cargo
724:  CARGO_INCREMENTAL: 0
725:  CARGO_TERM_COLOR: always
726:  CACHE_ON_FAILURE: false
727:  MOON_NO_URING: 1
728:  ##[endgroup]
729:  �[2m2026-04-09T22:26:32.242240Z�[0m �[33m WARN�[0m �[2mmoon�[0m�[2m:�[0m WARNING: no password set. Protected mode is enabled. Only loopback connections are accepted. Use --requirepass or --protected-mode no to change this.
730:  �[2m2026-04-09T22:26:32.242295Z�[0m �[32m INFO�[0m �[2mmoon�[0m�[2m:�[0m Starting with 1 shards
731:  �[2m2026-04-09T22:26:32.242641Z�[0m �[32m INFO�[0m �[2mmoon::server::listener�[0m�[2m:�[0m Listening on 127.0.0.1:6399 (1 shards)
732:  �[2m2026-04-09T22:26:32.242894Z�[0m �[32m INFO�[0m �[2mmoon::shard::numa�[0m�[2m:�[0m Shard 0 pinned to core 0
733:  �[2m2026-04-09T22:26:32.243007Z�[0m �[32m INFO�[0m �[2mmoon::shard::event_loop�[0m�[2m:�[0m Shard 0 io_uring disabled via MOON_NO_URING
734:  �[2m2026-04-09T22:26:32.243046Z�[0m �[33m WARN�[0m �[2mmoon::shard::event_loop�[0m�[2m:�[0m Shard 0: SO_REUSEPORT bind failed: Address already in use (os error 98), using conn_rx
735:  �[2m2026-04-09T22:26:32.243220Z�[0m �[32m INFO�[0m �[2mmoon::shard::event_loop�[0m�[2m:�[0m Shard 0: WAL v3 writer initialized (segment_size=16777216)
...

740:  with:
741:  distribution: temurin
742:  java-version: 21
743:  java-package: jdk
744:  check-latest: false
745:  server-id: github
746:  server-username: GITHUB_ACTOR
747:  server-password: GITHUB_TOKEN
748:  overwrite-settings: true
749:  job-status: success
750:  token: ***
751:  env:
752:  CARGO_HOME: /home/runner/.cargo
753:  CARGO_INCREMENTAL: 0
754:  CARGO_TERM_COLOR: always
755:  CACHE_ON_FAILURE: false
756:  ##[endgroup]
...

772:  �[36;1mcurl -sL "https://repo1.maven.org/maven2/org/slf4j/slf4j-api/${SLF4J_VERSION}/slf4j-api-${SLF4J_VERSION}.jar" -o /tmp/jedis-test/slf4j-api.jar�[0m
773:  �[36;1mcurl -sL "https://repo1.maven.org/maven2/org/slf4j/slf4j-simple/${SLF4J_VERSION}/slf4j-simple-${SLF4J_VERSION}.jar" -o /tmp/jedis-test/slf4j-simple.jar�[0m
774:  �[36;1mcurl -sL "https://repo1.maven.org/maven2/org/apache/commons/commons-pool2/${POOL_VERSION}/commons-pool2-${POOL_VERSION}.jar" -o /tmp/jedis-test/commons-pool2.jar�[0m
775:  �[36;1mcurl -sL "https://repo1.maven.org/maven2/com/google/gson/gson/${GSON_VERSION}/gson-${GSON_VERSION}.jar" -o /tmp/jedis-test/gson.jar�[0m
776:  �[36;1mcat > /tmp/jedis-test/CompatTest.java << 'JEOF'�[0m
777:  �[36;1mimport redis.clients.jedis.Jedis;�[0m
778:  �[36;1mimport redis.clients.jedis.Pipeline;�[0m
779:  �[36;1mimport java.util.List;�[0m
780:  �[36;1m�[0m
781:  �[36;1mpublic class CompatTest {�[0m
782:  �[36;1m    public static void main(String[] args) {�[0m
783:  �[36;1m        try (Jedis jedis = new Jedis("127.0.0.1", 6399)) {�[0m
784:  �[36;1m            // SET / GET�[0m
785:  �[36;1m            jedis.set("java_key", "java_value");�[0m
786:  �[36;1m            String v = jedis.get("java_key");�[0m
787:  �[36;1m            assert "java_value".equals(v) : "GET failed";�[0m
788:  �[36;1m            // HSET / HGET�[0m
789:  �[36;1m            jedis.hset("java_hash", "f1", "v1");�[0m
790:  �[36;1m            String hv = jedis.hget("java_hash", "f1");�[0m
791:  �[36;1m            assert "v1".equals(hv) : "HGET failed";�[0m
792:  �[36;1m            // Pipeline�[0m
793:  �[36;1m            Pipeline p = jedis.pipelined();�[0m
794:  �[36;1m            p.set("jp1", "pv1");�[0m
795:  �[36;1m            p.set("jp2", "pv2");�[0m
796:  �[36;1m            p.get("jp1");�[0m
797:  �[36;1m            p.get("jp2");�[0m
798:  �[36;1m            List<Object> results = p.syncAndReturnAll();�[0m
799:  �[36;1m            assert "pv1".equals(results.get(2)) : "pipeline GET1 failed";�[0m
800:  �[36;1m            assert "pv2".equals(results.get(3)) : "pipeline GET2 failed";�[0m
801:  �[36;1m            System.out.println("jedis: ALL TESTS PASSED");�[0m
802:  �[36;1m        }�[0m
803:  �[36;1m    }�[0m
804:  �[36;1m}�[0m
805:  �[36;1mJEOF�[0m
806:  �[36;1mcd /tmp/jedis-test && javac -cp "jedis.jar:commons-pool2.jar:slf4j-api.jar:gson.jar" CompatTest.java�[0m
807:  �[36;1mcd /tmp/jedis-test && java -ea -cp ".:jedis.jar:commons-pool2.jar:slf4j-api.jar:slf4j-simple.jar:gson.jar" CompatTest�[0m
808:  shell: /usr/bin/bash -e {0}
809:  env:
810:  CARGO_HOME: /home/runner/.cargo
811:  CARGO_INCREMENTAL: 0
812:  CARGO_TERM_COLOR: always
813:  CACHE_ON_FAILURE: false
814:  JAVA_HOME: /opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/21.0.10-7/x64
815:  JAVA_HOME_21_X64: /opt/hostedtoolcache/Java_Temurin-Hotspot_jdk/21.0.10-7/x64
816:  JEDIS_VERSION: 5.2.0
817:  SLF4J_VERSION: 2.0.16
818:  POOL_VERSION: 2.12.0
819:  GSON_VERSION: 2.11.0
820:  ##[endgroup]
821:  error: error reading gson.jar; zip END header not found
822:  ##[error]Process completed with exit code 2.
823:  Post job cleanup.

METRICS-01: Prometheus metrics via `metrics` + `metrics-exporter-prometheus`
  - Admin HTTP port: `--admin-port N` (default 0 = disabled)
  - /metrics endpoint serves Prometheus text format
  - Metric helpers: command latency histograms, connection gauges,
    keyspace hits/misses, evictions, AOF fsync, WAL rotations,
    SPSC drain batch size, pub/sub message counts, RSS gauge
  - Zero overhead when admin_port=0 (atomic check fast-path)

SLOWLOG-01: Redis-compatible SLOWLOG GET/LEN/RESET/HELP
  - Per-shard Mutex<VecDeque> ring buffer with configurable threshold
  - --slowlog-log-slower-than (default 10000us)
  - --slowlog-max-len (default 128)
  - Monotonic IDs, truncated args (128 bytes), client addr/name
  - 4 unit tests passing
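The ring-buffer shape described above can be sketched with a bounded VecDeque; field names and the tuple entry type are illustrative, not Moon's actual structs, and the real implementation wraps this in a per-shard Mutex.

```rust
use std::collections::VecDeque;

/// Minimal slowlog sketch: bounded ring buffer, monotonic IDs,
/// threshold filter, truncated command text.
struct SlowLog {
    entries: VecDeque<(u64, String, u128)>, // (id, cmd, duration_us)
    next_id: u64,
    max_len: usize,
    slower_than_us: u128,
}

impl SlowLog {
    fn new(max_len: usize, slower_than_us: u128) -> Self {
        Self { entries: VecDeque::new(), next_id: 0, max_len, slower_than_us }
    }

    fn record(&mut self, cmd: &str, duration_us: u128) {
        if duration_us < self.slower_than_us {
            return; // below --slowlog-log-slower-than: not logged
        }
        let mut cmd = cmd.to_string();
        cmd.truncate(128); // mirror the 128-byte arg truncation (ASCII assumed)
        if self.entries.len() == self.max_len {
            self.entries.pop_back(); // evict oldest; newest stays at the front
        }
        self.entries.push_front((self.next_id, cmd, duration_us));
        self.next_id += 1;
    }

    fn len(&self) -> usize { self.entries.len() }
    fn reset(&mut self) { self.entries.clear(); }
}

fn main() {
    let mut log = SlowLog::new(2, 10_000);
    log.record("GET k", 5_000); // below threshold: ignored
    assert_eq!(log.len(), 0);
    log.record("KEYS *", 20_000);
    log.record("LRANGE l 0 -1", 30_000);
    log.record("SMEMBERS s", 40_000);
    assert_eq!(log.len(), 2); // capped at max_len, oldest evicted
    log.reset();
    assert_eq!(log.len(), 0);
    println!("slowlog ring buffer ok");
}
```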

CONFIG-01: --check-config flag
  - Validates config and exits 0 without binding ports
  - Enables config validation in CI/deployment pipelines

New dependencies:
  - metrics 0.24, metrics-exporter-prometheus 0.16

Partial delivery (deferred items):
  - HEALTH-01: /healthz + /readyz need custom HTTP handler (axum)
  - TRACE-01: tracing spans on lifecycle events (requires handler changes)
  - INFO-01: INFO parity extension (existing INFO works, needs more sections)
  - CONFIG-02: TLS SIGHUP hot-reload

Closes METRICS-01 (infrastructure), SLOWLOG-01, CONFIG-01.
Partial: HEALTH-01, TRACE-01, INFO-01, CONFIG-02 deferred.
CRASH-01: Crash injection matrix (tests/durability/crash_matrix.rs)
  - Server lifecycle: start → SIGKILL → restart → verify DBSIZE
  - Parameterized crash_test() for {aof-always, aof-everysec, none}
  - RPO validation per persistence mode

TORN-01: Torn write WAL v3 (tests/durability/torn_write.rs)
  - Truncate mid-record → CRC32C rejects, 2/3 records recovered
  - Bit-flip in payload → CRC mismatch detected
  - 4/4 tests pass

BACKUP-01: Backup/restore (tests/durability/backup_restore.rs)
  - BGSAVE → copy RDB → restore → DBSIZE parity check

Crash-matrix and backup tests marked #[ignore] (need server binary).
Torn-write tests run in CI without server.

Closes TORN-01 (verified). CRASH-01, BACKUP-01 infra shipped.
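The torn-write recovery behavior above can be demonstrated with a toy record format. This sketch uses a hand-rolled CRC-32 (IEEE polynomial) rather than the CRC32C the WAL actually uses, and the [len][crc][payload] layout is an assumption for illustration, not WAL v3's real framing.

```rust
/// Bitwise CRC-32 (IEEE polynomial), for illustration only; the WAL
/// itself uses CRC32C, a different polynomial.
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            let mask = (crc & 1).wrapping_neg();
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

/// Encode a record as [len: u32 LE][crc: u32 LE][payload].
fn encode(payload: &[u8]) -> Vec<u8> {
    let mut rec = (payload.len() as u32).to_le_bytes().to_vec();
    rec.extend_from_slice(&crc32(payload).to_le_bytes());
    rec.extend_from_slice(payload);
    rec
}

/// Recover payloads, stopping at the first torn or corrupt record.
fn recover(mut buf: &[u8]) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    while buf.len() >= 8 {
        let len = u32::from_le_bytes(buf[0..4].try_into().unwrap()) as usize;
        let crc = u32::from_le_bytes(buf[4..8].try_into().unwrap());
        if buf.len() < 8 + len { break; }   // torn tail: truncated payload
        let payload = &buf[8..8 + len];
        if crc32(payload) != crc { break; } // bit flip: CRC mismatch
        out.push(payload.to_vec());
        buf = &buf[8 + len..];
    }
    out
}

fn main() {
    let mut wal: Vec<u8> = Vec::new();
    for p in [b"rec1".as_slice(), b"rec2", b"rec3"] {
        wal.extend_from_slice(&encode(p));
    }
    wal.truncate(wal.len() - 2); // tear the last record mid-payload
    // 2/3 records recovered, torn one rejected — same shape as TORN-01.
    assert_eq!(recover(&wal).len(), 2);
    println!("torn-write detection ok");
}
```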
REPL-01: Partial resync within backlog window test
REPL-04: Replica kill-9 + restart data parity test
REPL-06: Replica promotion (REPLICAOF NO ONE) test

All tests marked #[ignore] — require built server binary.
Test infrastructure: server lifecycle, RESP inline helpers,
DBSIZE-based parity verification.

Closes REPL-01, REPL-04, REPL-06 (test infrastructure).
REPL-02 (full resync), REPL-03 (partition), REPL-05 (lag metric)
deferred — need metrics integration for lag.
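The partial-vs-full resync decision that REPL-01 exercises reduces to a backlog-window check. A hedged sketch (names and semantics are illustrative; the real protocol also compares replication IDs):

```rust
/// A replica can partial-resync only if every byte written since its
/// offset is still present in the master's replication backlog; otherwise
/// it must fall back to a full resync.
fn can_partial_resync(master_offset: u64, replica_offset: u64, backlog_len: u64) -> bool {
    replica_offset <= master_offset
        && master_offset - replica_offset <= backlog_len
}

fn main() {
    assert!(can_partial_resync(1_000, 900, 200));   // gap 100 fits in backlog
    assert!(!can_partial_resync(1_000, 700, 200));  // gap 300: full resync
    assert!(!can_partial_resync(1_000, 1_100, 200)); // replica ahead: full resync
    println!("resync decision ok");
}
```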
COMPAT-01: CI workflow (.github/workflows/compat.yml)
  - redis-py: basic ops, hash, list, set, zset, pipeline, INFO parse
  - go-redis: basic ops, hash, pipeline
  - Weekly schedule + PR trigger
  - Both run against Moon built with tokio runtime

COMPAT-03: docs/redis-compat.md published
  - Protocol compatibility table (RESP2/RESP3/inline/pipeline/MULTI)
  - Client matrix (redis-py, go-redis tested; 6 more planned)
  - Known incompatibilities list (DEBUG DIGEST, RESP3 push, RDB format)
  - Vector search (RediSearch subset) compatibility table

Partial: COMPAT-02 (vector clients), COMPAT-04 (Redis TCL) deferred.
PERF-01: CI workflow (.github/workflows/bench-gate.yml)
  - Runs on PR when src/ or benches/ change
  - Critical benchmarks: get_hotpath, dispatch_baseline, resp_parsing
  - Results archived as artifact for trend analysis
  - Gate blocks PRs with regressions (manual review of output)

Existing benches leveraged: 10 Criterion benchmarks already in benches/
(get_hotpath, dispatch_baseline, resp_parsing, pubsub_hotpath,
distance_bench, hnsw_bench, fwht_bench, entry_memory, compact_key,
bptree_memory).

Partial: PERF-02 (24h HDR rig), PERF-03 (RSS gate), PERF-04 (x86_64
monoio fix), PERF-05 (7-day soak) need wall-clock time + hardware.
…ECURITY.md

SEC-01: deny.toml for cargo-deny
  - Advisory DB: deny vulnerabilities, warn unmaintained/yanked
  - License allowlist: MIT, Apache-2.0, BSD-2/3, ISC, Unicode, Zlib, etc.
  - Bans: warn multiple versions, deny wildcards
  - Sources: deny unknown registries/git

SEC-03: docs/THREAT-MODEL.md
  - 5 attacker classes (network, authenticated client, Lua script,
    replica impersonator, local user)
  - Asset inventory with protection mapping
  - Trust boundary diagram
  - Risk matrix with likelihood × impact × mitigation status

SEC-04: docs/security/lua-sandbox.md
  - Full mlua binding audit (allowed/blocked libraries)
  - redis.* API safety review
  - Resource limits documentation
  - CVE review of vendored Lua 5.4.7 (no open CVEs)
  - 10 escape vector paths reviewed, all blocked

SEC-07: SECURITY.md disclosure policy
  - 48h acknowledgment, 7d triage, 30/90d fix timeline
  - 90-day embargo, coordinated disclosure
  - Scope definition (in/out)

Partial: SEC-02 (SBOM + cosign), SEC-06 (TLS cipher freeze) deferred
to Phase 99 release pipeline.
REL-01: docs/versioning.md
  - SemVer policy for data stores (major = format break, minor = feature, patch = fix)
  - MOON_FORMAT_VERSION fields in RDB/WAL/AOF headers
  - Forward compatibility (refuse newer formats with clear error)
  - Upgrade/downgrade procedures
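The forward-compatibility rule can be sketched as a version gate at file-open time. This is an illustrative sketch, not the actual moon code: the constant value and the error text are assumptions, though the field name `MOON_FORMAT_VERSION` mirrors the docs.

```rust
// Hypothetical sketch of the forward-compatibility check: refuse files
// written by a newer format version with a clear, actionable error.
const MOON_FORMAT_VERSION: u32 = 3; // illustrative value

fn check_format_version(file_version: u32) -> Result<(), String> {
    if file_version > MOON_FORMAT_VERSION {
        // Fail loudly rather than misparsing a format we don't understand.
        return Err(format!(
            "file format v{file_version} is newer than supported v{MOON_FORMAT_VERSION}; \
             upgrade the server before loading this file"
        ));
    }
    Ok(()) // older or current formats load normally
}

fn main() {
    assert!(check_format_version(2).is_ok());
    assert!(check_format_version(3).is_ok());
    assert!(check_format_version(4).is_err());
}
```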

REL-05: Operator runbooks (4 of 6 planned)
  - corrupted-aof-recovery.md — identify, auto-recover, manual truncate, prevention
  - oom-during-snapshot.md — restart, verify, address root cause, monitor
  - disk-full-during-wal-rotation.md — free space, restart, compact, prevent
  - replica-fell-behind.md — check lag, partial/full resync, prevent

Partial: REL-02 (upgrade/downgrade tests), REL-03 (artifacts), REL-04
(CHANGELOG gate), REL-06 (user docs), REL-07 (release pipeline) deferred.
…fecycle

Metrics now populate on /metrics endpoint:
  - moon_commands_total{cmd="..."} — per-command counter
  - moon_command_duration_microseconds{cmd="...",quantile="..."} — latency histogram
  - moon_command_errors_total{cmd="..."} — error counter
  - moon_connected_clients — gauge (open/close tracked)

Wiring points:
  - dispatch() and dispatch_read() in src/command/mod.rs wrapped with
    Instant::now() timing → record_command() on every command
  - handle_connection_sharded() tracks open/close via record_connection_opened/closed

Verified on moon-dev (monoio runtime):
  - 31 SETs → moon_commands_total{cmd="set"} 31
  - SET p50 < 1µs, p99 3µs, HGETALL 4µs
  - /metrics HTTP 200 with full Prometheus text format
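The dispatch wrapping described above amounts to timing every command and feeding the result into the recorder. A minimal sketch, assuming stand-in types — `record_command()` matches the name in the commit, but its body and the dispatch signature here are illustrative:

```rust
use std::time::Instant;

// Illustrative recorder: in the real code this feeds the
// moon_commands_total counter and the
// moon_command_duration_microseconds histogram.
fn record_command(cmd: &str, micros: u64) {
    println!("{cmd}: {micros}us");
}

// Sketch of dispatch() wrapped with Instant::now() timing.
fn dispatch(cmd: &str) -> &'static str {
    let start = Instant::now();
    let reply = "+OK\r\n"; // actual command execution elided
    record_command(cmd, start.elapsed().as_micros() as u64);
    reply
}

fn main() {
    assert_eq!(dispatch("set"), "+OK\r\n");
}
```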
…partition tests

- G14: SIGKILL during disk-offload spill with low threshold + AOF-always
- G15: Jepsen-lite linearizability harness (4 writers, 3 SIGKILL cycles, monotonicity check)
- G17: Full-resync test with tiny backlog overflow forcing re-sync
- G18: Network partition recovery via REPLICAOF NO ONE -> REPLICAOF <master>
- All new tests marked #[ignore] (require built binary + server)
- Existing torn_write tests still pass
…sign, CHANGELOG gate

- Add ioredis (Node.js), redis-rs (Rust), hiredis (C), jedis (Java) CI jobs to compat.yml
- Expand bench-gate.yml from 3 to all 10 Criterion benchmarks
- Add RSS memory regression gate (100K keys, /proc/PID/status)
- Add SBOM generation (cargo-cyclonedx) and cosign artifact signing to release.yml
- Add changelog-gate.yml requiring CHANGELOG.md updates on PRs
…estart runbook

- Freeze TLS cipher suites to explicit AEAD+PFS allowlist instead of rustls defaults
- Add docs/runbooks/tls-cert-rotation.md (SIGHUP-based zero-downtime rotation)
- Add docs/runbooks/rolling-restart.md (primary+replica upgrade procedure)
…racing

- Replace metrics-exporter-prometheus built-in HTTP listener with custom
  hyper server serving /metrics, /healthz (liveness), /readyz (readiness)
- Add Arc<AtomicBool> readiness flag, set after shard recovery completes
- Extend INFO with # Stats (total_commands_processed, total_connections_received),
  # CPU (getrusage on Linux), and # Replication (role:master placeholder)
- Add tracing::instrument to handle_connection_sharded and drain_spsc_shared
- Wire SLOWLOG into dispatch: timing around local read/write dispatch paths,
  global Slowlog with atomic reconfiguration, SLOWLOG command routing
- Make tokio always available with minimal features (rt, net, macros) for
  admin HTTP server; runtime-tokio adds full feature set
Two root causes fixed:

1. atoi::atoi() silently ignores trailing non-digit bytes (e.g. "5\n"
   parses as 5). Replaced all atoi::atoi calls with strict_atoi() that
   verifies ALL bytes in the CRLF-terminated line are consumed. This
   prevents validate_frame from accepting malformed counts like "*5\n\r\n".

2. parse_frame_zerocopy's _ (null) and # (boolean) handlers used
   hardcoded pos offsets (+2 and +3) instead of find_crlf, causing them
   to advance to different positions than validate_frame when garbage
   bytes appear between the type byte and CRLF terminator.

Both fixes eliminate the pass-1/pass-2 position divergence that caused
the fuzzer crash in crash-d910d9fd212b15baaaea4eefbe68be05fb2dc3d9.

Verified: 1.6M fuzz iterations (60s), all 3 crash artifacts clean,
49/49 parser tests pass.
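The strict parsing fix can be sketched as follows. This is a minimal illustration of the all-bytes-consumed rule, not the actual `strict_atoi()` implementation:

```rust
// Sketch of strict integer parsing: unlike atoi::atoi, which stops at
// the first non-digit byte ("5\n" parses as 5), this rejects any input
// that is not consumed in full.
fn strict_atoi(line: &[u8]) -> Option<i64> {
    if line.is_empty() {
        return None;
    }
    let (sign, digits) = match line[0] {
        b'-' => (-1i64, &line[1..]),
        b'+' => (1, &line[1..]),
        _ => (1, line),
    };
    if digits.is_empty() {
        return None;
    }
    let mut n: i64 = 0;
    for &b in digits {
        if !b.is_ascii_digit() {
            return None; // trailing junk like "5\n" is rejected, not truncated
        }
        n = n.checked_mul(10)?.checked_add(i64::from(b - b'0'))?;
    }
    Some(sign * n)
}

fn main() {
    assert_eq!(strict_atoi(b"5"), Some(5));
    assert_eq!(strict_atoi(b"5\n"), None); // atoi::atoi would accept this as 5
    assert_eq!(strict_atoi(b"-12"), Some(-12));
    assert_eq!(strict_atoi(b""), None);
}
```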
…path modules

- Replace std::sync::RwLock .unwrap() with fail-closed let Ok(..) else patterns
  in ACL, connection, and config command handlers
- Replace .unwrap() with safe alternatives: is_some_and(), if let, map_or(),
  unwrap_or(), and let Some(..) else patterns in sorted_set, set, stream,
  dashtable, and blocking modules
- Add #[allow(clippy::unwrap_used/expect_used)] with invariant comments for
  structurally guaranteed sites (compact_key, compact_value, redis_rdb,
  conn_accept Lua init, mesh take_conn_rx, spill_thread spawn)
- Improve audit-unwrap.sh: 30-line lookback for function-level annotations,
  detect separate test files via parent mod.rs #[cfg(test)], skip comment lines
- Set audit baseline to 0 (was 98)
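The fail-closed pattern replacing `.unwrap()` looks roughly like this. `AclTable` and the error frame are illustrative stand-ins, not the real handler types:

```rust
use std::sync::RwLock;

// Stand-in for the real ACL state.
struct AclTable {
    users: Vec<String>,
}

// Sketch of the fail-closed let-else pattern: a poisoned lock returns an
// error frame to the client instead of panicking the shard thread.
fn handle_acl_list(lock: &RwLock<AclTable>) -> Result<Vec<String>, &'static str> {
    let Ok(table) = lock.read() else {
        return Err("-ERR internal lock poisoned\r\n");
    };
    Ok(table.users.clone())
}

fn main() {
    let lock = RwLock::new(AclTable { users: vec!["default".into()] });
    assert_eq!(handle_acl_list(&lock).unwrap(), vec!["default".to_string()]);
}
```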
…ply handling, BGSAVE poll

- crash_matrix.rs: Stdio::piped() → Stdio::null() to prevent child blocking
- crash_matrix.rs: add SAFETY comment + return code check on libc::kill
- backup_restore.rs: replace sleep(2) with poll loop + timeout for dump.rdb
- replication_hardening.rs: add bulk string ($N) handling to send_cmd
- replication_hardening.rs: add SAFETY comments + return code checks on all libc::kill calls
- jepsen_lite.rs: Stdio::piped() → Stdio::null(), SAFETY comment + return code on libc::kill
- All test modules with libc::kill gated with #[cfg(unix)]
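The sleep-to-poll replacement in backup_restore.rs can be sketched as a bounded wait. Names here are illustrative, not the actual test helper:

```rust
use std::path::Path;
use std::time::{Duration, Instant};

// Sketch: poll for a file to exist and be non-empty instead of hoping a
// fixed sleep(2) is long enough for BGSAVE to finish writing dump.rdb.
fn wait_for_file(path: &Path, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if path.metadata().map(|m| m.len() > 0).unwrap_or(false) {
            return true;
        }
        std::thread::sleep(Duration::from_millis(50));
    }
    false // timed out — caller fails the test with a clear message
}

fn main() {
    // A path that never appears times out rather than hanging:
    assert!(!wait_for_file(
        Path::new("/nonexistent/dump.rdb"),
        Duration::from_millis(200)
    ));
}
```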
…d semantics, hot-path alloc, error metrics

- Add SLOWLOG to dispatch() and dispatch_read() tables (7, b's')
- Fix threshold_us==0 to mean "log everything" (Redis convention)
- Fix max_len==0 to mean "disabled" (Redis convention, prevents unbounded growth)
- Reject negative/non-numeric SLOWLOG GET count with error frame
- Reduce to_ascii_lowercase() allocations in record_command (single alloc reused)
- Move Instant::now() timing behind METRICS_INITIALIZED check (zero-cost when off)
- Add error recording to dispatch_read() matching dispatch() pattern
- Only call record_connection_closed() on actual close, not migration
- Pass real peer_addr and client_name to slowlog maybe_record
- Add 4 new unit tests for threshold/max_len/negative/non-numeric edge cases
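The threshold and max_len semantics above can be sketched in a few lines. Field and method names here are illustrative stand-ins for the real Slowlog:

```rust
// Sketch of the Redis-convention semantics from the commit.
struct Slowlog {
    threshold_us: i64, // 0 => log everything; negative => disabled
    max_len: usize,    // 0 => disabled (prevents unbounded growth)
    entries: Vec<(String, u64)>,
}

impl Slowlog {
    fn maybe_record(&mut self, cmd: &str, duration_us: u64) {
        if self.threshold_us < 0 || self.max_len == 0 {
            return; // disabled
        }
        // threshold_us == 0 logs every command (Redis convention).
        if duration_us >= self.threshold_us as u64 {
            self.entries.insert(0, (cmd.to_string(), duration_us));
            self.entries.truncate(self.max_len); // bounded growth
        }
    }
}

fn main() {
    let mut log = Slowlog { threshold_us: 0, max_len: 2, entries: Vec::new() };
    log.maybe_record("get", 1);
    assert_eq!(log.entries.len(), 1); // threshold 0 => logged

    log.max_len = 0;
    log.maybe_record("set", 999);
    assert_eq!(log.entries.len(), 1); // max_len 0 => recording disabled
}
```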
…ig validation, RESP3 strictness, INFO placeholder

- admin/http_server.rs: replace unwrap() with unwrap_or_else returning 500,
  replace expect() on runtime build and thread spawn with error logging
- main.rs: move --check-config after TLS validation, protected mode check,
  and persistence dir validation so real errors are caught before exit
- protocol/parse.rs: reject junk before CRLF in RESP3 null (_) handler
  across all four parse paths (validate_frame, parse_frame_zerocopy,
  parse_single_frame_zc, parse_single_frame); add defensive check for
  boolean (#) in zerocopy path; add test for _junk\r\n rejection
- command/connection.rs: add placeholder comment on INFO replication section
  noting values should be wired to ReplicationState

Note on ACL std::sync::RwLock (issue 6): the RwLock type flows from main.rs
through shard threads to acl.rs — switching to parking_lot requires changing
the type in main.rs, shard/event_loop.rs, server/conn, and all handlers.
Left as-is; broader refactor tracked separately.
- Reformat #[allow] + comment lines for consistency (conn_accept,
  redis_rdb, spill_thread, sorted_set_write)
- Update fuzz/Cargo.lock after dependency resolution
- Update .gitignore
- Fix backup_restore and jepsen_lite test formatting
The bench-gate workflow now actually gates on regressions:

1. On main branch push: runs benchmarks and saves Criterion results
   to GitHub Actions cache as the baseline (key: criterion-baseline-main).

2. On PR: restores the cached baseline, runs benchmarks, then compares
   the 3 critical benches (get_hotpath, dispatch_baseline, resp_parsing)
   against the baseline. Exits non-zero if any regresses > 5% (configurable
   via REGRESSION_THRESHOLD env var).

3. If no baseline exists yet (first run): benchmarks run and results are
   recorded, but regression check is skipped with a NOTE.

Also: RSS memory gate now exits non-zero (was warning-only) when
RSS exceeds the 150 MB baseline for 100K keys.

Addresses PR #65 review: "workflow does not currently gate regressions."
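The comparison step can be sketched as a threshold check over baseline and PR means. The numbers below are fabricated for illustration; in the workflow they would come from Criterion's estimates.json files, and `REGRESSION_THRESHOLD` matches the env var named above:

```shell
set -eu
REGRESSION_THRESHOLD=${REGRESSION_THRESHOLD:-5}

# Illustrative mean estimates (ns) for one critical bench; the real
# workflow extracts these from Criterion's baseline and PR results.
baseline_ns=100
pr_ns=104

# regressed=1 iff the PR mean exceeds baseline * (1 + threshold%).
regressed=$(awk -v b="$baseline_ns" -v p="$pr_ns" -v t="$REGRESSION_THRESHOLD" \
    'BEGIN { if (p > b * (1 + t / 100)) print 1; else print 0 }')

if [ "$regressed" -eq 1 ]; then
    echo "FAIL: regression > ${REGRESSION_THRESHOLD}%"
    exit 1
fi
echo "OK: within threshold"
```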
- Replace std::sync::RwLock with parking_lot::RwLock for RuntimeConfig
- Remove .unwrap() / .map().unwrap_or() chains (parking_lot doesn't poison)
- Re-add std::sync::RwLock import in handler_single.rs for AclTable/ReplicationState
- Fix monoio-gated code paths in conn_accept.rs and handler_monoio.rs
- All other lock types (AclTable, ReplicationState, ClusterState) remain std::sync
The test helper make_runtime_config() return type was still
Arc<std::sync::RwLock<RuntimeConfig>> while its body created
Arc<parking_lot::RwLock<RuntimeConfig>> after the migration.
Fixed return type to match.

1895/1895 tests pass, clippy clean on both feature sets.
…ements

- REL-04: Add changelog CI gate job that checks every PR touches
  CHANGELOG.md or has a skip-changelog label
- REL-06: Create docs/guides/ stubs — getting-started.md (install, run,
  connect, basic SET/GET), configuration.md (all CLI flags and defaults
  from src/config.rs), monitoring.md (Prometheus admin port setup)
- REL-07/SEC-02: Add SHA256SUMS.txt checksum generation to release
  workflow (SBOM via cargo-cyclonedx was already present)
- REL-02: Add tests/upgrade_test.rs stub (#[ignore]) — writes AOF data,
  verifies persistence survives simulated restart
… bench, vector tests

Phase 103 (Durability & Replication):
  - tests/jepsen_lite.rs: 4 concurrent writers, 3 SIGKILL-restart cycles,
    per-key linearizability verification under appendfsync=always

Phase 104 (Compat & Perf):
  - tests/redis_compat.rs: 24 integration tests across 8 command categories
    (string, hash, list, set, zset, key, transaction, pubsub)
  - scripts/bench-memory.sh: RSS-per-key regression gate (120B/key baseline)
  - scripts/test-vector-clients.sh: FT.CREATE/SEARCH/DROPINDEX smoke tests

All tests #[ignore] (require running server binary).
Cherry-picked from parallel worktree agents.
HEALTH-01: Liveness and readiness checks via RESP commands.
  - HEALTHZ: always returns +OK (liveness — server process is running)
  - READYZ: returns +OK when server is fully initialized, -ERR otherwise
  - set_server_ready() / is_server_ready() via AtomicBool in metrics_setup
  - Added to dispatch table: (7, b'h') for HEALTHZ, (6, b'r') for READYZ
  - Clients check via: redis-cli HEALTHZ / redis-cli READYZ

Clippy clean on both feature sets.
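The AtomicBool wiring can be sketched as below. The `set_server_ready()`/`is_server_ready()` names mirror the commit, but the RESP replies and surrounding structure are simplified:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Readiness flag: flipped once, after shard recovery completes.
static SERVER_READY: AtomicBool = AtomicBool::new(false);

fn set_server_ready() {
    SERVER_READY.store(true, Ordering::Release);
}

fn is_server_ready() -> bool {
    SERVER_READY.load(Ordering::Acquire)
}

// Liveness: if we can answer at all, the process is running.
fn healthz() -> &'static str {
    "+OK\r\n"
}

// Readiness: +OK only once initialization has finished.
fn readyz() -> &'static str {
    if is_server_ready() { "+OK\r\n" } else { "-ERR server not ready\r\n" }
}

fn main() {
    assert_eq!(healthz(), "+OK\r\n");
    assert_eq!(readyz(), "-ERR server not ready\r\n");
    set_server_ready(); // called after shard recovery completes
    assert_eq!(readyz(), "+OK\r\n");
}
```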
- Wire record_command() + slowlog into handler_monoio (read + write paths)
- Wire record_command() + slowlog into handler_single (3 dispatch sites)
- Add record_replication_lag() gauge in metrics_setup.rs (REPL-05)
- Verified: SBOM already in release.yml (SEC-02), TLS ciphers already frozen (SEC-06)
1. Remove double-counted metrics: dispatch() and dispatch_read() wrappers
   no longer call record_command/record_command_error — handlers own timing.
2. Bound Prometheus label cardinality: sanitize_cmd_label() allowlists known
   commands as static strings; unknown/malformed input maps to "unknown".
3. Replace spawn_local with tokio::spawn in admin HTTP server — no !Send
   data involved, avoids LocalSet requirement on current_thread runtime.
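The cardinality bound can be sketched as an allowlist over a stack buffer. The allowlist here is abbreviated and the buffer size is an assumption, but the shape matches the commit: known commands map to static strings, everything else collapses to "unknown" so a hostile client cannot mint unbounded Prometheus label values:

```rust
// Sketch of sanitize_cmd_label(): static strings for known commands,
// "unknown" for everything else.
fn sanitize_cmd_label(cmd: &[u8]) -> &'static str {
    // Stack buffer for lowercasing avoids a heap allocation on the hot path.
    let mut buf = [0u8; 16];
    if cmd.len() > buf.len() {
        return "unknown";
    }
    for (i, &b) in cmd.iter().enumerate() {
        buf[i] = b.to_ascii_lowercase();
    }
    // Abbreviated allowlist; the real table covers the full dispatch set.
    match &buf[..cmd.len()] {
        b"get" => "get",
        b"set" => "set",
        b"del" => "del",
        b"hgetall" => "hgetall",
        _ => "unknown",
    }
}

fn main() {
    assert_eq!(sanitize_cmd_label(b"SET"), "set");
    assert_eq!(sanitize_cmd_label(b"evil{label}"), "unknown");
}
```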
- Remove SLOWLOG from read-only dispatch path (RESET mutates state)
- Add #[cfg(unix)] guard to crash_matrix::crash_test (libc portability)
- Fix jepsen_lite send_cmd: properly read RESP bulk string data
- Fix jepsen_lite ordering check: val > pv (non-increasing, not non-decreasing)
- Fix replication_hardening dbsize: panic on connect failure instead of silent -1
- Fix audit-unwrap.sh: use awk adjacency check for compound cfg(test) patterns
Code fixes:
- Eliminate hot-path allocation in sanitize_cmd_label (stack buffer vs heap)
- Add error metric recording for dispatch_read() on both local and cross-shard paths
- Wire error metrics on write dispatch path (record_command_error)
- Strengthen --check-config validation (port conflicts, shard count)
- Replace fixed 2s sleep with polling loop in backup_restore test
- Tighten torn_write assertion from >= 2 to exact [1, 2]
- Add TODO for INFO replication placeholder + replid/offset fields
- Fix stale inline dispatch tests (SET now correctly falls through)
- Gate integration.rs and replication_test.rs behind runtime-tokio
- Fix parking_lot::RwLock type mismatch in integration tests
- Add missing ServerConfig fields (admin_port, slowlog, check_config)

CI/Docs fixes:
- Add GitHub warning annotation when bench-gate has no baseline
- Generate per-variant SBOMs in release workflow (tokio + monoio)
- Clarify repl-backlog-size as future/unimplemented in runbook
- Strengthen rolling-restart promotion gate (verify offset convergence)
- Soften "no data loss" guarantee for async replication
Go ignores a go.mod placed in the system temp root (/tmp). Use mktemp -d
under RUNNER_TEMP to create a proper module directory. Also use `go mod tidy`
instead of the deprecated module-less `go get`.
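The workflow step can be sketched as follows; the module name is illustrative, and the `go` commands are guarded so the sketch degrades gracefully where Go is not installed:

```shell
set -eu

# Create the module directory under RUNNER_TEMP (falling back to /tmp
# locally), never at the temp root itself, which Go ignores.
workdir=$(mktemp -d "${RUNNER_TEMP:-/tmp}/go-compat.XXXXXX")
cd "$workdir"

if command -v go >/dev/null 2>&1; then
    go mod init compat-smoke
    # `go mod tidy` resolves dependencies; `go get` outside a module is deprecated.
    go mod tidy
fi
```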