
feat: Hexagonal HNSW — diversity heuristic, A2 codebook, sub-centroid signs, multi-segment fix #36

Merged
pilotspacex-byte merged 22 commits into main from feat/hexagon-hnws
Apr 1, 2026

Conversation

Collaborator

@TinDang97 TinDang97 commented Apr 1, 2026

Summary

Phase 74: Hexagonal HNSW — lattice-inspired improvements to the vector search engine, plus four bug fixes (one critical, three major) discovered during benchmarking.

Hexagonal HNSW Architecture

Features

  • Diversity heuristic (Algorithm 4) — Replaces simple nearest-M neighbor selection with the standard HNSW diversity heuristic from Malkov & Yashunin 2018. Candidates accepted only if closer to query than to all already-selected neighbors. keepPrunedConnections fills remaining slots.
  • A2 hexagonal lattice codebook — 16-cell hexagonal quantizer (TQ4A2) for paired-dimension encoding. Density-adapted 1+6+6+3 ring centroid layout with Lloyd refinement. Available via FT.CREATE ... QUANTIZATION TQ4A2.
  • Sub-centroid signs at insert time — Computes sign bits from actual pre-quantization FWHT values during encoding, doubling effective quantization resolution from 16 to 32 levels in HNSW beam search. Previously only worked in Exact mode.
  • Mixed workload benchmark scripts — bench-mixed-workload.py (5-phase simulation) and bench-mixed-1k-compact.py (multi-compaction stress test).
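The diversity rule in the first bullet can be sketched in a few lines. This is a hedged reimplementation of Algorithm 4 from the paper, not the PR's actual `select_neighbors_heuristic` (whose real signature lives inside `hnsw/build.rs`): a candidate is kept only if it is closer to the query than to every neighbor already selected, and keepPrunedConnections backfills remaining slots from the rejects.

```rust
// Sketch of HNSW Algorithm 4 (Malkov & Yashunin 2018); signature is an assumption.
fn select_neighbors_heuristic(
    candidates: &mut Vec<(f32, u32)>, // (distance to query, node id), any order
    m: usize,                         // max neighbors to select
    dist: impl Fn(u32, u32) -> f32,   // pairwise distance between nodes
    keep_pruned: bool,                // keepPrunedConnections flag
) -> Vec<u32> {
    // Process candidates nearest-first.
    candidates.sort_by(|a, b| a.0.total_cmp(&b.0));
    let mut selected: Vec<(f32, u32)> = Vec::with_capacity(m);
    let mut pruned: Vec<(f32, u32)> = Vec::new();
    for &(d_q, id) in candidates.iter() {
        if selected.len() >= m {
            break;
        }
        // Accept only if closer to the query than to all already-selected neighbors.
        let diverse = selected.iter().all(|&(_, s)| d_q < dist(id, s));
        if diverse {
            selected.push((d_q, id));
        } else {
            pruned.push((d_q, id));
        }
    }
    // keepPrunedConnections: fill remaining slots with the nearest pruned candidates.
    if keep_pruned {
        for &(d_q, id) in &pruned {
            if selected.len() >= m {
                break;
            }
            selected.push((d_q, id));
        }
    }
    selected.into_iter().map(|(_, id)| id).collect()
}
```

On collinear points this rejects everything behind the nearest neighbor (they are all closer to it than to the query), which is exactly the topology improvement the heuristic buys over nearest-M.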

Bug Fixes

  • Global vector ID collision across segments (Critical) — Each compacted immutable segment independently assigned VectorIds 0..N-1. Across multiple compaction cycles, these IDs collided, causing searches to return the wrong documents. Added global_id to MvccHeader with base offset tracking. Without fix: recall drops to 9.7%.
  • TQ-ADC used for HNSW graph construction (Major) — Light mode built graphs using asymmetric TQ-ADC pairwise distance, causing poor topology at 384d+. Replaced with decoded-centroid symmetric L2. Without fix: 41% recall at 384d.
  • Sub-centroid signs never computed in Light mode (Major) — Light mode couldn't compute signs because it doesn't retain raw f32 vectors. Fixed by computing signs at insert time from pre-quantization FWHT values. Without fix: 92.7% recall (vs 96.9% with fix).
  • compact_parallel breaks at high dimensions (Major) — Cell partitioning used only first 2 coordinates (0.5% of variance at 384d), producing random partitions. Disabled until PCA-based partitioning implemented. Without fix: Exact mode gets 68.7% recall.
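The global-ID fix above boils down to base-offset arithmetic: each new mutable segment continues the ID space where the previous one left off instead of restarting at 0. A minimal sketch (the struct is deliberately simplified; the field and method names `global_id_base` and `next_global_id` mirror the PR, the rest is assumed):

```rust
// Simplified stand-in for the PR's mutable segment; only ID bookkeeping is modeled.
struct MutableSegment {
    global_id_base: u64, // first global ID owned by this segment
    len: u64,            // vectors inserted so far
}

impl MutableSegment {
    fn new(global_id_base: u64) -> Self {
        Self { global_id_base, len: 0 }
    }
    /// Globally unique ID for a per-segment internal ID.
    fn global_id(&self, internal_id: u64) -> u64 {
        self.global_id_base + internal_id
    }
    /// First ID the *next* segment should start from (used on compaction swap).
    fn next_global_id(&self) -> u64 {
        self.global_id_base + self.len
    }
    /// Insert one vector and return its global ID.
    fn insert(&mut self) -> u64 {
        let id = self.global_id(self.len);
        self.len += 1;
        id
    }
}
```

Without the base offset, the second segment would hand out 0..N-1 again and collide with the first segment's IDs, which is the 9.7%-recall failure mode described above.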

Benchmark Results (MiniLM 384d, 10K vectors)

Before → After this PR

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Recall@10 | 92.5% | 96.9% | +4.4% |
| Insert | 30,312 v/s | 71,351 v/s | +2.4x |
| Search QPS | 1,126 | 1,239 | +10% |
| Compact time | 6.5s | 3.9s | -40% |

vs Competitors (all-MiniLM-L6-v2, 10K vectors, single-threaded)

| Metric | Moon (TQ4) | Redis 8.6.1 | Qdrant 1.17.1 |
| --- | --- | --- | --- |
| Insert | 71K v/s | 6.6K | 6.5K |
| Search QPS | 1,239 | 3,787 | 470 |
| Recall@10 | 0.969 | 0.983 | 0.985 |
| Recall gap | -1.4% | | |

Moon's insert throughput is 10.8x that of both competitors. The recall gap narrowed to 1.4% (from 5.6%) while maintaining 6x memory compression via TQ-4bit.

Multi-compaction stress test (compact_threshold=1000, 10 cycles)

| Metric | Value |
| --- | --- |
| Steady-state recall (avg all batches) | 97.2% |
| Final recall@10 | 96.9% |
| Compaction spikes | 10 × ~246ms |
| Recall stability | 95-100% across all 10K inserts |

Files Changed

| Area | Files | What |
| --- | --- | --- |
| HNSW build | hnsw/build.rs | Diversity heuristic, use_heuristic flag |
| A2 codebook | turbo_quant/a2_lattice.rs (new) | 16-cell hex quantizer |
| Encoder | turbo_quant/encoder.rs | TqCodeWithSigns, encode_tq_mse_scaled_with_signs |
| Collection | turbo_quant/collection.rs | TQ4A2 variant, A2 code bytes |
| Mutable segment | segment/mutable.rs | Sub-centroid signs at insert, global ID base |
| Immutable segment | segment/immutable.rs | Global ID remap, MvccHeader.global_id |
| Compaction | segment/compaction.rs | Decoded-centroid L2 build, sign remap, parallel disabled |
| Store | store.rs | Global ID base propagation on compaction swap |
| Search command | command/vector_search.rs | ef_search dimension scaling |
| Persistence | persistence/segment_io.rs | MvccHeader global_id field |
| Benchmarks | scripts/bench-mixed-*.py | Mixed workload simulation |

Test plan

  • cargo test --no-default-features --features runtime-tokio,jemalloc — 1540 tests pass
  • cargo clippy -- -D warnings — zero warnings
  • HNSW build tests: BFS connectivity (1000 nodes), recall >= 0.85 on clustered data
  • Compaction tests: parallel compact recall >= 0.90, stitch connectivity
  • A2 lattice tests: roundtrip, distortion comparison, all 16 cells reachable
  • Multi-segment recall: 97.2% steady-state across 10 compaction cycles
  • Competitive benchmark: Moon vs Redis vs Qdrant on real MiniLM embeddings
  • Light vs Exact mode comparison: identical recall (96.9% vs 96.85%)

Summary by CodeRabbit

  • New Features

    • Added TurboQuant4A2 (A2 lattice) quantization and related encoder/decoder for improved compression.
    • HNSW diversity heuristic option for neighbor selection and parallel cell-based graph construction.
  • Improvements

    • Dimension-aware ef_search auto-tuning and MVCC header v2 with global_id/key_hash.
    • Search results now report global vector IDs consistently after compaction.
  • Bug Fixes

    • Fixed payload filtering/global-ID consistency during compaction.
  • Infrastructure

    • New mixed insert/search benchmark scripts and longer CI test timeout.


coderabbitai Bot commented Apr 1, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ea05220a-d3bc-40e2-9164-2b0c66efa50f

📥 Commits

Reviewing files that changed from the base of the PR and between 5f711ed and e279442.

📒 Files selected for processing (10)
  • .github/workflows/ci.yml
  • src/shard/spsc_handler.rs
  • src/vector/hnsw/build.rs
  • src/vector/persistence/segment_io.rs
  • src/vector/segment/compaction.rs
  • src/vector/segment/immutable.rs
  • src/vector/segment/mutable.rs
  • src/vector/store.rs
  • src/vector/turbo_quant/a2_lattice.rs
  • src/vector/turbo_quant/collection.rs

📝 Walkthrough

Walkthrough

Introduces TurboQuant4A2 (A2 hexagonal paired-dimension quantization), heuristic HNSW neighbor selection, MVCC global_id/key_hash tracking with on-disk v2 headers, compaction changes to preserve global IDs, many TurboQuant encoder/decoder additions, and new mixed insert/search benchmark scripts.

Changes

  • Benchmark Suites (scripts/bench-mixed-1k-compact.py, scripts/bench-mixed-workload.py) — Added two new executable benchmark scripts that generate/load MiniLM vectors and run mixed insert/search workloads against Moon, Redis, and Qdrant adapters; measure per-query latency, recall@10, QPS, compaction events, and persist JSON results.
  • A2 Lattice Quantization (src/vector/turbo_quant/a2_lattice.rs, src/vector/turbo_quant/encoder.rs, src/vector/turbo_quant/mod.rs) — Added A2 hexagonal paired-dimension codebook and encoder/decoder (quantize/decode pair indices, Lloyd refinement, sizing); added A2 encoder/decoder functions and TqCodeWithSigns support; exported a2_lattice module.
  • Quantization Config & Collection (src/vector/turbo_quant/collection.rs, src/command/vector_search/mod.rs) — Added TurboQuant4A2 variant (4-bit) and integrated it into collection sizing and FT command parsing/reporting; adjusted ef_search auto-calculation to a dimension-aware formula with a higher clamp.
  • HNSW Graph Construction (src/vector/hnsw/build.rs) — Implemented diversity-based neighbor selection heuristic (select_neighbors_heuristic) with a configurable use_heuristic toggle and tests; insertion/pruning logic updated to use the heuristic when enabled.
  • Segment MVCC & Persistence (src/vector/persistence/segment_io.rs, src/vector/segment/immutable.rs) — On-disk MVCC layout versioned to v2 (adds global_id and key_hash), read logic updated for v1/v2 fallback; MvccHeader extended and search results remapped to global IDs; added helper to fetch key_hash by BFS pos.
  • Mutable/Frozen Segment & Encoding (src/vector/segment/mutable.rs) — Added global_id_base, sub_centroid_signs, sign-byte sizing, and A2/scalar TQ encoding dispatch; freeze snapshots include sign state; brute-force search and append encoding now use global IDs and A2 decoding paths; new APIs to set/read global_id_base.
  • Compaction, Stitching & Store (src/vector/segment/compaction.rs, src/vector/store.rs, src/shard/spsc_handler.rs) — Parallel cell-based HNSW construction and stitching helpers added; compaction uses A2 decoding where needed; MVCC population updated with global_id/key_hash; payload index writes now use global_id; VectorIndex sets the new mutable segment's global_id_base after compaction.
  • TurboQuant Collection Helpers & Tests (src/vector/turbo_quant/collection.rs, src/vector/turbo_quant/encoder.rs tests) — Updated collection metadata sizing for A2 and added unit tests covering config, sizing, encode/decode, and centroid properties.
  • CI Workflow (.github/workflows/ci.yml) — Increased cargo test step timeout from 10 to 15 minutes.
  • Submodule Pointer (.planning) — Updated Git submodule pointer from 9c8405f280e2... to d8cf743c9469....

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

enhancement

Poem

🐰 I nibble centroids in a hexagonal land,
global IDs stitched gently by paw and hand,
HNSW hops with a heuristic cheer,
benches hum, recall draws near,
carrots encoded, compact and grand 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check — ❓ Inconclusive. The PR description is comprehensive and well-structured with summary, detailed feature descriptions, bug fix explanations, benchmarks, files changed, and test plan; however, it does not follow the required template structure. Resolution: reorganize the description to match the template — include only the summary paragraph under '## Summary', add a '## Checklist' section verifying format, clippy, tests, and consistency tests, add '## Performance Impact' with benchmark results, and move design notes to '## Notes'.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed. The PR title 'feat: Hexagonal HNSW — diversity heuristic, A2 codebook, sub-centroid signs, multi-segment fix' accurately summarizes the major features and critical bug fix covered in the changeset.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


@qodo-code-review

Review Summary by Qodo

Hexagonal HNSW with diversity heuristic, A2 lattice quantization, and critical bug fixes

✨ Enhancement 🐞 Bug fix 🧪 Tests


Walkthroughs

Description
• **Diversity heuristic for HNSW neighbor selection** — Implements Algorithm 4 from Malkov &
  Yashunin 2018 to replace simple nearest-M selection, accepting candidates only if closer to query
  than to already-selected neighbors
• **A2 hexagonal lattice quantization (TQ4A2)** — New 16-cell hexagonal quantizer with
  density-adapted 1+6+6+3 ring centroid layout for paired-dimension encoding, achieving 2x compression
  vs scalar TQ4
• **Sub-centroid signs at insert time** — Computes sign bits from pre-quantization FWHT values
  during encoding, doubling effective quantization resolution from 16 to 32 levels in HNSW beam
  search, now working in Light mode
• **Global vector ID collision fix** — Critical bug: each compacted segment independently assigned
  VectorIds 0..N-1, causing collisions across compaction cycles. Fixed by adding global_id and
  key_hash to MvccHeader with base offset tracking (without fix: recall drops to 9.7%)
• **TQ-ADC graph construction fix** — Major bug: Light mode built HNSW graphs using asymmetric
  TQ-ADC pairwise distance instead of decoded-centroid symmetric L2, causing poor topology at 384d+
  (without fix: 41% recall)
• **Sub-centroid signs in Light mode fix** — Major bug: Light mode couldn't compute signs due to
  lack of raw f32 vectors. Fixed by computing signs at insert time from pre-quantization FWHT values
  (without fix: 92.7% vs 96.9% recall)
• **Parallel compaction infrastructure** — Implemented cell-based parallel HNSW graph construction
  with spatial grid partitioning and cross-cell boundary stitching (currently disabled pending
  PCA-based partitioning)
• **Benchmark improvements** — Recall@10 improved from 92.5% to 96.9% (+4.4%), insert throughput
  increased 2.4x to 71,351 v/s, compact time reduced 40% to 3.9s
• **Mixed workload benchmarks** — Added bench-mixed-workload.py (5-phase simulation) and
  bench-mixed-1k-compact.py (multi-compaction stress test) for comprehensive performance validation
Diagram
flowchart LR
  A["Insert Vector"] -->|"compute signs at insert time"| B["Sub-centroid Signs"]
  A -->|"paired-dimension encoding"| C["A2 Lattice Quantization"]
  B --> D["HNSW Graph Construction"]
  C --> D
  D -->|"diversity heuristic"| E["Neighbor Selection"]
  E --> F["HNSW Index"]
  G["Compaction Cycle"] -->|"assign global_id_base"| H["Global ID Tracking"]
  H --> I["Immutable Segment"]
  I -->|"remap to global IDs"| J["Search Results"]
  F --> J


File Changes

1. src/vector/segment/compaction.rs ✨ Enhancement +661/-44

Parallel HNSW compaction with A2 lattice and global ID tracking

• Added PARALLEL_THRESHOLD constant and implemented compact_parallel() function for cell-based
 parallel HNSW graph construction using std::thread::scope
• Implemented assign_to_cells() for spatial grid partitioning and stitch_subgraphs() to merge
 sub-graphs with cross-cell boundary edges
• Added add_neighbor_to_flat() helper for managing neighbor lists during stitching
• Disabled parallel compaction path (set to false) due to 2-coordinate partitioning being
 ineffective at high dimensions; added TODO for PCA-based replacement
• Fixed HNSW graph construction to use decoded-centroid symmetric L2 instead of asymmetric TQ-ADC,
 improving topology at 384d+
• Added A2 codebook support in compaction with conditional decoding paths for A2 vs scalar TQ4
• Enhanced sub-centroid sign computation to handle both A2 and scalar TQ4 paths; added fallback for
 Light mode using insert-time signs
• Added global_id and key_hash fields to MvccHeader for tracking globally unique vector IDs
 across compaction cycles
• Comprehensive test suite for cell partitioning, parallel graph connectivity, cross-cell stitching,
 and recall validation

src/vector/segment/compaction.rs


2. src/vector/hnsw/build.rs ✨ Enhancement +434/-27

Diversity heuristic for HNSW neighbor selection (Algorithm 4)

• Implemented select_neighbors_heuristic() function (Algorithm 4, Malkov & Yashunin 2018) for
 diversity-based neighbor selection with keepPrunedConnections fallback
• Added use_heuristic flag to HnswBuilder with set_use_heuristic() method to toggle between
 diversity heuristic and simple nearest-M selection
• Updated add_neighbor_with_prune() to use diversity heuristic when enabled, with stack-allocated
 SmallVec to avoid heap allocation
• Marked select_neighbors_simple() with #[allow(dead_code)] for backward compatibility
• Added extensive test suite validating heuristic behavior on collinear/diverse candidates,
 reachability, and recall improvement on clustered data

src/vector/hnsw/build.rs


3. src/vector/segment/mutable.rs ✨ Enhancement +156/-37

Sub-centroid signs and global ID tracking in mutable segments

• Added sub_centroid_signs and sub_sign_bytes_per_vec fields to FrozenSegment and
 MutableSegmentInner for storing sign bits computed at insert time
• Added global_id_base field to track base offset for computing globally unique vector IDs
• Updated insert() to compute sub-centroid signs for scalar TQ4 via
 encode_tq_mse_scaled_with_signs() and handle A2 encoding via encode_tq_mse_a2()
• Added set_global_id_base(), next_global_id(), and global_id_base() methods for managing
 global ID offsets
• Updated search methods to use global_id_base + entry.internal_id for globally unique result IDs
• Added A2 codebook support in freeze/decode paths with conditional logic for A2 vs scalar TQ4
• Updated encode_vector() to support both A2 and scalar TQ4 quantization paths

src/vector/segment/mutable.rs


4. src/vector/turbo_quant/a2_lattice.rs ✨ Enhancement +415/-0

A2 hexagonal lattice codebook for paired-dimension quantization

• New module implementing A2 hexagonal lattice quantization with 16 density-optimized centroids for
 bivariate Gaussian
• Implemented A2Codebook struct with quantize_pair(), decode_pair(), and lloyd_refine()
 methods for paired-dimension encoding
• Added RAW_A2_CENTROIDS constant with 1+6+6+3 ring layout optimized for N(0,1) distribution
• Provided centroids_x(), centroids_y() accessors and code_bytes_per_vector() calculator for
 ADC and memory tracking
• Comprehensive test suite validating sigma scaling, quantize/decode roundtrips, cell reachability,
 and distortion comparison vs scalar 2-bit quantization

src/vector/turbo_quant/a2_lattice.rs
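The pair-quantizer interface summarized above reduces to nearest-centroid search over 16 (x, y) points. A toy sketch follows; the method names `quantize_pair`/`decode_pair` mirror the PR, but an illustrative 4x4 grid stands in for the real 1+6+6+3 hexagonal-ring centroids, which are not reproduced here:

```rust
// Toy stand-in for the A2 pair codebook: 16 (x, y) centroids, one nibble per pair.
struct A2Codebook {
    centroids: [(f32, f32); 16],
}

impl A2Codebook {
    /// Index of the centroid nearest to (x, y) under squared L2.
    fn quantize_pair(&self, x: f32, y: f32) -> u8 {
        let mut best = 0u8;
        let mut best_d = f32::INFINITY;
        for (i, &(cx, cy)) in self.centroids.iter().enumerate() {
            let d = (x - cx).powi(2) + (y - cy).powi(2);
            if d < best_d {
                best_d = d;
                best = i as u8;
            }
        }
        best
    }
    /// Inverse lookup: 4-bit code back to its centroid pair.
    fn decode_pair(&self, code: u8) -> (f32, f32) {
        self.centroids[(code & 0x0F) as usize]
    }
}
```

Two such 4-bit pair codes pack into one byte, which is where the padded_dim / 4 sizing comes from.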


5. src/vector/turbo_quant/encoder.rs ✨ Enhancement +283/-0

A2 lattice encoding/decoding and sub-centroid sign computation

• Added TqCodeWithSigns struct to hold TQ codes alongside per-coordinate sign bits computed at
 encode time
• Implemented encode_tq_mse_scaled_with_signs() to compute sub-centroid signs (doubling effective
 quantization resolution from 16 to 32 levels)
• Implemented encode_tq_mse_a2() for A2 hexagonal lattice encoding with paired-dimension
 quantization and nibble-packing
• Implemented decode_tq_mse_a2() for inverse A2 decoding via inverse FWHT
• Added comprehensive test suite for A2 encoding including byte-length validation, roundtrip
 accuracy, zero-vector handling, and norm preservation

src/vector/turbo_quant/encoder.rs
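The sub-centroid sign computation described above amounts to one comparison per coordinate against its decoded centroid, packed into bits. A simplified sketch with an assumed bit layout (the PR's exact packing inside `encode_tq_mse_scaled_with_signs` may differ):

```rust
// For each coordinate, record whether the pre-quantization value lies above or
// below its assigned centroid; at search time this half-cell offset refines the
// decoded value, doubling 16 levels to an effective 32.
fn compute_signs(values: &[f32], decoded_centroids: &[f32]) -> Vec<u8> {
    let mut signs = vec![0u8; (values.len() + 7) / 8];
    for (i, (&v, &c)) in values.iter().zip(decoded_centroids).enumerate() {
        if v >= c {
            signs[i / 8] |= 1 << (i % 8); // bit set = value above its centroid
        }
    }
    signs
}
```

Because the inputs are the pre-quantization FWHT values, this works at insert time even in Light mode, where raw f32 vectors are not retained afterwards.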


6. src/vector/turbo_quant/collection.rs ✨ Enhancement +102/-2

A2 hexagonal lattice quantization configuration support

• Added TurboQuant4A2 variant to QuantizationConfig enum (value 6) for hexagonal lattice
 quantization
• Updated bits() method to return 4 for A2 (same as scalar TQ4)
• Updated is_turbo_quant() to include A2 in the TurboQuant family
• Modified code_bytes_per_vector() to return padded_dim / 4 for A2 (2x more compressed than
 scalar TQ4's padded_dim / 2)
• Added comprehensive test suite validating A2 configuration, code byte calculations, checksum
 differentiation, and backward compatibility with TQ4

src/vector/turbo_quant/collection.rs
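The sizing change above is simple arithmetic: scalar TQ4 spends one nibble per dimension, A2 one nibble per dimension pair. Sketched as a standalone helper (illustrative, not the PR's actual method on QuantizationConfig):

```rust
// Code bytes per vector: scalar TQ4 packs 2 dims/byte, A2 packs 4 dims/byte.
fn code_bytes_per_vector(padded_dim: usize, is_a2: bool) -> usize {
    if is_a2 {
        padded_dim / 4 // one 4-bit pair index per 2 dims
    } else {
        padded_dim / 2 // one 4-bit nibble per dim
    }
}
```

At 384d this is 96 bytes vs 192 bytes per vector, the 2x compression claimed for TQ4A2.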


7. src/vector/segment/immutable.rs 🐞 Bug fix +28/-0

Global ID remapping for cross-segment search result consistency

• Added global_id and key_hash fields to MvccHeader for tracking globally unique vector
 identifiers across segments
• Implemented remap_to_global_ids() method to convert per-segment internal IDs to globally unique
 IDs in search results
• Updated search() and search_filtered() to call remap_to_global_ids() before returning
 candidates
• Added key_hash_for_bfs_pos() accessor for retrieving key hash from BFS-reordered positions

src/vector/segment/immutable.rs
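The remap step described above is an offset shift over the result list. A simplified sketch with an assumed `(distance, id)` result layout (the PR's `remap_to_global_ids` operates on its own result type):

```rust
// Shift per-segment internal IDs into the global ID space before returning
// candidates, so IDs stay unique across compacted segments.
fn remap_to_global_ids(results: &mut [(f32, u64)], global_id_base: u64) {
    for (_, id) in results.iter_mut() {
        *id += global_id_base;
    }
}
```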


8. src/vector/persistence/segment_io.rs ✨ Enhancement +6/-0

A2 quantization and global ID persistence support

• Added TurboQuant4A2 serialization/deserialization support in quant_to_string() and
 string_to_quant() functions
• Updated read_immutable_segment() to initialize global_id and key_hash fields in MvccHeader
 during persistence reload
• Updated test helper to include global_id and key_hash in MvccHeader construction

src/vector/persistence/segment_io.rs


9. src/command/vector_search.rs ✨ Enhancement +15/-3

TQ4A2 quantization support and dimension-adaptive ef_search tuning

• Added support for TQ4A2 quantization config option in ft_create command parsing
• Updated error message to include TQ4A2 in list of supported quantization methods
• Enhanced ef_search auto-computation with dimension-aware boosting (50% boost at 384d+, 100% at
 768d+)
• Changed base ef calculation from k*15 to k*20 with improved clamping range (200-1000)

src/command/vector_search.rs
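The dimension-aware ef_search formula described above can be reconstructed roughly as follows; the base factor, boost thresholds, and clamp range are read off this summary, and the exact rounding order in the PR may differ:

```rust
// Assumed reconstruction: base ef = k * 20, boosted 50% at 384d+ and 100% at
// 768d+, then clamped to [200, 1000].
fn auto_ef_search(k: usize, dim: usize) -> usize {
    let mut ef = k * 20;
    if dim >= 768 {
        ef *= 2; // 100% boost for very high dimensions
    } else if dim >= 384 {
        ef = ef * 3 / 2; // 50% boost
    }
    ef.clamp(200, 1000)
}
```

The boost compensates for quantization error growing with dimension: a wider beam recovers candidates that coarse 4-bit distances mis-rank.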


10. src/vector/store.rs 🐞 Bug fix +11/-4

Global vector ID collision fix across compacted segments

• Added global_id_base tracking to ensure unique vector IDs across multiple compacted segments
• Modified compaction swap logic to compute next_global_id() from mutable segment before creating
 new segment
• New mutable segment now calls set_global_id_base() with continuation ID to prevent ID collisions
• Improved comments explaining global ID continuity across segment boundaries

src/vector/store.rs


11. src/vector/turbo_quant/mod.rs ✨ Enhancement +1/-0

A2 hexagonal lattice quantizer module integration

• Added new module declaration pub mod a2_lattice for A2 hexagonal lattice codebook
• Module provides 16-cell hexagonal quantizer (TQ4A2) implementation with density-adapted ring
 layout

src/vector/turbo_quant/mod.rs


12. scripts/bench-mixed-workload.py 🧪 Tests +561/-0

Mixed workload benchmark with concurrent insert and search simulation

• New comprehensive 5-phase mixed insert+search simulation benchmark (561 lines)
• Generates or caches MiniLM embeddings (10K vectors, 200 queries) with ground truth
• Implements adapters for Moon, Redis, and Qdrant with unified interface for insert/search
 operations
• Phases: bulk load (1K), steady-state (9K inserts + 4.5K searches), search-heavy, burst insert,
 final recall measurement
• Tracks recall, latency percentiles (p50/p99), compaction interference spikes, and QPS metrics
• Generates comparison table and detailed per-system metrics report

scripts/bench-mixed-workload.py


13. scripts/bench-mixed-1k-compact.py 🧪 Tests +364/-0

Multi-compaction stress test with per-batch recall tracking

• New stress test benchmark for multi-compaction scenarios (364 lines)
• Simulates 10K vector insertion with compact_threshold=1000 triggering ~10 compaction cycles
• Pattern: insert 100 vectors → search 10 queries, repeated 100 times
• Tracks per-batch recall, latency, and detects compaction events (>100ms latency spikes)
• Compares Moon vs Redis steady-state recall across multiple immutable segments
• Reports compaction event timeline, final recall@10, QPS, and latency distribution

scripts/bench-mixed-1k-compact.py


14. .planning Miscellaneous +1/-1

Subproject commit update

• Updated subproject commit reference from 9c8405f280e23e9b44265dcb64b868ca5bfd18d2 to
 4f89e8a57b808199ead40583e2c7ef30fa0e541a

.planning




@qodo-code-review

qodo-code-review Bot commented Apr 1, 2026

Code Review by Qodo




Action required

1. compact_parallel uses .expect() — 📘 Rule violation · ☼ Reliability
Description
New .expect() calls can panic in production if a thread panics or if A2/scalar codebook
assumptions are violated during compaction. This violates the no-panic policy for library code and
can crash the server during compaction/index build.
Code

src/vector/segment/compaction.rs[R126-158]

+    // Build sub-graphs in parallel using std::thread::scope
+    let sub_graphs: Vec<(crate::vector::hnsw::graph::HnswGraph, Vec<usize>)> =
+        std::thread::scope(|s| {
+            let handles: Vec<_> = cell_assignments
+                .iter()
+                .enumerate()
+                .filter(|(_, cell)| !cell.is_empty())
+                .map(|(cell_idx, cell)| {
+                    let cell = cell.clone();
+                    let cell_seed = seed.wrapping_add(cell_idx as u64 * 0x9E37_79B9_7F4A_7C15);
+                    s.spawn(move || {
+                        let cell_vecs: Vec<&[f32]> =
+                            cell.iter().map(|&idx| live_f32[idx]).collect();
+                        let cell_n = cell_vecs.len();
+
+                        let mut builder =
+                            HnswBuilder::new(HNSW_M, HNSW_EF_CONSTRUCTION, cell_seed);
+                        for _ in 0..cell_n {
+                            builder.insert(|a: u32, b: u32| {
+                                (dist_table.l2_f32)(
+                                    cell_vecs[a as usize],
+                                    cell_vecs[b as usize],
+                                )
+                            });
+                        }
+                        let graph = builder.build(bytes_per_code as u32);
+                        (graph, cell)
+                    })
+                })
+                .collect();
+
+            handles.into_iter().map(|h| h.join().expect("cell build thread panicked")).collect()
+        });
Evidence
PR Compliance ID 7 forbids .unwrap()/.expect() in non-test library code. compact_parallel()
uses .expect() on thread join and additional .expect() calls were added for A2/scalar codebook
access in compaction logic, which can panic if invariants are ever violated.

CLAUDE.md
src/vector/segment/compaction.rs[126-158]
src/vector/segment/compaction.rs[500-503]
src/vector/segment/compaction.rs[688-689]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`src/vector/segment/compaction.rs` introduces `.expect()` in non-test code (thread join and codebook access), which can panic and crash the server.
## Issue Context
Compliance requires avoiding `.unwrap()`/`.expect()` in library code. Even if some paths are currently disabled (e.g., `if false && ...`), the code still compiles and can be re-enabled later, so panics should be eliminated now.
## Fix Focus Areas
- src/vector/segment/compaction.rs[126-158]
- src/vector/segment/compaction.rs[500-503]
- src/vector/segment/compaction.rs[688-689]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. MutableSegment adds .expect() — 📘 Rule violation · ☼ Reliability
Description
New .expect() calls in MutableSegment decoding paths can panic in production if quantization
mode assumptions are violated. This violates the library no-panic requirement and risks crashing on
query-time residual/norm computations.
Code

src/vector/segment/mutable.rs[R642-649]

+            let decoded = if is_a2 {
+                crate::vector::turbo_quant::encoder::decode_tq_mse_a2(
+                    &tq_code, signs, a2_cb.as_ref().expect("A2 codebook"), dim, &mut work_buf,
+                )
+            } else {
+                crate::vector::turbo_quant::encoder::decode_tq_mse_scaled(
+                    &tq_code, signs, centroids_opt.expect("scalar codebook"), dim, &mut work_buf,
+                )
Evidence
PR Compliance ID 7 forbids .unwrap()/.expect() in non-test library code. The modified decoding
logic introduces .expect("A2 codebook") and .expect("scalar codebook") in
src/vector/segment/mutable.rs, which can panic if the Option invariants ever break.

CLAUDE.md
src/vector/segment/mutable.rs[642-650]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`src/vector/segment/mutable.rs` introduces `.expect()` in library code for codebook access, which can panic.
## Issue Context
The code already branches on `is_a2`, so it can be structured to avoid `Option` + `.expect()` entirely (e.g., create the needed codebook in the branch and pass it by reference).
## Fix Focus Areas
- src/vector/segment/mutable.rs[642-650]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
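One panic-free restructuring consistent with the guidance above is to propagate a typed error instead of unwrapping the Option. The names here (`CodebookMissing`, `decode_with`) are illustrative, not the PR's actual types:

```rust
// Illustrative error type for the broken-invariant case.
#[derive(Debug, PartialEq)]
struct CodebookMissing;

// Return a Result instead of calling .expect(): a missing codebook becomes a
// recoverable error at the call site rather than a server-crashing panic.
fn decode_with(code: u8, centroids: Option<&[f32; 16]>) -> Result<f32, CodebookMissing> {
    let cb = centroids.ok_or(CodebookMissing)?;
    Ok(cb[(code & 0x0F) as usize])
}
```

The `?`-based propagation keeps call sites as terse as the `.expect()` version while satisfying the no-panic policy; alternatively, as the prompt suggests, restructuring the `is_a2` branch so each arm owns its codebook removes the Option entirely.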


3. A2 search panics/incorrect — 🐞 Bug · ≡ Correctness
Description
TurboQuant4A2 disables TQ-ADC in MutableSegment::brute_force_search_filtered but then uses
query_state.unwrap() in the non-ADC branch, which panics in Light mode where query_state is None;
even in Exact mode the scoring assumes scalar TQ codes of size padded_dim/2, which A2 violates
(padded_dim/4), producing incorrect distances.
Code

src/vector/segment/mutable.rs[R270-286]

+        // A2 collections don't have a scalar codebook; TQ-ADC not applicable.
+        let is_a2 = self.collection.quantization == QuantizationConfig::TurboQuant4A2;
+        // Placeholder codebook for A2 (unused in L2 fallback path).
+        let a2_placeholder = [0.0f32; 16];
+        let centroids: &[f32; 16] = if is_a2 { &a2_placeholder } else { self.collection.codebook_16() };
  let mut heap: BinaryHeap<DistF32> = BinaryHeap::with_capacity(k + 1);
  // Prepare FWHT-rotated query for TQ-ADC path (Light mode or fallback)
-        let use_tq_adc = query_state.is_none()
-            || self.collection.build_mode
-                == crate::vector::turbo_quant::collection::BuildMode::Light;
+        // A2 collections always use TQ-ADC with decoded-vector L2 (no scalar ADC).
+        let use_tq_adc = !is_a2
+            && (query_state.is_none()
+                || self.collection.build_mode
+                    == crate::vector::turbo_quant::collection::BuildMode::Light);
  let q_rotated: Vec<f32> = if use_tq_adc {
      let mut buf = vec![0.0f32; padded];
      buf[..dim].copy_from_slice(query_f32);
Evidence
In mutable brute-force search, A2 forces use_tq_adc=false, which routes into the else-branch that
unwraps query_state (None in Light mode) and calls TurboQuant_prod scoring that assumes scalar
nibble codes. The TurboQuant_prod scorer explicitly documents/implements tq_code as padded_dim/2
bytes and indexes q_rotated at j*2 and j*2+1, which cannot represent A2’s paired-dimension encoding
(padded_dim/4 bytes).

src/vector/segment/mutable.rs[270-348]
src/vector/turbo_quant/collection.rs[192-208]
src/vector/turbo_quant/inner_product.rs[302-319]
src/vector/turbo_quant/a2_lattice.rs[9-18]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
TurboQuant4A2 is encoded as 1×4-bit index per *pair* of dimensions (padded_dim/4 bytes), but search/scoring code paths assume scalar TQ4 layout (padded_dim/2 bytes). In Light mode, mutable search can panic due to `query_state.unwrap()` when A2 disables TQ-ADC.
### Issue Context
- Mutable brute-force search must work in Light mode (query_state=None) and must compute distances consistent with the A2 encoding.
- TurboQuant_prod `score_l2_prod` explicitly assumes scalar nibble-packed codes.
### Fix Focus Areas
- src/vector/segment/mutable.rs[270-348]
- src/vector/turbo_quant/inner_product.rs[302-319]
- src/vector/segment/immutable.rs[89-164]
### What to change
1. Decide on an A2 search/scoring strategy:
- Preferred: implement an A2-aware ADC/LUT distance for both brute-force and HNSW traversal (pair-LUT over 16 A2 centroids).
- Acceptable correctness fallback: decode A2 codes to rotated f32 (using A2Codebook) and compute symmetric L2 against rotated query (slow but correct).
2. Ensure Light mode never calls `query_state.unwrap()` for A2.
3. Ensure ImmutableSegment’s HNSW traversal/scoring (including filtered/sub-centroid paths) uses an A2-aware distance, or explicitly reject `QUANTIZATION TQ4A2` until implemented.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


4. Filter bitmap ID mismatch — 🐞 Bug · ≡ Correctness
Description
Filtered search and payload indexing still use per-segment internal_id, but mutable search results
and immutable results are now remapped to global_id; this makes filtered search incorrect (and
post-filtering can drop valid results) once global_id_base is non-zero after compaction.
Code

src/vector/segment/mutable.rs[R331-343]

          )
      };
+            let global_id = inner.global_id_base + entry.internal_id;
      if heap.len() < k {
-                heap.push(DistF32(dist, entry.internal_id));
+                heap.push(DistF32(dist, global_id));
      } else if let Some(&DistF32(worst, _)) = heap.peek() {
          if dist < worst {
              heap.pop();
-                    heap.push(DistF32(dist, entry.internal_id));
+                    heap.push(DistF32(dist, global_id));
          }
      }
  }
Evidence
PayloadIndex is populated using the internal_id returned by MutableSegment::append, while mutable
filtered search checks allow_bitmap against entry.internal_id but returns
VectorId(global_id_base+internal_id). Immutable filtered HNSW checks allow_bitmap against
graph.to_original(...) (local per-segment IDs) and then remaps results to global IDs. These ID
spaces diverge after compaction when global_id_base is advanced, so the bitmap and returned IDs no
longer align.

src/shard/spsc_handler.rs[952-987]
src/vector/segment/mutable.rs[300-348]
src/vector/hnsw/search.rs[439-532]
src/vector/segment/holder.rs[186-207]
src/vector/segment/immutable.rs[134-178]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Filter bitmaps (PayloadIndex) are built with per-segment `internal_id`, while search results now use `global_id`. This breaks filtered searches after the first compaction (global_id_base != 0).
### Issue Context
- `auto_index_hset` inserts payload values using the ID returned by `mutable.append(...)`.
- Mutable filtered search checks `allow_bitmap.contains(entry.internal_id)` but returns `VectorId(global_id)`.
- Immutable filtered HNSW checks `allow_bitmap` in the graph’s local ID space, then remaps to `global_id`.
### Fix Focus Areas
- src/shard/spsc_handler.rs[952-987]
- src/vector/segment/mutable.rs[300-348]
- src/vector/hnsw/search.rs[439-532]
- src/vector/segment/immutable.rs[134-178]
### What to change
1. Define a single canonical ID space for filtering and search results (recommended: global_id).
2. When inserting into PayloadIndex, use `global_id = mutable.global_id_base() + internal_id` (or change `append()` to return global_id).
3. In mutable filtered search, check `allow_bitmap.contains(global_id)` (not internal_id).
4. For immutable filtered HNSW, either:
- Precompute a per-segment bitmap in *local* ID space by scanning mvcc/global_id and translating the global allow-bitmap to local orig-ids, then pass that to HNSW; or
- Change `hnsw_search_filtered` to accept a predicate/closure that can check `mvcc[bfs_pos].global_id` directly.
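A minimal sketch of the first option — translating the global allow-bitmap into a per-segment local bitmap before HNSW traversal. `std::collections::HashSet` stands in for the real bitmap type, and `mvcc_global_ids` is an assumed slice view of the per-position `global_id` values:

```rust
use std::collections::HashSet;

/// For each local position in the segment, check whether its global_id is
/// allowed; the result is indexed in the graph's local ID space.
fn to_local_allow(allow_global: &HashSet<u32>, mvcc_global_ids: &[u32]) -> Vec<bool> {
    mvcc_global_ids
        .iter()
        .map(|g| allow_global.contains(g))
        .collect()
}
```

This keeps the HNSW traversal unaware of global IDs; the alternative (a predicate/closure checking `mvcc[bfs_pos].global_id` directly) avoids the upfront scan at the cost of a pointer chase per visited node.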



5. Global IDs not persisted🐞 Bug ☼ Reliability
Description
Immutable segment persistence writes only internal_id/LSNs; on reload global_id is forced to
internal_id and key_hash to 0, so restarting with multiple segments reintroduces VectorId collisions
and loses key_hash needed for correct delete semantics.
Code

src/vector/persistence/segment_io.rs[1]

  pos += 8;
Evidence
The on-disk mvcc_headers.bin format currently serializes only internal_id/insert_lsn/delete_lsn.
During read, the new MvccHeader fields are reconstructed as global_id=internal_id and key_hash=0.
Recovery loads multiple immutable segments, so this reconstruction guarantees collisions across
segments after restart and loses key_hash data entirely.

src/vector/persistence/segment_io.rs[152-162]
src/vector/persistence/segment_io.rs[320-370]
src/vector/persistence/recovery.rs[270-295]
src/vector/segment/immutable.rs[24-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Immutable segment persistence does not serialize `MvccHeader.global_id` and `MvccHeader.key_hash`. On reload, values are defaulted (global_id=internal_id, key_hash=0), reintroducing multi-segment ID collisions and breaking key-hash based operations.
### Issue Context
- `MvccHeader` now includes `global_id` and `key_hash`.
- `write_immutable_segment()` writes only 20 bytes/header (internal_id + 2 LSNs).
- `read_immutable_segment()` reconstructs `global_id` and `key_hash` with defaults.
- Recovery loads multiple immutable segments.
### Fix Focus Areas
- src/vector/persistence/segment_io.rs[152-162]
- src/vector/persistence/segment_io.rs[320-370]
- src/vector/segment/immutable.rs[24-35]
### What to change
1. Bump the on-disk MVCC header format version (and/or SegmentMeta.version).
2. Serialize and deserialize the additional fields:
- internal_id (u32)
- global_id (u32)
- key_hash (u64)
- insert_lsn (u64)
- delete_lsn (u64)
(32 bytes per header)
3. Add backward compatibility:
- If old format detected, either refuse to load multi-segment indexes (explicit error) or load with a clearly marked degraded mode.
- Note: key_hash cannot be reconstructed from old segments; consider persisting it going forward and documenting that old segments must be rebuilt for correct semantics.
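A sketch of the proposed 32-byte v2 header layout, assuming little-endian fields in the order listed above (field order and endianness here are assumptions for illustration, not the actual on-disk format):

```rust
/// Serialize one v2 MVCC header (32 bytes: u32 + u32 + u64 + u64 + u64).
fn write_header_v2(
    buf: &mut Vec<u8>,
    internal_id: u32,
    global_id: u32,
    key_hash: u64,
    insert_lsn: u64,
    delete_lsn: u64,
) {
    buf.extend_from_slice(&internal_id.to_le_bytes());
    buf.extend_from_slice(&global_id.to_le_bytes());
    buf.extend_from_slice(&key_hash.to_le_bytes());
    buf.extend_from_slice(&insert_lsn.to_le_bytes());
    buf.extend_from_slice(&delete_lsn.to_le_bytes());
}

/// Deserialize one v2 header from a 32-byte slice.
fn read_header_v2(b: &[u8]) -> (u32, u32, u64, u64, u64) {
    (
        u32::from_le_bytes(b[0..4].try_into().unwrap()),
        u32::from_le_bytes(b[4..8].try_into().unwrap()),
        u64::from_le_bytes(b[8..16].try_into().unwrap()),
        u64::from_le_bytes(b[16..24].try_into().unwrap()),
        u64::from_le_bytes(b[24..32].try_into().unwrap()),
    )
}
```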



6. Sub-sign remap uses wrong ID🐞 Bug ≡ Correctness
Description
Compaction remaps insert-time sub-centroid signs to BFS order using graph.to_original(bfs_pos) but
then treats that value as a MutableEntry.internal_id, so it can copy the wrong sign vector and
corrupt sub-centroid refinement accuracy.
Code

src/vector/segment/compaction.rs[R702-717]

+    } else if need_cpu_build && !frozen.sub_centroid_signs.is_empty() {
+        // Light mode with insert-time sub-centroid signs: remap to BFS order.
+        // Signs were computed from actual pre-quantization FWHT values at insert,
+        // so they are accurate (not the useless decoded-centroid fallback).
+        for bfs_pos in 0..n {
+            let orig_id = graph.to_original(bfs_pos as u32) as usize;
+            let live_idx = live_entries
+                .iter()
+                .position(|e| e.internal_id as usize == orig_id)
+                .unwrap_or(orig_id);
+            let src_offset = live_entries[live_idx].internal_id as usize * sub_bpv;
+            let dst_offset = bfs_pos * sub_bpv;
+            if src_offset + sub_bpv <= frozen.sub_centroid_signs.len() {
+                sub_signs_bfs[dst_offset..dst_offset + sub_bpv]
+                    .copy_from_slice(&frozen.sub_centroid_signs[src_offset..src_offset + sub_bpv]);
+            }
Evidence
The HNSW builder inserts nodes with sequential IDs 0..n-1 (matching live_entries index). In
compaction’s insert-time sign remap, orig_id comes from graph.to_original and is therefore an index
into live_entries; searching live_entries by internal_id==orig_id is the wrong mapping and can
select a different entry than live_entries[orig_id], causing signs from an unrelated internal_id to
be copied into the BFS position.

src/vector/segment/compaction.rs[538-577]
src/vector/segment/compaction.rs[702-717]
src/vector/hnsw/build.rs[88-90]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
During compaction, `graph.to_original(bfs_pos)` refers to the builder’s original node ID (0..n-1), i.e., the index into `live_entries`, not the mutable segment’s `internal_id`. The current remap code tries to locate by `internal_id == orig_id`, which can select the wrong entry and copy incorrect sign bits.
### Issue Context
- HnswBuilder assigns sequential IDs starting at 0 for each `insert()`.
- Compaction builds the graph by calling `builder.insert(...)` `n` times, so graph original IDs align with `live_entries` indices.
### Fix Focus Areas
- src/vector/segment/compaction.rs[702-717]
- src/vector/segment/compaction.rs[538-577]
### What to change
Replace the `.position(|e| e.internal_id == orig_id)` lookup with direct indexing:
- `let live_entry = &live_entries[orig_id];`
- `let src_internal = live_entry.internal_id as usize;`
- `let src_offset = src_internal * sub_bpv;`
This ensures BFS positions receive the correct insert-time sign vector.




Remediation recommended

7. HNSW prune buffer too small🐞 Bug ☼ Reliability
Description
HnswBuilder::add_neighbor_with_prune uses a fixed 65-element stack buffer but writes up to max_nb+1
entries; if the builder is ever configured with M such that max_nb exceeds 64, the index will go out
of bounds and panic.
Code

src/vector/hnsw/build.rs[R421-443]

+        if self.use_heuristic {
+            // Heuristic re-prune: collect all neighbors + candidate, re-select with diversity.
+            let mut combined_buf = [(0.0f32, 0u32); 65];
+            let mut combined_len = 0usize;
-        let neighbors = if level == 0 {
-            &self.layer0_flat[start..start + max_nb]
-        } else {
-            let sv = &self.upper_layers[target as usize];
-            let end = (start + max_nb).min(sv.len());
-            &sv[start..end]
-        };
+            let neighbors = if level == 0 {
+                &self.layer0_flat[start..start + max_nb]
+            } else {
+                let sv = &self.upper_layers[target as usize];
+                let end = (start + max_nb).min(sv.len());
+                &sv[start..end]
+            };
-        for (i, &nb) in neighbors.iter().enumerate() {
-            if nb == SENTINEL {
-                break;
-            }
-            let d = dist_fn(target, nb);
-            if d > worst_dist {
-                worst_dist = d;
-                worst_idx = i;
+            for &nb in neighbors {
+                if nb == SENTINEL {
+                    break;
+                }
+                combined_buf[combined_len] = (dist_fn(target, nb), nb);
+                combined_len += 1;
      }
-        }
+            combined_buf[combined_len] = (dist_fn(target, node_id), node_id);
+            combined_len += 1;
+
Evidence
The heuristic re-prune path collects all existing neighbors (up to max_nb) plus the new candidate
into a fixed-size array of length 65. max_nb is derived from m0=2*m, and FT.CREATE accepts an
arbitrary M with no upper bound enforcement shown in parsing. This makes the implementation fragile
if M is increased beyond current assumptions.

src/vector/hnsw/build.rs[421-443]
src/command/vector_search.rs[88-143]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`add_neighbor_with_prune` uses `combined_buf` sized for at most 64 neighbors (+1 candidate). If `max_nb` grows (via larger M), this can overflow and panic.
### Issue Context
- `combined_buf` is fixed-length 65.
- `max_nb` depends on `m` and can be increased if M is configurable.
### Fix Focus Areas
- src/vector/hnsw/build.rs[421-443]
- src/command/vector_search.rs[88-143]
### What to change
Choose one (or both):
1. Enforce an upper bound on `M` in FT.CREATE parsing (e.g., 2..=32) and assert in HnswBuilder::new.
2. Replace the fixed array with a dynamically sized buffer:
- `SmallVec<[(f32,u32); 65]>` and `reserve(max_nb+1)`; or
- `Vec<(f32,u32)>` with `with_capacity(max_nb+1)`.
This removes the latent panic and keeps the heuristic robust to parameter changes.
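The `Vec` variant of option 2 might look like this sketch (names are illustrative, not the builder's actual code):

```rust
/// Collect existing neighbors plus the new candidate into a buffer sized for
/// the actual neighbor limit, instead of a fixed 65-slot stack array.
fn collect_candidates(
    max_nb: usize,
    neighbors: &[(f32, u32)],
    candidate: (f32, u32),
) -> Vec<(f32, u32)> {
    let mut combined = Vec::with_capacity(max_nb + 1);
    combined.extend_from_slice(neighbors);
    combined.push(candidate);
    combined
}
```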




@pilotspacex-byte
Contributor

@coderabbitai review

Base automatically changed from feat/vector-engine to main April 1, 2026 07:57
TinDang97 added 19 commits April 1, 2026 14:59
Phase 74: Hexagonal HNSW improvements to vector engine
- Plan 01 (W1): Diversity heuristic neighbor selection (Algorithm 4)
- Plan 02 (W1): A2 hexagonal lattice codebook + quantization
- Plan 03 (W2): Cell-parallel compaction with stitching
- Plan 04 (W3): TQ4A2 config wiring + backward compat
…Quant

- 16 density-optimized hex centroids for bivariate N(0, sigma^2)
- quantize_pair/decode_pair with brute-force nearest of 16 (fast at 16 comparisons)
- Lloyd refinement for runtime centroid optimization
- Distortion test proves A2 MSE < scalar 2-bit MSE at same 4-bit budget
- Byte layout: padded_dim/4 bytes per code (one nibble per dimension pair)
- centroids_x/centroids_y accessors for ADC lookup tables
- 9 tests: roundtrip, distortion, scaling, cell reachability, convergence
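The brute-force nearest-of-16 encoder can be sketched like this — a hypothetical stand-in for `quantize_pair`, with the codebook passed as plain centroid arrays:

```rust
/// Return the index (0..16) of the A2 cell centroid nearest to the
/// dimension pair (x, y), by exhaustive squared-distance comparison.
fn quantize_pair(x: f32, y: f32, cx: &[f32; 16], cy: &[f32; 16]) -> u8 {
    let mut best = 0u8;
    let mut best_d = f32::INFINITY;
    for i in 0..16 {
        let (dx, dy) = (x - cx[i], y - cy[i]);
        let d = dx * dx + dy * dy;
        if d < best_d {
            best_d = d;
            best = i as u8;
        }
    }
    best
}
```

At 16 candidates the exhaustive scan is cheap and branch-predictable, which is presumably why the commit calls it "fast at 16 comparisons".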
Replace select_neighbors_simple with select_neighbors_heuristic from
Malkov & Yashunin 2018. Candidates accepted only if closer to query
than to all already-selected neighbors, ensuring angular diversity.

- select_neighbors_heuristic with keepPrunedConnections
- Updated insert() and add_neighbor_with_prune() to use heuristic
- SmallVec for zero-alloc combined buffer in prune path
- brute_force_knn oracle for recall testing
- Tests: collinear rejection, diverse selection, keepPruned fill,
  BFS connectivity (1000 nodes), recall@10 >= 0.85 on 4-cluster data
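The selection logic described above can be sketched as follows — a simplified, allocation-heavy illustration of Algorithm 4 with keepPrunedConnections, not the SmallVec-based implementation in the commit:

```rust
/// Select up to `m` neighbors with the diversity heuristic: a candidate is
/// accepted only if it is closer to the query than to every already-selected
/// neighbor; pruned candidates then fill any remaining slots.
fn select_neighbors_heuristic(
    mut candidates: Vec<(f32, usize)>, // (distance_to_query, id)
    m: usize,
    dist: impl Fn(usize, usize) -> f32,
) -> Vec<usize> {
    candidates.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    let mut selected: Vec<(f32, usize)> = Vec::with_capacity(m);
    let mut pruned: Vec<usize> = Vec::new();
    for &(d_q, id) in &candidates {
        if selected.len() >= m {
            break;
        }
        // Diversity test: closer to the query than to all selected neighbors.
        if selected.iter().all(|&(_, s)| d_q < dist(id, s)) {
            selected.push((d_q, id));
        } else {
            pruned.push(id);
        }
    }
    // keepPrunedConnections: fill remaining slots with the closest pruned.
    let mut result: Vec<usize> = selected.into_iter().map(|(_, id)| id).collect();
    for id in pruned {
        if result.len() >= m {
            break;
        }
        result.push(id);
    }
    result
}
```

On collinear points this rejects every candidate hidden behind a nearer one, which is exactly the collinear-rejection behavior the tests above exercise.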
- test_assign_to_cells_partitions_all_vectors: verifies cell assignment covers all indices
- test_parallel_compact_bfs_reaches_all: BFS connectivity on 500-vector parallel graph
- test_compact_parallel_recall: recall@10 >= 0.90 with brute-force L2 oracle
- test_stitch_cross_cell_edges: stitched graph fully connected across cells
- brute_force_knn helper for recall oracle
- Stub functions: assign_to_cells, compact_parallel, stitch_subgraphs
…ning and stitching

- assign_to_cells: grid-based spatial partitioning on first 2 f32 coordinates
- compact_parallel: std::thread::scope builds HNSW sub-graphs per cell in parallel
- stitch_subgraphs: merges sub-graphs with bidirectional cross-cell boundary edges
  using centroid-based boundary detection and farthest-replacement insertion
- add_neighbor_to_flat: farthest-replacement for full neighbor lists ensures connectivity
- Dispatch in compact(): parallel path when has_raw && n >= PARALLEL_THRESHOLD (10K)
- BFS connectivity verified on 500-vector test, recall@10 >= 0.90 on 1000-vector test
- Zero unsafe blocks, no rayon dependency, zero clippy warnings
- All 364 vector tests pass, all 10 compaction tests pass
- Add QuantizationConfig::TurboQuant4A2 variant (repr=6) for hexagonal A2 lattice
- Implement encode_tq_mse_a2: paired-dimension A2 encoding via A2Codebook
- Implement decode_tq_mse_a2: inverse path for roundtrip verification
- Override code_bytes_per_vector for A2: padded_dim/4 (2x more compressed than TQ4)
- Update persistence serialization (segment_io.rs) for TQ4A2
- Update FT.CREATE parser to accept TQ4A2 quantization option
- Add 12 tests: byte length, roundtrip MSE, backward compat, norm preservation
- Update MutableSegment::append to dispatch A2 encoding for TQ4A2 collections
- Fix bytes_per_code to use collection.code_bytes_per_vector() (dimension-adaptive)
- Add A2 decode path in compaction all_rotated (pair-based nibble decode)
- Add A2 L2 fallback in HNSW build when raw f32 unavailable
- Update sub-centroid sign computation for A2 pair layout
- Guard codebook_16() calls with is_a2 check (placeholder for A2 search path)
- Update QJL and residual norm recomputation for A2 decode
- All 376 vector tests pass, zero clippy warnings
Moon 20.5x faster insert (57K vs 2.8K vec/s), higher recall than Redis
(0.594 vs 0.462). Diversity heuristic adds +3.4% build cost for +15-30%
recall improvement. Zero search regression.
The diversity heuristic's inter-neighbor distance comparisons amplify
TQ-ADC quantization noise, especially at high dimensions (768d+) where
quantization error is proportionally larger. This caused the heuristic
to over-reject good candidates, producing poorly connected graphs.

Fix: add use_heuristic flag to HnswBuilder. Set to false when building
with TQ-ADC fallback (Light mode). Exact mode (f32 L2) still uses the
heuristic since distance comparisons are accurate.

Root cause of low recall at 768d/10K is actually the codebook sigma
mismatch (1/sqrt(padded_dim) vs 1/sqrt(original_dim)) and zero-padding
overhead — a pre-existing issue not introduced by Phase 74.
…ning

Two fixes for recall regression in Light mode:

1. Replace TQ-ADC pairwise distance with decoded centroid symmetric L2
   for HNSW graph construction. TQ-ADC's asymmetric noise caused poor
   graph topology at 384d+ (0.41 recall → 0.71 random, 0.93 MiniLM).

2. Scale ef_search auto formula with dimension: base=k*20, +50% at 384d,
   +100% at 768d. Higher beam width compensates for TQ distance noise.

3. Add use_heuristic flag to HnswBuilder — diversity heuristic only used
   with accurate distances (f32 L2), disabled for noisy distance functions.
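The ef_search scaling in fix 2 amounts to something like this sketch (function and parameter names are assumptions; the clamp range follows the later review discussion of this formula):

```rust
/// Auto ef_search: base = k*20, boosted +50% at >=384d and +100% at >=768d.
fn ef_search_auto(k: usize, dimension: usize) -> usize {
    let base = k * 20;
    let dim_factor = if dimension >= 768 {
        4
    } else if dimension >= 384 {
        3
    } else {
        2
    };
    (base * dim_factor / 2).clamp(200, 1000)
}
```

For k=10 this gives ef=200 below 384d, 300 at 384d (+50%), and 400 at 768d (+100%).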

Benchmark (10K MiniLM 384d, Light mode):
  Recall@10: 0.9265 (was 0.59 with TQ-ADC build + random data)
  Insert: 92K vec/s | Search: 1250 QPS | Compact: 3.8s
Compute sub-centroid sign bits during TQ encoding (at insert time)
from actual pre-quantization FWHT values. Previously, Light mode
could only compute signs from decoded centroids (always 1 = useless).

The fix:
- encode_tq_mse_scaled_with_signs: new encoder that returns TqCodeWithSigns
- MutableSegment stores signs alongside TQ codes (128 bytes/vec at 384d)
- FrozenSegment carries signs to compaction
- Compaction remaps signs to BFS order for ImmutableSegment
- ImmutableSegment::search uses 32-level LUT (was 16-level)

This doubles effective quantization resolution from 16 to 32 levels
in the HNSW beam search, cutting distance error roughly in half.

Benchmark (10K MiniLM 384d, Light mode):
  Recall@10: 0.9690 (was 0.9265, +4.25%)
  Insert: 77K vec/s (was 92K, -16% from sign computation)
  QPS: 1127 (was 1250, -10% from 32-entry LUT)
  Compact: 4.1s (was 3.8s)
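The sign-bit computation can be illustrated with a small sketch — assuming one bit per FWHT coefficient, set when the pre-quantization value is non-negative; the actual granularity and packing order in the encoder are assumptions:

```rust
/// Pack one sign bit per pre-quantization FWHT coefficient, LSB-first.
fn pack_sign_bits(fwht_values: &[f32]) -> Vec<u8> {
    let mut out = vec![0u8; (fwht_values.len() + 7) / 8];
    for (i, &v) in fwht_values.iter().enumerate() {
        if v >= 0.0 {
            out[i / 8] |= 1 << (i % 8);
        }
    }
    out
}
```

The point of the fix is *when* this runs: computed from the true FWHT values at insert time the bits carry real information, whereas signs recovered from decoded centroids are constant and useless.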
5-phase workload simulating real production patterns:
  Phase 1: Bulk load 1K vectors
  Phase 2: Steady-state (insert 10 + search 5) × 900 turns
  Phase 3: Search-heavy (insert 1 + search 20)
  Phase 4: Burst insert + search
  Phase 5: Final recall measurement

Tests recall during active ingestion, compaction interference,
and latency stability under mixed load.
Root cause: each compacted immutable segment assigned VectorIds 0..N-1
independently. When multiple segments exist (compact_threshold=1000 with
10K vectors = 10 segments), search results from different segments had
colliding IDs (vec:42 in segment 0 ≠ vec:42 in segment 5). The merge
picked the closest distance but returned the wrong document name.

Fix:
- MvccHeader gains global_id (globally unique sequential number)
- MutableSegment tracks global_id_base offset, incremented on compaction
- ImmutableSegment::remap_to_global_ids() converts per-segment IDs
- Mutable brute-force search also uses global_id_base + internal_id

Benchmark (10K MiniLM, compact_threshold=1000, 10 compaction cycles):
  Before: steady recall 0.308, final 0.097 (IDs colliding across segments)
  After:  steady recall 0.972, final 0.969 (global IDs, correct merge)

Also adds bench-mixed-1k-compact.py simulation script.
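The fix's ID scheme reduces to a per-segment base offset — a trivial sketch with assumed names:

```rust
/// Each segment remaps its local internal IDs by a globally tracked base,
/// so the same internal_id in two segments yields distinct global IDs.
struct SegmentIds {
    global_id_base: u32,
}

impl SegmentIds {
    fn to_global(&self, internal_id: u32) -> u32 {
        self.global_id_base + internal_id
    }
}
```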
…call

Cell-parallel compaction partitions vectors by their first 2 coordinates, which carry <0.5% of the
variance at 384d+. This produces essentially random partitions, and the 3-edge stitching cannot
recover graph quality.

Result: Exact mode at 10K vectors got 68.7% recall (should be 96.9%).
Disabling compact_parallel restores Exact to 96.85% — matching Light.

The parallel compaction concept is sound but needs PCA-based or random
partitioning instead of coordinate-based. Disabled until fixed.
1. Remove .expect() in compaction.rs — use match/if-let for codebook
   access, filter_map for thread join (no panics in library code)

2. Remove .expect() in mutable.rs — use match on (is_a2, a2_cb, centroids)
   triple for decoded path selection

3. Fix A2 search panic — add decoded-vector symmetric L2 fallback for
   TQ4A2 brute-force search. Never calls query_state.unwrap() for A2.
   Builds A2Codebook once per search, decodes pairs inline.

4. Fix filter bitmap ID mismatch — PayloadIndex now uses global_id
   (global_id_base + internal_id) instead of per-segment internal_id.
   Mutable filtered search checks bitmap against global_id.

5. Persist global_id and key_hash — mvcc_headers.bin v2 format writes
   32 bytes/header (was 20). Backward-compatible reader detects v1
   by checking first byte != 2.

6. Fix sub-sign remap — use direct live_entries[orig_id] indexing
   instead of .position(|e| e.internal_id == orig_id) which can
   select wrong entry when internal_ids don't match builder sequence.

7. Fix prune buffer overflow — increase combined_buf from 65 to 129
   elements (M0 can be up to 128 when M=64).
@TinDang97 TinDang97 force-pushed the feat/hexagon-hnws branch from 10b5cd9 to 3ddb7c5 April 1, 2026 08:07

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/vector/segment/mutable.rs (1)

458-458: ⚠️ Potential issue | 🟡 Minor

unwrap() used in library code violates coding guidelines.

Line 458 calls query_state.unwrap() which can panic. Per coding guidelines, library code should use pattern matching or if let instead of unwrap().

The surrounding logic already handles the None case via use_tq_adc, so this should be unreachable, but the explicit unwrap() is still a violation.

Proposed fix
             let dist = if use_tq_adc {
                 tq_l2_adc_scaled(&q_rotated, tq_code, entry.norm, centroids)
-            } else {
-                let qs = query_state.unwrap();
+            } else if let Some(qs) = query_state {
                 let qjl_bpv = inner.qjl_bytes_per_vec;
                 let qjl_offset = id * qjl_bpv;
                 let qjl_signs = &inner.qjl_signs[qjl_offset..qjl_offset + qjl_bpv];
                 let residual_norm = inner.residual_norms[id];
                 let single_qjl_bpv = (dim + 7) / 8;
                 crate::vector::turbo_quant::inner_product::score_l2_prod(
                     qs,
                     tq_code,
                     entry.norm,
                     qjl_signs,
                     residual_norm,
                     centroids,
                     dim,
                     single_qjl_bpv,
                 )
+            } else {
+                // Unreachable: non-A2 with Light mode always uses TQ-ADC
+                continue;
             };

Based on learnings: "No unwrap() or expect() in library code outside tests; use pattern matching or if let"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/segment/mutable.rs` at line 458, Replace the direct unwrap() call
on query_state with explicit pattern matching: instead of let qs =
query_state.unwrap(); match on query_state (or use if let Some(qs) = query_state
{ ... } else { unreachable!("query_state must be Some when use_tq_adc is true")
}) so the code does not call unwrap() in library code; reference the symbols
query_state and use_tq_adc in your change to keep the existing control flow and
document the unreachable branch with a clear message.
🧹 Nitpick comments (2)
src/vector/segment/compaction.rs (2)

501-502: Redundant variable re-declarations shadow earlier definitions.

has_raw and dim are re-declared on lines 501-502, shadowing the identical declarations on lines 450-451. While Rust allows this, it's confusing and suggests incomplete refactoring.

Suggested cleanup
-    let has_raw = !frozen.raw_f32.is_empty();
-    let dim = frozen.dimension as usize;
-
     let live_f32: Vec<&[f32]> = if has_raw && need_cpu_build {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/segment/compaction.rs` around lines 501 - 502, The two later let
bindings `let has_raw = !frozen.raw_f32.is_empty();` and `let dim =
frozen.dimension as usize;` are redundant and shadow earlier identical bindings;
remove these re-declarations in src/vector/segment/compaction.rs and reuse the
previously declared `has_raw` and `dim` variables (or, if different semantics
were intended, rename the new bindings to a distinct name and adjust their usage
accordingly), ensuring all uses of `has_raw` and `dim` in functions like the
compaction logic reference the single, correctly-typed variables derived from
`frozen.raw_f32` and `frozen.dimension`.

444-445: Unused variables _codebook and _code_len should be removed.

Lines 444-445 define _codebook and _code_len with underscore prefixes but are never used. They were likely left over from refactoring. code_len is re-declared correctly on line 499.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/segment/compaction.rs` around lines 444 - 445, Remove the two
unused temporary variables by deleting the declarations of `_codebook` (from
collection.codebook_16()) and `_code_len` (computed as bytes_per_code - 4);
these are vestigial after refactoring and duplicate the correctly declared
`code_len` later, so simply remove the `_codebook` and `_code_len` lines to
avoid unused-variable warnings and keep `collection.codebook_16()` unused unless
actually needed elsewhere.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/bench-mixed-1k-compact.py`:
- Around line 114-122: The current loop quietly swallows all exceptions when
parsing IDs from the Redis response (variables result, raw, ids), which can hide
format changes; replace the broad except Exception: pass with targeted exception
handling (e.g., except (ValueError, IndexError, UnicodeDecodeError,
AttributeError) as e:) and re-raise a new exception or call sys.exit with a
clear message that includes the offending payload (raw and/or result) and the
original exception to abort the run; apply the same change to the other
identical blocks handling FT.SEARCH/VSIM responses so any unparseable ID causes
the benchmark to fail loudly instead of silently dropping IDs.
- Around line 41-57: The benchmark reuses fixed Redis names ("idx", key prefix
"doc:", set "vecset", and keys like "vec:*") causing FT.CREATE conflicts and
leftover state between runs; update run_moon (and other routines that create the
index/keys) to generate a run-specific namespace (e.g., append a unique
run_id/timestamp to the index name, prefix, and set/key names) or perform an
explicit teardown before creating resources so every execution is isolated;
ensure all usages of "idx", "doc:", "vecset", and "vec:*" in the script are
replaced to consistently use the per-run namespace or are cleaned up prior to
FT.CREATE.

In `@scripts/bench-mixed-workload.py`:
- Around line 305-314: The variable phase2_compact_spikes is being incremented
for any backend when do_search() returns lat > 50ms but the final report labels
it as "Compaction spikes", which is only meaningful for Moon; update the logic
so either (a) only increment phase2_compact_spikes when adapter.name() == "Moon"
(i.e., gate the increment around adapter.name() check where
phase2_compact_spikes is updated) or (b) rename the metric and its report label
to a generic "Latency spikes (>50ms)" everywhere it is used (including the
report generation at the end) so it correctly reflects a backend-agnostic
tail-latency counter. Ensure you update both the increment sites (where
phase2_compact_spikes is modified) and the final reporting (the lines that print
"Compaction spikes") to keep names consistent.
- Around line 108-186: The issue is reuse of fixed namespaces (idx, doc:,
vecset, vec:) causing collisions between runs; fix by adding a per-run unique
namespace/id and using it in all commands or by tearing down existing state
before creation: generate a run-specific identifier in the adapter/connector
(e.g., set self.run_id or self.index_name in the class __init__ or
create_index), replace literal "idx", "doc:", "vecset", "vec:" and f"vec:{id}"
usages in RedisMoonAdapter.create_index / insert / insert_batch / search and
RedisAdapter.create_index / insert / insert_batch / search with names that
include that run-specific id, or call the appropriate teardown commands
(FT.DROPINDEX for the RediSearch adapter and DEL/UNLINK for vecset/vec keys and
VDEL if available) before creating/adding to avoid inheriting state.
- Around line 283-410: The script currently exhausts the cached 10K corpus
before Phase 3 because Phase1 (do_insert_batch(1000)) + Phase2 (900 turns × 10
inserts = 9000) use the full budget; add a guard before Phase 1 that checks n
and next_id against the planned inserts (1000 + 9000) and either (a) reduce
phase2 turns/inserts to fit the configured n (adjust the loop using the symbol
range(900) or compute a dynamic turns variable) or (b) fail fast with a clear
error asking to regenerate the cache with a larger corpus and update the cache
key. Also ensure any cache key/path change is applied when changing n so old 10K
artifacts don’t mask the fix; reference next_id, n, do_insert_batch, and the
Phase 2 loop (range(900)) when making the change.

In `@src/command/vector_search/mod.rs`:
- Around line 474-483: The dimension-boost mapping is inverted:
idx.meta.dimension uses dim_factor to scale beam but the high-d branch currently
gives no boost; update the dim_factor logic so that higher dimensions get larger
multipliers (use dim_factor = 4 for idx.meta.dimension >= 768, dim_factor = 3
for idx.meta.dimension >= 384, else dim_factor = 2) so the expression (base *
dim_factor / 2).clamp(200, 1000) yields +50% at >=384 and +100% at >=768 as the
comment describes; adjust the dim_factor assignment near the variables base and
dim_factor in this module accordingly.

In `@src/vector/persistence/segment_io.rs`:
- Around line 384-386: The current v1 fallback sets key_hash to 0 in
src/vector/persistence/segment_io.rs (the tuple branch returning (internal_id,
0u64)), which collides with mark_deleted_by_key_hash in
src/vector/segment/mutable.rs and can cause mass deletions; change the v1
fallback key_hash from 0u64 to a sentinel value (e.g. u64::MAX) unlikely to
match real hashes, and add a brief code comment documenting this decision and
that legacy v1 segments should be compacted or treated specially; also scan
mark_deleted_by_key_hash to ensure it will never intentionally target the
sentinel value (or explicitly ignore sentinel) so legacy entries are not
matched.
- Around line 349-360: The current version-detection using mvcc_bytes[0] == 2
can misclassify v1 files whose LE count low byte is 0x02; update the detection
in segment_io.rs to only treat it as v2 when mvcc_bytes[0] == 2 AND the parsed
v2 count (from mvcc_bytes[1..5]) is plausible — e.g., ensure mvcc_bytes.len() >=
5 and that pos + count * HEADER_SIZE (or the expected per-entry size) <=
mvcc_bytes.len(); if that validation fails, fall back to parsing as v1 (read
count from mvcc_bytes[0..4] and set pos=4). Use the same symbols mvcc_bytes,
mvcc_version, mvcc_count, and pos so callers remain unchanged.

In `@src/vector/segment/compaction.rs`:
- Line 159: The current join of worker threads uses
handles.into_iter().filter_map(|h| h.join().ok()).collect(), which silently
drops any panicked thread and loses its sub-graph; replace this with explicit
panic propagation or error reporting by iterating handles and calling join()
while handling the Err case: for each thread handle returned from the parallel
graph builder (the variable `handles`), call join() and if it Errs either
propagate the panic (e.g., unwrap/expect) or log the thread panic including
context (cell id or worker index) and return an error/Result upstream so the
caller knows a sub-graph was lost; ensure the calling function/sig for the
compaction/graph-building path (the function that owns `handles`) returns a
Result or panics consistently so incomplete graphs are not silently accepted.

In `@src/vector/segment/mutable.rs`:
- Around line 408-416: brute_force_search_mvcc currently sets use_tq_adc =
!is_a2 && ... and then assumes query_state is Some, causing a panic when is_a2
is true and query_state.is_none(); fix by adding the A2 decoded-L2 fallback path
similar to brute_force_search_filtered: when is_a2 && query_state.is_none()
decode A2 centroids into q_rotated (use the a2_placeholder/centroids logic
already present) and set use_tq_adc false or an equivalent flag so the search
uses decoded L2 distances; additionally add an A2 distance branch in the scoring
loop (the same branch used in brute_force_search_filtered) so the scorer
computes distances for A2-encoded vectors instead of calling
query_state.unwrap(), and ensure all references to query_state are guarded or
use the decoded path to avoid unwrap panics.

In `@src/vector/store.rs`:
- Around line 109-115: When attaching recovered segments in create_index(),
compute the current max global ID by iterating immutable segments' graph node
counts (sum or max of each immutable segment's graph.node_count()) and call
set_global_id_base(...) on the recovered mutable (the Arc<MutableSegment> named
new_mutable or recovered mutable) with that max value before swapping it into
the index; ensure you use MutableSegment::set_global_id_base on the recovered
mutable so new entries start after existing IDs and avoid collisions.

---

Outside diff comments:
In `@src/vector/segment/mutable.rs`:
- Line 458: Replace the direct unwrap() call on query_state with explicit
pattern matching: instead of let qs = query_state.unwrap(); match on query_state
(or use if let Some(qs) = query_state { ... } else { unreachable!("query_state
must be Some when use_tq_adc is true") }) so the code does not call unwrap() in
library code; reference the symbols query_state and use_tq_adc in your change to
keep the existing control flow and document the unreachable branch with a clear
message.
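
A small sketch of the suggested guard (`QueryState` is a stand-in type here; the invariant message mirrors the review's suggested `unreachable!()` documentation):

```rust
struct QueryState {
    ef: usize,
}

// Sketch: replace a bare unwrap() with a documented match on the invariant.
fn adc_ef(query_state: Option<&QueryState>, use_tq_adc: bool) -> Option<usize> {
    if !use_tq_adc {
        return None; // decoded-L2 path does not need the ADC state
    }
    match query_state {
        Some(qs) => Some(qs.ef),
        None => unreachable!("query_state must be Some when use_tq_adc is true"),
    }
}
```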

---

Nitpick comments:
In `@src/vector/segment/compaction.rs`:
- Around line 501-502: The two later let bindings `let has_raw =
!frozen.raw_f32.is_empty();` and `let dim = frozen.dimension as usize;` are
redundant and shadow earlier identical bindings; remove these re-declarations in
src/vector/segment/compaction.rs and reuse the previously declared `has_raw` and
`dim` variables (or, if different semantics were intended, rename the new
bindings to a distinct name and adjust their usage accordingly), ensuring all
uses of `has_raw` and `dim` in functions like the compaction logic reference the
single, correctly-typed variables derived from `frozen.raw_f32` and
`frozen.dimension`.
- Around line 444-445: Remove the two unused temporary variables by deleting the
declarations of `_codebook` (from collection.codebook_16()) and `_code_len`
(computed as bytes_per_code - 4); these are vestigial after refactoring and
duplicate the correctly declared `code_len` later, so simply remove the
`_codebook` and `_code_len` lines to avoid unused-variable warnings and keep
`collection.codebook_16()` unused unless actually needed elsewhere.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 398b5521-b766-4162-ae04-dd2565d94c4e

📥 Commits

Reviewing files that changed from the base of the PR and between be6797d and 5f711ed.

📒 Files selected for processing (15)
  • .planning
  • scripts/bench-mixed-1k-compact.py
  • scripts/bench-mixed-workload.py
  • src/command/vector_search/mod.rs
  • src/shard/spsc_handler.rs
  • src/vector/hnsw/build.rs
  • src/vector/persistence/segment_io.rs
  • src/vector/segment/compaction.rs
  • src/vector/segment/immutable.rs
  • src/vector/segment/mutable.rs
  • src/vector/store.rs
  • src/vector/turbo_quant/a2_lattice.rs
  • src/vector/turbo_quant/collection.rs
  • src/vector/turbo_quant/encoder.rs
  • src/vector/turbo_quant/mod.rs

Comment on lines +41 to +57
def run_moon(port, vectors, queries, gt_final, compact_threshold):
    import redis as redis_lib

    r = redis_lib.Redis(port=port, decode_responses=False, socket_timeout=600)
    r.ping()

    n, dim = vectors.shape

    # Create index with specified compact threshold
    r.execute_command(
        "FT.CREATE", "idx", "ON", "HASH",
        "PREFIX", "1", "doc:",
        "SCHEMA", "vec", "VECTOR", "HNSW", "10",
        "TYPE", "FLOAT32", "DIM", str(dim),
        "DISTANCE_METRIC", "L2", "QUANTIZATION", "TQ4",
        "COMPACT_THRESHOLD", str(compact_threshold),
    )

⚠️ Potential issue | 🟠 Major

This benchmark also needs run-specific namespaces.

Lines 51/53/83/99/165/218/228/256 reuse fixed idx, doc:*, vecset, and vec:* names. On a rerun against the same ports, Moon can fail at FT.CREATE, and the baseline is not guaranteed to start from a clean namespace. Use a per-run prefix/set name or explicit teardown so repeated executions stay isolated.

Also applies to: 78-103, 214-228, 254-256

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/bench-mixed-1k-compact.py` around lines 41 - 57, The benchmark reuses
fixed Redis names ("idx", key prefix "doc:", set "vecset", and keys like
"vec:*") causing FT.CREATE conflicts and leftover state between runs; update
run_moon (and other routines that create the index/keys) to generate a
run-specific namespace (e.g., append a unique run_id/timestamp to the index
name, prefix, and set/key names) or perform an explicit teardown before creating
resources so every execution is isolated; ensure all usages of "idx", "doc:",
"vecset", and "vec:*" in the script are replaced to consistently use the per-run
namespace or are cleaned up prior to FT.CREATE.
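
The per-run namespace idea can be sketched like this (hedged: the helper and key names are illustrative, not from the script):

```python
import time
import uuid


def make_namespace(run_id=None):
    """Return run-scoped replacements for the fixed names used in the script."""
    run_id = run_id or f"{int(time.time())}-{uuid.uuid4().hex[:8]}"
    return {
        "index": f"idx:{run_id}",
        "doc_prefix": f"doc:{run_id}:",
        "vecset": f"vecset:{run_id}",
        "vec_prefix": f"vec:{run_id}:",
    }
```

`FT.CREATE` and the `VADD`/`HSET` loops would then take their names from this dict instead of the literals, so reruns against the same port never collide.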

Comment on lines +114 to +122
if isinstance(result, list) and len(result) > 1:
    for j in range(1, len(result), 2):
        try:
            raw = result[j]
            if isinstance(raw, bytes):
                raw = raw.decode()
            ids.append(int(raw.split(":")[-1]))
        except Exception:
            pass

⚠️ Potential issue | 🟡 Minor

Abort the run if ID parsing stops matching the response format.

These except Exception: pass blocks quietly drop unparseable IDs. If FT.SEARCH or VSIM changes shape, the benchmark will look like a recall regression instead of a broken parser. Narrow the exception and raise with the offending payload so the run fails obviously.

Also applies to: 173-181, 233-240, 259-266

🧰 Tools
🪛 Ruff (0.15.7)

[error] 121-122: try-except-pass detected, consider logging the exception

(S110)


[warning] 121-121: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/bench-mixed-1k-compact.py` around lines 114 - 122, The current loop
quietly swallows all exceptions when parsing IDs from the Redis response
(variables result, raw, ids), which can hide format changes; replace the broad
except Exception: pass with targeted exception handling (e.g., except
(ValueError, IndexError, UnicodeDecodeError, AttributeError) as e:) and re-raise
a new exception or call sys.exit with a clear message that includes the
offending payload (raw and/or result) and the original exception to abort the
run; apply the same change to the other identical blocks handling FT.SEARCH/VSIM
responses so any unparseable ID causes the benchmark to fail loudly instead of
silently dropping IDs.
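
One way to make the parser fail loudly, as a sketch (`result` is the raw FT.SEARCH reply shape used in the script; the function name is illustrative):

```python
def parse_doc_ids(result):
    """Extract trailing integer IDs from FT.SEARCH keys, aborting on any mismatch."""
    if not isinstance(result, list) or not result:
        raise ValueError(f"unexpected FT.SEARCH reply shape: {result!r}")
    ids = []
    for j in range(1, len(result), 2):
        raw = result[j]
        try:
            if isinstance(raw, bytes):
                raw = raw.decode()
            ids.append(int(raw.split(":")[-1]))
        except (ValueError, UnicodeDecodeError, AttributeError) as exc:
            # Include the offending payload so a format change fails obviously
            # instead of masquerading as a recall regression.
            raise RuntimeError(f"unparseable doc id {result[j]!r} in reply") from exc
    return ids
```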

Comment on lines +108 to +186
    def create_index(self, dim):
        self.dim = dim
        if not self.created:
            self.r.execute_command(
                "FT.CREATE", "idx", "ON", "HASH",
                "PREFIX", "1", "doc:",
                "SCHEMA", "vec", "VECTOR", "HNSW", "10",
                "TYPE", "FLOAT32", "DIM", str(dim),
                "DISTANCE_METRIC", "L2", "QUANTIZATION", "TQ4",
                "COMPACT_THRESHOLD", "10000",
            )
            self.created = True

    def insert(self, doc_id, vector):
        self.r.execute_command("HSET", f"doc:{doc_id}", "vec", vector.tobytes())

    def insert_batch(self, start_id, vectors):
        pipe = self.r.pipeline(transaction=False)
        for i, v in enumerate(vectors):
            pipe.execute_command("HSET", f"doc:{start_id + i}", "vec", v.tobytes())
        pipe.execute()

    def search(self, query, k=10):
        t0 = time.perf_counter()
        result = self.r.execute_command(
            "FT.SEARCH", "idx",
            f"*=>[KNN {k} @vec $query]",
            "PARAMS", "2", "query", query.tobytes(),
        )
        latency = (time.perf_counter() - t0) * 1000
        ids = []
        if isinstance(result, list) and len(result) > 1:
            for j in range(1, len(result), 2):
                try:
                    raw = result[j]
                    if isinstance(raw, bytes):
                        raw = raw.decode()
                    ids.append(int(raw.split(":")[-1]))
                except Exception:
                    pass
        return ids, latency


class RedisAdapter:
    def __init__(self, port):
        import redis as redis_lib
        self.r = redis_lib.Redis(port=port, decode_responses=False, socket_timeout=600)
        self.port = port

    def name(self):
        return "Redis"

    def create_index(self, dim):
        pass  # VADD auto-creates

    def insert(self, doc_id, vector):
        self.r.execute_command("VADD", "vecset", "FP32", vector.tobytes(), f"vec:{doc_id}")

    def insert_batch(self, start_id, vectors):
        pipe = self.r.pipeline(transaction=False)
        for i, v in enumerate(vectors):
            pipe.execute_command("VADD", "vecset", "FP32", v.tobytes(), f"vec:{start_id + i}")
        pipe.execute()

    def search(self, query, k=10):
        t0 = time.perf_counter()
        result = self.r.execute_command(
            "VSIM", "vecset", "FP32", query.tobytes(), "COUNT", str(k)
        )
        latency = (time.perf_counter() - t0) * 1000
        ids = []
        if isinstance(result, list):
            for item in result:
                try:
                    raw = item.decode() if isinstance(item, bytes) else str(item)
                    ids.append(int(raw.split(":")[-1]))
                except Exception:
                    pass
        return ids, latency

⚠️ Potential issue | 🟠 Major

Isolate each backend run instead of reusing fixed namespaces.

Line 112 always creates idx, and the Moon/Redis paths reuse the same doc:*, vecset, and vec:* namespaces on every invocation. That makes reruns non-reproducible: Moon will fail at FT.CREATE, and the Redis baseline can inherit old state if the dataset or ID range changes. Give each run a unique index/set/prefix or explicitly tear the old state down before loading data.

🧰 Tools
🪛 Ruff (0.15.7)

[error] 146-147: try-except-pass detected, consider logging the exception

(S110)


[warning] 146-146: Do not catch blind exception: Exception

(BLE001)


[error] 184-185: try-except-pass detected, consider logging the exception

(S110)


[warning] 184-184: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/bench-mixed-workload.py` around lines 108 - 186, The issue is reuse
of fixed namespaces (idx, doc:, vecset, vec:) causing collisions between runs;
fix by adding a per-run unique namespace/id and using it in all commands or by
tearing down existing state before creation: generate a run-specific identifier
in the adapter/connector (e.g., set self.run_id or self.index_name in the class
__init__ or create_index), replace literal "idx", "doc:", "vecset", "vec:" and
f"vec:{id}" usages in RedisMoonAdapter.create_index / insert / insert_batch /
search and RedisAdapter.create_index / insert / insert_batch / search with names
that include that run-specific id, or call the appropriate teardown commands
(FT.DROPINDEX for the RediSearch adapter and DEL/UNLINK for vecset/vec keys and
VDEL if available) before creating/adding to avoid inheriting state.

Comment on lines +283 to +410
    # ── Phase 1: Bulk load 1000 vectors ──────────────────────────────
    print(f"  Phase 1: Bulk load 1000 vectors...")
    t0 = time.time()
    do_insert_batch(1000)
    phase1_time = time.time() - t0

    # One search to trigger compaction (Moon) / indexing
    _, _ = do_search()
    time.sleep(0.5)  # Let indexing settle

    results["phases"].append({
        "name": "Bulk Load",
        "inserts": 1000,
        "time_s": phase1_time,
        "vps": 1000 / phase1_time,
    })

    # ── Phase 2: Steady state (insert 10, search 5) × 900 turns ─────
    print(f"  Phase 2: Steady-state (insert 10 + search 5) × 900 turns...")
    t0 = time.time()
    phase2_lats = []
    phase2_recalls = []
    phase2_compact_spikes = 0

    for turn in range(900):
        do_insert_batch(10)
        for _ in range(5):
            lat, recall = do_search()
            phase2_lats.append(lat)
            phase2_recalls.append(recall)
            if lat > 50:  # >50ms = likely compaction interference
                phase2_compact_spikes += 1

        if (turn + 1) % 300 == 0:
            print(f"    Turn {turn+1}/900: {next_id} vectors, "
                  f"avg recall={np.mean(phase2_recalls[-100:]):.4f}, "
                  f"p50={np.percentile(phase2_lats[-100:], 50):.1f}ms")

    phase2_time = time.time() - t0
    results["phases"].append({
        "name": "Steady State",
        "inserts": 9000,
        "searches": 4500,
        "turns": 900,
        "time_s": phase2_time,
        "avg_recall": float(np.mean(phase2_recalls)),
        "p50_ms": float(np.percentile(phase2_lats, 50)),
        "p99_ms": float(np.percentile(phase2_lats, 99)),
        "compact_spikes": phase2_compact_spikes,
    })

    # ── Phase 3: Search-heavy (insert 1, search 20) × 100 turns ─────
    # Tests search quality after most data is loaded
    # Remaining vectors may not be enough, cap at what we have
    remaining = min(100, n - next_id)
    print(f"  Phase 3: Search-heavy (insert 1 + search 20) × {remaining} turns...")
    t0 = time.time()
    phase3_lats = []
    phase3_recalls = []

    for turn in range(remaining):
        if next_id < n:
            do_insert_batch(1)
        for _ in range(20):
            lat, recall = do_search()
            phase3_lats.append(lat)
            phase3_recalls.append(recall)

    phase3_time = time.time() - t0
    results["phases"].append({
        "name": "Search Heavy",
        "inserts": remaining,
        "searches": remaining * 20,
        "time_s": phase3_time,
        "avg_recall": float(np.mean(phase3_recalls)) if phase3_recalls else 0,
        "p50_ms": float(np.percentile(phase3_lats, 50)) if phase3_lats else 0,
        "p99_ms": float(np.percentile(phase3_lats, 99)) if phase3_lats else 0,
    })

    # ── Phase 4: Burst insert (remaining vectors, then 50 searches) ──
    burst_count = n - next_id
    if burst_count > 0:
        print(f"  Phase 4: Burst insert ({burst_count} vectors, then 50 searches)...")
        t0 = time.time()
        do_insert_batch(burst_count)
        burst_insert_time = time.time() - t0

        # Wait for indexing
        time.sleep(1)

        phase4_lats = []
        phase4_recalls = []
        for _ in range(50):
            lat, recall = do_search()
            phase4_lats.append(lat)
            phase4_recalls.append(recall)

        results["phases"].append({
            "name": "Burst Insert",
            "inserts": burst_count,
            "searches": 50,
            "insert_time_s": burst_insert_time,
            "insert_vps": burst_count / burst_insert_time if burst_insert_time > 0 else 0,
            "avg_recall": float(np.mean(phase4_recalls)),
            "p50_ms": float(np.percentile(phase4_lats, 50)),
            "p99_ms": float(np.percentile(phase4_lats, 99)),
        })

    # ── Phase 5: Final recall (200 queries against full dataset) ─────
    print(f"  Phase 5: Final recall (200 queries against full dataset)...")
    final_lats = []
    final_recalls = []

    for i, q in enumerate(queries):
        ids, lat = adapter.search(q)
        final_lats.append(lat)
        # Use pre-computed ground truth for full dataset
        recall = len(set(ids) & set(gt_final[i])) / 10
        final_recalls.append(recall)

    results["phases"].append({
        "name": "Final Recall",
        "searches": len(queries),
        "recall": float(np.mean(final_recalls)),
        "p50_ms": float(np.percentile(final_lats, 50)),
        "p99_ms": float(np.percentile(final_lats, 99)),
        "qps": 1000 / np.mean(final_lats),
    })

⚠️ Potential issue | 🟠 Major

Configured phases consume the full 10K corpus before Phase 3 starts.

Line 286 inserts 1,000 vectors and the loop at Line 307 inserts 9,000 more, so by Line 337 remaining is always 0 with the cached 10K dataset. The advertised "Search Heavy" and "Burst Insert" phases therefore never execute, which means this script does not actually benchmark the workload described in the docstring/PR summary. Either reserve vector budget before Phase 2 or fail fast unless the cache is regenerated with a larger corpus. If you change the corpus size, the cache path/key should change too, otherwise old 10K artifacts will keep masking the fix.

🧮 Minimal guard to stop silently skipping the later phases
+    required_vectors = 1000 + 900 * 10 + 100 + 500
+    if n < required_vectors:
+        raise ValueError(
+            f"Configured 5-phase workload needs at least {required_vectors} vectors, got {n}"
+        )
+
     # ── Phase 1: Bulk load 1000 vectors ──────────────────────────────
     print(f"  Phase 1: Bulk load 1000 vectors...")
🧰 Tools
🪛 Ruff (0.15.7)

[error] 284-284: f-string without any placeholders

Remove extraneous f prefix

(F541)


[warning] 300-300: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF003)


[error] 301-301: f-string without any placeholders

Remove extraneous f prefix

(F541)


[warning] 301-301: String contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF001)


[warning] 334-334: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF003)


[warning] 338-338: String contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF001)


[warning] 343-343: Loop control variable turn not used within loop body

(B007)


[error] 392-392: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/bench-mixed-workload.py` around lines 283 - 410, The script currently
exhausts the cached 10K corpus before Phase 3 because Phase1
(do_insert_batch(1000)) + Phase2 (900 turns × 10 inserts = 9000) use the full
budget; add a guard before Phase 1 that checks n and next_id against the planned
inserts (1000 + 9000) and either (a) reduce phase2 turns/inserts to fit the
configured n (adjust the loop using the symbol range(900) or compute a dynamic
turns variable) or (b) fail fast with a clear error asking to regenerate the
cache with a larger corpus and update the cache key. Also ensure any cache
key/path change is applied when changing n so old 10K artifacts don’t mask the
fix; reference next_id, n, do_insert_batch, and the Phase 2 loop (range(900))
when making the change.

Comment on lines +305 to +314
    phase2_compact_spikes = 0

    for turn in range(900):
        do_insert_batch(10)
        for _ in range(5):
            lat, recall = do_search()
            phase2_lats.append(lat)
            phase2_recalls.append(recall)
            if lat > 50:  # >50ms = likely compaction interference
                phase2_compact_spikes += 1

⚠️ Potential issue | 🟡 Minor

compact_spikes is only meaningful for Moon in this script.

Lines 313-314 increment this counter for every backend, but the report at Lines 476-478 labels it as "Compaction spikes". For Redis and Qdrant this is just a generic >50ms tail-latency counter, so the current name/reporting is misleading. Either gate it on adapter.name() == "Moon" or rename it to a backend-agnostic latency-spike metric.

Also applies to: 475-478

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/bench-mixed-workload.py` around lines 305 - 314, The variable
phase2_compact_spikes is being incremented for any backend when do_search()
returns lat > 50ms but the final report labels it as "Compaction spikes", which
is only meaningful for Moon; update the logic so either (a) only increment
phase2_compact_spikes when adapter.name() == "Moon" (i.e., gate the increment
around adapter.name() check where phase2_compact_spikes is updated) or (b)
rename the metric and its report label to a generic "Latency spikes (>50ms)"
everywhere it is used (including the report generation at the end) so it
correctly reflects a backend-agnostic tail-latency counter. Ensure you update
both the increment sites (where phase2_compact_spikes is modified) and the final
reporting (the lines that print "Compaction spikes") to keep names consistent.

Comment thread src/vector/persistence/segment_io.rs
Comment on lines +384 to +386
} else {
(internal_id, 0u64) // v1 fallback: global_id = internal_id
};

⚠️ Potential issue | 🟠 Major

v1 fallback key_hash=0 causes mass deletion when mark_deleted_by_key_hash is called.

Per src/vector/segment/mutable.rs:575-586, mark_deleted_by_key_hash(key_hash, lsn) marks ALL entries where entry.key_hash == key_hash. If v1 segments set key_hash=0 for all entries, any DEL/UNLINK on a key that hashes to 0 (or any delete operation on v1 data) could inadvertently mark unrelated entries.

This is a latent correctness issue for v1→v2 upgrades. Consider:

  1. Documenting that v1 segments should be recompacted before enabling delete operations, or
  2. Using a sentinel value like u64::MAX that's unlikely to match real key hashes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/persistence/segment_io.rs` around lines 384 - 386, The current v1
fallback sets key_hash to 0 in src/vector/persistence/segment_io.rs (the tuple
branch returning (internal_id, 0u64)), which collides with
mark_deleted_by_key_hash in src/vector/segment/mutable.rs and can cause mass
deletions; change the v1 fallback key_hash from 0u64 to a sentinel value (e.g.
u64::MAX) unlikely to match real hashes, and add a brief code comment
documenting this decision and that legacy v1 segments should be compacted or
treated specially; also scan mark_deleted_by_key_hash to ensure it will never
intentionally target the sentinel value (or explicitly ignore sentinel) so
legacy entries are not matched.
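
A minimal sketch of the sentinel approach, assuming the constant name is illustrative:

```rust
/// key_hash assigned to entries loaded from legacy v1 segments, which did not
/// persist real key hashes. Chosen so delete-by-hash can never match it by
/// accident; v1 segments should be recompacted before relying on deletes.
const V1_KEY_HASH_SENTINEL: u64 = u64::MAX;

// Sketch of the guard mark_deleted_by_key_hash would apply per entry.
fn matches_for_delete(entry_key_hash: u64, target: u64) -> bool {
    // Explicitly ignore the sentinel so legacy entries are never mass-deleted.
    entry_key_hash != V1_KEY_HASH_SENTINEL && entry_key_hash == target
}
```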

})
.collect();

handles.into_iter().filter_map(|h| h.join().ok()).collect()

⚠️ Potential issue | 🟡 Minor

Thread panics are silently ignored during parallel graph building.

filter_map(|h| h.join().ok()) discards any sub-graph from a thread that panicked. This silently loses vectors assigned to that cell, causing incomplete graphs and potential data loss without any error indication.

Consider propagating thread panics or at least logging when a sub-graph is lost:

Proposed fix
-            handles.into_iter().filter_map(|h| h.join().ok()).collect()
+            handles
+                .into_iter()
+                .filter_map(|h| match h.join() {
+                    Ok(result) => Some(result),
+                    Err(_) => {
+                        // Log or track that a cell's sub-graph was lost due to panic
+                        // For now, skip the cell (data loss risk)
+                        None
+                    }
+                })
+                .collect()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
handles.into_iter().filter_map(|h| h.join().ok()).collect()
handles
    .into_iter()
    .filter_map(|h| match h.join() {
        Ok(result) => Some(result),
        Err(_) => {
            // Log or track that a cell's sub-graph was lost due to panic
            // For now, skip the cell (data loss risk)
            None
        }
    })
    .collect()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/segment/compaction.rs` at line 159, The current join of worker
threads uses handles.into_iter().filter_map(|h| h.join().ok()).collect(), which
silently drops any panicked thread and loses its sub-graph; replace this with
explicit panic propagation or error reporting by iterating handles and calling
join() while handling the Err case: for each thread handle returned from the
parallel graph builder (the variable `handles`), call join() and if it Errs
either propagate the panic (e.g., unwrap/expect) or log the thread panic
including context (cell id or worker index) and return an error/Result upstream
so the caller knows a sub-graph was lost; ensure the calling function/sig for
the compaction/graph-building path (the function that owns `handles`) returns a
Result or panics consistently so incomplete graphs are not silently accepted.

Comment thread src/vector/segment/mutable.rs
Comment thread src/vector/store.rs Outdated
Comment on lines +109 to +115
let new_mutable = Arc::new(
    crate::vector::segment::mutable::MutableSegment::new(
        self.meta.dimension,
        self.collection.clone(),
    ),
);
new_mutable.set_global_id_base(next_global);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check how recovered mutable segment's global_id_base is initialized
rg -nC5 'attach_recovered|global_id_base' src/vector/persistence/ src/vector/store.rs

Repository: pilotspace/moon

Length of output: 2529


🏁 Script executed:

# Find the recovery module and understand RecoveredState structure
fd -t f 'recovery' src/vector/persistence/

Repository: pilotspace/moon

Length of output: 93


🏁 Script executed:

# Check what RecoveredState contains and how it's used
rg -n 'struct RecoveredState|pub recovery' src/vector/persistence/ -A 10

Repository: pilotspace/moon

Length of output: 880


🏁 Script executed:

# Look for where mutable segments are created from recovered state
rg -n 'MutableSegment::new' src/vector/ --context 5

Repository: pilotspace/moon

Length of output: 10728


🏁 Script executed:

# Check if there's any logic that computes max global_id from immutable segments
rg -n 'max.*global|global.*max|next_global' src/vector/store.rs src/vector/segment/ -A 3 -B 3

Repository: pilotspace/moon

Length of output: 2148


🏁 Script executed:

# Look for FrozenSegment and ImmutableSegment structures to see if max_global_id is stored
rg -n 'struct.*Segment|max_global|max_id' src/vector/segment/ --type rs -A 5 | head -100

Repository: pilotspace/moon

Length of output: 85


🏁 Script executed:

# Check if immutable segment metadata contains global_id information
rg -n 'pub struct.*Meta|max_global|entries\.len' src/vector/segment/immutable.rs -A 3 -B 1 | head -80

Repository: pilotspace/moon

Length of output: 41


🏁 Script executed:

# Look at the recover function to see the full flow
sed -n '1,300p' src/vector/persistence/recovery.rs

Repository: pilotspace/moon

Length of output: 11223


🏁 Script executed:

# Continue reading recovery.rs to see the complete flow
sed -n '300,400p' src/vector/persistence/recovery.rs

Repository: pilotspace/moon

Length of output: 3449


🏁 Script executed:

# Look for FrozenSegment or ImmutableSegment structure to see available metadata
rg -n 'pub struct.*Segment|pub max_|pub entries' src/vector/segment/immutable.rs -B 2 -A 8 | head -100

Repository: pilotspace/moon

Length of output: 591


🏁 Script executed:

# Check if there's any post-recovery logic that computes max global_id
rg -n 'RecoveredCollection|attach_recovered' src/vector/store.rs -A 20

Repository: pilotspace/moon

Length of output: 1564


🏁 Script executed:

# Look at what FrozenSegment contains (freeze output)
rg -n 'struct Frozen|pub entries' src/vector/segment/mutable.rs -A 5 | head -60

Repository: pilotspace/moon

Length of output: 379


🏁 Script executed:

# Check if immutable segment entry count is available and how it's used
rg -n 'immutable.*len|entries.*len|num_entries' src/vector/segment/immutable.rs -A 2 -B 2

Repository: pilotspace/moon

Length of output: 41


🏁 Script executed:

# See complete attach_recovered and create_index to understand the flow
sed -n '169,250p' src/vector/store.rs

Repository: pilotspace/moon

Length of output: 2974


🏁 Script executed:

# Check if there's any logic that sets global_id_base on recovered mutable when attaching
rg -n 'set_global_id_base|attach_recovered' src/vector/store.rs src/vector/ -B 5 -A 15 | head -150

Repository: pilotspace/moon

Length of output: 6611


🏁 Script executed:

# Check if immutable segment has a len() method to get entry count
rg -n 'pub fn len|pub fn num_' src/vector/segment/immutable.rs -A 3

Repository: pilotspace/moon

Length of output: 41


🏁 Script executed:

# Check ImmutableSegment for entry count methods
rg -n 'impl.*ImmutableSegment|pub fn len|pub fn num_entries|pub fn graph' src/vector/segment/immutable.rs -A 5 | head -80

Repository: pilotspace/moon

Length of output: 433


🏁 Script executed:

# Look at what happens in create_index when attaching recovered segments - more context
sed -n '200,235p' src/vector/store.rs

Repository: pilotspace/moon

Length of output: 1354


Set global_id_base on recovered mutable segment when attaching to index.

In create_index() (lines 218–235), when recovered segments are attached to an index, the recovered mutable segment has global_id_base = 0. It must be initialized from the maximum global_id across immutable segments before swapping to prevent ID collisions with existing entries. Compute max_global_id from each immutable segment's graph node count and call set_global_id_base() on the recovered mutable to ensure new entries continue from the highest assigned ID.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vector/store.rs` around lines 109 - 115, When attaching recovered
segments in create_index(), compute the current max global ID by iterating
immutable segments' graph node counts (sum or max of each immutable segment's
graph.node_count()) and call set_global_id_base(...) on the recovered mutable
(the Arc<MutableSegment> named new_mutable or recovered mutable) with that max
value before swapping it into the index; ensure you use
MutableSegment::set_global_id_base on the recovered mutable so new entries start
after existing IDs and avoid collisions.
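
A hedged sketch of the base computation (the slice of node counts stands in for the real `graph.node_count()` calls; it assumes global IDs are allocated densely per segment via a base offset):

```rust
// Sketch: the next free global ID is the total number of vectors already
// owned by immutable segments, under dense base-offset allocation.
fn next_global_id_base(immutable_node_counts: &[u64]) -> u64 {
    immutable_node_counts.iter().sum()
}
```

`create_index()` would then call `new_mutable.set_global_id_base(next_global_id_base(&counts))` before swapping the recovered mutable into the index.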

@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Test

Failed stage: Run cargo test --no-default-features --features runtime-tokio,jemalloc [❌]

Failed test name: ""

Failure summary:

The GitHub Actions job failed because the step Run cargo test --no-default-features --features runtime-tokio,jemalloc exceeded the workflow time limit.
- The test suite itself was still running when the timeout occurred (log line 2100: The action ... has timed out after 10 minutes.).
- Unit tests completed successfully (1558 passed; 0 failed) and several integration test binaries also finished, but the run was still executing long-running vector recall benchmark tests (e.g., recall_10k_128d_ef128, recall_10k_768d_ef128, recall_f32_hnsw_10k_128d_ef200), with multiple "has been running for over 60 seconds" messages shortly before the timeout.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

158:  env:
159:  CARGO_TERM_COLOR: always
160:  targets: 
161:  components: 
162:  ##[endgroup]
163:  ##[group]Run : set $CARGO_HOME
164:  : set $CARGO_HOME
165:  echo CARGO_HOME=${CARGO_HOME:-"$HOME/.cargo"} >> $GITHUB_ENV
166:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
167:  env:
168:  CARGO_TERM_COLOR: always
169:  ##[endgroup]
170:  ##[group]Run : install rustup if needed
171:  : install rustup if needed
172:  if ! command -v rustup &>/dev/null; then
173:    curl --proto '=https' --tlsv1.2 --retry 10 --retry-connrefused --location --silent --show-error --fail https://sh.rustup.rs | sh -s -- --default-toolchain none -y
174:    echo "$CARGO_HOME/bin" >> $GITHUB_PATH
...

242:  if [ -z "${CARGO_REGISTRIES_CRATES_IO_PROTOCOL+set}" -o -f "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol ]; then
243:    if rustc +stable --version --verbose | grep -q '^release: 1\.6[89]\.'; then
244:      touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true
245:      echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse >> $GITHUB_ENV
246:    elif rustc +stable --version --verbose | grep -q '^release: 1\.6[67]\.'; then
247:      touch "/home/runner/work/_temp"/.implicit_cargo_registries_crates_io_protocol || true
248:      echo CARGO_REGISTRIES_CRATES_IO_PROTOCOL=git >> $GITHUB_ENV
249:    fi
250:  fi
251:  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
252:  env:
253:  CARGO_TERM_COLOR: always
254:  CARGO_HOME: /home/runner/.cargo
255:  CARGO_INCREMENTAL: 0
256:  ##[endgroup]
257:  ##[group]Run : work around spurious network errors in curl 8.0
258:  : work around spurious network errors in curl 8.0
259:  # https://rust-lang.zulipchat.com/#narrow/stream/246057-t-cargo/topic/timeout.20investigation
...

333:  ... Restoring cache ...
334:  Cache hit for: v0-rust-test-Linux-x64-8b64806b-a58781a7
335:  Received 268435456 of 366758905 (73.2%), 252.7 MBs/sec
336:  Received 366758905 of 366758905 (100.0%), 259.7 MBs/sec
337:  Cache Size: ~350 MB (366758905 B)
338:  [command]/usr/bin/tar -xf /home/runner/work/_temp/08e8c179-8298-49fc-8f79-b271b9479281/cache.tzst -P -C /home/runner/work/moon/moon --use-compress-program unzstd
339:  Cache restored successfully
340:  Restored from cache key "v0-rust-test-Linux-x64-8b64806b-a58781a7" full match: true.
341:  ##[group]Run cargo test --no-default-features --features runtime-tokio,jemalloc
342:  cargo test --no-default-features --features runtime-tokio,jemalloc
343:  shell: /usr/bin/bash -e {0}
344:  env:
345:  CARGO_TERM_COLOR: always
346:  CARGO_HOME: /home/runner/.cargo
347:  CARGO_INCREMENTAL: 0
348:  CACHE_ON_FAILURE: false
349:  MOON_NO_URING: 1
...

401:  test acl::table::tests::test_authenticate_disabled_user ... ok
402:  test acl::table::tests::test_load_or_default_nopass ... ok
403:  test acl::table::tests::test_del_user ... ok
404:  test acl::table::tests::test_list_users_sorted ... ok
405:  test acl::table::tests::test_check_key_permission_read_write_patterns ... ok
406:  test acl::table::tests::test_load_or_default_with_password ... ok
407:  test blocking::tests::test_has_waiters_empty ... ok
408:  test blocking::tests::test_fifo_order ... ok
409:  test blocking::tests::test_register_and_pop_front ... ok
410:  test blocking::tests::test_remove_wait_cross_key ... ok
411:  test cluster::command::tests::test_cluster_info_contains_enabled ... ok
412:  test cluster::command::tests::test_cluster_meet_adds_node ... ok
413:  test cluster::command::tests::test_cluster_myid_length ... ok
414:  test cluster::command::tests::test_addslots_updates_bitmap ... ok
415:  test cluster::command::tests::test_delslots ... ok
416:  test cluster::command::tests::test_failover_normal_sets_waiting_delay ... ok
417:  test cluster::command::tests::test_cluster_nodes_format ... ok
418:  test cluster::command::tests::test_failover_rejects_on_master ... ok
419:  test cluster::command::tests::test_keyslot_foo ... ok
420:  test cluster::command::tests::test_setslot_migrating_importing ... ok
421:  test cluster::command::tests::test_setslot_node_clears_migration ... ok
422:  test cluster::command::tests::test_failover_force_promotes_replica ... ok
423:  test cluster::command::tests::test_failover_invalid_subcommand ... ok
424:  test cluster::command::tests::test_failover_takeover_promotes_replica ... ok
425:  test cluster::failover::tests::test_compute_failover_delay_includes_rank ... ok
426:  test cluster::failover::tests::test_failover_vote_epoch_guard ... ok
427:  test cluster::failover::tests::test_no_failover_when_master_healthy ... ok
428:  test cluster::failover::tests::test_try_mark_fail_needs_majority ... ok
429:  test cluster::gossip::tests::test_gossip_section_roundtrip ... ok
430:  test cluster::gossip::tests::test_bad_magic_returns_err ... ok
431:  test cluster::gossip::tests::test_ping_roundtrip ... ok
432:  test cluster::failover::tests::test_failover_initiates_when_master_fail ... ok
433:  test cluster::gossip::tests::test_pong_with_sections_roundtrip ... ok
434:  test cluster::gossip::tests::test_truncated_returns_err ... ok
435:  test cluster::migration::tests::test_get_keys_in_slot_filters_correctly ... ok
436:  test cluster::slots::tests::test_empty_hash_tag_uses_full_key ... ok
437:  test cluster::migration::tests::test_migrating_slot_returns_ask_route ... ok
438:  test cluster::migration::tests::test_nodes_conf_roundtrip ... ok
439:  test cluster::slots::tests::test_error_format ... ok
440:  test cluster::slots::tests::test_foo_slot ... ok
441:  test cluster::slots::tests::test_hash_tag_co_location ... ok
442:  test cluster::slots::tests::test_local_shard_for_slot ... ok
443:  test cluster::tests::test_asking_flag_with_importing_slot ... ok
444:  test cluster::tests::test_asking_without_importing_still_moved ... ok
445:  test cluster::tests::test_owns_slot_bitmap ... ok
446:  test cluster::tests::test_moved_error_frame_format ... ok
447:  test cluster::tests::test_my_node_id ... ok
448:  test command::acl::tests::test_acl_cat_all_categories ... ok
449:  test cluster::tests::test_route_local_owned_slot ... ok
450:  test cluster::tests::test_route_moved_for_peer_slot ... ok
451:  test command::acl::tests::test_acl_cat_unknown_category ... ok
452:  test command::acl::tests::test_acl_cat_string_category ... ok
453:  test command::acl::tests::test_acl_deluser_default_fails ... ok
454:  test command::acl::tests::test_acl_deluser ... ok
...

459:  test command::acl::tests::test_acl_save_and_load ... ok
460:  test command::acl::tests::test_acl_save_no_aclfile ... ok
461:  test command::acl::tests::test_acl_setuser_and_getuser ... ok
462:  test command::acl::tests::test_acl_unknown_subcommand ... ok
463:  test command::acl::tests::test_acl_whoami ... ok
464:  test command::client::tests::test_parse_tracking_off ... ok
465:  test command::client::tests::test_parse_tracking_on ... ok
466:  test command::client::tests::test_parse_tracking_on_bcast ... ok
467:  test command::client::tests::test_parse_tracking_on_bcast_prefix ... ok
468:  test command::client::tests::test_parse_tracking_on_bcast_noloop_multiple_prefixes ... ok
469:  test command::client::tests::test_parse_tracking_on_noloop ... ok
470:  test command::acl::tests::test_acl_load_no_aclfile ... ok
471:  test command::client::tests::test_parse_tracking_on_redirect ... ok
472:  test command::client::tests::test_parse_tracking_too_few_args ... ok
473:  test command::client::tests::test_parse_tracking_redirect_invalid_int ... ok
474:  test command::client::tests::test_parse_tracking_prefix_without_bcast_fails ... ok
475:  test command::config::tests::test_config_get_all ... ok
...

487:  test command::connection::tests::test_auth_acl_disabled_user ... ok
488:  test command::connection::tests::test_auth_acl_2arg_named_user ... ok
489:  test command::connection::tests::test_auth_acl_wrong_arity ... ok
490:  test command::connection::tests::test_auth_correct_password ... ok
491:  test command::connection::tests::test_auth_no_password_configured ... ok
492:  test command::connection::tests::test_auth_wrong_arity ... ok
493:  test command::connection::tests::test_client_id_returns_integer ... ok
494:  test command::connection::tests::test_auth_wrong_password ... ok
495:  test command::connection::tests::test_command_bare ... ok
496:  test command::connection::tests::test_command_docs ... ok
497:  test command::connection::tests::test_command_docs_lowercase ... ok
498:  test command::connection::tests::test_echo ... ok
499:  test command::connection::tests::test_echo_wrong_arity ... ok
500:  test command::connection::tests::test_hello_acl_no_args ... ok
501:  test command::connection::tests::test_hello_downgrade_to_resp2 ... ok
502:  test command::connection::tests::test_hello_acl_with_auth_failure ... ok
503:  test command::connection::tests::test_hello_no_args_returns_current_proto ... ok
504:  test command::connection::tests::test_hello_acl_with_auth_success ... ok
505:  test command::connection::tests::test_hello_noproto ... ok
506:  test command::connection::tests::test_hello_upgrade_to_resp3 ... ok
507:  test command::connection::tests::test_hello_with_auth_failure ... ok
508:  test command::connection::tests::test_hello_with_auth_success ... ok
...

539:  test command::hash::tests::test_hkeys_hvals_missing ... ok
540:  test command::hash::tests::test_hlen ... ok
541:  test command::hash::tests::test_hmget_missing_key ... ok
542:  test command::hash::tests::test_hscan_missing_key ... ok
543:  test command::hash::tests::test_hscan_basic ... ok
544:  test command::hash::tests::test_hmset_hmget ... ok
545:  test command::hash::tests::test_hset_new_fields ... ok
546:  test command::hash::tests::test_hscan_with_count ... ok
547:  test command::hash::tests::test_hscan_with_match ... ok
548:  test command::hash::tests::test_hset_wrong_args ... ok
549:  test command::hash::tests::test_hset_update_existing ... ok
550:  test command::key::tests::test_del_missing ... ok
551:  test command::key::tests::test_del_multiple ... ok
552:  test command::hash::tests::test_hsetnx ... ok
553:  test command::key::tests::test_del_single ... ok
554:  test command::hash::tests::test_wrongtype_error ... ok
555:  test command::key::tests::test_exists_duplicate_counted ... ok
...

856:  test command::vector_search::tests::test_extract_param_blob ... ok
857:  test command::vector_search::tests::test_end_to_end_create_insert_search ... ok
858:  test command::vector_search::tests::test_extract_param_blob_missing ... ok
859:  test command::vector_search::tests::test_ft_create_missing_dim ... ok
860:  test command::vector_search::tests::test_ft_create_duplicate ... ok
861:  test command::vector_search::tests::test_ft_create_parse_full_syntax ... ok
862:  test command::vector_search::tests::test_ft_info_returns_correct_data ... ok
863:  test command::vector_search::tests::test_ft_info ... ok
864:  test command::vector_search::tests::test_ft_search_dimension_mismatch ... ok
865:  test command::vector_search::tests::test_ft_search_empty_index ... ok
866:  test command::vector_search::tests::test_ft_search_unknown_index ... ok
867:  test command::vector_search::tests::test_ft_search_with_filter_no_regression ... ok
868:  test command::vector_search::tests::test_ft_dropindex ... ok
869:  test command::vector_search::tests::test_merge_search_results_combines_shards ... ok
870:  test command::vector_search::tests::test_merge_search_results_empty ... ok
871:  test command::vector_search::tests::test_merge_search_results_handles_errors ... ok
872:  test command::vector_search::tests::test_parse_filter_clause_compound ... ok
...

883:  test config::tests::test_aclfile_default_none ... ok
884:  test config::tests::test_aclfile_custom ... ok
885:  test config::tests::test_custom_port ... ok
886:  test config::tests::test_default_values ... ok
887:  test config::tests::test_custom_bind_and_databases ... ok
888:  test config::tests::test_maxmemory_defaults ... ok
889:  test config::tests::test_maxmemory_custom ... ok
890:  test config::tests::test_persistence_defaults ... ok
891:  test config::tests::test_persistence_custom_values ... ok
892:  test command::tests::test_object_encoding_hash_upgrade ... ok
893:  test config::tests::test_requirepass_default_none ... ok
894:  test config::tests::test_runtime_config_default ... ok
895:  test config::tests::test_requirepass ... ok
896:  test config::tests::test_shards_custom ... ok
897:  test config::tests::test_shards_default ... ok
898:  test error::tests::moon_error_from_aof_error ... ok
899:  test error::tests::moon_error_from_io_error ... ok
900:  test error::tests::moon_error_from_rdb_error ... ok
901:  test error::tests::moon_error_from_snapshot_error ... ok
902:  test error::tests::moon_error_from_wal_error ... ok
903:  test error::tests::moon_result_alias_works ... ok
904:  test io::buf_ring::tests::test_buf_ring_manager_new ... ok
...

931:  test io::static_responses::tests::test_ok_bytes ... ok
932:  test io::static_responses::tests::test_one_bytes ... ok
933:  test io::static_responses::tests::test_pong_bytes ... ok
934:  test io::static_responses::tests::test_queued_bytes ... ok
935:  test io::static_responses::tests::test_zero_bytes ... ok
936:  test io::tests::test_conn_id_truncated_to_24_bits ... ok
937:  test io::tests::test_encode_decode_max_conn_id ... ok
938:  test io::tests::test_encode_decode_all_event_types ... ok
939:  test io::tests::test_encode_decode_max_event_type ... ok
940:  test io::tests::test_encode_decode_max_aux ... ok
941:  test io::tests::test_encode_decode_roundtrip ... ok
942:  test io::tests::test_encode_decode_zero ... ok
943:  test io::tests::test_event_constants ... ok
944:  test io::tests::test_event_constants_unique ... ok
945:  test io::tokio_driver::tests::test_tokio_driver_type_exists ... ok
946:  test persistence::aof::tests::test_aof_replay_corrupt_truncated_logs_error_loads_what_it_can ... ok
947:  test persistence::aof::tests::test_aof_replay_collection_types ... ok
...

950:  test persistence::aof::tests::test_aof_replay_set_commands_restores_string_keys ... ok
951:  test persistence::aof::tests::test_aof_replay_with_expire_preserves_ttls ... ok
952:  test persistence::aof::tests::test_aof_replay_with_select_switches_databases ... ok
953:  test persistence::aof::tests::test_fsync_policy_from_str ... ok
954:  test persistence::aof::tests::test_generate_aof_command_produces_valid_resp_that_round_trips ... ok
955:  test persistence::aof::tests::test_generate_rewrite_commands_all_5_types ... ok
956:  test persistence::aof::tests::test_serialize_command_round_trip_hset ... ok
957:  test persistence::aof::tests::test_generate_rewrite_round_trip_preserves_state ... ok
958:  test persistence::aof::tests::test_generate_rewrite_commands_with_ttl ... ok
959:  test persistence::auto_save::tests::test_parse_save_rules_empty_string ... ok
960:  test persistence::auto_save::tests::test_parse_save_rules_none ... ok
961:  test persistence::auto_save::tests::test_parse_save_rules_standard ... ok
962:  test persistence::auto_save::tests::test_parse_save_rules_single ... ok
963:  test persistence::auto_save::tests::test_parse_save_rules_three_pairs ... ok
964:  test persistence::rdb::tests::test_empty_database_produces_valid_rdb ... ok
965:  test persistence::rdb::tests::test_missing_file_returns_error ... ok
966:  test persistence::rdb::tests::test_expired_keys_skipped_during_save ... ok
...

1015:  test persistence::wal::tests::test_wal_v2_block_crc_valid ... ok
1016:  test persistence::wal::tests::test_wal_replay_with_collections ... ok
1017:  test persistence::wal::tests::test_wal_replay_round_trip ... ok
1018:  test persistence::wal::tests::test_wal_truncate_after_snapshot ... ok
1019:  test persistence::wal::tests::test_wal_v2_header_format ... ok
1020:  test persistence::wal::tests::test_wal_v2_empty_after_header ... ok
1021:  test protocol::frame::tests::frame_size_measurement ... ok
1022:  test protocol::frame::tests::test_frame_empty_array_is_valid ... ok
1023:  test protocol::frame::tests::test_frame_null_not_equal_to_empty_bulk_string ... ok
1024:  test protocol::frame::tests::test_frame_simple_string_debug_clone_partialeq ... ok
1025:  test protocol::frame::tests::test_parse_config_default_max_array_depth ... ok
1026:  test protocol::frame::tests::test_parse_config_default_max_array_length ... ok
1027:  test protocol::frame::tests::test_parse_config_default_max_bulk_string_size ... ok
1028:  test persistence::wal::tests::test_wal_v2_corruption_stops_replay ... ok
1029:  test persistence::wal::tests::test_wal_v2_truncated_block_stops ... ok
1030:  test protocol::frame::tests::test_parse_error_incomplete_display ... ok
1031:  test protocol::frame::tests::test_parse_error_invalid_display ... ok
1032:  test protocol::inline::tests::test_parse_inline_buffer_consumed ... ok
...

1037:  test protocol::inline::tests::test_parse_inline_leading_whitespace ... ok
1038:  test protocol::inline::tests::test_parse_inline_ping ... ok
1039:  test protocol::inline::tests::test_parse_inline_sequential ... ok
1040:  test protocol::inline::tests::test_parse_inline_set_key_value ... ok
1041:  test protocol::inline::tests::test_parse_inline_tab_separated ... ok
1042:  test protocol::inline::tests::test_parse_inline_whitespace_only ... ok
1043:  test protocol::parse::tests::test_buffer_consumed_after_parse ... ok
1044:  test protocol::parse::tests::test_parse_array_depth_exceeding_max ... ok
1045:  test protocol::parse::tests::test_parse_array_of_bulk_strings ... ok
1046:  test protocol::parse::tests::test_parse_bulk_string ... ok
1047:  test protocol::parse::tests::test_parse_bulk_string_exceeding_max_size ... ok
1048:  test protocol::parse::tests::test_parse_binary_data_in_bulk_string ... ok
1049:  test protocol::parse::tests::test_parse_array_with_null_element ... ok
1050:  test protocol::parse::tests::test_parse_empty_array ... ok
1051:  test protocol::parse::tests::test_parse_empty_buffer ... ok
1052:  test protocol::parse::tests::test_parse_error ... ok
1053:  test protocol::parse::tests::test_parse_incomplete_array ... ok
...

1076:  test protocol::parse::tests::test_parse_resp3_push ... ok
1077:  test protocol::parse::tests::test_parse_resp3_set ... ok
1078:  test protocol::parse::tests::test_parse_resp3_verbatim_string ... ok
1079:  test protocol::parse::tests::test_parse_resp3_map ... ok
1080:  test protocol::parse::tests::test_parse_resp_array_not_inline ... ok
1081:  test protocol::parse::tests::test_parse_resp_simple_string_not_inline ... ok
1082:  test protocol::parse::tests::test_parse_simple_string ... ok
1083:  test protocol::parse::tests::test_parse_simple_string_long ... ok
1084:  test protocol::parse::tests::test_parse_two_frames_sequentially ... ok
1085:  test protocol::resp3::tests::test_array_to_map ... ok
1086:  test protocol::resp3::tests::test_array_to_set ... ok
1087:  test protocol::resp3::tests::test_bulk_to_double ... ok
1088:  test protocol::resp3::tests::test_array_to_map_empty_passthrough ... ok
1089:  test protocol::resp3::tests::test_int_to_bool ... ok
1090:  test protocol::resp3::tests::test_bulk_to_double_null ... ok
1091:  test protocol::resp3::tests::test_maybe_convert_error_passthrough ... ok
1092:  test protocol::resp3::tests::test_maybe_convert_get_unchanged ... ok
1093:  test protocol::resp3::tests::test_maybe_convert_hgetall_resp2_unchanged ... ok
1094:  test protocol::resp3::tests::test_maybe_convert_hgetall_resp3 ... ok
1095:  test protocol::resp3::tests::test_maybe_convert_null_passthrough ... ok
1096:  test protocol::resp3::tests::test_maybe_convert_zscore_resp3 ... ok
1097:  test protocol::serialize::tests::test_resp2_downgrade_boolean_to_integer ... ok
1098:  test protocol::resp3::tests::test_maybe_convert_sismember_resp3 ... ok
1099:  test protocol::resp3::tests::test_maybe_convert_smembers_resp3 ... ok
1100:  test protocol::serialize::tests::test_resp2_downgrade_double_to_bulk_string ... ok
1101:  test protocol::serialize::tests::test_resp2_downgrade_map_to_flat_array ... ok
1102:  test protocol::serialize::tests::test_resp2_downgrade_set_to_array ... ok
1103:  test protocol::serialize::tests::test_resp2_null_still_dollar_minus_one ... ok
1104:  test protocol::serialize::tests::test_round_trip_array ... ok
1105:  test protocol::serialize::tests::test_round_trip_bulk_string ... ok
1106:  test protocol::serialize::tests::test_round_trip_error ... ok
1107:  test protocol::serialize::tests::test_round_trip_null ... ok
...

1111:  test protocol::serialize::tests::test_round_trip_resp3_big_number ... ok
1112:  test protocol::serialize::tests::test_round_trip_resp3_double ... ok
1113:  test protocol::serialize::tests::test_round_trip_integer ... ok
1114:  test protocol::serialize::tests::test_round_trip_resp3_map ... ok
1115:  test protocol::serialize::tests::test_round_trip_resp3_null ... ok
1116:  test protocol::serialize::tests::test_round_trip_resp3_push ... ok
1117:  test protocol::serialize::tests::test_round_trip_resp3_set ... ok
1118:  test protocol::serialize::tests::test_round_trip_resp3_verbatim_string ... ok
1119:  test protocol::serialize::tests::test_serialize_array_of_bulk_strings ... ok
1120:  test protocol::serialize::tests::test_round_trip_simple_string ... ok
1121:  test protocol::serialize::tests::test_serialize_bulk_string ... ok
1122:  test protocol::serialize::tests::test_serialize_empty_bulk_string ... ok
1123:  test protocol::serialize::tests::test_serialize_empty_array ... ok
1124:  test protocol::serialize::tests::test_serialize_integer_positive ... ok
1125:  test protocol::serialize::tests::test_serialize_integer_zero ... ok
1126:  test protocol::serialize::tests::test_serialize_error ... ok
1127:  test protocol::serialize::tests::test_serialize_null ... ok
...

1204:  test scripting::sandbox::tests::test_sandbox_blocks_os_other_fns ... ok
1205:  test scripting::sandbox::tests::test_sandbox_removes_dangerous_globals ... ok
1206:  test scripting::sandbox::tests::test_sandbox_allows_string_math_table ... ok
1207:  test scripting::tests::test_handle_eval_basic ... ok
1208:  test scripting::sandbox::tests::test_timeout_hook ... ok
1209:  test scripting::tests::test_handle_evalsha_after_eval ... ok
1210:  test scripting::tests::test_handle_evalsha_noscript ... ok
1211:  test scripting::tests::test_handle_script_subcommand_exists ... ok
1212:  test scripting::tests::test_handle_script_subcommand_flush ... ok
1213:  test scripting::tests::test_handle_script_subcommand_load ... ok
1214:  test scripting::tests::test_parse_eval_args_basic ... ok
1215:  test scripting::tests::test_parse_eval_args_numkeys_exceeds_args ... ok
1216:  test scripting::tests::test_parse_eval_args_too_few_args ... ok
1217:  test scripting::tests::test_parse_eval_args_with_keys_and_argv ... ok
1218:  test scripting::tests::test_run_script_keys_argv ... ok
1219:  test scripting::tests::test_run_script_redis_pcall_catches_error ... ok
1220:  test scripting::tests::test_run_script_simple ... ok
1221:  test scripting::tests::test_run_script_type_conversions ... ok
1222:  test scripting::tests::test_run_script_with_redis_call ... ok
1223:  test scripting::types::tests::test_frame_array_to_lua ... ok
1224:  test scripting::tests::test_setup_lua_vm ... ok
1225:  test scripting::types::tests::test_frame_boolean_to_lua ... ok
1226:  test scripting::types::tests::test_frame_bulk_string_to_lua ... ok
1227:  test scripting::types::tests::test_frame_double_to_lua ... ok
1228:  test scripting::types::tests::test_frame_error_to_lua ... ok
1229:  test scripting::types::tests::test_frame_integer_to_lua ... ok
...

1704:  test vector::mvcc::visibility::tests::test_non_transactional_read_sees_committed ... ok
1705:  test vector::mvcc::visibility::tests::test_read_your_own_writes_even_after_snapshot ... ok
1706:  test vector::mvcc::visibility::tests::test_read_your_own_writes_visible ... ok
1707:  test vector::persistence::recovery::tests::test_recover_checkpoint_records_lsn ... ok
1708:  test vector::persistence::recovery::tests::test_recover_committed_txn_survives ... ok
1709:  test vector::persistence::recovery::tests::test_recover_corrupt_crc_stops_replay ... ok
1710:  test vector::persistence::recovery::tests::test_recover_empty_wal_and_no_segments ... ok
1711:  test vector::persistence::recovery::tests::test_recover_mutable_delete_marks_entry ... ok
1712:  test vector::persistence::recovery::tests::test_recover_mutable_delete_nonexistent_no_panic ... ok
1713:  test vector::persistence::recovery::tests::test_recover_mutable_upsert_count ... ok
1714:  test vector::persistence::recovery::tests::test_recover_txn_abort_rolls_back ... ok
1715:  test vector::persistence::recovery::tests::test_recover_uncommitted_at_eof_rolled_back ... ok
1716:  test vector::persistence::recovery::tests::test_recover_vector_store_from_wal ... ok
1717:  test vector::persistence::recovery::tests::test_wal_writer_append_vector_record_roundtrip ... ok
1718:  test vector::persistence::segment_io::tests::test_checksum_mismatch_on_read ... ok
1719:  test vector::persistence::segment_io::tests::test_missing_graph_file_returns_error ... ok
1720:  test vector::persistence::segment_io::tests::test_roundtrip_preserves_counts ... ok
1721:  test vector::persistence::segment_io::tests::test_roundtrip_search_works ... ok
1722:  test vector::persistence::segment_io::tests::test_segment_meta_valid_json ... ok
1723:  test vector::persistence::segment_io::tests::test_write_creates_4_files ... ok
1724:  test vector::persistence::wal_record::tests::test_checkpoint_roundtrip ... ok
1725:  test vector::persistence::wal_record::tests::test_crc_mismatch_returns_error ... ok
1726:  test vector::persistence::wal_record::tests::test_delete_roundtrip ... ok
1727:  test vector::persistence::wal_record::tests::test_from_wal_frame_rejects_bad_tag ... ok
1728:  test vector::persistence::wal_record::tests::test_to_wal_frame_has_tag_and_length ... ok
1729:  test vector::persistence::wal_record::tests::test_truncated_frame_returns_error ... ok
1730:  test vector::persistence::wal_record::tests::test_txn_abort_roundtrip ... ok
1731:  test vector::persistence::wal_record::tests::test_txn_commit_roundtrip ... ok
1732:  test vector::persistence::wal_record::tests::test_upsert_roundtrip ... ok
1733:  test vector::segment::compaction::tests::test_assign_to_cells_partitions_all_vectors ... ok
1734:  test vector::segment::compaction::tests::test_compact_100_vectors ... ok
1735:  test vector::segment::compaction::tests::test_compact_empty_returns_error ... ok
1736:  test vector::segment::compaction::tests::test_compact_filters_deleted ... ok
...

1808:  test vector::turbo_quant::codebook::tests::test_centroids_sorted_ascending ... ok
1809:  test vector::turbo_quant::codebook::tests::test_centroids_symmetric ... ok
1810:  test vector::turbo_quant::codebook::tests::test_code_bytes_per_vector ... ok
1811:  test vector::turbo_quant::codebook::tests::test_codebook_version ... ok
1812:  test vector::turbo_quant::codebook::tests::test_quantize_centroids_are_fixed_points ... ok
1813:  test vector::turbo_quant::codebook::tests::test_quantize_extreme_values ... ok
1814:  test vector::turbo_quant::codebook::tests::test_quantize_just_below_boundary ... ok
1815:  test vector::turbo_quant::codebook::tests::test_quantize_with_boundaries_n_1bit ... ok
1816:  test vector::turbo_quant::codebook::tests::test_quantize_with_boundaries_n_2bit ... ok
1817:  test vector::turbo_quant::codebook::tests::test_quantize_with_boundaries_n_3bit ... ok
1818:  test vector::turbo_quant::codebook::tests::test_scaled_centroids_n_sizes ... ok
1819:  test vector::turbo_quant::codebook::tests::test_scaled_centroids_n_values ... ok
1820:  test vector::turbo_quant::collection::tests::test_bits_helper ... ok
1821:  test vector::turbo_quant::collection::tests::test_checksum_changes_when_quantization_changes ... ok
1822:  test vector::turbo_quant::collection::tests::test_checksum_deterministic ... ok
1823:  test vector::turbo_quant::collection::tests::test_checksum_mismatch_error_display ... ok
1824:  test vector::turbo_quant::collection::tests::test_code_bytes_per_vector ... ok
...

1909:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_adc_multibit_self_distance_2bit ... ok
1910:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_adc_multibit_self_distance_3bit ... ok
1911:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_distant_vectors ... ok
1912:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_matches_decoded_l2 ... ok
1913:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_non_negative ... ok
1914:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_norm_scaling ... ok
1915:  test vector::turbo_quant::tq_adc::tests::test_tq_l2_self_distance_small ... ok
1916:  test vector::types::tests::test_distance_metric_repr ... ok
1917:  test vector::types::tests::test_search_result_ordering ... ok
1918:  test vector::types::tests::test_vector_id_newtype ... ok
1919:  test vector::segment::ivf::tests::test_recall_at_10_nprobe_32 ... ok
1920:  test vector::hnsw::search_sq::tests::test_f32_recall_10k_128d has been running for over 60 seconds
1921:  test vector::hnsw::search_sq::tests::test_f32_recall_1k_768d has been running for over 60 seconds
1922:  test vector::hnsw::search_sq::tests::test_f32_recall_1k_768d ... ok
1923:  test vector::hnsw::search_sq::tests::test_f32_recall_10k_128d ... ok
1924:  test result: ok. 1558 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 293.99s
1925:       Running unittests src/main.rs (target/debug/deps/moon-948e28c1944183ef)
1926:  running 0 tests
1927:  test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
1928:       Running tests/integration.rs (target/debug/deps/integration-4146a66fb58437b2)
...

2030:  test test_sharded_pubsub_psubscribe ... ok
2031:  test test_sharded_pubsub_subscribe ... ok
2032:  test test_sharded_pubsub_publish_count ... ok
2033:  test test_sharded_scan_all_shards ... ok
2034:  test test_sharded_set_commands ... ok
2035:  test test_sharded_sorted_set_commands ... ok
2036:  test test_sharded_pubsub_unsubscribe_cleanup ... ok
2037:  test test_sharded_set_get_across_shards ... ok
2038:  test test_sorted_set_commands ... ok
2039:  test test_type_command ... ok
2040:  test test_sharded_transaction_same_shard ... ok
2041:  test test_type_command_all_types ... ok
2042:  test test_unlink ... ok
2043:  test test_watch_abort ... ok
2044:  test test_watch_success ... ok
2045:  test test_wrongtype_error ... ok
2046:  test result: ok. 106 passed; 0 failed; 10 ignored; 0 measured; 0 filtered out; finished in 3.69s
2047:       Running tests/replication_test.rs (target/debug/deps/replication_test-1fcdc07635938e19)
2048:  running 5 tests
2049:  test test_info_replication_master ... ok
2050:  test test_replconf_ok ... ok
2051:  test test_readonly_replica ... ok
2052:  test test_replicaof_no_one ... ok
2053:  test test_wait_no_replicas ... ok
2054:  test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.22s
2055:       Running tests/vector_edge_cases.rs (target/debug/deps/vector_edge_cases-e23b5615a763ee78)
2056:  running 16 tests
2057:  test test_drop_nonexistent_index ... ok
2058:  test test_delete_nonexistent_id ... ok
2059:  test test_duplicate_index_create ... ok
2060:  test test_empty_index_search ... ok
2061:  test test_ft_create_invalid_dim ... ok
2062:  test test_ft_create_missing_schema ... ok
2063:  test test_ft_create_missing_args ... ok
2064:  test test_ft_dropindex_missing_args ... ok
2065:  test test_ft_info_nonexistent_index ... ok
2066:  test test_ft_search_nonexistent_index ... ok
2067:  test test_ft_search_dimension_mismatch_returns_error ... ok
2068:  test test_ft_search_missing_query_vector ... ok
2069:  test test_search_k_zero ... ok
2070:  test test_zero_vector_insert_and_search ... ok
2071:  test test_search_k_larger_than_index ... ok
2072:  test test_max_dimension_3072 ... ok
2073:  test result: ok. 16 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
2074:  �[1m�[92m     Running�[0m tests/vector_insert_bench.rs (target/debug/deps/vector_insert_bench-b06e58f0dc73c6ac)
2075:  running 4 tests
2076:  test bench_full_insert_pipeline_128d ... ok
2077:  test bench_raw_append_768d ... ok
2078:  test bench_raw_append_128d ... ok
2079:  test bench_full_insert_pipeline_768d ... ok
2080:  test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 5.50s
2081:  �[1m�[92m     Running�[0m tests/vector_memory_audit.rs (target/debug/deps/vector_memory_audit-428c84ae3d631081)
2082:  running 4 tests
2083:  test test_memory_budget_1m_768d_tq4 ... ok
2084:  test test_struct_sizes ... ok
2085:  test test_aligned_buffer_no_waste ... ok
2086:  test test_per_vector_overhead_breakdown ... ok
2087:  test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.03s
2088:  �[1m�[92m     Running�[0m tests/vector_recall_benchmark.rs (target/debug/deps/vector_recall_benchmark-cc7d8c28982961e2)
2089:  running 8 tests
2090:  test recall_1k_128d_ef128 ... ok
2091:  test recall_1k_128d_ef64 ... ok
2092:  test recall_10k_128d_ef128 has been running for over 60 seconds
2093:  test recall_10k_768d_ef128 has been running for over 60 seconds
2094:  test recall_10k_768d_ef256 has been running for over 60 seconds
2095:  test recall_1k_768d_ef128 has been running for over 60 seconds
2096:  test recall_1k_768d_ef128 ... ok
2097:  test recall_debug_1k_128d ... ok
2098:  test recall_f32_hnsw_10k_128d_ef200 has been running for over 60 seconds
2099:  test recall_10k_128d_ef128 ... ok
2100:  ##[error]The action 'Run cargo test --no-default-features --features runtime-tokio,jemalloc' has timed out after 10 minutes.
2101:  Post job cleanup.

@pilotspacex-byte merged commit 6a5de2f into main on Apr 1, 2026.
5 of 7 checks passed.
@pilotspacex-byte deleted the feat/hexagon-hnws branch on April 1, 2026 at 08:42.