README.md: 70 additions & 38 deletions
@@ -21,12 +21,12 @@
 ## 🎯 **Latest Features** ✅
 
 **Production-ready distributed cache with full observability stack:**
-- ✅ Multi-node cluster deployment with full replication
+- ✅ Multi-node cluster deployment with hash-ring partitioned routing
 - ✅ Full Redis client compatibility (RESP protocol)
 - ✅ Lamport timestamps for causal ordering of distributed writes
-- ✅ Read-repair for gossip propagation window
-- ✅ Early Cuckoo filter sync across nodes
-- ✅ Enterprise persistence (AOF + Snapshots)
+- ✅ Read-repair for replication propagation window
+- ✅ Sharded locks (32 independent shards) for high-concurrency writes
+- ✅ Enterprise persistence (AOF + Snapshots) with background writes
 - ✅ Structured JSON logging with correlation ID tracing
 - ✅ Real-time monitoring with Grafana + Elasticsearch
 - ✅ HTTP API + RESP protocol support
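The hash-ring partitioned routing named in the feature list can be illustrated with a minimal consistent-hash ring. This is a sketch under assumptions: `Ring`, `NewRing`, and `Owner` are hypothetical names, and FNV-1a stands in for whatever hash function the project actually uses.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring. Each physical node is
// projected onto the ring as several virtual nodes so that keys
// spread evenly and node churn moves only a small slice of keys.
type Ring struct {
	hashes []uint32          // sorted virtual-node positions
	owners map[uint32]string // position -> physical node
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing places each node at `virtual` positions on the ring
// (the README mentions 256 virtual nodes per physical node).
func NewRing(nodes []string, virtual int) *Ring {
	r := &Ring{owners: make(map[uint32]string)}
	for _, n := range nodes {
		for i := 0; i < virtual; i++ {
			pos := hash32(fmt.Sprintf("%s#%d", n, i))
			r.hashes = append(r.hashes, pos)
			r.owners[pos] = n
		}
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Owner returns the primary node for key: the first virtual node
// clockwise from the key's hash, wrapping around the ring.
func (r *Ring) Owner(key string) string {
	if len(r.hashes) == 0 {
		return ""
	}
	pos := hash32(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= pos })
	if i == len(r.hashes) {
		i = 0 // wrap past the highest position back to the start
	}
	return r.owners[r.hashes[i]]
}

func main() {
	ring := NewRing([]string{"node-a", "node-b", "node-c"}, 256)
	fmt.Println(ring.Owner("user:42"))
}
```

A node that receives a request for a key it does not own would look up `Owner(key)` and proxy the request there, which is the "non-owner nodes transparently proxy" behavior the diff describes.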
@@ -309,17 +309,18 @@ make deps Download and tidy dependencies
 - Multi-store commands: SELECT, STORES
 
 ### **Distributed Resilience**
-- **Full Replication**: Every node stores every key — maximum availability, any node serves any request
-- **Lamport Timestamps**: Logical clocks for causal ordering of distributed operations. Stale writes from out-of-order gossip are automatically rejected
-- **Read-Repair**: On local cache miss, peer nodes are queried before returning 404. Bridges the gossip propagation window (~50-500ms) so clients never see stale misses
-- **Early Cuckoo Filter Sync**: Filter is updated immediately on gossip receive, before data is written. Eliminates false "definitely not here" rejections during replication lag
-- **Idempotent Replication**: DELETE on a missing key is a no-op, not an error. Designed for eventual consistency
+- **Hash-Ring Routing**: Consistent hashing with 256 virtual nodes routes each key to its primary owner. Non-owner nodes transparently proxy requests to the correct node
+- **Targeted Replication**: Writes replicate to N hash-ring replicas (default 3) via direct HTTP — not gossip broadcast to all nodes
+- **Lamport Timestamps**: Logical clocks for causal ordering of distributed operations. Stale writes from out-of-order replication are automatically rejected
+- **Read-Repair**: On local cache miss, hash-ring replicas are queried before returning 404. Bridges the replication propagation window
+- **Sharded Concurrency**: 32 independent lock shards eliminate the global mutex bottleneck. Each key locks only its shard
+- **Background Eviction**: Memory pressure triggers a background evictor goroutine — eviction never blocks the write path
 - **Correlation ID Tracing**: Every request gets a unique ID that flows across all nodes for end-to-end debugging
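The Lamport-timestamp behavior described above ("stale writes from out-of-order replication are automatically rejected") can be sketched as follows. `store`, `localSet`, and `applyReplicated` are illustrative names, not the project's actual API.

```go
package main

import "fmt"

// entry is a cached value tagged with the Lamport timestamp of the
// write that produced it.
type entry struct {
	value   string
	lamport uint64
}

type store struct {
	clock uint64 // local Lamport clock
	data  map[string]entry
}

func newStore() *store { return &store{data: make(map[string]entry)} }

// localSet stamps a local write with a fresh clock tick.
func (s *store) localSet(key, value string) {
	s.clock++
	s.data[key] = entry{value, s.clock}
}

// applyReplicated merges a write received from a replica. It advances
// the local clock past the incoming timestamp (the Lamport merge rule)
// and rejects the write if a causally newer or equal value is already
// present, returning false for the stale case.
func (s *store) applyReplicated(key, value string, ts uint64) bool {
	if ts > s.clock {
		s.clock = ts
	}
	if cur, ok := s.data[key]; ok && cur.lamport >= ts {
		return false // stale write delivered out of order: drop it
	}
	s.data[key] = entry{value, ts}
	return true
}

func main() {
	s := newStore()
	s.localSet("k", "v1")              // local write, lamport = 1
	s.applyReplicated("k", "v2", 5)    // newer replicated write: accepted
	s.applyReplicated("k", "old", 3)   // older write arrives late: rejected
	fmt.Println(s.data["k"].value)
}
```

Because each node applies the same "higher timestamp wins" rule, replicas converge on the same value regardless of the order in which replicated writes arrive.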
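The sharded-concurrency bullet (32 independent lock shards, each key locking only its shard) can be sketched with a minimal sharded map. `shardedMap` and its methods are hypothetical names, and FNV-1a is an assumed shard-selection hash.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numShards = 32

// shardedMap splits the keyspace across 32 independently locked
// shards, so writes to keys in different shards never contend on
// a single global mutex.
type shardedMap struct {
	shards [numShards]struct {
		mu sync.RWMutex
		m  map[string]string
	}
}

func newShardedMap() *shardedMap {
	s := &shardedMap{}
	for i := 0; i < numShards; i++ {
		s.shards[i].m = make(map[string]string)
	}
	return s
}

// shardFor hashes the key to pick its shard; a key always maps to
// the same shard, so per-key operations stay consistent.
func (s *shardedMap) shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % numShards)
}

func (s *shardedMap) Set(key, value string) {
	sh := &s.shards[s.shardFor(key)]
	sh.mu.Lock() // locks only this key's shard, not the whole map
	sh.m[key] = value
	sh.mu.Unlock()
}

func (s *shardedMap) Get(key string) (string, bool) {
	sh := &s.shards[s.shardFor(key)]
	sh.mu.RLock()
	v, ok := sh.m[key]
	sh.mu.RUnlock()
	return v, ok
}

func main() {
	m := newShardedMap()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) { // concurrent writers spread across shards
			defer wg.Done()
			m.Set(fmt.Sprintf("key-%d", i), "v")
		}(i)
	}
	wg.Wait()
	v, _ := m.Get("key-7")
	fmt.Println(v)
}
```

With a single mutex, the 100 writers above would serialize on one lock; with 32 shards, writers contend only when their keys happen to hash to the same shard.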