
Commit 9afdcb6

Add documentation for new store implementations and update index

- Introduced detailed documentation for the Handle, HashMap, Slab, and Weight store modules, outlining architecture, key components, core operations, performance trade-offs, and usage examples.
- Updated the main documentation index to include links to the new store documentation, improving navigability for users.

1 parent 9088d92 · commit 9afdcb6

6 files changed: 264 additions & 0 deletions

docs/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -5,6 +5,7 @@ Welcome to the CacheKit documentation site.
 ## Getting started
 
 - [Design overview](design.md)
+- [Stores](stores/README.md)
 - [Policy overview](policies/README.md)
 - [Policy roadmap](policies/roadmap/README.md)
 - [Policy data structures](policy-ds/README.md)
```

docs/stores/README.md

Lines changed: 28 additions & 0 deletions
# Stores

CacheKit “stores” are the underlying key/value containers used by policies. They provide:

- A capacity limit (by entries, weight, or other accounting)
- Basic operations (get/insert/remove/clear)
- Optional concurrency wrappers
- Metrics counters (hits/misses/inserts/updates/removes/evictions)

Most policies are generic over store traits defined in `cachekit::store::traits`:

- `StoreCore<K, V>`: read-only operations + metrics
- `StoreMut<K, V>`: mutation operations
- `ConcurrentStore<K, V>`: `Send + Sync` stores (typically via `RwLock`)

## Store implementations

- [HashMap store](hashmap.md): simplest entry-count store; has concurrent and sharded variants.
- [Slab store](slab.md): stable `EntryId` handles via indirection; good for policy metadata.
- [Weight store](weight.md): enforces both entry count and total “weight” (e.g. bytes).
- [Handle store](handle.md): keyed by compact handles (IDs) instead of full keys.

## Choosing a store (quick guide)

- Default: use the HashMap store.
- If the policy needs stable entry IDs: use the Slab store.
- If you want size-based capacity: use the Weight store.
- If you already have an interner / stable IDs for keys: use the Handle store.
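The read/mutate trait split described above can be sketched as follows. This is an illustrative mirror only — these are not CacheKit's actual trait definitions, and `ToyStore` is a hypothetical stand-in — but it shows the shape: reads plus metrics in one trait, mutation layered on top, with atomic counters so `get` can count through `&self`.

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// NOT CacheKit's real traits; a minimal mirror of the split described above.
trait StoreCore<K, V> {
    fn get(&self, key: &K) -> Option<Arc<V>>; // read + hit/miss accounting
    fn len(&self) -> usize;
    fn hits(&self) -> u64;
}

trait StoreMut<K, V>: StoreCore<K, V> {
    fn try_insert(&mut self, key: K, value: Arc<V>) -> Result<(), &'static str>;
    fn remove(&mut self, key: &K) -> Option<Arc<V>>;
}

struct ToyStore<K, V> {
    map: HashMap<K, Arc<V>>,
    capacity: usize,
    hits: AtomicU64, // atomics let `get` count through a shared reference
    misses: AtomicU64,
}

impl<K: Eq + Hash, V> StoreCore<K, V> for ToyStore<K, V> {
    fn get(&self, key: &K) -> Option<Arc<V>> {
        match self.map.get(key) {
            Some(v) => {
                self.hits.fetch_add(1, Ordering::Relaxed);
                Some(Arc::clone(v))
            }
            None => {
                self.misses.fetch_add(1, Ordering::Relaxed);
                None
            }
        }
    }
    fn len(&self) -> usize { self.map.len() }
    fn hits(&self) -> u64 { self.hits.load(Ordering::Relaxed) }
}

impl<K: Eq + Hash, V> StoreMut<K, V> for ToyStore<K, V> {
    fn try_insert(&mut self, key: K, value: Arc<V>) -> Result<(), &'static str> {
        // New keys are rejected at capacity; updates of existing keys pass.
        if !self.map.contains_key(&key) && self.map.len() >= self.capacity {
            return Err("StoreFull");
        }
        self.map.insert(key, value);
        Ok(())
    }
    fn remove(&mut self, key: &K) -> Option<Arc<V>> {
        self.map.remove(key)
    }
}

fn main() {
    let mut store = ToyStore {
        map: HashMap::new(),
        capacity: 1,
        hits: AtomicU64::new(0),
        misses: AtomicU64::new(0),
    };
    store.try_insert(1u32, Arc::new("a")).unwrap();
    assert!(store.get(&1).is_some());
    assert_eq!(store.hits(), 1);
    // A second distinct key exceeds the entry-count capacity.
    assert!(store.try_insert(2u32, Arc::new("b")).is_err());
}
```

This split is what lets a policy accept any store: it only needs `StoreMut` bounds, not a concrete container type.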

docs/stores/handle.md

Lines changed: 52 additions & 0 deletions
# Handle store

This store module is implemented in `cachekit::store::handle` and provides a store keyed by compact handles (IDs) instead of full keys. It’s intended to be used alongside a `KeyInterner` (or any other handle allocator) so that policy metadata never has to clone large keys.

## Architecture

- Stores values in a `HashMap<H, Arc<V>>`, where `H` is a compact handle type.
- Policies operate on handles; an interner maps keys ↔ handles outside the store.

## Key Components

- `HandleStore<H, V>`: single-threaded handle-backed store.
- `ConcurrentHandleStore<H, V>`: `RwLock`-protected store for multi-threaded use.
- `KeyInterner`: a common way to obtain stable handles for keys (in `cachekit::ds`).

## Core Operations

- `try_insert`: insert/update by handle.
- `get`: fetch by handle (updates hit/miss counters).
- `remove`, `clear`.

## Performance Trade-offs

- Avoids cloning/storing large keys inside policy data structures.
- Requires an extra mapping layer (key → handle) managed by the caller.
- Stores `Arc<V>` values for cheap cloning on reads.

## When to Use

- You already have stable handles for keys (interning, IDs, indices).
- Keys are large/expensive to clone and you want to keep policies “handle-only”.

## Example Usage

```rust
use std::sync::Arc;

use cachekit::ds::KeyInterner;
use cachekit::store::handle::HandleStore;
use cachekit::store::traits::StoreMut;

let mut interner = KeyInterner::new();
let handle = interner.intern("alpha".to_string());

let mut store: HandleStore<_, String> = HandleStore::new(2);
store.try_insert(handle, Arc::new("value".to_string())).unwrap();
```

## Type Constraints

- `H: Copy + Eq + Hash` for handle lookup.

## Thread Safety

- `HandleStore` is single-threaded.
- `ConcurrentHandleStore` is `Send + Sync` via `RwLock`.

## Implementation Notes

- Handles must remain stable for as long as the entry is expected to be retrievable.
- Key lifecycle (interning and cleanup) is managed by the caller, not the store.
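The interning layer the store relies on can be sketched with a toy stand-in (this `ToyInterner` is hypothetical, not CacheKit's `KeyInterner`): each distinct key maps to a compact, stable integer handle, with reverse lookup for the occasional handle → key resolution.

```rust
use std::collections::HashMap;

/// Hypothetical minimal interner: maps each distinct key to a compact,
/// stable u32 handle and supports reverse lookup.
struct ToyInterner {
    handles: HashMap<String, u32>,
    keys: Vec<String>, // keys[h as usize] recovers the key for handle h
}

impl ToyInterner {
    fn new() -> Self {
        Self { handles: HashMap::new(), keys: Vec::new() }
    }

    /// Returns the existing handle for `key`, or allocates a new one.
    fn intern(&mut self, key: &str) -> u32 {
        if let Some(&h) = self.handles.get(key) {
            return h;
        }
        let h = self.keys.len() as u32;
        self.keys.push(key.to_string());
        self.handles.insert(key.to_string(), h);
        h
    }

    fn resolve(&self, handle: u32) -> Option<&str> {
        self.keys.get(handle as usize).map(String::as_str)
    }
}

fn main() {
    let mut interner = ToyInterner::new();
    let a = interner.intern("alpha");
    let b = interner.intern("beta");
    // Interning the same key again yields the same stable handle,
    // so policies can compare/store handles instead of cloning keys.
    assert_eq!(interner.intern("alpha"), a);
    assert_ne!(a, b);
    assert_eq!(interner.resolve(a), Some("alpha"));
}
```

Because handles stay stable for the life of the entry, policy metadata can hold plain `u32`s while the single owned copy of each key lives in the interner.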

docs/stores/hashmap.md

Lines changed: 50 additions & 0 deletions
# HashMap store

This store module is implemented in `cachekit::store::hashmap` and provides `HashMap`-backed stores with entry-count capacity enforcement.

## Architecture

- Keys are stored directly in a `HashMap<K, Arc<V>>`.
- Capacity is enforced by entry count (`len() <= capacity()`), not by bytes.

## Key Components

- `HashMapStore<K, V>`: single-threaded store.
- `ConcurrentHashMapStore<K, V>`: `RwLock`-protected store for multi-threaded use.
- `ShardedHashMapStore<K, V>`: per-shard locks to reduce contention.

## Core Operations

- `try_insert`: insert/update by key; fails with `StoreFull` when at capacity.
- `get`: fetch by key (updates hit/miss counters).
- `remove`, `clear`, `contains`, `len`.

## Performance Trade-offs

- O(1) average lookup/insert/remove.
- Stores `Arc<V>`; clones are cheap on reads, but inserts still allocate the `Arc`.
- Sharding reduces lock contention but adds an extra hashing step to pick the shard.

## When to Use

- You want the simplest general-purpose store keyed by owned keys.
- Capacity by entry count is sufficient.
- You want a straightforward concurrent store (global lock) or a sharded one.

## Example Usage

```rust
use std::sync::Arc;

use cachekit::store::hashmap::HashMapStore;
use cachekit::store::traits::StoreMut;

let mut store: HashMapStore<u64, String> = HashMapStore::new(2);
store.try_insert(1, Arc::new("a".to_string())).unwrap();
assert!(store.contains(&1));
```

## Type Constraints

- `K: Eq + Hash` for key lookup.

## Thread Safety

- `HashMapStore` is single-threaded.
- `ConcurrentHashMapStore` and `ShardedHashMapStore` are `Send + Sync`.

## Implementation Notes

- Capacity is checked before insertion; eviction is driven by the policy layer.
- Metrics are stored using atomics for compatibility with concurrent variants.

docs/stores/slab.md

Lines changed: 56 additions & 0 deletions
# Slab store

This store module is implemented in `cachekit::store::slab` and provides a slab-backed store with stable `EntryId` indirection (useful for policy metadata structures that want stable handles).

## Architecture

- Values are stored in a slab (`Vec<Option<Entry<...>>>`) with a free-list for slot reuse.
- A `HashMap<K, EntryId>` maps keys to slab slots.
- `EntryId` is a compact handle into the slab.

## Key Components

- `EntryId`: stable handle to a slot (until that slot is freed).
- `SlabStore<K, V, M>`: core store with a configurable `ValueModel`.
- `SharedSlabStore<K, V>`: returns `Arc<V>` values (cheap clone on reads).
- `OwnedSlabStore<K, V>`: stores `V` and returns `&V` on reads.
- `ConcurrentSlabStore<K, V>`: `RwLock`-protected `SharedSlabStore`.

## Core Operations

- `try_insert`: insert/update by key, reusing free slots when possible.
- `entry_id`: get the `EntryId` for an existing key.
- `get_by_id`, `key_by_id`: stable handle lookups.
- `remove`, `clear`.

## Performance Trade-offs

- `EntryId` avoids storing large keys in policy data structures.
- One extra indirection (key → id → entry) vs direct map lookup.
- Slot reuse can reduce allocation churn in eviction-heavy workloads.

## When to Use

- A policy needs stable IDs to maintain O(1) metadata updates (e.g. lists/arenas).
- You want to store a value once and pass around compact handles.
- You expect heavy churn and want reuse-friendly allocation behavior.

## Example Usage

```rust
use std::sync::Arc;

use cachekit::store::slab::{SharedSlabStore, SlabStore};
use cachekit::store::traits::StoreMut;

let mut store: SharedSlabStore<u64, String> = SlabStore::new(4);
store.try_insert(1, Arc::new("a".to_string())).unwrap();
let id = store.entry_id(&1).unwrap();
assert_eq!(store.get_by_id(id).as_deref().map(String::as_str), Some("a"));
```

## Type Constraints

- `K: Eq + Hash` for key lookup.
- `M: ValueModel<V>` controls how values are stored and returned.

## Thread Safety

- `SlabStore` is single-threaded.
- `ConcurrentSlabStore` is `Send + Sync` via `RwLock`.

## Implementation Notes

- `EntryId` values become invalid after removal of that entry (IDs are reused).
- `OwnedSlabStore` is useful when you want zero `Arc` overhead on reads (borrowed output).

docs/stores/weight.md

Lines changed: 77 additions & 0 deletions
# Weight store

This store module is implemented in `cachekit::store::weight` and provides a weight-aware store that enforces both an entry-count limit and a total weight limit (typically “bytes”). For an overview of all store types, see `docs/stores/README.md`.

## Architecture

- Stores `Arc<V>` values in a `HashMap<K, WeightEntry<V>>`.
- Tracks a running `total_weight` to enforce a weight capacity.
- Uses a caller-provided weight function `F: Fn(&V) -> usize`.

## Capacity Semantics

- Two limits are enforced: entry count (`capacity()`) and total weight (`capacity_weight()`).
- `try_insert` returns `Err(StoreFull)` when inserting a new key would exceed the entry limit, or when inserting/updating would exceed the weight limit.
- Updates recompute weight and may fail if the updated value is too large for the configured weight capacity.
- The store does not evict on its own; eviction is driven by the policy layer (which should call `remove` and then `record_eviction` to keep eviction metrics accurate).
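The dual-limit accounting described above can be sketched with a toy stand-in (this `ToyWeightStore` and its `StoreFull` error are hypothetical, not the real `WeightStore`): on insert/update, the old entry's weight is credited back before the new weight is charged, and either limit can reject the operation.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
struct StoreFull; // stand-in for the store's "full" error

/// Hypothetical sketch of dual-limit accounting: entry count AND weight.
struct ToyWeightStore<F: Fn(&String) -> usize> {
    map: HashMap<String, (String, usize)>, // value + its cached weight
    max_entries: usize,
    max_weight: usize,
    total_weight: usize,
    weigh: F,
}

impl<F: Fn(&String) -> usize> ToyWeightStore<F> {
    fn with_capacity(max_entries: usize, max_weight: usize, weigh: F) -> Self {
        Self { map: HashMap::new(), max_entries, max_weight, total_weight: 0, weigh }
    }

    fn try_insert(&mut self, key: &str, value: String) -> Result<(), StoreFull> {
        let new_w = (self.weigh)(&value);
        let old_w = self.map.get(key).map(|(_, w)| *w).unwrap_or(0);
        let is_new = !self.map.contains_key(key);

        // A brand-new key can hit the entry limit; any insert or update
        // can hit the weight limit once the old weight is credited back.
        if is_new && self.map.len() >= self.max_entries {
            return Err(StoreFull);
        }
        if self.total_weight - old_w + new_w > self.max_weight {
            return Err(StoreFull);
        }

        self.total_weight = self.total_weight - old_w + new_w;
        self.map.insert(key.to_string(), (value, new_w));
        Ok(())
    }
}

fn main() {
    let mut store = ToyWeightStore::with_capacity(10, 8, |v: &String| v.len());
    assert_eq!(store.try_insert("k1", "1234".to_string()), Ok(())); // weight 4
    assert_eq!(store.try_insert("k2", "1234".to_string()), Ok(())); // weight 8
    // Weight capacity (8) is exhausted: a new entry is rejected...
    assert_eq!(store.try_insert("k3", "x".to_string()), Err(StoreFull));
    // ...but shrinking k1 via an update frees weight.
    assert_eq!(store.try_insert("k1", "12".to_string()), Ok(()));
    assert_eq!(store.total_weight, 6);
}
```

Caching each entry's weight at insert time is what keeps `get` cheap: the weight function never runs on the read path.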
## Key Components

- `WeightStore<K, V, F>`: single-threaded weight-aware store.
- `ConcurrentWeightStore<K, V, F>`: `RwLock`-protected store for multi-threaded use.

## Core Operations

- `try_insert`: insert/update by key while enforcing entry and weight limits.
- `get`: fetch by key (updates hit/miss counters).
- `remove`: delete by key and adjust `total_weight`.
- `total_weight`, `capacity_weight`, `clear`.

## Performance Trade-offs

- Inserts/updates compute weight; the weight function is on the hot path.
- Reads are cheap (weight is stored per entry, not recomputed on get).
- Keeps “size accounting” separate from the eviction policy (the policy decides what to evict).

## When to Use

- Values vary widely in size and you want size-based capacity.
- You want observability into how “full” the store is in bytes/weight.

## Example Usage

```rust
use std::sync::Arc;

use cachekit::store::traits::StoreMut;
use cachekit::store::weight::WeightStore;

let mut store = WeightStore::with_capacity(10, 64, |v: &String| v.len());
store.try_insert("k1", Arc::new("value".to_string())).unwrap();
assert!(store.total_weight() <= store.capacity_weight());
```

## Example: Concurrent usage

```rust
use std::sync::Arc;

use cachekit::store::traits::ConcurrentStore;
use cachekit::store::weight::ConcurrentWeightStore;

let store = ConcurrentWeightStore::with_capacity(10, 64, |v: &String| v.len());
store.try_insert("k1", Arc::new("value".to_string())).unwrap();
assert!(store.total_weight() <= store.capacity_weight());
```

## Type Constraints

- `K: Eq + Hash` for key lookup.
- `F: Fn(&V) -> usize` to compute weight.

## Thread Safety

- `WeightStore` is single-threaded.
- `ConcurrentWeightStore` is `Send + Sync` via `RwLock`.

## Implementation Notes

- Updates recompute weight and adjust `total_weight` (and can fail with `StoreFull`).
- Entry capacity and weight capacity are both enforced.
- In `ConcurrentWeightStore`, `get` takes a write lock because it updates metrics; this can increase contention in read-heavy workloads.
- Weight is an accounting mechanism, not a guarantee of actual memory usage; accuracy depends on your chosen weight function.

## Weight Function Guidelines

- Keep it fast (it runs on every insert/update).
- Keep it deterministic and stable for a given value (avoid time/randomness/global state).
- Prefer “monotonic with size” (larger values should not report smaller weights).
- For `ConcurrentWeightStore`, the weight function must be `Send + Sync` (capture only thread-safe state).
