Commit e1aab07

Merge pull request #4 from agkloop/asyncio: asyncio support

2 parents: 81411e3 + 1c3cfe4

16 files changed: 1,039 additions and 1,560 deletions

CHANGELOG.md (14 additions, 0 deletions)

```diff
@@ -5,6 +5,20 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.2.0] - 2025-12-23
+
+### Changed
+- **Major Architecture Overhaul**: The library is now fully async-native.
+  - `TTLCache`, `SWRCache`, and `BGCache` now support `async def` functions natively using `await`.
+  - Synchronous functions are still supported via intelligent inspection, maintaining backward compatibility.
+- **Unified Scheduling**: `SWRCache` (in sync mode) and `BGCache` now use `APScheduler` (`SharedScheduler` and `SharedAsyncScheduler`) for all background tasks, replacing ad-hoc threading.
+- **Testing**: Integration tests rewritten to use `pytest-asyncio` with `mode="auto"`.
+
+### Added
+- `AsyncTTLCache`, `AsyncStaleWhileRevalidateCache`, `AsyncBackgroundCache` classes (aliased to `TTLCache`, `SWRCache`, `BGCache`).
+- `SharedAsyncScheduler` for managing async background jobs.
+- `pytest-asyncio` configuration in `pyproject.toml`.
+
 ## [0.1.6] - 2025-12-15
 
 ### Changed
```
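The 0.2.0 entry says synchronous functions remain supported "via intelligent inspection" alongside native `async def` support. As a minimal sketch of how such dual dispatch typically works — not the library's actual implementation — a decorator can branch on `inspect.iscoroutinefunction` and return either an async or a sync wrapper over a shared TTL store (the `ttl_cached` name and its details are assumptions for illustration):

```python
import asyncio
import inspect
import time
from functools import wraps

def ttl_cached(ttl: float):
    """Illustrative TTL-cache decorator handling both sync and async functions."""
    cache: dict = {}  # args -> (expiry timestamp, value)

    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @wraps(fn)
            async def async_wrapper(*args):
                hit = cache.get(args)
                if hit and hit[0] > time.monotonic():
                    return hit[1]          # fresh hit: skip the coroutine entirely
                value = await fn(*args)    # miss: await the wrapped coroutine
                cache[args] = (time.monotonic() + ttl, value)
                return value
            return async_wrapper

        @wraps(fn)
        def sync_wrapper(*args):
            hit = cache.get(args)
            if hit and hit[0] > time.monotonic():
                return hit[1]
            value = fn(*args)
            cache[args] = (time.monotonic() + ttl, value)
            return value
        return sync_wrapper

    return decorator

calls = []

@ttl_cached(ttl=60)
async def fetch(x: int) -> int:
    calls.append(x)
    return x * 2

print(asyncio.run(fetch(3)))  # 6 (miss: coroutine runs)
print(asyncio.run(fetch(3)))  # 6 (hit: served from cache)
print(len(calls))             # 1
```

The key point is that the decorated async function stays awaitable, so existing `await` call sites work unchanged while sync callers keep their plain call syntax.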

README.md (17 additions, 5 deletions)

````diff
@@ -44,23 +44,35 @@ uv pip install "advanced-caching[redis]" # Redis support
 ```python
 from advanced_caching import TTLCache, SWRCache, BGCache
 
+# Sync function
 @TTLCache.cached("user:{}", ttl=300)
 def get_user(user_id: int) -> dict:
     return db.fetch(user_id)
 
+# Async function (works natively)
+@TTLCache.cached("user:{}", ttl=300)
+async def get_user_async(user_id: int) -> dict:
+    return await db.fetch(user_id)
+
+# Stale-While-Revalidate (Sync)
 @SWRCache.cached("product:{}", ttl=60, stale_ttl=30)
 def get_product(product_id: int) -> dict:
     return api.fetch_product(product_id)
 
-# Background refresh
+# Stale-While-Revalidate (Async)
+@SWRCache.cached("async:product:{}", ttl=60, stale_ttl=30)
+async def get_product_async(product_id: int) -> dict:
+    return await api.fetch_product(product_id)
+
+# Background refresh (Sync)
 @BGCache.register_loader("inventory", interval_seconds=300)
 def load_inventory() -> list[dict]:
     return warehouse_api.get_all_items()
 
-# Async works too
-@TTLCache.cached("user:{}", ttl=300)
-async def get_user_async(user_id: int) -> dict:
-    return await db.fetch(user_id)
+# Background refresh (Async)
+@BGCache.register_loader("inventory_async", interval_seconds=300)
+async def load_inventory_async() -> list[dict]:
+    return await warehouse_api.get_all_items()
 ```
 
 ---
````
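The README examples pair `ttl` with `stale_ttl` for the SWR decorators. As a hedged illustration of the semantics behind that pairing — not `advanced_caching` internals, all names here are invented — the fresh/stale/expired decision of stale-while-revalidate can be sketched as:

```python
import time

class SWREntry:
    """Illustrative stale-while-revalidate entry.

    fresh   -> serve the cached value
    stale   -> serve the cached value, but trigger a background refresh
    expired -> the caller must recompute synchronously
    """

    def __init__(self, value, ttl: float, stale_ttl: float, now: float):
        self.value = value
        self.fresh_until = now + ttl              # hard-fresh window
        self.stale_until = now + ttl + stale_ttl  # serve-stale window

    def state(self, now: float) -> str:
        if now < self.fresh_until:
            return "fresh"
        if now < self.stale_until:
            return "stale"
        return "expired"

# With ttl=60 and stale_ttl=30 (as in the README), the timeline looks like:
entry = SWREntry(value={"id": 1}, ttl=60, stale_ttl=30, now=0.0)
print(entry.state(now=10))   # fresh
print(entry.state(now=75))   # stale
print(entry.state(now=120))  # expired
```

The design point is that during the stale window callers never wait on the refresh: they get the old value immediately while a background task recomputes it.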

docs/benchmarking-and-profiling.md (23 additions, 38 deletions)

````diff
@@ -2,10 +2,9 @@
 
 This repo includes a small, reproducible benchmark harness and a profiler-friendly workload script.
 
-- Benchmark runner: `tests/benchmark.py`
+- Benchmark suite: `tests/benchmark.py`
 - Profiler workload: `tests/profile_decorators.py`
 - Benchmark log (append-only JSON-lines): `benchmarks.log`
-- Run comparison helper: `tests/compare_benchmarks.py`
 
 ## 1) Benchmarking (step-by-step)
 
@@ -17,14 +16,14 @@ This repo uses `uv`. From the repo root:
 uv sync
 ```
 
-### Step 1 — Run the default benchmark
+### Step 1 — Run the benchmark suite
 
 ```bash
 uv run python tests/benchmark.py
 ```
 
 What you get:
-- A printed table for **cold** (always miss), **hot** (always hit), and **mixed** (hits + misses).
+- Printed tables for **hot cache hits** (comparing TTLCache, SWRCache, BGCache).
 - A new JSON entry appended to `benchmarks.log` with the config + median/mean/stdev per strategy.
 
 ### Step 2 — Tune benchmark parameters (optional)
@@ -35,60 +34,42 @@ What you get:
 - `BENCH_WORK_MS` (default `5.0`) — simulated I/O latency (sleep)
 - `BENCH_WARMUP` (default `10`)
 - `BENCH_RUNS` (default `300`)
-- `BENCH_MIXED_KEY_SPACE` (default `100`)
-- `BENCH_MIXED_RUNS` (default `500`)
 
 Examples:
 
 ```bash
-BENCH_RUNS=1000 BENCH_MIXED_RUNS=2000 uv run python tests/benchmark.py
-```
-
-```bash
-# Focus on decorator overhead (no artificial sleep)
-BENCH_WORK_MS=0 BENCH_RUNS=200000 BENCH_MIXED_RUNS=300000 uv run python tests/benchmark.py
+BENCH_RUNS=1000 uv run python tests/benchmark.py
 ```
 
 ### Step 3 — Compare two runs
 
-There are two ways to select runs:
-
-- Relative: `last` / `last-N`
-- Explicit: integer indices (0-based; negatives allowed)
-
-List run indices quickly:
+The benchmark appends JSON lines to `benchmarks.log`. A quick helper to list runs:
 
 ```bash
 uv run python - <<'PY'
 import json
 from pathlib import Path
 runs=[]
+if not Path('benchmarks.log').exists():
+    print("No benchmarks.log found")
+    exit(0)
 for line in Path('benchmarks.log').read_text(encoding='utf-8', errors='replace').splitlines():
-  line=line.strip()
-  if not line.startswith('{'):
-    continue
-  try:
-    obj=json.loads(line)
-  except Exception:
-    continue
-  if isinstance(obj,dict) and 'results' in obj:
-    runs.append(obj)
+    line=line.strip()
+    if not line.startswith('{'):
+        continue
+    try:
+        obj=json.loads(line)
+    except Exception:
+        continue
+    if isinstance(obj,dict) and 'sections' in obj:
+        runs.append(obj)
 print('count',len(runs))
 for i,r in enumerate(runs):
-  print(i,r.get('ts'))
+    print(i,r.get('ts'))
 PY
 ```
 
-Compare (example: index 2 vs index 11):
-
-```bash
-uv run python tests/compare_benchmarks.py --a 2 --b 11
-```
-
-What to look at:
-- **Hot TTL/SWR** medians: these are the pure “cache-hit overhead” numbers.
-- **Mixed** medians: reflect a real-ish distribution; watch for regressions here.
-- Ignore small (<5–10%) deltas unless they repeat across multiple clean runs.
+To compare two indices (e.g., 2 vs 11), load the JSON objects in a notebook or script and diff the `sections` (hot medians for TTL/SWR/BG are the most sensitive to overhead changes).
 
 ### Step 4 — Make results stable (recommended practice)
 
@@ -163,6 +144,10 @@ PROFILE_N=5000000 \
 - `SWRCache` hot: overhead of key generation + `get_entry()` + freshness checks.
 - `BGCache` hot: overhead of key lookup + `get()` + return.
 
+- **Async results (important)**
+  - Async medians include the cost of creating/awaiting a coroutine and event-loop scheduling.
+  - For AsyncBG/AsyncSWR, compare against the `async_baseline` row (plain `await` with no cache) to estimate *cache-specific* overhead.
+
 - **Mixed path**
   - A high mean + low median typically indicates occasional slow misses/refreshes.
````
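The benchmarking doc describes a harness that reports median/mean/stdev per strategy and appends one JSON object per run to `benchmarks.log`. A self-contained sketch of that pattern — a warmup-then-measure timing loop plus an append-only JSON-lines log. The field names (`median_us`, `sections`, `ts`) and the `benchmarks.log.example` filename are illustrative assumptions, not the repo's actual schema:

```python
import json
import statistics
import time
from pathlib import Path

def bench(fn, warmup: int = 10, runs: int = 300) -> dict:
    """Time fn() per call and summarize latency in microseconds."""
    for _ in range(warmup):          # warmup iterations are discarded
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e6)
    return {
        "median_us": statistics.median(samples),
        "mean_us": statistics.fmean(samples),
        "stdev_us": statistics.stdev(samples),
    }

def append_run(path: Path, sections: dict) -> None:
    # Append-only JSON-lines log: one complete JSON object per line.
    record = {"ts": time.strftime("%Y-%m-%dT%H:%M:%S"), "sections": sections}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical filename, to avoid clobbering the repo's real benchmarks.log.
log = Path("benchmarks.log.example")
append_run(log, {"hot": bench(lambda: sum(range(100)))})
print(log.read_text().splitlines()[-1][:30])
```

Because each line is an independent JSON object, later runs never rewrite earlier ones, and the listing helper in Step 3 can skip any non-JSON line safely.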
