
Commit 78b653a

Copilot and max-lt committed
Update documentation and comments to use relative timing values
Co-authored-by: max-lt <9805205+max-lt@users.noreply.github.com>
1 parent 6b728a8 commit 78b653a

13 files changed: 68 additions & 62 deletions

README.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -5,7 +5,7 @@ V8-based JavaScript runtime for serverless workers, built on [rusty_v8](https://
 ## Quick Start
 
 ```rust
-use openworkers_runtime_v8::{init_pool, execute_pooled, RuntimeLimits, Script, Task};
+use openworkers_runtime_v8::{init_pool, execute_pooled, RuntimeLimits, Script, Event};
 
 // Initialize pool once at startup
 init_pool(1000, RuntimeLimits::default());
@@ -17,12 +17,12 @@ let script = Script::new(r#"
 });
 "#);
 
-execute_pooled("worker-id", script, ops, task).await?;
+execute_pooled("worker-id", script, ops, event).await?;
 ```
 
 ## Features
 
-- **Isolate pooling** — <10µs warm start, ~100µs cold start
+- **Isolate pooling** — Fast warm start (~µs), cold start with snapshot (~µs)
 - **Streaming** — ReadableStream with backpressure
 - **Web APIs** — fetch, setTimeout, Response, Request, URL, console
 - **Async/await** — Full Promise support
@@ -31,8 +31,8 @@ execute_pooled("worker-id", script, ops, task).await?;
 
 | Mode        | Cold Start | Warm Start |
 | ----------- | ---------- | ---------- |
-| IsolatePool | ~100µs     | <10µs      |
-| Worker      | ~2-3ms     | ~2-3ms     |
+| IsolatePool | ~µs        | Fastest    |
+| Worker      | ~ms        | ~ms        |
 
 ## Documentation
 
````

docs/architecture.md

Lines changed: 16 additions & 15 deletions

````diff
@@ -24,17 +24,17 @@ platform.rs ← V8 Platform (singleton, once per process)
 | **Runtime**              | `runtime/mod.rs`            | V8 isolate + context + channels | New isolate       |
 | **Worker**               | `worker.rs`                 | High-level API around Runtime   | New isolate/req   |
 | **SharedIsolate**        | `shared_isolate.rs`         | Thread-local reusable isolate   | Once/thread       |
-| **ExecutionContext**     | `execution_context.rs`      | Disposable context              | ~100µs            |
+| **ExecutionContext**     | `execution_context.rs`      | Disposable context              | Fast (~µs)        |
 | **LockerManagedIsolate** | `locker_managed_isolate.rs` | Pool-compatible isolate         | Once/worker       |
 | **IsolatePool**          | `isolate_pool.rs`           | Global LRU cache                | Manages lifecycle |
 
 ## Execution Modes
 
-| Mode              | API                | Performance | Use Case             |
-| ----------------- | ------------------ | ----------- | -------------------- |
-| **Legacy**        | `Worker::new()`    | ~700µs/req  | Max isolation, tests |
-| **Shared Pool**   | `execute_pooled()` | ~200µs/req  | Single-thread        |
-| **Thread-Pinned** | `execute_pinned()` | ~170µs/req  | **Production**       |
+| Mode              | API                | Performance    | Use Case             |
+| ----------------- | ------------------ | -------------- | -------------------- |
+| **Legacy**        | `Worker::new()`    | Slow (~ms/req) | Max isolation, tests |
+| **Shared Pool**   | `execute_pooled()` | Fast (~µs/req) | Single-thread        |
+| **Thread-Pinned** | `execute_pinned()` | Fastest        | **Production**       |
 
 See [execution_modes.md](./execution_modes.md) for details.
 
@@ -75,6 +75,7 @@ pub trait EventLoopRuntime {
     fn pump_and_checkpoint(&mut self);
 }
 
+// Implemented by Runtime and ExecutionContext
 // Used by Worker, ExecutionContext, WorkerFuture
 drain_and_process(cx, runtime, buffer) -> Result<()>
 ```
@@ -132,15 +133,15 @@ Used by: Runtime        Used by: IsolatePool
 
 ## Key Files
 
-| File                   | Lines | Purpose                       |
-| ---------------------- | ----- | ----------------------------- |
-| `runtime/mod.rs`       | ~800  | V8 setup, callback processing |
-| `runtime/bindings.rs`  | ~600  | JS native functions           |
-| `worker.rs`            | ~700  | Worker API, event loop        |
-| `execution_context.rs` | ~500  | Pooled execution context      |
-| `isolate_pool.rs`      | ~300  | LRU cache, v8::Locker         |
-| `event_loop.rs`        | ~80   | Shared polling logic          |
-| `platform.rs`          | ~20   | V8 platform singleton         |
+| File                   | Lines | Purpose                       |
+| ---------------------- | ----- | ----------------------------- |
+| `runtime/mod.rs`       | ~700  | V8 setup, callback processing |
+| `runtime/bindings/`    | ~2500 | JS native functions (folder)  |
+| `worker.rs`            | ~1700 | Worker API, event loop        |
+| `execution_context.rs` | ~1300 | Pooled execution context      |
+| `isolate_pool.rs`      | ~450  | LRU cache, v8::Locker         |
+| `event_loop.rs`        | ~80   | Shared polling logic          |
+| `platform.rs`          | ~80   | V8 platform singleton         |
 
 ## See Also
 
````

docs/execution_modes.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -4,11 +4,11 @@ Three ways to run JavaScript in the V8 runtime.
 
 ## Comparison
 
-| Mode              | Cold Start | Warm Start | Thread Model             | Use Case       |
-| ----------------- | ---------- | ---------- | ------------------------ | -------------- |
-| **IsolatePool**   | ~100µs     | <10µs      | Multi-thread, v8::Locker | **Production** |
-| **Worker**        | ~2-3ms     | ~2-3ms     | Single, per-request      | Max isolation  |
-| **SharedIsolate** | ~100µs     | ~100µs     | Thread-local             | Legacy         |
+| Mode              | Cold Start | Warm Start | Thread Model             | Use Case       |
+| ----------------- | ---------- | ---------- | ------------------------ | -------------- |
+| **IsolatePool**   | ~µs        | Fastest    | Multi-thread, v8::Locker | **Production** |
+| **Worker**        | ~ms        | ~ms        | Single, per-request      | Max isolation  |
+| **SharedIsolate** | ~µs        | ~µs        | Thread-local             | Legacy         |
 
 ## IsolatePool (Recommended)
 
@@ -21,7 +21,7 @@ use openworkers_runtime_v8::{init_pool, execute_pooled};
 init_pool(1000, RuntimeLimits::default());
 
 // Per request
-execute_pooled("worker-id", script, ops, task).await?;
+execute_pooled("worker-id", script, ops, event).await?;
 ```
 
 **How it works:**
@@ -33,8 +33,8 @@ execute_pooled("worker-id", script, ops, task).await?;
 
 **Cache behavior:**
 
-- Hit → reuse existing isolate (<10µs)
-- Miss → create new (~100µs with snapshot)
+- Hit → reuse existing isolate (fastest)
+- Miss → create new (~µs with snapshot, ~ms without)
 - Full → evict LRU, create new
 
 See [isolate_pool.md](./isolate_pool.md) for implementation details.
@@ -102,11 +102,11 @@ execute_pinned("owner-id", script, ops, task).await?;
 
 ### Benchmark Comparison
 
-| Scenario       | Shared Pool | Thread-Pinned     |
-| -------------- | ----------- | ----------------- |
-| Warm cache     | 0.64ms      | **0.48ms** (+34%) |
-| CPU-bound      | 1.73ms      | 1.80ms            |
-| With I/O (5ms) | 7.95ms      | 7.92ms            |
+| Scenario       | Shared Pool | Thread-Pinned |
+| -------------- | ----------- | ------------- |
+| Warm cache     | Fast        | **Faster**    |
+| CPU-bound      | Similar     | Similar       |
+| With I/O       | Similar     | Similar       |
 
 Under high contention (many threads, few isolates), shared pool can degrade to **worse than no pooling**. Thread-pinned avoids this.
 
````

docs/isolate_pool.md

Lines changed: 8 additions & 5 deletions

````diff
@@ -59,6 +59,9 @@ pub struct LockerManagedIsolate {
     pub platform: &'static v8::SharedRef<v8::Platform>,
     pub limits: RuntimeLimits,
     pub memory_limit_hit: Arc<AtomicBool>,
+    pub use_snapshot: bool,
+    pub deferred_destruction_queue: Arc<DeferredDestructionQueue>,
+    // ... heap limit state
 }
 ```
 
@@ -130,11 +133,11 @@ pub async fn with_lock_async<F, Fut, R>(&self, f: F) -> R {
 
 ## Performance
 
-| Operation                  | Time   |
-| -------------------------- | ------ |
-| Cache hit + lock           | <10µs  |
-| Cache miss (with snapshot) | ~100µs |
-| Cache miss (no snapshot)   | ~2-3ms |
+| Operation                  | Time          |
+| -------------------------- | ------------- |
+| Cache hit + lock           | Fastest (~µs) |
+| Cache miss (with snapshot) | Fast (~µs)    |
+| Cache miss (no snapshot)   | Slower (~ms)  |
 
 ### Contention Scenarios
 
````

docs/streams.md

Lines changed: 5 additions & 3 deletions

````diff
@@ -33,9 +33,11 @@ Rust ↔ JavaScript streaming bridge for efficient data transfer without full buffering
 
 ```rust
 pub struct StreamManager {
-    senders: Arc<Mutex<HashMap<StreamId, UnboundedSender<StreamChunk>>>>,
-    receivers: Arc<Mutex<HashMap<StreamId, UnboundedReceiver<StreamChunk>>>>,
+    senders: Arc<Mutex<HashMap<StreamId, Sender<StreamChunk>>>>,
+    receivers: Arc<Mutex<HashMap<StreamId, Receiver<StreamChunk>>>>,
+    metadata: Arc<Mutex<HashMap<StreamId, String>>>,
     next_id: Arc<Mutex<StreamId>>,
+    high_water_mark: usize, // Channel capacity for backpressure
 }
 
 pub enum StreamChunk {
@@ -139,7 +141,7 @@ while (true) {
 ## Thread Safety
 
 - `StreamManager` is `Clone` + `Send` via `Arc<Mutex<...>>`
-- Channels are thread-safe (`mpsc::unbounded`)
+- Channels are thread-safe (`mpsc::channel` with bounded capacity)
 - One reader per stream (WHATWG spec)
 
 ## Memory
````

src/execution_context.rs

Lines changed: 2 additions & 2 deletions

````diff
@@ -4,7 +4,7 @@
 //! a fresh V8 Context within an existing SharedIsolate, providing complete
 //! isolation from other executions.
 //!
-//! The context is cheap to create (~100µs) compared to an isolate (~3-5ms).
+//! The context is cheap to create (~µs) compared to an isolate (~ms without snapshot).
 
 use std::cell::RefCell;
 use std::collections::HashMap;
@@ -228,7 +228,7 @@ impl ExecutionContext {
 
     /// Create a new execution context within a shared isolate
     ///
-    /// This is relatively cheap (~100µs) compared to creating an isolate.
+    /// This is relatively cheap (~µs) compared to creating an isolate.
    ///
     /// # Safety
     /// The SharedIsolate must remain valid for the lifetime of this ExecutionContext.
````

src/lib.rs

Lines changed: 4 additions & 4 deletions

````diff
@@ -15,10 +15,10 @@
 //! init_pool(1000, limits);
 //!
 //! // Execute worker (handles everything internally)
-//! execute_pooled("worker-id", script, ops, task).await?;
+//! execute_pooled("worker-id", script, ops, event).await?;
 //! ```
 //!
-//! Performance: <10µs warm start, ~100µs cold start (with snapshot)
+//! Performance: Fast warm start (~µs), cold start with snapshot (~µs)
 //!
 //! ### Worker (Maximum Isolation)
 //!
@@ -28,10 +28,10 @@
 //! use openworkers_runtime_v8::Worker;
 //!
 //! let mut worker = Worker::new_with_ops(script, limits, ops).await?;
-//! worker.exec(task).await?;
+//! worker.exec(event).await?;
 //! ```
 //!
-//! Performance: ~2-3ms per request (creates new isolate)
+//! Performance: Slower (~ms per request, creates new isolate)
 
 pub mod event_loop;
 pub mod execution_context;
````

src/locker_managed_isolate.rs

Lines changed: 1 addition & 1 deletion

````diff
@@ -41,7 +41,7 @@ pub struct LockerManagedIsolate {
 impl LockerManagedIsolate {
     /// Create a new locker-managed isolate
     ///
-    /// This is expensive (~3-5ms without snapshot, ~100µs with snapshot)
+    /// This is expensive (~ms without snapshot, ~µs with snapshot)
     /// and should be done lazily by the pool, not per-request.
     pub fn new(limits: RuntimeLimits) -> Self {
         // Get global V8 platform (initialized once, shared across all modules)
````

src/pooled_execution.rs

Lines changed: 2 additions & 2 deletions

````diff
@@ -23,8 +23,8 @@ use openworkers_core::{Event, OperationsHandle, Script, TerminationReason};
 /// * `task` - Task to execute (HTTP request, scheduled event, etc.)
 ///
 /// # Performance
-/// - Cache hit: <10µs (isolate reused from pool)
-/// - Cache miss: ~100µs (with snapshot) or ~3-5ms (without)
+/// - Cache hit: Fastest (~µs, isolate reused from pool)
+/// - Cache miss: Fast with snapshot (~µs), slower without (~ms)
 /// - Same worker_id always reuses the same isolate (warm cache)
 ///
 /// # Example
````

src/shared_isolate.rs

Lines changed: 2 additions & 2 deletions

````diff
@@ -25,8 +25,8 @@ pub struct SharedIsolate {
 impl SharedIsolate {
     /// Create a new shared isolate
     ///
-    /// This is expensive (~3-5ms) and should be done once at startup,
-    /// not per-request.
+    /// This is expensive (~ms without snapshot, ~µs with snapshot)
+    /// and should be done once at startup, not per-request.
     pub fn new(limits: RuntimeLimits) -> Self {
         // Get global V8 platform (initialized once, shared across all modules)
         let platform = crate::platform::get_platform();
````
