`contributing/PIPELINES.md`
Brief checklist for implementing a new pipeline:

8. Register the pipeline in `PipelineManager` and hint fetch from services after commit via `pipeline_hinter.hint_fetch(Model.__name__)`.
9. Add minimal tests covering: fetch eligibility and ordering, the successful unlock path, the stale lock-token path, and the related-lock contention retry path.
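The registration step above can be sketched roughly as follows. Everything here is a hypothetical stand-in — `PipelineManager`'s method names, `PipelineHinter`, and the `Report` model are assumptions for illustration; only the `hint_fetch(Model.__name__)` call shape comes from the checklist itself:

```python
# Hypothetical sketch of step 8: all class and method names are stand-ins,
# not this codebase's actual API.
class PipelineHinter:
    """Collects hints so services fetch newly committed rows promptly."""
    def __init__(self):
        self.hints: list[str] = []

    def hint_fetch(self, model_name: str) -> None:
        self.hints.append(model_name)


class PipelineManager:
    def __init__(self):
        self.pipelines: dict[str, object] = {}

    def register(self, model: type, worker) -> None:
        self.pipelines[model.__name__] = worker


class Report:
    """Stand-in for the pipeline's main ORM model."""


async def report_worker(item) -> None:
    ...  # the pipeline's process() entry point


manager = PipelineManager()
pipeline_hinter = PipelineHinter()

# Register the pipeline once at startup...
manager.register(Report, report_worker)

# ...and after committing new rows, hint services to fetch them.
pipeline_hinter.hint_fetch(Report.__name__)
```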
## Typical worker structure
Most workers are easiest to reason about when `process()` is split into three phases:
1. Load/refetch: open a short DB session, refetch the locked main row by `id + lock_token`, lock any required related rows, and gather any extra data needed for processing.
2. Process: do the heavy work outside DB sessions and build result objects or update maps instead of mutating detached ORM models.
3. Apply: open a short DB session, guard the main update by `id + lock_token`, resolve time placeholders, apply related updates, emit events, and unlock rows.
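The three phases can be sketched with in-memory stand-ins; a real worker would use the project's actual session and lock helpers instead of the toy `short_session` and dict "database" shown here:

```python
# Hedged sketch of the load -> process -> apply shape. The dict DB and
# short_session are stand-ins for a real database and session factory.
import asyncio
from contextlib import asynccontextmanager
from dataclasses import dataclass

DB = {1: {"lock_token": "tok-1", "payload": "raw", "result": None}}


@asynccontextmanager
async def short_session():
    yield DB  # stand-in for opening/closing a short DB session


@dataclass
class Result:
    summary: str


async def process(item_id: int, lock_token: str) -> None:
    # Phase 1: load/refetch the locked main row by id + lock_token.
    async with short_session() as db:
        row = db.get(item_id)
        if row is None or row["lock_token"] != lock_token:
            return  # lock lost: another worker owns the row now
        payload = row["payload"]

    # Phase 2: heavy work outside any DB session; build a result object
    # instead of mutating a detached ORM row.
    result = Result(summary=payload.upper())

    # Phase 3: apply in a second short session, again guarded by the token,
    # then unlock the row.
    async with short_session() as db:
        row = db.get(item_id)
        if row is not None and row["lock_token"] == lock_token:
            row["result"] = result.summary
            row["lock_token"] = None  # unlock

asyncio.run(process(1, "tok-1"))
```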

A dedicated context object is often useful for the load step when the worker needs multiple loaded models, related lock metadata, or derived values that should be passed cleanly into processing and apply. For very small pipelines, a direct load -> process -> apply flow may still be clearer.
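Such a context object might look like the following; the field names are hypothetical and would vary by pipeline:

```python
# Hedged sketch: a dedicated load context carrying everything the process
# and apply phases need. All field names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class LoadContext:
    """Gathered in the load phase, passed to process and apply."""
    item_id: int
    lock_token: str
    payload: str
    related_ids: list[int] = field(default_factory=list)
    related_lock_tokens: dict[int, str] = field(default_factory=dict)


ctx = LoadContext(
    item_id=1,
    lock_token="tok-1",
    payload="raw",
    related_ids=[7],
    related_lock_tokens={7: "tok-7"},
)
```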

Workers can share one context type and one apply function across all states even if the processing logic differs by state:
Sometimes state-specific helpers are the cleanest option; they can still share a common apply phase as long as all states write results in the same general shape:
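One way to sketch that: fully state-specific helpers that all produce a common result type, which a single apply function then writes. The `WriteSet` name and the handlers are hypothetical:

```python
# Hedged sketch: state-specific helpers funneling into one common apply,
# because every state writes results in the same general shape.
import asyncio
from dataclasses import dataclass


@dataclass
class WriteSet:
    """Common result shape every state-specific helper produces."""
    main_updates: dict
    related_updates: dict


async def _handle_submitted(item: dict) -> WriteSet:
    return WriteSet(main_updates={"status": "validated"}, related_updates={})


async def _handle_expired(item: dict) -> WriteSet:
    return WriteSet(
        main_updates={"status": "archived"},
        related_updates={"audit": "expired"},
    )


async def _apply(writes: WriteSet) -> dict:
    # One guarded write path, whichever state produced the WriteSet.
    return {**writes.main_updates, **writes.related_updates}


async def process(item: dict) -> dict:
    helper = _handle_expired if item["expired"] else _handle_submitted
    return await _apply(await helper(item))

applied = asyncio.run(process({"expired": True}))
```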
If different states have materially different write-side behavior, different apply paths are fine as well. This commonly happens when one state does a normal guarded update while another does delete-or-cleanup work with different related updates:
```python
async def process(item):
    if item.to_be_deleted:
        await _process_to_be_deleted_item(item)
    elif item.status == Status.SUBMITTED:
        await _process_submitted_item(item)
```
It's ok not to force all pipelines into one exact shape.