Path templates use pattern replacement to reduce metric cardinality:
UUIDs, numeric IDs, and MongoDB ObjectIds are replaced with `{id}` to prevent metric explosion.
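The replacement can be sketched with a few regular expressions. This is an illustrative sketch, not the middleware's actual patterns; the exact regexes and helper name are assumptions:

```python
import re

# Hypothetical path-templating helper: collapse volatile path segments
# (UUIDs, MongoDB ObjectIds, numeric IDs) into a single {id} placeholder
# so each route produces one metric series instead of one per entity.
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I
)
OBJECTID_RE = re.compile(r"\b[0-9a-f]{24}\b", re.I)
NUMERIC_RE = re.compile(r"\b\d+\b")

def template_path(path: str) -> str:
    """Return the path with id-like segments replaced by {id}."""
    path = UUID_RE.sub("{id}", path)
    path = OBJECTID_RE.sub("{id}", path)
    path = NUMERIC_RE.sub("{id}", path)
    return path
```

Order matters: the UUID and ObjectId patterns run before the bare-numeric one so that hex ids are not partially rewritten.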
## CSRF Protection
`CSRFMiddleware` implements the **double-submit cookie** pattern. At login the server issues two cookies: an httpOnly `access_token` cookie (the JWT) and a readable `csrf_token` cookie. The CSRF token is HMAC-signed against the `access_token` so it cannot be forged independently.
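The issue/verify pair can be sketched with the standard library's `hmac` module. This is a minimal sketch under stated assumptions — the secret handling, token layout, and function names are illustrative, not the project's real code:

```python
import hashlib
import hmac
import secrets

# Assumption: a server-side secret; in practice this would come from config.
SECRET_KEY = b"server-side-secret"

def issue_csrf_token(access_token: str) -> str:
    """Bind a random nonce to the JWT so the pair cannot be forged independently."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(
        SECRET_KEY, f"{nonce}:{access_token}".encode(), hashlib.sha256
    ).hexdigest()
    return f"{nonce}.{sig}"

def verify_csrf_token(token: str, access_token: str) -> bool:
    """Recompute the HMAC for the current access token and compare in constant time."""
    try:
        nonce, sig = token.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(
        SECRET_KEY, f"{nonce}:{access_token}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the signature covers the `access_token`, a CSRF token stolen in isolation is useless with any other session.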
On every mutating request (POST, PUT, DELETE, PATCH) that targets an `/api/` path, the middleware reads the `csrf_token` cookie and the `X-CSRF-Token` request header, then validates that they match (constant-time comparison) and that the token's HMAC signature is valid for the current `access_token`. Requests that fail any check receive a 403 response.
Safe methods (GET, HEAD, OPTIONS), auth endpoints (`/api/v1/auth/login`, `/api/v1/auth/register`), non-API paths, and unauthenticated requests (no `access_token` cookie) are exempt.
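The exemption logic above amounts to a short predicate. A hedged sketch, assuming Starlette-style request attributes; the helper name and exact path set are illustrative:

```python
# Hypothetical gate deciding whether a request must carry a valid CSRF pair.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
EXEMPT_PATHS = {"/api/v1/auth/login", "/api/v1/auth/register"}

def needs_csrf_check(method: str, path: str, cookies: dict) -> bool:
    if method in SAFE_METHODS:
        return False                      # safe methods are exempt
    if not path.startswith("/api/"):
        return False                      # non-API paths are exempt
    if path in EXEMPT_PATHS:
        return False                      # login/register are exempt
    if "access_token" not in cookies:
        return False                      # unauthenticated requests are exempt
    return True                           # otherwise validate, or return 403
```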
**Frontend behaviour:** The API interceptor in `api-interceptors.ts` auto-injects `authStore.csrfToken` into the `X-CSRF-Token` header for every non-GET request. The store obtains the token from the login response body and refreshes it by reading the `csrf_token` cookie on auth verification (`auth.svelte.ts`).
## System Metrics
In addition to HTTP metrics, the middleware module provides system-level observables:
These expose HTTP request telemetry and system-level observables.
The backend manages MongoDB collection schemas — indexes and TTL policies. These are initialized at process start, whether the process is the main API or a standalone worker.
Kafka event serialization is handled entirely by FastStream with Pydantic JSON; there is no schema registry involved. See [Event System Design](../architecture/event-system-design.md) for details on event serialization.
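Serialization therefore reduces to plain Pydantic JSON. A minimal sketch, assuming an illustrative event shape (FastStream performs the actual (de)serialization around the broker):

```python
from pydantic import BaseModel

# Hypothetical event model; field names are assumptions for illustration.
class UserRegistered(BaseModel):
    event_id: str
    email: str

# What goes onto the wire is just the model's JSON dump — no schema registry.
payload = UserRegistered(event_id="abc", email="a@example.com").model_dump_json()
```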
## MongoDB schema
MongoDB indexes are defined declaratively on each Beanie `Document` subclass via an inner `Settings` class. When `init_beanie()` runs at startup, Beanie reads the `indexes` list from every registered document model and calls `create_indexes` on the collection. Because `create_indexes` is idempotent (a no-op when an index with the same name and spec already exists), every process can safely call `init_beanie()` on boot without worrying about duplicate work.
Each document class owns its own index definitions. For example, the events collection declares a unique index on `event_id`, compound indexes for queries by event type, aggregate, user, service, and status, a TTL index for automatic expiration, and a text search index across several fields. Other documents define TTL indexes for idempotency keys (one-hour TTL) and DLQ messages (seven-day TTL) so old documents are cleaned up automatically.
Repositories don't create their own indexes — they only read and write. This separation keeps startup behavior predictable and prevents the same index being created from multiple code paths.
During API startup, the `lifespan` function in `dishka_lifespan.py` initializes …
## Local development
Indexes are additive and idempotent. To rebuild an index with different options, drop it manually in `mongosh` and restart the process — `init_beanie()` will recreate it from the document class definition. To start fresh, point the app at a new database.