
Commit 16189a5

Background task: complete never-completed audit log entries (#10026)
Closes #8817

The most important thing about an audit log is that every event you care about is in there. This PR adds a job to make sure requests that never complete (e.g., because Nexus crashed in the middle) appear in the audit log after some timeout.

The audit log gets written in two steps. At the beginning of a request we initialize the log entry row. If this fails, the request fails. This guarantees we never miss an initialization. Then, at the end of the request, we "complete" the row with the result of the operation and a `time_completed`.

The audit log list API response is ordered by `time_completed`. This lets us guarantee that if you retrieve a chunk of the audit log that is fully in the past, it will never change — you never have to fetch that chunk of the log again. But this requires that entries do not appear in the log until they are completed.

Completion can fail for a variety of hopefully very rare reasons. It seemed unreasonably complicated to guarantee completion by rolling back whatever operation happened on a failed completion. Instead, the idea was that we would run a job that cleans up incomplete rows after a long enough amount of time that it's very unlikely the request will complete. When we clean up such rows, we don't know what actually happened to the request, so we have to give it a special `result_kind` of `timeout` (as opposed to `success` or `error`).

https://github.com/oxidecomputer/omicron/blob/3d2d0c17002ef6291e1b4f88755ac7f7ba90dbd1/nexus/db-model/src/audit_log.rs#L275-L283

In the API response this becomes `AuditLogEntryResult::Unknown`, because "timeout" is an implementation detail and makes it sound like the operation itself timed out.

https://github.com/oxidecomputer/omicron/blob/3d2d0c17002ef6291e1b4f88755ac7f7ba90dbd1/nexus/db-model/src/audit_log.rs#L413-L415

## How often are log entries left incomplete?

Fortunately, this should be a very rare event — I checked on dogfood and there were around 200k rows in the table, with exactly 1 incomplete `disk_create` call from October 2025. Need to check on the colo rack.

## Config options

All three live in `[background_tasks.audit_log_timeout_incomplete]` in the Nexus config (`AuditLogTimeoutIncompleteConfig` in `nexus_config.rs`):

* `period_secs` — how often the background task activates (currently 600 = 10 min)
* `timeout_secs` — how old an incomplete entry must be before it gets timed out (default 14400 = 4 hours)
* `max_timed_out_per_activation` — max rows updated per activation (currently 1000), following the naming convention from `session_cleanup.max_delete_per_activation`

On the timeout, I think 4 hours is pretty conservative. My worry is that we could have a legitimate API call that takes an hour or two because it's a post of a giant file or something. I don't know how reasonable that worry is. This timeout could probably go down to two hours or an hour.

On the interval, 10 minutes seems maybe overkill for something that happens once every 5 months? But it's just one query, basically.

## Claude skill for adding background tasks

After #10009 I had Claude write the `add-background-task` skill and used it to make this one. It was pretty good, so I think it's probably worth checking in.
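Concretely, the config stanza looks like this in TOML (values are the ones listed above, matching the test config in this PR; the comments are mine):

```toml
[background_tasks.audit_log_timeout_incomplete]
# how often the background task activates (10 minutes)
period_secs = 600
# how old an incomplete entry must be before it is timed out (4 hours)
timeout_secs = 14400
# max rows updated per activation
max_timed_out_per_activation = 1000
```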
1 parent 677afab commit 16189a5

23 files changed

Lines changed: 803 additions & 15 deletions


Lines changed: 99 additions & 0 deletions
---
name: add-background-task
description: Add a new Nexus background task. Use when the user wants to create a periodic background task in Nexus that runs on a timer.
---

# Add a Nexus background task

All background tasks live in Nexus. A task implements the `BackgroundTask` trait (`nexus/src/app/background/mod.rs`), runs on a configurable period, and reports status as `serde_json::Value`.

## General approach

There are many existing background tasks in `nexus/src/app/background/tasks/`. Before writing anything, read a few tasks that are similar in shape to the one you're adding (e.g., a simple periodic cleanup vs. a task that watches a channel). Use those as models for structure, naming, logging, error handling, and status reporting. The goal is to conform to the patterns already in use, not to invent new ones.

## Checklist

These are the touch points for adding a new background task. Follow them in order.

### 1. Status type (`nexus/types/src/internal_api/background.rs`)

Define a struct for the task's activation status. Derive `Clone, Debug, Deserialize, Serialize, PartialEq, Eq`. For errors, use `Option<String>` if the task can only fail in one way per activation, or `Vec<String>` if it accumulates multiple independent errors. Match what similar tasks do.

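As a sketch, a status struct for a cleanup-style task might look like the following. The names are made up for illustration, and the real type would also derive `Deserialize`/`Serialize`; those are elided here so the snippet stays dependency-free.

```rust
// Illustrative status struct for a cleanup-style task (names are invented
// for this sketch; the real type also derives Deserialize/Serialize).
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct ExampleCleanupStatus {
    /// rows acted on during this activation
    pub rows_updated: usize,
    /// this task has one failure mode per activation, hence Option<String>
    pub error: Option<String>,
}

fn main() {
    let status = ExampleCleanupStatus { rows_updated: 3, error: None };
    assert_eq!(status.rows_updated, 3);
    assert!(status.error.is_none());
    println!("{status:?}");
}
```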
### 2. Task implementation (`nexus/src/app/background/tasks/<name>.rs`)

Create the task module. The struct holds whatever state it needs (typically `Arc<DataStore>` plus config). Implement `BackgroundTask::activate` by delegating to an `actually_activate` helper, then serialize the status to `serde_json::Value`. The `actually_activate` pattern makes unit testing easy without going through the trait.

`actually_activate` can either build and return the status (`async fn actually_activate(&mut self, opctx) -> YourStatus`), or take a mutable reference to one (`async fn actually_activate(&mut self, opctx, status: &mut YourStatus) -> Result<(), Error>`). The first is simpler and works well when the task either fully succeeds or fully fails. The second is better when the task can partially complete (e.g., it loops over work items): `activate` creates the status struct up front, passes it in, and serializes it afterward regardless of `Ok`/`Err`, so any progress already recorded in `status` (items processed, partial counts, earlier errors) is preserved even if the method bails out with `?` later.

Logging conventions: `debug` when there's nothing to do, `info` when routine work was done, `warn` when the work done indicates something is wrong (e.g., cleaning up after a crash), `error` on failure. Log errors as structured fields with the `; &err` slog syntax (which uses the `SlogInlineError` trait), not by interpolating into the message string. For the error string in the status struct, use `InlineErrorChain::new(&err).to_string()` (from `slog_error_chain`) to capture the full cause chain. Status error strings should not repeat the task name — omdb already shows which task you're looking at.

If the task takes config values that need conversion or validation (e.g., converting a `Duration` to `TimeDelta`, or checking a numeric range), do it once in `new()` and store the validated form. Don't re-validate on every activation — if the config is invalid, panic in `new()` with a message that includes the invalid value.

Include a unit test in the same file using `TestDatabase::new_with_datastore` that calls `actually_activate` directly. If the task has a datastore method, a single test exercising the task end-to-end (including the limit/batching behavior) is sufficient — don't add a redundant test for the datastore method separately unless it has complex logic worth testing in isolation.

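The second `actually_activate` shape can be sketched synchronously and without the trait, to show why partial progress survives an early bail-out. Everything here is illustrative (no real Nexus types), and `Status` stands in for the serialized status value:

```rust
// Sketch: partial progress recorded in `status` survives an early bail-out.
#[derive(Debug, Default, PartialEq)]
struct Status {
    items_processed: usize,
    errors: Vec<String>,
}

fn actually_activate(
    status: &mut Status,
    items: &[Result<(), String>],
) -> Result<(), String> {
    for item in items {
        match item {
            Ok(()) => status.items_processed += 1,
            // Bail out; counts recorded so far remain in `status`.
            Err(e) => return Err(e.clone()),
        }
    }
    Ok(())
}

fn activate(items: &[Result<(), String>]) -> Status {
    // Create the status up front, pass it in, and report it regardless of
    // Ok/Err (the real code would serialize it to serde_json::Value here).
    let mut status = Status::default();
    if let Err(e) = actually_activate(&mut status, items) {
        status.errors.push(e);
    }
    status
}

fn main() {
    let work = [Ok(()), Ok(()), Err("db unavailable".to_string())];
    let status = activate(&work);
    // Two items were processed before the failure; both facts are visible.
    assert_eq!(status.items_processed, 2);
    assert_eq!(status.errors, vec!["db unavailable".to_string()]);
    println!("{status:?}");
}
```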
### 3. Register the module (`nexus/src/app/background/tasks/mod.rs`)

Add `pub mod <name>;` in alphabetical order.

### 4. Activator (`nexus/background-task-interface/src/init.rs`)

Add `pub task_<name>: Activator` to the `BackgroundTasks` struct, maintaining alphabetical order among the task fields.

### 5. Config (`nexus-config/src/nexus_config.rs`)

Add a config struct (e.g., `YourTaskConfig`) with at minimum `period_secs: Duration` (using `#[serde_as(as = "DurationSeconds<u64>")]`). If the task does bounded work per activation, name the limit field `max_<past_tense_verb>_per_activation` (e.g., `max_deleted_per_activation`, `max_timed_out_per_activation`) to match existing conventions. Add the field to `BackgroundTaskConfig`. Update the test config literal and expected parse output at the bottom of the file.

### 6. Config files

Add the new config fields to all of these:

- `nexus/examples/config.toml`
- `nexus/examples/config-second.toml`
- `nexus/tests/config.test.toml`
- `smf/nexus/single-sled/config-partial.toml`
- `smf/nexus/multi-sled/config-partial.toml`

### 7. Wire up in `nexus/src/app/background/init.rs`

- Import the task module.
- Add `Activator::new()` in the `BackgroundTasks` constructor.
- Destructure it in the `start` method.
- Call `driver.register(TaskDefinition { ... })` with the task. The last task registered should pass `datastore` by move (not `.clone()`), so adjust the previous last task if needed.
- If extra data is needed from `BackgroundTasksData`, add the field there and plumb it from `nexus/src/app/mod.rs`.

### 8. Schema migration (if needed)

If the task needs a new index or schema change to support its query, add a migration under `schema/crdb/`. See `schema/crdb/README.adoc` for the procedure. Also update `dbinit.sql` and bump the version in `nexus/db-model/src/schema_versions.rs`.

### 9. Datastore method (if needed)

If the task needs a new query, add it in the appropriate `nexus/db-queries/src/db/datastore/` file. Prefer the Diesel typed DSL over raw SQL (`diesel::sql_query`) for queries and test helpers. Only fall back to raw SQL when the DSL genuinely can't express the query.

If the task modifies rows that other code paths also modify, think about races: what happens if both run concurrently on the same row? Both paths should typically guard their writes so only one succeeds.

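The race-guard idea can be modeled in memory: the cleanup path only touches rows that are still incomplete and stops at a per-activation limit, so a concurrent completion and the cleanup task can't both "win" on the same row. In the real datastore this guard would be a `WHERE time_completed IS NULL` filter plus a row limit; the types and names below are invented for the sketch.

```rust
// In-memory model of a guarded, bounded cleanup write (names are invented).
#[derive(Debug, PartialEq)]
struct Row {
    id: u32,
    time_completed: Option<u64>,
    result_kind: Option<&'static str>,
}

fn time_out_incomplete(rows: &mut [Row], now: u64, max: usize) -> usize {
    let mut updated = 0;
    for row in rows.iter_mut() {
        if updated == max {
            break; // per-activation limit reached
        }
        // Guard: a row already completed by a concurrent request is left alone.
        if row.time_completed.is_none() {
            row.time_completed = Some(now);
            row.result_kind = Some("timeout");
            updated += 1;
        }
    }
    updated
}

fn main() {
    let mut rows = vec![
        Row { id: 1, time_completed: Some(100), result_kind: Some("success") },
        Row { id: 2, time_completed: None, result_kind: None },
        Row { id: 3, time_completed: None, result_kind: None },
    ];
    // Limit of 1 per activation: only one incomplete row is timed out.
    assert_eq!(time_out_incomplete(&mut rows, 200, 1), 1);
    assert_eq!(rows[1].result_kind, Some("timeout"));
    // The already-completed row is untouched.
    assert_eq!(rows[0].result_kind, Some("success"));
    println!("{rows:?}");
}
```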
### 10. omdb output (`dev-tools/omdb/src/bin/omdb/nexus.rs`)

Add a `print_task_<name>` function and wire it into the match in `print_task_details` (alphabetical order). Import the status type. Use the `const_max_len` + `WIDTH` pattern to align columns:

```rust
const LABEL: &str = "label:";
const WIDTH: usize = const_max_len(&[LABEL, ...]) + 1;
println!("    {LABEL:<WIDTH$}{}", status.field);
```

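For intuition, `const_max_len` could plausibly be implemented as a `const fn` so `WIDTH` is computed at compile time; this is a guess at its shape, not the actual omdb helper. Left-aligning each label to the longest label's width plus one keeps the values in a single column:

```rust
// A plausible sketch of the `const_max_len` helper (the real omdb helper
// may differ): longest string length in the slice, computable in const.
const fn const_max_len(strs: &[&str]) -> usize {
    let mut max = 0;
    let mut i = 0;
    while i < strs.len() {
        if strs[i].len() > max {
            max = strs[i].len();
        }
        i += 1;
    }
    max
}

fn main() {
    const TIMED_OUT: &str = "timed_out:";
    const ERROR: &str = "error:";
    // +1 leaves at least one space between the longest label and its value.
    const WIDTH: usize = const_max_len(&[TIMED_OUT, ERROR]) + 1;
    println!("    {TIMED_OUT:<WIDTH$}{}", 0);
    println!("    {ERROR:<WIDTH$}{}", "none");
}
```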
### 11. Update test output (`dev-tools/omdb/tests/`)

Run the omdb tests with `EXPECTORATE=overwrite` to update the expected output snapshots (`env.out` and `successes.out`):

```
EXPECTORATE=overwrite cargo nextest run -p omicron-omdb
```

Review the diff to make sure only your new task's output was added.

### 12. Verify

- `cargo check -p omicron-nexus --all-targets`
- `cargo fmt`
- `cargo xtask clippy`
- Run the new task's unit tests
- Run the omdb tests: `cargo nextest run -p omicron-omdb`

dev-tools/omdb/src/bin/omdb/nexus.rs

Lines changed: 49 additions & 0 deletions
@@ -52,6 +52,7 @@ use nexus_types::deployment::OximeterReadPolicy;
 use nexus_types::fm;
 use nexus_types::internal_api::background::AbandonedVmmReaperStatus;
 use nexus_types::internal_api::background::AttachedSubnetManagerStatus;
+use nexus_types::internal_api::background::AuditLogTimeoutIncompleteStatus;
 use nexus_types::internal_api::background::BlueprintPlannerStatus;
 use nexus_types::internal_api::background::BlueprintRendezvousStats;
 use nexus_types::internal_api::background::BlueprintRendezvousStatus;
@@ -1222,6 +1223,9 @@ fn print_task_details(bgtask: &BackgroundTask, details: &serde_json::Value) {
         "attached_subnet_manager" => {
             print_task_attached_subnet_manager_status(details);
         }
+        "audit_log_timeout_incomplete" => {
+            print_task_audit_log_timeout_incomplete(details);
+        }
         "blueprint_planner" => {
             print_task_blueprint_planner(details);
         }
@@ -1273,6 +1277,9 @@ fn print_task_details(bgtask: &BackgroundTask, details: &serde_json::Value) {
         "read_only_region_replacement_start" => {
             print_task_read_only_region_replacement_start(details);
         }
+        "reconfigurator_config_watcher" => {
+            print_task_reconfigurator_config_watcher(details);
+        }
         "region_replacement" => {
             print_task_region_replacement(details);
         }
@@ -2323,6 +2330,16 @@ fn print_task_read_only_region_replacement_start(details: &serde_json::Value) {
     }
 }
 
+fn print_task_reconfigurator_config_watcher(details: &serde_json::Value) {
+    match details.get("config_updated").and_then(|v| v.as_bool()) {
+        Some(updated) => println!("    config updated: {updated}"),
+        None => eprintln!(
+            "warning: failed to interpret task details: {:?}",
+            details
+        ),
+    }
+}
+
 fn print_task_region_replacement(details: &serde_json::Value) {
     match serde_json::from_value::<RegionReplacementStatus>(details.clone()) {
         Err(error) => eprintln!(
@@ -2671,6 +2688,38 @@ fn print_task_saga_recovery(details: &serde_json::Value) {
     }
 }
 
+fn print_task_audit_log_timeout_incomplete(details: &serde_json::Value) {
+    match serde_json::from_value::<AuditLogTimeoutIncompleteStatus>(
+        details.clone(),
+    ) {
+        Err(error) => eprintln!(
+            "warning: failed to interpret task details: {:?}: {:?}",
+            error, details
+        ),
+        Ok(status) => {
+            const TIMED_OUT: &str = "timed_out:";
+            const CUTOFF: &str = "cutoff:";
+            const MAX_UPDATE: &str = "max_timed_out_per_activation:";
+            const ERROR: &str = "error:";
+            const WIDTH: usize =
+                const_max_len(&[TIMED_OUT, CUTOFF, MAX_UPDATE, ERROR]) + 1;
+
+            println!("    {TIMED_OUT:<WIDTH$}{}", status.timed_out);
+            println!(
+                "    {CUTOFF:<WIDTH$}{}",
+                status.cutoff.to_rfc3339_opts(SecondsFormat::AutoSi, true),
+            );
+            println!(
+                "    {MAX_UPDATE:<WIDTH$}{}",
+                status.max_timed_out_per_activation
+            );
+            if let Some(error) = &status.error {
+                println!("    {ERROR:<WIDTH$}{error}");
+            }
+        }
+    };
+}
+
 fn print_task_session_cleanup(details: &serde_json::Value) {
     match serde_json::from_value::<SessionCleanupStatus>(details.clone()) {
         Err(error) => eprintln!(

dev-tools/omdb/tests/env.out

Lines changed: 15 additions & 0 deletions
@@ -38,6 +38,11 @@ task: "attached_subnet_manager"
     distributes attached subnets to sleds and switch
 
 
+task: "audit_log_timeout_incomplete"
+    transitions stale incomplete audit log entries to timeout status so they
+    become visible in the audit log
+
+
 task: "bfd_manager"
     Manages bidirectional fowarding detection (BFD) configuration on rack
     switches
@@ -288,6 +293,11 @@ task: "attached_subnet_manager"
     distributes attached subnets to sleds and switch
 
 
+task: "audit_log_timeout_incomplete"
+    transitions stale incomplete audit log entries to timeout status so they
+    become visible in the audit log
+
+
 task: "bfd_manager"
     Manages bidirectional fowarding detection (BFD) configuration on rack
     switches
@@ -525,6 +535,11 @@ task: "attached_subnet_manager"
     distributes attached subnets to sleds and switch
 
 
+task: "audit_log_timeout_incomplete"
+    transitions stale incomplete audit log entries to timeout status so they
+    become visible in the audit log
+
+
 task: "bfd_manager"
     Manages bidirectional fowarding detection (BFD) configuration on rack
     switches

dev-tools/omdb/tests/successes.out

Lines changed: 23 additions & 2 deletions
@@ -273,6 +273,11 @@ task: "attached_subnet_manager"
     distributes attached subnets to sleds and switch
 
 
+task: "audit_log_timeout_incomplete"
+    transitions stale incomplete audit log entries to timeout status so they
+    become visible in the audit log
+
+
 task: "bfd_manager"
     Manages bidirectional fowarding detection (BFD) configuration on rack
     switches
@@ -593,6 +598,14 @@ task: "attached_subnet_manager"
     no dendrite instances found
     no sleds found
 
+task: "audit_log_timeout_incomplete"
+  configured period: every <REDACTED_DURATION>m
+  last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
+    started at <REDACTED_TIMESTAMP> (<REDACTED DURATION>s ago) and ran for <REDACTED DURATION>ms
+    timed_out: 0
+    cutoff: <REDACTED_TIMESTAMP>
+    max_timed_out_per_activation: 1000
+
 task: "bfd_manager"
   configured period: every <REDACTED_DURATION>s
   last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
@@ -782,7 +795,7 @@ task: "reconfigurator_config_watcher"
   configured period: every <REDACTED_DURATION>s
   last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
     started at <REDACTED_TIMESTAMP> (<REDACTED DURATION>s ago) and ran for <REDACTED DURATION>ms
-warning: unknown background task: "reconfigurator_config_watcher" (don't know how to interpret details: Object {"config_updated": Bool(false)})
+    config updated: <CONFIG_UPDATED_REDACTED>
 
 task: "region_replacement"
   configured period: every <REDACTED_DURATION>days <REDACTED_DURATION>h <REDACTED_DURATION>m <REDACTED_DURATION>s
@@ -1194,6 +1207,14 @@ task: "attached_subnet_manager"
     no dendrite instances found
     no sleds found
 
+task: "audit_log_timeout_incomplete"
+  configured period: every <REDACTED_DURATION>m
+  last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
+    started at <REDACTED_TIMESTAMP> (<REDACTED DURATION>s ago) and ran for <REDACTED DURATION>ms
+    timed_out: 0
+    cutoff: <REDACTED_TIMESTAMP>
+    max_timed_out_per_activation: 1000
+
 task: "bfd_manager"
   configured period: every <REDACTED_DURATION>s
   last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
@@ -1383,7 +1404,7 @@ task: "reconfigurator_config_watcher"
   configured period: every <REDACTED_DURATION>s
  last completed activation: <REDACTED ITERATIONS>, triggered by <TRIGGERED_BY_REDACTED>
    started at <REDACTED_TIMESTAMP> (<REDACTED DURATION>s ago) and ran for <REDACTED DURATION>ms
-warning: unknown background task: "reconfigurator_config_watcher" (don't know how to interpret details: Object {"config_updated": Bool(false)})
+    config updated: <CONFIG_UPDATED_REDACTED>
 
 task: "region_replacement"
   configured period: every <REDACTED_DURATION>days <REDACTED_DURATION>h <REDACTED_DURATION>m <REDACTED_DURATION>s

dev-tools/omdb/tests/test_all_output.rs

Lines changed: 4 additions & 0 deletions
@@ -333,6 +333,10 @@ async fn test_omdb_success_cases(cptestctx: &ControlPlaneTestContext) {
         redactor.extra_variable_length("cockroachdb_version", &crdb_version);
     }
 
+    // The `reconfigurator_config_watcher` task's output depends on
+    // whether it has had time to complete an activation.
+    redactor.field("config updated:", r"\w+");
+
     // The `tuf_artifact_replication` task's output depends on how
     // many sleds happened to register with Nexus before its first
     // execution. These redactions work around the issue described in

nexus-config/src/nexus_config.rs

Lines changed: 29 additions & 0 deletions
@@ -437,6 +437,8 @@ pub struct BackgroundTaskConfig {
     pub attached_subnet_manager: AttachedSubnetManagerConfig,
     /// configuration for console session cleanup task
     pub session_cleanup: SessionCleanupConfig,
+    /// configuration for audit log incomplete timeout task
+    pub audit_log_timeout_incomplete: AuditLogTimeoutIncompleteConfig,
 }
 
 #[serde_as]
@@ -450,6 +452,21 @@ pub struct SessionCleanupConfig {
     pub max_delete_per_activation: u32,
 }
 
+#[serde_as]
+#[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
+pub struct AuditLogTimeoutIncompleteConfig {
+    /// period (in seconds) for periodic activations of this task
+    #[serde_as(as = "DurationSeconds<u64>")]
+    pub period_secs: Duration,
+
+    /// how old an incomplete entry must be before it is timed out
+    #[serde_as(as = "DurationSeconds<u64>")]
+    pub timeout_secs: Duration,
+
+    /// max rows per SQL statement
+    pub max_timed_out_per_activation: u32,
+}
+
 #[serde_as]
 #[derive(Clone, Debug, Deserialize, Eq, PartialEq, Serialize)]
 pub struct DnsTasksConfig {
@@ -1276,6 +1293,9 @@ mod test {
             attached_subnet_manager.period_secs = 60
             session_cleanup.period_secs = 300
             session_cleanup.max_delete_per_activation = 10000
+            audit_log_timeout_incomplete.period_secs = 600
+            audit_log_timeout_incomplete.timeout_secs = 14400
+            audit_log_timeout_incomplete.max_timed_out_per_activation = 1000
             [default_region_allocation_strategy]
             type = "random"
             seed = 0
@@ -1544,6 +1564,12 @@ mod test {
                         period_secs: Duration::from_secs(300),
                         max_delete_per_activation: 10_000,
                     },
+                    audit_log_timeout_incomplete:
+                        AuditLogTimeoutIncompleteConfig {
+                            period_secs: Duration::from_secs(600),
+                            timeout_secs: Duration::from_secs(14400),
+                            max_timed_out_per_activation: 1000,
+                        },
                 },
                 multicast: MulticastConfig { enabled: false },
                 default_region_allocation_strategy:
@@ -1652,6 +1678,9 @@ mod test {
             attached_subnet_manager.period_secs = 60
             session_cleanup.period_secs = 300
             session_cleanup.max_delete_per_activation = 10000
+            audit_log_timeout_incomplete.period_secs = 600
+            audit_log_timeout_incomplete.timeout_secs = 14400
+            audit_log_timeout_incomplete.max_timed_out_per_activation = 1000
 
             [default_region_allocation_strategy]
             type = "random"

nexus/background-task-interface/src/init.rs

Lines changed: 1 addition & 0 deletions
@@ -37,6 +37,7 @@ pub struct BackgroundTasks {
     pub task_instance_reincarnation: Activator,
     pub task_service_firewall_propagation: Activator,
     pub task_abandoned_vmm_reaper: Activator,
+    pub task_audit_log_timeout_incomplete: Activator,
     pub task_vpc_route_manager: Activator,
     pub task_saga_recovery: Activator,
     pub task_lookup_region_port: Activator,

nexus/db-model/src/audit_log.rs

Lines changed: 6 additions & 6 deletions
@@ -224,7 +224,7 @@ impl From<AuditLogEntryInitParams> for AuditLogEntryInit {
 
 /// `audit_log_complete` is a view on `audit_log` filtering for rows with
 /// non-null `time_completed`, not its own table.
-#[derive(Queryable, Selectable, Clone, Debug)]
+#[derive(Queryable, Selectable, Clone, Debug, PartialEq)]
 #[diesel(table_name = audit_log_complete)]
 pub struct AuditLogEntry {
     pub id: Uuid,
@@ -276,15 +276,15 @@ pub enum AuditLogCompletion {
     /// error, and I don't think we even have API timeouts) but rather that the
     /// attempts to complete the log entry failed (or were never even attempted
     /// because, e.g., Nexus crashed during the operation), and this entry had
-    /// to be cleaned up later by a background job (which doesn't exist yet)
-    /// after a timeout. Note we represent this result status as "Unknown" in
-    /// the external API because timeout is an implementation detail and makes
-    /// it sound like the operation timed out.
+    /// to be cleaned up later by a background job after a timeout. Note we
+    /// represent this result status as "Unknown" in the external API because
+    /// timeout is an implementation detail and makes it sound like the
+    /// operation timed out.
     Timeout,
 }
 
 #[derive(AsChangeset, Clone)]
-#[diesel(table_name = audit_log)]
+#[diesel(table_name = audit_log, treat_none_as_null = true)]
 pub struct AuditLogCompletionUpdate {
     pub time_completed: DateTime<Utc>,
     pub result_kind: AuditLogResultKind,
