Bug
`ExecuteWorkflowAbility::executeDatabaseFlow()` schedules Action Scheduler actions with two arguments, `[flow_id, job_id]`:

```php
// inc/Abilities/Job/ExecuteWorkflowAbility.php:213-218
as_schedule_single_action(
	$schedule_time,
	'datamachine_run_flow_now',
	array( $flow_id, $job_id ),
	'data-machine'
);
```
All other scheduling paths (recurring, one-time, migration) pass only `[flow_id]`. This means:

- `as_unschedule_all_actions('datamachine_run_flow_now', array($flow_id))` — used in FlowScheduling, PauseFlowAbility, and DeleteFlowAbility — does NOT match the two-arg actions
- `as_next_scheduled_action('datamachine_run_flow_now', array($flow_id))` — does NOT detect them
- These actions are effectively invisible to all guard/dedup/cleanup logic
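The mismatch follows from how Action Scheduler matches actions: its lookup helpers compare the full args array for exact equality, so a query built for the one-arg signature can never see a two-arg action. A minimal sketch of the failure mode (the IDs are illustrative, taken from the incident below):

```php
// Action Scheduler compares the complete args array when matching actions,
// so array( 4 ) and array( 4, 42 ) are entirely different actions to it.
$flow_id = 4;

// executeDatabaseFlow() scheduled an action with args = array( 4, 42 ).

// Cleanup code queries with the one-arg signature:
as_unschedule_all_actions( 'datamachine_run_flow_now', array( $flow_id ) );
// Matches only actions whose args are exactly array( 4 );
// the array( 4, 42 ) action survives the sweep.

as_next_scheduled_action( 'datamachine_run_flow_now', array( $flow_id ) );
// Likewise: returns false if only two-arg actions are pending,
// so dedup guards believe nothing is scheduled.
```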
Observed Impact
On 2026-04-03, flow 4 (EC Agent Progress Ping, `every_3_days` schedule) fired twice in 4 minutes via two rogue AS actions:

| action_id | args    | scheduled    | schedule type                 |
|-----------|---------|--------------|-------------------------------|
| 7011420   | [4, 42] | 20:35:13     | SimpleSchedule (ad-hoc)       |
| 7011422   | [4, 44] | 20:39:28     | SimpleSchedule (ad-hoc)       |
| 7011381   | [4]     | Apr 4, 00:15 | IntervalSchedule (legitimate) |
The legitimate recurring schedule was unaffected — its next run is correctly at Apr 4. The two rogue runs created duplicate Discord threads and duplicate agent sessions.
Root Cause
Something invoked `datamachine/execute-workflow` with `flow_id: 4`. The only callers are:

- `RunFlow` chat tool (`inc/Api/Chat/Tools/RunFlow.php`)
- `wp datamachine flows run` CLI command
- REST API

The exact trigger could not be identified (no CLI, chat, or deploy activity at the trigger time). It may have been a transient condition from the v0.60.0 migration deployment ~4 hours earlier.
Proposed Fix
Option A (simple): When `ExecuteWorkflowAbility` schedules actions, pass only `[flow_id]` and store the `job_id` association elsewhere (e.g., in the job's `engine_data`). `RunFlowAbility` already handles `job_id = null` by creating a new job.
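Option A could look roughly like the following. This is a sketch, not the implementation: the `update_engine_data()` helper is hypothetical, standing in for however the codebase persists `engine_data` on a job.

```php
// Sketch of Option A: schedule with the same one-arg signature as every
// other scheduling path, so guard/dedup/cleanup queries can see the action.
as_schedule_single_action(
	$schedule_time,
	'datamachine_run_flow_now',
	array( $flow_id ),  // no $job_id in the action args
	'data-machine'
);

// Record the job association out of band instead of in the action args.
// update_engine_data() is a hypothetical helper for this sketch.
$jobs_db->update_engine_data( $job_id, array( 'scheduled_flow_id' => $flow_id ) );
```

On execution the handler then receives only `flow_id`; per the report, `RunFlowAbility` already tolerates a missing `job_id` by creating a new job, so the association lookup is an optimization rather than a requirement.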
Option B (defensive): Add cleanup logic that also unschedules the two-arg actions. Something like:

```php
// When cleaning up actions for a flow, also clean two-arg variants.
as_unschedule_all_actions( 'datamachine_run_flow_now', array( $flow_id ), 'data-machine' );
// Plus: find and cancel any actions whose first arg matches $flow_id.
```
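The "find and cancel" step can't be expressed as an args query, since Action Scheduler only matches exact args arrays. A sketch of one way to do it, assuming Action Scheduler's query API (`as_get_scheduled_actions()` with the `'ids'` return format, and the store's `fetch_action()`/`cancel_action()` methods):

```php
// Sweep all pending actions on the hook and cancel any whose first
// arg is the flow being cleaned up, regardless of arg count.
$action_ids = as_get_scheduled_actions(
	array(
		'hook'     => 'datamachine_run_flow_now',
		'group'    => 'data-machine',
		'status'   => ActionScheduler_Store::STATUS_PENDING,
		'per_page' => -1,
	),
	'ids'
);

foreach ( $action_ids as $action_id ) {
	$args = ActionScheduler::store()->fetch_action( $action_id )->get_args();
	if ( isset( $args[0] ) && (int) $args[0] === (int) $flow_id ) {
		ActionScheduler::store()->cancel_action( $action_id );
	}
}
```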
Option A is cleaner — it eliminates the dual-signature problem entirely.
Monitoring
Next legitimate run is Apr 4, 00:15 UTC. If it fires correctly once, this was likely a one-time event. If it doubles again, the bug is reproducible.