
Commit ccc7684

bdchatham and claude authored
feat: NodeUpdate plans for image rollout orchestration (#94)
* feat: NodeUpdate plans for image rollout orchestration

  When a Running node's spec.image differs from status.currentImage, the planner detects the drift and builds a NodeUpdate plan to roll out the update.

  NodeUpdate plan tasks:
  1. apply-statefulset — pushes the new image to the StatefulSet via SSA
  2. apply-service — keeps the headless Service in sync
  3. observe-image — polls StatefulSet rollout until complete, stamps status.currentImage in-memory
  4. mark-ready — re-initializes the sidecar on the new pod

  Planner condition management:
  - ResolvePlan sets NodeUpdateInProgress=True when building the plan
  - On plan completion: clears with Reason=UpdateComplete
  - On plan failure: clears with Reason=UpdateFailed, then immediately retries (builds a new plan since drift persists)
  - handleTerminalPlan clears completed/failed plans before building the next one

  Drift-gated plans:
  - No drift (spec.image == status.currentImage) → no plan, steady state
  - Image drift → NodeUpdate plan

  Reconciler simplification:
  - Removed observeCurrentImage method — replaced by observe-image task
  - Running-phase post-plan block removed — fully plan-driven
  - Deleted 7 observeCurrentImage unit tests (coverage moved to task tests)

  New tests:
  - observe-image task: rollout complete, rolling update, stale generation, StatefulSet not found
  - NodeUpdate planner: drift detection, plan composition, condition lifecycle (start, complete, fail/retry), terminal plan handling

* fix: critical — NodeUpdate plans not nilled by executor, condition lifecycle

  Fixes from adversarial review by Tide specialists:

  CRITICAL: clearCompletedConvergencePlan was nilling NodeUpdate plans, preventing handleTerminalPlan from ever observing the completed plan and clearing NodeUpdateInProgress. Now renamed to clearCompletedPlanIfSafe and checks isNodeUpdatePlan — NodeUpdate plans stay Complete for one reconcile so the planner can observe them and clear conditions.

  IMPORTANT: Document observe-image's Status() mutation of currentImage as a sanctioned exception to the "tasks own resources" rule, explaining why the mutation must happen at observation time.

  MINOR: Fix stale "convergence plans" reference in isNodeUpdatePlan comment.

  Update test to verify NodeUpdate plans stay Complete (not nilled).

* refactor: planner owns all plan lifecycle — executor no longer nils plans

  Remove clearCompletedPlanIfSafe (formerly clearCompletedConvergencePlan) from the executor entirely. The executor now only marks plans Complete or Failed — it never nils them. The planner's handleTerminalPlan handles ALL plan cleanup: clearing conditions and nilling terminal plans.

  This is a cleaner separation: the executor drives tasks to completion, the planner decides what to do with the result. No type-specific checks (isNodeUpdatePlan) in the executor. No plan lifecycle management outside the planner.

  Also removes the now-unused currentPhase helper from the executor.

* refactor: move observe-image logic from Status() to Execute()

  Address PR feedback: Execute() now polls the StatefulSet and stamps currentImage when the rollout completes. Status() just returns the cached result via DefaultStatus(). This keeps Execute as the action method and Status as the query method.

  When the rollout is not yet complete, Execute returns nil (no error). The executor will re-invoke on the next reconcile since the task remains Pending.

* refactor: conditions set directly at point of decision, no inference

  Move condition setting from post-processing inference to direct mutation at the point of decision:
  - buildNodeUpdatePlan sets NodeUpdateInProgress=True directly on the node when it builds the plan — no separate applyPlanStartConditions step that scans tasks to infer the plan type
  - handleTerminalPlan sets NodeUpdateInProgress=False via the shared setNodeUpdateCondition helper — no clearNodeUpdateCondition that checked whether the condition was already set

  Deleted: isNodeUpdatePlan, applyPlanStartConditions, clearNodeUpdateCondition. These inferred plan type from task contents and had special-case logic for condition state. The new approach is simpler: the code that creates the scenario sets the condition. The code that observes the outcome clears it.

* fix: only clear NodeUpdateInProgress if it was set

  handleTerminalPlan was setting NodeUpdateInProgress=False for ALL terminal plans, including init plans that never had the condition set. This would add a spurious False condition after init completion.

  Now checks hasNodeUpdateCondition before clearing — only clears the condition if it was previously set to True by buildNodeUpdatePlan.

* refactor: reference TaskTypeMarkReady from task package consistently

  Move the mark-ready sidecar task type constant to the task package (re-exported from sidecar) so buildNodeUpdatePlan references all task types via task.TaskType* consistently.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
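For orientation, here is a minimal sketch in Go of the drift-gated planner flow the message describes. The SeiNode and TaskPlan shapes below are simplified stand-ins, not the repository's real API; only the task names, the drift check, and the point-of-decision condition rule come from the commit message.

package planner

// Simplified stand-ins for the real types (hypothetical shapes).
type Task struct{ Type string }
type TaskPlan struct {
	Phase string // "Pending", then "Complete" or "Failed"
	Tasks []Task
}
type SeiNode struct {
	Spec   struct{ Image string }
	Status struct {
		CurrentImage string
		Plan         *TaskPlan
	}
}

// buildNodeUpdatePlan composes the four rollout tasks in order and sets
// NodeUpdateInProgress=True at the point of decision.
func buildNodeUpdatePlan(node *SeiNode) {
	node.Status.Plan = &TaskPlan{
		Phase: "Pending",
		Tasks: []Task{
			{Type: "apply-statefulset"}, // push the new image via SSA
			{Type: "apply-service"},     // keep the headless Service in sync
			{Type: "observe-image"},     // poll rollout, stamp currentImage
			{Type: "mark-ready"},        // re-initialize the sidecar
		},
	}
	setNodeUpdateCondition(node, true, "UpdateStarted") // reason string is illustrative
}

// ResolvePlan is drift-gated: no drift, no plan. A failed plan is cleared by
// handleTerminalPlan, and the persisting drift triggers an immediate rebuild.
func ResolvePlan(node *SeiNode) error {
	handleTerminalPlan(node) // clears conditions and nils completed/failed plans
	if node.Status.Plan != nil {
		return nil // an in-flight plan keeps running
	}
	if node.Spec.Image != node.Status.CurrentImage {
		buildNodeUpdatePlan(node)
	}
	return nil
}

func handleTerminalPlan(node *SeiNode)                                  { /* sketched after the test diff below */ }
func setNodeUpdateCondition(node *SeiNode, inProgress bool, r string)   { /* elided */ }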
1 parent 59ebfbb commit ccc7684

12 files changed: 662 additions & 347 deletions


CLAUDE.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ make docker-push IMG=<image> # Push container image
 - **SeiNode** creates StatefulSets (replicas=1), headless Services, and PVCs via server-side apply (fieldOwner: `seinode-controller`).
 - **Plan-driven reconciliation** — Both controllers use ordered task plans (stored in `.status.plan`) to drive lifecycle. Plans are built by `internal/planner/` (`ResolvePlan` for nodes, `ForGroup` for deployments), executed by `planner.Executor`, with individual tasks in `internal/task/`. The reconcile loop is: `ResolvePlan → persist plan → ExecutePlan`. See `internal/planner/doc.go` for the full plan lifecycle.
 - **Init plans** transition nodes from Pending → Running. They include infrastructure tasks (`ensure-data-pvc`, `apply-statefulset`, `apply-service`) followed by sidecar tasks (`configure-genesis`, `config-apply`, etc.).
-- **Convergence plans** keep Running nodes in sync. They contain only `apply-statefulset` + `apply-service` and are nilled from status after completion.
+- **NodeUpdate plans** roll out image changes on Running nodes. Built when `spec.image != status.currentImage`. Tasks: `apply-statefulset`, `apply-service`, `observe-image` (polls StatefulSet rollout, stamps `currentImage`), `mark-ready` (sidecar re-init). The planner sets `NodeUpdateInProgress` condition on creation and clears it on completion/failure. When no drift is detected, no plan is built — the node sits in steady state.
 - **Atomic plan creation** — New plans are persisted before any tasks execute. The reconciler flushes the plan, then requeues. Execution starts on the next reconcile. This guarantees external observers see the plan before side effects occur.
 - **Condition ownership** — The planner owns all condition management on the owning resource. It sets conditions when creating plans (e.g., `NodeUpdateInProgress=True`) and when observing terminal plans (e.g., `NodeUpdateInProgress=False`). The executor does not set conditions — it only mutates plan/task state and phase transitions.
 - **Single-patch model** — All status mutations (plan state, conditions, phase, currentImage) accumulate in-memory during a reconcile and are flushed in a single `Status().Patch()` at the end. Tasks mutate owned resources (StatefulSets, Services, PVCs); the executor mutates plan state in-memory; the reconciler flushes once.
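Taken together, the plan-lifecycle, atomic-creation, and single-patch bullets imply a reconcile loop shaped roughly like the snippet below. This is an assumption-laden sketch, not the repository's code: ResolvePlan, ExecutePlan, and Status().Patch are named in this document, but the method name runPlan, the wiring, and the requeue handling are illustrative, with controller-runtime imports (ctrl, client) and the repo's planner and seiv1alpha1 packages assumed.

func (r *SeiNodeReconciler) runPlan(ctx context.Context, node *seiv1alpha1.SeiNode) (ctrl.Result, error) {
	base := client.MergeFrom(node.DeepCopy())

	hadPlan := node.Status.Plan != nil
	if err := planner.ResolvePlan(node); err != nil {
		return ctrl.Result{}, err
	}

	// Atomic plan creation: persist a newly built plan and requeue before
	// any task executes, so observers see the plan ahead of side effects.
	if !hadPlan && node.Status.Plan != nil {
		if err := r.Status().Patch(ctx, node, base); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{Requeue: true}, nil
	}

	// Single-patch model: execution mutates plan state, conditions, phase,
	// and currentImage in-memory; one Status().Patch flushes everything.
	result, execErr := r.PlanExecutor.ExecutePlan(ctx, node, node.Status.Plan)
	if err := r.Status().Patch(ctx, node, base); err != nil {
		return ctrl.Result{}, err
	}
	return result, execErr
}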

api/v1alpha1/seinode_types.go

Lines changed: 6 additions & 0 deletions
@@ -205,6 +205,12 @@ const (
 	PhaseTerminating SeiNodePhase = "Terminating"
 )
 
+// SeiNode condition types.
+const (
+	// ConditionNodeUpdateInProgress indicates an image update is being rolled out.
+	ConditionNodeUpdateInProgress = "NodeUpdateInProgress"
+)
+
 // SeiNodeStatus defines the observed state of a SeiNode.
 type SeiNodeStatus struct {
 	// Phase is the high-level lifecycle state.
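A sketch of how the planner might drive this new condition with the standard apimachinery helpers. It assumes SeiNodeStatus carries a Conditions []metav1.Condition field (not shown in this hunk) and uses a local copy of the constant so the snippet stands alone; the reason strings follow the commit message.

package planner

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Local copy of the condition type added above; the repo would reference
// seiv1alpha1.ConditionNodeUpdateInProgress instead.
const conditionNodeUpdateInProgress = "NodeUpdateInProgress"

// setNodeUpdateCondition is a conditions-slice variant of the shared helper
// the commit message names; this body is an assumption built on
// meta.SetStatusCondition.
func setNodeUpdateCondition(conds *[]metav1.Condition, inProgress bool, reason string) {
	status := metav1.ConditionFalse
	if inProgress {
		status = metav1.ConditionTrue
	}
	meta.SetStatusCondition(conds, metav1.Condition{
		Type:   conditionNodeUpdateInProgress,
		Status: status,
		Reason: reason, // e.g. "UpdateComplete" or "UpdateFailed" when clearing
	})
}

// hasNodeUpdateCondition supports the "only clear if it was set" fix: the
// planner checks for a prior condition before stamping False.
func hasNodeUpdateCondition(conds []metav1.Condition) bool {
	return meta.FindStatusCondition(conds, conditionNodeUpdateInProgress) != nil
}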

internal/controller/node/controller.go

Lines changed: 0 additions & 34 deletions
@@ -120,15 +120,6 @@ func (r *SeiNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
 		statusDirty = true
 	}
 
-	// Running phase: observe image convergence in-memory.
-	if node.Status.Phase == seiv1alpha1.PhaseRunning {
-		if dirty, err := r.observeCurrentImage(ctx, node); err != nil {
-			return ctrl.Result{}, fmt.Errorf("observing current image: %w", err)
-		} else if dirty {
-			statusDirty = true
-		}
-	}
-
 	if statusDirty {
 		if err := r.Status().Patch(ctx, node, statusBase); err != nil {
 			if execErr != nil {
@@ -166,31 +157,6 @@ func (r *SeiNodeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
 	return result, nil
 }
 
-// observeCurrentImage checks whether the StatefulSet rollout has completed
-// and stamps status.currentImage in-memory. Returns true if the image changed.
-func (r *SeiNodeReconciler) observeCurrentImage(ctx context.Context, node *seiv1alpha1.SeiNode) (bool, error) {
-	sts := &appsv1.StatefulSet{}
-	if err := r.Get(ctx, types.NamespacedName{Name: node.Name, Namespace: node.Namespace}, sts); err != nil {
-		if apierrors.IsNotFound(err) {
-			return false, nil
-		}
-		return false, err
-	}
-
-	if sts.Status.ObservedGeneration < sts.Generation {
-		return false, nil
-	}
-	if sts.Spec.Replicas == nil || sts.Status.UpdatedReplicas < *sts.Spec.Replicas {
-		return false, nil
-	}
-
-	if node.Status.CurrentImage != node.Spec.Image {
-		node.Status.CurrentImage = node.Spec.Image
-		return true, nil
-	}
-	return false, nil
-}
-
 // SetupWithManager sets up the controller with the Manager.
 func (r *SeiNodeReconciler) SetupWithManager(mgr ctrl.Manager) error {
 	return ctrl.NewControllerManagedBy(mgr).
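The deleted observeCurrentImage logic lives on inside the observe-image task's Execute(), per the final refactor in the commit message. Below is a sketch under stated assumptions: the ObserveImageTask shape and its fields are illustrative, while the rollout gates mirror the removed method above.

package task

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ObserveImageTask is an illustrative shape, not the repo's real struct. It
// holds the node's key, the target image, and a pointer to the in-memory
// status field it stamps (the real task works on the SeiNode object).
type ObserveImageTask struct {
	Client       client.Client
	Key          types.NamespacedName
	TargetImage  string
	CurrentImage *string // points at node.Status.CurrentImage
}

// Execute polls the StatefulSet and stamps currentImage once the rollout
// completes. Incomplete rollouts return nil: the task stays Pending and the
// executor re-invokes it on the next reconcile.
func (t *ObserveImageTask) Execute(ctx context.Context) error {
	sts := &appsv1.StatefulSet{}
	if err := t.Client.Get(ctx, t.Key, sts); err != nil {
		if apierrors.IsNotFound(err) {
			return nil // nothing to observe yet
		}
		return err
	}
	// The same rollout gates the deleted observeCurrentImage used.
	if sts.Status.ObservedGeneration < sts.Generation {
		return nil // controller has not seen the latest spec yet
	}
	if sts.Spec.Replicas == nil || sts.Status.UpdatedReplicas < *sts.Spec.Replicas {
		return nil // rolling update still in progress
	}
	// Rollout complete: stamp in-memory; the reconciler's single status
	// patch flushes it at the end of the loop.
	*t.CurrentImage = t.TargetImage
	return nil
}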

internal/controller/node/plan_execution_test.go

Lines changed: 9 additions & 4 deletions
@@ -677,16 +677,18 @@ func TestExecutePlan_AllComplete_TransitionsToTargetPhase(t *testing.T) {
 	g.Expect(node.Status.Plan.Phase).To(Equal(seiv1alpha1.TaskPlanComplete))
 }
 
-func TestExecutePlan_ConvergencePlan_NilsOnCompletion(t *testing.T) {
+func TestExecutePlan_CompletedPlan_StaysForPlannerCleanup(t *testing.T) {
 	g := NewWithT(t)
 	mock := &mockSidecarClient{}
 	node := snapshotNode()
 	node.Status.Phase = seiv1alpha1.PhaseRunning
 	node.Status.Plan = nil
-	// Build a convergence plan for a Running node.
+	// Build a NodeUpdate plan for a Running node with drift.
 	if err := planner.ResolvePlan(node); err != nil {
 		t.Fatal(err)
 	}
+	g.Expect(node.Status.Plan).NotTo(BeNil())
+
 	// Pre-complete all tasks.
 	for i := range node.Status.Plan.Tasks {
 		node.Status.Plan.Tasks[i].Status = seiv1alpha1.TaskComplete
@@ -696,8 +698,11 @@ func TestExecutePlan_ConvergencePlan_NilsOnCompletion(t *testing.T) {
 	_, err := r.PlanExecutor.ExecutePlan(context.Background(), node, node.Status.Plan)
 	g.Expect(err).NotTo(HaveOccurred())
 
-	g.Expect(node.Status.Plan).To(BeNil(), "convergence plan should be nilled after completion")
-	g.Expect(node.Status.Phase).To(Equal(seiv1alpha1.PhaseRunning), "phase should stay Running")
+	// Completed plans stay in status — the planner handles cleanup
+	// (nilling the plan, clearing conditions) on the next reconcile.
+	g.Expect(node.Status.Plan).NotTo(BeNil(), "completed plan should stay for planner to observe")
+	g.Expect(node.Status.Plan.Phase).To(Equal(seiv1alpha1.TaskPlanComplete))
+	g.Expect(node.Status.Phase).To(Equal(seiv1alpha1.PhaseRunning))
 }
 
 func TestExecutePlan_TaskFailure_SetsPlanFailedCondition(t *testing.T) {
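This test now leans on the planner-side cleanup the commit message calls handleTerminalPlan. A sketch against the simplified types from the example after the commit message; the reason strings come from the message, and the helper stub is a placeholder.

// handleTerminalPlan observes Complete/Failed plans: clear the condition
// only if buildNodeUpdatePlan set it, then nil the plan so the next
// ResolvePlan pass can rebuild while drift persists.
func handleTerminalPlan(node *SeiNode) {
	plan := node.Status.Plan
	if plan == nil || (plan.Phase != "Complete" && plan.Phase != "Failed") {
		return
	}
	if nodeHasUpdateCondition(node) { // only clear what was set
		reason := "UpdateComplete"
		if plan.Phase == "Failed" {
			reason = "UpdateFailed"
		}
		setNodeUpdateCondition(node, false, reason)
	}
	node.Status.Plan = nil // the planner owns all plan cleanup
}

// Stub for the sketch; the real check inspects status conditions.
func nodeHasUpdateCondition(node *SeiNode) bool { return false }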

internal/controller/node/reconciler_test.go

Lines changed: 0 additions & 269 deletions
@@ -224,275 +224,6 @@ func TestNodeReconcile_RunningPhase_UpdatesStatefulSetImage(t *testing.T) {
 	g.Expect(seid.Image).To(Equal(testImageV2))
 }
 
-func TestObserveCurrentImage_UpdatesWhenConverged(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default"},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.UpdatedReplicas = 1
-	sts.Status.ObservedGeneration = sts.Generation
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(Equal(testImageV2))
-}
-
-func TestObserveCurrentImage_SkipsWhenStaleGeneration(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default", Generation: 2},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.UpdatedReplicas = 1
-	sts.Status.ObservedGeneration = 1
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(BeEmpty())
-}
-
-func TestObserveCurrentImage_SkipsWhenRolling(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default"},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.CurrentRevision = "rev-1"
-	sts.Status.UpdateRevision = testRevision
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(BeEmpty())
-}
-
-func TestObserveCurrentImage_SkipsWhenReadyReplicasZero(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default"},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.CurrentRevision = testRevision
-	sts.Status.UpdateRevision = testRevision
-	sts.Status.ReadyReplicas = 0
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(BeEmpty())
-}
-
-func TestObserveCurrentImage_SkipsWhenEmptyRevision(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default"},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.CurrentRevision = ""
-	sts.Status.UpdateRevision = testRevision
-	sts.Status.ReadyReplicas = 1
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(BeEmpty())
-}
-
-func TestObserveCurrentImage_NoopWhenAlreadyCurrent(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-	node.Status.CurrentImage = testImageV2
-
-	sts := &appsv1.StatefulSet{
-		ObjectMeta: metav1.ObjectMeta{Name: "mynet-0", Namespace: "default"},
-		Spec: appsv1.StatefulSetSpec{
-			Replicas:    ptrInt32(1),
-			ServiceName: "mynet-0",
-			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"sei.io/node": "mynet-0"}},
-			Template: corev1.PodTemplateSpec{
-				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"sei.io/node": "mynet-0"}},
-				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "seid", Image: testImageV2}}},
-			},
-		},
-	}
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node, sts).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}, &appsv1.StatefulSet{}).
-		Build()
-
-	sts.Status.CurrentRevision = testRevision
-	sts.Status.UpdateRevision = testRevision
-	sts.Status.ReadyReplicas = 1
-	g.Expect(c.Status().Update(ctx, sts)).To(Succeed())
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(Equal(testImageV2))
-}
-
-func TestObserveCurrentImage_StatefulSetNotFound(t *testing.T) {
-	g := NewWithT(t)
-	ctx := context.Background()
-
-	node := newGenesisNode("mynet-0", "default")
-	node.Finalizers = []string{nodeFinalizerName}
-	node.Status.Phase = seiv1alpha1.PhaseRunning
-	node.Spec.Image = testImageV2
-
-	s := newNodeTestScheme(t)
-	c := fake.NewClientBuilder().
-		WithScheme(s).
-		WithObjects(node).
-		WithStatusSubresource(&seiv1alpha1.SeiNode{}).
-		Build()
-
-	r := &SeiNodeReconciler{Client: c, Scheme: s}
-	_, err := r.observeCurrentImage(ctx, node)
-	g.Expect(err).NotTo(HaveOccurred())
-
-	g.Expect(node.Status.CurrentImage).To(BeEmpty())
-}
-
-func ptrInt32(v int32) *int32 { return &v }
-
 func TestNodeDeletion_SnapshotNode_WithoutRetain_DeletesPVC(t *testing.T) {
 	g := NewWithT(t)
 	ctx := context.Background()