Kubelet floods error logs with "[Startup|Readiness|Liveness] probe already exists for container" after upgrading to EKS 1.35 #4769

@isaac88

Description

Summary

After upgrading EKS from Kubernetes 1.34 to 1.35 using Bottlerocket 1.54.0, the kubelet continuously prints error-level log messages about probes already existing for containers. This is a known upstream issue introduced during the migration to contextual logging in Kubernetes 1.35, where Error-level logs were incorrectly wrapped with verbosity methods (.V()). Since go-logr/logr ignores verbosity levels for Error-level logs, these messages are emitted unconditionally, flooding the node logs.

Upstream references:

  - kubernetes/kubernetes#136027

Image I'm using:

Bottlerocket 1.54.0 on EKS, running Kubernetes 1.35.

What I expected to happen:

Kubelet should not continuously print error-level log messages for probe-already-exists conditions. These should either be informational (downgraded to a lower log level) or suppressed by verbosity settings.

What actually happened:

Kubelet floods the journal/logs with error-level messages like:

Feb 25 14:55:33 ip-10-x-x-x.eu-west-3.compute.internal kubelet[1960]: E0225 14:55:33.475528    1960 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/daemonset-abc123" containerName="agent"
Feb 25 14:55:54 ip-10-x-x-x.eu-west-3.compute.internal kubelet[1960]: E0225 14:55:54.473357    1960 prober_manager.go:197] "Startup probe already exists for container" pod="my-namespace/my-app-5dd9846ccd-fwjw4" containerName="main"
Feb 25 14:56:16 ip-10-x-x-x.eu-west-3.compute.internal kubelet[1960]: E0225 14:56:16.473035    1960 prober_manager.go:197] "Startup probe already exists for container" pod="my-namespace/my-other-app-74df4b8d6c-hv4k5" containerName="main"
Feb 25 14:56:20 ip-10-x-x-x.eu-west-3.compute.internal kubelet[1960]: E0225 14:56:20.473113    1960 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/ebs-csi-node-t54vr" containerName="ebs-plugin"
Feb 25 14:56:35 ip-10-x-x-x.eu-west-3.compute.internal kubelet[1960]: E0225 14:56:35.472674    1960 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/daemonset-abc123" containerName="agent"

These messages repeat continuously for every pod that has startup, readiness, or liveness probes configured, making it difficult to spot real errors in the kubelet logs.
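Until a fix lands, the noise can be filtered out when triaging. The sketch below demonstrates the grep pattern against sample lines taken from the logs above; on an actual node you would feed `journalctl -u kubelet` (an assumption for systemd-based hosts such as Bottlerocket, e.g. via the admin container) into the same grep:

```shell
# Drop all three probe-message variants so real errors stand out.
grep -vE '(Startup|Readiness|Liveness) probe already exists for container' <<'EOF'
E0225 14:55:33.475528 1960 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/daemonset-abc123" containerName="agent"
E0225 14:56:20.473113 1960 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/ebs-csi-node-t54vr" containerName="ebs-plugin"
E0225 14:57:02.000000 1960 kubelet.go:1337] "Example of a real error that should remain visible"
EOF
```

Only the last sample line survives the filter.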

How to reproduce the problem:

  1. Have an EKS cluster running Kubernetes 1.34 on Bottlerocket.
  2. Upgrade the cluster to Kubernetes 1.35 using Bottlerocket 1.54.0.
  3. Observe kubelet logs on any node — error messages about probes already existing will appear continuously.

Additional context:

The root cause is in pkg/kubelet/prober/prober_manager.go (lines 197, 209, 221) where logger.V(8).Error(nil, ...) is used. Since go-logr/logr ignores the .V() level on Error() calls, these are always printed at error level regardless of the configured verbosity. The upstream fix (tracked in kubernetes/kubernetes#136027) is to convert these to logger.V(8).Info(...) instead.
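The behavior can be illustrated with a minimal, self-contained model of go-logr's semantics (an illustration written for this issue, not the real library's code): V() accumulates a verbosity level that Info() checks against the configured threshold, while Error() bypasses that check entirely.

```go
package main

import "fmt"

// Simplified model of go-logr's Logger (not the real implementation).
type Logger struct {
	level     int // verbosity accumulated via V()
	threshold int // configured verbosity, e.g. the kubelet's --v flag
}

// V raises the verbosity a message must clear to be emitted.
func (l Logger) V(level int) Logger {
	l.level += level
	return l
}

// Info is suppressed when the accumulated V() level exceeds the threshold.
func (l Logger) Info(msg string) {
	if l.level <= l.threshold {
		fmt.Println("I:", msg)
	}
}

// Error writes unconditionally: logr defines verbosity only for Info-level
// logs, so any V() in the call chain is ignored here.
func (l Logger) Error(err error, msg string) {
	fmt.Println("E:", msg)
}

func main() {
	log := Logger{threshold: 2} // e.g. a kubelet running with --v=2

	// The buggy pattern from prober_manager.go: emitted despite V(8).
	log.V(8).Error(nil, "Startup probe already exists for container")

	// The upstream fix: only emitted when the configured verbosity is >= 8.
	log.V(8).Info("Startup probe already exists for container")
}
```

Running this prints only the `E:` line, which is exactly the flood seen on the nodes; after the fix, nothing is printed at default verbosity.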

Until the upstream fix is included in a Kubernetes release and picked up by Bottlerocket, this issue causes significant log noise on all nodes running Kubernetes 1.35.

Metadata

Labels: status/needs-triage (Pending triage or re-evaluation), type/bug (Something isn't working)
