What:
- add the `service.beta.kubernetes.io/linode-loadbalancer-enable-ipv6-backends` annotation and the `--enable-ipv6-for-nodebalancer-backends` controller flag, including Helm values and docs
- resolve the backend address family per Service, so the per-Service annotation overrides the controller-wide flag default when the two disagree
- program NodeBalancer backends from `node.k8s.linode.com/public-ipv6` when IPv6 backends are enabled, format IPv6 backend addresses correctly, and omit IPv4 subnet IDs from IPv6 backend nodes
- avoid applying VPC backend configuration during IPv6 backend creation while preserving the existing IPv4/VPC path for non-IPv6 services
- fall back to the Service's current status when a fresh LoadBalancer status lookup fails during reconcile
- add unit coverage for backend IP selection, annotation-vs-flag resolution, IPv6 address formatting, and the VPC/subnet behavior around IPv6 backends
- add a dedicated Chainsaw test and CAPL workflow support for validating IPv6 public backends end to end in a dual-stack cluster
- centralize CAPL manifest patching for image overrides, `LINODE_API_VERSION=v4beta`, VPC/subnet mutations, and subnet-specific test setup
- document the feature, its dual-stack requirement, and its current limitation to node public IPv6 backends rather than VPC IPv6 backend addresses
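The annotation-vs-flag resolution described above can be sketched as a small helper. The annotation and flag names come from this change; the function name, signature, and truthiness handling are hypothetical illustrations, not the actual controller code.

```go
package main

import "fmt"

// Annotation key taken from this change; the helper below is an
// illustrative sketch of the resolution order, not the CCM implementation.
const enableIPv6BackendsAnnotation = "service.beta.kubernetes.io/linode-loadbalancer-enable-ipv6-backends"

// useIPv6Backends resolves the backend address family for one Service:
// an explicit Service annotation wins; otherwise the controller-wide
// --enable-ipv6-for-nodebalancer-backends flag supplies the default.
func useIPv6Backends(serviceAnnotations map[string]string, flagDefault bool) bool {
	if v, ok := serviceAnnotations[enableIPv6BackendsAnnotation]; ok {
		return v == "true"
	}
	return flagDefault
}

func main() {
	// The annotation overrides a false global default...
	fmt.Println(useIPv6Backends(map[string]string{enableIPv6BackendsAnnotation: "true"}, false))
	// ...and an absent annotation falls back to the flag.
	fmt.Println(useIPv6Backends(nil, true))
}
```

The same precedence lets an operator enable IPv6 backends fleet-wide via the flag while individual Services opt back out with the annotation.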
Why:
- support NodeBalancer services that need IPv6 backend targets instead of the existing IPv4-only backend programming
- make the backend selection logic explicit so IPv6 public backends do not inherit IPv4/VPC assumptions such as subnet IDs
- validate the feature in CI and local CAPL flows using a dual-stack workload cluster that can actually serve IPv6 NodePort traffic
- keep the rollout understandable for operators by documenting the opt-in controls, migration behavior, and failure mode when nodes do not expose public IPv6 addresses
How:
- factor backend-family handling into helpers for backend-state resolution, subnet lookup, backend node construction, backend address formatting, and node backend IP selection
- read IPv6 backend addresses from the node public IPv6 annotation, return a clear reconcile error when a selected node lacks one, and continue using the existing private/VPC address path when IPv6 backends are disabled
- create a dedicated IPv6 CAPL cluster path and `ipv6-backends` Chainsaw selector, run that test separately from the rest of e2e, and use a dual-stack service definition to exercise the public IPv6 backend datapath
- patch generated CAPL manifests through `hack/patch-capl-manifest.sh` so the CCM image repo/tag, pull policy, beta API env var, VPC template values, and optional subnet override are applied consistently across regular, IPv6, and subnet test flows
- route `clusterctl` usage through repo-managed tooling/devbox configuration and ignore generated local manifest and kubeconfig artifacts from dev workflows
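The backend-IP selection and IPv6 formatting steps above can be sketched roughly as follows. The `node.k8s.linode.com/public-ipv6` annotation name is from this change; the helper name and shape are assumptions made for illustration, and `net.JoinHostPort` stands in for whatever formatting the CCM actually uses (it brackets IPv6 literals, which is the correct `host:port` form for an IPv6 backend address).

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// Annotation key taken from this change; the helper is a hypothetical
// sketch of backend-address construction, not the CCM implementation.
const publicIPv6Annotation = "node.k8s.linode.com/public-ipv6"

// ipv6BackendAddress reads the node's public IPv6 address from its
// annotations and renders a NodeBalancer backend address such as
// "[2600:3c00::1]:30080". A node without a valid annotation yields an
// error so reconcile fails loudly rather than silently falling back to
// the IPv4/VPC path.
func ipv6BackendAddress(nodeAnnotations map[string]string, nodePort int32) (string, error) {
	ip, ok := nodeAnnotations[publicIPv6Annotation]
	if !ok || net.ParseIP(ip) == nil {
		return "", fmt.Errorf("node lacks a valid %s annotation", publicIPv6Annotation)
	}
	return net.JoinHostPort(ip, strconv.Itoa(int(nodePort))), nil
}

func main() {
	addr, err := ipv6BackendAddress(
		map[string]string{publicIPv6Annotation: "2600:3c00::f03c:91ff:fe24:1"}, 30080)
	fmt.Println(addr, err)
}
```

Note this sketch deliberately carries no subnet or VPC information, mirroring the change's rule that IPv6 backend nodes omit IPv4 subnet IDs and skip VPC backend configuration.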