
fix: drop ConfigMap patching and CI tests #99

Open

oleksandr-nc wants to merge 3 commits into main from fix/k8s-coredns-custom-and-ci

Conversation

Contributor

@oleksandr-nc oleksandr-nc commented Apr 14, 2026

EDITED: This started out as a fix for a CoreDNS crashloop that hit us on k3s, but after investigating and trying two approaches, the cleanest answer turned out to be removing the whole cluster-wide DNS patcher: `coredns-custom` is a k3s/RKE convention, not a universal one.

To restore a cluster to the state it will have once this PR is merged, run the following:

for k3s:

```shell
kubectl delete configmap coredns-custom -n kube-system --ignore-not-found

# k3s should recreate the stock coredns ConfigMap from its bundled manifest
kubectl delete configmap coredns -n kube-system
kubectl rollout restart deployment coredns -n kube-system

# verify the patched aliases are gone; should print 0
kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}' | grep -c nextcloud
```

for kind:

```shell
kubectl delete configmap coredns-custom -n kube-system --ignore-not-found

# restore the stock kind Corefile
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30 {
           disable success cluster.local
           disable denial cluster.local
        }
        loop
        reload
        loadbalance
    }
EOF

kubectl rollout restart deployment coredns -n kube-system
kubectl rollout status deployment coredns -n kube-system --timeout=60s
```


  1. CoreDNS host-alias patching was broken on k3s

    `_k8s_ensure_coredns_host_aliases` patches the main `coredns` ConfigMap in `kube-system` to make `HP_K8S_HOST_ALIASES` resolve cluster-wide. The regex `hosts\s*\{[^}]*\}` only matches the argument-less form of the `hosts` plugin, but k3s ships with `hosts /etc/coredns/NodeHosts { ... }`, which the regex misses. HaRP then falls through to the "insert before `forward .`" branch and appends a second `hosts {}` block inside the same server block.

    Fix: stop writing to the operator-owned `coredns` ConfigMap entirely. Write a standalone `.server` file into `kube-system/coredns-custom` instead; that's the ConfigMap k3s's CoreDNS Deployment already mounts at `/etc/coredns/custom/` (`optional: true`) and imports via `import /etc/coredns/custom/*.server`. Each alias becomes its own zone with a small `hosts { ... }` block, so it cannot collide with any plugin in the main server block.

  2. CI: deploy workflows copied from AppAPI
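For reference, the intermediate `coredns-custom` approach described in point 1 could look roughly like the sketch below. The data key, alias name, and IP are all hypothetical; only the ConfigMap name/namespace and the `.server` suffix come from the description above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # one standalone server block per alias, imported via
  # import /etc/coredns/custom/*.server in k3s's stock Corefile
  harp-host-aliases.server: |
    nextcloud.local:53 {
        hosts {
            10.0.1.5 nextcloud.local
            fallthrough
        }
    }
```

Since each alias gets its own zone, nothing is ever inserted into the operator-owned main server block, which is what made this safer than regex-patching the `coredns` ConfigMap. As the edited summary notes, even this was ultimately dropped because `coredns-custom` only exists on k3s/RKE.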
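The regex mismatch from point 1 can be reproduced in isolation. A minimal sketch, assuming GNU grep for `-P` (PCRE) support; the two Corefile snippets are illustrative, not copied from a real cluster:

```shell
# The regex used by the patcher, quoted from the description above
re='hosts\s*\{[^}]*\}'

# Argument-less hosts block: the only form the regex anticipates
plain='hosts { 10.0.0.5 nextcloud.local }'
# What k3s actually ships: the plugin takes /etc/coredns/NodeHosts as an argument
k3s='hosts /etc/coredns/NodeHosts { ttl 60 reload 15s fallthrough }'

printf '%s\n' "$plain" | grep -Pq "$re" && echo "plain Corefile: matched"
printf '%s\n' "$k3s"   | grep -Pq "$re" || echo "k3s Corefile: NOT matched"
```

Because `\s*` allows only whitespace between `hosts` and `{`, the `/etc/coredns/NodeHosts` argument prevents any match, which is what sends HaRP into the "insert before `forward .`" branch.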

…Corefile

Signed-off-by: Oleksander Piskun <oleksandr2088@icloud.com>
@oleksandr-nc oleksandr-nc requested a review from kyteinsky as a code owner April 14, 2026 11:52
…rom PR

Signed-off-by: Oleksander Piskun <oleksandr2088@icloud.com>
@oleksandr-nc oleksandr-nc force-pushed the fix/k8s-coredns-custom-and-ci branch from 5e00e49 to 4552a05 April 14, 2026 12:02
@oleksandr-nc oleksandr-nc marked this pull request as draft April 14, 2026 13:44
…liases

Signed-off-by: Oleksander Piskun <oleksandr2088@icloud.com>
@oleksandr-nc oleksandr-nc changed the title fix: k8s custom coredns and CI tests fix: drop ConfigMap patching and CI tests Apr 14, 2026
@oleksandr-nc oleksandr-nc marked this pull request as ready for review April 14, 2026 14:35
Contributor

@kyteinsky kyteinsky left a comment


works with kind ✔️
docs would need updates too.

