
# CAPI, CAPO and DevStack!

Try out your favourite tools and tech stack locally! 🚀

This repository shows you, step by step, how to run a complete environment locally to test Cluster API (CAPI), the Cluster API Provider OpenStack (CAPO) and DevStack.

## Overview

This drawing shows a brief overview of what we're trying to achieve:

## Step-by-step

  1. Deploy DevStack locally, see this repository for how to do this on top of KVM.

  2. Download the OpenStack RC file via Horizon.

  3. Create a minikube cluster:

     This assumes that you have a KVM network called `devstack_net` available.

     ```shell
     minikube start --driver=kvm2 --kvm-network=devstack_net
     ```
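If the `devstack_net` network doesn't exist yet, it can be created with libvirt first. A minimal sketch of a NAT network definition — the bridge name and subnet below are assumptions, adjust them to match your DevStack host:

```shell
# Hypothetical libvirt network definition for devstack_net;
# bridge name and address range are placeholders.
cat > /tmp/devstack_net.xml <<'EOF'
<network>
  <name>devstack_net</name>
  <forward mode='nat'/>
  <bridge name='virbr-devstack' stp='on' delay='0'/>
  <ip address='192.168.64.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.64.10' end='192.168.64.100'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/devstack_net.xml
virsh net-start devstack_net
virsh net-autostart devstack_net
```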
  4. Download clusterctl, changing the destination directory if needed:

     ```shell
     curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.9.5/clusterctl-linux-amd64 -o ~/.local/bin/clusterctl
     chmod +x ~/.local/bin/clusterctl
     ```

  5. Install CAPO in the management cluster (minikube):

     ```shell
     export CLUSTER_TOPOLOGY=true
     kubectl apply -f https://github.com/k-orc/openstack-resource-controller/releases/latest/download/install.yaml
     clusterctl init --infrastructure openstack
     ```
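To sanity-check the init step, you can wait for the controllers to become ready before continuing; the namespaces and deployment names below are the upstream defaults:

```shell
# Both rollouts should complete once the providers are up.
kubectl -n capi-system rollout status deployment/capi-controller-manager
kubectl -n capo-system rollout status deployment/capo-controller-manager
```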


  6. Build an image using image-builder; here the qemu builder is used:

     ```shell
     git clone https://github.com/kubernetes-sigs/image-builder.git
     cd image-builder/images/capi/
     make build-qemu-ubuntu-2204
     ```

  7. Upload the built image to OpenStack if you built it with anything other than the OpenStack builder:

     ```shell
     openstack image create "ubuntu-2204-kube-v1.31.6" \
       --progress \
       --disk-format qcow2 \
       --property os_type=linux \
       --property os_distro=ubuntu2204 \
       --file output/ubuntu-2204-kube-v1.31.6/ubuntu-2204-kube-v1.31.6
     ```

  8. Create an SSH keypair:

     ```shell
     openstack keypair create --type ssh k8s-devstack01
     ```
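The openstack CLI prints the generated private key to stdout, so one way to store it safely is to redirect it to a file straight away:

```shell
# Capture the private key and restrict its permissions.
openstack keypair create --type ssh k8s-devstack01 > k8s-devstack01.pem
chmod 600 k8s-devstack01.pem
```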

Take note of the private SSH key and store it somewhere safe.

  9. Install the needed CAPO prerequisites and generate the cluster manifests:

     Make sure you've prepared your clouds.yaml accordingly; here's an example:

```yaml
clouds:
  openstack:
    auth:
      auth_url: http://<DevStack IP>:5000/v3
      username: "demo"
      password: "secret"
      project_name: "admin"
      project_id: "<ID>"
      user_domain_name: "Default"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
```
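With the clouds.yaml in place (in the current directory or under `~/.config/openstack/`), you can verify that the credentials actually work before going further:

```shell
# Issuing a token confirms the auth section is correct.
openstack --os-cloud openstack token issue
```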

Use the env.rc utility script to export a common set of environment variables to be used with `clusterctl generate cluster` later on.

```shell
wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc clouds.yaml openstack
```

Export the additional environment variables that we'll need to define the workload cluster:

```shell
export KUBERNETES_VERSION=v1.31.6
export OPENSTACK_DNS_NAMESERVERS=1.1.1.1
export OPENSTACK_FAILURE_DOMAIN=nova
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_IMAGE_NAME=ubuntu-2204-kube-v1.31.6
export OPENSTACK_SSH_KEY_NAME=k8s-devstack01
export CLUSTER_NAME=k8s-devstack01
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=0
export OPENSTACK_EXTERNAL_NETWORK_ID=<ID>
```

Please note that you'll need to fetch the public network ID and put it in the `OPENSTACK_EXTERNAL_NETWORK_ID` environment variable. Also, the flavor needs at least 2 cores, otherwise kubeadm's preflight checks will fail; that check can be skipped from a kubeadm perspective, but doing so isn't covered here.
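The public network ID can be fetched with the openstack CLI; in a default DevStack the external network is usually named `public`, but verify that against the list output first:

```shell
# List the external networks, then export the ID of the right one.
openstack network list --external
export OPENSTACK_EXTERNAL_NETWORK_ID=$(openstack network show public -f value -c id)
```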

  10. Generate the cluster manifests and apply them in the minikube cluster:

      ```shell
      clusterctl generate cluster k8s-devstack01 --infrastructure openstack > k8s-devstack01.yaml
      kubectl apply -f k8s-devstack01.yaml
      ```
  11. Check the status of the cluster using clusterctl, and also check the logs of, primarily, the capo-controller:

      ```shell
      clusterctl describe cluster k8s-devstack01
      ```

      ```text
      NAME                                                               READY  SEVERITY  REASON  SINCE  MESSAGE
      Cluster/k8s-devstack01                                             True                     14m
      ├─ClusterInfrastructure - OpenStackCluster/k8s-devstack01
      └─ControlPlane - KubeadmControlPlane/k8s-devstack01-control-plane  True                     14m
        └─Machine/k8s-devstack01-control-plane-zkjdn                     True                     15m
      ```
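If the cluster doesn't converge, the CAPO controller logs are the first place to look; for example:

```shell
# Tail the provider logs and list all CAPI machines across namespaces.
kubectl -n capo-system logs deployment/capo-controller-manager --tail=100
kubectl get machines -A
```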
  12. Download the cluster kubeconfig and test connectivity:

      ```shell
      clusterctl get kubeconfig k8s-devstack01 > k8s-devstack01.kubeconfig
      export KUBECONFIG=k8s-devstack01.kubeconfig
      ```

You should now be able to reach the cluster running within the DevStack environment! 🎉
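A quick connectivity check against the workload cluster:

```shell
# Lists the control-plane node; it will report NotReady
# until a CNI is installed in the next step.
kubectl get nodes -o wide
```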

  13. Install a CNI (Cilium), manually for now:

      ```shell
      helm repo add cilium https://helm.cilium.io/
      helm upgrade --install cilium cilium/cilium --version 1.17.1 \
        --namespace kube-system \
        --set hubble.enabled=false \
        --set envoy.enabled=false \
        --set operator.replicas=1
      ```
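You can verify the Cilium rollout before moving on:

```shell
# The cilium DaemonSet and its pods should become ready.
kubectl -n kube-system rollout status daemonset/cilium
kubectl -n kube-system get pods -l k8s-app=cilium
```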
  14. Install the OpenStack Cloud Provider:

      ```shell
      git clone --depth=1 https://github.com/kubernetes-sigs/cluster-api-provider-openstack.git
      ```

Generate the external cloud provider configuration with the provided helper script:

```shell
./templates/create_cloud_conf.sh ~/Downloads/clouds.yaml openstack > /tmp/cloud.conf
```

Note that if you want support for creating Services of `type: LoadBalancer`, you'll need to configure this in cloud.conf and re-create the secret.
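As a sketch, load balancer support goes into a `[LoadBalancer]` section of cloud.conf; this assumes a DevStack with the Octavia service enabled, and the IDs below are placeholders you'd need to fill in:

```ini
[LoadBalancer]
; Requires Octavia in DevStack. floating-network-id is the public
; network, subnet-id the subnet the cluster nodes are attached to.
floating-network-id = <public network ID>
subnet-id = <cluster subnet ID>
```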

Create the needed secret:

```shell
kubectl create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
```

Create the needed Kubernetes resources for the OpenStack cloud provider:

```shell
helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
helm repo update
helm upgrade --install \
  openstack-ccm cpo/openstack-cloud-controller-manager \
  --namespace kube-system \
  --values occm-values.yaml
```
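The helm command above references an occm-values.yaml that isn't shown here. A minimal sketch of what it could contain — the key names are assumptions based on the upstream chart, so check them against the chart's own values file; the secret name matches the one created earlier, and the toleration covers the taint kubelet sets on uninitialized nodes:

```yaml
# Hypothetical occm-values.yaml for the
# openstack-cloud-controller-manager chart.
secret:
  name: cloud-config
tolerations:
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```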

If everything went as expected, pending Pods should now be scheduled and all Pods should have IP addresses assigned to them.

  15. Done! 🚀