This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Jetlag is an OpenShift cluster deployment tool that uses Ansible automation to deploy Multi Node OpenShift (MNO) and Single Node OpenShift (SNO) clusters via the Assisted Installer. It supports Red Hat Performance Lab, Scale Lab, and IBMcloud environments.
```bash
# Bootstrap ansible virtual environment (run from repo root)
source bootstrap.sh
```

```bash
# Red Hat Labs (Scale Lab/Performance Lab)

# Copy and edit configuration file
cp ansible/vars/all.sample.yml ansible/vars/all.yml
# Edit all.yml with your lab configuration (lab, lab_cloud, cluster_type, etc.)

# Create inventory file for your lab environment
ansible-playbook ansible/create-inventory.yml

# Setup bastion host (replace cloud99.local with your inventory file)
ansible-playbook -i ansible/inventory/cloud99.local ansible/setup-bastion.yml

# IBMcloud

# Copy and edit configuration file
cp ansible/vars/ibmcloud.sample.yml ansible/vars/ibmcloud.yml
# Edit ibmcloud.yml with your IBMcloud configuration (cluster_type, worker_node_count, etc.)

# Create inventory file from IBMcloud CLI data
ansible-playbook ansible/ibmcloud-create-inventory.yml

# Setup bastion host for IBMcloud
ansible-playbook -i ansible/inventory/ibmcloud.local ansible/ibmcloud-setup-bastion.yml
```

```bash
# Red Hat Labs (Scale Lab/Performance Lab)

# Deploy Multi Node OpenShift cluster
ansible-playbook -i ansible/inventory/cloud99.local ansible/mno-deploy.yml

# Deploy Single Node OpenShift clusters
ansible-playbook -i ansible/inventory/cloud99.local ansible/sno-deploy.yml

# Deploy Virtual Multi Node OpenShift (VMNO) - requires hypervisor setup first
ansible-playbook -i ansible/inventory/cloud99.local ansible/hv-setup.yml
ansible-playbook -i ansible/inventory/cloud99.local ansible/hv-vm-create.yml
ansible-playbook -i ansible/inventory/cloud99.local ansible/mno-deploy.yml

# IBMcloud

# Deploy Multi Node OpenShift on IBMcloud
ansible-playbook -i ansible/inventory/ibmcloud.local ansible/ibmcloud-mno-deploy.yml

# Deploy Single Node OpenShift on IBMcloud
ansible-playbook -i ansible/inventory/ibmcloud.local ansible/ibmcloud-sno-deploy.yml
```

```bash
# Scale out MNO cluster
ansible-playbook ansible/ocp-scale-out.yml

# Setup hypervisor nodes for VMs
ansible-playbook ansible/hv-setup.yml

# Create VMs on hypervisor nodes
ansible-playbook ansible/hv-vm-create.yml

# Delete VMs from hypervisor nodes
ansible-playbook ansible/hv-vm-delete.yml

# Replace VMs on hypervisor nodes (delete + recreate)
ansible-playbook ansible/hv-vm-replace.yml

# Sync OpenShift releases
ansible-playbook ansible/sync-ocp-release.yml
```

Key configuration files:

- `ansible/vars/all.yml` - Main configuration for Red Hat labs (copy from `ansible/vars/all.sample.yml`)
- `ansible/vars/ibmcloud.yml` - IBMcloud-specific configuration (copy from `ansible/vars/ibmcloud.sample.yml`)
- `pull-secret.txt` - OpenShift pull secret (place in repo root)
- `ansible/inventory/$CLOUDNAME.local` - Generated inventory file for your lab
Key configuration variables:

- `lab`: Environment type (`performancelab`, `scalelab`, or `ibmcloud`)
- `lab_cloud`: Specific cloud allocation (e.g., `cloud42`)
- `cluster_type`: Either `mno`, `sno`, or `vmno`
- `worker_node_count`: Number of bare metal worker nodes for MNO clusters
- `hybrid_worker_count`: Number of virtual worker nodes for hybrid MNO clusters (requires hypervisor setup)
- `ocp_build`: OpenShift build type (`ga`, `dev`, or `ci`)
- `ocp_version`: OpenShift version (e.g., `latest-4.20`)
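The variables above can be combined into a minimal `ansible/vars/all.yml` sketch for a bare metal MNO deployment. The values shown are illustrative placeholders for a particular allocation, not defaults:

```yaml
# Sketch of ansible/vars/all.yml for a bare metal MNO deployment.
# All values below are placeholders; set them for your allocation.
lab: scalelab            # performancelab, scalelab, or ibmcloud
lab_cloud: cloud42       # your specific cloud allocation
cluster_type: mno        # mno, sno, or vmno
worker_node_count: 2     # bare metal worker nodes for MNO
ocp_build: ga            # ga, dev, or ci
ocp_version: latest-4.20
```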
Jetlag uses a modular Ansible role architecture:

- Bastion roles: `bastion-*` roles configure the bastion host with services like Assisted Installer, DNS, and registry
- Installation roles: `install-cluster` and `sno-post-cluster-install` handle cluster deployment
- Hypervisor roles: `hv-*` roles manage VM infrastructure on hypervisor nodes
- Utility roles: `boot-iso` and `sync-*` roles provide supporting functionality
- MNO (Multi Node OpenShift): 3 control-plane nodes + configurable bare metal worker nodes
- SNO (Single Node OpenShift): Single node clusters, one per available machine
- VMNO (Virtual Multi Node OpenShift): MNO cluster using VMs instead of bare metal (Jetlag-specific term)
- Hybrid MNO: MNO cluster with both bare metal and virtual worker nodes
- VMNO clusters allow multi-node deployment with fewer physical machines (minimum: 1 bastion + 1-2 hypervisors)
- Hybrid clusters combine bare metal workers (`worker_node_count`) with virtual workers (`hybrid_worker_count`)
- Hypervisor nodes: Unused machines become VM hosts for additional clusters or hybrid workers
- Virtual workers are created as VMs on hypervisor nodes and added to the worker inventory section
- VM placement is distributed across available hypervisors based on hardware-specific VM count configurations
- Performance Lab: Dell r750, 740xd hardware
- Scale Lab: Various Dell models (r750, r660, r650, r640, r630, fc640), Supermicro systems
- IBMcloud: Supermicro E5-2620, Lenovo SR630 bare metal
- Edit `ansible/vars/all.yml` with your lab configuration
- Run `ansible-playbook ansible/create-inventory.yml` to generate inventory
- Run `ansible-playbook -i ansible/inventory/cloud99.local ansible/setup-bastion.yml` to configure the bastion host
- Run the deployment playbook (`ansible/mno-deploy.yml` or `ansible/sno-deploy.yml`)
- Access clusters using the kubeconfig files in `/root/mno/` or `/root/sno/`
- Edit `ansible/vars/ibmcloud.yml` with your IBMcloud configuration
- Run `ansible-playbook ansible/ibmcloud-create-inventory.yml` to generate `ansible/inventory/ibmcloud.local` from IBMcloud CLI data
- Run `ansible-playbook -i ansible/inventory/ibmcloud.local ansible/ibmcloud-setup-bastion.yml` to configure the bastion host
- Run the deployment playbook (`ansible/ibmcloud-mno-deploy.yml` or `ansible/ibmcloud-sno-deploy.yml`)
- Access clusters using the kubeconfig files in `/root/mno/` or `/root/sno/`
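For IBMcloud, a minimal `ansible/vars/ibmcloud.yml` sketch looks like the following. The variable names come from the setup notes above; the values are placeholders:

```yaml
# Sketch of ansible/vars/ibmcloud.yml (placeholder values).
cluster_type: mno        # mno or sno
worker_node_count: 2     # bare metal worker node count
```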
- Edit `ansible/vars/all.yml` with `cluster_type: vmno` and VM-specific settings
- Edit `ansible/vars/hv.yml` for hypervisor configuration
- Run `ansible-playbook ansible/create-inventory.yml` to generate inventory with VM entries
- Run `ansible-playbook -i ansible/inventory/cloud99.local ansible/setup-bastion.yml` to configure the bastion host
- Run `ansible-playbook -i ansible/inventory/cloud99.local ansible/hv-setup.yml` to configure hypervisor nodes
- Run `ansible-playbook -i ansible/inventory/cloud99.local ansible/hv-vm-create.yml` to create VMs
- Run `ansible-playbook -i ansible/inventory/cloud99.local ansible/mno-deploy.yml` to deploy the cluster to VMs
- Access the cluster using the kubeconfig in `/root/vmno/`
- Configure both `worker_node_count` (bare metal) and `hybrid_worker_count` (VMs) in `ansible/vars/all.yml`
- Ensure hypervisor nodes are available in the allocation
- Follow the standard Red Hat Labs MNO workflow - hybrid workers are automatically added to the inventory
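The hybrid worker configuration above can be sketched in `ansible/vars/all.yml` as follows (counts are placeholders chosen for illustration):

```yaml
# Hybrid MNO sketch in ansible/vars/all.yml (placeholder counts).
cluster_type: mno
worker_node_count: 2     # bare metal worker nodes
hybrid_worker_count: 3   # virtual workers created on hypervisor nodes
```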
- Inventory files are generated, not manually created (except for "Bring Your Own Lab" scenarios)
- The bastion machine is always the first machine in the allocation and hosts the Assisted Installer
- Unused machines in MNO deployments become hypervisor nodes
- SNO deployments create one cluster per available machine after the bastion
- Public VLAN support is available for routable environments (`public_vlan: true`)
- Disconnected/air-gapped deployments are supported with registry mirroring
- Hardware Requirements: VMNO requires additional CPU/memory capacity for VM overhead
- VM Management: Use `hv-vm-delete.yml` or `hv-vm-replace.yml` between VMNO deployments to avoid conflicts
- Resource Planning: Configure `hw_vm_counts` per hardware type to optimize VM distribution across hypervisors
- Disk Configuration: VMs can span multiple disks on hypervisors (e.g., default disk plus NVMe for higher VM counts)
- Network Configuration: VMs use libvirt networking with static IP assignment from the controlplane network range
- Scale Lab/Performance Lab Only: VMNO and hybrid deployments are only supported in Scale Lab and Performance Lab environments
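`hw_vm_counts` maps hardware models to per-disk VM counts. The snippet below is only a hypothetical illustration of that idea - the model name, disk keys, and counts are invented, so check `ansible/vars/all.sample.yml` for the real schema:

```yaml
# Hypothetical illustration of hardware-specific VM counts.
# Keys and values are invented; consult the sample vars for
# the actual schema and supported hardware models.
hw_vm_counts:
  r650:
    default: 8     # VMs placed on the default disk
    nvme: 16       # higher count when an extra NVMe disk is available
```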
When encountering issues with Jetlag deployments, consult these documentation resources:

- `docs/troubleshooting.md`: Comprehensive troubleshooting guide covering:
  - Common deployment issues and solutions
  - Hardware-specific problems (Dell, Supermicro)
  - Bastion configuration and recovery procedures
  - BMC/iDRAC reset procedures
  - Virtual media and discovery issues
- `docs/tips-and-vars.md`: Advanced configuration guidance including:
  - Network interface configuration and overrides
  - Install disk configuration options
  - OCP version management
  - NVMe disk configuration for install and etcd
  - Post-deployment tasks and optimizations
  - Bastion registry management
- Network Configuration: Verify `bastion_lab_interface` and `bastion_controlplane_interface` match your hardware
- BMC Access: Ensure BMC credentials and network connectivity are correct
- DNS Services: Check that bastion DNS services are running and configured correctly
- Disk Selection: Verify install disk paths and available storage
- Resource Limits: Ensure sufficient CPU/memory for VM deployments (VMNO/hybrid)
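The two bastion interface variables are a frequent source of failures; a sketch of setting them in `ansible/vars/all.yml` follows. The interface names here are placeholders and must be replaced with the actual NIC names on your bastion machine:

```yaml
# Interface names below are placeholders; set them to the real
# NICs on the bastion (e.g., from `ip link` output).
bastion_lab_interface: eno1             # NIC on the lab network
bastion_controlplane_interface: ens1f0  # NIC on the controlplane network
```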
- Before troubleshooting deployment failures, read the relevant sections in `docs/troubleshooting.md`
- For advanced configuration needs, reference the specific sections in `docs/tips-and-vars.md`
- When working with specific hardware vendors, check the hardware-specific troubleshooting sections