# Home
Welcome to the KubeVirtBMC wiki!
You need to have an accessible Kubernetes cluster with KubeVirt installed on it. Here, I'll walk you through the setup using a single-node cluster.
- Install Ubuntu 22.04 on a physical server or virtual machine
- Install Kubernetes on the newly provisioned machine (I use RKE2 as my distribution of choice)

  ```
  # Install RKE2
  curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.28.11+rke2r1 sh -
  # Start the RKE2 server service
  sudo systemctl enable rke2-server.service --now
  ```
- Prepare your environment for quick access. By default, the `kubectl` binary is at `/var/lib/rancher/rke2/bin/kubectl` and the kubeconfig file is at `/etc/rancher/rke2/rke2.yaml`.

  ```
  mkdir -p ~/.kube
  sudo cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
  sudo chown $(id -u):$(id -g) ~/.kube/config
  export PATH=$PATH:/var/lib/rancher/rke2/bin
  ```
- Check if the machine is ready to run VMs

  ```
  sudo apt update
  sudo apt install libvirt-clients
  # Make sure there's no "FAIL" in the command output
  sudo virt-host-validate qemu
  ```
- Install KubeVirt (here I use v1.1.0 as an example); a quick way to wait for the rollout to finish is shown right after this list

  ```
  # Deploy the KubeVirt operator
  kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v1.1.0/kubevirt-operator.yaml
  # Instruct the operator to start deploying the KubeVirt components by creating the KubeVirt custom resource
  kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v1.1.0/kubevirt-cr.yaml
  ```
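The operator can take a few minutes to roll everything out. Here is a minimal way to wait for it, assuming a recent KubeVirt release where the `kubevirt` custom resource exposes the `Available` condition (the `kv` short name is standard, but adjust if your version differs):

```
# Block until KubeVirt reports Available, i.e. all components are deployed
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
# The phase should read "Deployed" afterwards
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'
```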
While we're still actively working on a Helm chart for easy installation (progress), the only way to install KubeVirtBMC at the moment is to clone the project source code and use `make` to create all the Kubernetes resources. So you'll have to make sure the `make` and `go` commands are at your disposal.
```
# Install make
sudo apt install make

# Install Go
wget https://go.dev/dl/go1.22.5.linux-amd64.tar.gz -O go.tar.gz
sudo tar -xzvf go.tar.gz -C /usr/local
export PATH=$HOME/go/bin:/usr/local/go/bin:$PATH
```
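A quick sanity check that both tools are on your `PATH` before continuing (the Go version simply reflects the tarball downloaded above):

```
make --version
go version   # should print something like go1.22.5 linux/amd64
```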
Until we find a more convenient way to provision the certificates for the webhook service, deploying cert-manager to the cluster is still required:
```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.yaml
```
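The KubeVirtBMC webhook can't get its certificates until cert-manager itself is up, so it's worth waiting for the rollout first. The deployment names below are the ones shipped in the static cert-manager manifest; adjust them if yours differ:

```
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-cainjector
kubectl -n cert-manager rollout status deploy/cert-manager-webhook
```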
At last, we can install KubeVirtBMC (v0.3.0 is the latest version at the time of writing):
```
git clone https://github.com/starbops/kubevirtbmc.git
cd kubevirtbmc/
git checkout tags/v0.3.0 -b v0.3.0
make deploy IMG=starbops/virtbmc-controller:v0.3.0
```
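Once `make deploy` finishes, the controller should come up in the `kubevirtbmc-system` namespace. The deployment name below is an assumption based on the default kubebuilder layout; `kubectl -n kubevirtbmc-system get deploy` will show the actual name:

```
kubectl -n kubevirtbmc-system get pods
# Wait on the controller deployment directly (name may differ in your build)
kubectl -n kubevirtbmc-system rollout status deploy/kubevirtbmc-controller-manager
```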
Until we fix the issue, KubeVirtBMC only supports VMs defined with `.spec.runStrategy`. If it weren't for that, creating a test virtual machine with the following command would be much faster:
```
kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
```
Forget about the above command; let's do it the other way:
```
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  runStrategy: Halted
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
EOF
```
Check if the VM is created successfully and remains in the stopped state:
```
$ kubectl get vms
NAME     AGE   STATUS    READY
testvm   2m    Stopped   False
```
There should already be a VirtualMachineBMC custom resource created by `virtbmc-controller`.
```
$ kubectl -n kubevirtbmc-system get virtualmachinebmcs
NAME             AGE
default-testvm   2m
```
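Each VirtualMachineBMC is backed by a Service in the same namespace; the one named `default-testvm-virtbmc` is what the IPMI hostname used below resolves to. Listing the Services is an easy way to double-check the endpoint exists:

```
kubectl -n kubevirtbmc-system get svc
```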
You should be able to power on the VM with the `ipmitool power on` command, using an ephemeral Pod:
```
$ kubectl run -it --rm ipmitool --image=mikeynap/ipmitool --command -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ipmitool -I lan -U admin -P password -H default-testvm-virtbmc.kubevirtbmc-system.svc.cluster.local power status
Chassis Power is off
/ # ipmitool -I lan -U admin -P password -H default-testvm-virtbmc.kubevirtbmc-system.svc.cluster.local power on
Chassis Power Control: Up/On
/ # ipmitool -I lan -U admin -P password -H default-testvm-virtbmc.kubevirtbmc-system.svc.cluster.local power status
Chassis Power is on
/ # exit
pod "ipmitool" deleted
```
Until the feature is fully delivered, the only way to access the IPMI endpoint of each VM is through the cluster's service network (or the pod network, if you'd like).
Check if the VM has actually started running:
```
$ kubectl get vms
NAME     AGE     STATUS    READY
testvm   2m39s   Running   True

$ kubectl get vmis
NAME     AGE     PHASE     IP           NODENAME   READY
testvm   2m39s   Running   10.42.0.38   kv         True
```
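When you're done playing around, you can power the VM back off the same way. As a sketch, the interactive shell isn't strictly necessary; the same image can run a single command as a one-shot Pod (same service DNS name and default credentials as above):

```
kubectl run ipmitool --rm -i --restart=Never --image=mikeynap/ipmitool --command -- \
  ipmitool -I lan -U admin -P password \
  -H default-testvm-virtbmc.kubevirtbmc-system.svc.cluster.local power off
```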