
Commit

KB: add guide to how to be compatible with the new IP pool feature (#40)
Signed-off-by: yaocw2020 <yaocanwu@gmail.com>
Co-authored-by: Guangbo <guangbo.chen@suse.com>
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>
3 people authored Nov 21, 2023
1 parent f11af33 commit 709268c
Showing 8 changed files with 255 additions and 0 deletions.
Binary file added kb/2023-08-21/after-upgrade.png
Binary file added kb/2023-08-21/before-upgrade.png
91 changes: 91 additions & 0 deletions kb/2023-08-21/compatible_with_ip_pool_new_feature.md
@@ -0,0 +1,91 @@
---
title: Upgrade Guest Kubernetes Clusters to be Compatible with Harvester IP Pools
description: Explain how to keep load balancer IPs unchanged when upgrading guest clusters
slug: upgrading_guest_clusters_with_harvester_ip_pool_compatibility
authors:
- name: Canwu Yao
title: Software Engineer
url: https://github.com/yaocw2020
image_url: https://avatars.githubusercontent.com/u/7421463?s=400&v=4
tags: [harvester, load balancer, cloud provider, ip pool, upgrade]
hide_table_of_contents: false
---

With the release of **Harvester v1.2.0**, the new Harvester cloud provider version **0.2.2** is integrated into RKE2 **v1.24.15+rke2r1**, **v1.25.11+rke2r1**, **v1.26.6+rke2r1**, **v1.27.3+rke2r1**, and newer versions.

With Harvester v1.2.0, the new Harvester cloud provider offers enhanced load balancing capabilities for guest Kubernetes services. Specifically, it introduces the Harvester IP Pool feature, a built-in IP address management (IPAM) solution for the Harvester load balancer. It allows you to define an IP pool specific to a particular guest cluster by specifying the guest cluster name. For example, you can create an IP pool exclusively for the guest cluster named cluster2:

![image](ippoolforcluster2.png)
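
Such a pool is stored as an `IPPool` object on the Harvester cluster, so you can also inspect it from the command line. The commands below are only a sketch: the kubeconfig path and pool name are placeholders, and `pool` is the short resource name that the `keepip.sh` script further down uses as well.

```sh
# List the IP pools on the Harvester cluster and inspect the one scoped to cluster2;
# its spec records the scope the pool is reserved for.
kubectl --kubeconfig harvester.yaml get pool
kubectl --kubeconfig harvester.yaml get pool pool-for-cluster2 -o yaml
```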

However, after upgrading, the feature is not automatically compatible with existing guest Kubernetes clusters because those clusters do not pass the correct cluster name to the Harvester cloud provider. Refer to [issue 4232](https://github.com/harvester/harvester/issues/4232) for more details. As a workaround, you can manually upgrade the Harvester cloud provider with Helm and provide the correct cluster name, but doing so changes the load balancer IPs.

This article outlines a workaround that allows you to leverage the new IP pool feature while keeping the load balancer IPs unchanged.

## Prerequisites

- Download the Harvester kubeconfig file from the Harvester UI. If you have imported Harvester into Rancher, do not use the kubeconfig file from the Rancher UI. Refer to [Access Harvester Cluster](https://docs.harvesterhci.io/v1.1/faq#how-can-i-access-the-kubeconfig-file-of-the-harvester-cluster) to get the desired one.

- Download the kubeconfig file for the guest Kubernetes cluster you plan to upgrade. Refer to [Accessing Clusters with kubectl from Your Workstation](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig#accessing-clusters-with-kubectl-from-your-workstation) for instructions on how to download the kubeconfig file.
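
Before running the script, it is worth confirming that both kubeconfig files work from your workstation. The file names below are placeholders for the files you downloaded:

```sh
kubectl --kubeconfig harvester.yaml get nodes
kubectl --kubeconfig rke2-cluster2.yaml get nodes
```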

## Steps to Keep Load Balancer IP

1. Execute the following script before upgrading.
```sh
curl -sfL https://raw.githubusercontent.com/harvester/harvesterhci.io/main/kb/2023-08-21/keepip.sh | sh -s before_upgrade <Harvester-kubeconfig-path> <guest-cluster-kubeconfig-path> <guest-cluster-name> <guest-cluster-nodes-namespace>
```

- `<Harvester-kubeconfig-path>`: Path to the Harvester kubeconfig file.
- `<guest-cluster-kubeconfig-path>`: Path to the kubeconfig file of your guest Kubernetes cluster.
- `<guest-cluster-name>`: Name of your guest cluster.
- `<guest-cluster-nodes-namespace>`: Namespace where the VMs of the guest cluster are located.
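
For example, a filled-in invocation might look like the following; all paths and names are illustrative and must be replaced with your own values:

```sh
curl -sfL https://raw.githubusercontent.com/harvester/harvesterhci.io/main/kb/2023-08-21/keepip.sh | \
  sh -s before_upgrade ./harvester.yaml ./rke2-cluster2.yaml cluster2 default
```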

The script copies the DHCP lease information to the service annotations and rewrites the IP pool's allocated history so that the load balancer IPs remain unchanged.

![image](before-upgrade.png)

After the script runs, each load balancer service in DHCP mode is annotated with its DHCP lease information. For example:

``` yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kube-vip.io/hwaddr: 00:00:6c:4f:18:68
    kube-vip.io/requestedIP: 172.19.105.215
  name: lb0
  namespace: default
```
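
To confirm the annotations were applied, you can inspect the service in the guest cluster. The service name, namespace, and kubeconfig path below come from the examples in this article and are placeholders:

```sh
kubectl --kubeconfig rke2-cluster2.yaml get svc lb0 -n default -o yaml | grep kube-vip.io
```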
For a load balancer service in pool mode, the IP pool's allocated history is updated to reference the new load balancer name. For example:
``` yaml
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: default
spec:
  ...
status:
  allocatedHistory:
    192.168.100.2: default/cluster-name-default-lb1-ddc13071 # replaced with the new load balancer name
```
2. Add a network selector for the pool.
For example, the following cluster is on the VM network `default/mgmt-untagged`, so the network selector should be set to `default/mgmt-untagged`.

![image](network.png)

![image](network-selector.png)
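
If you want to confirm the change from the command line, the selector should now appear in the pool's spec. This assumes the network selector is stored under `spec.selector`, which is how I read the IPPool CRD; the pool name and kubeconfig path are placeholders:

```sh
kubectl --kubeconfig harvester.yaml get pool default -o yaml | grep -A 2 selector
```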

3. Upgrade the RKE2 cluster in the Rancher UI and select the new version.

![image](upgrade.png)

4. Execute the script after upgrading.
```sh
curl -sfL https://raw.githubusercontent.com/harvester/harvesterhci.io/main/kb/2023-08-21/keepip.sh | sh -s after_upgrade <Harvester-kubeconfig-path> <guest-cluster-kubeconfig-path> <guest-cluster-name> <guest-cluster-nodes-namespace>
```
![image](after-upgrade.png)
In this step, the script upgrades the Harvester cloud provider so that it uses the correct cluster name. Once the upgraded cloud provider is running, it recreates the Harvester load balancers with the IPs unchanged.
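
As a final check, you can confirm that the cloud provider rolled out and that the services kept their external IPs; the kubeconfig path is a placeholder:

```sh
# Wait for the upgraded cloud provider to become ready in the guest cluster.
kubectl --kubeconfig rke2-cluster2.yaml -n kube-system rollout status deploy/harvester-cloud-provider

# The EXTERNAL-IP column should show the same addresses as before the upgrade.
kubectl --kubeconfig rke2-cluster2.yaml get svc -A | grep LoadBalancer
```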
Binary file added kb/2023-08-21/ippoolforcluster2.png
164 changes: 164 additions & 0 deletions kb/2023-08-21/keepip.sh
@@ -0,0 +1,164 @@
#!/bin/bash

set -e

default_cluster_name="kubernetes"

# Reproduce the load balancer name generated by the Harvester cloud provider:
# <clusterName>-<serviceNamespace>-<serviceName>-<checksum>, capped at 63 characters.
load_balancer_name() {
    clusterName="$1"
    serviceNamespace="$2"
    serviceName="$3"
    serviceUID="$4"

    maxNameLength=63
    lenOfSuffix=8

    base="${clusterName}-${serviceNamespace}-${serviceName}-"
    checksum=$(echo -n "$base$serviceUID" | gzip -1 -c | tail -c8 | hexdump -n4 -e '"%08x"')

    # The name must start with a letter
    if ! echo "$base" | grep -q "^[a-zA-Z]"; then
        base="a$base"
    fi

    if [[ ${#base} -gt $((maxNameLength - lenOfSuffix)) ]]; then
        base="${base:0:$((maxNameLength - lenOfSuffix))}"
    fi

    echo "${base}${checksum}"
}

# Merge the two kubeconfig files so kubectl can switch between the Harvester cluster
# ("local" context) and the downstream cluster by context name
merge_yaml() {
    harvesterKubeconfig="$1"
    rke2Kubeconfig="$2"
    yq eval-all '. as $item ireduce ({}; . *+ $item)' "$harvesterKubeconfig" "$rke2Kubeconfig" > merged.yaml
    export KUBECONFIG=merged.yaml
}

switch_to_downstream_cluster() {
    kubectl config use-context "${cluster_name}"
}

switch_to_harvester() {
    kubectl config use-context local
}

# Upgrade the cloud provider chart in place (defined for reference; not invoked by this script)
upgrade_harvester_cloud_provider() {
    version=$(helm list -n kube-system -o json | jq -r '.[] | select(.name == "harvester-cloud-provider") | .chart')
    helm upgrade harvester-cloud-provider https://github.com/harvester/charts/releases/download/$version/$version.tgz -n kube-system
}

# Update the annotations and delete the load balancer for a DHCP-mode service
update_dhcp_service() {
    local name=$1
    local namespace=$2
    local uid=$3
    local lb_name=$(load_balancer_name "$default_cluster_name" "$namespace" "$name" "$uid")

    # Read the DHCP lease information from the kube-vip service on the Harvester cluster
    switch_to_harvester
    local annotations=$(kubectl get svc "$lb_name" -n "$cluster_namespace" -o yaml | yq eval '.metadata.annotations')
    local hwaddr=$(echo "$annotations" | yq eval '.["kube-vip.io/hwaddr"]' -r)
    local requestedIP=$(echo "$annotations" | yq eval '.["kube-vip.io/requestedIP"]' -r)
    echo $hwaddr $requestedIP

    # Copy the lease information to the service in the downstream cluster
    switch_to_downstream_cluster
    kubectl annotate --overwrite svc "$name" -n "$namespace" "kube-vip.io/hwaddr=$hwaddr" "kube-vip.io/requestedIP=$requestedIP"

    # Delete the old load balancer on the Harvester cluster
    switch_to_harvester
    kubectl delete lb "$lb_name" -n "$cluster_namespace"
}

# Update the allocatedHistory for a pool-mode service
update_pool_service() {
    local name=$1
    local namespace=$2
    local uid=$3
    local lb_name=$(load_balancer_name "$default_cluster_name" "$namespace" "$name" "$uid")
    local new_lb_name=$(load_balancer_name "$cluster_name" "$namespace" "$name" "$uid")

    switch_to_harvester
    local annotations=$(kubectl get svc "$lb_name" -n "$cluster_namespace" -o yaml | yq eval '.metadata.annotations')
    local ip=$(kubectl get svc "$lb_name" -n "$cluster_namespace" -o yaml | yq eval '.status.loadBalancer.ingress[0].ip' -r)
    local pool_name=$(kubectl get lb "$lb_name" -n "$cluster_namespace" -o yaml | yq eval '.status.allocatedAddress.ipPool' -r)

    echo $ip $pool_name
    echo $lb_name $new_lb_name

    kubectl delete lb "$lb_name" -n "$cluster_namespace"

    # Point the allocated history of the IP at the load balancer name that the upgraded
    # cloud provider will generate, so the same IP is reused after the upgrade
    kubectl patch pool "$pool_name" -n "$cluster_namespace" --type json -p '[{"op": "replace", "path": "/status/allocatedHistory/'"$ip"'", "value": "'"$cluster_namespace/$new_lb_name"'"}]'
}

delete_all_lb() {
    cluster_name="$1"
    cluster_namespace="$2"

    # Delete the load balancers that were created under the old default cluster name
    # so that the upgraded cloud provider can recreate them
    switch_to_downstream_cluster
    kubectl get service -A -o yaml |
    yq -r '.items[] | select(.spec.type == "LoadBalancer") | [.metadata.name, .metadata.namespace, .metadata.uid] | @tsv' |
    while IFS=$'\t' read -r name namespace uid; do
        lb_name=$(load_balancer_name "$default_cluster_name" "$namespace" "$name" "$uid")
        switch_to_harvester
        kubectl delete lb "$lb_name" -n "$cluster_namespace"
    done
}

before_upgrade() {
    # Stop the cloud provider so it does not reconcile services while they are being modified
    switch_to_downstream_cluster
    kubectl scale deploy harvester-cloud-provider --replicas=0 -n kube-system

    # Loop through the LoadBalancer services in the downstream cluster
    kubectl get service -A -o yaml |
    yq -r '.items[] | select(.spec.type == "LoadBalancer") | [.metadata.name, .metadata.namespace, .metadata.uid, .metadata.annotations["cloudprovider.harvesterhci.io/ipam"]] | @tsv' |
    while IFS=$'\t' read -r name namespace uid mode; do
        if [ "$mode" == "dhcp" ]; then
            echo "Updating DHCP service $name in namespace $namespace"
            update_dhcp_service "$name" "$namespace" "$uid"
        elif [ "$mode" == "pool" ]; then
            echo "Updating pool service $name in namespace $namespace"
            update_pool_service "$name" "$namespace" "$uid"
        else
            echo "Skipping service $name in namespace $namespace"
            continue
        fi
    done
}

after_upgrade() {
    # Stop the old cloud provider before removing its load balancers
    switch_to_downstream_cluster
    kubectl scale deploy harvester-cloud-provider --replicas=0 -n kube-system

    delete_all_lb "$cluster_name" "$cluster_namespace"

    # Redeploy the cloud provider with the real cluster name so the recreated load balancers
    # match the names recorded in the pool allocated history; this assumes the "harvester"
    # Helm chart repository is configured on the workstation
    switch_to_downstream_cluster
    helm upgrade harvester-cloud-provider harvester/harvester-cloud-provider -n kube-system --set global.cattle.clusterName="${cluster_name}" --set cloudConfigPath=/var/lib/rancher/rke2/etc/config-files/cloud-provider-config
}

if [ $# -ne 5 ]; then
    echo "Usage: $0 <period> <harvester-kubeconfig-path> <downstream-cluster-kubeconfig-path> <cluster-name> <cluster-namespace>"
    echo "Available period: before_upgrade, after_upgrade"
    exit 1
fi

period="$1"
harvester_kubeconfig_path="$2"
downstream_cluster_kubeconfig_path="$3"
cluster_name="$4"
cluster_namespace="$5"

merge_yaml "$harvester_kubeconfig_path" "$downstream_cluster_kubeconfig_path"

case "$period" in
    before_upgrade)
        before_upgrade "$harvester_kubeconfig_path" "$downstream_cluster_kubeconfig_path" "$cluster_name" "$cluster_namespace"
        ;;
    after_upgrade)
        after_upgrade "$harvester_kubeconfig_path" "$downstream_cluster_kubeconfig_path" "$cluster_name" "$cluster_namespace"
        ;;
    *)
        echo "Invalid arguments: $period"
        exit 1
        ;;
esac

rm -rf merged.yaml

Binary file added kb/2023-08-21/network-selector.png
Binary file added kb/2023-08-21/network.png
Binary file added kb/2023-08-21/upgrade.png
