Provisioner_kubeadm
The `kubeadm` provisioner is responsible for starting `kubeadm` with the right parameters for configuring the machine as part of the Kubernetes cluster.

For provisioning a machine as a master in the cluster:
```hcl
# from the libvirt provider
resource "libvirt_domain" "master" {
  name = "master"
  ...
  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"
    install {
      auto = true
    }
  }
}
```
and for provisioning some worker nodes:
```hcl
# from the libvirt provider
resource "libvirt_domain" "minion" {
  count = 3
  name  = "minion${count.index}"
  ...
  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"
    join   = "${libvirt_domain.master.network_interface.0.addresses.0}"
  }
}
```
- `role` - (Optional) defines the role of the machine: `master` or `worker`. If `join` is empty, it defaults to the `master` role, otherwise it defaults to the `worker` role.
- `config` - a reference to the `kubeadm.<resource-name>.config` attribute of the provider.
- `join` - (Optional) the address (either a resolvable DNS name or an IP) of the node in the cluster to join. The absence of a `join` indicates that this node will be used for bootstrapping the cluster and will be the seeder for the other nodes of the cluster. When `join` is not empty and `role` is `master`, the node will join the cluster's Control Plane.
- `install` - (Optional) options for the auto-installer script (see the section below).
- `prevent_sudo` - (Optional) prevent the usage of `sudo` for running commands.
- `manifests` - (Optional) list of extra manifests to `kubectl apply -f` in the bootstrap master after the API server is up and running. These manifests can be either local files or URLs.
- `nodename` - (Optional) name for the `.Metadata.Name` field of the Node API object that will be created in this `kubeadm init` or `kubeadm join` operation. This is also used in the CommonName field of the kubelet's client certificate to the API server. Defaults to the hostname of the node if not provided.
- `ignore_checks` - (Optional) list of `kubeadm` preflight checks to ignore when provisioning. Example:

  ```hcl
  ignore_checks = [
    "NumCPU",
    "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
    "Swap",
  ]
  ```
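To see how these arguments fit together, here is a minimal sketch of a bootstrapping-master provisioner that combines several of them; the `nodename` value and the manifest URL are hypothetical placeholders, and the ignored checks are just the ones from the example above:

```hcl
provisioner "kubeadm" {
  config = "${kubeadm.main.config}"
  # no "join", so this node bootstraps the cluster and defaults to the "master" role
  nodename  = "seeder"                                # hypothetical name for the Node API object
  # extra manifests applied once the API server is up and running
  manifests = ["https://example.com/my-addon.yaml"]   # hypothetical URL
  ignore_checks = [
    "NumCPU",
    "Swap",
  ]
}
```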
The provisioner can be used for creating more than one master in the Kubernetes control plane. This can be achieved by specifying `role = "master"` in the additional nodes, in conjunction with a `join` argument for joining the first master created. We can differentiate the bootstrapping master from the rest of the additional masters in the same resource with the help of a conditional like this:
resource "instance_type" "master" {
count = "3"
// ...
provisioner "kubeadm" {
config = "${kubeadm.main.config}"
role = "master"
join = "${count.index == 0 ? "" : instance_type.master.0.ip_address}"
}
}
This way, the first master will have an empty `join`, so it will be provisioned as the bootstrapping master, while the other masters will have a `join = "${instance_type.master.0.ip_address}"` and they will join the bootstrap master.

Take into account that, in order to support multiple masters, you must have configured an external API address (in the resource attribute `kubeadm.api.external`). Otherwise, the provisioner will fail when trying to add a second master.
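For reference, a minimal sketch of what that could look like, assuming the external address is set through a nested `api` block in the provider resource (the attribute layout is inferred from the `kubeadm.api.external` reference above, and the load-balancer name is a hypothetical placeholder):

```hcl
resource "kubeadm" "main" {
  api {
    # hypothetical DNS name of a load balancer sitting in front of all the masters
    external = "api.example.com"
  }
}
```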
Example of a provisioner that installs `kubeadm` automatically with the built-in script:
resource "libvirt_domain" "master" {
name = "master${count.index}"
...
provisioner "kubeadm" {
config = "${kubeadm.main.config}"
install {
# try to install `kubeadm` automatically with the builin script
auto = true
}
}
}
- `auto` - (Optional) try to automatically install `kubeadm` with the built-in helper script.
- `script` - (Optional) a user-provided installation script. It should install `kubeadm` in some directory available in the default `$PATH` (see the sketch after this list).
- `inline` - (Optional) some inline code for installing `kubeadm` in the remote machine. Example:

  ```hcl
  resource "libvirt_domain" "master" {
    name = "master${count.index}"
    ...
    provisioner "kubeadm" {
      config = "${kubeadm.main.config}"
      install {
        auto   = true
        inline = <<-EOT
          #!/bin/bash
          RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
          mkdir -p /opt/bin
          cd /opt/bin
          curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/$${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
          chmod +x {kubeadm,kubelet,kubectl}
        EOT
      }
    }
  }
  ```

  (note the `$$` in `$${RELEASE}`: it escapes the interpolation so the variable is expanded by the shell and not by Terraform)
- `version` - (Optional) `kubeadm` version to install by the auto-installation script.
  - NOTE: this can be ignored by the auto-install script in some OSes where there are not so many installation alternatives.
- `sysconfig_path` - (Optional) full path for the uploaded kubelet sysconfig file (defaults to `/etc/sysconfig/kubelet`).
- `service_path` - (Optional) full path for the uploaded `kubelet.service` file (defaults to `/usr/lib/systemd/system/kubelet.service`).
- `dropin_path` - (Optional) full path for the uploaded kubeadm drop-in file (defaults to `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf`).
- `kubeadm_path` - (Optional) full path where `kubeadm` should be found (if no absolute path is provided, it will use the default `$PATH` for finding it).
- `kubectl_path` - (Optional) full path where `kubectl` should be found (if no absolute path is provided, it will use the default `$PATH` for finding it).
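As mentioned in the `script` item above, the built-in installer can be replaced with your own script. A minimal sketch, where the script path and the install locations are hypothetical placeholders:

```hcl
provisioner "kubeadm" {
  config = "${kubeadm.main.config}"
  install {
    # hypothetical local script, uploaded to the remote machine and run
    # instead of the built-in installation helper
    script       = "scripts/install-kubeadm.sh"
    # where that script is expected to leave the binaries
    kubeadm_path = "/opt/bin/kubeadm"
    kubectl_path = "/opt/bin/kubectl"
  }
}
```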
You can install a destroy-time provisioner that will drain the node from the Kubernetes cluster. In the case of masters running `etcd`, it will also remove the `etcd` instance from the etcd cluster.
resource "aws_instance" "worker" {
count = "${var.worker_count}"
# ...
# provisioner for the resource creation
provisioner "kubeadm" {
config = "${kubeadm.main.config}"
join = "${lookup(docker_container.master.0.network_data[0], "ip_address")}"
role = "worker"
}
# provisioner for the resource destruction
provisioner "kubeadm" {
when = "destroy"
config = "${kubeadm.main.config}"
drain = true
}
# make sure that, on rolling updates, we create a new
# node before destroying the previous one...
lifecycle {
create_before_destroy = true
}
}
Note well that you need both provisioners: one for creation and a different one for destruction (this is because of a Terraform limitation where provisioners cannot be used in both scenarios). Both provisioners must be configured with a `config` attribute, but the destruction provisioner needs a `when = "destroy"` attribute for being executed on destruction, and a `drain = true` for signaling that the node must be drained from the cluster.
- The `kubeadm-setup.sh` script tries to do its best in order to install `kubeadm`, but some distros have not been tested too much. I've used `libvirt` with OpenSUSE Leap images for running my tests, so that could be considered the perfect combination for trying this...