Merge pull request #2193 from OctopusDeploy/finnd-kube-target-workerpool
updated kube target docs to include workerpool warning
FinnianDempsey authored Apr 10, 2024
2 parents 94e0e4e + 984a118 commit cf47baf
Showing 2 changed files with 35 additions and 27 deletions.
4 changes: 4 additions & 0 deletions dictionary-octopus.txt
@@ -81,6 +81,7 @@ hyperthreading
IMDS
inetmgr
inetsrv
inkey
internalcustomer
Istio
istioctl
@@ -90,7 +91,9 @@ itemtype
ITSM
jwks
keyrings
kubeconfig
Kubelet
kubelogin
kustomization
kustomize
lastmod
@@ -161,6 +164,7 @@ rehype
reindexing
releaseprogression
remoting
replicasets
reprioritize
reprovisioned
reprovisioning
@@ -1,7 +1,7 @@
---
layout: src/layouts/Default.astro
pubDate: 2023-01-01
modDate: 2023-01-01
modDate: 2024-03-04
title: Kubernetes cluster
description: How to configure a Kubernetes cluster as a deployment target in Octopus
navOrder: 50
@@ -45,30 +45,30 @@ A number of the fields in this configuration file map directly to the fields in
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5REN...
certificate-authority-data: XXXXXXXXXXXXXXXX...
server: https://kubernetes.example.org:443
name: k8scluster
name: k8s-cluster
contexts:
- context:
cluster: k8scluster
user: k8suser
name: k8suser
current-context: k8scluster
cluster: k8s-cluster
user: k8s_user
name: k8s_user
current-context: k8s-cluster
kind: Config
preferences: {}
users:
- name: k8suser
- name: k8s_user
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL...
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0...
token: 1234567890abcdefghijkl
- name: k8suser2
client-certificate-data: XXXXXXXXXXXXXXXX...
client-key-data: XXXXXXXXXXXXXXXX...
token: 1234567890xxxxxxxxxxxxx
- name: k8s_user2
user:
password: some-password
username: exp
- name: k8suser3
- name: k8s_user3
user:
token: 1234567890abcdefghijkl
token: 1234567890xxxxxxxxxxxxx
```
## Add a Kubernetes target
@@ -129,7 +129,7 @@ users:
C:\OpenSSL-Win32\bin\openssl pkcs12 `
-passout pass: `
-export `
-out certificateandkey.pfx `
-out certificate_and_key.pfx `
-in certificate.crt `
-inkey private.key
```
@@ -140,7 +140,7 @@ users:
openssl pkcs12 \
-passout pass: \
-export \
-out certificateandkey.pfx \
-out certificate_and_key.pfx \
-in certificate.crt \
-inkey private.key
```
@@ -158,25 +158,29 @@ Decoding the `certificate-authority-data` field results in a string that looks s

```
-----BEGIN CERTIFICATE-----
MIIEyDCCArCgAwIBAgIRAOBNYnhYDBamTvQn...
XXXXXXXXXXXXXXXX...
-----END CERTIFICATE-----
```

Save this text to a file called `ca.pem`, and upload it to the [Octopus certificate management area](https://oc.to/CertificatesDocumentation). The certificate can then be selected in the `cluster certificate authority` field.
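
The decode step above can be sketched as a shell one-liner. The sample value here is the truncated `certificate-authority-data` prefix from the example kubeconfig, padded so it decodes cleanly; substitute the full value from your own file:

```shell
# Decode the certificate-authority-data value from a kubeconfig into a PEM
# file that can be uploaded to Octopus. The value below is the truncated
# sample prefix from the docs, not a real certificate.
CA_DATA="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCg=="
echo "$CA_DATA" | base64 -d > ca.pem
cat ca.pem
```

Running this leaves the decoded certificate in `ca.pem`, ready to upload; for the sample value it prints `-----BEGIN CERTIFICATE-----`.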

9. Enter the Kubernetes Namespace.
When a single Kubernetes cluster is shared across environments, resources deployed to the cluster will often be separated by environment and by application, team, or service. In this situation, the recommended approach is to create a namespace for each application and environment (e.g., `myapplication-development` and `my-application-production`), and create a Kubernetes service account that has permissions to just that namespace.
When a single Kubernetes cluster is shared across environments, resources deployed to the cluster will often be separated by environment and by application, team, or service. In this situation, the recommended approach is to create a namespace for each application and environment (e.g., `my-application-development` and `my-application-production`), and create a Kubernetes service account that has permissions to just that namespace.

Where each environment has its own Kubernetes cluster, namespaces can be assigned to each application, team or service (e.g. `myapplication`).
Where each environment has its own Kubernetes cluster, namespaces can be assigned to each application, team or service (e.g. `my-application`).

In both scenarios, a target is then created for each Kubernetes cluster and namespace. The `Target Role` tag is set to the application name (e.g. `myapplication`), and the `Environments` are set to the matching environment.
In both scenarios, a target is then created for each Kubernetes cluster and namespace. The `Target Role` tag is set to the application name (e.g. `my-application`), and the `Environments` are set to the matching environment.

When a Kubernetes target is used, the namespace it references is created automatically if it does not already exist.
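
As a sketch, the naming convention above could be captured in a manifest like this. The name `my-application-development` is illustrative, and since Octopus creates the namespace automatically, a manifest is only needed when managing namespaces outside Octopus:

```yaml
# Sketch of the namespace-per-application-and-environment convention.
apiVersion: v1
kind: Namespace
metadata:
  name: my-application-development
```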

10. Select a worker pool for the target.
To make use of the Kubernetes steps, the Octopus Server or workers that will run the steps need to have the `kubectl` executable installed. Linux workers also need to have the `jq`, `xargs` and `base64` applications installed.
11. Click **SAVE**.

:::div{.warning}
Setting the Worker Pool directly on the Deployment Target will override the Worker Pool defined in a Deployment Process.
:::
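
A quick way to confirm a worker meets these requirements is to probe the `PATH`. This is an illustrative sketch, not an official Octopus health check:

```shell
# Verify the tools Kubernetes steps need are available on the worker (or on
# the Octopus Server when no worker is used). kubectl is always required;
# jq, xargs and base64 are additionally required on Linux workers.
missing=""
for tool in kubectl jq xargs base64; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing from PATH:$missing"
else
  echo "All required tools are on the PATH"
fi
```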

## Create service accounts

The recommended approach to configuring a Kubernetes target is to have a service account for each application and namespace.
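
As a hedged sketch of that approach, the following manifest defines a service account plus a Role and RoleBinding that restrict it to a single namespace. All names are illustrative, and the `rules` should be tightened to only what your deployments actually need:

```yaml
# Service account scoped to one application/environment namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-application-sa
  namespace: my-application-development
---
# Role granting access only within that namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-application-role
  namespace: my-application-development
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["*"]
  verbs: ["*"]
---
# Bind the role to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-application-binding
  namespace: my-application-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-application-role
subjects:
- kind: ServiceAccount
  name: my-application-sa
  namespace: my-application-development
```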
@@ -271,7 +275,7 @@ The token can then be saved as a Token Octopus account, and assigned to the Kube
Kubernetes targets use the `kubectl` executable to communicate with the Kubernetes cluster. This executable must be available on the path on the target where the step is run. When using workers, this means the `kubectl` executable must be in the path on the worker that is executing the step. Otherwise the `kubectl` executable must be in the path on the Octopus Server itself.

## Vendor Authentication Plugins
Prior to `kubectl` version 1.26, the logic for authenticating against various cloud providers (eg Azure Kubernetes Services, Google Kubernetes Engine) was included "in-tree" in `kubetcl`. From version 1.26 onward, the cloud-vendor specific authentication code has been removed from `kubectl`, in favor of a plugin approach.
Prior to `kubectl` version 1.26, the logic for authenticating against various cloud providers (e.g. Azure Kubernetes Service, Google Kubernetes Engine) was included "in-tree" in `kubectl`. From version 1.26 onward, the cloud-vendor specific authentication code has been removed from `kubectl`, in favor of a plugin approach.
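
Before upgrading `kubectl` past 1.26 you can check whether the relevant vendor plugin is already installed. `kubelogin` (AKS) and `gke-gcloud-auth-plugin` (GKE) are the vendors' published plugin binaries; which one, if either, you need depends on your cluster, and this probe is an illustrative sketch:

```shell
# Check whether the vendor-specific auth plugins are on the PATH before
# moving to kubectl 1.26+.
for plugin in kubelogin gke-gcloud-auth-plugin; do
  if command -v "$plugin" >/dev/null 2>&1; then
    echo "$plugin: installed"
  else
    echo "$plugin: not found (needed for kubectl >= 1.26 against that vendor)"
  fi
done
```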

What this means for your deployments:

@@ -337,7 +341,7 @@ kubectl version --client --output=yaml
# Write a custom kube config. This is useful when you have a config that works, and you want to confirm it works in Octopus.
Write-Host "Health check with custom config file"
Set-Content -Path "myconfig.yml" -Value @"
Set-Content -Path "my-config.yml" -Value @"
apiVersion: v1
clusters:
- cluster:
@@ -347,8 +351,8 @@ clusters:
contexts:
- context:
cluster: test
user: testadmin
name: testadmin
user: test_admin
name: test_admin
- context:
cluster: test
user: test
@@ -357,16 +361,16 @@ current-context: test
kind: Config
preferences: {}
users:
- name: testadmin
- name: test_admin
user:
token: auth-token-goes-here
- name: test
user:
client-certificate-data: certificate-data-goes-here
client-key-data: certificate-key-gies-here
client-key-data: certificate-key-goes-here
"@
kubectl version --short --kubeconfig myconfig.yml
kubectl version --short --kubeconfig my-config.yml
exit 0
