[Docs] replace install command with literalinclude (#860)
* replace install command with literalinclude

When updating the docs, we now only have to update `install.md`.
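The idea, as a quick sketch: `docs/src/_parts/install.md` keeps each install command between named HTML comment markers, and every page pulls in the command it needs with a MyST `literalinclude` directive (the marker names below match the ones added in this commit):

```
<!-- snap start -->
sudo snap install k8s --classic --channel=1.31-classic/candidate
<!-- snap end -->
```

which a page consumes with:

```{literalinclude} ../../_parts/install.md
:start-after: <!-- snap start -->
:end-before: <!-- snap end -->
```

Changing the channel once in `install.md` then updates every page that includes it.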
nhennigan authored Dec 3, 2024
1 parent 4c721d4 commit a008465
Showing 14 changed files with 242 additions and 202 deletions.
README.md (2 changes: 1 addition & 1 deletion)
@@ -15,7 +15,7 @@ For more information and instructions, please see the official documentation at:
Install Canonical Kubernetes and initialise the cluster with:

```bash
-sudo snap install k8s --channel=1.30-classic/beta --classic
+sudo snap install k8s --channel=1.31-classic/candidate --classic
sudo k8s bootstrap
```
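A quick sanity check after bootstrapping (the same command appears later in these docs) is to wait for the cluster to report ready:

```bash
sudo k8s status --wait-ready
```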

docs/canonicalk8s/reuse/substitutions.yaml (2 changes: 1 addition & 1 deletion)
@@ -1,6 +1,6 @@
product: 'Canonical Kubernetes'
version: '1.31'
-channel: '1.31/edge'
+channel: '1.31/candidate'
multi_line_example: |-
  *Multi-line* text
  that uses basic **markup**.
docs/src/_parts/install.md (29 changes: 26 additions & 3 deletions)
@@ -1,3 +1,26 @@
-```
-sudo snap install k8s --classic --channel=1.31/edge
-```
+<!-- snap start -->
+sudo snap install k8s --classic --channel=1.31-classic/candidate
+<!-- snap end -->
+<!-- lxd start -->
+lxc exec k8s -- sudo snap install k8s --classic --channel=1.31-classic/candidate
+<!-- lxd end -->
+<!-- offline start -->
+sudo snap download k8s --channel 1.31-classic/candidate --basename k8s
+<!-- offline end -->
+<!-- juju control start -->
+juju deploy k8s --channel=1.31/candidate
+<!-- juju control end -->
+<!-- juju worker start -->
+juju deploy k8s-worker --channel=1.31/candidate -n 2
+<!-- juju worker end -->
+<!-- juju control constraints start -->
+juju deploy k8s --channel=1.31/candidate --constraints='cores=2 mem=16G root-disk=40G'
+<!-- juju control constraints end -->
+<!-- juju worker constraints start -->
+juju deploy k8s-worker --channel=1.31/candidate --constraints='cores=2 mem=16G root-disk=40G'
+<!-- juju worker constraints end -->
+<!-- juju vm start -->
+juju deploy k8s --channel=latest/edge \
+  --base "ubuntu@22.04" \
+  --constraints "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
+<!-- juju vm end -->
docs/src/assets/how-to-epa-maas-cloud-init (2 changes: 1 addition & 1 deletion)
@@ -82,7 +82,7 @@ write_files:
# install the snap
snap:
  commands:
-    00: 'snap install k8s --classic --channel=1.31/beta'
+    00: 'snap install k8s --classic --channel=1.31/candidate'

runcmd:
# fetch dpdk driver binding script
docs/src/capi/tutorial/getting-started.md (22 changes: 13 additions & 9 deletions)
@@ -18,7 +18,7 @@ sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl

## Configure `clusterctl`

`clusterctl` contains a list of default providers. Right now, {{product}} is
not yet part of that list. To make `clusterctl` aware of the new
providers we need to add them to the configuration
file. Edit `~/.cluster-api/clusterctl.yaml` and add the following:
@@ -35,21 +35,25 @@ providers:

## Set up a management cluster

The management cluster hosts the CAPI providers. You can use {{product}} as a
management cluster:

-```
-sudo snap install k8s --classic --channel=1.31-classic/candidate
-sudo k8s bootstrap
-sudo k8s status --wait-ready
-mkdir -p ~/.kube/
-sudo k8s config > ~/.kube/config
+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- snap start -->
+:end-before: <!-- snap end -->
+:append: sudo k8s bootstrap
+```
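For reference, given the `snap` markers added to `_parts/install.md` above, this directive should render as the included command with the `:append:` line tacked on the end:

```
sudo snap install k8s --classic --channel=1.31-classic/candidate
sudo k8s bootstrap
```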

When setting up the management cluster, place its kubeconfig under
`~/.kube/config` so other tools such as `clusterctl` can discover and interact
with it.

+```
+sudo k8s status --wait-ready
+mkdir -p ~/.kube/
+sudo k8s config > ~/.kube/config
```
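To confirm the exported kubeconfig works, a minimal check, assuming `kubectl` is installed on the machine, might be:

```
kubectl --kubeconfig ~/.kube/config get nodes
```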

## Prepare the infrastructure provider

Before generating a cluster, you need to configure the infrastructure provider.
@@ -114,7 +118,7 @@ The MAAS infrastructure provider uses these credentials to deploy machines,
create DNS records and perform various other operations for workload clusters.
```{warning}
The management cluster needs to resolve DNS records from the MAAS domain,
therefore it should be deployed on a MAAS machine.
```
docs/src/charm/howto/ceph-csi.md (16 changes: 7 additions & 9 deletions)
@@ -12,14 +12,13 @@ This guide assumes that you have an existing {{product}} cluster.
See the [charm installation] guide for more details.

In the case of localhost/LXD Juju clouds, please make sure that the K8s units are
configured to use VM containers with Ubuntu 22.04 as the base and with the
``virt-type=virtual-machine`` constraint added. In order for K8s to function properly,
an adequate amount of resources must be allocated:

-```
-juju deploy k8s --channel=latest/edge \
-  --base "ubuntu@22.04" \
-  --constraints "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- juju vm start -->
+:end-before: <!-- juju vm end -->
```

## Deploying Ceph
@@ -37,11 +36,11 @@ juju deploy -n 3 ceph-osd \
```
juju integrate ceph-osd:mon ceph-mon:osd
```
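Once deployed, the Ceph units can be watched until they settle (a sketch, reusing the `--watch` flag shown later in these docs):

```
juju status ceph-mon ceph-osd --watch 2s
```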

If using LXD, configure the OSD units to use VM containers by adding the
constraint: ``virt-type=virtual-machine``.

Once the units are ready, deploy ``ceph-csi``. By default, this enables
the ``ceph-xfs`` and ``ceph-ext4`` storage classes, which leverage
Ceph RBD.

```
@@ -134,4 +133,3 @@ sudo k8s kubectl wait pod/pv-writer-test \

[charm installation]: ./charm
[Ceph]: https://docs.ceph.com/

docs/src/charm/howto/charm.md (15 changes: 9 additions & 6 deletions)
@@ -37,8 +37,9 @@ page][channels] for an explanation of the different types of channel.

The charm can be installed with the `juju` command:

-```
-juju deploy k8s --channel=1.31/candidate
+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- juju control start -->
+:end-before: <!-- juju control end -->
```

## Bootstrap the cluster
@@ -76,9 +77,9 @@ Rather than adding more control-plane units, we'll deploy the `k8s-worker` charm
After deployment, integrate these new nodes with control-plane units so they join
the cluster.

-```
-juju deploy k8s-worker --channel=latest/edge -n 2
-juju integrate k8s k8s-worker:cluster

+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- juju worker start -->
+:end-before: <!-- juju worker end -->
+:append: juju integrate k8s k8s-worker:cluster
```

Use `juju status` to watch these units approach the active/idle state.
Expand All @@ -90,4 +93,4 @@ Use `juju status` to watch these units approach the active/idle state.
[credentials]: https://juju.is/docs/juju/credentials
[juju]: https://juju.is/docs/juju/install-juju
[charm]: https://juju.is/docs/juju/charmed-operator
[localhost]: ../howto/install-lxd
docs/src/charm/tutorial/getting-started.md (18 changes: 10 additions & 8 deletions)
@@ -67,8 +67,9 @@ minimums required. For the Kubernetes control plane (`k8s` charm), the
recommendation is two CPU cores, 16GB of memory and 40GB of disk space. Now we
can go ahead and create a cluster:

-```
-juju deploy k8s --channel=1.31/candidate --constraints='cores=2 mem=16G root-disk=40G'
+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- juju control constraints start -->
+:end-before: <!-- juju control constraints end -->
```

At this point Juju will fetch the charm from Charmhub, create a new instance
@@ -83,7 +84,7 @@ juju status --watch 2s
When the status reports that K8s is "idle/ready", you have successfully deployed
a {{product}} control-plane using Juju.

```{note} For High Availability you will need at least three units of the k8s
charm. Scaling the deployment is covered below.
```
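As a minimal sketch of that scaling step (the tutorial covers it in section 4 below), extra control-plane units could be added with:

```
juju add-unit k8s -n 2
```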

Expand All @@ -96,8 +97,9 @@ connection to a control-plane node to tell it what to do, but it also means
more of its resources are available for running workloads. We can deploy a
worker node in a similar way to the original K8s node:

-```
-juju deploy k8s-worker --channel=1.31/candidate --constraints='cores=2 mem=16G root-disk=40G'
+```{literalinclude} ../../_parts/install.md
+:start-after: <!-- juju worker constraints start -->
+:end-before: <!-- juju worker constraints end -->
```

Once again, this will take a few minutes. In this case though, the `k8s-worker`
@@ -114,7 +116,7 @@ the 'integrate' command, adding the interface we wish to connect.
```
juju integrate k8s k8s-worker:cluster
```

After a short time, the worker node will share information with the control plane
and be joined to the cluster.

## 4. Scale the cluster (Optional)
@@ -152,7 +154,7 @@ mkdir ~/.kube
To fetch the configuration information from the cluster we can run:

```
juju run k8s/0 get-kubeconfig
```
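The action prints its results rather than writing the file for you. As a hedged sketch, assuming the result exposes the file contents under a `kubeconfig` key and `yq` is available, the output could be captured with:

```
juju run k8s/0 get-kubeconfig | yq '.kubeconfig' > ~/.kube/config
```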

The Juju action is a piece of code which runs on a unit to perform a specific
@@ -237,4 +239,4 @@ informed of updates.
[Juju tutorial]: https://juju.is/docs/juju/tutorial
[Kubectl]: https://kubernetes.io/docs/reference/kubectl/
[the channel explanation page]: ../../snap/explanation/channels
[releases page]: ../reference/releases