From f443b24d1ca88cf4b1159172de1a7b7e920f029d Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 17:30:48 -0500 Subject: [PATCH] clean build --- .../billing-process-for-harvard.md | 10 +- .../scaling-and-performance-guide.md | 20 +-- .../logging-in/web-console-overview.md | 14 +- .../access-and-security/security-groups.md | 10 +- .../set-up-a-private-network.md | 6 +- .../using-vpn/wireguard/index.md | 8 +- .../data-transfer/data-transfer-from-to-vm.md | 54 +++--- docs/openstack/openstack-cli/openstack-CLI.md | 8 +- .../mount-the-object-storage.md | 8 +- .../persistent-storage/object-storage.md | 18 +- .../persistent-storage/transfer-a-volume.md | 6 +- .../setup-github-actions-pipeline.md | 16 +- .../jenkins/setup-jenkins-CI-CD-pipeline.md | 160 +++++++++--------- docs/other-tools/apache-spark/spark.md | 50 +++--- .../kubernetes/k3s/k3s-ha-cluster.md | 4 +- .../kubernetes/k3s/k3s-using-k3d.md | 6 +- docs/other-tools/kubernetes/k3s/k3s.md | 6 +- .../kubeadm/HA-clusters-with-kubeadm.md | 86 +++++----- .../single-master-clusters-with-kubeadm.md | 60 +++---- docs/other-tools/kubernetes/kubespray.md | 12 +- docs/other-tools/kubernetes/microk8s.md | 22 +-- docs/other-tools/kubernetes/minikube.md | 36 ++-- nerc-theme/main.html | 6 +- 23 files changed, 313 insertions(+), 313 deletions(-) diff --git a/docs/get-started/cost-billing/billing-process-for-harvard.md b/docs/get-started/cost-billing/billing-process-for-harvard.md index f95afc27..5f1c7aff 100644 --- a/docs/get-started/cost-billing/billing-process-for-harvard.md +++ b/docs/get-started/cost-billing/billing-process-for-harvard.md @@ -32,11 +32,11 @@ Please follow these two steps to ensure proper billing setup: !!! abstract "What if you already have an existing Customer Code?" - Please note that if you already have an existing active NERC account, you - need to provide your HUIT Customer Code to NERC. If you think your department - may already have a HUIT account but you don’t know the corresponding Customer - Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) - to get the required Customer Code. + Please note that if you already have an existing active NERC account, you + need to provide your HUIT Customer Code to NERC. If you think your department + may already have a HUIT account but you don’t know the corresponding Customer + Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) + to get the required Customer Code. 2. During the Resource Allocation review and approval process, we will utilize the HUIT "Customer Code" provided by the PI in step #1 to align it with the approved diff --git a/docs/openshift/applications/scaling-and-performance-guide.md b/docs/openshift/applications/scaling-and-performance-guide.md index 912d2909..be829347 100644 --- a/docs/openshift/applications/scaling-and-performance-guide.md +++ b/docs/openshift/applications/scaling-and-performance-guide.md @@ -102,9 +102,9 @@ CPU and memory can be specified in a couple of ways: !!! note "Important Information" - If a Pod's total requests are not available on a single node, then the Pod - will remain in a *Pending* state (i.e. not running) until these resources - become available. + If a Pod's total requests are not available on a single node, then the Pod + will remain in a *Pending* state (i.e. not running) until these resources + become available. - The **limit** value specifies the max value you can consume. 
Limit is the value applications should be tuned to use. Pods will be memory, CPU throttled when @@ -283,11 +283,11 @@ Click the **Observe** tab to: !!! note "Detailed Monitoring your project and application metrics" - On the left navigation panel of the **Developer** perspective, click - **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. - For more information about Monitoring project and application metrics - using the Developer perspective, please - [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). + On the left navigation panel of the **Developer** perspective, click + **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. + For more information about Monitoring project and application metrics + using the Developer perspective, please + [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). ## Scaling manually @@ -402,8 +402,8 @@ maximum numbers to maintain the specified CPU utilization across all pods. !!! note "Configure via: Form or YAML View" - While creating or editing the horizontal pod autoscaler in the web console, - you can switch from **Form view** to **YAML view**. + While creating or editing the horizontal pod autoscaler in the web console, + you can switch from **Form view** to **YAML view**. - From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click **Save**. diff --git a/docs/openshift/logging-in/web-console-overview.md b/docs/openshift/logging-in/web-console-overview.md index d8170ac8..0bf9c5a7 100644 --- a/docs/openshift/logging-in/web-console-overview.md +++ b/docs/openshift/logging-in/web-console-overview.md @@ -56,9 +56,9 @@ administrators and cluster administrators can view the Administrator perspective !!! note "Important Note" - The default web console perspective that is shown depends on the role of the - user. The **Administrator** perspective is displayed by default if the user - is recognized as an administrator. +The default web console perspective that is shown depends on the role of the +user. The **Administrator** perspective is displayed by default if the user is +recognized as an administrator. ### About the Developer perspective in the web console @@ -67,8 +67,8 @@ services, and databases. !!! info "Important Note" - The default view for the OpenShift Container Platform web console is the **Developer** - perspective. +The default view for the OpenShift Container Platform web console is the **Developer** +perspective. The web console provides a comprehensive set of tools for managing your projects and applications. @@ -82,8 +82,8 @@ located on top navigation as shown below: !!! info "Important Note" - You can identify the currently selected project with **tick** mark and also - you can click on **star** icon to keep the project under your **Favorites** list. +You can identify the currently selected project with **tick** mark and also +you can click on **star** icon to keep the project under your **Favorites** list. 
## Navigation Menu diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index 5221238b..9ff4d3ca 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -79,8 +79,8 @@ Enter the following values: !!! note "Note" - To accept requests from a particular range of IP addresses, specify the - IP address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the IP + address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have SSH port 22 open for requests @@ -141,10 +141,10 @@ Enter the following values: - CIDR: 0.0.0.0/0 - !!! note "Note" +!!! note "Note" - To accept requests from a particular range of IP addresses, specify the - IP address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the IP + address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have RDP port 3389 open for requests diff --git a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md index 91eb3b08..77bce7cd 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md @@ -44,9 +44,9 @@ In the Create Network dialog box, specify the following values. networks, you should use IP addresses which fall within the ranges that are specifically reserved for private networks: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 + 10.0.0.0/8 + 172.16.0.0/12 + 192.168.0.0/16 In the example below, we configure a network containing addresses 192.168.0.1 to 192.168.0.255 using CIDR 192.168.0.0/24 diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md index 7f11de82..5849849a 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md @@ -126,10 +126,10 @@ To deactivate config: `wg-quick down /path/to/file_name.config` !!! note "Important Note" - You need to contact your project administrator to get your own WireGUard - configuration file (file with .conf extension). Download it and Keep it in - your local machine so in next steps we can use this configuration client - profile file. + You need to contact your project administrator to get your own WireGUard + configuration file (file with .conf extension). Download it and Keep it in + your local machine so in next steps we can use this configuration client + profile file. A WireGuard client or compatible software is needed to connect to the WireGuard VPN server. Please install[one of these clients](https://www.wireguard.com/install/) diff --git a/docs/openstack/data-transfer/data-transfer-from-to-vm.md b/docs/openstack/data-transfer/data-transfer-from-to-vm.md index 9cd136e6..e17b29e8 100644 --- a/docs/openstack/data-transfer/data-transfer-from-to-vm.md +++ b/docs/openstack/data-transfer/data-transfer-from-to-vm.md @@ -434,20 +434,20 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. !!! 
info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Password"**: "``" @@ -462,12 +462,12 @@ from the file picker. !!! tip "Helpful Tip" - You can save your above configured site with some preferred name by - clicking the "Save" button and then giving a proper name to your site. - This prevents needing to manually enter all of your configuration again the - next time you need to use WinSCP. + You can save your above configured site with some preferred name by + clicking the "Save" button and then giving a proper name to your site. + This prevents needing to manually enter all of your configuration again the + next time you need to use WinSCP. - ![Save Site WinSCP](images/winscp-save-site.png) + ![Save Site WinSCP](images/winscp-save-site.png) #### Using WinSCP @@ -516,17 +516,17 @@ connections to servers, enterprise file sharing, and various cloud storage platf !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user **"Password"**: "``" @@ -585,20 +585,20 @@ computer (shared drives, Dropbox, etc.) !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Key file"**: "Browse and choose the appropriate SSH Private Key from you local machine that has corresponding Public Key attached to your VM" diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md index b1d3fa37..552f5980 100644 --- a/docs/openstack/openstack-cli/openstack-CLI.md +++ b/docs/openstack/openstack-cli/openstack-CLI.md @@ -37,10 +37,10 @@ You can download the environment file with the credentials from the [OpenStack d !!! 
note "Important Note" - Please note that an application credential is only valid for a single - project, and to access multiple projects you need to create an application - credential for each. You can switch projects by clicking on the project name - at the top right corner and choosing from the dropdown under "Project". + Please note that an application credential is only valid for a single + project, and to access multiple projects you need to create an application + credential for each. You can switch projects by clicking on the project name + at the top right corner and choosing from the dropdown under "Project". After clicking "Create Application Credential" button, the **ID** and **Secret** will be displayed and you will be prompted to `Download openrc file` diff --git a/docs/openstack/persistent-storage/mount-the-object-storage.md b/docs/openstack/persistent-storage/mount-the-object-storage.md index aed80104..76efd12f 100644 --- a/docs/openstack/persistent-storage/mount-the-object-storage.md +++ b/docs/openstack/persistent-storage/mount-the-object-storage.md @@ -59,7 +59,7 @@ parts are `EC2_ACCESS_KEY` and `EC2_SECRET_KEY`, keep them noted. - Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file and uncomment "user_allow_other" option. - sudo nano /etc/fuse.conf + sudo nano /etc/fuse.conf The output going to look like this: @@ -977,9 +977,9 @@ Also, check that binding to `localhost` is working fine by running the following !!! warning "Important Note" - The `netstat` command may not be available on your system by default. If - this is the case, you can install it (along with a number of other handy - networking tools) with the following command: `sudo apt install net-tools`. + The `netstat` command may not be available on your system by default. If + this is the case, you can install it (along with a number of other handy + networking tools) with the following command: `sudo apt install net-tools`. ##### Configuring a Redis Password diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index a75309bf..7c17e5cb 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -256,13 +256,13 @@ This is a python client for the Swift API. There's a [Python API](https://github - This example uses a `Python3` virtual environment, but you are free to choose any other method to create a local virtual environment like `Conda`. - python3 -m venv venv + python3 -m venv venv !!! note "Choosing Correct Python Interpreter" - Make sure you are able to use `python` or `python3` or **`py -3`** (For - Windows Only) to create a directory named `venv` (or whatever name you - specified) in your current working directory. + Make sure you are able to use `python` or `python3` or **`py -3`** (For + Windows Only) to create a directory named `venv` (or whatever name you + specified) in your current working directory. - Activate the virtual environment by running: @@ -526,8 +526,8 @@ directory `~/.aws/config` with the ec2 profile and credentials as shown below: !!! note "Information" - We need to have a profile that you use must have permissions to allow - the AWS operations can be performed. + We need to have a profile that you use must have permissions to allow + the AWS operations can be performed. #### Listing buckets using **aws-cli** @@ -1062,9 +1062,9 @@ respectively. !!! 
note "Helpful Tips" - You can save your above configured session with some preferred name by - clicking the "Save" button and then giving a proper name to your session. - So that next time you don't need to again manually enter all your configuration. + You can save your above configured session with some preferred name by + clicking the "Save" button and then giving a proper name to your session. + So that next time you don't need to again manually enter all your configuration. #### Using WinSCP diff --git a/docs/openstack/persistent-storage/transfer-a-volume.md b/docs/openstack/persistent-storage/transfer-a-volume.md index f5e5b776..951e0e25 100644 --- a/docs/openstack/persistent-storage/transfer-a-volume.md +++ b/docs/openstack/persistent-storage/transfer-a-volume.md @@ -104,9 +104,9 @@ openstack volume transfer request create my-volume !!! tip "Pro Tip" - If your volume name includes spaces, you need to enclose them in quotes, - i.e. `""`. - For example: `openstack volume transfer request create "My Volume"` + If your volume name includes spaces, you need to enclose them in quotes, + i.e. `""`. + For example: `openstack volume transfer request create "My Volume"` - The volume can be checked as in the transfer status using `openstack volume transfer request list` as follows and the volume is in status diff --git a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md index 76168e72..d66acece 100644 --- a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md +++ b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md @@ -84,14 +84,14 @@ workflow. !!! info "Very Important Information" - Workflow execution on OpenShift pipelines follows these steps: - - 1. Checkout your repository - 2. Perform a container image build - 3. Push the built image to the GitHub Container Registry (GHCR) or - your preferred Registry - 4. Log in to your NERC OpenShift cluster's project space - 5. Create an OpenShift app from the image and expose it to the internet + Workflow execution on OpenShift pipelines follows these steps: + + 1. Checkout your repository + 2. Perform a container image build + 3. Push the built image to the GitHub Container Registry (GHCR) or + your preferred Registry + 4. Log in to your NERC OpenShift cluster's project space + 5. Create an OpenShift app from the image and expose it to the internet 8. Edit the top-level 'env' section as marked with '🖊️' if the defaults are not suitable for your project. 
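The workflow's top-level 'env' values typically point at repository secrets for the cluster API URL and login token. If yours does too, the GitHub CLI is a quick way to create those secrets. The sketch below is only illustrative — the secret names `OPENSHIFT_SERVER` and `OPENSHIFT_TOKEN` are assumptions, so match them to whatever names your workflow file actually references.

```sh
# Hypothetical secret names -- replace them with the names referenced in your workflow's 'env' section.
# Print the API URL and token for the account you are currently logged in to with `oc`:
oc whoami --show-server
oc whoami --show-token

# Store both values as GitHub repository secrets using the GitHub CLI (run inside your repo clone):
gh secret set OPENSHIFT_SERVER --body "$(oc whoami --show-server)"
gh secret set OPENSHIFT_TOKEN --body "$(oc whoami --show-token)"
```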
diff --git a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md index 0e89ec4b..9020eb1c 100644 --- a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md @@ -31,11 +31,11 @@ _Figure: CI/CD Pipeline To Deploy To Kubernetes Cluster Using Jenkins on NERC_ - [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) to your new instance so that you will be able to ssh into this machine: - ssh ubuntu@ -A -i + ssh ubuntu@ -A -i For example: - ssh ubuntu@199.94.60.4 -A -i cloud.key + ssh ubuntu@199.94.60.4 -A -i cloud.key Upon successfully SSH accessing the machine, execute the following dependencies: @@ -45,16 +45,16 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Update the repositories and packages: - sudo apt-get update && sudo apt-get upgrade -y + sudo apt-get update && sudo apt-get upgrade -y - Turn off `swap` - swapoff -a - sudo sed -i '/ swap / s/^/#/' /etc/fstab + swapoff -a + sudo sed -i '/ swap / s/^/#/' /etc/fstab - Install `curl` and `apt-transport-https` - sudo apt-get update && sudo apt-get install -y apt-transport-https curl + sudo apt-get update && sudo apt-get install -y apt-transport-https curl --- @@ -62,12 +62,12 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Download and install Docker CE: - curl -fsSL https://get.docker.com -o get-docker.sh - sudo sh get-docker.sh + curl -fsSL https://get.docker.com -o get-docker.sh + sudo sh get-docker.sh - Configure the Docker daemon: - sudo usermod -aG docker $USER && newgrp docker + sudo usermod -aG docker $USER && newgrp docker --- @@ -77,23 +77,23 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Download the Google Cloud public signing key and add key to verify releases - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ - apt-key add - + curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ + apt-key add - - add kubernetes apt repo - cat <`" and "``" - with your actual DockerHub and GitHub usernames, respectively. Also, - ensure that the global credentials IDs mentioned above match those used - during the credential saving steps mentioned earlier. For instance, - `dockerhublogin` corresponds to the **DockerHub** ID saved during the - credential saving process for your Docker Hub Registry's username and - password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID - assigned for the Kubeconfig credential file. + You need to replace "``" and "``" + with your actual DockerHub and GitHub usernames, respectively. Also, + ensure that the global credentials IDs mentioned above match those used + during the credential saving steps mentioned earlier. For instance, + `dockerhublogin` corresponds to the **DockerHub** ID saved during the + credential saving process for your Docker Hub Registry's username and + password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID + assigned for the Kubeconfig credential file. 
- Below is an example of a Jenkins declarative Pipeline Script file: pipeline { - environment { - dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" - dockerImage = "" - } - - agent any - - stages { - - stage('Checkout Source') { - steps { - git branch: 'main', url: 'https://github.com//nodeapp.git' - } - } - - stage('Build image') { - steps{ - script { - dockerImage = docker.build dockerimagename - } - } - } - - stage('Pushing Image') { - environment { - registryCredential = 'dockerhublogin' - } - steps{ - script { - docker.withRegistry('https://registry.hub.docker.com', registryCredential){ - dockerImage.push() - } - } - } - } - - stage('Docker Remove Image') { - steps { - sh "docker rmi -f ${dockerimagename}" - sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" - } - } - - stage('Deploying App to Kubernetes') { - steps { - sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" - withKubeConfig([credentialsId: 'kubernetes']) { - sh 'kubectl apply -f deploymentservice.yml' - } - } - } - } + environment { + dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" + dockerImage = "" + } + + agent any + + stages { + + stage('Checkout Source') { + steps { + git branch: 'main', url: 'https://github.com//nodeapp.git' + } + } + + stage('Build image') { + steps{ + script { + dockerImage = docker.build dockerimagename + } + } + } + + stage('Pushing Image') { + environment { + registryCredential = 'dockerhublogin' + } + steps{ + script { + docker.withRegistry('https://registry.hub.docker.com', registryCredential){ + dockerImage.push() + } + } + } + } + + stage('Docker Remove Image') { + steps { + sh "docker rmi -f ${dockerimagename}" + sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" + } + } + + stage('Deploying App to Kubernetes') { + steps { + sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" + withKubeConfig([credentialsId: 'kubernetes']) { + sh 'kubectl apply -f deploymentservice.yml' + } + } + } + } } !!! question "Other way to Generate Pipeline Jenkinsfile" - You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** - link shown when you create a new Pipeline when clicking the "New Item" menu - link. + You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** + link shown when you create a new Pipeline when clicking the "New Item" menu + link. ## Setup a Pipeline diff --git a/docs/other-tools/apache-spark/spark.md b/docs/other-tools/apache-spark/spark.md index 0478e1f7..d9905720 100644 --- a/docs/other-tools/apache-spark/spark.md +++ b/docs/other-tools/apache-spark/spark.md @@ -65,8 +65,8 @@ and Scala applications using the IP address of the master VM. !!! note "Note" - Installing Scala means installing various command-line tools such as the - Scala compiler and build tools. + Installing Scala means installing various command-line tools such as the + Scala compiler and build tools. - Download and unpack Apache Spark: @@ -81,10 +81,10 @@ and Scala applications using the IP address of the master VM. !!! warning "Very Important Note" - Please ensure you are using the latest Spark version by modifying the - `SPARK_VERSION` in the above script. Additionally, verify that the version - exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` - as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). 
+ Please ensure you are using the latest Spark version by modifying the + `SPARK_VERSION` in the above script. Additionally, verify that the version + exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` + as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). - Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: @@ -145,11 +145,11 @@ and Scala applications using the IP address of the master VM. !!! note "Naming, Security Group and Flavor for Worker Nodes" - You can specify the "Instance Name" as "spark-worker", and for each instance, - it will automatically append incremental values at the end, such as - `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached - the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the worker instances. + You can specify the "Instance Name" as "spark-worker", and for each instance, + it will automatically append incremental values at the end, such as + `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached + the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the worker instances. Additionally, during launch, you will have the option to choose your preferred flavor for the worker nodes, which can differ from the master VM based on your @@ -189,8 +189,8 @@ computational requirements. !!! danger "Very Important Note" - Make sure to use `>>` instead of `>` to avoid overwriting the existing content - and append the new content at the end of the file. + Make sure to use `>>` instead of `>` to avoid overwriting the existing content + and append the new content at the end of the file. For example, the end of the `/etc/hosts` file looks like this: @@ -222,13 +222,13 @@ computational requirements. !!! tip "Environment Variables" - Executing this command: `readlink -f $(which java)` will display the path - to the current Java setup in your VM. For example: - `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the - last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set - it as the `JAVA_HOME` environment variable. - Learn more about other Spark settings that can be configured through environment - variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). + Executing this command: `readlink -f $(which java)` will display the path + to the current Java setup in your VM. For example: + `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the + last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set + it as the `JAVA_HOME` environment variable. + Learn more about other Spark settings that can be configured through environment + variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). For example: @@ -269,8 +269,8 @@ computational requirements. !!! info "How to Stop All Spark Cluster" - To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` - command from `/usr/local/spark`. + To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` + command from `/usr/local/spark`. ## Connect to the Spark WebUI @@ -335,9 +335,9 @@ resources for both the Spark cluster and individual applications. !!! 
warning "Very Important Note" - Please ensure you are using the same Spark version that you have - [downloaded and installed previously](#setup-a-master-vm) as the value - of `SPARK_VERSION` in the above script. + Please ensure you are using the same Spark version that you have + [downloaded and installed previously](#setup-a-master-vm) as the value + of `SPARK_VERSION` in the above script. - **Single Node Job:** diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md index 74486ea0..4752f35f 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md @@ -199,8 +199,8 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a !!! note "Important Note" - If you're doing this from your local development machine, remove - `sudo k3s` and just use `kubectl`) + If you're doing this from your local development machine, remove `sudo k3s` + and just use `kubectl`) - Get bearer **token** diff --git a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md index ca720590..a18250af 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md @@ -46,15 +46,15 @@ Availability clusters just with few commands. **kubectl**: the command line util to talk to your cluster. - snap install kubectl --classic + snap install kubectl --classic This outputs: - kubectl 1.26.1 from Canonical✓ installed + kubectl 1.26.1 from Canonical✓ installed - Now verify the kubectl version: - kubectl version -o yaml + kubectl version -o yaml --- diff --git a/docs/other-tools/kubernetes/k3s/k3s.md b/docs/other-tools/kubernetes/k3s/k3s.md index f2bd4d70..7d04aff7 100644 --- a/docs/other-tools/kubernetes/k3s/k3s.md +++ b/docs/other-tools/kubernetes/k3s/k3s.md @@ -66,9 +66,9 @@ must be accessible to each other on ports **2379** and **2380**. !!! note "Important Note" - The VXLAN overlay networking port on nodes should not be exposed to the world - as it opens up your cluster network to be accessed by anyone. Run your nodes - behind a firewall/security group that disables access to port **8472**. + The VXLAN overlay networking port on nodes should not be exposed to the world + as it opens up your cluster network to be accessed by anyone. Run your nodes + behind a firewall/security group that disables access to port **8472**. - setup Unique hostname to each machine using the following command: diff --git a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md index ca16f8a0..5c4d0a93 100644 --- a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md @@ -142,7 +142,7 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - 6443 is the default port of **kube-apiserver** + 6443 is the default port of **kube-apiserver** ```sh backend be-apiserver @@ -183,8 +183,8 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - If you see failures for `master1` and `master2` connectivity, you can ignore - them for time being as you have not yet installed anything on the servers. + If you see failures for `master1` and `master2` connectivity, you can ignore + them for time being as you have not yet installed anything on the servers. 
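Before moving on, it can also help to confirm that the load balancer configuration is syntactically valid and that HAProxy is actually listening on the frontend port 6443. A minimal sanity check, assuming the default config path `/etc/haproxy/haproxy.cfg`, might look like:

```sh
# Validate the HAProxy configuration file without restarting the service:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Confirm the service is running and bound to port 6443:
sudo systemctl status haproxy
sudo ss -tlnp | grep 6443
```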
--- @@ -352,10 +352,10 @@ same in `master2`. !!! danger "Configuring the kubelet cgroup driver" - From 1.22 onwards, if you do not set the `cgroupDriver` field under - `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do - not need to do anything here by default but if you want you change it you can - refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + From 1.22 onwards, if you do not set the `cgroupDriver` field under + `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do + not need to do anything here by default but if you want you change it you can + refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). - Execute the below command to initialize the cluster: @@ -377,12 +377,12 @@ same in `master2`. !!! note "Important Note" - `--pod-network-cidr` value depends upon what CNI plugin you going to use so - need to be very careful while setting this CIDR values. In our case, you are - going to use **Flannel** CNI network plugin so you will use: - `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI - network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` and - if you are opted to use **Weave Net** no need to pass this parameter. + `--pod-network-cidr` value depends upon what CNI plugin you going to use so + need to be very careful while setting this CIDR values. In our case, you are + going to use **Flannel** CNI network plugin so you will use: + `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI + network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` and + if you are opted to use **Weave Net** no need to pass this parameter. For example, our `Flannel` CNI network plugin based kubeadm init command with _loadbalancer node_ with internal IP: `192.168.0.167` look like below: @@ -454,12 +454,12 @@ same in `master2`. !!! warning "Warning" - Kubeadm signs the certificate in the admin.conf to have - `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a - break-glass, super user group that bypasses the authorization layer - (e.g. RBAC). Do not share the admin.conf file with anyone and instead - grant users custom permissions by generating them a kubeconfig file using - the `kubeadm kubeconfig user` command. + Kubeadm signs the certificate in the admin.conf to have + `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a + break-glass, super user group that bypasses the authorization layer + (e.g. RBAC). Do not share the admin.conf file with anyone and instead + grant users custom permissions by generating them a kubeconfig file using + the `kubeadm kubeconfig user` command. B. Setup a new control plane (master) i.e. `master2` by running following command on **master2** node: @@ -480,9 +480,9 @@ same in `master2`. !!! note "Important Note" - **Your output will be different than what is provided here. While - performing the rest of the demo, ensure that you are executing the - command provided by your output and dont copy and paste from here.** + **Your output will be different than what is provided here. While + performing the rest of the demo, ensure that you are executing the + command provided by your output and dont copy and paste from here.** If you do not have the token, you can get it by running the following command on the control-plane node: @@ -618,11 +618,11 @@ kubeconfig and `kubectl`. !!! 
note "Important Note" - If you havent setup ssh connection between master node and loadbalancer, you - can manually copy the contents of the file `/etc/kubernetes/admin.conf` from - `master1` node and then paste it to `$HOME/.kube/config` file on the - loadbalancer node. Ensure that the kubeconfig file path is - **`$HOME/.kube/config`** on the loadbalancer node. + If you havent setup ssh connection between master node and loadbalancer, you + can manually copy the contents of the file `/etc/kubernetes/admin.conf` from + `master1` node and then paste it to `$HOME/.kube/config` file on the + loadbalancer node. Ensure that the kubeconfig file path is + **`$HOME/.kube/config`** on the loadbalancer node. - Provide appropriate ownership to the copied file @@ -638,21 +638,21 @@ kubeconfig and `kubectl`. **kubectl**: the command line util to talk to your cluster. - snap install kubectl --classic + snap install kubectl --classic This outputs: - kubectl 1.26.1 from Canonical✓ installed + kubectl 1.26.1 from Canonical✓ installed - Verify the cluster - kubectl get nodes + kubectl get nodes - NAME STATUS ROLES AGE VERSION - master1 NotReady control-plane,master 21m v1.26.1 - master2 NotReady control-plane,master 15m v1.26.1 - worker1 Ready 9m17s v1.26.1 - worker2 Ready 9m25s v1.26.1 + NAME STATUS ROLES AGE VERSION + master1 NotReady control-plane,master 21m v1.26.1 + master2 NotReady control-plane,master 15m v1.26.1 + worker1 Ready 9m17s v1.26.1 + worker2 Ready 9m25s v1.26.1 --- @@ -900,14 +900,14 @@ following commands: !!! info "Information" - Since 1.22, this type of Secret is no longer used to mount credentials into - Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) - is recommended instead of using service account token Secret objects. Tokens - obtained from the _TokenRequest API_ are more secure than ones stored in Secret - objects, because they have a bounded lifetime and are not readable by other API - clients. You can use the `kubectl create token` command to obtain a token from - the TokenRequest API. For example: `kubectl create token skooner-sa`, where - `skooner-sa` is service account name. + Since 1.22, this type of Secret is no longer used to mount credentials into + Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) + is recommended instead of using service account token Secret objects. Tokens + obtained from the _TokenRequest API_ are more secure than ones stored in Secret + objects, because they have a bounded lifetime and are not readable by other API + clients. You can use the `kubectl create token` command to obtain a token from + the TokenRequest API. For example: `kubectl create token skooner-sa`, where + `skooner-sa` is service account name. - Find the secret that was created to hold the token for the SA diff --git a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md index 9918f323..27a29278 100644 --- a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md @@ -226,10 +226,10 @@ with your chosen container runtime. !!! 
danger "Configuring the kubelet cgroup driver" - From 1.22 onwards, if you do not set the `cgroupDriver` field under - `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do - not need to do anything here by default but if you want you change it you - can refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + From 1.22 onwards, if you do not set the `cgroupDriver` field under + `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do + not need to do anything here by default but if you want you change it you + can refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). --- @@ -252,14 +252,14 @@ control plane. !!! note "Important Note" - Please make sure you replace the correct IP of the node with - `` which is the Internal IP of master node. - `--pod-network-cidr` value depends upon what CNI plugin you going to use - so need to be very careful while setting this CIDR values. In our case, - you are going to use **Flannel** CNI network plugin so you will use: - `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI - network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` - and if you are opted to use **Weave Net** no need to pass this parameter. + Please make sure you replace the correct IP of the node with + `` which is the Internal IP of master node. + `--pod-network-cidr` value depends upon what CNI plugin you going to use + so need to be very careful while setting this CIDR values. In our case, + you are going to use **Flannel** CNI network plugin so you will use: + `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI + network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` + and if you are opted to use **Weave Net** no need to pass this parameter. For example, our `Flannel` CNI network plugin based kubeadm init command with _master node_ with internal IP: `192.168.0.167` look like below: @@ -334,12 +334,12 @@ control plane. !!! warning "Warning" - Kubeadm signs the certificate in the admin.conf to have - `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a - break-glass, super user group that bypasses the authorization layer - (e.g. RBAC). Do not share the admin.conf file with anyone and instead - grant users custom permissions by generating them a kubeconfig file using - the `kubeadm kubeconfig user` command. + Kubeadm signs the certificate in the admin.conf to have + `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a + break-glass, super user group that bypasses the authorization layer + (e.g. RBAC). Do not share the admin.conf file with anyone and instead + grant users custom permissions by generating them a kubeconfig file using + the `kubeadm kubeconfig user` command. B. Join worker nodes running following command on individual worker nodes: @@ -350,9 +350,9 @@ control plane. !!! note "Important Note" - **Your output will be different than what is provided here. While - performing the rest of the demo, ensure that you are executing the - command provided by your output and dont copy and paste from here.** + **Your output will be different than what is provided here. 
While + performing the rest of the demo, ensure that you are executing the + command provided by your output and dont copy and paste from here.** If you do not have the token, you can get it by running the following command on the control-plane node: @@ -691,15 +691,15 @@ following commands: !!! info "Information" - Since 1.22, this type of Secret is no longer used to mount credentials into - Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) - is recommended instead of using service account token Secret objects. Tokens - obtained from the _TokenRequest API_ are more secure than ones stored in - Secret objects, because they have a bounded lifetime and are not readable - by other API clients. You can use the `kubectl create token` command to - obtain a token from the TokenRequest API. For example: - `kubectl create token skooner-sa`, where `skooner-sa` is service account - name. + Since 1.22, this type of Secret is no longer used to mount credentials into + Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) + is recommended instead of using service account token Secret objects. Tokens + obtained from the _TokenRequest API_ are more secure than ones stored in + Secret objects, because they have a bounded lifetime and are not readable + by other API clients. You can use the `kubectl create token` command to + obtain a token from the TokenRequest API. For example: + `kubectl create token skooner-sa`, where `skooner-sa` is service account + name. - Find the secret that was created to hold the token for the SA diff --git a/docs/other-tools/kubernetes/kubespray.md b/docs/other-tools/kubernetes/kubespray.md index 58a11dde..fe774f6e 100644 --- a/docs/other-tools/kubernetes/kubespray.md +++ b/docs/other-tools/kubernetes/kubespray.md @@ -230,10 +230,10 @@ control plane. !!! note "Very Important" - As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** - the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: - `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will - _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. + As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** + the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: + `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will + _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. - Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. The option `--become` is required, as for example writing SSL keys in `/etc/`, @@ -246,8 +246,8 @@ control plane. !!! note "Note" - Running ansible playbook takes little time because it depends on the network - bandwidth also. + Running ansible playbook takes little time because it depends on the network + bandwidth also. --- diff --git a/docs/other-tools/kubernetes/microk8s.md b/docs/other-tools/kubernetes/microk8s.md index b5ab508b..0cf5f652 100644 --- a/docs/other-tools/kubernetes/microk8s.md +++ b/docs/other-tools/kubernetes/microk8s.md @@ -82,13 +82,13 @@ Run the below command on the Ubuntu VM: !!! 
note "Note" - Another way to access the default token to be used for the dashboard access - can be retrieved with: + Another way to access the default token to be used for the dashboard access + can be retrieved with: - ```sh - token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) - microk8s kubectl -n kube-system describe secret $token - ``` + ```sh + token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) + microk8s kubectl -n kube-system describe secret $token + ``` - Keep running the kubernetes-dashboad on Proxy to access it via web browser: @@ -103,11 +103,11 @@ Run the below command on the Ubuntu VM: !!! note "Important" - This tells us the IP address of the Dashboard and the port. The values assigned - to your Dashboard will differ. Please note the displayed **PORT** and - the **TOKEN** that are required to access the kubernetes-dashboard. Make - sure, the exposed **PORT** is opened in Security Groups for the instance - following [this guide](../../openstack/access-and-security/security-groups.md). + This tells us the IP address of the Dashboard and the port. The values assigned + to your Dashboard will differ. Please note the displayed **PORT** and + the **TOKEN** that are required to access the kubernetes-dashboard. Make + sure, the exposed **PORT** is opened in Security Groups for the instance + following [this guide](../../openstack/access-and-security/security-groups.md). This will show the token to login to the Dashbord shown on the url with NodePort. diff --git a/docs/other-tools/kubernetes/minikube.md b/docs/other-tools/kubernetes/minikube.md index 45d08503..379e41ce 100644 --- a/docs/other-tools/kubernetes/minikube.md +++ b/docs/other-tools/kubernetes/minikube.md @@ -220,30 +220,30 @@ with your chosen container runtime. !!! note "Note" - - To check the internal IP, run the `minikube ip` command. + - To check the internal IP, run the `minikube ip` command. - - By default, Minikube uses the driver most relevant to the host OS. To - use a different driver, set the `--driver` flag in `minikube start`. For - example, to use others or none instead of Docker, run - `minikube start --driver=none`. To persistent configuration so that - you to run minikube start without explicitly passing i.e. in global scope - the `--vm-driver docker` flag each time, run: - `minikube config set vm-driver docker`. + - By default, Minikube uses the driver most relevant to the host OS. To + use a different driver, set the `--driver` flag in `minikube start`. For + example, to use others or none instead of Docker, run + `minikube start --driver=none`. To persistent configuration so that + you to run minikube start without explicitly passing i.e. in global scope + the `--vm-driver docker` flag each time, run: + `minikube config set vm-driver docker`. 
- - Other start options: - `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` + - Other start options: + `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` - - In case you want to start minikube with customize resources and want installer - to automatically select the driver then you can run following command, - `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true - --kubernetes-version=stable --memory=6g` + - In case you want to start minikube with customize resources and want installer + to automatically select the driver then you can run following command, + `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true + --kubernetes-version=stable --memory=6g` - Output would like below: + Output would like below: - ![Minikube sucessfully started](images/minikube_started.png) + ![Minikube sucessfully started](images/minikube_started.png) - Perfect, above confirms that minikube cluster has been configured and started - successfully. + Perfect, above confirms that minikube cluster has been configured and started + successfully. - Run below minikube command to check status: diff --git a/nerc-theme/main.html b/nerc-theme/main.html index a2356f6f..c9dc6f40 100644 --- a/nerc-theme/main.html +++ b/nerc-theme/main.html @@ -1,12 +1,12 @@ {% extends "base.html" %} {% block announce %}
-  Upcoming NERC Network Equipment and Switch Maintenance
+  Upcoming Multi-Day NERC OpenStack Platform Version Upgrade
-  (Tuesday Jan 7, 2025 9 AM ET - Wednesday Jan 8, 2025 9 AM ET)
+  (Dec 12, 2024 8:00 AM ET - Dec 14, 2024 8:00 PM ET)
   [Timeline and info]