Deploy Cluster Autoscaler on an EKS cluster using Terraform and Helm.
- Docker
- AWS user with programmatic access and elevated (admin-level) privileges
- Linux terminal
- Deploy an EKS Kubernetes cluster with self-managed worker nodes on AWS using Terraform.
- Deploy a Metrics Server on the above EKS cluster using Terraform and Helm.
- Clone the repository
- Set the environment variable TF_VAR_AWS_PROFILE
- Review the Terraform variable values in variables.tf and locals.tf
- Override Helm chart values through the "chart_values.yaml" file
- Update kubernetes.tf with the AWS S3 bucket name and key name from the output of the EKS Kubernetes cluster deployment (see the sketch after this list)
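The kubernetes.tf update usually amounts to pointing a terraform_remote_state data source at the state of the EKS cluster project. A minimal sketch, assuming that project stores its state in S3; the bucket, key, region, and output names below are placeholders rather than this repository's actual values:

```hcl
# Sketch only: read the existing EKS cluster's Terraform state from S3.
# Replace bucket/key/region with the values output by the EKS cluster project.
data "terraform_remote_state" "eks_cluster" {
  backend = "s3"

  config = {
    bucket = "<s3-bucket-name-from-eks-cluster-output>"
    key    = "<s3-key-name-from-eks-cluster-output>"
    region = "us-east-1" # assumption: region of the state bucket
  }
}

# The kubernetes/helm providers and the Helm release can then reference outputs
# such as data.terraform_remote_state.eks_cluster.outputs.cluster_name
# (output names depend on the upstream EKS project).
```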
- Configure the AWS user with the AWS CLI.
docker-compose run --rm aws configure --profile $TF_VAR_AWS_PROFILE
docker-compose run --rm aws sts get-caller-identity
- Select the appropriate Terraform workspace.
docker-compose run --rm terraform workspace show
docker-compose run --rm terraform workspace select default
- Run Terraform apply to deploy the Cluster Autoscaler and related resources onto the existing EKS cluster (a sketch of the Helm release it provisions follows the commands below).
./run-docker-compose.sh terraform init
./run-docker-compose.sh terraform validate
./run-docker-compose.sh terraform plan
./run-docker-compose.sh terraform apply
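At its core, the apply provisions a Helm release of the Cluster Autoscaler chart onto the existing cluster. A minimal sketch of such a resource, assuming the official kubernetes/autoscaler chart repository; the resource name and set values are illustrative rather than this repository's exact code, which also consumes the "chart_values.yaml" overrides:

```hcl
# Sketch only: roughly what `terraform apply` creates for the autoscaler.
resource "helm_release" "cluster_autoscaler" {
  name       = "cluster-autoscaler"
  namespace  = "kube-system"
  repository = "https://kubernetes.github.io/autoscaler"
  chart      = "cluster-autoscaler"

  # Overrides reviewed in the setup steps above.
  values = [file("${path.module}/chart_values.yaml")]

  # The autoscaler discovers Auto Scaling groups tagged for this cluster name
  # (set block syntax per Helm provider 2.x).
  set {
    name  = "autoDiscovery.clusterName"
    value = local.cluster_name
  }
}
```

A release named "cluster-autoscaler" with this chart is consistent with the cluster-autoscaler-aws-cluster-autoscaler deployment name used in the kubectl log command below.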
- Verify kubectl access and ensure the Cluster Autoscaler pod is in Running status.
./run-docker-compose.sh kubectl get pods -A | grep -i autoscaler
- Check the logs of the cluster-autoscaler deployment to view the scale-in and scale-out actions.
./run-docker-compose.sh kubectl logs -f deploy/cluster-autoscaler-aws-cluster-autoscaler -n kube-system
- Troubleshooting: NotTriggerScaleUp (pod didn't trigger scale-up)
Issue: cluster-autoscaler did not trigger the ASG to add worker nodes
Fix:
Add the following two tags to the Auto Scaling group (ASG) of the self-managed node groups, as sketched below.
"k8s.io/cluster-autoscaler/enabled" = "TRUE"
"k8s.io/cluster-autoscaler/${local.cluster_name}" = "owned"
- 0.1: Initial Release
This project is licensed under the MIT License - see the LICENSE file for details