In this article, we will learn how to add a Node Group (worker nodes) to an Amazon EKS cluster. Before getting into this guide, refer to the guide below to learn how to create a Kubernetes cluster (Amazon EKS) in the AWS cloud.
1. Add Node Group in EKS Cluster
You can provision worker nodes from Amazon EC2 instances by adding a Node Group to the EKS cluster. For that, you first need to create an IAM role for the worker nodes.
1.1. Create IAM role for EKS Worker Nodes
Get into the IAM Console and create a role, just as we did for the Master node.
Amazon Console 🡪 IAM Console 🡪 Roles 🡪 Create role.
Select 'AWS Service' and choose 'EC2' under use cases.
We need to attach 3 policies to this role for provisioning worker nodes from Amazon EC2.
Search for 'AmazonEKS' and select the 'AmazonEKSWorkerNodePolicy' and 'AmazonEKS_CNI_Policy' policies.
Search for 'AmazonEC2' and choose 'AmazonEC2ContainerRegistryReadOnly' as well.
On the next page, name the role and review it. Here, we are naming it 'ostechnix_workers'.
Ensure the above-mentioned 3 policies are selected and create the role.
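The same role can also be created from the command line. Below is a minimal sketch with the AWS CLI, assuming the role name 'ostechnix_workers' used above and a standard EC2 trust policy:

```shell
# Trust policy letting EC2 instances assume the role
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name ostechnix_workers \
  --assume-role-policy-document file://node-trust-policy.json

# Attach the three managed policies the worker nodes need
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name ostechnix_workers \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```

These commands require valid AWS credentials with IAM permissions; the console steps above achieve the same result.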
1.2. Add Worker Nodes
To add worker nodes, get into the EKS cluster that we created.
AWS Console 🡪 EKS 🡪 Clusters 🡪 ostechnix.
There are no nodes available yet. Navigate to the 'Configuration' tab to add nodes.
Click 'Add Node Group' to configure the worker nodes.
In the 'Configure Node Group' page, we are naming the node group 'ostechnix_workers'. Select the IAM role; if you have not yet created an IAM role for the worker nodes, get into the IAM console and create one.
In the previous step (1.1), we created the IAM role. Refresh the role list, select that role for the worker nodes, and click 'Next' at the bottom to proceed.
On the next page, 'Set compute and scaling configuration', you can configure the EC2 instance type and scaling options.
Node Group Compute Configuration
Here I am selecting On-Demand Linux 't3.micro' instances with a disk size of 20 GB.
Node Group scaling configuration
Here you can configure the Minimum, Maximum, and Desired number of nodes.
Node Group update configuration
Here you can configure the maximum number (or percentage) of nodes that can be unavailable in parallel during a node group version update.
Once all the configuration is done, click 'Next' to proceed further.
On this page, review all the configuration we set up in the previous steps and click 'Create' at the bottom to confirm the Node Group creation.
Node Group creation will take a few minutes to complete.
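For reference, the same node group can be created from the command line. The sketch below reuses the names from this article (cluster 'ostechnix', role and node group 'ostechnix_workers'); the account ID, subnet IDs, and scaling sizes are placeholder example values you would replace with your own:

```shell
# Create the node group with the compute, scaling, and update
# settings chosen above (t3.micro, 20 GB disk, On-Demand)
aws eks create-nodegroup \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers \
  --node-role arn:aws:iam::123456789012:role/ostechnix_workers \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --instance-types t3.micro \
  --disk-size 20 \
  --capacity-type ON_DEMAND \
  --scaling-config minSize=1,maxSize=2,desiredSize=2 \
  --update-config maxUnavailable=1

# Block until the node group is active (takes a few minutes)
aws eks wait nodegroup-active \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers
```

The `wait nodegroup-active` subcommand polls until provisioning finishes, which is handy in scripts.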
Once created, you can verify the Node Group and nodes available in that group.
Go to Amazon console 🡪 EKS 🡪 Clusters 🡪 ostechnix 🡪 Configuration 🡪 Compute 🡪 Node Group 🡪 Nodes.
Verify the same in the CLI using kubectl command.
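If kubectl is not yet pointed at this cluster, update your kubeconfig first. A quick sketch, assuming the cluster name 'ostechnix' in the ap-south-1 region (taken from the node names in the output):

```shell
# Write/merge the EKS cluster credentials into ~/.kube/config
aws eks update-kubeconfig --region ap-south-1 --name ostechnix

# List the worker nodes that joined the cluster
kubectl get nodes
```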
[[email protected] ~]# kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-15-64.ap-south-1.compute.internal   Ready    <none>   2m11s   v1.21.5-eks-9017834
ip-172-31-27-30.ap-south-1.compute.internal   Ready    <none>   115s    v1.21.5-eks-9017834
2. Delete the Cluster
Go to Amazon Console 🡪 EKS🡪 Clusters.
Click the cluster name that you want to delete.
Before deleting the cluster, you need to delete the node groups associated with that cluster.
Once you get into the cluster, click 'Configuration' and then 'Compute'. Select the Node Group and click 'Delete'.
You will get a confirmation page to delete the Node Group. Type the name of the group and click 'Delete'.
Once you have deleted the Node Group, verify that no Node Group remains and proceed to delete the cluster.
Once you click 'Delete Cluster', you will get a confirmation page; enter the cluster name and hit the 'Delete' button.
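The teardown can also be scripted. A minimal sketch with the AWS CLI, assuming the same 'ostechnix' names as above; note that the node group must be fully deleted before the cluster delete will succeed:

```shell
# Delete the node group first and wait until it is gone
aws eks delete-nodegroup \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers
aws eks wait nodegroup-deleted \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers

# Now the cluster itself can be deleted
aws eks delete-cluster --name ostechnix
```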
In this article, we have gone through adding a Node Group and worker nodes to an EKS cluster in detail. In the next article, we will cover the detailed procedure for EKS cluster provisioning through the EKS CLI.