
How To Add Worker Nodes To Amazon EKS Cluster

By Rudhra Sivam

In this article, we will learn how to add a Node Group (worker nodes) to an Amazon EKS cluster. Before getting into this guide, refer to our previous guide on how to create a Kubernetes cluster (Amazon EKS) in the AWS cloud.

1. Add Node Group in EKS Cluster

You can provision worker nodes from Amazon EC2 instances by adding Node Group in EKS Cluster. For that, you need to create an IAM role for Worker nodes.

1.1. Create IAM role for EKS Worker Nodes

Get into the IAM Console and create a role, as we did for the cluster role.

Amazon Console → IAM Console → Roles → Create role.

Create role

Select 'AWS service' as the trusted entity type and choose 'EC2' under use cases.

Select trusted entity type

We need to attach three policies to the role for provisioning worker nodes from Amazon EC2:

  • AmazonEKSWorkerNodePolicy
  • AmazonEKS_CNI_Policy
  • AmazonEC2ContainerRegistryReadOnly

Search for these policies using the keywords 'AmazonEKS' and 'AmazonEC2', and select them.

Permissions policies for role

Search for 'Amazon EC2' and choose 'AmazonEC2ContainerRegistryReadOnly' as well.

Choose permissions policies for role

On the next page, name the role and review the configuration. Here, we are naming it 'ostechnix_workers'.

Enter role details

Ensure the three policies mentioned above are attached, then create the role.

Verify and create IAM role
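If you prefer the command line, the same role can be created with the AWS CLI. The following is a sketch, assuming the AWS CLI is configured with credentials that can manage IAM; the role name matches the 'ostechnix_workers' used in this guide:

```shell
# Trust policy allowing EC2 instances (the worker nodes) to assume the role
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name ostechnix_workers \
  --assume-role-policy-document file://node-trust-policy.json

# Attach the three managed policies required for EKS worker nodes
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name ostechnix_workers \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```

Either way, the end result is the same role that we will select while creating the node group below.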

1.2. Add worker Nodes

To add worker nodes, get into the EKS cluster that we created.

AWS Console → EKS → Clusters → ostechnix.

There are no nodes available right now. Navigate to the Configuration tab to add nodes.

EKS Cluster Configuration

Click 'Add Node Group' to configure the worker nodes.

Click 'Add Node Group'

In the 'Configure Node Group' page, we are naming the node group 'ostechnix_workers'. Select the IAM role; if you have not yet created an IAM role for the worker nodes, go to the IAM console and create one.

Select IAM role

In the previous step (1.1), we created the IAM role. Refresh the role list and select that role for the worker nodes. Click 'Next' at the bottom to proceed.

Configure Node Group

On the next page, 'Set compute and scaling configuration', you can configure the EC2 instance type and the scaling options.

Node Group Compute Configuration

Here, I am selecting On-Demand Linux 't3.micro' instances with a disk size of 20 GB.

Set compute and scaling configuration

Node Group scaling configuration

Here you can configure the minimum, maximum, and desired number of nodes.

Node Group update configuration

Here you can configure the maximum number (or percentage) of nodes that can be unavailable during a node group version update.

Once all the configuration is done, click 'Next' to proceed further.
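The same node group can also be created from the AWS CLI. The following is a sketch of the configuration chosen above; the subnet IDs and the account ID in the role ARN are placeholders that must be replaced with your own values:

```shell
# Create the node group with the compute, scaling, and update settings
# chosen in the console walkthrough above.
# NOTE: subnet IDs and the account ID (123456789012) are placeholders.
aws eks create-nodegroup \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers \
  --node-role arn:aws:iam::123456789012:role/ostechnix_workers \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --instance-types t3.micro \
  --capacity-type ON_DEMAND \
  --disk-size 20 \
  --scaling-config minSize=1,maxSize=2,desiredSize=2 \
  --update-config maxUnavailable=1
```

The `--scaling-config` and `--update-config` flags correspond to the scaling and update configuration sections shown in the console.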

Node Group Configuration

On this page, review all the configuration we set up in the previous steps and click 'Create' at the bottom to confirm the node group creation.

Node Group Created

Node Group creation will take a few minutes to complete.

Node Group Creation in progress
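Instead of polling the console, you can wait for the node group from the CLI. A minimal sketch, assuming the cluster and node group names used in this guide:

```shell
# Block until the node group reaches the ACTIVE state
aws eks wait nodegroup-active \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers

# Confirm the status (should print ACTIVE)
aws eks describe-nodegroup \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers \
  --query 'nodegroup.status' --output text
```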

Once created, you can verify the Node Group and nodes available in that group.

Go to Amazon console → EKS → Clusters → ostechnix → Configuration → Compute → Node Group → Nodes.

EKS Cluster Worker Nodes

Verify the same from the CLI using the kubectl command.

[root@ostechnix ~]# kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-15-64.ap-south-1.compute.internal   Ready    <none>   2m11s   v1.21.5-eks-9017834
ip-172-31-27-30.ap-south-1.compute.internal   Ready    <none>   115s    v1.21.5-eks-9017834
Get node list
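To check the node state in a script rather than by eye, you can count the nodes reporting `Ready`. The sketch below runs the check against the sample output shown above; in a live cluster you would pipe `kubectl get nodes --no-headers` in its place:

```shell
# Sample output copied from 'kubectl get nodes' above (no header row).
# In a live cluster, replace the sample with:
#   kubectl get nodes --no-headers
sample='ip-172-31-15-64.ap-south-1.compute.internal   Ready    <none>   2m11s   v1.21.5-eks-9017834
ip-172-31-27-30.ap-south-1.compute.internal   Ready    <none>   115s    v1.21.5-eks-9017834'

# Count the lines whose STATUS column (field 2) is exactly "Ready"
ready_count=$(printf '%s\n' "$sample" | awk '$2 == "Ready" {n++} END {print n+0}')
echo "Ready nodes: $ready_count"
```

With the two worker nodes created above, this prints `Ready nodes: 2`.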

2. Delete the Cluster

Go to Amazon Console → EKS → Clusters.

Click the cluster name that you want to delete.

Click EKS cluster name

Before deleting the cluster, you need to delete the node groups associated with that cluster.

Once you get into the cluster, click 'Configuration' and then click 'Compute'. Select the Node Group and click 'Delete'.

Choose And Delete Node Groups

You will get a confirmation page for deleting the node group. Type the name of the node group and click 'Delete'.

Delete Node Group Confirmation

Once the node group is deleted, verify that no node groups remain, then proceed to delete the cluster.

Delete Cluster

Once you click 'Delete Cluster', you will get a confirmation page; enter the cluster name and hit the 'Delete' button.

Delete Cluster Confirmation Box
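The same teardown, node group first and then the cluster, can be sketched with the AWS CLI (assuming the names used in this guide):

```shell
# Delete the node group first; the cluster cannot be deleted
# while a node group is still attached to it.
aws eks delete-nodegroup \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers

# Wait until the node group is fully removed
aws eks wait nodegroup-deleted \
  --cluster-name ostechnix \
  --nodegroup-name ostechnix_workers

# Now the cluster itself can be deleted
aws eks delete-cluster --name ostechnix
```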

Conclusion

In this article, we have gone through, in detail, how to add worker nodes to an Amazon EKS cluster in the AWS cloud. In the next article, we will cover the detailed procedure for provisioning an EKS cluster through the EKS CLI.
