
Install Kubernetes Cluster Using Kubeadm In RHEL, CentOS, AlmaLinux, Rocky Linux

Setup Kubernetes Cluster Using Kubeadm In Linux

By Rudhra Sivam

In this article, we are going to learn how to install a Kubernetes cluster using Kubeadm in RHEL 8 and its clones such as AlmaLinux 8, CentOS 8, and Rocky Linux 8.

Before getting into it, you must have a basic understanding of Kubernetes concepts and architecture. In this article, we are going to demonstrate a two-node cluster.

To proceed with the installation, we need the following basic requirements:

  • Minimum 2 hosts.
  • 2 CPUs per host.
  • 2 GB of physical memory (RAM) per host.
  • 20 GB of disk space per host.
  • Internet connection to download packages.

1. Configure Hostname and IP address

Set the hostname and configure the hosts file on the Master and Workers. The operating system's /etc/hosts file maps hostnames or domain names to IP addresses.

Here we are going to have two hosts:

  • ostechmaster - Master
  • ostechworker - Worker

Use the below command to set the hostname on the Master. A reboot is required after setting the hostname.

# hostnamectl set-hostname ostechmaster
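Similarly, set the hostname on the Worker node:

# hostnamectl set-hostname ostechworker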

Edit /etc/hosts file:

# vi /etc/hosts

Add both server and client hostname and IP address in the /etc/hosts file:

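For example, with the IP addresses used in this demonstration (they appear in the command outputs later in this article), the entries look like this:

172.31.10.29    ostechmaster
172.31.5.141    ostechworker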

Do a ping test to ensure connectivity:

[root@ostechmaster ~]# ping ostechworker
PING ostechworker (172.31.5.141) 56(84) bytes of data.
64 bytes from ostechworker (172.31.5.141): icmp_seq=1 ttl=64 time=0.472 ms
64 bytes from ostechworker (172.31.5.141): icmp_seq=2 ttl=64 time=0.492 ms
64 bytes from ostechworker (172.31.5.141): icmp_seq=3 ttl=64 time=1.43 ms
64 bytes from ostechworker (172.31.5.141): icmp_seq=4 ttl=64 time=0.425 ms

2. Disable SELinux

Disable SELinux on both the Master and Workers so that containers can readily access the host filesystem.
Set 'SELINUX=disabled' in the config file /etc/selinux/config using the vi editor. A reboot is required for the SELinux change to take effect.

[root@ostechmaster ~]# vi /etc/selinux/config
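Alternatively, if you prefer a non-interactive approach, the same change can be made with sed. The setenforce command additionally switches SELinux to permissive mode right away, so the change applies before the reboot:

[root@ostechmaster ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@ostechmaster ~]# setenforce 0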

Verify the SELinux status using the below command:

[root@ostechmaster ~]# sestatus
SELinux status: disabled

3. Disable Swap in Master and Worker

Swap must be deactivated on all Kubernetes hosts (Master and Workers); this is the Kubernetes community's preferred deployment method. By default, the kubelet service will not start on the master and workers if swap is enabled.

Run the below command to disable SWAP:

[root@ostechmaster ~]# swapoff -a && sed -i '/swap/d' /etc/fstab
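To confirm that swap is now off, you can check with swapon; empty output means there are no active swap devices:

[root@ostechmaster ~]# swapon --show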

4. Allow the required ports in firewall

For Kubernetes components to interact with one another, certain essential ports must be open. Below are the ports to be opened for connectivity among the Kubernetes components.

Control Plane / Master Server:

Protocol   Direction   Port Range    Purpose                    Used By
TCP        Inbound     6443          Kubernetes API server      All
TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API                Self, Control plane
TCP        Inbound     10259         kube-scheduler             Self
TCP        Inbound     10257         kube-controller-manager    Self

Worker nodes:

Protocol   Direction   Port Range     Purpose             Used By
TCP        Inbound     10250          Kubelet API         Self, Control plane
TCP        Inbound     30000-32767    NodePort Services   All

To allow the required ports through firewall, run the following commands.

Master Node:

[root@ostechmaster ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@ostechmaster ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
[root@ostechmaster ~]# firewall-cmd --permanent --add-port=10250/tcp
[root@ostechmaster ~]# firewall-cmd --permanent --add-port=10259/tcp
[root@ostechmaster ~]# firewall-cmd --permanent --add-port=10257/tcp
[root@ostechmaster ~]# firewall-cmd --reload
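You can verify the opened ports with:

[root@ostechmaster ~]# firewall-cmd --list-ports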

Worker Node:

[root@ostechworker ~]# firewall-cmd --permanent --add-port=10250/tcp
[root@ostechworker ~]# firewall-cmd --permanent --add-port=30000-32767/tcp
[root@ostechworker ~]# firewall-cmd --reload

For demonstration purposes, we are disabling the firewall on both the Master and Worker. However, this is not recommended in real-world production practice.

Use the below commands to stop and disable the firewall.

[root@ostechmaster ~]# systemctl stop firewalld
[root@ostechmaster ~]# systemctl disable firewalld

5. Install Docker

Docker makes it easier to "build" containers, whereas Kubernetes makes it possible to "manage" them at runtime. Use Docker to package and ship the software, and Kubernetes to launch and scale your app.

Add the Docker repository on all the machines in the cluster.

Create the file named docker.repo under /etc/yum.repos.d/ directory:

[root@ostechmaster ~]# vi /etc/yum.repos.d/docker.repo

Add the following lines in it:

[docker]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/8/x86_64/stable/
enabled=1
gpgcheck=0

Press ESC key and type :wq to save the file and close it.

Install docker in both Master and Worker nodes:

# yum -y install docker-ce

Once installed, enable and start the Docker on both nodes:

# systemctl enable docker
# systemctl start docker

Check and ensure that Docker is running on both machines:

# systemctl status docker

6. Install Kubernetes

Add the Kubernetes repository on the Master and Worker.

Create the file kubernetes.repo on both the Master and Worker under the /etc/yum.repos.d/ directory:

# vi /etc/yum.repos.d/kubernetes.repo

Add the following lines:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl

Press ESC and type :wq to save the file and close it.

Install kubeadm, kubelet and kubectl on the Master and Worker nodes using the below command:

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
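Optionally, confirm the installed versions on each machine:

# kubeadm version
# kubectl version --client
# kubelet --version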

Enable and start the kubelet service in both the machines:

# systemctl enable kubelet
# systemctl start kubelet

Check the status of the kubelet service on both machines. Note that until the cluster is initialized in the next step, the kubelet restarts in a loop while it waits for instructions; this is expected behavior.

# systemctl status kubelet

7. Initialize the Kubernetes Cluster

Use the below command to initialize the Kubernetes control plane on the Master server:

[root@ostechmaster ~]# kubeadm init

You will get the below output saying that the Kubernetes control plane has initialized successfully, along with the steps to start using the cluster. Follow those steps.

Also, copy and save the 'kubeadm join' command from the output; it will be used to join the worker node to the cluster.

Sample output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.10.29:6443 --token 220tvj.051bkeyj5tg6v55r \
        --discovery-token-ca-cert-hash sha256:434c49c7969256a7fae3880b340202cadd4fd29d3d381ab37e1cb8b1d05e86f2

Since we are proceeding as the root user, run the below command on the Master server, as mentioned in the above output.

[root@ostechmaster ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
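Note that this export only lasts for the current shell session. To make the setting persistent across root logins, you could append it to root's shell profile, for example:

[root@ostechmaster ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile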

8. Configure POD Network

A Kubernetes Pod network connects the Pods running on different nodes so that they can communicate with one another. This network concept may be implemented in several different ways. In our demonstration, we are going to use 'Weave Net'.

Run the below commands in the Master server to setup the POD Network.

[root@ostechmaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@ostechmaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Sample output:

serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@ostechmaster ~]#
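You can watch the Weave Net pods start in the kube-system namespace. The cluster network is ready once they reach the Running state:

[root@ostechmaster ~]# kubectl get pods -n kube-system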

9. Join the Worker Node

Run the 'kubeadm join' command on the Worker to join it to the cluster. This is the command we copied from the 'kubeadm init' output.

[root@ostechworker ~]# kubeadm join 172.31.10.29:6443 --token 220tvj.051bkeyj5tg6v55r \
        --discovery-token-ca-cert-hash sha256:434c49c7969256a7fae3880b340202cadd4fd29d3d381ab37e1cb8b1d05e86f2
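If you did not save the join command, or the token has expired (tokens are valid for 24 hours by default), you can generate a new one on the Master server:

[root@ostechmaster ~]# kubeadm token create --print-join-command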

You can verify the nodes on the Master server using the below command:

# kubectl get nodes

Sample output:

NAME           STATUS   ROLES                  AGE   VERSION
ostechmaster   Ready    control-plane,master   32m   v1.23.1
ostechworker   Ready    <none>                 30m   v1.23.1
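The worker's ROLES column shows <none> because kubeadm does not label worker nodes. Optionally, you can add a role label yourself; the label key convention node-role.kubernetes.io/<role> is what kubectl reads to populate the ROLES column (the value is arbitrary):

[root@ostechmaster ~]# kubectl label node ostechworker node-role.kubernetes.io/worker=worker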

Conclusion

In this article, we have seen the detailed steps to set up and configure a Kubernetes cluster using Kubeadm. Refer to our previous Kubernetes series articles for a detailed understanding of Kubernetes architecture and concepts. We will cover Kubernetes operations in upcoming articles.
