This article is a continuation of the Kubernetes introduction guide. In it, we will look at the important features of Kubernetes that will help you understand its functional concepts at a deeper level.
1. Automatic bin packing
This is one of Kubernetes’s most notable features. Kubernetes intelligently positions containers based on required resources and other constraints, without compromising availability.
Kubernetes offers resource management: you declare how much CPU and RAM each container in a Pod needs, and Kubernetes uses that information to place containers efficiently across the cluster's nodes.
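As a minimal sketch (the Pod name, image, and resource figures below are illustrative assumptions), a container declares requests and limits; the scheduler uses the requests when bin packing containers onto nodes, while the limits cap actual consumption.

```yaml
# Hypothetical Pod spec illustrating resource requests and limits.
# The scheduler uses "requests" for placement; "limits" cap runtime usage.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod           # example name
spec:
  containers:
    - name: app
      image: nginx:1.25   # example image
      resources:
        requests:
          cpu: "250m"     # a quarter of a CPU core reserved for scheduling
          memory: "128Mi"
        limits:
          cpu: "500m"     # container is throttled above this
          memory: "256Mi" # container is killed if it exceeds this
```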
2. Service discovery and load balancing
Service discovery and load balancing are built into Kubernetes through Services. Services connect Pods to the network in a consistent manner across the cluster. The process of determining how to connect to a service is known as service discovery.
A Pod is a collection of one or more containers. Pods that perform the same set of functions are grouped together into a single set, which is called a Service.
Each Pod is assigned its own IP address, and the Service (the set of Pods) gets a single DNS name. With this architecture, Kubernetes has well-defined control over the network and communication between Pods, and it can perform load balancing.
Placing a load balancer (a reverse proxy such as Nginx or HAProxy) in front of the set of instances that make up a single service is a common technique for solving the service discovery problem.
A load balancer's address (a DNS name or, less commonly, an IP) is a considerably more stable piece of data. It can be handed to clients at development or configuration time and can remain constant over the course of a single client's lifetime.
After that, contacting the multi-instance service is no different from accessing a single network endpoint from the client's perspective. To put it another way, service discovery takes place entirely on the server side.
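To make this concrete, here is a minimal sketch (the names, labels, and ports are assumptions) of a Service that selects a set of Pods by label and load-balances traffic across them behind one stable DNS name inside the cluster.

```yaml
# Hypothetical Service selecting all Pods labelled app: web.
# Inside the cluster it is reachable via the DNS name "web-service".
apiVersion: v1
kind: Service
metadata:
  name: web-service      # example name; becomes the Service's DNS name
spec:
  selector:
    app: web             # every Pod with this label backs the Service
  ports:
    - protocol: TCP
      port: 80           # port exposed by the Service
      targetPort: 8080   # port the Pods actually listen on
```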
3. Storage orchestration
Users can mount any storage system they want using Kubernetes, including local storage, public cloud providers, and more. The underlying storage system must still be provided.
For users and administrators, Kubernetes provides an API that separates the specifics of how storage is delivered from how it is consumed.
There are a few terms to understand about integrating persistent storage with Kubernetes. They are as follows:
- Container Storage Interface (CSI): a standard that allows any container orchestrator to connect to storage systems such as Ondat in a consistent manner. Before CSI was released, storage vendors had to write their integration layers directly into the Kubernetes source code; as a result, upgrades were difficult and time-consuming, and any defect could cause Kubernetes itself to crash.
- Storage Class: lets admins pre-define the types of storage that Kubernetes users will be able to provision and attach to their apps.
- Persistent Volume (PV): a virtual storage instance that has been added to the cluster as a volume. The PV can reference physical storage hardware or software-defined storage such as Ondat.
- Persistent Volume Claim (PVC): a request for a certain kind and configuration of storage to be provisioned, as sketched in the example after this list.
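Putting these pieces together, a minimal sketch (the claim name, StorageClass name, and size are assumptions) shows a PVC requesting storage from an admin-defined StorageClass and a Pod mounting the resulting volume.

```yaml
# Hypothetical PersistentVolumeClaim requesting 1Gi from a pre-defined StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # assumed StorageClass created by the admin
  resources:
    requests:
      storage: 1Gi
---
# Pod mounting the claimed volume at /data.
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```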
4. Self-healing
Kubernetes' ability to self-heal is one of its most appealing features. Kubernetes will automatically restart a containerized app or an application component if it goes down.
The orchestration capabilities of Kubernetes can monitor and replace unhealthy containers as needed, depending on the setup. Pods, which are the smallest units encapsulating single or multiple containers, can also be fixed by Kubernetes.
In the self-healing process, the replication controller ensures the fault tolerance and availability of apps by performing the tasks below.
- If the container fails, Kubernetes restarts the container.
- If any node goes down, Kubernetes reschedules the containers on other nodes.
- If the container stops responding to the client/user (for example, it fails its health checks), Kubernetes kills the container and starts a new one; see the liveness-probe sketch below.
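As a minimal sketch of that last point (the health-check path and timings are assumptions), a liveness probe tells the kubelet how to decide that a container is unresponsive so it can be killed and restarted automatically.

```yaml
# Hypothetical Pod with a liveness probe: if /healthz stops answering,
# the kubelet kills the container and restarts it.
apiVersion: v1
kind: Pod
metadata:
  name: healed-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /healthz        # assumed health endpoint
          port: 80
        initialDelaySeconds: 10 # give the app time to start
        periodSeconds: 5        # probe every 5 seconds
        failureThreshold: 3     # restart after three consecutive failures
```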
5. Automated rollout & rollbacks
5.1. Rollout
Businesses aim for zero application downtime, even though developers need to keep updating the application's code. Updating the application is called a rollout, and Kubernetes performs it using rolling upgrades.
By incrementally replacing Pod instances with new ones, rolling updates allow Deployments to be updated with zero downtime. The new Pods are scheduled on Nodes that have resources available.
- The client rolls out a new version of the Pod, say V2.
- A ReplicaSet only allows a single Pod version, so Kubernetes creates ReplicaSet 2, adds the new V2 Pod to it, and runs health checks. Once the V2 Pod is running fine, it replaces one of the V1 Pods.
- Kubernetes repeats this until every V1 Pod has been replaced by a V2 Pod (see the Deployment sketch below).
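The behaviour described above corresponds to a Deployment's RollingUpdate strategy. The sketch below (names, image tags, and surge settings are illustrative) replaces Pods incrementally rather than all at once; changing the image tag triggers the rollout.

```yaml
# Hypothetical Deployment using a rolling update from image v1 to v2.
# Changing the image (e.g. to my-app:v2) creates a new ReplicaSet that
# gradually replaces the old Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the update
      maxSurge: 1         # at most one extra Pod created during the update
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v1   # change to my-app:v2 to roll out the new version
          ports:
            - containerPort: 8080
```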
5.2. Rollback
When a Deployment is not stable, such as when it is crash-looping, you may want to rewind the Deployment. By default, the system saves the Deployment's rollout history so that you can roll back at any moment.
In the rollout described above, the ReplicaSet running Pod V1 is kept in history so that you can roll back if any issue is found in the deployment of Pod V2.
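As a sketch of how that history is retained (reusing the hypothetical Deployment from the previous section), the revisionHistoryLimit field controls how many old ReplicaSets are kept around for rollbacks; the actual rollback is then triggered with the kubectl rollout undo command.

```yaml
# Hypothetical fragment of the Deployment above: keep the last 5 revisions
# so the Deployment can be rolled back to any of them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  revisionHistoryLimit: 5   # number of old ReplicaSets retained for rollback
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v2
```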
6. Secret and config maps
6.1. Secret
A Secret is a small piece of confidential data, such as a password, token, or key. Such information might otherwise be included directly in a Pod specification or a container image.
You don't have to embed confidential data in your application code if you use a Secret. When it comes to working with Secrets, there are two steps to consider.
The Secret must first be created, and then it must be injected into the Pod. Rather than putting confidential data in a container image or a Pod definition, it is safer and more flexible to keep it in a Secret.
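A minimal sketch of both steps (the names, key, and value are made up): the Secret is created first, then injected into a Pod, here as an environment variable.

```yaml
# Hypothetical Secret holding a database password (values are base64-encoded).
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64 for "password" (example only)
---
# Pod consuming the Secret as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
```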
6.2. ConfigMap
A ConfigMap is a key-value store and an API object for storing non-confidential data. Pods can consume ConfigMaps as environment variables, command-line arguments, or configuration files in a volume.
It allows you to decouple environment-specific configuration from your container images, so your applications are easily portable.
Working with ConfigMaps involves two steps: first create the ConfigMap, then inject it into the Pod.
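A minimal sketch of both steps (keys and values are illustrative): the ConfigMap is created first, then every key is loaded into the Pod as an environment variable.

```yaml
# Hypothetical ConfigMap with non-confidential settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
---
# Pod loading every key of the ConfigMap as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config
```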
7. Horizontal scaling
When demand for a workload increases or decreases, Kubernetes can automatically increase or decrease the number of pod replicas serving the job.
The Horizontal Pod Autoscaler is implemented as a controller and an API resource in Kubernetes. The controller's behaviour is determined by the resource.
The controller periodically adjusts the number of replicas in a ReplicationController or Deployment so that observed metrics such as average CPU utilisation, average memory utilisation, or any custom metric match the target set by the user.
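A minimal sketch of a HorizontalPodAutoscaler (the target name and thresholds are assumptions) that scales the hypothetical my-app Deployment between 2 and 10 replicas based on average CPU utilisation.

```yaml
# Hypothetical HorizontalPodAutoscaler targeting the my-app Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out when average CPU exceeds 60%
```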
8. Conclusion
In this article, we have walked through the key features of Kubernetes at a conceptual level. We will take a deep dive into other core concepts of Kubernetes, such as etcd, the Kube Controller Manager, and the Kube Scheduler, in the upcoming article.