Welcome to the introduction to Kubernetes course. Kubernetes, also known as K8s or Kube, is the most popular container orchestration tool in the industry and was originally developed by Google. This Kubernetes tutorial consists of a series of articles. In this first part, we will discuss what Kubernetes is and its basic concepts.
This course is for absolute beginners; you don't need any prerequisite knowledge to learn this technology. We will walk you through all the Kubernetes basics to help you understand the concepts.
Before getting started with Kubernetes, let's have a basic understanding of Docker and Containers.
What is Docker?
Docker allows you to bundle and run an application in a container, which is a loosely isolated environment. Because of the isolation and security, you can operate multiple containers on a single host.
To run several containers on the same OS, Docker leverages resource isolation in the OS kernel. People often compare Docker with Virtual Machines (VMs).
VMs, by contrast, package a whole operating system along with the executable code on top of an abstraction layer over the physical hardware resources.
What is a Container?
A container image is a ready-to-run software package that includes everything a program needs to execute: the code and any runtimes it requires, application and system libraries, and default values for any important settings.
Containers decouple applications from the underlying host architecture. As shown in the following diagram, we can run multiple containers on top of the Docker engine, all utilizing the same underlying machine. This facilitates deployment across a variety of operating system or cloud scenarios.
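To make the idea of a container image concrete, here is a minimal, hypothetical Dockerfile that bundles a small Python program with its runtime and dependencies (the file names `app.py` and `requirements.txt` are placeholders for illustration):

```dockerfile
# Base image providing the Python runtime
FROM python:3.11-slim

WORKDIR /app

# Install application dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY app.py .

# Default command executed when the container starts
CMD ["python", "app.py"]
```

Building this file with `docker build` produces an image that runs the same way on any host with a container runtime.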
Containers help businesses modernize by making it easier to scale and deploy applications. However, by introducing an entirely new infrastructure environment, containers have also brought additional issues and complexity.
Both large and small software organizations deploy thousands of container instances daily, posing a scalability challenge for them to manage. So, how do they pull it off?
What is Container orchestration?
- Container orchestration is concerned with managing container lifecycles, particularly in large, dynamic environments. Software teams use container orchestration to control and automate a variety of container-management tasks.
- Container orchestration works in any environment where containers are employed. It can help you deploy the same application across several environments without having to rewrite it.
Container orchestration tools
Container orchestration technologies offer a framework for managing containers and microservices architectures. Container lifecycle management can be accomplished with a variety of solutions; Kubernetes, Docker Swarm, and Apache Mesos are three common tools.
Docker Swarm is Docker's native tool and is very easy to set up and configure. Kubernetes, by contrast, requires a number of manual steps to configure components such as etcd, flannel, and the Docker engine.
Kubernetes dominates the industry because of its many advantages and features compared to other tools.
What is Kubernetes?
Kubernetes is an open-source container orchestration technology that was originally developed by Google to automate the deployment, scaling, and administration of containerized applications.
Kubernetes makes it simple to deploy and manage microservice architecture applications. It accomplishes this by forming an abstraction layer on top of a cluster, allowing development teams to deploy applications smoothly while Kubernetes mainly handles the following tasks:
- Controlling and managing the use of resources by an application.
- Load balancing requests among many instances of an application automatically.
- Monitoring resource usage and resource limits to automatically stop apps from consuming excessive resources, and restarting them afterwards.
- Moving an application instance from one host to another if a host's resources are exhausted or the host dies.
- When a new host is added to the cluster, extra resources are automatically made accessible.
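The tasks above can be sketched in a Deployment manifest. The following example (the image name and resource figures are placeholders, not recommendations) asks Kubernetes to keep three replicas of an application running and to enforce resource requests and limits:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # placeholder image
        resources:
          requests:            # minimum resources the scheduler reserves
            cpu: "100m"
            memory: "128Mi"
          limits:              # hard cap on what the container may consume
            cpu: "500m"
            memory: "256Mi"
```

With such a manifest, Kubernetes restarts failed instances, spreads the replicas across nodes, and constrains containers that exceed their limits.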
Why the market recommends Kubernetes
Kubernetes, the first Cloud Native Computing Foundation (CNCF) project and originally created at Google, is the fastest-growing open-source software project after Linux.
Why are so many enterprises relying on Kubernetes to meet their container orchestration needs today? There are numerous reasons for this:
- Portability and flexibility : Kubernetes is extremely adaptable, as it can run on a wide range of infrastructure and environment settings. Most other orchestrators don't have this flexibility; they're locked into specific runtimes or infrastructures.
- Open Source : The CNCF is in charge of Kubernetes, which is a completely open source, community-driven project. It has a number of significant corporate sponsors, but no single firm "owns" the platform or has sole control over how it evolves.
- Multi-cloud compatibility : Kubernetes can host workloads on a single cloud as well as workloads distributed across many clouds. Kubernetes can also effortlessly scale its environment from one cloud to the next. While other orchestrators may support multi-cloud architectures, Kubernetes arguably goes above and beyond in terms of multi-cloud adaptability.
- Market Leader : Almost everyone uses Kubernetes. According to a Red Hat survey, Kubernetes is widely used by customers (88%), particularly in production environments (74%).
Kubernetes is an example of a distributed system that has been well-architected. It considers all of the machines in a cluster to be part of a single resource pool.
Kubernetes, like any other sophisticated distributed system, has two layers: head nodes and worker nodes.
The head node, or master node, runs the control plane, while the worker nodes run the applications. A Kubernetes cluster is formed by a collection of head and worker nodes.
Kubernetes introduces a lot of terminology to describe the structure of your application. We will go through each term.
The master/head node and the worker nodes each have their own components that make sure the orchestration runs smoothly.
The control plane is where administrators and users go to manage the different nodes. It receives commands via HTTP calls or by connecting to the system and running command-line scripts. As the name implies, it regulates how Kubernetes interacts with your applications.
The API server gives the Kubernetes cluster a REST interface. All activities on pods, services, and other objects are carried out programmatically by talking with the endpoints supplied.
The scheduler is in charge of assigning workloads to the various nodes. It monitors resource capacity and ensures that a worker node's performance remains within acceptable limits.
The Kubernetes controller manager is a service that runs the core control loops of Kubernetes. It is responsible for making sure that the cluster's shared state matches the desired state.
Kubernetes employs etcd, a distributed key-value store, to share information about a cluster's overall state.
A node is a machine, either physical or virtual, where pods run. The control plane manages each node in a cluster, and each node runs the services required to run pods.
A Kubernetes pod is a collection of containers and is the smallest unit that Kubernetes manages. All the containers in a pod share a single IP address, and they also share the pod's memory and storage resources. A pod can also hold just a single container when the application has only a single process.
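As a sketch, a single-container pod can be declared like this (the nginx image is just a common example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-container-pod
  labels:
    app: demo
spec:
  containers:
  - name: web              # one container; a pod may list several
    image: nginx:1.25      # example image
    ports:
    - containerPort: 80    # port the container listens on
```

Every container listed under `spec.containers` would share this pod's IP address and storage volumes.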
The kubelet is a worker node component. Its job is to keep track of pods and their containers. It works with pod specifications written in YAML or JSON: the kubelet examines the pod specs and determines whether the pods are healthy.
Kube-proxy is a network proxy and load balancer that acts as a connection between each node and the API server. It runs on each node in your cluster and allows you to connect to pods from both inside and outside the cluster.
Kubectl is the CLI tool for Kubernetes. It is used to deploy applications, monitor and control cluster resources, and view logs.
From the user's perspective, kubectl is your control panel for Kubernetes; it enables you to perform all Kubernetes operations. From a technical standpoint, kubectl is a client for the Kubernetes API.
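A few common kubectl invocations illustrate this (these assume a running cluster and a configured kubeconfig; names such as `pod.yaml` and `my-pod` are placeholders):

```shell
# Apply a manifest file to the cluster
kubectl apply -f pod.yaml

# List pods in the current namespace
kubectl get pods

# Inspect a specific pod in detail
kubectl describe pod my-pod

# Stream logs from a pod's container
kubectl logs my-pod
```

Each of these commands is translated by kubectl into REST calls against the API server described earlier.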
In this article, we covered the basic concepts of container orchestration and the architecture of Kubernetes. Having learned the theoretical concepts, you may find that one of Kubernetes' most challenging topics is installation. We will walk through a single-node Kubernetes installation in the next article.