Understanding Kubernetes Architecture with Diagrams

Posted By: Harish Dhakad | 9th August 2022


 

Kubernetes, or K8s for short, is a system that automates the deployment, scaling, and management of containerized applications. Modern applications are spread across clouds, virtual machines, and servers, so managing them manually is no longer an option.


 

Kubernetes combines physical and virtual machines into a single API surface. A developer can then deploy, manage, and scale container-based apps using the Kubernetes API.


 

Additionally, its architecture offers a flexible framework for distributed systems. K8s automatically manages scaling and failover for your applications and provides common deployment patterns.


 

It helps manage the containers that run the apps and ensures there is no downtime in a production environment. For instance, if one container fails, another one takes its place without the end user even noticing.


 

Kubernetes is not just an orchestration system; it is a collection of separate but linked control processes. Their job is to continuously drive the current state of the cluster toward the desired state.


 

Kubernetes Architecture and Components


 

Due to its decentralized nature, Kubernetes does not manage tasks in sequential order. It follows a declarative paradigm built around the idea of a "desired state". The fundamental Kubernetes procedure is shown in these steps (a short code sketch follows the list):


 

1. An administrator defines the desired state of an application and places it in a manifest file.

2. The file is provided to the Kubernetes API Server using a CLI or UI. Kubectl is the name of the default command-line tool for Kubernetes.

3. Kubernetes stores the desired state in the Key-Value Store (ETCD), the cluster's key-value database.

4. Kubernetes then implements the desired state on all the relevant applications within the cluster.

5. Kubernetes continuously monitors the elements of the cluster to ensure the application's current state does not vary from the desired state.
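As a rough illustration of steps 1 through 5, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package). It is just one way to submit a desired state; the deployment name, image, and replica count are example values, and the client needs a kubeconfig that can reach a real cluster.

from kubernetes import client, config

# Load credentials from ~/.kube/config, the same file kubectl uses
config.load_kube_config()

# Desired state: 3 replicas of an nginx container listening on port 80
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="nginx-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "nginx-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nginx-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="nginx",
                    image="nginx:1.25",  # example image
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

# Submit the desired state to the API Server; it is persisted in ETCD,
# and the controllers and scheduler then work to make it real.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

After this call returns, Kubernetes keeps reconciling the cluster so that three replicas stay running, which is exactly step 5 above.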


We will now walk through the individual components of a standard Kubernetes cluster to understand this process in greater detail.


 

What is the Master Node in Kubernetes Architecture?

 

A CLI (Command-Line Interface) or UI (User Interface) sends requests to the Kubernetes Master (Master Node) via an API. These are the instructions you give Kubernetes.


 

You specify the services, replica sets, and pods that Kubernetes should maintain, such as the container image to use, which ports to open, and the number of running pod replicas.


 

In other words, you provide the parameters of the desired state for the application(s) running in that cluster.


Kubernetes Master Node


 

API Server

 

The main Kubernetes management component is the Kube-API server. When you run a kubectl command, the kubectl utility contacts the Kube-API server, which first authenticates and validates the request, then retrieves the data from the ETCD cluster and responds with the requested information. The Kube-API server is responsible for authenticating and validating requests and for retrieving and updating data in the ETCD datastore; in fact, it is the only component that interacts directly with ETCD. The other components, such as the scheduler, Kube-controller-manager, and kubelet, use the API server to perform updates in the cluster in their respective areas.
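For instance, the same path kubectl takes can be followed from code. Below is a minimal sketch with the Kubernetes Python client (assuming a kubeconfig with valid credentials): the client authenticates against the Kube-API server, the server validates the request, reads the objects from ETCD, and returns them.

from kubernetes import client, config

config.load_kube_config()   # authenticate against the Kube-API server
v1 = client.CoreV1Api()

# Roughly equivalent to "kubectl get pods --all-namespaces"
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)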


 

Key-Value Store (ETCD)

 

The cluster's nodes, pods, configurations, secrets, accounts, roles, bindings, and other data are stored in the ETCD datastore. When you use the kubectl get command, all the information you see comes from the ETCD server. Every modification you make to your cluster, including the deployment of pods, replica sets, or extra nodes, updates the ETCD server. The change is not regarded as complete until it has been updated on the ETCD server.
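To see this behaviour, you can watch the API Server for changes; an event only shows up once the corresponding update has been committed to the ETCD datastore. A small sketch with the Python client (the 60-second timeout is an arbitrary example value):

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream ADDED/MODIFIED/DELETED events for pods; each event reflects a
# change that has already been persisted in ETCD.
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name)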


 

Controller

 

A controller is a process that continuously monitors the state of various components within the system and works to bring the whole system to the desired functioning state. For example, the node controller is responsible for monitoring the status of the nodes and taking the actions needed to keep the applications running, and it does this through the Kube-API server. The node controller checks the status of the nodes every 5 seconds, which lets it monitor their health. If it stops receiving heartbeats from a node, it waits 40 seconds before marking that node as unreachable. Once a node is marked unreachable, the controller gives it 5 minutes to come back up; if it does not, the controller removes the pods allocated to that node and provisions them on the healthy ones.
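The heartbeats the node controller relies on are visible in each node's status. Here is a small example check using the Python client that prints every node's Ready condition and the time of its last heartbeat:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    for condition in node.status.conditions:
        if condition.type == "Ready":
            # status stays "True" while heartbeats keep arriving; the node
            # controller marks the node unreachable when they stop for too long.
            print(node.metadata.name,
                  "Ready:", condition.status,
                  "last heartbeat:", condition.last_heartbeat_time)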


 

Scheduler 

 

The Scheduler assigns new work requests coming from the API Server to healthy nodes. It ranks the nodes by suitability and deploys the pod to the most appropriate one. If no suitable node exists, the pod is placed in a pending state until one becomes available.
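You can observe the Scheduler's decisions by looking at which node each pod ended up on; unscheduled pods appear as Pending with no node name. A minimal sketch (the "default" namespace is just an example):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("default").items:
    # spec.node_name is filled in by the Scheduler; until a suitable node
    # is found, the pod stays in the Pending phase.
    print(pod.metadata.name, pod.status.phase, pod.spec.node_name or "<unscheduled>")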


 

What is the Worker Node in Kubernetes Architecture?


 

Worker nodes monitor the API Server for any new work assignments, carry out those tasks, and then report the results back to the Kubernetes Master node.


Kubernetes Worker Node


 

Kubelet

 

Every node in the cluster runs a kubelet, which serves as the main Kubernetes agent on that node. Installing the kubelet integrates the node's CPU, RAM, and storage into the larger cluster. The kubelet watches for tasks sent from the API Server, executes them, and reports back to the Master. It also monitors the pods on its node and notifies the control plane when one is not operating properly. The Master can then decide how to distribute tasks and resources to maintain the desired state based on that information.
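The CPU, RAM, and storage each kubelet contributes to the cluster are reported in its node's status. A short sketch that lists every node's capacity and kubelet version:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # capacity (and allocatable) figures are reported by the node's kubelet
    print(node.metadata.name,
          "cpu:", node.status.capacity["cpu"],
          "memory:", node.status.capacity["memory"],
          "kubelet:", node.status.node_info.kubelet_version)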


 

Kube-proxy

 

Every node in the Kubernetes cluster runs a process called Kube-proxy. Its responsibility is to watch for new services and, whenever one is found, create the necessary rules on each node to direct traffic to that service.
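The services that Kube-proxy creates forwarding rules for can be listed through the API Server. A small sketch that prints each service's cluster IP and ports, which is where the rules on every node direct the traffic:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    ports = [p.port for p in (svc.spec.ports or [])]
    # Kube-proxy programs rules on every node so that traffic sent to this
    # cluster IP and these ports reaches the pods backing the service.
    print(svc.metadata.namespace, svc.metadata.name, svc.spec.cluster_ip, ports)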



 

Pod

 

The smallest unit of scheduling in Kubernetes is a pod. A container cannot be part of a cluster without one, and if you need to scale your app, you do so by adding or removing pods.

The pod acts as a "wrapper" for a container holding the application code. The Master schedules the pod on a particular node based on the available resources, and the kubelet on that node works with the container runtime to launch the container.
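Here is a minimal pod definition built with the Python client: a single container wrapped in a pod and submitted to the API Server so the Master can schedule it onto a node (the name and image are example values):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A pod acting as a "wrapper" around a single application container
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hello-pod", labels={"app": "hello"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="hello", image="nginx:1.25"),  # example image
    ]),
)

# The Scheduler picks a node for the pod, and the kubelet on that node
# works with the container runtime to start the container.
v1.create_namespaced_pod(namespace="default", body=pod)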


Container Runtime

The container runtime pulls images from a container image registry and starts and stops containers. This task is typically handled by a third-party plugin or piece of software, such as Docker or containerd.
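Which runtime a particular node uses is reported by its kubelet in the node status; for example, a small sketch with the Python client:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # e.g. "containerd://1.7.2" or "docker://20.10.24", depending on the node
    print(node.metadata.name, node.status.node_info.container_runtime_version)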

 


About Author

Harish Dhakad

Harish Dhakad has very good knowledge in the field of DevOps. His expertise is in Kubernetes and Linux. He is energetic and enthusiastic about his work.
