Kubernetes was originally designed and developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). It is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications, both up and out. As a portable and extensible platform for managing containerized workloads and services, it supports both declarative configuration and automation, and it has a large, rapidly growing ecosystem of widely available services, support, and tools.
A cluster is a set of node machines for running containerized applications. If you are running Kubernetes, you are running a cluster.
A cluster contains a control plane and one or more nodes. The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use; the nodes are what actually run the applications and workloads. This is the heart of Kubernetes' key advantage: the ability to schedule and run containers across a group of machines, whether virtual or physical, on-premises or in the cloud, because the containers are abstracted across the cluster.
Let's now look at a few other key Kubernetes terms that are helpful for understanding clusters:
A Pod is the smallest and most basic deployable object in Kubernetes. It represents a single instance of a running process in the cluster. A Pod contains one or more containers, such as Docker containers. When a Pod runs multiple containers, those containers are managed as a single entity and share the Pod's resources. Because Pods represent processes in the cluster, limiting each Pod to a single process lets Kubernetes report on the health of every running process in the cluster.
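As a concrete illustration, a minimal Pod manifest might look like the following. The Pod name and the nginx image are placeholders chosen for this sketch, not values from the example application used later in this article:

```yaml
# A minimal single-container Pod (names and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: webapp1-pod
  labels:
    app: webapp1
spec:
  containers:
  - name: webapp1
    image: nginx:1.25   # any container image works here
    ports:
    - containerPort: 80
```

In practice, as discussed below, you would rarely create a bare Pod like this directly; a controller such as a Deployment would create Pods from a template for you.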
Properties of Pods:
Although most Pods contain a single container, some run a few containers that work closely together to provide a desired function or service; running multiple containers in a single Pod is a more advanced use case.
Pods also provide shared networking and storage resources for their containers: each Pod is assigned a unique cluster IP address that all of its containers share, and a Pod can specify a set of storage volumes that all of its containers can access.
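To make the shared-storage idea concrete, here is a sketch of a Pod in which two containers mount the same emptyDir volume. All names, images, and commands are hypothetical and chosen only for this example:

```yaml
# Two containers in one Pod sharing an emptyDir volume (illustrative sketch).
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}          # scratch volume that lives as long as the Pod
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data    # writer's view of the shared volume
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data    # reader sees the same files the writer creates
```

Both containers also share the Pod's network namespace, so they can reach each other on localhost.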
A Pod can be thought of as a self-contained, isolated "logical host" that contains everything its application needs, and it is meant to run a single instance of the application on the cluster. Creating individual Pods directly is not recommended; instead, we should create a set of identical Pods, called replicas, to run our application. Such a set of replicated Pods is created and managed by a controller, such as a Deployment.
The controller manages the lifecycle of its constituent Pods. It also performs horizontal scaling, changing the number of Pods whenever necessary, although you might occasionally interact with Pods directly to debug, troubleshoot, or inspect them. The Deployment is the most widely used and recommended controller for managing Pods. Pods run on nodes in the cluster. Once created, a Pod remains on its node until its process completes or is terminated, the Pod is deleted, or the Pod is evicted from the node due to a lack of resources or a node failure. If a node fails, the Pods on that node are automatically scheduled for deletion.
A Pod always runs on a node. A node is a worker machine in Kubernetes and may be either virtual or physical, depending on the cluster. Every node is managed by the master. A node can host multiple Pods, and the master automatically schedules Pods across the nodes in the cluster, taking into account the resources available on each node.
Every Kubernetes node runs at least a kubelet, the agent that communicates with the master and manages the Pods and containers on the machine, and a container runtime (such as Docker) that pulls container images and runs the containers.
A Deployment represents a set of multiple, identical Pods. It runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive, helping to ensure that one or more instances of your application are always available to serve requests. Deployments are managed by the Kubernetes Deployment controller. A Deployment uses a Pod template that contains the specification of the Pods to create, determining how each Pod should look: its metadata, labels, volume mounts, namespace, and so on.
Creating Deployments
You can use the "kubectl apply" or "kubectl create" command to create a Deployment. Once the Deployment is created, its controller ensures that the desired number of Pods are started and running, and it automatically replaces any Pod that fails or is evicted from its node.
The following is an example of a Deployment manifest file in YAML format:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: 17071998/convo:frontend
        ports:
        - containerPort: 80
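A manifest like this can be applied and verified with a few kubectl commands. This is a sketch that assumes the manifest above has been saved locally as deployment.yaml and that kubectl is configured to talk to a running cluster:

```shell
# Create (or update) the Deployment from the manifest file.
kubectl apply -f deployment.yaml

# Check that the Deployment reports the desired number of ready replicas.
kubectl get deployments

# List the Pods the Deployment created, filtered by the label in the template.
kubectl get pods -l app=webapp1
```

If a listed Pod is deleted or its node fails, the Deployment controller notices the shortfall and starts a replacement Pod automatically.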
A Service is a logical abstraction over a deployed group of Pods in a cluster. Pods are ephemeral, but a Service gives the group of Pods that provide a particular function a stable name and a unique IP address (the cluster IP); as long as the Service is running, that IP address does not change. A Service also defines policies for how the Pods are accessed.
A Service connects a set of Pods to an abstract service name and IP address, providing discovery and routing between Pods.
For example, a Service can connect an application's front end to its back end, with each running in a separate Deployment in the cluster. Services use labels and selectors to match the Pods they route traffic to.
The core attributes of a Kubernetes Service are a label selector that locates the Pods, a cluster IP address with an assigned port number, and port definitions that map incoming ports to target ports on the Pods.
Services can also be defined without Pod selectors, for example to point at a service in a different namespace or cluster.
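A selector-less Service is paired with a manually created Endpoints object that lists the backend addresses. The following sketch routes cluster traffic to an external database; the name, IP address, and port are hypothetical values for illustration only:

```yaml
# A Service with no selector (illustrative; name, IP, and port are made up).
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
# Manually created Endpoints with the same name as the Service,
# so the Service routes traffic to this address.
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 10.0.0.50   # hypothetical address outside the cluster
  ports:
  - port: 5432
```

Because there is no selector, Kubernetes does not manage the Endpoints automatically; you are responsible for keeping the address list current.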
The following is an example of a Service manifest file in YAML format:
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: webapp1
A Deployment, then, is a method of launching Pods with containerized applications and ensuring that the necessary number of replicas is always running on the cluster.
A Service, on the other hand, is responsible for exposing an interface to those Pods, enabling network access either from within the cluster or from external processes.
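As a closing sketch, kubectl can also generate a Service for an existing Deployment directly, without writing a manifest. This assumes the webapp1 Deployment from the earlier example already exists in a running cluster:

```shell
# Expose the Deployment's Pods through a NodePort Service
# (equivalent in spirit to the webapp1-svc manifest above).
kubectl expose deployment webapp1 --type=NodePort --port=80

# Inspect the generated Service and the node port it was assigned.
kubectl get service webapp1
```

Either approach, a YAML manifest or kubectl expose, produces the same kind of Service object; manifests are generally preferred because they can be version-controlled and reapplied.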