A Node’s status contains the following information: Addresses, Conditions, Capacity and Allocatable, and Info.
You can use kubectl to view a Node’s status and other details:
kubectl describe node <insert-node-name-here>
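For example, to print just the Ready condition from a Node's status, you can give kubectl a JSONPath expression (the node name is a placeholder):

kubectl get node <insert-node-name-here> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'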
For nodes there are two forms of heartbeats: updates to the .status of a Node, and Lease objects within the kube-node-lease namespace. Each Node has an associated Lease object.
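As a quick way to see the second form, you can inspect the Lease object the kubelet renews for a node; Lease names match node names, and the node name below is a placeholder:

kubectl -n kube-node-lease get lease <insert-node-name-here> -o yaml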
Kubernetes has a “hub-and-spoke” API pattern. All API usage from nodes (or the pods they run) terminates at the API server.
There are two primary communication paths from the control plane (the API server) to the nodes. The first is from the API server to the kubelet process which runs on each node in the cluster. The second is from the API server to any node, pod, or service through the API server’s proxy functionality.
The connections from the API server to the kubelet are used for fetching logs for pods, attaching (usually through kubectl) to running pods, and providing the kubelet’s port-forwarding functionality.
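Each of the following everyday commands travels over that API server-to-kubelet path (the pod name is a placeholder):

kubectl logs <pod-name>
kubectl attach -it <pod-name>
kubectl port-forward <pod-name> 8080:80

The second path, the API server's proxy functionality, is what serves requests to URLs of the form /api/v1/namespaces/<namespace>/pods/<pod-name>/proxy/ when you run kubectl proxy.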
SSH tunnels are currently deprecated, so you shouldn’t opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel.
The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network.
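As a sketch of how the API server is wired to the Konnectivity server, the kube-apiserver is started with an egress selector configuration along these lines (the socket path is an assumption and depends on how you deploy the Konnectivity server); the file is passed via the --egress-selector-config-file flag:

apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPCTunnel
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket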
In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.
In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.
The Job controller is an example of a Kubernetes built-in controller. Built-in controllers manage state by interacting with the cluster API server.
Job is a Kubernetes resource that runs a Pod, or perhaps several Pods, to carry out a task and then stop.
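A minimal Job manifest looks like the following; this is the familiar pi-computation example, and the image tag and backoffLimit are illustrative:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

The Job controller watches objects like this one, creates the Pods needed to run the task, and records the result in the Job's status.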
In contrast with Job, some controllers need to make changes to things outside of your cluster.
For example, if you use a control loop to make sure there are enough Nodes in your cluster, then that controller needs something outside the current cluster to set up new Nodes when needed.
Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager.
Distributed systems often have a need for leases, which provide a mechanism to lock shared resources and coordinate activity between members of a set. In Kubernetes, the lease concept is represented by Lease objects in the coordination.k8s.io API Group, which are used for system-critical capabilities such as node heartbeats and component-level leader election.
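Both uses are easy to observe in a running cluster; the namespace and Lease names below are the conventional defaults and may differ in your setup:

kubectl -n kube-node-lease get leases
kubectl -n kube-system get lease kube-controller-manager -o yaml

The first command lists one heartbeat Lease per node; in the second, the spec.holderIdentity field records which controller-manager instance currently holds the leader election lock.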
The cloud-controller-manager is a Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
cgroup v2 offers several improvements over cgroup v1, such as the following:
Single unified hierarchy design in API
Safer sub-tree delegation to containers
Newer features like Pressure Stall Information
Enhanced resource allocation management and isolation across multiple resources
Unified accounting for different types of memory allocations (network memory, kernel memory, etc.)
Accounting for non-immediate resource changes such as page cache write-backs
cgroup v2 has the following requirements:
OS distribution enables cgroup v2
Linux Kernel version is 5.8 or later
Container runtime supports cgroup v2. For example:
containerd v1.4 and later
cri-o v1.20 and later
The kubelet and the container runtime are configured to use the systemd cgroup driver
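You can check the first two requirements directly on a node; the stat command below prints cgroup2fs for cgroup v2 and tmpfs for cgroup v1:

stat -fc %T /sys/fs/cgroup/

For the last requirement, a kubelet configured through a KubeletConfiguration file selects the systemd cgroup driver as shown below; the container runtime needs the matching setting (for example, SystemdCgroup = true in containerd's runc options):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd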
The Container Runtime Interface (CRI) is the main protocol for communication between the kubelet and the container runtime.
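You can talk to the same CRI endpoint the kubelet uses with crictl, a debugging CLI for CRI-compatible runtimes; the socket path below is containerd's conventional default and is an assumption about your setup:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps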
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources.
When you delete an object, you can control whether Kubernetes deletes the object’s dependents automatically, in a process called cascading deletion. There are two types of cascading deletion, as follows:
In foreground cascading deletion, the owner object you are deleting first enters a deletion-in-progress state, and the controller deletes the dependent objects before deleting the owner itself. During this process, the only dependents that block owner deletion are those that have the ownerReference.blockOwnerDeletion=true field.
In background cascading deletion, the Kubernetes API server deletes the owner object immediately and the controller cleans up the dependent objects in the background. By default, Kubernetes uses background cascading deletion unless you manually use foreground deletion or choose to orphan the dependent objects.
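kubectl exposes both behaviors, plus orphaning, through the --cascade flag; the Deployment name is a placeholder:

kubectl delete deployment <deployment-name> --cascade=foreground
kubectl delete deployment <deployment-name> --cascade=background
kubectl delete deployment <deployment-name> --cascade=orphan

Background is the default, so omitting the flag is equivalent to the second command.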
Kubernetes manages the lifecycle of all images through its image manager, which is part of the kubelet, with the cooperation of cAdvisor. The kubelet considers the following disk usage limits when making garbage collection decisions:
Disk usage above the configured HighThresholdPercent value triggers garbage collection, which deletes images in order based on the last time they were used, starting with the oldest first. The kubelet deletes images until disk usage reaches the LowThresholdPercent value.
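Both thresholds are set in the kubelet configuration; the values shown are the documented defaults:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80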
The kubelet garbage collects unused containers based on the following variables, which you can define:
MinAge: the minimum age at which the kubelet can garbage collect a container. Disable by setting to 0.
MaxPerPodContainer: the maximum number of dead containers each Pod can have. Disable by setting to less than 0.
MaxContainers: the maximum number of dead containers the cluster can have. Disable by setting to less than 0.
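These variables correspond to kubelet flags; the values below are only a sketch, and note that these garbage collection flags are deprecated in favor of leaving the kubelet defaults in place:

kubelet --minimum-container-ttl-duration=1m \
        --maximum-dead-containers-per-container=1 \
        --maximum-dead-containers=-1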