Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
1. What containerized applications are running (and on which nodes).
2. The resources available to those applications.
3. The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance.
```shell
kubectl create deployment nginx --image nginx
kubectl create -f nginx.yaml
```
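As a sketch of what a file like `nginx.yaml` might contain (the replica count and the `app: nginx` label here are assumptions for illustration, not taken from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx          # illustrative label
spec:
  replicas: 2           # assumed replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx  # same image as the imperative command above
```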
Declarative object configuration with `kubectl apply` enables working on directories, where different operations might be needed for different objects:

```shell
kubectl diff -f configs/
kubectl apply -f configs/
```
Names must be unique across all API versions of the same resource. API resources are distinguished by their
API group, resource type, namespace (for namespaced resources), and name.
Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. It is intended to distinguish between historical occurrences of similar entities.
Labels are key/value pairs that are attached to objects such as Pods.
Example:

```json
"metadata": {
  "labels": {
    "key1" : "value1",
    "key2" : "value2"
  }
}
```
Label Selectors
Via a label selector, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
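As a minimal sketch, a selector can combine equality-based (`matchLabels`) and set-based (`matchExpressions`) requirements; the keys and values below reuse the illustrative label example above and are assumptions:

```yaml
selector:
  matchLabels:
    key1: value1              # equality-based requirement
  matchExpressions:
    - key: key2               # set-based requirement
      operator: In
      values: [value2, value3]
```

An object matches this selector only if it satisfies all of the requirements at once.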
Namespaces provide a mechanism for isolating groups of resources within a single cluster.
When you create a Service, it creates a corresponding DNS entry. This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`.
Not every object is in a namespace: the namespace itself, nodes, and persistent volumes, for example, are cluster-scoped.
```shell
# In a namespace
kubectl api-resources --namespaced=true

# Not in a namespace
kubectl api-resources --namespaced=false
```
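A namespace itself is an ordinary Kubernetes object and can be created declaratively; as a sketch (the name `dev` is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev   # hypothetical namespace name
```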
You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata.
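For example, an annotation might record where an image comes from; the key and value below are illustrative, not taken from the original:

```yaml
metadata:
  annotations:
    imageregistry: "https://hub.docker.com/"   # illustrative annotation
```

Unlike labels, annotations are not used to identify and select objects, so their values can hold larger or unstructured data.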
Field selectors let you select Kubernetes objects based on the value of one or more resource fields.
```
metadata.name=my-service
metadata.namespace!=default
status.phase=Pending
```
This kubectl command selects all Pods for which the value of the status.phase field is Running:
```shell
kubectl get pods --field-selector status.phase=Running
```
Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion.
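As a sketch, a PersistentVolume carries the `kubernetes.io/pv-protection` finalizer, which keeps the volume from being removed while it is still bound:

```yaml
metadata:
  finalizers:
    - kubernetes.io/pv-protection   # deletion waits until this key is removed
```

When such an object is deleted, Kubernetes sets `metadata.deletionTimestamp` and only removes the object once the finalizer list is empty.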
In Kubernetes, some objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. These owned objects are dependents of their owner.
Dependent objects have a metadata.ownerReferences field that references their owner object. A valid owner reference consists of the object name and a UID within the same namespace as the dependent object.
Dependent objects also have an ownerReferences.blockOwnerDeletion field that takes a boolean value and controls whether specific dependents can block garbage collection from deleting their owner object.
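As a sketch, an owner reference on a Pod owned by a ReplicaSet might look like the following; the `name` and `uid` values are hypothetical:

```yaml
metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: my-repset                               # hypothetical owner name
      uid: d9607e19-f88f-11e6-a518-42010a800195     # hypothetical owner UID
      controller: true
      blockOwnerDeletion: true   # dependent can block owner garbage collection
```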
In order to take full advantage of using these labels, they should be applied on every resource object.
| Key | Description | Example | Type |
| --- | --- | --- | --- |
| app.kubernetes.io/name | The name of the application | mysql | string |
| app.kubernetes.io/instance | A unique name identifying the instance of an application | mysql-abcxzy | string |
| app.kubernetes.io/version | The current version of the application (e.g., a SemVer 1.0, revision hash, etc.) | 5.7.21 | string |
| app.kubernetes.io/component | The component within the architecture | database | string |
| app.kubernetes.io/part-of | The name of a higher level application this one is part of | wordpress | string |
| app.kubernetes.io/managed-by | The tool being used to manage the operation of an application | helm | string |
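Putting the table's own example values together, an object's metadata carrying the full set of recommended labels would look like:

```yaml
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
```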
When you deploy Kubernetes, you get a cluster.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
The control plane’s components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment’s replicas field is unsatisfied).
kube-apiserver: The API server is the front end for the Kubernetes control plane.
etcd: Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
kube-scheduler: Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
kube-controller-manager: Control plane component that runs controller processes. There are many different types of controllers. Some examples of them are:
- Node controller: responsible for noticing and responding when nodes go down.
- Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
- EndpointSlice controller: populates EndpointSlice objects (to provide a link between Services and Pods).
- ServiceAccount controller: creates default ServiceAccounts for new namespaces.

The above is not an exhaustive list.
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
The following controllers can have cloud provider dependencies:
- Node controller: for checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding.
- Route controller: for setting up routes in the underlying cloud infrastructure.
- Service controller: for creating, updating, and deleting cloud provider load balancers.
kubelet: An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.
Container runtime: A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and lifecycle of containers within the Kubernetes environment.
Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features. Because these are providing cluster-level features, namespaced resources for addons belong within the kube-system namespace.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
Network plugins are software components that implement the container network interface (CNI) specification. They are responsible for allocating IP addresses to pods and enabling them to communicate with each other within the cluster.