K8s Labels, Taints, and Tolerations

K8s Labels

  • Labels make it easy to manage resource objects; for example, they let you control which node a pod is scheduled onto (see the nodeSelector sketch after this list).
  • A label can be attached to multiple resources, and a resource can carry multiple labels (a many-to-many relationship), which enables management across different dimensions.
  • A label is a key-value pair.
  • If multiple nodes carry the same label, the pod lands on one of the labeled nodes as chosen by the scheduler.
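
A minimal sketch of the simplest pinning mechanism, nodeSelector (the pod name and the disktype=ssd label are assumptions that match the example later in this article):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:        # schedule only onto nodes labeled disktype=ssd
    disktype: ssd
  containers:
  - image: nginx
    name: nginx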

1. Label Format

  • key=value
    • key: must begin and end with a letter or digit; may contain letters, digits, underscores, dots, and hyphens; at most 63 characters (see the example below).
    • value: may be empty; if set, it must begin and end with a letter or digit and follows the same character rules and 63-character limit.
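
For instance, a pod's labels might look like this (hypothetical values; app.kubernetes.io/name shows the optional DNS-subdomain prefix that label keys also allow):

metadata:
  labels:
    app: nginx                      # plain key
    environment: production
    app.kubernetes.io/name: nginx   # key with an optional prefix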

2. Label Commands

# Show all pods with their labels
kubectl get pods --show-labels

# List pods with the label app=nginx, showing their labels
kubectl get pod -l app=nginx --show-labels

# Show all nodes with their labels
kubectl get node --show-labels

# Add a label to a node
kubectl label node {node-name} {key}={value}

# Filter with a negative match (pods whose label value differs)
kubectl get pod -l xxx!=xxx --show-labels
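
Two related commands worth knowing (standard kubectl behavior; k8s-node1 and disktype are borrowed from the example below):

# Overwrite an existing label value (kubectl refuses without --overwrite)
kubectl label node k8s-node1 disktype=hdd --overwrite

# Remove a label (trailing hyphen on the key)
kubectl label node k8s-node1 disktype-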

3. Label Example

# Add the disktype=ssd label to node k8s-node1
[root@k8s-master1 ~]# kubectl label node k8s-node1 disktype=ssd

# Show node labels
[root@k8s-master1 ~]# kubectl get node --show-labels
NAME          STATUS   ROLES    AGE   VERSION   LABELS
k8s-master1   Ready    <none>   26d   v1.23.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux
k8s-node1     Ready    <none>   26d   v1.23.6   app=gateway,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2     Ready    <none>   26d   v1.23.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux

[root@k8s-master1 ~]# vim nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: http
  name: http
spec:
  replicas: 1
  selector:
    matchLabels:
      disktype: ssd
  template:
    metadata:
      labels:
        disktype: ssd
    spec:
      affinity:                # node affinity
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:    # hard rule: must be satisfied, or the pod stays Pending
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
      containers:
      - image: nginx
        name: nginx
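
The hard rule above fails scheduling outright when no node matches. A softer alternative is preferredDuringSchedulingIgnoredDuringExecution, which only biases the scheduler toward matching nodes; a minimal sketch (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-preferred
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:   # soft rule: prefer, but fall back if unmet
      - weight: 1                 # 1-100; higher weights count more in node scoring
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - image: nginx
    name: nginx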

# Verify the pod was scheduled onto the node carrying the matching label
[root@k8s-master1 ~]# kubectl describe pod http-774c5d79b6-ktnz8
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  32s   default-scheduler  Successfully assigned default/http-774c5d79b6-ktnz8 to k8s-node1
  Normal  Pulling    30s   kubelet            Pulling image "nginx"
  Normal  Pulled     29s   kubelet            Successfully pulled image "nginx" in 1.170642094s
  Normal  Created    29s   kubelet            Created container nginx
  Normal  Started    28s   kubelet            Started container nginx

[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS         AGE   IP               NODE          NOMINATED NODE   READINESS GATES
gateway-7cdfc7ff46-p2zrb                  1/1     Running   0                8d    10.244.36.153    k8s-node1     <none>           <none>
http-774c5d79b6-ktnz8                     1/1     Running   0                57s   10.244.36.154    k8s-node1     <none>           <none>
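
The same label also lets you select these pods directly:

# List only the pods carrying disktype=ssd
kubectl get pod -l disktype=ssd -o wide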

4. Taints

# Show the taints on a node
[root@k8s-master1 ~]# kubectl describe node k8s-master1 | grep Taints

# Add a taint with key "name" and value "zhang". Effect NoSchedule means no new pods are scheduled onto this node (pods already running on it are not evicted).
# Effect PreferNoSchedule means the scheduler tries to avoid the node, but will still use it if there is no alternative.
# Effect NoExecute means new pods are not scheduled, and existing pods without a matching toleration are evicted.
[root@k8s-master1 ~]# kubectl taint node k8s-master1 name=zhang:NoSchedule

[root@k8s-master1 ~]# kubectl describe node k8s-master1 | grep Taints
Taints:             name=zhang:NoSchedule

# Remove the taint (note the trailing hyphen)
[root@k8s-master1 ~]# kubectl taint node k8s-master1 name=zhang:NoSchedule-

[root@k8s-master1 ~]# kubectl describe node k8s-master1 | grep Taints
Taints:             <none>
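
To list every node's taints at once, you can also read .spec.taints directly:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'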

5. Taint Example

[root@k8s-master1 ~]# kubectl run pod --image=nginx

# Only node2 has no taint
[root@k8s-master1 ~]# kubectl describe node | grep Taints
Taints:             node-role.kubernetes.io/master:PreferNoSchedule
Taints:             zz=yy:NoSchedule
Taints:             <none>

# Verify the pod landed on node2
[root@k8s-master1 ~]# kubectl get pod -o wide
pod                                       1/1     Running   0              50s   10.244.169.186   k8s-node2     <none>           <none>

6. Tolerations

# A tainted Node repels Pods according to the taint's effect (NoSchedule, PreferNoSchedule, or NoExecute), so to varying degrees Pods will not be scheduled onto it. By adding tolerations to a Pod spec, the Pod can tolerate matching taints and be scheduled onto (or keep running on) tainted Nodes.

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600
---
tolerations:
- operator: "Exists"
---
tolerations:
- key: "key"
  operator: "Exists"

# key, value, and effect must match the taint set on the Node.
# When operator is Exists, value is ignored; only key and effect need to match.
# tolerationSeconds applies only to taints with effect NoExecute; it sets how long the pod may keep running on the node after such a taint is added before being evicted.
# When neither key nor effect is given and operator is Exists, the toleration matches every taint (all keys, values, and effects); see the sketch after these notes.
# When effect is omitted, the toleration matches all effects for the given key.
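
A minimal sketch of a pod that tolerates every taint, as monitoring or log agents often must (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: tolerate-everything
spec:
  tolerations:
  - operator: "Exists"    # no key, no effect: matches every taint
  containers:
  - image: nginx
    name: nginx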

7. Toleration Example

# With multiple masters, to avoid wasting their resources you can soften the master taint so pods may be scheduled there when needed
[root@k8s-master1 ~]# kubectl taint nodes k8s-master1 node-role.kubernetes.io/master=:PreferNoSchedule
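
Note: adding the PreferNoSchedule taint does not replace the default node-role.kubernetes.io/master:NoSchedule taint that kubeadm applies, because taints with the same key but different effects coexist on a node. If that default taint is present (an assumption about how this cluster was set up), remove it as well:

# Remove the default kubeadm taint (trailing hyphen), leaving only PreferNoSchedule
kubectl taint nodes k8s-master1 node-role.kubernetes.io/master:NoSchedule-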

# All nodes now carry a taint
[root@k8s-master1 ~]# kubectl describe node | grep Taints
Taints:             node-role.kubernetes.io/master:PreferNoSchedule
Taints:             zz=yy:NoSchedule
Taints:             aa=bb:NoSchedule

[root@k8s-master1 ~]# vim nginx.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod
  name: pod
spec:
  tolerations:
  - key: "zz"
    operator: "Equal"
    value: "yy"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: pod
    resources: {}
  restartPolicy: Always

# Verify the pod was scheduled onto node1 (its toleration matches node1's zz=yy:NoSchedule taint)
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS         AGE   IP               NODE          NOMINATED NODE   READINESS GATES
pod                                       1/1     Running   0                52s   10.244.36.155    k8s-node1     <none>           <none>
