Kubernetes: scheduling, affinity, anti-affinity, taints, and tolerations

      • 1. Understanding Kubernetes scheduling
      • 2. nodeName
      • 3. nodeSelector
      • 4. Affinity and anti-affinity
        • 4.1 requiredDuringSchedulingIgnoredDuringExecution (hard requirement)
        • 4.2 preferredDuringSchedulingIgnoredDuringExecution (soft preference)
      • 5. Operators for matching rules
      • 6. Pod affinity and anti-affinity
        • 6.1 Pod affinity example
        • 6.2 Pod anti-affinity example
      • 7. Taints
        • 7.1 Why the master node does not participate in scheduling
        • 7.2 Adding a NoSchedule taint
        • 7.3 Setting tolerations in the PodSpec
        • 7.4 Removing taints
        • 7.5 Adding a NoExecute taint
      • 8. Configuring tolerations
        • 8.1 Tolerating all taints
      • 9. Three commands that affect pod scheduling: cordon, drain, delete

1. Understanding Kubernetes scheduling

The scheduler uses the Kubernetes watch mechanism to discover newly created Pods in the cluster that have not yet been bound to a Node, and it binds every unscheduled Pod it finds to a suitable Node to run on.

kube-scheduler is the default scheduler for Kubernetes clusters and part of the cluster control plane. It is designed so that, if you really want or need to, you can write your own scheduling component and use it in place of kube-scheduler.

Factors that go into a scheduling decision include: individual and aggregate resource requests, hardware/software/policy constraints, affinity and anti-affinity requirements, data locality, interference between workloads, and so on.

The default policies are described at:
https://kubernetes.io/zh/docs/concepts/scheduling/kube-scheduler/

Scheduling framework:
https://kubernetes.io/zh/docs/concepts/configuration/scheduling-framework/
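
As a minimal sketch, assuming a replacement scheduler named my-custom-scheduler is already deployed in the cluster (the name is illustrative), a Pod opts into it through spec.schedulerName:

apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled
spec:
  schedulerName: my-custom-scheduler  # hypothetical scheduler; omit to use the default kube-scheduler
  containers:
  - name: nginx
    image: nginx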

2. nodeName

nodeName is the simplest method of node selection constraint, but it is generally not recommended. If nodeName is specified in the PodSpec, it takes precedence over the other node selection methods.

Some limitations of using nodeName to select nodes:
If the named node does not exist, the pod will not run (and in some cases may be automatically deleted).
If the named node does not have the resources to accommodate the pod, the pod fails to schedule.
Node names in cloud environments are not always predictable or stable.

[kubeadm@server2 node]$ \vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3  # pin the pod to this node
[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
nginx                                     1/1     Running   0          7s      10.244.1.119   server3   <none>           <none>

3. nodeSelector

nodeSelector is the simplest recommended form of node selection constraint.

View node labels:

[kubeadm@server2 node]$ kubectl get nodes --show-labels
NAME      STATUS   ROLES    AGE   VERSION   LABELS
server2   Ready    master   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
server3   Ready    <none>   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4   Ready    <none>   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux

Add a label to the chosen node:

[kubeadm@server2 node]$ kubectl label nodes server3 disktype=ssd
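
As an aside, to change the label's value later, pass --overwrite; to remove the label, append a minus sign to the key:

$ kubectl label nodes server3 disktype=sata --overwrite
$ kubectl label nodes server3 disktype-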

Add a nodeSelector field to the pod spec:

[kubeadm@server2 node]$ \vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
pod/nginx created

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
nginx                                     1/1     Running   0          10s     10.244.1.120   server3   <none>           <none>

4. Affinity and anti-affinity

Affinity and anti-affinity
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the kinds of constraints you can express:
Rules can be marked "soft"/"preference" rather than hard requirements, so the pod is still scheduled even when the scheduler cannot satisfy them.
You can constrain against the labels of pods already running on a node, rather than the node's own labels, to control which pods may or may not be placed together.

Node affinity

requiredDuringSchedulingIgnoredDuringExecution 	hard requirement
preferredDuringSchedulingIgnoredDuringExecution	soft preference

IgnoredDuringExecution means that if a Node's labels change while a Pod is running, so that the affinity rule is no longer satisfied, the Pod keeps running where it is.

Reference: https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node/

4.1 requiredDuringSchedulingIgnoredDuringExecution (hard requirement)

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype  # the node must carry this label key (disk type)
            operator: In
            values:
            - ssd  # with the value ssd
[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
pod/node-affinity created

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
node-affinity                             1/1     Running   0          12s     10.244.1.121   server3   <none>           <none>

Label server4 with disktype=sata:

[kubeadm@server2 node]$ kubectl label nodes server4 disktype=sata --overwrite
node/server4 labeled

[kubeadm@server2 node]$ kubectl get nodes --show-labels
NAME      STATUS   ROLES    AGE   VERSION   LABELS
server2   Ready    master   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
server3   Ready    <none>   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4   Ready    <none>   19d   v1.18.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=sata,ingress=nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux

Add sata as another option; matching either ssd or sata is enough:

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - sata
[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
pod/node-affinity created

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
node-affinity                             1/1     Running   0          8s      10.244.1.122   server3   <none>           <none>

4.2 preferredDuringSchedulingIgnoredDuringExecution (soft preference)

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn   # must not run on server3
            values:
            - server3
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference; IgnoredDuringExecution leaves already-scheduled pods alone
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype   # disk-type label
            operator: In
            values:
            - ssd
            - sata
[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
pod/node-affinity created

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
node-affinity                             1/1     Running   0          11s    10.244.2.113   server4   <none>           <none>

5. Operators for matching rules

nodeAffinity also supports several operators for its matching rules, combined in the sketch after this list:
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Gt: the label's value is greater than the given value (not supported for pod affinity)
Lt: the label's value is less than the given value (not supported for pod affinity)
Exists: the label exists
DoesNotExist: the label does not exist
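
A sketch combining several of these operators (cpu-cores is a hypothetical numeric node label, e.g. cpu-cores=8; Gt and Lt compare label values as integers):

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # the label key must exist, whatever its value
            operator: Exists
          - key: cpu-cores     # hypothetical label; matches nodes with more than 4 cores
            operator: Gt
            values:
            - "4"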

6. Pod affinity and anti-affinity

podAffinity decides which pods a pod may be deployed alongside in the same topology domain (a topology domain is defined by node labels and can be a single host, or a group of hosts forming a cluster, zone, and so on).
podAntiAffinity decides which pods a pod must not share a topology domain with. Both deal with relationships between pods inside the Kubernetes cluster.
Inter-pod affinity and anti-affinity can be even more useful when combined with higher-level collections such as ReplicaSets, StatefulSets, and Deployments: you can easily configure a set of workloads that should live in the same defined topology (for example, the same node). A common example is using podAntiAffinity in a Deployment to spread its replicas across nodes; a sketch follows.
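
As a sketch of that pattern (the web-spread name and label are illustrative), a Deployment can point podAntiAffinity at its own label so that no two replicas share a node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-spread
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-spread
  template:
    metadata:
      labels:
        app: web-spread
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app        # match the Deployment's own pods
                operator: In
                values:
                - web-spread
            topologyKey: kubernetes.io/hostname  # one replica per hostname topology domain
      containers:
      - name: nginx
        image: nginx

With the required (hard) rule, any replica that cannot get a node of its own stays Pending; the preferred variant would relax this.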

6.1 Pod affinity example

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running   1          3d1h   10.244.2.112   server4   <none>           <none>
nginx                                     1/1     Running   0          59s    10.244.1.123   server3   <none>           <none>

[kubeadm@server2 node]$ kubectl get pod --show-labels
NAME                                      READY   STATUS    RESTARTS   AGE    LABELS
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running   1          3d1h   app=nfs-client-provisioner,pod-template-hash=55d87b5996
nginx                                     1/1     Running   0          79s    app=nginx

[kubeadm@server2 node]$ cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
     - name: "MYSQL_ROOT_PASSWORD"
       value: "westos"
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app   # find the node running a pod labeled app=nginx
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname

[kubeadm@server2 node]$ kubectl apply -f pod2.yaml 
pod/mysql created

nginx and mysql end up on the same node:

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
mysql                                     1/1     Running   0          10s     10.244.1.124   server3   <none>           <none>
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running   1          3d1h    10.244.2.112   server4   <none>           <none>
nginx                                     1/1     Running   0          6m48s   10.244.1.123   server3   <none>           <none>

6.2 Pod anti-affinity example

[kubeadm@server2 node]$ cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
     - name: "MYSQL_ROOT_PASSWORD"
       value: "westos"
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: "kubernetes.io/hostname"

[kubeadm@server2 node]$ kubectl apply -f pod2.yaml 
pod/mysql created

mysql is guaranteed not to land on the same node as nginx:

[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
mysql                                     1/1     Running   0          22s    10.244.2.114   server4   <none>           <none>
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running   1          3d1h   10.244.2.112   server4   <none>           <none>
nginx                                     1/1     Running   0          12m    10.244.1.123   server3   <none>           <none>

7. Taints

NodeAffinity is a property defined on the Pod that makes the Pod schedule onto the Nodes we want; Taints are the opposite: they let a Node refuse to run Pods, or even evict them.

A taint is a property of a Node. Once a Node is tainted, Kubernetes will not schedule Pods onto it, so Kubernetes also gives Pods a property called tolerations: if a Pod tolerates the taints on a Node, Kubernetes ignores those taints and may (but is not required to) schedule the Pod there.

Use kubectl taint to add a taint to a node:

$ kubectl taint nodes node1 key=value:NoSchedule	// add
$ kubectl describe nodes  server1 |grep Taints		// query
$ kubectl taint nodes node1 key:NoSchedule-		// remove

The [effect] can be one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
NoSchedule: pods are not scheduled onto the tainted node.
PreferNoSchedule: a soft version of NoSchedule (see the one-liner after this list).
NoExecute: once the taint takes effect, pods already running on the node that have no matching toleration are evicted immediately.
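
The soft variant uses the same syntax (key1=v1 is just an illustrative key/value pair):

$ kubectl taint nodes server3 key1=v1:PreferNoSchedule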

7.1 Why the master node does not participate in scheduling

Because the master node carries a taint:
[kubeadm@server2 node]$ kubectl describe nodes server2 |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
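
A minimal sketch of a toleration that would let a Pod tolerate this master taint (the Pod still needs a nodeSelector or nodeName to actually land on the master):

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"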

7.2 Adding a NoSchedule taint

[kubeadm@server2 node]$ kubectl delete pod nginx
pod "nginx" deleted
[kubeadm@server2 node]$ kubectl delete pod mysql
pod "mysql" deleted

Taint server3 so that it no longer participates in scheduling:
[kubeadm@server2 node]$ kubectl taint node server3 key1=v1:NoSchedule
node/server3 tainted

[kubeadm@server2 node]$ kubectl describe nodes server3 |grep Taint
Taints:             key1=v1:NoSchedule

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
deployment.apps/web-server created

Because server3 is excluded from scheduling, all replicas run on server4:
[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS              RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running             1          3d2h   10.244.2.112   server4   <none>           <none>
web-server-f89759699-646r2                1/1     Running             0          9s     10.244.2.117   server4   <none>           <none>
web-server-f89759699-lbw8d                1/1     Running             0          9s     10.244.2.116   server4   <none>           <none>
web-server-f89759699-pzn5l                0/1     ContainerCreating   0          10s    <none>         server4   <none>           <none>

7.3 Setting tolerations in the PodSpec

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:  # tolerate the key1=v1:NoSchedule taint
      - key: "key1"
        operator: "Equal"
        value: "v1"
        effect: "NoSchedule"

[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
deployment.apps/web-server created

Because the key1=v1:NoSchedule taint on server3 is tolerated, server3 can participate in scheduling again:
[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running   1          3d2h   10.244.2.112   server4   <none>           <none>
web-server-b98886d79-5hts6                1/1     Running   0          95s    10.244.2.118   server4   <none>           <none>
web-server-b98886d79-ksv74                1/1     Running   0          95s    10.244.1.126   server3   <none>           <none>
web-server-b98886d79-vcn7k                1/1     Running   0          95s    10.244.1.125   server3   <none>           <none>

7.4 Removing taints

[kubeadm@server2 node]$ kubectl taint nodes server3 key1:NoSchedule-

[kubeadm@server2 node]$ kubectl describe nodes server3 |grep Taint
Taints:             <none>

Remove the master taint from all nodes:
[kubeadm@server2 node]$ kubectl taint nodes --all node-role.kubernetes.io/master-

7.5 Adding a NoExecute taint

Taint server1:
$ kubectl taint node  server1 key1=v1:NoExecute
	node/server1 tainted

You can see the Pod on server1 being evicted:
$ kubectl get pod -o wide
NAME                          READY   STATUS              RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
web-server-86c57db685-9r5pn   1/1     Running             0          80s   10.244.1.158   server2   <none>           <none>
web-server-86c57db685-d87lc   0/1     ContainerCreating   0          7s    <none>         server2   <none>           <none>
web-server-86c57db685-gsqvt   1/1     Running             0          80s   10.244.2.143   server3   <none>           <none>
web-server-86c57db685-sk4t4   0/1     Terminating         0          80s   10.244.0.79    server1   <none>           <none>
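
A NoExecute toleration can also carry tolerationSeconds, which keeps a tolerating pod bound only for a limited time after the taint appears; a sketch matching the taint above:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoExecute"
  tolerationSeconds: 3600  # evicted an hour after the taint is applied, instead of immediately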

8. Configuring tolerations

The key, value, and effect defined in tolerations must match the taint set on the node:
If operator is Exists, value can be omitted.
If operator is Equal, the toleration's value must equal the taint's value.
If operator is not specified, it defaults to Equal.
There are also two special cases:
Omitting key while using operator Exists matches every key and value, tolerating all taints.
Omitting effect matches all effects.
Toleration examples:

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"

8.1 Tolerating all taints

Add taints to both server3 and server4:

[kubeadm@server2 node]$ kubectl taint node server4 key1=v1:NoSchedule
node/server4 tainted
[kubeadm@server2 node]$ kubectl taint node server3 key1=v1:NoSchedule
node/server3 tainted

[kubeadm@server2 node]$ kubectl describe nodes server4 | grep Taint
Taints:             key1=v1:NoSchedule
[kubeadm@server2 node]$ kubectl describe nodes server3 | grep Taint
Taints:             key1=v1:NoSchedule

[kubeadm@server2 node]$ cat pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:  # tolerate all taints
      - operator: "Exists"

[kubeadm@server2 node]$ kubectl apply -f pod.yaml 
deployment.apps/web-server created

Pods can be scheduled to both server3 and server4 because all taints are tolerated:
[kubeadm@server2 node]$ kubectl get pod -o wide
NAME                                      READY   STATUS              RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
nfs-client-provisioner-55d87b5996-8clxq   1/1     Running             1          3d3h   10.244.2.112   server4   <none>           <none>
web-server-54dd87666-284q9                0/1     ContainerCreating   0          7s     <none>         server3   <none>           <none>
web-server-54dd87666-nqd5p                0/1     ContainerCreating   0          7s     <none>         server4   <none>           <none>
web-server-54dd87666-xkshd                0/1     ContainerCreating   0          7s     <none>         server3   <none>           <none>

9. Three commands that affect pod scheduling: cordon, drain, delete

Besides taints, the commands cordon, drain, and delete also affect pod scheduling. After any of them, newly created pods are no longer scheduled to the node, but the three differ in how disruptive they are.
cordon (stop scheduling):
The least disruptive: it only marks the node SchedulingDisabled. New pods are not scheduled to the node; pods already on it are unaffected and keep serving traffic.

$ kubectl cordon server3
$ kubectl  get node
NAME      STATUS                     ROLES    AGE   VERSION
server1   Ready                      <none>   29m   v1.17.2
server2   Ready                      <none>   12d   v1.17.2
server3   Ready,SchedulingDisabled   <none>   9d    v1.17.2
$ kubectl uncordon server3 		// restore

drain (evict the node):
First evicts the pods on the node and recreates them on other nodes, then marks the node SchedulingDisabled.

$ kubectl  drain server3 --ignore-daemonsets		// skip pods managed by DaemonSets
node/server3 cordoned
evicting pod "web-1"
evicting pod "coredns-9d85f5447-mgg2k"
pod/coredns-9d85f5447-mgg2k evicted
pod/web-1 evicted
node/server3 evicted
$ kubectl uncordon server3

delete (delete the node):
The most drastic of the three: it first evicts the pods on the node so they are recreated elsewhere, then removes the node object from the master, which loses control over it. To restore the node, log into it and restart the kubelet service.

$ kubectl delete node server3

# systemctl restart kubelet		// run on the host of the deleted node; kubelet self-registration brings the node back into the cluster
