Kubernetes scheduling can be broken down into five main mechanisms: nodeSelector, node affinity (nodeAffinity), inter-pod affinity (podAffinity), pod anti-affinity (podAntiAffinity), and taints with tolerations.
Cluster setup guide: install Kubernetes with yum
Building the cluster with Ansible: ansible-based Kubernetes setup
Prepare at least three machines with a working Kubernetes cluster to test scheduling:
Node name | IP |
---|---|
k8s-master | 192.168.116.134 |
k8s-node1 | 192.168.116.135 |
k8s-node2 | 192.168.116.136 |
Official nodeSelector documentation: nodeSelector
Run the following on the master node.
kubectl get nodes --show-labels    view node status and labels
kubectl label nodes k8s-node1 node-labels=nginx-test-1    add a label to k8s-node1
k8s-node1 is the node name, i.e. the value shown in the NAME column
node-labels=nginx-test-1 is the label itself, written as key=value; the pod references the same key/value pair when it is created
kubectl get nodes --show-labels    confirm the label has been applied
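As a quick check, the label selector flag lists only the nodes that carry the new label:
kubectl get nodes -l node-labels=nginx-test-1    show just the nodes labeled node-labels=nginx-test-1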
vi nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      # the two lines below pin the pod to nodes carrying this label
      nodeSelector:
        node-labels: nginx-test-1
kubectl apply -f nginx.yml    create the deployment
kubectl get pod -n default -o wide    the pod is scheduled onto k8s-node1
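While the deployment is still running, the constraint can also be read back from the pod itself; kubectl describe prints a Node-Selectors field (an optional verification step):
kubectl describe pod -l app=nginx | grep -i node-selector    should show node-labels=nginx-test-1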
kubectl delete -f nginx.yml    delete the deployment
Official node affinity documentation: K8S-nodeAffinity
nodeAffinity (node affinity):
Characteristics: nodeAffinity supports two kinds of rules, requiredDuringSchedulingIgnoredDuringExecution (a hard requirement) and preferredDuringSchedulingIgnoredDuringExecution (a soft preference).
nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution:
This works much like the nodeSelector approach above: the pod can only be created on nodes that match the rule.
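Besides the In operator used in the example below, matchExpressions also accepts NotIn, Exists, DoesNotExist, Gt and Lt. A minimal sketch of a term that instead keeps pods away from the labeled node (reusing the qinhe label from this walkthrough):
            nodeSelectorTerms:
            - matchExpressions:
              - key: qinhe
                operator: NotIn
                values:
                - required-1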
Run the following on the master node.
kubectl label nodes k8s-node1 node-labels-    remove the label created earlier
kubectl label nodes k8s-node1 qinhe=required-1    add a new label to k8s-node1
kubectl get nodes --show-labels    check the labels
vi nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      # the affinity block below selects which nodes the pod may be created on
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: qinhe        # change this to the label key (the part left of the equals sign)
                operator: In
                values:
                - required-1      # change this to the label value (the part right of the equals sign)
kubectl apply -f nginx.yml    create the deployment
kubectl get pod -n default -o wide    the pod is scheduled onto k8s-node1
kubectl delete -f nginx.yml    delete the deployment
nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution:
With this rule, new pods are preferentially created on the nodes whose matching preference carries the higher weight (see the weighted sketch after this example).
Run the following on the master node.
kubectl label nodes k8s-node1 qinhe-    remove the label created earlier
Give the two worker nodes the same label key with different values, so each node forms its own group.
kubectl label nodes k8s-node1 qinhe=preferred-1    label k8s-node1
kubectl label nodes k8s-node2 qinhe=preferred-2    label k8s-node2
kubectl get nodes --show-labels    check the labels
On the master, create the deployment manifest to test scheduling.
vi nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: qinhe
                operator: In
                values:
                - preferred-2
kubectl apply -f nginx.yml    create the deployment
kubectl get pod -n default -o wide    the pod is scheduled onto k8s-node2
kubectl delete -f nginx.yml    delete the deployment
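When several preference terms are defined, the scheduler sums the weights of the terms that match each node and favors the node with the highest total. A minimal sketch (assuming the qinhe=preferred-1 and qinhe=preferred-2 labels from above) in which k8s-node2 wins because its matching term carries the larger weight:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            preference:
              matchExpressions:
              - key: qinhe
                operator: In
                values:
                - preferred-1
          - weight: 50
            preference:
              matchExpressions:
              - key: qinhe
                operator: In
                values:
                - preferred-2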
Official inter-pod affinity and anti-affinity documentation: inter-pod-affinity and pod-anti-affinity
Characteristics: there are two rule types:
requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution
Goal of the experiment: schedule a new pod onto the same node as an existing pod.
How it works: pod affinity places a new pod based on the labels of pods that are already running, within the topology domain defined by topologyKey; anti-affinity keeps matching pods out of that domain.
Run the following on the master node.
kubectl label nodes k8s-node2 qinhe-    remove the label created earlier
Give both nodes the same label so that they belong to the same topology group:
kubectl label nodes k8s-node1 k8s/zone=pod-qinhe
kubectl label nodes k8s-node2 k8s/zone=pod-qinhe
kubectl get nodes --show-labels    check the labels
vi nginx-lab.yml    create the existing (old) pod
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    qinhe: nginx-lab    # the pod's label key/value pair; the matchLabels and template labels below must use the same pair
  name: nginx-lab-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      qinhe: nginx-lab
  template:
    metadata:
      labels:
        qinhe: nginx-lab
    spec:
      containers:
      - name: nginx-lab
        image: docker.io/library/nginx:1.18.0-alpine
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -n default -o wide    check which node the pod landed on
kubectl get pod --show-labels    check the pod's labels
Now create a new pod that references the old pod's label, so the new pod is scheduled into the same topology domain (here, the same node) as the old pod.
vi nginx-new.yml    create the new pod
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-new
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-new
  template:
    metadata:
      labels:
        app: nginx-new
    spec:
      containers:
      - name: nginx-new
        image: docker.io/library/nginx:1.18.0-alpine
      affinity:
        podAffinity:    # this is affinity; note the difference from the anti-affinity example below
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: qinhe         # label key of the existing pod
                operator: In
                values:
                - nginx-lab        # label value of the existing pod
            topologyKey: k8s/zone  # the custom node-group label; the default kubernetes.io/hostname label also works
kubectl apply -f nginx-new.yml    create the new pod
kubectl get pod -n default -o wide    check which node it landed on
kubectl get pod --show-labels    check the labels
kubectl delete -f nginx-new.yml    delete the deployment
Note on anti-affinity: podAntiAffinity uses the same fields as podAffinity, but pods matching the labelSelector repel the new pod within the domain defined by topologyKey.
Goal of the experiment: prevent the newly created pod from landing on the same node as the specified existing pod.
First create an ordinary pod and let the scheduler pick its node.
vi nginx-lab.yml    create the existing (old) pod
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    qinhe: nginx-lab    # the pod's label key/value pair; the matchLabels and template labels below must use the same pair
  name: nginx-lab-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      qinhe: nginx-lab
  template:
    metadata:
      labels:
        qinhe: nginx-lab
    spec:
      containers:
      - name: nginx-lab
        image: docker.io/library/nginx:1.18.0-alpine
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -n default -o wide    the pod is created on k8s-node2
kubectl get pod --show-labels    check the pod's labels
Create a new pod that references the old pod's label so that the two pods cannot share a node.
Remove the node labels defined earlier:
kubectl label nodes k8s-node1 k8s/zone-
kubectl label nodes k8s-node2 k8s/zone-
Confirm that both nodes carry the built-in kubernetes.io/hostname label:
kubectl get nodes --show-labels
vi nginx-antiaffinity.yml    create the anti-affinity deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-antiaffinity
  name: nginx-antiaffinity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-antiaffinity
  template:
    metadata:
      labels:
        app: nginx-antiaffinity
    spec:
      containers:
      - name: nginx-antiaffinity
        image: docker.io/library/nginx:1.18.0-alpine
      affinity:
        podAntiAffinity:    # note: podAntiAffinity, not the podAffinity used above
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: qinhe         # label key of the existing pod
                operator: In
                values:
                - nginx-lab        # label value of the existing pod
            topologyKey: kubernetes.io/hostname    # use the built-in kubernetes.io/hostname label here so that the excluded scope is a single node
kubectl apply -f nginx-antiaffinity.yml    create the new pod
kubectl get pod -n default -o wide    the new pod is created on k8s-node1
kubectl delete -f nginx-antiaffinity.yml    delete the deployment
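Anti-affinity also has a soft form. A minimal sketch (reusing the app: nginx-antiaffinity label above) that only prefers to spread replicas across nodes instead of refusing to schedule them together:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx-antiaffinity
              topologyKey: kubernetes.io/hostname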
Official documentation: K8S-taint-and-toleration
Purpose: to tighten cluster management, part of the cluster can be set aside as core (or isolated) nodes so that new pods preferably, or strictly, do not get created there. For example, the master node of a Kubernetes cluster normally cannot run ordinary pods; that is exactly what a taint does.
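This can be verified directly; the exact key depends on the cluster version (older releases use node-role.kubernetes.io/master, newer ones node-role.kubernetes.io/control-plane):
kubectl describe node k8s-master | grep Taints    typically shows node-role.kubernetes.io/master:NoSchedule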
How it works: a taint is attached to a node and repels pods; only pods whose tolerations match the taint can be scheduled onto, or keep running on, that node.
Built-in taints:
When certain conditions are detected on a node, the node controller automatically adds a taint to it.
When pods are to be evicted from a node, the node controller or the kubelet adds the relevant taints with the NoExecute effect. Once the abnormal condition clears, the kubelet or node controller removes those taints again.
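Some of the built-in taint keys added this way (from the upstream documentation):
node.kubernetes.io/not-ready
node.kubernetes.io/unreachable
node.kubernetes.io/memory-pressure
node.kubernetes.io/disk-pressure
node.kubernetes.io/pid-pressure
node.kubernetes.io/network-unavailable
node.kubernetes.io/unschedulable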
Taint usage: kubectl taint nodes <node-name> <key>=<value>:<effect>
The effect field has three possible values: NoSchedule (new pods that do not tolerate the taint are not scheduled onto the node), PreferNoSchedule (the scheduler tries to avoid the node but may still use it), and NoExecute (new pods are kept off the node and running pods that do not tolerate the taint are evicted).
NoSchedule: because the node's taint effect is NoSchedule, newly created pods cannot be scheduled onto that node.
vi nginx-lab.yml    create a pod without pinning it to any node
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    qinhe: nginx-lab
  name: nginx-lab-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      qinhe: nginx-lab
  template:
    metadata:
      labels:
        qinhe: nginx-lab
    spec:
      containers:
      - name: nginx-lab
        image: docker.io/library/nginx:1.18.0-alpine
Taint the k8s-node1 node:
kubectl taint nodes k8s-node1 k8s/tain1=exists:NoSchedule
kubectl describe nodes k8s-node1 |grep Taints    view the node's taints
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -o wide    the pod only lands on k8s-node2, the untainted node
kubectl delete -f nginx-lab.yml    delete the deployment
After k8s-node2 is tainted as well, the pod stays in the Pending state.
kubectl taint nodes k8s-node2 k8s/tain1=exists:NoSchedule    taint k8s-node2 too
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -o wide    check where the pod went
kubectl describe pod nginx-lab-deployment-84cfcc6c9b-qvhk4    check the pod's events
The warning below means no node is available for the pod; all three nodes (including the master) carry taints:
Warning FailedScheduling 31s (x2 over 31s) default-scheduler 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
PreferNoSchedule: because this effect is only a preference, when k8s-node1 is the only node left the pod can still be created on the tainted node.
Remove the taints added above, then power off k8s-node2.
kubectl taint nodes k8s-node1 k8s/tain1=exists:NoSchedule-    remove the taint
kubectl taint nodes k8s-node2 k8s/tain1=exists:NoSchedule-    remove the taint
kubectl describe nodes k8s-node1 |grep Taints    check the taints
kubectl describe nodes k8s-node2 |grep Taints    check the taints
node.kubernetes.io/unreachable:NoExecute is a built-in taint indicating that the node is unreachable and cannot be scheduled.
kubectl get nodes    k8s-node2 shows NotReady because it has been shut down
vi nginx-lab.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    qinhe: nginx-lab
  name: nginx-lab-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      qinhe: nginx-lab
  template:
    metadata:
      labels:
        qinhe: nginx-lab
    spec:
      containers:
      - name: nginx-lab
        image: docker.io/library/nginx:1.18.0-alpine
Taint k8s-node1 with PreferNoSchedule:
kubectl taint nodes k8s-node1 k8s/tain1=exists:PreferNoSchedule
kubectl describe nodes k8s-node1 |grep Taints    check the taint
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -o wide    the pod is created on k8s-node1 despite the taint
NoExecute: with this effect, a pod that already exists on k8s-node2 is evicted and rebuilt on another node.
Remove the taints from both nodes (and bring k8s-node2 back up), then create the pod.
vi nginx-lab.yml    create the deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    qinhe: nginx-lab
  name: nginx-lab-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      qinhe: nginx-lab
  template:
    metadata:
      labels:
        qinhe: nginx-lab
    spec:
      containers:
      - name: nginx-lab
        image: docker.io/library/nginx:1.18.0-alpine
kubectl apply -f nginx-lab.yml    create the deployment
kubectl get pod -o wide    the pod is created on k8s-node2
kubectl taint nodes k8s-node2 k8s/tain1=exists:NoExecute    taint k8s-node2 with NoExecute
kubectl get pod -o wide    the pod is evicted and rebuilt on k8s-node1
kubectl delete -f nginx-lab.yml    delete the deployment
Taints, as shown above, keep new pods away from tainted nodes; tolerations exist so that pods can still be created on tainted nodes.
Toleration usage: tolerations are declared in the pod spec under spec.tolerations, with key, operator (Equal or Exists), value and effect fields that are matched against a node's taints.
Because the whole point of a taint is to keep pods off the node, tolerating a taint does not force the pod onto it: while untainted nodes exist, the pod tends to be created on those untainted nodes first.
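For reference, a toleration with operator Exists and no key at all tolerates every taint; a minimal sketch of such a blanket toleration (this is how DaemonSet-style pods are often allowed onto any node):
      tolerations:
      - operator: "Exists"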
Taint k8s-node1 and leave k8s-node2 untainted:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint1.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-antiaffinity
  name: nginx-antiaffinity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-antiaffinity
  template:
    metadata:
      labels:
        app: nginx-antiaffinity
    spec:
      containers:
      - name: nginx-antiaffinity
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Equal"
        value: "v1"
        effect: "NoSchedule"
kubectl apply -f nginx-taint1.yml    create the pod
kubectl get pod -o wide    the pod is still created on k8s-node2, the untainted node
Note on effect: when a toleration specifies an effect it must equal the taint's effect; when effect is left empty the toleration matches any effect for that key.
Equal mode 1: with operator Equal, the toleration matches a taint only when key, value and effect are all equal, so the pod can only be scheduled onto the tainted node whose taint matches all three fields.
Taint both node1 and node2:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoSchedule
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Equal"    # in Equal mode the key, value and effect all match the node's taint
        value: "v1"
        effect: "NoSchedule"
kubectl apply -f nginx-taint1.yml    create the pod
Because the operator requires an exact match, the pod is created on k8s-node1, the only node whose taint it tolerates.
kubectl get pod -o wide
Equal mode 2: with operator Equal and an empty effect in the pod spec, the toleration matches any effect, so the pod is scheduled onto the node whose taint key and value equal the toleration's.
Taint the nodes:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoExecute
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint1.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"   # the key must match the node's taint key
        operator: "Equal"
        value: "v1"        # the value must match the node's taint value
        effect: ""         # may be left empty; an empty effect matches any effect
kubectl apply -f nginx-taint1.yml    create the pod
With k8s-node1's taint key and value specified, no effect is needed for the pod to be created on node1.
kubectl get pod -o wide
kubectl delete -f nginx-taint1.yml
Change the toleration to k8s-node2's taint key and value, keep effect empty, and the pod is created on k8s-node2.
vi nginx-taint1.yml    edit the pod manifest
...(rest of the file unchanged)
      tolerations:
      - key: "k8s/tain2"
        value: "v2"
        operator: "Equal"
        effect: ""
With k8s-node2's taint key and value specified, again no effect is needed for the pod to be created on node2.
kubectl apply -f nginx-taint1.yml
kubectl get pod -o wide
kubectl delete -f nginx-taint1.yml
Equal mode 3: with operator Equal and an empty effect, when several tainted nodes share the same key, value and effect, the pod may be scheduled onto any of them.
Taint both nodes with identical key, value and effect:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:NoSchedule
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint1.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Equal"
        value: "v1"
        effect: ""
kubectl apply -f nginx-taint1.yml
kubectl get pod -o wide
kubectl delete -f nginx-taint1.yml
Equal mode 4: with operator Equal and an empty effect, when the tainted nodes share the same key and value but carry different effects, the pod may still be scheduled onto any of them, because the empty effect matches both.
Taint the nodes:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:NoExecute
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint1.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Equal"
        value: "v1"
        effect: ""    # as long as this is empty, any node whose taint key and value match can host the pod
kubectl apply -f nginx-taint1.yml
kubectl get pod -o wide
kubectl delete -f nginx-taint1.yml
Exists mode 1: when operator is Exists, no value may be set; the toleration matches a taint when both the key and the effect match.
Taint both nodes with different keys, values and effects:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoExecute
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"      # matches k8s-node1's taint key
        operator: "Exists"
        effect: "NoSchedule"  # matches k8s-node1's taint effect
kubectl apply -f nginx-taint.yml    create the pod
kubectl get pod -o wide
kubectl delete -f nginx-taint.yml    delete the deployment
Taint both nodes with the same key and value but different effects:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:NoExecute
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-taint.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"     # first matching condition
        operator: "Exists"
        effect: "NoExecute"  # second matching condition
k8s-node1 and k8s-node2 now share the same taint key but have different effects.
The pod is created on k8s-node2.
This shows that both conditions are checked: when the keys are identical, the effect becomes the deciding (second) match condition.
kubectl apply -f nginx-taint.yml    create the pod
kubectl get pod -o wide
kubectl delete -f nginx-taint.yml    delete the deployment
Exists mode 2: when operator is Exists and the key is left empty, the toleration matches any taint key; only the effect has to match.
Taint both nodes with the same key and value but different effects:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:NoExecute
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
Create the pod:
vi nginx-taint2.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: ""
        operator: "Exists"
        effect: "NoSchedule"    # the pod only goes to a tainted node whose taint effect equals this value
kubectl apply -f nginx-taint2.yml    create the pod
Because the file sets effect: "NoSchedule", which matches the effect of k8s-node1's taint, the pod is created on k8s-node1.
kubectl get pod -o wide
kubectl delete -f nginx-taint2.yml    delete the deployment
Taint both nodes with different keys and values but the same effect:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoSchedule
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
With the pod file unchanged, the pod is created on k8s-node2, which shows that when no key is set, any tainted node with a matching effect can host the pod.
kubectl apply -f nginx-taint2.yml
kubectl get pod -o wide
Exists mode 3: when operator is Exists and the effect is left empty, the toleration matches any effect; only the key has to match.
Taint both nodes with different keys and values but the same effect:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoSchedule
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
Create the pod:
vi nginx-taint3.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Exists"
        effect: ""
Because the file sets key: "k8s/tain1", the pod is created on k8s-node1, whose taint key is also k8s/tain1.
kubectl apply -f nginx-taint3.yml    create the pod
kubectl get pod -o wide
kubectl delete -f nginx-taint3.yml    delete the deployment
Taint both nodes with the same key and value but different effects:
kubectl taint nodes k8s-node1 k8s/tain1=v1:NoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:NoExecute
Check the taints:
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
With the pod file unchanged, the pod is created on k8s-node2, which shows that when no effect is set, any tainted node whose key matches can host the pod.
kubectl apply -f nginx-taint3.yml
kubectl get pod -o wide
Note on tolerationSeconds: it only takes effect together with the NoExecute effect; it defines how long a pod that tolerates the taint may keep running on the node after the taint is added, after which the pod is evicted.
Taint the nodes:
kubectl taint nodes k8s-node1 k8s/tain1=v1:PreferNoSchedule
kubectl taint nodes k8s-node2 k8s/tain1=v1:PreferNoSchedule
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
vi nginx-toler.yml    create the pod manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.18.0-alpine
      tolerations:
      - key: "k8s/tain1"
        operator: "Equal"
        value: "v1"
        effect: "NoExecute"
        tolerationSeconds: 15    # unit: seconds
kubectl apply -f nginx-toler.yml    create the pod
The pod is created on k8s-node2.
Note its age and its name, nginx-df4c8c5-gq9sk.
kubectl get pod -o wide
Remove k8s-node2's taint; the pod is not rebuilt just because the taint has gone.
kubectl taint nodes k8s-node2 k8s/tain1=v1:PreferNoSchedule-
kubectl describe node k8s-node2 |grep Taints
kubectl get pod -o wide
Remove k8s-node1's taint as well, so that a new pod will prefer k8s-node1.
kubectl taint nodes k8s-node1 k8s/tain1=v1:PreferNoSchedule-
Give k8s-node2 a new taint with a different key and value and the NoExecute effect.
Because the new taint's key and value do not match the pod's toleration, the pod that was running on k8s-node2 is evicted and rebuilt on another node.
kubectl taint nodes k8s-node2 k8s/tain2=v2:NoExecute
kubectl describe node k8s-node1 |grep Taints
kubectl describe node k8s-node2 |grep Taints
Since k8s-node1 now carries no taint, the replacement pod (nginx-df4c8c5-zrpfv) is created there first.
The old pod on k8s-node2 (nginx-df4c8c5-gq9sk), whose toleration sets tolerationSeconds to 15,
therefore keeps running for the configured 15 seconds before it is deleted.
kubectl get pod -o wide
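Related to tolerationSeconds, the default admission configuration normally injects two NoExecute tolerations with tolerationSeconds: 300 into every pod, which is why pods on a NotReady or unreachable node are usually evicted only after about five minutes; kubectl get pod <pod-name> -o yaml typically shows something like:
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300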