The scheduler uses the Kubernetes watch mechanism to discover Pods that have been newly created in the cluster and have not yet been bound to a Node. For every unscheduled Pod it finds, the scheduler picks a suitable Node and runs the Pod there.
kube-scheduler is the default scheduler of a Kubernetes cluster and is part of the cluster control plane. It is designed so that, if you really want or need to, you can write your own scheduling component and use it in place of kube-scheduler.
Factors taken into account when making scheduling decisions include: individual and collective resource requests, hardware/software/policy constraints, affinity and anti-affinity requirements, data locality, interference between workloads, and so on.
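As a small illustration of the previous point (not part of the original lab): a Pod chooses its scheduler through the spec.schedulerName field, and the default is default-scheduler. A minimal sketch, assuming a custom scheduler registered under the hypothetical name my-scheduler:
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled
spec:
  schedulerName: my-scheduler          # hypothetical custom scheduler; omit this field to use the default kube-scheduler
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx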
nodeName is the simplest way to pin a Pod to a specific node: when spec.nodeName is set, the scheduler is bypassed and the kubelet on the named node runs the Pod directly. Create an example:
vim nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
  nodeName: server3
kubectl apply -f nodename.yaml
kubectl get pod -o wide
nodeSelector is the simplest recommended form of node selection constraint.
It matches nodes by their labels.
Create an example:
kubectl get nodes --show-labels    # view the current labels on each node in the cluster
kubectl label nodes server2 disktype=ssd
kubectl get nodes --show-labels
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
kubectl apply -f pod.yaml
kubectl get pod -o wide
Note: if no node in the cluster carries a label matching the one in the YAML file, the Pod will stay in the Pending state after it is created.
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: scsi
kubectl get nodes --show-labels
kubectl apply -f pod.yaml
kubectl get pod -o wide
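To see why this Pod stays Pending you can inspect its events; adding a matching label to some node would unblock it. The second command below is shown only as an illustration (it is not applied here, so the labels used in the later steps stay unchanged):
kubectl describe pod nginx                       # the Events section records the scheduling failure
# illustration only (not applied in this lab):
# kubectl label nodes <node-name> disktype=scsi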
nodeSelector provides a very simple way to constrain Pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the kinds of constraints you can express.
You can indicate that a rule is a "soft" preference rather than a hard requirement, so that if the scheduler cannot satisfy it, the Pod is still scheduled.
You can constrain against labels of Pods already running on a node, rather than the node's own labels, which lets you control which Pods may or may not be co-located.
requiredDuringSchedulingIgnoredDuringExecution: the rule must be satisfied.
preferredDuringSchedulingIgnoredDuringExecution: the rule is preferred but not mandatory.
IgnoredDuringExecution means that if a Node's labels change while the Pod is running, so that the affinity rule is no longer satisfied, the Pod keeps running on that node.
Create an example:
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
kubectl get nodes --show-labels
kubectl apply -f pod.yaml
kubectl get pod -o wide
Of course we can also list several label values: as long as at least one node in the cluster carries a matching label, the Pod can be scheduled successfully.
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - sata
kubectl label nodes server3 disktype=sata
kubectl get nodes --show-labels
kubectl apply -f pod.yaml
kubectl get pod -o wide
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - server2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
kubectl get nodes --show-labels
kubectl apply -f pod.yaml
kubectl get pod -o wide
Note: this YAML file contains both a hard (required) rule and a soft (preferred) rule. The hard rule is that the Pod must not be placed on node server2; the soft rule is to prefer a node carrying the disktype=ssd label. In our current cluster only server2 has the ssd label, yet the hard rule forbids server2, and the result shows the Pod was scheduled onto server3. In other words, the required rule has higher priority than the preferred rule: the hard requirement is satisfied first, and only then does the scheduler try to honour the preference.
nodeAffinity also supports several operators for its matching rules, such as: In, NotIn, Exists, DoesNotExist, Gt and Lt (NotIn and DoesNotExist can be used to express node anti-affinity).
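A minimal sketch (not taken from this lab) combining two of these operators; the gpu-count label and the Pod name node-affinity-ops are invented for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-ops
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu-count            # hypothetical numeric node label
            operator: Gt              # the label value must be greater than "2"
            values:
            - "2"
          - key: disktype
            operator: DoesNotExist    # the node must NOT carry a disktype label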
Create a Pod affinity example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
kubectl apply -f pod.yaml
kubectl get pod -o wide
kubectl get pod --show-labels
vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: reg.westos.org:5000/mysql
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
kubectl apply -f pod2.yaml
kubectl get pod -o wide
Create a Pod anti-affinity example, re-using the same nginx Pod as the target:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: reg.westos.org:5000/nginx
kubectl apply -f pod.yaml
kubectl get pod -o wide
kubectl get pod --show-labels
vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: reg.westos.org:5000/mysql
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
  affinity:
    podAntiAffinity:        # here we use the anti-affinity field instead
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
kubectl apply -f pod2.yaml
kubectl get pod -o wide
NodeAffinity is a property defined on a Pod that lets the Pod be scheduled onto the Nodes we want. Taints are the exact opposite: they let a Node refuse to run Pods, and can even evict Pods that are already running there.
Taints are a property of a Node. Once a taint is set, Kubernetes will not schedule Pods onto that Node. To get around this, a Pod can declare Tolerations: if the Pod tolerates the taints on a Node, Kubernetes ignores those taints and may (but is not required to) schedule the Pod onto that Node.
You can add a taint to a node with the kubectl taint command:
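The general form is shown below; node1, key1 and v1 are placeholders, and appending a trailing '-' to the same taint removes it again:
kubectl taint nodes node1 key1=v1:NoSchedule      # add the taint key1=v1 with effect NoSchedule
kubectl taint nodes node1 key1=v1:NoSchedule-     # the trailing '-' removes that taint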
Here [effect] can take one of the values [ NoSchedule | PreferNoSchedule | NoExecute ]: NoSchedule means Pods that do not tolerate the taint will not be scheduled onto the node; PreferNoSchedule means the scheduler tries to avoid the node but this is not guaranteed; NoExecute additionally evicts Pods already running on the node that do not tolerate the taint.
vim pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
kubectl apply -f pod.yaml
kubectl get pod -o wide
kubectl taint node server1 node-role.kubernetes.io/master:NoSchedule-    # remove the master taint from server1
kubectl describe nodes server1 | grep Taints    # check the taints on server1
kubectl apply -f pod.yaml
kubectl get pod -o wide
kubectl taint node server3 key1=v1:NoSchedule
kubectl describe nodes server3 | grep Taints
vim pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
kubectl apply -f pod.yaml
kubectl get pod -o wide
vim pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
      tolerations:        # add tolerations to the Pod template so it tolerates the tainted node, i.e. Pods may also be scheduled onto it
      - key: "key1"
        operator: "Equal"
        value: "v1"
        effect: "NoSchedule"
kubectl apply -f pod.yaml
kubectl get pod -o wide
kubectl taint node server3 key1=v1:NoSchedule-
kubectl get pod -o wide
kubectl taint node server3 key1=v1:NoExecute
kubectl describe node server3 | grep Taint
kubectl get pod -o wide
Note: when a NoExecute taint is added to a node, every Pod already running on that node that does not tolerate the taint is evicted immediately (all such Pods, regardless of when they were created) and, if managed by a controller, is recreated on other nodes.
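As a side note (not used in this lab): for the NoExecute effect a toleration may also specify tolerationSeconds, which lets the Pod stay on the tainted node for a limited time before it is evicted. A minimal sketch of the tolerations part of a Pod spec:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoExecute"
  tolerationSeconds: 60      # the Pod keeps running for up to 60s after the taint appears, then it is evicted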
vim pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 6        # scale to 6 replicas so 3 new Pods are created; check whether Pods land on the tainted node
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
      tolerations:        # add tolerations to the Pod template so it tolerates the tainted node, i.e. Pods may also be scheduled onto it
      - key: "key1"
        operator: "Equal"
        value: "v1"
        effect: "NoExecute"
kubectl apply -f pod.yaml
kubectl get pod -o wide
Note:
The key, value and effect defined under tolerations must match the taint that was set on the node.
There are also two special cases: if key is left empty and operator is set to Exists, the toleration matches every key, value and effect, i.e. it tolerates all taints; if effect is left empty, the toleration matches all effects for the given key.
Given the behaviour above, if we now want to tolerate every taint in the cluster, how should the operator be set?
kubectl taint node server2 key1=v1:NoSchedule    # add a taint to server2 as well
kubectl describe node server2 | grep Taint
kubectl describe node server3 | grep Taint
vim pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
      tolerations:
      - operator: "Exists"
kubectl apply -f pod.yaml
kubectl get pod -o wide
Other commands that affect Pod scheduling are cordon, drain and delete. After any of them, newly created Pods will no longer be scheduled onto that node, but the three differ in how disruptive they are.
cordon, stop scheduling: the node is marked SchedulingDisabled, while Pods already running on it are left alone:
kubectl cordon server3
kubectl get nodes
kubectl uncordon server3
drain, evict the node: first cordons the node, then evicts the Pods running on it so they are recreated elsewhere:
kubectl get pod -o wide
kubectl drain server3 --ignore-daemonsets
kubectl get nodes
kubectl get pod -o wide
delete, remove the node: deletes the Node object from the cluster:
kubectl delete node server3
kubectl get nodes
systemctl restart kubelet    # to bring the node back, restart kubelet on it so that it re-registers with the cluster