https://kubernetes.io/zh/docs/concepts/scheduling-eviction/kube-scheduler/
[kubeadm@server2 scheduler]$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3   # binds the Pod directly to this node, bypassing the scheduler
With nodeName set, the Pod can only run on server3:
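A quick way to verify the binding, assuming the manifest above is saved as pod.yaml:

```shell
# nodeName skips the scheduler entirely; if server3 does not exist or
# lacks resources the Pod simply stays Pending or fails, it is never
# rescheduled elsewhere.
kubectl apply -f pod.yaml
kubectl get pod nginx -o wide   # the NODE column should show server3
```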
kubectl label nodes <node name> disktype=ssd
[kubeadm@server2 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Label server3 first so the selector can match it.
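The steps above can be sketched as (assuming server3 is the node to label):

```shell
# Label the node, create the Pod, and confirm where it landed.
kubectl label nodes server3 disktype=ssd
kubectl apply -f pod2.yaml
kubectl get pod nginx -o wide
# remove the label later with: kubectl label nodes server3 disktype-
```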
requiredDuringSchedulingIgnoredDuringExecution: the rule must be satisfied (hard requirement)
preferredDuringSchedulingIgnoredDuringExecution: the scheduler prefers, but does not require, the rule (soft preference)
https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node.
Hard requirement:
[kubeadm@server2 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
Label server4 as well, then update the manifest:
[kubeadm@server2 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - sata
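With both values allowed, labeling server4 makes either node a valid target; a sketch of the steps (label values are from the manifest above):

```shell
# Give server4 a matching label, recreate the Pod, and check its node.
kubectl label nodes server4 disktype=sata
kubectl delete pod node-affinity --ignore-not-found
kubectl apply -f pod2.yaml
kubectl get pod node-affinity -o wide   # may land on server3 or server4
```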
Soft preference:
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - server3
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
server3 carries the disktype=ssd label.
The Pod still lands on server4 and is never scheduled to server3.
Summary: a node excluded by the hard requirement is never chosen, even if it satisfies the soft preference.
First label the nginx Pod with app=nginx; the mysql Pod below uses pod affinity to follow it:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
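Assuming an nginx Pod labeled app=nginx is already running, the mysql Pod should be placed next to it; this can be checked with:

```shell
# With kubernetes.io/hostname as the topologyKey, "same topology domain"
# means "same node", so both Pods should show the same NODE column.
kubectl get pod -o wide -l 'app in (nginx, mysql)'
```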
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  affinity:
    podAntiAffinity:   # the only difference from the previous manifest
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
Pod affinity and anti-affinity also come in hard and soft variants.
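For example, a soft variant of the anti-affinity above might look like this (a sketch; the weight is illustrative):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
```

The scheduler then merely prefers nodes without an app=nginx Pod, instead of refusing them outright.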
kubectl taint nodes nodename key=value:NoSchedule    # create a taint
kubectl describe nodes nodename | grep Taint         # query taints
kubectl taint nodes nodename key:NoSchedule-         # delete the taint
server2 does not take part in scheduling because it carries a taint.
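On a kubeadm cluster the control-plane node usually carries a built-in taint; the exact key varies by version, but the check looks like:

```shell
kubectl describe nodes server2 | grep Taint
# typical output on older kubeadm clusters:
#   Taints: node-role.kubernetes.io/master:NoSchedule
```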
Create a Deployment controller:
[kubeadm@server2 scheduler]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
server2 does not take part in scheduling.
To let server2 participate:
Remove the taint from server2, and pods can be rescheduled onto it.
Normally server2 is still avoided because it already runs many pods; only when the total pod count grows large enough will it receive some.
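On a kubeadm cluster the control-plane taint is typically node-role.kubernetes.io/master:NoSchedule (the key varies by version); removing it lets pods schedule onto server2, and re-adding it restores the default:

```shell
kubectl taint nodes server2 node-role.kubernetes.io/master:NoSchedule-   # remove
kubectl taint nodes server2 node-role.kubernetes.io/master:NoSchedule    # restore
```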
Taint server3 (key1=v1:NoSchedule), then add a matching toleration to the Deployment:
[kubeadm@server2 scheduler]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "v1"
        effect: "NoSchedule"
With the toleration in place, server3 takes part in scheduling again.
Add a NoExecute (eviction) taint to server3: the pods already running on server3 are evicted and rescheduled onto server4.
Change the toleration's effect in the manifest:
        effect: "NoExecute"
server3 can be scheduled to again, and all pods land on server3 (it is under the least load).
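A NoExecute toleration can also bound how long a Pod stays on a tainted node via tolerationSeconds (a sketch; the 300 is illustrative):

```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoExecute"
  tolerationSeconds: 300   # evicted 300s after the taint appears
```

Without tolerationSeconds the Pod tolerates the taint indefinitely.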
Tolerate all taints (a toleration with only operator: Exists and no key matches every taint):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 6
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - operator: "Exists"
Other commands that affect pod scheduling: cordon, drain, and delete. After any of them, newly created pods are no longer scheduled to the node; the commands differ in how forceful they are.
Remove the taints from all nodes (typically done when a cluster is first set up).
cordon stops scheduling:
kubectl cordon <node name>
Existing pods keep running, but new pods are no longer scheduled to the node.
Resume scheduling:
kubectl uncordon <node name>
drain evicts the node's pods:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
delete removes the node from the cluster entirely:
kubectl delete node <node name>