Official docs: Concepts | Scheduling, Preemption and Eviction | Kubernetes Scheduler
kube-scheduler is the default scheduler for Kubernetes clusters and is part of the cluster control plane. It is designed so that, if you really want or need to, you can write your own scheduling component and use it in place of kube-scheduler.
For every newly created Pod, and for every Pod that has not yet been scheduled, kube-scheduler picks an optimal Node to run it on. However, each container in a Pod has its own resource requirements, and the Pod as a whole has requirements too, so before a Pod can be placed on a Node, the cluster's Nodes have to be filtered against these specific scheduling requirements.
In a cluster, the Nodes that satisfy a Pod's scheduling requirements are called feasible (schedulable) nodes. If no Node satisfies the Pod's resource requests, the Pod stays unscheduled until the scheduler can find a suitable Node.
The scheduler first finds all feasible nodes for the Pod, then scores them with a set of scoring functions and picks the Node with the highest score to run the Pod. It then notifies kube-apiserver of this decision in a step called binding.
Factors considered in the scheduling decision include individual and collective resource requests, hardware/software/policy constraints, affinity and anti-affinity requirements, data locality, interference between workloads, and so on.
kube-scheduler selects a node for a Pod in two steps: filtering and scoring.
The filtering step selects all Nodes that can satisfy the Pod's scheduling requirements. For example, the PodFitsResources filter checks whether a candidate Node has enough free resources to meet the Pod's resource requests. Filtering produces the list of feasible nodes; usually this list contains more than one Node. If the list is empty, the Pod is unschedulable.
In the scoring step, the scheduler picks the most suitable Node for the Pod from the feasible nodes, assigning each of them a score according to the scoring rules currently in effect.
Finally, kube-scheduler assigns the Pod to the Node with the highest score. If several Nodes tie for the highest score, kube-scheduler picks one of them at random.
The scheduler's filtering and scoring behavior can be configured in two ways: scheduling policies, which let you configure the filtering Predicates and scoring Priorities, and scheduling profiles, which let you configure plugins that implement the different scheduling stages.
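As a rough, hedged illustration (not part of the walkthrough below): a scheduling profile is expressed as a KubeSchedulerConfiguration object passed to kube-scheduler through its --config flag. The sketch below is written against the v1beta2 configuration API of a v1.22-era cluster and disables a single scoring plugin purely to show the shape of the object:

apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      disabled:
      # disabling this plugin is only meant to demonstrate the syntax
      - name: NodeResourcesBalancedAllocation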
Official docs: Overview | Scheduling, Preemption and Eviction | Assigning Pods to Nodes
Assigning Pods to Nodes
You can constrain a Pod so that it can only run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors. Such constraints are usually unnecessary, because the scheduler already does a reasonable job of placement automatically (for example, spreading Pods across nodes instead of placing them on nodes with insufficient free resources). In some situations, though, you may want more control over which node a Pod lands on: for example, to make sure a Pod ends up on a machine with an SSD attached, or to place Pods from two different services that communicate heavily in the same availability zone.
You can use any of the following methods to influence where Kubernetes schedules specific Pods: the nodeSelector field matched against node labels, affinity and anti-affinity rules, or the nodeName field, each of which is covered below.
By adding labels to nodes, you can target Pods at specific nodes or groups of nodes. You can use this to ensure that particular Pods run only on nodes with certain isolation, security, or regulatory properties.
If you use labels for node isolation, pick label keys that the kubelet on the node cannot modify. This prevents a compromised node from setting those labels on itself and luring the scheduler into placing workloads onto it.
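One practical way to get such labels (a hedged aside, not shown in the original walkthrough): with the NodeRestriction admission plugin enabled, a kubelet cannot set or modify labels under the node-restriction.kubernetes.io/ prefix on its own Node object, so keys under that prefix are a reasonable choice for isolation. The label key and value below are made up for illustration:

kubectl label nodes k8s-3 node-restriction.kubernetes.io/pci-dss=true

A Pod would then select such nodes with, for example:

  nodeSelector:
    node-restriction.kubernetes.io/pci-dss: "true"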
Official docs: Overview | Scheduling, Preemption and Eviction | Assigning Pods to Nodes | nodeName
nodeName is a more direct form of node selection than affinity or nodeSelector. nodeName is a field in the Pod spec; if it is set, the scheduler ignores the Pod and the kubelet on the named node tries to run the Pod there. nodeName takes precedence over nodeSelector and over affinity and anti-affinity rules.
Using nodeName to select a node has some limitations: if the named node does not exist, or does not have enough resources to accommodate the Pod, the Pod will not run.
nodeName is therefore the simplest form of node selection constraint, but it is generally not recommended.
[root@k8s-1 ~]# vim nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: k8s-3
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 29s 10.244.2.34 k8s-3 <none> <none>
[root@k8s-1 ~]# vim nodename.yaml
nodeName: k8s-4
[root@k8s-1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 6s
Official docs: Overview | Scheduling, Preemption and Eviction | nodeSelector
[root@k8s-1 ~]# vim nodeSelector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
[root@k8s-1 ~]# kubectl label nodes k8s-3 disktype=ssd
node/k8s-3 labeled
[root@k8s-1 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-1 Ready control-plane,master 4h28m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-2 Ready <none> 4h2m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-2,kubernetes.io/os=linux
k8s-3 Ready <none> 3h56m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-3,kubernetes.io/os=linux
[root@k8s-1 ~]# kubectl apply -f nodeSelector.yaml
pod/nginx created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 8s 10.244.2.35 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl label nodes k8s-3 disktype-
node/k8s-3 labeled
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 46s 10.244.2.35 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl delete -f nodeSelector.yaml --force
[root@k8s-1 ~]# kubectl apply -f nodeSelector.yaml
pod/nginx created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 Pending 0 3s <none> <none> <none> <none>
[root@k8s-1 ~]# kubectl label nodes k8s-2 disktype=ssd
node/k8s-2 labeled
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 56s 10.244.1.42 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl label nodes k8s-3 disktype=ssd
node/k8s-3 labeled
[root@k8s-1 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-1 Ready control-plane,master 4h32m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-2 Ready <none> 4h6m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-2,kubernetes.io/os=linux
k8s-3 Ready <none> 3h59m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-3,kubernetes.io/os=linux
The above is node-level scheduling: it restricts which node a Pod may be scheduled onto.
[Next: relationships between Pods, i.e. communication and placement between two workloads.]
Affinity covers both the Pod-to-node and the Pod-to-Pod relationship.
Official docs: Concepts | Scheduling, Preemption and Eviction
nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the kinds of constraints you can define. Their benefits include: a more expressive constraint language, the ability to mark a rule as soft or preferred so the Pod is still scheduled even when no node matches it, and the ability to constrain a Pod against labels on other Pods rather than only against node labels.
Node affinity is conceptually similar to nodeSelector: it lets you constrain which nodes a Pod can be scheduled onto based on node labels. There are two types of node affinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution.
Note:
In both types, IgnoredDuringExecution means that if a node's labels change after Kubernetes has scheduled the Pod, the Pod keeps running.
nodeAffinity also supports several operators in its match conditions, such as In, NotIn, Exists, DoesNotExist, Gt, and Lt (a short sketch of the other operators follows the demo below).
[root@k8s-1 ~]# vim nodeAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
[root@k8s-1 ~]# kubectl apply -f nodeAffinity.yaml
pod/node-affinity created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 18s 10.244.1.43 k8s-2 <none> <none>
[root@k8s-1 ~]# vim nodeAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - v1
[root@k8s-1 ~]# kubectl label nodes k8s-3 zone=v1
node/k8s-3 labeled
[root@k8s-1 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-1 Ready control-plane,master 4h43m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-2 Ready <none> 4h17m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-2,kubernetes.io/os=linux
k8s-3 Ready <none> 4h11m v1.22.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-3,kubernetes.io/os=linux,zone=v1
[root@k8s-1 ~]# kubectl delete -f nodeAffinity.yaml --force
[root@k8s-1 ~]# kubectl apply -f nodeAffinity.yaml
pod/node-affinity created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 27s 10.244.2.37 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl describe pod node-affinity
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 97s default-scheduler Successfully assigned default/node-affinity to k8s-3
Normal Pulling 96s kubelet Pulling image "nginx"
Normal Pulled 90s kubelet Successfully pulled image "nginx" in 6.15509625s
Normal Created 89s kubelet Created container nginx
Normal Started 88s kubelet Started container nginx
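The affinity examples above only exercise the In operator. As a hedged sketch, the other operators listed earlier plug into the same matchExpressions structure; the disktype and zone keys below reuse the labels from this walkthrough, while the hdd value is made up for illustration:

      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # NotIn: avoid nodes labelled disktype=hdd
          - key: disktype
            operator: NotIn
            values:
            - hdd
          # Exists: the node only has to carry a zone label, with any value
          - key: zone
            operator: Exists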
Pod affinity and anti-affinity
Like node affinity, Pod affinity and anti-affinity come in two types:
requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution
For example, you could use requiredDuringSchedulingIgnoredDuringExecution affinity to tell the scheduler to put the Pods of two services into the same cloud provider availability zone because they communicate with each other heavily. Similarly, you could use preferredDuringSchedulingIgnoredDuringExecution anti-affinity to spread the Pods of one service across multiple cloud provider availability zones (a sketch of this zone-spreading form follows the anti-affinity demo below).
To use inter-Pod affinity, set the .affinity.podAffinity field in the Pod spec; for inter-Pod anti-affinity, set the .affinity.podAntiAffinity field.
[root@k8s-1 ~]# vim podAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: zabbix
  labels:
    app: zabbix
spec:
  containers:
  - name: zabbix
    image: zabbix/zabbix-agent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
[root@k8s-1 ~]# kubectl apply -f podAffinity.yaml
pod/zabbix created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 8s 10.244.2.38 k8s-3 <none> <none>
zabbix 0/1 Pending 0 39s <none> <none> <none> <none>
[root@k8s-1 ~]# kubectl label pod node-affinity app=nginx
pod/node-affinity labeled
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 86s 10.244.2.38 k8s-3 <none> <none>
zabbix 1/1 Running 0 117s 10.244.2.39 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
node-affinity 1/1 Running 0 2m8s app=nginx
zabbix 1/1 Running 0 2m39s app=zabbix
Pod affinity governs the relationship between Pods.
Node affinity governs the relationship between a Pod and a node.
[root@k8s-1 ~]# cp podAffinity.yaml podantiAffinity.yaml
[root@k8s-1 ~]# vim podantiAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: zabbix
  labels:
    app: zabbix
spec:
  containers:
  - name: zabbix
    image: zabbix/zabbix-agent
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
[root@k8s-1 ~]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
node-affinity 1/1 Running 0 5m20s app=nginx
[root@k8s-1 ~]# kubectl apply -f podantiAffinity.yaml
pod/zabbix created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 6m1s 10.244.2.38 k8s-3 <none> <none>
zabbix 1/1 Running 0 23s 10.244.1.44 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
node-affinity 1/1 Running 0 6m9s app=nginx
zabbix 1/1 Running 0 31s app=zabbix
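The anti-affinity demo above uses a required rule keyed on kubernetes.io/hostname, which only keeps the two Pods off the same node. The zone-spreading case mentioned earlier would instead use a preferred rule with a zone topology key; the following is a hedged sketch and assumes the nodes already carry topology.kubernetes.io/zone labels (they do not in this lab cluster):

  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - zabbix
          topologyKey: topology.kubernetes.io/zone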
Official docs: Concepts | Scheduling, Preemption and Eviction | Taints and Tolerations
Node affinity (NodeAffinity) is a property defined on a Pod that attracts it to certain Nodes, so the Pod gets scheduled where we want it. Taints are the opposite: they let a Node repel Pods, and even evict Pods that are already running on it.
A Taint is a property of a Node. Once a taint is set, Kubernetes will not schedule Pods onto that Node, so Kubernetes also gives Pods a Tolerations property: if a Pod tolerates the taints on a Node, Kubernetes ignores those taints for that Pod and can (but is not required to) schedule it onto the Node.
Why does the master not participate in scheduling?
Because the master carries a NoSchedule taint:
[root@k8s-1 ~]# kubectl describe nodes k8s-1 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
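If a Pod does need to be allowed onto the master despite that taint, it must declare a matching toleration in its spec. A minimal hedged sketch (the toleration only permits scheduling onto the master, it does not force it):

  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule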
[root@k8s-1 ~]# kubectl taint nodes k8s-2 k1=v1:NoSchedule
node/k8s-2 tainted
[root@k8s-1 ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
[root@k8s-1 ~]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-2wz98 1/1 Running 0 32s 10.244.2.40 k8s-3 <none> <none>
deployment-example-6799fc88d8-skds4 1/1 Running 0 32s 10.244.2.42 k8s-3 <none> <none>
deployment-example-6799fc88d8-w8s7m 1/1 Running 0 32s 10.244.2.41 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl describe nodes k8s-2 | grep -i taint
Taints: k1=v1:NoSchedule
[root@k8s-1 ~]# kubectl describe nodes k8s-3 | grep -i taint
Taints: <none>
Next, add a NoExecute taint on k8s-3; kubectl get pod then shows that all three Pods have been evicted from the node and are Pending:
[root@k8s-1 ~]# kubectl taint nodes k8s-3 k1=v1:NoExecute
node/k8s-3 tainted
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-6xbct 0/1 Pending 0 6s <none> <none> <none> <none>
deployment-example-6799fc88d8-c9k6f 0/1 Pending 0 7s <none> <none> <none> <none>
deployment-example-6799fc88d8-jt9t2 0/1 Pending 0 6s <none> <none> <none> <none>
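NoExecute is the only effect that also acts on Pods already running on the node, which is why all three replicas were evicted immediately. As a hedged sketch, a Pod can instead tolerate such a taint for a limited time with tolerationSeconds; the 60-second value below is made up for illustration:

  tolerations:
  - key: k1
    operator: Equal
    value: v1
    effect: NoExecute
    tolerationSeconds: 60   # evicted 60 seconds after the taint is applied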
[root@k8s-1 ~]# kubectl taint nodes k8s-2 k1:NoSchedule-
node/k8s-2 untainted
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-6xbct 1/1 Running 0 103s 10.244.1.47 k8s-2 <none> <none>
deployment-example-6799fc88d8-c9k6f 1/1 Running 0 104s 10.244.1.45 k8s-2 <none> <none>
deployment-example-6799fc88d8-jt9t2 1/1 Running 0 103s 10.244.1.46 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl describe nodes k8s-2 | grep -i taint
Taints: <none>
[root@k8s-1 ~]# kubectl taint nodes k8s-2 k1=v1:NoSchedule
node/k8s-2 tainted
[root@k8s-1 ~]# kubectl describe nodes k8s-2 | grep -i taint
Taints: k1=v1:NoSchedule
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-6xbct 1/1 Running 0 3m2s 10.244.1.47 k8s-2 <none> <none>
deployment-example-6799fc88d8-c9k6f 1/1 Running 0 3m3s 10.244.1.45 k8s-2 <none> <none>
deployment-example-6799fc88d8-jt9t2 1/1 Running 0 3m2s 10.244.1.46 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl scale deployment deployment-example --replicas=6
deployment.apps/deployment-example scaled
[root@k8s-1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-6799fc88d8-2v8s7 0/1 Pending 0 9s
deployment-example-6799fc88d8-6xbct 1/1 Running 0 3m42s
deployment-example-6799fc88d8-bfn7r 0/1 Pending 0 9s
deployment-example-6799fc88d8-c9k6f 1/1 Running 0 3m43s
deployment-example-6799fc88d8-jt9t2 1/1 Running 0 3m42s
deployment-example-6799fc88d8-mlqr2 0/1 Pending 0 9s
[root@k8s-1 ~]# kubectl edit deployments.apps deployment-example
    spec:
      tolerations:
      - operator: "Exists"
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-65797dc965-4jdk7 1/1 Running 0 49s 10.244.1.48 k8s-2 <none> <none>
deployment-example-65797dc965-bxdlx 1/1 Running 0 53s 10.244.2.46 k8s-3 <none> <none>
deployment-example-65797dc965-jppj6 1/1 Running 0 64s 10.244.2.45 k8s-3 <none> <none>
deployment-example-65797dc965-twlb2 1/1 Running 0 65s 10.244.2.43 k8s-3 <none> <none>
deployment-example-65797dc965-xf9cg 1/1 Running 0 41s 10.244.2.47 k8s-3 <none> <none>
deployment-example-65797dc965-zphk6 1/1 Running 0 65s 10.244.2.44 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl describe nodes k8s-2 | grep -i taint
Taints: k1=v1:NoSchedule
[root@k8s-1 ~]# kubectl describe nodes k8s-3 | grep -i taint
Taints: k1=v1:NoExecute
[root@k8s-1 ~]# kubectl describe nodes k8s-1 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-1 ~]# kubectl taint node k8s-2 k1=v1:NoSchedule-
node/k8s-2 untainted
[root@k8s-1 ~]# kubectl taint node k8s-3 k1=v1:NoExecute-
node/k8s-3 untainted
[root@k8s-1 ~]# kubectl describe nodes k8s-1 | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-1 ~]# kubectl describe nodes k8s-2 | grep -i taint
Taints: <none>
[root@k8s-1 ~]# kubectl describe nodes k8s-3 | grep -i taint
Taints: <none>
[root@k8s-1 ~]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
[root@k8s-1 ~]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example created
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-g4xl7 1/1 Running 0 14s 10.244.2.48 k8s-3 <none> <none>
deployment-example-6799fc88d8-xn7m6 1/1 Running 0 14s 10.244.1.49 k8s-2 <none> <none>
deployment-example-6799fc88d8-z5p9b 1/1 Running 0 14s 10.244.2.49 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl cordon k8s-2
node/k8s-2 cordoned
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h16m v1.22.1
k8s-2 Ready,SchedulingDisabled <none> 4h50m v1.22.1
k8s-3 Ready <none> 4h44m v1.22.1
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-g4xl7 1/1 Running 0 82s 10.244.2.48 k8s-3 <none> <none>
deployment-example-6799fc88d8-xn7m6 1/1 Running 0 82s 10.244.1.49 k8s-2 <none> <none>
deployment-example-6799fc88d8-z5p9b 1/1 Running 0 82s 10.244.2.49 k8s-3 <none> <none>
[root@k8s-1 ~]# kubectl drain k8s-3
node/k8s-3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-74mkz, kube-system/kube-proxy-qc4tt
evicting pod default/deployment-example-6799fc88d8-z5p9b
evicting pod default/deployment-example-6799fc88d8-g4xl7
pod/deployment-example-6799fc88d8-z5p9b evicted
pod/deployment-example-6799fc88d8-g4xl7 evicted
node/k8s-3 evicted
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h18m v1.22.1
k8s-2 Ready,SchedulingDisabled <none> 4h52m v1.22.1
k8s-3 Ready,SchedulingDisabled <none> 4h46m v1.22.1
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-xn7m6 1/1 Running 0 2m34s 10.244.1.49 k8s-2 <none> <none>
deployment-example-6799fc88d8-xp42n 0/1 Pending 0 5s <none> <none> <none> <none>
deployment-example-6799fc88d8-zjfg8 0/1 Pending 0 5s <none> <none> <none> <none>
[root@k8s-1 ~]# kubectl uncordon k8s-2
node/k8s-2 uncordoned
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h20m v1.22.1
k8s-2 Ready <none> 4h53m v1.22.1
k8s-3 Ready,SchedulingDisabled <none> 4h47m v1.22.1
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-xn7m6 1/1 Running 0 4m9s 10.244.1.49 k8s-2 <none> <none>
deployment-example-6799fc88d8-xp42n 1/1 Running 0 100s 10.244.1.51 k8s-2 <none> <none>
deployment-example-6799fc88d8-zjfg8 1/1 Running 0 100s 10.244.1.50 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl uncordon k8s-3
node/k8s-3 uncordoned
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h20m v1.22.1
k8s-2 Ready <none> 4h53m v1.22.1
k8s-3 Ready <none> 4h47m v1.22.1
[root@k8s-1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6799fc88d8-xn7m6 1/1 Running 0 4m39s 10.244.1.49 k8s-2 <none> <none>
deployment-example-6799fc88d8-xp42n 1/1 Running 0 2m10s 10.244.1.51 k8s-2 <none> <none>
deployment-example-6799fc88d8-zjfg8 1/1 Running 0 2m10s 10.244.1.50 k8s-2 <none> <none>
[root@k8s-1 ~]# kubectl drain k8s-3 --ignore-daemonsets
node/k8s-3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-74mkz, kube-system/kube-proxy-qc4tt
node/k8s-3 drained
[root@k8s-1 ~]# kubectl delete nodes k8s-3
node "k8s-3" deleted
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h22m v1.22.1
k8s-2 Ready <none> 4h55m v1.22.1
Exercise: node k8s-3 has a problem and shows as unreachable on the master.
Approach: check whether the kubelet on k8s-3 is running; once its kubelet is enabled and started again, the node re-registers with the API server.
Solution:
[root@k8s-3 ~]# systemctl enable kubelet
[root@k8s-3 ~]# systemctl restart kubelet.service
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready control-plane,master 5h24m v1.22.1
k8s-2 Ready <none> 4h57m v1.22.1
k8s-3 Ready <none> 8s v1.22.1
[root@k8s-1 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default deployment-example-6799fc88d8-xn7m6 1/1 Running 0 8m27s
default deployment-example-6799fc88d8-xp42n 1/1 Running 0 5m58s
default deployment-example-6799fc88d8-zjfg8 1/1 Running 0 5m58s
kube-system coredns-bdc44d9f-ds4cc 1/1 Running 0 5h23m
kube-system coredns-bdc44d9f-gth5j 1/1 Running 0 5h23m
kube-system etcd-k8s-1 1/1 Running 0 5h24m
kube-system kube-apiserver-k8s-1 1/1 Running 0 5h23m
kube-system kube-controller-manager-k8s-1 1/1 Running 0 5h24m
kube-system kube-flannel-ds-gmpc5 1/1 Running 0 5h17m
kube-system kube-flannel-ds-kpw4n 1/1 Running 0 23s
kube-system kube-flannel-ds-qcvxs 1/1 Running 0 4h57m
kube-system kube-proxy-4vxkp 1/1 Running 0 4h57m
kube-system kube-proxy-d4g6h 1/1 Running 0 23s
kube-system kube-proxy-dmfq7 1/1 Running 0 5h23m
kube-system kube-scheduler-k8s-1 1/1 Running 0 5h24m