k8s Pod Scheduling


Container resource limits and requests:

resources.limits.cpu / resources.limits.memory: the maximum amount of CPU and memory the container is allowed to use.

resources.requests.cpu / resources.requests.memory: the minimum resources the container needs; the scheduler uses these requests when deciding which node has enough capacity for the Pod.
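A minimal sketch of how these fields sit inside a Pod spec; the Pod name and the concrete CPU/memory numbers here are illustrative assumptions, not values taken from the examples below:

---
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # hypothetical name, for illustration only
spec:
  containers:
  - name: web
    image: sktystwd/apache:v0.2    # same image used in the examples below
    resources:
      requests:                    # minimum guaranteed; what the scheduler reserves
        cpu: 250m
        memory: 128Mi
      limits:                      # hard upper bound enforced at runtime
        cpu: 500m
        memory: 256Mi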

Node selector

nodeSelector: schedules the Pod onto a node whose labels match the selector; if no node carries the matching labels, scheduling fails and the Pod stays Pending.

What it does:

Constrains the Pod to run only on specified nodes.

Matches node labels exactly (complete equality).

Typical use cases:

Dedicated nodes: group and manage nodes by business line (give different businesses different labels).

Special hardware: some nodes have SSD disks or GPUs.

Label the node1 node

[root@master ~]# kubectl label nodes node1.example.com disktype=ssd
node/node1.example.com labeled
# view all nodes and their labels
[root@master ~]# kubectl get nodes --show-labels
NAME                 STATUS   ROLES                  AGE     VERSION   LABELS
master.example.com   Ready    control-plane,master   4d23h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1.example.com    Ready    <none>                 4d22h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux       # the disktype=ssd (disk type) label has been applied
node2.example.com    Ready    <none>                 4d22h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

# view a single node (the full node name must be used)
[root@master ~]# kubectl get nodes node1.example.com  --show-labels
NAME                STATUS   ROLES    AGE     VERSION   LABELS
node1.example.com   Ready    <none>   4d22h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux

Create the Pod

[root@master manifest]# cat test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: sktystwd/apache:v0.2
    imagePullPolicy: IfNotPresent
    name: test
  nodeSelector:     # node selector
    disktype: ssd

---
apiVersion: v1
kind: Service
metadata:
  name: test
spec: 
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: test
  type: NodePort
  
  
[root@master manifest]# kubectl create -f test1.yaml 
pod/test created
service/test created


Check which node the Pod is running on

[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          13s   10.244.1.55   node1.example.com   <none>           <none>

nodeAffinity

nodeAffinity: node affinity. It serves the same purpose as nodeSelector, but it is more flexible and can express multiple conditions.

Matching supports richer logical combinations, not just exact string equality.

Scheduling rules can be soft or hard, rather than only a hard requirement:

Hard (required): the rule must be satisfied.

Soft (preferred): the scheduler tries to satisfy the rule on a best-effort basis.

Operators: In, NotIn, Exists, DoesNotExist, Gt (greater than), Lt (less than). Operator names are strictly case-sensitive.
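For illustration, here is how several of these operators can be combined inside one matchExpressions block of a Pod's affinity section; the label keys zone, gpu, and gpu-count are hypothetical and only show the syntax:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:        # all expressions in one block must match (logical AND)
          - key: zone              # hypothetical label key
            operator: NotIn
            values:
            - zone-a
          - key: gpu               # the node only needs to carry this label key
            operator: Exists
          - key: gpu-count         # Gt/Lt treat the label value as an integer
            operator: Gt
            values:
            - "1"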

required

Create the Pod

[root@master manifest]# cat test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: sktystwd/apache:v0.2
    imagePullPolicy: IfNotPresent
    name: test
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  

---
apiVersion: v1
kind: Service
metadata:
  name: test
spec: 
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: test
  type: NodePort


[root@master manifest]# kubectl create -f test1.yaml 
pod/test created
service/test created

[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          4m24s   10.244.1.57   node1.example.com   <none>           <none>

Now remove the ssd label from node1, add a different label, and see where the Pod gets scheduled

[root@master ~]# kubectl label nodes node1.example.com disktype-
node/node1.example.com labeled

[root@master ~]# kubectl label nodes node1.example.com disktype=sata
node/node1.example.com labeled

[root@master ~]# kubectl get nodes node1.example.com  --show-labels
NAME                STATUS   ROLES    AGE   VERSION   LABELS
node1.example.com   Ready    <none>   5d    v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=sata,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux


Run the manifest again (after deleting the previous Pod and Service)

[root@master manifest]# kubectl create -f test1.yaml 
pod/test created
service/test created
[root@master manifest]# 

[root@master ~]#  kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
test   0/1     Pending   0          7s    <none>        <none>              <none>           <none>
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   0/1     Pending   0          19s

# the Pod is not running: no node currently has the disktype=ssd label, so the required affinity cannot be satisfied and the Pod stays Pending

Now add the ssd label to node2

[root@master ~]# kubectl label nodes node2.example.com disktype=ssd
node/node2.example.com labeled
[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          3m42s   10.244.2.52   node2.example.com   <none>           <none>
[root@master ~]# kubectl get pods 
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          3m51s

# the Pod was automatically scheduled onto node2 and is now running

preferred

[root@master manifest]# cat test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: sktystwd/apache:v0.2
    imagePullPolicy: IfNotPresent
    name: test
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 5
        preference:
          matchExpressions:
          - key: app
            operator: In
            values:
            - apache


---
apiVersion: v1
kind: Service
metadata:
  name: test
spec: 
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: test
  type: NodePort

Give node1 and node2 the same disktype=ssd label, then give node2 one extra label (app=apache)

[root@master ~]# kubectl label nodes node1.example.com disktype=ssd
node/node1.example.com labeled
[root@master ~]# kubectl label nodes node2.example.com disktype=ssd
node/node2.example.com labeled
[root@master ~]# kubectl label nodes node2.example.com app=apache
node/node2.example.com labeled

Check which node the Pod runs on: both nodes satisfy the required disktype=ssd rule, so the preferred app=apache rule should tip the scheduler toward node2

[root@master manifest]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          21s   10.244.2.53   node2.example.com   <none>           <none>

Taints and Tolerations

Taint: keeps Pods away from particular nodes.

Toleration: allows a Pod to be scheduled onto a node that carries a matching taint.

Possible taint effects:

NoSchedule: Pods that do not tolerate the taint will never be scheduled onto the node.

PreferNoSchedule: the scheduler tries to avoid placing non-tolerating Pods on the node, but this is a soft constraint and not guaranteed.

NoExecute: new Pods are not scheduled onto the node, and Pods already running there without a matching toleration are evicted.
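For reference, a taint is written as key[=value]:effect (the value part is optional, as in the disktype:NoSchedule example below), and a Pod opts in with a matching toleration. A minimal sketch, using a hypothetical gpu=true taint:

# add the taint, then remove it again with a trailing "-"
kubectl taint node node1.example.com gpu=true:NoSchedule
kubectl taint node node1.example.com gpu=true:NoSchedule-

# matching toleration in a Pod spec
tolerations:
- key: gpu
  operator: Equal     # or "Exists" to tolerate any value of the key
  value: "true"
  effect: NoSchedule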

Add a taint to node1

[root@master ~]#  kubectl taint node node1.example.com disktype:NoSchedule
node/node1.example.com tainted

[root@master ~]# kubectl describe node node1.example.com
Name:               node1.example.com
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    disktype=ssd
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1.example.com
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"9a:65:2e:42:24:1e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.244.147
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 18 Dec 2021 23:04:08 +0800
Taints:             disktype:NoSchedule  # the taint

Remove the taint

[root@master ~]# kubectl taint node node1.example.com disktype-
node/node1.example.com untainted
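To double-check which taints a node currently carries, one quick way (assuming grep is available on the control node) is:

kubectl describe node node1.example.com | grep Taints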

Both nodes (node1 and node2) have been given the disktype label, and node1 carries the taint

[root@master ~]#  kubectl get nodes --show-labels
NAME                 STATUS   ROLES                  AGE    VERSION   LABELS
master.example.com   Ready    control-plane,master   5d2h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1.example.com    Ready    <none>                 5d1h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux
node2.example.com    Ready    <none>                 5d1h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

Add a toleration to the Pod spec

[root@master manifest]# cat test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: sktystwd/apache:v0.2
    imagePullPolicy: IfNotPresent
    name: test
  tolerations:
  - key: disktype
    operator: Equal
    effect: NoExecute  

---
apiVersion: v1
kind: Service
metadata:
  name: test
spec: 
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: test
  type: NodePort

[root@master manifest]# kubectl create -f test1.yaml 
pod/test created
service/test created

The Pod runs on node2: the toleration's effect (NoExecute) does not match node1's NoSchedule taint, so node1 is still excluded and the scheduler places the Pod on node2.

[root@master ~]#  kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          3m36s   10.244.2.54   node2.example.com   <none>           <none>
