19. Kubernetes Notes: Pod Scheduling (Part 2) - podAffinity and podAntiAffinity (Pod Affinity and Anti-Affinity)

Table of Contents
Overview
Viewing the detailed description of podAffinity
Example 1: podAffinity hard (required) Pod affinity
Example 2: podAffinity soft (preferred) Pod affinity
Example 3: podAntiAffinity hard (required) anti-affinity

Overview:

  • First, get familiar with the topologyKey field.
    Pod affinity scheduling requires the related Pods to run in the "same location", while anti-affinity scheduling requires that they do not run in the "same location".
    The "same location" is defined by topologyKey, whose value is the name of a label carried by the nodes. For example, if some nodes carry zone=A and others carry zone=B, and the podAffinity topologyKey is set to zone, then scheduling revolves around topology A and topology B, and nodes within the same topology count as the "same location".

As the name suggests, topology here means a topology domain, i.e. a notion of scope: a Node, a rack, a server room, or a region (such as Hangzhou or Shanghai). In practice it still maps to labels on the Nodes. The topologyKey is the Key of a Node label (without the Value), so topologyKey is effectively used to group and filter Nodes. With this mechanism, Pods can be placed relative to each other across nodes, server rooms, or regions.
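For example, here is a minimal sketch of a zone-based topology (the zone label key, the node names, and the app=web label are made up for illustration and are not part of the cluster used later):

kubectl label node node-a1 zone=A    # node-a1 and node-a2 form topology A
kubectl label node node-a2 zone=A
kubectl label node node-b1 zone=B    # node-b1 forms topology B

# With topologyKey: zone, a new Pod counts as "co-located" with the matched Pods
# as long as it lands on any node whose zone value equals theirs:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web         # hypothetical label of the Pods to co-locate with
            topologyKey: zone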

Viewing the detailed description of podAffinity
[root@k8s-master Scheduler]# kubectl explain pods.spec.affinity
KIND:     Pod
VERSION:  v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pod's scheduling constraints

     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity <Object>   # node affinity
     Describes node affinity scheduling rules for the pod.

   podAffinity <Object>   # pod affinity
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

   podAntiAffinity <Object>   # pod anti-affinity
     Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod
     in the same node, zone, etc. as some other pod(s)).

 # Affinity and anti-affinity are each split into hard (required) and soft (preferred) rules, just like the node affinity covered in the previous section, so the details are not repeated here
[root@k8s-master Scheduler]# kubectl explain pods.spec.affinity.podAffinity 

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution  <[]Object>
...

   requiredDuringSchedulingIgnoredDuringExecution   <[]Object>
...
# pod anti-affinity
[root@k8s-master Scheduler]# kubectl explain pods.spec.affinity.podAntiAffinity
FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution  <[]Object>
...podAffinityTerm; the
     node(s) with the highest sum are the most preferred.

   requiredDuringSchedulingIgnoredDuringExecution   <[]Object>
...

[root@k8s-master Scheduler]# kubectl explain pods.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution

   labelSelector <Object>
     A label query over a set of resources, in this case pods.

   namespaces   <[]string>  
     namespaces specifies which namespaces the labelSelector applies to (matches
     against); null or empty list means "this pod's namespace"

   topologyKey <string> -required-   # topology label: which node label key to use as the location label (required field)
     This pod should be co-located (affinity) or not co-located (anti-affinity)
     with the pods matching the labelSelector in the specified namespaces, where
     co-located is defined as running on a node whose value of the label with
     key topologyKey matches that of any node on which any of the selected pods
     is running. Empty topologyKey is not allowed.
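None of the examples below uses the namespaces field, so here is a hedged sketch of how these pieces fit together (the middleware namespace and the app=cache label are assumptions for illustration). Note that in the preferred form the same fields are nested one level deeper, under podAffinityTerm, next to a weight (1-100):

      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: cache                  # hypothetical label of the target Pods
            namespaces: ["middleware"]      # match Pods in the middleware namespace instead of this Pod's own namespace
            topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80                      # preferred terms wrap the same fields in podAffinityTerm and add a weight
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: cache
              topologyKey: kubernetes.io/hostname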

 
 
Example 1: podAffinity hard (required) Pod affinity

requiredDuringSchedulingIgnoredDuringExecution

[root@k8s-master Scheduler]# cat pod-affinity-required-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis 
spec:
  replicas: 1  # redis is a stateful application; run only 1 replica for this test
  selector:
    matchLabels:
      app: redis
      ctlr: redis
  template:
    metadata:
      labels:
        app: redis
        ctlr: redis
    spec:
      containers:
      - name: redis
        image: redis:6.0-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-required
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demoapp
      ctlr: pod-affinity-required
  template:
    metadata:
      labels:
        app: demoapp
        ctlr: pod-affinity-required
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
      affinity:
        podAffinity:  # pod affinity
          requiredDuringSchedulingIgnoredDuringExecution:  # hard (required) affinity
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["redis"]}   # expression form of label matching: app=redis and ctlr=redis
              - {key: ctlr, operator: In, values: ["redis"]}   # the two expressions are ANDed and must both be satisfied
            topologyKey: rack   # node label used as the location label; rack can be read as "the same rack"; this term is ANDed with the labelSelector and must also be satisfied

[root@k8s-master Scheduler]# kubectl apply -f pod-affinity-required-demo.yaml 
deployment.apps/redis unchanged
deployment.apps/pod-affinity-required unchanged

[root@k8s-master Scheduler]# kubectl get pod -o wide  # the nodes carry no rack label yet, so the pod-affinity Pods stay Pending
NAME                                     READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
pod-affinity-required-5dd44f5d45-6vvhd   0/1     Pending   0          6m7s   <none>          <none>      <none>           <none>
pod-affinity-required-5dd44f5d45-c4hzl   0/1     Pending   0          6m7s   <none>          <none>      <none>           <none>
pod-affinity-required-5dd44f5d45-qm6zb   0/1     Pending   0          6m7s   <none>          <none>      <none>           <none>
pod-affinity-required-5dd44f5d45-t4mm5   0/1     Pending   0          6m7s   <none>          <none>      <none>           <none>
pod-affinity-required-5dd44f5d45-vs7dg   0/1     Pending   0          6m7s   <none>          <none>      <none>           <none>
redis-55f46d9795-r7pkz                   1/1     Running   0          10m    192.168.51.23   k8s-node3   <none>           <none>

  • Label each node with a different rack value to simulate rack locations (a quick verification command follows the labeling below)
[root@k8s-master Scheduler]# kubectl label node k8s-node1 rack=foo  # put the rack label on node 1
node/k8s-node1 labeled
[root@k8s-master Scheduler]# kubectl label node k8s-node2 rack=bar  # put the rack label on node 2
node/k8s-node2 labeled
[root@k8s-master Scheduler]# kubectl label node k8s-node3 rack=baz  # put the rack label on node 3
node/k8s-node3 labeled
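As a quick check (this verification step is not part of the original run), the -L flag prints the rack label as an extra column so you can confirm that foo, bar, and baz landed on the intended nodes:

[root@k8s-master Scheduler]# kubectl get nodes -L rack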
  • With the hard affinity rule the Pods are co-located with redis: all of them run on k8s-node3, the only node whose rack value matches
[root@k8s-master Scheduler]# kubectl get pod  -o wide 
NAME                                     READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
pod-affinity-required-5dd44f5d45-6vvhd   1/1     Running   0          21m   192.168.51.28   k8s-node3   <none>           <none>
pod-affinity-required-5dd44f5d45-c4hzl   1/1     Running   0          21m   192.168.51.26   k8s-node3   <none>           <none>
pod-affinity-required-5dd44f5d45-qm6zb   1/1     Running   0          21m   192.168.51.24   k8s-node3   <none>           <none>
pod-affinity-required-5dd44f5d45-t4mm5   1/1     Running   0          21m   192.168.51.25   k8s-node3   <none>           <none>
pod-affinity-required-5dd44f5d45-vs7dg   1/1     Running   0          21m   192.168.51.27   k8s-node3   <none>           <none>
redis-55f46d9795-r7pkz                   1/1     Running   0          25m   192.168.51.23   k8s-node3   <none>           <none>

Example 2: podAffinity soft (preferred) Pod affinity

preferredDuringSchedulingIgnoredDuringExecution

[root@k8s-master Scheduler]# cat pod-affinity-preferred-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-preferred
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      ctlr: redis-preferred
  template:
    metadata:
      labels:
        app: redis
        ctlr: redis-preferred
    spec:
      containers:
      - name: redis
        image: redis:6.0-alpine
        resources:
          requests:
            cpu: 200m
            memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-preferred
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demoapp
      ctlr: pod-affinity-preferred
  template:
    metadata:
      labels:
        app: demoapp
        ctlr: pod-affinity-preferred
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        resources:
          requests:
            cpu: 500m
            memory: 100Mi
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:   # soft (preferred) affinity: scores are computed from the weights below to derive the final placement preference
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["redis"]}
                - {key: ctlr, operator: In, values: ["redis-preferred"]}  # the target Pod must carry both labels app=redis and ctlr=redis-preferred
              topologyKey: kubernetes.io/hostname  # location label per node: weight 100 for landing on the same node as the matched Pods, so the same node scores highest
          - weight: 50
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["redis"]}
                - {key: ctlr, operator: In, values: ["redis-preferred"]}
              topologyKey: rack  # location label per rack: weight 50 for landing in the same rack

[root@k8s-master Scheduler]# kubectl apply -f  pod-affinity-preferred-demo.yaml 

[root@k8s-master Scheduler]# kubectl get pod -o wide   # the Pods spread across different nodes: the preference is weighed against resource requests and other scheduling priorities
NAME                                      READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
pod-affinity-preferred-57d457968f-4lxvq   1/1     Running   0          5m6s   192.168.113.21   k8s-node1   <none>           <none>
pod-affinity-preferred-57d457968f-krb2f   1/1     Running   0          5m6s   192.168.113.20   k8s-node1   <none>           <none>
pod-affinity-preferred-57d457968f-mzckm   1/1     Running   0          5m6s   192.168.12.24    k8s-node2   <none>           <none>
pod-affinity-preferred-57d457968f-v8n8g   1/1     Running   0          5m6s   192.168.51.37    k8s-node3   <none>           <none>
redis-preferred-5d775df679-wtpgs          1/1     Running   0          5m6s   192.168.51.38    k8s-node3   <none>           <none>
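To double-check what the labelSelector actually matches (an added verification step, not part of the original run), query Pods by the same labels; only the redis-preferred Pod should come back. Since this is only a preference, the scheduler trades it off against the resource requests (500m CPU per demoapp replica, 512Mi of memory for redis), which is why the replicas spread out instead of all landing on k8s-node3 next to redis.

[root@k8s-master Scheduler]# kubectl get pod -l app=redis,ctlr=redis-preferred --show-labels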

Example 3: podAntiAffinity hard (required) anti-affinity

requiredDuringSchedulingIgnoredDuringExecution

  • The Deployment's 4 replicas must each run on a different node; since there are only 3 worker nodes, one replica stays Pending because the replicas repel each other
[root@k8s-master Scheduler]# cat pod-antiaffinity-required-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-antiaffinity-required
spec:
  replicas: 4  # run 4 replicas
  selector:
    matchLabels:
      app: demoapp
      ctlr: pod-antiaffinity-required
  template:
    metadata:
      labels:
        app: demoapp
        ctlr: pod-antiaffinity-required
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["demoapp"]}
              - key: ctlr
                operator: In
                values: ["pod-antiaffinity-required"]   #Pod包含标签app=demoapp、ctlr=pod-antiaffinity-required 同时满足
            topologyKey: kubernetes.io/hostname  #以节点为位置  表示每个节点只能运行1个Pod

[root@k8s-master Scheduler]# kubectl apply -f  pod-antiaffinity-required-demo.yaml 

[root@k8s-master Scheduler]# kubectl get pod -o wide  
NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
pod-antiaffinity-required-697f7d764d-hgkxj   1/1     Running   0          5m5s   192.168.113.34   k8s-node1   <none>           <none>
pod-antiaffinity-required-697f7d764d-n4zt9   1/1     Running   0          5m5s   192.168.12.34    k8s-node2   <none>           <none>
pod-antiaffinity-required-697f7d764d-psqfb   1/1     Running   0          5m5s   192.168.51.53    k8s-node3   <none>           <none>
pod-antiaffinity-required-697f7d764d-q6t7m   0/1     Pending   0          5m5s   <none>           <none>      <none>           <none>   # Pending because there are only 3 worker nodes
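To see why the fourth replica cannot be placed (an added check, not part of the original run), describe it; the Events section should show a FailedScheduling message stating, roughly, that none of the 3 nodes satisfied the pod anti-affinity rules (exact wording varies by Kubernetes version):

[root@k8s-master Scheduler]# kubectl describe pod pod-antiaffinity-required-697f7d764d-q6t7m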
