Kubernetes 25: Using Affinity and Anti-Affinity

Goals:
Understand how affinity and anti-affinity work
1 Affinity and anti-affinity
2 Example
3 Summary

1 Affinity and anti-affinity
PodAffinity: pod affinity and anti-affinity (mutual exclusion) scheduling policies
requiredDuringSchedulingIgnoredDuringExecution
Purpose:
Restrict which nodes a pod can run on, based on the labels of pods already running on those nodes.
Principle:
If one or more pods matching condition Y are already running on a node with label X, then the new pod should run on that node (or, for anti-affinity, be rejected from that node).
X refers to a topology domain in the cluster, such as a node or a zone, declared through a node label key;
that key is called the topologyKey and expresses the topology scope the node belongs to.
Common topologyKey values:
kubernetes.io/hostname
failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region
(the latter two are deprecated beta labels; newer clusters use topology.kubernetes.io/zone and topology.kubernetes.io/region)

Pods belong to namespaces, and condition Y is expressed as a label selector.
Pod affinity and anti-affinity support the same two rule types:
requiredDuringSchedulingIgnoredDuringExecution and
preferredDuringSchedulingIgnoredDuringExecution.
Pod affinity is defined in the podAffinity sub-field of the pod spec's affinity field;
pod anti-affinity is defined in the podAntiAffinity sub-field at the same level.
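As a minimal sketch of both sub-fields side by side (the labels `app: web` / `app: backend` and the `nginx` image are made up for illustration), a Pod that wants to run near backend pods but away from other copies of itself might declare:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAffinity:                                  # attract: co-locate with matching pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend                          # condition Y: pods labeled app=backend
        topologyKey: kubernetes.io/hostname       # X: same node
    podAntiAffinity:                              # repel: avoid matching pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web                              # other copies of this pod
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx                                  # placeholder image
```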

Examine the anti-affinity of the ceilometer-api Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    application: ceilometer
    component: api
    release_group: ceilometer
  name: ceilometer-api
   ......
spec:
  template:
    metadata:
      labels:
        application: ceilometer
        component: api
        release_group: ceilometer
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: release_group
                operator: In
                values:
                - ceilometer
              - key: application
                operator: In
                values:
                - ceilometer
              - key: component
                operator: In
                values:
                - api
            topologyKey: kubernetes.io/hostname
......

Explanation:
Selector:
.spec.selector is an optional field that specifies a label selector delimiting the set of pods managed by the Deployment.
If specified, .spec.selector must match .spec.template.metadata.labels, or the Deployment will be rejected by the API.
If .spec.selector is not specified, .spec.selector.matchLabels defaults to .spec.template.metadata.labels.
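The matchExpressions in the anti-affinity rule above are combined with AND semantics: a pod matches only if every expression holds. A minimal Python sketch of that evaluation (illustrative only, not the real scheduler code):

```python
# Minimal sketch (not the real scheduler code) of how a label selector's
# matchExpressions are evaluated against a pod's labels: every expression
# must hold (AND semantics).

def matches(labels, expressions):
    """Return True if `labels` satisfies every matchExpression."""
    for expr in expressions:
        key, op = expr["key"], expr["operator"]
        values = expr.get("values", [])
        if op == "In":
            if labels.get(key) not in values:
                return False
        elif op == "NotIn":
            if labels.get(key) in values:
                return False
        elif op == "Exists":
            if key not in labels:
                return False
        elif op == "DoesNotExist":
            if key in labels:
                return False
        else:
            raise ValueError(f"unsupported operator: {op}")
    return True

# Labels of the ceilometer-api pod template shown earlier:
pod_labels = {"application": "ceilometer", "component": "api", "release_group": "ceilometer"}
selector = [
    {"key": "release_group", "operator": "In", "values": ["ceilometer"]},
    {"key": "application", "operator": "In", "values": ["ceilometer"]},
    {"key": "component", "operator": "In", "values": ["api"]},
]
print(matches(pod_labels, selector))  # True: an existing ceilometer-api pod blocks its node
```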

requiredDuringSchedulingIgnoredDuringExecution: a hard rule; a pod is scheduled onto a node only if the rule is satisfied.
preferredDuringSchedulingIgnoredDuringExecution: a soft constraint; the scheduler tries to satisfy it but is not required to.
topologyKey: the topology domain used during scheduling; to spread by node, use kubernetes.io/hostname.
IgnoredDuringExecution: if a node's labels change after scheduling so that the (anti-)affinity rule is no longer satisfied, the change is ignored; pods that no longer match the rule are not evicted from the node.
Anti-affinity is typically used to distribute pods across different nodes.
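The soft form adds a weight (1-100) and wraps the selector in a podAffinityTerm. A hypothetical sketch (the label `app: web` is made up for illustration):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                          # 1-100; higher = stronger preference
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: web                       # hypothetical label
        topologyKey: kubernetes.io/hostname
```

Unlike the required form, if every node already runs a matching pod the scheduler still places the pod somewhere instead of leaving it Pending.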

References:
https://blog.csdn.net/weixin_33994429/article/details/92837489
https://www.kubernetes.org.cn/1890.html
https://blog.csdn.net/tiger435/article/details/78489369


2 Example
A Deployment's YAML (a Helm template) reads as follows:

{{- if .Values.manifests.statefulset_st2actionrunner }}
{{- $envAll := . }}
{{- $dependencies := .Values.dependencies.actionrunner }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dozer-st2actionrunner
spec:
  selector:
    matchLabels:
      app: dozer-st2actionrunner
  replicas: {{ .Values.pod.replicas.st2actionrunner }}
{{ tuple $envAll | include "helm-toolkit.snippets.kubernetes_upgrades_deployment" | indent 2 }}
  template:
    metadata:
      labels:
        app: dozer-st2actionrunner
      annotations:
        configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-postgres-hash: {{ tuple "configmap-postgres.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-rabbitmq-hash: {{ tuple "configmap-rabbitmq.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-st2-hash: {{ tuple "configmap-st2.yaml" . | include "helm-toolkit.utils.hash" }}
    spec:
      nodeSelector:
        {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - dozer-st2actionrunner
            topologyKey: "kubernetes.io/hostname"
      initContainers:
{{ tuple $envAll $dependencies "" | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
      containers:
      - name: st2actionrunner
        image: {{ tuple .Values.images.tags "dozer" . | include "helm-toolkit.utils.update_image" }}
        imagePullPolicy: {{ .Values.images.pull_policy }}
        command:
        - bash
        - "-c"
        - >-
          exec /opt/stackstorm/st2/bin/st2actionrunner --config-file /etc/st2/st2.conf
        env:
        - name: ST2_SERVICE
          value: st2actionrunner
        - name: ST2_ACTION_AUTH_URL
          value: {{ .Values.configmap.st2_auth_url }}
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        envFrom:
        - configMapRef: { name: dozer-st2 }
        - configMapRef: { name: dozer-rabbitmq }
        - configMapRef: { name: dozer-postgresql }
        volumeMounts:
        - name: dozer-etc
          mountPath: /etc/st2/st2.conf
          subPath: st2.conf
        - name: dozer-etc
          mountPath: /opt/stackstorm/configs/openstack.yaml
          subPath: openstack.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/configs/vm_clone.yaml
          subPath: vm_clone.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/configs/openrc
          subPath: openrc
        - name: ssh-data
          mountPath: /root/.ssh/id_rsa
        - name: kubectl
          mountPath: /tmp/kubectl
{{- if .Values.slack.enabled }}
        - name: dozer-etc
          mountPath: /opt/stackstorm/configs/slack.yaml
          subPath: slack.yaml
{{- end }}
        - name: dozer-etc
          mountPath: /opt/stackstorm/configs/email.yaml
          subPath: email.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-ops-send-mail.yaml
          subPath: mistral-ops-send-mail.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/mistral-ops-send-mail-metadata.yaml
          subPath: mistral-ops-send-mail-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/rules/ops-patrol.yaml
          subPath: ops-patrol.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-network-check.yaml
          subPath: mistral-network-check.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/mistral-network-check-metadata.yaml
          subPath: mistral-network-check-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/rules/rule-network-check.yaml
          subPath: rule-network-check.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/actions/workflows/mistral-control-plane-data-backup.yaml
          subPath: mistral-control-plane-data-backup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/actions/mistral-control-plane-data-backup-metadata.yaml
          subPath: mistral-control-plane-data-backup-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/rules/rule-control-plane-data-backup.yaml
          subPath: rule-control-plane-data-backup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/rules/check-cgroup.yaml
          subPath: check-cgroup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-cgroup-send-mail.yaml
          subPath: mistral-cgroup-send-mail.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/mistral-cgroup-send-mail-metadata.yaml
          subPath: mistral-cgroup-send-mail-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/rules/fstrim-rbd.yaml
          subPath: fstrim-rbd.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/actions/workflows/mistral-fstrim-rbd.yaml
          subPath: mistral-fstrim-rbd.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/default/actions/mistral-fstrim-rbd-metadata.yaml
          subPath: mistral-fstrim-rbd-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-database-full-backup.yaml
          subPath: mistral-database-full-backup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/mistral-database-full-backup-metadata.yaml
          subPath: mistral-database-full-backup-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/rules/rule-database-full-backup.yaml
          subPath: rule-database-full-backup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-database-incr-backup.yaml
          subPath: mistral-database-incr-backup.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/actions/mistral-database-incr-backup-metadata.yaml
          subPath: mistral-database-incr-backup-metadata.yaml
        - name: dozer-etc
          mountPath: /opt/stackstorm/packs/email/rules/rule-database-incr-backup.yaml
          subPath: rule-database-incr-backup.yaml
      volumes:
      - name: dozer-etc
        configMap:
          name: dozer-etc
          defaultMode: 0777
      - name: ssh-data
        hostPath:
          path: /root/.ssh/id_rsa
      - name: kubectl
        hostPath:
          path: /usr/local/bin/kubectl
{{- end }}

Analysis:
1) Since this Deployment needs to place its pods on different nodes, anti-affinity is required.
The anti-affinity portion of the YAML above is:

spec:
  selector:
    matchLabels:
      app: dozer-st2actionrunner
  replicas: {{ .Values.pod.replicas.st2actionrunner }}
{{ tuple $envAll | include "helm-toolkit.snippets.kubernetes_upgrades_deployment" | indent 2 }}
  template:
    metadata:
      labels:
        app: dozer-st2actionrunner
    spec:
      nodeSelector:
        {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - dozer-st2actionrunner
            topologyKey: "kubernetes.io/hostname"
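Conceptually, this required anti-affinity makes the scheduler filter out any node (topologyKey kubernetes.io/hostname, so the domain is the node itself) that already hosts a pod matching the selector. A toy Python sketch of that filtering step (illustrative only; node and pod data are made up):

```python
# Toy sketch (not the actual scheduler) of the node-filtering step a
# required podAntiAffinity rule with topologyKey kubernetes.io/hostname
# implies: a node is infeasible if it already hosts a pod whose labels
# match the selector.

def feasible_nodes(node_names, existing_pods, selector_labels):
    """existing_pods: list of (node_name, pod_labels) pairs."""
    blocked = {
        node for node, labels in existing_pods
        if all(labels.get(k) == v for k, v in selector_labels.items())
    }
    return [n for n in node_names if n not in blocked]

nodes = ["node-1", "node-2", "node-3"]
existing = [
    ("node-1", {"app": "dozer-st2actionrunner"}),  # existing replica
    ("node-2", {"app": "something-else"}),
]
print(feasible_nodes(nodes, existing, {"app": "dozer-st2actionrunner"}))
# node-1 is filtered out; node-2 and node-3 remain candidates
```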

2) The manifest actually generated in the cluster is:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-06-10T08:59:45Z"
  generation: 1
  labels:
    app: dozer-st2actionrunner
  name: dozer-st2actionrunner
  namespace: openstack
  resourceVersion: "1039968"
  selfLink: /apis/apps/v1/namespaces/openstack/deployments/dozer-st2actionrunner
  uid: bb0b4d22-aaf8-11ea-8c8b-b82a72d4e66a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: dozer-st2actionrunner
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 70%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        configmap-etc-hash: e38cfcb46157f7ddc95b37a454c778489ab143d3a00e52bbaa1b7870f644f864
        configmap-postgres-hash: be49388e6a0a2e73d98116e46613f6910aaece63f8925d5c29540ec8457a40ec
        configmap-rabbitmq-hash: 79bcff7b06f049bb03d5685baf44829ade2027ceec93db5f8b0f7fec2584bc05
        configmap-st2-hash: f1e6b87b2484d35b1301741cfdc10a1a773ca9177668e71a1409df00f536bab0
      creationTimestamp: null
      labels:
        app: dozer-st2actionrunner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - dozer-st2actionrunner
            topologyKey: kubernetes.io/hostname
      containers:
      - command:
        - bash
        - -c
        - exec /opt/stackstorm/st2/bin/st2actionrunner --config-file /etc/st2/st2.conf
        env:
        - name: ST2_SERVICE
          value: st2actionrunner
        - name: ST2_ACTION_AUTH_URL
          value: http://st2auth:9100/
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        envFrom:
        - configMapRef:
            name: dozer-st2
        - configMapRef:
            name: dozer-rabbitmq
        - configMapRef:
            name: dozer-postgresql
        image: hub.easystack.io/production/escloud-linux-source-dozer:5.1.0-alpha.190
        imagePullPolicy: IfNotPresent
        name: st2actionrunner
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/st2/st2.conf
          name: dozer-etc
          subPath: st2.conf
        - mountPath: /opt/stackstorm/configs/openstack.yaml
          name: dozer-etc
          subPath: openstack.yaml
        - mountPath: /opt/stackstorm/configs/vm_clone.yaml
          name: dozer-etc
          subPath: vm_clone.yaml
        - mountPath: /opt/stackstorm/configs/openrc
          name: dozer-etc
          subPath: openrc
        - mountPath: /root/.ssh/id_rsa
          name: ssh-data
        - mountPath: /tmp/kubectl
          name: kubectl
        - mountPath: /opt/stackstorm/configs/email.yaml
          name: dozer-etc
          subPath: email.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-ops-send-mail.yaml
          name: dozer-etc
          subPath: mistral-ops-send-mail.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/mistral-ops-send-mail-metadata.yaml
          name: dozer-etc
          subPath: mistral-ops-send-mail-metadata.yaml
        - mountPath: /opt/stackstorm/packs/email/rules/ops-patrol.yaml
          name: dozer-etc
          subPath: ops-patrol.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-network-check.yaml
          name: dozer-etc
          subPath: mistral-network-check.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/mistral-network-check-metadata.yaml
          name: dozer-etc
          subPath: mistral-network-check-metadata.yaml
        - mountPath: /opt/stackstorm/packs/email/rules/rule-network-check.yaml
          name: dozer-etc
          subPath: rule-network-check.yaml
        - mountPath: /opt/stackstorm/packs/default/actions/workflows/mistral-control-plane-data-backup.yaml
          name: dozer-etc
          subPath: mistral-control-plane-data-backup.yaml
        - mountPath: /opt/stackstorm/packs/default/actions/mistral-control-plane-data-backup-metadata.yaml
          name: dozer-etc
          subPath: mistral-control-plane-data-backup-metadata.yaml
        - mountPath: /opt/stackstorm/packs/default/rules/rule-control-plane-data-backup.yaml
          name: dozer-etc
          subPath: rule-control-plane-data-backup.yaml
        - mountPath: /opt/stackstorm/packs/email/rules/check-cgroup.yaml
          name: dozer-etc
          subPath: check-cgroup.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-cgroup-send-mail.yaml
          name: dozer-etc
          subPath: mistral-cgroup-send-mail.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/mistral-cgroup-send-mail-metadata.yaml
          name: dozer-etc
          subPath: mistral-cgroup-send-mail-metadata.yaml
        - mountPath: /opt/stackstorm/packs/default/rules/fstrim-rbd.yaml
          name: dozer-etc
          subPath: fstrim-rbd.yaml
        - mountPath: /opt/stackstorm/packs/default/actions/workflows/mistral-fstrim-rbd.yaml
          name: dozer-etc
          subPath: mistral-fstrim-rbd.yaml
        - mountPath: /opt/stackstorm/packs/default/actions/mistral-fstrim-rbd-metadata.yaml
          name: dozer-etc
          subPath: mistral-fstrim-rbd-metadata.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-database-full-backup.yaml
          name: dozer-etc
          subPath: mistral-database-full-backup.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/mistral-database-full-backup-metadata.yaml
          name: dozer-etc
          subPath: mistral-database-full-backup-metadata.yaml
        - mountPath: /opt/stackstorm/packs/email/rules/rule-database-full-backup.yaml
          name: dozer-etc
          subPath: rule-database-full-backup.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/workflows/mistral-database-incr-backup.yaml
          name: dozer-etc
          subPath: mistral-database-incr-backup.yaml
        - mountPath: /opt/stackstorm/packs/email/actions/mistral-database-incr-backup-metadata.yaml
          name: dozer-etc
          subPath: mistral-database-incr-backup-metadata.yaml
        - mountPath: /opt/stackstorm/packs/email/rules/rule-database-incr-backup.yaml
          name: dozer-etc
          subPath: rule-database-incr-backup.yaml
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - kubernetes-entrypoint
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INTERFACE_NAME
          value: eth0
        - name: DEPENDENCY_SERVICE
          value: openstack:mongodb,openstack:rabbitmq
        - name: DEPENDENCY_JOBS
        - name: DEPENDENCY_DAEMONSET
        - name: DEPENDENCY_CONTAINER
        - name: COMMAND
          value: echo done
        image: hub.easystack.io/production/kubernetes-entrypoint:v0.2.1
        imagePullPolicy: IfNotPresent
        name: init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      nodeSelector:
        openstack-control-plane: enabled
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 511
          name: dozer-etc
        name: dozer-etc
      - hostPath:
          path: /root/.ssh/id_rsa
          type: ""
        name: ssh-data
      - hostPath:
          path: /usr/local/bin/kubectl
          type: ""
        name: kubectl
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-06-10T08:59:45Z"
    lastUpdateTime: "2020-06-10T09:01:04Z"
    message: ReplicaSet "dozer-st2actionrunner-5998b446bd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-06-10T13:29:14Z"
    lastUpdateTime: "2020-06-10T13:29:14Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1


3 Summary
1) Anti-affinity is typically used to distribute pods across different nodes.
2) PodAffinity: pod affinity and anti-affinity scheduling policies
(requiredDuringSchedulingIgnoredDuringExecution)
Purpose:
Restrict which nodes a pod can run on, based on the labels of pods already running on those nodes.
Principle:
If one or more pods matching condition Y are already running on a node with label X, then the new pod should run on that node (or, for anti-affinity, be rejected from it).
X refers to a topology domain in the cluster, such as a node or a zone, declared through a node label key;
that key is called the topologyKey and expresses the topology scope the node belongs to.
Common topologyKey values:
kubernetes.io/hostname
failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region

3) Pods belong to namespaces, and condition Y is expressed as a label selector.
Pod affinity and anti-affinity support the same two rule types:
requiredDuringSchedulingIgnoredDuringExecution and
preferredDuringSchedulingIgnoredDuringExecution.
Pod affinity is defined in the podAffinity sub-field of the pod spec's affinity field;
pod anti-affinity is defined in the podAntiAffinity sub-field at the same level.


