The DaemonSet controller ensures that all (or some) nodes run a copy of a specified Pod (note: one Pod per node).
Typical DaemonSet use cases include running a log collector (such as Fluentd) on every node, running a node monitoring agent, or running a cluster storage daemon.
When a DaemonSet is created, the controller ignores a Node's unschedulable status. The following mechanisms restrict its Pods to specific Nodes:
First, label the Node:
kubectl label nodes k8s-node1 svc_type=microsvc
Then set nodeSelector in the DaemonSet spec:
spec:
  template:
    spec:
      nodeSelector:
        svc_type: microsvc
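Note that if the target node is tainted (for example, a control-plane node on a single-node cluster), a nodeSelector alone is not enough: the DaemonSet Pod also needs a matching toleration. A minimal sketch, assuming the node-role.kubernetes.io/control-plane taint used by recent Kubernetes releases (older clusters may use node-role.kubernetes.io/master instead):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        svc_type: microsvc
      tolerations:
      # allow scheduling onto nodes tainted with
      # node-role.kubernetes.io/control-plane:NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```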
nodeAffinity currently supports two types: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which express a hard requirement and a soft preference respectively.
For example, the manifest below schedules the Pod onto a Node carrying the label wolfcode.cn/framework-name with value spring or springboot, and prefers Nodes that additionally carry the label another-node-label-key=another-node-label-value.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: wolfcode.cn/framework-name
            operator: In
            values:
            - spring
            - springboot
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: pauseyyf/pause
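Besides In, the matchExpressions operator field also accepts NotIn, Exists, DoesNotExist, Gt, and Lt. A sketch of a required term combining two of them (the disktype and env labels here are hypothetical, chosen only for illustration):

```yaml
requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:
  - matchExpressions:
    # only nodes that carry the disktype label at all
    - key: disktype
      operator: Exists
    # ...and are not labeled env=dev
    - key: env
      operator: NotIn
      values:
      - dev
```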
podAffinity selects Nodes based on the labels of Pods already running on them: the Pod is scheduled only onto Nodes that host Pods matching the given conditions. Both podAffinity and podAntiAffinity are supported. This feature can be confusing, so consider the following example:
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: auth
            operator: In
            values:
            - oauth2
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: auth
              operator: In
              values:
              - jwt
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: pauseyyf/pause
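One caveat: the failure-domain.beta.kubernetes.io/zone label used above is deprecated; since Kubernetes v1.17 the stable equivalent is topology.kubernetes.io/zone. On newer clusters the required term would read:

```yaml
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: auth
        operator: In
        values:
        - oauth2
    # stable replacement for failure-domain.beta.kubernetes.io/zone
    topologyKey: topology.kubernetes.io/zone
```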
Below we use nodeSelector as the example; affinity is covered in detail in a later chapter.
Taking log collection with Fluentd as an example, create a file named fluentd-logging.yaml with the following content:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd # resource name
spec:
  selector: # selector matching the labels of the Pods this DaemonSet manages
    matchLabels:
      app: logging
  template: # pod template
    metadata: # pod metadata
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec: # pod spec
      containers:
      - name: fluentd-es # container name
        image: agilestacks/fluentd-elasticsearch:v1.3.0 # container image
        env: # environment variables
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts: # volume mounts
        - name: containers # volume name
          mountPath: /var/lib/docker/containers
        - name: varlog
          mountPath: /varlog
      volumes:
      - hostPath:
          path: /var/lib/docker/containers
        name: containers
      - hostPath:
          path: /var/log
        name: varlog
Run the create command:
kubectl create -f fluentd-logging.yaml
# daemonset.apps/fluentd created
Check the DaemonSet and the Pods:
kubectl get daemonset
# output below; note there is only one node in this test cluster
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 1 1 1 1 1 <none> 66s
kubectl get po
# output:
NAME READY STATUS RESTARTS AGE
fluentd-hz88j 1/1 Running 0 8m52s
Because the cluster has only one node (the default one) and the manifest does not restrict which nodes to run on, the Pod was deployed successfully.
Check the node information:
kubectl get no --show-labels
# output:
NAME STATUS ROLES AGE VERSION LABELS
docker-desktop Ready control-plane 11d v1.27.2 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=docker-desktop,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
Next, we use a label selector to choose the nodes to deploy to.
First, add a label to our node:
kubectl label no docker-desktop type=microservice
# node/docker-desktop labeled
# check the labels
kubectl get no --show-labels
NAME STATUS ROLES AGE VERSION LABELS
docker-desktop Ready control-plane 11d v1.27.2 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=docker-desktop,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,type=microservice
# a new label, type=microservice, now appears at the end
Modify the YAML file and add a node selector that targets nodes labeled type: microservice, as follows:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd # resource name
spec:
  selector: # selector matching the labels of the Pods this DaemonSet manages
    matchLabels:
      app: logging
  template: # pod template
    metadata: # pod metadata
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec: # pod spec
      nodeSelector:
        type: microservice
      containers:
      - name: fluentd-es # container name
        image: agilestacks/fluentd-elasticsearch:v1.3.0 # container image
        env: # environment variables
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts: # volume mounts
        - name: containers # volume name
          mountPath: /var/lib/docker/containers
        - name: varlog
          mountPath: /varlog
      volumes:
      - hostPath:
          path: /var/lib/docker/containers
        name: containers
      - hostPath:
          path: /var/log
        name: varlog
Delete the original DaemonSet, then recreate it:
kubectl delete ds fluentd
kubectl create -f fluentd-logging.yaml
# daemonset.apps/fluentd created
Check:
kubectl get ds
# output:
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 1 1 1 1 1 type=microservice 17s
We can see it was deployed successfully.
Now, if we change the label:
kubectl label no docker-desktop type=app --overwrite
# node/docker-desktop labeled
Then delete and redeploy again; the result:
kubectl get ds
# output:
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd 0 0 0 0 0 type=microservice 7s
As we can see, no Pods were scheduled this time (DESIRED is 0).
Check the details:
kubectl describe ds fluentd
# output:
Name: fluentd
Selector: app=logging
Node-Selector: type=microservice
Labels: <none>
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=logging
id=fluentd
Containers:
fluentd-es:
Image: agilestacks/fluentd-elasticsearch:v1.3.0
Port: <none>
Host Port: <none>
Environment:
FLUENTD_ARGS: -qq
Mounts:
/var/lib/docker/containers from containers (rw)
/varlog from varlog (rw)
Volumes:
containers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
Events: <none>
We can see there is no matching node, so no Pod was deployed.
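Assuming the same single-node setup as above, restoring the label is enough to bring the Pod back: the DaemonSet controller watches node labels and reschedules automatically, without the DaemonSet being deleted and recreated. A sketch:

```shell
# restore the label that the DaemonSet's nodeSelector expects
kubectl label no docker-desktop type=microservice --overwrite

# the controller notices the label change and creates the Pod again;
# DESIRED/CURRENT/READY should return to 1 once the Pod is running
kubectl get ds fluentd
```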