Pods created earlier with a Deployment are stateless. If such a Pod has a volume mounted and the Pod dies, the Replication Controller starts a replacement Pod to keep the application available; but because the Pod is stateless, the replacement loses its association with the original volume, and the new Pod cannot find the data that belonged to the old one. Users have no awareness that the underlying Pod was replaced, yet once it is, they can no longer reach the previously mounted storage. StatefulSet was introduced to solve this problem by preserving each Pod's state information.
StatefulSet is one implementation of a Pod resource controller. It deploys and scales the Pods of stateful applications, guaranteeing both their startup order and the uniqueness of each Pod. Its use cases include:
- Stable persistent storage: after a Pod is rescheduled it can still access the same persistent data, implemented with PVCs.
- Stable network identity: after a Pod is rescheduled its PodName and HostName remain unchanged, implemented with a Headless Service (a Service without a Cluster IP).
- Ordered deployment and ordered scale-out: Pods are ordered, and during deployment or scale-out they are created in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next one starts), implemented with init containers.
- Ordered scale-in and ordered deletion (from N-1 down to 0).
A StatefulSet consists of the following parts:
- A Headless Service, used to define the network identity (DNS domain) of the Pods.
- volumeClaimTemplates, used to create PersistentVolumeClaims against PersistentVolumes.
- The StatefulSet definition of the application itself.
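The three parts above can be sketched in a single manifest. This is only a minimal outline to show how they reference each other; the names, labels, and image are placeholders, not part of the example built later in this article:

```yaml
# Part 1: a Headless Service that gives the Pods their DNS domain
apiVersion: v1
kind: Service
metadata:
  name: app-hs              # placeholder name
spec:
  clusterIP: None           # "None" is what makes the Service headless
  selector:
    app: app                # must match the Pod template labels below
  ports:
  - port: 80
---
# Part 3: the StatefulSet itself, tying the Service and the claims together
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app-hs       # references the Headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx        # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
  # Part 2: one PVC is stamped out per Pod from this template
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```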
The DNS name of each Pod in a StatefulSet has the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:
- serviceName: the name of the Headless Service
- 0..N-1: the ordinal of the Pod, from 0 to N-1
- statefulSetName: the name of the StatefulSet
- namespace: the namespace the service lives in; the Headless Service and the StatefulSet must be in the same namespace
- .cluster.local: the cluster domain
To keep these identifiers stable, a headless service is needed so that DNS resolution reaches the Pods directly, and each Pod must be given a unique name. The resources involved are:
1. Volume
2. Persistent Volume
3. Persistent Volume Claim
4. Service
5. StatefulSet
A Volume can be of many types, such as nfs and glusterfs; nfs is used below.
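For contrast with the PV/PVC approach used in the rest of this article, an nfs volume can also be mounted directly in a Pod spec. A minimal sketch; the Pod name, server address, and export path here are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-demo            # placeholder name
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:                           # the nfs volume type mounts the export directly
      server: 192.168.1.34         # placeholder NFS server
      path: /data/volumes/v1       # placeholder export path
```

Mounting nfs directly couples the Pod to a specific server and path; the PV/PVC layer used below decouples the claim from the backing storage, which is what volumeClaimTemplates rely on.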
Description of the StatefulSet fields:
[root@k8s-master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1
DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities. Identities
     are defined as:
     - Network: A single stable DNS and hostname.
     - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always map to
     the same storage identity.
FIELDS:
   apiVersion
   kind
   metadata
   spec
   status
Following the description above, the example below defines a StatefulSet resource. Before defining it, the PV resource objects must be prepared; NFS is again used as the backend storage.
1) Prepare NFS
(installing the software is omitted here; see the earlier reference)
(1) Create the directories backing the storage volumes
[root@storage ~]# mkdir /data/volumes/v{1..5} -p
(2) Edit the nfs configuration file
[root@storage ~]# vim /etc/exports
/data/volumes/v1 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.1.0/24(rw,no_root_squash)
(3) Re-export to apply the configuration
[root@storage ~]# exportfs -arv
exporting 192.168.1.0/24:/data/volumes/v5
exporting 192.168.1.0/24:/data/volumes/v4
exporting 192.168.1.0/24:/data/volumes/v3
exporting 192.168.1.0/24:/data/volumes/v2
exporting 192.168.1.0/24:/data/volumes/v1
(4) Verify the export list
[root@storage ~]# showmount -e
Export list for storage:
/data/volumes/v5 192.168.1.0/24
/data/volumes/v4 192.168.1.0/24
/data/volumes/v3 192.168.1.0/24
/data/volumes/v2 192.168.1.0/24
/data/volumes/v1 192.168.1.0/24
[root@k8s-master ~]# mkdir statefulset && cd statefulset
(1) Write the manifest that creates the PVs
[root@k8s-master statefulset]# vim pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
(2) Create the PVs
[root@k8s-master statefulset]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv-nfs-001 created
persistentvolume/pv-nfs-002 created
persistentvolume/pv-nfs-003 created
persistentvolume/pv-nfs-004 created
persistentvolume/pv-nfs-005 created
(3) View the PVs
[root@k8s-master statefulset]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-002   5Gi        RWO            Retain           Available                                   3s
pv-nfs-003   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-004   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                   3s
[root@k8s-master statefulset]# vim statefulset-demo.yaml
# Define a Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  ports:
  - name: http
    port: 80
  clusterIP: None
  selector:
    app: nginx-pod
---
# Define the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx-svc    # references the Service defined above
  replicas: 3               # number of replicas
  selector:                 # label selector, matching the Pod labels below
    matchLabels:
      app: nginx-pod
  template:                 # template for the backend Pods
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.12
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nginxdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # template for the volume claims
  - metadata:
      name: nginxdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
Walking through this manifest: because a StatefulSet resource depends on a pre-existing Service resource, a Headless Service named nginx-svc is defined first; it is used to create a DNS record for each Pod. Then a StatefulSet named nginx-statefulset is defined. Its Pod template creates 3 Pod replicas, and through volumeClaimTemplates each Pod requests a dedicated 5Gi volume from the PVs created earlier.
[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml
service/nginx-svc created
statefulset.apps/nginx-statefulset created
[root@k8s-master statefulset]# kubectl get svc    # view the headless service nginx-svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5d19h
nginx-svc    ClusterIP   None         <none>        80/TCP    29s
[root@k8s-master statefulset]# kubectl get pv    # view the PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   3m49s
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           3m49s
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           3m49s
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           3m49s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                                                   3m48s
[root@k8s-master statefulset]# kubectl get pvc    # view the PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           21s
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       18s
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       15s
[root@k8s-master statefulset]# kubectl get statefulset    # view the StatefulSet
NAME                READY   AGE
nginx-statefulset   3/3     58s
[root@k8s-master statefulset]# kubectl get pods    # view the Pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          78s
nginx-statefulset-1   1/1     Running   0          75s
nginx-statefulset-2   1/1     Running   0          72s
[root@k8s-master ~]# kubectl get pods -w    # watch the creation; Pods are created in order, from 0 to N-1
nginx-statefulset-0   0/1   Pending             0     0s
nginx-statefulset-0   0/1   Pending             0     0s
nginx-statefulset-0   0/1   Pending             0     1s
nginx-statefulset-0   0/1   ContainerCreating   0     1s
nginx-statefulset-0   1/1   Running             0     3s
nginx-statefulset-1   0/1   Pending             0     0s
nginx-statefulset-1   0/1   Pending             0     0s
nginx-statefulset-1   0/1   Pending             0     1s
nginx-statefulset-1   0/1   ContainerCreating   0     1s
nginx-statefulset-1   1/1   Running             0     3s
nginx-statefulset-2   0/1   Pending             0     0s
nginx-statefulset-2   0/1   Pending             0     0s
nginx-statefulset-2   0/1   Pending             0     2s
nginx-statefulset-2   0/1   ContainerCreating   0     2s
nginx-statefulset-2   1/1   Running             0     4s
[root@k8s-master statefulset]# kubectl delete -f statefulset-demo.yaml
service "nginx-svc" deleted
statefulset.apps "nginx-statefulset" deleted
[root@k8s-master ~]# kubectl get pods -w    # watch the deletion; it also proceeds in order, shutting Pods down in reverse
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          18m
nginx-statefulset-2   1/1     Terminating   0      18m
nginx-statefulset-0   1/1     Terminating   0      18m
nginx-statefulset-1   1/1     Terminating   0      18m
nginx-statefulset-2   0/1     Terminating   0      18m
nginx-statefulset-0   0/1     Terminating   0      18m
nginx-statefulset-1   0/1     Terminating   0      18m
nginx-statefulset-2   0/1     Terminating   0      18m
nginx-statefulset-2   0/1     Terminating   0      18m
nginx-statefulset-2   0/1     Terminating   0      18m
nginx-statefulset-1   0/1     Terminating   0      18m
nginx-statefulset-1   0/1     Terminating   0      18m
nginx-statefulset-0   0/1     Terminating   0      18m
nginx-statefulset-0   0/1     Terminating   0      18m
At this point the PVCs still exist; when the Pods are recreated, they bind to their original PVCs again:
[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml
service/nginx-svc created
statefulset.apps/nginx-statefulset created
[root@k8s-master statefulset]# kubectl get pvc    # view the PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           30m
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       30m
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       30m
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          12m   10.244.2.96   k8s-node2   <none>           <none>
nginx-statefulset-1   1/1     Running   0          12m   10.244.1.96   k8s-node1   <none>           <none>
nginx-statefulset-2   1/1     Running   0          12m   10.244.2.97   k8s-node2   <none>           <none>
[root@k8s-master statefulset]# dig -t A nginx-statefulset-0.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.96
[root@k8s-master statefulset]# dig -t A nginx-statefulset-1.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-1.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.96
[root@k8s-master statefulset]# dig -t A nginx-statefulset-2.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-2.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.97
Resolution can also be done from inside a container; resolving a Pod's name yields its IP:
# pod_name.service_name.ns_name.svc.cluster.local
eg: nginx-statefulset-0.nginx-svc.default.svc.cluster.local
Scaling a StatefulSet resource up or down works much like scaling a Deployment: the number of target Pods is changed by modifying the resource's replica count. For a StatefulSet, both kubectl scale and kubectl patch can do this; you can also change the replica count directly with kubectl edit, or modify the manifest file and re-declare it with kubectl apply.
[root@k8s-master statefulset]# kubectl scale statefulset/nginx-statefulset --replicas=4    # scale out to 4 replicas
statefulset.apps/nginx-statefulset scaled
[root@k8s-master statefulset]# kubectl get pods    # view the Pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          16m
nginx-statefulset-1   1/1     Running   0          16m
nginx-statefulset-2   1/1     Running   0          16m
nginx-statefulset-3   1/1     Running   0          3s
[root@k8s-master statefulset]# kubectl get pv    # view the PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   21m
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           21m
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           21m
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           21m
pv-nfs-005   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-3                           21m
[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":2}}'    # scale in by patching
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl get pods -w    # watch the scale-in
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          17m
nginx-statefulset-1   1/1     Running   0          17m
nginx-statefulset-2   1/1     Running   0          17m
nginx-statefulset-3   1/1     Running   0          1m
nginx-statefulset-3   1/1     Terminating   0      20s
nginx-statefulset-3   0/1     Terminating   0      20s
nginx-statefulset-3   0/1     Terminating   0      22s
nginx-statefulset-3   0/1     Terminating   0      22s
nginx-statefulset-2   1/1     Terminating   0      24s
nginx-statefulset-2   0/1     Terminating   0      24s
nginx-statefulset-2   0/1     Terminating   0      36s
nginx-statefulset-2   0/1     Terminating   0      36s
The default update strategy of a StatefulSet is the rolling update; an update can also be paused partway through.
Rolling-update example:
[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":4}}'    # first scale back out to 4 replicas for this test
[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.14    # update the image version
statefulset.apps/nginx-statefulset image updated
[root@k8s-master ~]# kubectl get pods -w    # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          13m
nginx-statefulset-3   1/1     Running   0          13m
nginx-statefulset-3   1/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     ContainerCreating   0     0s
nginx-statefulset-3   1/1     Running             0     2s
nginx-statefulset-2   1/1     Terminating         0     13m
nginx-statefulset-2   0/1     Terminating         0     13m
nginx-statefulset-2   0/1     Terminating         0     14m
nginx-statefulset-2   0/1     Terminating         0     14m
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     ContainerCreating   0     0s
nginx-statefulset-2   1/1     Running             0     1s
nginx-statefulset-1   1/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     ContainerCreating   0     0s
nginx-statefulset-1   1/1     Running             0     2s
nginx-statefulset-0   1/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     ContainerCreating   0     0s
nginx-statefulset-0   1/1     Running             0     2s
[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    # view the image versions after the update
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.14
nginx-statefulset-3   nginx:1.14
The example above shows that the default behavior is a rolling update in reverse ordinal order: one Pod finishes updating before the next begins.
Pausing an update (partitioned rollout)
Sometimes you define an update but do not want it rolled out to every replica at once: you update a few Pods first, watch whether they are stable, and only then update the rest. To do this, modify the .spec.updateStrategy.rollingUpdate.partition field. Its default value is 0, which is why the previous example updated everything. If it is set to 2, only Pods whose ordinal is greater than or equal to 2 are updated, similar to a canary release. Example:
[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'    # set partition to 2
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.12    # update the image version
statefulset.apps/nginx-statefulset image updated
[root@k8s-master ~]# kubectl get pods -w    # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          11m
nginx-statefulset-1   1/1     Running   0          11m
nginx-statefulset-2   1/1     Running   0          11m
nginx-statefulset-3   1/1     Running   0          11m
nginx-statefulset-3   1/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     ContainerCreating   0     0s
nginx-statefulset-3   1/1     Running             0     2s
nginx-statefulset-2   1/1     Terminating         0     11m
nginx-statefulset-2   0/1     Terminating         0     11m
nginx-statefulset-2   0/1     Terminating         0     12m
nginx-statefulset-2   0/1     Terminating         0     12m
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     ContainerCreating   0     0s
nginx-statefulset-2   1/1     Running             0     2s
[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    # view the image versions; only Pods with ordinal >= 2 were updated
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.12
nginx-statefulset-3   nginx:1.12
To update the remaining replicas as well, set the partition value back to 0:
[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'    # set partition back to 0
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl get pods -w    # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          6m44s
nginx-statefulset-3   1/1     Running   0          6m59s
nginx-statefulset-1   1/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     ContainerCreating   0     0s
nginx-statefulset-1   1/1     Running             0     2s
nginx-statefulset-0   1/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     ContainerCreating   0     0s
nginx-statefulset-0   1/1     Running             0     2s
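The partition gate patched above can also be written declaratively in the StatefulSet manifest instead of applied with kubectl patch. A sketch of the relevant spec fragment; the partition value here is illustrative:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # the default strategy type
    rollingUpdate:
      partition: 2             # only Pods with ordinal >= 2 are updated
```

Setting partition in the manifest keeps the canary boundary under version control; lowering it to 0 in a later apply releases the update to the remaining Pods, exactly as the patch sequence above does.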