As we saw earlier, Pods created by a Deployment are stateless. When such a Pod mounts a Volume and then dies, the controller starts a replacement Pod to maintain availability, but because the Pod has no stable identity, the link between the old Pod and its Volume is lost and the new Pod cannot find its predecessor's data. Users never notice that the underlying Pod was replaced, yet once it is gone the previously mounted storage can no longer be reached through it.
StatefulSet was introduced to solve this problem by preserving each Pod's identity and state.
StatefulSet is designed for stateful services (whereas Deployments and ReplicaSets are designed for stateless ones). Its use cases include:
- 1. Stable persistent storage: after a Pod is rescheduled it can still reach the same persistent data, implemented with PVCs.
- 2. Stable network identity: after a Pod is rescheduled its Pod name and hostname stay the same, implemented with a Headless Service (a Service with no Cluster IP).
- 3. Ordered deployment and scaling: Pods are created strictly in order, from 0 to N-1, and every earlier Pod must be Running and Ready before the next one starts; the tutorial attributes this to init containers.
- 4. Ordered scale-down and deletion (from N-1 down to 0).
- 5. Ordered rolling updates.
Reference: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
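The naming and ordering guarantees in points 3 and 4 can be sketched in a few lines of plain code; the StatefulSet name `web` is just an example, matching the demo that follows:

```python
def scale_up_order(name, replicas):
    """Pods are created one at a time, ordinal 0 first; each must be
    Running and Ready before the next one is started."""
    return [f"{name}-{i}" for i in range(replicas)]

def scale_down_order(name, replicas):
    """Termination proceeds from the highest ordinal back down to 0."""
    return [f"{name}-{i}" for i in range(replicas - 1, -1, -1)]

print(scale_up_order("web", 3))    # → ['web-0', 'web-1', 'web-2']
print(scale_down_order("web", 3))  # → ['web-2', 'web-1', 'web-0']
```

These are exactly the pod names (`web-0`, `web-1`, `web-2`) you will see the controller create and tear down in the sessions below.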
Example: creating a StatefulSet
[root@k8s-master1 volume]# cat nginx-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master1 volume]# kubectl apply -f nginx-demo.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master1 volume]# kubectl get svc,sts    # the headless Service shows CLUSTER-IP None
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   9d
service/nginx        ClusterIP   None         <none>        80/TCP    13s

NAME                   READY   AGE
statefulset.apps/web   1/3     13s
[root@k8s-master1 volume]# kubectl get pv,pvc    # dynamically provisioned PVs
NAME                                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
persistentvolume/default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-0   managed-nfs-storage            18s
persistentvolume/default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-1   managed-nfs-storage            14s
persistentvolume/default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-2   managed-nfs-storage            6s

NAME                              STATUS   VOLUME                                                       CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/www-web-0   Bound    default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   24s
persistentvolumeclaim/www-web-1   Bound    default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   14s
persistentvolumeclaim/www-web-2   Bound    default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   6s
[root@k8s-master1 volume]# kubectl get pod    # list the Pods
NAME                                    READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running   0          2m16s
web-0                                   1/1     Running   0          33s
web-1                                   1/1     Running   0          23s
web-2                                   1/1     Running   0          15s
If you delete the StatefulSet, the PVCs remain; when the Pods are recreated, each one re-binds to its original PVC.
[root@k8s-master1 volume]# kubectl delete -f nginx-demo.yaml
service "nginx" deleted
statefulset.apps "web" deleted
[root@k8s-master1 volume]# kubectl get pods -w
NAME                                    READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running       0          7m58s
web-0                                   0/1     Terminating   0          6m15s
web-1                                   0/1     Terminating   0          6m5s
web-2                                   0/1     Terminating   0          5m57s
web-0   0/1   Terminating   0   6m17s
web-0   0/1   Terminating   0   6m17s
web-2   0/1   Terminating   0   5m59s
web-2   0/1   Terminating   0   5m59s
web-1   0/1   Terminating   0   6m11s
web-1   0/1   Terminating   0   6m11s
[root@k8s-master1 volume]# kubectl apply -f nginx-demo.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master1 volume]# kubectl get pv,pvc
NAME                                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
persistentvolume/default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-0   managed-nfs-storage            9m34s
persistentvolume/default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-1   managed-nfs-storage            9m30s
persistentvolume/default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-2   managed-nfs-storage            9m22s

NAME                              STATUS   VOLUME                                                       CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/www-web-0   Bound    default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   9m40s
persistentvolumeclaim/www-web-1   Bound    default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   9m30s
persistentvolumeclaim/www-web-2   Bound    default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   9m22s
Rolling updates
The RollingUpdate strategy performs automated rolling updates of a StatefulSet's Pods, and it is the default value of .spec.updateStrategy.type. With this strategy, the StatefulSet controller deletes and recreates each Pod in the StatefulSet, proceeding in the same order as Pod termination (largest ordinal to smallest) and updating one Pod at a time. It waits until an updated Pod is Running and Ready before updating that Pod's predecessor. In the run below, the Pods are updated in order from 2 down to 0.
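The same strategy can be written out explicitly in the manifest. A minimal sketch of the relevant StatefulSet spec fields (the values shown are the defaults, spelled out for clarity):

```yaml
# StatefulSet spec fragment: the default update strategy made explicit
spec:
  updateStrategy:
    type: RollingUpdate        # delete and recreate Pods from ordinal N-1 down to 0
    rollingUpdate:
      partition: 0             # 0 (the default) means every Pod is updated
```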
[root@k8s-master1 volume]# vim nginx-demo.yaml
[root@k8s-master1 volume]# kubectl apply -f nginx-demo.yaml
service/nginx unchanged
statefulset.apps/web configured
[root@k8s-master1 volume]# kubectl get pods -w
NAME                                    READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running       0          14m
web-0                                   1/1     Running       0          3m42s
web-1                                   1/1     Running       0          3m34s
web-2                                   0/1     Terminating   0          3m24s
web-2   0/1   Terminating         0   3m35s
web-2   0/1   Terminating         0   3m35s
web-2   0/1   Pending             0   0s
web-2   0/1   Pending             0   0s
web-2   0/1   ContainerCreating   0   0s
web-2   1/1   Running             0   7s
web-1   1/1   Terminating         0   3m52s
web-1   0/1   Terminating         0   3m53s
web-1   0/1   Terminating         0   3m54s
web-1   0/1   Terminating         0   3m54s
web-1   0/1   Pending             0   0s
web-1   0/1   Pending             0   0s
web-1   0/1   ContainerCreating   0   0s
web-1   1/1   Running             0   23s
web-0   1/1   Terminating         0   4m25s
web-0   0/1   Terminating         0   4m26s
web-0   0/1   Terminating         0   4m33s
web-0   0/1   Terminating         0   4m33s
web-0   0/1   Pending             0   0s
web-0   0/1   Pending             0   0s
web-0   0/1   ContainerCreating   0   0s
web-0   1/1   Running             0   4s
Scaling out and in
[root@k8s-master1 volume]# kubectl scale sts web --replicas=5    # scale out to 5 replicas
statefulset.apps/web scaled
[root@k8s-master1 volume]# kubectl get pod -w    # watch the scale-out
NAME                                    READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running             0          25m
web-0                                   1/1     Running             0          9m40s
web-1                                   1/1     Running             0          10m
web-2                                   1/1     Running             0          10m
web-3                                   0/1     ContainerCreating   0          12s
web-3   1/1   Running             0   21s
web-4   0/1   Pending             0   0s
web-4   0/1   Pending             0   0s
web-4   0/1   Pending             0   1s
web-4   0/1   ContainerCreating   0   1s
web-4   1/1   Running             0   9s
[root@k8s-master1 volume]# kubectl get pv,pvc    # check the PV bindings
NAME                                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS          REASON   AGE
persistentvolume/default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-0   managed-nfs-storage            25m
persistentvolume/default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-1   managed-nfs-storage            25m
persistentvolume/default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-2   managed-nfs-storage            25m
persistentvolume/default-www-web-3-pvc-c7efc39f-0a47-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-3   managed-nfs-storage            2m10s
persistentvolume/default-www-web-4-pvc-d498cf02-0a47-11e9-b58a-000c298a2b5f   1Gi        RWO            Delete           Bound    default/www-web-4   managed-nfs-storage            112s

NAME                              STATUS   VOLUME                                                       CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/www-web-0   Bound    default-www-web-0-pvc-7fe5d833-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   25m
persistentvolumeclaim/www-web-1   Bound    default-www-web-1-pvc-85d17f15-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   25m
persistentvolumeclaim/www-web-2   Bound    default-www-web-2-pvc-8aa41937-0a44-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   25m
persistentvolumeclaim/www-web-3   Bound    default-www-web-3-pvc-c7efc39f-0a47-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   2m13s
persistentvolumeclaim/www-web-4   Bound    default-www-web-4-pvc-d498cf02-0a47-11e9-b58a-000c298a2b5f   1Gi        RWO            managed-nfs-storage   112s
[root@k8s-master1 volume]# kubectl patch sts web -p '{"spec":{"replicas":2}}'    # scale in to 2 replicas
statefulset.apps/web patched
[root@k8s-master1 volume]# kubectl get pod -w    # watch the scale-in
NAME                                    READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running       0          28m
web-0                                   1/1     Running       0          12m
web-1                                   1/1     Running       0          13m
web-2                                   1/1     Running       0          13m
web-3                                   1/1     Running       0          3m16s
web-4                                   0/1     Terminating   0          2m55s
web-4   0/1   Terminating   0   2m57s
web-4   0/1   Terminating   0   2m57s
web-3   1/1   Terminating   0   3m18s
web-3   0/1   Terminating   0   3m19s
web-3   0/1   Terminating   0   3m22s
web-3   0/1   Terminating   0   3m22s
web-2   1/1   Terminating   0   13m
web-2   0/1   Terminating   0   13m
web-2   0/1   Terminating   0   13m
web-2   0/1   Terminating   0   13m
Update strategy (partition). Setting .spec.updateStrategy.rollingUpdate.partition to 2 means that when the template changes, only Pods with an ordinal greater than or equal to 2 are updated; Pods with lower ordinals keep the previous version, which is useful for staged or canary rollouts.
[root@k8s-master1 volume]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
statefulset.apps/web patched
[root@k8s-master1 volume]# kubectl describe sts web
Name:               web
Namespace:          default
CreationTimestamp:  Fri, 28 Dec 2018 10:11:01 +0800
Selector:           app=nginx
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":3,"select...
Replicas:           824638349112 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        824638947612
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Upgrading the image version
[root@k8s-master1 volume]# kubectl get sts -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
web    2/2     29m   nginx        nginx:1.14
[root@k8s-master1 volume]# kubectl set image sts/web nginx=nginx:1.15
statefulset.apps/web image updated
[root@k8s-master1 volume]# kubectl get sts -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
web    2/2     32m   nginx        nginx:1.15
DNS lookups against a headless Service resolve directly to the Pod IPs.
[root@k8s-master1 volume]# kubectl get pod,svc -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
pod/nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running   0          74m   172.17.73.3   192.168.0.126   <none>           <none>
pod/web-0                                   1/1     Running   0          59m   172.17.32.3   192.168.0.125   <none>           <none>
pod/web-1                                   1/1     Running   0          59m   172.17.73.5   192.168.0.126   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   9d    <none>
service/nginx        ClusterIP   None         <none>        80/TCP    63m   app=nginx
[root@k8s-master1 volume]# kubectl run -it --image=busybox:1.28.4 --rm --restart=Never sh
If you don't see a command prompt, try pressing enter.
/ # nslookup web-0.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      web-0.nginx
Address 1: 172.17.32.3 web-0.nginx.default.svc.cluster.local
/ # nslookup web-1.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      web-1.nginx
Address 1: 172.17.73.5 web-1.nginx.default.svc.cluster.local
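The records shown above follow a fixed pattern: each Pod of a StatefulSet gets the stable DNS name `<statefulset>-<ordinal>.<service>.<namespace>.svc.<cluster-domain>`. A small sketch of that naming scheme, assuming the default cluster domain cluster.local:

```python
def pod_dns_names(statefulset, service, replicas,
                  namespace="default", cluster_domain="cluster.local"):
    """Stable per-Pod DNS names that a headless Service publishes
    for a StatefulSet with the given replica count."""
    return [
        f"{statefulset}-{ordinal}.{service}.{namespace}.svc.{cluster_domain}"
        for ordinal in range(replicas)  # ordinals run 0 .. replicas-1
    ]

# The two Pods resolved in the session above:
print(pod_dns_names("web", "nginx", 2))
# → ['web-0.nginx.default.svc.cluster.local', 'web-1.nginx.default.svc.cluster.local']
```

Because these names survive rescheduling, clients can address a specific replica (e.g. a database primary) without caring which node or IP it currently has.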
Job
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
Jobs come in two kinds: ordinary one-off tasks (Job) and scheduled tasks (CronJob).
Run-to-completion (one-off execution)
Use cases: offline data processing, video transcoding, and similar batch workloads.
Job example
[root@k8s-master1 ~]# vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
[root@k8s-master1 ~]# kubectl apply -f job.yaml
job.batch/pi created
[root@k8s-master1 ~]# kubectl get pod
NAME                                    READY   STATUS      RESTARTS   AGE
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running     0          87m
pi-nslj8                                0/1     Completed   0          119s
web-0                                   1/1     Running     0          71m
web-1                                   1/1     Running     0          72m
[root@k8s-master1 ~]# kubectl logs pi-nslj8
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778053217122680661300192787661119590921642019893809525720106548586327886593615338182796823030195203530185296899577362259941389124972177528347913151557485724245415069595082953311686172785588907509838175463746493931925506040092770167113900984882401285836160356370766010471018194295559619894676783744944825537977472684710404753464620804668425906949129331367702898915210475216205696602405803815019351125338243003558764024749647326391419927260426992279678235478163600934172164121992458631503028
61829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275898
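Beyond a single run, a Job can execute its Pod template to completion several times, optionally in parallel; `completions` and `parallelism` control this. A hedged sketch of such a manifest (the name `batch-demo` and the busybox command are illustrative, not from the session above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo            # illustrative name
spec:
  completions: 5              # run the Pod template to success 5 times in total
  parallelism: 2              # at most 2 Pods running at once
  backoffLimit: 4             # give up after 4 failed retries
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
```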
CronJob
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
A scheduled task, like crontab on Linux.
Use cases: notifications, backups.
CronJob example
[root@k8s-master1 ~]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@k8s-master1 ~]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@k8s-master1 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     1        5s              8s
[root@k8s-master1 ~]# kubectl get pod
NAME                                    READY   STATUS              RESTARTS   AGE
hello-1545968100-zhf2w                  0/1     Completed           0          67s
hello-1545968160-zftxj                  0/1     ContainerCreating   0          7s
nfs-client-provisioner-f69cd5cf-rfbdb   1/1     Running             0          96m
pi-nslj8                                0/1     Completed           0          10m
web-0                                   1/1     Running             0          80m
web-1                                   1/1     Running             0          81m
[root@k8s-master1 ~]# kubectl logs hello-1545968100-zhf2w
Fri Dec 28 03:35:14 UTC 2018
Hello from the Kubernetes cluster
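Beyond `schedule`, a few CronJob fields are commonly tuned together with it. A sketch of the spec fragment (the values here are illustrative, not taken from the demo above):

```yaml
# CronJob spec fragment: fields commonly set alongside schedule
spec:
  schedule: "0 2 * * *"            # standard cron syntax: 02:00 every day
  concurrencyPolicy: Forbid        # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3    # keep only the last 3 completed Jobs
  failedJobsHistoryLimit: 1
  startingDeadlineSeconds: 300     # a run more than 5 minutes late counts as missed
```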