Kubernetes Configuration: StatefulSet

1. Characteristics of the StatefulSet Controller

  • A StatefulSet is a controller for stateful replica sets;

  • each Pod in a StatefulSet has its own stable, unique ordinal index;

  • Pods are deployed strictly in ascending ordinal order and terminated in descending order;

  • each Pod has its own dedicated storage volume;

  • a complete StatefulSet consists of three components: a Headless Service (generates resolvable DNS records for the Pods), the StatefulSet itself (manages the Pods), and volumeClaimTemplates (provide each Pod with a dedicated, fixed volume backed by dynamically or statically provisioned PVs);

  • with dynamic provisioning, the StatefulSet controller creates a dedicated PV for each volume claim template, provisioning one PV per PVC from the StorageClass named in the template; with static provisioning, an administrator must create matching PVs in advance.

2. Creating PVs

With static provisioning, an administrator must create PVs that satisfy the claims in advance;

[root@k8s-master-01 statefulset]# cat pv-nfs-statefulset.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: statefulset
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-nfs-pv1          # PVs are cluster-scoped, so no namespace is set
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfsv1
  nfs:
    path: /data/pv-nfs/pv-1          # NFS export paths must be absolute
    server: k8s-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-nfs-pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfsv1
  nfs:
    path: /data/pv-nfs/pv-2
    server: k8s-nfs
[root@k8s-master-01 statefulset]# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
statefulset-nfs-pv1   2Gi        RWX            Retain           Bound    statefulset/myappdata-myapp-0   nfsv1                   19m
statefulset-nfs-pv2   2Gi        RWX            Retain           Bound    statefulset/myappdata-myapp-1   nfsv1                   19m

3. Creating a StatefulSet with Statically Provisioned NFS PVs

  • For each Pod, the StatefulSet controller automatically creates a PVC that binds to a PV whose StorageClass matches volumeClaimTemplates.spec.storageClassName;

  • when a StatefulSet references static PVs, the PV's spec.accessModes must exactly match the volumeClaimTemplate's accessModes, otherwise the PVCs (and therefore the Pods) remain Pending;

[root@k8s-master-01 statefulset]# cat statefulset-deamon.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: statefulset
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  namespace: statefulset
spec:
  serviceName: myapp-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: nginx:1.12-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: nfsv1
      resources:
        requests:
          storage: 2Gi
[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset
NAME                STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    statefulset-nfs-pv1   2Gi        RWX            nfsv1          19m
myappdata-myapp-1   Bound    statefulset-nfs-pv2   2Gi        RWX            nfsv1          19m


[root@k8s-master-01 statefulset]# kubectl get pods -n statefulset -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
myapp-0   1/1     Running   0          20m   10.244.3.39   k8s-worker-02              
myapp-1   1/1     Running   0          20m   10.244.1.67   k8s-worker-01              

4. Testing StatefulSet Pod DNS Names

Pods created by a StatefulSet have stable identifiers, generated from the StatefulSet name plus an ordinal index;

[root@k8s-master-01 statefulset]# for i in 0 1; do kubectl exec myapp-$i -n statefulset -- sh -c 'hostname'; done 
myapp-0
myapp-1


[root@k8s-worker-02 mnt]# docker container exec 5024420aa1fa hostname
myapp-0
[root@k8s-worker-01 ~]# docker container exec 5ea7f1d19e6c hostname 
myapp-1


Name resolution (each Pod gets a DNS record of the form <pod-name>.<service-name>.<namespace>.svc.<cluster-domain>):
[root@k8s-master-01 statefulset]# kubectl run -it --image busybox dns-client --restart=Never --rm -- /bin/sh
/ # ping myapp-0.myapp-svc.statefulset.svc
PING myapp-0.myapp-svc.statefulset.svc (10.244.3.39): 56 data bytes
64 bytes from 10.244.3.39: seq=0 ttl=64 time=0.220 ms
64 bytes from 10.244.3.39: seq=1 ttl=64 time=0.147 ms
64 bytes from 10.244.3.39: seq=2 ttl=64 time=0.124 ms


/ # ping myapp-1.myapp-svc.statefulset.svc
PING myapp-1.myapp-svc.statefulset.svc (10.244.1.67): 56 data bytes
64 bytes from 10.244.1.67: seq=0 ttl=62 time=0.638 ms
64 bytes from 10.244.1.67: seq=1 ttl=62 time=0.507 ms
64 bytes from 10.244.1.67: seq=2 ttl=62 time=0.514 ms


[root@k8s-master-01 ~]# for i in 0 1; do kubectl exec myapp-$i -n statefulset -- sh -c 'echo $(date), hostname: $(hostname) > /usr/share/nginx/html/index.html'; done
[root@k8s-master-01 statefulset]# kubectl run -it --image cirros client --restart=Never --rm -- /bin/sh
/ # curl myapp-0.myapp-svc.statefulset.svc
Thu Sep 24 05:08:07 UTC 2020, hostname: myapp-0
/ # curl myapp-1.myapp-svc.statefulset.svc
Thu Sep 24 05:08:07 UTC 2020, hostname: myapp-1
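
Beyond the per-Pod records, the headless Service name itself resolves to the IPs of all ready Pods. This can be checked from the same test Pod (a sketch, assuming the image ships a busybox-style nslookup; output format varies by version):

/ # nslookup myapp-svc.statefulset.svc.cluster.local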

5. Scaling StatefulSet Pods

Scaling a StatefulSet up works much like scaling a Deployment, except that new Pods are named by continuing upward from the current highest ordinal; scaling down terminates Pods starting from the highest ordinal;

[root@k8s-master-01 ~]# kubectl scale statefulset myapp --replicas=3 -n statefulset
[root@k8s-master-01 ~]# kubectl get pods -n statefulset  -l app=myapp-pod -w
NAME      READY   STATUS              RESTARTS   AGE
myapp-0   1/1     Running             0          169m
myapp-1   1/1     Running             0          169m
myapp-2   0/1     ContainerCreating   0          4m42s
myapp-2   1/1     Running             0          6m59s
[root@k8s-master-01 ~]# kubectl patch statefulset myapp -p '{"spec":{"replicas":2}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 ~]# kubectl get pods -n statefulset  -l app=myapp-pod -w
NAME      READY   STATUS        RESTARTS   AGE
myapp-0   1/1     Running       0          173m
myapp-1   1/1     Running       0          173m
myapp-2   0/1     Terminating   0          8m52s
myapp-2   0/1     Terminating   0          8m53s
myapp-2   0/1     Terminating   0          8m53s

6. Testing StatefulSet Pod Storage Volumes

When a Pod is force-deleted, its associated PVC and PV are not deleted; when the controller recreates the Pod, it re-binds to the original PV, so the data survives;
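
A minimal way to verify this, assuming the myapp StatefulSet from section 3 is still running (the index.html content below was written in section 4):

[root@k8s-master-01 statefulset]# kubectl delete pod myapp-1 -n statefulset --force --grace-period=0    # force-delete one Pod
[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset       # myappdata-myapp-1 remains Bound
[root@k8s-master-01 statefulset]# kubectl get pods -n statefulset -w   # the controller recreates myapp-1
[root@k8s-master-01 statefulset]# kubectl exec myapp-1 -n statefulset -- cat /usr/share/nginx/html/index.html   # original content is intact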

7. Rolling Updates of StatefulSet Pods

RollingUpdate is the default update strategy for StatefulSet Pods. Pods are updated one at a time, from the highest ordinal down to the lowest, and each Pod is updated only after every Pod with a higher ordinal is Running and Ready;
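
For reference, the default behavior corresponds to the following fragment of the StatefulSet spec (a sketch; partition: 0 means every Pod is eligible for the update):

spec:
  updateStrategy:
    type: RollingUpdate        # default; the alternative, OnDelete, updates a Pod only when it is manually deleted
    rollingUpdate:
      partition: 0             # only Pods with an ordinal >= partition are updated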

[root@k8s-master-01 statefulset]# kubectl set image statefulset myapp myapp=nginx:alpine -n statefulset
statefulset.apps/myapp image updated


[root@k8s-master-01 ~]# kubectl get pods -n statefulset  -l app=myapp-pod -w
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          3h23m
myapp-1   1/1     Running   0          3h23m
myapp-1   1/1     Terminating   0          3h26m
myapp-1   0/1     Terminating   0          3h26m
myapp-1   0/1     Terminating   0          3h26m
myapp-1   0/1     Terminating   0          3h26m
myapp-1   0/1     Pending       0          0s
myapp-1   0/1     Pending       0          0s
myapp-1   0/1     ContainerCreating   0          0s
myapp-1   1/1     Running             0          2s
myapp-0   1/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Pending             0          0s
myapp-0   0/1     Pending             0          0s
myapp-0   0/1     ContainerCreating   0          0s
myapp-0   1/1     Running             0          2s

8. Staging (Pausing) a StatefulSet Update

When updateStrategy.rollingUpdate.partition is set to a value greater than the highest Pod ordinal, no Pod is updated when a new revision is rolled out; the update stays staged until the partition is lowered;

[root@k8s-master-01 statefulset]# kubectl patch statefulset myapp -p '{"spec": {"updateStrategy":{"rollingUpdate": {"partition":3}}}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 statefulset]# kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image -n statefulset
NAME      IMAGE
myapp-0   nginx:alpine
myapp-1   nginx:alpine
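
For the canary step in the next section to leave myapp-1 on nginx:1.12-alpine, an image change has to be rolled out while updates are staged; presumably something like the following, which triggers no Pod restarts while partition=3:

[root@k8s-master-01 statefulset]# kubectl set image statefulset myapp myapp=nginx:1.12-alpine -n statefulset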

9. Canary Updates of StatefulSet Pods

When updateStrategy.rollingUpdate.partition is lowered to the highest Pod ordinal, only that one Pod is updated; once the updated Pod runs stably, lower the partition further to update the next Pod;

[root@k8s-master-01 statefulset]# kubectl patch statefulset myapp -p '{"spec": {"updateStrategy":{"rollingUpdate": {"partition":1}}}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 statefulset]# kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image -n statefulset
NAME      IMAGE
myapp-0   nginx:alpine
myapp-1   nginx:1.12-alpine
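
To promote the canary and complete the rollout, lower the partition to 0, following the same patch pattern as above:

[root@k8s-master-01 statefulset]# kubectl patch statefulset myapp -p '{"spec": {"updateStrategy":{"rollingUpdate": {"partition":0}}}}' -n statefulset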

10. Configuring Dynamic NFS Provisioning

Kubernetes does not support dynamic PV provisioning for NFS out of the box; an external provisioner plugin must be installed;

10.1 Creating the StorageClass

The StorageClass name is referenced from volumeClaimTemplates; the provisioner uses it to create a PV dynamically for each PVC;

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the Deployment's env PROVISIONER_NAME
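
Assuming the manifest is saved as nfs-client-provisioner/class.yaml (the path shown later in section 11.1), it can be applied and checked with:

[root@k8s-master-01 statefulset]# kubectl apply -f nfs-client-provisioner/class.yaml
[root@k8s-master-01 statefulset]# kubectl get storageclass managed-nfs-storage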

10.2 Deploying the nfs-client-provisioner

  • The Deployment manifest must point at the actual NFS server and its exported path;

  • https://github.com/rimusz/nfs-client-provisioner/blob/master/deploy/

kind: Deployment
apiVersion: apps/v1                # extensions/v1beta1 has been removed; apps/v1 requires an explicit selector
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: k8s-nfs
            - name: NFS_PATH
              value: /data/nfs-privisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: k8s-nfs
            path: /data/nfs-privisioner
[root@k8s-master-01 nfs-client-provisioner]# kubectl get deployment -n statefulset
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   2/2     2            2           4m20s
[root@k8s-master-01 nfs-client-provisioner]# kubectl get pod -n statefulset
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5c8f58cc6c-m492t   1/1     Running   0          4m25s
nfs-client-provisioner-5c8f58cc6c-tq9lt   1/1     Running   0          4m25s

10.3 Creating the RBAC Objects

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: statefulset
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: statefulset
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
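
Assuming the manifest above is saved as nfs-client-provisioner/rbac.yaml (a hypothetical filename), apply it before the provisioner Deployment so that the ServiceAccount it references already exists:

[root@k8s-master-01 statefulset]# kubectl apply -f nfs-client-provisioner/rbac.yaml
[root@k8s-master-01 statefulset]# kubectl get serviceaccount nfs-client-provisioner -n statefulset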

10.4 Testing with a StatefulSet Object

With dynamic provisioning, the replica count can be adjusted at any time without pre-creating PVs; each new Pod gets its own dynamically provisioned volume, sized according to the request in the claim template;

[root@k8s-master-01 statefulset]# cat statefulset-deamon.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: statefulset
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  namespace: statefulset
spec:
  serviceName: myapp-svc
  replicas: 4
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: nginx:1.12-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 2Gi
[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myappdata-myapp-0   Bound    pvc-4adb0e56-ba0a-47c9-8eed-ebf5ee895b35   2Gi        RWX            managed-nfs-storage   4m9s
myappdata-myapp-1   Bound    pvc-28e414dd-7d4a-4894-9ac6-f8e1b2d1ed1d   2Gi        RWX            managed-nfs-storage   4m4s
myappdata-myapp-2   Bound    pvc-f5314bdf-6147-4e6e-9331-66bf54f5a019   2Gi        RWX            managed-nfs-storage   4m
myappdata-myapp-3   Bound    pvc-82a689b0-ba38-4353-a5ac-e8670e241434   2Gi        RWX            managed-nfs-storage   3m57s
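
Scaling the StatefulSet now provisions storage on demand; for example, no PVs need to exist beforehand:

[root@k8s-master-01 statefulset]# kubectl scale statefulset myapp --replicas=6 -n statefulset   # two more PVCs are created and bound automatically
[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset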

11. Building an etcd Cluster with a StatefulSet

etcd is a distributed key-value store that is reliable, fast, and strongly consistent; it supports dependable distributed coordination through distributed locks, leader election, and write barriers.

11.1 Creating the StorageClass

[root@k8s-master-01 statefulset]# cat nfs-client-provisioner/class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the Deployment's env PROVISIONER_NAME

11.2 Creating the Services

  • The first Service is headless; it provides name resolution for the Pods, e.g. etcd-0, etcd-1, etcd-2;

  • the second Service is of type NodePort and exposes the etcd cluster to clients outside it;

[root@k8s-master-01 statefulset]# cat etcd-service.yaml 
apiVersion: v1
kind: Service
metadata: 
  name: etcd
  annotations: 
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec: 
  ports: 
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    app: etcd-member
---
apiVersion: v1
kind: Service
metadata: 
  name: etcd-client
spec: 
  ports: 
  - name: client-client
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    app: etcd-member
  type: NodePort

11.3 Creating the StatefulSet

[root@k8s-master-01 statefulset]# cat etcd.statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: etcd
  labels:
    app: etcd
spec: 
  serviceName: etcd
  replicas: 3
  selector: 
    matchLabels:
      app: etcd-member
  template:
    metadata:
      name: etcd
      labels:
        app: etcd-member
    spec: 
      containers:
      - name: etcd
        image: "quay.io/coreos/etcd:v3.2.16"
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: CLUSTER_SIZE
          value: "3"
        - name: SET_NAME
          value: "etcd"
        volumeMounts:
        - name: data
          mountPath: /var/run/etcd
        command:
          - "/bin/sh"
          - "-ecx"
          - |
            IP=$(hostname -i)
            PEERS=""
            # Build the --initial-cluster member list from the stable Pod names,
            # e.g. etcd-0=http://etcd-0.etcd:2380,etcd-1=http://etcd-1.etcd:2380,...
            for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
              PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}:2380"
            done
            exec etcd --name ${HOSTNAME} \
              --listen-peer-urls http://${IP}:2380 \
              --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 \
              --advertise-client-urls http://${HOSTNAME}.${SET_NAME}:2379 \
              --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}:2380 \
              --initial-cluster-token etcd-cluster-1 \
              --initial-cluster ${PEERS} \
              --initial-cluster-state new \
              --data-dir /var/run/etcd/default.etcd
  volumeClaimTemplates:                
  - metadata:
      name: data
    spec:
      storageClassName: etcd-storage
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master-01 statefulset]# kubectl get pods -l app=etcd-member -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
etcd-0   1/1     Running   0          10m   10.244.3.94    k8s-worker-02              
etcd-1   1/1     Running   0          10m   10.244.3.95    k8s-worker-02              
etcd-2   1/1     Running   0          10m   10.244.1.118   k8s-worker-01              
[root@k8s-master-01 statefulset]# kubectl get statefulset -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
etcd   3/3     11m   etcd         quay.io/coreos/etcd:v3.2.16
[root@k8s-master-01 statefulset]# kubectl get endpoints
NAME                  ENDPOINTS                                                         AGE
etcd                  10.244.1.118:2380,10.244.3.94:2380,10.244.3.95:2380 + 3 more...   81s
etcd-client           10.244.1.118:2379,10.244.3.94:2379,10.244.3.95:2379               81s
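
Once all members are Running, cluster health can be checked from inside a member (a sketch; etcdctl in the v3.2 image defaults to the v2 API, so ETCDCTL_API=3 is set explicitly for the v3 command):

[root@k8s-master-01 statefulset]# kubectl exec etcd-0 -- etcdctl --endpoints http://127.0.0.1:2379 cluster-health
[root@k8s-master-01 statefulset]# kubectl exec etcd-0 -- sh -c 'ETCDCTL_API=3 etcdctl --endpoints http://127.0.0.1:2379 member list'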
