Kubernetes: NFS Dynamic Volumes Based on StorageClass

Environment: Kubernetes v1.16.0

1. Deploy NFS

For setting up the NFS server, see this article: https://blog.csdn.net/fuck487/article/details/102313948
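If you just need a quick test setup, here is a minimal sketch for a CentOS host; package and service names may differ on other distributions. The export path and server IP match the values used later in this article.

# on the NFS server (192.168.200.207)
yum install -y nfs-utils
mkdir -p /root/nfsdata
echo '/root/nfsdata *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -rv   # re-export and print the export list

# every Kubernetes node also needs nfs-utils installed,
# otherwise kubelet cannot mount NFS volumes
yum install -y nfs-utils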

2. Deploy the nfs-provisioner dynamic storage provisioner

service-account.yml — creates the service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner

rbac.yml — grants the account the permissions it needs:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
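Apply both manifests, then (optionally) confirm the account really has the key permission with kubectl auth can-i:

kubectl apply -f service-account.yml -f rbac.yml
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-provisioner
# expected output: yes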

nfs-provisioner.yml — deploys the NFS provisioner:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-provisioner
        # image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner # Aliyun mirror, in case the official image cannot be pulled
        image: quay.io/external_storage/nfs-client-provisioner:latest # official image
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes  # fixed path expected by the provisioner; do not change
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs   # provisioner name; the StorageClass must reference this exact value
        - name: NFS_SERVER
          value: 192.168.200.207  # IP address of the NFS server
        - name: NFS_PATH
          value: /root/nfsdata   # exported path on the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.200.207  # IP address of the NFS server
          path: /root/nfsdata   # exported path on the NFS server
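Apply the deployment:

kubectl apply -f nfs-provisioner.yml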

Check that nfs-provisioner is running:

kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
nfs-provisioner-c6fb69df9-48md4   1/1     Running   0          1h
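If the pod is not Running, the provisioner's logs usually show the reason (a failed NFS mount, for example):

kubectl logs deployment/nfs-provisioner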

3. Create a StorageClass whose provisioner field is set to the name configured in nfs-provisioner, example.com/nfs

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs # must match PROVISIONER_NAME in nfs-provisioner
parameters:
  archiveOnDelete: "true"    # when true, the backing directory is archived (renamed) instead of deleted when the PV is deleted
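Apply it (the file name storageclass.yml is my choice), and optionally mark it as the cluster's default StorageClass so that PVCs without an explicit storageClassName also use it:

kubectl apply -f storageclass.yml
kubectl patch storageclass nfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'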

4. Create a PersistentVolumeClaim that references the StorageClass, and verify that a matching PersistentVolume is provisioned automatically

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs # name of the StorageClass created above
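Apply the claim (the file name pvc-test-data.yml is my choice) and watch it bind:

kubectl apply -f pvc-test-data.yml
kubectl get pvc pvc-test-data
# STATUS should change from Pending to Bound within a few seconds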

List the PVs: one has been generated automatically and its status is Bound.

[root@k8s-master ~/docker-about/image-and-yml/jenkins]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-0ecb9d8d-74f1-4d5b-8e58-3e17340365f0   1Gi        RWO            Delete           Bound    default/pvc-test-data            nfs                     34s

On the NFS server, a directory has been created automatically under the export path, named after the pattern ${namespace}-${pvcName}-${pvName}:

default-pvc-test-data-pvc-0ecb9d8d-74f1-4d5b-8e58-3e17340365f0

5. Deploy a StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"   # governing headless Service; see the note after this manifest
  replicas: 2
  selector:
    matchLabels:
      app: nginx1
  volumeClaimTemplates:
  - metadata:
      name: test
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs" 
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      serviceAccountName: nfs-provisioner     # same service account as created earlier
      containers:
      - name: nginx1
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/my_dir"  # the mount path can be any path you like
          name: test
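Note: serviceName: "nginx1" refers to the governing headless Service that every StatefulSet requires for stable pod DNS names (web-0.nginx1, web-1.nginx1). It is not shown in the original manifests; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: nginx1
spec:
  clusterIP: None   # headless service, required by the StatefulSet
  selector:
    app: nginx1     # must match the pod labels above
  ports:
  - port: 80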

Check the deployment: both pods, web-0 and web-1, are running, and the corresponding PVs and PVCs have been created dynamically.

kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
nfs-provisioner-c6fb69df9-48md4   1/1     Running   0          1h
web-0                             1/1     Running   0          20s
web-1                             1/1     Running   0          21s

kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-d8cb8112-86f5-4b56-a830-04574412e165   1Gi        RWO            Delete           Bound    default/test-web-0               nfs                     39h
pvc-f2b7dbaf-6478-407d-9cd1-3d080efcfffa   1Gi        RWO            Delete           Bound    default/test-web-1               nfs                     39h

kubectl get pvc

NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-web-0               Bound    pvc-d8cb8112-86f5-4b56-a830-04574412e165   1Gi        RWO            nfs            39h
test-web-1               Bound    pvc-f2b7dbaf-6478-407d-9cd1-3d080efcfffa   1Gi        RWO            nfs            39h
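To confirm that data really lands on the NFS server, write a file from inside a pod and look for it under the export path (the directory name below follows the ${namespace}-${pvcName}-${pvName} pattern with the PV name from the output above):

kubectl exec web-0 -- sh -c 'echo hello > /my_dir/test.txt'
# then, on the NFS server:
cat /root/nfsdata/default-test-web-0-pvc-d8cb8112-86f5-4b56-a830-04574412e165/test.txt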

Note: if, after creating the StatefulSet, kubectl describe pods <pod-name> reports "pod has unbound immediate PersistentVolumeClaims", check whether a PVC with the same name already exists from an earlier attempt.

Looking back: the first time I created the StatefulSet, I had not set storageClassName: "nfs" under volumeClaimTemplates, so the generated PVCs stayed Pending and no volumes were provisioned dynamically.

When I later recreated the StatefulSet with storageClassName: "nfs" correctly specified, the pods still reported "pod has unbound immediate PersistentVolumeClaims". The reason is that PVCs created from volumeClaimTemplates are not deleted together with the StatefulSet, so the new pods were still bound to the old Pending PVCs and therefore stayed Pending themselves.

Deleting the old PVCs and redeploying the StatefulSet resolves the problem.
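For example (the PVC names follow the <volumeClaimTemplate>-<pod> pattern shown above; the manifest file name statefulset.yml is my assumption):

kubectl delete statefulset web
kubectl delete pvc test-web-0 test-web-1
kubectl apply -f statefulset.yml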
