K8s RBD backend storage

GitHub location of the YAML files: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/rbd/deploy/rbac

Kubernetes uses the rbd-provisioner plugin to connect to Ceph RBD as backend storage.

I. Preparation

Ceph version: 12.2.13 (Luminous, stable). The first two steps below are performed on the Ceph cluster.

1. Create a storage pool on Ceph
# ceph osd pool create kube 128 128
pool 'kube' created
# ceph osd pool ls | grep kube
kube
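
On Luminous and later, Ceph warns about pools that have no application tag. A small follow-up, run on the Ceph cluster and assuming the pool name kube from above, keeps the cluster health clean and prepares the pool for RBD:

# Tag the pool for RBD use and initialize it (Luminous and later)
ceph osd pool application enable kube rbd
rbd pool init kube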
2. Create a Ceph client account for Kubernetes

This environment simply uses the Ceph admin account; in production you should create separate accounts for different clients, for example:
ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube' -o ceph.client.kube.keyring
Get the account key (base64-encoded for use in a Kubernetes secret):

# ceph auth get-key client.admin | base64
QVFBQmNLWmZNVUY5RkJBQWY5Z2NZVWtTMEtXL3B0Y09wSFBXeUE9PQ==
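
The base64 string above is what goes into the key field of the ceph-admin-secret created later. A quick sanity check, assuming the value shown, is to decode it and compare it with the raw key:

# Decode the base64 value and compare it with the raw key
echo "QVFBQmNLWmZNVUY5RkJBQWY5Z2NZVWtTMEtXL3B0Y09wSFBXeUE9PQ==" | base64 -d; echo
ceph auth get-key client.admin; echo
# Both commands should print the same key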
3. Run the rbd-provisioner plugin

When a StorageClass dynamically provisions a PV, the controller-manager creates the corresponding image on Ceph, so rbd-provisioner has to be prepared for it.

(1) If the cluster was deployed with kubeadm, the official controller-manager image does not ship the rbd command, so we download the rbd-provisioner YAML files from GitHub and run them.

Clone the YAML files from GitHub (git cannot clone a subdirectory URL, so clone the repository root; the files live under ceph/rbd/deploy/rbac):

git clone https://github.com/kubernetes-retired/external-storage.git

clusterrolebinding.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

clusterrole.yaml

kind: ClusterRole 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  name: rbd-provisioner 
rules: 
  - apiGroups: [""] 
    resources: ["persistentvolumes"] 
    verbs: ["get", "list", "watch", "create", "delete"] 
  - apiGroups: [""] 
    resources: ["persistentvolumeclaims"] 
    verbs: ["get", "list", "watch", "update"] 
  - apiGroups: ["storage.k8s.io"] 
    resources: ["storageclasses"] 
    verbs: ["get", "list", "watch"] 
  - apiGroups: [""] 
    resources: ["events"] 
    verbs: ["create", "update", "patch"] 
  - apiGroups: [""] 
    resources: ["services"] 
    resourceNames: ["kube-dns","coredns"] 
    verbs: ["list", "get"] 

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner

rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default

role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner

(2) If the cluster was deployed from binaries, simply install ceph-common on the master node. (Install ceph-common regardless of whether the cluster was set up from binaries or with kubeadm.)

yum install -y ceph-common

Copy the keyring file:
Copy Ceph's ceph.client.admin.keyring and ceph.conf to the /etc/ceph directory on the k8s master.

Note: when a pod is created, kubelet uses the rbd command to detect and map the Ceph image behind the PV, so the ceph-common client must be installed on every worker node. With a Luminous (12.x) cluster there is no need to install ceph-common 13.2.5; the default yum version is sufficient.
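
A quick way to confirm that a worker node can actually reach the cluster with the copied credentials (assuming ceph.conf and the admin keyring are in /etc/ceph) is to query it from that node:

# Run on a worker node: verify ceph-common works and the kube pool is reachable
ceph -s
rbd ls -p kube --id admin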
4. Deploy rbd-provisioner

Put all the YAML files in one directory.

[root@kubesphere rbac]# ls 
clusterrolebinding.yaml  clusterrole.yaml  deployment.yaml  rolebinding.yaml  role.yaml  serviceaccount.yaml

Run kubectl:

kubectl apply -f .

Check the pod status:

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
rbd-provisioner-76f6bc6669-jxjmv   1/1     Running   1          13h

II. Using Ceph RBD storage from Kubernetes

1. Create the StorageClass

sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
#provisioner: kubernetes.io/rbd
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 192.168.3.61:6789,192.168.3.62:6789,192.168.3.63:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: default
  pool: kube
  userId: admin
  userSecretName: ceph-kube1-secret
  userSecretNamespace: default
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"

Apply it:

kubectl apply -f sc.yaml

Note: the provisioner value must match the PROVISIONER_NAME configured in rbd-provisioner (ceph.com/rbd here). The userSecretName referenced above (ceph-kube1-secret) must also exist in userSecretNamespace, since kubelet uses it when mapping the image; a minimal way to create it is sketched below.
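
A minimal sketch for creating that user secret, assuming the admin key is reused as userId: admin does above (in production, substitute the client.kube key from step 2). Run it where both kubectl and the Ceph CLI are available, e.g. the k8s master configured earlier:

# Create the user secret referenced by userSecretName in sc.yaml
kubectl create secret generic ceph-kube1-secret \
  --namespace default \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"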

ceph-admin-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: default
data:
  key: "QVFBQmNLWmZNVUY5RkJBQWY5Z2NZVWtTMEtXL3B0Y09wSFBXeUE9PQ=="
type: kubernetes.io/rbd

Apply it:

kubectl apply -f ceph-admin-secret.yaml
2. Create a PVC

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "rbd"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Apply it:

kubectl apply -f pvc.yaml

Check:

# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc1   Bound    pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5   2Gi        RWO            rbd            13h
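
To confirm the volume is actually usable, a throwaway pod can mount the claim; a minimal sketch (the pod name test-rbd-pod is hypothetical):

# Mount the provisioned RBD volume in a test pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-rbd-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "df -h /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-pvc1
EOF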
3. Troubleshooting

The following problem occurred:

[root@kubesphere rbd-provisioner]# kubectl describe pvc rbd-pvc1
Name:          rbd-pvc1
Namespace:     default
StorageClass:  rbd
Status:        Pending
Volume:        
Labels:        
Annotations:   volume.beta.kubernetes.io/storage-class: rbd
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  2s (x4 over 30s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ceph.com/rbd" or manually created by system administrator

Check the provisioner pod and its log:

[root@kubesphere rbd-provisioner]# kubectl describe pod rbd-provisioner-76f6bc6669-jxjmv
Name:         rbd-provisioner-76f6bc6669-jxjmv
Namespace:    default
Priority:     0
Node:         kubesphere/192.168.3.70
Start Time:   Fri, 18 Jun 2021 22:39:50 +0800
Labels:       app=rbd-provisioner
              pod-template-hash=76f6bc6669
Annotations:  cni.projectcalico.org/podIP: 10.233.122.54/32
              cni.projectcalico.org/podIPs: 10.233.122.54/32
Status:       Running
IP:           10.233.122.54
IPs:
  IP:           10.233.122.54
Controlled By:  ReplicaSet/rbd-provisioner-76f6bc6669
Containers:
  rbd-provisioner:
    Container ID:   docker://0fd023836307519e447ed1b6acfe8c4611ad23d27a3ed84b18024499d9fd6f27
    Image:          quay.io/external_storage/rbd-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/rbd-provisioner@sha256:94fd36b8625141b62ff1addfa914d45f7b39619e55891bad0294263ecd2ce09a
    Port:           
    Host Port:      
    State:          Running
      Started:      Fri, 18 Jun 2021 22:40:16 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  ceph.com/rbd
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from rbd-provisioner-token-vng7b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  rbd-provisioner-token-vng7b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rbd-provisioner-token-vng7b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m52s  default-scheduler  Successfully assigned default/rbd-provisioner-76f6bc6669-jxjmv to kubesphere
  Normal  Pulling    8m51s  kubelet            Pulling image "quay.io/external_storage/rbd-provisioner:latest"
  Normal  Pulled     8m27s  kubelet            Successfully pulled image "quay.io/external_storage/rbd-provisioner:latest" in 23.882600186s
  Normal  Created    8m27s  kubelet            Created container rbd-provisioner
  Normal  Started    8m26s  kubelet            Started container rbd-provisioner
[root@kubesphere rbd-provisioner]# kubectl logs pod/rbd-provisioner-76f6bc6669-jxjmv
I0618 14:40:16.264218       1 main.go:85] Creating RBD provisioner ceph.com/rbd with identity: ceph.com/rbd
I0618 14:40:16.265874       1 leaderelection.go:185] attempting to acquire leader lease  default/ceph.com-rbd...
E0618 14:40:16.276488       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ceph.com-rbd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d996384c-1ce1-4fbe-9f31-d44ffce5706b", ResourceVersion:"414673", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759624016, loc:(*time.Location)(0x1bc94e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"rbd-provisioner-76f6bc6669-jxjmv_18d0bd2b-d043-11eb-a38e-365d0590d7f4\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-06-18T14:40:16Z\",\"renewTime\":\"2021-06-18T14:40:16Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'rbd-provisioner-76f6bc6669-jxjmv_18d0bd2b-d043-11eb-a38e-365d0590d7f4 became leader'
I0618 14:40:16.276565       1 leaderelection.go:194] successfully acquired lease default/ceph.com-rbd
I0618 14:40:16.276610       1 controller.go:631] Starting provisioner controller ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_18d0bd2b-d043-11eb-a38e-365d0590d7f4!
I0618 14:40:16.377229       1 controller.go:680] Started provisioner controller ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_18d0bd2b-d043-11eb-a38e-365d0590d7f4!
I0618 14:47:17.806787       1 controller.go:987] provision "default/rbd-pvc1" class "rbd": started
E0618 14:47:17.814452       1 controller.go:1004] provision "default/rbd-pvc1" class "rbd": unexpected error getting claim reference: selfLink was empty, can't make reference

Edit /etc/kubernetes/manifests/kube-apiserver.yaml.

Under this section:

spec:
  containers:
  - command:
    - kube-apiserver

add this line:

- --feature-gates=RemoveSelfLink=false

Restart the kube-apiserver containers:

[root@kubesphere rbd-provisioner]# docker ps | grep kube-apiserver-kubesphere
d8c2ab77fcae   ae5eb22e4a9d                                                        "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                 k8s_kube-apiserver_kube-apiserver-kubesphere_kube-system_9c4f15ddc36ae2c0aee73dac4e15953c_0
4c7b57c7706e   registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2             "/pause"                 3 minutes ago       Up 3 minutes                 k8s_POD_kube-apiserver-kubesphere_kube-system_9c4f15ddc36ae2c0aee73dac4e15953c_0
[root@kubesphere rbd-provisioner]# docker restart d8c2ab77fcae
d8c2ab77fcae
[root@kubesphere rbd-provisioner]# docker restart 4c7b57c7706e
4c7b57c7706e
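
After the restart, you can confirm that the flag was picked up (the static-pod name kube-apiserver-kubesphere is specific to this cluster):

# Verify the apiserver is running with the new feature gate
kubectl -n kube-system get pod kube-apiserver-kubesphere -o yaml | grep RemoveSelfLink
# expected: - --feature-gates=RemoveSelfLink=false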

Check the pod log again:

kubectl logs pod/rbd-provisioner-76f6bc6669-jxjmv
I0618 14:57:08.186706       1 main.go:85] Creating RBD provisioner ceph.com/rbd with identity: ceph.com/rbd
I0618 14:57:08.189769       1 leaderelection.go:185] attempting to acquire leader lease  default/ceph.com-rbd...
I0618 14:57:25.664359       1 leaderelection.go:194] successfully acquired lease default/ceph.com-rbd
I0618 14:57:25.664747       1 controller.go:631] Starting provisioner controller ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_73f7f1df-d045-11eb-b102-365d0590d7f4!
I0618 14:57:25.664852       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"ceph.com-rbd", UID:"d996384c-1ce1-4fbe-9f31-d44ffce5706b", APIVersion:"v1", ResourceVersion:"417887", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rbd-provisioner-76f6bc6669-jxjmv_73f7f1df-d045-11eb-b102-365d0590d7f4 became leader
I0618 14:57:25.765923       1 controller.go:680] Started provisioner controller ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_73f7f1df-d045-11eb-b102-365d0590d7f4!
I0618 14:57:25.766329       1 controller.go:987] provision "default/rbd-pvc1" class "rbd": started
I0618 14:57:25.776362       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"rbd-pvc1", UID:"62a390b9-d41a-4aed-8274-3a5484cc34e5", APIVersion:"v1", ResourceVersion:"415893", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/rbd-pvc1"
I0618 14:57:26.034448       1 provision.go:132] successfully created rbd image "kubernetes-dynamic-pvc-7e7556b2-d045-11eb-b102-365d0590d7f4"
I0618 14:57:26.034581       1 controller.go:1087] provision "default/rbd-pvc1" class "rbd": volume "pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5" provisioned
I0618 14:57:26.034643       1 controller.go:1101] provision "default/rbd-pvc1" class "rbd": trying to save persistentvvolume "pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5"
I0618 14:57:26.068695       1 controller.go:1108] provision "default/rbd-pvc1" class "rbd": persistentvolume "pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5" saved
I0618 14:57:26.068775       1 controller.go:1149] provision "default/rbd-pvc1" class "rbd": succeeded
I0618 14:57:26.069769       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"rbd-pvc1", UID:"62a390b9-d41a-4aed-8274-3a5484cc34e5", APIVersion:"v1", ResourceVersion:"415893", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5
E0618 15:00:33.652532       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=409, ErrCode=NO_ERROR, debug=""
E0618 15:00:33.656205       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=409, ErrCode=NO_ERROR, debug=""
E0618 15:00:33.659797       1 reflector.go:322] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:668: Failed to watch *v1.StorageClass: Get https://10.233.0.1:443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=417664&timeoutSeconds=390&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:33.660592       1 reflector.go:322] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:662: Failed to watch *v1.PersistentVolumeClaim: Get https://10.233.0.1:443/api/v1/persistentvolumeclaims?resourceVersion=417930&timeoutSeconds=454&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:33.664999       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=409, ErrCode=NO_ERROR, debug=""
E0618 15:00:33.665746       1 reflector.go:322] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:665: Failed to watch *v1.PersistentVolume: Get https://10.233.0.1:443/api/v1/persistentvolumes?resourceVersion=417928&timeoutSeconds=544&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:33.970302       1 leaderelection.go:234] error retrieving resource lock default/ceph.com-rbd: Get https://10.233.0.1:443/api/v1/namespaces/default/endpoints/ceph.com-rbd: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:34.662192       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:668: Failed to list *v1.StorageClass: Get https://10.233.0.1:443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:34.663584       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:662: Failed to list *v1.PersistentVolumeClaim: Get https://10.233.0.1:443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:34.667616       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:665: Failed to list *v1.PersistentVolume: Get https://10.233.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:35.663249       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:668: Failed to list *v1.StorageClass: Get https://10.233.0.1:443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:35.664270       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:662: Failed to list *v1.PersistentVolumeClaim: Get https://10.233.0.1:443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E0618 15:00:35.668180       1 reflector.go:205] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:665: Failed to list *v1.PersistentVolume: Get https://10.233.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused

Check the PVC:

kubectl describe pvc pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5
Error from server (NotFound): persistentvolumeclaims "pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5" not found
[root@kubesphere rbd-provisioner]# kubectl describe pvc rbd-pvc1
Name:          rbd-pvc1
Namespace:     default
StorageClass:  rbd
Status:        Bound
Volume:        pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5
Labels:        
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: rbd
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       
Events:
  Type    Reason                 Age                 From                                                                                Message
  ----    ------                 ----                ----                                                                                -------
  Normal  ExternalProvisioning   13m (x26 over 19m)  persistentvolume-controller                                                         waiting for a volume to be created, either by external provisioner "ceph.com/rbd" or manually created by system administrator
  Normal  Provisioning           9m47s               ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_73f7f1df-d045-11eb-b102-365d0590d7f4  External provisioner is provisioning volume for claim "default/rbd-pvc1"
  Normal  ProvisioningSucceeded  9m46s               ceph.com/rbd_rbd-provisioner-76f6bc6669-jxjmv_73f7f1df-d045-11eb-b102-365d0590d7f4  Successfully provisioned volume pvc-62a390b9-d41a-4aed-8274-3a5484cc34e5

Reference: https://blog.51cto.com/fengjicheng/2401702

Creating snapshots with rbd-csi on K8S

This time the ceph-csi plugin is taken from the latest release, release-v3.3.
The latest release supports the v1 snapshot API (earlier versions used v1beta1).
The official YAML files live in the ceph-csi project under deploy/rbd/kubernetes; make sure the versions match. If you are connecting CephFS instead, the files are under deploy/cephfs/kubernetes. Clone addresses:

Default branch: git clone https://github.com/ceph/ceph-csi.git
v3.3 release:   git clone https://github.com/ceph/ceph-csi.git -b release-v3.3

I. Install ceph-rbd-csi

# pwd
/root/ceph-csi-release-v3.3/deploy/rbd/kubernetes
# ls
csi-config-map.yaml  csi-nodeplugin-psp.yaml   csi-provisioner-psp.yaml   csi-rbdplugin-provisioner.yaml
csidriver.yaml       csi-nodeplugin-rbac.yaml  csi-provisioner-rbac.yaml  csi-rbdplugin.yaml

All of the modified YAML files used in this deployment are shown below:

1. Create the ConfigMap. In csi-config-map.yaml, fill in your cluster's "clusterID" and "monitors".

csi-config-map.yaml

---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "319d2cce-b087-4de9-bd4a-13edc7644abc",
        "monitors": [
          "192.168.3.61:6789",
          "192.168.3.62:6789",
          "192.168.3.63:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
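
The clusterID is the Ceph cluster fsid; it can be read on the Ceph cluster with:

# The fsid printed here is the clusterID used in csi-config-map.yaml
ceph fsid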
2. Create the CSIDriver object; create or edit csidriver.yaml:
---
# if Kubernetes version is less than 1.18 change
# apiVersion to storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rbd.csi.ceph.com
spec:
  attachRequired: true
  podInfoOnMount: false
3. Create the ServiceAccount, RBAC, and related objects:

csi-provisioner-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io

csi-nodeplugin-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
4. Create the PodSecurityPolicy objects:

csi-provisioner-psp.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rbd-csi-provisioner-psp
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - 'SYS_ADMIN'
  fsGroup:
    rule: RunAsAny
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'hostPath'
  allowedHostPaths:
    - pathPrefix: '/dev'
      readOnly: false
    - pathPrefix: '/sys'
      readOnly: false
    - pathPrefix: '/lib/modules'
      readOnly: true

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-csi-provisioner-psp
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['rbd-csi-provisioner-psp']

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-psp
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-csi-provisioner-psp
  apiGroup: rbac.authorization.k8s.io

csi-nodeplugin-psp.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rbd-csi-nodeplugin-psp
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - 'SYS_ADMIN'
  fsGroup:
    rule: RunAsAny
  privileged: true
  hostNetwork: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'hostPath'
  allowedHostPaths:
    - pathPrefix: '/dev'
      readOnly: false
    - pathPrefix: '/run/mount'
      readOnly: false
    - pathPrefix: '/sys'
      readOnly: false
    - pathPrefix: '/lib/modules'
      readOnly: true
    - pathPrefix: '/var/lib/kubelet/pods'
      readOnly: false
    - pathPrefix: '/var/lib/kubelet/plugins/rbd.csi.ceph.com'
      readOnly: false
    - pathPrefix: '/var/lib/kubelet/plugins_registry'
      readOnly: false
    - pathPrefix: '/var/lib/kubelet/plugins'
      readOnly: false

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin-psp
  # replace with non-default namespace name
  namespace: default
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['rbd-csi-nodeplugin-psp']

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin-psp
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-csi-nodeplugin-psp
  apiGroup: rbac.authorization.k8s.io
5. Deploy the CSI sidecar containers and the RBD CSI driver:

In csi-rbdplugin-provisioner.yaml, comment out the KMS-related settings and switch the images to quay.io, because the k8s.gcr.io images cannot be pulled from within China.

Change
# cat csi-rbdplugin-provisioner.yaml | grep k8s.gcr.io/sig-storage
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
          image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
          image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
          image: k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
to:
# cat csi-rbdplugin-provisioner.yaml | grep quay.io/k8scsi
          image: quay.io/k8scsi/csi-provisioner:v2.0.4
          image: quay.io/k8scsi/csi-snapshotter:v4.0.0
          image: quay.io/k8scsi/csi-attacher:v3.0.2
          image: quay.io/k8scsi/csi-resizer:v1.0.1

On a single-node cluster, also change the replica count. Change

kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  namespace: kube-system
spec:
  replicas: 3
  
  
to:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  namespace: kube-system
spec:
  replicas: 1

csi-rbdplugin-provisioner.yaml

---
kind: Service
apiVersion: v1
metadata:
  name: csi-rbdplugin-provisioner
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-rbdplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-rbdplugin-provisioner
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: rbd-csi-provisioner
      priorityClassName: system-cluster-critical
      containers:
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v2.0.4
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--retry-interval-start=500ms"
            - "--leader-election=true"
            #  set it to true to use topology based provisioning
            - "--feature-gates=Topology=false"
            # if fstype is not specified in storageclass, ext4 is default
            - "--default-fstype=ext4"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v4.0.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election=true"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          securityContext:
            privileged: true
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v3.0.2
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: quay.io/k8scsi/csi-resizer:v1.0.1
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=5"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.3-canary
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--rbdhardmaxclonedepth=8"
            - "--rbdsoftmaxclonedepth=4"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # - name: POD_NAMESPACE
            #   valueFrom:
            #     fieldRef:
            #       fieldPath: spec.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            #- name: ceph-csi-encryption-kms-config
            #  mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
        - name: csi-rbdplugin-controller
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.3-canary
          args:
            - "--type=controller"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--drivernamespace=$(DRIVER_NAMESPACE)"
          env:
            - name: DRIVER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
        - name: liveness-prometheus
          image: quay.io/cephcsi/cephcsi:v3.3-canary
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: socket-dir
          emptyDir: {
            medium: "Memory"
          }
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }

In csi-rbdplugin.yaml, comment out the KMS-related settings as well.

csi-rbdplugin.yaml

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
          image: quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          # for stable functionality replace canary with latest release version
          image: quay.io/cephcsi/cephcsi:v3.3-canary
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # - name: POD_NAMESPACE
            #   valueFrom:
            #     fieldRef:
            #       fieldPath: spec.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            #- name: ceph-csi-encryption-kms-config
            #  mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
        - name: liveness-prometheus
          securityContext:
            privileged: true
          image: quay.io/cephcsi/cephcsi:v3.3-canary
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          imagePullPolicy: "IfNotPresent"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        #- name: ceph-csi-encryption-kms-config
        #  configMap:
        #    name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
  selector:
    app: csi-rbdplugin

Put the YAML files from steps 1-5 in the same directory, then run:

# kubectl apply -f .
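
Before moving on, check that the provisioner Deployment and the node-plugin DaemonSet are running (label selectors taken from the manifests above):

# Provisioner pods (a single replica on a single-node cluster)
kubectl get pods -l app=csi-rbdplugin-provisioner
# Node-plugin pods (one per node)
kubectl get pods -l app=csi-rbdplugin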

II. Create the rbd StorageClass

1. Create the rbd secret

csi-rbd-secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: admin
  userKey: AQABcKZfMUF9FBAAf9gcYUkS0KW/ptcOpHPWyA==
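
Because stringData is used here, userKey is the raw (not base64-encoded) Ceph key; it can be read on the Ceph cluster with:

# The raw key printed here goes into userKey above (no base64 encoding for stringData)
ceph auth get-key client.admin; echo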

2. Create the csi-rbd StorageClass

csi-rbd-sc.yaml

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: 319d2cce-b087-4de9-bd4a-13edc7644abc
   pool: kube
#   imageFeatures: layering
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: default
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: default
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: default
   csi.storage.k8s.io/fstype: xfs
   imageFormat: "2"
   imageFeatures: "layering"
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
   - discard

3. Create the PVC

csi-rbd-pvc.yaml

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-rbd-pvc1
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
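
Because volumeMode is Block, a pod consumes this claim as a raw device through volumeDevices rather than a filesystem mount. Once the claim is bound in step 4 below, something like the following sketch attaches it (the pod name csi-rbd-block-pod and the device path are hypothetical):

# Attach the raw block PVC to a pod as /dev/xvda
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-block-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls -l /dev/xvda && sleep 3600"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: csi-rbd-pvc1
EOF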

4. Apply the YAML from steps 1-3

kubectl apply -f csi-rbd-secret.yaml
kubectl apply -f csi-rbd-sc.yaml
kubectl apply -f csi-rbd-pvc.yaml

Check the result:

# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-rbd-pvc1   Bound    pvc-aaedd83d-9338-4795-8d3d-b217b496c4a9   3Gi        RWO            csi-rbd        18h
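
With the csi-snapshotter sidecar deployed above, a snapshot can now be taken from the bound claim. This is only a sketch: it assumes the external snapshot-controller and the snapshot.storage.k8s.io/v1 CRDs are already installed in the cluster, and the VolumeSnapshotClass name csi-rbd-snapclass is hypothetical:

# Create a snapshot class for the RBD CSI driver and snapshot the PVC
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
driver: rbd.csi.ceph.com
parameters:
  clusterID: 319d2cce-b087-4de9-bd4a-13edc7644abc
  csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: csi-rbd-pvc1-snap
spec:
  volumeSnapshotClassName: csi-rbd-snapclass
  source:
    persistentVolumeClaimName: csi-rbd-pvc1
EOF

# READYTOUSE should become true once the RBD snapshot is created
kubectl get volumesnapshot csi-rbd-pvc1-snap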

References: https://www.cnblogs.com/abner123/p/13220075.html

https://blog.csdn.net/qq_42015987/article/details/116528450

https://github.com/ceph/ceph-csi/

To enable dynamic expansion on a StorageClass that does not already allow it (for example the rbd class from the first part), edit it:

kubectl edit storageclasses.storage.k8s.io rbd

and check whether the following field is present:
allowVolumeExpansion: true  # adding this field enables dynamic volume expansion
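
The csi-rbd class created earlier already sets allowVolumeExpansion: true, so a claim bound from it can be grown in place. A sketch that grows csi-rbd-pvc1 from the original 1Gi request to 3Gi (consistent with the 3Gi capacity shown in the result above):

# Request a larger size on the existing claim; the CSI resizer handles the rest
kubectl patch pvc csi-rbd-pvc1 --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'

# Watch the claim until CAPACITY reflects the new size
kubectl get pvc csi-rbd-pvc1 -w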
