Using ceph-csi

  1. If Ceph is deployed via Rook and the same k8s cluster manages both Ceph and the workloads, this approach is not needed; just build a StorageClass directly on top of Rook.
    Reference: https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html

  2. This document describes a k8s cluster connecting to a rook-ceph cluster managed by another k8s cluster; in fact any Ceph cluster works the same way, so the Rook deployment detail can be ignored.

  3. This document covers the RBD scenario, not CephFS.

References:

https://github.com/ceph/ceph-csi

https://github.com/ajinkyaingole30/K8s-Ceph

The ceph-csi documentation is not very detailed, so for some specific parameters it is quicker to search the GitHub repo and issues directly.

Prerequisites:

Create a Ceph RBD pool (here it is qaokd) and create a rados namespace inside that pool; the RBD images backing every PVC will then live under that namespace.

[root@node1 ~]# rbd namespace list qaokd
[root@node1 ~]# 
[root@node1 ~]# rbd namespace create qaokd/k8s
[root@node1 ~]# rbd namespace list qaokd
NAME
k8s 
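
If the pool itself does not exist yet, it can be created roughly like this (a sketch; the PG count of 64 and the explicit rbd application tag are assumptions, adjust them for your cluster):

[root@node1 ~]# ceph osd pool create qaokd 64 64
[root@node1 ~]# ceph osd pool application enable qaokd rbd
[root@node1 ~]# rbd pool init qaokd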

Manifests involved:

  1. Deployment
    kubectl apply -f /root/ceph-csi/deploy/rbd/kubernetes/csi-config-map.yaml

# csi-config-map.yaml

---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "52feab00-xxxxxxxxxxxxxxxxxxx",
        "radosNamespace": "k8s",
        "monitors": [
          "10.1xx.xx.12:6789",
          "10.1xx.xx.13:6789",
          "10.1xx.xx.14:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
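
The clusterID is the Ceph cluster fsid and monitors are the mon v1 (port 6789) addresses; assuming admin access on a Ceph node, both can be read like this:

[root@node1 ~]# ceph fsid          # value to put in "clusterID"
[root@node1 ~]# ceph mon dump      # mon addresses to put in "monitors"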

(py3env) (qa)[root@control01 rbd]# cat /root/ceph-csi/examples/rbd/secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: admin
  userKey: AQBxxxxxxxxxxxxxxxxxxxx/00NVsY7w==
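
The userKey is simply the Ceph keyring secret for the chosen userID; for client.admin it can be pulled on a Ceph node with:

[root@node1 ~]# ceph auth get-key client.admin

For anything beyond a test setup it is better to create a dedicated CSI user whose caps are limited to the qaokd pool instead of reusing admin.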

User permission (RBAC) configuration; these two need no modification, just apply them as-is:

kubectl apply -f /root/ceph-csi/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
kubectl apply -f /root/ceph-csi/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
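
A quick sanity check after applying (the exact object names come from the upstream manifests in your checkout, they typically contain "csi"):

kubectl get serviceaccount | grep -i csi
kubectl get clusterrole,clusterrolebinding | grep -i csi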

Deploy the CSI RBD plugin

You may need to adjust the replica count for your cluster; mine is an all-in-one (AIO) environment, while a 3-node HA setup needs no change (see the sketch after the apply commands).

kubectl apply -f /root/ceph-csi/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
kubectl apply -f /root/ceph-csi/deploy/rbd/kubernetes/csi-rbdplugin.yaml
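
csi-rbdplugin.yaml is a DaemonSet and needs no change; the provisioner is a Deployment (3 replicas in the upstream manifest at the time of writing), so on an AIO node something like this works (a sketch, assuming the manifests were applied into the default namespace as above):

kubectl scale deployment csi-rbdplugin-provisioner --replicas=1
kubectl get pods -o wide | grep csi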

Deploy the StorageClass

kubectl apply -f /root/ceph-csi/examples/rbd/storageclass.yaml


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: 52feab00-22e8-4f98-9840-xxxxxxxxxxx
   pool: qaokd
   imageFeatures: layering
   thickProvision: "false"
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: default
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: default
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true   # this should enable volume resize (expansion)
mountOptions:
   - discard
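
Note that pool must match the RBD pool created in the prerequisites (qaokd), while the radosNamespace comes from the config map (k8s). After applying, confirm the class is registered:

kubectl get sc csi-rbd-sc
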
  2. Test

kubectl apply -f /root/ceph-csi/examples/rbd/pvc.yaml

kubectl apply -f /root/ceph-csi/examples/rbd/pod.yaml
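
The upstream examples can be used unchanged here because the StorageClass name above (csi-rbd-sc) matches them. For reference, the PVC is roughly equivalent to the following (a sketch based on the upstream example, not a verbatim copy; the name and size may differ in your checkout):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc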

Result:


# kubectl exec -it csi-rbd-demo-pod -- bash

root@csi-rbd-demo-pod:/# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   3.7T  0 disk 
|-sda1              8:1    0     1M  0 part 
|-sda2              8:2    0     1G  0 part 
`-sda3              8:3    0   3.7T  0 part 
  |-centos-root   253:0    0 648.5G  0 lvm  /etc/hosts
  |-centos-swap   253:1    0     4G  0 lvm  
  `-centos-docker 253:2    0     3T  0 lvm  /etc/hostname
rbd0              252:0    0     1G  0 disk /var/lib/www/html

# You can see the 1G PVC is attached and in use (rbd0 mounted at /var/lib/www/html)
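
On the Ceph side the backing image should show up inside the k8s rados namespace of the qaokd pool (ceph-csi typically names images csi-vol-<uuid>):

[root@node1 ~]# rbd ls qaokd/k8s
[root@node1 ~]# rbd du qaokd/k8s/<image>   # <image> is a placeholder, substitute a name from the previous command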

KubeVirt test:

  1. Modify the DataVolume (DV) template
(py3env) (qa)[root@control01 example]# git diff
diff --git a/manifests/example/vm-dv.yaml b/manifests/example/vm-dv.yaml
index d82fa35..1c5397a 100644
--- a/manifests/example/vm-dv.yaml
+++ b/manifests/example/vm-dv.yaml
@@ -17,7 +17,7 @@ spec:
         resources:
           requests:
             storage: 100M
-        storageClassName: hdd
+        storageClassName: csi-rbd-sc
       source:
         http:
           url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
(py3env) (qa)[root@control01 example]# pwd
/root/containerized-data-importer/manifests/example

kubectl create -f vm-dv.yaml 


(py3env) (qa)[root@control01 example]# kubectl get pod  -A -o wide | grep cirros

default       importer-cirros-dv                            1/1     Running   0          38s    10.233.64.35    compute01

(py3env) (qa)[root@control01 example]# kubectl get dv -A -o wide
NAMESPACE   NAME             PHASE       PROGRESS   RESTARTS   AGE
default     cirros-dv        Succeeded   100.0%                2m48s

(py3env) (qa)[root@control01 example]# kubectl get vm
NAME                   AGE     VOLUME
vm-cirros-datavolume   4m11s   
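
Before starting the VM, you can optionally confirm that CDI bound the DV's backing PVC on the new StorageClass (CDI normally creates a PVC with the same name as the DV):

kubectl get pvc cirros-dv -o wide
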
kubectl virt start  vm-cirros-datavolume

kubectl virt console  vm-cirros-datavolume

# Log in to the VM over the console and verify access (screenshot omitted)
