① The Kubernetes cluster is running normally.
[root@k8s01 rbac]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    <none>   10d   v1.16.6
k8s02   Ready    <none>   10d   v1.16.6
k8s03   Ready    <none>   10d   v1.16.6
② The Ceph cluster is running normally and CephFS is already configured.
[root@k8s01 rbac]# ceph -s
  cluster:
    id:     b5f36dec-8faa-4efa-b08d-cbcd8305ae63
    health: HEALTH_WARN
            1 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum k8s01,k8s03,k8s04 (age 2d)
    mgr: k8s01(active, since 118m)
    mds: cephfs:1 {0=k8s01=up:active}
    osd: 3 osds: 3 up (since 2d), 3 in (since 3d)

  task status:
    scrub status:
        mds.k8s01: idle

  data:
    pools:   3 pools, 81 pgs
    objects: 29 objects, 161 KiB
    usage:   3.2 GiB used, 237 GiB / 240 GiB avail
    pgs:     81 active+clean
③ The community-maintained cephfs-provisioner, from: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs
Kubernetes has no in-tree dynamic volume support for CephFS, so we use the community-provided cephfs-provisioner.
# pwd
/opt/k8s/work/kubernetes/cluster/addons/ceph-storage-class/external-storage/ceph/cephfs/deploy/rbac
[root@k8s01 rbac]# ls -l
total 40
-rw-r--r-- 1 root root 279 Jul 27 14:40 ceph-deployment.yaml
-rw-r--r-- 1 root root 148 Jul 27 14:38 ceph-secret.yaml
-rw-r--r-- 1 root root 283 Jul 27 14:14 clusterrolebinding.yaml
-rw-r--r-- 1 root root 652 Jul 27 14:14 clusterrole.yaml
-rw-r--r-- 1 root root 708 Jul 27 14:14 deployment.yaml
-rw-r--r-- 1 root root 263 Jul 27 14:14 rolebinding.yaml
-rw-r--r-- 1 root root 316 Jul 27 14:14 role.yaml
-rw-r--r-- 1 root root 93 Jul 27 14:14 serviceaccount.yaml
-rw-r--r-- 1 root root 499 Jul 27 14:35 test-pvc.yaml
-rw-r--r-- 1 root root 294 Jul 27 14:47 test.yaml
First download the community cephfs-provisioner to the local machine, or fetch it from the Baidu Netdisk link at the end of this post.
① Create the cephfs-provisioner
# kubectl create ns cephfs       # create the namespace first
# kubectl -n cephfs apply -f ./  # then apply every manifest in this directory into it
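The steps above can be collected into one small script (a sketch; it assumes the provisioner manifests sit in the current directory and that kubectl is already configured against the cluster):

```shell
#!/bin/sh
set -e

# Create the target namespace; tolerate re-runs where it already exists
kubectl create namespace cephfs || true

# Deploy all RBAC and provisioner manifests in this directory into it
kubectl -n cephfs apply -f ./

# Block until the provisioner deployment has rolled out
kubectl -n cephfs rollout status deployment/cephfs-provisioner
```

The deployment name `cephfs-provisioner` matches the pod name shown in the verification step later in this post.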
② Create the ceph-secret
Because the Ceph cluster has cephx authentication enabled, Kubernetes needs a Secret resource before it can create PVs. Note that Secret data is base64-encoded, not encrypted. Extract the key on a Ceph monitor with:
[root@k8s01 rbac]# ceph auth get-key client.admin |base64
QVFCL3E1ZGIvWFdxS1JBQTUyV0ZCUkxldnRjQzNidTFHZXlVYnc9PQ==
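Before pasting a key into the Secret it is worth checking the base64 round-trip locally. The sketch below uses a dummy key (not a real cluster key); the `printf`-vs-`echo` distinction matters because `echo` appends a newline that would silently corrupt the encoded value:

```shell
# Dummy key for illustration only -- on a real cluster obtain it with:
#   key=$(ceph auth get-key client.admin)
key='AQBExampleDummyKeyNotFromARealCluster0000=='

# Encode without a trailing newline (printf, not echo)
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"

# Decoding must return exactly the original key
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$key" ] && echo "round-trip OK"
```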
[root@k8s01 rbac]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret       # secret name
  namespace: cephfs       # namespace
data:
  key: QVFBeVFCbGZCOS9uTlJBQTdvWDlBZEtxcDBReG10VTgrVmN6aUE9PQ==
[root@k8s01 rbac]# kubectl apply -f ceph-secret.yaml
③ Create the StorageClass
[root@k8s01 rbac]# cat ceph-deployment.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
  namespace: cephfs   # ignored: StorageClass is cluster-scoped
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.16.1.11:6789,172.16.1.13:6789,172.16.1.14:6789   # ceph monitor list
  adminId: admin                 # cephx user used for provisioning
  adminSecretName: ceph-secret   # secret holding that user's key
  adminSecretNamespace: cephfs
[root@k8s01 rbac]# kubectl apply -f ceph-deployment.yaml
④ Test CephFS as the backing storage
[root@k8s01 rbac]# cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim          # PVC name; the Pod below references this name
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: cephfs
spec:
  containers:
    - name: test-pod
      image: ikubernetes/myapp:v4
      volumeMounts:
        - name: pvc
          mountPath: "/data/cephfs"   # mount path inside the pod
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: claim
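Beyond checking resource status with `kubectl get`, a quick write/read through the pod confirms that CephFS is actually mounted and writable (a sketch; `test.txt` is an arbitrary file name chosen here for illustration):

```shell
# Write a file onto the CephFS mount through the pod, then read it back
kubectl -n cephfs exec test-pod -- sh -c \
  'echo hello-cephfs > /data/cephfs/test.txt && cat /data/cephfs/test.txt'

# The dynamically provisioned PV is cluster-scoped, so no -n flag is needed
kubectl get pv
```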
⑤ Verification
[root@k8s01 rbac]# kubectl get storageclass
NAME     PROVISIONER       AGE
cephfs   ceph.com/cephfs   82m
[root@k8s01 rbac]# kubectl get pvc
No resources found in default namespace.
[root@k8s01 rbac]# kubectl get pvc -n cephfs
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim   Bound    pvc-d2bf4d1c-0f9b-409e-b902-bcd0b6087658   1Gi        RWX            cephfs         67m
[root@k8s01 rbac]# kubectl get pod -n cephfs
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7b77478cb8-687rv   1/1     Running   0          95m
test-pod                              1/1     Running   0          63m
Resources:
Reference: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs
Reference blog: https://blog.csdn.net/weixin_42715413/article/details/89513385
Reference blog: https://blog.csdn.net/ywq935/article/details/103850257
GitHub downloads can be slow, so the files have also been uploaded to Baidu Netdisk.
Link: https://pan.baidu.com/s/1PNali5-YwQd_K_77Yos5RQ
Extraction code: vmi5