Kubernetes data persistence with Ceph storage: CephFS

A PersistentVolume (PV) is Kubernetes' abstraction over storage. A PV is typically network storage: it does not belong to any single Node, but it can be accessed from every Node. A PV supports three access modes:

  • ReadWriteOnce: the volume can be mounted by a single Node, which gets read-write access
  • ReadOnlyMany: the volume can be mounted by many Nodes, each with read-only access
  • ReadWriteMany: the volume can be mounted by many Nodes, each with read-write access
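For reference, the access mode is declared as a list under `accessModes` in the PV spec. A minimal sketch (the backend, server address, and path below are hypothetical, chosen only to illustrate the field):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv           # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany          # one or more of the three modes above
  nfs:                       # any network-storage backend works; NFS shown for illustration
    server: 192.168.0.10     # hypothetical server
    path: /exports/data      # hypothetical export path
```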

We previously used Ceph RBD to back PVs in our Kubernetes cluster. Relying on Kubernetes' dynamic storage provisioning, a StorageClass with provisioner: kubernetes.io/rbd creates PVs on demand, and the PVCs declared in volumeClaimTemplates bind to them automatically. This setup has supported our workloads very reliably. However, RBD only offers ReadWriteOnce access, and recently we ran into a ReadWriteMany requirement. The access modes supported by the various volume plugins are compared below:

 

Volume Plugin          ReadWriteOnce   ReadOnlyMany   ReadWriteMany
AWSElasticBlockStore   ✓               -              -
AzureFile              ✓               ✓              ✓
AzureDisk              ✓               -              -
CephFS                 ✓               ✓              ✓
Cinder                 ✓               -              -
FC                     ✓               ✓              -
FlexVolume             ✓               ✓              -
Flocker                ✓               -              -
GCEPersistentDisk      ✓               ✓              -
Glusterfs              ✓               ✓              ✓
HostPath               ✓               -              -
iSCSI                  ✓               ✓              -
PhotonPersistentDisk   ✓               -              -
Quobyte                ✓               ✓              ✓
NFS                    ✓               ✓              ✓
RBD                    ✓               ✓              -
VsphereVolume          ✓               -              - (works when Pods are collocated)
PortworxVolume         ✓               -              ✓
ScaleIO                ✓               ✓              -
StorageOS              ✓               -              -

Given the comparison above, CephFS is the best choice when ReadWriteMany is required.

CephFS supports all three Kubernetes PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

一: First, create the CephFS pools on the Ceph side
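The original post does not show the Ceph-side commands, so here is a typical sequence as a sketch (the pool names and PG counts are assumptions; size them for your own cluster):

```shell
# Create the data and metadata pools (128 PGs each is an assumption)
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128

# Create the filesystem from the two pools
ceph fs new cephfs cephfs_metadata cephfs_data

# Confirm the filesystem exists and an MDS is active
ceph fs ls
ceph mds stat
```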

 

二: Deploy and configure the StorageClass

Create the cluster-level Secret

 

# ceph auth get-key client.admin|more

AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg==

# kubectl create secret generic cephfs-secret --type="kubernetes.io/rbd" --from-literal=key=AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg==  --namespace=default

secret/cephfs-secret created

#
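Rather than copy-pasting the key, the same secret can be created in one step. A sketch, assuming the ceph CLI is available on the machine where kubectl runs:

```shell
kubectl create secret generic cephfs-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  --namespace=default
```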

 

Inspect the secret:

# kubectl get secret cephfs-secret -o yaml

 

 

Configure the StorageClass

# cat > storageclass-cephfs.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 100.100.100.100:6789,100.100.100.101:6789,100.100.100.102:6789
  adminId: admin
  adminSecretName: cephfs-secret
  adminSecretNamespace: "default"
  claimRoot: /volumes/kubernetes
EOF

 

Apply it:

# kubectl apply -f storageclass-cephfs.yaml

storageclass.storage.k8s.io/dynamic-cephfs created

 

Verify:

# kubectl get sc|grep cephfs

dynamic-cephfs        ceph.com/cephfs   7s
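Note that ceph.com/cephfs is an out-of-tree provisioner: the cephfs-provisioner from the kubernetes-incubator/external-storage project must already be running in the cluster, or PVCs against this class will stay Pending. A quick sanity check (the deployment name is how that project names it; adjust if yours differs):

```shell
kubectl get pods --all-namespaces | grep cephfs-provisioner
```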

 

三: Testing

Create a test PVC

# cat cephfs-pvc-test.yaml

 

kind: PersistentVolumeClaim

apiVersion: v1

metadata:

  name: cephfs-claim

spec:

  accessModes:    

    - ReadWriteMany

  storageClassName: dynamic-cephfs

  resources:

    requests:

      storage: 2Gi

 

# kubectl apply -f cephfs-pvc-test.yaml

persistentvolumeclaim/cephfs-claim created

#

 

Check the PVC and PV:

 

# kubectl get pvc

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
cephfs-claim   Bound    pvc-e30e1c1f-fa5b-4bd3-8d9a-0f5f98f9fb95   2Gi        RWX            dynamic-cephfs   43s

 

 

# kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS     REASON   AGE
pvc-e30e1c1f-fa5b-4bd3-8d9a-0f5f98f9fb95   2Gi        RWX            Delete           Bound    default/cephfs-claim   dynamic-cephfs            2m21s
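Each dynamically provisioned PV is backed by a directory under the claimRoot configured in the StorageClass. This can be verified from any machine with access to the Ceph cluster by mounting the CephFS root with the kernel client (reusing the monitor address and admin key from earlier):

```shell
mount -t ceph 100.100.100.100:6789:/ /mnt \
  -o name=admin,secret=AQAYz3lekwIKKxAA+HUR4UIIK2GrljL5k7sYbg==
ls /mnt/volumes/kubernetes   # one directory per provisioned PV
```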

 

Create an nginx Pod to test the mount

# cat cephfs-deployment.yaml

kind: Deployment

apiVersion: apps/v1

metadata:

  labels:

    app: cephfs-nginx-pod

  name: cephfs-nginx-pod

  namespace: default

spec:

  replicas: 1

  selector:

    matchLabels:

      app: cephfs-nginx-pod

  template:

    metadata:

      labels:

        app: cephfs-nginx-pod

    spec:

      containers:

      - name: cephfs-nginx-pod

        image: nginx

        imagePullPolicy: Always

        resources:

          requests:

            memory: "1Gi"

            cpu: "250m"

          limits:

            memory: "1Gi"

            cpu: "500m"

        volumeMounts:

          - name: cephfs

            mountPath: "/usr/share/nginx/html"

        ports:

          - containerPort: 80

            protocol: TCP

      volumes:

        - name: cephfs

          persistentVolumeClaim:

            claimName: cephfs-claim

#

#

 

Apply it and watch the rollout:

# kubectl apply -f cephfs-deployment.yaml

deployment.apps/cephfs-nginx-pod created

# kubectl get deployment|grep cephfs

cephfs-nginx-pod         0/1     1            0           9s

# kubectl get pod|grep cephfs

cephfs-nginx-pod-7569d79f87-5jsc7         0/1     ContainerCreating   0          15s

# kubectl get pod|grep cephfs

cephfs-nginx-pod-7569d79f87-5jsc7         1/1     Running             0          43s

# kubectl get pod -o wide|grep cephfs

cephfs-nginx-pod-7569d79f87-5jsc7         1/1     Running             0          87s     10.244.1.48   node-17             

#

#
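To confirm the volume is actually writable, drop a test page into the mount through the Pod (the Pod name comes from the output above):

```shell
kubectl exec cephfs-nginx-pod-7569d79f87-5jsc7 -- \
  sh -c 'echo "hello from cephfs" > /usr/share/nginx/html/index.html'
```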

 

Create a Service

# cat cephfs-service.yaml

apiVersion: v1

kind: Service

metadata:

  name: cephfs-service

spec:

  type: NodePort

  ports:

   - protocol: TCP

     port: 80

     targetPort: 80

  selector:

    app: cephfs-nginx-pod

 

# kubectl apply -f cephfs-service.yaml

service/cephfs-service created

# kubectl get svc|grep cephfs

cephfs-service   NodePort   10.0.0.163   <none>   80:31214/TCP   24s

#
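The Service is now reachable on any node at the allocated NodePort (the node IP below is a placeholder):

```shell
curl http://<node-ip>:31214/
```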

 

 

 

 

Create an Ingress

# cat cephfs-ingress.yaml

apiVersion: v1

kind: List

items:

- apiVersion: extensions/v1beta1

  kind: Ingress

  metadata:

    name: cephfs-ingress

    namespace: default

    annotations:

      nginx.ingress.kubernetes.io/rewrite-target: /

  spec:

    rules:

    - host: 04.test.cn

      http:

        paths:

        - path: /

          backend:

            serviceName: cephfs-service

            servicePort: 80

 

#

 

# kubectl apply -f cephfs-ingress.yaml

ingress.extensions/cephfs-ingress created

#

# kubectl get ingress

NAME             HOSTS        ADDRESS   PORTS   AGE
cephfs-ingress   04.test.cn             80      15m

 

# kubectl get ep|grep cephfs

cephfs-service        10.244.1.48:80                  23m

#

 

Access via the domain name

 

http://04.test.cn
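If 04.test.cn has not been set up in DNS yet, the ingress can still be exercised by pointing curl at the ingress controller with an explicit Host header (the controller address is a placeholder):

```shell
curl -H "Host: 04.test.cn" http://<ingress-controller-ip>/
```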
