Using GlusterFS as Backend Storage for Kubernetes

This post briefly describes how to deploy containerized GlusterFS + heketi as backend storage for Kubernetes; an introduction to GlusterFS itself is out of scope here. The deployment process mainly follows: gluster-kubernetes

1. Environment

[root@master-0 ~]# kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP       OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master-0   Ready    master   40d   v1.12.1   192.168.112.221   192.168.112.221   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
master-1   Ready    master   40d   v1.12.1   192.168.112.222   192.168.112.222   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
master-2   Ready    master   40d   v1.12.1   192.168.112.223   192.168.112.223   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
worker-0   Ready    worker   40d   v1.12.1   192.168.112.224   192.168.112.224   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
worker-1   Ready    worker   40d   v1.12.1   192.168.112.225   192.168.112.225   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1

In this walkthrough, master-2, worker-0, and worker-1 will serve as the three GlusterFS nodes.

2. Overview

GlusterFS is an open-source distributed storage system. Kubernetes ships with a built-in provisioner for it, and it supports the ReadWriteOnce, ReadOnlyMany, and ReadWriteMany access modes. heketi provides a RESTful management interface on top of GlusterFS, through which Kubernetes manages the lifecycle of GlusterFS volumes.

To use or mount GlusterFS volumes in a Kubernetes cluster, glusterfs-fuse must be installed on every node that will mount them:

[root@master-0 ~]# yum install glusterfs-fuse -y

In this deployment, my lab environment had no spare blank disks to dedicate to GlusterFS, so loop devices are used to simulate disks. Be aware that loop devices may be detached after an OS reboot, which would break GlusterFS.
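One way to survive reboots is a small systemd oneshot unit that re-attaches the backing file at boot. This is only a sketch; the unit name, the Before= ordering, and the file path (created in section 3.1 below) are assumptions for this environment:

cat > /etc/systemd/system/gluster-loop.service <<'EOF'
[Unit]
Description=Re-attach the GlusterFS loop device at boot
Before=kubelet.service

[Service]
Type=oneshot
# Path of the backing file created in section 3.1; adjust if yours differs
ExecStart=/usr/sbin/losetup -f /home/glusterfs/gluster.disk
# An additional vgchange -ay may be needed so heketi's VGs come back online
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable gluster-loop.service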

3. Deployment

3.1. Creating loop devices to simulate disks

Run the following commands on each of master-2, worker-0, and worker-1 to create the loop device:

[root@master-2 ~]# mkdir -p /home/glusterfs
[root@master-2 ~]# cd /home/glusterfs/
### Create a 30 GiB file ###
[root@master-2 glusterfs]# dd if=/dev/zero of=gluster.disk bs=1024 count=$(( 1024 * 1024 * 30 ))
31457280+0 records in
31457280+0 records out
32212254720 bytes (32 GB) copied, 140.933 s, 229 MB/s
### Attach the file as a loop device ###
[root@master-2 glusterfs]# losetup -f gluster.disk
[root@master-2 glusterfs]# losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /home/glusterfs/gluster.disk

Other related commands:

  • Detach: losetup -d /dev/loop0
  • Detach all: losetup -D
  • List disks: fdisk -l
  • List logical volumes: lvs or lvdisplay
  • List volume groups: vgs or vgdisplay
  • Remove a volume group: vgremove

If losetup -d or losetup -D cannot detach the loop device, check the disk and the volume groups; the device can usually be detached once the VG is removed. If the VG itself cannot be removed, use dmsetup status to inspect the Device Mapper targets, then dmsetup remove xxxxxx to delete the mappings related to the loop device.
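Putting that together, a typical cleanup for a stuck /dev/loop0 looks like the sketch below; the mapping and VG names are placeholders that vary per deployment:

dmsetup status                  # list Device Mapper targets backed by the loop device
dmsetup remove <mapping-name>   # remove the gluster/heketi-related mappings
vgremove -y <vg-name>           # remove the volume group heketi created on the device
losetup -d /dev/loop0           # the loop device can now be detached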

Label master-2, worker-0, and worker-1 so that the GlusterFS pods will be scheduled onto these three nodes:

[root@master-0 glusterFS]# kubectl label node master-2 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl label node worker-0 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl label node worker-1 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl get nodes -L storagenode
NAME       STATUS   ROLES    AGE   VERSION   STORAGENODE
master-0   Ready    master   40d   v1.12.1   
master-1   Ready    master   40d   v1.12.1   
master-2   Ready    master   40d   v1.12.1   glusterfs
worker-0   Ready    worker   40d   v1.12.1   glusterfs
worker-1   Ready    worker   40d   v1.12.1   glusterfs

3.2. Deploying GlusterFS

#  glusterfs-daemonset.yml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  namespace: ns-support
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: 192.168.101.88:5000/gluster/gluster-centos:gluster4u0_centos7
          imagePullPolicy: IfNotPresent
          name: glusterfs
          env:
            # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
            # readiness/liveness probe validate gluster-blockd as well
            - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
              value: "1"
            - name: GB_GLFS_LRU_COUNT
              value: "15"
            - name: TCMU_LOGDIR
              value: "/var/log/glusterfs/gluster-block"
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: "/var/lib/heketi"
            - name: glusterfs-run
              mountPath: "/run"
            - name: glusterfs-lvm
              mountPath: "/run/lvm"
            - name: glusterfs-etc
              mountPath: "/etc/glusterfs"
            - name: glusterfs-logs
              mountPath: "/var/log/glusterfs"
            - name: glusterfs-config
              mountPath: "/var/lib/glusterd"
            - name: glusterfs-dev
              mountPath: "/dev"
            - name: glusterfs-misc
              mountPath: "/var/lib/misc/glusterfsd"
            - name: glusterfs-cgroup
              mountPath: "/sys/fs/cgroup"
              readOnly: true
            - name: glusterfs-ssl
              mountPath: "/etc/ssl"
              readOnly: true
            - name: kernel-modules
              mountPath: "/usr/lib/modules"
              readOnly: true
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 40
            exec:
              command:
                - "/bin/bash"
                - "-c"
                - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 50
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 40
            exec:
              command:
                - "/bin/bash"
                - "-c"
                - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 50
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: "/var/lib/heketi"
        - name: glusterfs-run
        - name: glusterfs-lvm
          hostPath:
            path: "/run/lvm"
        - name: glusterfs-etc
          hostPath:
            path: "/etc/glusterfs"
        - name: glusterfs-logs
          hostPath:
            path: "/var/log/glusterfs"
        - name: glusterfs-config
          hostPath:
            path: "/var/lib/glusterd"
        - name: glusterfs-dev
          hostPath:
            path: "/dev"
        - name: glusterfs-misc
          hostPath:
            path: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          hostPath:
            path: "/sys/fs/cgroup"
        - name: glusterfs-ssl
          hostPath:
            path: "/etc/ssl"
        - name: kernel-modules
          hostPath:
            path: "/usr/lib/modules"
      tolerations:
        - effect: NoSchedule
          operator: Exists
  • GlusterFS depends on local device files, so the pods are deployed with hostNetwork: true and privileged: true
  • Deployed as a DaemonSet restricted to the labeled nodes
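Assuming the manifest above is saved as glusterfs-daemonset.yml and the ns-support namespace already exists (otherwise create it first with kubectl create namespace ns-support), deploy and verify it:

kubectl apply -f glusterfs-daemonset.yml
# one glusterfs pod should come up on each of the three labeled nodes
kubectl get pods -n ns-support -l glusterfs=pod -o wide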

3.3. Deploying Heketi

  • Create RBAC permissions for heketi
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: ns-support
  labels:
    glusterfs: heketi-sa
    heketi: sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heketi-sa-view
  labels:
    glusterfs: heketi-sa-view
    heketi: sa-view
subjects:
  - kind: ServiceAccount
    name: heketi-service-account
    namespace: ns-support
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
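After applying the RBAC manifest above (the file name heketi-rbac.yml is only an assumption), you can check that the ServiceAccount was granted the edit role, e.g. the pods/exec permission heketi needs to run gluster commands inside the glusterfs pods:

kubectl apply -f heketi-rbac.yml
kubectl auth can-i create pods/exec -n ns-support \
  --as=system:serviceaccount:ns-support:heketi-service-account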
  • Create the heketi configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: heketi-config
  namespace: ns-support
data:
  heketi.json: |-
    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",
      "_use_auth": "Enable JWT authorization. Please enabled for deployment",
      "use_auth": false,
      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "awesomePassword"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "awesomePassword"
        }
      },
      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
        "executor": "kubernetes",
        "_db_comment": "Database file name",
        "db" : "/var/lib/heketi/heketi.db",
        "kubeexec": {
          "rebalance_on_expansion": true
        },
        "sshexec": {
          "rebalance_on_expansion": true,
          "keyfile": "/etc/heketi/private_key",
          "port": "22",
          "user": "root",
          "sudo": false
        }
      },
      "backup_db_to_kube_secret": false
    }
  topology.json: |-
    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "master-2"
                  ],
                  "storage": [
                    "192.168.112.223"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker-0"
                  ],
                  "storage": [
                    "192.168.112.224"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker-1"
                  ],
                  "storage": [
                    "192.168.112.225"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            }
          ]
        }
      ]
    }
  private_key: ""
  • Because these three files contain sensitive information, they could also be created as a Secret (see the sketch after these notes); a ConfigMap is used here for simplicity
  • Pay attention to each node's devices entry: the loop device name may differ from node to node, so adjust it to match your environment
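A hypothetical Secret-based variant keeps the same three keys, so the config volume in the Deployment below only needs its configMap source replaced with a secret source:

kubectl create secret generic heketi-config -n ns-support \
  --from-file=heketi.json --from-file=topology.json --from-file=private_key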
  • Deploy a non-persistent (bootstrap) heketi instance
kind: Deployment
apiVersion: apps/v1
metadata:
  name: deploy-heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: heketi-pod
      deploy-heketi: pod
  template:
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-pod
        deploy-heketi: pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
        - image: 192.168.101.88:5000/heketi/heketi:dev
          imagePullPolicy: IfNotPresent
          name: deploy-heketi
          env:
            - name: HEKETI_USER_KEY
              value: "awesomePassword"
            - name: HEKETI_ADMIN_KEY
              value: "awesomePassword"
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_FSTAB
              value: "/var/lib/heketi/fstab"
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: "y"
            - name: HEKETI_IGNORE_STALE_OPERATIONS
              value: "true"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: db
              mountPath: /var/lib/heketi
            - name: config
              mountPath: /etc/heketi
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
      volumes:
        - name: db
        - name: config
          configMap:
            name: heketi-config
---
kind: Service
apiVersion: v1
metadata:
  name: deploy-heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-service
    deploy-heketi: service
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    deploy-heketi: pod
  ports:
    - name: deploy-heketi
      port: 8080
      targetPort: 8080
  • This heketi instance exists only to initialize GlusterFS and to create the persistent volume for heketi's own database; it is discarded once that is done
  • Use the non-persistent heketi instance to initialize GlusterFS and create heketi's persistent database
[root@master-0 glusterFS]# kubectl get pods -n ns-support -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE
deploy-heketi-59d569ddbc-5d2jq   1/1     Running   0          15s     10.233.68.109     worker-0   <none>
glusterfs-cpj22                  1/1     Running   0          3m53s   192.168.112.225   worker-1   <none>
glusterfs-h6slp                  1/1     Running   0          3m53s   192.168.112.224   worker-0   <none>
glusterfs-pnwfl                  1/1     Running   0          3m53s   192.168.112.223   master-2   <none>
### Run heketi-cli inside the container to load the GlusterFS topology ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- heketi-cli topology load --user admin --secret "awesomePassword" --json=/etc/heketi/topology.json
Creating cluster ... ID: cb7860a4714bd4a0e4a3a69471e34b91
	Allowing file volumes on cluster.
	Allowing block volumes on cluster.
	Creating node master-2 ... ID: 1d23857752fec50956ef8b50c4bb2c6a
		Adding device /dev/loop0 ... OK
	Creating node worker-0 ... ID: 28112dc6ea364a3097bb676fefa8975a
		Adding device /dev/loop0 ... OK
	Creating node worker-1 ... ID: 51d882510ee467c0f5c963bb252768d4
		Adding device /dev/loop0 ... OK
### Run heketi-cli inside the container to generate the Kubernetes manifests ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- heketi-cli setup-kubernetes-heketi-storage --user admin --secret "awesomePassword" --image 192.168.101.88:5000/heketi/heketi:dev --listfile=/tmp/heketi-storage.json
Saving /tmp/heketi-storage.json
### Apply the generated manifests in the target namespace ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- cat /tmp/heketi-storage.json | kubectl apply -n ns-support -f -
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
### Check whether the job created by the manifests has completed ###
[root@master-0 glusterFS]# kubectl get pods -n ns-support 
NAME                             READY   STATUS      RESTARTS   AGE
deploy-heketi-59d569ddbc-5d2jq   1/1     Running     0          9m34s
glusterfs-cpj22                  1/1     Running     0          13m
glusterfs-h6slp                  1/1     Running     0          13m
glusterfs-pnwfl                  1/1     Running     0          13m
heketi-storage-copy-job-96cvb    0/1     Completed   0          21s
  • heketi-cli topology load joins the GlusterFS nodes into a cluster
  • heketi-cli setup-kubernetes-heketi-storage generates the YAML manifests needed to deploy heketi on Kubernetes, chiefly the persistent storage for heketi itself; the names setup-openshift-heketi-storage, setup-heketi-db-storage, and setup-kubernetes-heketi-storage are aliases for the same command
  • If heketi-storage-copy-job-96cvb stays in ContainerCreating or Pending, inspect the pod with kubectl describe; an unknown filesystem type 'glusterfs' error usually means glusterfs-fuse is not installed on the node the pod was scheduled to
  • Delete the non-persistent heketi instance and the other temporary objects
[root@master-0 glusterFS]# kubectl delete all,service,jobs,deployment,secret -l "deploy-heketi" -n ns-support
pod "deploy-heketi-59d569ddbc-5d2jq" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
  • Deploy the persistent heketi instance
kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-deployment
    heketi: deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: heketi-pod
      heketi: pod
  template:
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-pod
        heketi: pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
        - image: 192.168.101.88:5000/heketi/heketi:dev
          imagePullPolicy: IfNotPresent
          name: heketi
          env:
            - name: HEKETI_USER_KEY
              value: "awesomePassword"
            - name: HEKETI_ADMIN_KEY
              value: "awesomePassword"
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_FSTAB
              value: "/var/lib/heketi/fstab"
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: "y"
            - name: HEKETI_IGNORE_STALE_OPERATIONS
              value: "true"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: db
              mountPath: "/var/lib/heketi"
            - name: config
              mountPath: /etc/heketi
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
      volumes:
        - name: db
          glusterfs:
            endpoints: heketi-storage-endpoints
            path: heketidbstorage
        - name: config
          configMap:
            name: heketi-config
---
kind: Service
apiVersion: v1
metadata:
  name: heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-service
    heketi: service
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    glusterfs: heketi-pod
  ports:
    - name: heketi
      port: 8080
      targetPort: 8080
  • The only difference from the previous heketi manifest is that the db volume is now backed by the heketidbstorage GlusterFS volume
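Assuming the manifest is saved as heketi-deployment.yml (file name is an assumption), deploy it and confirm heketi can still see the cluster created during bootstrap; the pod name below is a placeholder:

kubectl apply -f heketi-deployment.yml
kubectl get pods -n ns-support -l heketi=pod
kubectl exec -n ns-support <heketi-pod> -- \
  heketi-cli cluster list --user admin --secret "awesomePassword"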

4. Testing

  • Create a StorageClass
[root@master-0 glusterFS]# kubectl get svc -n ns-support 
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.100.35.36     <none>        8080/TCP   10m
heketi-storage-endpoints   ClusterIP   10.100.144.197   <none>        1/TCP      116m

First, find the ClusterIP of the heketi Service.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-volume-sc
provisioner: kubernetes.io/glusterfs
parameters:
  # resturl: "http://heketi.ns-support.cluster.local:8080"
  resturl: "http://10.100.35.36:8080"
  restuser: "admin"
  restuserkey: "awesomePassword"
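Assuming the manifest above is saved as gluster-sc.yml (file name is an assumption), create and check the StorageClass:

kubectl apply -f gluster-sc.yml
kubectl get sc gluster-volume-sc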

Notes:

  • Kubernetes currently cannot resolve an FQDN set directly in resturl; see https://github.com/gluster/gluster-kubernetes/issues/425 and https://github.com/kubernetes/kubernetes/issues/42306
  • restuserkey can instead be supplied via a Secret, as sketched below
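A sketch of that Secret-based variant: the glusterfs provisioner expects a Secret of type kubernetes.io/glusterfs whose key field holds the password, referenced from the StorageClass via secretNamespace/secretName (the Secret name here is hypothetical):

kubectl create secret generic heketi-admin-secret -n ns-support \
  --type="kubernetes.io/glusterfs" --from-literal=key=awesomePassword
# then, in the StorageClass parameters, replace restuserkey with:
#   restauthenabled: "true"
#   secretNamespace: "ns-support"
#   secretName: "heketi-admin-secret"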
  • Create a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gluster-volume-sc
[root@master-0 glusterFS]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
gluster1   Bound    pvc-e07bcccc-f2ce-11e8-a5fe-0050568e2411   5Gi        RWO            gluster-volume-sc   8s
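To confirm the volume is actually usable, a throwaway pod can mount the PVC and write to it. This is only a sketch; the pod name is hypothetical and the busybox image is an assumption (pull it through your own registry if the nodes have no internet access):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gluster1
EOF
kubectl exec gluster-test-pod -- cat /data/test.txt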

[root@master-0 glusterFS]# kubectl exec -i -n ns-support heketi-9749d4567-jslhc -- heketi-cli topology info --user admin --secret "awesomePassword"

Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91

    File:  true
    Block: true

    Volumes:

	Name: vol_43ac12bd8808606a93910d7e37e035b8
	Size: 5
	Id: 43ac12bd8808606a93910d7e37e035b8
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Mount: 192.168.112.223:vol_43ac12bd8808606a93910d7e37e035b8
	Mount Options: backup-volfile-servers=192.168.112.224,192.168.112.225
	Durability Type: replicate
	Replica: 3
	Snapshot: Enabled
	Snapshot Factor: 1.00

		Bricks:
			Id: 747347aaa7e51219bc82253eb72a8b25
			Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_747347aaa7e51219bc82253eb72a8b25/brick
			Size (GiB): 5
			Node: 28112dc6ea364a3097bb676fefa8975a
			Device: cf0f189dcacdb908ba170bdf966af902

			Id: 8cb8c19431cb1eb993792c52cd8e2cf8
			Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8cb8c19431cb1eb993792c52cd8e2cf8/brick
			Size (GiB): 5
			Node: 51d882510ee467c0f5c963bb252768d4
			Device: 4bf3738b87fda5335dbc9d65e0785d8b

			Id: ef9eae79083692ec8b20c3fe2651e348
			Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_ef9eae79083692ec8b20c3fe2651e348/brick
			Size (GiB): 5
			Node: 1d23857752fec50956ef8b50c4bb2c6a
			Device: dc3ab1fc864771568a7243362a08250a


	Name: heketidbstorage
	Size: 2
	Id: c43da9673bcb1bec2331c05b7d8dfa28
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Mount: 192.168.112.223:heketidbstorage
	Mount Options: backup-volfile-servers=192.168.112.224,192.168.112.225
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 6c06acd1eacf3522dea90e38f450d0a0
			Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_6c06acd1eacf3522dea90e38f450d0a0/brick
			Size (GiB): 2
			Node: 28112dc6ea364a3097bb676fefa8975a
			Device: cf0f189dcacdb908ba170bdf966af902

			Id: 8eb9ff3bb8ce7c50fb76d00d6ed93f17
			Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8eb9ff3bb8ce7c50fb76d00d6ed93f17/brick
			Size (GiB): 2
			Node: 51d882510ee467c0f5c963bb252768d4
			Device: 4bf3738b87fda5335dbc9d65e0785d8b

			Id: f62ca07100b73a6e7d96cc87c7ddb4db
			Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_f62ca07100b73a6e7d96cc87c7ddb4db/brick
			Size (GiB): 2
			Node: 1d23857752fec50956ef8b50c4bb2c6a
			Device: dc3ab1fc864771568a7243362a08250a


    Nodes:

	Node Id: 1d23857752fec50956ef8b50c4bb2c6a
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: master-2
	Storage Hostnames: 192.168.112.223
	Devices:
		Id:dc3ab1fc864771568a7243362a08250a   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:ef9eae79083692ec8b20c3fe2651e348   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_ef9eae79083692ec8b20c3fe2651e348/brick
				Id:f62ca07100b73a6e7d96cc87c7ddb4db   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_f62ca07100b73a6e7d96cc87c7ddb4db/brick

	Node Id: 28112dc6ea364a3097bb676fefa8975a
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: worker-0
	Storage Hostnames: 192.168.112.224
	Devices:
		Id:cf0f189dcacdb908ba170bdf966af902   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:6c06acd1eacf3522dea90e38f450d0a0   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_6c06acd1eacf3522dea90e38f450d0a0/brick
				Id:747347aaa7e51219bc82253eb72a8b25   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_747347aaa7e51219bc82253eb72a8b25/brick

	Node Id: 51d882510ee467c0f5c963bb252768d4
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: worker-1
	Storage Hostnames: 192.168.112.225
	Devices:
		Id:4bf3738b87fda5335dbc9d65e0785d8b   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:8cb8c19431cb1eb993792c52cd8e2cf8   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8cb8c19431cb1eb993792c52cd8e2cf8/brick
				Id:8eb9ff3bb8ce7c50fb76d00d6ed93f17   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8eb9ff3bb8ce7c50fb76d00d6ed93f17/brick
