Using GlusterFS as Backend Storage for Kubernetes



Full-Stack Engineer Development Handbook (Author: Luan Peng)
Architecture series articles

Background on GlusterFS

https://blog.51cto.com/linuxg/1929335

Deploying GlusterFS on Hosts

Format the disk and mount it as the data disk

mkfs.xfs -i size=2048 /dev/sdc
mkdir -p /data/brick1
mount /dev/sdc /data/brick1
echo '/dev/sdc /data/brick1 xfs defaults 1 2' >> /etc/fstab 
Tip: you can also use a plain directory as a brick and skip the steps above.

**Install GlusterFS on every node**

sudo add-apt-repository ppa:gluster/glusterfs-3.10
sudo apt-get update
sudo apt-get install glusterfs-server attr
sudo iptables -I INPUT -p all -s <ip-of-other-node> -j ACCEPT //allow the IP addresses of the other cluster nodes
sudo service glusterd start 
sudo service glusterd status

Configure the cluster

gluster peer probe node1 //run on node2; an IP address also works
gluster peer probe node2 //run on node1

Create and start a GlusterFS volume

gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0
gluster volume start gv0

Some useful commands

gluster peer status
gluster volume info

Install the GlusterFS client

apt-get install glusterfs-client //glusterfs-client is already installed when glusterfs-server is installed on a server node
mount -t glusterfs node1:/gv0 /mnt
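
To sanity-check the mount (a quick test adapted from the GlusterFS quick-start guide; /mnt is the mount point used above):

for i in `seq -w 1 10`; do cp -rp /var/log/dmesg /mnt/copy-test-$i; done
ls -lA /mnt | wc -l    # should report 10 files

With a replica-2 volume, the same files should also appear in each node's brick directory (/data/brick1/gv0).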

If a new machine is added, run gluster peer probe against it from an existing node so it joins the cluster, then run the command below so that a brick on the new machine is added to the existing volume and can serve the existing data.

gluster volume add-brick gv-code  replica 3  cluster-aicloud-master-3:/gfs/gv-code  force

Verification

After the installation completes, check the version:

#glusterfs -V

Start glusterd:

#service glusterd start

Enable start on boot:

sudo apt-get install sysv-rc-conf
sudo sysv-rc-conf glusterd on

Check which port glusterd listens on; you will need the port number later when creating the Endpoints object.

sudo netstat -tunlp|grep glusterd

The GlusterFS configuration file is /etc/glusterfs/glusterd.vol; its working-directory is /var/lib/glusterfsd.

Related commands

# Delete a volume
gluster volume stop gfs01
gluster volume delete gfs01
# Remove a machine from the cluster
gluster peer detach 192.168.1.100
# Only allow the 172.28.0.0 network to access glusterfs
gluster volume set gfs01 auth.allow 172.28.26.*
gluster volume set gfs01 auth.allow 192.168.222.1,192.168.*.*
# Join new machines and add their bricks to the volume (with the replica count set to 2, machines must be added in multiples of 2: 2, 4, 6, 8...)
gluster peer probe 192.168.222.134
gluster peer probe 192.168.222.135
# Add bricks
gluster volume add-brick gfs01 replica 2 192.168.222.134:/data/gluster 192.168.222.135:/data/gluster force
# Remove bricks
gluster volume remove-brick gfs01 replica 2 192.168.222.134:/opt/gfs 192.168.222.135:/opt/gfs start
gluster volume remove-brick gfs01 replica 2 192.168.222.134:/opt/gfs 192.168.222.135:/opt/gfs status
gluster volume remove-brick gfs01 replica 2 192.168.222.134:/opt/gfs 192.168.222.135:/opt/gfs commit
# Note: when expanding or shrinking a volume, the number of bricks added or removed must satisfy the requirements of the volume type.
# After a volume has been expanded or shrunk, rebalance its data.
gluster volume rebalance mamm-volume start|stop|status
###########################################################
# Migrate a brick: performs online data migration between bricks
# Start the migration
gluster volume replace-brick gfs01 192.168.222.134:/opt/gfs 192.168.222.134:/opt/test start force
# Check the migration status
gluster volume replace-brick gfs01 192.168.222.134:/opt/gfs 192.168.222.134:/opt/test status
# Commit once the migration has finished
gluster volume replace-brick gfs01 192.168.222.134:/opt/gfs 192.168.222.134:/opt/test commit
# If the machine has failed, force the commit
gluster volume replace-brick gfs01 192.168.222.134:/opt/gfs 192.168.222.134:/opt/test commit force
###########################################################
# Trigger replica self-heal
gluster volume heal mamm-volume        # heal only files that need it
gluster volume heal mamm-volume full   # heal all files
gluster volume heal mamm-volume info   # show self-heal details
#####################################################
# data-self-heal, metadata-self-heal and entry-self-heal
# Enable or disable self-healing of file contents, file metadata and directory entries; all three are "on" by default.
# Example of setting one of them to off:
gluster volume set gfs01 entry-self-heal off

Create Endpoints in Kubernetes

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.12.97
  - ip: 192.168.12.96
  ports:
  - port: 1990
    protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1990
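
Assuming the manifest above is saved as glusterfs-endpoints.yaml (the filename is illustrative), create the objects and check that the addresses are listed:

kubectl apply -f glusterfs-endpoints.yaml
kubectl get endpoints glusterfs-cluster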

Create the PV, PVC and Pod

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cloudai-code-pv
  namespace: cloudai-2
  labels:
    alicloud-pvname: cloudai-code-pv
spec:     # PV attributes
  capacity:         # capacity
    storage: 1Gi   # storage capacity
  accessModes:    # access modes
    - ReadWriteMany  
  glusterfs:
    endpoints: 'glusterfs-cluster'
    path: 'gv-code'   
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle  
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cloudai-code-pvc
  namespace: cloudai-2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      alicloud-pvname: cloudai-code-pv
---
apiVersion: v1
kind: Pod                     # a bare Pod keeps a fixed name, but note that it cannot set a replica count
metadata:
  namespace: cloudai-2
  name: code
  labels:
    app: code
spec:
  volumes: 
  - name: code-path  
    persistentVolumeClaim:
      claimName: cloudai-code-pvc            # reference the PVC here so the gluster volume is mounted into the container
  imagePullSecrets:
  - name: hubsecret              # image pull secret
  containers:
    - name: code
      image: luanpeng/lp:python-base-1.0.0
      command: ['sleep','30000']
      volumeMounts:
      - name: code-path
        mountPath: /app
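
Assuming the three manifests above are saved in one file (the filename below is illustrative), apply them and confirm that the claim binds and the volume is mounted in the pod:

kubectl apply -f cloudai-code-storage.yaml
kubectl get pv cloudai-code-pv
kubectl get pvc cloudai-code-pvc -n cloudai-2
kubectl exec -n cloudai-2 code -- df -h /app    # should show the gv0 gluster volume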

Containerized GlusterFS Deployment

This section briefly walks through deploying containerized GlusterFS plus heketi as backend storage for Kubernetes; GlusterFS itself is not introduced again here. The deployment mainly follows the gluster-kubernetes project.

1. Environment

[root@master-0 ~]# kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP       OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master-0   Ready    master   40d   v1.12.1   192.168.112.221   192.168.112.221   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
master-1   Ready    master   40d   v1.12.1   192.168.112.222   192.168.112.222   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
master-2   Ready    master   40d   v1.12.1   192.168.112.223   192.168.112.223   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
worker-0   Ready    worker   40d   v1.12.1   192.168.112.224   192.168.112.224   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1
worker-1   Ready    worker   40d   v1.12.1   192.168.112.225   192.168.112.225   CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1

During the experiment, master-2, worker-0 and worker-1 will serve as the three GlusterFS nodes.

2. Overview

GlusterFS is open-source distributed storage software. Kubernetes ships with a built-in provisioner for it, and it supports the ReadWriteOnce, ReadOnlyMany and ReadWriteMany access modes. heketi provides a RESTful management interface for GlusterFS, so Kubernetes can manage the lifecycle of GlusterFS volumes through heketi.
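
For example, once heketi is running (it is deployed later in this article), its REST API can be reached with a plain HTTP call (the address is illustrative):

curl http://<heketi-host>:8080/hello    # a simple response confirms the API is reachable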

To use or mount glusterfs properly in the Kubernetes cluster, every node that will mount it must have glusterfs-fuse installed:

# Install on CentOS
[root@master-0 ~]# yum install glusterfs glusterfs-fuse -y
# Install on Ubuntu
sudo apt-get install -y glusterfs-server
sudo service glusterfs-server start
sudo service glusterfs-server status

In this deployment my lab environment had no spare blank disk to use as GlusterFS storage, so a loop device is used to simulate one. Note that a loop device may be detached after the operating system reboots, which would leave GlusterFS unable to work properly.

3. Deployment

3.1 Create a loop device to simulate a disk

Run the following commands on the master-2, worker-0 and worker-1 nodes to create the loop device on each:

[root@master-2 ~]# mkdir -p /home/glusterfs
[root@master-2 ~]# cd /home/glusterfs/
### create a 30 GB file ###
[root@master-2 glusterfs]# dd if=/dev/zero of=gluster.disk bs=1024 count=$(( 1024 * 1024 * 30 ))
31457280+0 records in
31457280+0 records out
32212254720 bytes (32 GB) copied, 140.933 s, 229 MB/s
### attach the file as a loop device ###
[root@master-2 glusterfs]# sudo losetup -f gluster.disk
[root@master-2 glusterfs]# sudo losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /home/glusterfs/gluster.disk
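
Because the loop device does not survive a reboot (as noted above), one possible workaround is to re-attach it from a boot script before glusterd or the GlusterFS pods start, for example:

losetup /dev/loop0 /home/glusterfs/gluster.disk    # e.g. in /etc/rc.local or a systemd unit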

Other related commands:

  • Detach: losetup -d /dev/loop0
  • Detach all: losetup -D
  • List disks: fdisk -l
  • Show disk usage: df -l
  • List logical volumes: lvs or lvdisplay
  • List volume groups: vgs or vgdisplay
  • Remove a volume group: vgremove

When losetup -d or losetup -D cannot remove the loop device, check the disk and the volume group; the loop device can usually be removed once the VG has been removed. If the VG cannot be removed, use dmsetup status to inspect the Device Mapper entries, then dmsetup remove xxxxxx to delete the entries related to the loop device, as sketched below.
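
A cleanup sketch for that case (the volume-group and device-mapper names are examples; check your own with vgs and dmsetup status):

vgs                         # find the volume group created on the loop device
vgremove <vg-name>          # removing it destroys the data on it
dmsetup status              # list the remaining Device Mapper entries
dmsetup remove <dm-name>    # remove leftovers related to the loop device
losetup -d /dev/loop0       # detaching should now succeed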

Label the master-2, worker-0 and worker-1 nodes so that the GlusterFS pods will be scheduled onto them:

[root@master-0 glusterFS]# kubectl label node master-2 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl label node worker-0 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl label node worker-1 storagenode=glusterfs
[root@master-0 glusterFS]# kubectl get nodes -L storagenode
NAME       STATUS   ROLES    AGE   VERSION   STORAGENODE
master-0   Ready    master   40d   v1.12.1   
master-1   Ready    master   40d   v1.12.1   
master-2   Ready    master   40d   v1.12.1   glusterfs
worker-0   Ready    worker   40d   v1.12.1   glusterfs
worker-1   Ready    worker   40d   v1.12.1   glusterfs

3.2 Deploy GlusterFS

#  glusterfs-daemonset.yml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  namespace: ns-support
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: 192.168.101.88:5000/gluster/gluster-centos:gluster4u0_centos7
          imagePullPolicy: IfNotPresent
          name: glusterfs
          env:
            # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
            # readiness/liveness probe validate gluster-blockd as well
            - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
              value: "1"
            - name: GB_GLFS_LRU_COUNT
              value: "15"
            - name: TCMU_LOGDIR
              value: "/var/log/glusterfs/gluster-block"
          resources:
            requests:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: "/var/lib/heketi"
            - name: glusterfs-run
              mountPath: "/run"
            - name: glusterfs-lvm
              mountPath: "/run/lvm"
            - name: glusterfs-etc
              mountPath: "/etc/glusterfs"
            - name: glusterfs-logs
              mountPath: "/var/log/glusterfs"
            - name: glusterfs-config
              mountPath: "/var/lib/glusterd"
            - name: glusterfs-dev
              mountPath: "/dev"
            - name: glusterfs-misc
              mountPath: "/var/lib/misc/glusterfsd"
            - name: glusterfs-cgroup
              mountPath: "/sys/fs/cgroup"
              readOnly: true
            - name: glusterfs-ssl
              mountPath: "/etc/ssl"
              readOnly: true
            - name: kernel-modules
              mountPath: "/usr/lib/modules"
              readOnly: true
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 40
            exec:
              command:
                - "/bin/bash"
                - "-c"
                - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 50
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 40
            exec:
              command:
                - "/bin/bash"
                - "-c"
                - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 50
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: "/var/lib/heketi"
        - name: glusterfs-run
        - name: glusterfs-lvm
          hostPath:
            path: "/run/lvm"
        - name: glusterfs-etc
          hostPath:
            path: "/etc/glusterfs"
        - name: glusterfs-logs
          hostPath:
            path: "/var/log/glusterfs"
        - name: glusterfs-config
          hostPath:
            path: "/var/lib/glusterd"
        - name: glusterfs-dev
          hostPath:
            path: "/dev"
        - name: glusterfs-misc
          hostPath:
            path: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          hostPath:
            path: "/sys/fs/cgroup"
        - name: glusterfs-ssl
          hostPath:
            path: "/etc/ssl"
        - name: kernel-modules
          hostPath:
            path: "/usr/lib/modules"
      tolerations:
        - effect: NoSchedule
          operator: Exists

  • GlusterFS depends on local device files, so the deployment sets hostNetwork: true and privileged: true
  • It is deployed as a DaemonSet onto the designated nodes
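
Assuming the DaemonSet above is saved as glusterfs-daemonset.yml (the name in its header comment), deploy it and wait for the GlusterFS pods to become Ready:

kubectl apply -f glusterfs-daemonset.yml
kubectl get pods -n ns-support -l glusterfs=pod -o wide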

3.3 Deploy Heketi

  • Create the RBAC permissions for heketi
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: ns-support
  labels:
    glusterfs: heketi-sa
    heketi: sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heketi-sa-view
  labels:
    glusterfs: heketi-sa-view
    heketi: sa-view
subjects:
  - kind: ServiceAccount
    name: heketi-service-account
    namespace: ns-support
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io

  • Create the heketi configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: heketi-config
  namespace: ns-support
data:
  heketi.json: |-
    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",
      "_use_auth": "Enable JWT authorization. Please enabled for deployment",
      "use_auth": false,
      "_jwt": "Private keys for access",
      "jwt": {
        "_admin": "Admin has access to all APIs",
        "admin": {
          "key": "awesomePassword"
        },
        "_user": "User only has access to /volumes endpoint",
        "user": {
          "key": "awesomePassword"
        }
      },
      "_glusterfs_comment": "GlusterFS Configuration",
      "glusterfs": {
        "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
        "executor": "kubernetes",
        "_db_comment": "Database file name",
        "db" : "/var/lib/heketi/heketi.db",
        "kubeexec": {
          "rebalance_on_expansion": true
        },
        "sshexec": {
          "rebalance_on_expansion": true,
          "keyfile": "/etc/heketi/private_key",
          "port": "22",
          "user": "root",
          "sudo": false
        }
      },
      "backup_db_to_kube_secret": false
    }
  topology.json: |-
    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "master-2"
                  ],
                  "storage": [
                    "192.168.112.223"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker-0"
                  ],
                  "storage": [
                    "192.168.112.224"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker-1"
                  ],
                  "storage": [
                    "192.168.112.225"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/loop0"
              ]
            }
          ]
        }
      ]
    }
  private_key: ""

  • The three configuration files contain sensitive information and could also be created as a Secret; for simplicity a ConfigMap is used here
  • Note the devices entry for each node: the loop device may differ from node to node, so adjust it to the actual environment
  • Create the bootstrap (non-persistent) heketi application
kind: Deployment
apiVersion: apps/v1
metadata:
  name: deploy-heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: heketi-pod
      deploy-heketi: pod
  template:
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-pod
        deploy-heketi: pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
        - image: 192.168.101.88:5000/heketi/heketi:dev
          imagePullPolicy: IfNotPresent
          name: deploy-heketi
          env:
            - name: HEKETI_USER_KEY
              value: "awesomePassword"
            - name: HEKETI_ADMIN_KEY
              value: "awesomePassword"
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_FSTAB
              value: "/var/lib/heketi/fstab"
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: "y"
            - name: HEKETI_IGNORE_STALE_OPERATIONS
              value: "true"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: db
              mountPath: /var/lib/heketi
            - name: config
              mountPath: /etc/heketi
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
      volumes:
        - name: db
        - name: config
          configMap:
            name: heketi-config
---
kind: Service
apiVersion: v1
metadata:
  name: deploy-heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-service
    deploy-heketi: service
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    deploy-heketi: pod
  ports:
    - name: deploy-heketi
      port: 8080
      targetPort: 8080

  • The heketi deployed here exists only to initialize GlusterFS and create the persistent storage volume for the heketi database; it is discarded once that is done
  • Use the non-persistent heketi application to initialize GlusterFS and create the persistent heketi database, as shown below
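
Assuming the RBAC, ConfigMap and deploy-heketi manifests above are saved to files (the filenames are illustrative), apply them first:

kubectl apply -f heketi-rbac.yaml -f heketi-configmap.yaml -f deploy-heketi.yaml
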
[root@master-0 glusterFS]# kubectl get pods -n ns-support -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE
deploy-heketi-59d569ddbc-5d2jq   1/1     Running   0          15s     10.233.68.109     worker-0   
glusterfs-cpj22                  1/1     Running   0          3m53s   192.168.112.225   worker-1   
glusterfs-h6slp                  1/1     Running   0          3m53s   192.168.112.224   worker-0   
glusterfs-pnwfl                  1/1     Running   0          3m53s   192.168.112.223   master-2   
### run heketi-cli inside the container to load the GlusterFS topology ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- heketi-cli topology load --user admin --secret "awesomePassword" --json=/etc/heketi/topology.json
Creating cluster ... ID: cb7860a4714bd4a0e4a3a69471e34b91
	Allowing file volumes on cluster.
	Allowing block volumes on cluster.
	Creating node master-2 ... ID: 1d23857752fec50956ef8b50c4bb2c6a
		Adding device /dev/loop0 ... OK
	Creating node worker-0 ... ID: 28112dc6ea364a3097bb676fefa8975a
		Adding device /dev/loop0 ... OK
	Creating node worker-1 ... ID: 51d882510ee467c0f5c963bb252768d4
		Adding device /dev/loop0 ... OK
### run heketi-cli inside the container to generate the Kubernetes object manifests ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- heketi-cli setup-kubernetes-heketi-storage --user admin --secret "awesomePassword" --image 192.168.101.88:5000/heketi/heketi:dev --listfile=/tmp/heketi-storage.json
Saving /tmp/heketi-storage.json
### apply the generated manifests in the target namespace ###
[root@master-0 glusterFS]# kubectl exec -i -n ns-support deploy-heketi-59d569ddbc-5d2jq -- cat /tmp/heketi-storage.json | kubectl apply -n ns-support -f -
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
### check whether the job created by the manifests has completed ###
[root@master-0 glusterFS]# kubectl get pods -n ns-support 
NAME                             READY   STATUS      RESTARTS   AGE
deploy-heketi-59d569ddbc-5d2jq   1/1     Running     0          9m34s
glusterfs-cpj22                  1/1     Running     0          13m
glusterfs-h6slp                  1/1     Running     0          13m
glusterfs-pnwfl                  1/1     Running     0          13m
heketi-storage-copy-job-96cvb    0/1     Completed   0          21s

  • heketi-cli topology load forms the GlusterFS nodes into a cluster
  • heketi-cli setup-kubernetes-heketi-storage generates the YAML manifests needed to deploy heketi on Kubernetes, mainly the persistent storage for heketi; the command goes by several names: setup-openshift-heketi-storage, setup-heketi-db-storage and setup-kubernetes-heketi-storage, all of which do the same thing
  • If heketi-storage-copy-job-96cvb stays in ContainerCreating or Pending, use describe to inspect the pod details; an error such as unknown filesystem type 'glusterfs' means you should check whether glusterfs-fuse is installed on the pod's node
  • Delete the non-persistent heketi application and the other temporary objects
[root@master-0 glusterFS]# kubectl delete all,service,jobs,deployment,secret -l "deploy-heketi" -n ns-support
pod "deploy-heketi-59d569ddbc-5d2jq" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted

  • Deploy the persistent heketi application
kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-deployment
    heketi: deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      glusterfs: heketi-pod
      heketi: pod
  template:
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-pod
        heketi: pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
        - image: 192.168.101.88:5000/heketi/heketi:dev
          imagePullPolicy: IfNotPresent
          name: heketi
          env:
            - name: HEKETI_USER_KEY
              value: "awesomePassword"
            - name: HEKETI_ADMIN_KEY
              value: "awesomePassword"
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_FSTAB
              value: "/var/lib/heketi/fstab"
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: "y"
            - name: HEKETI_IGNORE_STALE_OPERATIONS
              value: "true"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: db
              mountPath: "/var/lib/heketi"
            - name: config
              mountPath: /etc/heketi
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
      volumes:
        - name: db
          glusterfs:
            endpoints: heketi-storage-endpoints
            path: heketidbstorage
        - name: config
          configMap:
            name: heketi-config
---
kind: Service
apiVersion: v1
metadata:
  name: heketi
  namespace: ns-support
  labels:
    glusterfs: heketi-service
    heketi: service
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    glusterfs: heketi-pod
  ports:
    - name: heketi
      port: 8080
      targetPort: 8080

  • The only difference from the previous heketi deployment manifest is the db volume, which is now backed by the heketidbstorage GlusterFS volume
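
To confirm that the persistent heketi instance picked up the database created during bootstrap, the cluster created earlier should still be listed (the pod name is illustrative):

kubectl exec -i -n ns-support <heketi-pod> -- heketi-cli cluster list --user admin --secret "awesomePassword"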

4. Testing

  • Create the StorageClass
[root@master-0 glusterFS]# kubectl get svc -n ns-support 
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.100.35.36     <none>        8080/TCP   10m
heketi-storage-endpoints   ClusterIP   10.100.144.197   <none>        1/TCP      116m

First find the ClusterIP of the heketi service.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-volume-sc
provisioner: kubernetes.io/glusterfs
parameters:
  # resturl: "http://heketi.ns-support.cluster.local:8080"
  resturl: "http://10.100.35.36:8080"
  restuser: "admin"
  restuserkey: "awesomePassword"

Note:

  • Currently Kubernetes cannot resolve an in-cluster FQDN set directly in resturl; see https://github.com/gluster/gluster-kubernetes/issues/425 and https://github.com/kubernetes/kubernetes/issues/42306
  • restuserkey can also be supplied via a Secret
  • Create a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gluster-volume-sc
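
Assuming the StorageClass and PVC manifests above are saved to files (the filenames are illustrative), apply them and watch the claim bind:

kubectl apply -f gluster-volume-sc.yaml -f gluster1-pvc.yaml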

[root@master-0 glusterFS]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
gluster1   Bound    pvc-e07bcccc-f2ce-11e8-a5fe-0050568e2411   5Gi        RWO            gluster-volume-sc   8s
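
A pod can then mount the dynamically provisioned volume through the claim (a minimal sketch; the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-test
spec:
  containers:
    - name: app
      image: busybox
      command: ['sleep', '3600']
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gluster1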

[root@master-0 glusterFS]# kubectl exec -i -n ns-support heketi-9749d4567-jslhc -- heketi-cli topology info --user admin --secret "awesomePassword"

Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91

    File:  true
    Block: true

    Volumes:

	Name: vol_43ac12bd8808606a93910d7e37e035b8
	Size: 5
	Id: 43ac12bd8808606a93910d7e37e035b8
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Mount: 192.168.112.223:vol_43ac12bd8808606a93910d7e37e035b8
	Mount Options: backup-volfile-servers=192.168.112.224,192.168.112.225
	Durability Type: replicate
	Replica: 3
	Snapshot: Enabled
	Snapshot Factor: 1.00

		Bricks:
			Id: 747347aaa7e51219bc82253eb72a8b25
			Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_747347aaa7e51219bc82253eb72a8b25/brick
			Size (GiB): 5
			Node: 28112dc6ea364a3097bb676fefa8975a
			Device: cf0f189dcacdb908ba170bdf966af902

			Id: 8cb8c19431cb1eb993792c52cd8e2cf8
			Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8cb8c19431cb1eb993792c52cd8e2cf8/brick
			Size (GiB): 5
			Node: 51d882510ee467c0f5c963bb252768d4
			Device: 4bf3738b87fda5335dbc9d65e0785d8b

			Id: ef9eae79083692ec8b20c3fe2651e348
			Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_ef9eae79083692ec8b20c3fe2651e348/brick
			Size (GiB): 5
			Node: 1d23857752fec50956ef8b50c4bb2c6a
			Device: dc3ab1fc864771568a7243362a08250a


	Name: heketidbstorage
	Size: 2
	Id: c43da9673bcb1bec2331c05b7d8dfa28
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Mount: 192.168.112.223:heketidbstorage
	Mount Options: backup-volfile-servers=192.168.112.224,192.168.112.225
	Durability Type: replicate
	Replica: 3
	Snapshot: Disabled

		Bricks:
			Id: 6c06acd1eacf3522dea90e38f450d0a0
			Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_6c06acd1eacf3522dea90e38f450d0a0/brick
			Size (GiB): 2
			Node: 28112dc6ea364a3097bb676fefa8975a
			Device: cf0f189dcacdb908ba170bdf966af902

			Id: 8eb9ff3bb8ce7c50fb76d00d6ed93f17
			Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8eb9ff3bb8ce7c50fb76d00d6ed93f17/brick
			Size (GiB): 2
			Node: 51d882510ee467c0f5c963bb252768d4
			Device: 4bf3738b87fda5335dbc9d65e0785d8b

			Id: f62ca07100b73a6e7d96cc87c7ddb4db
			Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_f62ca07100b73a6e7d96cc87c7ddb4db/brick
			Size (GiB): 2
			Node: 1d23857752fec50956ef8b50c4bb2c6a
			Device: dc3ab1fc864771568a7243362a08250a


    Nodes:

	Node Id: 1d23857752fec50956ef8b50c4bb2c6a
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: master-2
	Storage Hostnames: 192.168.112.223
	Devices:
		Id:dc3ab1fc864771568a7243362a08250a   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:ef9eae79083692ec8b20c3fe2651e348   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_ef9eae79083692ec8b20c3fe2651e348/brick
				Id:f62ca07100b73a6e7d96cc87c7ddb4db   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_dc3ab1fc864771568a7243362a08250a/brick_f62ca07100b73a6e7d96cc87c7ddb4db/brick

	Node Id: 28112dc6ea364a3097bb676fefa8975a
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: worker-0
	Storage Hostnames: 192.168.112.224
	Devices:
		Id:cf0f189dcacdb908ba170bdf966af902   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:6c06acd1eacf3522dea90e38f450d0a0   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_6c06acd1eacf3522dea90e38f450d0a0/brick
				Id:747347aaa7e51219bc82253eb72a8b25   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_cf0f189dcacdb908ba170bdf966af902/brick_747347aaa7e51219bc82253eb72a8b25/brick

	Node Id: 51d882510ee467c0f5c963bb252768d4
	State: online
	Cluster Id: cb7860a4714bd4a0e4a3a69471e34b91
	Zone: 1
	Management Hostnames: worker-1
	Storage Hostnames: 192.168.112.225
	Devices:
		Id:4bf3738b87fda5335dbc9d65e0785d8b   Name:/dev/loop0          State:online    Size (GiB):29      Used (GiB):7       Free (GiB):22      
			Bricks:
				Id:8cb8c19431cb1eb993792c52cd8e2cf8   Size (GiB):5       Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8cb8c19431cb1eb993792c52cd8e2cf8/brick
				Id:8eb9ff3bb8ce7c50fb76d00d6ed93f17   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_4bf3738b87fda5335dbc9d65e0785d8b/brick_8eb9ff3bb8ce7c50fb76d00d6ed93f17/brick
