Integrating GlusterFS and Heketi into a k8s Deployment on Kylin Advanced Server OS V10 (银河麒麟高级服务器操作系统V10)

Introduction

This article describes deploying GlusterFS and Heketi on a single-node k8s cluster already installed on Kylin Advanced Server OS V10.

The deployment scripts used in this article come from https://github.com/hknarutofk/gluster-kubernetes, an arm64 port of the official Gluster project https://github.com/gluster/gluster-kubernetes.

 

Prerequisites

A single-node k8s cluster installed on Kylin Advanced Server OS V10: https://blog.csdn.net/m0_46573967/article/details/112935319

 

1. Prepare a dedicated blank disk

On Great Wall Cloud (长城云), attach an independent cloud disk to the virtual machine running Kylin V10. Other virtualization platforms are similar; on a physical machine, simply plug a disk into a free interface.

[Figure 1: attaching an independent cloud disk in the cloud console]

The newly attached cloud disk is recognized by the kernel automatically.

[Figure 2: the new disk as seen by the kernel]
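To confirm from a shell, a quick check with lsblk (the device appears as /dev/sda in this environment, matching the topology file later in this article; on other systems it may show up as /dev/vdb or similar):

lsblk
# heketi requires a raw, unformatted device; if the disk carries leftover
# signatures from earlier use, wipe them first (destructive!):
# wipefs -a /dev/sda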

2. Download the gluster-kubernetes scripts

Switch to the root user and clone the repository:

[yeqiang@192-168-110-185 Desktop]$ sudo su
[root@192-168-110-185 Desktop]# cd ~
[root@192-168-110-185 ~]# git clone --depth=1 https://github.com/hknarutofk/gluster-kubernetes.git
Cloning into 'gluster-kubernetes'...
remote: Enumerating objects: 157, done.
remote: Counting objects: 100% (157/157), done.
remote: Compressing objects: 100% (132/132), done.
remote: Total 157 (delta 21), reused 95 (delta 14), pack-reused 0
Receiving objects: 100% (157/157), 659.85 KiB | 7.00 KiB/s, done.
Resolving deltas: 100% (21/21), done.

3. Deploy single-node GlusterFS and heketi services

Since resources are limited, we deploy one GlusterFS instance and one heketi instance directly on the current server node.

Get the node information:

[root@192-168-110-185 ~]# kubectl get nodes --show-labels
NAME              STATUS   ROLES    AGE     VERSION   LABELS
192.168.110.185   Ready    master   3h19m   v1.18.6   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=192.168.110.185,kubernetes.io/os=linux,kubernetes.io/role=master

Note that the node name here is identical to the node's IP address.

Create the topology.json file:

[root@192-168-110-185 ~]# cd gluster-kubernetes/deploy/
[root@192-168-110-185 deploy]# vim topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.110.185"
              ],
              "storage": [
                "192.168.110.185"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sda"
          ]
        }
      ]
    }
  ]
}

Note

The node.hostnames.manage array takes the node name, not the IP address! They coincide here only because our node happens to be named after its IP.

The node.hostnames.storage array takes the IP address of the node on which GlusterFS is to be installed.

The devices array lists the paths of the disks (blank disks) prepared for glusterfs on that node.
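gk-deploy gives unhelpful errors when this file is malformed, so it is worth validating the JSON syntax up front. A minimal check, assuming python3 is available (it is pulled in as a glusterfs dependency on Kylin V10):

python3 -m json.tool topology.json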

Install glusterfs-fuse on the target node server:

[root@192-168-110-185 ~]# yum install glusterfs-fuse -y
Last metadata expiration check: 0:11:06 ago on Fri 22 Jan 2021 14:43:51.
Dependencies resolved.
================================================================================
 Package                 Arch        Version              Repository       Size
================================================================================
Installing:
 glusterfs               aarch64     7.0-4.ky10           ks10-adv-os     3.5 M
Installing dependencies:
 python3-gluster         aarch64     7.0-4.ky10           ks10-adv-os      15 k
 python3-prettytable     noarch      0.7.2-18.ky10        ks10-adv-os      33 k
 rdma-core               aarch64     20.1-6.ky10          ks10-adv-os     494 k

Transaction Summary
================================================================================
Install  4 Packages

Total download size: 4.0 M
Installed size: 23 M
Downloading Packages:
(1/4): python3-gluster-7.0-4.ky10.aarch64.rpm    90 kB/s |  15 kB     00:00    
(2/4): python3-prettytable-0.7.2-18.ky10.noarch 116 kB/s |  33 kB     00:00    
(3/4): rdma-core-20.1-6.ky10.aarch64.rpm        402 kB/s | 494 kB     00:01    
(4/4): glusterfs-7.0-4.ky10.aarch64.rpm         767 kB/s | 3.5 MB     00:04    
--------------------------------------------------------------------------------
Total                                           883 kB/s | 4.0 MB     00:04     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : rdma-core-20.1-6.ky10.aarch64                          1/4 
  Running scriptlet: rdma-core-20.1-6.ky10.aarch64                          1/4 
  Installing       : python3-prettytable-0.7.2-18.ky10.noarch               2/4 
  Installing       : python3-gluster-7.0-4.ky10.aarch64                     3/4 
  Running scriptlet: glusterfs-7.0-4.ky10.aarch64                           4/4 
  Installing       : glusterfs-7.0-4.ky10.aarch64                           4/4 
warning: /etc/glusterfs/glusterd.vol created as /etc/glusterfs/glusterd.vol.rpmnew 
warning: /etc/glusterfs/glusterfs-logrotate created as /etc/glusterfs/glusterfs-logrotate.rpmnew 
warning: /etc/glusterfs/gsyncd.conf created as /etc/glusterfs/gsyncd.conf.rpmnew 

  Running scriptlet: glusterfs-7.0-4.ky10.aarch64                           4/4 
  Verifying        : glusterfs-7.0-4.ky10.aarch64                           1/4 
  Verifying        : python3-gluster-7.0-4.ky10.aarch64                     2/4 
  Verifying        : python3-prettytable-0.7.2-18.ky10.noarch               3/4 
  Verifying        : rdma-core-20.1-6.ky10.aarch64                          4/4 

Installed:
  glusterfs-7.0-4.ky10.aarch64              python3-gluster-7.0-4.ky10.aarch64 
  python3-prettytable-0.7.2-18.ky10.noarch  rdma-core-20.1-6.ky10.aarch64      

Complete!
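The three rpmnew warnings above just mean the package shipped new default config files next to ones already present on disk; for a fresh install they can be ignored. A quick way to confirm the fuse client is usable (a sanity check; on ky10 the mount helper ships with the glusterfs package itself, so the exact path may vary):

glusterfs --version
command -v mount.glusterfs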

 

Remove the taint tolerations from the upstream project

The upstream project deploys GlusterFS on dedicated storage nodes, all of which carry a taint, so the DaemonSet includes a matching toleration. This article installs all components on a single node, so that configuration needs to be removed.

Edit /root/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml

and remove the following block from the end of the file:

      tolerations:
        - key: glusterfs
          operator: Exists
          effect: NoSchedule

Save the file.

The complete file after modification:

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/hknaruto/glusterfs-gluster_centos-arm64:latest
        imagePullPolicy: Always
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/lib/modules"
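Before running the deployment, the edited manifest can be checked for YAML or schema mistakes; kubectl v1.18, as used here, supports client-side dry runs:

kubectl apply --dry-run=client -f /root/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml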

Run the deployment:

[root@192-168-110-185 ~]# cd gluster-kubernetes/deploy/
[root@192-168-110-185 deploy]# sh deploy.sh 
Using Kubernetes CLI.

Checking status of namespace matching 'default':
default   Active   3h34m
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... 
Checking status of pods matching '--selector=glusterfs=pod':

Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
  deploy-heketi pod ... 
Checking status of pods matching '--selector=deploy-heketi=pod':

Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
  heketi pod ... 
Checking status of pods matching '--selector=heketi=pod':

Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
  gluster-s3 pod ... 
Checking status of pods matching '--selector=glusterfs=s3-pod':

Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ... /opt/kube/bin/kubectl -n default create -f /root/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml 2>&1
serviceaccount/heketi-service-account created
/opt/kube/bin/kubectl -n default create clusterrolebinding heketi-sa-view --clusterrole=edit --serviceaccount=default:heketi-service-account 2>&1
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
/opt/kube/bin/kubectl -n default label --overwrite clusterrolebinding heketi-sa-view glusterfs=heketi-sa-view heketi=sa-view
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
Marking '192.168.110.185' as a GlusterFS node.
/opt/kube/bin/kubectl -n default label nodes 192.168.110.185 storagenode=glusterfs --overwrite 2>&1
node/192.168.110.185 not labeled
Deploying GlusterFS pods.
sed -e 's/storagenode\: glusterfs/storagenode\: 'glusterfs'/g' /root/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
daemonset.apps/glusterfs created
Waiting for GlusterFS pods to start ... 
Checking status of pods matching '--selector=glusterfs=pod':
glusterfs-ndxgd   1/1   Running   0     91s
OK
/opt/kube/bin/kubectl -n default create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=topology.json
secret/heketi-config-secret created
/opt/kube/bin/kubectl -n default label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}/admin/' -e 's/\${HEKETI_USER_KEY}/user/' /root/gluster-kubernetes/deploy/kube-templates/deploy-heketi-deployment.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
service/deploy-heketi created
deployment.apps/deploy-heketi created
Waiting for deploy-heketi pod to start ... 
Checking status of pods matching '--selector=deploy-heketi=pod':
deploy-heketi-59d9fdff68-kpr87   1/1   Running   0     12s
OK
Determining heketi service URL ... OK
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' topology load --json=/etc/heketi/topology.json 2>&1
Creating cluster ... ID: 5c8d62cb9f57df17d7a55cc60d5fe5ca
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 192.168.110.185 ... ID: a326a80bfc84137329503683869d044e
Adding device /dev/sda ... OK
heketi topology loaded.
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' setup-openshift-heketi-storage --help --durability=none >/dev/null 2>&1
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- heketi-cli -s http://localhost:8080 --user admin --secret 'admin' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --durability=none 2>&1
Saving /tmp/heketi-storage.json
/opt/kube/bin/kubectl -n default exec -i deploy-heketi-59d9fdff68-kpr87 -- cat /tmp/heketi-storage.json | sed 's/heketi\/heketi:dev/registry.cn-hangzhou.aliyuncs.com\/hknaruto\/heketi-arm64:v10.2.0/g' | /opt/kube/bin/kubectl -n default create -f - 2>&1
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created

Checking status of pods matching '--selector=job-name=heketi-storage-copy-job':
heketi-storage-copy-job-f85d7   0/1   Completed   0     2m24s
/opt/kube/bin/kubectl -n default label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints
service/heketi-storage-endpoints labeled
/opt/kube/bin/kubectl -n default delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1
pod "deploy-heketi-59d9fdff68-kpr87" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59d9fdff68" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}/admin/' -e 's/\${HEKETI_USER_KEY}/user/' /root/gluster-kubernetes/deploy/kube-templates/heketi-deployment.yaml | /opt/kube/bin/kubectl -n default create -f - 2>&1
service/heketi created
deployment.apps/heketi created
Waiting for heketi pod to start ... 
Checking status of pods matching '--selector=heketi=pod':
heketi-bc754bf5d-lm2z2   1/1   Running   0     9s
OK
Determining heketi service URL ... OK

heketi is now running and accessible via http://172.20.0.17:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://172.20.0.17:8080 --user admin --secret '' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /opt/kube/bin/kubectl -n default exec -i heketi-bc754bf5d-lm2z2 -- heketi-cli -s http://localhost:8080 --user admin --secret '' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.20.0.17:8080"
  restuser: "user"
  restuserkey: "user"


Deployment complete!

[root@192-168-110-185 deploy]# 
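Both the glusterfs and heketi pods should now be Running; a quick check (pod name suffixes will differ from run to run):

kubectl get pods -o wide | grep -E 'glusterfs|heketi'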

Create the StorageClass

The StorageClass printed at the end of the deployment log is only a canned example; its settings must be adapted to the environment at hand.

Look up the heketi service address:

[root@192-168-110-185 deploy]# kubectl get svc | grep heketi
heketi                     ClusterIP   10.68.93.101   <none>   8080/TCP   3m
heketi-storage-endpoints   ClusterIP   10.68.28.192   <none>   1/TCP      5m30s

This gives the heketi service address: http://10.68.93.101:8080
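heketi exposes a simple hello endpoint, so the address can be verified directly from the node (it should answer with a short "Hello from Heketi" style message):

curl http://10.68.93.101:8080/hello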

Look up the administrator key

The admin key was supplied as a command-line argument at installation time; it can be found in the deploy.sh script:

[root@192-168-110-185 deploy]# cat deploy.sh 
#!/bin/bash

bash ./gk-deploy -v -g --admin-key=admin --user-key=user --single-node -l /tmp/gk-deploy.log  -y

This gives the admin key: admin
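With this key, administrative commands can be run from inside the heketi pod, for example to inspect the topology that was just loaded (substitute the actual pod name; heketi-bc754bf5d-lm2z2 is taken from the deployment log above):

kubectl exec -i heketi-bc754bf5d-lm2z2 -- heketi-cli -s http://localhost:8080 --user admin --secret admin topology info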

Edit storageclass-singlenode.yaml:

[root@192-168-110-185 deploy]# vim storageclass-singlenode.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
  annotations:
    #  Other default storage already exists on this cluster, so set this to false (false is also the default)
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.68.93.101:8080"
  restuser: "admin"
  restuserkey: "admin"
  volumetype: none

Notes:

1. This StorageClass is deliberately not the default; adjust this to your situation. If it were made the default, every PVC in the k8s cluster would consume storage space on this node.

2. A single-node glusterfs deployment must set volumetype: none; otherwise the default volume type requires three nodes before a PVC can bind successfully (see the parameter examples below).
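For reference, the kubernetes.io/glusterfs provisioner's volumetype parameter also accepts replicated and dispersed layouts on clusters with enough nodes. Common values look like this (sketched for reference only; not needed for the single-node setup in this article):

  volumetype: "replicate:3"   # three-way replicated volumes (needs at least 3 nodes)
  volumetype: "disperse:4:2"  # dispersed volumes, 4 data bricks + 2 redundancy bricks
  volumetype: "none"          # plain distribute volumes, works on a single node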

Deploy the StorageClass:

[root@192-168-110-185 deploy]# kubectl apply -f storageclass-singlenode.yaml 
storageclass.storage.k8s.io/glusterfs-storage created
[root@192-168-110-185 deploy]# kubectl get storageclasses
NAME                PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
glusterfs-storage   kubernetes.io/glusterfs   Delete          Immediate           false                  7s
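To verify end-to-end dynamic provisioning, a throwaway PVC can be created against the new StorageClass. A minimal sketch (the name test-glusterfs-pvc is arbitrary):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-glusterfs-pvc
spec:
  storageClassName: glusterfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-glusterfs-pvc
# once STATUS shows Bound, heketi has provisioned a matching PV;
# clean up afterwards with: kubectl delete pvc test-glusterfs-pvc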

Summary

Once heketi and glusterfs are integrated into k8s, PVs are created and managed automatically in response to PVCs; there is no need to provision them by hand in advance.
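For example, any pod can then consume such a claim directly; a sketch assuming the test-glusterfs-pvc created above still exists (busybox is a multi-arch image, so it also runs on arm64):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-glusterfs-pvc
EOF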

The arm64 images used in this article were built from the following projects:

https://github.com/hknarutofk/gluster-containers/tree/master/CentOS

https://github.com/hknarutofk/heketi-docker-arm64

 
