1. Preparation
Install the GlusterFS client on all nodes:
yum install glusterfs glusterfs-fuse -y
If the GlusterFS management service will not run on every node, label only the nodes that should run it:
[root@k8s-master01 ~]# kubectl label node k8s-node01 storagenode=glusterfs
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node02 storagenode=glusterfs
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master01 storagenode=glusterfs
node/k8s-master01 labeled
Load the required device-mapper kernel modules on all nodes:
[root@k8s-master01 ~]# modprobe dm_snapshot
[root@k8s-master01 ~]# modprobe dm_mirror
[root@k8s-master01 ~]# modprobe dm_thin_pool
2. Create the GlusterFS management service container cluster
This article deploys GlusterFS in containers; if your organization already has a GlusterFS cluster, it can be used directly.
GlusterFS is deployed as a DaemonSet, which guarantees that every node that should run the GlusterFS management service runs exactly one instance of it. Node targeting relies on the storagenode=glusterfs label applied above (a quick check is sketched below).
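As a sanity check, once the DaemonSet has been created (next step) you can confirm which nodes it targets. This is a minimal sketch, assuming the DaemonSet is named glusterfs as in the upstream heketi template and that its pod template selects nodes via the storagenode=glusterfs label:
# Show which node label the DaemonSet's pod template selects on
kubectl get ds glusterfs -o jsonpath='{.spec.template.spec.nodeSelector}'; echo
# List the nodes that currently carry that label
kubectl get nodes -l storagenode=glusterfs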
Download the required files:
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
Create the cluster:
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl create -f glusterfs-daemonset.json
daemonset.extensions/glusterfs created
Note 1: The default mount layout is used here; another disk can be used as the GlusterFS working directory.
Note 2: Resources are created in the default namespace; change it as needed.
Note 3: The gluster/gluster-centos:gluster3u12_centos7 image can be used.
Check the pods:
[root@k8s-master01 kubernetes]# kubectl get pods -l glusterfs-node=daemonset
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-5npwn   1/1     Running   0          1m
glusterfs-bd5dx   1/1     Running   0          1m
...
3. Create the Heketi service
Heketi is a framework that provides a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes GlusterFS administration easier.
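Under the hood, heketi-cli is simply a client for this REST API. As a hedged illustration (usable once the Heketi service created below is running, with authentication disabled, and after HEKETI_CLI_SERVER has been exported as shown further down), the cluster list can be fetched directly over HTTP:
# List clusters straight from the REST API (returns a JSON list of cluster IDs)
curl -s "$HEKETI_CLI_SERVER/clusters"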
Create the ServiceAccount for Heketi:
[root@k8s-master01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master01 kubernetes]# kubectl create -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         13d
heketi-service-account   1
Create the RBAC binding and the secret for Heketi:
[root@k8s-master01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
Deploy the bootstrap Heketi:
[root@k8s-master01 kubernetes]# kubectl create -f heketi-bootstrap.json
secret/heketi-db-backup created
service/heketi created
deployment.extensions/heketi created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
4. Set up the GlusterFS cluster
[root@k8s-master01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master01 heketi-client]# pwd
/root/heketi-client
[root@k8s-master01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0
Edit topology-sample.json: manage is the hostname of each node that runs the GlusterFS management service, storage is that node's IP address, and devices lists the raw block devices on the node.
[root@k8s-master01 kubernetes]# cat topology-sample.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master01"
              ],
              "storage": [
                "192.168.20.20"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node01"
              ],
              "storage": [
                "192.168.20.30"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node02"
              ],
              "storage": [
                "192.168.20.31"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
Look up the ClusterIP of the bootstrap Heketi service:
[root@k8s-master01 kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.110.217.153   8080/TCP   26m
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.110.217.153:8080
Create the GlusterFS cluster:
[root@k8s-master01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Creating cluster ... ID: a058723afae149618337299c84a1eaed
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master01 ... ID: 929909065ceedb59c1b9c235fc3298ec
        Adding device /dev/sdc ... OK
    Creating node k8s-node01 ... ID: 37409d82b9ef27f73ccc847853eec429
        Adding device /dev/sdb ... OK
    Creating node k8s-node02 ... ID: e3ab676be27945749bba90efb34f2eb9
        Adding device /dev/sdb ... OK
Create the Heketi persistent storage volume:
yum install device-mapper* -y
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master01 kubernetes]# ls
glusterfs-daemonset.json   heketi.json                   heketi-storage.json
heketi-bootstrap.json      heketi-service-account.json   README.md
heketi-deployment.json     heketi-start.sh               topology-sample.json
[root@k8s-master01 kubernetes]# kubectl create -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
If you see the following error:
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Error: /usr/sbin/modprobe failed: 1
  thin: Required device-mapper target(s) not detected in your kernel.
  Run `lvcreate --help' for more information.
Fix: run modprobe dm_thin_pool on all nodes.
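To keep these modules loaded across reboots, one option (an assumption on my part, not part of the original setup, and specific to systemd-based distributions) is a modules-load.d drop-in on every node:
# Persist the device-mapper modules needed by LVM thin provisioning
cat <<'EOF' > /etc/modules-load.d/glusterfs.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF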
Remove the intermediate (bootstrap) resources:
[root@k8s-master01 kubernetes]# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-59f8dbc97f-5rf6s" deleted
service "deploy-heketi" deleted
service "heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59f8dbc97f" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
Deploy the persistent Heketi; other persistence methods can be used instead.
[root@k8s-master01 kubernetes]# kubectl create -f heketi-deployment.json
service/heketi created
deployment.extensions/heketi created
Once the pod is running, the deployment is complete.
[root@k8s-master01 kubernetes]# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
glusterfs-5npwn           1/1     Running   0          3h
glusterfs-8zfzq           1/1     Running   0          3h
glusterfs-bd5dx           1/1     Running   0          3h
heketi-5cb5f55d9f-5mtqt   1/1     Running   0          2m
Check the Service of the newly deployed persistent Heketi and update HEKETI_CLI_SERVER accordingly (a one-liner that resolves the ClusterIP automatically is sketched after the output below):
[root@k8s-master01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.111.95.240                 8080/TCP   12h
heketi-storage-endpoints   ClusterIP   10.99.28.153                  1/TCP      12h
kubernetes                 ClusterIP   10.96.0.1                     443/TCP    14d
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.111.95.240:8080
[root@k8s-master01 kubernetes]# curl http://10.111.95.240:8080/hello
Hello from Heketi
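Instead of copying the ClusterIP by hand, it can be resolved automatically. A small convenience sketch, assuming the Service is named heketi in the default namespace as shown above:
# Resolve the heketi Service ClusterIP and point heketi-cli at it
export HEKETI_CLI_SERVER=http://$(kubectl get svc heketi -o jsonpath='{.spec.clusterIP}'):8080
curl -s "$HEKETI_CLI_SERVER/hello"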
View the GlusterFS topology:
[root@k8s-master01 kubernetes]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: heketidbstorage
    Size: 2
    Id: 828dc2dfaa00b7213e831b91c6213ae4
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:heketidbstorage
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled

        Bricks:
            Id: 16b7270d7db1b3cfe9656b64c2a3916c
            Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
            Size (GiB): 2
            Node: fb181b0cef571e9af7d84d2ecf534585
            Device: 04290ec786dc7752a469b66f5e94458f

            Id: 828da093d9d78a2b1c382b13cc4da4a1
            Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick
            Size (GiB): 2
            Node: d38819746cab7d567ba5f5f4fea45d91
            Device: 80b61df999fcac26ebca6e28c4da8e61

            Id: e8ef0e68ccc3a0416f73bc111cffee61
            Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick
            Size (GiB): 2
            Node: 0f00835397868d3591f45432e432ba38
            Device: 82af8e5f2fb2e1396f7c9e9f7698a178

    Nodes:

    Node Id: 0f00835397868d3591f45432e432ba38
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node02
    Storage Hostnames: 192.168.20.31
    Devices:
        Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:e8ef0e68ccc3a0416f73bc111cffee61   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick

    Node Id: d38819746cab7d567ba5f5f4fea45d91
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node01
    Storage Hostnames: 192.168.20.30
    Devices:
        Id:80b61df999fcac26ebca6e28c4da8e61   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:828da093d9d78a2b1c382b13cc4da4a1   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick

    Node Id: fb181b0cef571e9af7d84d2ecf534585
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-master01
    Storage Hostnames: 192.168.20.20
    Devices:
        Id:04290ec786dc7752a469b66f5e94458f   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:16b7270d7db1b3cfe9656b64c2a3916c   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
5. Define the StorageClass
[root@k8s-master01 gfs]# cat storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "false"
[root@k8s-master01 gfs]# kubectl create -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created
The provisioner parameter must be set to "kubernetes.io/glusterfs".
resturl must be an address of the Heketi service that is reachable from the host running the API Server.
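If Heketi's REST authentication is enabled (use_auth set to true in heketi.json), the StorageClass additionally needs restauthenabled: "true" plus restuser, secretNamespace and secretName, and the admin key must be stored in a Secret of type kubernetes.io/glusterfs under the data key "key". A hedged sketch; the secret name heketi-admin-secret is hypothetical, and 'My Secret' matches the heketi.json shown later in this article:
# Create the secret referenced by secretName/secretNamespace in the StorageClass
kubectl create secret generic heketi-admin-secret \
  --type="kubernetes.io/glusterfs" --from-literal=key='My Secret' --namespace=default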
6. Define a PVC and a test Pod
[root@k8s-master01 gfs]# kubectl create -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master01 gfs]# cat pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi
As soon as the PVC is created, it triggers Heketi to perform the corresponding operations: bricks are created on the GlusterFS cluster, and a volume is then created and started.
The resulting PV and PVC:
[root@k8s-master01 gfs]# kubectl get pv,pvc | grep gluster
persistentvolume/pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   Delete   Bound   default/pvc-gluster-heketi   gluster-heketi   5m
persistentvolumeclaim/pvc-gluster-heketi   Bound   pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   gluster-heketi   5m
7. Test the data
Enter the pod and create test data:
[root@k8s-master01 /]# kubectl exec -ti pod-use-pvc -- /bin/sh
/ # cd /pv-data/
/pv-data # mkdir {1..10}
/pv-data # ls
{1..10}
Mount test from the host:
# Check the volume
[root@k8s-master01 /]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: vol_56d636b452d31a9d4cb523d752ad0891
    Size: 1
    Id: 56d636b452d31a9d4cb523d752ad0891
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Enabled
...
...
# Or list the volumes
[root@k8s-master01 mnt]# heketi-cli volume list
Id:56d636b452d31a9d4cb523d752ad0891 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:vol_56d636b452d31a9d4cb523d752ad0891
Id:828dc2dfaa00b7213e831b91c6213ae4 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:heketidbstorage
[root@k8s-master01 mnt]#
vol_56d636b452d31a9d4cb523d752ad0891 is the volume name; the Mount field (192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891) shows how to mount it.
Mount the volume and check the data:
[root@k8s-master01 /]# mount -t glusterfs 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/
[root@k8s-master01 /]# cd /mnt/
[root@k8s-master01 mnt]# ls
{1..10}
8. Test a Deployment
[root@k8s-master01 gfs]# cat nginx-gluster.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi
[root@k8s-master01 gfs]# kubectl get po,pvc,pv | grep nginx
pod/nginx-gfs-77c758ccc-2hwl6   1/1   Running             0   4m
pod/nginx-gfs-77c758ccc-kxzfz   0/1   ContainerCreating   0   3m
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolume/pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-html   gluster-heketi   4m
persistentvolume/pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-conf   gluster-heketi   4m
Check the mounts inside the pod:
[root@k8s-master01 gfs]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- df -Th
Filesystem                                          Type             Size  Used Avail Use% Mounted on
overlay                                             overlay           86G  6.6G   80G   8% /
tmpfs                                               tmpfs            7.8G     0  7.8G   0% /dev
tmpfs                                               tmpfs            7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root                             xfs               86G  6.6G   80G   8% /etc/hosts
shm                                                 tmpfs             64M     0   64M   0% /dev/shm
192.168.20.20:vol_b9c68075c6f20438b46db892d15ed45a  fuse.glusterfs  1014M   43M  972M   5% /etc/nginx/conf.d
192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5  fuse.glusterfs  1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               tmpfs            7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount
Mount the html volume on the host and create index.html:
[root@k8s-master01 gfs]# mount -t glusterfs 192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 /mnt/
[root@k8s-master01 gfs]# cd /mnt/
[root@k8s-master01 mnt]# ls
[root@k8s-master01 mnt]# echo "test" > index.html
[root@k8s-master01 mnt]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- cat /usr/share/nginx/html/index.html
test
Scale the nginx Deployment:
[root@k8s-master01 ~]# kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heketi      1         1         1            1           14h
nginx-gfs   2         2         2            2           23m
[root@k8s-master01 ~]# kubectl scale deploy nginx-gfs --replicas 3
deployment.extensions/nginx-gfs scaled
[root@k8s-master01 ~]# kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
glusterfs-5npwn             1/1     Running   0          18h
glusterfs-8zfzq             1/1     Running   0          17h
glusterfs-bd5dx             1/1     Running   0          18h
heketi-5cb5f55d9f-5mtqt     1/1     Running   0          14h
nginx-gfs-77c758ccc-2hwl6   1/1     Running   0          11m
nginx-gfs-77c758ccc-6fphl   1/1     Running   0          8m
nginx-gfs-77c758ccc-kxzfz   1/1     Running   0          10m
Check the file content from the new replica:
[root@k8s-master01 ~]# kubectl exec -ti nginx-gfs-77c758ccc-6fphl -- cat /usr/share/nginx/html/index.html
test
9. Expand GlusterFS
9.1 Add a disk to an existing node
Building on the setup above, assume a new disk is added to k8s-node02.
Find the name and IP of the GlusterFS pod running on k8s-node02:
[root@k8s-master01 ~]# kubectl get po -o wide -l glusterfs-node
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE
glusterfs-5npwn   1/1     Running   0          20h   192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1     Running   0          20h   192.168.20.20   k8s-master01
glusterfs-bd5dx   1/1     Running   0          20h   192.168.20.30   k8s-node01
Confirm the newly added disk on k8s-node02:
Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Use heketi-cli to look up the cluster ID and all node IDs:
[root@k8s-master01 ~]# heketi-cli cluster info
Error: Cluster id missing
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 ~]# heketi-cli cluster info 5dec5676c731498c2bdf996e110a3e5e
Cluster id: 5dec5676c731498c2bdf996e110a3e5e
Nodes:
0f00835397868d3591f45432e432ba38
d38819746cab7d567ba5f5f4fea45d91
fb181b0cef571e9af7d84d2ecf534585
Volumes:
32146a51be9f980c14bc86c34f67ebd5
56d636b452d31a9d4cb523d752ad0891
828dc2dfaa00b7213e831b91c6213ae4
b9c68075c6f20438b46db892d15ed45a
Block: true
File: true
Find the node ID that corresponds to k8s-node02:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):25   Free (GiB):14   Bricks:4
Add the disk to node k8s-node02 in the GlusterFS cluster:
[root@k8s-master01 ~]# heketi-cli device add --name=/dev/sdc --node=0f00835397868d3591f45432e432ba38
Device added successfully
Check the result:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:5539e74bc2955e7c70b3a20e72c04615   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):0    Free (GiB):39   Bricks:0
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):25   Free (GiB):14   Bricks:4
9.2 Add a new node
Assume node k8s-master03 (IP 192.168.20.22) is to be added to the GlusterFS cluster, along with its /dev/sdc device.
Label the node; a GlusterFS pod is then created on it automatically:
[root@k8s-master01 kubernetes]# kubectl label node k8s-master03 storagenode=glusterfs
node/k8s-master03 labeled
[root@k8s-master01 kubernetes]# kubectl get pod -owide -l glusterfs-node
NAME              READY   STATUS              RESTARTS   AGE   IP              NODE
glusterfs-5npwn   1/1     Running             0          21h   192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1     Running             0          21h   192.168.20.20   k8s-master01
glusterfs-96w74   0/1     ContainerCreating   0          2m    192.168.20.22   k8s-master03
glusterfs-bd5dx   1/1     Running             0          21h   192.168.20.30   k8s-node01
Run peer probe from any node (here via one of the existing GlusterFS pods):
[root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer probe 192.168.20.22
peer probe: success.
Add the new node to the GlusterFS cluster via Heketi:
[root@k8s-master01 kubernetes]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 kubernetes]# heketi-cli node add --zone=1 --cluster=5dec5676c731498c2bdf996e110a3e5e --management-host-name=k8s-master03 --storage-host-name=192.168.20.22
Node information:
Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname k8s-master03
Storage Hostname 192.168.20.22
Add the new node's disk to the cluster:
[root@k8s-master01 kubernetes]# heketi-cli device add --name=/dev/sdc --node=150bc8c458a70310c6137e840619758c
Device added successfully
Verify:
[root@k8s-master01 kubernetes]# heketi-cli node list
Id:0f00835397868d3591f45432e432ba38   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:150bc8c458a70310c6137e840619758c   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:d38819746cab7d567ba5f5f4fea45d91   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:fb181b0cef571e9af7d84d2ecf534585   Cluster:5dec5676c731498c2bdf996e110a3e5e
[root@k8s-master01 kubernetes]# heketi-cli node info 150bc8c458a70310c6137e840619758c
Node Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-master03
Storage Hostname: 192.168.20.22
Devices:
Id:2d5210c19858fb7ea3f805e6f582ecce   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):0   Free (GiB):39   Bricks:0
PS: A volume can be expanded with heketi-cli volume expand --volume=volumeID --expand-size=10, as in the example below.
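For example, growing the 1Gi test volume from section 6 by 10 GiB (the volume ID below is the one reported by heketi-cli volume list earlier; substitute your own):
heketi-cli volume expand --volume=56d636b452d31a9d4cb523d752ad0891 --expand-size=10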
10. Fixing an error after restarting Heketi
The error looks like this:
[heketi] ERROR 2018/11/26 11:57:51 heketi/apps/glusterfs/app.go:185:glusterfs.NewApp: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.
Fix:
Edit heketi.json and add "brick_min_size_gb" : 1,
[root@k8s-master01 kubernetes]# cat heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "brick_min_size_gb" : 1,

    "kubeexec": {
      "rebalance_on_expansion": true
    },

    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
Delete the secret and recreate it:
[root@k8s-master01 kubernetes]# kubectl delete secret heketi-config-secret
[root@k8s-master01 ~]# kubectl create secret generic heketi-config-secret --from-file heketi.json
Update the heketi Deployment:
# Add the following variable to the env section:
- name: HEKETI_IGNORE_STALE_OPERATIONS
  value: "true"
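Alternatively, the same variable can be injected without editing the manifest by hand. A small sketch, assuming the Deployment is named heketi as shown in the earlier output:
kubectl set env deployment/heketi HEKETI_IGNORE_STALE_OPERATIONS=true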
11. GlusterFS container fails to start
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Fix (for a fresh cluster with no data):
rm -rf /var/lib/heketi/
rm -rf /var/lib/glusterd
rm -rf /etc/glusterfs/
yum remove glusterfs
yum install glusterfs glusterfs-fuse -y