Expanding a heketi volume that is already mounted in k8s

Last week I deployed Prometheus on the K8s cluster and gave it a 5 GiB volume. That volume has now filled up, and the pod keeps failing to come back up.

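Before expanding anything, it is worth confirming that the failure really is a full volume. A minimal sketch of the checks (the pod name is a placeholder for whatever kubectl get pod reports):

kubectl get pod -n ns-monitor
kubectl describe pod <prometheus-pod> -n ns-monitor
kubectl logs <prometheus-pod> -n ns-monitor --previous | grep -i "no space"

The Events in the describe output and the previous container's logs typically show the "no space left on device" errors that point at the data volume.
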
1. First, find the PV bound to the pod and check which GlusterFS volume it ultimately mounts.

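The PV name comes from the PVC that the Prometheus deployment mounts (ns-monitor/prometheus-data-pvc, which also shows up in the Claim field below). One way to pull it out, assuming that PVC name:

kubectl get pvc prometheus-data-pvc -n ns-monitor -o jsonpath='{.spec.volumeName}'
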
kubectl describe pv pvc-98bde2dc-11a3-44af-8daf-fc54fb9aba3d -n ns-monitor
Name:            pvc-98bde2dc-11a3-44af-8daf-fc54fb9aba3d
Labels:          <none>
Annotations:     Description: Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id: 6e1551730d3d6798fd8d6013d7a0b705
                 gluster.org/type: file
                 kubernetes.io/createdby: heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid: 2013
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gluster-heketi-storageclass
Status:          Bound
Claim:           ns-monitor/prometheus-data-pvc
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       glusterfs-dynamic-98bde2dc-11a3-44af-8daf-fc54fb9aba3d
    EndpointsNamespace:  ns-monitor
    Path:                vol_6e1551730d3d6798fd8d6013d7a0b705
    ReadOnly:            false
Events:                  <none>
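
The two fields that matter here are the gluster.kubernetes.io/heketi-volume-id annotation and the Source Path (vol_<volume-id>). The ID can also be pulled out directly with jsonpath (note the escaped dots in the annotation key):

kubectl get pv pvc-98bde2dc-11a3-44af-8daf-fc54fb9aba3d -o jsonpath='{.metadata.annotations.gluster\.kubernetes\.io/heketi-volume-id}'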

2. The GlusterFS storage here is managed by heketi, so the volume can be expanded directly with heketi-cli. Note that --expand-size is the number of GiB to add, not the new total, so expanding the 5 GiB volume by 12 GiB yields 17 GiB.

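Before expanding, the current state of the volume and the free space on the underlying devices can be checked with heketi-cli (this assumes heketi-cli is already pointed at the heketi server, e.g. via HEKETI_CLI_SERVER plus user/key if auth is enabled); heketi needs enough free space on the cluster's devices to allocate the new bricks:

heketi-cli volume info 6e1551730d3d6798fd8d6013d7a0b705
heketi-cli topology info
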
heketi-cli volume expand --volume=6e1551730d3d6798fd8d6013d7a0b705 --expand-size=12

[root@g0001 ~]# heketi-cli volume expand --volume=6e1551730d3d6798fd8d6013d7a0b705 --expand-size=12
Name: vol_6e1551730d3d6798fd8d6013d7a0b705
Size: 17
Volume Id: 6e1551730d3d6798fd8d6013d7a0b705
Cluster Id: 6711daf8abb6699cdd2025de96493800
Mount: 10.132.1.11:vol_6e1551730d3d6798fd8d6013d7a0b705
Mount Options: backup-volfile-servers=10.132.1.12,10.132.1.10
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 2
Snapshot Factor: 1.00

3. The expansion is done; check the new size on the GlusterFS side.
gluster volume info shows that Brick3 and Brick4 have been added.
With a replica count of 2, new bricks arrive in pairs, so two new bricks is expected.

gluster volume info vol_6e1551730d3d6798fd8d6013d7a0b705
Volume Name: vol_6e1551730d3d6798fd8d6013d7a0b705
Type: Distributed-Replicate
Volume ID: 51b0fbf1-19a4-4de3-a1e5-6bb3c2d313ec
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.132.1.10:/var/lib/heketi/mounts/vg_a1bdd995310f0b8154702036cba441a9/brick_d3a4f1cf40d761b198552bfd4218af45/brick
Brick2: 10.132.1.12:/var/lib/heketi/mounts/vg_56c8960839de084bac9bf8a7fe5c02f5/brick_ad05e87a0ca2ff501e6ac839cf155021/brick
Brick3: 10.132.1.10:/var/lib/heketi/mounts/vg_a1bdd995310f0b8154702036cba441a9/brick_d7198cf4a0f867ef7adcfb14c339c897/brick
Brick4: 10.132.1.12:/var/lib/heketi/mounts/vg_56c8960839de084bac9bf8a7fe5c02f5/brick_8b8bdd516f3ab35b10ea8e2d65688c94/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
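
To double-check that the two new bricks are online and serving, gluster volume status can be run as well:

gluster volume status vol_6e1551730d3d6798fd8d6013d7a0b705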

4. Check the details of Brick3 and Brick4.
lvdisplay reports a 12.00 GiB LV, so the expansion succeeded at the LVM level as well.

[root@g0001 ~]# lvdisplay /dev/vg_a1bdd995310f0b8154702036cba441a9/brick_d7198cf4a0f867ef7adcfb14c339c897
  --- Logical volume ---
  LV Path                /dev/vg_a1bdd995310f0b8154702036cba441a9/brick_d7198cf4a0f867ef7adcfb14c339c897
  LV Name                brick_d7198cf4a0f867ef7adcfb14c339c897
  VG Name                vg_a1bdd995310f0b8154702036cba441a9
  LV UUID                Y5u8vu-Nj0e-9asC-sefD-mgZD-9Kde-YHPY8D
  LV Write Access        read/write
  LV Creation host, time g0001, 2020-07-30 17:10:47 +0800
  LV Pool name           tp_aa46a737beb2e5560997722c63864113
  LV Status              available
  # open                 1
  LV Size                12.00 GiB
  Mapped size            17.73%
  Current LE             3072
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:62
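
For a more compact view of all brick LVs in the same volume group (including the original 5 GiB brick), lvs also works:

lvs vg_a1bdd995310f0b8154702036cba441a9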

5. Next, look at how the bricks are mounted; they can be located by brick ID.
This is the interesting part: both the originally created 5 GiB brick and the freshly added 12 GiB brick are mounted, because the expansion adds new bricks rather than resizing the old ones.

[root@g0001 ~]# df -h| grep brick_d7198cf4a0f867ef7adcfb14c339c897
/dev/mapper/vg_a1bdd995310f0b8154702036cba441a9-brick_d7198cf4a0f867ef7adcfb14c339c897   12G  654M   12G    6% /var/lib/heketi/mounts/vg_a1bdd995310f0b8154702036cba441a9/brick_d7198cf4a0f867ef7adcfb14c339c897
[root@g0001 ~]# df -h| grep brick_d3a4f1cf40d761b198552bfd4218af45
/dev/mapper/vg_a1bdd995310f0b8154702036cba441a9-brick_d3a4f1cf40d761b198552bfd4218af45  5.0G  3.9G  1.2G   78% /var/lib/heketi/mounts/vg_a1bdd995310f0b8154702036cba441a9/brick_d3a4f1cf40d761b198552bfd4218af45
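
Since the old bricks keep the data they already hold, new files are simply distributed across both replica pairs by GlusterFS's DHT hashing. If existing data should also be spread onto the new bricks, GlusterFS offers a rebalance; I did not need it here, so this is only a sketch of the standard commands:

gluster volume rebalance vol_6e1551730d3d6798fd8d6013d7a0b705 start
gluster volume rebalance vol_6e1551730d3d6798fd8d6013d7a0b705 status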

6. Back on the K8s cluster, check whether the Prometheus pod has been pulled back up.
After 18 restarts it is finally Running.

[root@m0001 ~]# kubectl get pod -n ns-monitor
NAME                          READY   STATUS    RESTARTS   AGE
grafana-7f54b49f5d-p88d6      1/1     Running   3          10d
node-exporter-5tg7g           1/1     Running   6          10d
node-exporter-9mzrm           1/1     Running   2          10d
node-exporter-b6jt8           1/1     Running   3          10d
node-exporter-jl6ls           1/1     Running   2          10d
node-exporter-mqfvr           1/1     Running   2          10d
node-exporter-ntb92           1/1     Running   2          10d
node-exporter-rkqsh           1/1     Running   3          10d
node-exporter-v887p           1/1     Running   2          10d
prometheus-5f7cb6d955-5rwbp   1/1     Running   18          1m
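
As a final check, the new capacity is visible from inside the pod as well. The mount path below is an assumption; use whatever path the Prometheus deployment mounts the data volume on:

kubectl exec prometheus-5f7cb6d955-5rwbp -n ns-monitor -- df -h /prometheus

This should now report the roughly 17 GiB GlusterFS mount instead of the old 5 GiB one.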
