IOMESH Installation
Official documentation:
Also published on my personal site: www.etaon.top
Lab topology:
A single network can be used for testing, but the official recommendation is to separate IOMesh storage traffic onto its own network, i.e. the 10.234.1.0/24 subnet shown in the diagram below:
The lab uses bare-metal servers with the following configuration:
| Component | Model/Spec | Qty | Notes |
|---|---|---|---|
| CPU | Intel(R) Xeon(R) Silver 4214R @ 2.40GHz | 2 | Nodes 1, 3 |
| CPU | Intel(R) Xeon(R) Gold 6226R @ 2.90GHz | 1 | Node 2 |
| Memory | 256 GB | 3 | Nodes 1, 2, 3 |
| SSD | 1.6 TB NVMe SSD | 2 x 3 | Nodes 1/2/3 |
| HDD | 2.4 TB SAS | 4 x 3 | Nodes 1/2/3 |
| DOM | M.2 240 GB | 2 | Nodes 1/2/3 |
| GbE NIC | I350 | 2 | Nodes 1/2/3 |
| Storage NIC | 10/25G | 2 | Nodes 1/2/3 |
- Deploying IOMesh requires at least one cache disk and one partition (data) disk per node.
Installation steps:
Create a Kubernetes cluster
Kubernetes 1.24 is used, with containerd as the runtime.
Installation reference: Install Kubernetes 1.24 - 路无止境! (etaon.top)
Pre-installation preparation
Perform the following steps on every worker node.
- Install open-iscsi (already present on Ubuntu).
apt install open-iscsi -y
- Edit the iSCSI configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf
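To see exactly what this sed expression does before touching the real config, here is a throwaway demo on a scratch file (the path is illustrative, not the real /etc/iscsi/iscsid.conf):

```shell
# Demo of the sed edit on a scratch copy (illustrative path)
printf 'node.startup = automatic\n' > /tmp/iscsid.conf.demo
sed -i 's/^node.startup = automatic$/node.startup = manual/' /tmp/iscsid.conf.demo
cat /tmp/iscsid.conf.demo   # node.startup = manual
```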
- Ensure the iscsi_tcp module is loaded now and at boot
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modules-load.d/iscsi-tcp.conf'
- Start the iscsid service
systemctl enable --now iscsid
Offline installation of IOMesh
Installation documentation:
Install IOMesh · Documentation
- Download the offline installation bundle and upload it to all nodes where IOMesh will be deployed, including the master node.
# "nodes" here normally means worker nodes, of which at least 3 are required;
# if there are not enough workers and you want the master nodes to join, run on the master:
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
- Extract the uploaded offline package:
tar -xvf iomesh-offline-v0.11.1.tgz && cd iomesh-offline
- Load the image files. With containerd as the runtime, run the following on every node:
ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar
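Since the images must be imported on every node, a small loop can generate the per-node commands. This is only a sketch: the node names are the ones from this lab, it assumes the bundle was extracted to ~/iomesh-offline on each node, and it only prints the commands rather than running them:

```shell
# Sketch: print the import command for each node (adjust node names to your cluster)
for node in cp01 worker01 worker02; do
  echo "ssh root@$node 'cd ~/iomesh-offline && ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar'"
done
```

Pipe the output to `sh` (or run each line) once the commands look right.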
- On the master node (or the API server access point), generate the IOMesh configuration file
./helm show values charts/iomesh > iomesh.yaml
- Edit the iomesh.yaml configuration file, making sure to change dataCIDR to the IOMesh storage network CIDR
...
iomesh:
  chunk:
    dataCIDR: "10.234.1.0/24" # change to your own data network CIDR
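The dataCIDR edit can also be scripted. A minimal sketch against a scratch copy (the file path and starting CIDR are illustrative; run the sed against your real iomesh.yaml):

```shell
# Patch dataCIDR in a scratch values file (illustrative)
printf 'iomesh:\n  chunk:\n    dataCIDR: "10.234.0.0/24"\n' > /tmp/iomesh-demo.yaml
sed -i 's#dataCIDR: ".*"#dataCIDR: "10.234.1.0/24"#' /tmp/iomesh-demo.yaml
grep dataCIDR /tmp/iomesh-demo.yaml   # dataCIDR: "10.234.1.0/24"
```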
- Install the IOMesh cluster from the master node
./helm install iomesh ./charts/iomesh \
--create-namespace \
--namespace iomesh-system \
--values iomesh.yaml \
--wait
# Output
NAME: iomesh
LAST DEPLOYED: Tue Dec 27 09:38:56 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
- Pod status after successful creation
Pod creation process:
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -w
NAME READY STATUS RESTARTS AGE
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 6/6 Running 0 24s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 6/6 Running 0 24s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 6/6 Running 0 24s
iomesh-csi-driver-node-plugin-gr8bm 3/3 Running 0 24s
iomesh-csi-driver-node-plugin-kshdt 3/3 Running 0 24s
iomesh-csi-driver-node-plugin-xxhhx 3/3 Running 0 24s
iomesh-hostpath-provisioner-59v28 1/1 Running 0 24s
iomesh-hostpath-provisioner-79dgh 1/1 Running 0 24s
iomesh-hostpath-provisioner-cknk8 1/1 Running 0 24s
iomesh-openebs-ndm-7vdkm 1/1 Running 0 24s
iomesh-openebs-ndm-cluster-exporter-75f568df84-dvz4g 1/1 Running 0 24s
iomesh-openebs-ndm-hctjc 1/1 Running 0 24s
iomesh-openebs-ndm-node-exporter-f59t5 1/1 Running 0 24s
iomesh-openebs-ndm-node-exporter-l48dj 1/1 Running 0 24s
iomesh-openebs-ndm-node-exporter-sxgjn 1/1 Running 0 24s
iomesh-openebs-ndm-operator-7d58d8fbc8-xxwdt 1/1 Running 0 24s
iomesh-openebs-ndm-x64pj 1/1 Running 0 24s
iomesh-zookeeper-0 0/1 Running 0 18s
iomesh-zookeeper-operator-f5588b6d7-lrj6n 1/1 Running 0 24s
operator-765dd9678f-95rcw 1/1 Running 0 24s
operator-765dd9678f-ns2gs 1/1 Running 0 24s
operator-765dd9678f-q8pwn 1/1 Running 0 24s
iomesh-zookeeper-0 1/1 Running 0 23s
iomesh-zookeeper-1 0/1 Pending 0 0s
iomesh-zookeeper-1 0/1 Pending 0 1s
iomesh-zookeeper-1 0/1 ContainerCreating 0 1s
iomesh-zookeeper-1 0/1 ContainerCreating 0 1s
iomesh-zookeeper-1 0/1 Running 0 2s
iomesh-csi-driver-node-plugin-gr8bm 3/3 Running 1 (0s ago) 50s
iomesh-csi-driver-node-plugin-kshdt 3/3 Running 1 (0s ago) 50s
iomesh-csi-driver-node-plugin-xxhhx 3/3 Running 1 (0s ago) 50s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 2/6 Error 1 (1s ago) 53s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 2/6 Error 1 (1s ago) 53s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 2/6 Error 1 (1s ago) 54s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 6/6 Running 5 (1s ago) 54s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 6/6 Running 5 (2s ago) 54s
iomesh-zookeeper-1 1/1 Running 0 26s
iomesh-zookeeper-2 0/1 Pending 0 0s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 6/6 Running 5 (2s ago) 55s
iomesh-zookeeper-2 0/1 Pending 0 1s
iomesh-zookeeper-2 0/1 ContainerCreating 0 1s
iomesh-zookeeper-2 0/1 ContainerCreating 0 1s
iomesh-zookeeper-2 0/1 Running 0 2s
iomesh-zookeeper-2 1/1 Running 0 26s
iomesh-meta-0 0/2 Pending 0 0s
iomesh-meta-1 0/2 Pending 0 0s
iomesh-meta-2 0/2 Pending 0 0s
iomesh-meta-0 0/2 Pending 0 1s
iomesh-meta-1 0/2 Pending 0 1s
iomesh-meta-1 0/2 Init:0/1 0 1s
iomesh-meta-0 0/2 Init:0/1 0 1s
iomesh-meta-2 0/2 Pending 0 1s
iomesh-meta-2 0/2 Init:0/1 0 1s
iomesh-iscsi-redirector-jj2qm 0/2 Pending 0 0s
iomesh-iscsi-redirector-jj2qm 0/2 Pending 0 0s
iomesh-iscsi-redirector-9crx8 0/2 Pending 0 0s
iomesh-iscsi-redirector-zlhtk 0/2 Pending 0 0s
iomesh-iscsi-redirector-9crx8 0/2 Pending 0 0s
iomesh-iscsi-redirector-zlhtk 0/2 Pending 0 0s
iomesh-iscsi-redirector-zlhtk 0/2 Init:0/1 0 0s
iomesh-iscsi-redirector-9crx8 0/2 Init:0/1 0 0s
iomesh-chunk-0 0/3 Pending 0 0s
iomesh-meta-0 0/2 Init:0/1 0 2s
iomesh-meta-1 0/2 Init:0/1 0 2s
iomesh-meta-2 0/2 Init:0/1 0 2s
iomesh-iscsi-redirector-zlhtk 0/2 Init:0/1 0 0s
iomesh-iscsi-redirector-jj2qm 0/2 Init:0/1 0 0s
iomesh-chunk-0 0/3 Pending 0 1s
iomesh-meta-1 0/2 Init:0/1 0 3s
iomesh-meta-2 0/2 PodInitializing 0 3s
iomesh-iscsi-redirector-9crx8 0/2 PodInitializing 0 1s
iomesh-chunk-0 0/3 Init:0/1 0 1s
iomesh-iscsi-redirector-zlhtk 0/2 PodInitializing 0 2s
iomesh-iscsi-redirector-9crx8 1/2 Running 0 2s
iomesh-meta-0 0/2 PodInitializing 0 4s
iomesh-meta-1 0/2 PodInitializing 0 4s
iomesh-meta-2 1/2 Running 0 4s
iomesh-meta-1 1/2 Running 0 5s
iomesh-iscsi-redirector-jj2qm 0/2 PodInitializing 0 3s
iomesh-iscsi-redirector-9crx8 1/2 Error 0 3s
iomesh-iscsi-redirector-zlhtk 1/2 Running 0 4s
iomesh-meta-0 1/2 Running 0 6s
iomesh-iscsi-redirector-zlhtk 1/2 Error 0 4s
iomesh-iscsi-redirector-zlhtk 1/2 Running 1 (3s ago) 5s
iomesh-meta-0 1/2 Running 0 7s
iomesh-iscsi-redirector-9crx8 1/2 Running 1 (2s ago) 5s
iomesh-iscsi-redirector-jj2qm 1/2 Running 0 5s
iomesh-iscsi-redirector-9crx8 1/2 Error 1 (2s ago) 5s
iomesh-iscsi-redirector-zlhtk 1/2 Error 1 (3s ago) 5s
iomesh-iscsi-redirector-jj2qm 1/2 Error 0 6s
iomesh-iscsi-redirector-9crx8 1/2 CrashLoopBackOff 1 (1s ago) 6s
iomesh-iscsi-redirector-zlhtk 1/2 CrashLoopBackOff 1 (2s ago) 6s
iomesh-chunk-0 0/3 PodInitializing 0 7s
iomesh-chunk-0 3/3 Running 0 7s
iomesh-chunk-1 0/3 Pending 0 0s
iomesh-iscsi-redirector-jj2qm 1/2 Running 1 (5s ago) 8s
iomesh-chunk-1 0/3 Pending 0 1s
iomesh-chunk-1 0/3 Init:0/1 0 1s
iomesh-chunk-1 0/3 Init:0/1 0 2s
iomesh-meta-0 2/2 Running 0 12s
iomesh-meta-1 2/2 Running 0 12s
iomesh-meta-2 2/2 Running 0 12s
iomesh-chunk-0 2/3 Error 0 11s
iomesh-chunk-1 0/3 PodInitializing 0 4s
iomesh-chunk-0 3/3 Running 1 (2s ago) 12s
iomesh-chunk-1 3/3 Running 0 5s
iomesh-chunk-2 0/3 Pending 0 0s
iomesh-chunk-2 0/3 Pending 0 1s
iomesh-chunk-2 0/3 Init:0/1 0 1s
iomesh-chunk-2 0/3 Init:0/1 0 2s
iomesh-chunk-2 0/3 PodInitializing 0 3s
iomesh-iscsi-redirector-jj2qm 2/2 Running 1 (13s ago) 16s
iomesh-csi-driver-node-plugin-gr8bm 3/3 Running 2 (0s ago) 100s
iomesh-chunk-2 3/3 Running 0 4s
iomesh-csi-driver-node-plugin-xxhhx 3/3 Running 2 (0s ago) 100s
iomesh-csi-driver-node-plugin-kshdt 3/3 Running 2 (1s ago) 101s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 2/6 Error 6 (0s ago) 103s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 2/6 Error 6 (1s ago) 103s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 2/6 Error 6 (1s ago) 103s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 2/6 CrashLoopBackOff 6 (1s ago) 104s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 2/6 CrashLoopBackOff 6 (2s ago) 104s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 2/6 CrashLoopBackOff 6 (2s ago) 104s
iomesh-iscsi-redirector-9crx8 1/2 Running 2 (18s ago) 23s
iomesh-iscsi-redirector-zlhtk 1/2 Running 2 (19s ago) 23s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62 6/6 Running 10 (14s ago) 116s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq 6/6 Running 10 (14s ago) 117s
iomesh-iscsi-redirector-zlhtk 2/2 Running 2 (31s ago) 35s
iomesh-iscsi-redirector-9crx8 2/2 Running 2 (30s ago) 35s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z 6/6 Running 10 (18s ago) 2m
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
iomesh-chunk-0 3/3 Running 0 7m27s 192.168.80.23 worker02
iomesh-chunk-1 3/3 Running 0 7m20s 192.168.80.22 worker01
iomesh-chunk-2 3/3 Running 0 7m16s 192.168.80.21 cp01
iomesh-csi-driver-controller-plugin-6887b8d974-4cjlq 6/6 Running 10 (7m12s ago) 9m10s 192.168.80.22 worker01
iomesh-csi-driver-controller-plugin-6887b8d974-d2d4r 6/6 Running 44 (11m ago) 35m 192.168.80.23 worker02
iomesh-csi-driver-controller-plugin-6887b8d974-j2rks 6/6 Running 44 (11m ago) 35m 192.168.80.21 cp01
iomesh-csi-driver-node-plugin-95f5z 3/3 Running 14 (7m15s ago) 35m 192.168.80.22 worker01
iomesh-csi-driver-node-plugin-dftlv 3/3 Running 12 (11m ago) 35m 192.168.80.21 cp01
iomesh-csi-driver-node-plugin-s549x 3/3 Running 12 (11m ago) 35m 192.168.80.23 worker02
iomesh-hostpath-provisioner-8jvs8 1/1 Running 1 (8m59s ago) 35m 10.211.5.10 worker01
iomesh-hostpath-provisioner-rkkrs 1/1 Running 0 35m 10.211.30.71 worker02
iomesh-hostpath-provisioner-rmk4z 1/1 Running 0 35m 10.211.214.131 cp01
iomesh-iscsi-redirector-6g2fk 2/2 Running 2 (7m25s ago) 7m30s 192.168.80.22 worker01
iomesh-iscsi-redirector-cxnbq 2/2 Running 2 (7m25s ago) 7m30s 192.168.80.23 worker02
iomesh-iscsi-redirector-wnglv 2/2 Running 2 (7m24s ago) 7m30s 192.168.80.21 cp01
iomesh-meta-0 2/2 Running 0 7m29s 10.211.30.76 worker02
iomesh-meta-1 2/2 Running 0 7m29s 10.211.5.11 worker01
iomesh-meta-2 2/2 Running 0 7m29s 10.211.214.135 cp01
iomesh-openebs-ndm-55lgk 1/1 Running 1 (8m59s ago) 35m 192.168.80.22 worker01
iomesh-openebs-ndm-cluster-exporter-75f568df84-mrd6b 1/1 Running 0 35m 10.211.30.70 worker02
iomesh-openebs-ndm-k7dxn 1/1 Running 0 35m 192.168.80.23 worker02
iomesh-openebs-ndm-m5p2k 1/1 Running 0 35m 192.168.80.21 cp01
iomesh-openebs-ndm-node-exporter-2stn9 1/1 Running 0 35m 10.211.30.72 worker02
iomesh-openebs-ndm-node-exporter-ccfr4 1/1 Running 0 35m 10.211.214.129 cp01
iomesh-openebs-ndm-node-exporter-jdzdv 1/1 Running 1 (8m59s ago) 35m 10.211.5.8 worker01
iomesh-openebs-ndm-operator-7d58d8fbc8-plkzz 1/1 Running 0 9m10s 10.211.30.75 worker02
iomesh-zookeeper-0 1/1 Running 0 35m 10.211.30.73 worker02
iomesh-zookeeper-1 1/1 Running 0 8m14s 10.211.5.9 worker01
iomesh-zookeeper-2 1/1 Running 0 7m59s 10.211.214.134 cp01
iomesh-zookeeper-operator-f5588b6d7-wknqk 1/1 Running 0 9m10s 10.211.30.74 worker02
operator-765dd9678f-6x5jf 1/1 Running 0 35m 10.211.30.68 worker02
operator-765dd9678f-h5mzh 1/1 Running 0 35m 10.211.214.130 cp01
operator-765dd9678f-svff4 1/1 Running 0 9m10s 10.211.214.133 cp01
Deploy IOMesh
Check block device status
After IOMesh is installed, all block devices are in the Unclaimed state.
root@cp01:~# kubectl -n iomesh-system get blockdevice
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89 worker02 2400476553216 Unclaimed Active 15m
blockdevice-0c6a53344f61d5f4789c393acf459c83 cp01 1600321314816 Unclaimed Active 15m
blockdevice-4f6835975745542e4a36b0548a573ef3 cp01 2400476553216 Unclaimed Active 15m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211 cp01 1600321314816 Unclaimed Active 15m
blockdevice-5f40eb004d563e77febbde69d930880c worker01 2400476553216 Unclaimed Active 15m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687 worker01 2400476553216 Unclaimed Active 15m
blockdevice-73f728aca1ab2f74a833303fffc59301 worker01 2400476553216 Unclaimed Active 15m
blockdevice-7401844e99d0b75f2543768a0464e16c cp01 2400476553216 Unclaimed Active 15m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb worker02 1600321314816 Unclaimed Active 15m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110 worker02 2400476553216 Unclaimed Active 15m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0 worker01 1600321314816 Unclaimed Active 15m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf worker02 2400476553216 Unclaimed Active 15m
blockdevice-d116cd6ee3b11542beee3737d03de26a worker02 1600321314816 Unclaimed Active 15m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37 worker01 1600321314816 Unclaimed Active 15m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6 worker01 2400476553216 Unclaimed Active 15m
blockdevice-de6e4ce89275cd799fdf88246c445399 cp01 2400476553216 Unclaimed Active 15m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1 worker02 2400476553216 Unclaimed Active 15m
blockdevice-f505ed7231329ab53ebe26e8a00d4483 cp01 2400476553216 Unclaimed Active 15m
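The SIZE column is reported in bytes; a quick conversion maps the devices back to the hardware table (2400476553216 B are the 2.4 TB SAS HDDs, 1600321314816 B the 1.6 TB NVMe SSDs):

```shell
# Convert the reported SIZE (bytes) to TB to match the hardware table
awk 'BEGIN { printf "%.1f TB\n", 2400476553216 / 1e12 }'   # 2.4 TB
awk 'BEGIN { printf "%.1f TB\n", 1600321314816 / 1e12 }'   # 1.6 TB
```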
Label disks (optional)
Label all cache disks and data disks with the following commands so that they can later be claimed by role.
Cache disks:
# kubectl label blockdevice blockdevice-20cce4274e106429525f4a0ca7d192c2 -n iomesh-system iomesh-system/disk=SSD
Data disks:
# kubectl label blockdevice blockdevice-31a2642808ca091a75331b6c7a1f9f68 -n iomesh-system iomesh-system/disk=HDD
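Labeling many devices one by one is tedious; a loop can generate the commands in bulk. The device names below are placeholders, and this sketch only prints the commands rather than running them:

```shell
# Generate label commands for a list of blockdevices (names are placeholders)
for bd in blockdevice-aaaa blockdevice-bbbb; do
  echo "kubectl label blockdevice $bd -n iomesh-system iomesh-system/disk=HDD"
done
```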
Device map - HybridFlash
The chunk/deviceMap section of iomesh.yaml declares which disks IOMesh uses and the mount type for each disk class. This document distinguishes them with the label selectors iomesh-system/disk=SSD and iomesh-system/disk=HDD set earlier.
Alternatively, inspect a disk's existing labels and select on the default labels:
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system blockdevice-02e15a7c78768f42a1e552a3726cff89 -oyaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
annotations:
internal.openebs.io/uuid-scheme: gpt
creationTimestamp: "2022-12-27T09:39:07Z"
generation: 1
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
iomesh.com/bd-devicePath: dev.sdd
iomesh.com/bd-deviceType: disk
iomesh.com/bd-driverType: HDD
iomesh.com/bd-model: DL2400MM0159
iomesh.com/bd-serial: 5000c500e2476207
iomesh.com/bd-vendor: SEAGATE
kubernetes.io/arch: amd64
kubernetes.io/hostname: worker02
kubernetes.io/os: linux
ndm.io/blockdevice-type: blockdevice
ndm.io/managed: "true"
nodename: worker02
name: blockdevice-02e15a7c78768f42a1e552a3726cff89
namespace: iomesh-system
resourceVersion: "3333"
uid: 489546d4-ef39-4980-9fcd-b699a21940ec
......
If the environment contains disks that should not be used by IOMesh, they can be excluded via exclude.
Example (using the labels we set above):
...
deviceMap:
  # cacheWithJournal:
  #   selector:
  #     matchLabels:
  #       iomesh.com/bd-deviceType: disk
  cacheWithJournal:
    selector:
      matchExpressions:
        - key: iomesh-system/disk
          operator: In
          values:
            - SSD
  dataStore:
    selector:
      matchExpressions:
        - key: iomesh-system/disk
          operator: In
          values:
            - HDD
  # exclude: blockdev-xxxx ### blockdevices to exclude
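For instance, to keep a specific HDD out of IOMesh while still selecting the rest by label, an exclude list can be added to the device type. This is a sketch only: the placement follows the commented hint above, and the blockdevice name is a placeholder.

```yaml
dataStore:
  selector:
    matchExpressions:
      - key: iomesh-system/disk
        operator: In
        values:
          - HDD
  exclude:
    - blockdevice-xxxx # placeholder: name of the device to keep out of IOMesh
```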
Claim disks
After iomesh.yaml has been modified, claim the disks with:
./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
Afterwards, query again; all required disks are now in the Claimed state:
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89 worker02 2400476553216 Claimed Active 5h19m
blockdevice-0c6a53344f61d5f4789c393acf459c83 cp01 1600321314816 Claimed Active 5h19m
blockdevice-4f6835975745542e4a36b0548a573ef3 cp01 2400476553216 Claimed Active 5h19m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211 cp01 1600321314816 Claimed Active 5h19m
blockdevice-5f40eb004d563e77febbde69d930880c worker01 2400476553216 Claimed Active 5h19m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687 worker01 2400476553216 Claimed Active 5h19m
blockdevice-73f728aca1ab2f74a833303fffc59301 worker01 2400476553216 Claimed Active 5h19m
blockdevice-7401844e99d0b75f2543768a0464e16c cp01 2400476553216 Claimed Active 5h19m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb worker02 1600321314816 Claimed Active 5h19m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110 worker02 2400476553216 Claimed Active 5h19m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0 worker01 1600321314816 Claimed Active 5h19m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf worker02 2400476553216 Claimed Active 5h19m
blockdevice-d116cd6ee3b11542beee3737d03de26a worker02 1600321314816 Claimed Active 5h19m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37 worker01 1600321314816 Claimed Active 5h19m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6 worker01 2400476553216 Claimed Active 5h19m
blockdevice-de6e4ce89275cd799fdf88246c445399 cp01 2400476553216 Claimed Active 5h19m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1 worker02 2400476553216 Claimed Active 5h19m
blockdevice-f505ed7231329ab53ebe26e8a00d4483 cp01 2400476553216 Claimed Active 5h19m
Create a StorageClass
Next, create the StorageClass.
This involves the PV - PVC - StorageClass (SC) concepts used to provide persistent storage to containers. See the CNCF documentation for details; a simplified view:
- A PV corresponds to a volume in the storage cluster.
- A PVC declares what kind of storage is needed. When a container needs persistent storage, the PVC acts as the interface between the container and the PV.
- Defining a PV requires many fields, and creating many PVs by hand in a large deployment is tedious. Dynamic provisioning simplifies this by creating PVs automatically: an administrator defines a StorageClass and deploys a PV provisioner. When developers request storage, the PVC names the desired StorageClass; the PVC passes it to the PV provisioner, which creates the PV automatically.
After the IOMesh CSI driver is installed, a default StorageClass named iomesh-csi-driver is created. Custom StorageClasses can also be defined as needed.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iomesh-sc
provisioner: com.iomesh.csi-driver # <-- driver.name in iomesh-values.yaml
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  # "ext4" / "ext3" / "ext2" / "xfs"
  csi.storage.k8s.io/fstype: "ext4"
  # "2" / "3"
  replicaFactor: "2"
  # "true" / "false"
  thinProvision: "true"
volumeBindingMode: Immediate
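replicaFactor determines how much raw cluster space a volume consumes. A rough sketch of the math, assuming replicas simply multiply the logical size (metadata overhead ignored):

```shell
# Raw space consumed by a 10 GiB volume at replicaFactor 2 vs 3
pvc_gib=10
for replica in 2 3; do
  echo "replicaFactor=$replica -> $((pvc_gib * replica)) GiB raw"
done
# replicaFactor=2 -> 20 GiB raw
# replicaFactor=3 -> 30 GiB raw
```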
Example:
------
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-default
provisioner: com.iomesh.csi-driver
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: "ext4"
  replicaFactor: "2"
  thinProvision: "true"
volumeBindingMode: Immediate
Note: reclaimPolicy can be set to Retain or Delete; the default is Delete. A PV created through the StorageClass inherits its reclaimPolicy as persistentVolumeReclaimPolicy.
The difference between the two:
Delete: when the PVC is deleted, the PV created through the PVC/StorageClass is deleted along with it.
Retain: when the PVC is deleted, the PV is kept and moves to the Released state; it must be manually reclaimed before it can be reused.
root@cp01:~/iomesh-offline# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
hostpath kubevirt.io/hostpath-provisioner Delete WaitForFirstConsumer false 5h23m
iomesh-csi-driver com.iomesh.csi-driver Retain Immediate true 5h23m
sc-default com.iomesh.csi-driver Delete Immediate true 47m
Create a SnapshotClass
A Kubernetes VolumeSnapshotClass object is analogous to a StorageClass. Define one as follows:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: iomesh-csi-driver-default
driver: com.iomesh.csi-driver
deletionPolicy: Delete
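With the class in place, a snapshot of an existing PVC can be requested through a VolumeSnapshot object. A minimal sketch (the snapshot name is illustrative; point persistentVolumeClaimName at one of your own PVCs):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot # illustrative name
spec:
  volumeSnapshotClassName: iomesh-csi-driver-default
  source:
    persistentVolumeClaimName: pvc-test # an existing PVC
```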
Create a Volume
Make sure the StorageClass exists before creating the PVC.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: sc-default # name of the StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
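To actually use the claim, mount it in a pod. A minimal sketch (pod, container, and mount names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod # illustrative
spec:
  containers:
    - name: app
      image: busybox
      command: [ "sh", "-c", "sleep 3600" ]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-test # the PVC defined above
```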
Applying the PVC above creates a corresponding PV. Once created, query the PVC:
root@cp01:~/iomesh-offline# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
iomesh-pvc-10g Bound pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8 10Gi RWO sc-default 44m
Query the PV created through the PVC:
root@cp01:~/iomesh-offline# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5900b5e5-00d4-42ad-821e-6d1199a7e3e3 217Gi RWO Delete Bound iomesh-system/coredump-iomesh-chunk-0 hostpath 4h55m
pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8 10Gi RWO Delete Bound default/iomesh-pvc-10g sc-default 44m
Check the cluster's storage status:
root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -oyaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
kind: IOMeshCluster
metadata:
annotations:
meta.helm.sh/release-name: iomesh
meta.helm.sh/release-namespace: iomesh-system
creationTimestamp: "2022-12-27T09:38:58Z"
finalizers:
- iomesh.com/iomesh-cluster-protection
- iomesh.com/access-protection
generation: 4
labels:
app.kubernetes.io/instance: iomesh
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: iomesh
app.kubernetes.io/part-of: iomesh
app.kubernetes.io/version: v5.1.2-rc14
helm.sh/chart: iomesh-v0.11.1
name: iomesh
namespace: iomesh-system
resourceVersion: "653267"
uid: cd33bce1-9400-4914-aa32-75a13f07d13e
spec:
chunk:
dataCIDR: 10.234.1.0/24
deviceMap:
cacheWithJournal:
selector:
matchExpressions:
- key: iomesh-system/disk
operator: In
values:
- SSD
dataStore:
selector:
matchExpressions:
- key: iomesh-system/disk
operator: In
values:
- HDD
devicemanager:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-devicemanager
tag: v0.11.1
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-chunkd
tag: v5.1.2-rc14
replicas: 3
resources: {}
diskDeploymentMode: hybridFlash
meta:
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-metad
tag: v5.1.2-rc14
replicas: 3
resources: {}
portOffset: -1
probe:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-probe
tag: v0.11.1
reclaimPolicy:
blockdevice: Delete
volume: Delete
redirector:
dataCIDR: 10.234.1.0/24
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-iscsi-redirectord
tag: v5.1.2-rc14
resources: {}
storageClass: hostpath
toolbox:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-toolbox
tag: v0.11.1
status:
attachedDevices:
- device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
mountType: cacheWithJournal
nodeName: worker01
- device: blockdevice-02e15a7c78768f42a1e552a3726cff89
mountType: dataStore
nodeName: worker02
- device: blockdevice-7401844e99d0b75f2543768a0464e16c
mountType: dataStore
nodeName: cp01
- device: blockdevice-6e9e9eaafee63535906f3f3b9ab35687
mountType: dataStore
nodeName: worker01
- device: blockdevice-5f40eb004d563e77febbde69d930880c
mountType: dataStore
nodeName: worker01
- device: blockdevice-de6e4ce89275cd799fdf88246c445399
mountType: dataStore
nodeName: cp01
- device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
mountType: cacheWithJournal
nodeName: worker01
- device: blockdevice-d116cd6ee3b11542beee3737d03de26a
mountType: cacheWithJournal
nodeName: worker02
- device: blockdevice-f505ed7231329ab53ebe26e8a00d4483
mountType: dataStore
nodeName: cp01
- device: blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1
mountType: dataStore
nodeName: worker02
- device: blockdevice-4f6835975745542e4a36b0548a573ef3
mountType: dataStore
nodeName: cp01
- device: blockdevice-73f728aca1ab2f74a833303fffc59301
mountType: dataStore
nodeName: worker01
- device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
mountType: cacheWithJournal
nodeName: worker02
- device: blockdevice-0c6a53344f61d5f4789c393acf459c83
mountType: cacheWithJournal
nodeName: cp01
- device: blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110
mountType: dataStore
nodeName: worker02
- device: blockdevice-d017164bd3f69f337e1838cc3b3c4aaf
mountType: dataStore
nodeName: worker02
- device: blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6
mountType: dataStore
nodeName: worker01
- device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
mountType: cacheWithJournal
nodeName: cp01
license:
expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
maxChunkNum: 3
maxPhysicalDataCapacity: 0 TB
maxPhysicalDataCapacityPerNode: 128 TB
serial: 44831211-c377-486e-acc5-7b4ea354091e
signDate: Tue, 27 Dec 2022 10:07:30 UTC
softwareEdition: COMMUNITY
subscriptionExpirationDate: "0"
subscriptionStartDate: "0"
readyReplicas:
iomesh-chunk: 3
iomesh-meta: 3
runningImages:
chunk:
iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
meta:
iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
summary:
chunkSummary:
chunks:
- id: 1
ip: 10.234.1.23
spaceInfo:
dirtyCacheSpace: 193.75Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 193.75Mi
usedDataSpace: 5.75Gi
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 2
ip: 10.234.1.22
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 0B
usedDataSpace: 0B
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 3
ip: 10.234.1.21
spaceInfo:
dirtyCacheSpace: 193.75Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 193.75Mi
usedDataSpace: 5.75Gi
status: CHUNK_STATUS_CONNECTED_HEALTHY
clusterSummary:
spaceInfo:
dirtyCacheSpace: 387.50Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 8.62Ti
totalDataCapacity: 24.66Ti
usedCacheSpace: 387.50Mi
usedDataSpace: 11.50Gi
metaSummary:
aliveHost:
- 10.211.5.11
- 10.211.30.76
- 10.211.214.135
leader: 10.211.5.11:10100
status: META_RUNNING
kind: List
metadata:
resourceVersion: ""
Focus mainly on the summary section:
summary:
chunkSummary:
chunks:
- id: 1
ip: 10.234.1.23
spaceInfo:
dirtyCacheSpace: 193.75Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 193.75Mi
usedDataSpace: 5.75Gi
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 2
ip: 10.234.1.22
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 0B
usedDataSpace: 0B
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 3
ip: 10.234.1.21
spaceInfo:
dirtyCacheSpace: 193.75Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 2.87Ti
totalDataCapacity: 8.22Ti
usedCacheSpace: 193.75Mi
usedDataSpace: 5.75Gi
status: CHUNK_STATUS_CONNECTED_HEALTHY
clusterSummary:
spaceInfo:
dirtyCacheSpace: 387.50Mi
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 8.62Ti
totalDataCapacity: 24.66Ti
usedCacheSpace: 387.50Mi
usedDataSpace: 11.50Gi
metaSummary:
aliveHost:
- 10.211.5.11
- 10.211.30.76
- 10.211.214.135
leader: 10.211.5.11:10100
status: META_RUNNING
All-Flash mode
Modify the iomesh.yaml file:
iomesh:
  # Whether to create IOMeshCluster object
  create: true
  # Use All-Flash Mode or Hybrid-Flash Mode; All-Flash mode rejects mounting the
  # `cacheWithJournal` and `rawCache` types and enables mounting `dataStoreWithJournal`.
  diskDeploymentMode: "allFlash"
  # Note: the comment above does not spell out the exact keyword -- it is NOT "ALL-Flash".
  # Applying the wrong value fails:
  # ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
  # Error: UPGRADE FAILED: cannot patch "iomesh" with kind IOMeshCluster: IOMeshCluster.iomesh.com "iomesh" is invalid: spec.diskDeploymentMode: Unsupported value: "ALL-Flash": supported values: "hybridFlash", "allFlash"
  deviceMap:
    # cacheWithJournal:
    #   selector:
    #     matchExpressions:
    #       - key: iomesh-system/disk
    #         operator: In
    #         values:
    #           - SSD
    # dataStore:
    #   selector:
    #     matchExpressions:
    #       - key: iomesh-system/disk
    #         operator: In
    #         values:
    #           - HDD
    dataStoreWithJournal:
      selector:
        matchLabels:
          iomesh.com/bd-deviceType: disk
        matchExpressions:
          - key: iomesh.com/bd-driverType
            operator: In
            values:
              - SSD
This part of the configuration can be seen in the CRD yaml:
diskDeploymentMode:
  default: hybridFlash
  description: DiskDeploymentMode set this IOMesh cluster start with
    all-flash mode or hybrid-flash mode. In all-flash mode, the DeviceManager
    will reject mount any `Cache` type partition. In hybrid-flash mode,
    the DeviceManager will reject mount `dataStoreWithJournal` type
    partition.
  enum:
    - hybridFlash
    - allFlash
  type: string
After applying:
root@cp01:~/iomesh-offline# ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
Release "iomesh" has been upgraded. Happy Helming!
NAME: iomesh
LAST DEPLOYED: Wed Dec 28 11:30:02 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 8
TEST SUITE: None
The disks are re-claimed:
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89 worker02 2400476553216 Unclaimed Active 25h
blockdevice-0c6a53344f61d5f4789c393acf459c83 cp01 1600321314816 Claimed Active 25h
blockdevice-4f6835975745542e4a36b0548a573ef3 cp01 2400476553216 Unclaimed Active 25h
blockdevice-5254ac6aa3528ba39dd3cb18663a1211 cp01 1600321314816 Claimed Active 25h
blockdevice-5f40eb004d563e77febbde69d930880c worker01 2400476553216 Unclaimed Active 25h
blockdevice-6e9e9eaafee63535906f3f3b9ab35687 worker01 2400476553216 Unclaimed Active 25h
blockdevice-73f728aca1ab2f74a833303fffc59301 worker01 2400476553216 Unclaimed Active 25h
blockdevice-7401844e99d0b75f2543768a0464e16c cp01 2400476553216 Unclaimed Active 25h
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb worker02 1600321314816 Claimed Active 25h
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110 worker02 2400476553216 Unclaimed Active 25h
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0 worker01 1600321314816 Claimed Active 25h
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf worker02 2400476553216 Unclaimed Active 25h
blockdevice-d116cd6ee3b11542beee3737d03de26a worker02 1600321314816 Claimed Active 25h
blockdevice-d81eadce662fe037fa84d1ffa4e73a37 worker01 1600321314816 Claimed Active 25h
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6 worker01 2400476553216 Unclaimed Active 25h
blockdevice-de6e4ce89275cd799fdf88246c445399 cp01 2400476553216 Unclaimed Active 25h
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1 worker02 2400476553216 Unclaimed Active 25h
blockdevice-f505ed7231329ab53ebe26e8a00d4483 cp01 2400476553216 Unclaimed Active 25h
Check the system storage again:
root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -o yaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
kind: IOMeshCluster
metadata:
annotations:
meta.helm.sh/release-name: iomesh
meta.helm.sh/release-namespace: iomesh-system
creationTimestamp: "2022-12-27T09:38:58Z"
finalizers:
- iomesh.com/iomesh-cluster-protection
- iomesh.com/access-protection
generation: 5
labels:
app.kubernetes.io/instance: iomesh
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: iomesh
app.kubernetes.io/part-of: iomesh
app.kubernetes.io/version: v5.1.2-rc14
helm.sh/chart: iomesh-v0.11.1
name: iomesh
namespace: iomesh-system
resourceVersion: "692613"
uid: cd33bce1-9400-4914-aa32-75a13f07d13e
spec:
chunk:
dataCIDR: 10.234.1.0/24
deviceMap:
dataStoreWithJournal:
selector:
matchExpressions:
- key: iomesh.com/bd-driverType
operator: In
values:
- SSD
matchLabels:
iomesh.com/bd-deviceType: disk
devicemanager:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-devicemanager
tag: v0.11.1
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-chunkd
tag: v5.1.2-rc14
replicas: 3
resources: {}
diskDeploymentMode: allFlash
meta:
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-metad
tag: v5.1.2-rc14
replicas: 3
resources: {}
portOffset: -1
probe:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-probe
tag: v0.11.1
reclaimPolicy:
blockdevice: Delete
volume: Delete
redirector:
dataCIDR: 10.234.1.0/24
image:
pullPolicy: IfNotPresent
repository: iomesh/zbs-iscsi-redirectord
tag: v5.1.2-rc14
resources: {}
storageClass: hostpath
toolbox:
image:
pullPolicy: IfNotPresent
repository: iomesh/operator-toolbox
tag: v0.11.1
status:
attachedDevices:
- device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
mountType: dataStoreWithJournal
nodeName: worker01
- device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
mountType: dataStoreWithJournal
nodeName: worker01
- device: blockdevice-d116cd6ee3b11542beee3737d03de26a
mountType: dataStoreWithJournal
nodeName: worker02
- device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
mountType: dataStoreWithJournal
nodeName: worker02
- device: blockdevice-0c6a53344f61d5f4789c393acf459c83
mountType: dataStoreWithJournal
nodeName: cp01
- device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
mountType: dataStoreWithJournal
nodeName: cp01
license:
expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
maxChunkNum: 3
maxPhysicalDataCapacity: 0 TB
maxPhysicalDataCapacityPerNode: 128 TB
serial: 44831211-c377-486e-acc5-7b4ea354091e
signDate: Tue, 27 Dec 2022 10:07:30 UTC
softwareEdition: COMMUNITY
subscriptionExpirationDate: "0"
subscriptionStartDate: "0"
readyReplicas:
iomesh-chunk: 3
iomesh-meta: 3
runningImages:
chunk:
iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
meta:
iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
summary:
chunkSummary:
chunks:
- id: 1
ip: 10.234.1.23
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 0B
totalDataCapacity: 2.85Ti
usedCacheSpace: 0B
usedDataSpace: 0B
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 2
ip: 10.234.1.22
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 0B
totalDataCapacity: 2.85Ti
usedCacheSpace: 0B
usedDataSpace: 0B
status: CHUNK_STATUS_CONNECTED_HEALTHY
- id: 3
ip: 10.234.1.21
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 0B
totalDataCapacity: 2.85Ti
usedCacheSpace: 0B
usedDataSpace: 0B
status: CHUNK_STATUS_CONNECTED_HEALTHY
clusterSummary:
spaceInfo:
dirtyCacheSpace: 0B
failureCacheSpace: 0B
failureDataSpace: 0B
totalCacheCapacity: 0B
totalDataCapacity: 8.56Ti
usedCacheSpace: 0B
usedDataSpace: 0B
metaSummary:
aliveHost:
- 10.211.5.11
- 10.211.30.76
- 10.211.214.135
leader: 10.211.5.11:10100
status: META_RUNNING
kind: List
metadata:
resourceVersion: ""
- Monitoring pod (runs the zbs Python client image for inspection)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyzbs
spec:
  selector:
    matchLabels:
      app: pyzbs
  replicas: 1
  template:
    metadata:
      labels:
        app: pyzbs # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: pyzbs
          image: iomesh/zbs-client-py-builder:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
Uninstall
root@cp01:~/iomesh-offline# ./helm uninstall -n iomesh-system iomesh
These resources were kept due to the resource policy:
[Deployment] iomesh-openebs-ndm-operator
[Deployment] iomesh-zookeeper-operator
[Deployment] operator
[DaemonSet] iomesh-hostpath-provisioner
[DaemonSet] iomesh-openebs-ndm
[RoleBinding] iomesh-zookeeper-operator
[RoleBinding] iomesh:leader-election
[Role] iomesh-zookeeper-operator
[Role] iomesh:leader-election
[ClusterRoleBinding] iomesh-hostpath-provisioner
[ClusterRoleBinding] iomesh-openebs-ndm
[ClusterRoleBinding] iomesh-zookeeper-operator
[ClusterRoleBinding] iomesh:manager
[ClusterRole] iomesh-hostpath-provisioner
[ClusterRole] iomesh-openebs-ndm
[ClusterRole] iomesh-zookeeper-operator
[ClusterRole] iomesh:manager
[ConfigMap] iomesh-openebs-ndm-config
[ServiceAccount] openebs-ndm
[ServiceAccount] zookeeper-operator
[ServiceAccount] iomesh-operator
release "iomesh" uninstalled
Q1 - A pod on one node fails to mount its PVC
Warning FailedMount 5m25s kubelet Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[fio-pvc-worker01 kube-api-access-thpm9]: timed out waiting for the condition
Warning FailedMapVolume 66s (x11 over 7m20s) kubelet MapVolume.SetUpDevice failed for volume "pvc-bbe904d5-0362-481b-8ec4-e059c0ac2d52" : rpc error: code = Internal desc = failed to attach &{LunId:8 Initiator:iqn.2020-05.com.iomesh:817dd926-3e0f-4c82-9c02-5e30b8e045c5-59068042-6fff-4700-b0b4-537376943d5f IFace:59068042-6fff-4700-b0b4-537376943d5f Portal:127.0.0.1:3260 TargetIqn:iqn.2016-02.com.smartx:system:ebc8b5ba-7a07-4651-9986-f7ea4fe0121e ChapInfo:}, failed to iscsiadm discovery target, command: [iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260 -I 59068042-6fff-4700-b0b4-537376943d5f -o update], output: iscsiadm: Failed to load module tcp: No such file or directory
iscsiadm: Could not load transport tcp.Dropping interface 59068042-6fff-4700-b0b4-537376943d5f.
, error: exit status 21
Warning FailedMount 54s (x2 over 3m9s) kubelet Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[kube-api-access-thpm9 fio-pvc-worker01]: timed out waiting for the condition
Re-check the following:
1. Edit the iSCSI configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf
---
2. Ensure the iscsi_tcp module is loaded
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modules-load.d/iscsi-tcp.conf'
---
3. Start the iscsid service
systemctl enable --now iscsid