A minimal cluster needs two machines, and this walkthrough uses two. Each machine has two disks: one for the operating system and a second one that will serve as the distributed-storage disk.
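As a minimal sketch of preparing the data disk on each node (assuming it shows up as /dev/sdb and is mounted at /opt/data, the path used for the Gluster brick later; adjust the device and mount point to your environment):
mkfs.xfs /dev/sdb
mkdir -p /opt/data
mount /dev/sdb /opt/data
echo '/dev/sdb /opt/data xfs defaults 0 0' >> /etc/fstab
Both nodes resolve each other by hostname through /etc/hosts: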
# cat /etc/hosts
192.168.12.16 k8s-worker01
192.168.12.17 k8s-worker02
Deploying the simplest Gluster cluster takes only two machines; the officially recommended configuration is 2 CPUs, 4 GB of RAM, and a gigabit NIC. Installation differs slightly between distributions; this article uses CentOS 7.8 x86_64 as the example. Start by adding the Gluster repository:
yum install centos-release-gluster
The centos-release-gluster package comes from the Extras repository, which is enabled by default on CentOS, and it pulls in the LTM (Long-Term-Maintenance) release of GlusterFS by default.
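To confirm which GlusterFS version the newly added repository provides, the available packages can be listed first (optional check):
yum --showduplicates list glusterfs-server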
yum install -y glusterfs-server
systemctl enable glusterd
systemctl start glusterd
# netstat -lntup|grep glusterd
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 9517/glusterd
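If firewalld is running on the nodes, the Gluster ports must be reachable between them. A sketch assuming your firewalld version ships a glusterfs service definition (otherwise open 24007/tcp for management and the brick port range, e.g. 49152-49251/tcp, manually):
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload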
On node 1, add node 2 to the trusted pool (peer probe):
# gluster peer probe k8s-worker02
peer probe: success.
The following command lists all peers that have been added (the local node itself is not shown).
On worker01:
gluster peer status
Number of Peers: 1
Hostname: k8s-worker02
Uuid: 0d3e82ee-ec52-4b4e-8873-dc73087f900a
State: Peer in Cluster (Connected)
On worker02:
gluster peer status
Number of Peers: 1
Hostname: k8s-worker01
Uuid: c30a2582-06c7-4d74-b95a-920f90f2bf69
State: Peer in Cluster (Connected)
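For a view that also includes the local node itself, gluster pool list prints every member of the trusted storage pool:
gluster pool list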
Create the brick directory (on both nodes):
mkdir -p /opt/data/gfs
# gluster volume create gv0 replica 2 k8s-worker{01,02}:/opt/data/gfs/gv0 force
volume create: gv0: success: please start the volume to access data
Or create it with the expanded form (force suppresses Gluster's interactive safety prompts, such as the replica-2 split-brain warning):
gluster volume create gv0 replica 2 k8s-worker01:/opt/data/gfs/gv0 k8s-worker02:/opt/data/gfs/gv0 force
# gluster volume start gv0
volume start: gv0: success
Check the volume status; the output covers the bricks and self-heal daemons on all nodes:
# gluster volume status all
Status of volume: gv0
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick k8s-worker01:/opt/data/gfs/gv0 49152 0 Y 13215
Brick k8s-worker02:/opt/data/gfs/gv0 49152 0 Y 20566
Self-heal Daemon on localhost N/A N/A Y 13237
Self-heal Daemon on k8s-worker02 N/A N/A Y 20587
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 7fa2d40f-6fb5-42c5-9642-21b145eea21c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-worker01:/opt/data/gfs/gv0
Brick2: k8s-worker02:/opt/data/gfs/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
Create a directory to serve as the mount point:
mkdir /opt/logs/gfs -p
Mount the volume:
mount -t glusterfs k8s-worker01:/gv0 /opt/logs/gfs
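To make the mount persistent across reboots, an /etc/fstab entry can be added; this sketch assumes the backup-volfile-servers mount option is available in your GlusterFS version (it lets the client fetch the volume file from the other node if k8s-worker01 is down):
echo 'k8s-worker01:/gv0 /opt/logs/gfs glusterfs defaults,_netdev,backup-volfile-servers=k8s-worker02 0 0' >> /etc/fstab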
Check whether the volume is mounted:
# df -h|grep gv0
k8s-worker01:/gv0 1008G 18G 950G 2% /opt/logs/gfs
Finally, test whether data written to the mounted directory is replicated between the nodes.
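A minimal check (test.txt is just a hypothetical file name), assuming the volume is mounted the same way on the second node.
On k8s-worker01, write a file through the mount:
echo "hello gluster" > /opt/logs/gfs/test.txt
On k8s-worker02, mount the volume and verify the file appears; it should also show up in the brick directory on both nodes:
mkdir -p /opt/logs/gfs
mount -t glusterfs k8s-worker02:/gv0 /opt/logs/gfs
cat /opt/logs/gfs/test.txt
ls /opt/data/gfs/gv0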