Installing and Configuring the GlusterFS Distributed Filesystem on CentOS 7
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.12/
Tags: GlusterFS
Original work. Reposting is allowed, but please credit the original source, the author, and this notice with a hyperlink; otherwise legal liability will be pursued. http://hzde0128.blog.51cto.com/918211/1898622
I. Host planning
Operating system: CentOS 7.2.1511
node1: 172.17.0.1 gf1
node2: 172.17.0.2 gf2
node3: 172.17.0.3 gf3
node4: 172.17.0.4 gf4
client: 172.17.0.5 glusterfs-client
II. Installation
1. Install glusterfs-server on node1 through node4
yum install -y centos-release-gluster38
yum install -y glusterfs glusterfs-server glusterfs-fuse
Enable the service at boot and start it:
systemctl enable glusterd.service
systemctl start glusterd.service
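To confirm the daemon actually came up on each node, a quick optional check (standard systemd and gluster commands):
systemctl status glusterd.service
gluster --version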
2. On gf1-gf4, build the GlusterFS trusted pool by adding every node to the cluster. First make sure each node can resolve the others' hostnames:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.17.0.1 gf1
172.17.0.2 gf2
172.17.0.3 gf3
172.17.0.4 gf4
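The same host entries must be present on every node (and later on the client). If password-less SSH between the nodes is already set up, one way to push the file out is a simple loop; this is only a convenience sketch, editing each node's /etc/hosts by hand works just as well:
for h in gf2 gf3 gf4; do scp /etc/hosts $h:/etc/hosts; done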
Then add the nodes to the trusted pool:
[root@redis1 ~]# gluster peer probe gf4
peer probe: success. Probe on localhost not needed
[root@redis1 ~]# gluster peer probe gf2
peer probe: success.
[root@redis1 ~]# gluster peer probe gf3
peer probe: success.
[root@redis1 ~]# gluster peer probe gf1
peer probe: success.
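Besides gluster peer status (next step), gluster pool list prints a compact one-line-per-member view of the trusted pool and is handy for a quick check:
gluster pool list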
3. Check the peer status
[root@redis1 ~]# gluster peer status
Number of Peers: 3
Hostname: gf2
Uuid: e3ee9ce6-ffb3-40e4-8e92-2a13619b9383
State: Peer in Cluster (Connected)
Hostname: gf3
Uuid: b1490f21-aa2c-4a02-ac75-1909ab4ba636
State: Peer in Cluster (Connected)
Hostname: gf1
Uuid: 5d2e75be-6573-4528-ac61-1618f0e6f064
State: Peer in Cluster (Connected)
4. Create the data storage directory on gf1-gf4
# mkdir -p /usr/local/share/models
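The directory has to exist on all four nodes. Assuming password-less SSH from gf1 to the other nodes, a sketch to create it everywhere in one go (otherwise just run the mkdir on each node):
for h in gf1 gf2 gf3 gf4; do ssh $h 'mkdir -p /usr/local/share/models'; done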
5. Create the GlusterFS volume on gf1
Note:
With replica 4, every one of the 4 nodes keeps a full copy of the data, i.e. each file is stored 4 times, one copy per node.
Without replica 4, the volume is distributed: the disk space of the 4 nodes is aggregated into one large volume and each file lives on only one brick.
[root@redis1 ~]# gluster volume create models replica 4 gf1:/usr/local/share/models gf2:/usr/local/share/models gf3:/usr/local/share/models gf4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
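For comparison, a plain distributed volume (no replica keyword) would aggregate the bricks instead of mirroring them; the volume name distvol and the brick paths below are only illustrative, do not run this against the bricks already used above:
gluster volume create distvol gf1:/usr/local/share/dist gf2:/usr/local/share/dist gf3:/usr/local/share/dist gf4:/usr/local/share/dist force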
6. Start the volume
[root@redis1 ~]# gluster volume start models
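After starting the volume, gluster volume status shows whether every brick process and the self-heal daemons are online:
gluster volume status models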
To remove a node from the pool:
gluster peer detach gf1
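Note that a node still hosting a brick of a started volume cannot be detached directly; its brick has to be removed first. A hedged sketch for shrinking this replica-4 volume to replica 3 before detaching gf1 (make sure the data is fully healed before doing this on a production volume):
gluster volume remove-brick models replica 3 gf1:/usr/local/share/models force
gluster peer detach gf1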
Check the volume status:
[root@redis1 ~]# gluster volume info
Volume Name: models
Type: Replicate
Volume ID: 3834bc4e-0511-457e-9bde-51eb86e653fd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: gf1:/usr/local/share/models
Brick2: gf2:/usr/local/share/models
Brick3: gf3:/usr/local/share/models
Brick4: gf4:/usr/local/share/models
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
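Volume options such as the ones listed under Options Reconfigured are changed with gluster volume set and read back with gluster volume get; the cache size below is only an illustrative value:
gluster volume set models performance.cache-size 256MB
gluster volume get models performance.cache-size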
Clearing the GlusterFS configuration
From /etc/glusterfs/glusterd.vol you can see that the GlusterFS working directory is /var/lib/glusterd:
[root@localhost ~]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
#   option transport.address-family inet6
#   option base-port 49152
end-volume
To wipe the GlusterFS configuration, delete the working directory and restart the service:
[root@localhost ~]# rm -rf /var/lib/glusterd/
[root@localhost ~]# systemctl restart glusterd.service
Deleting a volume:
gluster volume stop models
gluster volume delete models
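After a volume is deleted, the brick directories still carry GlusterFS extended attributes and a hidden .glusterfs directory; to reuse the same path for a new volume they have to be cleared first. A sketch, to be run on every node (double-check the path before deleting anything):
setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
setfattr -x trusted.gfid /usr/local/share/models
rm -rf /usr/local/share/models/.glusterfs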
III. Client
1. Deploy the GlusterFS client and mount the GlusterFS filesystem
[root@client ~]# yum install -y centos-release-gluster38
[root@client ~]# yum install -y glusterfs glusterfs-fuse
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.17.0.1 gf1
172.17.0.2 gf2
172.17.0.3 gf3
172.17.0.4 gf4
172.17.0.5 glusterfs-client
[root@localhost ~]# mkdir -p /gluster
[root@localhost ~]# mount -t glusterfs gf1:/models /gluster
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 14G 12G 1.5G 90% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 375M 3.5G 10% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
tmpfs 783M 0 783M 0% /run/user/0
gf1:/models 14G 13G 882M 94% /gluster
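To make the mount survive a reboot, an /etc/fstab entry along these lines can be added on the client; the _netdev option delays mounting until the network is up:
gf1:/models /gluster glusterfs defaults,_netdev 0 0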
2. Check the result
[root@client ~]# df -h
3. Observe the behaviour of the distributed filesystem
Remount the volume at /mnt/models and create some test directories:
[root@client ~]# umount /gluster
[root@client ~]# mkdir -p /mnt/models
[root@client ~]# mount -t glusterfs -o rw gf1:/models /mnt/models/
[root@client ~]# cd /mnt/models/
[root@client models]# for i in `seq -w 10`; do mkdir $i ; done
[root@client models]# ll
total 40
drwxr-xr-x 2 root root 4096 Feb 14 21:56 01
drwxr-xr-x 2 root root 4096 Feb 14 21:59 02
drwxr-xr-x 2 root root 4096 Feb 14 21:56 03
drwxr-xr-x 2 root root 4096 Feb 14 21:56 04
drwxr-xr-x 2 root root 4096 Feb 14 21:56 05
drwxr-xr-x 2 root root 4096 Feb 14 21:56 06
drwxr-xr-x 2 root root 4096 Feb 14 21:59 07
drwxr-xr-x 2 root root 4096 Feb 14 21:56 08
drwxr-xr-x 2 root root 4096 Feb 14 21:56 09
drwxr-xr-x 2 root root 4096 Feb 14 21:59 10
Check on each of the four servers that the newly created directories have been replicated:
[root@gf1 ~]# ls /usr/local/share/models/ -l
total 0
drwxr-xr-x 2 root root 6 Feb 14 21:56 01
drwxr-xr-x 2 root root 6 Feb 14 21:59 02
drwxr-xr-x 2 root root 6 Feb 14 21:56 03
drwxr-xr-x 2 root root 6 Feb 14 21:56 04
drwxr-xr-x 2 root root 6 Feb 14 21:56 05
drwxr-xr-x 2 root root 6 Feb 14 21:56 06
drwxr-xr-x 2 root root 6 Feb 14 21:59 07
drwxr-xr-x 2 root root 6 Feb 14 21:56 08
drwxr-xr-x 2 root root 6 Feb 14 21:56 09
drwxr-xr-x 2 root root 6 Feb 14 21:59 10
All four servers show the same directories, so replication works.
When a single server fails, for example because it loses its network connection, creating a file takes noticeably longer.
Once the server reconnects, the files are synchronized to it promptly.
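One way to reproduce this is to take one server's bricks offline, write from the client, then bring the server back and watch self-heal catch up. A rough sketch (the choice of gf4 and the file names are only illustrative; run the server commands on gf4 and the loop on the client):
[root@gf4 ~]# systemctl stop glusterd.service; pkill glusterfsd
[root@client ~]# for i in `seq -w 5`; do echo test > /mnt/models/file$i; done
[root@gf4 ~]# systemctl start glusterd.service
[root@gf4 ~]# gluster volume heal models info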