Server information:

Hostname   OS           IP address     Disks                                                        Mount points
Node1      CentOS 7.3   192.168.1.1    /dev/sdb(3GB), /dev/sdc(4GB), /dev/sdd(5GB), /dev/sde(6GB)   /b1, /c2, /d3, /e4
Node2      CentOS 7.3   192.168.1.2    /dev/sdb(3GB), /dev/sdc(4GB), /dev/sdd(5GB), /dev/sde(6GB)   /b1, /c2, /d3, /e4
Node3      CentOS 7.3   192.168.1.3    /dev/sdb(3GB), /dev/sdc(4GB), /dev/sdd(5GB)                  /b1, /c2, /d3
Node4      CentOS 7.3   192.168.1.4    /dev/sdb(3GB), /dev/sdc(4GB), /dev/sdd(5GB)                  /b1, /c2, /d3

Volume information:

Volume name     Volume type                     Size    Bricks
dis-volume      Distributed volume              12GB    Node1(/e4), Node2(/e4)
Stripe-volume   Striped volume                  10GB    Node1(/d3), Node2(/d3)
Rep-volume      Replicated volume               5GB     Node3(/d3), Node4(/d3)
Dis-stripe      Distributed striped volume      12GB    Node1(/b1), Node2(/b1), Node3(/b1), Node4(/b1)
Dis-rep         Distributed replicated volume   8GB     Node1(/c2), Node2(/c2), Node3(/c2), Node4(/c2)

一、Server-side configuration

  1. Prepare the environment
    Perform the following steps on all nodes.
    Power on the four virtual machines, add disks according to the table above, partition them with fdisk, format them with mkfs, create the corresponding mount directories, mount the formatted partitions on those directories, and finally edit /etc/fstab so the mounts persist across reboots.
    Using node1 as an example:
    Disable the firewall and SELinux:
    #systemctl stop firewalld
    #systemctl disable firewalld
    #setenforce 0
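    setenforce 0 only lasts until the next reboot; to keep SELinux permanently disabled, the config file can also be edited (an optional sketch):
    #sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config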

2.(1) Create the directories:
#mkdir -p /b1 /c2 /d3 /e4
(2) Partition: #fdisk /dev/sdb
(3) Format: #mkfs.ext4 /dev/sdb1
(4) Mount: #mount /dev/sdb1 /b1
Repeat steps (2)-(4) for /dev/sdc, /dev/sdd and /dev/sde (mounted on /c2, /d3 and /e4).
(5) Make the mounts permanent:
#vim /etc/fstab
/dev/sdb1 /b1 ext4 defaults 0 0
/dev/sdc1 /c2 ext4 defaults 0 0
/dev/sdd1 /d3 ext4 defaults 0 0
/dev/sde1 /e4 ext4 defaults 0 0
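The same preparation can be scripted. This is only a sketch: it assumes the disk-to-mount-point mapping from the table above and a single primary partition per disk; on node3 and node4 drop the sde:/e4 pair.
for pair in sdb:/b1 sdc:/c2 sdd:/d3 sde:/e4; do
    disk=${pair%%:*}; dir=${pair##*:}
    mkdir -p "$dir"
    echo -e "n\np\n1\n\n\nw" | fdisk /dev/$disk    # one primary partition spanning the whole disk
    mkfs.ext4 /dev/${disk}1
    mount /dev/${disk}1 "$dir"
    echo "/dev/${disk}1 $dir ext4 defaults 0 0" >> /etc/fstab
done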

3. Configure the hostname and hosts files
[root@node1 ~]# vim /etc/hostname
node1
[root@node2 ~]# vim /etc/hostname
node2
[root@node3 ~]# vim /etc/hostname
node3
[root@node4 ~]# vim /etc/hostname
node4
[root@node1 ~]# vim /etc/hosts
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4
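The hostname can also be set with hostnamectl, and the finished hosts file can be copied from node1 to the other nodes by IP (a sketch assuming root SSH access between the nodes):
#hostnamectl set-hostname node1
#for ip in 192.168.1.2 192.168.1.3 192.168.1.4; do scp /etc/hosts root@$ip:/etc/hosts; done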
4. Install the software (on all servers)
#yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
5. Start glusterd (on all nodes)
#systemctl start glusterd
#systemctl enable glusterd
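It is worth confirming on each node that the daemon is actually running:
#systemctl status glusterd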
6. Add the nodes (run on node1): probe node1 through node4
[root@node1 ~]#gluster peer probe node1
peer probe: success. Probe on localhost not needed
[root@node1 ~]#gluster peer probe node2
peer probe: success.
[root@node1 ~]#gluster peer probe node3
peer probe: success.
[root@node1 ~]#gluster peer probe node4
peer probe: success.
7. Check the cluster status
[root@node1 ~]#gluster peer status
Number of Peers: 3 // three peers in addition to the local node

Hostname: node2
Uuid: 6ef95b2b-a30e-4be2-a1bd-12f22cc51c4a
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 764a7f63-0978-41b0-a67c-e74a16c0c1fb
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 396abdcd-430c-4334-8773-50a4bfc87688
State: Peer in Cluster (Connected)
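A more compact view of the same membership is available with gluster pool list, which also includes the local node (sketch):
[root@node1 ~]#gluster pool list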
8.
(1) Create the volumes
① Create a distributed volume
[root@node1 ~]#gluster volume create dis-volume node1:/e4 node2:/e4 force
volume create: dis-volume: success: please start the volume to access data
② Create a striped volume
[root@node1 ~]#gluster volume create stripe-volume stripe 2 node1:/d3 node2:/d3 force
volume create: stripe-volume: success: please start the volume to access data
[root@node1 ~]#gluster volume start stripe-volume
volume start: stripe-volume: success
③ Create a replicated volume
[root@node1 ~]#gluster volume create rep-volume replica 2 node3:/d3 node4:/d3 force
volume create: rep-volume: success: please start the volume to access data
[root@node1 ~]#gluster volume start rep-volume
volume start: rep-volume: success
④ Create a distributed striped volume
[root@node1 ~]#gluster volume create dis-stripe stripe 2 node1:/b1 node2:/b1 node3:/b1 node4:/b1 force
volume create: dis-stripe: success: please start the volume to access data
[root@node1 ~]#gluster volume start dis-stripe
volume start: dis-stripe: success
⑤ Create a distributed replicated volume
[root@node1 ~]#gluster volume create dis-rep replica 2 node1:/c2 node2:/c2 node3:/c2 node4:/c2 force
volume create: dis-rep: success: please start the volume to access data
[root@node1 ~]#gluster volume start dis-rep
volume start: dis-rep: success

(2) View the volume:
[root@node1 ~]#gluster volume info dis-volume

Volume Name: dis-volume
Type: Distribute
Volume ID: 097119a3-21b3-4bb7-b984-26af52e74101
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/e4
Brick2: node2:/e4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
(3) Start the volume:
[root@node1 ~]# gluster volume status    (check the volume status)
[root@node1 ~]#gluster volume start dis-volume
volume start: dis-volume: success
[root@node1 ~]#gluster volume info dis-volume

Volume Name: dis-volume
Type: Distribute
Volume ID: 097119a3-21b3-4bb7-b984-26af52e74101
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/e4
Brick2: node2:/e4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
If no volume type is specified, a distributed volume is created by default.

(4) Check the status of the volumes
[root@node2 ~]#gluster volume status
Status of volume: dis-rep
Gluster process TCP Port RDMA Port Online Pid

Brick node1:/c2 49155 0 Y 11872
Brick node2:/c2 49155 0 Y 13008
Brick node3:/c2 49154 0 Y 11038
Brick node4:/c2 49154 0 Y 14778
Self-heal Daemon on localhost N/A N/A Y 13028
Self-heal Daemon on node1 N/A N/A Y 11892
Self-heal Daemon on node3 N/A N/A Y 11058
Self-heal Daemon on node4 N/A N/A Y 14798

Task Status of Volume dis-rep

There are no active volume tasks

Status of volume: dis-stripe
Gluster process TCP Port RDMA Port Online Pid

Brick node1:/b1 49154 0 Y 11825
Brick node2:/b1 49154 0 Y 12974
Brick node3:/b1 49153 0 Y 11004
Brick node4:/b1 49153 0 Y 14743

Task Status of Volume dis-stripe

There are no active volume tasks

Status of volume: dis-volume
Gluster process TCP Port RDMA Port Online Pid

Brick node1:/e4 49152 0 Y 11547
Brick node2:/e4 49152 0 Y 12644

Task Status of Volume dis-volume

There are no active volume tasks

Status of volume: rep-volume
Gluster process TCP Port RDMA Port Online Pid

Brick node3:/d3 49152 0 Y 10950
Brick node4:/d3 49152 0 Y 14689
Self-heal Daemon on localhost N/A N/A Y 13028
Self-heal Daemon on node1 N/A N/A Y 11892
Self-heal Daemon on node3 N/A N/A Y 11058
Self-heal Daemon on node4 N/A N/A Y 14798

Task Status of Volume rep-volume

There are no active volume tasks

Status of volume: stripe-volume
Gluster process TCP Port RDMA Port Online Pid

Brick node1:/d3 49153 0 Y 11745
Brick node2:/d3 49153 0 Y 12905

Task Status of Volume stripe-volume

There are no active volume tasks

二、Client configuration

1. Install the client software
[root@node5 yum.repos.d]# yum -y install glusterfs glusterfs-fuse

2. Create the mount directories
[root@node5 ~]#mkdir -p /test/{dis,stripe,rep,dis_and_stripe,dis_and_rep}

3. Edit the hosts file
[root@node5 ~]# vim /etc/hosts
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4
192.168.1.5 node5

4. Mount the GlusterFS volumes
[root@node5 test]# mount -t glusterfs node1:dis-volume /test/dis
[root@node5 test]# mount -t glusterfs node1:stripe-volume /test/stripe
[root@node5 test]# mount -t glusterfs node1:rep-volume /test/rep
[root@node5 test]# mount -t glusterfs node1:dis-stripe /test/dis_and_stripe
[root@node5 test]# mount -t glusterfs node1:dis-rep /test/dis_and_rep
[root@node5 test]#
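The mounts can be checked on the client; the FUSE mounts show up with type fuse.glusterfs:
[root@node5 test]# df -hT | grep glusterfs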

5. Permanent mounts
[root@node5 test]# vim /etc/fstab
node1:dis-volume /test/dis glusterfs defaults,_netdev 0 0
node1:stripe-volume /test/stripe glusterfs defaults,_netdev 0 0
node1:rep-volume /test/rep glusterfs defaults,_netdev 0 0
node1:dis-stripe /test/dis_and_stripe glusterfs defaults,_netdev 0 0
node1:dis-rep /test/dis_and_rep glusterfs defaults,_netdev 0 0
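To confirm that the fstab entries themselves work, the volumes can be unmounted and remounted from fstab (sketch):
[root@node5 test]# umount /test/dis /test/stripe /test/rep /test/dis_and_stripe /test/dis_and_rep
[root@node5 test]# mount -a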

三、Testing the GlusterFS file system

1. Write files to the volumes
[root@node5 ~]# dd if=/dev/zero of=demo1.log bs=43M count=1
[root@node5 ~]# dd if=/dev/zero of=demo2.log bs=43M count=1
[root@node5 ~]# dd if=/dev/zero of=demo3.log bs=43M count=1
[root@node5 ~]# dd if=/dev/zero of=demo4.log bs=43M count=1
[root@node5 ~]# dd if=/dev/zero of=demo5.log bs=43M count=1
[root@node5 ~]#cp demo* /test/dis
[root@node5 ~]#cp demo* /test/stripe
[root@node5 ~]#cp demo* /test/rep
[root@node5 ~]#cp demo* /test/dis_and_stripe
[root@node5 ~]#cp demo* /test/dis_and_rep
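A quick check that the five files landed on every mount point (sketch):
[root@node5 ~]#for d in dis stripe rep dis_and_stripe dis_and_rep; do ls -lh /test/$d; done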

2. Check how the files are distributed
(1) File distribution on the distributed volume
[root@node1 ~]# ll -h /e4
total 130M
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo3.log
drwx------ 2 root root 16K Sep 23 09:07 lost+found
[root@node2 ~]# ll -h /e4
total 86M
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
[root@node2 ~]#
(2) File distribution on the striped volume
[root@node1 ~]# ll -h /d3
total 108M
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 22M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:06 lost+found
[root@node2 ~]# ll -h /d3
total 108M
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 22M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:14 lost+found
Each 43M file has been split into two chunks of roughly 22M, one on each brick.
(3) File distribution on the replicated volume
[root@node3 ~]# ll -h /d3
total 216M
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:59 lost+found
[root@node4 ~]# ll -h /d3
total 216M
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 10:08 lost+found
Each brick holds a complete copy of every file.

3. Failure testing
Suspend node2, then test on the client whether the files can still be used normally.
(1) Test whether the distributed volume is still accessible
[root@node5 ~]# head -1 /test/dis/demo1.log
[root@node5 ~]# head -1 /test/dis/demo2.log
[root@node5 ~]# head -1 /test/dis/demo5.log
head: cannot open '/test/dis/demo5.log' for reading: No such file or directory
[root@node5 ~]# head -1 /test/dis/demo4.log
head: cannot open '/test/dis/demo4.log' for reading: No such file or directory
[root@node5 ~]# head -1 /test/dis/demo3.log
(2) Test whether the striped volume data is still accessible
[root@node5 ~]# head -1 /test/stripe/demo1.log
head: error reading '/test/stripe/demo1.log': No such file or directory
On the distributed volume only the files stored on node2 (demo4.log and demo5.log) become unavailable, while on the striped volume every file is unreadable because each file's chunks are split across node1 and node2.

Other maintenance commands:
(1) List the GlusterFS volumes
[root@node1 ~]#gluster volume list
dis-rep
dis-stripe
dis-volume
rep-volume
stripe-volume
(2) Check the status of all volumes
[root@node1 ~]#gluster volume status
(3) View information about all volumes
[root@node1 ~]#gluster volume info
(4) Stop and delete a volume
[root@node1 ~]#gluster volume stop dis-stripe
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dis-stripe: success
[root@node1 ~]#
[root@node1 ~]#gluster volume delete dis-stripe
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dis-stripe: success
[root@node1 ~]#
(5) Set volume access control
[root@node1 ~]#gluster volume set dis-rep auth.allow 192.168.1.*
volume set: success
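To go back to the default behaviour (allow all clients), the option can be reset (sketch):
[root@node1 ~]#gluster volume reset dis-rep auth.allow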