■ Distributed volume
■ Striped volume
■ Replicated volume
■ Distributed striped volume
■ Distributed replicated volume
■ Striped replicated volume
■ Distributed striped replicated volume
Create a distributed volume
gluster volume create dis-volume server1:/dir1 server2:/dir2
Create a striped volume
gluster volume create stripe-volume stripe 2 transport tcp server1:/dir1 server2:/dir2
Create a replicated volume
gluster volume create rep-volume replica 2 transport tcp server1:/dir1 server2:/dir2
Create a distributed striped volume
gluster volume create dis-stripe stripe 2 transport tcp server1:/dir1 server2:/dir2 server3:/dir3 server4:/dir4
Create a distributed replicated volume
gluster volume create dis-rep replica 2 transport tcp server1:/dir1 server2:/dir2 server3:/dir3 server4:/dir4
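The last two types in the list above follow the same pattern; these are hedged sketches with hypothetical volume names and server paths (a striped replicated volume needs stripe-count x replica-count bricks, and the distributed version needs a multiple of that):
Create a striped replicated volume (sketch)
gluster volume create str-rep-volume stripe 2 replica 2 transport tcp server1:/dir1 server2:/dir2 server3:/dir3 server4:/dir4
Create a distributed striped replicated volume (sketch)
gluster volume create dis-str-rep stripe 2 replica 2 transport tcp server1:/dir1 server2:/dir2 server3:/dir3 server4:/dir4 server5:/dir5 server6:/dir6 server7:/dir7 server8:/dir8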
| OS | IP | Hostname |
| --- | --- | --- |
| CentOS 7.4 | 20.0.0.21 | node1 |
| CentOS 7.4 | 20.0.0.22 | node2 |
| CentOS 7.4 | 20.0.0.23 | node3 |
| CentOS 7.4 | 20.0.0.24 | node4 |
| CentOS 7.4 | 20.0.0.25 | client |
Disable the firewall and SELinux (kernel protection)
This is required on every node.
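The usual commands, run on every node, are (a minimal sketch, assuming firewalld and SELinux are what need to be turned off):
systemctl stop firewalld          ### stop the firewall for the current session
systemctl disable firewalld       ### keep it from starting at boot
setenforce 0                      ### switch SELinux to permissive immediately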
[root@node1 ~]# vi /etc/hosts
20.0.0.21 node1
20.0.0.22 node2
20.0.0.23 node3
20.0.0.24 node4
### Change the hostname (run the matching command on the corresponding node)
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
hostnamectl set-hostname node4
On node1:
scp /etc/hosts [email protected]:/etc/hosts
scp /etc/hosts [email protected]:/etc/hosts
scp /etc/hosts [email protected]:/etc/hosts
getenforce: check the current SELinux status
Enforcing: enforcing mode; SELinux is running and enforcing the access checks between domains and types
Permissive: permissive mode; SELinux is running but does not enforce the domain/type checks; even if a check fails, the process can still operate on the file, though a warning is issued
Disabled: disabled mode; SELinux is not actually running
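To keep SELinux off across reboots, the common approach is to change the mode in /etc/selinux/config (a sketch; it takes effect after the next reboot):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config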
Create a script that automatically partitions, formats, and persistently mounts the new disks
[root@node1 ~]# vi gsf.sh
#!/bin/bash
# Partition, format, and persistently mount every extra disk (/dev/sdb ... /dev/sdz)
for V in $(ls /dev/sd[b-z])
do
    # create one primary partition spanning the whole disk (fdisk answers: n, p, defaults, w)
    echo -e "n\np\n\n\n\nw\n" | fdisk $V
    # format the new partition as XFS with 512-byte inodes
    mkfs.xfs -i size=512 ${V}1 &>/dev/null
    sleep 1
    # strip the /dev/ prefix, e.g. sdb -> mount point /data/sdb1
    M=$(echo "$V" | awk -F "/" '{print $3}')
    mkdir -p /data/${M}1 &>/dev/null
    # add a persistent entry to /etc/fstab and mount it
    echo "${V}1 /data/${M}1 xfs defaults 0 0" >> /etc/fstab
    mount -a &>/dev/null
done
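The script has to run on every node. One way, assuming root SSH access to the other nodes as used for the hosts file above, is to run it locally and then push and execute it remotely (hypothetical loop):
[root@node1 ~]# chmod +x gsf.sh && ./gsf.sh
[root@node1 ~]# for ip in 20.0.0.22 20.0.0.23 20.0.0.24; do scp gsf.sh root@$ip:/root/ && ssh root@$ip "bash /root/gsf.sh"; done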
[root@node1 yum.repos.d]# vim local.repo
[GLFS]
name=glfs
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
[root@node1 yum.repos.d]# yum clean all
[root@node1 yum.repos.d]# yum makecache
[root@node1 yum.repos.d]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
[root@node1 yum.repos.d]# systemctl start glusterd.service
[root@node1 yum.repos.d]# systemctl enable glusterd.service
[root@node1 yum.repos.d]# systemctl status glusterd.service
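The repo, install, and service steps above must be repeated on node2, node3, and node4. Assuming SSH access, a quick check that glusterd is running everywhere (each command should report "active"):
[root@node1 yum.repos.d]# for h in node2 node3 node4; do ssh $h "systemctl is-active glusterd"; done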
Synchronize time on all nodes:
ntpdate ntp.aliyun.com
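To keep the clocks aligned after this one-off sync, a periodic cron job is one option (a sketch; any reachable NTP server works):
[root@node1 ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com &>/dev/null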
[root@node1 yum.repos.d]# gluster peer probe node2
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node3
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node4
peer probe: success.
'View the trusted storage pool'
[root@node1 ~]# gluster peer status
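gluster pool list gives a more compact view of the same pool membership, if preferred:
[root@node1 ~]# gluster pool list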
[root@node1 yum.repos.d]# gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force   ### create the distributed volume; force: force execution
volume create: dis-vol: success: please start the volume to access data
[root@node1 yum.repos.d]# gluster volume info dis-vol   ### view the distributed volume information
Volume Name: dis-vol
Type: Distribute
Volume ID: 5ced04aa-1fa5-42e1-b273-2da41bb31469
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdb1
Brick2: node2:/data/sdb1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 yum.repos.d]# gluster volume list   ### list all volumes
dis-vol
[root@node1 yum.repos.d]# gluster volume start dis-vol   ### start the distributed volume
volume start: dis-vol: success
[root@node1 yum.repos.d]# gluster volume info dis-vol   ### view the distributed volume information again
Volume Name: dis-vol
Type: Distribute
Volume ID: 5ced04aa-1fa5-42e1-b273-2da41bb31469
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdb1
Brick2: node2:/data/sdb1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
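Besides gluster volume info, gluster volume status shows whether each brick process is actually online; the same check applies to every volume started below (optional):
[root@node1 yum.repos.d]# gluster volume status dis-vol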
[root@node2 yum.repos.d]# gluster volume create strip-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
volume create: strip-vol: success: please start the volume to access data
[root@node2 yum.repos.d]# gluster volume start strip-vol
volume start: strip-vol: success
[root@node2 yum.repos.d]# gluster volume info strip-vol
Volume Name: strip-vol
Type: Stripe
Volume ID: 66d151bd-7790-4774-a1c8-513bd0c9e796
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdc1
Brick2: node2:/data/sdc1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 yum.repos.d]# gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
volume create: rep-vol: success: please start the volume to access data
[root@node1 yum.repos.d]# gluster volume list
dis-vol
rep-vol
strip-vol
[root@node1 yum.repos.d]# gluster volume start rep-vol
volume start: rep-vol: success
[root@node1 yum.repos.d]# gluster volume create dis-strip stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force
volume create: dis-strip: success: please start the volume to access data
[root@node1 yum.repos.d]# gluster volume list
dis-strip
dis-vol
rep-vol
strip-vol
[root@node1 yum.repos.d]# gluster volume start dis-strip
volume start: dis-strip: success
[root@node1 yum.repos.d]# gluster volume info dis-strip
Volume Name: dis-strip
Type: Distributed-Stripe
Volume ID: 19a0b846-fc67-4e08-b81a-9cdb10b77bf8
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/data/sdd1
Brick2: node2:/data/sdd1
Brick3: node3:/data/sdd1
Brick4: node4:/data/sdd1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node3 yum.repos.d]# gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force
volume create: dis-rep: success: please start the volume to access data
[root@node3 yum.repos.d]# gluster volume list
dis-rep
dis-strip
dis-vol
rep-vol
strip-vol
[root@node3 yum.repos.d]# gluster volume start dis-rep
volume start: dis-rep: success
[root@node3 yum.repos.d]# gluster volume info dis-rep
Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: 1d542149-6ce1-468f-b81f-96b876181d64
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/data/sde1
Brick2: node2:/data/sde1
Brick3: node3:/data/sde1
Brick4: node4:/data/sde1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@localhost yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.21 node1
20.0.0.22 node2
20.0.0.23 node3
20.0.0.24 node4
[root@localhost yum.repos.d]# yum -y install glusterfs glusterfs-fuse
[root@localhost ~]# mkdir -p /test/dis
[root@localhost ~]# mkdir -p /test/strip
[root@localhost ~]# mkdir -p /test/rep
[root@localhost ~]# mkdir -p /test/dis_stripe
[root@localhost ~]# mkdir -p /test/dis_rep
[root@localhost ~]# mount.glusterfs node1:dis-vol /test/dis
[root@localhost ~]# mount.glusterfs node2:strip-vol /test/strip
[root@localhost ~]# mount.glusterfs node3:rep-vol /test/rep
[root@localhost ~]# mount.glusterfs node4:dis-strip /test/dis_stripe
[root@localhost ~]# mount.glusterfs node1:dis-rep /test/dis_rep
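These mounts do not survive a reboot. If persistence is wanted, matching entries can be added to /etc/fstab on the client (a sketch using the same mount points), and df -hT confirms that all five volumes are mounted:
node1:dis-vol    /test/dis         glusterfs  defaults,_netdev  0 0
node2:strip-vol  /test/strip       glusterfs  defaults,_netdev  0 0
node3:rep-vol    /test/rep         glusterfs  defaults,_netdev  0 0
node4:dis-strip  /test/dis_stripe  glusterfs  defaults,_netdev  0 0
node1:dis-rep    /test/dis_rep     glusterfs  defaults,_netdev  0 0
[root@localhost ~]# df -hT | grep glusterfs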
Create five 40 MB files
[root@localhost test]# dd if=/dev/zero of=/demo1.log bs=1M count=40
[root@localhost test]# dd if=/dev/zero of=/demo2.log bs=1M count=40
[root@localhost test]# dd if=/dev/zero of=/demo3.log bs=1M count=40
[root@localhost test]# dd if=/dev/zero of=/demo4.log bs=1M count=40
[root@localhost test]# dd if=/dev/zero of=/demo5.log bs=1M count=40
Copy the five files into each of the mount directories created above
[root@localhost test]# cp /demo* dis
[root@localhost test]# cp /demo* strip/
[root@localhost test]# cp /demo* rep/
[root@localhost test]# cp /demo* dis_stripe/
[root@localhost test]# cp /demo* dis_rep/
[root@localhost test]# ls dis
demo1.log demo2.log demo3.log demo4.log demo5.log
[root@localhost test]# ls dis_rep/
demo1.log demo2.log demo3.log demo4.log demo5.log
View the file distribution on the distributed volume
[root@node1 yum.repos.d]# ll -h /data/sdb1
total 160M
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo1.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo2.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo3.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo4.log
[root@node2 yum.repos.d]# ll -h /data/sdb1/
total 40M
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo5.log
View the file distribution on the striped volume
[root@node1 yum.repos.d]# ll -h /data/sdc1
total 100M
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo1.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo2.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo3.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo4.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo5.log
[root@node2 yum.repos.d]# ll -h /data/sdc1/
total 100M
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo4.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo5.log
View the file distribution on the replicated volume
[root@node3 yum.repos.d]# ll -h /data/sdb1
total 200M
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo1.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo2.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo3.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo4.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo5.log
[root@node4 yum.repos.d]# ll -h /data/sdb1/
total 200M
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo1.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo2.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo3.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo4.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo5.log
View the file distribution on the distributed striped volume
[root@node1 yum.repos.d]# ll -h /data/sdd1
total 80M
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo1.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo2.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo3.log
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo4.log
[root@node2 yum.repos.d]# ll -h /data/sdd1/
total 80M
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo1.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo2.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo3.log
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo4.log
[root@node3 yum.repos.d]# ll -h /data/sdd1
total 20M
-rw-r--r-- 2 root root 20M Oct 28 01:09 demo5.log
[root@node4 yum.repos.d]# ll -h /data/sdd1/
total 20M
-rw-r--r-- 2 root root 20M Oct 27 17:09 demo5.log
View the file distribution on the distributed replicated volume
[root@node1 yum.repos.d]# ll -h /data/sde1
total 160M
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo1.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo2.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo3.log
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo4.log
[root@node2 yum.repos.d]# ll -h /data/sde1/
total 160M
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo1.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo2.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo3.log
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo4.log
[root@node3 yum.repos.d]# ll -h /data/sde1
total 40M
-rw-r--r-- 2 root root 40M Oct 28 01:09 demo5.log
[root@node4 yum.repos.d]# ll -h /data/sde1/
total 40M
-rw-r--r-- 2 root root 40M Oct 27 17:09 demo5.log
Failure test: shut down node2 and observe each volume from the client
[root@node2 yum.repos.d]# init 0
[root@localhost ~]# ls /test/
dis dis_rep dis_stripe rep strip
[root@localhost test]# ls -lh /data/sdb1
ls: cannot access /data/sdb1: No such file or directory
[root@localhost test]# ^C
[root@localhost test]# cd ~
[root@localhost ~]# ls /test/
dis dis_rep dis_stripe rep strip
[root@localhost ~]# ls /test/dis
demo1.log demo2.log demo3.log demo4.log
[root@localhost ~]# ls /test/dis_rep/
demo1.log demo2.log demo3.log demo4.log demo5.log
[root@localhost ~]# ls /test/dis_stripe/
demo5.log
[root@localhost ~]# ls /test/rep/
demo1.log demo2.log demo3.log demo4.log demo5.log
[root@localhost ~]# ls /test/strip/
ls: reading directory /test/strip/: Transport endpoint is not connected
Result: the distributed volume lost demo5.log (it lived on node2), the distributed striped volume only retains demo5.log, the pure striped volume is completely inaccessible, while the replicated and distributed replicated volumes still hold all five files.
Delete a volume
[root@node1 yum.repos.d]# gluster volume list
dis-rep
dis-strip
dis-vol
rep-vol
strip-vol
[root@node1 yum.repos.d]# gluster volume stop dis-vol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dis-vol: success
[root@node1 yum.repos.d]# gluster volume delete dis-vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dis-vol: failed: Some of the peers are down
'### The delete fails because a peer is down, so the powered-off node (node2) must be brought back online first'
[root@node1 yum.repos.d]# gluster volume stop rep-vol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: rep-vol: success
[root@node1 yum.repos.d]# gluster volume delete rep-vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: rep-vol: success
Access control
[root@node1 yum.repos.d]# gluster volume set dis-vol auth.reject 20.0.0.33   ### deny access to dis-vol from 20.0.0.33
volume set: success
[root@node1 yum.repos.d]# gluster volume set dis-vol auth.allow 20.0.0.33   ### allow access from 20.0.0.33 again
volume set: success
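auth.allow and auth.reject also accept wildcards, so a whole subnet can be permitted in one setting (a hedged example; verify against your GlusterFS version):
[root@node1 yum.repos.d]# gluster volume set dis-vol auth.allow 20.0.0.*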