Testing GlusterFS distributed and replicated volumes

Distributed volume

Create a distributed volume (the default volume type). It behaves somewhat like RAID 0: each file is hashed to one brick and stored there in full, so every brick holds complete files rather than stripes.

[root@node1 ~]# gluster volume create test1 node1:/brick/test1
volume create: test1: success: please start the volume to access data
[root@node1 ~]# gluster volume info test1
Bricks:
Brick1: node1:/brick/test1
[root@node1 ~]# gluster volume start test1
volume start: test1: success
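Note: a brick on another node can only be used once that node has joined the trusted storage pool. A minimal sketch of that prerequisite, assuming node2 is reachable by hostname:

gluster peer probe node2     # add node2 to the trusted storage pool
gluster peer status          # node2 should show State: Peer in Cluster (Connected)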

Mount test

[root@client ~]# mount.glusterfs node1:/test1 /opt
[root@client ~]# touch /opt/test{1..5}
[root@client ~]# ls /opt/
test1  test2  test3  test4  test5
[root@node1 ~]# ls /brick/test1/
test1  test2  test3  test4  test5
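To make the mount persistent across client reboots, an /etc/fstab entry can be used; a sketch (the _netdev option delays mounting until the network is up):

node1:/test1  /opt  glusterfs  defaults,_netdev  0 0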

Add a brick

[root@node1 ~]# gluster volume add-brick test1 node2:/brick/test1
volume add-brick: success
[root@node1 ~]# gluster volume info test1
Bricks:
Brick1: node1:/brick/test1
Brick2: node2:/brick/test1
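Before rebalancing, it is worth confirming that the new brick process is actually running; a quick check (a sketch):

gluster volume status test1      # every brick should be listed with Online set to Y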

The rebalance operation redistributes existing files according to the hash layout rules. On a production system it is best to run a rebalance while the servers are idle.

[root@node2 ~]# ls /brick/test1/      // still empty at this point
[root@node1 ~]# gluster volume rebalance test1 fix-layout start
[root@node1 ~]# gluster volume rebalance test1 start
[root@node1 ~]# ls /brick/test1/
test1  test2  test4  test5
[root@node1 ~]# gluster volume rebalance test1 status
volume rebalance: test1: success
[root@node2 ~]# ls /brick/test1/     // some files have now been redistributed to node2
test3
[root@node1 ~]# ls /brick/test1/
test1  test2  test4  test5
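Placement is decided by hash ranges stored as extended attributes on the brick directories, which is why each file lands whole on exactly one brick. They can be inspected on any brick node; a sketch (requires the getfattr tool from the attr package):

getfattr -d -m . -e hex /brick/test1     # trusted.glusterfs.dht holds this brick's hash range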

Replicated volume

Create the volume

[root@node1 ~]# gluster volume create test_rep replica 2 node1:/brick/test-rep   node2:/brick/test-rep
[root@node1 ~]# gluster volume info test_rep
Bricks:
Brick1: node1:/brick/test-rep
Brick2: node2:/brick/test-rep
[root@node1 ~]# gluster volume start test_rep
volume start: test_rep: success
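Newer Gluster releases warn that plain replica 2 volumes are prone to split-brain. If a third node is available, an arbiter brick (a metadata-only third copy) avoids that risk at little space cost; a sketch, assuming a node3:/brick/test-arb directory exists:

gluster volume create test_rep replica 3 arbiter 1 node1:/brick/test-rep node2:/brick/test-rep node3:/brick/test-arb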

Mount the volume

[root@client ~]# mkdir /test_rep

[root@client ~]# mount.glusterfs node1:/test_rep /test_rep/
[root@client ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
node1:/test_rep           20G  245M   19G   2% /test_rep
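node1 here is only used to fetch the volume layout at mount time; data is written to all replicas regardless. To avoid depending on a single server for mounting, a backup volfile server can be specified; a sketch:

mount -t glusterfs -o backup-volfile-servers=node2 node1:/test_rep /test_rep/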

Test the volume

[root@client ~]# touch /test_rep/test{1..5}
[root@client ~]# ls /test_rep/
test1  test2  test3  test4  test5
[root@node1 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5
[root@node2 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5

Add a replica

[root@node1 ~]# gluster volume add-brick test_rep replica 3 node3:/brick/test-rep
volume add-brick: success
[root@node3 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5
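When the third brick is added, the self-heal daemon copies the existing files onto it in the background; the sync can be monitored before relying on the new replica (a sketch):

gluster volume heal test_rep statistics heal-count    # pending entries per brick
gluster volume heal test_rep info                     # files still waiting to be healed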

Remove a replica

[root@node1 ~]# gluster volume remove-brick test_rep replica 2 node3:/brick/test-rep force
[root@node1 ~]# gluster volume info test_rep
Bricks:
Brick1: node1:/brick/test-rep
Brick2: node2:/brick/test-rep

Replace a replica

[root@node3 ~]# rm -rf /brick/test-rep/
[root@node1 ~]# gluster volume replace-brick test_rep node2:/brick/test-rep node3:/brick/test-rep commit force
[root@node1 ~]# gluster volume info test_rep
Bricks:
Brick1: node1:/brick/test-rep
Brick2: node3:/brick/test-rep
[root@node3 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5

Simulating deletion and recovery of data

[root@node2 ~]# rm -rf /brick/test-rep/test1
[root@node2 ~]# ls /brick/test-rep/
test2  test3  test4  test5
[root@node1 ~]# gluster volume heal test_rep full
[root@node2 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5

[root@node1 ~]# gluster volume heal test_rep info     // list files that need healing
Brick node1:/brick/test-rep
Status: Connected
Number of entries: 0
Brick node2:/brick/test-rep
Status: Connected
Number of entries: 0
[root@node1 ~]# gluster volume heal test_rep info split-brain   // files in split-brain
Brick node1:/brick/test-rep
Status: Connected
Number of entries in split-brain: 0

Brick node2:/brick/test-rep
Status: Connected
Number of entries in split-brain: 0
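If info split-brain ever lists entries, they can be resolved from the CLI by choosing a winning copy; a sketch, where /test1 stands for a hypothetical affected file path relative to the volume root:

gluster volume heal test_rep split-brain latest-mtime /test1                          # keep the most recently modified copy
gluster volume heal test_rep split-brain source-brick node1:/brick/test-rep /test1    # or pick a specific brick as the source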

GlusterFS configuration and logs

[root@node1 ~]# ls /var/lib/glusterd/       // configuration data
bitd           glusterd.upgrade  glustershd  hooks  options  quotad  snaps     vols
glusterd.info  glusterfind       groups      nfs    peers    scrub   ss_brick
[root@node1 ~]# ls /var/lib/glusterd/vols/        // information about the created volumes
test1  test_rep
[root@node1 ~]# ls /var/log/glusterfs/    // log files
bricks  cli.log  cmd_history.log  gfproxy  glusterd.log  glustershd.log  snaps  start.log  status.log  test1-rebalance.log
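When a mount fails or a brick drops offline, these logs are the first place to look; a sketch (the per-brick log file name mirrors the brick path):

tail -f /var/log/glusterfs/glusterd.log              # management daemon
tail -f /var/log/glusterfs/bricks/brick-test1.log    # brick process for /brick/test1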

Restricting client access by IP

[root@node1 ~]# gluster volume set test_rep auth.allow 192.168.1.*
[root@node1 ~]# gluster volume info test_rep
Options Reconfigured:
auth.allow: 192.168.1.*
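auth.reject works the same way for blocking specific addresses, and a changed option can be reverted to its default with volume reset; a sketch (the client address is hypothetical):

gluster volume set test_rep auth.reject 192.168.1.100    # deny a single client
gluster volume reset test_rep auth.allow                 # drop the option back to its default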

Repairing data inconsistency on a replicated volume

[root@client ~]# ls /opt/
test1  test2  test3  test4  test5
[root@node2 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5
[root@node2 ~]# rm -rf /brick/test-rep/test1
[root@node2 ~]# ls /brick/test-rep/
test2  test3  test4  test5
[root@client ~]# cat /opt/test1   // the file is still readable through the mount point
[root@node2 ~]# ls /brick/test-rep/ 
test1  test2  test3  test4  test5      // reading the file triggered automatic self-heal

Automatic repair after a node outage

[root@node2 ~]# systemctl stop glusterd.service
[root@client ~]# touch /opt/first-test
[root@node2 ~]# ls /brick/test-rep/
test1  test2  test3  test4  test5
[root@node2 ~]# systemctl start  glusterd.service
[root@node2 ~]# ls /brick/test-rep/
first-test  test1  test2  test3  test4  test5

Expanding a replicated volume


[root@node1 ~]# gluster volume add-brick test_rep replica 2 node3:/brick/test-rep node4:/brick/test-rep force
# add two more bricks
[root@node1 ~]# gluster volume info test_rep
Number of Bricks: 2 x 2 = 4         # the volume has been expanded
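Growing from 1 x 2 to 2 x 2 turns the volume into a distributed-replicate layout, but existing files stay on the original replica pair until a rebalance spreads them across both pairs; a sketch:

gluster volume rebalance test_rep start
gluster volume rebalance test_rep status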

Shrinking and deleting a volume

[root@node1 ~]# gluster volume stop test_rep    // stop the volume first
# then remove the bricks; with replica 2, bricks must be removed in multiples of 2
[root@node1 ~]# gluster volume remove-brick test_rep replica 2 node3:/brick/test-rep node4:/brick/test-rep force
[root@node1 ~]# gluster volume info test_rep			// verify
Status: Stopped
Number of Bricks: 1 x 2 = 2
[root@node1 ~]# gluster volume start test_rep     // start the volume again
[root@node1 ~]# gluster volume stop test_rep
[root@node1 ~]# gluster volume delete test_rep    // delete the volume

Limiting directory size with quotas

[root@node1 ~]# gluster volume quota test_rep enable     // enable quota
volume quota : success
[root@node1 ~]# gluster volume quota test_rep disable		// disable quota
[root@client ~]# mkdir /opt/limit
[root@node1 ~]# gluster volume quota test_rep  limit-usage  /limit    10GB
volume quota : success
[root@node1 ~]# gluster volume quota  test_rep list
 Path      Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
----------------------------------------------------------------------------------------------------
/limit     10.0GB      80%(8.0GB)    0Bytes  10.0GB              No                    No
[root@node1 ~]# gluster volume quota test_rep remove /limit    // remove the usage limit
volume quota : success
[root@node1 ~]# gluster volume quota  test_rep list
quota: No quota configured on volume test_rep
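The 80% soft limit shown in the listing is the default; a different soft limit can be passed as a second argument to limit-usage. A sketch:

gluster volume quota test_rep limit-usage /limit 10GB 70%    # hard limit 10GB, soft limit 70%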

Simulating recovery from a host failure

Prepare a machine with identical specs, give it the same IP address as the failed host, install the Gluster packages, and make sure the configuration matches the old node.
gluster peer status                   // on a healthy node, look up the UUID of the failed host
vim /var/lib/glusterd/glusterd.info   // on the newly installed machine, change the UUID to that value
gluster volume heal test_rep full     // once the node is back up, run a full heal
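A command-level sketch of the same procedure (assuming node2 is the host being rebuilt; depending on the Gluster version, the volume definitions may also need to be synced from a healthy peer):

gluster peer status                      # on node1: note the UUID shown for the failed node2
vim /var/lib/glusterd/glusterd.info      # on the rebuilt node2: set UUID= to that value
systemctl restart glusterd               # rejoin the pool under the old identity
gluster volume sync node1 all            # optional: pull the volume definitions from node1
gluster volume heal test_rep full        # rebuild the data on the replaced bricks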
