GlusterFS: A Distributed File System

GlusterFS (Gluster File System) is free software, developed primarily by Z RESEARCH with a team of roughly a dozen developers, and the project has recently been very active. It is used mainly in cluster environments and scales well. The software is cleanly structured and easy to extend and configure; by flexibly combining its modules you can build solutions targeted at specific needs. It addresses problems such as: network storage, unified storage (aggregating the storage space of multiple nodes), redundant replication, and load balancing of large files (via striping).
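The capabilities listed above map directly onto GlusterFS volume layouts. As a rough sketch (host names and brick paths below are placeholders, not part of this deployment), the corresponding create commands look like this:

```shell
# Hypothetical hosts/paths for illustration only.

# Distributed volume: aggregates the space of several bricks (no redundancy).
gluster volume create dist-vol host1:/bricks/b1 host2:/bricks/b1

# Replicated volume: each file is mirrored on every brick (redundant backup).
gluster volume create repl-vol replica 2 host1:/bricks/b2 host2:/bricks/b2

# Striped volume: large files are split into chunks spread across the bricks.
gluster volume create stripe-vol stripe 2 host1:/bricks/b3 host2:/bricks/b3
```

The replicated layout is the one used in section 1.5 below.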

1.1 System Environment

Prepare three machines:

[root@node1 data]# cat /etc/redhat-release

CentOS release 6.8 (Final)

 

Node1    192.168.70.71    server

Node2    192.168.70.72    server

Node3    192.168.70.73    client

1.2 Firewall Setup

vi /etc/sysconfig/iptables:

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49162 -j ACCEPT

service iptables restart

 

1.3 Installing GlusterFS

Perform all of the following steps on both server1 and server2.

Installation method 1:


wget -l 1 -nd -nc -r -A .rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/x86_64/

 

wget -l 1 -nd -nc -r -A .rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/noarch/

wget -nc http://download.gluster.org/pub/gluster/nfs-ganesha/2.3.0/EPEL.repo/epel-6Server/x86_64/nfs-ganesha-gluster-2.3.0-1.el6.x86_64.rpm

yum install -y *.rpm

Method 2 (a different version):

mkdir /tools

cd /tools

wget -l 1 -nd -nc -r -A.rpm http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.5/RHEL/epel-6/x86_64/

yum install *.rpm

Start the gluster service:

/etc/init.d/glusterd start

Set the glusterd service to start at boot:

chkconfig glusterd on
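Before configuring the servers, it is worth confirming that glusterd is actually running on each node. A quick check, assuming the CentOS 6 init scripts used above:

```shell
/etc/init.d/glusterd status   # should report that glusterd is running
gluster --version             # confirms the installed glusterfs version
chkconfig --list glusterd     # run levels 2-5 should read "on" after the command above
```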

1.4 Gluster Server Setup

1.4.1 Configuring the Storage Pool

1.4.2 Adding a Node to the Trusted Storage Pool

[root@node1 /]# gluster peer probe 192.168.70.72

peer probe: success.

1.4.3 Checking Peer Status

[root@node1 /]# gluster peer status

Number of Peers: 1

 

Hostname: 192.168.70.72

Uuid: fdc6c52d-8393-458a-bf02-c1ff60a0ac1b

State: Accepted peer request (Connected)

1.4.4 Removing a Node

[root@node1 /]# gluster peer detach 192.168.70.72

peer detach: success

[root@node1 /]# gluster peer status

Number of Peers: 0

1.5 Creating a GlusterFS Volume

On node1 and node2, create the brick directory:

mkdir -p /data/gfs/

Create the volume:

gluster volume create vg0 replica 2 192.168.70.71:/data/gfs 192.168.70.72:/data/gfs force

volume create: vg0: success: please start the volume to access data

View the volume information:

[root@node1 /]# gluster volume info

 

Volume Name: vg0

Type: Replicate

Volume ID: 6aff1f4f-8efe-4ed0-879e-95df483a86a2

Status: Created

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: 192.168.70.71:/data/gfs

Brick2: 192.168.70.72:/data/gfs

[root@node1 /]# gluster volume status

Volume vg0 is not started

Start the volume:

[root@node1 /]# gluster volume start vg0

volume start: vg0: success
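With the volume started, the status command that previously reported "not started" should now list the brick processes. A quick sanity check:

```shell
gluster volume status vg0   # each brick should show Online: Y with its TCP port
gluster volume info vg0     # Status should now read "Started" instead of "Created"
```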

1.6 Client Installation

yum -y install glusterfs glusterfs-fuse

mkdir -p /mnt/gfs

mount -t glusterfs 192.168.70.71:/vg0 /mnt/gfs
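The mount above does not survive a reboot. One common approach, assuming the same server and mount point, is an /etc/fstab entry; the `_netdev` option delays the mount until networking is up:

```shell
# Persist the GlusterFS mount across reboots.
echo '192.168.70.71:/vg0 /mnt/gfs glusterfs defaults,_netdev 0 0' >> /etc/fstab
mount -a   # verify the new entry mounts cleanly
```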

# Expanding the volume (since the replica count is 2, bricks must be added in multiples of 2: from 2, 4, 6, 8, ... machines)

gluster peer probe 192.168.70.74 # add a node

gluster peer probe 192.168.70.75 # add a node

gluster volume add-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs # add the new bricks to the volume
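After adding bricks, existing files are not spread onto them automatically; a rebalance is normally run so the new bricks receive their share of the data:

```shell
gluster volume rebalance vg0 start    # redistribute existing files across all bricks
gluster volume rebalance vg0 status   # watch progress until it reports completed
```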

Shrinking the volume (before removing bricks, gluster must first migrate their data elsewhere):

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the data migration

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs status # check migration status

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit after the migration completes

# Migrating a brick

gluster peer probe 192.168.70.75 # to migrate data from 192.168.70.76 to 192.168.70.75, first add 192.168.70.75 to the pool

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the migration

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs status # check migration status

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit once the data migration completes

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit force # if 192.168.70.76 has failed and can no longer run, force the commit

gluster volume heal vg0 full # resynchronize the entire volume
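To monitor the self-heal triggered above, GlusterFS provides heal inspection commands:

```shell
gluster volume heal vg0 info              # list files still pending heal, per brick
gluster volume heal vg0 info split-brain  # list files in split-brain, if any
```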