1. Introduction
While learning KVM, my boss asked me to store the data disks separately from the system disks, and he suggested using GlusterFS for the data storage. It took digging through a lot of material to get it working; the steps are below.
2. Installation and Deployment
From the material I read, the usual approach on Linux is to download a repo file from the GlusterFS project site and then install with yum. When I first tried this I ran into all kinds of errors and dependency problems, which was maddening. Even so, I stuck with installing from a yum repo, because it saves a lot of work: init scripts, environment variables, and so on.
Start the installation.
First, set up a NetEase (163) mirror as the yum repo:
[root@datastorage231 vm-images]# cat /etc/yum.repos.d/99bill.repo
[base]
name=CentOS-yum
baseurl=http://mirrors.163.com/centos/6/os/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
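A quick sanity check before going further (my addition, not part of the original steps): confirm that the repo resolves.
yum repolist    # the [base] repo should appear with a non-zero package count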
<------------------------------------------->
Install some dependencies and other useful packages:
yum -y install libibverbs librdmacm xfsprogs nfs-utils rpcbind libaio liblvm2app lvm2-devel
cd /etc/yum.repos.d/
Fetch the GlusterFS repo file:
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/RHEL/glusterfs-epel.repo
mv 99bill.repo 99bill.repo.bak
yum clean all
cd /home/
Install the EPEL repo:
wget http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
Install the dependency packages:
wget ftp://195.220.108.108/linux/epel/6/x86_64/userspace-rcu-0.7.7-1.el6.x86_64.rpm
rpm -ivh userspace-rcu-0.7.7-1.el6.x86_64.rpm
wget ftp://rpmfind.net/linux/fedora/linux/releases/24/Everything/x86_64/os/Packages/p/pyxattr-0.5.3-7.fc24.x86_64.rpm
rpm -ivh pyxattr-0.5.3-7.fc24.x86_64.rpm --force --nodeps
wget ftp://ftp.pbone.net/mirror/ftp.pramberger.at/systems/linux/contrib/rhel6/archive/x86_64/python-argparse-1.3.0-1.el6.pp.noarch.rpm
rpm -ivh python-argparse-1.3.0-1.el6.pp.noarch.rpm
<-------------------------------------------------------->
Install the GlusterFS packages:
yum install -y --skip-broken glusterfs glusterfs-api glusterfs-cli glusterfs-client-xlators glusterfs-fuse glusterfs-libs glusterfs-server
Start the service: /etc/init.d/glusterd restart
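On CentOS 6 the daemon won't come back after a reboot unless it is registered with chkconfig; a small extra step I'd suggest (assuming the stock SysV init script shipped by glusterfs-server):
/etc/init.d/glusterd status    # confirm the daemon is running
chkconfig glusterd on          # start glusterd automatically at boot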
<---------------------------------------------------->
3. Using GlusterFS
Four machines will act as the GlusterFS servers, because I'm going to build a distributed striped-replicated volume.
What this volume type means here: the 4 bricks are split into two replica pairs; each file is striped into 2 chunks, and each chunk is mirrored across one of the pairs (see the layout sketch below).
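As I understand GlusterFS brick ordering (consecutive bricks in the create command form a replica pair), the layout of the volume created below works out to:
replica pair 1: 192.168.55.231 + 192.168.55.232 ==> holds stripe chunk 0 of each file
replica pair 2: 192.168.55.233 + 192.168.55.234 ==> holds stripe chunk 1 of each file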
The concrete steps:
[root@datastorage231 vm-images]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.55.231 datastorage231
192.168.55.232 datastorage232
192.168.55.233 datastorage233
192.168.55.234 datastorage234
Add these hostname-to-IP mappings to /etc/hosts on every machine.
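One step the later commands assume: every server needs the brick directory /gfs_data/vm-images. A minimal sketch, run on each node (/dev/sdb is a hypothetical data-disk device, so substitute your own):
mkfs.xfs -f /dev/sdb                  # format the data disk as XFS (xfsprogs was installed earlier)
mkdir -p /gfs_data
echo '/dev/sdb /gfs_data xfs defaults 0 0' >> /etc/fstab
mount -a
mkdir -p /gfs_data/vm-images          # this path becomes the brick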
Add the servers to the trusted storage pool.
I ran the following on datastorage231:
gluster peer probe datastorage231 ==> this one just prints a notice that the local host does not need to be added to the pool.
gluster peer probe datastorage232
gluster peer probe datastorage233
gluster peer probe datastorage234
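Before creating the volume you can confirm that all four nodes see each other:
gluster peer status    # on datastorage231 this should list 3 peers, each 'Peer in Cluster (Connected)'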
The command to create the striped-replicated volume:
[root@datastorage231 opt]# gluster volume create vm-images stripe 2 replica 2 transport tcp 192.168.55.231:/gfs_data/vm-images 192.168.55.232:/gfs_data/vm-images 192.168.55.233:/gfs_data/vm-images 192.168.55.234:/gfs_data/vm-images
volume create: vm-images: success: please start the volume to access data ==> creation succeeded
[root@datastorage231 opt]# gluster volume info ==> view the new volume's info
Volume Name: vm-images
Type: Striped-Replicate
Volume ID: e1dcf250-a1d4-47e8-8f43-328c14f2508c
Status: Created
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.55.231:/gfs_data/vm-images
Brick2: 192.168.55.232:/gfs_data/vm-images
Brick3: 192.168.55.233:/gfs_data/vm-images
Brick4: 192.168.55.234:/gfs_data/vm-images
Options Reconfigured:
performance.readdir-ahead: on
[root@datastorage231 opt]# gluster volume start vm-images ==> start the volume
volume start: vm-images: success
[root@datastorage231 opt]# gluster volume status all
Status of volume: vm-images
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.55.231:/gfs_data/vm-images 49152 0 Y 2533
Brick 192.168.55.232:/gfs_data/vm-images 49152 0 Y 3019
Brick 192.168.55.233:/gfs_data/vm-images 49152 0 Y 2987
Brick 192.168.55.234:/gfs_data/vm-images 49152 0 Y 2668
NFS Server on localhost 2049 0 Y 2555
Self-heal Daemon on localhost N/A N/A Y 2560
NFS Server on datastorage233 2049 0 Y 3009
Self-heal Daemon on datastorage233 N/A N/A Y 3015
NFS Server on datastorage234 2049 0 Y 2690
Self-heal Daemon on datastorage234 N/A N/A Y 2695
NFS Server on datastorage232 2049 0 Y 3041
Self-heal Daemon on datastorage232 N/A N/A Y 3046
Task Status of Volume vm-images
------------------------------------------------------------------------------
There are no active volume tasks
The basic server-side installation is now complete.
Tip: when a node is added to the pool it automatically creates glusterd.info, which contains the node's unique UUID:
/var/lib/glusterd/glusterd.info
If a node's state gets out of whack, you can delete everything under /var/lib/glusterd/ except glusterd.info and then restart the gluster service, as sketched below.
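Spelled out as commands, that reset procedure looks roughly like this (a sketch; double-check the paths before deleting anything):
service glusterd stop
find /var/lib/glusterd/ -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
service glusterd start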
4. Client Usage
modprobe fuse
lsmod |grep fuse
dmesg | grep -i fuse
yum -y install openssh-server wget fuse fuse-libs openib libibverbs
yum install -y glusterfs glusterfs-fuse
Mount the volume:
mount -t glusterfs 192.168.55.231:/vm-images /rhel6_gfs_data/
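To verify the mount and make it survive reboots (the _netdev option is my addition, so the mount waits for networking):
df -h /rhel6_gfs_data    # should show 192.168.55.231:/vm-images as the filesystem
echo '192.168.55.231:/vm-images /rhel6_gfs_data glusterfs defaults,_netdev 0 0' >> /etc/fstab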
Mounting with a backup volfile server:
You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.
backupvolfile-server=server-name
volfile-max-fetch-attempts=number of attempts
log-level=loglevel
log-file=logfile
transport=transport-type
direct-io-mode=[enable|disable]
use-readdirp=[yes|no]
mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
mount -t glusterfs -o backupvolfile-server=192.168.55.233,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster.log 192.168.55.231:/vm-images /opt/gfs_temp
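As a final check, write through the mount and confirm the data lands on the bricks (testfile is just a throwaway name):
dd if=/dev/zero of=/opt/gfs_temp/testfile bs=1M count=8
ls -lh /gfs_data/vm-images/testfile    # on each server, the file's stripe chunk should appear under the brick path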