I won't cover GlusterFS basics here; after all, if you found this article, you presumably already know what it is.
Preparing for an Offline GlusterFS Deployment
That's right, offline deployment is the whole point, because the customer environment may well be an isolated LAN. To deploy offline, the first task is building the GlusterFS install packages. This walkthrough uses GlusterFS 4.1.0 on CentOS 7.4. Shortcut: if you'd rather not go through all of this yourself, skip straight to the next section, "Writing the Deployment Script".
Install the rpmbuild tool
yum -y install rpm-build
Install build tools and related dependencies
yum install -y flex bison openssl-devel libacl-devel sqlite-devel libxml2-devel libtool automake autoconf gcc attr python-devel unzip
The userspace-rcu-master dependency is a special case: only the source archive is available, so it has to be built by hand.
cp userspace-rcu-master.zip /tmp
cd /tmp
unzip userspace-rcu-master.zip
cd userspace-rcu-master
./bootstrap
./configure
make
make install
ldconfig
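To confirm the library registered with the dynamic linker (assuming liburcu's default /usr/local install prefix), a quick sanity check is:
ldconfig -p | grep liburcu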
Install the GlusterFS source RPM (this step drops two files, glusterfs-4.1.0.tar.gz and glusterfs.spec, into /root/rpmbuild/SOURCES and /root/rpmbuild/SPECS respectively).
cp glusterfs-4.1.0-1.el7.centos.src.rpm /tmp
rpm -i glusterfs-4.1.0-1.el7.centos.src.rpm
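Before building, you can confirm both files landed where the previous step says they should:
ls /root/rpmbuild/SOURCES/glusterfs-4.1.0.tar.gz /root/rpmbuild/SPECS/glusterfs.spec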
Build the RPM packages
cd /root/rpmbuild/SPECS
rpmbuild -bb glusterfs.spec
After this step you'll find the install package in /root/rpmbuild/RPMS/x86_64: glusterfs-4.1.0-1.el7.centos.x86_64.rpm
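rpmbuild usually produces a number of subpackages alongside the main one (server, fuse, cli, and so on); to see everything it built:
ls /root/rpmbuild/RPMS/x86_64/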
Writing the Deployment Script
The script deploys the volume as distributed + replicated, with two replicas of every file, which means the number of server nodes must be even (see the brick-pairing sketch after this list). Before running the script, first place these four files in /tmp:
glusterfs-4.1.0-install.sh(就是下面的脚本文件)
libsqlite3-devel-3.25.2-alt2.x86_64.rpm
libuserspace-rcu-0.10.1-alt1.x86_64.rpm
glusterfs-4.1.0-1.el7.centos.x86_64.rpm
The .sh file is the script below; I've uploaded the other three packages to Baidu Netdisk (link: Baidu Netdisk, extraction code: 1rym).
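To see why an even node count matters: with replica 2, gluster groups bricks into replica pairs in the order they are listed, so a hypothetical four-node call would lay out like this:
# replica set 0: 192.168.0.101:/data/gfsdata  192.168.0.102:/data/gfsdata
# replica set 1: 192.168.0.103:/data/gfsdata  192.168.0.104:/data/gfsdata
Files are distributed across the replica sets, and each file lives on both bricks of its set.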
Without further ado, here's the script:
#!/bin/bash
# Author haotaro
# IP addresses of all nodes, comma-separated; must be an even count
NODES=$1
# IP of the server running this script; it also acts as the client
MY_IP=$2
# Passwords for each node, comma-separated, in the same order as NODES
PASSWORDS=$3
NODES=${NODES//,/ }
PASSWORDS=(${PASSWORDS//,/ })
nodes_arr=($NODES)
# If the node count is odd, drop one server (never this one) to make it even
if [ $(( ${#nodes_arr[@]} % 2 )) -ne 0 ]; then
for ((i=0;i<${#nodes_arr[@]};i++)); do
if [ ${nodes_arr[$i]} != $MY_IP ]; then
unset nodes_arr[$i]
break
fi
done
fi
NODES="${nodes_arr[*]}"
NODES_STR=${NODES// /,}
###############Initialization################
# ROOT_PASS=$PASSWORD
# Gluster peers
# NODES=(192.168.0.108 192.168.0.105 192.168.0.188 192.168.0.157 192.168.0.167 192.168.0.149 192.168.0.178 192.168.0.181)
# Gluster volumes
# Volume name is test-volume; /data/gfsdata on each node is where the data actually lives
volume=(test-volume /data/gfsdata $NODES_STR)
VOLUMES=(volume)  # array of variable names; each is expanded via eval below
# Client mount point
MOUNT_POINT=/mnt/gfs
#############################################
# Get MY_IP
# if [ "${MY_IP}" == "" ];then
# MY_IP=$(python -c "import socket;socket=socket.socket();socket.connect(('8.8.8.8',53));print socket.getsockname()[0];")
# fi
# Step 1. 安装sshpass
sudo yum -y install sshpass
# Step 2. Install on every node.
sudo cat > /tmp/tmp_install_gfs.sh << _wrtend_
#!/bin/bash
yum -y install rpcbind
rpm -i --force --nodeps /tmp/libuserspace-rcu-0.10.1-alt1.x86_64.rpm
rpm -i --force --nodeps /tmp/libsqlite3-devel-3.25.2-alt2.x86_64.rpm
yum -y install /tmp/glusterfs-4.1.0-1.el7.centos.x86_64.rpm
mkdir -p /var/log/glusterfs /data/gfsdata  # brick directory must match the volume config
systemctl daemon-reload
systemctl start glusterd.service
systemctl enable glusterd.service
sleep 5
_wrtend_
sudo chmod +x /tmp/tmp_install_gfs.sh
i=0
for node in ${NODES[@]}; do
if [ "${MY_IP}" != "$node" ];then
echo $node install start
sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/glusterfs-4.1.0-1.el7.centos.x86_64.rpm ${node}:/tmp/
sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/libuserspace-rcu-0.10.1-alt1.x86_64.rpm ${node}:/tmp/
sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/libsqlite3-devel-3.25.2-alt2.x86_64.rpm ${node}:/tmp/
sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/tmp_install_gfs.sh ${node}:/tmp/
sudo sshpass -p ${PASSWORDS[$i]} ssh -o StrictHostKeyChecking=no root@${node} /tmp/tmp_install_gfs.sh
echo $node install end
fi
let i+=1
done
sudo /tmp/tmp_install_gfs.sh
# Step 3. Peer the nodes
k=0
for node in ${NODES[@]}; do
if [ "${MY_IP}" != "$node" ];then
sudo gluster peer probe ${node}
sudo sshpass -p ${PASSWORDS[$k]} ssh root@${node} gluster peer probe ${MY_IP}
fi
let k+=1
done
sleep 2
# Step 4. Verify peer status, then create and start the volume
conn_peer_num=`gluster peer status | grep Connected | wc -l`
conn_peer_num=`expr $conn_peer_num + 1`
if [ ${conn_peer_num} -ge ${#NODES[@]} ];then
echo "All peers have been attached."
for vol in ${VOLUMES[@]};do
eval vol_info=(\${$vol[@]})
eval vol_nodes=(${vol_info[2]//,/ })
vol_path=""
for node in ${vol_nodes[@]};do
vol_path=$vol_path$node:${vol_info[1]}" "
done
# create volume
sudo gluster volume create ${vol_info[0]} replica 2 transport tcp ${vol_path} force
# start volume
sudo gluster volume start ${vol_info[0]}
done
else
echo "Attach peers error"
exit 1
fi
# Set up the client
sudo mkdir -p ${MOUNT_POINT}
sudo mount -t glusterfs ${MY_IP}:${vol_info[0]} ${MOUNT_POINT}
echo "mount success"
Run it as: sh /tmp/glusterfs-4.1.0-install.sh <node1,node2,...> <thisServer> <password1,password2,...>
Here thisServer is the server executing the script; it must be one of the nodes in the list, and besides acting as a storage node it also hosts the client. When the script finishes, you can run gluster peer status
to check the cluster's health.
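A concrete invocation could look like the following (the IPs and passwords are placeholders, not from the original setup):
sh /tmp/glusterfs-4.1.0-install.sh 192.168.0.101,192.168.0.102,192.168.0.103,192.168.0.104 192.168.0.101 pw1,pw2,pw3,pw4
gluster peer status
gluster volume info test-volume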
Keeping the Client Mount from Going Stale
This is a crude but effective approach: check the mount every ten seconds and remount if it has dropped.
#!/bin/bash
# Persist shell history so the log below can record whether someone manually unmounted
HISTFILE=/root/.bash_history
set -o history
echo "export PROMPT_COMMAND='history -a; history -c; history -r; $PROMPT_COMMAND'" >> /root/.bashrc
shopt -s histappend
echo "export HISTTIMEFORMAT='%F %T '" >> /root/.bashrc
source /root/.bashrc
# Get GlusterFS master url
GLUSTERFS_SERVER=$1
# Start to test GlusterFS mount
while true
do
test_result=`df -h | grep "% /mnt/gfs"`
if [ -z "$test_result" ]
then
date=`date`
echo "["$date"]Unmounted...restart mounting..." >> /var/log/test_mount_log
# Get history,check if someone umounted GlusterFS client
history -w
echo "-------------------HISTORY--------------------" >> /var/log/test_mount_log
history >> /var/log/test_mount_log
echo "---------------------END----------------------" >> /var/log/test_mount_log
# Start to mount
mount -t glusterfs $GLUSTERFS_SERVER:test-volume /mnt/gfs  # must match the path checked above
echo "["$date"]Mounted" >> /var/log/test_mount_log
fi
sleep 10
done
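To keep the watchdog running after you log out, one simple option (the script path /tmp/gfs-mount-watchdog.sh is just an assumed name) is:
chmod +x /tmp/gfs-mount-watchdog.sh
nohup /tmp/gfs-mount-watchdog.sh 192.168.0.101 >/dev/null 2>&1 &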
Once it's running, use df -h
to verify the mount is in place.
Cleaning Up GlusterFS
Run the following commands on every server node.
Remove the data
systemctl stop glusterd
rm -rf /data/gfsdata
rm -rf /var/lib/glusterd
Remove the installed packages
rpm -qa | grep gluster
yum -y remove <package-name>
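Or, to remove everything the query found in one go:
rpm -qa | grep gluster | xargs yum -y remove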
Miscellaneous
For other GlusterFS maintenance topics, such as dealing with split-brain, go to the official documentation and search for the keyword "split"; you'll find plenty of practical tips there, so I won't repeat them here.