Our application MyApp does not support clustering, but we need active-standby failover across two machines (master and slave):
1. Under normal conditions, only the master runs MyApp and serves traffic.
2. When the master fails, the slave automatically starts its local MyApp and the virtual IP floats over to the slave, so the externally exposed service IP and port stay unchanged.
F5 is reportedly able to meet the requirements above, but F5 is usually deployed active-active; we have not investigated an active-standby setup with it.
Server resources
10.75.201.2: virtual IP (VIP), on the same subnet as master and slave; the service is exposed on this IP
10.75.201.67 (master): MyApp + HAProxy + Keepalived
10.75.201.66 (slave): MyApp + HAProxy + Keepalived
Network policy
10.75.201.2:8000 <-> 10.75.201.67:9000
10.75.201.2:8000 <-> 10.75.201.66:9000
Installation
HAProxy installation and configuration
Install on both master and slave.
1. Copy the installation files to /app/risk/ha
2. As root, build and install haproxy:
tar -xvf haproxy-1.4.24.tar.gz
cd haproxy-1.4.24
make TARGET=linux26 ARCH=x86_64
make install
3. As the risk user, configure haproxy: create the file /app/risk/ha/haproxy.cfg and paste in the following:
global
    log 127.0.0.1 alert
    log 127.0.0.1 alert debug
defaults
    log global
    mode http
    option dontlognull
    option redispatch
    retries 3
    contimeout 10800000
    clitimeout 10800000
    srvtimeout 10800000
listen MyApp 10.75.201.2:8000
    mode tcp
    balance roundrobin
    option tcpka
    server MyApp01 10.75.201.67:9000 check inter 5000 downinter 500
    server MyApp02 10.75.201.66:9000 check inter 5000 backup
The service is exposed on 10.75.201.2:8000; the backends are the primary/backup pair 10.75.201.67:9000 and 10.75.201.66:9000 (the backup keyword means MyApp02 only receives traffic when MyApp01 is down).
Problems encountered with HAProxy:
a.
[WARNING] 003/191909 (20413) : config : 'stats' statement ignored for proxy 'MyApp' as it requires HTTP mode.
[WARNING] 003/191909 (20413) : config : 'option forwardfor' ignored for proxy 'MyApp' as it requires HTTP mode
The reason:
mode tcp operates at layer 4 (the transport layer), while mode http operates at layer 7 (the application layer); in tcp mode HAProxy does not inspect HTTP headers.
Different modes accept different options. Both stats enable and option forwardfor require mode http, so a configuration such as:
mode tcp
stats enable
option forwardfor
produces the warnings above.
Since MyApp is a Netty-based TCP server, we configure mode tcp.
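For contrast, a sketch of an HTTP-mode listener in which those two options would be accepted; this listener, its port, and the stats URI are illustrative only, not part of this deployment:

```
listen MyAppStats 10.75.201.2:8080
    mode http
    stats enable
    stats uri /stats
    option forwardfor
    server MyApp01 10.75.201.67:9000 check
```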
b.
Starting proxy MyApp: cannot bind socket
The cause: the configured port 8000 is already in use by another process, or haproxy was started twice.
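Before starting haproxy, you can check whether anything is already listening on the frontend port. A minimal sketch using bash's /dev/tcp pseudo-device (the port number is the one from this setup; nc -z or netstat -tlnp work equally well):

```shell
#!/bin/bash
# Return 0 if something is listening on the given local TCP port, 1 otherwise.
# A successful connect via bash's /dev/tcp pseudo-device means the port is taken.
port_in_use() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null && { exec 3>&-; return 0; }
    return 1
}

if port_in_use 8000; then
    echo "port 8000 is in use; find the owner with: netstat -tlnp | grep :8000"
else
    echo "port 8000 is free"
fi
```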
Keepalived installation and configuration
Install on both master and slave; run the steps as root.
1. Install the following packages in order:
rpm -Uvh keyutils-libs-devel-1.4-4.el6.x86_64.rpm
rpm -Uvh libcom_err-devel-1.41.12-14.el6.x86_64.rpm
rpm -Uvh libsepol-devel-2.0.41-4.el6.x86_64.rpm
rpm -Uvh libselinux-devel-2.0.94-5.3.el6.x86_64.rpm
rpm -Uvh krb5-devel-1.10.3-10.el6.x86_64.rpm
rpm -Uvh zlib-devel-1.2.3-29.el6.x86_64.rpm
rpm -Uvh openssl-devel-1.0.0-27.el6.x86_64.rpm
2. Extract keepalived-1.2.12.tar.gz, cd into the extracted directory, then build and install:
./configure
make
make install
Edit /etc/keepalived/keepalived.conf (confirm the interface value with ifconfig).
master:
vrrp_script check_MyApp {
    script "/app/risk/MyApp/shell/check_MyApp.sh"    # verify the port is listening
    interval 2    # check every 2 seconds
}
vrrp_instance VI_1 {
    interface eth1    # interface to monitor
    state MASTER
    virtual_router_id 11    # Assign one ID for this route
    priority 101    # 101 on master, 100 on backup
    virtual_ipaddress {
        10.75.201.2    # the virtual IP
    }
    track_script {
        check_MyApp
    }
    notify_stop "/app/risk/MyApp/shell/stopAll.sh"
}
slave:
vrrp_script check_MyApp {
    script "killall -0 haproxy"    # verify the haproxy is running
    interval 2    # check every 2 seconds
}
vrrp_instance VI_1 {
    interface eth1    # interface to monitor
    state BACKUP
    virtual_router_id 11    # Assign one ID for this route
    priority 100    # 101 on master, 100 on backup
    virtual_ipaddress {
        10.75.201.2    # the virtual IP
    }
    track_script {
        check_MyApp
    }
    notify "/app/risk/MyApp/shell/keepalivednotify.sh"
}
Notes:
a. The vrrp_script on the master runs check_MyApp.sh, which periodically checks whether MyApp's service port is up; if MyApp is down, the script stops keepalived so that the slave can take over and become master:
#!/bin/bash
# if MyApp is down, stop keepalived
PORT=9000
nc -z 127.0.0.1 $PORT 1>/dev/null 2>&1; result=$?;
if [ $result -eq 0 ]; then
    exit 0
else
    service keepalived stop
    . ./log.sh "keepalived stopped"
    exit 1
fi
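check_MyApp.sh sources a log.sh helper that is not shown above. A hypothetical minimal version is sketched below; the real script and its log path may differ, so treat both as assumptions:

```shell
#!/bin/bash
# Hypothetical log.sh: append a timestamped message to a log file.
# The LOGFILE path is an assumption, not taken from the actual deployment.
LOGFILE=${LOGFILE:-/tmp/MyApp-ha.log}
echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOGFILE"
```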
b. notify_stop on the master shuts down MyApp after service keepalived stop, ensuring single-active operation. Note that killing the keepalived process directly does not trigger notify_stop, so avoid doing that. Correspondingly, notify on the slave makes the slave, once it transitions to master, start its local MyApp and remotely invoke a shell script that stops MyApp on the former master, again ensuring single-active operation.
c. The vrrp_script on the slave simply checks that haproxy is running (killall -0 haproxy succeeds if a haproxy process exists); since haproxy is always up, the slave stays in BACKUP state.
d. The virtual IP (10.75.201.2) does not have to belong to any machine; it only needs to be on the same subnet as the master and slave IPs.
e. keepalivednotify.sh starts the local MyApp when the slave enters MASTER state and stops MyApp on the former master:
#!/bin/bash
LOCAL=10.75.201.66
REMOTE=10.75.201.67
PORT=9000
NOW=`date "+%Y-%m-%d %H:%M:%S"`
startLocal(){
    nc -z $LOCAL $PORT 1>/dev/null 2>&1; result=$?;
    if [ $result -eq 0 ]; then
        echo $NOW-$LOCAL:$PORT 'is listening! No need to restart!'
    else
        . /app/risk/MyApp/shell/startAll.sh
        echo $NOW-'start at' $LOCAL:$PORT 'ok'
    fi
}
#startRemote(){
#    nc -z $REMOTE $PORT 1>/dev/null 2>&1; result=$?;
#    if [ $result -eq 0 ]; then
#        echo $NOW-$REMOTE:$PORT 'is listening! No need to restart!'
#    else
#        ssh root@$REMOTE "sh /app/risk/MyApp/shell/startAll.sh"
#        echo $NOW-'start at' $REMOTE:$PORT 'ok'
#    fi
#}
stopRemote(){
    nc -z $REMOTE $PORT 1>/dev/null 2>&1; result=$?;
    if [ $result -eq 0 ]; then
        ssh root@$REMOTE "sh /app/risk/MyApp/shell/stopAll.sh"
    fi
}
stopLocal(){
    nc -z $LOCAL $PORT 1>/dev/null 2>&1; result=$?;
    if [ $result -eq 0 ]; then
        . /app/risk/MyApp/shell/stopAll.sh
    fi
}
TYPE=$1
NAME=$2
STATE=$3
case $STATE in
    "MASTER")
        startLocal
        stopRemote
        exit 0
        ;;
    "BACKUP")
        stopLocal
        exit 0
        ;;
    "FAULT")
        stopLocal
        exit 0
        ;;
    *)
        echo "unknown state"
        exit 1
        ;;
esac
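Keepalived invokes the notify script with three arguments (type, instance name, new state), which is why the script reads $1/$2/$3. A minimal standalone sketch of the same dispatch pattern, which can be dry-run without keepalived (the echoed actions stand in for the real start/stop calls):

```shell
#!/bin/bash
# Dispatch on the third argument, following keepalived's notify convention:
#   notify_script TYPE NAME STATE   (e.g. INSTANCE VI_1 MASTER)
handle_state() {
    case $3 in
        "MASTER") echo "would start local MyApp and stop remote MyApp" ;;
        "BACKUP"|"FAULT") echo "would stop local MyApp" ;;
        *) echo "unknown state" ;;
    esac
}

handle_state INSTANCE VI_1 MASTER
```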
Other keepalived setup:
1. Start keepalived automatically as a Linux service:
cd /etc/sysconfig
ln -s /usr/local/etc/sysconfig/keepalived .
cd /etc/rc3.d/
ln -s /usr/local/etc/rc.d/init.d/keepalived S100keepalived
cd /etc/init.d/
ln -s /usr/local/etc/rc.d/init.d/keepalived .
2. Make keepalived log verbose details at startup: edit /usr/local/etc/sysconfig/keepalived and set
KEEPALIVED_OPTIONS="-D -d"
(-D logs detailed messages; -d dumps the parsed configuration).
3. Allow the host to bind non-local IPs:
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p
Problems encountered with keepalived:
a. The virtual IP fails to bind
Try service network restart, and start keepalived with an absolute path to the configuration file, for example:
keepalived -f /app/risk/ha/keepalived.conf
rather than:
cd /app/risk/ha
keepalived -f ./keepalived.conf
The exact cause is unclear.
If no configuration file is specified, keepalived reads /etc/keepalived/keepalived.conf by default.
We strongly recommend using /etc/keepalived/keepalived.conf (create it if it does not exist).
The default path can be seen in /usr/local/etc/rc.d/init.d/keepalived:
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
. /etc/rc.d/init.d/functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
......
b. service keepalived start fails with:
Starting keepalived: /bin/bash: keepalived: command not found
The keepalived binary is at:
/usr/local/sbin/keepalived
and echo $PATH confirms that /usr/local/sbin is on the PATH.
Why the service script still cannot find it is unknown.
Searching online turns up two workarounds.
One is to copy keepalived to /usr/sbin:
cp /usr/local/sbin/keepalived /usr/sbin/
The other is to edit the init script /etc/init.d/keepalived and use the full path in the start command:
daemon /usr/local/sbin/keepalived ${KEEPALIVED_OPTIONS}
c. Other errors
Check that every shell script has execute permission, and that none of them contain Windows line endings (run dos2unix if they do).
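A sketch of a helper that strips carriage returns in place (the same effect as dos2unix) and restores the execute bit; the function name is ours, not part of the deployment:

```shell
#!/bin/bash
# Remove Windows CR characters from a script in place and make it executable.
fix_script() {
    tr -d '\r' < "$1" > "$1.tmp" && mv "$1.tmp" "$1"
    chmod +x "$1"
}

# Example: fix_script /app/risk/MyApp/shell/check_MyApp.sh
```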
Startup
1. As the risk user, start MyApp on the master (do not start it on the slave):
/app/risk/MyApp/shell/startAll.sh
2. As the risk user, start haproxy on both master and slave:
nohup haproxy -f /app/risk/ha/haproxy.cfg &
3. As root, start keepalived on both master and slave:
service keepalived start
Verification
1. Check MyApp:
netstat -anp | grep 9000
2. Check HAProxy:
ps -ef | grep haproxy
3. Check keepalived:
ps -ef | grep keepalived
cat /var/log/messages | grep VRRP_Instance
or tail -f /var/log/messages
4. Check the virtual IP binding:
ip a | grep eth
Under normal conditions:
1. The virtual IP is bound on the master
On the master, run ip addr | grep eth1:
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UNKNOWN qlen 1000
inet 10.75.201.67/24 brd 10.75.201.255 scope global eth1
inet 10.75.201.2/32 scope global eth1
The virtual IP (10.75.201.2) is indeed bound.
2. keepalived on the master is in MASTER state (on the master, run tail -f /var/log/messages):
localhost Keepalived_healthcheckers[23359]: Using LinkWatch kernel netlink reflector...
localhost Keepalived_vrrp[23360]: VRRP_Script(check_MyApp) succeeded
localhost Keepalived_vrrp[23360]: VRRP_Instance(VI_1) Transition to MASTER STATE
localhost Keepalived_vrrp[23360]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
localhost Keepalived_vrrp[23360]: VRRP_Instance(VI_1) Entering MASTER STATE
3. keepalived on the slave is in BACKUP state (same command):
localhost Keepalived_vrrp[5161]: VRRP_Instance(VI_1) Entering BACKUP STATE
Testing master failure:
1. Kill the MyApp process on the master
2. Stop the keepalived service on the master: service keepalived stop
3. Kill the keepalived process on the master (in this case MyApp on the master keeps running and must be stopped by hand; avoid this situation)
In any of these cases, the slave transitions to MASTER state, the virtual IP floats to the slave, and the slave's MyApp starts automatically.
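When verifying failover by hand, it helps to poll until the service port answers again through the VIP. A small sketch (the host, port, and timeout in the example are this setup's values; nc must be installed):

```shell
#!/bin/bash
# Poll until a TCP port starts answering, or give up after timeout seconds.
wait_for_port() {
    host=$1; port=$2; timeout=$3
    i=0
    while [ $i -lt $timeout ]; do
        if nc -z "$host" "$port" 2>/dev/null; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Example: wait_for_port 10.75.201.2 8000 30  # VIP should answer within 30s
```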