I. Introduction to keepalived
1. HA basics review
HA software: heartbeat, corosync. keepalived: for LVS (makes the director HA, manages the ipvs rules, performs health checks). Messaging layer / cluster resource manager: provides management for services that are not themselves HA-aware. Resource agents. Resource types: primitive, group, clone, master/slave.
How keepalived implements this:
2. VRRP basics
VRRP: Virtual Router Redundancy Protocol
Ways for a host to reach external networks:
1. Default gateway
2. Adding route entries
Static routes
Dynamic routes: OSPF, RIP
VRRP groups two or more routers into one virtual router group that presents itself as a single router, with a virtual IP address and a virtual MAC address configured on it. If one router fails, another automatically takes over forwarding packets, achieving high availability.
Preemptive vs. non-preemptive mode: the policy configured on the routers determines whether, after an unexpected restart, a router compares priorities (and, on a tie, IP addresses) to re-decide who is master and who is backup.
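A minimal sketch of how non-preemptive mode is expressed in keepalived (the interface and virtual_router_id values here are illustrative, not taken from the lab below); note keepalived only honours `nopreempt` when `state` is BACKUP on both nodes:

```
vrrp_instance VI_EXAMPLE {
    state BACKUP            # nopreempt requires state BACKUP on both nodes
    nopreempt               # a recovered node will not take the VIP back
    interface eth0          # illustrative
    virtual_router_id 51    # illustrative
    priority 100
}
```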
3. Where keepalived fits
Typical keepalived use cases: by simply defining a VIP it can make nginx highly available. It suits lightweight scenarios with no shared storage and no resource contention, mainly HA for reverse proxies.
- ipvs / LVS: keepalived
- nginx, haproxy (reverse proxy): keepalived
- heartbeat + DRBD: mainly for MySQL HA
keepalived and haproxy have been adopted into the Red Hat product line, while nginx has not.
4. Keepalived master/backup configuration
Installing keepalived
Prerequisite: nginx is installed as the reverse proxy.
II. Keepalived master/backup and dual-master configuration
1. Master/backup model
Environment:
Node1:172.16.1.143 lamp+nginx+keepalived
Node2:172.16.1.144 lamp+nginx+keepalived
keepalived can be installed from source or from an RPM package.
Node1:

    yum install keepalived -y
    cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
    vim /etc/keepalived/keepalived.conf

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 107
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.147
        }
    }

    scp /etc/keepalived/keepalived.conf 172.16.1.144:/etc/keepalived/keepalived.conf
Node2:

    yum install keepalived -y
    vim /etc/keepalived/keepalived.conf

    vrrp_instance VI_1 {
        state BACKUP            # changed
        interface eth0
        virtual_router_id 107
        priority 99             # changed
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.147
        }
    }
2. Verification
Start node1 and watch the log:

    service keepalived start
    tail -f /var/log/messages
    ip addr show        # check whether the VIP has been configured

Then stop node1:

    service keepalived stop

Start node2:

    service keepalived start
    tail -f /var/log/messages
    ip addr show        # check whether the VIP has been configured

Then start node1 again to see whether the configured priority takes effect, i.e. whether node1 preempts the VIP back.
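The preemption being tested here follows VRRP's election rule. A toy sketch of that rule in plain shell (my own emulation, not keepalived code), fed the node priorities used in this lab: the higher priority wins, and a tie is broken by the higher IP address.

```shell
# Toy sketch of the VRRP master election rule.
elect_master() {
    # args: prio1 ip1_last_octet prio2 ip2_last_octet -> prints 1 or 2
    if   [ "$1" -gt "$3" ]; then echo 1     # higher priority wins
    elif [ "$3" -gt "$1" ]; then echo 2
    elif [ "$2" -gt "$4" ]; then echo 1     # tie: higher IP wins
    else echo 2
    fi
}

# node1 (priority 100, .143) vs node2 (priority 99, .144):
elect_master 100 143 99 144    # prints: 1  (node1 becomes MASTER)
```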
3. Dual-master model
node1:

    vim /etc/keepalived/keepalived.conf

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 107
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.147
        }
    }
    vrrp_instance VI_2 {
        state BACKUP
        interface eth0
        virtual_router_id 149
        priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingking
        }
        virtual_ipaddress {
            172.16.1.149
        }
    }

    scp /etc/keepalived/keepalived.conf 172.16.1.144:/etc/keepalived/keepalived.conf
node2:

    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 107
        priority 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.147
        }
    }
    vrrp_instance VI_2 {
        state MASTER
        interface eth0
        virtual_router_id 149
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingking
        }
        virtual_ipaddress {
            172.16.1.149
        }
    }
4. Dual-master verification
Start node1 and watch the log:

    service keepalived start
    tail -f /var/log/messages
    ip addr show        # check whether the VIPs have been configured

Start node2:

    service keepalived start
    tail -f /var/log/messages
    ip addr show        # check whether the VIPs have been configured

Then restart node1 to see whether the configured priorities take effect, i.e. whether each node preempts its own VIP back.
III. Health-state detection with email notification
1. Mail basics review
Port 25 is the mail server (SMTP) port.

    echo "hello" | mail -s "how are you?" root
    mail        # check the mailbox
2. Defining the mail recipient
    global_defs {
        notification_email {
            root@localhost
        }
        notification_email_from [email protected]
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
    }

    man keepalived      # see how to configure this; search with /notify

The notify hooks can be a script or a command you define yourself.
3. Defining the mail content
        virtual_ipaddress {
            172.16.1.147
        }
        notify_master "echo 'to be master' | /bin/mail -s 'to be master' root"
        notify_backup "echo 'to be backup' | /bin/mail -s 'to be backup' root"
    }
4. Defining when mail is sent
Implementing nginx master/backup failover
Install nginx on both nodes, start it, and add a test page.
        virtual_ipaddress {
            172.16.1.147
        }
        notify_master "/etc/rc.d/init.d/nginx start"
        notify_backup "/etc/rc.d/init.d/nginx stop"
        notify_fault  "/etc/rc.d/init.d/nginx stop"
5. How the check script works:
The check mechanism is defined in its own configuration block:
    vrrp_script CHK_NAME {
        script "/path/to/somefile.sh"   # exit status 0 means success, non-zero means failure
        interval #                      # how many seconds between checks
        weight -5                       # lower the priority by 5 on failure
        fall 3                          # healthy -> failed after 3 consecutive failures
        rise 1                          # failed -> healthy after 1 success
    }
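To make the fall/rise semantics concrete, here is a toy shell emulation of the counter logic described above (my own sketch of the documented behaviour, not keepalived source):

```shell
# Toy emulation of keepalived's fall/rise counting.
FALL=3        # consecutive failures before the check is marked failed
RISE=1        # consecutive successes before it is marked healthy again
state=up fails=0 oks=0

feed() {      # feed one check result: 0 = success, non-zero = failure
    if [ "$1" -eq 0 ]; then
        oks=$((oks + 1)); fails=0
        [ "$state" = down ] && [ "$oks" -ge "$RISE" ] && state=up
    else
        fails=$((fails + 1)); oks=0
        [ "$state" = up ] && [ "$fails" -ge "$FALL" ] && state=down
    fi
    return 0
}

feed 1; feed 1; echo "$state"   # two failures: still up (fall=3)
feed 1;         echo "$state"   # third consecutive failure: down
feed 0;         echo "$state"   # one success (rise=1): up again
```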
The defined check only takes effect once it is referenced from a vrrp_instance (via track_script).
6. Viewing the example config shipped with keepalived
    cat /usr/share/doc/keepalived-1.2.7/keepalived.conf.vrrp.localcheck

    ! Configuration File for keepalived
    global_defs {
        notification_email {
            root@localhost
        }
        notification_email_from [email protected]
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
    }
    #vrrp_script chk_sched_down {
    #    script "[ -e /etc/keepalived/down ] && exit 1 || exit 0"
    #    interval 2
    #    weight -50
    #    fall 2
    #    rise 1
    #}
    # this is where the check is mainly defined
7. Worked example: checking nginx health
    vrrp_script chk_nginx {
        script "killall -0 nginx"
        interval 1
        weight -5
        fall 2
        rise 1
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 107
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.147
        }
        # Call the nginx check here. Note: do not test by just running
        # `killall nginx` -- the resources will not fail over that way.
        track_script {
            chk_nginx
            #chk_sched_down
        }
        notify_master "/etc/rc.d/init.d/nginx start"
        notify_backup "/etc/rc.d/init.d/nginx stop"
        notify_fault "/etc/rc.d/init.d/nginx stop"
    }
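Why `killall -0 nginx` works as a health check: signal 0 is never delivered; the kernel only verifies that a matching process exists, so the command's exit status is the health result. A small sketch using `kill -0` on PIDs we control ($$ is this shell's own PID, which certainly exists):

```shell
# Demonstrate signal-0 liveness checking with PIDs we control.
check_alive() {
    kill -0 "$1" 2>/dev/null && echo up || echo down
}

check_alive "$$"        # prints: up  (the current shell exists)
sleep 0 & pid=$!
wait "$pid"             # child reaped: its PID no longer exists
check_alive "$pid"      # prints: down
```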
On the backup node, just change (1) the priority and (2) the state to BACKUP.
8. Extension: a script that checks nginx health and sends mail
vim chk_nginx.sh
pwd
Define notify hooks that call the script to check nginx health and send mail:
The script lives under /etc/keepalived/:

    notify_master "/etc/keepalived/chk_nginx.sh master"
    notify_backup "/etc/keepalived/chk_nginx.sh backup"
    notify_fault  "/etc/keepalived/chk_nginx.sh fault"
Copy the keepalived mail-notification script.
Script contents:
    #!/bin/bash
    # Author: MageEdu <[email protected]>
    # description: An example of notify script
    #
    vip=172.16.1.147
    contact='root@localhost'

    notify() {
        mailsubject="`hostname` to be $1: $vip floating"
        mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
        echo $mailbody | mail -s "$mailsubject" $contact
    }

    case "$1" in
    master)
        notify master
        /etc/rc.d/init.d/nginx start
        exit 0
        ;;
    backup)
        notify backup
        /etc/rc.d/init.d/nginx stop
        exit 0
        ;;
    fault)
        notify fault
        /etc/rc.d/init.d/nginx stop
        exit 0
        ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
        ;;
    esac
chmod +x chk_nginx.sh
Both nodes need the script and the matching configuration changes.
Then restart keepalived and test with mail.
IV. Using keepalived to generate ipvs rules automatically
Drawback of master/backup: one director sits idle.
DNS round-robin resolves to the directors.
The ipvs rules must differ on each director.
Dual-master model: DNS round-robin across the two VIPs, and the real servers are themselves balanced round-robin.
For DNS round-robin, the VRRP VIPs must be public addresses.
Implementing LVS with keepalived
node1 ip: 172.16.1.143  vip: 172.16.1.148  director (lamp+nginx+keepalived)
node2 ip: 172.16.1.144  director (lamp+nginx+keepalived)
node3 ip: 172.16.1.145  vip: 172.16.1.148  real server
1. Master/backup ipvs with keepalived
Node1
    [root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    [root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    [root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    [root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    [root@localhost ~]# ifconfig lo:0 172.16.1.148 netmask 255.255.255.255 broadcast 172.16.1.148 up
    [root@localhost ~]# route add -host 172.16.1.148 dev lo:0

    vim /etc/keepalived/keepalived.conf

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 43
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.148
        }
    }
    virtual_server 172.16.1.148 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
        protocol TCP
        sorry_server 127.0.0.1 80

        real_server 172.16.1.145 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
node2: change the state to BACKUP and lower the priority.
Verification
node2
yum install ipvsadm -y
ipvsadm -L -n
Check whether the real-server rules were generated automatically.
node1
yum install ipvsadm -y
ipvsadm -L -n
Check whether the real-server rules were generated automatically.
We can also browse to 172.16.1.148 to verify that ipvs is working.
2. ipvs in keepalived dual-master mode
Dual-master mode
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 43
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass kingken
        }
        virtual_ipaddress {
            172.16.1.148
        }
    }
    vrrp_instance VI_2 {
        state BACKUP
        interface eth0
        virtual_router_id 50
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1kingken
        }
        virtual_ipaddress {
            172.16.1.149
        }
    }
    virtual_server 172.16.1.148 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
        protocol TCP
        sorry_server 127.0.0.1 80

        real_server 172.16.1.145 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    virtual_server 172.16.1.149 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
        protocol TCP
        sorry_server 127.0.0.1 80

        real_server 172.16.1.145 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
On the other node, as above, just adjust the priorities and swap MASTER/BACKUP.
node3
[root@localhost ~]# ifconfig lo:0 172.16.1.148 netmask 255.255.255.255 broadcast 172.16.1.148 up
[root@localhost ~]# route add -host 172.16.1.148 dev lo:0
Note: on node3 the web server's virtual host must be defined as <VirtualHost *:80>.
Testing is the same as above.