LVS-DR Mode with Keepalived: Configuring a Highly Available Load-Balancing Cluster

Four virtual machines are used:

LVS-DR-master:      192.168.70.133  (HA1)

LVS-DR-backup:      192.168.70.135  (HA2)

LVS-DR-VIP:         192.168.70.70

LVS-DR-Realserver1: 192.168.70.137  (RS1)

LVS-DR-Realserver2: 192.168.70.136  (RS2)

1. Configure the DR model on RS1, i.e. bind the VIP on the loopback interface

#!/bin/bash
#
# Script to start LVS DR real server.
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=192.168.70.70
host=`/bin/hostname`

case "$1" in
start)
    # Start LVS-DR real server on this machine:
    # reset lo, suppress ARP replies for the VIP, then bind the VIP to lo:0.
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    ;;
stop)
    # Stop LVS-DR real server: remove the loopback alias and restore ARP behaviour.
    /sbin/ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
status)
    # Status of LVS-DR real server.
    islothere=`/sbin/ifconfig lo:0 | grep $VIP`
    isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
    if [ ! "$islothere" -o ! "$isrothere" ]; then
        # Either the route or the lo:0 device was not found.
        echo "LVS-DR real server Stopped."
    else
        echo "LVS-DR real server Running."
    fi
    ;;
*)
    # Invalid entry.
    echo "$0: Usage: $0 {start|status|stop}"
    exit 1
    ;;
esac

Run the script:

[root@zhu3 ~]# sh real-server.sh start
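
To confirm the binding took effect, the loopback alias and the ARP kernel parameters can be inspected directly (a quick sanity check; the output is not shown here since it depends on the host):

[root@zhu3 ~]# ifconfig lo:0          # should show 192.168.70.70 with netmask 255.255.255.255
[root@zhu3 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore /proc/sys/net/ipv4/conf/all/arp_announce
                                      # should print 1 and 2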

2. Repeat the same operations on RS2
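
For example, the same script can be copied to RS2 and run there (assuming root SSH access between the hosts and that the script was saved as real-server.sh, as above):

[root@zhu3 ~]# scp real-server.sh 192.168.70.136:/root/
[root@zhu4 ~]# sh real-server.sh start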

3. Create a different test page on RS1 and RS2, then start the web service

[root@zhu3 ~]# vim /var/www/jiang/zhu.html

192.168.70.137 is my name

[root@zhu3 ~]# /opt/nginx/sbin/nginx
[root@zhu3 ~]# /opt/php/sbin/php-fpm start
Starting php_fpm done
[root@zhu3 ~]#

[root@zhu4 ~]# vim /var/www/jiang/zhu.html

My name is 192.168.70.136

[root@zhu4 ~]# /opt/nginx/sbin/nginx
[root@zhu4 ~]# /opt/php/sbin/php-fpm start
Starting php_fpm done

The real server configuration is now complete.
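
Before putting the directors in front of them, it is worth fetching each page directly to make sure both backends answer on port 80 (assuming the nginx document root points at /var/www/jiang):

curl http://192.168.70.137/zhu.html    # expected: 192.168.70.137 is my name
curl http://192.168.70.136/zhu.html    # expected: My name is 192.168.70.136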

4. Install ipvsadm and keepalived
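
The installation itself is not shown here; on a RHEL/CentOS system with the usual repositories both packages can typically be installed on HA1 and HA2 straight from yum (adjust to your environment if you build from source):

[root@zhu1 ~]# yum install -y ipvsadm keepalived
[root@zhu2 ~]# yum install -y ipvsadm keepalived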

5. The keepalived configuration file on HA1

! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chtopnet
    }
    virtual_ipaddress {
        192.168.70.70
    }
}

virtual_server 192.168.70.70 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.70.137 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.70.136 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
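
The HTTP_GET blocks make keepalived request the path / on each real server and expect an HTTP 200; a real server that fails the check is removed from the LVS table. Roughly the same check can be reproduced by hand with curl to see what the health checker sees (only an approximation of what keepalived does internally):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.70.137/
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.70.136/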

The keepalived configuration file on HA2

[root@zhu2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass chtopnet
    }
    virtual_ipaddress {
        192.168.70.70
    }
}

virtual_server 192.168.70.70 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.70.137 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.70.136 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

This file differs from the one on HA1 in only two places:

state BACKUP   # this node is the backup
priority 99    # must be lower than the priority on the master (HA1)
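
If the HA1 file is simply copied over and only these two lines are edited, a quick diff against the saved HA1 copy makes the difference explicit (keepalived.conf.ha1 is just a hypothetical name for that copy):

[root@zhu2 ~]# diff /etc/keepalived/keepalived.conf /root/keepalived.conf.ha1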

6. Start keepalived on HA1 and then on HA2

[root@zhu1 ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
[root@zhu2 ~]# service keepalived start
Starting keepalived:                                       [  OK  ]
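
The VRRP election can be followed in the system log; keepalived normally logs through syslog, typically to /var/log/messages (the exact file depends on the syslog configuration). HA1 should report entering MASTER state and HA2 BACKUP state:

[root@zhu1 ~]# grep -i keepalived /var/log/messages | tail
[root@zhu2 ~]# grep -i keepalived /var/log/messages | tail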

7. Check whether keepalived started correctly

[root@zhu1 ~]# ip a
1: lo:  mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:d3:3b:5e brd ff:ff:ff:ff:ff:ff
inet 192.168.70.133/24 brd 192.168.70.255 scope global eth0
inet 192.168.70.70/32 scope global eth0
inet6 fe80::20c:29ff:fed3:3b5e/64 scope link
valid_lft forever preferred_lft forever
3: sit0:  mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
[root@zhu2 ~]# ip a
1: lo:  mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:ba:9d:f1 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.135/24 brd 192.168.70.255 scope global eth0
inet6 fe80::20c:29ff:feba:9df1/64 scope link
valid_lft forever preferred_lft forever
3: sit0:  mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0

Check whether the ipvsadm rules have been loaded:

[root@zhu1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.70:80 rr persistent 50
-> 192.168.70.136:80            Route   1      0          0
-> 192.168.70.137:80            Route   1      0          0
# rules loaded successfully
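
The service can now also be tested from a client through the VIP. Note that persistence_timeout 50 keeps a given client on the same real server for 50 seconds, so the round-robin alternation only shows up across different clients or after the timeout expires (page path as in step 3):

curl http://192.168.70.70/zhu.html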

Verify that a failed real server is automatically removed from the cluster, and automatically added back once it recovers:

[root@zhu1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.70:80 rr persistent 50
-> 192.168.70.136:80            Route   1      0          0
-> 192.168.70.137:80            Route   1      0          0
# stop the nginx service on RS1
[root@zhu3 ~]# killall nginx
[root@zhu3 ~]# netstat -lntp | grep 80
# check again on the director
[root@zhu1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.70:80 rr persistent 50
-> 192.168.70.136:80            Route   1      0          0

[root@zhu3 ~]# netstat -lntp | grep 80
[root@zhu3 ~]# /opt/nginx/sbin/nginx
[root@zhu3 ~]# ps -ef | grep nginx
root      4549     1  0 03:51 ?        00:00:00 nginx: master process /opt/nginx/sbin/nginx
www       4550  4549  1 03:51 ?        00:00:00 nginx: worker process
www       4551  4549  1 03:51 ?        00:00:00 nginx: worker process
www       4552  4549  2 03:51 ?        00:00:00 nginx: worker process
www       4553  4549  1 03:51 ?        00:00:00 nginx: worker process
www       4554  4549  0 03:51 ?        00:00:00 nginx: cache manager process
www       4555  4549  0 03:51 ?        00:00:00 nginx: cache loader process
root      4557  3062  0 03:51 pts/0    00:00:00 grep nginx
# check again: the node has been added back automatically
[root@zhu1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.70:80 rr persistent 50
-> 192.168.70.137:80            Route   1      0          0
-> 192.168.70.136:80            Route   1      0          0
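
The removal and re-insertion of a real server can also be watched live on the director while nginx is stopped and started again, for example:

[root@zhu1 ~]# watch -n 1 ipvsadm -L -n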

Test high availability: check whether the VIP fails over.

[root@zhu2 ~]# ip a
1: lo:  mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:ba:9d:f1 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.135/24 brd 192.168.70.255 scope global eth0
inet6 fe80::20c:29ff:feba:9df1/64 scope link
valid_lft forever preferred_lft forever
3: sit0:  mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
# stop keepalived on HA1, then check again on HA2
[root@zhu2 ~]# ip a
1: lo:  mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:ba:9d:f1 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.135/24 brd 192.168.70.255 scope global eth0
inet 192.168.70.70/32 scope global eth0
inet6 fe80::20c:29ff:feba:9df1/64 scope link
valid_lft forever preferred_lft forever
3: sit0:  mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0

As shown above, the VIP has moved to HA2: the failover succeeded.
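
Because HA1 is configured with state MASTER and the higher priority (100 versus 99), it will preempt and take the VIP back by default once its keepalived is started again; this can be confirmed the same way (a final sanity check):

[root@zhu1 ~]# service keepalived start
[root@zhu1 ~]# ip a | grep 192.168.70.70    # the VIP should be back on HA1
[root@zhu2 ~]# ip a | grep 192.168.70.70    # and no longer on HA2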