DR mode also needs three virtual machines, and all three only need a "public" IP, but this mode additionally uses a VIP. The roles and IPs are:
dir (director) 192.168.33.127
rs1 (real server) 192.168.33.128
rs2 (real server) 192.168.33.129
VIP 192.168.33.100
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.168.33.2 # on rs1, change the gateway back to the original one
# service network restart
Restarting network (via systemctl): [ OK ]
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.168.33.2 # on rs2, change the gateway back to the original one
# service network restart
Restarting network (via systemctl): [ OK ]
# vim /usr/local/sbin/lvs_dr.sh
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=192.168.33.100
rs1=192.168.33.128
rs2=192.168.33.129
#note the NIC name here
ifdown ens33
ifup ens33 #the two steps above restart the NIC, to avoid adding the IP twice
ifconfig ens33:1 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens33:1
$ipv -C #clear any existing ipvsadm rules
$ipv -A -t $vip:80 -s wrr #add the virtual service, weighted round-robin
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1 #-g: direct-routing (DR) mode, weight 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
# vim /usr/local/sbin/lvs_dr_rs.sh
#!/bin/bash
vip=192.168.33.100
#bind the vip on lo so that the rs can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
#the following tunes arp kernel parameters so that the rs does not answer
#arp requests for the vip; only the director should resolve the vip
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
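The echo commands above take effect immediately but are lost on reboot. If you want the ARP settings to survive a reboot, one option (a sketch; /etc/sysctl.conf is the standard location on CentOS 7) is to append them there and reload:

```
# append to /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

Then apply them with `sysctl -p`.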
# bash /usr/local/sbin/lvs_dr.sh # run the script on dir
Device 'ens33' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
# echo "this is rs1" > /usr/share/nginx/html/index.html # nginx on rs1 was installed via yum
# bash /usr/local/sbin/lvs_dr_rs.sh # run the script on rs1
# echo "this is rs2" > /usr/local/nginx/html/index.html # nginx on rs2 was compiled from source
# bash /usr/local/sbin/lvs_dr_rs.sh # run the script on rs2
Testing with a browser is not very illustrative here, and you cannot simply curl the VIP on dir itself; I used another virtual machine on the same subnet to curl the VIP.
Remember to start the nginx service on both rs1 and rs2. During my test I forgot to start the yum-installed nginx, so rs1 kept refusing connections, a small but instructive mistake.
# curl 192.168.33.100
this is rs1
# curl 192.168.33.100
this is rs2
# curl 192.168.33.100
this is rs1
# curl 192.168.33.100
this is rs2
# curl 192.168.33.100
this is rs1
# curl 192.168.33.100
this is rs2
# curl 192.168.33.100
this is rs1
# curl 192.168.33.100
this is rs2
# curl 192.168.33.100
this is rs1
# curl 192.168.33.100
this is rs2
As you can see, the experiment succeeded: LVS is doing the load balancing. Since the wrr algorithm is used and rs1 and rs2 both have weight 1, requests simply alternate between rs1 and rs2.
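Running curl ten times by hand works, but a small tally makes the distribution obvious at a glance. The sketch below feeds the tally a pasted copy of the ten replies so it runs anywhere; on the test machine you would generate the input with the real curl loop shown in the comment:

```shell
#!/bin/bash
# On the test machine in the same subnet you would produce the input with:
#   for i in $(seq 1 10); do curl -s http://192.168.33.100/; done
# Here the ten replies are pasted in so the tally itself is self-contained.
sort <<'EOF' | uniq -c
this is rs1
this is rs2
this is rs1
this is rs2
this is rs1
this is rs2
this is rs1
this is rs2
this is rs1
this is rs2
EOF
```

With weights of 1 and 1, the tally should show an even 5/5 split between rs1 and rs2.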
In an LVS setup, whether NAT mode or DR mode, when a back-end RS goes down the director still forwards requests to the dead RS, which causes problems.
To solve this, we can install keepalived on the director; keepalived has LVS functionality built in. A complete keepalived+LVS architecture needs two directors for high availability: one provides the scheduling service and the other acts as a standby.
Here I use only one master keepalived and omit the standby. The roles and IPs of each machine are:
master keepalived (director) 192.168.33.127
rs1 (real server) 192.168.33.128
rs2 (real server) 192.168.33.129
VIP 192.168.33.100
# yum install -y keepalived
# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    #on the standby server this is BACKUP
    state MASTER
    #the NIC the vip is bound to is ens33; yours may differ, change it here
    interface ens33
    #must be the same on master and standby (51 is an arbitrary example value)
    virtual_router_id 51
    #on the standby server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        #the password is arbitrary but must match on master and standby
        auth_pass aminglinux
    }
    virtual_ipaddress {
        192.168.33.100
    }
}
virtual_server 192.168.33.100 80 {
    #(query realserver status every 10 seconds)
    delay_loop 10
    #(scheduling algorithm; the ipvsadm output below shows wlc)
    lb_algo wlc
    #(DR mode)
    lb_kind DR
    #(connections from the same IP go to the same realserver within 60 seconds)
    persistence_timeout 60
    #(use TCP to check realserver status)
    protocol TCP
    real_server 192.168.33.128 80 {
        #(weight)
        weight 100
        TCP_CHECK {
            #(time out after 10 seconds with no response)
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.33.129 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
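The config comments already say what changes on the standby: state and priority. If you later add the backup director the text mentions, its vrrp_instance would look roughly like this (a sketch; the virtual_router_id and auth_pass values here are examples and must match whatever the master actually uses, and the virtual_server block is identical to the master's):

```
vrrp_instance VI_1 {
    state BACKUP          #MASTER on the primary
    interface ens33
    virtual_router_id 51  #must match the master
    priority 90           #lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux  #must match the master
    }
    virtual_ipaddress {
        192.168.33.100
    }
}
```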
# ipvsadm -C #clear the earlier ipvsadm rules, -C (uppercase)
# systemctl restart network #remove the VIP set up earlier
# bash /usr/local/sbin/lvs_dr_rs.sh # run the script on rs1
# bash /usr/local/sbin/lvs_dr_rs.sh # run the script on rs2
# systemctl start keepalived
# ps aux |grep keepalived
root 2803 1.0 0.0 118676 1400 ? Ss 14:52 0:00 /usr/sbin/keepalived -D
root 2804 2.2 0.1 120924 3104 ? S 14:52 0:00 /usr/sbin/keepalived -D
root 2805 1.6 0.1 120800 2392 ? S 14:52 0:00 /usr/sbin/keepalived -D
# ipvsadm -ln #check the current connection counts
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.33.100:80 wlc persistent 60
-> 192.168.33.128:80 Route 100 0 0
-> 192.168.33.129:80 Route 100 2 2
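The columns to watch during failover are Weight and ActiveConn. A small awk sketch that pulls out just the real servers and their active connections; here it is fed a pasted copy of the output above so it is self-contained, while on the director you would pipe `ipvsadm -ln` into the same awk command:

```shell
#!/bin/bash
# Print "real-server ActiveConn" for every Route entry.
# On the director: ipvsadm -ln | awk '/->/ && /Route/ {print $2, $5}'
awk '/->/ && /Route/ {print $2, $5}' <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.33.100:80 wlc persistent 60
  -> 192.168.33.128:80 Route 100 0 0
  -> 192.168.33.129:80 Route 100 2 2
EOF
```

The header line is skipped because it lacks the word Route; only the two real-server rows are printed.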
Stop the nginx service on rs2 and check again:
# systemctl stop nginx
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.33.100:80 wlc persistent 60
-> 192.168.33.128:80 Route 100 1 1 #only rs1's connections remain here
Start nginx on rs2 again and check once more:
# systemctl start nginx
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.33.100:80 wlc persistent 60
-> 192.168.33.128:80 Route 100 1 2
-> 192.168.33.129:80 Route 100 1 0 #rs2's connections are back, though the browser still shows rs1's content because of the 60-second persistence
At this point, the experiment is complete.
Further reading:
mysql+keepalived
The meaning of the arp_ignore and arp_announce parameters in LVS load balancing
How to run LVS DR mode with only one public IP