Normally, a Keepalived + Nginx software load-balancing setup is built as a two-node master/backup pair. Today we test a three-node, preemptive (as opposed to non-preemptive) Keepalived cluster, that is, a cluster with one master node and two backup nodes.
Environment: CentOS 7.9 + Keepalived 2.2.8
Virtual machines: 192.168.223.31, 192.168.223.32, 192.168.223.33
VIP: 192.168.223.30
I will not go into the Keepalived installation in much detail here; the basic commands below are for reference only:
# mkdir -p /usr/local/keepalived /etc/keepalived
# cd /usr/local/src/keepalived-2.2.8
# ./configure --prefix=/usr/local/keepalived --sysconfdir=/etc/keepalived
# make
# make install
Note: before running ./configure, the gcc toolchain must be installed on each virtual machine.
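On my test machines only gcc was needed, but depending on which Keepalived features get compiled in, the build may also want the OpenSSL and libnl development headers. A typical package set on CentOS 7 (adjust to your own environment) would be:
# yum install -y gcc openssl-devel libnl3-devel
Because of the --sysconfdir option used above, the installed keepalived will read /etc/keepalived/keepalived.conf by default.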
Now for the main part: the keepalived configuration on the three virtual machines.
# 192.168.223.31 - keepalived.conf - master default
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

# fails (exit 1) when /etc/keepalived/down exists, so a node can be taken out of rotation manually
vrrp_script chk_state_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 2
    weight 3
}

# periodic Nginx health check
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 123        # must be identical on all three nodes
    priority 120                 # highest priority in the cluster
    advert_int 1
    track_interface {
        ens33
    }
    unicast_src_ip 192.168.223.31
    unicast_peer {
        192.168.223.32
        192.168.223.33
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.30
    }
    track_script {
        chk_nginx
        chk_state_down
    }
}
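All three configurations reference /etc/keepalived/chk_nginx.sh, which is not listed above. It is a simple health-check script; a minimal sketch is shown below (the Nginx binary path and the restart-then-recheck behavior are assumptions, adjust them to your own policy), and it must be made executable (chmod +x) on all three nodes:
#!/bin/bash
# /etc/keepalived/chk_nginx.sh - exit 0 if Nginx is healthy, non-zero otherwise
if ! pgrep -x nginx > /dev/null; then
    # try to bring Nginx back once before reporting failure
    # (binary path is an assumption - adjust to your installation)
    /usr/local/nginx/sbin/nginx
    sleep 2
    if ! pgrep -x nginx > /dev/null; then
        exit 1
    fi
fi
exit 0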
# 192.168.223.32 - keepalived.conf - backup
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_state_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 2
    weight 3
}

vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 123
    priority 110
    advert_int 1
    track_interface {
        ens33
    }
    unicast_src_ip 192.168.223.32
    unicast_peer {
        192.168.223.31
        192.168.223.33
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.30
    }
    track_script {
        chk_nginx
        chk_state_down
    }
}
# 192.168.223.33 - keepalived.conf - backup
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_state_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 2
    weight 3
}

vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 123
    priority 100
    advert_int 1
    track_interface {
        ens33
    }
    unicast_src_ip 192.168.223.33
    unicast_peer {
        192.168.223.31
        192.168.223.32
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.223.30
    }
    track_script {
        chk_nginx
        chk_state_down
    }
}
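With the three configuration files in place, start keepalived on each node and make sure it comes up on boot. The commands below assume the systemd unit that the 2.2.8 source build installs on a systemd host; if your build did not install one, launch /usr/local/keepalived/sbin/keepalived directly instead:
# systemctl enable keepalived
# systemctl start keepalived
# systemctl status keepalived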
To avoid the split-brain problems that multicast VRRP can run into (for example, when the network filters multicast traffic), this three-node cluster uses unicast instead.
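With unicast, each node sends its VRRP advertisements directly to the peers listed under unicast_peer. A quick way to verify this on any node is to capture VRRP traffic (IP protocol 112); you should see advertisements going from the current MASTER's real address to each peer address instead of to the multicast group 224.0.0.18:
# tcpdump -i ens33 -nn 'ip proto 112'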
Now let's briefly walk through how the VIP floats between the three nodes:
1) When the operating system and the Keepalived service are healthy on all three nodes, the VIP (192.168.223.30) sits on the MASTER node (192.168.223.31), as shown in the output below:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:2c:75:e6 brd ff:ff:ff:ff:ff:ff
inet 192.168.223.31/24 brd 192.168.223.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.223.30/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::46fd:9b9a:4d5e:fec0/64 scope link noprefixroute
valid_lft forever preferred_lft forever
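To reproduce the failovers described in the following steps, take a node out of service, for example by stopping its keepalived service or powering the VM off (a sketch; the systemd unit is assumed to be the one installed by the source build):
# systemctl stop keepalived    # simulate a keepalived/service failure
# pkill keepalived             # alternative if no systemd unit is installed
# shutdown -h now              # or simulate a complete node failure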
2) When the MASTER node (192.168.223.31) goes down, or its service becomes unavailable, the VIP is forcibly migrated (i.e. preempted), according to the configured priorities, to whichever of the two backup nodes has the higher priority, in this case 192.168.223.32:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:3e:8d:ae brd ff:ff:ff:ff:ff:ff
inet 192.168.223.32/24 brd 192.168.223.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.223.30/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::468b:5c93:8d8d:35c0/64 scope link noprefixroute
valid_lft forever preferred_lft forever
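Besides checking ip addr, you can watch the VRRP state transition in the system log of the node that takes over (keepalived logs to syslog; the exact message wording varies slightly between versions):
# tail -f /var/log/messages | grep -i keepalived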
3) When both 192.168.223.31 and 192.168.223.32 go down, or their keepalived services are unavailable, the VIP floats to the last remaining backup node, 192.168.223.33:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:12:6a:b5 brd ff:ff:ff:ff:ff:ff
inet 192.168.223.33/24 brd 192.168.223.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.223.30/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::1683:cd8a:8f10:e4ae/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4) When the 192.168.223.32 system and its keepalived service recover, the VIP is preempted back by 192.168.223.32.
5) When the systems and keepalived services of both 192.168.223.31 and 192.168.223.32 recover, the VIP is preempted back by 192.168.223.31.
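To reproduce steps 4 and 5, bring keepalived back up on the recovered node(s); since preemption is enabled by default, the running node with the highest priority takes the VIP back:
# systemctl start keepalived    # or start /usr/local/keepalived/sbin/keepalived directly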
For a non-preemptive setup, the two biggest differences from the preemptive configuration are:
First, in a non-preemptive cluster every node is configured with state BACKUP;
Second, the nopreempt option has to be added to keepalived.conf (see the sketch below).
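A rough sketch of the non-preemptive variant (only the changed part of vrrp_instance is shown; everything else stays as in the configurations above, and the per-node priorities still decide who wins when an election actually happens):
vrrp_instance VI_1 {
    state BACKUP       # every node is BACKUP in a non-preemptive cluster
    nopreempt          # a recovering higher-priority node will not take the VIP back
    interface ens33
    virtual_router_id 123
    priority 120       # still different per node, e.g. 120 / 110 / 100
    # ... remaining settings unchanged ...
}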
Comparing the cost-effectiveness of preemptive and non-preemptive clusters:
I will not demonstrate VIP failover in a non-preemptive cluster here; if you are interested, you can test it yourself.
Compared with a preemptive cluster, a non-preemptive one saves one VIP switch (the switch back after the original master recovers), which is its main advantage. In production, however, realizing that advantage requires every node in the cluster to be provisioned with the same resources (memory, CPU, disk and so on); otherwise an under-provisioned backup node may not be able to carry the load and business access could suffer. Yet provisioning every node identically wastes a lot of capacity, because two of the nodes sit idle most of the time. So in terms of cost-effectiveness the preemptive cluster is the better fit for production: the two backup nodes can be given half the resources, or even less, because as long as the master node can recover within a controlled amount of time it will take the VIP back. And from my testing, the VIP failover is essentially imperceptible to users.
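A simple way to see what a user experiences during a failover is to poll the VIP from a client machine while stopping the MASTER. This assumes Nginx answers HTTP on port 80 on every node (adjust the URL to your own setup):
# while true; do curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 1 http://192.168.223.30/; sleep 1; done
Typically only the one or two requests issued around the moment of the switch fail or time out, which matches the observation above that the failover is essentially transparent to users.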