keepalived + nginx for service high availability

Some readers may share the doubt I had at first: if nginx already load-balances the application, why add keepalived? The way I see it, keepalived guards against the host itself failing: once the nginx host goes down, or the nginx service crashes, the whole application becomes unreachable.
upstream is for load balancing: it distributes requests across multiple backend servers to improve overall performance.
keepalived is for failover: it switches seamlessly between master and backup nodes to improve system availability.
In practice the two are combined: upstream provides load balancing, and keepalived provides master/backup switchover, giving both higher availability and better load distribution.

keepalived is software for building high availability. It guarantees availability by providing a virtual IP address plus a state-detection and switchover mechanism for a set of services (proxy servers, load balancers, and so on). The key points:

Cluster building: keepalived groups a set of server nodes into a high-availability cluster. Every node runs the same service, and the nodes monitor each other's state to decide which is master and which is backup.

Virtual IP management: keepalived assigns the cluster a virtual IP (VIP) that automatically floats with master/backup switchover, so clients always connect to the active master node.

Health checks and failover: keepalived periodically checks the health of the nodes in the cluster, including service state and network reachability. When the master node or its service fails, keepalived quickly promotes the backup to master, keeping the service continuously available.

State advertisement: the master periodically sends VRRP advertisements, so master and backup always agree on which node currently owns the VIP.

In short, keepalived combines cluster building, VIP management, health checks with failover, and VRRP state advertisements to guarantee high availability, letting the service switch seamlessly between master and backup nodes.

Enough talk, let's get hands-on!

My setup is two nginx instances load-balancing two tomcat services, with keepalived making nginx highly available. This is a test-lab topology: I spun up two VMs to demonstrate the configuration. In production, nginx and tomcat are best deployed on separate hosts rather than co-located.

IP               Services
192.168.21.100   keepalived, nginx, tomcat
192.168.21.101   keepalived, nginx, tomcat

First, deploy the tomcat services

To keep things simple, tomcat is deployed with docker:
[root@localhost ~]# docker run -d --name tomcat8 -p 8080:8080 tomcat:8.5.34-jre8-alpine
335c8d68b7bbf03db2081019980fd8799178c8891127a970ac298c5501b72402
To tell the two tomcats apart, each serves a page under its index path that records the host's IP:
[root@localhost ~]# curl http://192.168.21.100:8080/index/index.html
<h1>192.168.21.100</h1>
[root@localhost ~]# curl http://192.168.21.101:8080/index/index.html
<h1>192.168.21.101</h1>
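The article doesn't show how those pages were created. One hypothetical way, assuming the container name tomcat8 from above and the official tomcat image's default webapps path, is to build the page on the host and copy it in with docker cp:

```shell
# run on each machine with that machine's own IP
IP=192.168.21.100
mkdir -p /tmp/index
echo "<h1>${IP}</h1>" > /tmp/index/index.html

# copy the directory into tomcat's appBase so it is served at
# http://<host>:8080/index/index.html  (run this on the docker host)
# docker cp /tmp/index tomcat8:/usr/local/tomcat/webapps/
cat /tmp/index/index.html   # prints <h1>192.168.21.100</h1>
```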

Configure the nginx master and backup

Install nginx and keepalived; both machines need them:
[root@localhost ~]# yum install nginx nginx-mod-stream keepalived -y
Configure nginx on 192.168.21.100 and 192.168.21.101; both machines use identical nginx configuration. Verifying through nginx on port 80:
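The article doesn't include the nginx configuration itself. Below is a minimal sketch of what both machines could run (the upstream name tomcat_pool is an assumption, not the author's exact config): it proxies requests to both tomcat backends, which is why repeated requests can return either machine's page:

```nginx
upstream tomcat_pool {
    # both tomcat backends; nginx distributes requests round-robin by default
    server 192.168.21.100:8080;
    server 192.168.21.101:8080;
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://tomcat_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Since the tests hit an HTTP path (/index/), plain http proxying is enough here; the nginx-mod-stream module installed above would only be needed for raw TCP/UDP load balancing.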
[root@localhost ~]# curl http://192.168.21.101/index/
<h1>192.168.21.101</h1>
[root@localhost ~]# curl http://192.168.21.100/index/
<h1>192.168.21.100</h1>

Configure the keepalived service
Master keepalived configuration

[root@localhost keepalived]# cat keepalived.conf
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh 80"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; master and backup of the same instance share it, unique per instance on the network
    priority 100    # priority; set the backup server to 90
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.21.119/24
    }
    track_script {
        check_nginx
    }
}
Configure the nginx check script:
[root@localhost keepalived]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
# keepalived port-check script
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        # match the exact port at the end of the local-address column,
        # so that checking port 80 does not also match 8080
        PORT_PROCESS=$(ss -lnt | awk '{print $4}' | grep -c ":${CHK_PORT}$")
        if [ "$PORT_PROCESS" -eq 0 ];then
                echo "Port $CHK_PORT is not in use, failing the check."
                exit 1
        fi
else
        echo "Check port can't be empty!"
        exit 1
fi
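Before handing the script to keepalived, you can sanity-check its failure path by hand. A sketch using a throwaway copy of the port-check logic and a port that is assumed to be unused (59999):

```shell
# write a throwaway copy of the port-check logic
cat > /tmp/check_nginx.sh <<'EOF'
#!/bin/bash
CHK_PORT=$1
[ -z "$CHK_PORT" ] && { echo "Check port can't be empty!"; exit 1; }
LISTENERS=$(ss -lnt | awk '{print $4}' | grep -c ":${CHK_PORT}$")
[ "$LISTENERS" -eq 0 ] && { echo "Port $CHK_PORT is not listening"; exit 1; }
exit 0
EOF
chmod +x /tmp/check_nginx.sh

# nothing should be listening on 59999, so the check is expected to fail
if /tmp/check_nginx.sh 59999; then echo "port is up"; else echo "check failed as expected"; fi
```

keepalived runs the script on every check interval; a nonzero exit fails the track_script, which (with no weight configured) faults the instance so the backup can claim the VIP. Remember to chmod +x the real /etc/keepalived/check_nginx.sh on both machines.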


Backup keepalived configuration

[root@localhost keepalived]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP   # the router_id identifies this machine, so it should differ from the master's
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh 80"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; must match the master's so they form one instance
    priority 90    # priority; lower than the master's 100
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.21.119/24
    }
    track_script {
        check_nginx
    }
}
Configure the check script (identical to the master's):
[root@localhost keepalived]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
# keepalived port-check script
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        # match the exact port at the end of the local-address column,
        # so that checking port 80 does not also match 8080
        PORT_PROCESS=$(ss -lnt | awk '{print $4}' | grep -c ":${CHK_PORT}$")
        if [ "$PORT_PROCESS" -eq 0 ];then
                echo "Port $CHK_PORT is not in use, failing the check."
                exit 1
        fi
else
        echo "Check port can't be empty!"
        exit 1
fi
# start the keepalived service on both the master and the backup
[root@localhost keepalived]# systemctl daemon-reload
[root@localhost keepalived]# systemctl start keepalived
[root@localhost keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Verifying high availability

Looking at the host's IPs, you can see that the VIP 192.168.21.119 has been assigned on 192.168.21.100, and the service is reachable through the VIP. Now let's shut down 192.168.21.100 and see what happens when we access the VIP again.

[root@localhost ~]# curl http://192.168.21.119/index/
<h1>192.168.21.101</h1>
[root@localhost ~]# curl http://192.168.21.119/index/
<h1>192.168.21.100</h1>
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4b:5b:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.100/24 brd 192.168.21.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.21.119/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::398:cd4:2675:77f5/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::600b:df60:1fe4:581f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:67:9d:42:74 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:67ff:fe9d:4274/64 scope link
       valid_lft forever preferred_lft forever
11: veth893f427@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 1e:93:ab:18:7e:59 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1c93:abff:fe18:7e59/64 scope link
       valid_lft forever preferred_lft forever

After 192.168.21.100 is shut down, the VIP 192.168.21.119 is still reachable: it has landed on 192.168.21.101, and all requests are now served by that machine. That completes the high-availability verification.

[root@localhost ~]# curl http://192.168.21.119/index/
<h1>192.168.21.101</h1>
[root@localhost ~]# curl http://192.168.21.119/index/
<h1>192.168.21.101</h1>
[root@localhost ~]# curl http://192.168.21.119/index/
<h1>192.168.21.101</h1>

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6c:e8:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.21.101/24 brd 192.168.21.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.21.119/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::398:cd4:2675:77f5/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d3:e2:24:69 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d3ff:fee2:2469/64 scope link
       valid_lft forever preferred_lft forever
9: veth4386b10@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 12:92:c1:78:0e:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1092:c1ff:fe78:e28/64 scope link
       valid_lft forever preferred_lft forever
