LVS Load-Balancing High-Availability Cluster and Sorry Server

Lab Environment

1. host1, host2, host3, and host4 all run CentOS 7.3;
2. host3 and host4 act as Real Servers, each running a web service;
3. host1 and host2 form the LVS load-balancing high-availability cluster;
4. Prerequisites for the HA cluster:
    (1) The clocks on all nodes must be synchronized;
            ntp, chrony
    (2) Make sure iptables and SELinux do not get in the way;
    (3) Nodes can reach each other by hostname (not strictly required for keepalived);
            the /etc/hosts file is the recommended way;
    (4) The interface each node uses for cluster traffic must support MULTICAST;
            class D addresses: 224-239
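For prerequisites (1) and (3), a quick sketch of the setup commands (the host1/host2 addresses below are assumptions for illustration; only the .13/.14 addresses actually appear later in this lab):

```shell
# (1) Time synchronization on every node, here using chrony:
yum -y install chrony
systemctl enable --now chronyd
chronyc sources            # verify the node is syncing

# (3) Hostname resolution via /etc/hosts on every node.
#     The host1/host2 addresses are assumed for this sketch:
cat >> /etc/hosts <<'EOF'
192.168.10.11 host1
192.168.10.12 host2
192.168.10.13 host3
192.168.10.14 host4
EOF
```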

Configuring the Real Servers

  • Install Nginx and replace the test page:
# RS1:
[root@host3 ~]#yum -y install nginx
[root@host3 ~]#echo "RS1:host3" > /usr/share/nginx/html/index.html
[root@host3 ~]#cat /usr/share/nginx/html/index.html
RS1:host3
[root@host3 ~]#systemctl start nginx
[root@host3 ~]#ss -ntl
State      Recv-Q Send-Q  Local Address:Port
LISTEN     0      128     *:80
LISTEN     0      128     *:22
LISTEN     0      100     127.0.0.1:25
LISTEN     0      128     :::80
LISTEN     0      128     :::22
LISTEN     0      100     ::1:25
[root@host3 ~]#
------------------------------------------------------------------------------------------------------
# RS2:
[root@host4 ~]#yum -y install nginx
[root@host4 ~]#echo "RS2:host4" > /usr/share/nginx/html/index.html
[root@host4 ~]#cat /usr/share/nginx/html/index.html
RS2:host4
[root@host4 ~]#systemctl start nginx
[root@host4 ~]#ss -ntl
State      Recv-Q Send-Q  Local Address:Port
LISTEN     0      128     *:80
LISTEN     0      128     *:22
LISTEN     0      100     127.0.0.1:25
LISTEN     0      128     :::80
LISTEN     0      128     :::22
LISTEN     0      100     ::1:25
[root@host4 ~]#
------------------------------------------------------------------------------------------------------
# Test:
>.host1:
[root@host1 ~]#curl http://192.168.10.13
RS1:host3
[root@host1 ~]#curl http://192.168.10.14
RS2:host4
[root@host1 ~]#
>.host2:
[root@host2 ~]#curl http://192.168.10.13
RS1:host3
[root@host2 ~]#curl http://192.168.10.14
RS2:host4
[root@host2 ~]#
  • Adjust the kernel parameters on the Real Servers and configure the VIP:

[Figure 1: the kernel-parameter/VIP setup script on RS1]
[Figure 2: making the script executable on RS1]
[Figure 3: running the script on RS1 and checking the IP address on lo:0]
[Figure 4: the same setup script on RS2]
[Figure 5: making the script executable on RS2]
[Figure 6: running the script on RS2 and checking the IP address on lo:0]
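The script itself only survives as a screenshot; a typical LVS-DR Real Server setup script looks roughly like the sketch below. The script name is hypothetical; the VIP 192.168.10.88 is taken from the keepalived configuration used later in this lab.

```shell
#!/bin/bash
# lvs_dr_rs.sh -- hypothetical name; prepares a Real Server for LVS-DR.
# VIP taken from the keepalived configuration (192.168.10.88).
vip=192.168.10.88

case $1 in
start)
    # Suppress ARP replies/announcements for the VIP so that only the
    # director answers ARP requests for it.
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    # Bind the VIP to lo:0 with a host mask, plus a host route.
    ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac
```

After `chmod +x` and running it with `start` as root, `ifconfig lo:0` should show the VIP, which matches what the screenshots above demonstrate.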

Building the LVS Cluster

Install ipvsadm and keepalived on both director (VS) hosts, host1 and host2:

# VS1:
[root@host1 ~]#yum -y install ipvsadm keepalived

------------------------------------------------------------------------------------------------------

# VS2:
[root@host2 ~]#yum -y install ipvsadm keepalived

Configuring the LVS High-Availability Directors

  • Configure and start the keepalived service on VS host1:
[root@host1 keepalived]#cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id host1
   vrrp_mcast_group4 224.89.51.18
}

vrrp_instance VI_1 {
    state MASTER                       <-- host1 is the MASTER
    interface ens33
    virtual_router_id 89
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass nQyVIaY1
    }
    virtual_ipaddress {
        192.168.10.88                  <-- the virtual IP (VIP)
    }
}

virtual_server 192.168.10.88 80 {      <-- the virtual service on host1
    delay_loop 6
    lb_algo rr                         <-- LVS scheduling algorithm
    lb_kind DR                         <-- LVS forwarding mode
    nat_mask 255.255.255.255
    protocol TCP

    real_server 192.168.10.13 80 {     <-- first Real Server, host3
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.10.14 80 {     <-- second Real Server, host4
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@host1 keepalived]#systemctl start keepalived
  • Configure and start the keepalived service on VS host2:

On host2, change state in vrrp_instance VI_1 to BACKUP, give it a lower priority (e.g. 95, so the two nodes do not tie at 100), and set router_id to host2; everything else stays the same as on VS1.

[root@host2 keepalived]#cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id host2
   vrrp_mcast_group4 224.89.51.18
}

vrrp_instance VI_1 {
    state BACKUP                       <-- host2 is a BACKUP
    interface ens33
    virtual_router_id 89
    priority 95                        <-- lower than the MASTER's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass nQyVIaY1
    }
    virtual_ipaddress {
        192.168.10.88                  <-- the virtual IP (VIP)
    }
}

virtual_server 192.168.10.88 80 {      <-- the virtual service on host2
    delay_loop 6
    lb_algo rr                         <-- LVS scheduling algorithm
    lb_kind DR                         <-- LVS forwarding mode
    nat_mask 255.255.255.255
    protocol TCP

    real_server 192.168.10.13 80 {     <-- first Real Server, host3
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 192.168.10.14 80 {     <-- second Real Server, host4
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@host2 keepalived]#systemctl start keepalived

Testing

I will not spell out every test step here; the screenshot below gives the idea:

[Figure 7: client requests to the VIP alternating between RS1 and RS2]
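For reference, a minimal test sequence, assuming the VIP 192.168.10.88 from the configuration above:

```shell
# On the active director: inspect the IPVS rule set that keepalived created.
ipvsadm -Ln

# From a client: with the rr scheduler the responses should alternate
# between "RS1:host3" and "RS2:host4".
for i in 1 2 3 4; do curl -s http://192.168.10.88; done

# Failover: stop keepalived on host1 and confirm host2 takes over the VIP,
# then repeat the curl loop -- the service should keep answering.
systemctl stop keepalived      # run on host1
ip addr show ens33             # run on host2; the VIP should now appear here
```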

Sorry Server

Install Nginx on the two directors as well, and set the page content to a "sorry" message:

  • On VS1:
[root@host1 keepalived]#yum -y install nginx
[root@host1 keepalived]#echo "Sorry from Director 1" > /usr/share/nginx/html/index.html
[root@host1 keepalived]#systemctl start nginx
  • On VS2:
[root@host2 keepalived]#yum -y install nginx
[root@host2 keepalived]#echo "Sorry from Director 2" > /usr/share/nginx/html/index.html
[root@host2 keepalived]#systemctl start nginx
  • Edit the configuration files:

This is simple: in keepalived.conf on both directors, add one line -- sorry_server 127.0.0.1 80 -- inside the virtual_server block.

[Figure 8: the sorry_server line added to the virtual_server block]
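In context, the addition sits next to the other virtual_server parameters; only the sorry_server line is new, the rest is from the configuration shown earlier:

```
virtual_server 192.168.10.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP

    sorry_server 127.0.0.1 80          <-- served when all real servers are down

    real_server 192.168.10.13 80 {
    ...
```

When every real_server fails its HTTP_GET health check, keepalived replaces them with the sorry_server entry, so clients hitting the VIP get the local Nginx "sorry" page instead of a connection error.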
  • Test the Sorry Server:
[Figure 9: with both Real Servers down, the client receives the sorry page]
