A Deep Dive into the LVS Load-Balancing Cluster


For background, see the introductory article written earlier: https://www.jianshu.com/p/468949678396


If you are new to LVS (to be honest, I was just as confused the first time I touched it; what I wrote back then was little more than Ctrl+C and Ctrl+V), this article tries to explain things in short, plain, concrete language, with detailed hands-on demonstrations.


  • What is a cluster?
    To cut costs and improve system performance, enterprises aggregate a group of independent computers and manage them as a single unit. When a client interacts with the cluster, the cluster behaves like one server. Main advantages: high availability, high scalability, high performance and good cost-effectiveness.

HA (high-availability) cluster: improves how reliably a service stays online
LB (load-balancing) cluster: increases the concurrent processing capacity of a service
HPC (high-performance computing) cluster: focuses on processing massive workloads


  • LB
    Load-balancing clusters come in hardware and software flavours; software load balancers split further into layer-4 (LVS) and layer-7 (nginx, haproxy).

  • Components of LVS (see the command sketch right after this list):
    ipvsadm: the command-line tool used to manage cluster services; it runs in user space on Linux.
    ipvs: the kernel module that provides the LVS service; it runs in kernel space (think of it as a framework whose rules you add by hand with ipvsadm to make ipvs do its work).
  • LVS forwarding modes: four of them, LVS-NAT, LVS-DR, LVS-TUN and LVS-FULLNAT.
    Each mode is covered in detail in the experiments below.
  • LVS terminology
    DS: Director Server, the front-end load-balancer node.
    RS: Real Server, a back-end server that does the actual work.
    VIP: the IP address exposed to the outside and targeted by user requests.
    DIP: Director Server IP, the address the director uses to talk to the internal hosts.
    RIP: Real Server IP, the address of a back-end server.
    CIP: Client IP, the address of the requesting client.
  • LVS scheduling algorithms:
    LVS schedulers fall into two groups, static and dynamic:
    Static methods: scheduling is decided by the algorithm alone; the starting point is fair, and requests are handed out evenly regardless of how many each server is currently processing.
    Dynamic methods: scheduling also considers the current load on the back-end real servers; what matters is whether the resulting distribution is fair, not how much was assigned in the past.
    Static scheduling algorithms (4): rr, wrr, sh, dh
    Dynamic scheduling algorithms (6): lc, wlc, sed, nq, lblc, lblcr
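
To make the division of labour concrete, here is a minimal command sketch of how ipvsadm (user space) drives the ipvs table (kernel space). The addresses are placeholders, not part of the lab environment used below:

# make sure the ip_vs kernel module is present (ipvsadm normally loads it on demand)
modprobe ip_vs
lsmod | grep ip_vs
# create a virtual TCP service on a VIP, scheduled with weighted round robin
ipvsadm -A -t 192.168.0.100:80 -s wrr
# attach two real servers in DR mode (-g) with different weights
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.2:80 -g -w 2
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.3:80 -g -w 1
# list the resulting kernel table
ipvsadm -ln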

Experiment 1: the LVS-DR model

Environment: rhel6.5
server1:172.25.4.1
server2:172.25.4.2
server3:172.25.4.3
server4:172.25.4.4

1. Add the required yum repositories


The [HighAvailability], [LoadBalancer], [ResilientStorage] and [ScalableFileSystem] repositories are added:

[root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo 
[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=http://172.25.4.250/rhel6.5
enabled=1
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.4.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.4.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.4.250/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.4.250/rhel6.5/ScalableFileSystem
gpgcheck=0

2. Install and configure ipvsadm

Key ipvsadm options:
-A --add-service   add a new virtual server record to the kernel's virtual server table, i.e. create a new virtual service.
-E --edit-service   edit a virtual server record in the kernel's virtual server table.
-D --delete-service   delete a virtual server record from the kernel's virtual server table.
-C --clear   clear all records from the kernel's virtual server table.
-R --restore   restore virtual server rules.
-S --save   save the virtual server rules in a format readable by -R.
-a --add-server   add a real server record to a virtual server record, i.e. add a new real server to an existing virtual service.
-e --edit-server   edit a real server record inside a virtual server record.
-d --delete-server   delete a real server record from a virtual server record.
-L|-l --list   display the kernel's virtual server table.
-Z --zero   zero the counters of the virtual service table (clears the current connection counts and so on).

[root@server1 ~]# yum install ipvsadm -y
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr                            
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.3:80 -g
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:80 rr
  -> 172.25.4.2:80                Route   1      0          0         
  -> 172.25.4.3:80                Route   1      0          0   

[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr
-A adds a new virtual server, -t means a TCP service, -s selects the scheduling algorithm
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.3:80 -g
-a adds a real server, -r specifies the back-end real server to forward to, -g selects DR (gateway) mode

Add the VIP to the NIC on server1
[root@server1 ~]# ip addr add 172.25.4.100/24 dev eth0



Install httpd on the back-end servers and create a test page on each (see the sketch below)
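
A minimal sketch of this step, assuming each real server simply serves its own host name so the scheduler's behaviour can be seen from the client:

# on server2 (repeat on server3, echoing server3 instead)
yum install httpd -y
echo server2 > /var/www/html/index.html
/etc/init.d/httpd start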



Test from the physical host



No response yet. Check the rules on the director: because the rr algorithm is used, 6 requests to the VIP mean each back end is scheduled 3 times; watch the InActConn column.



You will find that InActConn keeps growing as the number of connections increases, which shows the ipvsadm rules are configured correctly and requests are being scheduled (a quick way to watch this is shown below).
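
A convenient way to watch the counters while the client keeps sending requests (a simple sketch):

# refresh the IPVS table every second and watch ActiveConn / InActConn change
watch -n 1 'ipvsadm -ln'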
3. Add the virtual address on the back-end servers
[root@server2 ~]# ip addr add 172.25.4.100/32 dev lo
[root@server2 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 172.25.4.100/32 scope global lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

Do the same on server3 (a sketch follows).
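
For completeness, a sketch of the equivalent commands on server3, mirroring the server2 steps above:

ip addr add 172.25.4.100/32 dev lo    # bind the VIP to the loopback interface with a /32 mask
ip addr show lo                       # confirm the VIP is present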

4. Test from the physical host



It seems to work!

[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at 52:54:00:22:e8:61 [ether] on br0
[root@ivans rhel6.5]# arp -d 172.25.4.100
[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at  on br0
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
server2
server3
server2
server3
server2
server3
server2
server3
server2
server3
[root@ivans rhel6.5]# arp -d 172.25.4.100
[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at  on br0
[root@ivans rhel6.5]# ping 172.25.4.100
PING 172.25.4.100 (172.25.4.100) 56(84) bytes of data.
64 bytes from 172.25.4.100: icmp_seq=1 ttl=64 time=0.314 ms
64 bytes from 172.25.4.100: icmp_seq=2 ttl=64 time=0.124 ms
64 bytes from 172.25.4.100: icmp_seq=3 ttl=64 time=0.127 ms
^C
--- 172.25.4.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.124/0.188/0.314/0.089 ms
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
server2
server2
server2
server2
server2
server2
server2
server2
server2
server2
[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at 52:54:00:12:c1:be [ether] on br0      the cached MAC has changed to server2's address; this is the element of chance!

Repeat the ARP-cache flush a few more times and you will see that the earlier round-robin result was pure chance: whichever machine answers the ARP request for the VIP first wins.
The tests above show that once the client's ARP cache is cleared, the director's rules can be bypassed and the client may end up talking to a back-end server directly.
5. Remove this element of chance by making server2 and server3 refuse to answer ARP requests for the VIP, using the arptables_jf tool to adjust the rules

[root@server2 ~]# arptables -A IN -d 172.25.4.100 -j DROP
[root@server2 ~]# arptables -A OUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.2
[root@server2 ~]# arptables -L
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.4.100         anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.4.100         anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server2 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server2 ~]# arptables -L
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.4.100         anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.4.100         anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server2 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server2 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server2 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]
[root@server3 ~]# arptables -A IN -d 172.25.4.100 -j DROP
[root@server3 ~]# arptables -A OUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.3
[root@server3 ~]# arptables -L
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.4.100         anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.4.100         anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server3 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server3 ~]# arptables -L
Chain IN (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
DROP       anywhere             172.25.4.100         anywhere           anywhere           any    any        any        any       

Chain OUT (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
mangle     172.25.4.100         anywhere             anywhere           anywhere           any    any        any        any       --mangle-ip-s server3 

Chain FORWARD (policy ACCEPT)
target     source-ip            destination-ip       source-hw          destination-hw     hlen   op         hrd        pro       
[root@server3 ~]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server3 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]

[root@server3 ~]# arptables -A IN -d 172.25.4.100 -j DROP                               drop incoming ARP requests that ask for 172.25.4.100
[root@server3 ~]# arptables -A OUT -s 172.25.4.100 -j mangle --mangle-ip-s 172.25.4.3   rewrite the source of outgoing ARP replies from 172.25.4.100 to 172.25.4.3

6. Test from the physical host

[root@ivans rhel6.5]# arp -an | grep 100
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
server2
server3
server2
server3
server2
server3
server2
server3
server2
server3
[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at 52:54:00:22:e8:61 [ether] on br0
[root@ivans rhel6.5]# arp -d 172.25.4.100
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
server2
server3
server2
server3
server2
server3
server2
server3
server2
server3
[root@ivans rhel6.5]# arp -d 172.25.4.100
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
server2
server3
server2
server3
server2
server3
server2
server3
server2
server3
[root@ivans rhel6.5]# arp -an | grep 100
? (172.25.4.100) at 52:54:00:22:e8:61 [ether] on br0

This time the load balancing still works even after the ARP cache is cleared.
Success!

7. Check on the director

[root@server1 ~]# ipvsadm -lnc
IPVS connection entries
pro expire state       source             virtual            destination
TCP 01:56  FIN_WAIT    172.25.4.250:48460 172.25.4.100:80    172.25.4.2:80
TCP 01:57  FIN_WAIT    172.25.4.250:48463 172.25.4.100:80    172.25.4.3:80
TCP 01:56  FIN_WAIT    172.25.4.250:48461 172.25.4.100:80    172.25.4.3:80
TCP 01:56  FIN_WAIT    172.25.4.250:48459 172.25.4.100:80    172.25.4.3:80
TCP 01:56  FIN_WAIT    172.25.4.250:48458 172.25.4.100:80    172.25.4.2:80
TCP 01:57  FIN_WAIT    172.25.4.250:48464 172.25.4.100:80    172.25.4.2:80
TCP 01:57  FIN_WAIT    172.25.4.250:48462 172.25.4.100:80    172.25.4.2:80
TCP 01:57  FIN_WAIT    172.25.4.250:48466 172.25.4.100:80    172.25.4.2:80
TCP 01:57  FIN_WAIT    172.25.4.250:48467 172.25.4.100:80    172.25.4.3:80
TCP 01:57  FIN_WAIT    172.25.4.250:48465 172.25.4.100:80    172.25.4.3:80

The effect is clear: the connections alternate between the two real servers.


Experiment 2: adding health checks

1. Stop httpd on server2

[root@server2 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
curl: (7) Failed connect to 172.25.4.100:80; Connection refused
server3
curl: (7) Failed connect to 172.25.4.100:80; Connection refused
server3
curl: (7) Failed connect to 172.25.4.100:80; Connection refused
server3
curl: (7) Failed connect to 172.25.4.100:80; Connection refused
server3
curl: (7) Failed connect to 172.25.4.100:80; Connection refused
server3

Without health checking, half of the requests fail; this is not acceptable.
2. Install ldirectord, which works with LVS to provide health checks

ldirectord was written specifically to monitor LVS: it watches the state of the servers in the server pool. It runs as a daemon on the IPVS node and periodically sends requests to every real server in the pool. If a server does not respond, ldirectord considers it unavailable and removes it from the IPVS table via ipvsadm; when a later check succeeds again, the server is added back the same way.

[root@server1 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y

Look at the files the package installs

[root@server1 ~]# rpm -ql ldirectord
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz

Copy and edit the configuration file

[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
# Sample for an http virtual service
virtual=172.25.4.100:80
        real=172.25.4.2:80 gate
        real=172.25.4.3:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z
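
The sample ldirectord.cf also has a global section; a short sketch of the directives commonly set there (the values shown are illustrative assumptions, not taken from this lab):

# give a real server 3 seconds to answer a check before it is considered dead
checktimeout=3
# run the checks every second
checkinterval=1
# re-read the configuration automatically when the file changes
autoreload=yes
# remove a failed real server from the table instead of keeping it with weight 0
quiescent=no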

Clear all records from the kernel's virtual server table

[root@server1 ~]# ipvsadm -C

Start the service

[root@server1 ~]# /etc/init.d/ldirectord start
Starting ldirectord... success

Because httpd on server2 was stopped earlier, only server3 is left when the rules are listed:

[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:80 rr
  -> 172.25.4.3:80                Route   1      0          0      

Stop the httpd service on server3 as well

[root@server3 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Check the rules on the director

[root@server1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:80 rr
  -> 127.0.0.1:80                 Local   1      0          0   

Start httpd on server1: when every back-end server is down, the fallback entry makes the director itself act as the server of last resort

[root@server1 ~]# echo "Site under maintenance" > /var/www/html/index.html
[root@server1 ~]# cat /var/www/html/index.html 
Site under maintenance
[root@server1 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.4.1 for ServerName
                                                           [  OK  ]

Test from the physical host; health checking is now in place

[root@ivans rhel6.5]# for i in {1..10};do curl 172.25.4.100;done
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance
Site under maintenance

Experiment 3: a high-availability plus load-balancing cluster

Install keepalived so that a fault in the director does not bring the whole system down

keepalived is a service used in cluster management to guarantee high availability and avoid a single point of failure. Failover is implemented with the VRRP protocol: the master node broadcasts heartbeat packets at a fixed interval to announce that it is alive; if the backup nodes stop receiving these broadcasts for a while, they conclude the master has failed and run their takeover logic to claim the master's IP resources and services. When the master recovers, the backup releases the resources and returns to its previous role, giving automatic master/backup failover.
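
A minimal sketch of how this VRRP behaviour can be observed from the shell; eth0 and the VIP match this lab, and the capture filter simply uses VRRP's IP protocol number:

# see which node currently holds the VIP
ip addr show eth0 | grep 172.25.4.100
# watch the advertisements the VRRP master sends about once per second (protocol 112)
tcpdump -i eth0 -nn 'ip proto 112'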
1. Stop the ldirectord service on server1

[root@server1 ~]# /etc/init.d/ldirectord stop
Stopping ldirectord... success
[root@server1 ~]# chkconfig ldirectord off     also disable it at boot
[root@server1 ~]# chkconfig ldirectord --list
ldirectord      0:off    1:off    2:off    3:off    4:off    5:off    6:off

2. Install keepalived on the director

[root@server1 ~]# ls
keepalived-2.0.6.tar.gz  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# tar zxf keepalived-2.0.6.tar.gz 
[root@server1 ~]# ls
keepalived-2.0.6  keepalived-2.0.6.tar.gz  ldirectord-3.9.5-3.1.x86_64.rpm

Compile and install

[root@server1 keepalived-2.0.6]# yum install gcc -y
[root@server1 keepalived-2.0.6]# yum install openssl-devel -y
[root@server1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV
[root@server1 keepalived-2.0.6]# make && make install
[root@server1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d
[root@server1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server1 keepalived-2.0.6]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server1 keepalived-2.0.6]# which keepalived
/sbin/keepalived
[root@server1 keepalived-2.0.6]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Dec  5 20:21 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server1 keepalived-2.0.6]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server1 keepalived-2.0.6]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@server1 keepalived-2.0.6]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]

3. Install keepalived on server4 in the same way

[root@server4 keepalived-2.0.6]# ls
aclocal.m4  bin          compile        configure     COPYING  genhash     keepalived          lib          Makefile.in  README.md
ar-lib      bin_install  config.log     configure.ac  depcomp  INSTALL     keepalived.spec     Makefile     missing      snap
AUTHOR      ChangeLog    config.status  CONTRIBUTORS  doc      install-sh  keepalived.spec.in  Makefile.am  README       TODO
[root@server4 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d
[root@server4 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/keepalived/ /etc
[root@server4 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@server4 keepalived-2.0.6]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
[root@server4 keepalived-2.0.6]# ll /etc/init.d/keepalived
lrwxrwxrwx 1 root root 48 Dec  5 20:35 /etc/init.d/keepalived -> /usr/local/keepalived/etc/rc.d/init.d/keepalived
[root@server4 keepalived-2.0.6]# chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived 
[root@server4 keepalived-2.0.6]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@server4 keepalived-2.0.6]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]

4. Install ipvsadm on server4 as well

[root@server1 keepalived]# scp /etc/yum.repos.d/rhel-source.repo 172.25.4.4:/etc/yum.repos.d/
[root@server4 keepalived-2.0.6]# yum install ipvsadm -y

5. Write the keepalived configuration on server1 and server4

[root@server4 keepalived-2.0.6]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP                           note: this node is the BACKUP
    interface eth0
    virtual_router_id 51
    priority 50                                 a lower priority than server1
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.4.100
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 80 {
        weight 1
    TCP_CHECK {
        connect_timeout 3
        retry 3
        delay_before_retry 3
    }
}
    real_server 172.25.4.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
[root@server1 keepalived]# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER                          this is the primary node, so the state is MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.4.100
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 80 {
        weight 1
    TCP_CHECK {
        connect_timeout 3
        retry 3
        delay_before_retry 3
    }
}
    real_server 172.25.4.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

After modifying the configuration files, restart the keepalived service on both nodes; a quick verification sketch follows.
[root@server1 keepalived]# /etc/init.d/keepalived restart
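
A quick check that keepalived has taken over the LVS configuration (a sketch; run it on the current master, server1 in this lab):

# keepalived should have created the virtual server and attached both real servers
ipvsadm -ln
# the master should also hold the VIP
ip addr show eth0 | grep 172.25.4.100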
6. Test


Delete the virtual IP that was added manually earlier (keepalived now manages it):
[root@server1 keepalived]# ip addr del 172.25.4.100/24 dev eth0



Make the server1 host crash:
[root@server1 keepalived]# echo c >/proc/sysrq-trigger
Test



The service stays up: the virtual IP floats to server4, which takes over server1's address, achieving high availability.

Restart the keepalived service on server1 and the virtual IP immediately floats back to server1 (it has the higher priority):

[root@server1 ~]# /etc/init.d/keepalived status
keepalived is stopped
[root@server1 ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@server1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:22:e8:61 brd ff:ff:ff:ff:ff:ff
    inet 172.25.4.1/24 brd 172.25.4.255 scope global eth0
    inet6 fe80::5054:ff:fe22:e861/64 scope link 
       valid_lft forever preferred_lft forever
[root@server1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:22:e8:61 brd ff:ff:ff:ff:ff:ff
    inet 172.25.4.1/24 brd 172.25.4.255 scope global eth0
    inet 172.25.4.100/32 scope global eth0
    inet6 fe80::5054:ff:fe22:e861/64 scope link 
       valid_lft forever preferred_lft forever

Stop the httpd service on server2 to check that keepalived's health-check module is working; it is, and the dead real server is removed from the table.

[root@server2 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]

Test results: with the state set to BACKUP in its configuration, server4 acts as the standby director; when server1 stops, server4 automatically configures the virtual IP, takes over as director and keeps load balancing. keepalived also performs health checks, but unlike ldirectord it has no fallback entry here, so when all back-end servers are stopped the page served by httpd on server1 is not shown.

7. LVS + keepalived in practice: two services acting as master and backup for each other
While server1 is running, the standby machine sits idle, which wastes resources. We can run two services instead: server1 is the master for the first service and the backup for the second, while server4 is the backup for the first service and the master for the second.

Install and start the vsftpd service on server2 and server3 (server2 shown; server3 is sketched below)
[root@server2 ~]# yum install vsftpd -y
[root@server2 ~]# /etc/init.d/vsftpd start
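
For completeness, a sketch of the same steps on server3, plus an optional marker file so the two FTP back ends can be told apart (the file name is an illustrative assumption; /var/ftp/pub is the usual anonymous FTP root):

# on server3
yum install vsftpd -y
/etc/init.d/vsftpd start
# optional marker, repeated on server2 with its own name
echo server3 > /var/ftp/pub/hostname.txt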

[root@server1 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 15
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.4.100
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.4.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 115
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.4.200
    }
}
virtual_server 172.25.4.200 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.4.3 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

[root@server4 keepalived-2.0.6]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 15
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.4.100
    }
}

virtual_server 172.25.4.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.4.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 115
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.4.200
    }
}
virtual_server 172.25.4.200 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 172.25.4.2 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.4.3 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}

When editing the configuration on server1 and server4, only a few points need attention: FTP needs persistent connections, so persistence_timeout is set for its virtual server; because the two nodes are master and backup for each other, the state (and priority) of each vrrp_instance must be the opposite on the two machines; and vsftpd listens on port 21.
Remember to restart keepalived on both nodes after changing the configuration.

Then create the ARP rules for the new VIP on both back-end servers (server3 shown; server2 is sketched below)

[root@server3 pub]# ip addr add 172.25.4.200 dev lo
[root@server3 pub]#  arptables -A OUT -s 172.25.4.200 -j mangle --mangle-ip-s 172.25.4.3
[root@server3 pub]# arptables -A IN -d 172.25.4.200 -j DROP
[root@server3 pub]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
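
A sketch of the equivalent commands on server2, mirroring the server3 steps with the source address rewritten to server2's own IP:

ip addr add 172.25.4.200 dev lo                                       # bind the second VIP to loopback
arptables -A OUT -s 172.25.4.200 -j mangle --mangle-ip-s 172.25.4.2   # answer with the real address
arptables -A IN -d 172.25.4.200 -j DROP                               # never answer ARP requests for the VIP
/etc/init.d/arptables_jf save                                         # persist the rules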

Test:



After the experiment you can see that server1 is the master for the port-80 service (it holds 172.25.4.100) while server4 is the master for the port-21 service (it holds 172.25.4.200); checking which node carries which virtual IP makes this obvious.
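
A minimal sketch of how the two services can be exercised from the client (lftp is just one convenient FTP client; no particular output is implied):

# the web service behind 172.25.4.100 should still alternate between server2 and server3
for i in {1..4}; do curl 172.25.4.100; done
# the FTP service behind 172.25.4.200; with persistence_timeout the same real server keeps answering
lftp -e 'ls; bye' 172.25.4.200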


Experiment 4: the LVS-NAT model

This experiment builds on the environment of the previous (DR) experiment.

1. Add another NIC to the director server1, assign an IP address to it, and bring it up (a sketch follows)
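
A sketch of what this step might look like; the interface name eth1 and the /24 mask are assumptions, while the VIP 172.25.254.100 matches the ipvsadm rules used below:

# on server1: put the NAT VIP on the newly added, outward-facing NIC and bring it up
ip addr add 172.25.254.100/24 dev eth1
ip link set dev eth1 up
ip addr show eth1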



2. server2 and server3 remain the back-end servers; start httpd on both and set their default gateway to server1

[root@server2 ~]# route add default gw 172.25.4.1
[root@server3 ~]# route add default gw 172.25.4.1
[root@ivans ~]# ssh [email protected]
[email protected]'s password: 
Last login: Wed Dec  5 16:33:36 2018 from 172.25.4.250
[root@server2 ~]# /etc/init.d/httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 172.25.4.2 for ServerName
                                                           [  OK  ]

3. Add the ipvsadm rules on the director
First, start the ipvsadm service, then add the rules:
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s wrr
This creates the virtual server with the weighted round-robin (wrr) algorithm.
[root@server1 ~]# ipvsadm -a -t 172.25.254.100:80 -r 172.25.4.2 -m -w 1
-m selects NAT (masquerade) mode, -w sets the weight

[root@server1 ~]# /etc/init.d/ipvsadm status
ipvsadm: IPVS is not running.
[root@server1 ~]# /etc/init.d/ipvsadm start
[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -A -t 172.25.254.100:80 -s wrr
[root@server1 ~]# ipvsadm -a  -t 172.25.254.100:80 -r 172.25.4.2 -m -w 1
[root@server1 ~]# ipvsadm -a  -t 172.25.254.100:80 -r 172.25.4.3 -m -w 1
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.100:http wrr
  -> server2:http                 Masq    1      0          0         
  -> server3:http                 Masq    1      0          0   

4. Enable routing (IP forwarding) on server1

[root@server1 ~]# vim /etc/sysctl.conf 
net.ipv4.ip_forward = 1
[root@server1 ~]# sysctl -a | grep ip_forward
net.ipv4.ip_forward = 1

[root@server1 ~]# sysctl -p      reload the settings

5. Load the NAT module on server1 and restart the service

[root@server1 ~]# modprobe iptable_nat
[root@server1 ~]# /etc/init.d/ipvsadm restart
ipvsadm: Clearing the current IPVS table:                  [确定]
ipvsadm: Unloading modules:                                [确定]
ipvsadm: Clearing the current IPVS table:                  [确定]
ipvsadm: Applying IPVS configuration:                      [确定]

6. Test


When the httpd service on server3 is stopped, the requests scheduled to it simply fail: plain ipvsadm provides no health checking.




Experiment 5: the LVS-TUN model

This experiment reuses the environment of the previous one; rebooting the machines is enough to return to a clean starting state.

Initial state of the director


1. Create the tunnel interface on server1, server2 and server3 (all three need it, because the director and the real servers exchange packets directly through the tunnel), bring it up, and add the externally exposed VIP on it (server1 shown; the real servers are sketched below)

[root@server1 ~]# modprobe ipip                       load the IPIP tunnel module
[root@server1 ~]# ip addr add 172.25.4.100/24 dev tunl0           add the VIP on the tunnel interface
[root@server1 ~]# ip link set up tunl0               bring the tunnel up
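
A sketch of the matching steps on the real servers (server2 shown; server3 is identical apart from its own address). A /32 mask is used here as an assumption, so the VIP is not announced on the LAN:

# on server2 (and likewise on server3)
modprobe ipip
ip addr add 172.25.4.100/32 dev tunl0
ip link set dev tunl0 up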

2. Because all three machines sit on the same network segment and share the same VIP, the address would conflict; use arptables to stop server2 and server3 from announcing the VIP to the outside.

[root@server2 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]
[root@server2 ~]# arptables -A IN -d 172.25.4.100 -j DROP


[root@server3 ~]# /etc/init.d/arptables_jf start
Flushing all current rules and user defined chains:        [  OK  ]
Clearing all current rules and user defined chains:        [  OK  ]
Applying arptables firewall rules:                         [  OK  ]
[root@server3 ~]# arptables -A IN -d 172.25.4.100 -j DROP

3. Adjust the rp_filter parameter on server2 and server3 and start the httpd service (a sketch of the remaining commands follows)

[root@server2 ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
[root@server3 ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0
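
A sketch of the rest of this step; the extra rp_filter key is an assumption (the effective value combines the per-interface and the "all" settings), and starting httpd follows from the step description:

# on both real servers
sysctl -w net.ipv4.conf.all.rp_filter=0
/etc/init.d/httpd start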


4. On server1, clear the old rules and add new ones (-i selects tunnel forwarding)

[root@server1 ~]# ipvsadm -C
[root@server1 ~]# ipvsadm -A -t 172.25.4.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.2:80 -i
[root@server1 ~]# ipvsadm -a -t 172.25.4.100:80 -r 172.25.4.3:80 -i
[root@server1 ~]# ipvsadm -lnc
IPVS connection entries
pro expire state       source             virtual            destination
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.4.100:http rr
  -> server2:http                 Tunnel  1      0          0         
  -> server3:http                 Tunnel  1      0          0         


5. Test (a sketch of the client-side check follows)
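
The result is not shown here; a sketch of how the TUN setup can be verified from the client and the director (no particular output is claimed):

# from the client: requests to the VIP should alternate between server2 and server3
for i in {1..6}; do curl 172.25.4.100; done
# on the director: the forwarding method column should read Tunnel
ipvsadm -ln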

