[Building an LVS-DR Cluster]

In DR mode, the source and destination IP addresses of a packet are never changed along the entire path; only the MAC addresses are rewritten, because DR mode works at Layer 2.

Step 1: the request leaves the client: source IP CIP, destination IP VIP, source MAC CMAC, destination MAC VMAC;
Step 2: the request reaches the director, which picks a real server and rewrites only the MAC addresses: source IP CIP, destination IP VIP, source MAC DMAC, destination MAC RMAC;
Step 3: the real server receives the packet, processes it, and replies directly to the client (the response does not pass back through the director): source IP VIP (the address it holds on its lo interface), destination IP CIP, source MAC RMAC, destination MAC CMAC.
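The three hops can be traced with a tiny shell sketch (a toy model for illustration only, not part of the setup; the IP/MAC names follow the abbreviations used above):

```shell
#!/bin/sh
# Print the three hops as (src IP, dst IP, src MAC, dst MAC) tuples:
# the IP pair never changes in flight; only the MACs are rewritten.
# (The real server answers from the VIP it holds on lo.)
hop() { echo "$1: srcIP=$2 dstIP=$3 srcMAC=$4 dstMAC=$5"; }
hop "client -> director" CIP VIP CMAC VMAC
hop "director -> RS"     CIP VIP DMAC RMAC
hop "RS -> client"       VIP CIP RMAC CMAC
```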
Notes:
1) Forwarding by rewriting MAC addresses is the most efficient approach, but it relies on the switch's MAC address table;
2) A Layer-2 device cannot route, so broadcasts cannot cross a router; for the MAC-level (ARP) broadcasts to reach every node, all nodes must be on the same physical segment;
3) A VLAN isolates broadcasts, so to receive those broadcasts the nodes must also all be in the same VLAN.
In short: the director and all real servers must sit on the same physical segment, within the same broadcast domain.
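A quick sanity check for this requirement (a sketch using this lab's addresses; it does a plain /24 prefix comparison, not a general netmask calculation):

```shell
#!/bin/sh
# With a /24 mask, two addresses are on the same network iff their first
# three octets match. Addresses below are this lab's DIP, VIP and RIPs.
same_net24() {
    # compare everything before the last dot of two dotted-quad addresses
    [ "${1%.*}" = "${2%.*}" ]
}
DIP=192.168.131.107
for ip in 192.168.131.205 192.168.131.108 192.168.131.109; do
    if same_net24 "$DIP" "$ip"; then
        echo "$ip: same /24 as the DIP, OK for DR"
    else
        echo "$ip: NOT on the DIP's /24 -- DR cannot work"
    fi
done
```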

(Figure 1: cluster topology)
node1  192.168.131.107 (DIP)  192.168.131.205 (VIP)  director (load balancer)
node2  192.168.131.108                               real server 1
node3  192.168.131.109                               real server 2

I. Command-line method

Preparing the environment
1. Configure host IP addresses and add the VIP routes
node1 (director): configure the VIP 192.168.131.205 on a sub-interface (alias) and add a host route

[root@node1 ~]# ifconfig ens33:205 192.168.131.205 broadcast 192.168.131.205 netmask 255.255.255.255 up
[root@node1 ~]# route add -host 192.168.131.205 dev ens33:205
[root@node1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.107  netmask 255.255.255.0  broadcast 192.168.131.255
        ether 00:0c:29:d0:85:02  txqueuelen 1000  (Ethernet)
        RX packets 12782  bytes 1137493 (1.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8066  bytes 1497683 (1.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:205: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.205  netmask 255.255.255.255  broadcast 192.168.131.205
        ether 00:0c:29:d0:85:02  txqueuelen 1000  (Ethernet)
[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.131.2   0.0.0.0         UG    100    0        0 ens33
192.168.131.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.131.205 0.0.0.0         255.255.255.255 UH    0      0        0 ens33       
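The ifconfig/route commands above do not survive a reboot. One way to persist the VIP on CentOS 7 is an interface-alias config file (a sketch; the filename and values mirror this lab's naming, adjust to your environment):

```ini
# /etc/sysconfig/network-scripts/ifcfg-ens33:205 (hypothetical persistent VIP config)
DEVICE=ens33:205
IPADDR=192.168.131.205
NETMASK=255.255.255.255
BROADCAST=192.168.131.205
ONBOOT=yes
```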

node2 (192.168.131.108) and node3 (192.168.131.109): bind the VIP 192.168.131.205 on the loopback interface lo and add a host route

[root@node2 ~]# ifconfig lo:205 192.168.131.205 netmask 255.255.255.255 broadcast 192.168.131.205 up
[root@node2 ~]# route add -host 192.168.131.205 dev lo
[root@node2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.108  netmask 255.255.255.0  broadcast 192.168.131.255
        inet6 fe80::b6c0:3a78:4c0c:abf8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:6b:42:87  txqueuelen 1000  (Ethernet)
        RX packets 3452  bytes 288099 (281.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1635  bytes 362981 (354.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:205: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.131.205  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.131.2   0.0.0.0         UG    100    0        0 ens33
192.168.131.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.131.205 0.0.0.0         255.255.255.255 UH    0      0        0 lo

Note: why bind the VIP on the loopback interface?

1. DR mode rewrites only the MAC address, never the IP address, so for a real server to accept the forwarded packet it must itself own an address equal to the VIP.
2. The VIP must not be placed on the egress NIC, otherwise the real servers would answer the client's ARP requests for the VIP, corrupting the client/gateway ARP tables and breaking the whole cluster.

2. Set up a time server to keep the cluster hosts' clocks in sync
Server (192.168.131.107):

[root@node1 ~]# vim /etc/chrony.conf 
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 172.24.8.253 iburst
allow 192.168.131.0/24
local stratum 10
[root@node1 ~]# systemctl restart chronyd

Clients (192.168.131.108 and 192.168.131.109):

[root@node2 ~]# vim /etc/chrony.conf 
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.131.107 iburst
[root@node2 ~]# systemctl restart chronyd
[root@node2 ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 119.28.206.193                2  10     0  131m   +309us[ -101us] +/-   65ms
^? 84.16.73.33                   1  10     0  134m  -3923us[-4218us] +/-  111ms
^? 162.159.200.1                 3  10     0  133m  +6155us[+5860us] +/-  101ms
^? 94.130.49.186                 3  10     0  132m  +5978us[+5978us] +/-  113ms

3. On every cluster host, stop the firewall and disable SELinux

systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
SELINUX=disabled

Steps:
1. On the director node1, install ipvsadm and define the rules with the ipvsadm command: add the virtual server, add the real servers, and select DR mode (-g)

[root@node1 ~]# yum -y install ipvsadm
[root@node1 ~]# ipvsadm -At 192.168.131.205:80 -s rr
[root@node1 ~]# ipvsadm -at 192.168.131.205:80 -r 192.168.131.108:80 -g
[root@node1 ~]# ipvsadm -at 192.168.131.205:80 -r 192.168.131.109:80 -g
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.131.205:80 rr
  -> 192.168.131.108:80           Route   1      0          0         
  -> 192.168.131.109:80           Route   1      0          0
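As a side note, the rr scheduler selected with `-s rr` simply hands requests to the real servers in strict rotation, ignoring server load. A toy model (illustrative only, not actual LVS code; the addresses are this lab's RIPs):

```shell
#!/bin/sh
# Toy model of round-robin scheduling over the two real servers.
RS1=192.168.131.108
RS2=192.168.131.109
rr_pick() {
    # $1 = request sequence number (0-based); alternate between the two RIPs
    if [ $(( $1 % 2 )) -eq 0 ]; then echo "$RS1"; else echo "$RS2"; fi
}
for req in 0 1 2 3; do
    echo "request $req -> $(rr_pick "$req")"
done
```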

2. Install nginx on the two real servers, node2 and node3, for testing
node2:

[root@node2 ~]# yum install -y nginx-1.10.0-1.el7.ngx.x86_64.rpm
[root@node2 ~]# cd /usr/share/nginx/html
[root@node2 html]# mv index.html{,.bak}
[root@node2 html]# echo "web1 test page" > index.html
[root@node2 html]# ls
index.html  index.html.bak
[root@node2 html]# systemctl enable --now nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@node2 html]# curl localhost
web1 test page

Repeat the same steps on node3:

[root@node3 html]# echo "web2 test page" > index.html
[root@node3 html]# curl localhost
web2 test page

3. On the two real servers, node2 and node3, tune the /proc parameters to suppress ARP responses for the VIP

[root@node2 ~]# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore   # reply to an ARP request only when its target IP is a local address configured on the receiving interface
[root@node2 ~]# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce # ignore the IP packet's source address; use the most appropriate local address on the sending interface as the ARP source
[root@node2 ~]# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@node2 ~]# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Note: this is equivalent to putting the lines below in /etc/sysctl.conf and reloading with sysctl -p:
net.ipv4.conf.all.arp_ignore = 1     # reply to an ARP request only when its target IP is configured on the receiving interface
net.ipv4.conf.all.arp_announce = 2   # ignore the packet's source IP; use the best local address on the sending interface as the ARP source
net.ipv4.conf.lo.arp_ignore = 1      # the same settings on the loopback interface
net.ipv4.conf.lo.arp_announce = 2
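To make the four ARP settings survive a reboot they can be written to a sysctl configuration file. A safe sketch (it writes to a temporary path so it can run anywhere; on a real server you would target /etc/sysctl.conf or a file under /etc/sysctl.d/ and load it with sysctl -p):

```shell
#!/bin/sh
# Write the four ARP settings to a sysctl config file.
CONF=${CONF:-/tmp/lvs-arp.conf}
cat > "$CONF" <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
EOF
echo "wrote $(wc -l < "$CONF") settings to $CONF"
# on the real server: sysctl -p "$CONF"
```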
4. Test: from the outside host 192.168.131.207, access 192.168.131.205
(Figures 2 and 3: test screenshots)
Extra: /proc parameter reference:

arp_ignore controls whether the system replies to the ARP requests it receives. The commonly used values are 0, 1, 2 and 3 (4-8 are rarely used):

    0: reply to ARP requests for any local IP address (including addresses on the loopback interface), regardless of whether that IP is configured on the receiving interface;
    1: reply only when the target IP is a local address configured on the receiving interface;
    2: as 1, and additionally the ARP request's source IP must be on the same subnet as the receiving interface;
    3: do not reply when the requested local address has host scope; reply when its scope is global or link;
    4-7: reserved, unused;
    8: reply to no ARP requests at all.
  sysctl.conf holds both an all entry and per-interface (eth/lo) entries for arp_ignore; the larger of the two values takes effect.
 
arp_announce controls how the system chooses the source IP address of the ARP requests it sends. (Suppose the system is about to send a packet a through some NIC: a's source and destination IPs are known; the routing table fixes the outgoing NIC, and hence the source MAC; the only missing piece is the destination MAC, which must be resolved with an ARP request. The ARP request's target IP is obviously the IP whose MAC we need, but what is the ARP request's source IP? One might assume it must be packet a's source IP, but it need not be; the source IP of an ARP request is selectable, and arp_announce controls that selection.) The commonly used values are 0, 1 and 2:
    0: any local IP may serve as the ARP source, normally packet a's source IP.
    1: avoid, where possible, local addresses that do not belong to the sending NIC's subnet.
    2: ignore the IP packet's source address entirely and choose the most appropriate local address on the sending NIC.
  sysctl.conf likewise holds both an all entry and per-interface (eth/lo) entries for arp_announce; the larger of the two values takes effect.
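The "larger value wins" rule can be stated as a one-liner (an illustrative sketch only, not kernel code):

```shell
#!/bin/sh
# arp_ignore / arp_announce are read from both the "all" entry and the
# per-interface entry, and the LARGER value takes effect.
effective() {
    # $1 = value under conf/all, $2 = value under conf/<interface>
    if [ "$1" -gt "$2" ]; then echo "$1"; else echo "$2"; fi
}
echo "all=1 lo=0 -> effective $(effective 1 0)"   # the all entry wins
echo "all=0 lo=2 -> effective $(effective 0 2)"   # the lo entry wins
```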

II. Script method

1. Director node1: write /etc/init.d/lvs and register it as a system service

[root@node1 ~]# vim /etc/init.d/lvs
#!/bin/bash
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#  available server built on a cluster of real servers, with the load
#  balancer running on Linux.
# description: start LVS of DR
LOCK=/var/lock/ipvsadm.lock
VIP=192.168.131.205
RIP1=192.168.131.108
RIP2=192.168.131.109
DipName=ens33
. /etc/rc.d/init.d/functions
start() {
	PID=$(ipvsadm -Ln | grep ${VIP} | wc -l)
	if  [ $PID -gt 0 ];
	then
     	echo "The LVS-DR Server is already running"
  	else
		#Set the Virtual IP Address
		/sbin/ifconfig ${DipName}:205 $VIP broadcast $VIP netmask 255.255.255.255 up
		/sbin/route add -host $VIP dev ${DipName}:205
		#Clear IPVS Table
		/sbin/ipvsadm -C
		#Set Lvs
		/sbin/ipvsadm -At $VIP:80 -s rr
		/sbin/ipvsadm -at $VIP:80 -r $RIP1:80 -g
		/sbin/ipvsadm -at $VIP:80 -r $RIP2:80 -g
		/bin/touch $LOCK
     	#Run Lvs
		echo "starting LVS-DR Server is ok"   
	fi
}

stop() {
	#clear Lvs and vip
	/sbin/ipvsadm -C
	/sbin/route del -host $VIP dev ${DipName}:205
	/sbin/ifconfig ${DipName}:205 down >/dev/null
	rm -rf $LOCK
	echo "stopping LVS-DR server is ok !"
}
status(){
	if [ -e $LOCK ];
	then
    	echo "The LVS-DR Server is already running !"
  	else
   		echo "The LVS-DR Server is not running !"
	fi
}
case "$1" in
	start)
		start
   		;;
	stop)
		stop
		;;
	restart)
		stop
    	start
		;;
	status)
		status
		;;
	*)
		echo "Usage: $0 {start|stop|restart|status}"
		exit 1
esac
exit 0
[root@node1 ~]# chmod +x /etc/init.d/lvs

Register it as a system service and start it:

[root@node1 ~]# chkconfig --add lvs
[root@node1 ~]# chkconfig lvs on
[root@node1 ~]# chkconfig --list

Note: This output shows SysV services only and does not include
native systemd services. SysV configuration data
might be overridden by native systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

lvs            	0:off	1:off	2:on	3:on	4:on	5:on	6:off
netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@node1 ~]# systemctl start lvs

Check the VIP address and route configuration:

[root@node1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.107  netmask 255.255.255.0  broadcast 192.168.131.255
        ether 00:0c:29:d0:85:02  txqueuelen 1000  (Ethernet)
        RX packets 12782  bytes 1137493 (1.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8066  bytes 1497683 (1.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:205: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.205  netmask 255.255.255.255  broadcast 192.168.131.205
        ether 00:0c:29:d0:85:02  txqueuelen 1000  (Ethernet)
[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.131.2   0.0.0.0         UG    100    0        0 ens33
192.168.131.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.131.205 0.0.0.0         255.255.255.255 UH    0      0        0 ens33 

Check the IPVS table:

[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.131.205:80 rr
  -> 192.168.131.108:80           Route   1      0          0         
  -> 192.168.131.109:80           Route   1      0          0

2. node2 and node3: write /etc/init.d/lvs_rs and register it as a system service

[root@node2 ~]# vim /etc/init.d/lvs_rs
#!/bin/sh
#
# Startup script handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#  available server built on a cluster of real servers, with the load
#  balancer running on Linux.
# description: start LVS of DR-RIP
LOCK=/var/lock/ipvsadm.lock
VIP=192.168.131.205
. /etc/rc.d/init.d/functions
start() {
  PID=`ifconfig | grep lo:205 | wc -l`
  if [ $PID -ne 0 ];
  then
    echo "The LVS-DR-RIP Server is already running !"
  else
    /sbin/ifconfig lo:205 $VIP netmask 255.255.255.255 broadcast $VIP up
    /sbin/route add -host $VIP dev lo:205
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/ens33/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/ens33/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /bin/touch $LOCK
    echo "starting LVS-DR-RIP server is ok !"
  fi
}
stop() {
    /sbin/route del -host $VIP dev lo:205
    /sbin/ifconfig lo:205 down >/dev/null
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/ens33/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/ens33/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    rm -rf $LOCK
    echo "stopping LVS-DR-RIP server is ok !"
}
status() {
  if [ -e $LOCK ];
  then
    echo "The LVS-DR-RIP Server is already running !"
  else
    echo "The LVS-DR-RIP Server is not running !"
  fi
}
case "$1" in
 start)
    start
   ;;
 stop)
    stop
   ;;
 restart)
    stop
    start
   ;;
status)
   status
   ;;
*)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
esac
exit 0
[root@node2 ~]# chmod +x /etc/init.d/lvs_rs
[root@node2 ~]# chkconfig --add lvs_rs
[root@node2 ~]# chkconfig lvs_rs on
[root@node2 ~]# chkconfig --list

Note: This output shows SysV services only and does not include
native systemd services. SysV configuration data
might be overridden by native systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

lvs_rs         	0:off	1:off	2:on	3:on	4:on	5:on	6:off
netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@node2 ~]# systemctl start lvs_rs

Check lo and the route:

[root@node2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.131.108  netmask 255.255.255.0  broadcast 192.168.131.255
        inet6 fe80::b6c0:3a78:4c0c:abf8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:6b:42:87  txqueuelen 1000  (Ethernet)
        RX packets 3452  bytes 288099 (281.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1635  bytes 362981 (354.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:205: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.131.205  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.131.2   0.0.0.0         UG    100    0        0 ens33
192.168.131.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.131.205 0.0.0.0         255.255.255.255 UH    0      0        0 lo

3. Test
(Figures 4 and 5: test screenshots)
