LVS offers three techniques for building an IP load-balancing cluster: VS/NAT, VS/TUN, and VS/DR.
(1) VS/NAT (Virtual Server via Network Address Translation): a group of servers is combined into a single high-performance, highly available virtual server by means of network address translation. Drawback: limited scalability, since all request and response traffic must pass through the load balancer.
(2) VS/TUN (Virtual Server via IP Tunneling): the virtual server is implemented over IP tunnels.
Drawback: only real servers running Linux (with IP tunneling support) can be load balanced.
(3) VS/DR (Virtual Server via Direct Routing): the virtual server is implemented via direct routing.
Advantage: the load balancer only distributes request packets to the real servers, while the real servers send their response packets directly to the clients. The balancer can therefore handle a very large request volume; a single balancer can serve more than 100 real servers without becoming the system bottleneck, which greatly improves scalability.
Drawback: the load balancer's network interface must be on the same physical segment as the real servers.
This deployment uses the LVS VS/DR technique.
The architecture diagram is as follows.
Architecture deployment
LVS installation and deployment
To build a load-balancing cluster with LVS you, in principle, only need to install the LVS kernel code (ipvs) and its userspace management tool (ipvsadm) on the load balancer; the real servers need no extra software. Most current Linux distributions already ship with ipvs compiled in, so only ipvsadm needs to be installed.
1. Check whether the Load Balancer host already supports ipvs:
modprobe -l | grep ip_vs
If output similar to the following appears (the ip_vs modules are listed), the server already supports ipvs:
If the server does not support ipvs, you would need to download ipvs and compile it into the kernel manually; since most current kernels already include it, building ipvs into the kernel is not covered here.
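On systems where `modprobe -l` is no longer available (it was dropped from newer module tools), an equivalent check is possible; this is a sketch assuming the kernel module is named `ip_vs`:

```shell
# Look for IPVS support: an already-loaded module,
# or the option recorded in the running kernel's build config.
lsmod | grep ip_vs || true
grep -i ip_vs "/boot/config-$(uname -r)" 2>/dev/null || echo "ip_vs not listed; try: modprobe ip_vs"
```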
2. Check that the required dependency packages are installed: kernel-devel, gcc, openssl, openssl-devel, popt.
rpm -q kernel-devel
rpm -q gcc
rpm -q openssl
rpm -q openssl-devel
rpm -q popt
If the query prints a package name and version, the package is installed. If it prints "package ** is not installed", locate the corresponding rpm file on the installation media, upload it to the server, and install it.
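The five queries above can also be run as a single loop that reports only what is missing (a sketch using the same package names):

```shell
# Query each required package; print only those rpm reports as not installed.
for pkg in kernel-devel gcc openssl openssl-devel popt; do
  rpm -q "$pkg" >/dev/null 2>&1 || echo "missing: $pkg"
done
```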
3. Install ipvsadm:
rpm -ivh ipvsadm-1.26-2.el6.x86_64.rpm
If no error is reported, the installation succeeded; verify with the ipvsadm command:
Keepalived deployment
1. Keepalived installation
1. Extract and install
As root, upload keepalived-1.2.2.tar.gz to the home directory.
tar -zxvf keepalived-1.2.2.tar.gz
cd keepalived-1.2.2
./configure
make
make install
2. Key file locations after installation
/usr/local/sbin/keepalived -- executable
/usr/local/etc/rc.d/init.d/keepalived -- init script that starts/stops the keepalived service at system boot/shutdown
/usr/local/etc/sysconfig/keepalived -- run-mode options
/usr/local/etc/keepalived/keepalived.conf -- default configuration file
3. Register keepalived as a system service
cp /usr/local/sbin/keepalived /usr/sbin/
cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
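On a SysV-init system (e.g. RHEL/CentOS 6, which the rpm names above suggest), the copied init script can then be registered so keepalived starts automatically; a sketch:

```shell
chmod +x /etc/init.d/keepalived   # ensure the init script is executable
chkconfig --add keepalived        # register the service with SysV init
chkconfig keepalived on           # enable automatic start at boot
```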
2. LVS master configuration
Edit the keepalived.conf file:
cd /etc/keepalived
vi keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id master_101
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 151
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.11
    }
}

virtual_server 192.168.1.11 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.1.102 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.103 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
3. LVS slave configuration
Install ipvsadm and keepalived on the slave:
yum -y install keepalived ipvsadm
Edit the keepalived.conf file:
cd /etc/keepalived
vi keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id slave_104
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.11
    }
}

virtual_server 192.168.1.11 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.1.102 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.1.103 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
4. Start keepalived
service keepalived start    -- start keepalived
ps -ef | grep keepalived    -- check the keepalived processes; if they appear in the output, keepalived started successfully.
Other keepalived commands:
service keepalived stop     -- stop keepalived
service keepalived restart  -- restart keepalived
Nginx (real server) configuration
Write a bash script for the LVS DR real-server setup:
vi lvs.sh
#!/bin/bash
#
# Script to start an LVS DR real server.
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=192.168.1.11          # change this to your own VIP address
host=`/bin/hostname`

case "$1" in
start)
    # Start LVS-DR real server on this machine.
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    ;;
stop)
    # Stop LVS-DR real server loopback device(s).
    /sbin/ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
status)
    # Status of LVS-DR real server.
    islothere=`/sbin/ifconfig lo:0 | grep $VIP`
    isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
    if [ ! "$islothere" -o ! "$isrothere" ]; then
        # Either the route or the lo:0 device was not found.
        echo "LVS-DR real server stopped."
    else
        echo "LVS-DR real server running."
    fi
    ;;
*)
    # Invalid entry.
    echo "$0: Usage: $0 {start|status|stop}"
    exit 1
    ;;
esac
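On each nginx real server the script might then be installed and invoked like this (the /home path and rc.local hook are assumptions; adjust to wherever lvs.sh was saved):

```shell
chmod +x /home/lvs.sh                         # make the script executable
/home/lvs.sh start                            # bind the VIP on lo:0 and suppress ARP replies for it
/home/lvs.sh status                           # report whether the lo:0 device and route are configured
echo "/home/lvs.sh start" >> /etc/rc.local    # re-apply the settings after a reboot
```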
Architecture testing
Start the services
(1) Start ipvsadm:
/etc/init.d/ipvsadm restart
(2) Start nginx:
cd $NGINX_HOME
./nginx -c conf/nginx.conf
(3) Start keepalived:
service keepalived start    (or /etc/init.d/keepalived start)
Verify on the master
[root@test1 home]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.11:80 wrr
  -> 192.168.1.102:80             Route   1      0          0
  -> 192.168.1.103:80             Route   1      0          0
Note: the master LVS is monitoring port 80 on the two nginx machines, 192.168.1.102 and 192.168.1.103.
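If a script needs just the real-server endpoints from this output, the "->" lines can be parsed; a sketch matching the column layout shown above:

```shell
# Print the address:port of each real server listed by `ipvsadm -ln`.
ipvsadm -ln | awk '$1 == "->" && $2 ~ /:/ {print $2}'
```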
Verify on the slave
[root@test4 home]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.11:80 wrr
  -> 192.168.1.102:80             Route   1      0          0
  -> 192.168.1.103:80             Route   1      0          0
Note: the slave LVS is monitoring port 80 on the two nginx machines, 192.168.1.102 and 192.168.1.103.
Nginx load-balancing test
Open the VIP address 192.168.1.11 in a browser and refresh twice in a row; the pages served come alternately from nginx on 192.168.1.102 and 192.168.1.103.
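The same round-robin check can be made from a shell, assuming each nginx instance serves a page that identifies its host:

```shell
# With wrr scheduling and equal weights, consecutive requests to the VIP
# should be answered by 192.168.1.102 and 192.168.1.103 in turn.
for i in 1 2; do
  curl -s http://192.168.1.11/
done
```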
LVS master/backup failover test
Stop the keepalived service on the master LVS, 192.168.1.101.
tail /var/log/messages    -- on the 192.168.1.104 LVS, the log shows that it has taken over the MASTER state.
Nginx health-check test
First kill the nginx processes on 192.168.1.102:
killall nginx
Check the list of nginx servers monitored by LVS; 192.168.1.102 has been removed from the list.
Restart nginx on 192.168.1.102:
./nginx -c conf/nginx.conf
Check the list again; 192.168.1.102 is back in the list.