Over the past couple of days I have been learning how to set up LVS+Keepalived load balancing. There are plenty of tutorials online, but I ran into quite a few problems once I actually got my hands dirty.
Here is my setup process, along with some of the issues I encountered.
Hardware environment:
MacBook, 8 GB RAM, 250 GB SSD, dual-core
Software environment:
Since resources were limited, I set up 4 virtual machines.
Virtual machines:
[root@rs-1 work]# uname -a
Linux rs-1 2.6.18-238.el5 #1 SMP Thu Jan 13 15:51:15 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@rs-1 work]# cat /etc/redhat-release
CentOS release 5.6 (Final)
The IP addresses of the 4 virtual machines are assigned as follows:
Master DR: { ip: 172.16.3.89, hostname: lvs-backup }
Slave DR: { ip: 172.16.3.90, hostname: lvs }
Real Server1: { ip: 172.16.3.91, hostname: rs-1 }
Real Server2: { ip: 172.16.3.92, hostname: rs-2 }
VIP: 172.16.3.199
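To make the nodes easier to tell apart in shell prompts and logs, you may want each machine to resolve the others by hostname. A minimal /etc/hosts sketch, using exactly the mappings listed above (append it on every node):

```
172.16.3.89   lvs-backup
172.16.3.90   lvs
172.16.3.91   rs-1
172.16.3.92   rs-2
```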
1. Install ipvsadm (1.24) and keepalived (1.2.12) on both the Master DR and the Slave DR
Install ipvsadm
First check whether the system ships the IPVS kernel modules; the listing below shows that the ipvs modules are available.
[root@lvs ~]# modprobe -l | grep ipvs
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_dh.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_ftp.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lblc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lblcr.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_lc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_nq.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_sed.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_sh.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_wlc.ko
/lib/modules/2.6.18-238.el5/kernel/net/ipv4/ipvs/ip_vs_wrr.ko
Create a symlink so the ipvsadm build can find the kernel source:
[vagrant@lvs src]$ sudo ln -s /usr/src/kernels/2.6.18-238.el5-x86_64/ /usr/src/linux
Compile and install:
[vagrant@lvs ipvsadm-1.24]$ make
[vagrant@lvs ipvsadm-1.24]$ sudo make install
[root@lvs ~]# ipvsadm -v
ipvsadm v1.24 2005/12/10 (compiled with getopt_long and IPVS v1.2.1)
If the version number prints, ipvsadm was installed successfully.
Install keepalived
Run configure:
[vagrant@lvs keepalived-1.2.12]$ ./configure --sysconf=/etc --with-kernel-dir=/usr/src/kernels/2.6.18-238.el5-x86_64/
Compile:
[vagrant@lvs keepalived-1.2.12]$ make
Install:
[vagrant@lvs keepalived-1.2.12]$ sudo make install
Create a symlink so keepalived is on the default PATH:
[vagrant@lvs keepalived-1.2.12]$ sudo ln -s /usr/local/sbin/keepalived /sbin/
[root@lvs ~]# keepalived -v
Keepalived v1.2.12 (05/06,2014)
If the version number prints, the installation succeeded.
Install keepalived on lvs-backup in the same way.
Verify the installation:
[root@lvs-backup ~]# keepalived -v
Keepalived v1.2.12 (05/06,2014)
Configure keepalived (with --sysconf=/etc passed to configure above, the config file lives at /etc/keepalived/keepalived.conf):
! Configuration File for keepalived
#global_defs {
#   notification_email {
#       # Alert email addresses, one per line; requires mail alerting
#       # and the local Sendmail service to be enabled.
#   }
#   notification_email_from [email protected]
#   smtp_server 192.168.199.1      # SMTP server address
#   smtp_connect_timeout 30
#   router_id LVS_DEVEL
#}
########VRRP Instance########
vrrp_instance VI_1 {
    state MASTER                   # role of this node: MASTER for the primary, BACKUP for the standby
    interface eth1                 # network interface the VRRP instance binds to
    virtual_router_id 51
    priority 100                   # higher number means higher priority; the master DR must be higher than the backup DR
    advert_int 1
    authentication {
        auth_type PASS             # authentication type, either PASS or AH
        auth_pass 1111             # authentication password
    }
    virtual_ipaddress {
        172.16.3.199               # virtual IP (VIP); multiple entries allowed, one per line
    }
}
########Virtual Server########
virtual_server 172.16.3.199 80 {   # note the space between the IP address and the port
    delay_loop 6                   # health-check interval, in seconds
    lb_algo rr                     # scheduling algorithm; rr is round robin, wlc is often the better choice
    lb_kind DR                     # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
    #persistence_timeout 50       # session persistence time, in seconds
    protocol TCP                   # forwarding protocol, TCP or UDP
    real_server 172.16.3.92 80 {   # real server node, i.e. Real Server2's IP
        weight 50                  # node weight; a higher number gets more traffic
        TCP_CHECK {
            connect_timeout 3      # time out after 3 seconds with no response
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries
        }
    }
    real_server 172.16.3.91 80 {   # real server node, i.e. Real Server1's IP
        weight 50                  # node weight; a higher number gets more traffic
        TCP_CHECK {
            connect_timeout 3      # time out after 3 seconds with no response
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries
        }
    }
}
On the Slave DR use the same configuration, but change MASTER to BACKUP and priority 100 to priority 80.
A note on the persistence_timeout option: within this time window, requests from the same client (identified by IP) are routed to the same real server. I commented it out here. Whether to enable it depends on your workload; for long-lived connections it is best to set it, and the value should match the LVS timeout configuration.
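For reference, if you do want session persistence, the relevant fragment of the virtual_server block would look like this, uncommented (the 50-second value is just the example from above; the elided lines stay as in the full config):

```
virtual_server 172.16.3.199 80 {
    ...
    persistence_timeout 50    # route the same client IP to the same real server for 50s
    ...
}
```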
Start keepalived
Write a start.sh script (and similar stop.sh and restart.sh scripts) to make starting easier:
#!/bin/sh
/etc/init.d/keepalived start
Run the script:
[root@lvs work]# ./start.sh
Starting keepalived: [ OK ]
Write a monitoring script, watch.sh:
#!/bin/sh
watch 'ipvsadm -l -n'
Start monitoring:
[root@lvs work]# ./watch.sh
Every 2.0s: ipvsadm -l -n Tue May 6 12:49:52 2014
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.3.199:80 rr persistent 50
-> 172.16.3.91:80 Route 50 0 0
-> 172.16.3.92:80 Route 50 0 0
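If you want to check the real-server table from a script rather than watching it interactively, the output of ipvsadm -l -n is easy to parse with awk. A small sketch, fed here with the sample output from above instead of a live ipvsadm call (in practice you would pipe `ipvsadm -l -n` straight into awk):

```shell
# Parse 'ipvsadm -l -n' style output: real-server lines start with "->";
# field 2 is the address and field 4 is the weight.
sample='TCP  172.16.3.199:80 rr persistent 50
  -> 172.16.3.91:80   Route   50     0          0
  -> 172.16.3.92:80   Route   50     0          0'
echo "$sample" | awk '$1 == "->" { print $2, "weight=" $4 }'
# prints one line per real server, e.g. "172.16.3.91:80 weight=50"
```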
Apply the same configuration and scripts on the Slave DR.
2. Install nginx on Real Server1 and Real Server2
The nginx installation itself is omitted here.
After installing nginx, start it.
Create the realserver.sh script:
#!/bin/bash
SNS_VIP=172.16.3.199
. /etc/rc.d/init.d/functions
case "$1" in
start)
    # Bind the VIP to lo:0 and add a host route for it
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $SNS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
[root@rs-1 work]# ./realserver.sh start
RealServer Start OK
Run ifconfig and you will see a new lo:0 entry carrying the VIP that was not there before.
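Note that the script calls sysctl -p, but the four ARP values it writes live only under /proc and are lost on reboot. To make them persistent you could also add them to /etc/sysctl.conf on each real server; a sketch:

```
# DR mode: do not answer ARP requests for the VIP on lo, and prefer the
# real NIC's address as the source in outgoing ARP
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```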
Testing
Test from the Slave DR:
[vagrant@centos-5 conf]$ for((i=0;i<100;i++));do curl 172.16.3.199;done;
[vagrant@centos-5 conf]$ webbench -c 10 -t 10 http://172.16.3.199/
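A tip for the curl test: if each real server's nginx serves a page that identifies its host (an assumption; e.g. put the hostname into index.html), you can count how many responses each backend handled. A sketch where `responses` stands in for the bodies the curl loop would return:

```shell
# Count responses per backend. With rr scheduling and equal weights,
# the counts should come out nearly even.
responses='rs-1
rs-2
rs-1
rs-2
rs-1
rs-2'
echo "$responses" | sort | uniq -c
# prints "3 rs-1" and "3 rs-2" (with leading spaces from uniq -c)
```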
Run watch.sh on the Master DR:
Every 2.0s: ipvsadm -l -n Wed May 7 11:45:27 2014
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.3.199:80 rr
-> 172.16.3.91:80 Route 50 0 1763
-> 172.16.3.92:80 Route 50 0 1762
Throughout the whole setup, remember to turn off the firewall on all of the virtual machines; this is important! You can also run chkconfig iptables off to keep it disabled across reboots.
[root@lvs work]# service iptables stop
[root@lvs work]# chkconfig --list | grep iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
References:
http://beyondhdf.blog.51cto.com/229452/1331874
http://www.it165.net/admin/html/201308/1604.html