Using heartbeat v2 and heartbeat-ldirectord to provide high availability for the Director in LVS (DR)
Topology diagram:
1. Four virtual machines are used here:
node1: 172.16.133.11
node2: 172.16.133.12
rs1: 172.16.133.21
rs2: 172.16.133.22
node1 and node2 act as the Directors; rs1 and rs2 act as the Real Servers.
2. Configure the two Real Servers
rs1: install httpd
- yum -y install httpd
- vim /var/www/html/index.html
- <h1>rs1</h1>
- service httpd start
- elinks -dump http://172.16.133.21
rs2: same as rs1, except that /var/www/html/index.html contains <h1>rs2</h1>
Test with elinks -dump http://172.16.133.22
3. Configure the Directors
(1). First establish mutual SSH trust between node1 and node2 to make the later steps easier
node1:
- vim /etc/hosts
- 172.16.133.11 node1
- 172.16.133.12 node2
- ssh-keygen -t rsa   (press Enter at every prompt)
- ssh-copy-id -i .ssh/id_rsa.pub root@node2
- scp /etc/hosts node2:/etc
node2:
- ssh-keygen -t rsa
- ssh-copy-id -i .ssh/id_rsa.pub root@node1   (note: node2's key goes to node1, not back to node2)
(2). Then install ipvsadm on node1 and node2 and test against the two Real Servers
rs1:
- echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
- echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
- echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
Configure the VIP:
- ifconfig lo:0 172.16.133.1 broadcast 172.16.133.1 netmask 255.255.255.255 up
- route add -host 172.16.133.1 dev lo:0
rs2: same as rs1
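The ARP and lo:0 steps on the Real Servers are usually wrapped in a small init-style script so they can be repeated after a reboot; a minimal sketch (the script name and path are assumptions, not part of the original setup, and it must run as root):

```shell
#!/bin/bash
# /etc/init.d/lvs-rs -- sketch: prepare a Real Server for LVS-DR
VIP=172.16.133.1

case "$1" in
start)
    # refuse to answer ARP for the VIP and announce only the primary address
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    # bind the VIP to lo:0 with a host mask so it is never ARPed out
    ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    route add -host $VIP dev lo:0
    ;;
stop)
    route del -host $VIP dev lo:0 2>/dev/null
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```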
node1:
Configure the VIP:
- ifconfig eth0:0 172.16.133.1 broadcast 172.16.133.1 netmask 255.255.255.255 up
- route add -host 172.16.133.1 dev eth0:0
- echo 1 > /proc/sys/net/ipv4/ip_forward
Install and configure ipvsadm:
- yum -y install ipvsadm
- ipvsadm -A -t 172.16.133.1:80 -s rr
- ipvsadm -a -t 172.16.133.1:80 -r 172.16.133.21 -g
- ipvsadm -a -t 172.16.133.1:80 -r 172.16.133.22 -g
- ipvsadm -Ln
Then test by opening http://172.16.133.1 in a browser; once it works, save the rules:
- ipvsadm -S > /etc/sysconfig/ipvsadm
- cat /etc/sysconfig/ipvsadm
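For reference, the saved file holds the rules in a form that ipvsadm -R can reload; roughly (a sketch only — exact weights and formatting vary with the ipvsadm version and name resolution):

```
-A -t 172.16.133.1:80 -s rr
-a -t 172.16.133.1:80 -r 172.16.133.21:80 -g -w 1
-a -t 172.16.133.1:80 -r 172.16.133.22:80 -g -w 1
```

Saving to /etc/sysconfig/ipvsadm in particular lets the ipvsadm service restore the rules from that file at startup.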
node2: same as node1
(3). After testing, stop ipvsadm and make sure it will not start at boot (check with chkconfig --list ipvsadm; run chkconfig ipvsadm off if needed)
Now install the packages needed for this heartbeat v2 + heartbeat-ldirectord setup
node1:
- yum -y install *.rpm   (run in the directory containing the downloaded heartbeat v2 and heartbeat-ldirectord rpms)
- cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d
- cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d
- cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d
- cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d
Then edit heartbeat's main configuration file, /etc/ha.d/ha.cf:
logfacility local0
keepalive 2
deadtime 20
udpport 694
bcast eth0
auto_failback on
node node1
node node2
ping 172.16.0.1   (choose this yourself; it must be an address both Directors can ping, e.g. the gateway)
compression bz2
compression_threshold 2
crm on
Edit /etc/ha.d/authkeys and add:
auth 1
1 md5 redhat   (the shared secret; the harder to guess the better)
Then run chmod 600 /etc/ha.d/authkeys, since heartbeat refuses to start if the file is readable by others.
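Instead of a word like redhat, the key can be generated randomly; a sketch (it writes to a demo path in /tmp so it can run unprivileged — on the Directors the target would be /etc/ha.d/authkeys):

```shell
# Generate a random md5 key for the heartbeat authkeys file.
AUTHKEYS="${AUTHKEYS:-/tmp/authkeys.demo}"   # demo path; real file: /etc/ha.d/authkeys
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$KEY" > "$AUTHKEYS"
chmod 600 "$AUTHKEYS"   # heartbeat requires mode 600 on authkeys
cat "$AUTHKEYS"
```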
Edit /etc/ha.d/haresources and append at the end:
node1 172.16.133.1/16/eth0/172.16.255.255 httpd
(Note: because ha.cf sets crm on, haresources is not actually consulted; the resources will be defined through hb_gui below.)
Edit /etc/ha.d/ldirectord.cf
If you want convenient access to the logs, uncomment logfile="/var/log/ldirectord.log"
The main part to change is the virtual section below (the lines belonging to virtual= must be indented with a tab or spaces):
virtual=172.16.133.1:80
    real=172.16.133.21:80 gate
    real=172.16.133.22:80 gate
    #fallback=127.0.0.1:80 gate
    service=http
    request=".text.html"
    receive="ok"
    #virtualhost=some.domain.com.au
    scheduler=rr
    #persistent=600
    #netmask=255.255.255.255
    protocol=tcp
    checktype=negotiate
    checkport=80
    #irequest="index.html"
    #receive="Test Page"
    #virtualhost=www.x.y.z
The request=".text.html" entry is a separate test page fetched by the health check (only Real Servers that pass the check stay listed in ipvsadm -Ln). You have to create this page on each Real Server yourself; that is not shown here.
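Creating that check page on a Real Server is a one-liner; a sketch (DOCROOT defaults to a demo directory here so it runs anywhere — on rs1/rs2 it would be /var/www/html):

```shell
# Create the ldirectord health-check page so the request/receive probe succeeds.
DOCROOT="${DOCROOT:-/tmp/docroot-demo}"   # demo path; real docroot: /var/www/html
mkdir -p "$DOCROOT"
echo "ok" > "$DOCROOT/.text.html"         # must contain the receive="ok" string
cat "$DOCROOT/.text.html"
```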
node2: same as node1
(4). With all of the above in place, heartbeat can be started
Start heartbeat on node1 and node2 (before starting, make sure the clocks on the two hosts agree, or at least do not differ by much, to keep heartbeat from misjudging node state)
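One way to line up the clocks right before starting, assuming an NTP server is reachable (the server address here is an assumption — substitute your own; run as root):

```shell
# on each Director: step the clock from an NTP server, then start heartbeat
ntpdate 172.16.0.1         # NTP server address is an assumption
hwclock -w                 # persist the corrected time to the hardware clock
service heartbeat start
# thanks to the SSH trust set up earlier, node2 can be handled from node1:
ssh node2 'ntpdate 172.16.0.1; hwclock -w; service heartbeat start'
```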
Run hb_gui on node1 (I am using the Xshell client here, so the graphical interface pops up directly).
In Resources, add the vip and ldirectord resources: first add ldirectord, then add the vip in the same way, and finally start both resources.
In this run node2 is the DC, so both resources are running on node2. Log in to node2 and verify:
- ip addr show
- ipvsadm -Ln
(5). With everything above done, open a browser and visit 172.16.133.1 to browse the site; then use iptables to DROP node2's traffic and watch the high-availability failover.
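To trigger that failover test, the heartbeat traffic reaching node2 can be dropped; a sketch assuming the default udpport 694 from ha.cf (run as root on node2):

```shell
# drop incoming heartbeat packets so the cluster declares node2 dead
iptables -A INPUT -p udp --dport 694 -j DROP
# watch the vip and ldirectord resources move to node1, then undo:
iptables -D INPUT -p udp --dport 694 -j DROP
```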