Objective
Build a highly available Director for LVS (DR) using heartbeat v2 and heartbeat-ldirectord, serve web pages with httpd, and perform the configuration through the hb_gui graphical interface.
Plan
Preparation: three hosts, each configured with the IP address and hostname shown in the diagram.
Notes: 1. rs1 and rs2 serve different pages, so the load-balancing effect is easy to see.
2. The VIP (virtual IP) must not be in use by any other host.
3. On the director, make sure ipvsadm and the VIP are both down, so that the CRM can manage these resources.
I. Configure the LVS (DR) model
- rs1
- #setenforce 0 //disable SELinux
- #yum -y install httpd
- # echo "<h1>rs1</h1>" >> /var/www/html/index.html //create the page file
- #service httpd start
- # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
- # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
- # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
- # ifconfig lo:0 172.16.220.100 broadcast 172.16.220.100 netmask 255.255.255.255 up
- # route add -host 172.16.220.100 dev lo:0
- # elinks -dump http://172.16.220.21 //test
- rs1
- # elinks -dump http://172.16.220.100
- rs1
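The real-server steps above (the arp_ignore/arp_announce settings plus the loopback VIP) can be sketched as a small generator script. It only prints the commands, so it is safe to try anywhere; run the printed lines as root on each RS. The function name is mine; the VIP and interface come from the plan:

```shell
#!/bin/sh
# Sketch: print the DR real-server setup commands for a given VIP.
# Pure output generator -- nothing here touches the system.
rs_dr_commands() {
    vip="$1"; iface="${2:-eth0}"
    cat <<EOF
echo 1 > /proc/sys/net/ipv4/conf/$iface/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/$iface/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
EOF
}

rs_dr_commands 172.16.220.100
```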
- rs2:
- #setenforce 0
- # yum -y install httpd
- # echo "<h1>rs2</h1>" >> /var/www/html/index.html
- #service httpd start
- # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
- # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
- # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
- # ifconfig lo:0 172.16.220.100 broadcast 172.16.220.100 netmask 255.255.255.255 up
- # route add -host 172.16.220.100 dev lo:0
- # elinks -dump http://172.16.220.22
- rs2
- # elinks -dump http://172.16.220.100
- rs2
- Director: node1
- #setenforce 0
- # yum -y install ipvsadm //install ipvsadm
- # ifconfig eth0:0 172.16.220.100 broadcast 172.16.220.100 netmask 255.255.255.255 up
- # route add -host 172.16.220.100 dev eth0:0
- # echo 1 > /proc/sys/net/ipv4/ip_forward
- # ipvsadm -A -t 172.16.220.100:80 -s rr
- # ipvsadm -a -t 172.16.220.100:80 -r 172.16.220.21 -g
- # ipvsadm -a -t 172.16.220.100:80 -r 172.16.220.22 -g
- # ipvsadm -ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.16.220.100:80 rr
- -> 172.16.220.22:80 Route 1 0 0
- -> 172.16.220.21:80 Route 1 0 0
- Test: browse to 172.16.220.100; requests alternate between rs1 and rs2.
- At this point, the LVS DR model has been set up successfully.
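The director-side ipvsadm rules can likewise be sketched as a generator, which makes it easy to recreate the same virtual service on a second director later. The function name is mine; the VIP, port, and real-server IPs come from the lab:

```shell
#!/bin/sh
# Sketch: print the ipvsadm commands for a DR virtual service on port 80
# with rr scheduling. Arguments: VIP, then one or more real-server IPs.
lvs_dr_rules() {
    vip="$1"; shift
    echo "ipvsadm -A -t ${vip}:80 -s rr"
    for rs in "$@"; do
        echo "ipvsadm -a -t ${vip}:80 -r ${rs} -g"
    done
}

lvs_dr_rules 172.16.220.100 172.16.220.21 172.16.220.22
```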
II. Make the Director highly available
Make node1 and node2 into a high-availability cluster, as shown in the plan diagram.
1 Stop node1's resources
- node1:
- #ipvsadm -S > /etc/sysconfig/ipvsadm
- #service ipvsadm restart
- #ipvsadm -ln
- #service ipvsadm stop
- #chkconfig ipvsadm off
- #chkconfig --list ipvsadm
- # ifconfig eth0:0 down
2 Make node2 a director
- #setenforce 0
- #yum -y install ipvsadm
- # ifconfig eth0:0 172.16.220.100 broadcast 172.16.220.100 netmask 255.255.255.255 up
- # route add -host 172.16.220.100 dev eth0:0
- # echo 1 > /proc/sys/net/ipv4/ip_forward
- # ipvsadm -A -t 172.16.220.100:80 -s rr
- # ipvsadm -a -t 172.16.220.100:80 -r 172.16.220.21 -g
- # ipvsadm -a -t 172.16.220.100:80 -r 172.16.220.22 -g
- # ipvsadm -ln //verify the configured rules
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.16.220.100:80 rr
- -> 172.16.220.22:80 Route 1 0 0
- -> 172.16.220.21:80 Route 1 0 0
- Test: enter 172.16.220.100 in a browser; rs1 and rs2 alternate, so it is working correctly.
3 Stop node2's resources
- # ipvsadm -S > /etc/sysconfig/ipvsadm
- #service ipvsadm restart
- #ipvsadm -ln
- #service ipvsadm stop
- #chkconfig ipvsadm off
- #chkconfig --list ipvsadm
- # ifconfig eth0:0 down
4 Make node1 and node2 into a cluster
4.1 Configure time synchronization and ssh trust
- node1:
- #hwclock -s //sync the system clock from the hardware clock
- #vim /etc/hosts
- append:
- 172.16.220.11 node1
- 172.16.220.12 node2
- # ssh-keygen -t rsa //interactive; just press Enter at each prompt
- Generating public/private rsa key pair.
- Enter file in which to save the key (/root/.ssh/id_rsa):
- Created directory '/root/.ssh'.
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /root/.ssh/id_rsa.
- Your public key has been saved in /root/.ssh/id_rsa.pub.
- The key fingerprint is:
- 9e:2f:7d:c7:c3:ab:cb:11:da:04:6c:4a:d6:31:29:78 root@node1
- # ssh-copy-id -i .ssh/id_rsa.pub root@node2 //push the key to node2
- The authenticity of host 'node2 (172.16.220.12)' can't be established.
- RSA key fingerprint is 16:15:c4:65:45:d7:ea:c2:a7:29:4b:25:d1:ff:72:c8.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'node2,172.16.220.12' (RSA) to the list of known hosts.
- root@node2's password:
- Now try logging into the machine, with "ssh 'root@node2'", and check in:
- .ssh/authorized_keys
- to make sure we haven't added extra keys that you weren't expecting.
- #ssh node2 'ifconfig' //test: should print node2's ifconfig output
- node2:
- #hwclock -s
- #vim /etc/hosts
- append:
- 172.16.220.11 node1
- 172.16.220.12 node2
- # ssh-keygen -t rsa (just press Enter at each prompt)
- # ssh-copy-id -i .ssh/id_rsa.pub root@node1
- #ssh node1 'ifconfig' //test
4.2 Install heartbeat on cluster nodes node1 and node2
The required packages are:
- heartbeat-2.1.4-9.el5.i386.rpm
- heartbeat-pils-2.1.4-10.el5.i386.rpm
- heartbeat-stonith-2.1.4-10.el5.i386.rpm
- libnet-1.1.4-3.el5.i386.rpm
- perl-MailTools-1.77-1.el5.noarch.rpm
- heartbeat-ldirectord-2.1.4-9.el5.i386.rpm
- heartbeat-gui-2.1.4-9.el5.i386.rpm
- Install the packages on node1 and node2:
- # yum -y --nogpgcheck localinstall heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm heartbeat-ldirectord-2.1.4-9.el5.i386.rpm heartbeat-gui-2.1.4-9.el5.i386.rpm
- node1: (all of the following configuration is done on node1; node2 needs no further steps)
- #cp /usr/share/doc/heartbeat-2.1.4/{ha.cf,authkeys,haresources} /etc/ha.d/
- #cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/
- #cd /etc/ha.d/
- #chmod 600 authkeys
- #vim authkeys
- append as the last lines (the hash is a random string generated with dd if=/dev/urandom count=512 bs=1 | md5sum):
- auth 1
- 1 md5 7b1b89ead5bcc0265a8d419ef91de7f7
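The authkeys step can be scripted: generate a fresh md5 key and write the two-line file with the required 600 permissions. The sketch below writes to /tmp/authkeys so it can be tried safely; in the lab the real target is /etc/ha.d/authkeys:

```shell
#!/bin/sh
# Sketch: create a heartbeat authkeys file with a random md5 key.
# Writes to /tmp/authkeys for safety; real target is /etc/ha.d/authkeys.
key=$(dd if=/dev/urandom count=512 bs=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$key" > /tmp/authkeys
chmod 600 /tmp/authkeys   # heartbeat refuses to start if authkeys is world-readable
```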
- # vim ha.cf
- Change "#bcast eth0 # Linux" to:
- bcast eth0
- Uncomment "#node ken3" and "#node kathy" and change them to:
- node node1
- node node2
- Uncomment "#ping 10.10.10.254" and change it to:
- ping 172.16.0.1
- Enable:
- compression_threshold 2
- compression bz2
- and add:
- crm on
- #vim haresources
- append:
- node1 172.16.220.100/16/eth0/172.16.220.255 httpd
- #vim ldirectord.cf
- content as follows:
- checktimeout=3
- checkinterval=1
- autoreload=yes
- logfile="/var/log/ldirectord.log"
- quiescent=yes
- virtual=172.16.220.100:80
- real=172.16.220.21:80 gate
- real=172.16.220.22:80 gate
- fallback=127.0.0.1:80 gate
- service=http
- request=".test.html"
- receive="OK"
- scheduler=rr
- protocol=tcp
- checktype=negotiate
- checkport=80
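With checktype=negotiate, ldirectord periodically requests the request= path on each RS and looks for the receive= string in the response body; an RS that fails the match is quiesced. A minimal sketch of that comparison (the function and canned bodies are mine, not ldirectord code):

```shell
#!/bin/sh
# Sketch of ldirectord's negotiate check: fetch the request= page and
# look for the receive= string in the body. The matcher is factored out
# here so it can be exercised without a live real server.
negotiate_match() {  # $1 = response body, $2 = expected receive= string
    case "$1" in
        *"$2"*) return 0 ;;
        *)      return 1 ;;
    esac
}

# What rs1/rs2 will serve from /var/www/html/.test.html:
if negotiate_match "<h1>OK</h1>" "OK"; then
    echo "RS healthy"
else
    echo "RS failed check"
fi
```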
- #scp -p authkeys ha.cf haresources ldirectord.cf node2:/etc/ha.d/
- (note: at this point, on rs1 and rs2 each run: echo "<h1>OK</h1>" >> /var/www/html/.test.html)
- #chkconfig ldirectord off
- #passwd hacluster //set the hacluster password on this node
- redhat (the new password)
- #service heartbeat start //start the service on this node
- #ssh node2 '/etc/rc.d/init.d/heartbeat start' //start the service on node2
- #hb_gui & //open the graphical configuration interface
1) When the graphical interface appears, choose Connection -> Login,
enter the password redhat, and click OK.
Once logged in, both node1 and node2 should show a running status.
2) Add resources
First resource:
Resource ID: ldirectord
Type: select ldirectord, then click Add. Right-click the resource -> Start; it should run on one of the nodes.
Second resource:
Resource ID: vip
Type: select IPaddr
Under Name, set ip to 172.16.220.100
Add Parameter: select lvs_support and set it to true
Click Add, then right-click the resource -> Start; it runs on one of the nodes.
3) Define resource constraints
Colocations -> Add New Item, accept the defaults and click OK.
Order -> Add New Item, accept the defaults and click OK.
4) Start the resources and test
The configured result is as follows.
Test on node2:
- #ifconfig
- eth0:0 Link encap:Ethernet HWaddr 00:0C:29:5B:DC:50
- inet addr:172.16.220.100 Bcast:172.16.255.255 Mask:255.255.0.0
- UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
- Interrupt:67 Base address:0x2000
- #ipvsadm -ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.16.220.100:80 rr
- -> 172.16.220.21:80 Route 1 0 0
- -> 172.16.220.22:80 Route 1 0 0
Finally, browse to 172.16.220.100; requests alternate between rs1 and rs2.
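A quick way to confirm the rr behaviour is to sample the VIP several times and tally which RS answered each request. The tally logic below runs on a canned sample so it can be tried offline; in the lab, feed it real fetches instead, e.g. `for i in 1 2 3 4; do elinks -dump http://172.16.220.100; done | tally`:

```shell
#!/bin/sh
# Sketch: tally which real server answered each request. With rr
# scheduling the counts should come out equal. The four sample lines
# below stand in for real page fetches against the VIP.
tally() {
    sort | uniq -c | awk '{print $2, $1}'
}

printf '%s\n' rs1 rs2 rs1 rs2 | tally
```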