Lab notes:
1. We use two directors and two realservers to build a highly available cluster.
2. This lab is done mainly through configuration files.
3. It builds on the previous post on high-availability clusters.
4. The topology diagram is shown above.
Realserver configuration
1. Configure the yum repository on both realservers:
[base]
name=base
baseurl=ftp://192.168.0.254/pub/Server
gpgcheck=0
[Cluster]
name=cluster
baseurl=ftp://192.168.0.254/pub/Cluster
gpgcheck=0
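To double-check that both repositories are usable, a quick verification (standard yum commands, not part of the original steps):
yum clean all
yum repolist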
2. Configure the RIP on both realservers:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
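A minimal sketch of the file, assuming realserver1 takes 192.168.0.101 as its RIP (realserver2 would use 192.168.0.102, matching the addresses used in ldirectord.cf later):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.101
NETMASK=255.255.255.0
ONBOOT=yes
Then restart networking with service network restart.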
3. Configure the VIP on both realservers:
ifconfig lo:0 192.168.0.100 broadcast 192.168.0.100 netmask 255.255.255.255
4. Verify the addresses and test connectivity.
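For example, assuming the director addresses 192.168.0.21 and 192.168.0.22 that appear later in this post:
ping -c 3 192.168.0.21
ping -c 3 192.168.0.22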
5. Install httpd (using yum).
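With the repositories above in place this is one command:
yum install -y httpd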
6. Set up the web page content:
vim /var/www/html/index.html
<h1>to test realserver2</h1>
Also create the test page (here /var/www/html/test.html, matching the request= entry in ldirectord.cf below); ldirectord fetches it to check each realserver, and its content must contain the receive= string:
<h1>realserver</h1>
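Both pages can also be created from the shell; this sketch assumes realserver2 (change the index text on realserver1 accordingly):
echo '<h1>to test realserver2</h1>' > /var/www/html/index.html
echo '<h1>realserver</h1>' > /var/www/html/test.html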
7. Start httpd to verify and test.
8. Adjust the kernel ARP parameters and add a host route. These settings keep the realservers from announcing or answering ARP for the VIP, which LVS-DR requires since every machine holds the VIP:
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
route add -host 192.168.0.100 dev lo:0
Steps 7 and 8 can be implemented with a script. On both realservers, create it:
vim realserver.sh
#!/bin/bash
# realserver.sh - set up / tear down the LVS-DR VIP on a realserver (steps 7 and 8)
vip='192.168.0.100'
CARD='lo:0'

case $1 in
start)
    # Keep this host from answering or announcing ARP for the VIP
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    # Bind the VIP to lo:0 and add the host route
    ifconfig $CARD $vip broadcast $vip netmask 255.255.255.255 up &> /dev/null
    route add -host $vip dev $CARD &> /dev/null
    service httpd start &> /dev/null
    ;;
stop)
    # Restore the default ARP behaviour and remove the VIP
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    route del -host $vip &> /dev/null
    ifconfig $CARD down &> /dev/null
    service httpd stop &> /dev/null
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
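Make the script executable and run it, then confirm the VIP and host route took effect:
chmod +x realserver.sh
./realserver.sh start
ifconfig lo:0
route -n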
Director configuration
First, a review of the configuration from the previous post:
1. yum configuration:
vim /etc/yum.repos.d/server.repo
[base]
name=base
baseurl=http://192.168.0.254/pub/Server
gpgcheck=0
[Cluster]
name=cluster
baseurl=http://192.168.0.254/pub/Cluster
gpgcheck=0
2. Configure the non-VIP addresses on both nodes as static (on eth0); it is best to configure eth1 as well.
3. Set the hostnames. uname -n displays the hostname, and its output must match each node's name; see the previous post for details, and the sketch below.
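A minimal sketch for node1, assuming the names node1.a.org / node2.a.org and the addresses 192.168.0.21 / 192.168.0.22 that appear later in this post:
vim /etc/sysconfig/network
HOSTNAME=node1.a.org
hostname node1.a.org
vim /etc/hosts
192.168.0.21 node1.a.org node1
192.168.0.22 node2.a.org node2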
4. Install heartbeat and ipvsadm.
Installing ipvsadm needs no special notes, but heartbeat does. It is installed from these six packages (the full download and install procedure is in the previous post):
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-devel-2.1.4-9.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
heartbeat-gui-2.1.4-9.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
5. As mentioned in the previous post, the two nodes must be able to reach each other as root over SSH without a password; configure that here as well. A sketch follows.
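A minimal sketch using standard OpenSSH tools, run on node1 (repeat in the other direction on node2):
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.22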
6. The heartbeat packages from step 4 were downloaded only on node1, so node2 does not need to download them; simply copy them over from node1. This also demonstrates the passwordless SSH from step 5. Detailed steps omitted.
7. /etc/ha.d does not yet contain heartbeat's three configuration files, so copy them from /usr/share/doc/heartbeat-2.1.4/ into /etc/ha.d and edit them (editing details are in the previous post). Once the three files are ready, copy them from node1 to node2, as sketched below.
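A sketch of the copy; the three files are ha.cf, haresources, and authkeys, and heartbeat requires authkeys to be readable by root only:
cp /usr/share/doc/heartbeat-2.1.4/{ha.cf,haresources,authkeys} /etc/ha.d/
chmod 600 /etc/ha.d/authkeys
scp /etc/ha.d/{ha.cf,haresources,authkeys} node2.a.org:/etc/ha.d/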
8. Now for the part that is new relative to the previous post: install heartbeat-ldirectord-2.1.4-9.el5.i386.rpm.
This package performs automatic health checking of the realservers, and by defining its configuration file it can automatically set up the cluster service and add the realservers. It has one dependency, perl-MailTools-1.77-1.el5.noarch.rpm, so download and install both packages together:
yum localinstall -y heartbeat-ldirectord-2.1.4-9.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck
After installation, copy the sample file /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf to /etc/ha.d, then edit it:
vim /etc/ha.d/ldirectord.cf
checktimeout=3
checkinterval=1
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=yes
virtual=192.168.0.100:80                # the VIP and port used by LVS
        real=192.168.0.101:80 gate 3    # the realservers (gate = DR mode, then weight)
        real=192.168.0.102:80 gate 2
        fallback=127.0.0.1:80 gate
        service=http                    # the service to check
        request="test.html"             # name of the test page on the realservers
        receive="realserver"            # string expected in the test page
        scheduler=wlc                   # ipvsadm scheduling algorithm
        protocol=tcp                    # protocol type
        checktype=negotiate             # how the realservers are checked
        checkport=80                    # port to check
9. Next, modify the /etc/ha.d/haresources file on both nodes:
vim /etc/ha.d/haresources
node1.a.org 192.168.0.100/24/eth0/192.168.0.255 ldirectord::ldirectord.cf
This line names the preferred node, the VIP resource (address/prefix/interface/broadcast), and the ldirectord resource with its configuration file.
10. Start heartbeat on node1:
/etc/init.d/heartbeat start
Then start node2's heartbeat from node1:
ssh 192.168.0.22 -- '/etc/init.d/heartbeat start'
11. Verify the resulting ipvsadm configuration on node1. Since node1 is the primary node, the ipvsadm rules appear only on node1; as long as node1 is healthy, node2 shows nothing.
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.100:80 wlc
  -> 192.168.0.101:80             Route   3      0          0
  -> 192.168.0.102:80             Route   2      0          0
The VIP situation on the primary node node1:
[root@node1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:F1:6B:4F
          inet addr:192.168.0.21  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef1:6b4f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:118897 errors:0 dropped:0 overruns:0 frame:0
          TX packets:84064 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21579108 (20.5 MiB)  TX bytes:8144390 (7.7 MiB)
          Interrupt:169 Base address:0x2000

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:F1:6B:4F
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:169 Base address:0x2000
Now let's test and verify the cluster's high availability: take down the primary node node1 (stop its heartbeat).
On node1, eth0:0 with the VIP is now gone:
[root@node1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:F1:6B:4F
          inet addr:192.168.0.21  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef1:6b4f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:121343 errors:0 dropped:0 overruns:0 frame:0
          TX packets:86179 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21875951 (20.8 MiB)  TX bytes:8343090 (7.9 MiB)
          Interrupt:169 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:F1:6B:59
          inet addr:192.168.21.1  Bcast:192.168.21.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef1:6b59/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10256 errors:0 dropped:0 overruns:0 frame:0
On node2, the VIP has moved to eth0:0:
[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:1D:54:F9
          inet addr:192.168.0.22  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe1d:54f9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:41582 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12679 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7762945 (7.4 MiB)  TX bytes:1519415 (1.4 MiB)
          Interrupt:169 Base address:0x2000

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:1D:54:F9
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:169 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:1D:54:03
          inet addr:192.168.21.2  Bcast:192.168.21.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe1
This shows that when node1 has a problem, node2 automatically takes over the service, achieving high availability.
Now stop the httpd service on realserver1:
service httpd stop
Then on node1 run ipvsadm -L -n again:
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.100:80 wlc
  -> 192.168.0.101:80             Route   0      0          0
  -> 192.168.0.102:80             Route   2      0          0
Because quiescent=yes is set in ldirectord.cf, the failed realserver stays in the table with its weight set to 0 rather than being removed. Now start realserver1's httpd again and check node1 once more:
ipvsadm -L -n
[root@node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.100:80 wlc
  -> 192.168.0.101:80             Route   3      0          0
  -> 192.168.0.102:80             Route   2      0          0
Now let's verify in a browser.
With the complete configuration in place, browsing to the VIP shows the two realservers' pages rotating according to their weights.
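A quick sketch from a client shell (assumes curl is installed; with weights 3 and 2 under wlc the responses should split roughly 3:2 between the two realservers):
for i in $(seq 1 10); do curl -s http://192.168.0.100/; done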
Stopping heartbeat on node1 does not affect access. If we then stop httpd on rs1, only realserver2's page is shown.
And with that, the highly available cluster is fully configured.