Building a Highly Available Cluster with LVS + Keepalived

In the previous post I mentioned that my LVS + Keepalived experiment on VMware kept failing. I went through plenty of articles and videos, compared them against my own configuration, and could find nothing wrong, yet it still would not work. After countless attempts I even started to suspect VMware itself, but one video showed the setup working on VMware, so I tried again and again, and today I finally found the problem.

The problem turned out to be simple: I had built Keepalived with --prefix=/usr/local/keepalive, but when keepalived starts it looks for keepalived.conf under /etc/ by default, as the help output shows:

[root@server ~]# keepalived -h

Keepalived v1.2.7 (03/04,2013)

Usage:

keepalived -f keepalived.conf

-f, --use-file    Use the specified configuration file.
                  Default is /etc/keepalived/keepalived.conf.

Since my build pointed everything at /usr/local/keepalive, the keepalived service could not find keepalived.conf, which is why the LVS setup described in the previous post never worked.

There are two fixes: rebuild and install without a custom prefix, or, as the help output says, pass the path to keepalived.conf with -f. (The -f route did not seem to work for me, though; I may have gotten something wrong, and in the end I simply rebuilt Keepalived.)
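With hindsight, the fix can be sketched as a small shell snippet. The prefixed path below is hypothetical: it is where a --prefix=/usr/local/keepalive build typically puts the file, so check your own tree; on a real Director you would uncomment the keepalived line.

```shell
#!/bin/sh
# Pick the configuration file to hand to keepalived, depending on
# where it was installed.  The prefixed path is an assumption about
# a --prefix=/usr/local/keepalive build; verify it on your system.
DEFAULT=/etc/keepalived/keepalived.conf
PREFIXED=/usr/local/keepalive/etc/keepalived/keepalived.conf

if [ -f "$DEFAULT" ]; then
    CONF="$DEFAULT"      # default location: keepalived finds it on its own
elif [ -f "$PREFIXED" ]; then
    CONF="$PREFIXED"     # prefixed install: must be passed explicitly with -f
else
    CONF=""              # nothing found: keepalived would start but do nothing
fi

if [ -n "$CONF" ]; then
    echo "config: $CONF"
    # keepalived -f "$CONF"    # uncomment on a real Director
fi
```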


Newcomers often assume you first configure LVS and then layer Keepalived on top. In fact, on the Director you do not configure LVS at all: you configure Keepalived, and Keepalived drives LVS for you. The walkthrough below shows this in practice:
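To make that concrete: the virtual_server block used later in this article programs roughly the following into IPVS for you (shown only for illustration; on a keepalived Director you never run these by hand):

```shell
# What keepalived ends up configuring in IPVS, expressed as the
# equivalent manual ipvsadm commands -- for illustration only.
ipvsadm -A -t 192.168.30.254:80 -s wlc -p 50                  # virtual service, wlc scheduler, 50 s persistence
ipvsadm -a -t 192.168.30.254:80 -r 192.168.30.113:80 -g -w 1  # RS1, DR mode (-g), weight 1
ipvsadm -a -t 192.168.30.254:80 -r 192.168.30.114:80 -g -w 1  # RS2
```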


================= Installation and configuration =================

First, the lab topology.

[IP assignments]

Director(MASTER) 192.168.30.105

Director(BACKUP) 192.168.30.106

node1 192.168.30.113

node2 192.168.30.114

VIP 192.168.30.254


I. Installing ipvsadm

1. Install the dependencies

[root@server ~]# yum -y install popt popt-devel popt-static openssl-devel kernel-devel libnl libnl-devel


Note: popt-static may not ship with the system and may need to be downloaded separately. I have bundled it; the download link is at the bottom of this post.


2. Extract and install ipvsadm

[root@server src]# tar xf ipvsadm-1.26.tar.gz

[root@server src]# cd ipvsadm-1.26

[root@server ipvsadm-1.26]# make && make install

[root@server ipvsadm-1.26]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

If you see the output above, ipvsadm has been installed successfully.
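ipvsadm is only a user-space front end for the kernel's IPVS code, so if that first ipvsadm run errors out instead, the usual cause is that the ip_vs module is not loaded. A quick check (ip_vs is the standard module name on EL6 kernels):

```shell
# Make sure the IPVS kernel module is present before using ipvsadm.
lsmod | grep ip_vs || modprobe ip_vs
lsmod | grep ip_vs        # should now list ip_vs
```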


II. Installing Keepalived

1. Build and install Keepalived

[root@server src]# tar zxvf keepalived-1.2.7.tar.gz
[root@server src]# cd keepalived-1.2.7
[root@server keepalived-1.2.7 ]# ./configure --sysconf=/etc --with-kernel-dir=/usr/src/kernels/2.6.32-279.el6.x86_64/
[root@server keepalived-1.2.7 ]# make
[root@server keepalived-1.2.7]# make install

[root@server keepalived-1.2.7]# ln -s /usr/local/sbin/keepalived /sbin/keepalived


Note on the configure flags: if you install to a custom prefix such as /usr/local/keepalive, you have to point the daemon at keepalived.conf with -f every time. Passing --sysconf=/etc avoids that hassle entirely.

After configure finishes, make sure you see the three Yes lines below; without them Keepalived cannot drive LVS. If any of them is missing, check that configure was run with the kernel flag --with-kernel-dir:

Keepalived configuration

------------------------

Keepalived version : 1.2.7

Compiler : gcc

Compiler flags : -g -O2

Extra Lib : -lpopt -lssl -lcrypto

Use IPVS Framework : Yes

IPVS sync daemon support : Yes

Use VRRP Framework : Yes

Use LinkWatch : No

Use Debug flags : No


2. Configure keepalived.conf

[root@server ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER                # this node is the master; set BACKUP on the standby
    interface eth0              # interface VRRP runs on
    virtual_router_id 51
    priority 100                # the BACKUP node must use a lower priority
    advert_int 1
    authentication {            # VRRP authentication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {         # virtual IPs; one per line if there are several
        192.168.30.254
    }
}

virtual_server 192.168.30.254 80 {
    delay_loop 6
    lb_algo wlc                 # scheduler: wlc, weighted least-connection
    lb_kind DR                  # LVS forwarding mode: DR
    nat_mask 255.255.255.0      # subnet mask
    persistence_timeout 50
    protocol TCP

    real_server 192.168.30.113 80 {    # first real server
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            connect_port 80
        }
    }

    real_server 192.168.30.114 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            connect_port 80
        }
    }
}

On the backup server, the keepalived configuration is identical except for the two settings noted above: state becomes BACKUP and priority is lowered.
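For reference, on the BACKUP Director only these two lines of the vrrp_instance differ (priority 90 is my choice; any value below the master's 100 works):

```
vrrp_instance VI_1 {
    state BACKUP       # MASTER on the primary
    ...
    priority 90        # must be lower than the master's 100
    ...
}
```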


Once the configuration is in place, start the keepalived service:

[root@server ~]# service keepalived start

Starting keepalived: [ OK ]


Running ipvsadm shows that LVS is up:

[root@server ~]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.30.254:http wlc persistent 50

-> 192.168.30.113:http Route 1 0 0

-> 192.168.30.114:http Route 1 0 0


Check the addresses on eth0:

[root@server ~]# ip addr list |grep eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

inet 192.168.30.105/24 brd 192.168.30.255 scope global eth0

inet 192.168.30.254/32 scope global eth0

The VIP has appeared on the server.


Checking the local log confirms the service has started. (The TCP health-check failures below simply mean the web service on the real servers was not up yet; keepalived removes them from the virtual server and adds them back once they respond.)

[root@server ~]# tailf /var/log/messages

May 16 17:15:10 server Keepalived_healthcheckers[1310]: Netlink reflector reports IP 192.168.30.254 added

May 16 17:15:12 server Keepalived_healthcheckers[1310]: TCP connection to [192.168.30.113]:80 failed !!!

May 16 17:15:12 server Keepalived_healthcheckers[1310]: Removing service [192.168.30.113]:80 from VS [192.168.30.254]:80

May 16 17:15:12 server Keepalived_healthcheckers[1310]: Remote SMTP server [127.0.0.1]:25 connected.

May 16 17:15:12 server Keepalived_healthcheckers[1310]: TCP connection to [192.168.30.114]:80 failed !!!

May 16 17:15:12 server Keepalived_healthcheckers[1310]: Removing service [192.168.30.114]:80 from VS [192.168.30.254]:80

May 16 17:15:12 server Keepalived_healthcheckers[1310]: Lost quorum 1-0=1 > 0 for VS [192.168.30.254]:80

May 16 17:15:12 server Keepalived_healthcheckers[1310]: Remote SMTP server [127.0.0.1]:25 connected.

May 16 17:15:12 server Keepalived_healthcheckers[1310]: SMTP alert successfully sent.

May 16 17:15:12 server Keepalived_healthcheckers[1310]: SMTP alert successfully sent.

May 16 17:15:15 server Keepalived_vrrp[1311]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.254


Everything above was done on the master (server). The backup Director (client) needs ipvsadm and Keepalived installed in exactly the same way. The only difference is the two changes to keepalived.conf on the backup: 1. set the state to BACKUP; 2. lower the priority below the master's.


III. Configuring the real servers

1. Configure the real servers

I use a script for this:

#!/bin/bash
# LVS-DR real-server setup: bind the VIP to the loopback and
# suppress ARP for it, so that only the Director answers ARP
# requests for the VIP.  (The VIP must sit on lo:0, not eth0:0 --
# with the VIP on eth0 the arp_ignore setting below cannot stop
# the real server from answering ARP for it, and the real server
# would fight the Director over the address.)
VIP=192.168.30.254

case "$1" in
start)
        echo "Start LVS of DR"
        # /sbin/ifdown eth1             # specific to the original lab; usually not needed
        ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
        route add -host $VIP dev lo:0
        #route add default gw 192.168.30.200
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
stop)
        echo "Stop LVS of DR"
        ifconfig lo:0 down
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac


Run this script on both RS1 and RS2.
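Assuming the script was saved as realserver.sh (the filename is my choice), it is used like this on each real server:

```shell
chmod +x realserver.sh
./realserver.sh start
cat /proc/sys/net/ipv4/conf/all/arp_ignore   # should print 1 after "start"
./realserver.sh                              # no argument: prints usage, exits 1
```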


IV. Testing

Make sure the keepalived service is running on both Directors and that the script has been executed on both nodes before moving on to the tests.
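Before failing the master over, it is also worth confirming from a client machine that the VIP actually serves traffic. A minimal sketch (what each request returns depends on the pages the real servers serve; with persistence_timeout 50 set, repeated requests from one client stick to the same RS, so do not expect alternation):

```shell
# Hit the VIP a few times from a client outside the cluster.
for i in 1 2 3; do
    curl -s http://192.168.30.254/
done
```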

First, start watching the log on the backup Director with tailf /var/log/messages, then stop the keepalived service on the master to verify that the backup takes over when the master goes down.

[root@client ~]# tailf /var/log/messages

May 16 17:50:52 client Keepalived_vrrp[1546]: Registering gratuitous ARP shared channel

May 16 17:50:52 client Keepalived_healthcheckers[1545]: Configuration is using : 13986 Bytes

May 16 17:50:52 client Keepalived_vrrp[1546]: Opening file '/etc/keepalived/keepalived.conf'.

May 16 17:50:52 client Keepalived_vrrp[1546]: Configuration is using : 63007 Bytes

May 16 17:50:52 client Keepalived_vrrp[1546]: Using LinkWatch kernel netlink reflector...

May 16 17:50:52 client Keepalived_healthcheckers[1545]: Using LinkWatch kernel netlink reflector...

May 16 17:50:52 client Keepalived_healthcheckers[1545]: Activating healthchecker for service [192.168.30.113]:80

May 16 17:50:52 client Keepalived_healthcheckers[1545]: Activating healthchecker for service [192.168.30.114]:80

May 16 17:50:52 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Entering BACKUP STATE

May 16 17:50:52 client Keepalived_vrrp[1546]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]


The outputs below show that the backup has taken over: the LVS table is active and the VIP is now on its eth0.

[root@client ~]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.30.254:http wlc persistent 50

-> 192.168.30.113:http Route 1 0 0

-> 192.168.30.114:http Route 1 0 0


[root@client ~]# ip addr list |grep eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

inet 192.168.30.106/24 brd 192.168.30.255 scope global eth0

inet 192.168.30.254/32 scope global eth0


The log shows the transition:

May 16 17:50:52 client Keepalived_vrrp[1546]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]

May 16 17:52:06 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Transition to MASTER STATE

May 16 17:52:07 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Entering MASTER STATE

May 16 17:52:07 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) setting protocol VIPs.

May 16 17:52:07 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.254

May 16 17:52:07 client Keepalived_healthcheckers[1545]: Netlink reflector reports IP 192.168.30.254 added

May 16 17:52:07 client avahi-daemon[1076]: Registering new address record for 192.168.30.254 on eth0.IPv4.

May 16 17:52:12 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.30.254


Bring the keepalived service on the master back up, then check the backup's log:


[root@client ~]# tailf /var/log/messages

May 16 17:54:02 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Received higher prio advert

May 16 17:54:02 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) Entering BACKUP STATE

May 16 17:54:02 client Keepalived_vrrp[1546]: VRRP_Instance(VI_1) removing protocol VIPs.

May 16 17:54:02 client Keepalived_healthcheckers[1545]: Netlink reflector reports IP 192.168.30.254 removed

May 16 17:54:02 client avahi-daemon[1076]: Withdrawing address record for 192.168.30.254 on eth0.


The VIP has floated back to the master Director.

That completes the LVS + Keepalived setup!


Download link for the packages used in this post:

http://down.51cto.com/data/793964
