LVS FullNAT + keepalived + HA/ECMP (OSPF)

Download the operations manual from the LVS project site (http://kb.linuxvirtualserver.org/wiki/IPVS_FULLNAT_and_SYNPROXY): http://kb.linuxvirtualserver.org/images/c/c8/LVS%E6%93%8D%E4%BD%9C%E6%89%8B%E5%86%8C.zip

LVS FullNAT with keepalived can be deployed in two ways: an HA cluster or a FullNAT load-balancing cluster. The HA cluster is the conventional master/backup setup: one LVS node carries all production traffic while the other stands by as a backup. In the load-balancing cluster, every LVS node shares the traffic evenly, which requires the upstream router to be configured with a matching routing policy. The LVS build process is identical in both modes; only the keepalived configuration differs, and the load-balancing mode additionally requires OSPF to be configured in the operating system.



1. LVS system setup: kernel builds for the directors and real servers + keepalived + system parameters and tuning


  • Building and installing LVS (director)

FullNAT mode requires the LVS patch to be applied to the kernel, so it is best to start from a clean kernel source, apply the patch, and rebuild. The following are required:

1. iptables turned off and SELinux disabled

2. a kernel that supports LVS FullNAT

3. a keepalived build that supports FullNAT (FNAT) mode

4. an ipvsadm build that supports FullNAT

Download the LVS patch set from the project site; for the kernel, a pristine source RPM from the Red Hat repositories is fine. The FullNAT patch bundle is at http://kb.linuxvirtualserver.org/images/a/a5/Lvs-fullnat-synproxy.tar.gz, but after unpacking it you will find it only targets kernel 2.6.32-220: the front-end directors need lvs-2.6.32-220.23.1.el6.patch, while the real servers need toa-2.6.32-220.23.1.el6.patch. I previously tried 2.6.32-431 and 2.6.32-573; applying lvs-2.6.32-220.23.1.el6.patch to a 431 kernel on the director produces quite a few rejects that would take real time to fix by hand, so I simply reinstalled the directors with CentOS 6.2 instead of CentOS 6.7 (the director and real-server OS versions do not have to match), which went much more smoothly. On the real servers (CentOS 6.5), toa-2.6.32-220.23.1.el6.patch also reports a few errors, but they are easy to fix. My director and real-server versions are as follows:



DS: CentOS 6.2

kernel version: 2.6.32-220.el6.x86_64

lvs_dr01

eth0 : 5.5.5.5

lvs_dr02:

eth0: 5.5.5.6

RS: CentOS 6.5

kernel version: 2.6.32-431.el6.x86_64

lvs_rs01:

eth0: 5.5.5.102

lvs_rs02:

eth0:5.5.5.103

lvs_rs03:

eth0:5.5.5.104

lvs_rs04:

eth0:5.5.5.105
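Because the patch set is version-locked to 2.6.32-220 on the director side, a small pre-flight guard before patching can save a failed build. This is a hypothetical helper, not part of the original manual; `check_kernel_matches` is a name invented here:

```shell
# Hypothetical pre-flight check: compare the kernel source version against
# the version the LVS patch targets before attempting to apply the patch.
check_kernel_matches() {
    local required="$1" actual="$2"
    case "$actual" in
        "$required"*) echo "ok" ;;
        *) echo "mismatch: patch targets $required, source is $actual" ;;
    esac
}

check_kernel_matches "2.6.32-220" "2.6.32-220.el6.x86_64"   # ok
check_kernel_matches "2.6.32-220" "2.6.32-431.el6.x86_64"   # mismatch
```

On a real director the second argument would come from the unpacked source tree name (or `uname -r` after installation).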



Director:

   wget  http://kb.linuxvirtualserver.org/images/a/a5/Lvs-fullnat-synproxy.tar.gz

    wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/kernel-2.6.32-220.el6.src.rpm



useradd lvs

groupadd mockbuild

useradd -g mockbuild mockbuild

echo '%_topdir /home/lvs/rpms' >>  ~/.rpmmacros
echo '%_tmppath /home/lvs/rpms/tmp' >>  ~/.rpmmacros
echo '%_sourcedir /home/lvs/rpms/SOURCES' >>  ~/.rpmmacros
echo '%_specdir /home/lvs/rpms/SPECS' >>  ~/.rpmmacros
echo '%_srcrpmdir /home/lvs/rpms/SRPMS' >>  ~/.rpmmacros
echo '%_rpmdir /home/lvs/rpms/RPMS' >>  ~/.rpmmacros
echo '%_builddir /home/lvs/rpms/BUILD' >>  ~/.rpmmacros

cd /home/lvs/

mkdir rpms/{tmp,SOURCES,SPECS,SRPMS,RPMS,BUILD} -pv

rpm -ivh  /usr/local/src/lvs/kernel-2.6.32-220.el6.src.rpm

yum install -y rpm-build-4.8.0-47.el6.x86_64  xmlto asciidoc elfutils-libelf-devel elfutils-devel  zlib-devel binutils-devel newt-devel  python-devel audit-libs-devel perl hmaccalc  perl-ExtUtils-Embed  rng-tools  openssl-devel

## Note: rng-tools supplies entropy while `rpmbuild -bb --target=`uname -m` kernel.spec` runs; without it the build can hang waiting for random numbers. If it does hang, the messages just before the stall suggest running `rngd -r /dev/hwrandom` (or `rngd -r /dev/urandom` if that fails), which is why the tool is installed here.

yum -y groupinstall "Development tools"

cd /usr/local/src/lvs/

tar xf  Lvs-fullnat-synproxy.tar.gz

cd /home/lvs/rpms/SPECS/

rpmbuild -bp kernel.spec

sed -i 's/CONFIG_IP_VS_TAB_BITS=.*$/CONFIG_IP_VS_TAB_BITS=20/g' /home/lvs/rpms/SOURCES/config-generic

cd /home/lvs/rpms/BUILD/kernel-2.6.32-220.el6/linux-2.6.32-220.el6.x86_64/

cp /usr/local/src/lvs/lvs-fullnat-synproxy/lvs-2.6.32-220.23.1.el6.patch ./

patch -p1 < lvs-2.6.32-220.23.1.el6.patch 

make -j16;
make modules_install;
make install;
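Note that `make install` adds the new kernel entry to grub.conf but does not necessarily make it the default; the real-server script later in this article sets `default=0` for exactly this reason, and the same one-liner applies on the director. Demonstrated here on a scratch copy rather than the live /boot/grub/grub.conf:

```shell
# Demonstrate the grub default fix on a temporary copy; on a real director
# you would run the sed against /boot/grub/grub.conf after confirming the
# new kernel is the first (index 0) title entry.
grubcfg=$(mktemp)
printf 'default=1\ntimeout=5\ntitle CentOS (2.6.32-220.el6)\n' > "$grubcfg"
sed -i 's/^default=.*$/default=0/' "$grubcfg"
grep '^default=' "$grubcfg"    # prints "default=0"
rm -f "$grubcfg"
```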






  • Building and installing the real servers

Real server:

[root@lvs_rs02 lvs]# cat lvs_rs.sh 
#!/bin/bash
useradd lvs
groupadd mockbuild
useradd -g mockbuild mockbuild
echo '%_topdir /home/lvs/rpms' >>  ~/.rpmmacros
echo '%_tmppath /home/lvs/rpms/tmp' >>  ~/.rpmmacros
echo '%_sourcedir /home/lvs/rpms/SOURCES' >>  ~/.rpmmacros
echo '%_specdir /home/lvs/rpms/SPECS' >>  ~/.rpmmacros
echo '%_srcrpmdir /home/lvs/rpms/SRPMS' >>  ~/.rpmmacros
echo '%_rpmdir /home/lvs/rpms/RPMS' >>  ~/.rpmmacros
echo '%_builddir /home/lvs/rpms/BUILD' >>  ~/.rpmmacros


cd /home/lvs/
mkdir rpms/{tmp,SOURCES,SPECS,SRPMS,RPMS,BUILD} -pv
rpm -ivh  /usr/local/src/lvs/kernel-2.6.32-431.el6.src.rpm


yum install -y rpm-build  xmlto asciidoc elfutils-libelf-devel elfutils-devel  zlib-devel binutils-devel newt-devel  python-devel audit-libs-devel perl hmaccalc  perl-ExtUtils-Embed  rng-tools
yum -y groupinstall "Development tools" 
cd /home/lvs/rpms/SPECS/
#rpmbuild -bb --target=`uname -m` kernel.spec
rpmbuild -bp  kernel.spec
sed -i 's/CONFIG_IP_VS_TAB_BITS=.*$/CONFIG_IP_VS_TAB_BITS=20/g' /home/lvs/rpms/SOURCES/config-generic
cd /usr/local/src/lvs/
tar xf  Lvs-fullnat-synproxy.tar.gz 
cd /home/lvs/rpms/BUILD/kernel-2.6.32-431.el6/linux-2.6.32-431.el6.x86_64/
patch -p1 < /usr/local/src/lvs/lvs-fullnat-synproxy/toa-2.6.32-220.23.1.el6.patch
sed -i '/WIMAX/a obj-$(CONFIG_TOA) += toa/' net/Makefile
make -j8
make modules_install
make  install
sed  -i 's/default.*$/default=0/' /boot/grub/grub.conf
modprobe toa
# load the toa module again on boot
echo 'modprobe toa' >> /etc/rc.local
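To confirm the module actually loaded, `lsmod | grep -w toa` is enough; a slightly more scriptable variant is sketched below (a hypothetical helper, assuming standard `lsmod` output with the module name in column 1):

```shell
# Report whether a module name appears in lsmod-style input (column 1).
module_loaded() {
    awk -v m="$1" '$1 == m { found=1 } END { print (found ? "yes" : "no") }'
}

# On a real server you would run: lsmod | module_loaded toa
printf 'toa 4608 0\nip_vs 126534 2\n' | module_loaded toa    # prints "yes"
```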


  • Building, installing, and configuring keepalived

Install and configure keepalived on the LVS directors:

cd /usr/local/src/lvs/

tar xf  Lvs-fullnat-synproxy.tar.gz

cd /usr/local/src/lvs/lvs-fullnat-synproxy

cd /home/lvs

cp /usr/local/src/lvs/lvs-fullnat-synproxy/lvs-tools.tar.gz  ./
tar xf lvs-tools.tar.gz;
cd tools;

  cd keepalived;
  ./configure --with-kernel-dir="/lib/modules/`uname -r`/build";
  make;
  make install;


mkdir /etc/keepalived/keepalived.d -pv

cp -a bin/keepalived /sbin/

cp -a keepalived/etc/init.d/keepalived.init /etc/init.d/keepalived

cp -a keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.d

cp -a keepalived/etc/init.d/keepalived.sysconfig /etc/sysconfig/keepalived


Install ipvsadm:

  cd tools/ipvsadm;
  make;
  make install;



The keepalived configuration files are as follows:

HA cluster mode:

MASTER (on lvs_dr01):

[root@lvs_dr01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived






global_defs {
   notification_email {
root@localhost
  }
   notification_email_from [email protected]
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30
   router_id  lvs_dr01
   vrrp_mcast_group4 224.30.0.2
}


local_address_group laddr_g1 {
5.5.5.5
}


virtual_server_group vip {
5.5.5.254 80
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 100
    advert_int 1
    nopreempt FALSE
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        5.5.5.254
   }
}


virtual_server 5.5.5.254 80 {
delay_loop 6
lb_algo rr
lb_kind FNAT
protocol TCP
syn_proxy    
laddr_group_name laddr_g1 

real_server 5.5.5.102 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 80     # port 80 here is the real server's service port
}
}


        real_server 5.5.5.103 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }
        real_server 5.5.5.104 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }
        real_server 5.5.5.105 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }


}

 


BACKUP:

[root@lvs_dr02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived






global_defs {
   notification_email {
root@localhost
  }
   notification_email_from [email protected]
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30
   router_id  lvs_dr02
   vrrp_mcast_group4 224.30.0.2
}


local_address_group laddr_g1 {
5.5.5.6
}


virtual_server_group vip {
5.5.5.254 80
}
vrrp_instance VI_1 {
    state BACKUP 
    interface eth0
    virtual_router_id 53
    priority 50 
    advert_int 1
    nopreempt FALSE
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        5.5.5.254
   }
}


virtual_server 5.5.5.254 80 {
delay_loop 6
lb_algo rr
lb_kind FNAT
protocol TCP
syn_proxy    
laddr_group_name laddr_g1 

real_server 5.5.5.102 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}


        real_server 5.5.5.103 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }
        real_server 5.5.5.104 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }
        real_server 5.5.5.105 80 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
        }
        }


}


Verify that the real servers have successfully joined the cluster:

[Figure 1: real servers listed under the virtual service]


Verify the toa module:

From host 10.129.36.242, open http://5.5.5.254/test1.html in a browser.


Then check the log on lvs_rs04: if the recorded client IP is the real client address (as below), toa is working; if not, the log will instead show the director's address, 5.5.5.5.

[Figure 2: access log on lvs_rs04 showing the real client IP]
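Pulling the client address out of an access-log line is just a matter of taking the first field; the sketch below assumes the common Apache/nginx combined log layout (the sample line is illustrative, not from the original capture):

```shell
# With toa working, the first field of the real server's access log is the
# genuine client IP rather than the director's address.
log_line='10.129.36.242 - - [18/Jul/2017:10:40:01 +0800] "GET /test1.html HTTP/1.1" 200 12'
client_ip=${log_line%% *}    # strip everything after the first space
echo "$client_ip"            # prints 10.129.36.242
```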


LVS FullNAT load-balancing cluster mode:

The FullNAT cluster mode needs the layer-3 switch to balance traffic across the LVS nodes themselves. We use ECMP (equal-cost multipath) routing; the switch and the LVS nodes exchange routes via OSPF, so besides the switch-side configuration, OSPF must also be configured on each LVS node. For that we use quagga, a software router and the successor to zebra. If anything about the network side is unclear, corrections are welcome. Our upstream device is a core switch; I may post its configuration later if available. This article focuses on the host-side configuration: the LVS system setup (kernel builds for directors and real servers, keepalived, system parameters and tuning) plus the OSPF setup on the LVS nodes (installing and configuring quagga).


! Configuration File for keepalived






global_defs {
   notification_email {
        root@localhost
  }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs1
   vrrp_mcast_group4 224.30.0.2
}


local_address_group laddr_g1 {
        5.5.5.60-100     # LIP address pool, i.e. every address from 5.5.5.60 to 5.5.5.100. The pool provides more LIPs (and therefore more ports) for talking to the back-end real servers: with thousands of real servers, the at most 65535 ports of a single LIP would clearly not be enough. These LIP addresses must actually be configured on the corresponding NIC before they can be used.
}


virtual_server_group vip {
      2.2.2.2 9828      # 2.2.2.2 is VIP 1 and 9828 its service port (chosen arbitrarily here)

      3.3.3.3 9878    # VIP 2 is 3.3.3.3, with virtual service port 9878
}


# first group of virtual and real services
virtual_server 2.2.2.2 9828 {      # 2.2.2.2 is the VIP, 9828 its port (arbitrary); more virtual/real service groups can be defined as the business requires
        delay_loop 6
        lb_algo lc
        lb_kind FNAT
        protocol TCP
#       syn_proxy
        laddr_group_name laddr_g1
        alpha
#       omega

        quorum 1
        hysteresis 0
        quorum_up "ip addr  add x.x.x.x/32 dev lo;ip addr  add 127.0.0.1/8 dev lo;"    # add the VIP on lo. In practice this hook proved unreliable, so the addresses ended up being added/removed centrally in /etc/init.d/keepalived. The trailing 127.0.0.1/8 was a one-off fix because that address had gone missing from lo at the time; include it only if needed, it does no harm otherwise.
        quorum_down "ip addr  del x.x.x.x/32 dev lo;"   # remove the VIP; as above, this was handled in /etc/init.d/keepalived instead in practice
       real_server 1.1.1.2 80 {    # real server IP 1.1.1.2, service port 80

        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 8080
        }
                }
       real_server 1.1.1.3 81 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 8080
        }
                }
       real_server 1.1.1.4 82 {
        weight 1
        TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 8080
        }

        }

}
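The sizing logic behind the LIP pool above can be sanity-checked with simple arithmetic: 41 LIPs (5.5.5.60 through 5.5.5.100) at an upper bound of 65535 ports each give the director well over 2.6 million concurrent LIP:port tuples toward the real servers.

```shell
# Back-of-envelope capacity of the laddr_g1 pool above.
first=60
last=100
lips=$((last - first + 1))       # 41 local addresses
ports_per_lip=65535              # upper bound; the usable ephemeral range is smaller
echo $((lips * ports_per_lip))   # prints 2686935
```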

  • System parameter tuning:

cat  /etc/sysctl.conf

net.ipv4.ip_forward = 1

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.all.arp_announce = 2

net.core.netdev_max_backlog = 500000
net.ipv4.tcp_timestamps = 0
fs.nr_open = 5242880
fs.file-max = 4194304
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 10


cat /etc/security/limits.conf

*               soft    nofile          102400
*               hard    nofile          102400


2. Installing and configuring OSPF

Install: yum -y install quagga

[root@lvs1 ~]# rpm -qa quagga
quagga-0.99.15-14.el6.x86_64

Configuration:

cd /etc/quagga

cp ospfd.conf.sample ospfd.conf

cp zebra.conf.sample zebra.conf

[root@lvs1 quagga]# cat ospfd.conf
!
! Zebra configuration saved from vty
!   2017/07/18 10:39:06
!
hostname ospfd
password your_password
log stdout
!
!
!
interface eth0
!
interface eth1
!
interface lo
!
router ospf
 network 5.5.5.5/24 area 0.0.0.0     # 5.5.5.5 is on the subnet used for OSPF peering with the upstream switch, usually also the outward-facing IP
 network 2.2.2.2/32 area 0.0.0.0  # 2.2.2.2 is the LVS VIP
!
line vty
!

[root@lvs1 quagga]# cat zebra.conf
!
! Zebra configuration saved from vty
!   2017/07/18 10:35:13
!
hostname lvs1
password zebra
enable password zebra
!
debug zebra rib
!
interface eth0
 ip address 5.5.5.5/24     # the local eth0 address, i.e. the OSPF peering address with the upstream switch/router
 ipv6 nd suppress-ra
!
interface eth1
 ipv6 nd suppress-ra
!
interface lo 
 ip address 2.2.2.2/32   # the LVS VIP
!
router-id 5.5.5.5    # must be unique; it can be chosen freely and need not be an IP address on this host
ip forwarding
!
!
line vty
!
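Once zebra and ospfd are running, the adjacency can be checked with quagga's `vtysh -c 'show ip ospf neighbor'`; a Full state means routes are being exchanged. A small hypothetical filter over that output (assuming the neighbor state is in column 3, as in quagga 0.99):

```shell
# Count neighbors whose OSPF state column begins with "Full".
count_full_neighbors() {
    awk '$3 ~ /^Full/ { n++ } END { print n+0 }'
}

# On a live node: vtysh -c 'show ip ospf neighbor' | count_full_neighbors
printf '5.5.5.1 1 Full/DR 31.2s 5.5.5.1 eth0:5.5.5.5\n' | count_full_neighbors   # prints 1
```

If this prints 0 on a live node, the VIP route is not being advertised and the ECMP path from the switch will not include this director.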


Reference blog: http://shanks.blog.51cto.com/3899909/1536539

LVS NIC tuning: http://navyaijm.blog.51cto.com/4647068/1334671

<1>: miimon is the bonding link-monitor interval in ms; miimon=100 checks link state every 100 ms and switches to the other link if one goes down. 100 is the recommended value; other values may cause instability.
<2>: mode sets how the two bonded NICs operate: 0 is load balancing, 1 is active-backup (active-backup is recommended).

Error reference: http://www.iyunv.com/thread-66463-1-1.html

Problem 1: keepalived's quorum_down option does not actually remove the VIP and LIPs. What then?

Workaround: add and remove the VIP and LIPs directly in the /etc/init.d/keepalived service script. Excerpt:


start() {
    echo -n $"Starting $prog: "
    daemon keepalived ${KEEPALIVED_OPTIONS}
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
    ip addr add 2.2.2.2/32 dev lo    # add the VIP
    /bin/sh /etc/keepalived/ipadd.sh start    # add the LIPs
}


stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
    ip addr del 2.2.2.2/32 dev lo    # remove the VIP
    /bin/sh /etc/keepalived/ipadd.sh stop    # remove the LIPs
}

[root@lvs1 ~]# cat /etc/keepalived/ipadd.sh
#!/bin/bash
arg=$1
dev=eth1
network="5.5.5"      # LIP address prefix; the loop below appends ".$i" to build each LIP
seq="60 100"
function start() {
for i in `seq $seq`
do
ip addr add $network.$i/20 dev $dev
done
}
function stop() {
for i in `seq $seq`
do
ip addr del $network.$i/20 dev $dev
done
}
case "$arg" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
esac
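A quick dry-run of the same loop (counting echoed commands instead of calling `ip`) confirms the pool covers all 41 LIPs:

```shell
# Dry-run variant of ipadd.sh: count the ip commands instead of executing them.
network="5.5.5"
seq_range="60 100"
for i in $(seq $seq_range); do
    echo "ip addr add ${network}.${i}/20 dev eth1"
done | wc -l    # prints 41, one command per LIP
```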



AUTHOR: 网名为什么那么长

Email: [email protected]

