Since the CentOS 6.3 ISO does not ship a heartbeat RPM, first install the EPEL repository from the Internet (the following steps must be executed on both nodes):
wget ftp://mirror.switch.ch/pool/1/mirror/scientificlinux/6rolling/i386/os/Packages/epel-release-6-5.noarch.rpm
rpm -Uvh epel-release-6-5.noarch.rpm
vi /etc/yum.repos.d/epel.repo    # change line 6 to enabled=0, so EPEL is only used when explicitly enabled
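For reference, after the edit the [epel] stanza in /etc/yum.repos.d/epel.repo should look roughly like the following (the exact URLs depend on the epel-release build, so treat this as a sketch):

[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6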
Install heartbeat and Pacemaker with yum:
yum --enablerepo=epel install heartbeat cluster-glue
Since you will most likely also want to install Pacemaker (beyond the scope of this manual), issue the following command as well:
yum --enablerepo=epel install resource-agents pacemaker
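To confirm everything landed, you can query the RPM database (the exact versions depend on the EPEL snapshot you pulled):

rpm -qa | grep -E 'heartbeat|cluster-glue|resource-agents|pacemaker'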
12. Modify the heartbeat configuration files (the following steps must be executed on both nodes).
Copy the sample configuration file, resource file, and authentication key file into place:
cp /usr/share/doc/heartbeat-3.0.4/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-3.0.4/haresources /etc/ha.d/
cp /usr/share/doc/heartbeat-3.0.4/authkeys /etc/ha.d/
Then edit the main configuration file:
vi /etc/ha.d/ha.cf
logfile /var/log/ha-log            # the main log file to check
logfacility local0
keepalive 1                        # heartbeat interval: 1 second
deadtime 10                        # declare the peer dead after 10 seconds without a response
warntime 5                         # log a warning after the peer has been unreachable for 5 seconds
initdead 60                        # startup grace period; should be at least twice deadtime
udpport 694                        # UDP port for heartbeat traffic
ucast eth0 192.168.135.129         # the peer node's IP; on the other node change this to 192.168.135.130
auto_failback off                  # do not move resources back automatically when a failed node returns
node web1                          # node names must match the output of uname -n
node web2
chmod 600 /etc/ha.d/authkeys
vi /etc/ha.d/authkeys
auth 1
1 crc
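Note that crc provides only integrity checking with no real authentication, which is acceptable on a trusted, dedicated heartbeat link. If the link is shared, a keyed digest is the usual choice; a sketch with a placeholder secret:

auth 1
1 sha1 SomeSharedSecret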
vi /etc/ha.d/haresources
web1 IPaddr::192.168.135.0/24/eth1 drbddisk::r0 Filesystem::/dev/drbd1::/drbd::ext3 httpd
Explanation of the resource line:
web1 – the hostname of the node that will be the primary (must match uname -n)
IPaddr::192.168.135.0/24/eth1 – the cluster VIP to bring up on eth1; note that the VIP must be an unused host address on the subnet, so replace the .0 network address shown above with a real one
drbddisk::r0 – activate the r0 resource disk (make sure r0 corresponds to whatever your resource is named)
Filesystem::/dev/drbd1::/drbd::ext3 – mount /dev/drbd1 on /drbd as an ext3 filesystem
httpd – the service heartbeat will watch over and fail over; in this case Apache httpd
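Once all three files are in place on both nodes, heartbeat can be started and verified. A plausible check sequence on CentOS 6 (standard service tooling, nothing assumed beyond the configuration above):

service heartbeat start
chkconfig heartbeat on
# on the active node, the VIP, the DRBD mount, and httpd should all be up:
ip addr show eth1
df -h /drbd
service httpd status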
Installing Keepalived
yum install keepalived
This uses yum, of course; you can also download whichever version you need from the official site, http://www.keepalived.org/software/.
[root@web1 ~]# keepalived --help
Keepalived v1.2.7 (02/21,2013)
Usage:
  keepalived
  keepalived -n
  keepalived -f keepalived.conf
  keepalived -d
  keepalived -h
  keepalived -v
Commands:
Either long or short options are allowed.
  keepalived --vrrp               -P    Only run with VRRP subsystem.
  keepalived --check              -C    Only run with Health-checker subsystem.
  keepalived --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
  keepalived --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
  keepalived --dont-fork          -n    Dont fork the daemon process.
  keepalived --use-file           -f    Use the specified configuration file.
                                        Default is /etc/keepalived/keepalived.conf.
  keepalived --dump-conf          -d    Dump the configuration data.
  keepalived --log-console        -l    Log message to local console.
  keepalived --log-detail         -D    Detailed log messages.
  keepalived --log-facility       -S    0-7 Set syslog facility to LOG_LOCAL[0-7]. (default=LOG_DAEMON)
  keepalived --snmp               -x    Enable SNMP subsystem
  keepalived --help               -h    Display this short inlined help screen.
  keepalived --version            -v    Display the version number
  keepalived --pid                -p    pidfile
  keepalived --checkers_pid       -c    checkers pidfile
  keepalived --vrrp_pid           -r    vrrp pidfile
Installing LVS
The various architectures for scalable network services all require a front-end load balancer (or several of them in a master/backup arrangement). Among the techniques for implementing virtual network services, IP load balancing is the most efficient one to implement in the load balancer. Existing IP load balancing techniques include assembling a group of servers into a single high-performance, highly available virtual server through Network Address Translation, known as VS/NAT (Virtual Server via Network Address Translation). Based on an analysis of VS/NAT's drawbacks and of the asymmetry of network services, two further methods were proposed: VS/TUN (Virtual Server via IP Tunneling), which builds the virtual server over IP tunnels, and VS/DR (Virtual Server via Direct Routing), which builds it via direct routing; both greatly improve the scalability of the system. VS/NAT, VS/TUN, and VS/DR are the three IP load balancing techniques implemented in LVS clusters.
yum install ipvsadm
[root@web1 ~]# ipvsadm --help
ipvsadm v1.25 2008/5/15 (compiled with popt and IPVS v1.2.1)
Usage:
ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]] [-M netmask]
ipvsadm -D -t|u|f service-address
ipvsadm -C
ipvsadm -R
ipvsadm -S [-n]
ipvsadm -a|e -t|u|f service-address -r server-address [options]
ipvsadm -d -t|u|f service-address -r server-address
ipvsadm -L|l [options]
ipvsadm -Z [-t|u|f service-address]
ipvsadm --set tcp tcpfin udp
ipvsadm --start-daemon state [--mcast-interface interface] [--syncid sid]
ipvsadm --stop-daemon state
ipvsadm -h
Seeing this usage output means the installation succeeded.
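As a concrete illustration of VS/DR, the director can be configured by hand with ipvsadm before wiring anything into heartbeat or Keepalived. The VIP below matches the real-server script later in this post; the two real-server addresses 192.168.1.101 and 192.168.1.102 are made-up examples:

# add a TCP virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 192.168.1.250:80 -s rr
# add two real servers in direct-routing (DR) mode
ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.101:80 -g
ipvsadm -a -t 192.168.1.250:80 -r 192.168.1.102:80 -g
# list the resulting table with numeric addresses
ipvsadm -Ln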
We can build a highly available LVS cluster with heartbeat.
We can also build one with Keepalived.
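With Keepalived, the same VS/DR setup is expressed declaratively in /etc/keepalived/keepalived.conf. A minimal sketch, reusing the VIP and the hypothetical real-server addresses from the ipvsadm example above:

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the standby director
    interface eth0
    virtual_router_id 51
    priority 100                  # use a lower priority on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.250
    }
}

virtual_server 192.168.1.250 80 {
    delay_loop 6                  # health-check interval in seconds
    lb_algo rr                    # round-robin scheduling
    lb_kind DR                    # direct routing, i.e. VS/DR
    protocol TCP
    real_server 192.168.1.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3     # drop the real server if port 80 stops answering
        }
    }
    real_server 192.168.1.102 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}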
Next, configure the real servers. Here we do it with a script (run this script only on the web machines; my setup uses two machines, each acting as both an LVS director and a web server):
#!/bin/bash
# description: configure this real server for LVS/DR
VIP=192.168.1.250                                    # the virtual IP (VIP)
. /etc/rc.d/init.d/functions                         # source the init-script helper functions
case "$1" in
start)
echo " start LVS of REALServer"
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.0 up
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
;;
stop)
/sbin/ifconfig lo:0 down
echo "Stopping LVS real server"
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore      # restore the default ARP behaviour
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
Note that this script does not itself detect failed machines or perform the failover; the director handles that. It simply binds the VIP to the loopback interface and suppresses ARP for it, which is what allows a real server to accept packets forwarded by the LVS/DR director.
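To use it, save the script under a path of your choosing, say /etc/init.d/realserver (the name is just a suggestion), make it executable, run it on each web machine, and confirm the VIP and ARP settings took effect:

chmod +x /etc/init.d/realserver
/etc/init.d/realserver start
ifconfig lo:0                      # should show 192.168.1.250
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce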
The hands-on experiments will follow in a later post.