KeepAlived+Haproxy Cluster


  • FileName: KeepAlived+Haproxy集群.txt
  • Function: Implement a load-balancer cluster with Keepalived and HAProxy
  • Version: V1.0 (trial version)
  • ChangeLog: 2015/08/27  yzhantong.com (internal test passed)

0. Planning
Network environment:
192.168.146.220 VIP
eth0: 192.168.146.221 node1.mycluster.com
eth1: 10.0.0.221 (unused for now; may be used later for the cluster's internal network, e.g. a database or file server)

eth0: 192.168.146.222 node2.mycluster.com
eth1: 10.0.0.222

Operating system:
CentOS release 6.6 (Final) x86_64

1. Set up the hosts file on both nodes
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.146.221 node1.mycluster.com
192.168.146.222 node2.mycluster.com

[root@node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.146.221 node1.mycluster.com
192.168.146.222 node2.mycluster.com
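
To confirm that both names resolve on each node (a quick check, not part of the original procedure):

[root@node1 ~]# getent hosts node1.mycluster.com node2.mycluster.com
[root@node1 ~]# ping -c 1 node2.mycluster.com

Both should return the 192.168.146.x addresses defined in /etc/hosts.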

2. Install HAProxy and Keepalived on both nodes
[root@node1 ~]# yum -y install haproxy keepalived
[root@node2 ~]# yum -y install haproxy keepalived
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile

  • base: mirrors.btte.net
  • extras: mirrors.btte.net
  • updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package haproxy.i686 0:1.5.2-2.el6 will be installed
---> Package keepalived.i686 0:1.2.13-5.el6_6 will be installed
--> Processing Dependency: libnl.so.1 for package: keepalived-1.2.13-5.el6_6.i686
--> Running transaction check
---> Package libnl.i686 0:1.1.4-2.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
Package            Arch        Version                Repository      Size
====================================================================================================================================
Installing:
haproxy            i686        1.5.2-2.el6            base            787 k
keepalived         i686        1.2.13-5.el6_6         updates         209 k
Installing for dependencies:
libnl              i686        1.1.4-2.el6            base            124 k

Transaction Summary

Install 3 Package(s)

Total download size: 1.1 M
Installed size: 3.4 M
Downloading Packages:
(1/3): haproxy-1.5.2-2.el6.i686.rpm | 787 kB 00:00
(2/3): keepalived-1.2.13-5.el6_6.i686.rpm | 209 kB 00:00
(3/3): libnl-1.1.4-2.el6.i686.rpm | 124 kB 00:00


Total 5.7 MB/s | 1.1 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libnl-1.1.4-2.el6.i686 1/3
Installing : keepalived-1.2.13-5.el6_6.i686 2/3
Installing : haproxy-1.5.2-2.el6.i686 3/3
Verifying : libnl-1.1.4-2.el6.i686 1/3
Verifying : keepalived-1.2.13-5.el6_6.i686 2/3
Verifying : haproxy-1.5.2-2.el6.i686 3/3

Installed:
haproxy.i686 0:1.5.2-2.el6 keepalived.i686 0:1.2.13-5.el6_6

Dependency Installed:
libnl.i686 0:1.1.4-2.el6

Complete!
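
Optionally, confirm what was installed on each node:

[root@node1 ~]# rpm -q haproxy keepalived
[root@node1 ~]# haproxy -v
[root@node1 ~]# keepalived -v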

3. Make sure both services start at boot on both nodes
[root@node1 ~]# chkconfig haproxy on && chkconfig keepalived on
[root@node1 ~]# chkconfig | egrep 'haproxy|keepalived'
haproxy 0:off 1:off 2:on 3:on 4:on 5:on 6:off
keepalived 0:off 1:off 2:on 3:on 4:on 5:on 6:off

[root@node2 ~]# chkconfig haproxy on && chkconfig keepalived on
[root@node2 ~]# chkconfig | egrep 'haproxy|keepalived'
haproxy 0:off 1:off 2:on 3:on 4:on 5:on 6:off
keepalived 0:off 1:off 2:on 3:on 4:on 5:on 6:off

4. Allow binding to non-local virtual IPs on all nodes
[root@node1 ~]# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
[root@node1 ~]# sysctl -p

[root@node2 ~]# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
[root@node2 ~]# sysctl -p
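
This setting lets a process bind to the VIP even on the node that does not currently hold it, which matters if HAProxy is later bound to the VIP explicitly instead of *:80. It can be read back to confirm it is active:

[root@node1 ~]# sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1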

5. Configure HAProxy
[root@node1 haproxy]# pwd
/etc/haproxy
[root@node1 haproxy]# cat haproxy.cfg

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------

global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2

chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------

defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------

frontend main *:80
default_backend webservers

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend webservers
mode http
cookie webservers insert
stats enable
stats auth admin:admin
stats uri /haproxy?stats
balance roundrobin
option httpclose
option forwardfor
#server  webserver1 10.0.0.222:8000 check
server  webserver1 10.0.0.222:8000 cookie webserver1 check
#server  webserver2 10.0.0.221:8000 check
server  webserver2 10.0.0.221:8000 cookie webserver2 check

listen stats :8888
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth admin:Pa55wd@CM

[root@node2 haproxy]# pwd
/etc/haproxy
[root@node2 haproxy]# cat haproxy.cfg
(identical to node1's /etc/haproxy/haproxy.cfg shown above)
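
With this configuration the frontend listens on *:80 and hands everything to the webservers backend, which round-robins between the two nginx instances on port 8000 and pins each client to one server through the inserted cookie; the statistics page is exposed separately on port 8888. Before starting or reloading the service, the file can be syntax-checked on each node (an optional sanity check, not shown in the original run):

[root@node1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
[root@node2 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg

A valid file is reported as "Configuration file is valid".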

6. Configure Keepalived

[root@node1 haproxy]# cd /etc/keepalived/
[root@node1 keepalived]# pwd
/etc/keepalived
[root@node1 keepalived]# ll
total 8
-rw-r--r-- 1 root root 1172 Jun 10 15:28 keepalived.conf
-rw-r--r--. 1 root root 3562 Jun 2 14:50 keepalived.conf.default
[root@node1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existance
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface eth0 # interface to monitor
state MASTER
virtual_router_id 51 # Assign one ID for this route
priority 101 # 101 on master, 100 on backup

authentication {
auth_type PASS
auth_pass VI_1
}
virtual_ipaddress {
192.168.146.220 # the virtual IP
}
track_script {
chk_haproxy
}
}

vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 58
priority 92
advert_int 1
authentication {
auth_type PASS
auth_pass VI_2
}
virtual_ipaddress {
192.168.146.223
}
}

[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existance
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface eth0 # interface to monitor
state BACKUP
virtual_router_id 51 # Assign one ID for this route
priority 100 # 101 on master, 100 on backup
authentication {
auth_type PASS
auth_pass VI_1
}
virtual_ipaddress {
192.168.146.220 # the virtual IP
}

track_script {
chk_haproxy
}
}

vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 58
priority 92
advert_int 1
authentication {
auth_type PASS
auth_pass VI_2
}
virtual_ipaddress {
192.168.146.223
}
track_script {
chk_haproxy
}
}
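
The chk_haproxy script only checks that an haproxy process exists; it can be exercised by hand on either node:

[root@node1 ~]# killall -0 haproxy; echo $?

An exit status of 0 means haproxy is running, so Keepalived adds the weight of 2 to that node's priority (101+2 on the MASTER, 100+2 on the BACKUP). If haproxy dies on the MASTER, its effective priority falls below the BACKUP's and VI_1 fails over.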

7. Start the haproxy and keepalived services on both nodes

service haproxy start

service keepalived start

[root@node1 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:19:b9:f1:d2:25 brd ff:ff:ff:ff:ff:ff
inet 192.168.146.221/24 brd 192.168.146.255 scope global eth0
inet 192.168.146.220/32 scope global eth0
3: eth1: mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:19:b9:f1:d2:27 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.221/8 brd 10.255.255.255 scope global eth1

[root@node2 keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq state UP qlen 1000
link/ether 84:2b:2b:19:f5:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.146.222/24 brd 192.168.146.255 scope global eth0
inet 192.168.146.223/32 scope global eth0
inet6 fe80::862b:2bff:fe19:f5ca/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc mq state UP qlen 1000
link/ether 84:2b:2b:19:f5:cb brd ff:ff:ff:ff:ff:ff
inet 10.0.0.222/8 brd 10.255.255.255 scope global eth1
inet6 fe80::862b:2bff:fe19:f5cb/64 scope link
valid_lft forever preferred_lft forever
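
At this point node1 holds the VIP 192.168.146.220 and node2 holds 192.168.146.223. A quick failover test (not part of the original run log): stop haproxy on node1, wait a few seconds, and the VIP should show up on node2's eth0; starting haproxy again lets node1 preempt and take the VIP back.

[root@node1 ~]# service haproxy stop
[root@node2 ~]# ip a show eth0 | grep 192.168.146.220
[root@node1 ~]# service haproxy start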

8. Install nginx on both nodes and test HA and load balancing
The nginx configuration is identical on both nodes; place a test file in the document root.
[root@node1 conf.d]# cat default.conf
server {
listen 8000;
server_name localhost;

#charset koi8-r;
#access_log  /var/log/nginx/log/host.access.log  main;

location / {
    root   /usr/share/nginx/html;
    index  index.php index.html index.htm;
}

#error_page  404              /404.html;

# redirect server error pages to the static page /50x.html
#
error_page   500 502 503 504  /50x.html;
location = /50x.html {
    root   /usr/share/nginx/html;
}

# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
#    proxy_pass   http://127.0.0.1;
#}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
    root           /usr/share/nginx/html;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    #fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
#    deny  all;
#}

}

Web:
http://192.168.146.220/

HAProxy statistics:
http://192.168.146.220:8888/
The login credentials are the ones set with "stats auth" in haproxy.cfg.
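
To watch the round-robin distribution, one option (hypothetical test pages, not from the original test) is to put a page identifying each node in /usr/share/nginx/html and request the VIP a few times. The backend inserts a persistence cookie, so a browser sticks to one server, while cookie-less curl requests keep rotating between node1 and node2:

[root@node1 ~]# echo node1 > /usr/share/nginx/html/index.html
[root@node2 ~]# echo node2 > /usr/share/nginx/html/index.html
[root@node1 ~]# for i in 1 2 3 4; do curl -s http://192.168.146.220/; done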

9. Routine maintenance
9.1 Keepalived

  1. Service management
    [root@node1 ~]# service keepalived {start|stop|status|restart|condrestart|try-restart|reload|force-reload}
    or
    [root@node1 ~]# /etc/init.d/keepalived {start|stop|status|restart|condrestart|try-restart|reload|force-reload}

  2. VIP binding
    [root@node1 ~]# ip a
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    2: eth0: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:19:b9:f1:d2:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.146.221/24 brd 192.168.146.255 scope global eth0
    inet 192.168.146.220/32 scope global eth0

  3. Keepalived run log
    [root@node1 ~]# cat /var/log/messages|grep -i Keepalived
    Aug 27 16:19:17 node1 Keepalived_vrrp[1620]: VRRP_Instance(VI_1) sending 0 priority
    Aug 27 16:19:17 node1 Keepalived_vrrp[1620]: VRRP_Instance(VI_1) removing protocol VIPs.
    Aug 27 16:19:17 node1 Keepalived[1617]: Stopping Keepalived v1.2.13 (03/19,2015)
    Aug 27 16:19:18 node1 Keepalived[27675]: Starting Keepalived v1.2.13 (03/19,2015)
    Aug 27 16:19:18 node1 Keepalived[27676]: Starting Healthcheck child process, pid=27678
    Aug 27 16:19:18 node1 Keepalived[27676]: Starting VRRP child process, pid=27679
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Netlink reflector reports IP 192.168.146.221 added
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Netlink reflector reports IP 10.0.0.221 added
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Registering Kernel netlink reflector
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Registering Kernel netlink command channel
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Registering gratuitous ARP shared channel
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Opening file '/etc/keepalived/keepalived.conf'.
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Configuration is using : 69407 Bytes
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: Using LinkWatch kernel netlink reflector...
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_2) Entering BACKUP STATE
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: VRRP_Script(chk_haproxy) succeeded
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) Transition to MASTER STATE
    Aug 27 16:19:18 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
    Aug 27 16:19:19 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) Entering MASTER STATE
    Aug 27 16:19:19 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) setting protocol VIPs.
    Aug 27 16:19:19 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.146.220
    Aug 27 16:19:24 node1 Keepalived_vrrp[27679]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.146.220

9.2 Haproxy

  1. Service management

service haproxy {start|stop|status|restart|try-restart|reload|force-reload}

or

/etc/init.d/haproxy {start|stop|status|restart|try-restart|reload|force-reload}

  2. Run log
    [root@node1 ~]# tail -f /var/log/haproxy.log
    Netlink reflector reports IP 192.168.146.221 added
    Netlink reflector reports IP 10.0.0.221 added
    Registering Kernel netlink reflector
    Registering Kernel netlink command channel
    Opening file '/etc/keepalived/keepalived.conf'.
    Configuration is using : 8029 Bytes
    Using LinkWatch kernel netlink reflector...
    Netlink reflector reports IP 192.168.146.220 added

[root@node1 ~]# tail -f /var/log/haproxy-status.log
Server webservers/node1 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 3ms. 5 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Server webservers/node1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 4 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server webservers/node4 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Proxy main started.
Proxy webservers started.
Proxy stats started.
Server webservers/node1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 5 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server webservers/node2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 4 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server webservers/node4 is DOWN, reason: Layer7 timeout, check duration: 10002ms. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server webservers/node4 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 22ms. 4 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
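
Because haproxy.cfg enables the UNIX stats socket (/var/lib/haproxy/stats), the backend and server state can also be queried from the command line. A small sketch, assuming the socat package is installed (it is not part of the steps above):

[root@node1 ~]# yum -y install socat
[root@node1 ~]# echo "show info" | socat stdio /var/lib/haproxy/stats
[root@node1 ~]# echo "show stat" | socat stdio /var/lib/haproxy/stats

"show info" prints process-level counters and "show stat" prints one CSV row per frontend, backend and server.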

10、Q&A

  1. Errors in /var/log/messages:
    Jun 2 15:02:39 node1 modprobe: FATAL: Error inserting ip_vs (/lib/modules/2.6.32-504.16.2.el6.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko): Unknown symbol in module, or unknown parameter (see dmesg)
    Jun 2 15:02:39 node1 Keepalived_healthcheckers[12807]: IPVS: Can't initialize ipvs: Protocol not available
    Jun 2 15:02:39 node1 kernel: ip_vs: Unknown symbol icmpv6_send
    Jun 2 15:02:39 node1 Keepalived[14868]: Healthcheck child process(12807) died: Respawning
    Jun 2 15:02:39 node1 Keepalived[14868]: Starting Healthcheck child process, pid=12810
    Jun 2 15:02:39 node1 kernel: ip_vs: Unknown symbol ip6_local_out
    Jun 2 15:02:39 node1 kernel: ip_vs: Unknown symbol ip6_route_me_harder
    Jun 2 15:02:39 node1 kernel: ip_vs: Unknown symbol ipv6_dev_get_saddr
    Jun 2 15:02:39 node1 kernel: ip_vs: Unknown symbol ip6_route_output

Check whether IPv6 has been disabled:
[root@node2 keepalived]# cat /etc/modprobe.d/ipv6.conf
install ipv6 /bin/true
[root@node2 keepalived]# vi /etc/modprobe.d/ipv6.conf

install ipv6 /bin/true
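
The ip_vs module links against IPv6 symbols (icmpv6_send, ip6_route_output, ...), so blocking the ipv6 module with "install ipv6 /bin/true" makes ip_vs fail to load and keepalived's healthcheck child keeps respawning. One common fix (an inference from these messages, not stated in the original) is to comment out that line, or use "options ipv6 disable=1" instead if IPv6 must stay off, and then load the module by hand to confirm:

[root@node2 ~]# modprobe ip_vs
[root@node2 ~]# lsmod | grep ip_vs
[root@node2 ~]# service keepalived restart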

11. References

  1. http://keepalived.org/
  2. http://www.haproxy.org/
  3. http://marc.info/?l=haproxy
  4. http://demo.haproxy.org/
  5. https://cbonte.github.io/haproxy-dconv/configuration-1.5.html
