That covers the background theory on haproxy. I will not go through the parameters in its configuration file here; their meanings will be explained later, as they come up in the demonstration.
In this setup, keepalived provides the virtual-router function and runs on two hosts; the nginx reverse proxy runs on those same hosts alongside keepalived. haproxy can also act as a reverse proxy; its implementation is covered in the second half of this post, so the first part focuses on nginx. Apache provides the HTTP service on the back end; the back end could just as well be a LAMP-based server. As the chart in my previous post showed, Apache still holds the leading market share and will not be overtaken any time soon, so LAMP-based HTTP services remain very common in the enterprise. I will not demonstrate combining the three with a full LAMP stack here; if you are interested, you can build a LAMP server following the method in my other post, then add the nginx reverse proxy and the keepalived virtual-router function on top, so that all three work together.
Now let's start the demonstration.
rpm -q httpd                    # check whether the package is installed; if not, configure a yum repository and install it
yum -y install httpd            # install the package
vim /var/www/html/index.html    # edit this file and add the following two lines to provide a home page
<h1>http://lq2419.blog.51cto.com/</h1>
<h2>Apache node3, IP: 172.16.32.32</h2>    # on node4, change this line to <h2>Apache node4, IP: 172.16.32.33</h2>
service httpd start             # start the httpd service
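Before moving on, it is worth confirming that each backend answers on its own. A minimal check from any host that can reach them (the addresses are the node3/node4 IPs used throughout this post; curl is assumed to be installed):

curl http://172.16.32.32/    # should return the node3 page
curl http://172.16.32.33/    # should return the node4 page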
Below is the resulting access log; only one host's log is shown here.
tail /var/log/httpd/access_log    # view the log on node3; the client IP shown is the physical machine's, because access was made directly from the physical machine
172.16.32.0 - - [25/May/2013:20:55:46 +0800] "GET / HTTP/1.1" 200 79 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:46 +0800] "GET /favicon.ico HTTP/1.1" 404 287 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:46 +0800] "GET / HTTP/1.1" 200 79 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:46 +0800] "GET /favicon.ico HTTP/1.1" 404 287 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET / HTTP/1.1" 200 79 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET /favicon.ico HTTP/1.1" 404 287 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET / HTTP/1.1" 200 79 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET /favicon.ico HTTP/1.1" 404 287 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET / HTTP/1.1" 200 79 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
172.16.32.0 - - [25/May/2013:20:55:47 +0800] "GET /favicon.ico HTTP/1.1" 404 287 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
vim /etc/httpd/conf/httpd.conf    # edit this file
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    # find this directive and change the first %h to %{X-Real-IP}i, so that when requests arrive through the proxy the log records the client IP rather than the proxy IP
service httpd restart             # restart the httpd service
The change above must be made on both backend servers. To see the difference, you can also leave the log format untouched at first, start the proxy, access the proxy host from the physical machine, and then check the backend access log to see whether the recorded client IP is the proxy's or the physical machine's, before making the change. I will not demonstrate that here.
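If you do want to run that check yourself, a rough sketch would be the following, once the proxy configured below is running (node1's address 172.16.32.30 is used here as an example; any address the proxy listens on works):

curl http://172.16.32.30/          # from the physical machine, send one request through the proxy
tail -n 2 /var/log/httpd/access_log    # on node3 or node4, see which client IP was logged for that request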
The two backend HTTP servers are now configured. Next, move to the two front-end proxy hosts, node1 and node2. nginx is installed from source here, using nginx-1.4.1.tar.gz; before compiling, resolve the dependency by installing the pcre-devel package. The nginx source can be downloaded from http://nginx.org/en/download.html, where you can pick whichever version you need.
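Fetching the tarball might look like this (the direct download path is assumed to follow nginx.org's usual naming for the version mentioned above):

cd /usr/local/src    # any working directory will do
wget http://nginx.org/download/nginx-1.4.1.tar.gz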
yum -y install pcre-devel
tar xf nginx-1.4.1.tar.gz
cd nginx-1.4.1
./configure \                      # run configure; the meaning of each option was covered in an earlier post, so I won't repeat it here
  --prefix=/usr \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \                   # the nginx user and group must exist; create them with groupadd/useradd if they do not
  --group=nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/tmp/nginx/client/ \
  --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --http-scgi-temp-path=/var/tmp/nginx/scgi \
  --with-pcre \
  --with-file-aio
make && make install               # build and install

Provide a SysV-style service script:

vim /etc/init.d/nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Give it execute permission and enable it:

chmod +x /etc/init.d/nginx
service httpd stop        # if the http service is running on this host, stop it first
chkconfig httpd off       # and disable it at boot
chkconfig --add nginx     # add nginx to the service list
chkconfig nginx on        # enable it at boot
chkconfig --list nginx
service nginx start       # start the service
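Once the service is up, a quick sanity check that the binary, the configuration and the listener are all in order might be (all standard commands on RHEL 5):

nginx -V                              # confirm the compiled-in options
nginx -t -c /etc/nginx/nginx.conf     # syntax-check the configuration
netstat -tnlp | grep :80              # confirm nginx is listening on port 80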
vim /etc/nginx/nginx.conf    # edit the configuration file and add the proxy-related settings

upstream webserver {         # add this new context inside the http block
    server 172.16.32.32 weight=1 max_fails=2 fail_timeout=2;   # backend server: weight 1, at most 2 failed attempts, 2-second timeout per failed attempt
    server 172.16.32.33 weight=1 max_fails=2 fail_timeout=2;
    server 127.0.0.1 backup;   # used only when both servers above are down; as long as one of them still responds, this host's own service is not used
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {             # delete all the default settings under the root location and add the following two lines
        proxy_pass http://webserver;                 # forward every request for / to the webserver upstream group
        proxy_set_header X-Real-IP $remote_addr;     # set a request header so the backend can log the real client IP rather than the proxy IP; the backend log format must be adjusted accordingly, as shown earlier
    }
}

Likewise, send the finished configuration file to the other proxy host:

scp /etc/nginx/nginx.conf node2:/etc/nginx    # copy the file to the other proxy host; if host-name resolution is not set up, use the IP address instead
service nginx reload                          # reload the nginx service
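To see the round-robin behaviour from the command line rather than a browser, a simple loop against the proxy (node1's address, 172.16.32.30, is used here) should alternate between the two backend pages:

for i in 1 2 3 4; do curl -s http://172.16.32.30/; done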
Additional notes:
Commonly used directives in the upstream module:
ip_hash: distributes requests based on the client IP address, so that requests from the same client are always forwarded to the same upstream server;
keepalive: the number of connections to upstream servers cached by each worker process;
least_conn: the least-connections scheduling algorithm;
server: defines an upstream server address, optionally followed by parameters such as:
    weight: the server weight;
    max_fails: the maximum number of failed connection attempts; the timeout for a failed attempt is set by fail_timeout;
    fail_timeout: how long to wait for the target server to respond;
    backup: a fallback server, used only when all the other servers have failed;
    down: manually marks the server so that it no longer handles any requests.
The upstream module provides three main load-balancing algorithms: round-robin, ip_hash and least_conn. Round-robin is the default (a small illustration of switching the algorithm follows below).
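As a rough illustration only (not part of the configuration used in this experiment), switching the upstream defined above to source-IP-based scheduling would just mean adding the ip_hash directive; note that a backup server cannot be combined with ip_hash, so that line is dropped here:

upstream webserver {
    ip_hash;                                    # requests from one client always hit the same backend
    server 172.16.32.32 weight=1 max_fails=2 fail_timeout=2;
    server 172.16.32.33 weight=1 max_fails=2 fail_timeout=2;
}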
vim /etc/keepalived/keepalived.conf    # edit the configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_nginx {                # check whether the nginx process is still running
    script "killall -0 nginx"
    interval 2
    weight -2
    fall 2
    rise 1
}

vrrp_script chk_schedown {             # used to switch the master/backup roles by hand
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 2
   weight -2
}

vrrp_instance VI_1 {
    state MASTER                       # MASTER here; set this to BACKUP on node2
    interface eth0
    virtual_router_id 232              # virtual router group ID
    priority 101                       # priority; set this to 100 on node2, since it is the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass langdu
    }
    virtual_ipaddress {
        172.16.32.5/16 dev eth0 label eth0:0
    }
    track_script {
        chk_nginx
        chk_schedown
    }
    notify_master "/etc/keepalived/notify.sh master"    # pass a different argument to the same script depending on the resulting state
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

The notification script looks like this:

vim /etc/keepalived/notify.sh          # state-notification script; add the same script on node2 as well
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=172.16.32.5
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        /etc/rc.d/init.d/haproxy start
        exit 0
        ;;
    backup)
        notify backup
        /etc/rc.d/init.d/haproxy restart
        exit 0
        ;;
    fault)
        notify fault
        exit 0
        ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
        ;;
esac

service keepalived start               # start the keepalived service

First look at the log on the master node:

tail /var/log/messages                 # log messages on node1
May 25 19:37:03 node1 Keepalived_vrrp[3822]: VRRP_Script(chk_schedown) succeeded
May 25 19:37:04 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) forcing a new MASTER election
May 25 19:37:05 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 25 19:37:06 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Entering MASTER STATE        # entering the MASTER state
May 25 19:37:06 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) setting protocol VIPs.
May 25 19:37:06 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5
May 25 19:37:06 node1 Keepalived_vrrp[3822]: Netlink reflector reports IP 172.16.32.5 added   # the virtual IP is added
May 25 19:37:06 node1 Keepalived_healthcheckers[3821]: Netlink reflector reports IP 172.16.32.5 added
May 25 19:37:11 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5

ifconfig                               # check the interface configuration on node1
eth0      Link encap:Ethernet  HWaddr 00:0C:29:9F:2F:AF
          inet addr:172.16.32.30  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:655564 errors:7 dropped:0 overruns:0 frame:0
          TX packets:66292 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:76104124 (72.5 MiB)  TX bytes:9021082 (8.6 MiB)
          Interrupt:59 Base address:0x2000
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:9F:2F:AF      # carries the virtual IP
          inet addr:172.16.32.5  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:59 Base address:0x2000
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2472 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2472 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:227776 (222.4 KiB)  TX bytes:227776 (222.4 KiB)
touch /etc/keepalived/down    # create this file to trigger a manual failover of the virtual IP
tail /var/log/messages        # check node1's log
May 25 19:37:06 node1 Keepalived_vrrp[3822]: Netlink reflector reports IP 172.16.32.5 added
May 25 19:37:06 node1 Keepalived_healthcheckers[3821]: Netlink reflector reports IP 172.16.32.5 added
May 25 19:37:11 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5
May 25 20:41:02 node1 Keepalived_vrrp[3822]: VRRP_Script(chk_schedown) failed
May 25 20:41:03 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Received higher prio advert
May 25 20:41:03 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) Entering BACKUP STATE          # entering the BACKUP state
May 25 20:41:03 node1 Keepalived_vrrp[3822]: VRRP_Instance(VI_1) removing protocol VIPs.
May 25 20:41:03 node1 Keepalived_vrrp[3822]: Netlink reflector reports IP 172.16.32.5 removed   # the IP has floated away
May 25 20:41:03 node1 Keepalived_healthcheckers[3821]: Netlink reflector reports IP 172.16.32.5 removed

Now check the log on node2 to see whether it has become the master.

tail /var/log/messages
May 25 19:37:15 node1 Keepalived_vrrp[17601]: Netlink reflector reports IP 172.16.32.5 removed
May 25 20:41:15 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) forcing a new MASTER election
May 25 20:41:16 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 25 20:41:17 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) Entering MASTER STATE         # entering the MASTER state
May 25 20:41:17 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) setting protocol VIPs.
May 25 20:41:17 node1 Keepalived_healthcheckers[17600]: Netlink reflector reports IP 172.16.32.5 added   # the virtual IP is added
May 25 20:41:17 node1 avahi-daemon[3375]: Registering new address record for 172.16.32.5 on eth0.
May 25 20:41:17 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5
May 25 20:41:17 node1 Keepalived_vrrp[17601]: Netlink reflector reports IP 172.16.32.5 added
May 25 20:41:22 node1 Keepalived_vrrp[17601]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5
I will not demonstrate failing the IP back; simply delete the file you created and watch the log again. From the logs on the two hosts above we can see that the IP address floats as expected, and refreshing the page on the physical machine still displays correctly. At this point, the keepalived + nginx reverse proxy + Apache web site is working.
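For completeness, failing back is just the reverse of the step above; a minimal sketch on node1:

rm -f /etc/keepalived/down    # remove the flag file, restoring node1's priority (101 > 100)
tail -f /var/log/messages     # node1 should transition back to MASTER and re-acquire 172.16.32.5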
The same hosts are used for this part. One inconvenience is that the RHEL 5.8 yum repositories do not ship a haproxy package, so here I download the source and compile it myself. This is my first time building haproxy from source (previously I installed it from an rpm on RHEL 6.4), so various problems are bound to come up along the way; where necessary I will describe them and how I solved them, so that you know what to do if you hit the same issues. Enough talk, let's install haproxy.
service keepalived stop    # stop the keepalived service
service nginx stop         # stop the nginx service
chkconfig nginx off        # disable it at boot

Then download haproxy; version haproxy-1.4.22 is used here, from http://haproxy.1wt.eu/#down

tar xf haproxy-1.4.22.tar.gz
cd haproxy-1.4.22
uname -r                   # check your kernel version; mine is linux-2.6.18-308.el5, hence TARGET=linux26 below
make TARGET=linux26 PREFIX=/usr/local/
make install PREFIX=/usr/local/
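A quick way to confirm the build succeeded before writing any configuration (with PREFIX=/usr/local the binary lands in /usr/local/sbin):

/usr/local/sbin/haproxy -v     # should print the 1.4.22 version banner
/usr/local/sbin/haproxy -vv    # build options, including the TARGET used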
mkdir /etc/haproxy                       # create this directory to hold the configuration file
cp examples/haproxy.cfg /etc/haproxy/    # copy the example file as our configuration; much of it will be changed later

Provide a SysV-style service script:

vim /etc/init.d/haproxy                  # create the service script
#!/bin/sh
#
# haproxy
#
# chkconfig:   - 85 15
# description:  HAProxy is a free, very fast and reliable solution \
#               offering high availability, load balancing, and \
#               proxying for TCP and HTTP-based applications
# processname: haproxy
# config:      /etc/haproxy/haproxy.cfg
# pidfile:     /var/run/haproxy.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/usr/local/sbin/haproxy"    # this binary exists once make install has run, so the haproxy command can be invoked directly
prog=$(basename $exec)            # put simply, this just evaluates to "haproxy"; you will notice this the moment the script reports an error at start-up

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/haproxy    # lock file

check() {
    $exec -c -V -f /etc/$prog/$prog.cfg    # this is why /etc/haproxy was created above; if you did not create that directory, adjust the paths in this script accordingly
}

start() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi

    echo -n $"Starting $prog: "
    # start it up here, usually something like "daemon $exec"
    daemon $exec -D -f /etc/$prog/$prog.cfg -p /var/run/$prog.pid
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    killproc $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    stop
    start
}

reload() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    echo -n $"Reloading $prog: "
    $exec -D -f /etc/$prog/$prog.cfg -p /var/run/$prog.pid -sf $(cat /var/run/$prog.pid)
    retval=$?
    echo
    return $retval
}

force_reload() {
    restart
}

fdr_status() {
    status $prog
}

case "$1" in
    start|stop|restart|reload)
        $1
        ;;
    force-reload)
        force_reload
        ;;
    check)
        check
        ;;
    status)
        fdr_status
        ;;
    condrestart|try-restart)
        [ ! -f $lockfile ] || restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|reload|force-reload}"
        exit 2
esac
This service script was adapted, with minor changes, from the haproxy service script shipped with RHEL 6.4. If you would rather not compile haproxy from source, you can also run this experiment on RHEL 6.4, but one thing needs to be changed there; I will come back to it shortly.
chmod +x /etc/init.d/haproxy
chkconfig --add haproxy
chkconfig haproxy on
The directives in haproxy's configuration file fall into two classes: global settings (the global section, which controls the haproxy process itself) and proxy settings (the defaults, listen, frontend and backend sections, which control how traffic is proxied). A few points worth noting:

All proxy names may only use upper-case letters, lower-case letters, digits, - (dash), _ (underscore), . (dot) and : (colon).

rdp-cookie(name): a balance algorithm that hashes the RDP cookie called name, so that requests belonging to the same Remote Desktop session are always sent to the same server.

errorloc 503 /etc/haproxy/errorpages/sorry.htm    # example: errorloc <code> <location> makes haproxy redirect the client to the given location instead of returning its built-in error page for that status code
vim /etc/haproxy/haproxy.cfg    # edit the configuration file; delete what is not needed and add the following

global                                      # global section
    log         127.0.0.1 local2            # enable logging; on RHEL 6.4 you also need to set SYSLOGD_OPTIONS="-c 2 -r" in /etc/sysconfig/rsyslog
    chroot      /usr/share/haproxy          # chroot directory; it must be created below, otherwise haproxy fails to start (tested)
    pidfile     /var/run/haproxy.pid        # pid file
    maxconn     2000                        # maximum number of concurrent connections; keep it modest if your server is not that powerful
    user        haproxy                     # this user and group must be created beforehand
    group       haproxy
    daemon                                  # run as a daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults                                    # defaults section
    mode                    http            # the protocol being proxied; haproxy can also proxy other services, such as MySQL (tcp mode)
    log                     global          # use the global log settings
    option                  httplog
    option                  dontlognull
    option http-server-close                # close the server-side connection after each response
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3               # number of retries
    timeout http-request    10s             # timeout for a complete HTTP request
    timeout queue           1m              # how long a request may wait in the queue
    timeout connect         10s             # connection timeout
    timeout client          1m              # client-side inactivity timeout
    timeout server          1m              # server-side inactivity timeout
    timeout http-keep-alive 10s             # keep-alive timeout
    timeout check           10s             # health-check timeout
    maxconn                 3000            # maximum number of concurrent connections

listen stats                                # name of this listen section
    mode http                               # protocol
    bind 0.0.0.0:8080                       # address and port to bind
    stats enable                            # enable the statistics report
    stats hide-version
    stats uri     /haproxyadmin-stats       # the URI of the stats page, reachable from a browser
    stats realm   Haproxy\ Statistics       # realm string used for authentication
    stats auth    admin:admin               # user name and password required to view the stats page
    stats admin if TRUE                     # enable the management interface once authentication succeeds

frontend http-in
    bind *:80
    mode http
    log global
    option httpclose
    option logasap
    option dontlognull
    capture request header Host len 20
    capture request header Referer len 60
    default_backend servers

frontend healthcheck                        # front end used for health checking
    bind :2000
    mode http
    option httpclose
    option forwardfor
    default_backend servers

backend servers
    balance roundrobin                      # round-robin scheduling: refreshing the browser alternates between the two backends' pages
    server web1 172.16.32.32:80 check maxconn 1000    # define the backend servers
    server web2 172.16.32.33:80 check maxconn 1500

Now create the directory and the user mentioned in the configuration file.

mkdir /usr/share/haproxy        # create the chroot directory
groupadd -r haproxy
useradd -r -g haproxy haproxy   # create the user and group

vim /etc/sysconfig/syslog       # edit the syslog configuration
SYSLOGD_OPTIONS="-m 2 -r"       # allow syslogd to receive remote/UDP messages, so haproxy's log entries are accepted
service syslog restart          # restart the service
service haproxy start           # start the haproxy service

Let's look at the log on node1 together.

tail /var/log/messages
May 27 01:17:23 node1 kernel: Kernel log daemon terminating.
May 27 01:17:25 node1 exiting on signal 15
May 27 01:17:25 node1 syslogd 1.4.1: restart (remote reception).    # syslogd restarted
May 27 01:17:25 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
May 27 01:17:32 localhost.localdomain haproxy[3574]: Proxy stats started.          # the stats section is up
May 27 01:17:32 localhost.localdomain haproxy[3574]: Proxy http-in started.        # the http-in front end is up
May 27 01:17:32 localhost.localdomain haproxy[3574]: Proxy healthcheck started.    # the healthcheck front end is up
May 27 01:17:32 localhost.localdomain haproxy[3574]: Proxy servers started.        # the servers back end is up
May 27 01:19:32 node1 -- MARK --
May 27 01:22:01 node1 -- MARK --
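With the service up, both the statistics page and the proxy itself can be checked from the command line; a small sketch, using node1's own address as configured in this setup:

curl -u admin:admin http://172.16.32.30:8080/haproxyadmin-stats    # the stats interface defined in the 'listen stats' section (HTTP basic auth)
for i in 1 2 3 4; do curl -s http://172.16.32.30/; done            # requests through the proxy should alternate between web1 and web2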
Everything above — compiling from source, providing the configuration file and the service script, creating the user and group, and creating the required directories — must also be done on the other proxy host. If you follow the steps above exactly, it should go smoothly: wherever I ran into a problem, I have already noted what needs to be created and how it should be configured before the service is started.
At this point our haproxy proxy is also working correctly. Now start the keepalived service again, so that keepalived, haproxy and Apache work together.
service keepalived start    # start the keepalived service

Now look at the log on node1 again.

tail /var/log/messages
May 27 01:25:52 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) removing protocol VIPs.
May 27 01:25:52 node1 Keepalived_healthcheckers[3617]: Netlink reflector reports IP 172.16.32.5 removed
May 27 01:25:52 node1 Keepalived_vrrp[3618]: Netlink reflector reports IP 172.16.32.5 removed
May 27 01:25:53 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) forcing a new MASTER election    # a new master election is forced
May 27 01:25:54 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 27 01:25:55 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) Entering MASTER STATE            # entering the MASTER state
May 27 01:25:55 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) setting protocol VIPs.
May 27 01:25:55 node1 Keepalived_healthcheckers[3617]: Netlink reflector reports IP 172.16.32.5 added    # the virtual IP is added
May 27 01:25:55 node1 Keepalived_vrrp[3618]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.32.5
May 27 01:25:55 node1 Keepalived_vrrp[3618]: Netlink reflector reports IP 172.16.32.5 added
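At this point the whole chain can be exercised through the virtual IP rather than a node address; requests should alternate between the node3 and node4 pages:

for i in 1 2 3 4; do curl -s http://172.16.32.5/; done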
The operations above need to be carried out on both front-end proxy hosts.
With that, our keepalived + reverse proxy + Apache web site is complete and ready for practical use. How about it — did yours work?