How To Set Up A Loadbalanced High-Availability Apache Cluster
Author: Falko Timme <ft [at] falkotimme [dot] com>
Last edited: April 26, 2006
Translator: Xu Dayu, version 1.0, August 31, 2007
(Translator's note: I have lost the address of the original article, for which I apologize to the author. If anyone knows where the English original lives, please leave a comment. Thank you!)
This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point Of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently.
The advantage of using a load balancer compared to using round robin DNS is that it takes care of the load on the web server nodes and tries to direct requests to the node with less load, and it also takes care of connections/sessions. Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions, and if you are in a session on Apache node 1, you would lose that session if suddenly node 2 served your requests. In addition to that, if one of the Apache nodes goes down, the load balancer realizes that and directs all incoming requests to the remaining node, which would not be possible with round robin DNS.
For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.
I will use the following setup here:
・ Apache node 1: webserver1.example.com (webserver1) - IP address: 192.168.0.101; Apache document root: /var/www
・ Apache node 2: webserver2.example.com (webserver2) - IP address: 192.168.0.102; Apache document root: /var/www
・ Load Balancer node 1: loadb1.example.com (loadb1) - IP address: 192.168.0.103
・ Load Balancer node 2: loadb2.example.com (loadb2) - IP address: 192.168.0.104
・ Virtual IP Address: 192.168.0.105 (used for incoming requests)
Have a look at the drawing on [url]http://www.linuxvirtualserver.org/docs/ha/ultramonkey.html[/url] to understand what this setup looks like.
In this tutorial I will use Debian Sarge for all four nodes. I assume that you have installed a basic Debian installation on all four nodes, and that you have installed Apache on webserver1 and webserver2, with /var/www being the document root of the main web site.
I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal, but this is the way I take. I do not issue any guarantee that this will work for you!
1 Enable IPVS On The Load Balancers
First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching.
loadb1/loadb2:
echo ip_vs_dh >> /etc/modules
echo ip_vs_ftp >> /etc/modules
echo ip_vs >> /etc/modules
echo ip_vs_lblc >> /etc/modules
echo ip_vs_lblcr >> /etc/modules
echo ip_vs_lc >> /etc/modules
echo ip_vs_nq >> /etc/modules
echo ip_vs_rr >> /etc/modules
echo ip_vs_sed >> /etc/modules
echo ip_vs_sh >> /etc/modules
echo ip_vs_wlc >> /etc/modules
echo ip_vs_wrr >> /etc/modules
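For reference, the twelve echo lines above can be collapsed into a loop. This is only a sketch: MODFILE points at a scratch file here (an assumption, so you can try it without touching a real system); on the load balancers it would be /etc/modules.

```shell
# Sketch: same effect as the twelve echo lines above, written as a loop.
# MODFILE is a scratch file for illustration; on the real load balancers
# it would be /etc/modules.
MODFILE=./modules.example
for mod in ip_vs_dh ip_vs_ftp ip_vs ip_vs_lblc ip_vs_lblcr ip_vs_lc \
           ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr; do
    echo "$mod"
done >> "$MODFILE"
```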
Then we do this:
loadb1/loadb2:
modprobe ip_vs_dh
modprobe ip_vs_ftp
modprobe ip_vs
modprobe ip_vs_lblc
modprobe ip_vs_lblcr
modprobe ip_vs_lc
modprobe ip_vs_nq
modprobe ip_vs_rr
modprobe ip_vs_sed
modprobe ip_vs_sh
modprobe ip_vs_wlc
modprobe ip_vs_wrr
If you get errors, then most probably your kernel wasn't compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
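A quick way to check whether IPVS is actually usable before going further is to look for it in procfs (a sketch; these are standard Linux locations, not something this tutorial configures):

```shell
# Sketch: IPVS shows up as the /proc/net/ip_vs table (once ip_vs is loaded)
# or as a module entry in /proc/modules.
if [ -e /proc/net/ip_vs ] || grep -q '^ip_vs' /proc/modules 2>/dev/null; then
    status=available
else
    status=missing
fi
echo "IPVS: $status"
```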
2 Install Ultra Monkey On The Load Balancers
Ultra Monkey is a project to create load balanced and highly available services on a local area network using Open Source components on the Linux operating system; the Ultra Monkey package provides heartbeat (used by the two load balancers to monitor each other and check if the other node is still alive) and ldirectord, the actual load balancer.
To install Ultra Monkey, we must edit /etc/apt/sources.list now and add these two lines (don't remove the other repositories):
loadb1/loadb2:
deb [url]http://www.ultramonkey.org/download/3/[/url] sarge main
deb-src [url]http://www.ultramonkey.org/download/3[/url] sarge main
Afterwards we do this:
loadb1/loadb2:
apt-get update
and install Ultra Monkey:
loadb1/loadb2:
apt-get install ultramonkey
If you see this warning:
libsensors3 not functional
It appears that your kernel is not compiled with sensors support. As a
result, libsensors3 will not be functional on your system.
If you want to enable it, have a look at "I2C Hardware Sensors Chip
support" in your kernel configuration.
you can ignore it.
During the Ultra Monkey installation you will be asked a few questions. Answer as follows:
Do you want to automatically load IPVS rules on boot?
<-- No
Select a daemon method.
<-- none
3 Enable Packet Forwarding On The Load Balancers
The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:
loadb1/loadb2:
# Enables packet forwarding
net.ipv4.ip_forward = 1
Then do this:
loadb1/loadb2:
sysctl -p
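Once the setting has been applied with sysctl -p, it can be confirmed directly from procfs (a sketch; the path is the standard Linux location for this setting):

```shell
# Sketch: confirm packet forwarding is on.
# On the load balancers this should print 1.
cat /proc/sys/net/ipv4/ip_forward
```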
4 Configure heartbeat And ldirectord
Now we have to create three configuration files for heartbeat. They must be identical on loadb1 and loadb2!
loadb1/loadb2:
vi /etc/ha.d/ha.cf

logfacility     local0
bcast   eth0            # Linux
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node    loadb1
node    loadb2
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Important: As nodenames we must use the output of uname -n.
loadb1/loadb2:
vi /etc/ha.d/haresources

loadb1 \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.105/24/eth0/192.168.0.255
The first word is the output of uname -n on loadb1, no matter if you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.0.105.
loadb1/loadb2:
vi /etc/ha.d/authkeys

auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
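If you want a random string rather than inventing one, here is one common way to generate a 32-character hex password (a sketch; any sufficiently random string works for authkeys):

```shell
# Sketch: derive a random 32-character hex password from /dev/urandom.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
echo "$key"
```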
/etc/ha.d/authkeys should be readable by root only, therefore we do this:
loadb1/loadb2:
chmod 600 /etc/ha.d/authkeys
ldirectord is the actual load balancer. We are going to configure our two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby that becomes active if the active one fails. To make it work, we must create the ldirectord configuration file /etc/ha.d/ldirectord.cf which again must be identical on loadb1 and loadb2.
loadb1/loadb2:
vi /etc/ha.d/ldirectord.cf
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual=192.168.0.105:80
        real=192.168.0.101:80 gate
        real=192.168.0.102:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="ldirector.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
        checktype=negotiate
In the virtual= line we put our virtual IP address (192.168.0.105 in this example), and in the real= lines we list the IP addresses of our Apache nodes (192.168.0.101 and 192.168.0.102 in this example). In the request= line we list the name of a file on webserver1 and webserver2 that ldirectord will request repeatedly to see if webserver1 and webserver2 are still alive. That file (that we are going to create later on) must contain the string listed in the receive= line.
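The checktype=negotiate logic amounts to: fetch the request= page from a real server and look for the receive= string. The sketch below simulates the HTTP fetch with a local file so the logic is easy to follow; on a live setup the fetch would be something like `curl -s http://192.168.0.101/ldirector.html` (curl is an assumption, not part of this tutorial's setup — ldirectord does all of this internally).

```shell
# Sketch of ldirectord's negotiate check: a node counts as alive only if
# the fetched page contains the receive= string.
echo "Test Page" > response.html      # stands in for the HTTP response body
if grep -q "Test Page" response.html; then
    echo "node alive"
else
    echo "node failed"
fi
```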
Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:
loadb1/loadb2:
update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
update-rc.d -f ldirectord remove
Finally we start heartbeat (and with it ldirectord):
loadb1/loadb2:
/etc/init.d/ldirectord stop
/etc/init.d/heartbeat start
5 Test The Load Balancers
Let's check if both load balancers work as expected:
loadb1/loadb2:
ip addr sh eth0
The active load balancer should list the virtual IP address (192.168.0.105):
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
The hot-standby should show this:
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
loadb1/loadb2:
ldirectord ldirectord.cf status
Output on the active load balancer:
ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1455
Output on the hot-standby:
ldirectord is stopped for /etc/ha.d/ldirectord.cf
loadb1/loadb2:
ipvsadm -L -n
Output on the active load balancer:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.105:80 rr
  -> 192.168.0.101:80             Route   0      0          0
  -> 192.168.0.102:80             Route   0      0          0
  -> 127.0.0.1:80                 Local   1      0          0
Output on the hot-standby:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
loadb1/loadb2:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
Output on the active load balancer:
master running (ipvs_syncmaster pid: 1591)
Output on the hot-standby:
master stopped
If your tests went fine, you can now go on and configure the two Apache nodes.
6 Configure The Two Apache Nodes
Finally we must configure our Apache cluster nodes webserver1.example.com and webserver2.example.com to accept requests on the virtual IP address 192.168.0.105.
webserver1/webserver2:
Add the following to /etc/sysctl.conf:
webserver1/webserver2:
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1

# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1

# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2

# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request. If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2

# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
Then run this:
webserver1/webserver2:
sysctl -p
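After applying the settings with sysctl -p, they can be read back from procfs (a sketch; these are the standard Linux locations for the arp_ignore and arp_announce settings):

```shell
# Sketch: read back the ARP settings. On webserver1/webserver2,
# arp_ignore should read 1 and arp_announce should read 2.
for key in arp_ignore arp_announce; do
    f=/proc/sys/net/ipv4/conf/all/$key
    if [ -r "$f" ]; then
        echo "$key = $(cat "$f")"
    else
        echo "$key: not available on this system"
    fi
done
```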
Add this section for the virtual IP address to /etc/network/interfaces:
webserver1/webserver2:
vi /etc/network/interfaces
auto lo:0
iface lo:0 inet static
  address 192.168.0.105
  netmask 255.255.255.255
  pre-up sysctl -p > /dev/null
Then run this:
webserver1/webserver2:
ifup lo:0
Finally we must create the file ldirector.html. This file is requested by the two load balancer nodes repeatedly so that they can see if the two Apache nodes are still running. I assume that the document root of the main Apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:
webserver1/webserver2:
vi /var/www/ldirector.html

Test Page
You can now access the web site that is hosted by the two Apache nodes by typing [url]http://192.168.0.105[/url] in your browser.
Now stop the Apache on either webserver1 or webserver2. You should then still see the web site on [url]http://192.168.0.105[/url] because the load balancer directs requests to the working Apache node. Of course, if you stop both Apaches, then your request will fail.
Now let's assume that loadb1 is our active load balancer, and loadb2 is the hot-standby. Now stop heartbeat on loadb1:
loadb1:
/etc/init.d/heartbeat stop
Wait a few seconds, and then try [url]http://192.168.0.105[/url] again in your browser. You should still see your web site because loadb2 has taken over the active role now.
Now start heartbeat again on loadb1:
loadb1:
/etc/init.d/heartbeat start
loadb2 should still have the active role. Do the tests from chapter 5 again on loadb1 and loadb2, and you should see the inverse results as before.
If you have also passed these tests, then your loadbalanced Apache cluster is working as expected. Have fun!
This tutorial shows how to loadbalance two Apache nodes. It does not show how to keep the files in the Apache document root in sync or how to create a storage solution like an NFS server that both Apache nodes can use, nor does it provide a solution how to manage your MySQL database(s). You can find solutions for these issues here:
・
Mirror Your Web Site With rsync
・
Setting Up A Highly Available NFS Server
・
How To Set Up A Load-Balanced MySQL Cluster
・
How To Set Up Database Replication In MySQL
・
heartbeat / The High-Availability Linux Project: [url]http://linux-ha.org[/url]
・
The Linux Virtual Server Project: [url]http://www.linuxvirtualserver.org[/url]
・
Ultra Monkey: [url]http://www.ultramonkey.org[/url]