What is a high-availability cluster?
A high-availability cluster (High Availability Cluster, abbreviated HA Cluster) is, simply put, a group of computers that together provide a set of network resources to users as a single system. Each individual computer system in the group is a node of the cluster.
A high-availability cluster keeps a service continuously available: only one node provides the service at a time, while the others act as backups.
This article covers building a high-availability cluster with corosync/OpenAIS and pacemaker.
About Corosync:
Corosync lets you define the message transport, protocol and related settings through a simple configuration file. It is a fairly young project, released in 2008, but it is not entirely new code: back in 2002 the OpenAIS project, having grown too large, was split into two subprojects, and the part implementing HA heartbeat message transport became Corosync; roughly 60% of its code comes from OpenAIS. Corosync on its own provides a complete set of HA messaging functions, but richer and more complex features still require OpenAIS. Corosync is the direction development is heading, and new projects generally adopt it. For administration, hb_gui provides a good graphical HA management interface; the RHCS suite's luci+ricci is another graphical option.
This article walks through building the high-availability cluster with corosync/OpenAIS and pacemaker. The steps are as follows.
Prerequisites:
1) This configuration uses two test nodes, node1.rain.com and node2.rain.com, with IP addresses 172.16.5.10 and 172.16.5.11 respectively;
2) The cluster service is apache's httpd;
3) The address providing the web service (the VIP) is 172.16.5.1;
4) The operating system is rhel5.4
1. Preparation
To configure a Linux host as an HA node, the following preparation is usually needed:
1) Hostname resolution must work for all nodes, and each node's hostname must match the output of "uname -n". Make sure /etc/hosts on both nodes contains the following:
172.16.5.10 node1.rain.com node11
172.16.5.11 node2.rain.com node22
That is, host 172.16.5.10 is named node1.rain.com and 172.16.5.11 is named node2.rain.com (node11 and node22 are short aliases).
Verify the hostname on 172.16.5.10:
- [root@node1 ~]# hostname
- node1.rain.com
- [root@node1 ~]# uname -n
- node1.rain.com
Verify the hostname on 172.16.5.11:
- [root@node2 ~]# hostname
- node2.rain.com
- [root@node2 ~]# uname -n
- node2.rain.com
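If a node's hostname does not match `uname -n`, it can be set both for the running system and persistently. The following is a sketch for node1 on a stock rhel5 install (adjust the name for node2); it assumes the standard /etc/sysconfig/network layout:

```shell
# Set the running hostname and make it survive a reboot (run as root on node1)
hostname node1.rain.com
sed -i 's/^HOSTNAME=.*/HOSTNAME=node1.rain.com/' /etc/sysconfig/network
```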
2) Set up key-based ssh authentication between the two nodes, which can be done with commands like the following:
node1
- [root@node1 ~]# ssh-keygen -t rsa
- Generating public/private rsa key pair.
- Enter file in which to save the key (/root/.ssh/id_rsa):
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /root/.ssh/id_rsa.
- Your public key has been saved in /root/.ssh/id_rsa.pub.
- The key fingerprint is:
- 3c:a7:2e:c5:2b:fd:1d:5b:fb:99:cf:98:ba:5c:d4:9d root@node1.rain.com
- [root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node22
- The authenticity of host 'node22 (172.16.5.11)' can't be established.
- RSA key fingerprint is 3d:2d:46:1f:0b:9a:77:6f:68:36:f4:64:a4:68:51:81.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'node22,172.16.5.11' (RSA) to the list of known hosts.
- root@node22's password:
- Now try logging into the machine, with "ssh 'root@node22'", and check in:
- .ssh/authorized_keys
- to make sure we haven't added extra keys that you weren't expecting.
node2
- [root@node2 ~]# ssh-keygen -t rsa
- Generating public/private rsa key pair.
- Enter file in which to save the key (/root/.ssh/id_rsa):
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /root/.ssh/id_rsa.
- Your public key has been saved in /root/.ssh/id_rsa.pub.
- The key fingerprint is:
- fd:96:eb:cc:aa:0e:97:08:fc:5d:9f:6a:c6:08:e0:30 root@node2.rain.com
- [root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node11
- The authenticity of host 'node11 (172.16.5.10)' can't be established.
- RSA key fingerprint is 3d:2d:46:1f:0b:9a:77:6f:68:36:f4:64:a4:68:51:81.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'node11,172.16.5.10' (RSA) to the list of known hosts.
- root@node11's password:
- Now try logging into the machine, with "ssh 'root@node11'", and check in:
- .ssh/authorized_keys
- to make sure we haven't added extra keys that you weren't expecting.
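Before moving on, it is worth confirming that key-based login works in both directions. A quick hedged check using ssh's BatchMode, so it fails instead of prompting for a password:

```shell
# Each command should print the remote hostname without a password prompt
ssh -o BatchMode=yes node22 'uname -n'   # run on node1
ssh -o BatchMode=yes node11 'uname -n'   # run on node2
```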
The following packages (and their dependencies) are required on both nodes; download them into a directory on each node:
- cluster-glue-1.0.6-1.6.el5.i386.rpm
- cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
- corosync-1.2.7-1.1.el5.i386.rpm
- corosynclib-1.2.7-1.1.el5.i386.rpm
- heartbeat-3.0.3-2.3.el5.i386.rpm
- heartbeat-libs-3.0.3-.3.el5.i386.rpm
- libesmtp-1.0.4-5.el5.i386.rpm
- pacemaker-1.1.5-1.1.el5.i386.rpm
- pacemaker-cts-1.1.5-1.1.el5.i386.rpm
- pacemaker-libs-1.1.5-1.1.el5.i386.rpm
- perl-TimeDate-1.16-5.el5.noarch.rpm
Here we install them from that directory with yum:
- [root@node1 ~]# yum -y --nogpgcheck localinstall *.rpm
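A quick sanity check that the key pieces actually got installed (a sketch; package names as listed above):

```shell
# Each query should print a package version rather than "is not installed"
rpm -q corosync pacemaker heartbeat cluster-glue
```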
4. Configure corosync
- [root@node1 ~]# cd /etc/corosync/
- [root@node1 corosync]# cp corosync.conf.example corosync.conf
- [root@node1 corosync]# vim corosync.conf
- Add:
- service {
- ver: 0
- name: pacemaker
- use_mgmtd: yes
- }
- aisexec {
- user: root
- group: root
- }
- Change the following existing settings:
- bindnetaddr: 172.16.0.0
- secauth: on
- to_syslog: no
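Putting the pieces together, the relevant part of corosync.conf might look like the following sketch. Only bindnetaddr, secauth, to_syslog and the added service/aisexec sections come from the steps above; the remaining values are the shipped example file's defaults and may differ on your system:

```
totem {
    version: 2
    secauth: on                   # authenticate/encrypt totem traffic (requires authkey)
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.0.0   # the network address, not a host address
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
service {
    ver: 0
    name: pacemaker               # start pacemaker as a corosync plugin
    use_mgmtd: yes
}
aisexec {
    user: root
    group: root
}
```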
Generate the authentication key used for inter-node communication:
- [root@node1 corosync]# corosync-keygen
- Corosync Cluster Engine Authentication key generator.
- Gathering 1024 bits for key from /dev/random.
- Press keys on your keyboard to generate entropy.
- Writing corosync key to /etc/corosync/authkey.
- [root@node1 corosync]# ls
- authkey corosync.conf corosync.conf.example service.d uidgid.d
Copy corosync.conf and authkey to node2 (node22):
- [root@node1 corosync]# scp -p corosync.conf authkey node22:/etc/corosync/
- corosync.conf 100% 545 0.5KB/s 00:00
- authkey 100% 128 0.1KB/s 00:00
Create the directory corosync logs to, on both nodes:
- [root@node1 ~]# mkdir /var/log/cluster
- [root@node1 ~]# ssh node22 'mkdir /var/log/cluster'
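The log directory is needed because the logging section of corosync.conf points there. A hedged sketch of that section (values are the example file's defaults, plus the to_syslog change made above):

```
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log   # why /var/log/cluster must exist
    to_syslog: no
    debug: off
    timestamp: on
}
```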
5. Start corosync
- [root@node1 ~]# service corosync start
- Starting Corosync Cluster Engine (corosync): [ OK ]
Check that the corosync engine started normally:
- [root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
- Apr 12 03:39:24 corosync [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
- Apr 12 03:39:24 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications were sent correctly:
- # grep TOTEM /var/log/cluster/corosync.log
- Jun 14 19:03:49 node1 corosync[5120]: [TOTEM ] Initializing transport (UDP/IP).
- Jun 14 19:03:49 node1 corosync[5120]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
- Jun 14 19:03:50 node1 corosync[5120]: [TOTEM ] The network interface [172.16.100.11] is now up.
- Jun 14 19:03:50 node1 corosync[5120]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup:
- # grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Check that pacemaker started correctly:
- [root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
- Apr 18 09:14:34 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
- Apr 18 09:14:34 corosync [pcmk ] Logging: Initialized pcmk_startup
- Apr 18 09:14:34 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
- Apr 18 09:14:34 corosync [pcmk ] info: pcmk_startup: Service: 9
- Apr 18 09:14:34 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.rain.com
If all of the commands above ran without problems, start corosync on node2 with the following command:
- [root@node1 ~]# ssh node22 -- '/etc/init.d/corosync start'
- Starting Corosync Cluster Engine (corosync): [ OK ]
Note: start node2 from node1 with the command above; do not start corosync directly on node2;
Check the startup state of the cluster nodes with:
- [root@node1 ~]# crm status
- ============
- Last updated: Thu Apr 12 03:47:56 2012
- Stack: openais
- Current DC: node1.rain.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 0 Resources configured.
- ============
- Online: [ node1.rain.com node2.rain.com ]
6. Configure cluster properties: disable stonith
corosync enables stonith by default, but the current cluster has no stonith device, so the default configuration is not yet usable. This can be verified as follows:
- [root@node1 ~]# crm_verify -L
- crm_verify[13965]: 2012/04/12_03:54:30 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
- crm_verify[13965]: 2012/04/12_03:54:30 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
- crm_verify[13965]: 2012/04/12_03:54:30 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
- Errors found during check: config not valid
- -V may provide more details
We can disable stonith for now with the following command:
- [root@node1 ~]# crm configure property stonith-enabled=false
View the current configuration:
- [root@node1 ~]# crm configure show
- node node1.rain.com
- node node2.rain.com
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false"
The output shows that stonith has been disabled.
7. Add cluster resources
Corosync supports resource agents of the heartbeat, LSB, OCF and other classes; LSB and OCF are the most commonly used today, while the stonith class is dedicated to configuring stonith devices.
View the resource agent classes the cluster supports:
- [root@node1 ~]# crm ra classes
- heartbeat
- lsb
- ocf / heartbeat linbit pacemaker
- stonith
To list all resource agents in a given class, use commands like the following:
- # crm ra list lsb
- # crm ra list ocf heartbeat
- # crm ra list ocf pacemaker
- # crm ra list stonith
- # crm ra info [class:[provider:]]resource_agent
For example:
- # crm ra info ocf:heartbeat:IPaddr
8. Create an IP address resource for the web cluster to use when providing the web service; this is done as follows.
The crm status output further below shows that, once created, the resource starts on node1.rain.com:
- Syntax:
- primitive <rsc> [<class>:[<provider>:]]<type>
- [params attr_list]
- [operations id_spec]
- [op op_type [<attribute>=<value>...] ...]
- op_type :: start | stop | monitor
- Example:
- primitive apcfence stonith:apcsmart \
- params ttydev=/dev/ttyS0 hostlist="node11 node22" \
- op start timeout=60s \
- op monitor interval=30m timeout=60s
- Applied here:
- # crm configure primitive WebIP ocf:heartbeat:IPaddr params ip=172.16.5.1
- [root@node1 ~]# crm status
- ============
- Last updated: Thu Apr 12 04:30:51 2012
- Stack: openais
- Current DC: node1.rain.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 1 Resources configured.
- ============
- Online: [ node1.rain.com node2.rain.com ]
- WebIP (ocf::heartbeat:IPaddr): Started node1.rain.com
You can also run ifconfig on node1 to confirm that the address is active as an alias of eth0:
- [root@node1 ~]# ifconfig
- The output includes:
- eth0:0 Link encap:Ethernet HWaddr 00:0C:29:3A:AA:2F
- inet addr:172.16.5.1 Bcast:172.16.255.255 Mask:255.255.0.0
- UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
- Interrupt:67 Base address:0x2000
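As an aside, the primitive created above relies on the cluster's defaults for monitoring. Using the syntax shown earlier, a variant with an explicit monitor operation might look like this sketch (the interval and timeout values are illustrative, not from the original setup):

```shell
# Same WebIP resource, but with a recurring health check
crm configure primitive WebIP ocf:heartbeat:IPaddr \
    params ip=172.16.5.1 \
    op monitor interval=30s timeout=20s
```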
Next, from node2, stop the corosync service on node1 (node11):
- [root@node2 ~]# ssh node11 /etc/init.d/corosync stop
- Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]
- Waiting for corosync services to unload:.......[ OK ]
Check the cluster status:
- [root@node2 ~]# crm status
- ============
- Last updated: Thu Apr 12 04:41:40 2012
- Stack: openais
- Current DC: node2.rain.com - partition WITHOUT quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 1 Resources configured.
- ============
- Online: [ node2.rain.com ]
- OFFLINE: [ node1.rain.com ]
The output shows that node1.rain.com is offline, yet the WebIP resource did not start on node2.rain.com. The reason is that the cluster is in the "WITHOUT quorum" state: having lost quorum, it no longer satisfies the conditions for running services. For a cluster with only two nodes this behavior makes no sense, so we tell the cluster to ignore the quorum check:
- crm(live)configure# property no-quorum-policy=ignore
Check the status again and the cluster now starts the resource on node2, as shown below:
- crm(live)# status
- ============
- Last updated: Thu Apr 12 04:52:28 2012
- Stack: openais
- Current DC: node2.rain.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 1 Resources configured.
- ============
- Online: [ node1.rain.com node2.rain.com ]
- WebIP (ocf::heartbeat:IPaddr): Started node2.rain.com
After verifying this, start node1.rain.com again:
- # ssh node11 -- /etc/init.d/corosync start
Once node1.rain.com is back up, the WebIP resource will most likely migrate from node2.rain.com back to node1.rain.com. Every such move between nodes leaves the resource briefly unreachable, so after a resource has failed over to another node we often want it to stay there even after the original node recovers. This is done by defining resource stickiness, which can be set when the resource is created or afterwards.
Note: stickiness value ranges and their effects:
0: the default. The resource is placed on the best-suited node in the system, which means it moves whenever a "better" or worse-loaded node becomes available. This is essentially automatic failback, except the resource may move to a node other than the one it was previously active on;
greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available; higher values mean a stronger preference for the current node;
less than 0: the resource prefers to move away from its current node; larger absolute values mean a stronger urge to leave;
INFINITY: the resource stays where it is unless it is forced off because the node can no longer run it (node shutdown, standby, migration-threshold reached, or a configuration change). This essentially disables automatic failback;
-INFINITY: the resource always moves away from its current node;
Here we set a default stickiness value:
- # crm configure rsc_defaults resource-stickiness=100
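Stickiness can also be set per resource rather than as a cluster-wide default. A hedged sketch using the crm shell (the value 200 is illustrative, not from the original setup):

```shell
# Give WebIP a higher stickiness than the 100 default set above
crm resource meta WebIP set resource-stickiness 200
```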
9. Combine the IP resource configured above into an active/passive web (httpd) cluster
To turn this cluster into a web (httpd) server cluster, first install httpd on each node and configure each one to serve its own local test page.
- Node1:
- # yum -y install httpd
- # echo "<h1>Node1.rain.com</h1>" > /var/www/html/index.html
- Node2:
- # yum -y install httpd
- # echo "<h1>Node2.rain.com</h1>" > /var/www/html/index.html
The test pages display correctly.
If we now stop corosync on node1:
- [root@node2 ~]# ssh node11 '/etc/init.d/corosync stop'
- Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]
- Waiting for corosync services to unload:......[ OK ]
the browser at 172.16.5.1 switches to the other node's test page, which shows the address failed over correctly.
Next we add the httpd service itself as a cluster resource. Two resource agent classes can manage httpd: lsb and ocf:heartbeat; for simplicity we use the lsb class here:
Create the WebSite resource:
- # crm configure primitive WebSite lsb:httpd
Check the resource's state:
- [root@node1 ~]# crm status
- ============
- Last updated: Wed Apr 18 15:05:47 2012
- Stack: openais
- Current DC: node2.rain.com - partition with quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 2 Resources configured.
- ============
- Online: [ node1.rain.com node2.rain.com ]
- WebIP (ocf::heartbeat:IPaddr): Started node2.rain.com
- WebSite (lsb:httpd): Started node2.rain.com
The output above shows that WebIP and WebSite may well run on different nodes, which does not work for an application serving the web through this IP: the two resources must run on the same node.
So even when a cluster has all the resources it needs, it may still not handle them correctly. Resource constraints specify on which cluster nodes resources run, in what order they are loaded, and which other resources a given resource depends on. pacemaker provides three kinds of constraints:
1) Resource Location: defines on which nodes a resource may, may not, or preferably should run;
2) Resource Colocation: defines which resources may or may not run together on the same node;
3) Resource Order: defines the order in which resources are started on a node;
When defining constraints, you also assign scores. Scores of every kind are central to how the cluster works: everything from migrating resources to deciding which resources to stop in a degraded cluster is done by manipulating scores. Scores are computed per resource, and any node with a negative score for a resource cannot run it; once the scores are computed, the cluster places the resource on the node with the highest score. INFINITY is currently defined as 1,000,000, and adding or subtracting it follows three basic rules:
1) any value + INFINITY = INFINITY
2) any value - INFINITY = -INFINITY
3) INFINITY - INFINITY = -INFINITY
A constraint itself can also be given a score, representing the value assigned to that constraint: higher-scored constraints are applied before lower-scored ones. By creating several location constraints with different scores for a resource, you can control the order of nodes the resource fails over to.
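The three INFINITY rules can be modeled with a tiny shell function. This is a toy illustration of the arithmetic described above, not pacemaker's actual implementation:

```shell
# Toy model of pacemaker score arithmetic; INFINITY is 1,000,000
INF=1000000

add_scores() {
    a=$1; b=$2
    if [ "$a" -le "-$INF" ] || [ "$b" -le "-$INF" ]; then
        # rules 2 and 3: any sum involving -INFINITY is -INFINITY
        echo "-$INF"
    elif [ "$a" -ge "$INF" ] || [ "$b" -ge "$INF" ]; then
        # rule 1: any value + INFINITY = INFINITY
        echo "$INF"
    else
        echo $((a + b))      # ordinary finite addition
    fi
}

add_scores 5 "$INF"          # -> 1000000
add_scores 5 "-$INF"         # -> -1000000
add_scores "$INF" "-$INF"    # -> -1000000
add_scores 100 200           # -> 300
```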
So the earlier problem of WebIP and WebSite possibly running on different nodes can be solved with the following command:
# crm configure colocation website-with-ip INFINITY: WebSite WebIP
Next, we must ensure that WebIP is started before WebSite on a node, which the following command does:
# crm configure order httpd-after-ip mandatory: WebIP WebSite
Also, since an HA cluster does not require every node to have the same or similar performance, we may want the service to normally run on a more capable node; a location constraint achieves this:
# crm configure location prefer-node1 WebSite 200: node1.rain.com
This makes WebSite prefer node1 with a score of 200.
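To verify the whole stack at the end, you can review the configuration and fetch the page through the cluster address. A sketch (run the curl from any host that can reach 172.16.5.1):

```shell
crm configure show            # review the final resources and constraints
crm status                    # WebIP and WebSite should run on the same node
curl -s http://172.16.5.1/    # should return that node's test page
```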
And that is the whole process of building a high-availability cluster with corosync. It is a fair amount of work, but repetition builds memory, so keep the habit of practicing hands-on as you study.