Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by a cluster infrastructure layer (OpenAIS, Heartbeat, or Corosync) to detect node- and resource-level failures and recover from them, maximizing the availability of cluster services. Corosync is the cluster framework engine; Pacemaker is the high-availability cluster resource manager. The resource management layer (Pacemaker) is responsible for the active node, IP address migration, and the local resource management system; the messaging layer (Corosync) carries the heartbeat traffic.
The set of standby nodes reserved for a service is called its failover domain. Failover means that when a node fails, its resources are taken over by a standby node; failback means that after the failed node is repaired, the resources move back to it. Every time a resource moves between nodes it is briefly unreachable, so after a resource has failed over to another node we sometimes want to prevent it from flowing back even once the original node recovers. This is done by defining resource stickiness.
Resource stickiness
Resource stickiness expresses how strongly a resource prefers to keep running on the node it is currently on.
Stickiness value ranges and their effects:
0: the default. The resource is placed at the most suitable position in the cluster, meaning it is moved whenever a node with "better" or lower load becomes available. This is essentially equivalent to automatic failback, except the resource may move to a node other than the one it was previously active on;
greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available. The higher the value, the stronger the preference to stay;
less than 0: the resource prefers to move away from its current node. The larger the absolute value, the stronger the preference to leave;
INFINITY: the resource always stays where it is unless it is forced off because the node can no longer run it (node shutdown, node standby, migration-threshold reached, or a configuration change). This is essentially equivalent to disabling automatic failback entirely.
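Stickiness can be set cluster-wide (done later in this article with rsc_defaults) or per resource through its meta attributes. A minimal sketch using the crm shell's resource sublevel; the resource id vip is illustrative:
crm(live)resource# meta vip set resource-stickiness INFINITY   ## this resource will never fail back on its own
crm(live)resource# meta vip show resource-stickiness           ## confirm the value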
The CRM (cluster resource manager) is effectively Pacemaker. Every node runs a crmd daemon, which is responsible for coordinating state transitions and handling resource operations. The resource manager supports several kinds of resource agents; common ones are OCF (Open Cluster Framework) agents, LSB (Linux Standard Base) agents, and systemd and service agents.
resource_id: a user-defined resource name;
standard: the standard the agent script follows; allowed values are ocf, service, upstart, systemd, lsb, and stonith;
type: the resource agent name. A cluster is composed of many cooperating resource services with dependency relationships between them; these dependencies are expressed through resource constraints.
Cluster resource constraints:
Location constraint (location): restricts which node or position a resource service should start on;
Order constraint (order): restricts the order in which resource services start;
Colocation constraint (colocation): binds different resources together as one logical unit; when the unit fails, it is migrated as a whole.
Cluster resource types:
Resource group: multiple resources that need to be operated on as a unit are combined into a resource group;
Resource attributes:
priority: the resource's priority, default 0; if the cluster cannot keep all resources running, higher-priority resources run first;
target-role: the resource's target role, default Started, describing what state the resource should be in (Stopped; Master: the resource may start and, when appropriate, act as master);
is-managed: default true, meaning the cluster may start and stop the resource; false means it may not;
resource-stickiness: default 0; how strongly the resource prefers to remain on the node it currently occupies.
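A minimal sketch of setting these meta attributes when defining a resource (the resource id testsrv and the values are illustrative):
crm(live)configure# primitive testsrv lsb:httpd \
        meta priority=10 target-role=Started is-managed=true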
Common high-availability cluster models:
1. Active/active: requests to a failed node are automatically redirected to another healthy node, or a load balancer in front of the service redistributes the failed node's requests across healthy nodes according to its scheduling algorithm. Every node runs the same services, in the running state, with identical configuration.
2. Active/passive: all cluster nodes carry the same services and configuration, but on one node the service sits idle in standby; the standby node is only activated when another node fails.
3. N+1: one extra standby node is provisioned, with the software and services of the other nodes installed and configured on it; when any cluster node fails, the standby is activated and takes over its services. The standby node must be able to take over for any failed node.
4. N+M: when the single standby node of the N+1 model cannot provide enough redundancy, M standby nodes are configured; M is sized according to actual requirements.
5. N-to-1: when a node fails, the single standby node takes over its services temporarily; once the failed node is repaired, the standby releases the services back to it and returns to standby.
6. N-to-N: a failed node's services and requests are spread across the remaining healthy nodes; there is no dedicated standby node.
Ways to configure Pacemaker resources:
1. Command-line tools: crmsh, pcs
2. Graphical tools: pygui, hawk, LCMC
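This article uses crmsh throughout. For comparison, the VIP resource configured later could be created with pcs along these lines (a sketch, assuming pcs is installed and the cluster is already running):
[root@servre4 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.88.200 cidr_netmask=32 op monitor interval=60s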
The top-level sublevels of the crm shell:
cib            manage shadow CIBs
resource       resources management                 ## resource manager: define and operate on resources
configure      CRM cluster configuration            ## edit the cluster configuration
node           nodes management                     ## node management commands
options        user preferences                     ## user preferences
history        CRM cluster history                  ## browse command and cluster history
site           Geo-cluster support                  ## Geo-cluster support
ra             resource agents information center   ## resource agent information
status         show cluster status                  ## show the cluster state
help,?         show help (help topics for list of topics)
end,cd,up      go back one level
quit,bye,exit  exit the program
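Each sublevel can also be invoked non-interactively from the ordinary shell, which is convenient in scripts; for example:
[root@servre4 ~]# crm status            ## one-shot equivalent of entering crm and typing status
[root@servre4 ~]# crm configure show    ## dump the current cluster configuration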
A rough list of the commonly used configure subcommands:
node          define a cluster node            ## define a cluster node
primitive     define a resource                ## define a resource
monitor       add monitor operation to a primitive   ## add a monitor operation to a defined resource
group         define a group                   ## define a group
clone         define a clone                   ## define a clone resource
ms            define a master-slave resource   ## define a master/slave resource
rsc_template  define a resource template       ## define a resource template
location      a location preference            ## define a location constraint (placement preference)
colocation    colocate resources               ## define a colocation constraint
order         order resources                  ## define a start order
property      set a cluster property           ## set a cluster property
show          display CIB objects              ## show CIB objects
delete        delete CIB objects               ## delete CIB objects
rename        rename a CIB object
commit        commit the changes to the CIB    ## apply and save pending changes
save          save the CIB to a file
help          show help (help topics for list of topics)
end           go back one level
quit          exit the program
primitive <id> <resource agent class>:<resource agent provider>:<resource agent name>
meta: meta attributes, options that can be added to a resource; they tell the CRM how to treat that particular resource.
params: instance attributes, parameters specific to the resource class; they determine how the resource class behaves and control the service.
op: by default the cluster does not ensure that a resource keeps working; add a monitor operation to the resource definition to make sure it does.
Other operation parameters:
interval: how often to perform the operation;
timeout: how long to wait before declaring the operation failed;
on-fail: what to do when the operation fails:
  ignore: pretend the operation did not fail;
  stop: stop the resource and do not start it anywhere else;
  fence: fence the node on which the resource failed (STONITH);
  standby: move all resources off that node (put it into standby);
enabled: if false, treat the operation as if it does not exist; allowed values are true and false.
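A minimal sketch of a primitive that sets these operation parameters (the resource id web and the values are illustrative):
crm(live)configure# primitive web lsb:httpd \
        op monitor interval=30s timeout=20s on-fail=stop   ## on monitor failure, stop and do not relocate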
colocation usage: colocation <id> +INFINITY/-INFINITY: <resource1> <resource2>
(+INFINITY means the resources must run on the same node; -INFINITY means they must run on different nodes)
order constraint: order <id> mandatory|optional|serialize: <resource1> <resource2>
mandatory: the default; a hard constraint, the second resource is started only after the first;
optional: an advisory constraint, honored when convenient;
serialize: the listed resources start one at a time, never in parallel.
location constraint: location <id> <resource> <rule>
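Location constraints are not demonstrated later in this article, so here is a minimal sketch (the id and score are illustrative): a score of 50 makes vip prefer servre4, while a score of -INFINITY would forbid it from running there:
crm(live)configure# location vip_prefers_servre4 vip 50: servre4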
Commonly used node subcommands:
status        show nodes' status as XML     ## show node status as XML
show          show node                     ## show node information
standby       put node into standby         ## put a node into standby
online        set node online               ## bring a node back online
fence         fence node
clearstate    clear node state              ## clear a node's recorded state
delete        delete node                   ## remove a node
help          show help (help topics for list of topics)
end           go back one level
quit          exit the program
Commonly used resource subcommands:
status        show status of resources
start start a resource
stop stop a resource
restart restart a resource
promote promote a master-slave resource
## status shows the current cluster state ##
crm(live)# status
Last updated: Thu Jun 28 11:00:23 2018
Last change: Thu Jun 28 10:12:50 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ servre3 servre4 ]
After Pacemaker 1.1.8, crmsh became an independent project, so we have to download and install its package ourselves.
[root@server2 ~]# yum install -y corosync pacemaker
### tab completion only shows the crm_* utilities; the crm shell itself is not available yet ###
[root@servre4 ~]# crm
crmadmin crm_error crm_mon crm_resource crm_standby
crm_attribute crm_failcount crm_node crm_shadow crm_ticket
crm_diff crm_master crm_report crm_simulate crm_verify
[root@servre4 ~]# yum install -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm
[root@server2 ~]# crm
crm crm_error crm_node crm_simulate
crmadmin crm_failcount crm_report crm_standby
crm_attribute crm_master crm_resource crm_ticket
crm_diff crm_mon crm_shadow crm_verify
[root@servre4 ~]# cd /etc/corosync/
[root@servre4 corosync]# ls
corosync.conf.example corosync.conf.example.udpu service.d uidgid.d
### copy the template configuration file ###
[root@servre4 corosync]# cp corosync.conf.example corosync.conf
[root@servre4 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 192.168.1.1
mcastaddr: 226.94.1.1
mcastport: 5405
ttl: 1
}
}
### append a service block at the end so corosync starts pacemaker as a plugin (ver: 0) ###
service {
        name: pacemaker
        ver: 0
}
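Note that the template's bindnetaddr must match the network the cluster nodes actually sit on. The nodes in this setup use 192.168.88.x addresses, so the interface section would presumably be adjusted along these lines:
interface {
        ringnumber: 0
        bindnetaddr: 192.168.88.0   ## network address of the cluster interconnect
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
}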
[root@server2 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
### default resource property configuration ###
[root@servre4 ~]# crm
crm(live)# configure
crm(live)configure# show
node server5
node servre4
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2"
### check the configuration for errors ####
crm(live)configure# verify
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
### the errors say no STONITH resources are defined; the workaround: ###
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# show
node server5
node servre4
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false"
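The same validation can also be run from the ordinary shell against the live CIB:
[root@servre4 ~]# crm_verify -L -V   ## check the live cluster configuration, verbose output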
View the resource agent classes supported by the current system:
crm(live)# ra
crm(live)ra# classes
lsb
ocf / heartbeat pacemaker
service
stonith
#### view the resource agents available under a given class
crm(live)ra# list ocf
CTDB ClusterMon Dummy Filesystem HealthCPU
HealthSMART IPaddr IPaddr2 IPsrcaddr LVM
MailTo Route SendArp Squid Stateful
SysInfo SystemHealth VirtualDomain Xinetd apache
conntrackd controld dhcpd ethmonitor exportfs
mysql mysql-proxy named nfsserver nginx
pgsql ping pingd postfix remote
rsyncd rsyslog slapd symlink tomcat
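Before writing a primitive it helps to see which parameters a given agent accepts; the ra sublevel's info command prints them:
crm(live)ra# info ocf:heartbeat:IPaddr2   ## show the agent's parameters, defaults and operations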
Defining resources
### see how to define a primitive resource ###
crm(live)configure# primitive
usage: primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
        [params <param>=<value> [<param>=<value> ...]]
        [meta <attribute>=<value> [<attribute>=<value> ...]]
        [utilization <attribute>=<value> [<attribute>=<value> ...]]
        [operations id_spec]
        [op op_type [<attribute>=<value> ...] ...]
### define the vip resource and its monitor ###
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.88.200 cidr_netmask=32 op monitor interval=1min
crm(live)configure# show
node server5
node servre4
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false"
crm(live)configure# commit
crm(live)# status
Last updated: Sun Jul 1 16:06:23 2018
Last change: Sun Jul 1 16:04:15 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: server5 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ server5 servre4 ]
vip (ocf::heartbeat:IPaddr2): Started server5
### the vip is running on server5 ###
[root@server5 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:68:25:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.156/24 brd 192.168.88.255 scope global eth1
inet 192.168.88.200/32 brd 192.168.88.255 scope global eth1
inet6 fe80::20c:29ff:fe68:2582/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
Test: stop the server5 node and check whether the resource moves to the servre4 node.
[root@server5 ~]# /etc/init.d/corosync stop
Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]
Waiting for corosync services to unload:. [ OK ]
[root@servre4 ~]# crm
crm(live)# status
Last updated: Sun Jul 1 16:12:14 2018
Last change: Sun Jul 1 16:04:15 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition WITHOUT quorum ## servre4's partition has no quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ servre4 ]
OFFLINE: [ server5 ] #### server5 has been stopped ###
### the virtual IP has not moved to the servre4 node ###
[root@servre4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e4:49:d0 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.155/24 brd 192.168.88.255 scope global eth1
inet6 fe80::20c:29ff:fee4:49d0/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
### servre4's partition has no quorum, so the cluster itself no longer meets the conditions for normal operation; for a cluster with only two nodes this behavior is unreasonable. Change the policy so the unsatisfiable quorum check is ignored ###
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit
crm(live)configure# show
node server5
node servre4
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
### check that our vip is now running on servre4 ###
crm(live)# status
Last updated: Sun Jul 1 16:21:43 2018
Last change: Sun Jul 1 16:18:29 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ servre4 ]
OFFLINE: [ server5 ]
vip (ocf::heartbeat:IPaddr2): Started servre4
[root@servre4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e4:49:d0 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.155/24 brd 192.168.88.255 scope global eth1
inet 192.168.88.200/32 brd 192.168.88.255 scope global eth1
inet6 fe80::20c:29ff:fee4:49d0/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
link/ether 52:54:00:d1:bf:0b brd ff:ff:ff:ff:ff:ff
Defining resource stickiness
### start server5's corosync service normally again; the virtual IP is running on the servre4 node, but without stickiness it may at some point move back to its original node on its own ###
[root@server5 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
crm(live)# status
Last updated: Sun Jul 1 16:23:25 2018
Last change: Sun Jul 1 16:18:29 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ server5 servre4 ]
vip (ocf::heartbeat:IPaddr2): Started servre4
### define resource stickiness ###
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# show
node server5
node servre4
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# commit
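With a default stickiness committed, resources stay where they are after a node recovers. If a resource does need to be moved by hand, crmsh can do it by injecting (and later removing) a location constraint; a sketch:
crm(live)resource# migrate vip server5   ## force vip onto server5 (adds a cli- location constraint)
crm(live)resource# unmigrate vip         ## remove that constraint so normal placement rules apply again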
Adding the httpd service as a cluster resource. The cluster will manage Apache itself, so httpd must first be stopped and removed from the boot sequence on both nodes:
[root@servre4 ~]# curl 192.168.88.155
192.168.88.155 server4
[root@servre4 ~]# /etc/init.d/httpd stop
Stopping httpd: [ OK ]
[root@servre4 ~]# chkconfig httpd off
[root@server5 ~]# curl 192.168.88.156
192.168.88.156 server5
[root@server5 ~]# /etc/init.d/httpd stop
Stopping httpd: [ OK ]
[root@server5 ~]# chkconfig httpd off
### create the httpd cluster resource ###
crm(live)# configure
crm(live)configure# primitive httpd lsb:httpd
crm(live)configure# show
node server5
node servre4
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd ..
crm(live)# status
Last updated: Sun Jul 1 16:47:46 2018
Last change: Sun Jul 1 16:46:13 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server5 servre4 ]
vip (ocf::heartbeat:IPaddr2): Started servre4
httpd (lsb:httpd): Started server5
Our vip and httpd service are not running on the same node, so accessing the site through the vip will not work; the two must run on the same node. There are two approaches: 1. bind vip and httpd into a group so they run together; 2. define resource constraints that place vip and httpd on the same cluster node, in a chosen manner and start order.
1. Defining a group
crm(live)configure# group
crm(live)configure# group httpservice vip httpd ### put vip and httpd into a single group
crm(live)configure# show
node server5
node servre4
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
group httpservice vip httpd
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# verify ### check for errors ###
crm(live)configure# commit
crm(live)configure# cd ..
crm(live)# status
Last updated: Sun Jul 1 16:55:01 2018
Last change: Sun Jul 1 16:54:50 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server5 servre4 ]
Resource Group: httpservice
vip (ocf::heartbeat:IPaddr2): Started servre4
httpd (lsb:httpd): Started servre4
Put the servre4 node into standby and test whether the vip and httpd services migrate.
crm(live)# node
crm(live)node# standby
crm(live)node# cd ..
crm(live)# status
Last updated: Sun Jul 1 16:59:58 2018
Last change: Sun Jul 1 16:59:47 2018 via crm_attribute on servre4
Stack: classic openais (with plugin)
Current DC: servre4 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Node servre4: standby
Online: [ server5 ]
Resource Group: httpservice
vip (ocf::heartbeat:IPaddr2): Started server5
httpd (lsb:httpd): Started server5
Test in a browser:
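Without a browser, the same check can be made with curl from any host on the 192.168.88.0/24 network (the client host shown here is arbitrary); while the group runs on server5, the response should be server5's test page:
[root@client ~]# curl 192.168.88.200
192.168.88.156 server5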
2. Defining resource constraints
Bring the servre4 node back online, then delete the group resource:
crm(live)# resource
crm(live)resource# show
Resource Group: httpservice
vip (ocf::heartbeat:IPaddr2): Started
httpd (lsb:httpd): Started
crm(live)resource# stop httpservice ### stop the group resource
crm(live)resource# cleanup httpservice ### clean up the group resource's state
Cleaning up vip on server5
Cleaning up vip on servre4
Cleaning up httpd on server5
Cleaning up httpd on servre4
Waiting for 1 replies from the CRMd. OK
crm(live)resource# cd ..
crm(live)# configure
crm(live)configure# delete httpservice ### delete the group resource
crm(live)configure# commit
crm(live)configure# show
node server5 \
attributes standby="off"
node servre4 \
attributes standby="off"
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1530437326"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)# status
Last updated: Sun Jul 1 17:49:16 2018
Last change: Sun Jul 1 17:49:07 2018 via cibadmin on server5
Stack: classic openais (with plugin)
Current DC: server5 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server5 servre4 ]
vip (ocf::heartbeat:IPaddr2): Started server5
httpd (lsb:httpd): Started servre4
Defining resource colocation and start order
### make the httpd service and the vip run on the same node ###
crm(live)configure# colocation http_vip INFINITY: httpd vip
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd ..
crm(live)# status
Last updated: Sun Jul 1 18:22:05 2018
Last change: Sun Jul 1 18:21:59 2018 via cibadmin on servre4
Stack: classic openais (with plugin)
Current DC: server5 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server5 servre4 ]
vip (ocf::heartbeat:IPaddr2): Started server5
httpd (lsb:httpd): Started server5
### define the start order: the vip starts first and httpd starts after it ###
crm(live)configure# order httpd_after_vip mandatory: vip httpd
crm(live)configure# verify
crm(live)configure# show
node server5 \
attributes standby="off"
node servre4 \
attributes standby="off"
primitive httpd lsb:httpd
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.88.200" cidr_netmask="32" \
op monitor interval="1min"
colocation http_vip inf: httpd vip
order httpd_after_vip inf: vip httpd
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1530437326"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
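With both constraints committed, the two resources behave like the earlier group. A quick way to confirm, sketched with this setup's node names: put the active node into standby and both resources should move together:
crm(live)node# standby server5   ## evacuate server5; crm status should now show vip and httpd both started on servre4
crm(live)node# online server5    ## bring server5 back; the stickiness default keeps the resources on servre4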