Chapter 3 OpenStack Environment Deployment: Pacemaker + corosync + pcs High-Availability Cluster

Pacemaker + corosync + pcs High-Availability Cluster

References
https://www.centos.bz/2019/06/linux%E4%B8%8B%E6%90%AD%E5%BB%BAhaproxypacemakercorosync%E9%9B%86%E7%BE%A4/

https://www.andylouse.net/linux-galera-haproxy-keepalived-mariadb

https://www.cnblogs.com/hukey/p/8047125.html
#################################################################################################
# Install on all nodes
yum -y install pcs pacemaker corosync fence-agents resource-agents 
# Create the database user (used by HAProxy for MariaDB health checks)
MariaDB [(none)]> CREATE USER 'haproxy'@'%' ;
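This passwordless account is what HAProxy's MySQL health check logs in with. A minimal haproxy.cfg sketch of such a backend is shown below; the listen name, bind address, and backup roles are assumptions for illustration, not taken from this deployment:

cat >> /etc/haproxy/haproxy.cfg <<'EOF'
# Hypothetical MariaDB backend using the passwordless 'haproxy' user for health checks
listen mariadb-cluster
    bind 192.168.176.62:3306
    mode tcp
    option mysql-check user haproxy
    server node1 node1:3306 check
    server node2 node2:3306 check backup
    server node3 node3:3306 check backup
EOF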
 
# Configure pcs

systemctl start pcsd
systemctl enable pcsd
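If firewalld is running on the nodes, the cluster ports must be opened before the nodes can authenticate each other. A sketch, assuming the stock "high-availability" firewalld service definition shipped with CentOS 7:

# Open pcsd/corosync/pacemaker ports on every node (skip if firewalld is disabled)
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload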

# Configure the cluster user

Installing pcs automatically creates the hacluster user, so only its password needs to be set (use the same password on every node):

passwd hacluster
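A non-interactive sketch using the passwd --stdin option available on CentOS 7 ("your-password" is a placeholder):

# Run on every node; replace the placeholder with the real password
echo "your-password" | passwd --stdin hacluster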

[root@node1 ~]# pcs cluster auth node1 node2 node3 -u hacluster -p "<password>" --force
node1: Authorized
node3: Authorized
node2: Authorized
[root@node1 ~]# pcs cluster setup --name hacluster node1 node2 node3 --force
Destroying cluster on nodes: node1, node2, node3...
node3: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (pacemaker)...
node2: Stopping Cluster (pacemaker)...
node1: Successfully destroyed cluster
node2: Successfully destroyed cluster
node3: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node1', 'node2', 'node3'
node2: successful distribution of the file 'pacemaker_remote authkey'
node1: successful distribution of the file 'pacemaker_remote authkey'
node3: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node1: Succeeded
node2: Succeeded
node3: Succeeded

Synchronizing pcsd certificates on nodes node1, node2, node3...
node1: Success
node3: Success
node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node3: Success
node2: Success

[root@node1 ~]# pcs cluster start --all
node1: Starting Cluster (corosync)...
node2: Starting Cluster (corosync)...
node3: Starting Cluster (corosync)...
node1: Starting Cluster (pacemaker)...
node2: Starting Cluster (pacemaker)...
node3: Starting Cluster (pacemaker)...
[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
node3: Cluster Enabled

Disable STONITH:
STONITH relies on a fencing device that can power off a node on command. This environment has no such device, and if the option is left enabled, every pcs command keeps printing errors about the missing fencing configuration.
pcs property set stonith-enabled=false
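As an optional check, the property and the overall configuration can be verified with standard Pacemaker tooling; pcs should no longer print fencing-related errors:

# Confirm the property and validate the live configuration
pcs property list
crm_verify -L -V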

# With only two nodes, ignore loss of quorum:
pcs property set no-quorum-policy=ignore

Limit how many policy-engine inputs, warnings, and errors are kept in history:
pcs property set pe-warn-series-max=10000 pe-input-series-max=10000 pe-error-series-max=10000

Have the cluster recheck its state periodically (time-driven processing):
pcs property set cluster-recheck-interval=5

# Configure the VIP
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.176.62 cidr_netmask=32 nic=em1 op monitor interval=3s
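A quick verification sketch: confirm the resource options and that the address is actually configured on the node currently running the VIP (the interface name em1 is taken from the command above):

# Check the resource definition and the address on the active node
pcs resource show vip
ip addr show em1 | grep 192.168.176.62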

# Pacemaker + corosync are here to serve HAProxy; add the haproxy resource to the Pacemaker cluster
pcs resource create lb-haproxy systemd:haproxy --clone
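Because Pacemaker now manages haproxy through the systemd resource agent, the usual precaution is to make sure systemd itself does not also start the service at boot, so the two managers do not compete:

# Run on every node: let Pacemaker, not systemd, decide where haproxy runs
systemctl stop haproxy
systemctl disable haproxy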

# Note: the resources must be colocated; otherwise haproxy starts on every node and requests are handled inconsistently.
Bind the two resources to the same node:
pcs constraint colocation add lb-haproxy-clone vip INFINITY

# Configure the start order: bring up the VIP first, then haproxy, because haproxy listens on the VIP
pcs constraint order vip then lb-haproxy-clone
pcs constraint order start vip then lb-haproxy-clone kind=Optional
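To confirm the colocation and ordering rules were recorded as intended, list the constraints:

# Show all constraints currently in the cluster configuration
pcs constraint list --full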

# Manually prefer a default node for the resource; because the two resources are colocated, moving one automatically moves the other.
pcs constraint location vip prefers node1

# Set resource stickiness so automatic fail-back does not destabilize the cluster; the VIP is now pinned to node1
pcs property set default-resource-stickiness="100"
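An optional failover test sketch: put node1 into standby, confirm the VIP and haproxy move to another node, then bring node1 back. Whether the resources fail back depends on how the node1 location score compares with the stickiness value, so observe the result rather than assume it:

# Simulate losing node1, then restore it
pcs cluster standby node1
pcs status resources
pcs cluster unstandby node1
pcs status resources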

# Check status
pcs resource

# Result when the resources are not colocated: errors
############################################################
[root@node3 glance]# pcs status
Cluster name: hacluster
Stack: corosync
Current DC: node2 (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
Last updated: Thu Nov 21 14:54:20 2019
Last change: Fri Nov 15 14:19:32 2019 by root via cibadmin on node1

3 nodes configured
4 resources configured

Online: [ node1 node2 node3 ]

Full list of resources:

 vip    (ocf::heartbeat:IPaddr2):    Started node1
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Stopped: [ node1 node2 node3 ]

Failed Resource Actions:
* vip_monitor_3000 on node1 'not running' (7): call=20, status=complete, exitreason='',
    last-rc-change='Tue Nov 19 15:18:20 2019', queued=0ms, exec=0ms
* lb-haproxy_start_0 on node1 'not running' (7): call=15, status=complete, exitreason='',
    last-rc-change='Tue Nov 19 15:04:36 2019', queued=0ms, exec=2087ms
* lb-haproxy_start_0 on node2 'not running' (7): call=42, status=complete, exitreason='',
    last-rc-change='Tue Nov 19 15:06:44 2019', queued=0ms, exec=2080ms
* vip_monitor_3000 on node2 'not running' (7): call=29, status=complete, exitreason='',
    last-rc-change='Fri Nov 15 17:55:15 2019', queued=0ms, exec=0ms
* lb-haproxy_start_0 on node3 'not running' (7): call=14, status=complete, exitreason='',
    last-rc-change='Tue Nov 19 15:06:49 2019', queued=0ms, exec=2069ms

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
##################################################################
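Once the missing colocation and ordering constraints have been added and haproxy starts correctly, the failed actions shown above stay in the status output until they are cleared:

# Clear recorded failures so pcs status reflects the corrected configuration
pcs resource cleanup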
