A pacemaker + corosync + pcs lab

https://blog.51cto.com/9652359/2109398

https://blog.csdn.net/abel_dwh/article/details/78475630

2018-05-01 11:37:44

Goal of the lab: use corosync as the cluster messaging layer, pacemaker as the cluster resource manager (CRM), and pcs as the administrative front end to the CRM, in order to make the httpd service highly available.

Preparation:

  1. Set up passwordless SSH trust between the two nodes;
  2. Configure host-name resolution in the /etc/hosts file (a command sketch for steps 1-2 follows this list);
  3. Stop the firewall: service iptables stop (on CentOS 7 with firewalld: systemctl stop firewalld);
  4. Disable SELinux: setenforce 0;
  5. Disable NetworkManager: chkconfig NetworkManager off and service NetworkManager stop.
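
A minimal sketch of the SSH trust and name-resolution steps; the node IP addresses here are assumptions (only the VIP 192.168.110.150 appears later in the lab):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # on node1; repeat in the other direction on node2
ssh-copy-id root@node2                       # push node1's key to node2 (and vice versa)

cat >> /etc/hosts <<'EOF'                    # identical entries on both nodes (example addresses)
192.168.110.101 node1
192.168.110.102 node2
EOF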

1. Install the software
The corosync, pacemaker, and pcs packages can be installed directly from the yum repositories:
yum install corosync pacemaker pcs -y
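
To confirm which packages and versions landed:

rpm -q corosync pacemaker pcs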

2. Start the pcsd service on both nodes (otherwise authentication will fail)

systemctl start pcsd.service
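
Optionally, enable pcsd so that it comes back after a reboot:

systemctl enable pcsd.service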

3. Set a password for the hacluster account on both nodes
pcs talks to pcsd as the hacluster user, so give that account a password:
[root@node1 ~]# grep "hacluster" /etc/passwd   ## confirm the account exists (it is created by the packages)

echo 'hacluster' | passwd hacluster --stdin

Authenticate the cluster nodes:

pcs cluster auth node1 node2 -u hacluster

node1: Authorized
node2: Authorized

4. Create the cluster
Run the following on one node (pcs pushes the configuration to the authenticated nodes; it is saved under /etc/corosync):

pcs cluster setup --name mycluster node1 node2 --force

  • Enable and start the cluster we just created:
# pcs cluster enable --all
# pcs cluster start --all
# pcs status
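
For reference, the generated /etc/corosync/corosync.conf looks roughly like the sketch below; the exact contents depend on the corosync and pcs versions, so treat it as illustrative only:

totem {
    version: 2
    cluster_name: mycluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_syslog: yes
}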

5. Verify the configuration and disable STONITH

[root@node1 ~]# crm_verify -L -V   ## crm_verify checks the current cluster configuration for errors
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

No fencing device is used in this lab, so disable STONITH:
[root@node1 ~]# pcs property set stonith-enabled=false

[root@node1 ~]# pcs property list   ## list the cluster properties that have been changed from their defaults

To see all properties including the defaults, use pcs property list --all
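
Not part of the original run, but worth knowing for a two-node lab: when one of two nodes goes down, the survivor loses quorum and would stop all resources by default, so the quorum policy is often relaxed as well:

pcs property set no-quorum-policy=ignore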

6. Configure the resources

  1. Configure the VIP resource:
    pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.110.150 cidr_netmask=24 op monitor interval=30s
    Use pcs status to check that the resource started. Note: in this test the netmask had to match the netmask on the NIC, otherwise the resource would not start.

  2. Configure the httpd resource:
    There are two options here, ocf:heartbeat:apache or lsb:httpd. With the former, httpd must be started manually on both servers; with the latter, the service is started by the pacemaker cluster.
    pcs resource create web lsb:httpd op monitor interval=20s
    pcs status now shows the resource as started.
  3. At the same time, on the active node you can run service httpd status to check the service, and ip addr to see whether the VIP has been acquired (a quick check sketch follows this list).
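
A quick check sketch on the node currently holding the resources (the VIP is the one created above):

ip addr | grep 192.168.110.150       # the VIP should be bound on the active node
service httpd status                 # httpd was started by the cluster (lsb:httpd)
curl -s http://192.168.110.150/      # the page should be reachable via the VIP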

7. Configure resource constraints

  1. Configure the start order (order constraint): vip starts first, web second.
    pcs constraint order vip then web
  2. Configure location constraints so the resources prefer node1: set the preference of vip/web for node1 to 100 and for node2 to 50 (the scores below reflect these values):
    pcs constraint location web prefers node1=100
    pcs constraint location vip prefers node1=100
    pcs constraint location web prefers node2=50
    pcs constraint location vip prefers node2=50
    [root@node1 ~]# pcs constraint
    Location Constraints:
      Resource: vip
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)
      Resource: web
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)

    Note: if resources that must be on the same node to provide service end up spread across different nodes, the cluster cannot serve clients. Only after both web and vip were given the same, higher preference for node1 did the cluster provide service normally; otherwise the two resources could land on different nodes and the service would be unavailable.

  3. Configure a resource group; only once both resources have the same location preferences does the group fail over as a unit:
    pcs resource group add httpgroup vip web
    [root@node2 ~]# pcs status groups
    httpgroup: vip web
    [root@node1 ~]# pcs resource
     Resource Group: httpgroup
         vip  (ocf::heartbeat:IPaddr2):  Started node1
         web  (lsb:httpd):  Started node1
  4. Configure a colocation constraint so that vip and web run together, with a score of 100 (two related pcs conveniences are sketched after this list):
    [root@node1 ~]# pcs constraint colocation add vip with web 100
    [root@node1 ~]# pcs constraint show
    Location Constraints:
      Resource: httpgroup
        Enabled on: node1 (score:200)
        Enabled on: node2 (score:100)
      Resource: vip
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)
      Resource: web
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)
    Ordering Constraints:
      start vip then start web (kind:Mandatory)
    Colocation Constraints:
      vip with web (score:100)
    Ticket Constraints:
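
Two pcs conveniences related to the steps above, shown as a sketch (neither was used in the original run, and the constraint ID below is illustrative):

# resources can be placed into a group already at creation time:
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.110.150 cidr_netmask=24 op monitor interval=30s --group httpgroup

# constraints can later be removed by their auto-generated IDs:
pcs constraint --full                          # prints each constraint with its ID
pcs constraint remove location-web-node1-100   # copy the actual ID from the --full output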
8. Switching resources between nodes
    pcs constraint location web prefers node1=100 // raise web's preference for node1 to 100; the resource can be seen moving from node2 back to node1. Note that you can adjust the httpgroup as a whole, or raise web's and vip's preference for node2 to move them the other way.
    May 1 09:43:02 node1 crmd[2965]: notice: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
    May 1 09:43:02 node1 pengine[2964]: warning: Processing failed op monitor for web on node2: not running (7)
    May 1 09:43:02 node1 pengine[2964]: notice: Move web#011(Started node2 -> node1)
    May 1 09:43:02 node1 pengine[2964]: notice: Calculated transition 4, saving inputs in /var/lib/pacemaker/pengine/pe-input-57.bz2
    May 1 09:43:02 node1 crmd[2965]: notice: Initiating stop operation web_stop_0 on node2 | action 6
    May 1 09:43:02 node1 crmd[2965]: notice: Initiating start operation web_start_0 locally on node1 | action 7
    May 1 09:43:03 node1 lrmd[2962]: notice: web_start_0:3682:stderr [ httpd: Could not reliably determine the server's fully qualified domain name, using node1.yang.com for ServerName ]
    May 1 09:43:03 node1 crmd[2965]: notice: Result of start operation for web on node1: 0 (ok) | call=12 key=web_start_0 confirmed=true cib-update=42
    May 1 09:43:03 node1 crmd[2965]: notice: Initiating monitor operation web_monitor_20000 locally on node1 | action 8
    May 1 09:43:03 node1 crmd[2965]: notice: Transition 4 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-57.bz2): Complete
    May 1 09:43:03 node1 crmd[2965]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd

    [root@node1 ~]# crm_simulate -sL

    Current cluster status:
    Online: [ node1 node2 ]

    Resource Group: httpgroup
    vip (ocf::heartbeat:IPaddr2): Started node1
    web (lsb:httpd): Started node1

    Allocation scores:
    group_color: httpgroup allocation score on node1: 0
    group_color: httpgroup allocation score on node2: 0
    group_color: vip allocation score on node1: 100
    group_color: vip allocation score on node2: 50
    group_color: web allocation score on node1: 100
    group_color: web allocation score on node2: 50
    native_color: web allocation score on node1: 200
    native_color: web allocation score on node2: 100
    native_color: vip allocation score on node1: 400
    native_color: vip allocation score on node2: 150

    The preference of the whole resource group can also be adjusted as a unit, as follows:
    pcs constraint location httpgroup prefers node2=100
    pcs constraint location httpgroup prefers node1=200

    [root@node1 ~]# pcs constraint
    Location Constraints:
      Resource: httpgroup
        Enabled on: node1 (score:200)
        Enabled on: node2 (score:100)
      Resource: vip
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)
      Resource: web
        Enabled on: node1 (score:100)
        Enabled on: node2 (score:50)
    Ordering Constraints:
      start vip then start web (kind:Mandatory)
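
Finally, failover can be exercised without editing any constraints by putting a node in standby; the exact subcommand depends on the pcs version:

pcs cluster standby node1     # pcs 0.9.x (on pcs 0.10+: pcs node standby node1)
pcs status                    # httpgroup should now be running on node2
pcs cluster unstandby node1   # node1 returns; resources move back per the location scores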
