Corosync defines, through a simple configuration file, how messages are passed and over which protocol. It is relatively new software, released in 2008, but it is not truly new: back in 2002 there was a project called OpenAIS which, having grown too large, was split into two sub-projects; the part that implements HA heartbeat message transport is Corosync, and roughly 60% of its code comes from OpenAIS. Corosync provides complete HA messaging, but to do more, and more complex, things you still need OpenAIS. Corosync is the direction development is heading, and new projects generally adopt it; hb_gui offers good graphical HA management, and the corresponding graphical tooling in RHCS is the luci + ricci suite.
Pacemaker is an open-source high-availability cluster resource manager (CRM). In the HA stack it sits at the resource-management / resource-agent (RA) layer; it cannot deliver the underlying heartbeat itself, so to communicate with the peer node it relies on the lower messaging layer to carry its information. It is usually combined with corosync in one of two ways:
The resource-management layer (pacemaker) arbitrates which node is active and handles IP address migration and the local resource management system; the messaging layer (heartbeat, corosync) carries the heartbeat; the Resource Agent (think of it as a service script) starts, stops, and reports the status of a service. Several different services may run across several nodes; the remaining two standby nodes form the failover domain. Which node is the primary is only relative, and so is the third-party arbiter. Vote system: the majority wins. When a node fails and its resources move to a standby node, that is failover; when the failed node has been repaired and the resources move back to it, that is failback.
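The vote system above can be sketched as a tiny shell function (a hypothetical illustration, not a pacemaker command): a partition keeps quorum only when it holds strictly more than half of all votes, which is also why a plain two-node cluster needs special handling (the `two_node` corosync option seen later in this walkthrough).

```shell
# Sketch of the majority-vote ("vote system") rule, assuming one vote per node.
has_quorum() {
  local votes_held=$1 total_votes=$2
  # quorum = strictly more than half of all votes
  [ $(( votes_held * 2 )) -gt "$total_votes" ]
}

has_quorum 2 3 && echo "2 of 3 votes: quorum"       # majority keeps quorum
has_quorum 1 3 || echo "1 of 3 votes: no quorum"    # minority loses quorum
has_quorum 1 2 || echo "1 of 2 votes: no quorum"    # why two-node clusters are special
```

With two nodes, a single survivor holds exactly half the votes and would lose quorum under this rule; corosync's `two_node: 1` setting exists to relax it.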
CRM: cluster resource manager, i.e. pacemaker (literally "heart pacemaker"). Every node runs a crmd daemon (5560/tcp). The command-line front ends crmsh (from the heartbeat v3 lineage) and pcs (introduced by Red Hat) edit the XML configuration so that crmd can recognize it and manage the resource services accordingly; in that sense crmsh and pcs are equivalent.
Resource Agent classes include OCF (Open Cluster Framework).
primitive: a primary resource, of which exactly one instance runs in the cluster. clone: a cloned resource, of which multiple instances may run across the cluster. Every resource has a priority.
INFINITY + (-INFINITY) = -INFINITY. Hostnames must match the names that DNS (or /etc/hosts) resolves.
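"INFINITY plus negative infinity equals negative infinity" refers to pacemaker's constraint-score arithmetic, where INFINITY is defined as the value 1000000 and -INFINITY dominates any sum. A hedged sketch of that rule (`score_add` is an illustrative name of mine, not a real tool):

```shell
# Pacemaker-style score addition: -INFINITY dominates any sum,
# then +INFINITY saturates; only finite scores add normally.
INF=1000000
score_add() {
  local a=$1 b=$2
  if [ "$a" -le "$(( -INF ))" ] || [ "$b" -le "$(( -INF ))" ]; then
    echo "$(( -INF ))"
  elif [ "$a" -ge "$INF" ] || [ "$b" -ge "$INF" ]; then
    echo "$INF"
  else
    echo "$(( a + b ))"
  fi
}

score_add "$INF" "$(( -INF ))"   # prints -1000000: -INFINITY wins
```

This is why pinning a resource away from a node with a -INFINITY location constraint beats any positive preference it has elsewhere.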
Host IP | Hostname | Installed components |
---|---|---|
192.168.8.71 | C7-2 | corosync+pacemaker+pcsd+crmsh |
192.168.8.72 | C7-3 | corosync+pacemaker+pcsd+crmsh |
[root@C7-2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.72 C7-3
192.168.8.71 C7-2
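Corosync and pcs will address the nodes by the names C7-2 and C7-3, so both hosts files must map each name to the address the peer will actually dial. A minimal sketch of that lookup, with the table inlined for illustration (on a real node you would query /etc/hosts, e.g. via getent):

```shell
# Name -> address lookup over an inline copy of the hosts entries above.
hosts_table="192.168.8.71 C7-2
192.168.8.72 C7-3"

resolve() {
  echo "$hosts_table" | awk -v name="$1" '$2 == name { print $1 }'
}

resolve C7-2   # prints 192.168.8.71
resolve C7-3   # prints 192.168.8.72
```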
[root@C7-2 ~]# ssh-keygen
[root@C7-2 ~]# ssh-copy-id -i /root/.ssh/id_rsa root@C7-3
[root@C7-3 ~]# ssh-keygen
[root@C7-3 ~]# ssh-copy-id -i /root/.ssh/id_rsa root@C7-2
[root@C7-2 ~]# hwclock -s    # set system time from the hardware clock so the nodes' clocks agree
[root@C7-3 ~]# hwclock -s
[root@C7-2 ~]# yum install corosync pacemaker -y    // the stock CentOS repos are enough; installing only pcs also works
[root@C7-2 ~]# cd /etc/corosync
[root@C7-2 corosync]# cp corosync.conf.example corosync.conf
[root@C7-2 corosync]# vim corosync.conf
# modify the following line
bindnetaddr: 192.168.8.0    # change to the network segment your machines are on
# add the following section
service {
    ver: 0                  # 0 means corosync starts pacemaker itself
    name: pacemaker         # enable pacemaker
}
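Before starting corosync it is worth grepping the file for the two settings this walkthrough depends on, since a typo such as `var:` instead of `ver:` is easy to make. A sketch over an inline copy of the fragment (on a real node, point grep at /etc/corosync/corosync.conf instead; the scratch path is illustrative):

```shell
# Validate the fragment's two key settings against a scratch copy.
cat > /tmp/corosync-fragment <<'EOF'
bindnetaddr: 192.168.8.0
service {
    ver: 0
    name: pacemaker
}
EOF

grep -q 'bindnetaddr: 192.168.8.0' /tmp/corosync-fragment && echo "bind network ok"
grep -q 'ver: 0' /tmp/corosync-fragment && echo "service block ok"
```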
[root@C7-2 corosync]# mv /dev/{random,random.bak}      # corosync-keygen reads /dev/random, which can block waiting for entropy
[root@C7-2 corosync]# ln -s /dev/urandom /dev/random   # point it at urandom so key generation does not stall
[root@C7-2 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@C7-2 corosync]# scp corosync.conf authkey root@C7-3:/etc/corosync/
# copy to the other node as well; mind the hostname
[root@C7-2 corosync]# systemctl start corosync
[root@C7-2 corosync]# yum -y install pcs
[root@C7-2 corosync]# systemctl start pcsd
[root@C7-2 corosync]# useradd -s /usr/sbin/nologin hacluster
[root@C7-2 corosync]# echo "passw0rd"|passwd --stdin hacluster
[root@C7-3 corosync]# yum -y install pcs
[root@C7-3 corosync]# systemctl start pcsd
[root@C7-3 corosync]# useradd -s /usr/sbin/nologin hacluster
[root@C7-3 corosync]# echo "passw0rd"|passwd --stdin hacluster
-- make sure both nodes are up before doing the rest -- (run on both nodes)
[root@C7-2 corosync]# pcs cluster auth C7-3 C7-2
Username: hacluster
Password:
C7-3: Authorized
C7-2: Authorized
[root@C7-3 corosync]# pcs cluster auth C7-2 C7-3
Username: hacluster
Password:
C7-2: Authorized
C7-3: Authorized
[root@C7-2 corosync]# pcs cluster setup --name mycluster C7-2 C7-3 --force
# If this errors out, check whether pacemaker is running: start it if it is not,
# and restarting the service usually clears the problem.
After running the command above, check the config file on C7-3:
[root@C7-3 corosync]# cat corosync.conf    // after the cluster-setup command runs, a new config file is generated on each node
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: C7-2
        nodeid: 1
    }

    node {
        ring0_addr: C7-3
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
Run the following on both machines:
[root@C7-2 ~]# pcs cluster start
[root@C7-2 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: C7-2 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Mon Feb 25 08:19:04 2019
Last change: Sun Feb 24 21:58:37 2019 by root via cibadmin on C7-3
3 nodes configured
0 resources configured
PCSD Status:
C7-2: Online
C7-3: Online
[root@C7-3 ~]# pcs cluster start
[root@C7-3 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: C7-2 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Mon Feb 25 08:20:21 2019
Last change: Sun Feb 24 21:58:37 2019 by root via cibadmin on C7-3
3 nodes configured
0 resources configured
PCSD Status:
C7-2: Online
C7-3: Online
[root@C7-2 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.8.71
status = ring 0 active with no faults
[root@C7-3 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.8.72
status = ring 0 active with no faults
[root@C7-2 corosync]# corosync-cmapctl |grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.8.71)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.8.72)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 3
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
[root@C7-3 ~]# corosync-cmapctl |grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.8.71)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.8.72)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
[root@C7-2 ~]# crm_verify -L -V    ## crm_verify checks the current cluster configuration for errors
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
# If you hit this error, run the command below; the message above already explains it clearly.
[root@C7-2 ~]# pcs property set stonith-enabled=false
[root@C7-2 corosync]# cd /etc/yum.repos.d/
[root@C7-2 yum.repos.d]# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/network:ha-clustering:Stable.repo
[root@C7-2 ~]# yum -y install httpd
[root@C7-2 ~]# systemctl start httpd
[root@C7-2 ~]# echo "corosync pacemaker on the openstack" > /var/www/html/index.html
[root@C7-3 corosync]# cd /etc/yum.repos.d/
[root@C7-3 yum.repos.d]# wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/network:ha-clustering:Stable.repo
[root@C7-3 ~]# yum -y install httpd
[root@C7-3 ~]# systemctl start httpd
[root@C7-3 ~]# echo "corosync pacemaker on the openstack" > /var/www/html/index.html
Install httpd on both nodes. Note: you may only stop the httpd service, never restart it, and it must not be enabled at boot, because the resource manager takes over starting and stopping these services.
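A sketch of what that rule means in practice on each node (assuming systemd on CentOS 7; these commands need a live host, so take them as illustration):

```shell
# The cluster, not systemd, must own httpd's lifecycle:
systemctl stop httpd          # stopped now (the resource agent starts it where needed)
systemctl disable httpd       # must not start at boot
systemctl is-enabled httpd    # should report "disabled"
```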
[root@C7-2 ~]# crm
crm(live)# status    ## all nodes must be online before running the commands below
crm(live)# ra
crm(live)ra# list systemd
httpd
crm(live)ra# help info
crm(live)ra# classes
crm(live)ra# cd
crm(live)# configure
crm(live)configure# help primitive
[root@C7-3 ~]# crm
crm(live)# status    ## all nodes must be online before running the commands below
crm(live)# ra
crm(live)ra# list systemd
httpd
crm(live)ra# help info
crm(live)ra# classes
crm(live)ra# cd
crm(live)# configure
crm(live)configure# help primitive
crm(live)ra# classes
crm(live)ra# list ocf          ## ocf is one of the classes
crm(live)ra# info ocf:IPaddr   ## IPaddr is the resource agent; heartbeat is its provider (full name ocf:heartbeat:IPaddr)
crm(live)ra# cd ../
crm(live)# configure
crm(live)configure# primitive WebIP ocf:IPaddr params ip=192.168.8.100    # the VIP; this highly available IP floats between nodes automatically
crm(live)configure# show
node 1: C7-2
node 2: C7-3
primitive WebIP IPaddr \
params ip=192.168.8.100
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.13-10.el7-44eb2dd \
cluster-infrastructure=corosync \
cluster-name=mycluster \
stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd ../
crm(live)# status
WebIP (ocf::heartbeat:IPaddr): Stopped
# the above adds the WebIP resource
crm(live)# configure
crm(live)configure# primitive WebServer systemd:httpd    ## systemd is one of the classes shown by the classes command
crm(live)configure# verify
WARNING: WebServer: default timeout 20s for start is smaller than the advised 100
WARNING: WebServer: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# commit
# the above adds the WebServer resource
crm(live)configure# help group
crm(live)configure# group WebService WebIP WebServer    ## order matters: WebServer runs wherever the IP is
crm(live)configure# verify
WARNING: WebServer: default timeout 20s for start is smaller than the advised 100
WARNING: WebServer: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# commit
# the above groups WebIP and WebServer into one resource group
crm(live)# node standby    ## put the current node into standby (run from the top level, not the configure sub-level)
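To finish, it is natural to verify the failover the group was built for. A hypothetical test run (the VIP 192.168.8.100 comes from the WebIP resource above; these commands need the live cluster, so they are illustrative only):

```shell
# On the currently active node: push it to standby and watch resources move.
crm node standby              # this node stops hosting resources
crm status                    # the WebService group should now run on the peer
curl http://192.168.8.100/    # the page is still served, now by the peer node
crm node online               # rejoin; whether resources fail back depends on stickiness
```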
Reference: https://www.cnblogs.com/yue-hong/p/7988821.html