Linux study notes: building a high-availability cluster with pacemaker

1. Passwordless SSH trust between the nodes

[root@vm1 ~]# ssh-keygen 
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
[root@vm1 ~]# ssh-copy-id vm1.example.com
[root@vm1 ~]# scp -r .ssh/ vm2.example.com:
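A quick optional check (not part of the original steps) confirms the passwordless trust works in both directions before going on:

[root@vm1 ~]# ssh vm2.example.com hostname
vm2.example.com
[root@vm2 ~]# ssh vm1.example.com hostname
vm1.example.com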

2. Install pacemaker; the crm command is not included, so also install crmsh and pssh

lftp i:~> get pub/update/crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pub/update/pssh-2.3.1-2.1.x86_64.rpm 
[root@vm1 ~]# yum localinstall crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm -y
[root@vm1 ~]# cd /etc/corosync/
[root@vm1 corosync]# cp corosync.conf.example corosync.conf
[root@vm1 corosync]# vim corosync.conf    edit the config file, changing/adding the following
                bindnetaddr: 192.168.2.0
                mcastaddr: 226.94.1.1
                mcastport: 8795
service {
        name: pacemaker
        ver: 0    run pacemaker as a corosync plugin, started automatically
}
[root@vm1 corosync]# scp corosync.conf vm2.example.com:/etc/corosync/
[root@vm1 ~]# /etc/init.d/corosync start    start corosync on both nodes
[root@vm2 ~]# /etc/init.d/corosync start
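If corosync came up cleanly, the ring status can be checked on either node (an extra sanity check, not in the original walkthrough; the local id shown will be that node's own address):

[root@vm1 ~]# corosync-cfgtool -s
Printing ring status.
...
	status	= ring 0 active with no faults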

[root@vm1 ~]# crm node show
vm1.example.com: normal
vm2.example.com: normal
[root@vm1 ~]# crm configure show
node vm1.example.com
node vm2.example.com
property $id="cib-bootstrap-options" \
	dc-version="1.1.10-14.el6-368c726" \
	cluster-infrastructure="classic openais (with plugin)" \
	expected-quorum-votes="2"
[root@vm1 ~]# crm_verify -LV    the check reports errors at this point (no STONITH configured yet)

[root@vm1 ~]# crm
crm(live)# configure  
crm(live)configure# property stonith-enabled=false    power fencing; it will be added later
crm(live)configure# commit    commit (only needed on one node); re-running the check now reports no errors
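Back at the shell prompt, the verification can be repeated to confirm it is clean:

[root@vm1 ~]# crm_verify -LV    no output this time, i.e. no errors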

3. Add a virtual IP

crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.2.213 cidr_netmask=24 op monitor interval=30s
crm(live)configure# property no-quorum-policy=ignore    ignore loss of quorum (needed for a two-node cluster)
crm(live)configure# commit 
Stop corosync on vm1 and run crm status on vm2 (you can keep it running under watch):
Online: [ vm2.example.com ]
OFFLINE: [ vm1.example.com ]

 vip    (ocf::heartbeat:IPaddr2): Started vm2.example.com 
The VIP has failed over to vm2. Bringing vm1 back up does not move it back.
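On the node currently holding the resource, the VIP shows up as a secondary address (interface name eth0 is assumed here):

[root@vm2 ~]# ip addr show | grep 192.168.2.213
    inet 192.168.2.213/24 brd 192.168.2.255 scope global secondary eth0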

Add the HTTP service
[root@vm1 ~]# vim /etc/httpd/conf/httpd.conf    enable the local server-status check (around line 921)

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Both nodes need this change.
crm(live)configure# primitive website ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
crm(live)configure# colocation website-with-vip inf: website vip    make sure both resources run on the same node
crm(live)configure# commit 
After this the site is reachable at 192.168.2.213, e.g. links -dump http://192.168.2.213
crm(live)configure# location master-node website 10: vm1.example.com    set a preferred location; when this node recovers, the resource moves back to it
crm(live)configure# commit 
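With both constraints committed, crm status should show the two resources together on the preferred node (output abbreviated):

[root@vm1 ~]# crm status
...
Online: [ vm1.example.com vm2.example.com ]

 vip	(ocf::heartbeat:IPaddr2):	Started vm1.example.com
 website	(ocf::heartbeat:apache):	Started vm1.example.com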

Suppose eth0 on vm1 goes down: vm2 takes over. When vm1's NIC comes back up, split-brain can occur, i.e. both vm1 and vm2 run the VIP and httpd, each believing the other has lost connectivity. To prevent this, add a fence (STONITH) device that can act directly on the power switch.

4. Add fence_xvm on the physical host. Important: disable the firewall and SELinux on the host, otherwise adding the fence device will time out.

[root@ankse cluster]# yum install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast -y
[root@ankse cluster]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]: 

Listener module [multicast]: 
Multicast IP Address [225.0.0.12]: 
Multicast IP Port [1229]: 
Interface [br100]: 
Key File [/etc/cluster/fence_xvm.key]: 
Backend module [libvirt]: 
Libvirt URI [qemu:///system]: 
[root@ankse cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
[root@ankse cluster]# scp fence_xvm.key 192.168.2.199:/etc/cluster/
[root@ankse cluster]# scp fence_xvm.key 192.168.2.202:/etc/cluster/
[root@ankse cluster]# /etc/init.d/fence_virtd start

Install on the virtual machines (the cluster nodes):
[root@vm1 ~]# yum install fence-virt.x86_64 fence-agents.x86_64 -y
[root@vm1 ~]# stonith_admin -I
 fence_xvm
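Before wiring fence_xvm into the cluster it is worth testing the key and the multicast path from a node; listing the guests should work from both vm1 and vm2 (UUID column omitted here):

[root@vm1 ~]# fence_xvm -o list
vm1	...	on
vm2	...	on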
Then add the fence resource:
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="vm1.example.com:vm1;vm2.example.com:vm2" op monitor interval=60s
crm(live)configure# property stonith-enabled=true    enable power fencing
crm(live)configure# commit 
[root@vm1 ~]# stonith_admin -L
 vmfence
1 devices found
Then restart corosync and take eth0 down to test fencing; the fenced host is rebooted straight away.
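Fencing can also be triggered by hand from a node, without breaking the network, to confirm the pcmk_host_map is correct (this really reboots the target node, so only do it on a test system):

[root@vm1 ~]# stonith_admin --reboot vm2.example.com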

5. Add iSCSI-backed storage

[root@ankse ~]# yum install scsi-target-utils.x86_64 -y
[root@ankse ~]# lvcreate -L 1G -n iscsi cinder-volumes
[root@ankse ~]# vim /etc/tgt/targets.conf    define the target (the IQN below is only an example name; the original name was not shown)

<target iqn.2014-08.com.example:iscsi>
    backing-store /dev/cinder-volumes/iscsi
    initiator-address 192.168.2.199
    initiator-address 192.168.2.202
</target>

[root@ankse ~]# /etc/init.d/tgtd start
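The exported target can be checked on the storage host before touching the initiators (output abbreviated; the target name will match whatever IQN was configured above):

[root@ankse ~]# tgt-admin -s
Target 1: iqn.2014-08.com.example:iscsi
    ...
    LUN: 1
        Type: disk
        ...
        Backing store path: /dev/cinder-volumes/iscsi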

Configure the two client (initiator) nodes:
[root@vm1 ~]# yum install iscsi-initiator-utils.x86_64 -y
[root@vm1 ~]# iscsiadm -m discovery -t st -p 192.168.2.168
[root@vm1 ~]# iscsiadm -m node -l
[root@vm1 ~]# fdisk -cu /dev/sda 
Command (m for help): n
Partition number (1-4): 1
Command (m for help): w
[root@vm1 ~]# mkfs.ext4 /dev/sda1 

[root@vm2 ~]# yum install parted-2.1-21.el6.x86_64 -y    install on node 2, used to re-read the partition table
[root@vm2 ~]# partprobe    after this, sda1 is visible on vm2 as well
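It can help to put a test page on the shared filesystem now, so there is something to serve once Pacemaker mounts it at /var/www/html (an extra step, not in the original; the page content is arbitrary):

[root@vm1 ~]# mount /dev/sda1 /mnt
[root@vm1 ~]# echo "pacemaker cluster test page" > /mnt/index.html
[root@vm1 ~]# umount /mnt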

Add the resources:
crm(live)configure# primitive webdata ocf:heartbeat:Filesystem params device=/dev/sda1 directory=/var/www/html fstype=ext4 op monitor interval=30s
crm(live)configure# delete master-node 
crm(live)configure# group webgrp vip webdata website 
crm(live)configure# location master-node webgrp 10: vm1.example.com
crm(live)configure# commit
Test that failover between the two nodes works correctly:
crm(live)# node standby    check that everything fails over to vm2 cleanly; the shared device is mounted there
crm(live)# node online    check that everything moves back to vm1; the device is mounted there again
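Once the node is back online, the final status should look roughly like this, with the whole group on the preferred node (the stonith resource may end up on either node):

crm(live)# status
...
Online: [ vm1.example.com vm2.example.com ]

 Resource Group: webgrp
     vip	(ocf::heartbeat:IPaddr2):	Started vm1.example.com
     webdata	(ocf::heartbeat:Filesystem):	Started vm1.example.com
     website	(ocf::heartbeat:apache):	Started vm1.example.com
 vmfence	(stonith:fence_xvm):	Started vm2.example.com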

