CentOS 7 corosync + pacemaker cluster setup
node1: co1 11.100.46.4
node2: co2 11.100.46.7
node3: co3 11.100.46.9
webservice:
vip: 11.100.46.11 ocf:heartbeat:IPaddr2
web: systemd (httpd)
nfs: ocf:heartbeat:Filesystem 11.100.46.13
mysql:
mysql-server: systemd (mysqld)
vip: 11.100.46.17 ocf:heartbeat:IPaddr2
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
# echo '11.100.46.4 co1' >> /etc/hosts
# echo '11.100.46.7 co2' >> /etc/hosts
# echo '11.100.46.9 co3' >> /etc/hosts
# scp /etc/hosts [email protected]:/etc/
# scp /etc/hosts [email protected]:/etc/
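An optional sanity check from co1 that passwordless SSH and hostname resolution both work before going further:
# ssh co2 'hostname';ssh co3 'hostname'
# getent hosts co1 co2 co3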
co1:
# systemctl stop firewalld.service
# systemctl disable firewalld
# mv -v /etc/selinux/config{,.bak}
# cat /etc/selinux/config.bak | sed 's/SELINUX=enforcing/SELINUX=disabled/' > /etc/selinux/config
# setenforce Permissive
# getenforce
# yum install iptables-services
# iptables -F
# iptables -X
# iptables -L -n
# iptables-save > /etc/sysconfig/iptables
# systemctl restart iptables
# echo 'co1' > /etc/hostname
# hostname co1
co3 (run the same steps on co2 as well):
systemctl stop firewalld.service
systemctl disable firewalld
mv -v /etc/selinux/config{,.bak}
cat /etc/selinux/config.bak | sed 's/SELINUX=enforcing/SELINUX=disabled/' > /etc/selinux/config
setenforce Permissive
getenforce
yum -y install iptables-services
iptables -F
iptables -X
iptables -L -n
iptables-save > /etc/sysconfig/iptables
systemctl restart iptables
echo 'co3' > /etc/hostname
hostname co3
# yum -y install ntp
# mv /etc/ntp.conf{,.bak} -v
# cat /etc/ntp.conf.bak | sed '/^server.*$/d' | sed '/^#broadcast 192.168.1.255/i server 172.16.31.125' > /etc/ntp.conf
# echo 'server 172.16.31.125' >> /etc/ntp/step-tickers
# scp /etc/ntp.conf root@co2:/etc/;scp /etc/ntp.conf root@co3:/etc/
# scp /etc/ntp/step-tickers root@co2:/etc/ntp/;scp /etc/ntp/step-tickers root@co3:/etc/ntp
# ssh co2 'yum -y install ntp';ssh co3 'yum -y install ntp'
# systemctl start ntpd;ssh co2 'systemctl start ntpd';ssh co3 'systemctl start ntpd'
# date;ssh co2 date;ssh co3 date
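Once ntpd has had a minute or two to sync, an optional check on each node that 172.16.31.125 is actually being used as the time source (the selected peer is marked with an asterisk):
# ntpq -p;ssh co2 'ntpq -p';ssh co3 'ntpq -p'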
2. Install and verify corosync
# yum -y install corosync;ssh co2 'yum -y install corosync';ssh co3 'yum -y install corosync'
# vim /etc/corosync/corosync.conf
Add the following configuration:
totem {
version: 2
crypto_cipher: aes128
crypto_hash: sha1
interface {
ringnumber: 0
bindnetaddr: 11.100.46.0
mcastaddr: 239.185.1.31
mcastport: 5405
ttl: 1
}
transport: udpu
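# note: with transport: udpu, corosync uses unicast UDP to the addresses in the nodelist below, so the mcastaddr above is effectively unused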
}
logging {
fileline: off
to_logfile: yes
to_stderr: no
to_syslog: no
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}
nodelist {
node {
ring0_addr: 11.100.46.4
nodeid: 1
}
node {
ring0_addr: 11.100.46.7
nodeid: 2
}
node {
ring0_addr: 11.100.46.9
nodeid: 3
}
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
}
# scp /etc/corosync/corosync.conf root@co2:/etc/corosync/;scp /etc/corosync/corosync.conf root@co3:/etc/corosync/
Generate and distribute the authentication key:
# corosync-keygen
# scp /etc/corosync/authkey root@co2:/etc/corosync/;scp /etc/corosync/authkey root@co3:/etc/corosync/
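Before starting corosync, an optional check that the key reached every node (corosync-keygen writes a 128-byte key that should be readable by root only):
# ls -l /etc/corosync/authkey;ssh co2 'ls -l /etc/corosync/authkey';ssh co3 'ls -l /etc/corosync/authkey'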
# systemctl start corosync;ssh co2 'systemctl start corosync';ssh co3 'systemctl start corosync'
Check that corosync is healthy:
# corosync-cfgtool -s
// "no faults" means the ring is healthy
Printing ring status.
Local node ID 1
RING ID 0
id = 11.100.46.4
status = ring 0 active with no faults
# tail -f /var/log/cluster/corosync.log
// watch the log
# corosync-cmapctl | grep members
// you should see all three member nodes
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(11.100.46.4)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(11.100.46.7)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(11.100.46.9)
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined
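corosync-quorumtool gives a quorum-level summary of the same membership; with all three nodes up it should report 3 total votes and "Quorate: Yes":
# corosync-quorumtool -s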
# systemctl restart corosync;ssh co2 'systemctl restart corosync';ssh co3 'systemctl restart corosync'
3. Configure pacemaker
# yum -y install pacemaker;ssh co2 'yum -y install pacemaker';ssh co3 'yum -y install pacemaker'
# vim /etc/sysconfig/pacemaker
// enable logging to a file
PCMK_logfile=/var/log/pacemaker.log
# scp /etc/sysconfig/pacemaker root@co2:/etc/sysconfig/;scp /etc/sysconfig/pacemaker root@co3:/etc/sysconfig/
# systemctl start pacemaker;ssh co2 'systemctl start pacemaker';ssh co3 'systemctl start pacemaker'
# systemctl status pacemaker;ssh co2 'systemctl status pacemaker';ssh co3 'systemctl status pacemaker'
// check that all pacemaker daemons are running
# ps aux | grep pacemaker
root 9534 0.0 0.8 132652 8336 ? Ss 04:31 0:00 /usr/sbin/pacemakerd -f
haclust+ 9536 0.1 1.6 135324 15364 ? Ss 04:31 0:00 /usr/libexec/pacemaker/cib
root 9537 0.0 0.8 135604 7900 ? Ss 04:31 0:00 /usr/libexec/pacemaker/stonithd
root 9538 0.0 0.5 105092 5012 ? Ss 04:31 0:00 /usr/libexec/pacemaker/lrmd
haclust+ 9539 0.0 0.8 126920 7628 ? Ss 04:31 0:00 /usr/libexec/pacemaker/attrd
haclust+ 9540 0.0 2.2 153104 20864 ? Ss 04:31 0:00 /usr/libexec/pacemaker/pengine
haclust+ 9541 0.1 1.2 186360 11560 ? Ss 04:31 0:00 /usr/libexec/pacemaker/crmd
# crm_mon
// the DC is co1, and co1, co2 and co3 are online
Stack: corosync
Current DC: co1 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Tue May 16 17:37:42 2017 Last change: Tue May 16 04:31:34 2017 by hacluster via crmd on co1
3 nodes and 0 resources configured
Online: [ co1 co2 co3 ]
No active resources
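For a one-shot, non-interactive view of the same status (useful over ssh or in scripts), crm_mon can be run with -1:
# crm_mon -1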
4. Install crmsh (only needed on one node)
# wget http://172.16.31.125/soft/crmsh/{crmsh-3.0.0-2.2.noarch.rpm,crmsh-scripts-3.0.0-2.2.noarch.rpm,python-parallax-1.0.1-29.1.noarch.rpm}
# yum -y install ./*.rpm
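An optional check that the packages actually installed:
# rpm -q crmsh crmsh-scripts python-parallax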
# crm
crm(live)# ra
crm(live)# configure
crm(live)configure# show
node 1: co1
node 2: co2
node 3: co3
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.15-11.el7-e174ec8 \
cluster-infrastructure=corosync
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
crm(live)ra# list ocf heartbeat
IPaddr2 // configures the address with the ip command
IPaddr // configures the address with ifconfig
// on CentOS 7 you must use IPaddr2
5. Configure the NFS server and Apache
# yum -y install rpcbind nfs-utils
# systemctl enable rpcbind
# systemctl start rpcbind
# rpcinfo
# mkdir /htdocs/www -pv
# groupadd -r -g 888 apache
# useradd -M -u 888 -g 888 -s /sbin/nologin -r apache
# chmod -R u=rwx,g=rx,o=rx /htdocs/www/
# chown -R apache:apache /htdocs/www/
# echo '/htdocs/www 11.100.46.7(rw,all_squash,anonuid=888,anongid=888) 11.100.46.4(rw,all_squash,anonuid=888,anongid=888) 11.100.46.9(rw,all_squash,anonuid=888,anongid=888)' > /etc/exports
# service nfs start
# exportfs -ar
Test the export:
# showmount -e
Export list for localhost.localdomain:
/htdocs/www 11.100.46.9,11.100.46.4,11.100.46.7
# mkdir -pv /htdocs/www;ssh co2 'mkdir -pv /htdocs/www';ssh co3 'mkdir -pv /htdocs/www'
# yum -y install nfs-utils;ssh co2 'yum -y install nfs-utils';ssh co3 'yum -y install nfs-utils'
# mount -t nfs 11.100.46.13:/htdocs/www /htdocs/www/;ssh co2 'mount -t nfs 11.100.46.13:/htdocs/www /htdocs/www/';ssh co3 'mount -t nfs 11.100.46.13:/htdocs/www /htdocs/www/'
# ls /htdocs/www/;ssh co2 'ls /htdocs/www/';ssh co3 'ls /htdocs/www/'
# echo ‘
# yum -y install httpd;ssh co2 'yum -y install httpd';ssh co3 'yum -y install httpd'
# mv -v /etc/httpd/conf/httpd.conf{,.bak}
# systemctl start httpd;ssh co2 'systemctl start httpd';ssh co3 'systemctl start httpd'
# cat /etc/httpd/conf/httpd.conf.bak | sed 's/Directory "\/var\/www"/Directory "\/htdocs\/www"/' | sed 's/DocumentRoot "\/var\/www\/html"/DocumentRoot "\/htdocs\/www\/"/g' > /etc/httpd/conf/httpd.conf
# scp /etc/httpd/conf/httpd.conf root@co2:/etc/httpd/conf/httpd.conf;scp /etc/httpd/conf/httpd.conf root@co3:/etc/httpd/conf/httpd.conf
# systemctl restart httpd;ssh co2 'systemctl restart httpd';ssh co3 'systemctl restart httpd'
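While httpd is running and the export is still mounted, an optional local check on each node (this assumes a test page such as an index.html was placed in /htdocs/www on the NFS server):
# curl -s http://127.0.0.1/;ssh co2 'curl -s http://127.0.0.1/';ssh co3 'curl -s http://127.0.0.1/'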
# umount /htdocs/www/;ssh co2 'umount /htdocs/www/';ssh co3 'umount /htdocs/www/'
# systemctl stop httpd;ssh co2 'systemctl stop httpd';ssh co3 'systemctl stop httpd'
# ls /htdocs/www/;ssh co2 'ls /htdocs/www/';ssh co3 'ls /htdocs/www/'
6. Configure MySQL and the NFS server
1. Configure the NFS filesystem
nfs-server 11.100.46.13:
# groupadd -r -g 889 mysql
# useradd -M -u 889 -g 889 -s /sbin/nologin -r mysql
# mkdir -pv /mydata/data
# chmod -R u=rwx,g=rx,o=rx /mydata/data
# chown -R mysql:mysql /mydata/data
# setfacl -m u:root:rwx /mydata/data
# echo '/mydata/data 11.100.46.7(rw,all_squash,anonuid=889,anongid=889) 11.100.46.4(rw,all_squash,anonuid=889,anongid=889) 11.100.46.9(rw,all_squash,anonuid=889,anongid=889)' >> /etc/exports
# exportfs -ar
Test the export:
# showmount -e
co1:
# groupadd -r -g 889 mysql
# useradd -M -u 889 -g 889 -s /sbin/nologin -r mysql
# mkdir -pv /mydata/data
# chmod -R u=rwx,g=rx,o=rx /mydata/data
# chown -R mysql:mysql /mydata/data
# setfacl -m u:root:rwx /mydata/data
# ssh co2 'groupadd -r -g 889 mysql';ssh co3 'groupadd -r -g 889 mysql'
# ssh co2 'useradd -M -u 889 -g 889 -s /sbin/nologin -r mysql';ssh co3 'useradd -M -u 889 -g 889 -s /sbin/nologin -r mysql'
# ssh co2 'mkdir -pv /mydata/data';ssh co3 'mkdir -pv /mydata/data'
# ssh co2 'chmod -R u=rwx,g=rx,o=rx /mydata/data';ssh co3 'chmod -R u=rwx,g=rx,o=rx /mydata/data'
# ssh co2 'chown -R mysql:mysql /mydata/data';ssh co3 'chown -R mysql:mysql /mydata/data'
2. Install mysql-server
# yum -y install wget;ssh co2 'yum -y install wget';ssh co3 'yum -y install wget'
# wget http://172.16.31.125/soft/mariadb-5.5.54-linux-x86_64.tar.gz;ssh co2 'wget http://172.16.31.125/soft/mariadb-5.5.54-linux-x86_64.tar.gz';ssh co3 'wget http://172.16.31.125/soft/mariadb-5.5.54-linux-x86_64.tar.gz'
# tar xvf mariadb-5.5.54-linux-x86_64.tar.gz -C /usr/local/;ssh co2 'tar xf mariadb-5.5.54-linux-x86_64.tar.gz -C /usr/local/';ssh co3 'tar xf mariadb-5.5.54-linux-x86_64.tar.gz -C /usr/local/'
# ln -sv /usr/local/mariadb-5.5.54-linux-x86_64 /usr/local/mysql;ssh co2 'ln -sv /usr/local/mariadb-5.5.54-linux-x86_64 /usr/local/mysql';ssh co3 'ln -sv /usr/local/mariadb-5.5.54-linux-x86_64 /usr/local/mysql'
Initialize the database:
# cd /usr/local/mysql && scripts/mysql_install_db --datadir=/mydata/data --user=mysql
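If initialization succeeds, the system databases should be visible in the data directory (a quick, optional check):
# ls /mydata/data
// expect at least the mysql, performance_schema and test directories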
Add a systemd unit file:
cat > /usr/lib/systemd/system/mysqld.service << EOF
#
# Simple MySQL systemd service file
#
# systemd supports lots of fancy features, look here (and linked docs) for a full list:
# http://www.freedesktop.org/software/systemd/man/systemd.exec.html
#
# Note: this file ( /usr/lib/systemd/system/mysql.service )
# will be overwritten on package upgrade, please copy the file to
#
# /etc/systemd/system/mysql.service
#
# to make needed changes.
#
# systemd-delta can be used to check differences between the two mysql.service files.
#
[Unit]
Description=MySQL Community Server
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
Alias=mysql.service
[Service]
User=mysql
Group=mysql
# Execute pre and post scripts as root
PermissionsStartOnly=true
# Needed to create system tables etc.
#ExecStartPre=/usr/bin/mysql-systemd-start pre
# Start main service
ExecStart=/usr/local/mysql/bin/mysqld_safe
# Don't signal startup success before a ping works
#ExecStartPost=/usr/bin/mysql-systemd-start post
# Give up if ping doesn't get an answer
TimeoutSec=600
Restart=always
PrivateTmp=false
EOF
# scp /usr/lib/systemd/system/mysqld.service root@co2:/usr/lib/systemd/system/;scp /usr/lib/systemd/system/mysqld.service root@co3:/usr/lib/systemd/system/
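systemd needs a daemon-reload on every node before it will see the new unit, so reload it prior to enabling or starting mysqld:
# systemctl daemon-reload;ssh co2 'systemctl daemon-reload';ssh co3 'systemctl daemon-reload'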
Create the configuration file:
# mkdir -pv /etc/mysql;ssh co2 'mkdir -pv /etc/mysql';ssh co3 'mkdir -pv /etc/mysql'
# cat support-files/my-large.cnf | sed '/#tmpdir/a skip_name_resolve = on' | sed '/#tmpdir/a innodb_file_per_table = on' | sed '/#tmpdir/a datadir = /mydata/data' > /etc/mysql/my.cnf
# scp /etc/mysql/my.cnf root@co2:/etc/mysql/my.cnf;scp /etc/mysql/my.cnf root@co3:/etc/mysql/my.cnf
Create the PID and log directories:
# mkdir -pv /var/log/mariadb/;ssh co2 'mkdir -pv /var/log/mariadb/';ssh co3 'mkdir -pv /var/log/mariadb/'
# chown mysql:mysql /var/log/mariadb/ -R;ssh co2 'chown mysql:mysql /var/log/mariadb/ -R';ssh co3 'chown mysql:mysql /var/log/mariadb/ -R'
# mkdir -pv /var/run/mariadb/;ssh co2 'mkdir -pv /var/run/mariadb/';ssh co3 'mkdir -pv /var/run/mariadb/'
# chown mysql:mysql /var/run/mariadb/ -R;ssh co2 'chown mysql:mysql /var/run/mariadb/ -R';ssh co3 'chown mysql:mysql /var/run/mariadb/ -R'
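Because /var/run is a tmpfs on CentOS 7, /var/run/mariadb disappears after a reboot; a minimal sketch using systemd-tmpfiles to recreate it at boot (the file name mariadb.conf is arbitrary):
# echo 'd /var/run/mariadb 0755 mysql mysql -' > /etc/tmpfiles.d/mariadb.conf
# scp /etc/tmpfiles.d/mariadb.conf root@co2:/etc/tmpfiles.d/;scp /etc/tmpfiles.d/mariadb.conf root@co3:/etc/tmpfiles.d/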
Set the PATH environment variable:
# echo 'export PATH="/usr/local/mysql/bin/:${PATH}"' > /etc/profile.d/mysql.sh
# scp /etc/profile.d/mysql.sh root@co2:/etc/profile.d/;scp /etc/profile.d/mysql.sh root@co3:/etc/profile.d/
# . /etc/profile.d/mysql.sh;ssh co2 '. /etc/profile.d/mysql.sh';ssh co3 '. /etc/profile.d/mysql.sh'
Test that MySQL can run correctly:
# mount -t nfs 11.100.46.13:/mydata/data/ /mydata/data/;ssh co2 'mount -t nfs 11.100.46.13:/mydata/data/ /mydata/data/';ssh co3 'mount -t nfs 11.100.46.13:/mydata/data/ /mydata/data/'
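A minimal test sketch, run on co1 with the export mounted: start mysqld, confirm it answers, then stop it again before unmounting:
# systemctl start mysqld
# mysql -e 'SELECT VERSION();'
# systemctl stop mysqld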
# umount /mydata/data/;ssh co2 'umount /mydata/data/';ssh co3 'umount /mydata/data/'
7. Configure pacemaker resources
# systemctl enable httpd;ssh co2 'systemctl enable httpd';ssh co3 'systemctl enable httpd'
# systemctl enable rpcbind;ssh co2 'systemctl enable rpcbind';ssh co3 'systemctl enable rpcbind'
# systemctl enable nfs;ssh co2 'systemctl enable nfs';ssh co3 'systemctl enable nfs'
# systemctl enable mysqld;ssh co2 'systemctl enable mysqld';ssh co3 'systemctl enable mysqld'
# crm
crm(live)# ra list ocf
crm(live)# ra info ocf:heartbeat:IPaddr2
// primitive: if start exceeds its timeout the resource is tried on another node; if monitor finds the status abnormal or times out, the resource is moved to another node
op_type :: start | stop | monitor
// Define the primitive resources
crm(live)configure# primitive webip ocf:heartbeat:IPaddr2 params ip="11.100.46.11" op monitor timeout=20s interval=10s
crm(live)configure# primitive web_service systemd:httpd op start timeout=100s op stop timeout=100s op monitor timeout=100s interval=60s
crm(live)configure# primitive web_store ocf:heartbeat:Filesystem params device="11.100.46.13:/htdocs/www/" directory="/htdocs/www/" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor timeout=40s interval=20s
// op settings used later for the systemd mysqld primitive: status timeout=100, monitor timeout=100 interval=60
// Define colocation constraints
crm(live)configure# help colocation
crm(live)configure# colocation webserver_with_webstore inf: web_service web_store
crm(live)configure# colocation webstore_with_webip inf: web_store webip
crm(live)configure# verify
crm(live)configure# commit
// syntax example: colocation c1 inf: A ( B C )
// Define the start order
crm(live)configure# help order
crm(live)configure# order webstore_after_webip Mandatory: webip web_store
crm(live)configure# order webservice_after_webstore Mandatory: web_store web_service
// Define location preferences for the resources (the primary node scores 100, the backup 90)
crm(live)configure# help location
crm(live)configure# location webserver_pref_co1 webip 100: co1
crm(live)configure# location webservi_pref_co1 web_service 100: co1
crm(live)configure# location webstore_pref_co1 web_store 100: co1
crm(live)configure# location webserver_pref_co3 webip 90: co3
crm(live)configure# location webservi_pref_co3 web_service 90: co3
crm(live)configure# location webstore_pref_co3 web_store 90: co3
crm(live)configure# verify
crm(live)configure# commit
// Configure default resource stickiness (101 for every resource)
crm(live)configure# property default-resource-stickiness=101
crm(live)configure# primitive mysql systemd:mysqld op start timeout=100s op stop timeout=100s op status timeout=100 op monitor timeout=100 interval=60
crm(live)configure# primitive mysql_store ocf:heartbeat:Filesystem params device="11.100.46.13:/mydata/data/" directory="/mydata/data/" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor timeout=40s interval=20s
crm(live)configure# order mysql_after_nfs Mandatory: mysql_store mysql
crm(live)configure# colocation mysql_with_nfs inf: mysql mysql_store
crm(live)configure# location mysqlsver_pref_co2 mysql 100: co2
crm(live)configure# location mysqlnfs_pref_co2 mysql_store 100: co2
crm(live)configure# location mysqlsver_pref_co3 mysql 90: co3
crm(live)configure# location mysqlnfs_pref_co3 mysql_store 90: co3
crm(live)configure# property default-resource-stickiness=88
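As an aside (not used in this setup), a crmsh resource group would express the same colocation and start order for the web stack in a single line, for example:
crm(live)configure# group webgroup webip web_store web_service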
8. Common crm commands
crm(live)resource# stop webip
crm(live)resource# stop web_store
crm(live)resource# stop web_service
crm(live)configure# delete web_service
crm(live)configure# delete web_store
crm(live)node# standby co1
crm(live)node# online co1
crm(live)configure# edit
crm(live)configure# edit xml
crm(live)configure# show xml
crm(live)# status
Stack: corosync
Current DC: co2 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Wed May 17 02:19:08 2017 Last change: Wed May 17 02:15:48 2017 by root via cibadmin on co1
3 nodes and 3 resources configured
Online: [ co1 co2 co3 ]
Full list of resources:
web_service (systemd:httpd): Started co1
webip (ocf::heartbeat:IPaddr2): Started co1
web_store (ocf::heartbeat:Filesystem): Started co1
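With everything started on co1, an optional check on co1 that the VIP, the NFS mount and httpd are really active:
# ip addr show | grep 11.100.46.11
# mount | grep /htdocs/www
# curl -sI http://11.100.46.11/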
// View the configuration
crm(live)configure# show
node 1: co1 \
attributes standby=off
node 2: co2 \
attributes standby=off
node 3: co3
primitive web_service systemd:httpd \
op start timeout=100s interval=0 \
op stop timeout=100s interval=0 \
op monitor timeout=100s interval=60s
primitive web_store Filesystem \
params device="11.100.46.13:/htdocs/www/" directory="/htdocs/www/" fstype=nfs \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor timeout=40s interval=20s
primitive webip IPaddr2 \
params ip=11.100.46.11 \
op start timeout=20s interval=0 \
op stop timeout=20s interval=0 \
op monitor timeout=20s interval=10s
location webserver_pref_co1 webip 100: co1
colocation webserver_with_webstore_and_webip inf: web_service ( web_store webip )
location webservi_pref_co1 web_service 100: co1
order webservice_after_webstore Mandatory: web_store web_service
order webstore_after_webip Mandatory: webip web_store
location webstore_pref_co1 web_store 100: co1
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.15-11.el7-e174ec8 \
cluster-infrastructure=corosync \
stonith-enabled=false \
default-resource-stickiness=99
# crm_verify -L -V
# yum -y install perl-libwww-perl
# yum -y install perl-MailTools
# yum -y install perl-devel