Overview: the VIP address is 172.16.22.1
There are three servers in total:
MySQL1 172.16.22.10 node1
MySQL2 172.16.22.11 node2
NFS Server 172.16.22.2 nfs
As shown in the figure:
Note: when a service is managed by a high-availability cluster, it must never be enabled at boot or started manually; the HA cluster alone controls starting and stopping it.
I. Create a logical volume and install MySQL
Create a logical volume to serve as shared storage for the MySQL data, mount it, and have it mounted automatically at boot.
1. On the NFS server (nfs):
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 #change the NFS server's IP address
Set IPADDR and NETMASK to:
IPADDR=172.16.22.2
NETMASK=255.255.0.0
# service network restart
# setenforce 0
- # fdisk /dev/sda #abbreviated keystrokes below; accept the defaults for any prompt not shown
- p       # print the current partition table
- n       # create a new partition
- e       # make it an extended partition
- n       # create a new (logical) partition inside it
- +20G    # size: 20GB
- p       # print the table again to confirm
- t       # change a partition's type
- 5       # select partition 5 (the new logical partition)
- 8e      # set the type to Linux LVM
- w       # write the changes and exit
- # partprobe /dev/sda
- # pvcreate /dev/sda5
- # vgcreate myvg /dev/sda5
- # lvcreate -L 10G -n mydata myvg
- # mke2fs -j -L MYDATA /dev/myvg/mydata
- # mkdir /mydata
- # vim /etc/fstab #append the following line at the end
- LABEL=MYDATA /mydata ext3 defaults 0 0
- # mount -a
- # mount
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 -s /sbin/nologin mysql
- # id mysql
- # chown -R mysql:mysql /mydata/
- # vim /etc/exports #add the following line
- /mydata 172.16.0.0/16(rw,no_root_squash)
- # service nfs start
- # rpcinfo -p localhost
- # chkconfig nfs on
- # showmount -e 172.16.22.2
2. Prepare the MySQL service
On node1:
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 mysql
- # mkdir /mydata
- # mount -t nfs 172.16.22.2:/mydata /mydata/
- # ls /mydata/
- # cd /mydata
- # touch 1.txt #quick test that root can write to the NFS mount
- # tar xvf mysql-5.5.22-linux2.6-i686.tar.gz -C /usr/local
- # cd /usr/local/
- # ln -sv mysql-5.5.22-linux2.6-i686 mysql
- # cd mysql
- # chown -R mysql:mysql .
- # scripts/mysql_install_db --user=mysql --datadir=/mydata/data #initialize MySQL
- # chown -R root .
Provide the main configuration file for MySQL:
# cd /usr/local/mysql
# cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf
Change thread_concurrency = 8 to:
thread_concurrency = 2
and add the following line:
datadir = /mydata/data
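For reference, here is a minimal sketch of how the relevant part of /etc/my.cnf looks after these edits (everything else from my-large.cnf is left unchanged; adjust thread_concurrency to match your CPU count):
[mysqld]
port            = 3306
socket          = /tmp/mysql.sock
datadir         = /mydata/data
thread_concurrency = 2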
Provide a SysV init script for MySQL:
# cd /usr/local/mysql
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
Note: the benefit of providing this script is that you can use commands such as service mysqld start.
- # chkconfig --add mysqld
- # service mysqld start
- # /usr/local/mysql/bin/mysql
- # service mysqld stop
- # chkconfig mysqld off
- # chkconfig --list mysqld
On node2 (MySQL has already been initialized on node1, with the NFS-mounted /mydata serving as the MySQL data directory, so there is no need to run mysql_install_db again):
- # groupadd -g 306 -r mysql
- # useradd -g mysql -r -u 306 mysql
- # mkdir /mydata
- # mount -t nfs 172.16.22.2:/mydata /mydata/
- # tar xvf mysql-5.5.22-linux2.6-i686.tar.gz -C /usr/local
- # cd /usr/local/
- # ln -sv mysql-5.5.22-linux2.6-i686 mysql #create the symlink
- # cd mysql
- # chown -R root:mysql .
Provide the main configuration file for MySQL:
- # cd /usr/local/mysql
- # cp support-files/my-large.cnf /etc/my.cnf
- # vim /etc/my.cnf
Change thread_concurrency = 8 to:
thread_concurrency = 2
and add the following line:
datadir = /mydata/data
Provide a SysV init script for MySQL:
# cd /usr/local/mysql
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
Note: the benefit of providing this script is that you can use commands such as service mysqld start.
- # chkconfig --add mysqld
- # chkconfig mysqld off
- # chkconfig --list mysqld
- # service mysqld start
- # /usr/local/mysql/bin/mysql
mysql> create database ad; #then check whether this database also shows up on node1 (see the sketch below)
# service mysqld stop
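To answer the question above, here is a quick check on node1 (a sketch; run it only after mysqld on node2 has been stopped, since both nodes share the same datadir over NFS):
# ls /mydata/data          #an "ad" directory created from node2 should be listed
# service mysqld start
# /usr/local/mysql/bin/mysql -e 'SHOW DATABASES;'   #"ad" should appear here as well
# service mysqld stop      #stop mysqld again so the cluster can manage it later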
II. Install the cluster software
1. Prerequisites:
(1). On node1:
Set the system time (make sure the time on node1 and node2 is correct and that the two nodes stay in sync)
# date
# hwclock -s
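Optionally, if an NTP server is reachable on your network, you can synchronize both nodes against it instead of relying only on the hardware clock (a sketch; 172.16.0.1 below is just a placeholder for your own time server — run the same commands on node2 as well):
# ntpdate 172.16.0.1       #placeholder address; substitute your NTP server
# hwclock -w               #write the corrected system time back to the hardware clock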
# hostname node1.linuxidc.com
# vim /etc/sysconfig/network #change the hostname to node1.linuxidc.com
HOSTNAME=node1.linuxidc.com
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 #set the IP address to 172.16.22.10 and the netmask to 255.255.0.0
IPADDR=172.16.22.10
NETMASK=255.255.0.0
# service network restart
# vim /etc/hosts #add the following two entries:
172.16.22.10 node1.linuxidc.com node1
172.16.22.11 node2.linuxidc.com node2
(2). On node2:
Set the system time (make sure the time on node1 and node2 is correct and that the two nodes stay in sync)
# date
# hwclock -s
# hostname node2.linuxidc.com
# vim /etc/sysconfig/network #change the hostname to node2.linuxidc.com
HOSTNAME=node2.linuxidc.com
# vim /etc/sysconfig/network-scripts/ifcfg-eth0 #set the IP address to 172.16.22.11 and the netmask to 255.255.0.0
IPADDR=172.16.22.11
NETMASK=255.255.0.0
# service network restart
# vim /etc/hosts #add the following two entries:
172.16.22.10 node1.linuxidc.com node1
172.16.22.11 node2.linuxidc.com node2
(3). Set up passwordless SSH trust between the two nodes:
On node1:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
On node2:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
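A quick way to verify both the SSH trust and the time consistency in one step (a sketch; run on node1, and the equivalent on node2):
# date; ssh node2 'date'   #no password prompt should appear, and the two timestamps should match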
2. Install the required software:
The required packages are as follows:
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
perl-TimeDate-1.16-5.el5.noarch.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
Place the packages under /root (they need to be downloaded on both node1 and node2).
(1). On node1:
- # yum --nogpgcheck localinstall *.rpm -y
- # cd /etc/corosync
- # cp corosync.conf.example corosync.conf
- # vim corosync.conf #add the following content:
- service {
- ver: 0
- name: pacemaker
- use_mgmtd: yes
- }
- aisexec {
- user: root
- group: root
- }
Also set the IP address after bindnetaddr in this file to the network address of the network your NICs are on; our two nodes are on the 172.16.0.0 network, so set it as follows:
bindnetaddr: 172.16.0.0
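For context, a minimal sketch of the totem/interface block that this line lives in (the other values shown are the defaults from corosync.conf.example and may differ in your copy):
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}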
Generate the authentication key file used for inter-node communication:
# corosync-keygen
Copy corosync.conf and authkey to node2:
- # scp -p corosync.conf authkey node2:/etc/corosync/
- # cd
- # mkdir /var/log/cluster
- # ssh node2 'mkdir /var/log/cluster'
- # service corosync start
- # ssh node2 '/etc/init.d/corosync start'
- # crm_mon
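An optional sanity check at this point (a sketch): crm_verify validates the current cluster configuration. With no STONITH devices defined, it normally reports errors until stonith-enabled is set to false in the next step.
# crm_verify -L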
Configure stonith, quorum, and stickiness:
- # crm
- crm(live)# configure
- crm(live)configure# property stonith-enabled=false
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# property no-quorum-policy=ignore
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# rsc_defaults resource-stickiness=100 #(any stickiness value greater than 0 means the resource prefers to stay on its current node)
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# show
- crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip='172.16.22.1'
- crm(live)configure# commit
- crm(live)configure# exit
- # ifconfig
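The VIP 172.16.22.1 should now appear as an interface alias (e.g. eth0:0) on whichever node holds the myip resource; if ifconfig does not list it, ip addr also shows secondary addresses (a sketch):
# ip addr show eth0 | grep 172.16.22.1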
(2). On node2:
# umount /mydata
# yum --nogpgcheck localinstall *.rpm -y
(3). On node1:
# umount /mydata
Configure the mynfs resource (the NFS filesystem):
- # crm
- crm(live)# ra
- crm(live)ra# list ocf heartbeat
- crm(live)ra# meta ocf:heartbeat:Filesystem
- crm(live)ra# cd
- crm(live)# configure
- crm(live)configure# primitive mynfs ocf:heartbeat:Filesystem \
-   params device="172.16.22.2:/mydata" directory="/mydata" fstype="nfs" \
-   op start timeout=60s op stop timeout=60s
- crm(live)configure# commit
Configure the mysqld resource (mysqld must always run on the same node as mynfs, and mynfs must be started before mysqld):
- crm(live)configure# primitive mysqld lsb:mysqld
- crm(live)configure# show
Configure the colocation constraint (keeping the resources together):
crm(live)configure# colocation mysqld_and_mynfs inf: mysqld mynfs
crm(live)configure# show
Note: inf means that mysqld and mynfs are always kept together.
Next, define the startup order (order constraints):
- crm(live)configure# order mysqld_after_mynfs mandatory: mynfs mysqld:start
- crm(live)configure# show
- crm(live)configure# order mysqld_after_myip mandatory: myip mysqld:start
- crm(live)configure# commit
- crm(live)configure# show
- node node1.linuxidc.com \
- attributes standby="on"
- node node2.linuxidc.com \
- attributes standby="off"
- primitive myip ocf:heartbeat:IPaddr \
- params ip="172.16.22.1"
- primitive mynfs ocf:heartbeat:Filesystem \
- params device="172.16.22.2:/mydata" directory="/mydata" fstype="nfs" \
- op start interval="0" timeout="60s" \
- op stop interval="0" timeout="60s"
- primitive mysqld lsb:mysqld
- colocation mysqld_and_mynfs inf: mysqld mynfs myip
- order mysqld_after_myip inf: myip mysqld:start
- order mysqld_after_mynfs inf: mynfs mysqld:start
- property $id="cib-bootstrap-options" \
- dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
- cluster-infrastructure="openais" \
- expected-quorum-votes="2" \
- stonith-enabled="false" \
- no-quorum-policy="ignore"
- rsc_defaults $id="rsc-options" \
- resource-stickiness="100"
- crm(live)configure# exit
- # ls /mydata/data
- # /usr/local/mysql/bin/mysql
- mysql> SHOW DATABASES;
- mysql> GRANT ALL ON *.* TO root@'%' IDENTIFIED BY '123456';
- mysql> flush privileges;
- # crm status
- # ls /mydata
- # ls /mydata/data
- # crm node standby
Note: the mandatory keyword used in the order constraints above means the constraint is mandatory (equivalent to a score of inf).
(4). On node2:
# crm node online
# crm status
(5). Connect from a Windows machine: mysql -uroot -h172.16.22.1 -p123456
mysql> create database a;
mysql> show databases;
As shown in the figure below:
At this point MySQL can be used normally.
Now take node2 out of service (simulating a failure). On node2:
# crm node standby
# crm status
On node1:
# crm node online
Connect from Windows again: mysql -uroot -h172.16.22.1 -p123456
mysql> show databases;
As shown in the figure below:
You can see that database a is still present in MySQL, which shows the data is fully intact after the failover; this is exactly what the high-availability cluster provides.