Objective:
Build a highly available MySQL pair on two servers backed by DRBD. While node1 is the primary, DRBD replicates its data to node2 in real time; if node1 fails, node2 takes over its work.
- Host plan:
- node1: node1.magedu.com, IP 172.16.14.10
- node2: node2.magedu.com, IP 172.16.14.11
I: Preparation
- 1. Hostname resolution
- node1 and node2 are identified by the output of uname -n, so the hostnames must be set accordingly.
- Node1:
- # sed -i 's@\(HOSTNAME=\).*@\1node1.magedu.com@g' /etc/sysconfig/network
- # hostname node1.magedu.com
- Node2:
- # sed -i 's@\(HOSTNAME=\).*@\1node2.magedu.com@g' /etc/sysconfig/network
- # hostname node2.magedu.com
- Edit /etc/hosts on both node1 and node2 and add:
- 172.16.14.10 node1.magedu.com node1
- 172.16.14.11 node2.magedu.com node2
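- A quick optional check: confirm that each node reports the expected name and resolves the other; for example:
- # uname -n                     ## should print node1.magedu.com (or node2.magedu.com)
- # ping -c 1 node2              ## from node1, resolved through /etc/hosts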
- 2. Set up key-based SSH between the two nodes
- node1:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
- node2:
- # ssh-keygen -t rsa
- # ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
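- A minimal check that key-based login works (run on node1; the test from node2 is symmetric):
- # ssh node2 'uname -n'         ## should print node2.magedu.com without prompting for a password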
- 3. Synchronize the clocks on both nodes
- [root@node2 ~]# date 112811022012;ssh node1 'date 112811022012'
- Fri Nov 28 11:02:00 CST 2012
- Fri Nov 28 11:02:00 CST 2012
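- If an NTP server is reachable, a cron job keeps the clocks aligned; a sketch, where 172.16.0.1 is only a placeholder for whatever time source your environment actually has:
- # crontab -e
- */5 * * * * /sbin/ntpdate 172.16.0.1 &> /dev/null     ## placeholder NTP server address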
- 4. Put SELinux into permissive mode (or disable it):
- #setenforce 0
II: Configure Primary/Secondary DRBD on node1 and node2
1. Each node provides a partition of the same size as the backing device for DRBD; on both nodes it is /dev/sda5, about 1 GB.
- # fdisk /dev/sda
- Partition table after partitioning:
- Device Boot Start End Blocks Id System
- /dev/sda1 * 1 13 104391 83 Linux
- /dev/sda2 14 5235 41945715 8e Linux LVM
- /dev/sda3 5236 5366 1052257+ 82 Linux swap / Solaris
- /dev/sda4 5367 13054 61753860 5 Extended
- /dev/sda5 5367 5489 987966 83 Linux
- # partprobe /dev/sda
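- To confirm that the kernel now sees the new partition on both nodes:
- # cat /proc/partitions | grep sda5        ## sda5 should be listed on node1 and node2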
2. Install the packages:
- The 8.3 series (the latest at the time) is used here:
- drbd83-8.3.8-1.el5.centos.i386.rpm
- kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
- Download from: http://mirrors.sohu.com/centos/5.8/extras/i386/RPMS/
- After downloading, install them locally:
- # yum -y --nogpgcheck localinstall drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
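- A quick way to verify the installation before configuring anything:
- # rpm -q drbd83 kmod-drbd83              ## both packages should be reported as installed
- # modinfo drbd | grep -i version         ## kernel module version should match the userland tools (8.3.8)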
3. Configure DRBD:
- The following steps are performed on node1.magedu.com.
- 1) Copy the sample configuration file into place:
- # cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc
- 2) Edit /etc/drbd.d/global_common.conf and adjust it as follows:
- # vim /etc/drbd.d/global_common.conf
- global {
-     usage-count no;
- }
- common {
-     startup {
-         #wfc-timeout 120;
-         #degr-wfc-timeout 120;
-     }
-     disk {
-         on-io-error detach;
-         #fencing resource-only;
-     }
-     net {
-         cram-hmac-alg "sha1";        ## authentication algorithm
-         shared-secret "mydrbdlab";   ## shared secret
-     }
-     syncer {
-         rate 1000M;                  ## resync rate
-     }
- }
- 3) Define a resource in /etc/drbd.d/web.res with the following content:
- resource web {
- on node1.magedu.com {
- device /dev/drbd0;
- disk /dev/sda5;
- address 172.16.14.10:7789;
- meta-disk internal;
- }
- on node2.magedu.com {
- device /dev/drbd0;
- disk /dev/sda5;
- address 172.16.14.11:7789;
- meta-disk internal;
- }
- }
Copy the two files above to node2 so that node1 and node2 have identical configuration:
- # scp -r /etc/drbd.* node2:/etc
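- Before initializing, it can help to let drbdadm parse the configuration on each node; a quick sketch:
- # drbdadm dump web           ## on node1 and node2; prints the parsed resource or reports syntax errors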
4. Initialize the resource and start the service on both nodes:
- 1) Initialize the resource; run on node1 and node2:
- # drbdadm create-md web
- 2) Start the service; run on node1 and node2:
- # /etc/init.d/drbd start
- 3) Check the status with cat /proc/drbd:
- [root@node1 ~]# cat /proc/drbd
- version: 8.3.8 (api:88/proto:86-94)
- GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by
- [email protected], 2010-06-04 08:04:16
- 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
-     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:987896
- // At this point both nodes are in the Secondary role
Next, promote node1 to Primary with:
- # drbdsetup /dev/drbd0 primary -o
- Note: the following command is equivalent:
- # drbdadm -- --overwrite-data-of-peer primary web
Watch the status of the initial synchronization:
- [root@node1 ~]# drbd-overview
- 0:web SyncSource Primary/Secondary UpToDate/Inconsistent C r----
- [==>.................] sync'ed: 18.6% (810392/987896)K
- delay_probe: 16
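- You can also follow the resync continuously instead of re-running the command by hand, for example:
- # watch -n 1 'cat /proc/drbd'            ## stop with Ctrl+C once ds shows UpToDate/UpToDate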
After the synchronization completes, the two nodes show:
- [root@node1 ~]# drbd-overview
- 0:web Connected Primary/Secondary UpToDate/UpToDate C r----
- On node2:
- [root@node2 ~]# drbd-overview
- 0:web Connected Secondary/Primary UpToDate/UpToDate C r----
// The Primary/Secondary roles are now established
5. Create the filesystem:
The filesystem can only be mounted on the Primary node, so the DRBD device can be formatted only after a Primary has been set:
- # mke2fs -j -L DRBD /dev/drbd0
- # mkdir -p /data/mydata
- # mount /dev/drbd0 /data/mydata
6. Test that replication works
- On node1:
- # cd /data/mydata               ## look around the mounted filesystem
- # cd ; umount /data/mydata      ## leave the mount point first, then unmount
- # drbdadm secondary web         ## switch node1 to Secondary
- On node2:
- # drbdadm primary web           ## promote node2 to Primary
- # mount /dev/drbd0 /data/mydata ## mount and check that the data is there
- # ls /data/mydata
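- To make the test conclusive, drop a marker file on node1 before the switchover and look for it on node2 afterwards; a minimal sketch (the file name is arbitrary):
- [root@node1 ~]# echo "drbd test" > /data/mydata/testfile.txt    ## while node1 is still Primary and mounted
- [root@node2 ~]# cat /data/mydata/testfile.txt                   ## after mounting on node2; should print "drbd test"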
Because drbd will be managed as a cluster resource, it must not start automatically at boot:
- # chkconfig drbd off
III: Install MySQL
1. Create the mysql user and group on node1 and node2:
- # groupadd -r -g 306 mysql            ## mysql is a system group
- # useradd -r -g mysql -u 306 mysql    ## mysql is a system user
- # id mysql
2. Install mysql-5.5.28 from the generic binary tarball
- Unpack and set up MySQL:
- tar xf mysql-5.5.28-linux2.6-i686.tar.gz -C /usr/local
- cd /usr/local
- ln -sv mysql-5.5.28-linux2.6-i686/ mysql
- cd mysql
- chown -R root:mysql .
- Initialize MySQL (make sure /dev/drbd0 is mounted on /data/mydata and that the mysql user owns it):
- chown -R mysql:mysql /data/mydata
- scripts/mysql_install_db --user=mysql --datadir=/data/mydata
- cp support-files/my-large.cnf /etc/my.cnf
- vim /etc/my.cnf
- thread_concurrency = 2        ## adjust to your hardware as needed
- datadir = /data/mydata        ## point the data directory at the DRBD mount
- Add the service script:
- cp support-files/mysql.server /etc/rc.d/init.d/mysqld
- Start the service:
- service mysqld start
- /usr/local/mysql/bin/mysql        ## launch the mysql client by its full path
- Create a database named testdb:
- >create database testdb;
3. Switch the DRBD roles to prepare for installing MySQL on node2:
- On node1:
- service mysqld stop           ## stop mysqld
- umount /data/mydata           ## unmount the data directory
- drbdadm secondary web         ## demote node1 to Secondary
- drbd-overview
- On node2:
- drbdadm primary web           ## promote node2 to Primary
- mount /dev/drbd0 /data/mydata
- chown -R mysql:mysql /data/mydata
- // mount first, then make sure /data/mydata is owned by mysql:mysql
- [root@node2 mysql]# drbd-overview
- [root@node2 mysql]# mount |grep /data/mydata
- /dev/drbd0 on /data/mydata type ext3 (rw)
4. On node2, MySQL does not need to be initialized again; only the configuration file has to be copied from node1 (the binary tarball itself must already be unpacked and symlinked to /usr/local/mysql on node2, just as on node1):
- scp node1:/etc/my.cnf /etc        ## copy the configuration file
- cd /usr/local/mysql
- chown -R mysql.mysql .
- cp support-files/mysql.server /etc/rc.d/init.d/mysqld
- service mysqld start
- #/usr/local/mysql/bin/mysql
- mysql> show databases;        ## list the databases
- +--------------------+
- | Database |
- +--------------------+
- | information_schema |
- | #mysql50#drbd.d |
- | mysql |
- | performance_schema |
- | test |
- | testdb             |        ## the testdb database created on node1
- +--------------------+
- 6 rows in set (0.08 sec)
5. MySQL is now set up on both nodes. Stop mysqld and drbd on node1 and node2 and disable them at boot, because both will be managed as cluster resources below:
- # service mysqld stop
- # chkconfig mysqld off
- # umount /data/mydata
- # drbdadm secondary web
- # service drbd stop
- # chkconfig drbd off
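- To double-check that neither service will come back at boot, something like:
- # chkconfig --list mysqld          ## should show off in all runlevels on both nodes
- # chkconfig --list drbd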
IV: Install and configure corosync and pacemaker
1. First install the following RPM packages:
- cluster-glue-1.0.6-1.6.el5.i386.rpm
- cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
- corosync-1.2.7-1.1.el5.i386.rpm
- corosynclib-1.2.7-1.1.el5.i386.rpm
- heartbeat-3.0.3-2.3.el5.i386.rpm
- heartbeat-libs-3.0.3-2.3.el5.i386.rpm
- libesmtp-1.0.4-5.el5.i386.rpm
- pacemaker-1.1.5-1.1.el5.i386.rpm
- pacemaker-cts-1.1.5-1.1.el5.i386.rpm
- pacemaker-libs-1.1.5-1.1.el5.i386.rpm
- perl-TimeDate-1.16-5.el5.noarch.rpm
- resource-agents-1.0.4-1.1.el5.i386.rpm
# yum -y --nogpgcheck localinstall *.rpm        ## put all the packages in one directory and install them together
2. Configure corosync and pacemaker
- cd /etc/corosync/
- ls
- cp corosync.conf.example corosync.conf
- vim corosync.conf
- Make the following changes:
- version: 2
- secauth: on
- threads: 1
- bindnetaddr: 172.16.0.0
- mcastaddr: 226.94.14.14
- to_syslog: no          ## log only to the logfile (/var/log/cluster/corosync.log)
- Add a service section so that corosync starts pacemaker:
- service {
- ver: 0
- name: pacemaker
- }
- Define the user and group that aisexec runs as:
- aisexec {
- name: root
- group: root
- }
- # mkdir /var/log/cluster
- # corosync-keygen
- Copy the configuration and the authkey to node2:
- scp -p corosync.conf authkey node2:/etc/corosync/
- Create the log directory on the second node:
- mkdir /var/log/cluster
- Start corosync on node1, then start it on node2 over ssh:
- service corosync start
- ssh node2 'service corosync start'
- Check the log:
- tail -30 /var/log/cluster/corosync.log
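- What to look for in the log (a sketch; the exact wording can differ between corosync builds):
- # grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
- # grep TOTEM /var/log/cluster/corosync.log            ## membership messages should mention both nodes
- # grep pcmk_startup /var/log/cluster/corosync.log     ## confirms the pacemaker plugin was started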
- Check the cluster status:
- # crm status
- Starting Corosync Cluster Engine (corosync): [ OK ]
- [root@node1 corosync]# setenforce 0
- [root@node1 corosync]# crm status
- ============
- Last updated: Wed Nov 28 16:19:42 2012
- Stack: openais
- Current DC: node1.magedu.com - partition WITHOUT quorum
- Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
- 2 Nodes configured, 2 expected votes
- 0 Resources configured.
- ============
- Node node1.magedu.com: UNCLEAN (offline)
- Node node2.magedu.com: UNCLEAN (offline)
- Online: [ node1.magedu.com ]
- Configure the cluster with crm:
- crm
- configure
- Set two global properties:
- property no-quorum-policy=ignore       ## ignore the loss of quorum (two-node cluster)
- property stonith-enabled=false         ## no STONITH device is used here
- verify                                 ## check the syntax
- commit                                 ## commit the changes
- cd
- status
- crm(live)# ra          ## inspect the resource agents
- crm(live)ra# list ocf heartbeat
- // drbd is listed, so heartbeat also provides a drbd agent
- crm(live)ra# classes
- heartbeat
- lsb
- ocf / heartbeat linbit pacemaker
- stonith
- crm(live)ra# list ocf linbit
- drbd          // linbit provides a drbd agent as well; either one can be used
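- Before defining the resource in the next part, you can look up which parameters the agent accepts; for example (assuming the crm shell's ra info command, available in this crmsh version):
- crm(live)ra# info ocf:linbit:drbd        ## shows the drbd_resource parameter and suggested operation timeouts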
V: Define the resources
1. Define the primitive resource:
- crm(live)configure# primitive Mydrbd ocf:linbit:drbd params drbd_resource='web' op start timeout=240 op stop timeout=100
- Notes on the definition:
- // primitive              ## defines a primitive-class resource, followed by its name
- // ocf:linbit:drbd        ## use the drbd agent from the linbit provider (heartbeat provides another one)
- // drbd_resource='web'    ## web is the DRBD resource defined earlier
- // op                     ## introduces an operation on the resource
- // timeout                ## timeout for that operation
- crm(live)configure# show xml        ## review the resource definition as XML
2. Define it as a master/slave resource
- crm(live)configure# master ms_Mydrbd Mydrbd meta master-max='1' master-node-max='1' clone-max='2' clone-node-max='1' notify='true'
- // master             ## keyword for defining a master/slave resource
- // ms_Mydrbd          ## name of the master/slave resource; it always wraps an existing primitive
- // Mydrbd             ## the primitive resource defined above
- // meta               ## sets additional meta attributes for the master/slave set
- // master-max         ## at most how many Master instances
- // master-node-max    ## how many Master instances per node
- // clone-max          ## total number of clone instances in the cluster
- // clone-node-max     ## how many clone instances per node
- // notify             ## whether peers are notified of state changes
- Then verify the syntax and commit.
- crm(live)# status
- Online: [ node1.magedu.com node2.magedu.com ]
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node1.magedu.com ]
- Slaves: [ node2.magedu.com ]
- // the Master and Slave nodes are shown
- At this point, on node1:
- [root@node1 corosync]# drbd-overview
- 0:web Connected Primary/Secondary UpToDate/UpToDate C r----
- // node1 is the Primary
- crm node standby        ## put node1 into standby
- crm node online         ## bring node1 back online
- crm status              ## check the status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node2.magedu.com ]
- Slaves: [ node1.magedu.com ]
- [root@node1 corosync]# drbd-overview
- 0:web Connected Secondary/Primary UpToDate/UpToDate C r----
- // the output above shows that node2 is now Primary; the roles have switched
3. Define the Filesystem resource
- # primitive MyFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/data/mydata" fstype="ext3" op start timeout=60 op stop timeout=60
- # colocation MyFS_on_ms_Mydrbd_master inf: MyFS ms_Mydrbd:Master
- # order MyFS_after_ms_Mydrbd_master inf: ms_Mydrbd:promote MyFS:start
- # verify
- # commit
- crm(live)# status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node2.magedu.com ]
- Slaves: [ node1.magedu.com ]
- MyFS (ocf::heartbeat:Filesystem): Started node2.magedu.com
- Check on node2 that the filesystem is mounted:
- [root@node2 corosync]# ls /data/mydata
- drbd.conf mysql-bin.000002 mysql-bin.index
- drbd.d mysql-bin.000003 node1.magedu.com.err
- // partial listing
- [root@node2 corosync]# crm node standby
- [root@node2 corosync]# crm node online
- [root@node2 corosync]# crm status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node1.magedu.com ]
- Slaves: [ node2.magedu.com ]
- MyFS (ocf::heartbeat:Filesystem): Started node1.magedu.com
- // after putting node2 into standby, MyFS is started on node1
4. Define the mysql resource
- # primitive Mysql lsb:mysqld
- # colocation Mysql_with_MyFS inf: Mysql MyFS
- # order Mysql_after_MyFS mandatory: MyFS Mysql
- Then verify and commit.
- crm(live)# status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node1.magedu.com ]
- Slaves: [ node2.magedu.com ]
- MyFS (ocf::heartbeat:Filesystem): Started node1.magedu.com
- Mysql (lsb:mysqld): Started node1.magedu.com
- // mysql is now running on node1
- [root@node1 ~]# /usr/local/mysql/bin/mysql
- mysql> create database mydb;
- Query OK, 1 row affected (0.03 sec)
- // create a new database named mydb
- Switch the roles:
- [root@node1 ~]# crm node standby
- [root@node1 ~]# crm node online
- [root@node1 ~]# crm status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node2.magedu.com ]
- Slaves: [ node1.magedu.com ]
- MyFS (ocf::heartbeat:Filesystem): Started node2.magedu.com
- Mysql (lsb:mysqld): Started node2.magedu.com
- Log in to MySQL on node2; the mydb database is visible there.
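- A quick non-interactive check on node2 (assuming the local socket login still works without a password, as above):
- [root@node2 ~]# /usr/local/mysql/bin/mysql -e 'show databases;'    ## mydb should appear in the list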
5. Define the IP address resource
- crm(live)configure# primitive MyIP ocf:heartbeat:IPaddr2 params ip="172.16.14.2"
- crm(live)configure# colocation MyIP_with_ms_Mydrbd_master inf: MyIP ms_Mydrbd:Master
- crm(live)configure# verify
- crm(live)configure# commit
- crm(live)configure# cd
- crm(live)# status
- Master/Slave Set: ms_Mydrbd [Mydrbd]
- Masters: [ node2.magedu.com ]
- Slaves: [ node1.magedu.com ]
- MyFS (ocf::heartbeat:Filesystem): Started node2.magedu.com
- Mysql (lsb:mysqld): Started node2.magedu.com
- MyIP (ocf::heartbeat:IPaddr2): Started node2.magedu.com
- You can put node2 into standby and watch the resources move to node1 to verify failover.
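- From a client machine the service can be reached through the floating IP; this is only a sketch and assumes a MySQL account that is allowed to connect remotely, which is not created in this walkthrough:
- # mysql -h 172.16.14.2 -u root -p        ## connect through the VIP; refused unless remote access has been granted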
############################# Final configuration #############################
- node node1.magedu.com \
-     attributes standby="off"
- node node2.magedu.com \
-     attributes standby="off"
- primitive MyFS ocf:heartbeat:Filesystem \
-     params device="/dev/drbd0" directory="/data/mydata" fstype="ext3" \
-     op start interval="0" timeout="60" \
-     op stop interval="0" timeout="60"
- primitive MyIP ocf:heartbeat:IPaddr2 \
-     params ip="172.16.14.2"
- primitive Mydrbd ocf:linbit:drbd \
-     params drbd_resource="web" \
-     op start interval="0" timeout="240" \
-     op stop interval="0" timeout="100"
- primitive Mysql lsb:mysqld
- ms ms_Mydrbd Mydrbd \
-     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
- colocation MyFS_on_ms_Mydrbd_master inf: MyFS ms_Mydrbd:Master
- colocation MyIP_with_ms_Mydrbd_master inf: MyIP ms_Mydrbd:Master
- colocation Mysql_with_MyFS inf: Mysql MyFS
- order MyFS_after_ms_Mydrbd_master inf: ms_Mydrbd:promote MyFS:start
- order Mysql_after_MyFS inf: MyFS Mysql
- property $id="cib-bootstrap-options" \
-     dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
-     cluster-infrastructure="openais" \
-     expected-quorum-votes="2" \
-     no-quorum-policy="ignore" \
-     stonith-enabled="false"
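As a final failover test, put the node that currently holds the Master role into standby and watch all four resources move together; a minimal sketch:
- # crm node standby node2.magedu.com      ## or whichever node is currently Master
- # crm status                             ## ms_Mydrbd (Master), MyFS, Mysql and MyIP should all move to node1
- # crm node online node2.magedu.com       ## bring the node back afterwards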