1. DRBD basics
DRBD (Distributed Replicated Block Device) is a software-based solution for mirrored storage built on distributed, replicated block devices. The block devices it supports include hard disks, partitions, and logical volumes. DRBD's core functionality is implemented as a Linux kernel module.
The DRBD command-line tools are:
drbdadm: the high-level administration tool of the DRBD suite; it reads all DRBD configuration parameters from the configuration files.
drbdsetup: the low-level tool of the suite; it configures the DRBD module that has been loaded into the running kernel.
drbdmeta: a program for creating, dumping, restoring, and modifying DRBD metadata structures.
DRBD roles:
Primary: a DRBD device in the Primary role can be read from and written to without restriction; it can be used to create and mount a file system, or for raw or direct block I/O.
Secondary: a DRBD device in the Secondary role receives all updates from the peer node's device.
Replication protocols supported by DRBD:
Asynchronous replication (protocol A): a local write on the primary node is considered complete as soon as the local disk write has finished and the replication packet has been placed in the local TCP send buffer. If a failover occurs in this mode, data may be lost: the standby node's data is left in a consistent state after failover, but the most recent updates made before the primary crashed can be lost.
Memory-synchronous, or semi-synchronous, replication (protocol B): a local write on the primary is considered complete once the local disk write has finished and the replication packet has reached the peer node. Normally no data is lost on failover; however, should both nodes lose power at the same time, data held only in the peer's memory and not yet flushed to its disk is gone, so the writes most recently completed on the primary may be irrecoverably lost.
Synchronous replication (protocol C): a local write on the primary is considered complete only after both the local and the remote disk writes have been confirmed. No data is lost when a single node fails; data can only be lost under this protocol if both nodes (or their storage subsystems) are destroyed at the same time.
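The protocol is chosen in the DRBD configuration, either per resource or in the common section. An illustrative fragment only; this article uses protocol C, as the configuration later shows:

```
common {
        protocol C;   # A = asynchronous, B = memory-synchronous, C = fully synchronous
}
```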
2. First, configure DRBD:
Prerequisites:
1) This setup uses two test nodes, node1.luojianlong.com and node2.luojianlong.com, whose IP addresses are 192.168.30.116 and 192.168.30.117 respectively;
2) node1 and node2 each provide a partition of identical size to serve as the drbd device; here it is /dev/sdb1 on both nodes, 10G in size;
3) The operating system is CentOS 6.4 on the x86_64 platform;
Preparation:
Hostname and IP address resolution must work for both nodes, and each node's hostname must match the output of "uname -n". Therefore /etc/hosts on both nodes must contain the following:
[root@node1 ~]# cat /etc/hosts
192.168.30.116 node1.luojianlong.com node1
192.168.30.117 node2.luojianlong.com node2
[root@node2 ~]# cat /etc/hosts
192.168.30.116 node1.luojianlong.com node1
192.168.30.117 node2.luojianlong.com node2
So that the hostnames survive a reboot, also run a command like the following on each node:
[root@node1 ~]# sed -i 's@\(HOSTNAME=\).*@\1node1.luojianlong.com@g' /etc/sysconfig/network
[root@node2 ~]# sed -i 's@\(HOSTNAME=\).*@\1node2.luojianlong.com@g' /etc/sysconfig/network
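The sed expression above can be rehearsed on a scratch copy before touching /etc/sysconfig/network. A minimal sketch; the helper name persist_hostname is ours, not part of any tool:

```shell
# persist_hostname FILE FQDN -- rewrite the HOSTNAME= line in FILE
# (helper name is ours; same sed expression as above).
persist_hostname() {
    sed -i 's@\(HOSTNAME=\).*@\1'"$2"'@g' "$1"
}

# Rehearse on a scratch copy first:
tmp=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$tmp"
persist_hostname "$tmp" node1.luojianlong.com
grep '^HOSTNAME=' "$tmp"    # HOSTNAME=node1.luojianlong.com
rm -f "$tmp"
```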
Installing the packages
drbd consists of two parts: a kernel module and the userspace management tools. The drbd kernel module code has been merged into the mainline Linux kernel since version 2.6.33, so if your kernel is at or above that version you only need to install the management tools; otherwise you must install both the kernel module package and the management tools, and their version numbers must match.
The drbd versions available for CentOS 5 are 8.0, 8.2, and 8.3; their rpm packages are named drbd, drbd82, and drbd83, with matching kernel module packages kmod-drbd, kmod-drbd82, and kmod-drbd83. The version for CentOS 6 is 8.4, packaged as drbd and drbd-kmdl. When selecting packages, remember two things: the drbd and drbd-kmdl versions must match each other, and the drbd-kmdl version must match the running kernel version. Features and configuration differ slightly between versions. Our platform is x86_64 running CentOS 6.4, so we need both the kernel module and the management tools. We use the latest 8.4 release here (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm), downloaded from ftp://rpmfind.net/linux/atrpms/
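Since drbd-kmdl must match the running kernel, a quick pre-install check can save a failed module load. A sketch only; the helper name is ours, and it assumes the filename layout of the atrpms package above (drbd-kmdl-&lt;kernel&gt;-&lt;drbdversion&gt;.&lt;arch&gt;.rpm) and that "uname -r" appends an architecture suffix, as on CentOS 6:

```shell
# kmdl_matches_kernel PKG KERNEL -- succeed if the drbd-kmdl rpm file
# name PKG embeds the kernel version KERNEL (helper name is ours).
kmdl_matches_kernel() {
    pkg=$1 kern=$2
    # "uname -r" carries an arch suffix that the package name omits
    kern=${kern%.x86_64}
    kern=${kern%.i686}
    case $pkg in
        drbd-kmdl-"$kern"-*) return 0 ;;
        *)                   return 1 ;;
    esac
}

# On a node:
#   kmdl_matches_kernel drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm "$(uname -r)" \
#       && echo "kmdl matches the running kernel"
```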
[root@node1 ~]# wget ftp://rpmfind.net/linux/atrpms/el6-x86_64/atrpms/stable/drbd-8.4.3-33.el6.x86_64.rpm
[root@node1 ~]# wget ftp://rpmfind.net/linux/atrpms/el6-x86_64/atrpms/stable/drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm
[root@node1 ~]# scp drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm node2.luojianlong.com:/root/
[root@node2 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.3.1.el6-8.4.3-33.el6.x86_64.rpm
Create the disk device for drbd:
[root@node1 ~]# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305
Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac91d3f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
Perform the same operations on node2, then reboot the systems.
[root@node1 ~]# cat /proc/partitions
major minor  #blocks  name
   8        0   31457280 sda
   8        1     512000 sda1
   8        2   30944256 sda2
   8       16   10485760 sdb
   8       17   10482381 sdb1
 253        0   26877952 dm-0
 253        1    4063232 dm-1
[root@node2 ~]# cat /proc/partitions
major minor  #blocks  name
   8        0   31457280 sda
   8        1     512000 sda1
   8        2   30944256 sda2
   8       16   10485760 sdb
   8       17   10482381 sdb1
 253        0   26877952 dm-0
 253        1    4063232 dm-1
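DRBD expects the two backing devices to be of equal size (here both sdb1 partitions are 10482381 blocks). A small helper to pull the block count out of /proc/partitions so the two nodes can be compared from a script; a sketch, and the function name is ours:

```shell
# part_blocks NAME -- print the block count for partition NAME from
# /proc/partitions-style input on stdin (helper name is ours).
part_blocks() {
    awk -v p="$1" '$4 == p { print $3 }'
}

# Compare the two nodes from node1, e.g.:
#   [ "$(part_blocks sdb1 < /proc/partitions)" = \
#     "$(ssh node2 cat /proc/partitions | part_blocks sdb1)" ] && echo "sizes match"
```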
Configure drbd
The main drbd configuration file is /etc/drbd.conf. For ease of management, the configuration is usually split across several files kept under /etc/drbd.d, with the main file merely pulling those fragments in via "include" directives. Typically the /etc/drbd.d directory holds global_common.conf plus any files ending in .res: global_common.conf defines the global and common sections, while each .res file defines one resource.
The global section may appear only once in the configuration, and if everything is kept in a single file rather than split up, it must sit at the very beginning of that file. The only parameters currently allowed in the global section are minor-count, dialog-refresh, disable-ip-verification, and usage-count.
The common section defines parameters inherited by every resource by default; any parameter that can appear in a resource definition can also be set in common. The common section is not strictly required, but putting parameters shared by several resources there is recommended to keep the configuration simple.
A resource section defines a drbd resource; each resource usually lives in its own .res file under /etc/drbd.d. A resource must be named, using non-whitespace ASCII characters only, and its definition must contain at least two host (on) subsections naming the nodes the resource is tied to; all other parameters can be inherited from the common section or from drbd's defaults and need not be set.
The following operations are performed on node1.luojianlong.com
[root@node1 ~]# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.bak
[root@node1 ~]# vi /etc/drbd.d/global_common.conf
global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}
common {
        protocol C;
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }
        disk {
                on-io-error detach;
                #fencing resource-only;
        }
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";
        }
        syncer {
                rate 1000M;
        }
}
Define a resource, /etc/drbd.d/web.res
[root@node1 ~]# vi /etc/drbd.d/web.res
resource web {
        on node1.luojianlong.com {
                device  /dev/drbd0;
                disk    /dev/sdb1;
                address 192.168.30.116:7789;
                meta-disk internal;
        }
        on node2.luojianlong.com {
                device  /dev/drbd0;
                disk    /dev/sdb1;
                address 192.168.30.117:7789;
                meta-disk internal;
        }
}
These files must be identical on both nodes, so the configuration just written can be copied wholesale to the other node over ssh.
[root@node1 ~]# scp /etc/drbd.d/* node2.luojianlong.com:/etc/drbd.d/
Initialize the defined resource on both nodes and start the service:
# Initialize the resource; run on both Node1 and Node2:
[root@node1 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
[root@node2 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
# Start the service; run on both Node1 and Node2:
[root@node1 ~]# /etc/init.d/drbd start
# Check the startup status:
[root@node1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2014-01-05 14:30:44
 0: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10482024
[root@node2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2014-01-05 14:30:44
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10482024
# The drbd-overview command can be used as well
[root@node2 ~]# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
[root@node1 ~]# drbd-overview
  0:web/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
At this point both nodes are Secondary and the data is Inconsistent; next, make node1 the Primary node and kick off the initial full synchronization:
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary web
# Then check the status again; the data synchronization has started
[root@node1 ~]# drbd-overview
  0:web/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
        [>...................] sync'ed:  5.5% (9684/10236)M
Once the data synchronization has finished, check the status again: the two nodes are now in real-time sync and the primary/secondary roles have been assigned
[root@node1 ~]# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# drbd-overview
  0:web/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
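Scripts that build on drbd (backup jobs, cluster hooks) often need to wait until the resource is fully synced before proceeding. A hedged sketch that just pattern-matches a drbd-overview line like the ones above; the helper name is ours:

```shell
# drbd_uptodate LINE -- succeed if a drbd-overview line reports both
# disks UpToDate (helper name is ours; it only pattern-matches the
# output format shown above).
drbd_uptodate() {
    case $1 in
        *UpToDate/UpToDate*) return 0 ;;
        *)                   return 1 ;;
    esac
}

# e.g. wait for the initial sync to finish:
#   until drbd_uptodate "$(drbd-overview | grep '0:web/0')"; do sleep 10; done
```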
Create a file system
[root@node1 ~]# mke2fs -t ext4 -L DRBD /dev/drbd0
[root@node1 ~]# mkdir /mnt/drbd
[root@node1 ~]# mount /dev/drbd0 /mnt/drbd
Switching the Primary and Secondary nodes: in a primary/secondary drbd setup only one node can be Primary at any given moment, so to swap the roles of the two nodes you must first demote the current Primary to Secondary before promoting the former Secondary to Primary:
[root@node1 ~]# cp /etc/fstab /mnt/drbd/
[root@node1 ~]# umount /mnt/drbd/
[root@node1 ~]# drbdadm secondary web
[root@node1 ~]# drbd-overview
  0:web/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# drbdadm primary web
[root@node2 ~]# drbd-overview
  0:web/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# mkdir /mnt/drbd
[root@node2 ~]# mount /dev/drbd0 /mnt/drbd/
[root@node2 ~]# ls /mnt/drbd/
fstab  lost+found
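The manual demote/promote sequence above can be wrapped in a small script. A sketch only: the function name and the RUN dry-run hook are ours, and the /dev/drbd0 device path is the one used in this article. Rehearse it with RUN=echo before pointing it at the live cluster:

```shell
# demote_and_promote RESOURCE MOUNTPOINT PEER -- run the manual
# switchover sequence shown above. Function name and the RUN hook are
# ours; with RUN=echo the commands are printed instead of executed.
demote_and_promote() {
    ${RUN:-} umount "$2"            || return 1
    ${RUN:-} drbdadm secondary "$1" || return 1
    ${RUN:-} ssh "$3" "drbdadm primary $1 && mount /dev/drbd0 $2"
}

# Dry run:
#   RUN=echo demote_and_promote web /mnt/drbd node2.luojianlong.com
```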
3. Next, install and configure corosync and pacemaker to make mariadb and drbd highly available, with automatic primary/standby failover between the two nodes
Install corosync and pacemaker on both node1 and node2
[root@node1 ~]# yum -y install corosync pacemaker
[root@node2 ~]# yum -y install corosync pacemaker
Stop and disable the NetworkManager service on both node1 and node2
[root@node1 ~]# service NetworkManager stop
[root@node1 ~]# chkconfig NetworkManager off
[root@node2 ~]# service NetworkManager stop
[root@node2 ~]# chkconfig NetworkManager off
Install crmsh-1.2.6-4 (and its pssh dependency) on both node1 and node2
[root@node1 ~]# yum -y --nogpgcheck localinstall crmsh*.rpm pssh*.rpm
[root@node2 ~]# yum -y --nogpgcheck localinstall crmsh*.rpm pssh*.rpm
Edit the corosync configuration file (/etc/corosync/corosync.conf):
totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.30.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: no
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service {
        ver: 0
        name: pacemaker
        # use_mgmtd: yes
}
aisexec {
        user: root
        group: root
}
Generate the cluster authentication key
[root@node1 ~]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 176).
Press keys on your keyboard to generate entropy (bits = 240).
Press keys on your keyboard to generate entropy (bits = 304).
Press keys on your keyboard to generate entropy (bits = 368).
Press keys on your keyboard to generate entropy (bits = 432).
Press keys on your keyboard to generate entropy (bits = 496).
Press keys on your keyboard to generate entropy (bits = 560).
Press keys on your keyboard to generate entropy (bits = 624).
Press keys on your keyboard to generate entropy (bits = 688).
Press keys on your keyboard to generate entropy (bits = 752).
Press keys on your keyboard to generate entropy (bits = 816).
Press keys on your keyboard to generate entropy (bits = 880).
Press keys on your keyboard to generate entropy (bits = 944).
Press keys on your keyboard to generate entropy (bits = 1008).
Writing corosync key to /etc/corosync/authkey.
[root@node1 corosync]# scp authkey corosync.conf [email protected]:/etc/corosync/
Start the corosync service
[root@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
Then start corosync on node2 by running the following command from node1
[root@node1 ~]# ssh node2 '/etc/init.d/corosync start'
Note: start node2 from node1 using the command above; do not start corosync directly on node2.
Make sure stonith is disabled and quorum loss is ignored (stonith-enabled=false, no-quorum-policy=ignore):
[root@node1 ~]# crm configure property stonith-enabled=false
[root@node1 ~]# crm configure property no-quorum-policy=ignore
[root@node1 ~]# crm configure show
node node1.luojianlong.com
node node2.luojianlong.com
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6_5.1-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1395731263"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"
As required for the cluster service, first make sure the drbd service is stopped on both nodes and will not start automatically at boot:
[root@node1 ~]# umount /mnt/drbd/
[root@node1 ~]# /etc/init.d/drbd stop
Stopping all DRBD resources: .
[root@node1 ~]# drbd-overview
drbd not loaded
[root@node1 ~]# chkconfig drbd off
[root@node2 ~]# /etc/init.d/drbd stop
Stopping all DRBD resources: .
[root@node2 ~]# drbd-overview
drbd not loaded
[root@node2 ~]# chkconfig drbd off
Configure drbd as a cluster resource. drbd must run on both nodes at the same time, but only one node can be Master (in the primary/secondary model) while the other is Slave. It is therefore a somewhat special cluster resource of the multi-state clone type: the nodes are split into Master and Slave roles, and when the service first starts both nodes are required to be in the Slave state.
[root@node1 ~]# crm
crm(live)# configure
# Define the mydrbd primitive resource
crm(live)configure# primitive mydrbd ocf:linbit:drbd params drbd_resource=web op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# commit
# Define the master/slave resource
crm(live)configure# master ms_mydrbd mydrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show mydrbd
primitive mydrbd ocf:linbit:drbd \
        params drbd_resource="web" \
        op monitor role="Master" interval="10" timeout="20" \
        op monitor role="Slave" interval="20" timeout="20" \
        op start timeout="240" interval="0" \
        op stop timeout="100" interval="0"
crm(live)configure# show ms_mydrbd
ms ms_mydrbd mydrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)# status
Last updated: Wed Mar 26 14:12:07 2014
Last change: Wed Mar 26 14:11:52 2014 via crmd on node2.luojianlong.com
Stack: classic openais (with plugin)
Current DC: node1.luojianlong.com - partition with quorum
Version: 1.1.10-14.el6_5.1-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.luojianlong.com node2.luojianlong.com ]
 Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ node1.luojianlong.com ]
     Slaves: [ node2.luojianlong.com ]
The information above shows that the drbd Primary node is now node1.luojianlong.com and the Secondary node is node2.luojianlong.com. You can also verify the roles from node2 with the command below; drbd-overview lists the local role first, so node2 reports Secondary/Primary:
[root@node2 ~]# drbd-overview
  0:web/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
Create a cluster service that automatically mounts the drbd resource on the Primary node
The Master node of ms_mydrbd is the drbd Primary; on that node the device /dev/drbd0 can be mounted and used, and the cluster services built on it will need it mounted automatically.
Furthermore, this auto-mount cluster resource must run on the drbd Master node, and it may only start after drbd has promoted that node to Primary. Therefore a colocation constraint and an order constraint are needed between these two resources.
# Create the data directory used to mount the drbd device on node1 and node2
[root@node1 ~]# mkdir /mydata
[root@node2 ~]# mkdir /mydata
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive myfs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op monitor interval=40 timeout=40 op start timeout=60 op stop timeout=60
crm(live)configure# colocation myfs_with_ms_mydrbd inf: myfs ms_mydrbd:Master
crm(live)configure# verify
crm(live)configure# commit
Check the state of the resources in the cluster
crm(live)# status
Last updated: Wed Mar 26 14:25:03 2014
Last change: Wed Mar 26 14:24:12 2014 via cibadmin on node1.luojianlong.com
Stack: classic openais (with plugin)
Current DC: node1.luojianlong.com - partition with quorum
Version: 1.1.10-14.el6_5.1-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1.luojianlong.com node2.luojianlong.com ]
 Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ node1.luojianlong.com ]
     Slaves: [ node2.luojianlong.com ]
 myfs   (ocf::heartbeat:Filesystem):    Started node1.luojianlong.com
4. Next, install mariadb on node1 and node2
[root@node1 ~]# useradd -r -u 99 mysql
[root@node1 ~]# tar xf mariadb-5.5.32-linux-x86_64.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -s mariadb-5.5.32-linux-x86_64 mysql
[root@node1 local]# cd mysql/
[root@node1 mysql]# chown -R root.mysql *
[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node1 mysql]# chmod +x /etc/rc.d/init.d/mysqld
# Before initializing, put node1 and node2 into standby, then mount the drbd
# device on node1 so the data is written onto the drbd device
[root@node1 ~]# crm node standby
[root@node2 ~]# crm node standby
[root@node1 ~]# mount /dev/drbd0 /mydata/
[root@node1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data
[root@node1 mysql]# vi /etc/my.cnf
# Add the following two lines
datadir = /mydata/data
innodb_file_per_table = 1
[root@node1 mysql]# service mysqld start
Starting MySQL...... SUCCESS!
[root@node1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.32-MariaDB-log MariaDB Server

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

[root@node1 ~]# service mysqld stop
[root@node1 ~]# chkconfig mysqld off
# Do the same on node2, but do not initialize mariadb again
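Since /etc/my.cnf must carry the two added lines on both nodes, a small check before starting mysqld can catch a node that was missed. A sketch; the helper name mycnf_ready is ours:

```shell
# mycnf_ready FILE -- succeed if FILE carries both settings added
# above (helper name is ours).
mycnf_ready() {
    grep -q '^datadir *= */mydata/data' "$1" &&
    grep -q '^innodb_file_per_table *= *1' "$1"
}

# e.g. on each node before starting mysqld:
#   mycnf_ready /etc/my.cnf || echo "/etc/my.cnf is missing the drbd datadir settings"
```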
Create a test user
mysql> grant all privileges on *.* to 'root'@'192.168.30.%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Create the VIP resource and the mariadb service resource for mariadb high availability
[root@node1 ~]# crm configure
crm(live)configure# primitive myvip ocf:heartbeat:IPaddr params ip=192.168.30.230 op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# primitive myserver lsb:mysqld op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# commit
Define colocation constraints among myvip, myserver, and myfs
crm(live)configure# colocation myserver_with_myfs inf: myserver myfs
crm(live)configure# colocation myvip_with_myserver mandatory: myvip myserver
crm(live)configure# verify
crm(live)configure# commit
Define the order constraints among the resources
crm(live)configure# order myfs_after_ms_mydrbd mandatory: ms_mydrbd:promote myfs:start
crm(live)configure# order myvip_before_myserver mandatory: myvip myserver
crm(live)configure# order myfs_before_myserver mandatory: myfs:start myserver:start
crm(live)configure# verify
crm(live)configure# commit
5. Testing mariadb high availability:
[root@node1 ~]# crm status
Last updated: Wed Mar 26 15:47:38 2014
Last change: Wed Mar 26 15:43:09 2014 via cibadmin on node1.luojianlong.com
Stack: classic openais (with plugin)
Current DC: node1.luojianlong.com - partition with quorum
Version: 1.1.10-14.el6_5.1-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ node1.luojianlong.com node2.luojianlong.com ]
 Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ node1.luojianlong.com ]
     Slaves: [ node2.luojianlong.com ]
 myfs   (ocf::heartbeat:Filesystem):    Started node1.luojianlong.com
 myserver       (lsb:mysqld):   Started node1.luojianlong.com
 myvip  (ocf::heartbeat:IPaddr):        Started node1.luojianlong.com
The status above shows that the master role is currently on node1
[root@localhost ~]# mysql -u root -p123456 -h 192.168.30.230
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.32-MariaDB-log MariaDB Server

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database abc;
Query OK, 1 row affected (0.00 sec)
Database access from the test machine works as expected.
Next, put node1 into standby and observe the result
[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Wed Mar 26 16:51:39 2014
Last change: Wed Mar 26 16:51:34 2014 via crm_attribute on node1.luojianlong.com
Stack: classic openais (with plugin)
Current DC: node1.luojianlong.com - partition with quorum
Version: 1.1.10-14.el6_5.1-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Node node1.luojianlong.com: standby
Online: [ node2.luojianlong.com ]
 Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ node2.luojianlong.com ]
     Stopped: [ node1.luojianlong.com ]
 myfs   (ocf::heartbeat:Filesystem):    Started node2.luojianlong.com
 myvip  (ocf::heartbeat:IPaddr):        Started node2.luojianlong.com
The status above shows that all resources have moved to node2, which is now the master node
[root@localhost ~]# mysql -u root -p123456 -h 192.168.30.230
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.32-MariaDB-log MariaDB Server

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| abc                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
Testing from the test machine shows that database access still works and the database created a moment ago is present.
This completes the highly available mariadb setup based on corosync, pacemaker, and drbd.