Configuring DRBD Three-Node Mode on CentOS 7 (HA Cluster + Backup Node)

Environment

CentOS Linux release 7.4.1708 (Core)

DRBDADM_BUILDTAG=GIT-hash:\ ee126652638328b55dc6bff47d07d6161ab768db\ build\ by\ root@drbd-node2\,\ 2018-07-30\ 22:23:07
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x09000e
DRBD_KERNEL_VERSION=9.0.14
DRBDADM_VERSION_CODE=0x090500
DRBDADM_VERSION=9.5.0
 

Note that DRBDADM_VERSION here is 9.5, whereas in the earlier blog posts it was 9.3.

With version 9.3, data could not be synchronized between the HA Cluster and the Backup Node; with 9.5 it works.

 

Network Topology

[Figure 1: network topology diagram]

Procedure

Pre-installation Preparation

Prepare the nodes by following the two earlier posts, 《Centos6下drbd9安装与基本配置》 (installing and configuring DRBD 9 on CentOS 6) and 《Centos7下配置DRBD搭建Active/Stanby iSCSi Cluster》 (building an Active/Standby iSCSI cluster with DRBD on CentOS 7).

Configure DRBD

Edit the DRBD configuration file as shown below. All three nodes use the same configuration file; the address 192.168.0.225 is the HA Cluster VIP:

[root@drbd-node1 ~]# vi /etc/drbd.d/scsivol.res
resource scsivol {
        protocol C;
        device  /dev/drbd0;
        disk    /dev/vdb1;
        meta-disk       internal;

        on drbd-node1 {
                address 10.10.200.228:7788;
        }
        on drbd-node2 {
                address 10.10.200.229:7788;
        }
}

resource scsivol-U {
        protocol A;
        stacked-on-top-of scsivol {
                device  /dev/drbd10;
                address  192.168.0.225:7789;
        }
        on drbd-node3 {
                device  /dev/drbd10;
                disk    /dev/vdb1;
                address 192.168.0.226:7789;
                meta-disk internal;
        }
}
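
Before bringing anything up, the configuration can be sanity-checked on each node; drbdadm dump parses the resource files and prints the resulting configuration (a quick check, not shown in the original post):

[root@drbd-node1 ~]# drbdadm dump all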

Configure the HA Cluster VIP

Set the HA Cluster VIP to 192.168.0.225 as follows:

1. Configure the HA resource parameters

[root@drbd-node1 ~]# crm
crm(live)# configure
crm(live)configure# primitive p_drbd_r0 ocf:linbit:drbd \
  params drbd_resource="scsivol" \
  op start timeout=240 \
  op promote timeout=90 \
  op demote timeout=90 \
  op stop timeout=100 \
  op monitor interval="29" role="Master" \
  op monitor interval="31" role="Slave"

2. Configure the Master/Slave set

crm(live)configure# ms ms_drbd_r0 p_drbd_r0 \
  meta master-max=1 master-node-max=1 \
  notify=true clone-max=2 clone-node-max=1

3. Configure the VIP

primitive p_ip_stacked ocf:heartbeat:IPaddr2 \
params ip="192.168.0.225" nic="ens7"

4. Once the configuration is complete, 192.168.0.225 should be reachable by ping from all three nodes; a quick check is sketched below.
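
Assuming the directives above were entered in an interactive crm configure session, they only take effect after a commit; the commit and a reachability check might look like this (neither step is shown in the original post):

crm(live)configure# commit
crm(live)configure# quit

[root@drbd-node3 ~]# ping -c 3 192.168.0.225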

Initialize the HA Cluster

Bring up scsivol on drbd-node1 and drbd-node2:

[root@drbd-node1 ~]# drbdadm create-md scsivol
[root@drbd-node1 ~]# drbdadm up scsivol 

[root@drbd-node2 ~]# drbdadm create-md scsivol
[root@drbd-node2 ~]# drbdadm up scsivol 

Set drbd-node1 as the primary node:

[root@drbd-node1 ~]# drbdadm primary scsivol
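
If both nodes still report their disks as Inconsistent at this point (no initial sync has taken place yet), the first promotion typically has to be forced so that drbd-node1's copy becomes the initial data set; this is an assumption about the state, not a step from the original post:

[root@drbd-node1 ~]# drbdadm primary --force scsivol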

Check the HA Cluster status:

[root@drbd-node1 ~]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 0:scsivol/0    Connected(2*) Primar/Second UpToDa/UpToDa 
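
drbd-overview truncates the role and disk states (Primar/Second, UpToDa/UpToDa above). When the full state names are needed, drbdadm status can be used instead, for example:

[root@drbd-node1 ~]# drbdadm status scsivol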

Initialize the Backup Node

Initialize scsivol-U on the primary node drbd-node1:

[root@drbd-node1 ~]# drbdadm create-md --stacked scsivol-U
[root@drbd-node1 ~]# drbdadm up --stacked scsivol-U
[root@drbd-node1 ~]# drbdadm primary --stacked scsivol-U

Initialize scsivol-U on the backup node drbd-node3:

[root@drbd-node3 ~]# drbdadm create-md scsivol-U
[root@drbd-node3 ~]# drbdadm up scsivol-U 
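
drbd-node3 reaches scsivol-U through the HA Cluster VIP 192.168.0.225:7789, i.e. through whichever HA node is currently running the stacked resource (drbd-node1 here), so the initial full sync is served from that node. The sync progress can be followed from the backup node, for example (not shown in the original post):

[root@drbd-node3 ~]# watch -n 2 drbdadm status scsivol-U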

Check the DRBD cluster status on the primary node drbd-node1; the connection state of both resources can now be seen:

[root@drbd-node1 ~]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 0:scsivol/0    Connected(2*) Primar/Second UpToDa/UpToDa 
10:scsivol-U/0  Connected(2*) Primar/Second UpToDa/UpToDa 

Configure the Stacked Resource in crm

The backup node has already been configured at the DRBD level above; now it also needs to be configured in crm:

primitive p_drbd_r0-U ocf:linbit:drbd \
params drbd_resource="scsivol-U"
ms ms_drbd_r0-U p_drbd_r0-U \
meta master-max="1" clone-max="1" \
clone-node-max="1" master-node-max="1" \
notify="true" globally-unique="false"
colocation c_drbd_r0-U_on_drbd_r0 \
inf: ms_drbd_r0-U ms_drbd_r0:Master
colocation c_drbd_r0-U_on_ip \
inf: ms_drbd_r0-U p_ip_stacked
order o_ip_before_r0-U \
inf: p_ip_stacked ms_drbd_r0-U:start
order o_drbd_r0_before_r0-U \
inf: ms_drbd_r0:promote ms_drbd_r0-U:start
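
As with the earlier resources, these directives are assumed to be entered in the crm configure shell and committed. The resulting configuration can be reviewed before checking the cluster status:

[root@drbd-node1 ~]# crm configure show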

When finished, check the crm status:

[root@drbd-node1 mnt]# crm status
Stack: corosync
Current DC: drbd-node1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Jul 31 21:50:59 2018
Last change: Tue Jul 31 21:37:44 2018 by root via cibadmin on drbd-node1

2 nodes configured
4 resources configured

Online: [ drbd-node1 drbd-node2 ]

Full list of resources:

 Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
     Masters: [ drbd-node1 ]
     Slaves: [ drbd-node2 ]
 p_ip_stacked   (ocf::heartbeat:IPaddr2):       Started drbd-node1
 Master/Slave Set: ms_drbd_r0-U [p_drbd_r0-U]
     Masters: [ drbd-node1 ]

Simulated Crash Test

Now we test what happens when drbd-node1 crashes: after the simulated failure, the DRBD status on drbd-node2 shows that scsivol-U has been started on drbd-node2 automatically.

Before taking drbd-node1 down, we first write some data to the DRBD device on drbd-node1:

[root@drbd-node1 ~]# mount /dev/drbd10 /mnt/
[root@drbd-node1 ~]# cd /mnt/
[root@drbd-node1 mnt]# ls
[root@drbd-node1 mnt]# touch a
[root@drbd-node1 mnt]# dd if=/dev/zero of=a bs=1M count=102400
dd: error writing ‘a’: No space left on device
9495+0 records in
9494+0 records out
9955340288 bytes (10 GB) copied, 703.805 s, 14.1 MB/s
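
To make the later check on the backup node stronger than just comparing the file listing and size, a checksum of the file could be recorded before the crash; this step is hypothetical and not part of the original test:

[root@drbd-node1 mnt]# md5sum a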

Check the DRBD status:

[root@drbd-node1 mnt]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 0:scsivol/0    Connected(2*) Primar/Second UpToDa/UpToDa 
10:scsivol-U/0  Connected(2*) Primar/Second UpToDa/UpToDa /mnt xfs 9.4G 9.4G 20K 100% 
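
The original post does not show how the crash of drbd-node1 was triggered. On a disposable test VM, one way to simulate a hard failure (an assumption about the method; never run this on a production machine) is to crash the kernel via sysrq, if sysrq is enabled, or simply to power the VM off:

[root@drbd-node1 ~]# echo c > /proc/sysrq-trigger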

Curiously, while drbd-node1 was healthy the scsivol-U disk states showed UpToDa/UpToDa, yet once drbd-node1 went down and drbd-node2 took over scsivol-U, the resource started resynchronizing data again.

[root@drbd-node2 ~]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 0:scsivol/0    Connec/C'ting Primar/Unknow UpToDa/DUnkno 
10:scsivol-U/0  Connected(2*) Primar/Second UpToDa/Incons 
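
Since scsivol-U replicates with protocol A (asynchronously), some resynchronization of data that had not yet reached the backup node is presumably to be expected after the takeover, which would explain the UpToDa/Incons state above. The progress can be followed on drbd-node2, for example:

[root@drbd-node2 ~]# drbdadm status scsivol-U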

Next we simulate a crash of drbd-node2 as well, and check the DRBD status on drbd-node3:

[root@drbd-node3 ~]# drbd-overview 
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

10:scsivol-U/0  Connec/C'ting Second/Unknow UpToDa/DUnkno 
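
drbd-node3 is still Secondary at this point, and a DRBD device cannot be mounted while it is in the Secondary role, so it presumably has to be promoted first; the original post omits this step, and the command below is an assumption:

[root@drbd-node3 ~]# drbdadm primary scsivol-U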

Check the data on the DRBD device:

[root@drbd-node3 ~]# mount /dev/drbd10 /mnt/
[root@drbd-node3 ~]# cd /mnt/
[root@drbd-node3 mnt]# ls
a
[root@drbd-node3 mnt]# ll
total 9722012
-rw-r--r-- 1 root root 9955340288 Jul 31 21:50 a
[root@drbd-node3 mnt]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   44G  2.8G   42G   7% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  8.4M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  145M  870M  15% /boot
tmpfs                    783M     0  783M   0% /run/user/0
/dev/drbd10              9.4G  9.4G   20K 100% /mnt

The data is consistent with what was written before drbd-node1 went down.
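
If a checksum had been recorded before the crash (the hypothetical md5sum step earlier), it could be compared here as well to confirm the file contents rather than just its size:

[root@drbd-node3 mnt]# md5sum a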
