Configuring DRBD in Three-Node Mode (Cluster) on CentOS 7

Environment

CentOS Linux release 7.4.1708 (Core)

DRBDADM_BUILDTAG=GIT-hash:\ ee126652638328b55dc6bff47d07d6161ab768db\ build\ by\ root@drbd-node2\,\ 2018-07-30\ 22:23:07
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x09000e
DRBD_KERNEL_VERSION=9.0.14
DRBDADM_VERSION_CODE=0x090500
DRBDADM_VERSION=9.5.0

Network Topology

Unlike the setup in 《Centos7下配置DRBD三节点模式(HA Cluster+Backup Node)》, in the three-node mode described here all three nodes belong to one cluster, and every pair of nodes replicates with Protocol C. In that other article, the two nodes inside the HA cluster replicate with Protocol C, while the HA cluster and the backup node replicate with Protocol A.
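
Protocol C is fully synchronous replication (a write completes only after all peers have it on disk), while Protocol A is asynchronous. Protocol C is also DRBD's default, so the resource files below do not set it explicitly; if you wanted to spell it out per connection, it would look roughly like this sketch, reusing the first connection from the configuration below:

connection {
        host drbd-node1   port 7010;
        host drbd-node2   port 7001;
        net {
                protocol C;    # wait for the write to reach the peer's disk
        }
}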

[Figure 1: network topology of the three-node DRBD cluster]

Procedure

Pre-installation Preparation

For the preparation work, refer to the two earlier posts 《Centos6下drbd9安装与基本配置》 and 《Centos7下配置DRBD搭建Active/Stanby iSCSi Cluster》.
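
For convenience, here is a minimal sketch of the common per-node preparation, assuming the DRBD 9 kernel module and userland are already installed as in those posts, that the nodes resolve each other by hostname, and that firewalld is running; the addresses and port range match the resource files below:

# make the peer hostnames resolvable (addresses from this article)
cat >> /etc/hosts <<'EOF'
10.10.200.228 drbd-node1
10.10.200.229 drbd-node2
10.10.200.226 drbd-node3
EOF

# open the replication ports used by the resource files below
firewall-cmd --permanent --add-port=7000-7021/tcp
firewall-cmd --reload

# confirm the DRBD 9 kernel module loads and matches the userland
modprobe drbd
cat /proc/drbd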

DRBD Configuration

Write the DRBD configuration file as follows and copy it to all three nodes (a sketch for distributing and verifying it follows the listing). By the connection-count formula C = N(N-1)/2, where C is the number of connections and N the number of nodes, a 3-node cluster needs 3 connections (for comparison, 4 nodes would need 4×3/2 = 6), so the configuration file contains three connection sections:

[root@drbd-node1 drbd.d]# vi /etc/drbd.d/scsivol.res
resource scsivol {
        device  /dev/drbd1;
        disk    /dev/vdb1;
        meta-disk internal;
        on drbd-node1 {
                address 10.10.200.228:7000;
                node-id 0;
        }
        on drbd-node2 {
                address 10.10.200.229:7001;
                node-id 1;
        }
        on drbd-node3 {
                address 10.10.200.226:7002;
                node-id 2;
        }


        connection {
                host drbd-node1   port 7010;
                host drbd-node2   port 7001;
        }
        connection {
                host drbd-node1   port 7020;
                host drbd-node3   port 7002;
        }
        connection {
                host drbd-node2   port 7012;
                host drbd-node3   port 7021;
        }
}
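
After writing the file on one node, push it to the other two and check that all three parse it identically; a sketch:

# distribute the resource file (paths as used in this article)
scp /etc/drbd.d/scsivol.res root@drbd-node2:/etc/drbd.d/
scp /etc/drbd.d/scsivol.res root@drbd-node3:/etc/drbd.d/

# on each node, dump the configuration as drbdadm parsed it;
# syntax errors surface here instead of at "drbdadm up" time
drbdadm dump scsivol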

Instead of writing out the configuration as above, you can use the following form: with connection-mesh there is no need to spell out every connection one by one.

resource scsivol {
        device  /dev/drbd1;
        disk    /dev/vdb1;
        meta-disk internal;
        on drbd-node1 {
                address 10.10.200.228:7000;
                node-id 0;
        }
        on drbd-node2 {
                address 10.10.200.229:7001;
                node-id 1;
        }
        on drbd-node3 {
                address 10.10.200.226:7002;
                node-id 2;
        }
        connection-mesh {
                hosts drbd-node1 drbd-node2 drbd-node3;
                net {
                        use-rle no;
                }
        }
}
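
Note that connection-mesh derives every connection from the hosts' address lines, so all connections to a given node use that node's single configured port. Once the resource is up, the effective configuration, including the connections the mesh expanded to, can be inspected with drbdsetup:

# print the running configuration of the resource
drbdsetup show scsivol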

If the servers have multiple network interfaces, you can bind each connection to a specific interface for DRBD replication traffic (a connectivity check follows the listing):

resource scsivol {
        device  /dev/drbd1;
        disk    /dev/vdb1;
        meta-disk internal;
        on drbd-node1 {
                address 10.10.200.228:7000;
                node-id 0;
        }
        on drbd-node2 {
                address 10.10.200.229:7001;
                node-id 1;
        }
        on drbd-node3 {
                address 10.10.200.226:7002;
                node-id 2;
        }


        connection {
                host drbd-node1 192.168.0.228  port 7010;
                host drbd-node2 192.168.0.229  port 7001;
        }
        connection {
                host drbd-node1 192.168.1.228  port 7020;
                host drbd-node3 192.168.1.226  port 7002;
        }
        connection {
                host drbd-node2 192.168.2.229  port 7012;
                host drbd-node3 192.168.2.226  port 7021;
        }
}
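
Before bringing up a resource with per-connection addresses like these, it is worth confirming that each replication subnet is reachable from the intended source address; a quick check from drbd-node1, using the addresses in the listing above:

# verify each dedicated replication path from drbd-node1
ping -c 3 -I 192.168.0.228 192.168.0.229    # path to drbd-node2
ping -c 3 -I 192.168.1.228 192.168.1.226    # path to drbd-node3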

DRBD Resource Initialization

Run the following commands on each of the three nodes:

[root@drbd-node2 ~]# drbdadm create-md scsivol
md_offset 107374178304
al_offset 107374145536
bm_offset 107367591936

Found xfs filesystem
     9765552 kB data area apparently used
   104851164 kB left usable by current configuration

Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (6400 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
[root@drbd-node2 ~]# drbdadm up scsivol
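
Once create-md and up have run on all three nodes, and before any node is promoted, a status check makes a useful sanity point. The exact wording varies by DRBD version, but the peers should show as connected and every disk as Inconsistent, since no initial synchronization has happened yet:

# run on any node: expect connected peers and Inconsistent disks
# until a primary is forced and the initial sync begins
drbdadm status scsivol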

After initializing the DRBD resource on all three nodes, choose drbd-node1 as the primary node:

[root@drbd-node1 drbd.d]# drbdadm primary scsivol --force

Check the DRBD status:

[root@drbd-node1 drbd.d]# drbdadm status scsivol
scsivol role:Primary
  disk:UpToDate
  drbd-node2 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:22.22
  drbd-node3 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:34.05
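
The done: fields in this output are the sync progress in percent. To follow the initial synchronization until it finishes, the simplest option is to poll the status:

# refresh every second until both peers report peer-disk:UpToDate
watch -n1 drbdadm status scsivol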

After the synchronization completes, the status looks like this:

[root@drbd-node1 drbd.d]# drbdadm status
scsivol role:Primary
  disk:UpToDate
  drbd-node2 role:Secondary
    peer-disk:UpToDate
  drbd-node3 role:Secondary
    peer-disk:UpToDate

Failover Testing

First, on the primary node, create a filesystem on the DRBD device, mount it, and write 10 GB of data (a checksum step for later verification follows the listing):

[root@drbd-node1 drbd.d]# mkfs.xfs /dev/drbd1 -f
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=6553198 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26212791, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@drbd-node1 drbd.d]# mount /dev/drbd1 /mnt/
[root@drbd-node1 drbd.d]# cd /mnt/
[root@drbd-node1 mnt]# touch a
[root@drbd-node1 mnt]# dd if=/dev/zero of=a bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 258.881 s, 41.5 MB/s
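
As an extra verification step beyond a simple ls (an addition to the original procedure), you can fingerprint the test file now and re-run the same command after each failover; matching output shows the replica survived intact:

# record a checksum of the test data; repeat on the new primary
# after each failover and compare
md5sum /mnt/a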

With the data written, power off drbd-node1 to simulate a primary-node failure. Checking the status on drbd-node2 at this point, both drbd-node2 and drbd-node3 are still in Secondary state:

[root@drbd-node2 /]# drbdadm status
scsivol role:Secondary
  disk:UpToDate
  drbd-node1 connection:Connecting
  drbd-node3 role:Secondary
    peer-disk:UpToDate

On drbd-node2, mount the DRBD device at /mnt; the data is intact.

[root@drbd-node2 /]# mount /dev/drbd1 /mnt/
[root@drbd-node2 /]# ls /mnt/
a
[root@drbd-node2 /]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   41G  2.1G   39G   6% /
devtmpfs                 486M     0  486M   0% /dev
tmpfs                    1.0G   54M  971M   6% /dev/shm
tmpfs                    497M  6.6M  490M   2% /run
tmpfs                    497M     0  497M   0% /sys/fs/cgroup
/dev/vda1               1014M  127M  888M  13% /boot
tmpfs                    100M     0  100M   0% /run/user/0
/dev/drbd1               100G  9.1G   91G  10% /mnt

Now look at the cluster status again: after the DRBD device was mounted, drbd-node2 changed to Primary. (DRBD 9 promotes a node automatically when its device is opened for writing, which mounting does.)

[root@drbd-node2 /]# drbdadm status
scsivol role:Primary
  disk:UpToDate
  drbd-node1 connection:Connecting
  drbd-node3 role:Secondary
    peer-disk:UpToDate
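
This implicit promotion works because DRBD 9 enables the auto-promote resource option by default. To make the setting explicit, or to turn it off and require a manual drbdadm primary, it goes in an options section; a sketch:

resource scsivol {
        options {
                auto-promote yes;    # the default; "no" forces manual promotion
        }
        ...
}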

Next, simulate a failure of drbd-node2 as well. The DRBD status on drbd-node3 now looks like this:

[root@drbd-node3 ~]# drbdadm status
scsivol role:Secondary
  disk:UpToDate
  drbd-node1 connection:Connecting
  drbd-node2 connection:Connecting

Mount the DRBD device at /mnt and check the data:

[root@drbd-node3 ~]# mount /dev/drbd1 /mnt/
[root@drbd-node3 ~]# ls /mnt/
a
[root@drbd-node3 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   44G  2.8G   42G   7% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  8.4M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  145M  870M  15% /boot
tmpfs                    783M     0  783M   0% /run/user/0
/dev/drbd1               100G  9.1G   91G  10% /mnt

Check the DRBD status: drbd-node3 is now the primary node:

[root@drbd-node3 ~]# drbdadm status
scsivol role:Primary
  disk:UpToDate
  drbd-node1 connection:Connecting
  drbd-node2 connection:Connecting
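
To complete the exercise, failing back takes little manual work (a sketch, assuming the failed nodes return with their configuration intact): unmount on drbd-node3 so auto-promote demotes it, bring the resource up on the returning nodes, and DRBD resynchronizes only the blocks written while they were away.

# on drbd-node3: releasing the device drops it back to Secondary
umount /mnt

# on drbd-node1 / drbd-node2 after they boot, if the resource is
# not brought up automatically by the drbd service
drbdadm up scsivol

# on any node: wait until every disk reports UpToDate again
drbdadm status scsivol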

 
