Configuring DRBD Cluster Node Expansion on CentOS 7

Environment

CentOS Linux release 7.4.1708 (Core)
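
The DRBD version information below is the output of drbdadm --version: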

DRBDADM_BUILDTAG=GIT-hash:\ ee126652638328b55dc6bff47d07d6161ab768db\ build\ by\ root@drbd-node2\,\ 2018-07-30\ 22:23:07
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x09000e
DRBD_KERNEL_VERSION=9.0.14
DRBDADM_VERSION_CODE=0x090500
DRBDADM_VERSION=9.5.0

Network Topology

The original DRBD cluster consists of 4 nodes and needs to be expanded to 5; the node being added is drbd-node5.

[Figure 1: network topology of the expanded five-node DRBD cluster]

Procedure

Original DRBD Cluster Status

[root@drbd-node1 drbd.d]# drbdadm status
scsivol role:Secondary
  disk:UpToDate
  drbd-node2 role:Secondary
    peer-disk:UpToDate
  drbd-node3 role:Secondary
    peer-disk:UpToDate
  drbd-node4 role:Secondary
    peer-disk:UpToDate

Expanding the DRBD Cluster

Modify the DRBD Configuration File

The original configuration file is as follows:

[root@drbd-node1 drbd.d]# vi scsivol.res
resource scsivol {
        device  /dev/drbd1;
        disk    /dev/vdb1;
        meta-disk internal;
        on drbd-node1 {
                address 10.10.200.228:7000;
                node-id 0;
        }
        on drbd-node2 {
                address 10.10.200.229:7001;
                node-id 1;
        }
        on drbd-node3 {
                address 10.10.200.226:7002;
                node-id 2;
        }
        on drbd-node4 {
                address 10.10.200.230:7003;
                node-id 3;
        }
        connection-mesh {
                hosts drbd-node1 drbd-node2 drbd-node3 drbd-node4;
                net {
                        use-rle no;
                }
        }
}

Add an entry for the new node drbd-node5, then distribute the updated configuration file to every node in the DRBD cluster (a scripted sketch of the distribution follows the listing). The updated file:

[root@drbd-node1 drbd.d]# vi scsivol.res
resource scsivol {
        device  /dev/drbd1;
        disk    /dev/vdb1;
        meta-disk internal;
        on drbd-node1 {
                address 10.10.200.228:7000;
                node-id 0;
        }
        on drbd-node2 {
                address 10.10.200.229:7001;
                node-id 1;
        }
        on drbd-node3 {
                address 10.10.200.226:7002;
                node-id 2;
        }
        on drbd-node4 {
                address 10.10.200.230:7003;
                node-id 3;
        }
        on drbd-node5 {
                address 10.10.200.231:7004;
                node-id 4;
        }
        connection-mesh {
                hosts drbd-node1 drbd-node2 drbd-node3 drbd-node4 drbd-node5;
                net {
                        use-rle no;
                }
        }
}
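
Distribution can be scripted. A minimal sketch, assuming passwordless SSH as root, resolvable node hostnames, and DRBD already installed on drbd-node5:

[root@drbd-node1 drbd.d]# for node in drbd-node2 drbd-node3 drbd-node4 drbd-node5; do scp /etc/drbd.d/scsivol.res root@${node}:/etc/drbd.d/; done

Running drbdadm dump scsivol on each node afterwards re-parses the file and will report any syntax errors.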

Configure the DRBD Cluster Metadata

Before the cluster can be expanded, the maximum peer count (max-peers) stored in the on-disk metadata must be increased; otherwise the new node cannot be added and the kernel logs an error like the following:

 drbd-node1 kernel: drbd scsivol/0 drbd1 drbd-node5: Not enough free bitmap slots
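
This message lands in the kernel log, so on an affected node it can be located with, for example:

[root@drbd-node1 ~]# grep 'Not enough free bitmap slots' /var/log/messages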

1. On every existing node, dump the DRBD metadata. Here scsivol is the DRBD resource name defined in the configuration file above.

[root@drbd-node1 drbd.d]# drbdmeta --force $(drbdadm sh-minor scsivol) v09 $(drbdadm sh-ll-dev scsivol) internal dump-md > scsivol.metadata
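
The two command substitutions expand to the device minor number and the backing disk defined in scsivol.res; on this cluster they resolve to 1 and /dev/vdb1, which the dump header below also records:

[root@drbd-node1 drbd.d]# drbdadm sh-minor scsivol
1
[root@drbd-node1 drbd.d]# drbdadm sh-ll-dev scsivol
/dev/vdb1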

The metadata dump is shown below. Two changes are required: raise max-peers from 3 to 4 (each node in a five-node cluster has four peers), and append a matching bitmap[3] slot, since the dump carries one bitmap per peer slot.

[root@drbd-node1 drbd.d]# vi scsivol.metadata 
# DRBD meta data dump
# 2018-08-23 19:24:06 +0800 [1535023446]
# drbd-node1> drbdmeta --force 1 v09 /dev/vdb1 internal dump-md
#

version "v09";

This_is_an_unclean_meta_data_dump._Don't_trust_the_bitmap.
# You should "apply-al" first, if you plan to restore this.

max-peers 3;
# md_size_sect 3912
# md_offset 21474832384
# al_offset 21474799616
# bm_offset 21472833536
......
# al-extents 1237;
la-size-sect 20969528;
bm-byte-per-bit 4096;
device-uuid 0xBE4D681B6892E302;
la-peer-max-bio-size 1048576;
al-stripes 1;
al-stripe-size-4k 8;
# bm-bytes 983040;
bitmap[0] {
   # at 0kB
    40960 times 0x0000000000000000;
}
bitmap[1] {
   # at 0kB
    40960 times 0x0000000000000000;
}
bitmap[2] {
   # at 0kB
    4096 times 0xFFFFFFFFFFFFFFFF;
   # at 3145728kB
    36864 times 0x0000000000000000;
}
# bits-set 262144;

The modified metadata, with max-peers raised and bitmap[3] appended:

[root@drbd-node1 drbd.d]# vi scsivol.metadata 
# DRBD meta data dump
# 2018-08-23 19:24:06 +0800 [1535023446]
# drbd-node1> drbdmeta --force 1 v09 /dev/vdb1 internal dump-md
#

version "v09";

This_is_an_unclean_meta_data_dump._Don't_trust_the_bitmap.
# You should "apply-al" first, if you plan to restore this.

max-peers 4;
# md_size_sect 3912
# md_offset 21474832384
# al_offset 21474799616
# bm_offset 21472833536
......
# bm-bytes 983040;
bitmap[0] {
   # at 0kB
    40960 times 0x0000000000000000;
}
bitmap[1] {
   # at 0kB
    40960 times 0x0000000000000000;
}
bitmap[2] {
   # at 0kB
    4096 times 0xFFFFFFFFFFFFFFFF;
   # at 3145728kB
    36864 times 0x0000000000000000;
}
bitmap[3] {
   # at 0kB
    40960 times 0x0000000000000000;
}
# bits-set 262144;
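
Hand-editing the dump is error-prone, so the two changes can also be scripted. This is only a sketch: it assumes GNU sed (stock on CentOS 7) and exactly the dump layout shown above, and the 40960-word all-zero bitmap matches this particular volume size, so verify the edited file before restoring it.

[root@drbd-node1 drbd.d]# sed -i 's/^max-peers 3;/max-peers 4;/' scsivol.metadata
[root@drbd-node1 drbd.d]# sed -i '/^# bits-set/i bitmap[3] {\n   # at 0kB\n    40960 times 0x0000000000000000;\n}' scsivol.metadata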


2. Restore the metadata

Before restoring the metadata, take the scsivol resource down:

[root@drbd-node1 drbd.d]# drbdadm down scsivol

Then, on all existing nodes, restore the modified metadata:

[root@drbd-node1 drbd.d]# drbdmeta --force $(drbdadm sh-minor scsivol) v09 $(drbdadm sh-ll-dev scsivol) internal restore-md scsivol.metadata 

Valid meta-data in place, overwrite?
*** confirmation forced via --force option ***
reinitializing
Successfully restored meta data
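
As an optional sanity check, re-running the dump from step 1 should now report the raised peer count:

[root@drbd-node1 drbd.d]# drbdmeta --force $(drbdadm sh-minor scsivol) v09 $(drbdadm sh-ll-dev scsivol) internal dump-md | grep max-peers
max-peers 4;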

Initialize the New Node

On the new node drbd-node5, initialize the DRBD resource metadata:

[root@drbd-node5 drbd.d]# drbdadm create-md scsivol
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/vdb1 at byte offset 21475233792

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 21475233792
al_offset 21475201024
bm_offset 21472575488

Found some data

 ==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (2564 KB) to all zero
ioctl(/dev/vdb1, BLKZEROOUT, [21472575488, 2625536]) failed: Inappropriate ioctl for device
Using slow(er) fallback.
100%
Writing meta data...
New drbd meta data block successfully created.
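
A dry run prints the underlying drbdmeta call; the trailing number is the peer count that drbdadm derives from the five-node res file (illustrative output, assuming the drbd-utils version listed above):

[root@drbd-node5 drbd.d]# drbdadm -d create-md scsivol
drbdmeta 1 v09 /dev/vdb1 internal create-md 4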

Start the DRBD Resource

On drbd-node5, and on each existing node where scsivol was taken down earlier, bring the resource back up:

[root@drbd-node5 ~]# drbdadm up scsivol
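
The initial sync toward drbd-node5 can be watched from any node; while it runs, the new peer is listed in a SyncSource/SyncTarget replication state with a completion percentage:

[root@drbd-node1 ~]# watch -n2 drbdadm status scsivol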

After the initial synchronization completes, check the DRBD cluster status:

[root@drbd-node1 ~]# drbdadm status
scsivol role:Secondary
  disk:UpToDate
  drbd-node2 role:Secondary
    peer-disk:UpToDate
  drbd-node3 role:Secondary
    peer-disk:UpToDate
  drbd-node4 role:Secondary
    peer-disk:UpToDate
  drbd-node5 role:Secondary
    peer-disk:UpToDate

All nodes are now in the Secondary role, so promote drbd-node1 back to Primary:

[root@drbd-node1 ~]# drbdadm primary scsivol
[root@drbd-node1 ~]# drbdadm status
scsivol role:Primary
  disk:UpToDate
  drbd-node2 role:Secondary
    peer-disk:UpToDate
  drbd-node3 role:Secondary
    peer-disk:UpToDate
  drbd-node4 role:Secondary
    peer-disk:UpToDate
  drbd-node5 role:Secondary
    peer-disk:UpToDate

Summary

DRBD's node-expansion support is still rough around the edges: strange problems come up frequently during configuration, and the same steps do not succeed every time.

 
