Reposted from: http://www.dearda.com/index.php/archives/2020

Another good write-up on the configuration files: https://blog.51cto.com/aaronsa/2130434

Installing, configuring and verifying drbd-9.0.17 on CentOS 7.6
da, 2019-04-16
Both machines run CentOS 7.6; uname -r: 3.10.0-957.el7.x86_64

Download the following from the DRBD official site:
drbd-9.0.17-1.tar.gz
drbd-utils-9.8.0.tar.gz

Add the following hosts entries on both machines and confirm they are in effect:
192.168.1.108 centos1
192.168.1.109 centos2
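A quick way to confirm the entries took effect on each machine (getent resolves through the normal NSS lookup path, which includes /etc/hosts; the hostnames are the ones defined above):

```shell
# Resolve both peer names the same way system tools will.
getent hosts centos1 centos2
# Basic reachability check from one node to its peer:
ping -c 1 centos2
```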

1. Install the drbd kernel module
Install the kernel packages with yum install kernel-*, which pulls in kernel-devel, kernel-headers, kernel-tools, and so on.
Note that the installed kernel-devel version must match uname -r; the build step below depends on it, and a mismatch produces the error "Module drbd not found".
Afterwards the new directory /usr/src/kernels/3.10.0-957.el7.x86_64 exists.
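A minimal pre-build check, assuming the directory layout that yum's kernel-devel package creates on CentOS 7:

```shell
# The module build needs kernel headers matching the running kernel;
# a mismatch is what produces the "Module drbd not found" error.
running=$(uname -r)
if [ -d "/usr/src/kernels/$running" ]; then
    echo "kernel-devel matches running kernel: $running"
else
    echo "mismatch: /usr/src/kernels/$running not found" >&2
fi
```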

Build and install the drbd kernel module with make:
tar xzf drbd-9.0.17-1.tar.gz
cd drbd-9.0.17-1
make KDIR=/usr/src/kernels/3.10.0-957.el7.x86_64/
make install

The following new files exist after installation:
ll /lib/modules/3.10.0-957.el7.x86_64/updates/
total 13832
-rw-r--r-- 1 root root 13496680 Apr 16 19:31 drbd.ko
-rw-r--r-- 1 root root 662264 Apr 16 19:31 drbd_transport_tcp.ko

Load the drbd module and confirm it is active:
modprobe drbd
lsmod|grep drbd
drbd 554407 0
libcrc32c 12644 4 xfs,drbd,nf_nat,nf_conntrack
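Note that modprobe does not survive a reboot. On CentOS 7, systemd-modules-load reads /etc/modules-load.d/ at boot, so one way to make the module load automatically is:

```shell
# Ask systemd to load drbd at every boot.
echo drbd > /etc/modules-load.d/drbd.conf
# Confirm it is loaded in the running kernel right now:
lsmod | grep -q '^drbd ' && echo "drbd module loaded"
```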

2. Install drbd-utils
yum install -y flex po4a libxslt docbook*
tar xzf drbd-utils-9.8.0.tar.gz
cd drbd-utils-9.8.0
./configure --prefix=/usr/local/drbd-utils-9.8.0 --without-83support --without-84support
make && make install

Copy the drbd-overview.pl script from the scripts directory of the source tree to a system path:
cd drbd-9.0.17-1/scripts
cp -p drbd-overview.pl /usr/sbin/
chmod +x /usr/sbin/drbd-overview.pl
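With the non-standard --prefix used above, the drbd-utils binaries (drbdadm, drbdsetup, drbdmeta) end up under /usr/local/drbd-utils-9.8.0/sbin, which is not on the default PATH. A sketch of one fix, assuming that sbin layout:

```shell
# Expose the drbd-utils sbin directory to all login shells.
echo 'export PATH=$PATH:/usr/local/drbd-utils-9.8.0/sbin' > /etc/profile.d/drbd-utils.sh
. /etc/profile.d/drbd-utils.sh
# drbdadm should now resolve to the freshly installed binary:
command -v drbdadm
```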

3. Configure drbd

Add a new disk sdb to both machines and partition it as sdb1 with fdisk.

There are three configuration files in total:
/usr/local/drbd-utils-9.8.0/etc/drbd.conf: keep the defaults
/usr/local/drbd-utils-9.8.0/etc/drbd.d/global_common.conf: keep the defaults
Create a new resource file:
vim /usr/local/drbd-utils-9.8.0/etc/drbd.d/r000.res
with the following content:

resource r000 {
on centos1 {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.1.108:8888;
meta-disk internal;
}
on centos2 {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.1.109:8888;
meta-disk internal;
}
}
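Before creating metadata it is worth letting drbdadm parse the new file; drbdadm dump prints the resolved resource definition and aborts with an error message if the syntax is wrong:

```shell
# Parse all config files and print resource r000 as drbdadm sees it.
drbdadm dump r000
```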

Create the metadata and bring the resource up (run on both nodes):
drbdadm create-md r000
drbdadm up r000

Check each node's role; both are currently Secondary:
[root@centos1 block]# drbdadm role r000
Secondary
[root@centos2 drbd-utils-9.8.0]# drbdadm role r000
Secondary

Promote one node to primary:
drbdadm primary --force r000
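Forcing primary kicks off the initial full sync. The loop below assumes drbd 9's drbdadm status output, where the peer's disk state is printed as peer-disk:UpToDate:

```shell
# Show the resource state (drbd 9's replacement for /proc/drbd).
drbdadm status r000
# Wait until the peer's disk has caught up before relying on it:
until drbdadm status r000 | grep -q 'peer-disk:UpToDate'; do
    sleep 5
done
echo "initial sync finished"
```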

Check the drbd status by running drbd-overview.pl.
The primary node shows:
[root@centos1 ~]# drbd-overview.pl
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

1:r000/0 Connected(2*) Primar/Second UpToDa/UpToDa

The secondary node shows:
[root@centos2 scripts]# drbd-overview.pl
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

1:r000/0 Connected(2*) Second/Primar UpToDa/UpToDa

4. Verify drbd primary/secondary replication

The previous step set up /dev/drbd1 as the replicated device; from here on, treat it like an ordinary disk.

On the primary node, create a filesystem, mount it, and write some files:
mkfs.ext4 /dev/drbd1
mkdir /mnt/drbd_dir
mount /dev/drbd1 /mnt/drbd_dir
dd if=/dev/zero of=/mnt/drbd_dir/100M.file bs=1M count=100
touch /mnt/drbd_dir/test{1,2,3,4,5}
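A file listing only proves the names made it across; to verify content byte-for-byte after the switchover, a checksum can be stored on the replicated filesystem itself so it travels to the peer (an extra step, not part of the original article):

```shell
# The .md5 file lives on /dev/drbd1, so it replicates with the data.
md5sum /mnt/drbd_dir/100M.file > /mnt/drbd_dir/100M.file.md5
```

Once the other node has been promoted and has mounted the device, running md5sum -c /mnt/drbd_dir/100M.file.md5 there confirms the copy is identical.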

Check the drbd status again; 120M is now in use:
[root@centos1 ~]# drbd-overview.pl
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

1:r000/0 Connected(2*) Primar/Second UpToDa/UpToDa /mnt/drbd_dir ext4 4.8G 120M 4.5G 3%

List the files in the replicated directory:
[root@centos1 ~]# ll /mnt/drbd_dir
total 102416
-rw-r--r-- 1 root root 104857600 Apr 16 23:19 100M.file
drwx------ 2 root root 16384 Apr 16 23:19 lost+found
-rw-r--r-- 1 root root 0 Apr 16 23:25 test1
-rw-r--r-- 1 root root 0 Apr 16 23:25 test2
-rw-r--r-- 1 root root 0 Apr 16 23:25 test3
-rw-r--r-- 1 root root 0 Apr 16 23:25 test4
-rw-r--r-- 1 root root 0 Apr 16 23:25 test5

Verify that the files were replicated to the secondary node.

On the primary, unmount the device and demote it:
umount /dev/drbd1
drbdadm secondary r000

Promote the secondary node to primary and mount the filesystem:
drbdadm primary r000
mkdir /mnt/drbd_dir
mount /dev/drbd1 /mnt/drbd_dir

The files match what was seen on the original primary:
[root@centos2 scripts]# ll /mnt/drbd_dir
total 102416
-rw-r--r-- 1 root root 104857600 Apr 16 23:19 100M.file
drwx------ 2 root root 16384 Apr 16 23:19 lost+found
-rw-r--r-- 1 root root 0 Apr 16 23:25 test1
-rw-r--r-- 1 root root 0 Apr 16 23:25 test2
-rw-r--r-- 1 root root 0 Apr 16 23:25 test3
-rw-r--r-- 1 root root 0 Apr 16 23:25 test4
-rw-r--r-- 1 root root 0 Apr 16 23:25 test5

Below is an example of a dual-primary configuration. Combined with Pacemaker, Corosync and resource agents, and with OCFS and DLM configured on top, this can provide an active/active DRBD resource.

resource mydrbd {

        net {
                protocol C;
                allow-two-primaries yes;
        }

        startup {
                become-primary-on both;
        }

        disk {
                fencing resource-and-stonith;
        }

        handlers {
                # Make sure the other node is confirmed
                # dead after this!
                outdate-peer "/sbin/kill-other-node.sh";
        }

        on node1 {
                device  /dev/drbd0;
                disk    /dev/vg0/mydrbd;
                address 172.16.200.11:7789;
                meta-disk       internal;
        }

        on node2 {
                device  /dev/drbd0;
                disk    /dev/vg0/mydrbd;
                address 172.16.200.12:7789;
                meta-disk       internal;
        }
}