Configuring and Using DRBD

Introduction:

DRBD (Distributed Replicated Block Device): a distributed replicated block device

DRBD: primary/secondary
-primary: read and write operations are allowed
-secondary: the filesystem cannot be mounted
DRBD: dual primary
-Disk Scheduler: merges read requests and merges write requests
Protocols:
-A: async, asynchronous replication
-B: semi-sync, semi-synchronous replication
-C: sync, synchronous replication
DRBD Resource:
-Resource name: any ASCII characters except whitespace;
-DRBD device: the device file of this DRBD device on both nodes; usually /dev/drbdN, with major number 147
-Disk: the backing storage device each node provides;
-Network configuration: the network properties used when the two nodes synchronize data;

Environment:

drbd1: 10.11.8.145
drbd2: 10.11.8.158

Installation:

Prerequisites: time synchronization, hosts resolution, and passwordless SSH trust between the two nodes

kernel 2.6.32 and earlier: build and install drbd from source
kernel 2.6.33 and later (the module is merged into the mainline kernel): only the management tools are needed: build drbd-utils
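The prerequisites above can be sketched as shell commands. This is a sketch, not part of the original setup: the NTP server name is an assumption, and the IPs/hostnames come from the Environment section above.

```shell
# Run on both nodes. The NTP server below is an assumption; use your own.
ntpdate cn.pool.ntp.org                    # one-shot time synchronization

cat >> /etc/hosts << 'EOF'
10.11.8.145 drbd1
10.11.8.158 drbd2
EOF

# Passwordless SSH trust (run on drbd1; repeat the mirror image on drbd2)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@drbd2
```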

Provide the drbd configuration file:

root@drbd1:~# cat /etc/drbd.conf
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

/etc/drbd.d/global_common.conf : global configuration file
/etc/drbd.d/*.res : resource definition files

root@drbd1:~# vim /etc/drbd.d/global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com
 
global {
    usage-count yes; # participate in the online usage counter
    # minor-count dialog-refresh disable-ip-verification
    # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}
 
common {
    handlers { # handler scripts
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
 
         pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
         pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
         local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
 
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
 
    options {
        # cpu-mask on-no-data-accessible
    }
 
    disk {
        resync-rate 1000M; # resynchronization rate
        on-io-error detach; # action taken on a disk I/O error
        # size on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
    }
 
    net {
        cram-hmac-alg "sha1"; # HMAC algorithm used for peer authentication
        shared-secret "Z5yWHwfgV3Ca"; # shared secret used for authentication
        protocol C; # replication protocol
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
}

usage-count {val}: Please participate in DRBD's online usage counter [http://usage.drbd.org]. The most convenient way to do so is to set this option to yes. Valid options are: yes, no and ask.
pri-on-incon-degr {cmd}: This handler is called if the node is primary, degraded and if the local copy of the data is inconsistent.
pri-lost-after-sb {cmd}: The node is currently primary, but lost the after-split-brain auto recovery procedure. As a consequence, it should be abandoned.

For details on each parameter, see the official documentation: https://www.drbd.org/en/doc/

Provide the disk device backing the resource:
lvcreate -L 4G -n data vol1

Here I created a new logical volume, /dev/dm-2
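
If the volume group does not exist yet, the full LVM setup might look like the following sketch. The backing disk /dev/sdb is an assumption; only the lvcreate line appears in the original.

```shell
pvcreate /dev/sdb            # hypothetical backing disk
vgcreate vol1 /dev/sdb       # the volume group used above
lvcreate -L 4G -n data vol1  # the logical volume for the DRBD resource
lvs vol1                     # verify; the LV also appears as /dev/dm-N
```

Run the equivalent on drbd2 as well: DRBD expects a backing device of the same size on the peer node.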

Create the resource configuration file:

root@drbd1:~# vim /etc/drbd.d/data.res
resource data {
        device  /dev/drbd0;
        disk    /dev/dm-2;
        meta-disk       internal;
        on drbd1 {
                address 10.11.8.145:7789;
        }
        on drbd2 {
                address 10.11.8.158:7789;
        }
}

PS: DRBD's registered port range is 7788 - 7799
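
If a firewall is active on the nodes, that port range must be open between them. A minimal iptables sketch (assuming the default INPUT chain and the subnet from the Environment section):

```shell
# Allow DRBD replication traffic from the peer subnet
iptables -A INPUT -s 10.11.8.0/24 -p tcp --dport 7788:7799 -j ACCEPT
```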

Copy the configuration files to drbd2:
root@drbd1:~# scp /etc/drbd.d/* drbd2:/etc/drbd.d/

Initialize the resource on both nodes: (this step must be done before the service can be started)

root@drbd1:~# drbdadm create-md data
root@drbd2:~# drbdadm create-md data
Start the drbd service:
root@drbd1:~# service drbd start
root@drbd2:~# service drbd start
Check drbd status:
root@drbd1:~# drbd-overview
 0:data/0  Connected Secondary/Secondary Inconsistent/Inconsistent
Promote drbd1 to primary and start the initial sync:
root@drbd1:~# drbdadm primary data --force
root@drbd1:~# drbd-overview
  0:data/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r----
    [============>.......] sync'ed: 66.2% (172140/505964)K delay_probe: 35
root@drbd1:~# drbd-overview
 0:data/0  Connected Primary/Secondary UpToDate/UpToDate
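
When scripting around DRBD, the role column of drbd-overview can be pulled out with a small helper. This is a sketch: drbd_roles is a hypothetical name, and the field position assumes the output format shown above.

```shell
# Extract the "local/peer" role pair (third whitespace-separated field)
# from a drbd-overview status line.
drbd_roles() {
    echo "$1" | awk '{print $3}'
}

drbd_roles " 0:data/0  Connected Primary/Secondary UpToDate/UpToDate"
# → Primary/Secondary
```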

Failover test:

root@drbd1:~# mke2fs -j /dev/drbd0 # this may take a while
root@drbd1:~# mount /dev/drbd0 /data/drbd/
Write some data

Note: to mount on drbd2, the device must first be unmounted on drbd1.
Also, because a DRBD device can only be mounted on the primary node, drbd1 must be demoted to secondary and drbd2 promoted to primary before it can be mounted on drbd2.

root@drbd1:~# umount /dev/drbd0 # must unmount first
root@drbd1:~# drbdadm secondary data
root@drbd2:~# drbdadm primary data
root@drbd2:~# mount /dev/drbd0 /data/drbd/ # mount on drbd2
  Check the data
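
The manual switchover steps above can be wrapped in a small helper. This is a sketch: the function name and the RUN override are assumptions, not part of DRBD; RUN=echo previews the commands without executing them.

```shell
#!/bin/sh
# Demote the local node so that the peer can take over.
# Set RUN=echo to print the commands instead of executing them.
RUN=${RUN:-}

demote() {
    res=$1
    $RUN umount /dev/drbd0          # DRBD refuses to demote while mounted
    $RUN drbdadm secondary "$res"   # demote the local node
}

# On the peer, the matching steps would be:
#   drbdadm primary <res> && mount /dev/drbd0 /data/drbd/
```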

Done. What follows is supplementary.

Basic drbdadm commands:

drbdadm up # enable a resource
drbdadm down # disable a resource
drbdadm primary # promote a resource
drbdadm secondary # demote a resource
drbdadm create-md # initialize resource metadata
drbdadm adjust # re-apply the resource configuration
drbdadm connect # establish the connection
drbdadm disconnect # tear down the connection
-d, --dry-run : only print the commands that would be run, without executing them
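
For example, a dry run of adjust shows the low-level commands drbdadm would issue without touching the resource:

```shell
drbdadm -d adjust data   # print the drbdsetup commands that would be run
```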

PS: for recovery from a DUnknown state, see: DRBD failure recovery

Example configuration for a dual-primary resource:

 
resource mydrbd {
 
        net {
                protocol C;
                allow-two-primaries yes;
        }
 
        startup {
                become-primary-on both;
        }
 
        disk {
                fencing resource-and-stonith;
        }
 
        handlers {
                # Make sure the other node is confirmed
                # dead after this!
                outdate-peer "/sbin/kill-other-node.sh";
        }
 
        on node1 {
                device  /dev/drbd0;
                disk    /dev/vg0/mydrbd;
                address 172.16.200.11:7789;
                meta-disk       internal;
        }
 
        on node2 {
                device  /dev/drbd0;
                disk    /dev/vg0/mydrbd;
                address 172.16.200.12:7789;
                meta-disk       internal;
        }
}
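
To actually run dual-primary, both nodes are promoted after the configuration above is applied. Note that an ordinary filesystem such as ext3 must never be mounted on both nodes at once; dual-primary requires a cluster filesystem (e.g. GFS2 or OCFS2). A sketch of the bring-up commands:

```shell
# On both nodes, after distributing the updated mydrbd resource file:
drbdadm adjust mydrbd    # re-apply the configuration
drbdadm primary mydrbd   # with allow-two-primaries, both nodes can hold primary
```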
