DRBD Installation and Configuration Notes on CentOS 6.3

----------------------- Preamble -------------------------------

   I've been planning to write up some load-balancing and high-availability documentation lately, so today I finally found the time to work through DRBD, something I had been meaning to tackle for a while.

   DRBD (Distributed Replicated Block Device) can be thought of as a network RAID-1: even if one of the two servers goes down due to power loss or a crash, the data is unaffected. Genuine hot failover can then be handled by a Heartbeat setup, with no manual intervention needed.

For example: DRBD+Heartbeat+MySQL for master/slave separation, or as the backup storage piece of a DRBD+HeartBeat+NFS solution.

-------------------- Enough talk, let's get started ---------------------------


OS version: CentOS 6.3 x64 (kernel 2.6.32)

DRBD: drbd-8.4.3


node1:   192.168.7.88(drbd1.example.com)

node2:   192.168.7.89 (drbd2.example.com)


(node1) — steps performed on the primary node only

(node2) — steps performed on the secondary node only

(node1,node2) — steps performed on both nodes


I. Prepare the environment: (node1,node2)


1. Disable iptables and SELinux to avoid errors during installation.

# service iptables stop

# setenforce 0

# vi /etc/sysconfig/selinux

---------------

SELINUX=disabled

---------------
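
To keep iptables from coming back after a reboot (a hedged extra step, not strictly required for the installation itself), you can also disable it at boot:

# chkconfig iptables off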


2. Set up the hosts file

# vi /etc/hosts

-----------------

192.168.7.88    drbd1.example.com    drbd1

192.168.7.89    drbd2.example.com    drbd2

-----------------
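
A quick sanity check (assuming both nodes are already online) is to ping the peer by name, e.g. from node1:

# ping -c 2 drbd2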


3. Add a 2 GB disk (sdb) to each VM for DRBD's use, create a 1 GB partition sdb1 on each, and create a /data directory on the local filesystem. Do not mount anything yet.

# fdisk /dev/sdb

----------------

n (new) -> p (primary) -> 1 (partition number) -> 1 (first cylinder) -> +1G (size) -> w (write)

----------------

# mkdir /data
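
To make the kernel re-read the partition table and confirm sdb1 exists (a hedged extra check), run:

# partprobe /dev/sdb

# fdisk -l /dev/sdb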


4. Time synchronization: (important)

# ntpdate -u asia.pool.ntp.org
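
To keep the two clocks from drifting apart again, a minimal sketch (assuming ntpdate remains installed) is a root cron entry that resyncs every 30 minutes:

# crontab -e

----------------

*/30 * * * * /usr/sbin/ntpdate -u asia.pool.ntp.org > /dev/null 2>&1

----------------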


5. Change the hostnames (they must match the "on" host names in drbd.conf below, i.e. the output of uname -n):

(node1)

# vi /etc/sysconfig/network

----------------

HOSTNAME=drbd1.example.com

----------------


(node2)

# vi /etc/sysconfig/network

----------------

HOSTNAME=drbd2.example.com

----------------
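
The value in /etc/sysconfig/network only takes effect after a reboot; to apply the name immediately in the running session (standard CentOS 6 practice), also run:

(node1)

# hostname drbd1.example.com

(node2)

# hostname drbd2.example.com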



II. Install and configure DRBD:


1. Install dependencies: (node1,node2)

# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers


2. Install DRBD: (node1,node2)

# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz

# tar zxvf drbd-8.4.3.tar.gz

# cd drbd-8.4.3

# ./configure --prefix=/usr/local/drbd --with-km

# make KDIR=/usr/src/kernels/2.6.32-279.el6.x86_64/

# make install

# mkdir -p /usr/local/drbd/var/run/drbd

# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d

# chkconfig --add drbd

# chkconfig drbd on
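
Note: the KDIR path must match your running kernel. If your kernel version differs from 2.6.32-279, a safer (hedged) invocation for the make step is:

# make KDIR=/usr/src/kernels/$(uname -r)/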

Load the DRBD module:

# modprobe drbd

Check that the DRBD module is loaded into the kernel:

# lsmod |grep drbd


3. Configure parameters: (node1,node2)

# vi /usr/local/drbd/etc/drbd.conf

Empty the file and add the following configuration:

---------------

resource r0{

protocol C;


startup { wfc-timeout 0; degr-wfc-timeout 120;}

disk { on-io-error detach;}

net{

  timeout 60;

  connect-int 10;

  ping-int 10;

  max-buffers 2048;

  max-epoch-size 2048;

}

syncer { rate 30M;}


on drbd1.example.com{

  device /dev/drbd0;

  disk   /dev/sdb1;

  address 192.168.7.88:7788;

  meta-disk internal;

}

on drbd2.example.com{

  device /dev/drbd0;

  disk   /dev/sdb1;

  address 192.168.7.89:7788;

  meta-disk internal;

}

}

---------------
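
Before creating metadata, it's worth validating the configuration syntax (a hedged extra step; with this --prefix layout, drbdadm reads /usr/local/drbd/etc/drbd.conf):

# drbdadm dump r0

If the file parses cleanly, drbdadm prints the resource definition back out; otherwise it reports the offending line.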


4. Create the DRBD device and activate the r0 resource: (node1,node2)

# mknod /dev/drbd0 b 147 0

# drbdadm create-md r0


After a moment, a "success" message indicates the DRBD metadata block was created successfully:

----------------

Writing meta data...

initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created.


               --== Creating metadata ==--

As with nodes, we count the total number of devices mirrored by DRBD

at http://usage.drbd.org.


The counter works anonymously. It creates a random number to identify

the device and sends that random number, along with the kernel and

DRBD version, to usage.drbd.org.


http://usage.drbd.org/cgi-bin/insert_usage.pl?nu=716310175600466686&ru=15741444353112217792&rs=1085704704


* If you wish to opt out entirely, simply enter 'no'.

* To continue, just press [RETURN]


success

----------------


Run the same command again:

# drbdadm create-md r0

r0 is now activated successfully:

----------------

[need to type 'yes' to confirm] yes


Writing meta data...

initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created.

----------------


5. Start the DRBD service: (node1,node2)

# service drbd start

Note: the service must be started on both nodes before it takes effect.


6. Check the status: (node1,node2)

# cat /proc/drbd

----------------


version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1060184

----------------

# service drbd status

----------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res  cs         ro                   ds                         p  mounted  fstype

0:r0   Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

----------------

Here ro:Secondary/Secondary means both hosts are in the secondary role; ds is the disk state, shown as "Inconsistent" because DRBD cannot yet tell which side is the primary, i.e. which side's disk data should be treated as authoritative.


7. Configure drbd1.example.com as the primary node: (node1)


# drbdsetup /dev/drbd0 primary --force
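
Note: drbdsetup addresses the device node directly; the equivalent resource-level command in DRBD 8.4 (offered here as a hedged alternative) is:

# drbdadm primary --force r0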

Check the DRBD status on the primary and the secondary respectively:

(node1)

# service drbd status

--------------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res  cs         ro                 ds                 p  mounted  fstype

0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C

---------------------

(node2)

# service drbd status

---------------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:49:06

m:res  cs         ro                 ds                 p  mounted  fstype

0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

---------------------

ro shows Primary/Secondary on the primary and Secondary/Primary on the secondary,

and ds shows UpToDate/UpToDate,

which means the primary/secondary setup succeeded.
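
If you check right after the promotion, ds may still show UpToDate/Inconsistent while the initial synchronization runs; you can watch its progress (a hedged suggestion) with:

# watch -n 1 cat /proc/drbd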


8. Mount DRBD: (node1)


The status output above showed empty mounted and fstype fields, so in this step we format the DRBD device and mount it onto a system directory:

# mkfs.ext4 /dev/drbd0

# mount /dev/drbd0 /data
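
A quick, optional check (hedged) that the filesystem is really mounted:

# df -h /data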

Note: no operations are allowed on the DRBD device on the Secondary node, not even read-only access; all reads and writes must go through the Primary node. Only when the Primary fails can the Secondary be promoted to Primary and take over.


9. Simulate a failure of drbd1; drbd2 takes over and is promoted to Primary

(node1)

# cd /data

# touch 1 2 3 4 5

# cd ..

# umount /data

# drbdsetup /dev/drbd0 secondary

Note: in a real production environment, if drbd1 actually went down, ro in drbd2's status output would show Secondary/Unknown, and all you would need to do is the promotion step on drbd2.

(node2)

# drbdsetup /dev/drbd0 primary

# mount /dev/drbd0 /data

# cd /data

# touch 6 7 8 9 10

# ls

--------------

1  10  2  3  4  5  6  7  8  9  lost+found

--------------

# service drbd status

--------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:49:06

m:res  cs         ro                 ds                 p  mounted  fstype

0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    ext4

--------------


(node1)

# service drbd status

---------------

drbd driver loaded OK; device status:

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-05-27 20:45:19

m:res  cs         ro                 ds                 p  mounted  fstype

0:r0   Connected  Secondary/Primary  UpToDate/UpToDate  C

---------------


And that's DRBD done...


But making the DRBD primary/secondary pair fail over intelligently, for true high availability, is a job for Heartbeat.


When the DRBD primary goes down, Heartbeat automatically promotes the secondary to primary and mounts the /data partition.


Note: (adapted from the 2nd edition of 酒哥's book on building high-availability Linux servers)

Suppose you take down eth0 on the Primary, promote the Secondary to Primary directly, and mount the device. You may find that the test files written on the Primary did indeed sync over. Then bring the Primary's eth0 back up and check whether the primary/secondary relationship recovers on its own. It does not: DRBD detects a Split-Brain condition and both nodes end up in StandAlone state, with the error "Split-Brain detected, dropping connection!". This is the infamous "split brain".

Here is the manual recovery procedure officially recommended by DRBD:

(node2)

# drbdadm secondary r0

# drbdadm disconnect all

# drbdadm --discard-my-data connect r0

(node1)

# drbdadm disconnect all

# drbdadm connect r0

# drbdsetup /dev/drbd0 primary
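
If the recovery succeeds, cs should return to Connected on both nodes; verify with:

# service drbd status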


What I actually simulated was rebooting the Primary and immediately promoting the Secondary; once the Primary finished rebooting I demoted it again and checked the status on both sides. Both ended up in StandAlone state, so strangely enough I also hit the split-brain situation. I'm not sure why. If you have experience with this, please share your insight.


DRBD+HeartBeat+NFS follow-up: http://showerlee.blog.51cto.com/2047005/1212185

