user --> HA ( dual-node hot standby, 436 ) --> LB ( scheduler ) --> application ( www, ftp ) --> SQL ( mysql, pgsql, oracle, redis ) --> file system ( mfs, hdfs ) --> I/O ( SSD )
RHCA 442 413 318
Virtualization: RHEV, OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Distributed file system: MFS (MooseFS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://www.moosefs.org/
Master:        192.168.2.149
Chunkserver 1: 192.168.2.150
Chunkserver 2: 192.168.2.125
Client:        192.168.2.126
When shutting the stack down, stop the client first ( then the chunkservers, then the master ).
1. Master server -- host IP: 192.168.2.149
(1) Download the MFS package:   lftp i
    get mfs-1.6.27-1.tar.gz
(2) Build dependencies:   yum install -y gcc make rpm-build fuse-devel zlib-devel
(3) Build the rpm packages:   rpmbuild -tb mfs-1.6.27-1.tar.gz   ( sometimes the "-1" suffix is not recognized; just rename the tarball with mv -- see "Problems encountered" below )
Result: + umask 022
+ cd /root/rpmbuild/BUILD
+ cd mfs-1.6.27
+ rm -rf /root/rpmbuild/BUILDROOT/mfs-1.6.27-2.x86_64
+ exit 0
Generated rpm packages:   ls /root/rpmbuild/RPMS/x86_64/
mfs-cgi-1.6.27-2.x86_64.rpm mfs-client-1.6.27-2.x86_64.rpm
mfs-cgiserv-1.6.27-2.x86_64.rpm mfs-master-1.6.27-2.x86_64.rpm
mfs-chunkserver-1.6.27-2.x86_64.rpm mfs-metalogger-1.6.27-2.x86_64.rpm
(4) Install the master and CGI packages   ( on the master host ):   cd /root/rpmbuild/RPMS/x86_64/
    rpm -ivh mfs-master-1.6.27-2.x86_64.rpm     ( the master daemon )
             mfs-cgi-1.6.27-2.x86_64.rpm        ( CGI monitoring pages )
             mfs-cgiserv-1.6.27-2.x86_64.rpm    ( CGI monitoring web server )
Result: Preparing... ########################################### [100%]
1:mfs-cgi ########################################### [ 33%]
2:mfs-cgiserv ########################################### [ 67%]
3:mfs-master ########################################### [100%]
(5) Copy the sample config files:   cp /etc/mfs/mfsexports.cfg.dist /etc/mfs/mfsexports.cfg
cp /etc/mfs/mfsmaster.cfg.dist /etc/mfs/mfsmaster.cfg
cp /etc/mfs/mfstopology.cfg.dist /etc/mfs/mfstopology.cfg
Result: ls /etc/mfs/
mfsexports.cfg mfsmaster.cfg mfstopology.cfg
mfsexports.cfg.dist mfsmaster.cfg.dist mfstopology.cfg.dist
cp /var/lib/mfs/metadata.mfs.empty /var/lib/mfs/metadata.mfs
( Without this copy mfsmaster will not start. Once the master is running, metadata.mfs disappears (replaced by metadata.mfs.back); it reappears after the master is stopped. )
Result: ls /var/lib/mfs/   ( data directory )
metadata.mfs metadata.mfs.empty
(6) Fix ownership. Reason:   vim /etc/mfs/mfsmaster.cfg
    # WORKING_USER = nobody   ( the default working user is nobody )
    ( Every variable commented out with # in this file keeps its default value; the master works with essentially no changes. A sample of the relevant defaults is shown after this step. )
    Fix:   chown nobody /var/lib/mfs/ -R   ( this is where MFS stores its data; the daemon runs as nobody by default, so that user must own the directory )
Result: ll /var/lib/mfs/*
-rw-r--r-- 1 nobody root 8 Aug 4 10:32 /var/lib/mfs/metadata.mfs
-rw-r--r-- 1 nobody root 8 Aug 4 10:24 /var/lib/mfs/metadata.mfs.empty
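For reference, a minimal excerpt of /etc/mfs/mfsmaster.cfg with the defaults that matter here (the values match what the startup log below reports; treat this as a hedged sample, not the full file):
    # WORKING_USER = nobody          ( the daemon drops to this user, hence the chown above )
    # DATA_PATH = /var/lib/mfs       ( metadata, changelogs and session files live here )
    # MATOML_LISTEN_PORT = 9419      ( master <-> metaloggers )
    # MATOCS_LISTEN_PORT = 9420      ( master <-> chunkservers )
    # MATOCL_LISTEN_PORT = 9421      ( master <-> clients / mfsmount )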
(7) Start the service:   mfsmaster start
Result: working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... file not found
if it is not fresh installation then you have to restart all active mounts !!!
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
(8) Check the files:   ls /var/lib/mfs/   ( metadata.mfs is now gone; sessions.mfs and metadata.mfs.back have appeared )
Result: metadata.mfs.back metadata.mfs.empty sessions.mfs
(9) Start the CGI monitor:   mfscgiserv
Result: lockfile created and locked
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/share/mfscgi)
(10) Make the CGI scripts executable:   chmod +x /usr/share/mfscgi/*.cgi
     Check:   ll /usr/share/mfscgi/
-rwxr-xr-x 1 root root 1881 Aug 4 10:24 chart.cgi ****
-rwxr-xr-x 1 root root 270 Aug 4 10:24 err.gif
-rwxr-xr-x 1 root root 562 Aug 4 10:24 favicon.ico
-rwxr-xr-x 1 root root 510 Aug 4 10:24 index.html
-rwxr-xr-x 1 root root 3555 Aug 4 10:24 logomini.png
-rwxr-xr-x 1 root root 107456 Aug 4 10:24 mfs.cgi *****
-rwxr-xr-x 1 root root 5845 Aug 4 10:24 mfs.css
(11) Web access:   http://192.168.2.149:9425
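If the page at :9425 loads but cannot reach the master, the usual cause is that the name "mfsmaster" does not resolve on the master host itself (the CGI defaults to connecting to "mfsmaster"). A hedged one-line fix, assuming the addressing used above:
    echo "192.168.2.149   mfsmaster" >> /etc/hosts    # on 192.168.2.149 itself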
2. Chunkserver 1 -- host IP: 192.168.2.150
(1) Install mfs-chunkserver
    scp /root/rpmbuild/RPMS/x86_64/mfs-chunkserver-1.6.27-2.x86_64.rpm 192.168.2.150:   ( run on host 149 )
rpm -ivh mfs-chunkserver-1.6.27-2.x86_64.rpm
Result: Preparing... ########################################### [100%]
1:mfs-chunkserver ########################################### [100%]
(2) Copy the sample config files:   cp /etc/mfs/mfschunkserver.cfg.dist /etc/mfs/mfschunkserver.cfg
cp /etc/mfs/mfshdd.cfg.dist /etc/mfs/mfshdd.cfg
(3) Name resolution:   vim /etc/hosts
    Add:   192.168.2.149 mfsmaster
(4) Add a virtual disk   ( add one disk via the virt-manager GUI )
    fdisk -cu /dev/vdb   ( n, p, 1, Enter, Enter, t, 8e, p, w )
    p:  Device Boot Start End Blocks Id System
/dev/vdb1 2048 16777215 8387584 8e Linux LVM
pv: pvcreate /dev/vdb1
vg: vgcreate mfsvg /dev/vdb1
lv: lvcreate -L 4g -n demo mfsvg
lvs:
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 4.91g
lv_swap VolGroup -wi-ao---- 612.00m
demo mfsvg -wi-a----- 4.00g
Format:   mkfs.ext4 /dev/mfsvg/demo
(5) Mount
    Create the mount point:   mkdir /mnt/chunk1
    Add to fstab:   vim /etc/fstab
        /dev/mfsvg/demo /mnt/chunk1 ext4 defaults 0 0
    Test the mount:   mount -a
    Check the mount:  df
    Result: /dev/mapper/mfsvg-demo 4128448 139256 3779480 4% /mnt/chunk1
(6) Register the storage path:   vim /etc/mfs/mfshdd.cfg
        /mnt/chunk1
(7) Fix ownership:   chown -R nobody.nobody /mnt/chunk1/
(8) Create the data directory   ( where the chunkserver keeps its lock and stats files )
    Reason:   ll /var/lib/mfs
              ls: cannot access /var/lib/mfs: No such file or directory
    Create:    mkdir /var/lib/mfs
    Ownership: chown -R nobody /var/lib/mfs/
(9) Start the service:   mfschunkserver
Result: working directory: /var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mnt/chunk1/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
After the service starts, the following files are created:
ls /var/lib/mfs/.mfschunkserver.lock   ( hidden file )
/var/lib/mfs/.mfschunkserver.lock
ls /mnt/chunk1/
0 0C 18 24 30 3C 48 54 60 6C 78 84 90 9C A8 B4 C0 CC
D8 E4 F0 FC 01 0D 19 25 31 3D 49 55 61 6D 79 85 91 9D ....... FF
(10) Ports:   netstat -antlp
Result: tcp 0 0 0.0.0.0:9422 0.0.0.0:* LISTEN 1717/mfschunkserver
tcp 0 0 192.168.2.150:47737 192.168.2.149:9420 ESTABLISHED 1717/mfschunkserver
(11) Extend the LVM volume
     Extend:   lvextend -l +1023 /dev/mfsvg/demo
     Result:   Extending logical volume demo to 8.00 GiB
               Logical volume demo successfully resized
     Check:    lvs
     Result:   demo mfsvg -wi-ao---- 8.00g
     df is unchanged:   df -h
     Result:   /dev/mapper/mfsvg-demo 4.0G 137M 3.7G 4% /mnt/chunk1
resize2fs /dev/mfsvg/demo
Result: resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mfsvg/demo is mounted on /mnt/chunk1; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mfsvg/demo to 2096128 (4k) blocks.
The filesystem on /dev/mfsvg/demo is now 2096128 blocks long.
Check:   df -h
/dev/mapper/mfsvg-demo 7.9G 138M 7.4G 2% /mnt/chunk1
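Side note (not what was done above): newer lvextend builds can grow the filesystem in the same step, which avoids forgetting the resize2fs run. A hedged one-liner, assuming the same ext4 LV:
    lvextend -r -l +100%FREE /dev/mfsvg/demo    # -r / --resizefs runs the filesystem resize right after extending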
3. Chunkserver 2 -- host IP: 192.168.2.125
(1) Install:   scp mfs-chunkserver-1.6.27-2.x86_64.rpm 192.168.2.125:   ( run on host 150 )
rpm -ivh mfs-chunkserver-1.6.27-2.x86_64.rpm
Result: Preparing... ########################################### [100%]
1:mfs-chunkserver ########################################### [100%]
(2) Name resolution:   vim /etc/hosts
    Add:   192.168.2.149 mfsmaster
(3) Data directory:   mkdir /var/lib/mfs
                      chown -R nobody /var/lib/mfs/
(4) Add a virtual disk via the GUI
    fdisk -cu /dev/vdb   ( n, p, 1, Enter, Enter, t, 8e, p, w )
p result: /dev/vdb1 2048 16777215 8387584 8e Linux LVM
pv: pvcreate /dev/vdb1
vg: vgcreate mfsvg /dev/vdb1
lv: lvcreate -L 8g -n demo mfsvg
lvs:
Result:   demo mfsvg -wi-a----- 8.00g
Format:   mkfs.ext4 /dev/mfsvg/demo
(5) Mount
mkdir /mnt/chunk1
vim /etc/fstab
Add:   /dev/mfsvg/demo /mnt/chunk1 ext4 defaults 0 0
mount -a
Result: /dev/mapper/mfsvg-demo 8252856 149492 7684140 2% /mnt/chunk1
chown -R nobody.nobody /mnt/chunk1/
(6) Copy the sample config files:   cp /etc/mfs/mfschunkserver.cfg.dist /etc/mfs/mfschunkserver.cfg
                                    cp /etc/mfs/mfshdd.cfg.dist /etc/mfs/mfshdd.cfg
(7) Storage path:   vim /etc/mfs/mfshdd.cfg
        /mnt/chunk1
(8) Start the service:   mfschunkserver
(9) Check in the web UI: the graphical page --> Servers
4. Client -- host IP: 192.168.2.126
(1) Install:   scp /root/rpmbuild/RPMS/x86_64/mfs-client-1.6.27-2.x86_64.rpm 192.168.2.126:   ( run on host 149 )
    yum localinstall -y mfs-client-1.6.27-2.x86_64.rpm
(2) Copy the sample config file
    cp /etc/mfs/mfsmount.cfg.dist /etc/mfs/mfsmount.cfg
(3) Name resolution:   vim /etc/hosts
    Add:   192.168.2.149 mfsmaster
(4) Mount:   mkdir /mnt/mfs
    vim /etc/mfs/mfsmount.cfg
    Add:   /mnt/mfs
mfsmount
Result: mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
df
Result: mfsmaster:9421 14851200 0 14851200 0% /mnt/mfs
(5) Test:   cp /etc/passwd /mnt/mfs/
mfsfileinfo /mnt/mfs/passwd
Result: /mnt/mfs/passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
copy 1: 192.168.2.150:9422
mkdir /mnt/mfs/dir1
mkdir /mnt/mfs/dir2
mv /mnt/mfs/passwd /mnt/mfs/dir1/
mfsgetgoal /mnt/mfs/dir2/
Result: /mnt/mfs/dir2/: 1
mfsgetgoal /mnt/mfs/dir1
Result: /mnt/mfs/dir1: 1
mfssetgoal -r 2 /mnt/mfs/dir2/
Result: /mnt/mfs/dir2/:
inodes with goal changed: 1
inodes with goal not changed: 0
inodes with permission denied: 0
cp /etc/fstab /mnt/mfs/dir2/
mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
mfschunkserver stop   ( on host 150 )
mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 192.168.2.125:9422
mfsfileinfo /mnt/mfs/dir1/passwd
Result: /mnt/mfs/dir1/passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
no valid copies !!!
mfschunkserver   ( start it again on host 150 )
mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
mfsfileinfo /mnt/mfs/dir1/passwd
Result: /mnt/mfs/dir1/passwd:
chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
copy 1: 192.168.2.150:9422
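The tests above are the whole point of goals: with goal 1 a single chunkserver outage leaves a file with no valid copies, with goal 2 it stays readable. A hedged sketch for raising the goal on the whole mount (same client mount as above):
    mfssetgoal -r 2 /mnt/mfs             # request 2 copies for everything, recursively
    mfsgetgoal -r /mnt/mfs               # summarize the goals now set
    mfsfileinfo /mnt/mfs/dir1/passwd     # the second copy appears once background replication catches up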
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Situations that can come up with MFS storage
1. If a file under the mount point is modified while a chunkserver is stopped, the copy on that chunkserver is stale when it comes back; triggering replication again ( by modifying the file ) fixes it   ( on host 126 )
(1) Chunkserver 1:   mfschunkserver stop   ( on host 150 )
Result: sending SIGTERM to lock owner (pid:1783)
waiting for termination ... terminated
(2) Check the file:   mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 192.168.2.125:9422
(3) Modify the file under the mount point:   vim /mnt/mfs/dir2/fstab
(4) Start mfschunkserver again   ( on host 150 )
    mfschunkserver
Result: working directory: /var/lib/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mnt/chunk1/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
stats file has been loaded
mfschunkserver daemon initialized properly
(5) Check the file   ( chunkserver back up )
    mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
copy 1: 192.168.2.125:9422
(6) Trigger it ( modify the file again ):   vim /mnt/mfs/dir2/fstab
(7) Check the file   ( after the trigger )
    mfsfileinfo /mnt/mfs/dir2/fstab
Result: /mnt/mfs/dir2/fstab:
chunk 0: 0000000000000006_00000001 / (id:6 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
2. Large files are stored split into chunks ( which also makes reads fast )   ( on host 126; the chunk-size arithmetic is checked after this item )
(1) Create a large file:   cd /mnt/mfs/dir2/
    dd if=/dev/zero of=bigfile bs=1M count=200
Result: 200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 35.7793 s, 5.9 MB/s
(2) Check the file:   mfsfileinfo /mnt/mfs/dir2/bigfile
Result: /mnt/mfs/dir2/bigfile:
chunk 0: 0000000000000009_00000001 / (id:9 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
chunk 1: 000000000000000A_00000001 / (id:10 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
chunk 2: 000000000000000B_00000001 / (id:11 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
chunk 3: 000000000000000C_00000001 / (id:12 ver:1)
copy 1: 192.168.2.125:9422
copy 2: 192.168.2.150:9422
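The four chunks match the expectation that MooseFS splits files into chunks of at most 64 MiB (the 64 MiB figure is an assumption about the defaults, not something printed above); a quick arithmetic check:
    echo $(( (209715200 + 67108864 - 1) / 67108864 ))    # 200 MiB file / 64 MiB chunks, rounded up -> 4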
3. Recovering an accidentally deleted file   ( on host 126 )
(1) Delete a file:   rm -f /mnt/mfs/dir2/fstab
(2) Create a mount point:   mkdir /mnt/meta
(3) Mount the meta filesystem:   mfsmount -m /mnt/meta/ -H mfsmaster
Result: mfsmaster accepted connection with parameters: read-write,restricted_ip
df
Result: mfsmaster:9421 14851008 527488 14323520 4% /mnt/mfs
(4) Restore:   cd /mnt/meta/trash/
mv 0000000F\|dir2\|fstab undel/
cat /mnt/mfs/dir2/fstab
Result: tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
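The undelete workflow is always the same as above: mount the meta filesystem, find the entry under trash/ and move it into trash/undel/. A hedged sketch (the grep pattern is illustrative; trash entries are named <inode>|<path with | separators>):
    mkdir -p /mnt/meta
    mfsmount -m /mnt/meta -H mfsmaster       # the meta mount exposes trash/ and reserved/
    cd /mnt/meta/trash
    ls | grep fstab                          # locate the deleted entry
    mv ./0000000F\|dir2\|fstab undel/        # moving it into undel/ restores the file in place
    cd /; umount /mnt/meta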
4. When the master host goes down
   If no files were being transferred, you can run mfsmetarestore -a after the server comes back up to repair the metadata, then run mfsmaster start to restore the master service.
(1) The master host dies:   6815 ? S< 0:41 mfsmaster start
    kill -9 6815
(2) Starting the service directly:   mfsmaster
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
can't open metadata file
if this is new instalation then rename /var/lib/mfs/metadata.mfs.empty as /var/lib/mfs/metadata.mfs
init: file system manager failed !!!
error occured during initialization - exiting
(3) Repair:   mfsmetarestore -a
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... N1 N2 N3 N4 N5 N6 N7 N8 N9 N10 N11 N12 ok
checking filesystem consistency ... ok
connecting files and chunks ... L F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 C ok
store metadata into file: /var/lib/mfs/metadata.mfs
(4) Start again:   mfsmaster
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 7
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 22
mfstopology: incomplete definition in line: 28
mfstopology: incomplete definition in line: 28
topology file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 17
directory inodes: 3
file inodes: 14
chunks: 12
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
[root@server149 ~]# chown -R nobody /var/lib/mfs/
[root@server149 ~]# ll /var/lib/mfs/
total 776
-rw-r----- 1 nobody nobody 4682 Aug 4 14:45 changelog.1.mfs
-rw-r----- 1 nobody nobody 1105 Aug 4 12:46 changelog.3.mfs
-rw-r--r-- 1 nobody root 1421 Aug 4 15:59 metadata.mfs.back
-rw-r----- 1 nobody nobody 358 Aug 4 14:13 metadata.mfs.back.1
-rw-r--r-- 1 nobody root 8 Aug 4 10:24 metadata.mfs.empty
-rw-r----- 1 nobody nobody 369 Aug 4 15:00 sessions.mfs
-rw-r----- 1 nobody nobody 762516 Aug 4 15:00 stats.mfs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problems encountered:
1. rpmbuild -tb mfs-1.6.27-1.tar.gz
   error: File /root/mfs-1.6.27.tar.gz: No such file or directory
   Fix:   mv mfs-1.6.27-1.tar.gz mfs-1.6.27.tar.gz
2.rpmbuild -tb mfs-1.6.27.tar.gz
error: Failed build dependencies:
fuse-devel is needed by mfs-1.6.27-2.x86_64
zlib-devel is needed by mfs-1.6.27-2.x86_64
3.lvextend -L +4g /dev/mfsvg/demo
Extending logical volume demo to 8.00 GiB
Insufficient free space: 1024 extents needed, but only 1023 available
Instead:   lvextend -l +1023 /dev/mfsvg/demo
Extending logical volume demo to 8.00 GiB
Logical volume demo successfully resized
Both the clients and the chunkservers connect to the master host; any of the other hosts can fail without much impact, but the master must not fail.
To remove this single point of failure on the master, proceed as follows:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~ Data storage ( node-to-node synchronous replication ) drbd ~~~~~~~~~~~~~~~~~
Data storage ( node-to-node synchronous replication ) drbd
( see the July 22 screenshots for details )
(I) Configure drbd
(1) Install drbd on the hosts that will hold the replicated data
* 1. Replica host 1 -- install drbd:   ( 192.168.2.149 )
(1) Install gcc and make   ( needed for compiling )
yum install gcc make -y
(2) Download:   lftp i
    Unpack:     tar zxf drbd-8.4.3.tar.gz
(3) Configure:  cd drbd-8.4.3
    First run:    ./configure --with-km --enable-spec
    Error:        ( flex is missing )
                  yum install -y flex
    Second run:   ./configure --with-km --enable-spec
    Error:        ( rpm-build is missing )
                  yum install -y rpm-build
    Third run:    ./configure --with-km --enable-spec
    Result:       + umask 022   ( earlier output omitted )
+ cd /root/rpmbuild/BUILD
+ cd drbd-8.4.3
+ rm -rf /root/rpmbuild/BUILDROOT/drbd-8.4.3-2.el6.x86_64
+ exit 0
(4) Build the rpm packages
    cd /root/drbd-8.4.3
    1. rpmbuild -bb drbd.spec
       Error:   ( the source tarball is not found )
       cp /root/drbd-8.4.3.tar.gz /root/rpmbuild/SOURCES/
       rpmbuild -bb drbd.spec
       Build succeeded:   + umask 022
+ cd /root/rpmbuild/BUILD
+ cd drbd-8.4.3
+ rm -rf /root/rpmbuild/BUILDROOT/drbd-8.4.3-2.el6.x86_64
+ exit 0
2. rpmbuild -bb drbd-km.spec
Error:   error: Failed build dependencies:   ( kernel-devel is missing )
kernel-devel is needed by drbd-km-8.4.3-2.el6.x86_64
yum install -y kernel-devel
rpmbuild -bb drbd-km.spec
Build succeeded:   + umask 022
+ cd /root/rpmbuild/BUILD
+ cd drbd-8.4.3
+ rm -rf /root/rpmbuild/BUILDROOT/drbd-km-8.4.3-2.el6.x86_64
+ exit 0
(5) List the built rpm packages
    ls /root/rpmbuild/RPMS/x86_64/   ( 10 in total )
drbd-8.4.3-2.el6.x86_64.rpm
drbd-bash-completion-8.4.3-2.el6.x86_64.rpm
drbd-heartbeat-8.4.3-2.el6.x86_64.rpm
drbd-km-2.6.32_431.el6.x86_64-8.4.3-2.el6.x86_64.rpm
drbd-pacemaker-8.4.3-2.el6.x86_64.rpm
drbd-udev-8.4.3-2.el6.x86_64.rpm
drbd-utils-8.4.3-2.el6.x86_64.rpm
drbd-xen-8.4.3-2.el6.x86_64.rpm
(6) Install the built rpm packages
    rpm -ivh /root/rpmbuild/RPMS/x86_64/*
(7) Verify the installation
    modprobe -l | grep drbd
    Result: updates/drbd.ko   ( the module is there, so the install succeeded )
(8) Copy the built rpm packages to the other replica host
    scp /root/rpmbuild/RPMS/x86_64/* 192.168.2.161:
* 2. Replica host 2 -- install drbd:   ( 192.168.2.161 )
(1) Install directly:   rpm -ivh /root/*
Preparing... ########################################### [100%]
1:drbd-utils ########################################### [ 13%]
2:drbd-bash-completion ########################################### [ 25%]
3:drbd-heartbeat ########################################### [ 38%]
4:drbd-pacemaker ########################################### [ 50%]
5:drbd-udev ########################################### [ 63%]
6:drbd-xen ########################################### [ 75%]
7:drbd ########################################### [ 88%]
8:drbd-km-2.6.32_431.el6.########################################### [100%]
(2) Verify the installation
    modprobe -l | grep drbd
    Result: updates/drbd.ko   ( install succeeded )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(2) Configure the replica hosts
( add a virtual disk via the GUI -- without it there is no vdb: Add --> Virtual disk --> OK )
* 1. Configure on replica host 1   ( [root@server77 ~]# )
(1) Look at:   vim /etc/drbd.d/global_common.conf   ( global configuration, no changes needed )
               vim /etc/drbd.conf                   ( just read it; we need to create a *.res file )
    # You can find an example in /usr/share/doc/drbd.../drbd.conf.example
    include "drbd.d/global_common.conf";
    include "drbd.d/*.res";   ( any file ending in .res under drbd.d/ is included )
(2) Configure:   vim /etc/drbd.d/example.res   ( create this file by hand )
    resource example {   ( "example" is the name of the replicated resource; pick your own )
meta-disk internal;
device /dev/drbd1;
syncer {
verify-alg sha1;
}
on server149.example.com {
disk /dev/drbd/demo;
address 192.168.2.149:7789;
}
on server161.example.com {
disk /dev/drbd/demo;
address 192.168.2.161:7789;
}
}
(3) Create the LVM volume
    Partition:   fdisk -cu /dev/vdb   ( n, p, 1, Enter, Enter, t, 8e, w )
    PV:   pvcreate /dev/vdb1
          Physical volume "/dev/vdb1" successfully created
    VG:   vgcreate drbd /dev/vdb1
          Volume group "drbd" successfully created
    LV:   lvcreate -L 2G -n demo drbd
          Logical volume "demo" created
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In detail: fdisk -cu /dev/vdb   ( n, p, 1, Enter, Enter, t, 8e, w )   ( creating the LVM partition )
The full transcript:
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd59c73fd.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n ***
Command action
e extended
p primary partition (1-4)
p ***
Partition number (1-4): 1 ***
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
Using default value 16777215
Command (m for help): t ***
Selected partition 1 ***
Hex code (type L to list codes): 8e ***
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): p ***
Disk /dev/vdb: 8589 MB, 8589934592 bytes
16 heads, 63 sectors/track, 16644 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd59c73fd
Device Boot Start End Blocks Id System
/dev/vdb1 2048 16777215 8387584 8e Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Check:   lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 4.91g
lv_swap VolGroup -wi-ao---- 612.00m
demo drbd -wi-a----- 2.00g
(4) Copy the resource file to the other host
    scp /etc/drbd.d/example.res 192.168.2.161:/etc/drbd.d/
(5) Create the same LVM volume on the other host   ( [root@server161 ~]# )
    fdisk -cu /dev/vdb
    pvcreate /dev/vdb1
    vgcreate drbd /dev/vdb1
    lvcreate -L 2G -n demo drbd
(6) Initialize the metadata   ( on both hosts )
    drbdadm create-md example   ( 149, 161 )
    Output:
--== Thank you for participating in the global usage survey ==--
The server's response is:
you are the 18678th user to install this version
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
success
(7) Start the service   ( on both hosts )
    /etc/init.d/drbd start   ( 149, 161 )
    Output:
    Starting DRBD resources: [   ( 149, primary-to-be )
create res: example
prepare disk: example
adjust disk: example
adjust net: example
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'example'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 18]:
/etc/init.d/drbd start   ( 161, secondary )
Starting DRBD resources: [
create res: example
prepare disk: example
adjust disk: example
adjust net: example
]
(8) Check /proc/drbd and set the primary/secondary roles
    Check:   cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2014-07-22 12:34:51
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----- **( Secondary/Secondary )
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2097052
Set the primary:
    drbdsetup /dev/drbd1 primary --force   ( on 149 )
Check the result:
* cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2014-07-22 12:34:51
1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n- **( Primary/Secondary )
ns:1746292 nr:0 dw:0 dr:1748632 al:0 bm:105 lo:0 pe:12 ua:2 ap:0 ep:1 wo:f oos:362396
[===============>....] sync'ed: 82.9% (362396/2097052)K
finish: 0:00:12 speed: 28,400 (28,436) K/sec
* cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2014-07-22 12:34:51
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- *** ( Primary/Secondary UpToDate/UpToDate )
ns:2097052 nr:0 dw:0 dr:2097716 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
(9) Format:   mkfs.ext4 /dev/drbd1
(10) Mount it and test   ( it must be mounted on the primary; 149 is primary right now )
     Mount:      mount /dev/drbd1 /var/www/html/
     selinux:    getenforce   ( keep SELinux in mind )
                 ll -dZ /var/www/html   ( security context )
     Test page:  vim /var/www/html/index.html   ( server77 )
     Web access: 192.168.2.117   ( server77 )
     Unmount:    umount /var/www/html/
     mount /dev/drbd1 /mnt
     umount /mnt/
     drbdadm secondary example
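Since only one node may hold the ext4 filesystem at a time, a manual role switch always follows the same order. A hedged sketch, using the resource and mount point from this section:
    # on the current primary (e.g. 149)
    umount /var/www/html
    drbdadm secondary example
    # on the node taking over (e.g. 161)
    drbdadm primary example
    mount /dev/drbd1 /var/www/html
    cat /proc/drbd        # should now show ro:Primary/Secondary on this node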
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cluster management: pacemaker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node 1: (vm2) 192.168.2.77
Node 2: (vm3) 192.168.2.62
Physical host:   192.168.2.1
Time sync:       date
Flush firewall:  iptables -F
Mutual name resolution:   vim /etc/hosts
    192.168.2.77 server77.example.com
    192.168.2.62 server62.example.com
    192.168.2.1  server1.example.com
Basic configuration
(1) Stop whatever cluster software was running before   ( keepalived in this case; stop whatever you had, on both nodes )
    /etc/init.d/keepalived stop
(2) Install pacemaker   ( on both nodes )
    yum install -y pacemaker
(3) Configure:   cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
vim /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 192.168.2.0 ( network address of the nodes' subnet ) ***
mcastaddr: 226.94.1.1 ( multicast address )
mcastport: 4468 ( multicast port; must be the same on both nodes, and different from anyone else's cluster )***
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service { ( add this service block )
name: pacemaker ( service name )
ver: 0 ( 0: corosync starts pacemaker itself; 1: it does not )
}
Copy it to the other node:   scp /etc/corosync/corosync.conf 192.168.2.62:/etc/corosync/
(4) Install crm   ( there is no crm command yet, so install the following three packages; crm supports tab completion )
lftp 192.168.2.251
yum localinstall -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-4.1.x86_64.rpm python-pssh-2.3.1-4.1.x86_64.rpm
(5) Start the service:   /etc/init.d/corosync start
    Check:   crm
    crm(live)# configure
    crm(live)configure# show
    If the two nodes 62 and 77 show up as below, it worked
node server62.examlpe.com
node server77.example.com
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2"
crm(live)configure# quit
bye
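Membership can also be checked without entering the crm shell; two quick checks (a hedged aside, not from the original notes):
    corosync-cfgtool -s    # ring status of the local corosync instance
    crm_mon -1             # one-shot cluster status: nodes and resources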
(6) Verify the configuration   ( the check reports errors at first )
crm_verify -LV
错误结果:error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Fix the errors:   crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false   ( disable fencing for now )
crm(live)configure# commit
crm(live)configure# quit
Check again ( no errors this time ):   crm_verify -LV
(7) Add the vip:   crm
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.2.117 cidr_netmask=32 op monitor interval=30s
crm(live)configure# commit
crm(live)configure# quit
bye
Check:   ip addr show
inet 192.168.2.62/24 brd 192.168.2.255 scope global eth0
inet 192.168.2.117/32 brd 192.168.2.255 scope global eth0
Watch in the monitor:   crm_mon
Online: [ server62.examlpe.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server62.examlpe.com
Show the configuration:   crm(live)configure# show
node server62.examlpe.com
node server77.example.com
primitive vip ocf:heartbeat:IPaddr2 \
params ip="192.168.2.117" cidr_netmask="32" \
op monitor interval="30s"
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false"
(8) Keep resources from being lost when a node goes down
    Problem:   /etc/init.d/corosync stop   ( on node 62 )
    Monitor output:   ( the vip is gone )
crm_mon
Online: [ server77.example.com ]
OFFLINE: [ server62.examlpe.com ]
Fix:   /etc/init.d/corosync start
crm
crm(live)# configure
crm(live)configure# property no-quorum-policy=ignore   ( ignore loss of quorum in this two-node cluster )
crm(live)configure# commit
crm(live)configure# quit
bye
Test again:   ( the vip is not lost; it moved to host 77 )
/etc/init.d/corosync stop
Online: [ server77.example.com ]
OFFLINE: [ server62.examlpe.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
(9) Add the apache resource
    Install it:   ( on both nodes )
    yum install -y httpd
    Configure it:   ( on both nodes; the apache OCF agent monitors via server-status )
    vim /etc/httpd/conf/httpd.conf
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    Test page:   vim /var/www/html/index.html   ( on 77 and 62 )
    Add the resource:   /etc/init.d/corosync start
crm
crm(live)# configure
crm(live)configure# primitive apache ocf:heartbeat:apache params configfile="/etc/httpd/conf/httpd.conf"
op monitor interval=30s
crm(live)configure# commit   ( there are warnings, but they can be ignored )
WARNING: apache: default timeout 20s for start is smaller than the advised 40s
WARNING: apache: default timeout 20s for stop is smaller than the advised 60s
Monitor output:   Online: [ server62.examlpe.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
apache (ocf::heartbeat:apache): Started server62.examlpe.com
(10) Keep the resources on the same host
     Tie them together:   crm(live)configure# colocation apache-with-vip inf: apache vip
     crm(live)configure# commit
     Monitor output:   Online: [ server62.examlpe.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
apache (ocf::heartbeat:apache): Started server77.example.com
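Note that a colocation constraint only forces the two resources onto the same node; it says nothing about start order. If apache should only start after the vip is up, an ordering constraint can be added as well (a hedged addition, not something done in these notes):
    crm(live)configure# order apache-after-vip inf: vip apache
    crm(live)configure# commit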
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adding fencing
(1) Physical host   ( obtain the fence key; this must be done on the real machine, the management host 192.168.2.1 )
( Once fencing is configured, if fence_virtd is not enabled at boot, then the next time the cluster is used fence_virtd must be started before the managed hosts (the VMs) are started )
1. If this was already set up before   ( do the following )
   /etc/init.d/fence_virtd start
   netstat -aunlp | grep 1229
   udp 0 0 0.0.0.0:1229 0.0.0.0:* 2463/fence_virtd
   ( copy the key only after the /etc/cluster/ directory has been created on the HA hosts )
   scp /etc/cluster/fence_xvm.key 192.168.2.77:/etc/cluster/
   scp /etc/cluster/fence_xvm.key 192.168.2.62:/etc/cluster/
2. If this was never done before   ( do the following )
(1) Install the fence packages
    yum search fence
yum install -y fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd.x86_64
rpm -qa | grep fence
fence-virtd-libvirt-0.2.3-15.el6.x86_64
fence-virtd-multicast-0.2.3-15.el6.x86_64
fence-virtd-0.2.3-15.el6.x86_64
(2) Generate fence_xvm.key
    fence_virtd -c   ( fill in the items below; just press Enter for everything else )
    Interface [none]: br0                          ( the bridge on the physical (real) machine )
    Backend module [checkpoint]: libvirt
    Replace /etc/fence_virt.conf with the above [y/N]? y
(3) Start the service:   /etc/init.d/fence_virtd start
    Starting fence_virtd: [ OK ]
(4) Create the key directory:   mkdir /etc/cluster
(5) Generate the key:   cd /etc/cluster/
    dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
Output:
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000301578 s, 424 kB/s
Check the key:   ll /etc/cluster/fence_xvm.key
-rw-r--r--. 1 root root 128 Jul 24 13:20 /etc/cluster/fence_xvm.key
(6) Copy the key to the managed hosts
scp /etc/cluster/fence_xvm.key 192.168.2.77:/etc/cluster/
scp /etc/cluster/fence_xvm.key 192.168.2.62:/etc/cluster/
(7) Restart the service:   /etc/init.d/fence_virtd restart
(8) Check the port:        netstat -anulp | grep 1229
    udp 0 0 0.0.0.0:1229 0.0.0.0:* 12686/fence_virtd
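At this point fencing can be sanity-checked from a cluster node before it is wired into pacemaker. A hedged sketch (assuming the key has already been copied to that node):
    fence_xvm -o list                 # should list the domains (vm2, vm3) known to the physical host
    fence_xvm -o reboot -H vm3        # optional: actually fences (reboots) the vm3 domain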
(2) Cluster nodes   ( do steps 1-4 on both nodes; do the enabling/adding in steps 5-6 on one node, and watch the monitor in step 7 on the other )
1. Create the directory for the fence key
   mkdir /etc/cluster
2. Install the fence agents:   yum install -y fence*
3. Check the available agents:   stonith_admin -I   ( fence_xvm must be listed )
4. Look at the agent's parameters:   stonith_admin -M -a fence_xvm
5. Enable fencing:   crm
   crm(live)# configure
   crm(live)configure# property stonith-enabled=true   ( enable fencing )
   crm(live)configure# commit
   ( the errors below appear only because no fence resource has been configured yet, so they can be ignored here )
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
Do you still want to commit? y
6. Add the fence resource:
   crm(live)configure# primitive fence_xvm stonith:fence_xvm params pcmk_host_map="server77.example.com:vm2 server62.example.com:vm3" op monitor interval=60s
   ( the map is "hostname:domain hostname:domain" -- domain is the name the VM was given when it was created )
   crm(live)configure# commit
7. Monitor output:   vip (ocf::heartbeat:IPaddr2): Started server77.example.com
   apache (ocf::heartbeat:apache): Started server77.example.com
   fence_xvm (stonith:fence_xvm): Started server62.examlpe.com
   ( the last line appearing means the fence resource was added successfully )
8. Test   ( on host 77 )
   With 77 failing: take eth0 down on 77   ( after this command, host 77 gets fenced and rebooted )
   ifconfig eth0 down
   Monitor output:   ( the resources immediately move to host 62 )
Online: [ server62.examlpe.com ]
OFFLINE: [ server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server62.examlpe.com
apache (ocf::heartbeat:apache): Started server62.examlpe.com
fence_xvm (stonith:fence_xvm): Started server62.examlpe.com
Web test:   192.168.2.117   ( the page from 62 is shown and does not change on refresh )
With 77 back: start corosync
   /etc/init.d/corosync start
Monitor output:   ( the resources immediately move back to host 77 )
Online: [ server62.examlpe.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
apache (ocf::heartbeat:apache): Started server77.example.com
fence_xvm (stonith:fence_xvm): Started server62.examlpe.com
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problems encountered
(1) If you make a typing mistake while editing in crm, enter "edit" and fix it there
(2) The following error while editing in crm
    crm(live)# configure
    ERROR: running cibadmin -Ql: Could not establish cib_rw connection: Connection refused (111)
    Signon to CIB failed: Transport endpoint is not connected
    Init failed, could not perform requested operations
    Possible causes:   1. the service is not running:       /etc/init.d/corosync start
                       2. the configuration may be wrong:   vim /etc/corosync/corosync.conf
(3) When obtaining the fence key, nothing is listening on port 1229 on the management host
    1. It may not be configured yet:   fence_virtd -c   ( fill in the items below; Enter for everything else )
       Interface [none]: br0                          ( the bridge on the physical (real) machine )
       Backend module [checkpoint]: libvirt
       Replace /etc/fence_virt.conf with the above [y/N]? y
    2. Check the configuration:   cat /etc/fence_virt.conf
fence_virtd {
listener = "multicast";
backend = "libvirt";
module_path = "/usr/lib64/fence-virt";
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
address = "225.0.0.12";
family = "ipv4";
port = "1229";
interface = "br0";
}
}
backends {
libvirt {
uri = "qemu:///system";
}
}
(4) Errors in the monitor after enabling fencing
    Possible fixes:   1. stop and start the service again:   /etc/init.d/corosync stop
                                                              /etc/init.d/corosync start
                      2. the fencing property:   crm(live)configure# property stonith-enabled=true   ( it must be true, not yes )

        master1     master2
              \       /
            /var/lib/mfs
              ( drbd )
(II) Add drbd and integrate it with corosync
1. Activate drbd
2. Mount   ( activate first, then mount )
(1) Edit:   crm
    crm(live)# configure
    1. Add the data resource webdata:
       webdata: name of the resource (arbitrary); ocf:linbit:drbd: the drbd resource agent; example: the drbd resource name
       monitor interval=60s: monitor every 60 s
       crm(live)configure# primitive webdata ocf:linbit:drbd params drbd_resource=example op monitor interval=60s
    2. Master/slave wrapper around the data resource (ms)
       webdataclone: name of the clone (arbitrary); master-max=1: at most one master instance
       master-node-max=1: at most one master instance per node; clone-max=2: two clone instances in total
       clone-node-max=1: at most one clone instance per node; notify=true: send notifications
       ( running with two masters at once would require a cluster filesystem such as gfs2; ext4 cannot do that )
       crm(live)configure# ms webdataclone webdata meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
    3. Add the filesystem resource webfs
       webfs: the web filesystem; ocf:heartbeat:Filesystem: the filesystem agent; device: the block device
       directory: the mount point; fstype: the filesystem type; quote values that contain special characters
crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device="/dev/drbd1" directory="/var/www/html" fstype=ext4
    4. Colocate webfs with the drbd master
       crm(live)configure# colocation webfs_on_drbd inf: webfs webdataclone:Master
    5. Order: promote webdataclone first, then start webfs
       crm(live)configure# order webfs-after-webdata inf: webdataclone:promote webfs:start
    6. Tie apache to webfs
       crm(live)configure# colocation apache-with-webfs inf: apache webfs
    7. Order: start webfs first, then apache
       crm(live)configure# order apache-after-webfs inf: webfs apache
    crm(live)configure# commit
After committing, the monitor shows:
Online: [ server62.example.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
apache (ocf::heartbeat:apache): Started server77.example.com
fence_xvm (stonith:fence_xvm): Started server62.example.com
Master/Slave Set: webdataclone [webdata]
Masters: [ server77.example.com ]
Slaves: [ server62.example.com ]
webfs (ocf::heartbeat:Filesystem): Started server77.example.com
(2) Test   1. /etc/init.d/corosync stop   ( on node 77 )
Monitor output:   Online: [ server62.example.com ]
OFFLINE: [ server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server62.example.com
apache (ocf::heartbeat:apache): Started server62.example.com
fence_xvm (stonith:fence_xvm): Started server62.example.com
Master/Slave Set: webdataclone [webdata]
Masters: [ server62.example.com ]
Stopped: [ server77.example.com ]
webfs (ocf::heartbeat:Filesystem): Started server62.example.com
2. /etc/init.d/corosync start   ( on node 77 )
Monitor output:   Online: [ server62.example.com server77.example.com ]
vip (ocf::heartbeat:IPaddr2): Started server77.example.com
apache (ocf::heartbeat:apache): Started server77.example.com
fence_xvm (stonith:fence_xvm): Started server62.example.com
Master/Slave Set: webdataclone [webdata]
Masters: [ server77.example.com ]
Slaves: [ server62.example.com ]
webfs (ocf::heartbeat:Filesystem): Started server77.example.com
Check the mount:   df   ( it is mounted automatically on the master node, 77 )
Result: /dev/drbd1 2064108 35844 1923412 2% /var/www/html
Problems encountered:
(1) If corosync will not stop, something in the configuration went wrong; kill it first and then start it again
    /etc/init.d/corosync stop
    killall -9 corosync
    /etc/init.d/corosync start
(2) Deleting a node
    crm(live)configure# cd
    crm(live)# node
    crm(live)node# delete server62.examlpe.com
    Result: INFO: node server62.examlpe.com deleted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5. HA host ( master host 2 ) -- host IP: 192.168.2.52
(1) Stop the services   ( in the order client --> chunkservers --> master )
    Client (126):        umount /mnt/mfs/
    Chunkserver 1 (125): mfschunkserver stop
    Result: sending SIGTERM to lock owner (pid:1693)
            waiting for termination ... terminated
    Chunkserver 2 (150): mfschunkserver stop
    Result: sending SIGTERM to lock owner (pid:1824)
            waiting for termination ... terminated
    Master (149):        mfsmaster stop
    Result: sending SIGTERM to lock owner (pid:6918)
            waiting for termination ... terminated
(2) Install the master rpm on master host 2
    Get the master rpm ( on 149 ):   scp /root/rpmbuild/RPMS/x86_64/mfs-master-1.6.27-2.x86_64.rpm 192.168.2.52:
    Install the rpm ( on 52 ):       rpm -ivh mfs-master-1.6.27-2.x86_64.rpm
Result: Preparing... ########################################### [100%]
1:mfs-master ########################################### [100%]
    Copy the config templates ( on 52 )
    cp /etc/mfs/mfsexports.cfg.dist /etc/mfs/mfsexports.cfg
    cp /etc/mfs/mfsmaster.cfg.dist /etc/mfs/mfsmaster.cfg
    cp /etc/mfs/mfstopology.cfg.dist /etc/mfs/mfstopology.cfg
(3) Make sure heartbeat is installed on master host 1
rpm -q heartbeat
Result: heartbeat-3.0.4-2.el6.x86_64
/etc/init.d/drbd status
Result: drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2014-08-04 16:32:53
m:res cs ro ds p mounted fstype
1:example Connected Primary/Secondary UpToDate/UpToDate C
(4) Check ownership of the data directory
1.cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2014-08-04 16:32:53
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2163328 nr:0 dw:66276 dr:2098072 al:17 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2.mount /dev/drbd1 /var/lib/mfs/
3.ll -d /var/lib/mfs/
drwxr-xr-x 3 nobody root 4096 Aug 4 17:25 /var/lib/mfs/
4.umount /var/lib/mfs/
[root@server149 ~]# vim /etc/init.d/mfs
#!/bin/bash
#
# Init file for the MooseFS master service
#
# chkconfig: - 92 84
#
# description: MooseFS master
#
# processname: mfsmaster
# Source function library.
# Source networking configuration.
. /etc/init.d/functions
. /etc/sysconfig/network
# Source initialization configuration.
# Check that networking is up.
[ "${NETWORKING}" == "no" ] && exit 0
[ -x "/usr/sbin/mfsmaster" ] || exit 1
[ -r "/etc/mfsmaster.cfg" ] || exit 1
[ -r "/etc/mfsexports.cfg" ] || exit 1
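# Note: with the rpm layout used earlier in these notes the configs live in /etc/mfs/
# (see the "copy the sample config files" step); adjust the two paths checked above
# if this script refuses to start because those checks fail.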
RETVAL=0
prog="mfsmaster"
datadir="/var/lib/mfs"
mfsbin="/usr/sbin/mfsmaster"
mfsrestore="/usr/sbin/mfsmetarestore"
start () {
echo -n $"Starting $prog: "
$mfsbin start >/dev/null 2>&1
if [ $? -ne 0 ];then
$mfsrestore -a >/dev/null 2>&1 && $mfsbin start >/dev/null 2>&1
fi
RETVAL=$?
echo
return $RETVAL
}
stop () {
echo -n $"Stopping $prog: "
$mfsbin -s >/dev/null 2>&1 || killall -9 $prog #>/dev/null 2>&1
RETVAL=$?
echo
return $RETVAL
}
restart () {
stop
start
}
reload () {
echo -n $"reload $prog: "
$mfsbin reload >/dev/null 2>&1
RETVAL=$?
echo
return $RETVAL
}
restore () {
echo -n $"restore $prog: "
$mfsrestore -a >/dev/null 2>&1
RETVAL=$?
echo
return $RETVAL
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
reload)
reload
;;
restore)
restore
;;
status)
status $prog
RETVAL=$?
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|restore|status}"
RETVAL=1
esac
exit $RETVAL
[root@server149 ~]# chmod +x /etc/init.d/mfs
[root@server149 ~]# /etc/init.d/mfs status
mfsmaster is stopped
[root@server149 ~]# mount /dev/drbd1 /var/lib/mfs/
[root@server149 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 5067808 1159672 3650704 25% /
tmpfs 251048 0 251048 0% /dev/shm
/dev/vda1 495844 33469 436775 8% /boot
/dev/drbd1 2064108 36616 1922640 2% /var/lib/mfs
[root@server149 ~]# /etc/init.d/mfs start
Starting mfsmaster:
[root@server149 ~]# /etc/init.d/mfs status
mfsmaster (pid 11756) is running...
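The point of this init script is to let the cluster manage mfsmaster the same way it already manages apache. A hedged sketch of the crm additions that would tie it to the drbd-backed /var/lib/mfs (the resource and constraint names here are illustrative, not taken from these notes):
    crm(live)configure# primitive mfsfs ocf:heartbeat:Filesystem params device="/dev/drbd1" directory="/var/lib/mfs" fstype=ext4
    crm(live)configure# primitive mfsmaster lsb:mfs op monitor interval=30s
    crm(live)configure# colocation mfsfs_on_drbd inf: mfsfs webdataclone:Master
    crm(live)configure# order mfsfs-after-drbd inf: webdataclone:promote mfsfs:start
    crm(live)configure# colocation mfs-with-fs inf: mfsmaster mfsfs
    crm(live)configure# order mfs-after-fs inf: mfsfs mfsmaster
    crm(live)configure# commit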