Software RAID (mdadm): Configuration and Maintenance
1. Partition the disks with fdisk
Press n to create a new partition
Press t to change the partition type to fd (Linux raid autodetect)
Create one new partition on each disk:
/dev/sdc1
/dev/sdd1
/dev/sde1
/dev/sdf1
Example:
Disk /dev/sdc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         652     5237158+  fd  Linux raid autodetect
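Typing the same interactive fdisk keystrokes on four disks is error-prone. The steps above can also be scripted with sfdisk; the sketch below only prints the commands it would run (pipe its output to sh as root to execute them, which destroys any existing data on those disks):

```shell
# Sketch (assumption): script the interactive fdisk steps with sfdisk.
# plan_partitions only PRINTS the commands; pipe its output to sh as
# root to actually run them.  WARNING: doing so destroys existing data.
plan_partitions() {
    for disk in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
        # ',,fd' means: one partition spanning the whole disk,
        # type fd (Linux raid autodetect)
        printf "echo ',,fd' | sfdisk %s\n" "$disk"
    done
}
plan_partitions
```

Review the printed commands before feeding them to a shell.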
2. Make the kernel re-read the partition tables so the new partitions take effect
# partprobe
3. Create /dev/md0 with mdadm
# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd{c,d,e,f}1
mdadm: array /dev/md0 started.
mdadm option reference:
-C      create a new array
-a yes  create the device node automatically
-l      RAID level (here RAID 5)
-n      number of active member disks (here 3: /dev/sd{c,d,e}1)
-x      number of hot-spare disks (here 1: /dev/sdf1)
/dev/sd{c,d,e,f}1 are the member devices: sdc1, sdd1, and sde1 form the three-disk RAID 5 set, and sdf1 is the hot spare.
4. Display and check the state of /dev/md0 with mdadm
#mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Jul  8 04:00:27 2011
     Raid Level : raid5
     Array Size : 10474112 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237056 (4.99 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jul  8 04:00:27 2011
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 71% complete

           UUID : ac0b6e5b:904397f1:d8d86aac:277f8585
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       4       8       65        2      spare rebuilding   /dev/sde1

       3       8       81        -      spare   /dev/sdf1
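While the initial sync runs, /proc/mdstat gives a compact progress view. A minimal polling sketch (read-only; the loop simply exits at once if no resync or recovery is in progress):

```shell
# Poll /proc/mdstat until no resync/recovery is running.  Read-only and
# safe to run at any time; on a machine with no md arrays it exits
# immediately.
wait_for_sync() {
    while grep -q -E 'resync|recovery' /proc/mdstat 2>/dev/null; do
        grep -A 1 '^md' /proc/mdstat    # show per-array progress lines
        sleep 10
    done
    echo "sync complete (or no array present)"
}
wait_for_sync
```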
5. Format the /dev/md0 RAID 5 array
# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1310720 inodes, 2618528 blocks
130926 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
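To use the new filesystem it can now be mounted (the test section below uses /r5 as the mount point), and optionally mounted at boot via /etc/fstab. A sketch of the fstab line, assuming the ext3 filesystem that mke2fs created above:

```
/dev/md0    /r5    ext3    defaults    0 0
```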
6. Create /etc/mdadm.conf so that /dev/md0 is assembled correctly after a reboot
# mdadm -D -s > /etc/mdadm.conf
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=ac0b6e5b:904397f1:d8d86aac:277f8585
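Before rebooting, it is worth confirming that the generated file actually names the array. A hypothetical helper:

```shell
# Return success only if the given mdadm.conf (default: /etc/mdadm.conf)
# contains an ARRAY line for /dev/md0.
check_mdadm_conf() {
    grep -q '^ARRAY /dev/md0' "${1:-/etc/mdadm.conf}"
}
# usage: check_mdadm_conf && echo "md0 configured"
```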
II. Testing and managing RAID 5
a. First, test the hot-spare failover
# mkdir /r5
# mount /dev/md0 /r5
# cd /r5
# seq 1000000 > test.txt
seq writes the numbers 1 through 1000000 into test.txt
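To show that the failover below loses no data, a checksum of the test file can be recorded before failing the disk and compared after the spare finishes rebuilding. A sketch using a stand-in path (/tmp/raidtest.txt here instead of /r5/test.txt):

```shell
# Sketch: checksum the test data before failing a disk, then compare
# after the rebuild.  /tmp/raidtest.txt stands in for /r5/test.txt.
seq 1000000 > /tmp/raidtest.txt
sum_before=$(md5sum /tmp/raidtest.txt | awk '{print $1}')
# ... run: mdadm /dev/md0 -f /dev/sdd1, wait for the spare to finish
# rebuilding, then re-read the file and compare checksums:
sum_after=$(md5sum /tmp/raidtest.txt | awk '{print $1}')
[ "$sum_before" = "$sum_after" ] && echo "data intact after rebuild"
```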
1. Forcibly mark one member of /dev/md0 as faulty
# mdadm /dev/md0 -f /dev/sdd1
#mdadm --detail /dev/md0
 Rebuild Status : 53% complete

           UUID : ac0b6e5b:904397f1:d8d86aac:277f8585
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       3       8       81        1      spare rebuilding   /dev/sdf1
       2       8       65        2      active sync   /dev/sde1

       4       8       49        -      faulty spare   /dev/sdd1
Note the line "3  8  81  1  spare rebuilding  /dev/sdf1": the hot spare sdf1 has
taken over for sdd1 and is still syncing ("spare rebuilding").
Wait until /dev/sdf1 shows State: active sync, the normal state, before doing any disk-swap work.
2. Remove the partition marked faulty, /dev/sdd1
# mdadm /dev/md0 -r /dev/sdd1
mdadm: hot removed /dev/sdd1
#mdadm --detail /dev/md0
Layout : left-symmetric
Chunk Size : 64K
UUID : ac0b6e5b:904397f1:d8d86aac:277f8585
Events : 0.8
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       81        1      active sync   /dev/sdf1
       2       8       65        2      active sync   /dev/sde1
3. Re-adding a partition after removal
a. Reuse the just-removed /dev/sdd1 (the partition that "had the problem")
b. Run fdisk on /dev/sdd to delete the old partition and create a fresh, empty one
c. In fdisk, press t to mark the partition type as fd
The result:
Disk /dev/sdd: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         652     5237158+  fd  Linux raid autodetect
d. Add /dev/sdd1 back into /dev/md0
# mdadm /dev/md0 -a /dev/sdd1
mdadm: added /dev/sdd1
# mdadm --detail /dev/md0
UUID : ac0b6e5b:904397f1:d8d86aac:277f8585
Events : 0.8
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       81        1      active sync   /dev/sdf1
       2       8       65        2      active sync   /dev/sde1

       3       8       49        -      spare   /dev/sdd1
4. Other levels: creating -l 0 (RAID 0) or -l 1 (RAID 1) arrays differs little from -l 5 (RAID 5)
a. RAID 0 is of limited value: with two disks it only speeds up I/O and provides no data safety; -n is at least 2.
b. RAID 1 has somewhat lower write throughput but offers real redundancy (at half the capacity); -n is typically 2. I have not tested the other levels.
c. Caution: never use several partitions of one physical disk as RAID members. That gives neither faster I/O nor safer data; it only multiplies the I/O load on that one disk. Remember this!
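For reference, the equivalent creation commands for the other levels look like this. This is a sketch with hypothetical md numbers and member partitions; the block only prints the commands, since running them for real destroys data:

```shell
# The -C syntax carries over directly to other levels (hypothetical
# device names; executing these commands destroys data):
raid1_create="mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sdc1 /dev/sdd1"
raid0_create="mdadm -C /dev/md2 -a yes -l 0 -n 2 /dev/sde1 /dev/sdf1"
echo "$raid1_create"
echo "$raid0_create"
```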
5. My personal take on mdadm: it is a compromise solution. If the budget allows, hardware RAID is the safer and faster choice.
Note: everything above was tested on RHEL 5.5 and CentOS 5.5.