The command for creating software RAID is mdadm, which can build a RAID array out of any block devices.
mdadm is a modal command with the following modes of operation:
Create mode: Create — option: -C or --create
Manage mode: Manage — options: --add, --fail, --remove
Monitor mode: Monitor — option: -F or --monitor
Grow mode: Grow — option: -G or --grow
Assemble mode: Assemble — option: -A or --assemble
Options specific to create mode:
-l: specify the RAID level. Long option: --level=
-n: number of RAID member devices. Long option: --raid-devices=
-x: number of spare devices. Long option: --spare-devices=
-a: automatically create the device file. Long option: --auto= . Defaults to --auto=yes
-c: specify the chunk size, i.e. the amount of data written to one member disk before moving on to the next. Long option: --chunk= . The default was 64KB in older mdadm releases; newer versions, including the one used in the examples below, default to 512KB.
Example 1:
Step 1: Partition the disk and change the partition type to fd (Linux raid autodetect).
Step 2: Make the kernel recognize the new partitions: partx -a /dev/sda
[root@Server3 ~]# cat /proc/partitions
major minor  #blocks  name
   8        0   41943040 sda
   8        1     204800 sda1
   8        2   10485760 sda2
   8        4          1 sda4
   8        5    1059932 sda5
   8        6    1060258 sda6
[root@Server3 ~]#
[root@Server3 ~]# partx -a /dev/sda
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 4
BLKPG: Device or resource busy
error adding partition 5
BLKPG: Device or resource busy
error adding partition 6
[root@Server3 ~]#
[root@Server3 ~]# cat /proc/partitions
major minor  #blocks  name
   8        0   41943040 sda
   8        1     204800 sda1
   8        2   10485760 sda2
   8        4          1 sda4
   8        5    1059932 sda5
   8        6    1060258 sda6
   8        7    1060258 sda7
   8        8    1060258 sda8
   8        9    1060258 sda9
[root@Server3 ~]#
Step 3: Create the RAID array
[root@Server3 ~]# mdadm --create /dev/md0 --auto=yes --level=0 --raid-devices=2 /dev/sda{6,7}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@Server3 ~]#
Step 4: Create a filesystem and mount it
Create the filesystem:
[root@Server3 ~]# mke2fs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
132464 inodes, 529408 blocks
26470 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@Server3 ~]#
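The Stride and Stripe width values in the mke2fs output above follow directly from the RAID geometry: stride = chunk size / filesystem block size, and stripe width = stride × number of data disks. A quick check with the numbers from this array (512KB chunk, 4KB blocks, 2 data disks):

```shell
chunk_kb=512     # chunk size reported by mdadm/proc/mdstat
block_kb=4       # ext4 block size (4096 bytes)
data_disks=2     # RAID0 across two member devices
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_disks ))
echo "stride=$stride stripe_width=$stripe_width"   # stride=128 stripe_width=256
```

mke2fs detected these automatically here; where it does not, they can be passed explicitly with -E stride=128,stripe-width=256 so the filesystem aligns its allocations with the RAID chunks.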
Mount it:
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
[root@Server3 ~]# mount /dev/md0 /backup/
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/md0        2.0G   68M  1.9G   4% /backup
[root@Server3 ~]#
Testing RAID0 performance:
Note: RAID0 delivers roughly twice the write performance of a single device without RAID. Writing 1500MB to a plain, non-RAID device took 8.352s, while writing the same 1500MB to the RAID0 device took only 3.971s — a speedup of about 2.1×.
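The exact benchmark commands are not shown above, but the timings can be reproduced with a simple dd write test along these lines (a sketch; the target directory defaults to /tmp here so it runs anywhere — point it at the RAID mount point, e.g. /backup, for a real comparison, and scale count up to 1500 for the 1500MB test):

```shell
# Hypothetical write benchmark: time how long it takes dd to write a file.
# conv=fsync forces the data to disk before dd reports completion, so the
# elapsed time reflects actual device throughput, not just page-cache speed.
target=${TARGET:-/tmp}
time dd if=/dev/zero of="$target/ddtest.img" bs=1M count=15 conv=fsync
```

Run once against a directory on a single disk and once against the RAID0 mount, then compare the elapsed times.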
Check the status of RAID devices: cat /proc/mdstat
[root@Server3 backup]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sda7[1] sda6[0]
      2117632 blocks super 1.2 512k chunks

unused devices: <none>
[root@Server3 backup]#
View detailed information about a RAID device: mdadm -D /dev/md0 (long option: mdadm --detail)
[root@Server3 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 15:16:34 2014
     Raid Level : raid0
     Array Size : 2117632 (2.02 GiB 2.17 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 15:16:34 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : d3c6d4f8:7447631e:c8941e61:a538f979
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
[root@Server3 ~]#
Manage mode: add a device (--add), remove a device (--remove), and simulate a device failure (--fail).
Simulate a device failure: --fail
(The output below is from a new RAID1 array built on /dev/sda6 and /dev/sda7, shown after /dev/sda7 was marked faulty.)
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:06:18 2014
     Raid Level : raid1
     Array Size : 1059222 (1034.57 MiB 1084.64 MB)
  Used Dev Size : 1059222 (1034.57 MiB 1084.64 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:07:17 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : Server3:0  (local to host Server3)
           UUID : d19b864a:36cdb744:52bc5688:39a9668f
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       0        0        1      removed

       1       8        7        -      faulty spare   /dev/sda7
[root@Server3 ~]#
Remove the failed device: --remove
[root@Server3 ~]# mdadm --manage /dev/md0 --remove /dev/sda7
mdadm: hot removed /dev/sda7 from /dev/md0
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:06:18 2014
     Raid Level : raid1
     Array Size : 1059222 (1034.57 MiB 1084.64 MB)
  Used Dev Size : 1059222 (1034.57 MiB 1084.64 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:09:00 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : Server3:0  (local to host Server3)
           UUID : d19b864a:36cdb744:52bc5688:39a9668f
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       0        0        1      removed
[root@Server3 ~]#
Add a device: --add
[root@Server3 ~]# mdadm --manage /dev/md0 --add /dev/sda7
mdadm: added /dev/sda7
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:06:18 2014
     Raid Level : raid1
     Array Size : 1059222 (1034.57 MiB 1084.64 MB)
  Used Dev Size : 1059222 (1034.57 MiB 1084.64 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:11:04 2014
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 24% complete

           Name : Server3:0  (local to host Server3)
           UUID : d19b864a:36cdb744:52bc5688:39a9668f
         Events : 36

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       2       8        7        1      spare rebuilding   /dev/sda7
[root@Server3 ~]#
Stop the array: --stop
[root@Server3 ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@Server3 ~]#
Reassemble and restart the array: --assemble
[root@Server3 ~]# mdadm --assemble /dev/md0 /dev/sda{6,7}
mdadm: /dev/md0 has been started with 2 drives.
[root@Server3 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid1 sda6[0] sda7[2]
      1059222 blocks super 1.2 [2/2] [UU]

unused devices: <none>
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:06:18 2014
     Raid Level : raid1
     Array Size : 1059222 (1034.57 MiB 1084.64 MB)
  Used Dev Size : 1059222 (1034.57 MiB 1084.64 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:11:07 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : Server3:0  (local to host Server3)
           UUID : d19b864a:36cdb744:52bc5688:39a9668f
         Events : 51

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       2       8        7        1      active sync   /dev/sda7
[root@Server3 ~]#
Completely removing a RAID array
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/md0        3.0G   69M  2.8G   3% /backup
[root@Server3 ~]# umount /dev/md0
[root@Server3 ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@Server3 ~]# mdadm --zero-superblock /dev/sda{6,7,8,9,10}
[root@Server3 ~]# rm /etc/mdadm.conf
Note: --zero-superblock checks whether each device contains a valid md array superblock and, if so, erases the array information stored in it. Deleting the mdadm configuration file /etc/mdadm.conf is only necessary if it was created previously.
Automatically assembling the RAID device at boot:
mdadm --detail --scan > /etc/mdadm.conf saves the array information into mdadm.conf; afterwards the array can be assembled simply with mdadm -A /dev/md0.
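For reference, the generated file contains one ARRAY line per array. A minimal sketch of its contents (the UUID here is taken from the RAID0 example above; yours will differ):

```
# /etc/mdadm.conf, as written by: mdadm --detail --scan > /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=Server3:0 UUID=d3c6d4f8:7447631e:c8941e61:a538f979
```

At boot (or with mdadm -A /dev/md0), mdadm matches devices against this UUID, so the array keeps its md0 name regardless of which partitions it was built on.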
Example 2: create a RAID5 array and demonstrate all of the operations
Test 1: create a RAID5 array, specifying the RAID level, the number of RAID devices, and the number of spare devices
[root@Server3 ~]# mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sda{6,7,8,9} --spare-devices=1 /dev/sda10
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:36:29 2014
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9

       4       8       10        -      spare   /dev/sda10
[root@Server3 ~]#
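The reported Array Size is consistent with how RAID5 capacity works: one device's worth of space across the array holds parity, so usable capacity is (n - 1) × device size, where n counts only the active members (the spare does not contribute). A quick check against the numbers above:

```shell
raid_devices=4       # active members; the spare /dev/sda10 does not count
dev_size_kb=1058816  # "Used Dev Size" in 1K blocks, from mdadm --detail
array_size_kb=$(( (raid_devices - 1) * dev_size_kb ))
echo "$array_size_kb"   # 3176448, matching the "Array Size" reported above
```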
Test 2: simulate a device failure
[root@Server3 ~]# mdadm --manage /dev/md0 --fail /dev/sda6
mdadm: set /dev/sda6 faulty in /dev/md0
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:42:52 2014
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 14% complete

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 24

    Number   Major   Minor   RaidDevice State
       4       8       10        0      spare rebuilding   /dev/sda10
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9

       0       8        6        -      faulty spare   /dev/sda6
[root@Server3 ~]#
Note: as soon as the failure of sda6 is simulated, the spare disk sda10 is activated immediately and the rebuild begins.
Test 3: remove the failed disk
[root@Server3 ~]# mdadm --manage /dev/md0 --remove /dev/sda6
mdadm: hot removed /dev/sda6 from /dev/md0
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:44:28 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 42

    Number   Major   Minor   RaidDevice State
       4       8       10        0      active sync   /dev/sda10
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9
[root@Server3 ~]#
Test 4: add a new spare disk
[root@Server3 ~]# mdadm --manage /dev/md0 --add /dev/sda6
mdadm: added /dev/sda6
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 16:46:10 2014
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 43

    Number   Major   Minor   RaidDevice State
       4       8       10        0      active sync   /dev/sda10
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9

       6       8        6        -      spare   /dev/sda6
[root@Server3 ~]#
Test 5: create a filesystem and mount it automatically at boot
[root@Server3 ~]# echo "/dev/md0 /backup ext4 defaults 0 0" >> /etc/fstab
[root@Server3 ~]# mount -a
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/md0        3.0G   69M  2.8G   3% /backup
[root@Server3 ~]#
Test 6: stop the RAID array
[root@Server3 ~]# mdadm --stop /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
[root@Server3 ~]# umount /dev/md0
[root@Server3 ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@Server3 ~]#
Note: the array must be unmounted before it can be stopped.
Test 7: reassemble the RAID array: mdadm -As (long options: mdadm --assemble --scan)
[root@Server3 ~]# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 4 drives and 1 spare.
[root@Server3 ~]#
Test 8: reboot the system and check whether the RAID array is mounted automatically
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
[root@Server3 ~]# mount -a
mount: special device /dev/md0 does not exist
[root@Server3 ~]# ls -l /dev/m
mapper/ mcelog  md/     md127   mem     midi
[root@Server3 ~]# ls -l /dev/md
Note: after the reboot, the RAID array is not mounted automatically, and the /dev/md0 device no longer exists — it has come back as md127. This is because the array information was never added to the /etc/mdadm.conf configuration file.
Checking /proc/mdstat:
[root@Server3 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sda6[6](S) sda9[5] sda8[2] sda10[4] sda7[1]
      3176448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@Server3 ~]#
The fix:
1. Get the UUID of md127: mdadm --detail /dev/md127 | grep 'UUID'
[root@Server3 ~]# mdadm --detail /dev/md127 | grep UUID
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
[root@Server3 ~]#
2. Create the configuration file mdadm.conf
[root@Server3 ~]# cat /etc/mdadm.conf
DEVICE /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9 /dev/sda10
ARRAY /dev/md0 UUID=3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
[root@Server3 ~]#
Note: the DEVICE line lists all of the RAID member devices; the ARRAY line defines the array's device name and UUID.
3. Reboot the system
[root@Server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.9G  7.8G  1.6G  83% /
tmpfs           245M     0  245M   0% /dev/shm
/dev/sda1       194M   28M  156M  16% /boot
/dev/md0        3.0G   69M  2.8G   3% /backup
[root@Server3 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda8[2] sda6[6] sda7[1] sda9[5]
      3176448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 17:19:26 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 64

    Number   Major   Minor   RaidDevice State
       6       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9
[root@Server3 ~]#
Note: after this reboot the filesystem is mounted automatically, but the spare disk is gone.
Re-add a spare device and reboot the system; that resolves it.
[root@Server3 ~]# mdadm --manage /dev/md0 --add /dev/sda10
mdadm: added /dev/sda10
[root@Server3 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 16 16:36:03 2014
     Raid Level : raid5
     Array Size : 3176448 (3.03 GiB 3.25 GB)
  Used Dev Size : 1058816 (1034.17 MiB 1084.23 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed Jul 16 17:32:23 2014
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server3:0  (local to host Server3)
           UUID : 3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
         Events : 65

    Number   Major   Minor   RaidDevice State
       6       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8
       5       8        9        3      active sync   /dev/sda9

       4       8       10        -      spare   /dev/sda10
[root@Server3 ~]#
To sum up: after creating a RAID array, if the /etc/mdadm.conf configuration file is not created, the new md device will come back as md127 after a reboot. Therefore, once the array is created, create the mdadm.conf configuration file. Its contents can be generated as follows:
[root@Server3 ~]# echo "DEVICE /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9 /dev/sda10 " >> /etc/mdadm.conf
[root@Server3 ~]# mdadm -Ds >> /etc/mdadm.conf
[root@Server3 ~]# echo "MAILADDR [email protected]" >> /etc/mdadm.conf
MAILADDR specifies the address that the monitoring system mails when a problem occurs.
The resulting file looks like this:
DEVICE /dev/sda6 /dev/sda7 /dev/sda8 /dev/sda9 /dev/sda10
ARRAY /dev/md0 metadata=1.2 spares=1 name=Server3:0 UUID=3eed72e6:c0f4fa9c:72ff6b5d:32b3cc86
MAILADDR [email protected]
# The DEVICE line tells mdadm which devices to scan for superblock information when assembling arrays from this file; without the line, mdadm scans the superblocks of every device partition listed in mtab. The line is therefore optional, but once it is present, it must be updated whenever a spare device is added.
# The ARRAY line records the array's name, UUID, and other identifying information.
See also: http://molinux.blog.51cto.com/2536040/516008