Disk Management: Creating Software RAID with mdadm and Verifying the Arrays



I. Software RAID concepts

In kernel 2.6 and later, software RAID is handled by the MD (multiple devices) module, which combines disk partitions into a simulated RAID; once assembled, the soft RAID is used as a single independent device:

/dev/md0, /dev/md1 ...

The number in md* is simply the index of the simulated RAID device.


II. Creating software RAID

1. md can be built on top of any block devices, for example turning /dev/sda5 and /dev/sda6 into a RAID1 (which is pointless: if the disk dies, does it matter whether partition 5 or partition 6 "failed"?). If you have no choice but to use soft RAID, remember one thing: never build an array out of partitions of the same physical disk.


2. Common mdadm options

mdadm is a single stand-alone program that covers all software RAID management tasks. Its commonly used options are:

-C: create a new array

-A: assemble an existing array, e.g. one whose member disks were taken from another machine and installed in this host

-F: monitor the array and send mail promptly when a failure occurs

-G: grow mode: change the size or shape of an active array

#If none of the mode options is given, mdadm defaults to manage mode


-a {yes|no}: whether to create the device file for the new md device automatically #choose yes, otherwise creation cannot proceed

-l: RAID level

-n: number of block devices used as active members, not counting spares

-x: number of spare block devices

device ...: the member block devices themselves
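
Putting these options together, here is a sketch of the general create syntax (the device names below are placeholders, not the partitions used in the walkthrough that follows):

#-C create /dev/md5, -a yes auto-create the device file, -l level, -n active members, -x spares

mdadm -C /dev/md5 -a yes -l 1 -n 2 -x 1 /dev/sdX1 /dev/sdY1 /dev/sdZ1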


Example:

To create a 10G RAID0, any of the following combinations would do:

(1) two 5G disks

(2) five 2G disks

(3) ten 1G disks


2. Create a 10G RAID0 in which each newly added partition contributes 5G

(1) Preparation: add two 10G disks to the virtual machine (larger is fine too)

(2) Once the disks are added, check them with fdisk -l
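
If the newly added virtual disks do not show up in fdisk -l yet, a reboot works, or the SCSI bus can be rescanned; the host number below is an assumption and may differ on your system:

#rescan SCSI host0 so hot-added virtual disks appear without a reboot

echo "- - -" > /sys/class/scsi_host/host0/scan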


[root@test ~]# fdisk -l /dev/sd[a-z]


Disk /dev/sda: 7516 MB, 7516192768 bytes

255 heads, 63 sectors/track, 913 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000ee302


Device Boot Start End Blocks Id System

/dev/sda1 * 1 26 204800 83 Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 26 91 524288 82 Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3 91 914 6609920 83 Linux

Partition 3 does not end on cylinder boundary.


Disk /dev/sdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000


Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000


(3) Create a new partition

[root@test ~]# fdisk /dev/sdb

Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1305, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +5G


Command (m for help): p


Disk /dev/sdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x36c58cc5


Device Boot Start End Blocks Id System

/dev/sdb1 1 654 5253223+ 83 Linux


(4) Change the partition type:

The l command lists the partition type codes that fdisk supports:

Command (m for help): l


0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris

1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-

2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-

3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-

4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx

5 Extended 42 SFS 86 NTFS volume set da Non-FS data

6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .

7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility

8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt

9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access

a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O

b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor

c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs

e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT

f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/

10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b

11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor

12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor

14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary

16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS

17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE

18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto

1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep

1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT

#For Linux software RAID the partition type should be fd, so the partition must be changed to type fd (Linux raid autodetect):

#Use the t command to change the partition type

Command (m for help): t

Selected partition 1

#Enter the type code to change to; here enter fd

Hex code (type L to list codes): fd

Changed system type of partition 1 to fd (Linux raid autodetect)

#Print the table again: the partition type now shows Linux raid autodetect

Command (m for help): p


Disk /dev/sdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x36c58cc5


Device Boot Start End Blocks Id System

/dev/sdb1 1 654 5253223+ fd Linux raid autodetect

#Save and exit

Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.

Syncing disks.

#After saving and exiting, do NOT format the partition: the raw partitions underneath a soft RAID must not carry a filesystem of their own

The other disk needs exactly the same treatment; it is not shown here. A scripted equivalent is sketched below.
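
As a sketch only (verify interactively on a real disk), the same partitioning of /dev/sdc can be scripted by piping the answers into fdisk; the +5G size and the fd type mirror the steps above:

#n new, p primary, 1 partition number, empty line = default first cylinder, +5G size, t/fd type, w write

printf 'n\np\n1\n\n+5G\nt\nfd\nw\n' | fdisk /dev/sdc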

(5) Use partx to re-read the partition table so the kernel creates the corresponding partition devices:

[root@test ~]# partx -a /dev/sdb1 /dev/sdb

[root@test ~]# partx -a /dev/sdc1 /dev/sdc

#Check the kernel's partition table:

[root@test ~]# cat /proc/partitions

major minor  #blocks  name


8 0 7340032 sda

8 1 204800 sda1

8 2 524288 sda2

8 3 6609920 sda3

8 16 10485760 sdb

8 17 5253223 sdb1

8 32 10485760 sdc

8 33 5253223 sdc1


(6) Create the RAID0

#Check the current RAID status

[root@test ~]# cat /proc/mdstat

Personalities :

unused devices: <none>


#Create the array with mdadm:

#the device to create is /dev/md0;

#-a yes means the device file is created automatically;

#-l sets the level (raid0);

#-n 2 selects the two member devices, sdb1 and sdc1;

[root@test ~]# mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdc1

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.

Check the soft RAID status again:

[root@test ~]# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdc1[1] sdb1[0]

10506240 blocks super 1.2 512k chunks


unused devices: <none>

#Look at the soft RAID device files:

[root@test ~]# ls /dev/md*

/dev/md0


/dev/md:

md-device-map

#md-device-map records the device mapping for all md devices on this host

[root@test ~]# cat /dev/md/md-device-map

md0 1.2 59e47d02:52b9eb51:a49efcfe:9e45b7fd /dev/md0


(7) Once an md device has been created, it can be formatted:

[root@test ~]# mke2fs -t ext4 /dev/md0

(8) Mount the device:

[root@test ~]# mount /dev/md0 /backup/

[root@test ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3 6.3G 3.1G 2.9G 52% /

tmpfs 245M 0 245M 0% /dev/shm

/dev/sda1 194M 28M 156M 16% /boot

/dev/md0 9.9G 151M 9.3G 2% /backup


[root@test ~]# ll /backup/

total 16

drwx------. 2 root root 16384 Nov 19 06:28 lost+found
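
A step not covered in this walkthrough: to have the array assembled and mounted automatically after a reboot, its definition can be recorded and an fstab entry added (the mount options below are an assumption):

#record the array so it is assembled under a stable name at boot

mdadm -D --scan >> /etc/mdadm.conf

#mount it automatically at boot

echo "/dev/md0 /backup ext4 defaults 0 0" >> /etc/fstab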

3. Monitoring soft RAID

A device file's major number identifies the class of device: all sd* disks, for instance, share one major number, so a disk and a keyboard are different classes of device told apart by their major numbers. The minor number distinguishes individual devices within the same class.

As shown below, the device number is a pair such as 8, 0 separated by a comma: 8 is the major number and 0 is the minor number (a particular device within that class).

[root@test ~]# ls -l /dev/sd*

brw-rw----. 1 root disk 8, 0 Nov 19 05:27 /dev/sda

brw-rw----. 1 root disk 8, 1 Nov 19 05:27 /dev/sda1

brw-rw----. 1 root disk 8, 2 Nov 19 05:27 /dev/sda2

brw-rw----. 1 root disk 8, 3 Nov 19 05:27 /dev/sda3

brw-rw----. 1 root disk 8, 16 Nov 19 05:42 /dev/sdb

brw-rw----. 1 root disk 8, 17 Nov 19 06:14 /dev/sdb1

brw-rw----. 1 root disk 8, 32 Nov 19 05:46 /dev/sdc

brw-rw----. 1 root disk 8, 33 Nov 19 06:14 /dev/sdc1

A soft RAID can be inspected with the mdadm -D command, in the form:

#mdadm -D <raid device>

[root@test ~]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 06:14:36 2013

Raid Level : raid0

Array Size : 10506240 (10.02 GiB 10.76 GB)

Raid Devices : 2

Total Devices : 2

Persistence : Superblock is persistent


Update Time : Tue Nov 19 06:14:36 2013

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0


Chunk Size : 512K


Name : test:0 (local to host test)

UUID : 027de459:51ebb952:fefc9ea4:fdb7459e

Events : 0


Number Major Minor RaidDevice State

0 8 17 0 active sync /dev/sdb1

1 8 33 1 active sync /dev/sdc1
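
Besides one-shot mdadm -D checks, the -F (monitor/follow) mode listed earlier can watch an array continuously and send mail when something fails; a sketch, where the mail address and polling interval are assumptions (add --daemonise to run it in the background):

#poll every 300 seconds and mail root when a member fails or a spare is activated

mdadm --monitor --mail=root --delay=300 /dev/md0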



4. Create a 2G RAID1

(1) Requirement: two disk partitions of 2G each.

(2) Create the partitions and set their type to Linux raid autodetect (fd); the steps are the same as before and are omitted.

#Check the partitions:

[root@test ~]# fdisk -l

Disk /dev/sda: 7516 MB, 7516192768 bytes

255 heads, 63 sectors/track, 913 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000ee302


Device Boot Start End Blocks Id System

/dev/sda1 * 1 26 204800 83 Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 26 91 524288 82 Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3 91 914 6609920 83 Linux

Partition 3 does not end on cylinder boundary.


Disk /dev/sdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x36c58cc5


Device Boot Start End Blocks Id System

/dev/sdb1 1 654 5253223+ fd Linux raid autodetect

/dev/sdb2 655 916 2104515 fd Linux raid autodetect


Disk /dev/sdc: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x2d692c9f


Device Boot Start End Blocks Id System

/dev/sdc1 1 654 5253223+ fd Linux raid autodetect

/dev/sdc2 655 916 2104515 fd Linux raid autodetect


Disk /dev/md127: 10.8 GB, 10758389760 bytes

2 heads, 4 sectors/track, 2626560 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

Disk identifier: 0x00000000


(3) Create the RAID

[root@test ~]# mdadm -C /dev/md0 -l 1 -n 2 /dev/sd{b,c}2

#The following warning appears: the array uses version 1.x metadata, which may not be suitable for /boot; answer y to ignore it and continue

mdadm: Note: this array has metadata at the start and

may not be suitable as a boot device. If you plan to

store '/boot' on this device please ensure that

your boot-loader understands md/v1.x metadata, or use

--metadata=0.90

Continue creating array? y #enter y

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.

#Check the soft RAID status

[root@test ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md0 : active raid1 sdc2[1] sdb2[0]

2102400 blocks super 1.2 [2/2] [UU]


md127 : active raid0 sdb1[0] sdc1[1]

10506240 blocks super 1.2 512k chunks


unused devices: <none>

(4) Format it:

[root@test ~]# mke2fs -t ext4 /dev/md0

#Mount it and check its size

[root@test ~]# mount /dev/md0 /mnt/

[root@test ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3 6.3G 3.1G 2.9G 52% /

tmpfs 245M 0 245M 0% /dev/shm

/dev/sda1 194M 28M 156M 16% /boot

/dev/md0 2.0G 68M 1.9G 4% /mnt

#Test that files can be read and written

[root@test mnt]# pwd

/mnt

[root@test mnt]# cp /etc/inittab .

[root@test mnt]# tail -3 inittab

# 6 - reboot (Do NOT setinitdefault to this)

#

id:3:initdefault:


(5) Test RAID availability

Here a member failure is simulated to see how the array behaves.

The options used:

-f #mark the given device as faulty (simulate a failure)

-r #remove the failed device from the array

-a #add a device to the array

#Simulate failure of the sdb2 member of the RAID1:

[root@test ~]# mdadm /dev/md0 -f /dev/sdb2

mdadm: set /dev/sdb2 faulty in /dev/md0

#Check the array state with -D:

[root@test ~]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 07:16:13 2013

Raid Level : raid1

Array Size : 2102400 (2.01 GiB 2.15 GB)

Used Dev Size : 2102400 (2.01 GiB 2.15 GB)

Raid Devices : 2

Total Devices : 2

Persistence : Superblock is persistent


Update Time : Tue Nov 19 08:47:29 2013

State : clean, degraded

Active Devices : 1

Working Devices : 1

Failed Devices : 1

Spare Devices : 0


Name : test:0 (local to host test)

UUID : 8503cb53:5e62228e:8f063b14:d9b984d4

Events : 18


Number Major Minor RaidDevice State

0 0 0 0 removed

1 8 34 1 active sync /dev/sdc2

#As shown below, sdb2 is now in the faulty spare state

0 8 18 - faulty spare /dev/sdb2


(6) Check again whether the RAID partition can still be read and written

[root@test mnt]# pwd

/mnt

[root@test mnt]# head -1 ./inittab

# inittab is only used by upstart for the default runlevel.

Everything still reads correctly.


(7) Remove the device:

A device can be removed from the array with the -r option:

#Remove the sdb2 partition from md0

[root@test mnt]# mdadm /dev/md0 -r /dev/sdb2

mdadm: hot removed /dev/sdb2 from /dev/md0

#Check the array state

[root@test mnt]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 07:16:13 2013

Raid Level : raid1

Array Size : 2102400 (2.01 GiB 2.15 GB)

Used Dev Size : 2102400 (2.01 GiB 2.15 GB)

Raid Devices : 2

Total Devices : 1

Persistence : Superblock is persistent


Update Time : Tue Nov 19 08:55:11 2013

State : clean, degraded ##no longer redundant; effectively running on a single disk

Active Devices : 1

Working Devices : 1

Failed Devices : 0

Spare Devices : 0


Name : test:0 (local to host test)

UUID : 8503cb53:5e62228e:8f063b14:d9b984d4

Events : 35


Number Major Minor RaidDevice State

0 0 0 0 removed

1 8 34 1 active sync /dev/sdc2

As shown above, only the /dev/sdc2 member is left.


(8) Add a new partition to the RAID

If the device has been repaired, or the disk replaced, it can be added back with the -a option:

[root@test mnt]# mdadm /dev/md0 -a /dev/sdb2

mdadm: added /dev/sdb2

#Check the md0 device information

[root@test mnt]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 07:16:13 2013

Raid Level : raid1

Array Size : 2102400 (2.01 GiB 2.15 GB)

Used Dev Size : 2102400 (2.01 GiB 2.15 GB)

Raid Devices : 2

Total Devices : 2

Persistence : Superblock is persistent


Update Time : Tue Nov 19 09:10:00 2013

State : clean, degraded, recovering

Active Devices : 1

Working Devices : 2

Failed Devices : 0

Spare Devices : 1


Rebuild Status : 76% complete


Name : test:0 (local to host test)

UUID : 8503cb53:5e62228e:8f063b14:d9b984d4

Events : 51


Number Major Minor RaidDevice State

2 8 18 0 spare rebuilding /dev/sdb2

1 8 34 1 active sync /dev/sdc2

#If the original device cannot be repaired, another device such as sdd2 can be added instead
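
To follow the rebuild shown above until it reaches 100%, the progress in /proc/mdstat can simply be polled; a minimal sketch:

#refresh the rebuild progress every second; Ctrl+C to quit

watch -n 1 cat /proc/mdstat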


5. Create a RAID1 with a hot spare (three 5G partitions)

Requirement: three disk partitions of 5G each

#Only two suitable partitions exist so far, so add another disk to the virtual machine and create one more partition on it

#The earlier arrays have already claimed the existing partitions, so the RAID has to be torn down and rebuilt first

(1) Rebuild the RAID

[root@test ~]# ls /dev/md*

/dev/md126 /dev/md127


/dev/md:

md-device-map test:0 test:0_0

#Check the RAID status

[root@test ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md126 : active raid0 sdc1[1] sdb1[0]

10506240 blocks super 1.2 512k chunks


md127 : active (auto-read-only) raid1 sdc2[1] sdb2[2]

2102400 blocks super 1.2 [2/2] [UU]


unused devices: <none>


(2) Stop md126

An array can be stopped with the -S option; after that it is no longer running:

[root@test ~]# mdadm -S /dev/md126

mdadm: stopped /dev/md126

[root@test ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md127 : active (auto-read-only) raid1 sdc2[1] sdb2[2]

2102400 blocks super 1.2 [2/2] [UU]


unused devices: <none>

#Confirm which md devices remain

[root@test ~]# ls /dev/md*

/dev/md127


/dev/md:

md-device-map test:0
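
Not part of the original steps, but when partitions that used to belong to an array are reused, it is common to wipe the stale md superblock first so the old metadata is not auto-assembled again; a sketch:

#erase the old RAID metadata left on the members of the stopped md126

mdadm --zero-superblock /dev/sdb1 /dev/sdc1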


(3) Re-create the RAID with two active partitions and one spare

[root@test ~]# mdadm -C /dev/md0 -l 1 -n 2 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1

#Check the RAID status:

[root@test ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md0 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]

5249088 blocks super 1.2 [2/2] [UU]


md127 : active (auto-read-only) raid1 sdc2[1] sdb2[2]

2102400 blocks super 1.2 [2/2] [UU]


[root@test ~]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 09:46:34 2013

Raid Level : raid1

Array Size : 5249088 (5.01 GiB 5.38 GB)

Used Dev Size : 5249088 (5.01 GiB 5.38 GB)

Raid Devices : 2

Total Devices : 3

Persistence : Superblock is persistent


Update Time : Tue Nov 19 09:47:02 2013

State : clean

Active Devices : 2

Working Devices : 3

Failed Devices : 0

Spare Devices : 1


Name : test:0 (local to host test)

UUID : 0ef21803:ed80e363:4550aa9b:68d53ea6

Events : 17


Number Major Minor RaidDevice State

0 8 17 0 active sync /dev/sdb1

1 8 33 1 active sync /dev/sdc1


2 8 49 - spare /dev/sdd1

If either sdb1 or sdc1 fails, sdd1 will take its place.

#Simulate failure of sdc1 in md0

[root@test ~]# mdadm /dev/md0 -f /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0

#Check md0's state

[root@test ~]# mdadm -D /dev/md0 | tail -5

Number Major Minor RaidDevice State

0 8 17 0 active sync /dev/sdb1

2 8 49 1 active sync /dev/sdd1


1 8 33 - faulty spare /dev/sdc1

#At this point sdc1 is marked faulty, which means it can now be removed

(4) Remove the device

[root@test ~]# mdadm /dev/md0 -r /dev/sdc1

mdadm: hot removed /dev/sdc1 from /dev/md0

#Check the md0 device information again

[root@test ~]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 09:46:34 2013

Raid Level : raid1

Array Size : 5249088 (5.01 GiB 5.38 GB)

Used Dev Size : 5249088 (5.01 GiB 5.38 GB)

Raid Devices : 2

Total Devices : 2

Persistence : Superblock is persistent


Update Time : Tue Nov 19 09:59:19 2013

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0


Name : test:0 (local to host test)

UUID : 0ef21803:ed80e363:4550aa9b:68d53ea6

Events : 37


Number Major Minor RaidDevice State

0 8 17 0 active sync /dev/sdb1

2 8 49 1 active sync /dev/sdd1

#There are now two working members: after sdc1 failed, sdd1 took over, and data reads and writes were not affected
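
The -G (grow) mode listed at the beginning was not demonstrated; as a sketch only, the same array could be widened into a three-way mirror by adding another member and then growing the device count (/dev/sde1 is a hypothetical spare partition):

#add a new member, then turn the 2-way mirror into a 3-way mirror

mdadm /dev/md0 -a /dev/sde1

mdadm -G /dev/md0 -n 3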


6. Create a 10G RAID5

Requirement: at least three 5G disk partitions

(1) Stop device md0

#First make sure nothing from it is mounted, then stop it with -S

[root@test ~]# mdadm -S /dev/md0

mdadm: stopped /dev/md0

(2) Re-create the RAID

[root@test ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -c 256 /dev/sdc1 /dev/sdb1 /dev/sdd1

#-c sets the chunk size; if omitted, it defaults to 512K

#Check the RAID status

[root@test ~]# cat /proc/mdstat

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]

md0 : active raid5 sdd1[3] sdb1[1] sdc1[0]

10498048 blocks super 1.2 level 5, 256k chunk, algorithm 2 [3/3] [UUU]


md127 : active (auto-read-only) raid1 sdc2[1] sdb2[2]

2102400 blocks super 1.2 [2/2] [UU]


unused devices: <none>

#Check md0's state

[root@test ~]# mdadm -D /dev/md0

/dev/md0:

Version : 1.2

Creation Time : Tue Nov 19 10:12:32 2013

Raid Level : raid5

Array Size : 10498048 (10.01 GiB 10.75 GB) #total array size

Used Dev Size : 5249024 (5.01 GiB 5.38 GB) #space contributed by each member

Raid Devices : 3

Total Devices : 3

Persistence : Superblock is persistent


Update Time : Tue Nov 19 10:12:58 2013

State : clean

Active Devices : 3

Working Devices : 3

Failed Devices : 0

Spare Devices : 0


Layout : left-symmetric

Chunk Size : 256K


Name : test:0 (local to host test)

UUID : 5f2ec8bd:ef76ce16:b68d0751:739bf1bb

Events : 18


Number Major Minor RaidDevice State

0 8 33 0 active sync /dev/sdc1

1 8 17 1 active sync /dev/sdb1

3 8 49 2 active sync /dev/sdd1

(3) Format the partition

[root@test ~]# mke2fs -t ext4 /dev/md0
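
An optional variant of this format step, shown only as a sketch: ext4 can be told about the 256K chunk so its allocation aligns with the stripes (assuming 4K filesystem blocks, stride = 256K / 4K = 64 and stripe-width = 64 x 2 data disks = 128):

#align ext4 with the RAID5 chunk size; the values assume 4K blocks and 2 data disks

mke2fs -t ext4 -E stride=64,stripe-width=128 /dev/md0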

#Mount it

[root@test ~]# mount /dev/md0 /mnt/

#Check that it can be read and written

[root@test mnt]# pwd

/mnt

[root@test mnt]# echo test > 1.txt

[root@test mnt]# cat 1.txt

test

(4) Verify the RAID5:

#Fail one of its members

[root@test ~]# mdadm /dev/md0 -f /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0

#Check the state

[root@test ~]# mdadm -D /dev/md0 | tail -5

0 0 0 0 removed

1 8 17 1 active sync /dev/sdb1

3 8 49 2 active sync /dev/sdd1


0 8 33 - faulty spare /dev/sdc1

#Check the mount point again; as shown below, reads and writes still work

[root@test ~]# cd /mnt/

[root@test mnt]# ls

1.txt lost+found

[root@test mnt]# cat 1.txt

test


(5) Verification 2: simulate the failure of a second member and see what happens

[root@test ~]# mdadm /dev/md0 -f /dev/sdd1

#Check the RAID status

[root@test ~]# mdadm -D /dev/md0 | tail

0 0 0 0 removed

1 8 17 1 active sync /dev/sdb1

2 0 0 2 removed


0 8 33 - faulty spare /dev/sdc1

3 8 49 - faulty spare /dev/sdd1


#Check whether the mount point is still readable and writable

[root@test ~]# cd /mnt/

[root@test mnt]# ls

1.txt lost+found passwd

[root@test mnt]# touch 2

[root@test mnt]# ll

#Remount md0

[root@test ~]# umount -lf /mnt/

[root@test ~]# mount /dev/md0 /mnt

mount: wrong fs type, bad option, bad superblock on /dev/md0,

missing codepage or helper program, or other error

In some cases useful info is found in syslog - try

dmesg | tail or so

#As shown, the mount fails: RAID5 can survive the loss of only one member, so with two members gone the array no longer works


(6) Re-assembling the RAID5 on a new host

mdadm -A /dev/md2 -a yes /dev/sd{a,b,c}1
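
In practice, assembling on a new host usually starts by reading the metadata stored on each member and then letting mdadm find the pieces itself; a sketch:

#inspect the RAID superblock on one member to confirm its level, UUID and member count

mdadm -E /dev/sda1

#assemble every array whose members can be found

mdadm -A --scan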


To sum up: RAID5 is an independent-access array in which the parity information is spread evenly across all of the member disks.

Because the parity is distributed, RAID5 has the notion of left-symmetric and right-symmetric layouts; left-symmetric is generally the best choice, with good performance and a simple organisation. Since no single disk is dedicated to parity, there is no parity-disk bottleneck in throughput, so RAID5 keeps the advantages of RAID4 while avoiding its weakness.
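
The layout can also be chosen explicitly at creation time; a minimal sketch (left-symmetric is already the default for RAID5, so the flag is normally redundant):

#-p / --layout selects the parity rotation scheme

mdadm -C /dev/md0 -l 5 -n 3 -p left-symmetric /dev/sdb1 /dev/sdc1 /dev/sdd1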




This article was reposted from zuzhou's 51CTO blog; original link: http://blog.51cto.com/yijiu/1328304
