Software RAID1 on Linux
Test environment: a VMware virtual machine.
1. Add three disks (one will later serve as a hot spare): sdb, sdc, and sdd. After adding them, boot Linux and confirm they are visible with fdisk -l:
[root@ning ~]# fdisk -l
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
2. Create a partition on each disk with fdisk.
[root@ning ~]# fdisk /dev/sdb * open the target device
Command (m for help): ? * typing ? is not a valid command, but it still makes fdisk print the menu
?: unknown command
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Press n to enter the new-partition dialog:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p * create a primary partition
Partition number (1-4): 1
First cylinder (1-130, default 1): * press Enter to accept the default
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): * press Enter again to use the whole disk
Using default value 130
Command (m for help): w * write the table and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat the same steps to create a partition on each of the other two disks (sdc and sdd).
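The interactive dialog above can also be scripted. Below is a sketch using sfdisk instead of the interactive fdisk session; the one-partition-per-disk layout matches this lab, and the `,,fd` input syntax assumes a reasonably recent sfdisk:

```shell
# Create one primary partition spanning each disk.
# Type fd (Linux raid autodetect) marks the partition as a software RAID member.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    echo ',,fd' | sfdisk "$disk"
done
```

Setting the partition type to fd (done with t in interactive fdisk) is not strictly required by mdadm, but it is the conventional type for software RAID members.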
3. Create the RAID array with mdadm. The relevant options:
--create (-C) create an array; argument: the array device, /dev/mdX
--level (-l) specify the array level; argument: raidX
--raid-devices (-n) specify the member devices; arguments: a count followed by the devices, here 2 /dev/sdb1 /dev/sdc1
[root@ning ~]# mdadm -C /dev/md0 -l raid1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=1044096K mtime=Thu Nov 5 16:08:28 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=3 ctime=Thu Nov 5 18:16:44 2009
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=1044096K mtime=Thu Nov 5 16:08:28 2009
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid1 devices=3 ctime=Thu Nov 5 18:16:44 2009
Continue creating array? y * type y to confirm
mdadm: array /dev/md0 started.
[root@ning ~]# cat /proc/mdstat * check the creation result
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
1044096 blocks [2/2] [UU]
unused devices: <none>
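In the output above, [UU] means both mirror halves are up to date; a failed member would show as [_U]. That makes a one-line health check possible (a sketch, not part of the original session):

```shell
# print a message only when both RAID1 members of md0 are in sync
grep -A1 '^md0' /proc/mdstat | grep -q '\[UU\]' && echo "md0 healthy"
```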
[root@ning ~]# mdadm -D /dev/md0 * show detailed array information
/dev/md0:
Version : 00.90.03
Creation Time : Thu Nov 5 18:24:08 2009
Raid Level : raid1
Array Size : 1044096 (1019.80 MiB 1069.15 MB)
Used Dev Size : 1044096 (1019.80 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Nov 5 18:24:21 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : ae669b67:2ab07be0:9a080fd7:8a195d80
Events : 0.2
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4. Create a filesystem.
[root@ning ~]# mkfs.ext3 /dev/md0 * format the array as ext3
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
130560 inodes, 261024 blocks
13051 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16320 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
5. Mount the array.
[root@ning ~]# mkdir abc * create the directory abc
[root@ning ~]# mount /dev/md0 /root/abc * mount the array on /root/abc
[root@ning ~]# df -h * verify the mount succeeded
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 14G 596M 12G 5% /
/dev/sda3 2.0G 108M 1.8G 6% /var
/dev/sda2 3.8G 2.7G 1003M 73% /usr
/dev/sda1 99M 11M 83M 12% /boot
tmpfs 125M 0 125M 0% /dev/shm
/dev/md0 1004M 18M 936M 2% /root/abc
6. Configure mdadm so the array can be assembled at startup.
[root@ning ~]# vi /etc/mdadm.conf * mdadm.conf tells mdadm how to assemble the array
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 DEVICE=/dev/sdb1,/dev/sdc1
:wq * save and quit vi
(Note: the correct keyword on an ARRAY line is lowercase devices=; the uppercase DEVICE= used here is why mdadm prints "unrecognised word on ARRAY line" warnings in the later steps.)
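Rather than typing the ARRAY line by hand, mdadm can generate it, which avoids syntax mistakes like the one above (a common approach; the device paths are this lab's):

```shell
# record which partitions to scan, then let mdadm append the ARRAY line
# (it identifies the array by UUID, so it survives device renaming)
echo 'DEVICE /dev/sdb1 /dev/sdc1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
```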
7.
配置自动挂载
[root@ning ~]# vi /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/var /var ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda5 swap swap defaults 0 0
/dev/md0 /root/abc ext3 defaults 0 0
# Beginning of the block added by the VMware software
.host:/ /mnt/hgfs vmhgfs defaults,ttl=5 0 0
# End of the block added by the VMware software
Notes on the /dev/md0 line: /dev/md0 is the device to mount, /root/abc the mount point, ext3 the filesystem type, and defaults the mount options.
The first 0 is the dump field: 0 means the filesystem is not backed up by dump, 1 means it is.
The second 0 is the fsck pass number: 0 means the filesystem is never checked at boot, 1 is reserved for the root filesystem, and 2 is used for any other filesystem that should be checked.
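The new fstab entry can be verified without rebooting (a sketch; it assumes the manual mount from step 5 is still active):

```shell
umount /root/abc   # release the manual mount from step 5
mount -a           # mount everything listed in /etc/fstab
df -h /root/abc    # /dev/md0 should be mounted on /root/abc again
```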
8. Stopping and starting the array.
Both operations consult /etc/mdadm.conf:
mdadm --stop /dev/md0
mdadm --assemble /dev/md0
9. Hot spare.
[root@ning ~]# mdadm /dev/md0 --fail /dev/sdb1 * mark sdb1 as failed
mdadm: unrecognised word on ARRAY line: DEVICE=/dev/sdb1,/dev/sdc1
mdadm: ARRAY line /dev/md0 has no identity information.
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@ning ~]# mdadm -D /dev/md0 * check the status of md0
mdadm: unrecognised word on ARRAY line: DEVICE=/dev/sdb1,/dev/sdc1
mdadm: ARRAY line /dev/md0 has no identity information.
/dev/md0:
Version : 00.90.03
Creation Time : Thu Nov 5 18:24:08 2009
Raid Level : raid1
Array Size : 1044096 (1019.80 MiB 1069.15 MB)
Used Dev Size : 1044096 (1019.80 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Nov 5 19:00:36 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
UUID : ae669b67:2ab07be0:9a080fd7:8a195d80
Events : 0.4
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
2 8 17 - faulty spare /dev/sdb1
[root@ning ~]# mdadm /dev/md0 --add /dev/sdd1 * add sdd1 as a replacement
mdadm: unrecognised word on ARRAY line: DEVICE=/dev/sdb1,/dev/sdc1
mdadm: ARRAY line /dev/md0 has no identity information.
mdadm: added /dev/sdd1
[root@ning ~]# mdadm -D /dev/md0 * check the status of md0 again
mdadm: unrecognised word on ARRAY line: DEVICE=/dev/sdb1,/dev/sdc1
mdadm: ARRAY line /dev/md0 has no identity information.
/dev/md0:
Version : 00.90.03
Creation Time : Thu Nov 5 18:24:08 2009
Raid Level : raid1
Array Size : 1044096 (1019.80 MiB 1069.15 MB)
Used Dev Size : 1044096 (1019.80 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Nov 5 19:02:48 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
UUID : ae669b67:2ab07be0:9a080fd7:8a195d80
Events : 0.6
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 33 1 active sync /dev/sdc1
2 8 17 - faulty spare /dev/sdb1
[root@ning ~]# mdadm /dev/md0 --remove /dev/sdb1 * remove the failed sdb1 from the array
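With a spare strategy in place, mdadm can also watch the array and raise an alert when a member fails. A sketch of mdadm's monitor mode (the mail address is a placeholder):

```shell
# run mdadm as a monitoring daemon: poll every 60 seconds and send mail
# when md0 degrades or a spare is activated
mdadm --monitor --daemonise --delay=60 --mail=root@localhost /dev/md0
```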
10. The RAID1 setup on Linux is now complete. Configuring RAID5 is similar: it requires at least three disks, and ideally four, with one serving as a hot spare.
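For reference, a sketch of the equivalent RAID5 creation under that four-disk layout (/dev/sde1 stands in for the hypothetical fourth partitioned disk; -x designates hot spares):

```shell
# three active members plus one hot spare (-x 1)
mdadm -C /dev/md0 -l raid5 -n 3 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```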