Creating RAID 10 on Linux

Building RAID 10 requires at least four member devices; in this example, four partitions on /dev/sdb stand in for them:
[root@kashu ~]# fdisk -l /dev/sdb
  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          65      522081   83  Linux
/dev/sdb2              66         130      522112+  83  Linux
/dev/sdb3             131         195      522112+  83  Linux
/dev/sdb4             196        2610    19398487+   5  Extended
/dev/sdb5             196         260      522081   83  Linux
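A partition table like the one above can also be produced non-interactively; a sketch using parted (destructive and root-only — it assumes /dev/sdb is a blank, expendable disk, and the offsets are illustrative, not taken from the original session):

```shell
# Sketch: carve /dev/sdb into four ~512 MB RAID members plus an extended container
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary  1MiB    513MiB   # -> sdb1
parted -s /dev/sdb mkpart primary  513MiB  1025MiB  # -> sdb2
parted -s /dev/sdb mkpart primary  1025MiB 1537MiB  # -> sdb3
parted -s /dev/sdb mkpart extended 1537MiB 100%     # -> sdb4 (container only)
parted -s /dev/sdb mkpart logical  1538MiB 2050MiB  # -> sdb5
```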

1. First, create two RAID 1 arrays:
[root@kashu ~]# mdadm -C /dev/md0 -l1 -n2 /dev/sdb[1-2]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@kashu ~]# mdadm -C /dev/md1 -l1 -n2 /dev/sdb{3,5}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Check it (both mirrors report [2/2] [UU], OK):
[root@kashu ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sdb5[1] sdb3[0]
     522069 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb2[1] sdb1[0]
     522069 blocks super 1.2 [2/2] [UU]
unused devices: <none>

2. Then create a RAID 0 on top of them:
[root@kashu ~]# mdadm -C /dev/md10 -l0 -n2 /dev/md[0-1]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Check it (the glob /dev/md* also matches the /dev/md directory, hence the harmless first message below):
[root@kashu ~]# mdadm -D /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
       Version : 1.2
 Creation Time : Thu May  2 03:06:16 2013
    Raid Level : raid1
    Array Size : 522069 (509.92 MiB 534.60 MB)
 Used Dev Size : 522069 (509.92 MiB 534.60 MB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

   Update Time : Thu May  2 03:09:43 2013
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

          Name : kashu.localdomain:0  (local to host kashu.localdomain)
          UUID : eea41616:85b7da19:652e8088:696cd948
        Events : 17

   Number   Major   Minor   RaidDevice State
      0       8       17        0      active sync   /dev/sdb1
      1       8       18        1      active sync   /dev/sdb2
/dev/md1:
       Version : 1.2
 Creation Time : Thu May  2 03:06:41 2013
    Raid Level : raid1
    Array Size : 522069 (509.92 MiB 534.60 MB)
 Used Dev Size : 522069 (509.92 MiB 534.60 MB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

   Update Time : Thu May  2 03:09:43 2013
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

          Name : kashu.localdomain:1  (local to host kashu.localdomain)
          UUID : 9ee855c7:9fc9c27e:67f1fb01:a2430406
        Events : 17

   Number   Major   Minor   RaidDevice State
      0       8       19        0      active sync   /dev/sdb3
      1       8       21        1      active sync   /dev/sdb5
/dev/md10:
       Version : 1.2
 Creation Time : Thu May  2 03:09:43 2013
    Raid Level : raid0
    Array Size : 1041408 (1017.17 MiB 1066.40 MB)
  Raid Devices : 2
 Total Devices : 2
   Persistence : Superblock is persistent

   Update Time : Thu May  2 03:09:43 2013
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

    Chunk Size : 512K

          Name : kashu.localdomain:10  (local to host kashu.localdomain)
          UUID : de8aaea0:b405c41b:c2390d75:91051b0b
        Events : 0

   Number   Major   Minor   RaidDevice State
      0       9        0        0      active sync   /dev/md0
      1       9        1        1      active sync   /dev/md1
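Nesting two RAID 1s under a RAID 0 is the classic construction, but note that mdadm also offers a native raid10 level that builds an equivalent array in a single step; a hedged sketch using the same four partitions (destructive, root required):

```shell
# Sketch: single-command alternative to the nested md0 + md1 + md10 layout
mdadm -C /dev/md10 -l10 -n4 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb5
```

mdadm's default near-2 layout mirrors each chunk on two adjacent devices, so the capacity and redundancy trade-off matches the nested version, with one array to manage instead of three.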

3. Create a filesystem
[root@kashu ~]# mkfs.ext4 /dev/md10
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
65152 inodes, 260352 blocks
13017 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
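The Stride and Stripe width values mkfs chose above are not arbitrary; they follow from the array geometry, and the arithmetic can be checked in the shell:

```shell
# Reproduce mkfs.ext4's Stride/Stripe width from the RAID geometry
chunk_bytes=$((512 * 1024))   # mdadm chunk size, 512K (see mdadm -D /dev/md10)
block_bytes=4096              # ext4 block size reported by mkfs
data_members=2                # the RAID 0 stripes across two devices (md0, md1)

stride=$((chunk_bytes / block_bytes))    # ext4 blocks per RAID chunk
stripe_width=$((stride * data_members))  # ext4 blocks per full stripe

echo "Stride=$stride, Stripe width=$stripe_width"   # Stride=128, Stripe width=256
```

Matching these hints to the chunk size lets ext4 align allocations to full stripes, which is why mkfs picked them up automatically from the md device.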

4. Create a mount point and mount the array
[root@kashu ~]# mkdir /mnt/raid10
[root@kashu ~]# mount /dev/md10 /mnt/raid10
[root@kashu ~]# ll /mnt/raid10
total 16
drwx------ 2 root root 16384 May  2 03:11 lost+found
[root@kashu ~]# df -hT /dev/md10
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/md10     ext4   1001M   39M  912M   5% /mnt/raid10
5. Add the mount entry to /etc/fstab
[root@kashu ~]# blkid /dev/md10
/dev/md10: UUID="f82eefa3-28f3-4dda-8873-63e29f4d9abf" TYPE="ext4"

[root@kashu ~]# vim /etc/fstab
UUID=f82eefa3-28f3-4dda-8873-63e29f4d9abf  /mnt/raid10  ext4  defaults  0 0
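The entry can also be generated from the shell rather than typed into vim, which avoids whitespace mistakes; a sketch reusing the UUID blkid reported:

```shell
# Build the /etc/fstab line from the UUID blkid printed above
uuid="f82eefa3-28f3-4dda-8873-63e29f4d9abf"
entry=$(printf 'UUID=%s  /mnt/raid10  ext4  defaults  0 0' "$uuid")
echo "$entry"    # append for real with: echo "$entry" >> /etc/fstab
```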
6. Append the RAID array information to /etc/mdadm.conf. Note that three ARRAY lines are recorded here, one for each array:
[root@kashu ~]# mdadm -Ds >> /etc/mdadm.conf
[root@kashu ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=kashu.localdomain:0 UUID=eea41616:85b7da19:652e8088:696cd948
ARRAY /dev/md1 metadata=1.2 name=kashu.localdomain:1 UUID=9ee855c7:9fc9c27e:67f1fb01:a2430406
ARRAY /dev/md10 metadata=1.2 name=kashu.localdomain:10 UUID=de8aaea0:b405c41b:c2390d75:91051b0b
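With those ARRAY lines recorded, the whole stack can be reassembled by scanning the config file, which is what happens at boot; a sketch for doing it by hand (root required, nothing mounted):

```shell
# Sketch: stop the whole stack, then reassemble everything listed in /etc/mdadm.conf
mdadm -S /dev/md10 /dev/md0 /dev/md1
mdadm -A -s        # --assemble --scan: reads the ARRAY lines from mdadm.conf
```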
How do we delete the RAID 10? Pay attention to the order: tear it down from the top down; put another way, deletion reverses the order of creation.
1) First unmount the mount point
[root@kashu ~]# umount /mnt/raid10
2) Then stop the RAID 0
[root@kashu ~]# mdadm -S /dev/md10
mdadm: stopped /dev/md10
3) Zero the superblocks on md0 and md1, the two member devices of the RAID 0 (this step is easy to forget, watch out!)
[root@kashu ~]# mdadm --zero-superblock /dev/md[0-1]
4) Then stop the two RAID 1 arrays
[root@kashu ~]# mdadm -S /dev/md[0-1]
mdadm: stopped /dev/md0
mdadm: stopped /dev/md1
5) Zero the superblocks on each member partition
[root@kashu ~]# mdadm --zero-superblock /dev/sdb[1-3,5]
6) Remove the corresponding mount entry from /etc/fstab
7) Remove the corresponding ARRAY lines from /etc/mdadm.conf
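The seven teardown steps can be collected into one script; a sketch that assumes exactly the device names and paths used in this example (destructive, root required):

```shell
#!/bin/sh
# Tear down the nested RAID 10 in the reverse of its creation order
umount /mnt/raid10                                  # 1) unmount
mdadm -S /dev/md10                                  # 2) stop the RAID 0
mdadm --zero-superblock /dev/md0 /dev/md1           # 3) wipe member superblocks on md0/md1
mdadm -S /dev/md0 /dev/md1                          # 4) stop the RAID 1s
mdadm --zero-superblock /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb5  # 5) wipe the partitions
sed -i '\,/mnt/raid10,d' /etc/fstab                 # 6) drop the fstab entry
sed -i '/^ARRAY \/dev\/md/d' /etc/mdadm.conf        # 7) drop the ARRAY lines
```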

