Creating and Maintaining RAID5 on Linux

I. Creating a software RAID5 array

1. First, add one 20 GB disk.

2. Create partitions with fdisk:

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2099199     1048576   83  Linux
/dev/sdb2         2099200     4196351     1048576   83  Linux
/dev/sdb3         4196352     6293503     1048576   83  Linux
/dev/sdb4         6293504    41943039    17824768    5  Extended
/dev/sdb5         6295552     8392703     1048576   83  Linux
/dev/sdb6         8394752    10491903     1048576   83  Linux
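The Blocks column in the fdisk listing is in 1 KiB units, so each of the RAID member partitions above is 1 GiB. A quick check in the shell:

```shell
# fdisk's "Blocks" column is in 1 KiB units
blocks=1048576                 # size of each RAID member partition
mib=$(( blocks / 1024 ))       # 1 KiB blocks -> MiB
echo "${mib} MiB"              # prints "1024 MiB", i.e. 1 GiB
```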

3. Create the RAID5 array

mdadm --create : create a new array
--level=5 : set the RAID level to 5
--raid-devices= : number of disks to use as active devices in the array
--spare-devices= : number of disks to use as spares

If the mdadm command is not available: yum install mdadm

[root@host ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb5
mdadm: cannot open /dev/sdb1: Device or resource busy  ---- /dev/sdb1 cannot be opened
# Run partx -a /dev/sdb or kpartx -a /dev/sdb; if that still fails, reboot so the kernel reloads the partition table. Here a new partition was created instead.
[root@host ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb6      --- with these partitions listed, no error occurred
mdadm: /dev/sdb2 appears to contain an ext2fs file system
       size=20480K  mtime=Tue Mar 31 19:56:34 2020
mdadm: largest drive (/dev/sdb3) exceeds size (18432K) by more than 1%
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Inspect /dev/md0:

[root@host ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Apr  3 15:31:04 2020
        Raid Level : raid5
        Array Size : 36864 (36.00 MiB 37.75 MB)
     Used Dev Size : 18432 (18.00 MiB 18.87 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Apr  3 15:31:06 2020
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : host.home:0  (local to host host.home)
              UUID : 7c146a0b:7cf6dd16:d3bd5957:455a240b
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       4       8       21        2      active sync   /dev/sdb5

       3       8       22        -      spare   /dev/sdb6
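The Array Size above follows from RAID5's capacity formula: with n active devices, the usable space is (n - 1) times the per-device size, since one device's worth of space holds parity. With 3 devices of 18432 KiB each:

```shell
raid_devices=3        # active devices in the array
dev_size_kib=18432    # "Used Dev Size" from mdadm --detail
# RAID5 usable capacity = (n - 1) * per-device size
array_size_kib=$(( (raid_devices - 1) * dev_size_kib ))
echo "${array_size_kib} KiB"   # 36864 KiB, matching "Array Size : 36864"
```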

Create the configuration file:

[root@host ~]# mdadm --detail --scan > /etc/mdadm.conf
    --- The configuration file mdadm.conf does not exist by default; it must be created manually and its format then adjusted. It lets the system assemble the RAID5 array automatically at boot so it can be used directly.
[root@host ~]# vi /etc/mdadm.conf
DEVICE /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb6
ARRAY /dev/md0 metadata=1.2 spares=1 name=host.home:0 UUID=7c146a0b:7cf6dd16:d3bd5957:455a240b

# Command to assemble the array manually:
mdadm --assemble /dev/md0 /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb6
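If you only need the array's UUID (for example, to assemble by UUID instead of listing the member devices), it can be pulled out of the `mdadm --detail --scan` output with standard text tools. A minimal sketch, run here against the ARRAY line shown above as sample input:

```shell
# Sample ARRAY line as produced by `mdadm --detail --scan` above
scan_line='ARRAY /dev/md0 metadata=1.2 spares=1 name=host.home:0 UUID=7c146a0b:7cf6dd16:d3bd5957:455a240b'
# Extract the value after "UUID="
uuid=$(printf '%s\n' "$scan_line" | grep -o 'UUID=[0-9a-f:]*' | cut -d= -f2)
echo "$uuid"
```

The extracted value can then be used as `mdadm --assemble /dev/md0 --uuid="$uuid"`.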

Create the filesystem:

[root@host ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
9240 inodes, 36864 blocks
1843 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33685504
5 block groups
8192 blocks per group, 8192 fragments per group
1848 inodes per group
Superblock backups stored on blocks:
        8193, 24577

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
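Note that mke2fs picked up the RAID geometry automatically: Stride is the chunk size divided by the filesystem block size, and Stripe width is the stride multiplied by the number of data-bearing disks (3 RAID devices minus 1 for parity). The reported numbers check out:

```shell
chunk_kib=512      # mdadm chunk size from --detail output
block_kib=1        # ext4 block size chosen above (1024 bytes)
data_disks=2       # 3 raid-devices minus 1 parity device
stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_disks ))
echo "stride=${stride} stripe_width=${stripe_width}"   # stride=512 stripe_width=1024
```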

II. Maintaining the RAID array

Simulating a disk failure:

1. Mark /dev/sdb3 as failed and remove it
[root@host ~]# mdadm /dev/md0 --fail /dev/sdb3   ---- first mark sdb3 as a faulty disk
mdadm: set /dev/sdb3 faulty in /dev/md0
 
[root@host ~]# cat /proc/mdstat                  --- check the current array status
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb5[4] sdb6[3] sdb3[1](F) sdb2[0]
      36864 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
unused devices:                            --- the spare has been pulled in automatically and the RAID5 rebuild has completed
 
[root@host ~]# mdadm /dev/md0 --remove /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md0       --- disk sdb3 removed
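When scripting this kind of maintenance, failed members can be spotted in /proc/mdstat by their (F) flag. A minimal sketch, run here against the sample mdstat line shown above rather than the live file:

```shell
# Sample line from /proc/mdstat (a live script would read the file itself)
mdstat_line='md0 : active raid5 sdb5[4] sdb6[3] sdb3[1](F) sdb2[0]'
# Devices marked (F) are failed; strip the [slot](F) suffix to get the name
failed=$(printf '%s\n' "$mdstat_line" | grep -o '[a-z0-9]*\[[0-9]*\](F)' | sed 's/\[.*//')
echo "failed: ${failed}"    # failed: sdb3
```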
 
Add a new disk to repair the array:
[root@host ~]# fdisk -l
/dev/sdb1            2048     2099199     1048576   83  Linux
/dev/sdb2         2099200     4196351     1048576   83  Linux
/dev/sdb3         4196352     6293503     1048576   83  Linux
/dev/sdb4         6293504    41943039    17824768    5  Extended
/dev/sdb5         6295552     8392703     1048576   83  Linux
/dev/sdb6         8394752    10491903     1048576   83  Linux
/dev/sdb7        10493952    12591103     1048576   83  Linux

2. Add the new disk to the array
[root@host ~]# mdadm /dev/md0 --add /dev/sdb7
mdadm: added /dev/sdb7             --- sdb7 added to the array

3. Check the status; the array has been repaired
[root@host ~]# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Apr  3 15:31:04 2020
        Raid Level : raid5
        Array Size : 36864 (36.00 MiB 37.75 MB)
     Used Dev Size : 18432 (18.00 MiB 18.87 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Fri Apr  3 16:10:24 2020
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
 
            Layout : left-symmetric
        Chunk Size : 512K
 
Consistency Policy : resync
 
              Name : host.home:0  (local to host host.home)
              UUID : 7c146a0b:7cf6dd16:d3bd5957:455a240b
            Events : 39
 
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       3       8       22        1      active sync   /dev/sdb6
       4       8       21        2      active sync   /dev/sdb5
 
       5       8       23        -      spare   /dev/sdb7
Deleting the RAID array:
umount /dev/md0        # unmount first if the filesystem is mounted
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb6 /dev/sdb7
Finally, delete the corresponding ARRAY line from /etc/mdadm.conf so the array is not assembled again at boot.

