RAID Concepts
RAID can be implemented in two ways: hardware RAID and software RAID.
Common RAID levels: RAID 0, RAID 1, RAID 5, RAID 10.
Three key RAID techniques: striping, mirroring, and parity.
How the common RAID levels work:
How RAID 0 works: data is striped in chunks across all member disks. Performance and capacity are maximized, but there is no redundancy: one failed disk destroys the whole array.
How RAID 1 works: the same data is written (mirrored) to every member disk. Usable capacity equals one disk, and the array survives any single disk failure.
How RAID 5 works: data and XOR parity are striped across all member disks, with the parity rotated among them. Capacity is (n-1) disks, and a single failed disk can be rebuilt from the survivors.
How RAID 10 works: disks are grouped into RAID 1 mirror pairs, which are then striped together as RAID 0. Half the raw capacity is usable, and one disk in each mirror pair may fail without data loss.
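The parity idea behind RAID 5 can be sketched with plain XOR arithmetic. This is an illustrative sketch, not mdadm itself; the two byte values are made up:

```shell
# RAID 5 parity sketch: the parity block is the XOR of the data blocks.
d1=170   # data block on disk 1 (0xAA) -- example value
d2=85    # data block on disk 2 (0x55) -- example value
parity=$(( d1 ^ d2 ))          # stored on the third disk
echo "parity=$parity"          # prints parity=255

# If disk 1 fails, its block is recovered from the surviving
# data block and the parity block:
recovered=$(( parity ^ d2 ))
echo "recovered=$recovered"    # prints recovered=170
```

This is why RAID 5 tolerates exactly one failed disk: XOR-ing the remaining blocks reproduces the missing one.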
Handling RAID Disk Failures
Hot spare (HotSpare)
Definition: when a disk in a redundant RAID group fails, a healthy standby disk elsewhere in the RAID system automatically takes its place without disturbing the RAID system's normal operation, restoring the group's redundancy promptly.
Global: the spare disk is shared by all redundant RAID groups in the system.
Dedicated: the spare disk is reserved for one particular redundant RAID group.
As shown in the figure below: an example of a global hot spare, where the spare disk is shared by two RAID groups and can automatically replace a failed disk in either one.
Hot swap (HotSwap)
Definition: replacing a failed disk in the RAID system with a healthy physical disk without interrupting normal system operation.
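mdadm can implement a global hot spare like the one described above: when two ARRAY lines in /etc/mdadm.conf carry the same spare-group tag, mdadm's monitor mode will migrate a spare disk from one array to the other when a member fails. A sketch; the array names are examples and the UUIDs are placeholders:

```shell
# /etc/mdadm.conf (sketch; UUIDs below are placeholders)
MAILADDR root
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000001 spare-group=shared
ARRAY /dev/md1 UUID=00000000:00000000:00000000:00000002 spare-group=shared

# Run the monitor so spares can move between arrays in the same spare-group:
# mdadm --monitor --scan --daemonise
```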
RAID 0/1/5/10: Setup, Use, Removal, and Caveats
How RAID is implemented
Interview question: with hardware RAID, do we build the array before or after installing the operating system?
Answer: build the array first, then install the OS. On most servers, a prompt to enter the RAID configuration utility is shown during boot.
Hardware RAID: requires a RAID controller card. The disks attach to the card, which manages and controls them and handles data distribution and maintenance. The card has its own processor, so it is fast.
Software RAID: implemented by the operating system.
Hands-on: Building RAID Arrays
The mdadm command in detail
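The mdadm options used throughout this section, collected in one place as a quick reference (not an exhaustive list of the command's modes):

```shell
mdadm -Cv /dev/md0 ...        # -C / --create: create a new array; -v verbose
# -n N  number of active member disks      -l N  RAID level
# -x N  number of spare disks              -a yes  auto-create the device node
mdadm -D /dev/md0             # -D / --detail: show detailed array status
mdadm /dev/md0 -f /dev/sdb    # -f / --fail: mark a member disk as faulty
mdadm /dev/md0 -r /dev/sdb    # -r / --remove: remove a faulty disk from the array
mdadm /dev/md0 -a /dev/sdb    # -a / --add: add a disk to an existing array
mdadm -S /dev/md0             # -S / --stop: stop (deactivate) the array
```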
Building RAID 10:
1. Add four disks
2. Install mdadm
[root@lu ~]# yum install mdadm -y
3. Create the RAID 10 array
[root@lu ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sd{b,c,d,e}
# -C create; -v show progress; /dev/md0 the software RAID device;
# -a yes auto-create the device node; -n 4 number of disks;
# -l 10 RAID level 10; /dev/sd{b,c,d,e} the member disks
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
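The usable size of this RAID 10 array follows from its geometry: four disks in two mirror pairs give half the raw capacity, i.e. twice the per-disk size set in the output above (20954112K). A quick check of the arithmetic:

```shell
disks=4; mirror_copies=2; per_disk_k=20954112   # values from the mdadm output above
echo $(( disks / mirror_copies * per_disk_k ))  # prints 41908224 (K), about 40 GiB
```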
4. Format the array as ext4
[root@lu ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
	2654208, 4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
5. Mount the filesystem
[root@lu ~]# mkdir /raid10
[root@lu ~]# mount /dev/md0 /raid10
[root@lu ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.2G   16G   7% /
devtmpfs                 224M     0  224M   0% /dev
tmpfs                    236M     0  236M   0% /dev/shm
tmpfs                    236M  5.6M  230M   3% /run
tmpfs                    236M     0  236M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     48M     0   48M   0% /run/user/0
/dev/md0                  40G   49M   38G   1% /raid10
6. View the details of /dev/md0
[root@lu ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Feb 28 19:11:41 2019
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
     Resync Status : 96% complete
              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 26
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
7. Add the mount to the configuration file
[root@lu ~]# echo "/dev/md0 /raid10 ext4 defaults 0 0" >> /etc/fstab
# persists across reboots; note that a bad entry here can keep the system
# from booting, requiring rescue mode to delete the line
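Besides the fstab line, it is safer to also record the array definition so it reassembles under the same name after a reboot, and to verify the fstab entry without rebooting; a sketch:

```shell
# Save the array definition so it reassembles as /dev/md0 at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Mount everything in fstab now; an error here would otherwise
# have shown up only at boot time
mount -a
```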
Damaging the Array and Repairing It
RAID 10 is deployed in production to improve both disk read/write speed and data safety. Since our disks here are simulated inside a virtual machine, however, the improvement in read/write speed may not be obvious.
Once a physical disk is confirmed to be damaged and no longer usable, use the mdadm command to remove it, then check the RAID array's status; you will see that the state has changed.
1. Simulate a disk failure
[root@lu ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@lu ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Feb 28 19:15:59 2019
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : lu:0  (local to host lu)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 30
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
       0       8       16        -      faulty   /dev/sdb
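The faulty disk still appears at the bottom of the device list above. As an alternative to rebooting, the failed member can be detached from the array explicitly; a sketch using the same device names:

```shell
[root@lu ~]# mdadm /dev/md0 -r /dev/sdb    # -r / --remove: detach the failed member
```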
2. Add a new disk
In a RAID 10 array, a failed disk in one of the RAID 1 mirror pairs does not stop the RAID 10 array from working. Once a new disk has been purchased, simply swap it in with the mdadm command; in the meantime, files can still be created and deleted normally in the mounted /raid10 directory. Because we are simulating the disks in a virtual machine, reboot the system first and then add the new disk to the RAID array.
[root@lu ~]# reboot
[root@lu ~]# umount /raid10
[root@lu ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
[root@lu ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Feb 28 19:19:14 2019
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 7% complete          # rebuild progress is shown here
              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 35
    Number   Major   Minor   RaidDevice State
       4       8       16        0      spare rebuilding   /dev/sdb   # rebuilding
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@lu ~]# mdadm -D /dev/md0   # run again to check whether the rebuild has finished
Hands-on: Building a RAID 5 Array with a Spare Disk
1. Add the disks
2. Create the RAID 5 array
[root@lu ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd{b,c,d,e}
# -n 3 active disks; -l 5 RAID level 5; -x 1 one spare disk
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
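With three active disks and one spare, RAID 5 devotes one disk's worth of space to parity, so the usable size is (n-1) times the per-disk size from the output above; the spare contributes nothing until a disk fails. A quick check of the arithmetic:

```shell
active_disks=3; per_disk_k=20954112            # values from the mdadm output above
echo $(( (active_disks - 1) * per_disk_k ))    # prints 41908224 (K), about 40 GiB
```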
3. Format as ext4
[root@lu ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
	2654208, 4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
4. Mount the filesystem
[root@lu ~]# mkdir /raid5
[root@lu ~]# mount /dev/md0 /raid5
[root@lu ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.2G   16G   7% /
devtmpfs                 476M     0  476M   0% /dev
tmpfs                    488M     0  488M   0% /dev/shm
tmpfs                    488M  7.7M  480M   2% /run
tmpfs                    488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/md0                  40G   49M   38G   1% /raid5
5. View the array details; note that there is one spare disk, /dev/sde
[root@lu ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:35:10 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Feb 28 19:37:11 2019
             State : active
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : ken:0  (local to host ken)
              UUID : b693fe72:4452bd3f:4d995779:ee33bc77
            Events : 76
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       3       8       64        -      spare   /dev/sde
6. Simulate failure of /dev/sdb; note that the spare /dev/sde immediately begins rebuilding
[root@lu ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@lu ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:35:10 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Feb 28 19:38:41 2019
             State : active, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 2% complete
              Name : lu:0  (local to host lu)
              UUID : b693fe72:4452bd3f:4d995779:ee33bc77
            Events : 91
    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       0       8       16        -      faulty   /dev/sdb