mdadm --create creates a new RAID array;
mdadm --detail shows detailed information about a RAID array;
mdadm --stop stops the specified RAID device;
mdadm --level sets the RAID level;
mdadm --raid-devices sets the number of active disks;
mdadm --spare-devices sets the number of spare disks.
`Use disk partitions to simulate disks`
[root@localhost ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001e99f
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 1332 10485760 83 Linux
/dev/sda3 1332 1593 2097152 82 Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6af65ca1
Device Boot Start End Blocks Id System
/dev/sdb1 1 132 1060258+ 83 Linux
/dev/sdb2 133 264 1060290 83 Linux
/dev/sdb3 265 396 1060290 83 Linux
/dev/sdb4 397 528 1060290 83 Linux
`Create the RAID: specify the level, the number of active devices, and the number of spares`
[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb[1-4]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
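A freshly created RAID 5 array performs an initial sync in the background; a minimal way to watch it (not part of the original session):
cat /proc/mdstat            # shows a resync/recovery progress bar while the array initializes
watch -n 1 cat /proc/mdstat # or refresh every second until the sync finishes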
`View the details of the newly created array`
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Mar 31 22:02:31 2019
Raid Level : raid5
Array Size : 2117632 (2.02 GiB 2.17 GB)
Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Mar 31 22:02:46 2019
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:0 (local to host localhost)
UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
4 8 19 2 active sync /dev/sdb3
3 8 20 - spare /dev/sdb4
`Update the mdadm configuration file`
[root@localhost ~]# vi /etc/mdadm.conf
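Rather than writing the file by hand, the configuration can be captured from the running array; a minimal sketch (the exact ARRAY line mdadm emits varies by version):
mdadm --detail --scan >> /etc/mdadm.conf   # append an ARRAY line so /dev/md0 is assembled on boot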
`Format: create a filesystem on the array`
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
132464 inodes, 529408 blocks
26470 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
`Mount`
[root@localhost ~]# mount /dev/md0 /mnt
`Check the mount`
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 1.1G 8.2G 12% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 30M 150M 17% /boot
/dev/sr0 3.7G 3.7G 0 100% /media
/dev/md0 2.0G 3.1M 1.9G 1% /mnt
[root@localhost ~]# cd /mnt
[root@localhost mnt]# ls -l
total 16
drwx------. 2 root root 16384 Mar 31 22:04 lost+found
[root@localhost mnt]# touch file
[root@localhost mnt]# echo "hello" > file
[root@localhost mnt]# cat file
hello
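To have the array mounted automatically at boot, an fstab entry is needed as well; a minimal sketch (the mount point and options are illustrative, not from the original session):
blkid /dev/md0                                           # look up the filesystem UUID if you prefer UUID= syntax
echo "/dev/md0  /mnt  ext4  defaults  0 0" >> /etc/fstab # persist the mount across reboots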
`Simulate a disk failure`
[root@localhost mnt]# mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@localhost mnt]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb3[4] sdb4[3] sdb2[1] sdb1[0](F)
2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
`View the RAID after the failure (the spare /dev/sdb4 has taken over slot 0)`
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Mar 31 22:02:31 2019
Raid Level : raid5
Array Size : 2117632 (2.02 GiB 2.17 GB)
Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Mar 31 22:06:46 2019
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:0 (local to host localhost)
UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
Events : 37
Number Major Minor RaidDevice State
3 8 20 0 active sync /dev/sdb4
1 8 18 1 active sync /dev/sdb2
4 8 19 2 active sync /dev/sdb3
0 8 17 - faulty /dev/sdb1
`Remove the failed disk`
[root@localhost mnt]# mdadm /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Mar 31 22:02:31 2019
Raid Level : raid5
Array Size : 2117632 (2.02 GiB 2.17 GB)
Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Mar 31 22:09:23 2019
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:0 (local to host localhost)
UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
Events : 38
Number Major Minor RaidDevice State
3 8 20 0 active sync /dev/sdb4
1 8 18 1 active sync /dev/sdb2
4 8 19 2 active sync /dev/sdb3
`Add the disk back (it rejoins as a spare)`
[root@localhost mnt]# mdadm /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Mar 31 22:02:31 2019
Raid Level : raid5
Array Size : 2117632 (2.02 GiB 2.17 GB)
Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Mar 31 22:10:04 2019
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:0 (local to host localhost)
UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
Events : 39
Number Major Minor RaidDevice State
3 8 20 0 active sync /dev/sdb4
1 8 18 1 active sync /dev/sdb2
4 8 19 2 active sync /dev/sdb3
5 8 17 - spare /dev/sdb1
`Even with a failed disk, the data written earlier is still intact`
[root@localhost mnt]# ls -l
total 20
-rw-r--r--. 1 root root 6 Mar 31 22:05 file
drwx------. 2 root root 16384 Mar 31 22:04 lost+found
[root@localhost mnt]# cat file
hello
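To dismantle the experiment, stop the array and wipe its metadata; a minimal sketch (not from the original session; zeroing the superblocks is destructive):
umount /mnt                             # release the filesystem first
mdadm --stop /dev/md0                   # the --stop option listed at the top
mdadm --zero-superblock /dev/sdb[1-4]   # erase the RAID metadata so the partitions can be reused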
pvcreate creates a PV;
pvs shows PV information;
pvdisplay shows detailed PV information;
pvscan -n shows only physical volumes that belong to no volume group;
pvscan -e shows physical volumes that belong to exported volume groups;
pvscan -s prints a short listing;
pvremove removes a physical volume;
pvmove evacuates a physical volume's PEs, moving its data onto other disks (see the sketch after these lists).
vgcreate creates a VG;
vgs shows VG information;
vgdisplay shows detailed VG information;
vgcreate -s specifies the PE size;
vgextend adds new physical volumes to a volume group;
vgreduce removes members from a volume group;
vgremove deletes a volume group;
vgrename renames a volume group;
vgchange changes a volume group's state;
vgexport exports a volume group.
lvcreate creates an LV;
lvs shows LV information;
lvdisplay shows detailed LV information;
lvextend grows a logical volume;
lvreduce shrinks a logical volume;
lvremove deletes a logical volume;
lvrename renames a logical volume;
lvconvert merges a snapshot back into its origin.
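pvmove and vgexport appear in the lists above but are not exercised below; here is a minimal sketch of retiring a disk and handing a volume group to another host (the device and VG names are illustrative, not from the session):
pvmove /dev/sdb1      # migrate all extents off /dev/sdb1 onto free PEs elsewhere in the VG
vgreduce vg /dev/sdb1 # drop the now-empty PV from the volume group
vgchange -an vg       # deactivate the VG before exporting it
vgexport vg           # mark it exported so the other host can run vgimport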
`Create PVs`
[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
`View PVs`
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 lvm2 ---- 1.01g 1.01g
/dev/sdb2 lvm2 ---- 1.01g 1.01g
`Create VGs (vg2 with a 16 MiB PE size)`
[root@localhost ~]# vgcreate vg1 /dev/sdb1 /dev/sdb2
Volume group "vg1" successfully created
[root@localhost ~]# vgcreate -s 16M vg2 /dev/sdb3
Physical volume "/dev/sdb3" successfully created
Volume group "vg2" successfully created
`View VGs`
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg1 2 0 0 wz--n- 2.02g 2.02g
vg2 1 0 0 wz--n- 1.00g 1.00g
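Since vg2 was created with -s 16M, the extent size can be confirmed with vgdisplay (a minimal check, not shown in the original session):
vgdisplay vg2 | grep "PE Size"   # should report a 16.00 MiB physical extent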
`Create LVs: -L gives a size directly, -l a number of extents (10 × 16 MiB PEs = 160 MiB in vg2)`
[root@localhost ~]# lvcreate -n lv1 -L 100M vg1
Logical volume "lv1" created.
[root@localhost ~]# lvcreate -l 10 -n lv2 vg2
Logical volume "lv2" created.
`View LVs`
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 -wi-a----- 100.00m
lv2 vg2 -wi-a----- 160.00m
`View detailed LV information`
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vg2/lv2
LV Name lv2
VG Name vg2
LV UUID RXCVK6-lJXP-IEcI-axCo-NW9n-daex-OIV21o
LV Write Access read/write
LV Creation host, time localhost, 2019-04-01 02:34:31 +0800
LV Status available
# open 0
LV Size 160.00 MiB
Current LE 10
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/vg1/lv1
LV Name lv1
VG Name vg1
LV UUID ffqOov-Hq3g-4GY6-MSyO-2Pig-F6cO-2wTJAL
LV Write Access read/write
LV Creation host, time localhost, 2019-04-01 02:33:59 +0800
LV Status available
# open 0
LV Size 100.00 MiB
Current LE 25
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
`Create a filesystem`
[root@localhost ~]# mkfs.ext4 /dev/vg1/lv1
`Mount and use`
[root@localhost ~]# mount /dev/vg1/lv1 /mnt
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 990M 8.3G 11% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 30M 150M 17% /boot
/dev/sr0 3.7G 3.7G 0 100% /media
/dev/mapper/vg1-lv1 93M 1.6M 87M 2% /mnt
`Unmount`
[root@localhost ~]# umount /mnt
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 990M 8.3G 11% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 30M 150M 17% /boot
/dev/sr0 3.7G 3.7G 0 100% /media
`Remove the LV`
[root@localhost ~]# lvremove /dev/vg1/lv1
Do you really want to remove active logical volume lv1? [y/n]: y
Logical volume "lv1" successfully removed
`Remove the VG`
[root@localhost ~]# vgremove vg1
Volume group "vg1" successfully removed
`Turn the PVs back into ordinary partitions (pvremove wipes the LVM labels)`
[root@localhost ~]# pvremove /dev/sdb1 /dev/sdb2
Labels on physical volume "/dev/sdb1" successfully wiped
Labels on physical volume "/dev/sdb2" successfully wiped
[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
Physical volume "/dev/sdb3" successfully created
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 vg lvm2 a--u 1.01g 1.01g
/dev/sdb2 vg lvm2 a--u 1.01g 1.01g
/dev/sdb3 lvm2 ---- 1.01g 1.01g
[root@localhost ~]# vgcreate vg /dev/sdb1 /dev/sdb2
Volume group "vg" successfully created
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg 2 0 0 wz--n- 2.02g 2.02g
`Extend the VG with a third PV`
[root@localhost ~]# vgextend vg /dev/sdb3
Volume group "vg" successfully extended
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg 3 0 0 wz--n- 3.02g 3.02g
`Re-create the PVs and VG to demonstrate vgreduce (the intervening teardown is not shown)`
[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created
Physical volume "/dev/sdb3" successfully created
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 vg lvm2 a--u 1.01g 1.01g
/dev/sdb2 vg lvm2 a--u 1.01g 1.01g
/dev/sdb3 lvm2 ---- 1.01g 1.01g
[root@localhost ~]# vgcreate vg /dev/sdb1 /dev/sdb2 /dev/sdb3
Volume group "vg" successfully created
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg 3 0 0 wz--n- 3.02g 3.02g
`Remove a PV from the VG`
[root@localhost ~]# vgreduce vg /dev/sdb3
Removed "/dev/sdb3" from volume group "vg"
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg 2 0 0 wz--n- 2.02g 2.02g
`Create an LV, then grow it with lvextend`
[root@localhost ~]# lvcreate -L 100M -n lv vg
Logical volume "lv" created.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv vg -wi-a----- 100.00m
[root@localhost ~]# lvextend -L +60M /dev/vg/lv
Size of logical volume vg/lv changed from 100.00 MiB (25 extents) to 160.00 MiB (40 extents).
Logical volume lv successfully resized.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv vg -wi-a----- 160.00m
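Note that lvextend grows only the logical volume; an ext4 filesystem inside it keeps its old size until it is resized as well. A minimal sketch (not in the original session; resize2fs can grow ext4 even while mounted):
resize2fs /dev/vg/lv             # grow the filesystem to fill the enlarged LV
lvextend -r -L +60M /dev/vg/lv   # alternatively, -r (--resizefs) resizes LV and filesystem in one step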
`Shrink the LV: unmount, check and shrink the filesystem first, then reduce the LV`
[root@localhost ~]# umount /mnt
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 990M 8.3G 11% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 30M 150M 17% /boot
/dev/sr0 3.7G 3.7G 0 100% /media
[root@localhost ~]# e2fsck -f /dev/vg/lv
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg/lv: 11/40960 files (0.0% non-contiguous), 10819/163840 blocks
[root@localhost ~]# resize2fs /dev/vg/lv 60M
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg/lv to 61440 (1k) blocks.
The filesystem on /dev/vg/lv is now 61440 blocks long.
[root@localhost ~]# lvreduce -L -60M /dev/vg/lv
WARNING: Reducing active logical volume to 100.00 MiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg/lv? [y/n]: y
Size of logical volume vg/lv changed from 160.00 MiB (40 extents) to 100.00 MiB (25 extents).
Logical volume lv successfully resized.
[root@localhost ~]# mount /dev/vg/lv /mnt
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 990M 8.3G 11% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 30M 150M 17% /boot
/dev/sr0 3.7G 3.7G 0 100% /media
/dev/mapper/vg-lv 54M 1.3M 50M 3% /mnt
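On LVM versions that support --resizefs, the whole shrink sequence above can be driven by one command; a minimal sketch:
lvreduce -r -L 100M /dev/vg/lv   # runs the filesystem check, shrinks it, then reduces the LV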
`First, write some data into the prepared LV`
[root@nebulalinux01 mylv1]# echo "hello" > mysnop_file
[root@nebulalinux01 mylv1]# cat mysnop_file
hello
`Create a snapshot`
[root@nebulalinux01 mylv1]# lvcreate -L 20M -s -n mysnop /dev/myvg/mylv1
Logical volume "mysnop" created.
`Writing to the origin volume increases the snapshot's usage`
[root@nebulalinux01 mylv1]# lvdisplay | grep %
Allocated to snapshot 0.06%
[root@nebulalinux01 mylv1]# dd if=/dev/zero of=/mnt/mylv1/file bs=1M count=5
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0135244 s, 388 MB/s
[root@nebulalinux01 mylv1]# lvdisplay | grep %
Allocated to snapshot 25.27%
[root@nebulalinux01 mylv1]# echo "hello" > mysnop_file2
[root@nebulalinux01 mylv1]# ls
file lost+found mysnop_file mysnop_file2
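Before merging, the snapshot can also be mounted read-only to inspect or copy out individual files; a minimal sketch (the /mnt/snap mount point is illustrative, not from the session):
mkdir -p /mnt/snap
mount -o ro /dev/myvg/mysnop /mnt/snap   # the snapshot presents the origin as it was at creation time
ls /mnt/snap                             # files appear as they were when the snapshot was taken
umount /mnt/snap                         # unmount before merging the snapshot back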
`Restore the original data (unmount the origin first; merging into a mounted origin is deferred until its next activation)`
[root@nebulalinux01 mnt]# lvconvert --merge /dev/myvg/mysnop
Merging of volume mysnop started.
mylv1: Merged: 84.9%
mylv1: Merged: 100.0%
Merge of snapshot into logical volume mylv1 has finished.
Logical volume "mysnop" successfully removed
[root@nebulalinux01 mnt]# mount /dev/myvg/mylv1 /mnt/mylv1/
[root@nebulalinux01 mnt]# cd mylv1/
[root@nebulalinux01 mylv1]# ls
file lost+found mysnop_file