RAID and LVM Explained

RAID

Disk Arrays

  • RAID (Redundant Array of Independent Disks) is a disk array: it combines many inexpensive disks and uses striping, mirroring and parity across them to improve the performance and, depending on the level, the reliability of the disk subsystem as a whole (a worked capacity example follows the list).
  1. RAID0: simple striping with no data protection. All disks are striped into one large storage space and data is spread across them. Both read and write performance improve, roughly n times that of a single disk, and utilization is 100%, but there is no redundancy at all: a single disk failure loses data. At least two disks are needed to create RAID0.
  2. RAID1: mirrored storage. Every write goes identically to the working disk and the mirror disk, so utilization is only 50%. Read performance improves while write performance drops slightly, but the data is well protected: if the working disk fails, the system automatically reads from the mirror. At least two disks are needed for RAID1.
  3. RAID5: the best price/performance level, combining the strengths of RAID0 and RAID1. RAID5 stores both data and parity, keeping a stripe's data blocks and the corresponding parity on different disks. When one disk fails, the system rebuilds the lost data from the remaining data blocks and the parity in the same stripe. Both read and write performance improve; utilization is (n-1)/n of the total capacity; at least three disks are required.
  4. RAID6: double parity. The array survives two disks failing at the same time and the data stays safe, but the cost is higher: at least four disks are required, utilization is (n-2)/n, and at most two failed disks are tolerated.
  5. RAID01: stripe first, then mirror. Data is written to both striped sets at the same time. At least four disks are needed; read and write performance both improve; utilization is 50%.
  6. RAID10: mirror first, then stripe. Data is written to both halves of each mirror at the same time. At least four disks are needed; read and write performance both improve; utilization is 50%.
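
For example, with four 1 TB disks: RAID0 yields 4 TB of usable space (100%), RAID5 yields (4-1) x 1 TB = 3 TB, RAID6 yields (4-2) x 1 TB = 2 TB, and RAID10 or RAID01 yield 2 TB (50%); a two-disk RAID1 built from the same drives would yield 1 TB.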

Software RAID

mdadm --create: create a new RAID array;

mdadm --detail: show detailed information about a RAID array;

mdadm --stop: stop the specified RAID device;

mdadm --level: set the RAID level;

mdadm --raid-devices: number of active disks;

mdadm --spare-devices: number of spare disks.

`Use partitions of a single disk to simulate separate disks`
[root@localhost ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001e99f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26        1332    10485760   83  Linux
/dev/sda3            1332        1593     2097152   82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6af65ca1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         132     1060258+  83  Linux
/dev/sdb2             133         264     1060290   83  Linux
/dev/sdb3             265         396     1060290   83  Linux
/dev/sdb4             397         528     1060290   83  Linux
`Create the RAID array, specifying the level, the member partitions, the number of active devices and the number of spare devices`
[root@localhost ~]# mdadm --create /dev/md0  --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb[1-4]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
`View detailed information about the array just created`
[root@localhost ~]# mdadm --detail  /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 31 22:02:31 2019
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Mar 31 22:02:46 2019
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:0  (local to host localhost)
           UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3

       3       8       20        -      spare   /dev/sdb4
`Edit the configuration file`
[root@localhost ~]# vi /etc/mdadm.conf
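The file's contents are not shown in the original session. A common way to populate it (an assumption, not taken from the transcript) is to append the array definition that mdadm itself can generate, so the array is assembled automatically at boot:

mdadm --detail --scan >> /etc/mdadm.conf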
`Format the array and create a filesystem`
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
132464 inodes, 529408 blocks
26470 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
`Mount it`
[root@localhost ~]# mount /dev/md0 /mnt
`Check the mount`
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.8G  1.1G  8.2G  12% /
tmpfs           491M     0  491M   0% /dev/shm
/dev/sda1       190M   30M  150M  17% /boot
/dev/sr0        3.7G  3.7G     0 100% /media
/dev/md0        2.0G  3.1M  1.9G   1% /mnt
[root@localhost ~]# cd /mnt
[root@localhost mnt]# ls -l
total 16
drwx------. 2 root root 16384 Mar 31 22:04 lost+found
[root@localhost mnt]# touch file
[root@localhost mnt]# echo "hello" > file
[root@localhost mnt]# cat file
hello
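To make the mount survive a reboot, an entry could be added to /etc/fstab (a sketch, not part of the original session; using the filesystem UUID reported by blkid is safer than the /dev/md0 name, which can change between boots):

blkid /dev/md0                                           # look up the filesystem UUID
echo "UUID=<uuid-from-blkid>  /mnt  ext4  defaults  0 0" >> /etc/fstab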
`Simulate a disk failure`
[root@localhost mnt]# mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@localhost mnt]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdb3[4] sdb4[3] sdb2[1] sdb1[0](F)
      2117632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
unused devices: <none>
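Once a member is marked faulty, mdadm automatically pulls in the spare (/dev/sdb4 here) and rebuilds onto it. On an array of any real size the rebuild takes a while; its progress can be followed with, for example:

watch -n 1 cat /proc/mdstat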
`View the RAID information after the failure`
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 31 22:02:31 2019
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Mar 31 22:06:46 2019
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:0  (local to host localhost)
           UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
         Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       20        0      active sync   /dev/sdb4
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3

       0       8       17        -      faulty   /dev/sdb1
`Remove the failed disk`
[root@localhost mnt]# mdadm /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 31 22:02:31 2019
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sun Mar 31 22:09:23 2019
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:0  (local to host localhost)
           UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
         Events : 38

    Number   Major   Minor   RaidDevice State
       3       8       20        0      active sync   /dev/sdb4
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3
`Add the disk back`
[root@localhost mnt]# mdadm /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1
[root@localhost mnt]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar 31 22:02:31 2019
     Raid Level : raid5
     Array Size : 2117632 (2.02 GiB 2.17 GB)
  Used Dev Size : 1058816 (1034.00 MiB 1084.23 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Mar 31 22:10:04 2019
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost:0  (local to host localhost)
           UUID : 8b6556f5:0b1b3164:341904dd:98cd6ea1
         Events : 39

    Number   Major   Minor   RaidDevice State
       3       8       20        0      active sync   /dev/sdb4
       1       8       18        1      active sync   /dev/sdb2
       4       8       19        2      active sync   /dev/sdb3

       5       8       17        -      spare   /dev/sdb1
`Even with a failed disk, the data written earlier is still there`
[root@localhost mnt]# ls -l
total 20
-rw-r--r--. 1 root root     6 Mar 31 22:05 file
drwx------. 2 root root 16384 Mar 31 22:04 lost+found
[root@localhost mnt]# cat file
hello
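
If the test array is no longer needed, it can be dismantled afterwards (a sketch, not part of the original session):

umount /mnt
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb[1-4]   # wipe the RAID metadata from the member partitions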

LVM Logical Volumes

  • LVM provides dynamic management of disks: it is a logical layer built on top of hard disks and partitions that makes disk partition management far more flexible.
  • Basic LVM terms (a short arithmetic example follows the list):
  • PE (Physical Extent): the unit a physical volume is divided into, each with a unique number; it is the smallest unit LVM can address, and its size is configurable.
  • PV (Physical Volume): a disk partition, or a device that logically behaves like one; the basic storage building block in LVM.
  • VG (Volume Group): an LVM volume group, made up of one or more physical volumes; one or more "LVM partitions" (logical volumes) can be created on top of it.
  • LV (Logical Volume): an LVM logical volume, on which a filesystem can be created.
  • LE (Logical Extent): the basic unit a logical volume is divided into; within the same volume group, the LE size equals the PE size.
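
As a quick arithmetic check that also matches the lvdisplay output later in this article: with the default 4 MiB PE size, a 100 MiB logical volume occupies 100 / 4 = 25 extents; with a 16 MiB PE size (vgcreate -s 16M), a logical volume of 10 extents is 10 x 16 = 160 MiB.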

PV commands

pvcreate: create a PV;

pvs: view PV summary information;

pvdisplay: view detailed PV information;

pvscan -n: show only physical volumes that do not belong to any volume group;

pvscan -e: show only physical volumes that belong to exported volume groups;

pvscan -s: short listing format;

pvremove: remove a physical volume (wipe its LVM label);

pvmove: move a physical volume's PEs, i.e. migrate its data onto other disks in the same volume group (see the example after this list).
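
For instance, to empty a disk before pulling it out of a volume group, the data can be migrated off it first (a sketch; the VG and device names are placeholders):

pvmove /dev/sdb1          # move all PEs off /dev/sdb1 onto other PVs in the same VG
vgreduce vg1 /dev/sdb1    # then drop it from the volume group
pvremove /dev/sdb1        # and wipe its LVM label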

VG commands

vgcreate: create a VG;

vgs: view VG summary information;

vgdisplay: view detailed VG information;

vgcreate -s: specify the PE size;

vgextend: add a new physical volume to a volume group;

vgreduce: remove a member from a volume group;

vgremove: delete a volume group;

vgrename: rename a volume group;

vgchange: change the working state of a volume group;

vgexport: export a volume group (see the sketch after this list).
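
vgexport is usually paired with vgimport when moving a set of disks to another machine (a sketch with a placeholder VG name):

vgchange -an vg1    # deactivate the volume group
vgexport vg1        # mark it as exported
# physically move the disks to the new host, then:
vgimport vg1
vgchange -ay vg1    # reactivate it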

LV commands

lvcreate: create an LV;

  • lvcreate -n: specify the name of the logical volume;
  • lvcreate -L: specify the size of the logical volume by capacity;
  • lvcreate -l: specify the size of the logical volume by number of PEs;
  • lvcreate -s: create a snapshot.

lvs: view LV summary information;

lvdisplay: view detailed LV information;

lvextend: grow a logical volume (see the note after this list);

lvreduce: shrink a logical volume;

lvremove: delete a logical volume;

lvrename: rename a logical volume;

lvconvert: merge a snapshot back into its origin (restore from a snapshot).
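
Note that lvextend and lvreduce only resize the logical volume itself; the filesystem inside it must be resized separately (as the sections below show). Recent versions of lvextend can do both in one step with -r/--resizefs (a sketch; the VG and LV names are placeholders):

lvextend -r -L +1G /dev/vg1/lv1    # grow the LV and the filesystem on it in one step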

Creating LVM volumes

  1. Create the physical volumes (pv).
  2. Create a volume group (vg), made up of one or more physical volumes.
  3. Create the logical volumes (lv); the size can be given directly or as a number of PEs.
`Create the PVs`
[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
`View the PVs`
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb1       lvm2 ---- 1.01g 1.01g
  /dev/sdb2       lvm2 ---- 1.01g 1.01g
`Create the VGs`
[root@localhost ~]# vgcreate vg1 /dev/sdb1 /dev/sdb2
  Volume group "vg1" successfully created
[root@localhost ~]# vgcreate -s 16M vg2 /dev/sdb3
  Physical volume "/dev/sdb3" successfully created
  Volume group "vg2" successfully created
`View the VGs`
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg1    2   0   0 wz--n- 2.02g 2.02g
  vg2    1   0   0 wz--n- 1.00g 1.00g
`Create the LVs`
[root@localhost ~]# lvcreate -n lv1 -L 100M vg1
  Logical volume "lv1" created.
[root@localhost ~]# lvcreate -l 10 -n lv2 vg2
  Logical volume "lv2" created.
`View the LVs`
[root@localhost ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1  -wi-a----- 100.00m                                                    
  lv2  vg2  -wi-a----- 160.00m         
`View detailed LV information`
[root@localhost ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg2/lv2
  LV Name                lv2
  VG Name                vg2
  LV UUID                RXCVK6-lJXP-IEcI-axCo-NW9n-daex-OIV21o
  LV Write Access        read/write
  LV Creation host, time localhost, 2019-04-01 02:34:31 +0800
  LV Status              available
  # open                 0
  LV Size                160.00 MiB
  Current LE             10
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                ffqOov-Hq3g-4GY6-MSyO-2Pig-F6cO-2wTJAL
  LV Write Access        read/write
  LV Creation host, time localhost, 2019-04-01 02:33:59 +0800
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
`Create a filesystem`
[root@localhost ~]# mkfs.ext4 /dev/vg1/lv1
`Mount and use it`
[root@localhost ~]# mount /dev/vg1/lv1 /mnt
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda2            9.8G  990M  8.3G  11% /
tmpfs                491M     0  491M   0% /dev/shm
/dev/sda1            190M   30M  150M  17% /boot
/dev/sr0             3.7G  3.7G     0 100% /media
/dev/mapper/vg1-lv1   93M  1.6M   87M   2% /mnt

Removing LVM volumes

  1. First unmount the logical volume with umount;
  2. If the mount was written into /etc/fstab, delete the corresponding entry there;
  3. Remove the logical volume with lvremove;
  4. Remove the volume group with vgremove;
  5. Turn the physical volumes back into ordinary partitions with pvremove.
`Unmount`
[root@localhost ~]# umount /mnt
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.8G  990M  8.3G  11% /
tmpfs           491M     0  491M   0% /dev/shm
/dev/sda1       190M   30M  150M  17% /boot
/dev/sr0        3.7G  3.7G     0 100% /media
`Remove the LV`
[root@localhost ~]# lvremove /dev/vg1/lv1
Do you really want to remove active logical volume lv1? [y/n]: y
  Logical volume "lv1" successfully removed
`Remove the VG`
[root@localhost ~]# vgremove vg1
  Volume group "vg1" successfully removed
`Turn the PVs back into ordinary partitions`
[root@localhost ~]# pvremove /dev/sdb1 /dev/sdb2
  Labels on physical volume "/dev/sdb1" successfully wiped
  Labels on physical volume "/dev/sdb2" successfully wiped

Extending a VG

[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdb3" successfully created
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb1  vg   lvm2 a--u 1.01g 1.01g
  /dev/sdb2  vg   lvm2 a--u 1.01g 1.01g
  /dev/sdb3       lvm2 ---- 1.01g 1.01g
[root@localhost ~]# vgcreate vg /dev/sdb1 /dev/sdb2
  Volume group "vg" successfully created
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     2   0   0 wz--n- 2.02g 2.02g
[root@localhost ~]# vgextend vg /dev/sdb3
  Volume group "vg" successfully extended
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     3   0   0 wz--n- 3.02g 3.02g

Shrinking a VG

[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdb3" successfully created
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb1  vg   lvm2 a--u 1.01g 1.01g
  /dev/sdb2  vg   lvm2 a--u 1.01g 1.01g
  /dev/sdb3       lvm2 ---- 1.01g 1.01g
[root@localhost ~]# vgcreate vg /dev/sdb1 /dev/sdb2  /dev/sdb3
  Volume group "vg" successfully created
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     3   0   0 wz--n- 3.02g 3.02g
[root@localhost ~]# vgreduce vg /dev/sdb3
  Removed "/dev/sdb3" from volume group "vg"
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     2   0   0 wz--n- 2.02g 2.02g
[root@localhost ~]# 

Extending an LV

[root@localhost ~]# lvcreate -L 100M -n lv vg
  Logical volume "lv" created.
[root@localhost ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg   -wi-a----- 100.00m                                                    
[root@localhost ~]# lvextend -L +60M /dev/vg/lv
  Size of logical volume vg/lv changed from 100.00 MiB (25 extents) to 160.00 MiB (40 extents).
  Logical volume lv successfully resized.
[root@localhost ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg   -wi-a----- 160.00m         
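
The lvs output shows the logical volume itself is now 160 MiB, but any filesystem already created on it would still be its old size and would need to be grown as well, e.g. for ext4 (a sketch, assuming the LV carries an ext4 filesystem):

resize2fs /dev/vg/lv    # grow the ext4 filesystem to fill the enlarged LV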

Shrinking an LV

  1. First unmount the logical volume;
  2. Force a filesystem check with e2fsck -f;
  3. Shrink the filesystem with resize2fs;
  4. Shrink the logical volume with lvreduce.
[root@localhost ~]# umount /mnt
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.8G  990M  8.3G  11% /
tmpfs           491M     0  491M   0% /dev/shm
/dev/sda1       190M   30M  150M  17% /boot
/dev/sr0        3.7G  3.7G     0 100% /media    
[root@localhost ~]# e2fsck -f /dev/vg/lv
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg/lv: 11/40960 files (0.0% non-contiguous), 10819/163840 blocks
[root@localhost ~]# resize2fs /dev/vg/lv 60M
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg/lv to 61440 (1k) blocks.
The filesystem on /dev/vg/lv is now 61440 blocks long.

[root@localhost ~]# lvreduce -L -60M /dev/vg/lv
  WARNING: Reducing active logical volume to 100.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg/lv? [y/n]: y
  Size of logical volume vg/lv changed from 160.00 MiB (40 extents) to 100.00 MiB (25 extents).
  Logical volume lv successfully resized.
[root@localhost ~]# mount /dev/vg/lv /mnt
[root@localhost ~]# df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/sda2          9.8G  990M  8.3G  11% /
tmpfs              491M     0  491M   0% /dev/shm
/dev/sda1          190M   30M  150M  17% /boot
/dev/sr0           3.7G  3.7G     0 100% /media
/dev/mapper/vg-lv   54M  1.3M   50M   3% /mnt

LVM snapshots

  1. The VG must keep some free space for the snapshot itself; it cannot already be completely allocated.
  2. The snapshot must be in the same VG as the LV being backed up; in other words, the snapshot is stored in the same VG as its origin volume, otherwise creating it will fail.
  3. If the snapshot volume fills up, it automatically becomes invalid, because the snapshot area records the data as it was before each change; the amount of data modified on the origin therefore must not exceed the snapshot size, or the snapshot can no longer be used.
`First write some data into the existing LV`
[root@nebulalinux01 mylv1]# echo "hello" > mysnop_file
[root@nebulalinux01 mylv1]# cat mysnop_file
hello
[root@nebulalinux01 mylv1]#
`Create the snapshot`
[root@nebulalinux01 mylv1]# lvcreate -L 20M -s -n mysnop /dev/myvg/mylv1
Logical volume "mysnop" created.
[root@nebulalinux01 mylv1]#
`Writing to the origin volume increases the snapshot's usage`
[root@nebulalinux01 mylv1]# lvdisplay | grep %
Allocated to snapshot  0.06%
[root@nebulalinux01 mylv1]# dd if=/dev/zero of=/mnt/mylv1/file bs=1M count=5
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0135244 s, 388 MB/s
[root@nebulalinux01 mylv1]# lvdisplay | grep %
Allocated to snapshot  25.27%
[root@nebulalinux01 mylv1]# echo "hello" > mysnop_file2
[root@nebulalinux01 mylv1]# ls
file lost+found mysnop_file mysnop_file2
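Before the merge below, the origin volume has evidently been unmounted (the prompt changes from mylv1 to mnt, and the volume is mounted again afterwards); that step is not shown in the transcript, but would look something like:

umount /mnt/mylv1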
`Restore the original data`
[root@nebulalinux01 mnt]# lvconvert --merge /dev/myvg/mysnop
Merging of volume mysnop started.
mylv1: Merged: 84.9%
mylv1: Merged: 100.0%
Merge of snapshot into logical volume mylv1 has finished.
Logical volume "mysnop" successfully removed
[root@nebulalinux01 mnt]# mount /dev/myvg/mylv1 /mnt/mylv1/
[root@nebulalinux01 mnt]# cd mylv1/
[root@nebulalinux01 mylv1]# ls
file lost+found mysnop_file

Summary

  • LVM manages disks dynamically: it allocates disk space flexibly so that storage is used fully and resources are managed sensibly.
  • RAID combines disks into an array, improving storage performance while also improving data safety.
