Linux software RAID and LVM

 

RAID
Hardware RAID:
RAID card
Disk array
Software RAID
 
 
 
1. Check whether your hardware RAID device is supported by the system
[root@qiuri ~]# dmraid -l
asr      : Adaptec HostRAID ASR (0,1,10)
ddf1     : SNIA DDF1 (0,1,4,5,linear)
hpt37x : Highpoint HPT37X (S,0,1,10,01)
hpt45x : Highpoint HPT45X (S,0,1,10)
isw      : Intel Software RAID (0,1)
jmicron : JMicron ATARAID (S,0,1)
lsi      : LSI Logic MegaRAID (0,1,10)
nvidia : NVidia RAID (S,0,1,10,5)
pdc      : Promise FastTrack (S,0,1,10)
sil      : Silicon Image(tm) Medley(tm) (0,1,10)
via      : VIA Software RAID (S,0,1,10)
dos      : DOS partitions on SW RAIDs
 
nvidia: format code
NVidia RAID: format name
(S,0,1,10,5): supported RAID levels
2. Configure the hardware RAID device
Most controllers are configured through the BIOS setup utility.
3. Activate the RAID device:
 
[root@qiuri ~]# dmraid -a y
Verify it is active:
[root@qiuri ~]# ls /dev/mapper/
control sil******
 
View the RAID devices:
[root@qiuri ~]# dmraid -r
 
 
1. RAID name
2. Device file name
3. RAID level
4. Status (ok)
5. Sector count
 
View the RAID configuration:
[root@qiuri ~]# dmraid -s
 
Deactivate the RAID:
[root@qiuri ~]# dmraid -a n
 
 
Configuring software RAID
1. Prepare the component devices
1) Disks or partitions; set the partition type to "fd" (Linux raid autodetect)
2) rpm -qa | grep mdadm    (verify that mdadm is installed)
 
mdadm [mode] <raiddevice> [options] <component-devices>
 
2. Create the software RAID
[root@qiuri ~]# mdadm --create /dev/md0 --level raid1 --raid-devices 2 /dev/hdb1 /dev/hdb2
mdadm: array /dev/md0 started.
[root@qiuri ~]#
Verify: check the detailed software RAID status:
[root@qiuri ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb2[1] hdb1[0]
      1953664 blocks [2/2] [UU]
 
unused devices: <none>
[root@qiuri ~]#
 
3. Set up the RAID configuration file /etc/mdadm.conf
[root@qiuri ~]# vi /etc/mdadm.conf
DEVICE /dev/hdb1 /dev/hdb2
ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdb2
 
4. Create the filesystem
[root@qiuri ~]# mkfs.ext3 /dev/md0
 
 
5. Mount the filesystem
 
[root@qiuri ~]# mkdir /mnt/raid
[root@qiuri ~]# mount /dev/md0 /mnt/raid/
[root@qiuri ~]# df -h
Filesystem             Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G 4.4G   23G 17% /
/dev/hda1               99M   12M   83M 13% /boot
tmpfs                  163M     0 163M   0% /dev/shm
df: `/media/RHEL_5.2 i386 DVD': No such file or directory
/dev/hdc               2.9G 2.9G     0 100% /media
/dev/md0               1.9G   35M 1.8G   2% /mnt/raid
 
Managing the software RAID array

[root@qiuri ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
 Creation Time : Thu Aug 20 03:39:18 2009
     Raid Level : raid1    # RAID level
     Array Size : 1953664 (1908.20 MiB 2000.55 MB)
 Used Dev Size : 1953664 (1908.20 MiB 2000.55 MB)
   Raid Devices : 2      # number of member disks
 Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Thu Aug 20 03:45:50 2009
          State : clean   # current state of md0
 Active Devices : 2       # active members
Working Devices : 2
 Failed Devices : 0       # failed members
  Spare Devices : 0
 
           UUID : d2920143:d29ffec1:ad2fcd18:813f7a5e
         Events : 0.2
 
    Number   Major   Minor   RaidDevice State   # per-device details
       0       3       65        0      active sync   /dev/hdb1
       1       3       66        1      active sync   /dev/hdb2
 
 
Simulating a failure
[root@qiuri ~]# mdadm /dev/md0 --set-faulty /dev/hdb2
mdadm: set /dev/hdb2 faulty in /dev/md0
 
Verify:
[root@qiuri ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb2[2](F) hdb1[0]
      1953664 blocks [2/1] [U_]    # one U is missing: a disk has failed
 
unused devices: <none>
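The [2/1] [U_] status shown above can also be checked programmatically. A small sketch (assuming the /proc/mdstat layout shown in these notes; `check_degraded` is a name invented here) that flags any array with a failed or missing member:

```shell
# Flag md arrays whose status bitmap ([UU], [U_], ...) contains "_",
# i.e. a failed or missing member. Pass the path to an mdstat-style file.
check_degraded() {
  awk '/^md/ { name = $1 }
       /blocks/ {
         if (match($0, /\[[U_]+\]/)) {
           s = substr($0, RSTART + 1, RLENGTH - 2)   # e.g. "U_"
           if (s ~ /_/) print name " degraded: " s
         }
       }' "$1"
}
```

Run it as `check_degraded /proc/mdstat`; a healthy array prints nothing.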
 
[root@qiuri ~]# mdadm --detail /dev/md0 |tail -5
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       0        0        1      removed
 
       2       3       66        -      faulty spare   /dev/hdb2
 
Remove the failed disk from the array (hot-remove):
[root@qiuri ~]# mdadm /dev/md0 --remove /dev/hdb2
mdadm: hot removed /dev/hdb2
Verify the removal:
[root@qiuri ~]# mdadm --detail /dev/md0 |tail -5
         Events : 0.6
 
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       0        0        1      removed
Add the replacement disk to the RAID (hot-add):
[root@qiuri ~]# mdadm /dev/md0 --add /dev/hdb2
mdadm: re-added /dev/hdb2
Verify:
[root@qiuri ~]# mdadm --detail /dev/md0 |tail -5
         Events : 0.6
 
    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1       3       66        1      active sync   /dev/hdb2
[root@qiuri ~]#
 
Starting and stopping the md array
To stop the RAID, first check whether it is mounted; if so, unmount it before stopping:
[root@qiuri ~]# umount /mnt/raid/
[root@qiuri ~]# mdadm --stop /dev/md0   
mdadm: stopped /dev/md0
[root@qiuri ~]#
Verify:
[root@qiuri ~]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
Start the RAID. Note that /etc/mdadm.conf must already be configured:
[root@qiuri ~]# mdadm --assemble --scan /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
[root@qiuri ~]#
 
Monitoring md devices

1) Configure a notification mail address
2) Start the mdmonitor service
[root@qiuri ~]# service mdmonitor start
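mdmonitor takes its notification settings from /etc/mdadm.conf. A minimal sketch of the file (the mail address is a placeholder; adjust the DEVICE/ARRAY lines to your own setup):

```
# /etc/mdadm.conf -- minimal monitoring setup (example values)
MAILADDR root@localhost
DEVICE /dev/hdb1 /dev/hdb2
ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdb2
```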
 
 
 
Automatic mounting:
Add to /etc/fstab:
/dev/md0     /mnt/raid     ext3         defaults 0 0
 
To have the RAID assemble at boot, create the RAID configuration file. Its default name is mdadm.conf; the file does not exist by default, so it must be created by hand. It lets the system assemble the software RAID automatically at startup and also simplifies day-to-day management.

# mdadm --detail --scan > /etc/mdadm.conf
(note found online)
 
 
 
Graphical tools:
 
 
 
LVM (Logical Volume Management)

Physical volume (PV): a physical device (partition or whole disk) initialized for LVM
Volume group (VG): a pool of one or more physical volumes, referred to by name
Logical volume (LV): a slice carved out of a volume group, used like a partition
 
 
 
Preparation:
Partition the disks.

Create the physical volumes:
 
 
[root@qiuri ~]# pvcreate /dev/hdd1 /dev/hdd2 /dev/hdd3
 Physical volume "/dev/hdd1" successfully created
 Physical volume "/dev/hdd2" successfully created
 Physical volume "/dev/hdd3" successfully created
 
 
[root@qiuri ~]# pvscan
 /dev/cdrom: open failed: Read-only file system
 Attempt to close device '/dev/cdrom' which is not open.
 PV /dev/hdd1                    lvm2 [1.86 GB]
 PV /dev/hdd2                    lvm2 [1.86 GB]
 PV /dev/hdd3                    lvm2 [1.86 GB]
 Total: 4 [35.46 GB] / in use: 1 [29.88 GB] / in no VG: 3 [5.59 GB]
 
 
Create the volume group:
[root@qiuri ~]# vgcreate vg0 /dev/hdd1 /dev/hdd2 /dev/hdd3
 Volume group "vg0" successfully created
[root@qiuri ~]#
 
[root@qiuri ~]# vgscan
 Reading all physical volumes. This may take a while...
 /dev/cdrom: open failed: Read-only file system
 Attempt to close device '/dev/cdrom' which is not open.
 Found volume group "vg0" using metadata type lvm2
 
Create the logical volume:
[root@qiuri ~]# lvcreate -n lv0 -L 1000M vg0
 Logical volume "lv0" created
[root@qiuri ~]#
Create a filesystem on it:
[root@qiuri ~]# mkfs.ext3 /dev/vg0/lv0
 
[root@qiuri ~]# mkdir /mnt/lv0
[root@qiuri ~]# mount /dev/vg0/lv0 /mnt/lv0/
 
 
Inspect the results:
 
[root@qiuri ~]# pvdisplay /dev/hdd1
 --- Physical volume ---
 PV Name                /dev/hdd1
 VG Name                vg0
 PV Size                1.86 GB / not usable 3.96 MB
 Allocatable            yes
 PE Size (KByte)        4096 
 Total PE               476
 Free PE                226
 Allocated PE           250
 PV UUID                Wkcb0g-VHaf-D372-qdj1-12x1-qxva-OvyXCp
 
[root@qiuri ~]# vgdisplay
 --- Volume group ---
 VG Name                vg0
 System ID
 Format                 lvm2
 Metadata Areas         3
 Metadata Sequence No 2
 VG Access              read/write
 VG Status              resizable
 MAX LV                 0
 Cur LV                 1
 Open LV                1
 Max PV                 0
 Cur PV                 3
 Act PV                 3
 VG Size                5.58 GB
 PE Size                4.00 MB
 Total PE               1428
 Alloc PE / Size        250 / 1000.00 MB
 Free PE / Size        1178 / 4.60 GB
 VG UUID                Um5lbv-RknO-OPwy-mYyl-2FGn-dBK2-v0zdCJ
[root@qiuri ~]#
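The vgdisplay numbers above are self-consistent: every size is just an extent count times the 4 MB PE size. A quick shell-arithmetic check (values copied from the output above):

```shell
pe_mb=4                         # PE Size from vgdisplay
echo $((1428 * pe_mb))          # Total PE -> 5712 MB, i.e. 5.58 GB
echo $((250  * pe_mb))          # Alloc PE -> 1000 MB, matching lvcreate -L 1000M
echo $((1178 * pe_mb))          # Free PE  -> 4712 MB, i.e. 4.60 GB
```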
 
[root@qiuri ~]# lvdisplay
 --- Logical volume ---
 LV Name                 /dev/vg0/lv0
 VG Name                 vg0
 LV UUID                 J2XIlI-Nag2-wwG0-l2dq-wNOZ-AgIK-9pNQ6O
 LV Write Access         read/write
 LV Status               available
 # open                  1
 LV Size                 1000.00 MB
 Current LE              250
 Segments                1
 Allocation              inherit
 Read ahead sectors      auto
 - currently set to      256
 Block device            253:2
 
 --- Logical volume ---
 LV Name                 /dev/VolGroup00/LogVol00
 VG Name                 VolGroup00
 LV UUID                 EIHgaM-BY5z-ydzD-G01u-Tu3b-Dxre-ayZsp5
 LV Write Access         read/write
 LV Status               available
 # open                  1
 LV Size                 28.72 GB
 Current LE              919
 Segments                1
 Allocation              inherit
 Read ahead sectors      auto
 - currently set to      256
 Block device            253:0
 
 --- Logical volume ---
 LV Name                 /dev/VolGroup00/LogVol01
 VG Name                 VolGroup00
 LV UUID                 9AsYUt-qP3d-nbYp-KhZX-oWYb-Kitm-KqkEje
 LV Write Access         read/write
 LV Status               available
 # open                  1
 LV Size                 1.16 GB
 Current LE              37
 Segments                1
 Allocation              inherit
 Read ahead sectors      auto
 - currently set to      256
 Block device            253:1
 
[root@qiuri ~]#
 
 
If you see an error like this (see the disk-partitioning chapter):
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
 
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
 
Fix:
[root@qiuri ~]# partprobe /dev/hdd
 
 
Check the current VG capacity:
[root@qiuri ~]# vgdisplay vg0 |grep 'VG Size'
 VG Size                5.58 GB
Check whether /dev/hdd4 is already a PV:
[root@qiuri ~]# pvdisplay /dev/hdd4
 No physical volume label read from /dev/hdd4
 Failed to read physical volume "/dev/hdd4"
Create /dev/hdd4 as a PV:
[root@qiuri ~]# pvcreate /dev/hdd4
 Physical volume "/dev/hdd4" successfully created
Check again:
[root@qiuri ~]# pvdisplay /dev/hdd4
 "/dev/hdd4" is a new physical volume of "2.41 GB"
 --- NEW Physical volume ---
 PV Name                /dev/hdd4
 VG Name
 PV Size                2.41 GB
 Allocatable            NO
 PE Size (KByte)        0
 Total PE               0
 Free PE                0
 Allocated PE           0
 PV UUID                LsnML0-ny5i-rEGK-7xq3-LU2Y-Yr0i-sjrzAG
 
[root@qiuri ~]#
 
 
Add the /dev/hdd4 PV to the existing volume group vg0:
[root@qiuri ~]# vgextend vg0 /dev/hdd4
 /dev/cdrom: open failed: Read-only file system
 /dev/cdrom: open failed: Read-only file system
 Attempt to close device '/dev/cdrom' which is not open.
 Volume group "vg0" successfully extended
[root@qiuri ~]#
Verify the extension:
[root@qiuri ~]# vgdisplay vg0 |grep 'VG Size'
 VG Size                7.98 GB
Remove the /dev/hdd4 PV from vg0:
[root@qiuri ~]# vgreduce vg0 /dev/hdd4
 Removed "/dev/hdd4" from volume group "vg0"
Verify the removal:
[root@qiuri ~]# vgdisplay vg0 |grep 'VG Size'
 VG Size                5.58 GB
[root@qiuri ~]#
 
 
 
Resizing logical volumes:
[root@qiuri ~]# df -h
Filesystem             Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G 4.4G   23G 17% /
/dev/hda1               99M   12M   83M 13% /boot
tmpfs                  163M     0 163M   0% /dev/shm
/dev/mapper/vg0-lv0    985M   18M 918M   2% /mnt/lv0
 
Extend the logical volume, here by 500 MB:
[root@qiuri ~]# lvextend -L +500M /dev/vg0/lv0
 Extending logical volume lv0 to 1.46 GB
 Logical volume lv0 successfully resized
Check the size: df still shows the old size, because only the LV grew; the filesystem has not been resized yet:
[root@qiuri ~]# df -h
Filesystem             Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G 4.4G   23G 17% /
/dev/hda1               99M   12M   83M 13% /boot
tmpfs                  163M     0 163M   0% /dev/shm
/dev/mapper/vg0-lv0   985M   18M 918M   2% /mnt/lv0
[root@qiuri ~]#
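Why df is unchanged: lvextend only grew the block device. The LV is now 1000 MB + 500 MB = 1500 MB, which is the "1.46 GB" lvextend reported (1500/1024); the ext3 filesystem inside it keeps its original size until resize2fs is run:

```shell
# The LV grew; the filesystem did not. Checking lvextend's "1.46 GB":
awk 'BEGIN { printf "%.2f\n", (1000 + 500) / 1024 }'   # MB -> GB
```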
 
 
Shrink the LV by 200 MB (demo only; with real data, shrink the filesystem before the LV):
[root@qiuri ~]# lvreduce -L -200M /dev/vg0/lv0
 WARNING: Reducing active and open logical volume to 1.27 GB
 THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
 Reducing logical volume lv0 to 1.27 GB
 Logical volume lv0 successfully resized
[root@qiuri ~]# lvdisplay
 --- Logical volume ---
 LV Name                 /dev/vg0/lv0
 VG Name                 vg0
 LV UUID                 J2XIlI-Nag2-wwG0-l2dq-wNOZ-AgIK-9pNQ6O
 LV Write Access         read/write
 LV Status               available
 # open                  1
 LV Size                 1.27 GB
 
 
 
 
Resize the filesystem to match the LV:
[root@qiuri ~]# umount /mnt/lv0
[root@qiuri ~]# resize2fs /dev/vg0/lv0
resize2fs 1.39 (29-May-2006)
Please run 'e2fsck -f /dev/vg0/lv0' first.
 
[root@qiuri ~]# e2fsck -f /dev/vg0/lv0               # force a filesystem check first
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg0/lv0: 11/128000 files (9.1% non-contiguous), 8444/256000 blocks
[root@qiuri ~]# resize2fs /dev/vg0/lv0
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vg0/lv0 to 384000 (4k) blocks.
The filesystem on /dev/vg0/lv0 is now 384000 blocks long.
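The resize2fs figure can be cross-checked: 384000 blocks of 4 KB each is 1500 MB, matching the 1.5G that df reports after remounting:

```shell
blocks=384000; block_kb=4
echo $(( blocks * block_kb / 1024 ))    # filesystem size in MB
```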
 
[root@qiuri ~]# mount /dev/vg0/lv0 /mnt/lv0/
 
[root@qiuri ~]# df -h
Filesystem             Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G 4.4G   23G 17% /
/dev/hda1               99M   12M   83M 13% /boot
tmpfs                  163M     0 163M   0% /dev/shm
/dev/mapper/vg0-lv0    1.5G   18M 1.4G   2% /mnt/lv0
[root@qiuri ~]#
 
Shrinking a logical volume:
Note: if the logical volume holds data, back it up first.
 
[root@qiuri ~]# lvreduce -L -200M /dev/vg0/lv0
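Order matters when shrinking: if lvreduce cuts the LV below the end of the filesystem, data is lost. A safer sequence (a sketch only; the 800M target and the paths are example values) shrinks the filesystem first, then the LV:

```shell
# Safer shrink order (sketch; 800M and the paths are example values):
#   umount /mnt/lv0
#   e2fsck -f /dev/vg0/lv0             # resize2fs insists on a clean check
#   resize2fs /dev/vg0/lv0 800M        # 1) shrink the filesystem first
#   lvreduce -L 800M /dev/vg0/lv0      # 2) then shrink the LV to match
#   mount /dev/vg0/lv0 /mnt/lv0
# Sanity check: the filesystem target expressed in 4k blocks must fit in the LV:
target_mb=800
echo $(( target_mb * 1024 / 4 ))       # target size in 4k blocks
```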
 
 
 
Tearing down the LVM stack (LV, then VG, then PVs):
[root@qiuri ~]# lvremove /dev/vg0/lv0
 Logical volume "lv0" successfully removed
[root@qiuri ~]# vgremove vg0
 Volume group "vg0" successfully removed
[root@qiuri ~]# pvremove /dev/hdd1 /dev/hdd2 /dev/hdd3 /dev/hdd4
 /dev/cdrom: open failed: Read-only file system
 Attempt to close device '/dev/cdrom' which is not open.
 Labels on physical volume "/dev/hdd1" successfully wiped
 Labels on physical volume "/dev/hdd2" successfully wiped
 Labels on physical volume "/dev/hdd3" successfully wiped
 Labels on physical volume "/dev/hdd4" successfully wiped
[root@qiuri ~]# pvscan
 /dev/cdrom: open failed: Read-only file system
 Attempt to close device '/dev/cdrom' which is not open.
 PV /dev/hda2    VG VolGroup00   lvm2 [29.88 GB / 0    free]
 Total: 1 [29.88 GB] / in use: 1 [29.88 GB] / in no VG: 0 [0    ]
[root@qiuri ~]#
 
 
Lab exercises:
RAID 5

LVM
