raid10+lvm+quota

I. Create the RAID
1 Add three hard disks

2 Create the RAID array
 [root@localhost ~]# mdadm --create --auto=yes /dev/md10 --level=10 --raid-devices=3  /dev/sdb /dev/sdc /dev/sdd
  
       Check the result:
   [root@localhost ~]# mdadm --detail /dev/md10
   mdadm: array /dev/md10 started.
/dev/md10:
        Version : 0.90
  Creation Time : Fri Jan  4 18:26:02 2013
     Raid Level : raid10
     Array Size : 7864224 (7.50 GiB 8.05 GB)
  Used Dev Size : 5242816 (5.00 GiB 5.37 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 10
    Persistence : Superblock is persistent

    Update Time : Fri Jan  4 18:26:02 2013
          State : clean, resyncing
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 64K

 Rebuild Status : 80% complete

           UUID : eaaabc40:c0c3aeb2:743ed475:5a167feb
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
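The numbers in the output above can be cross-checked: with layout near=2, RAID10 keeps two copies of every chunk, so the usable size is (disks × per-disk size) / 2 even with an odd number of disks. A quick sanity check, using the 1 KiB units mdadm reports:

```shell
# RAID10 near=2 stores two copies of each chunk across the member disks,
# so usable capacity is (disks * per-disk size) / 2.
used_dev_kib=5242816    # "Used Dev Size" from mdadm --detail, in KiB
disks=3
echo $((used_dev_kib * disks / 2))   # matches the "Array Size : 7864224" line
```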

II. Create the LVM
1 Create the PV
 [root@localhost ~]# pvcreate /dev/md10
         Check the result:
   Physical volume "/dev/md10" successfully created
2 Create the VG
 [root@localhost ~]# vgcreate wanghanvg /dev/md10
        Check the result:
  /dev/hdc: open failed: Read-only file system
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "wanghanvg" successfully created
3 Create the LV
 [root@localhost ~]# lvcreate -l 100%FREE -n wanghanlv wanghanvg
        Check the result:
   Logical volume "wanghanlv" created
4 Format the LV
 mkfs.ext3 /dev/wanghanvg/wanghanlv
5 Mount the logical volume at boot
 vim /etc/fstab
 /dev/wanghanvg/wanghanlv              /root/wanghan                  ext3                  defaults,usrquota,grpquota 0 0
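The fstab entry above has the standard six fields; a quick shell check of the options field (the string below is simply the line copied from above):

```shell
# device  mount point  fstype  options  dump  fsck-pass
line='/dev/wanghanvg/wanghanlv /root/wanghan ext3 defaults,usrquota,grpquota 0 0'
set -- $line                 # split the entry into its fields
echo "fields: $#"            # 6
echo "options: $4"           # defaults,usrquota,grpquota
case "$4" in *usrquota*) echo "user quota will be enabled at mount time" ;; esac
```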
6 Mount the LV
   mkdir /root/wanghan
 mount /dev/wanghanvg/wanghanlv /root/wanghan
        Check the result:
[root@localhost ~]# cd wanghan
[root@localhost wanghan]# ls
lost+found

III. Set up disk quotas
1 Create the users
 vim useradd.sh  # create a new file holding a user-creation script
 #!/bin/bash
   groupadd wanghan
   for username in wanghan1 wanghan2 wanghan3 wanghan4 wanghan5
   do
         useradd -g wanghan $username
         echo "111111" | passwd --stdin $username
   done

 sh useradd.sh  # run the script
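As a side note on the loop above: the five hard-coded names can also be generated with seq, which scales better if the account count changes (a sketch, not part of the original script):

```shell
# Expands to the same five account names the script creates.
for i in $(seq 1 5); do
    echo "wanghan$i"
done
```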
2 Create symbolic links
   [root@localhost ~]# ln -s /home/* /root/wanghan
   (Note: these links point back into /home, so files written through them land
   on the /home filesystem; for the quota test to bite, the users' directories
   need to live on the mounted LV itself.)
3 Create the quota files
   [root@localhost wanghan]# mount -o remount,usrquota,grpquota /root/wanghan
   [root@localhost wanghan]# quotacheck -augv
        Check the result:
 [root@localhost wanghan]# ls
aquota.group  lost+found  wanghan2  wanghan4
aquota.user   wanghan1    wanghan3  wanghan5

4 Configure the user quota
  edquota -u wanghan1  # edit this user's quota
Disk quotas for user wanghan1 (uid 500):
  Filesystem                         blocks       soft       hard     inodes     soft     hard
  /dev/mapper/wanghanvg-wanghanlv         0       8000      10000          0        0        0
[root@localhost ~]# edquota -p wanghan1 -u wanghan2
[root@localhost ~]# edquota -p wanghan1 -u wanghan3
[root@localhost ~]# edquota -p wanghan1 -u wanghan4
[root@localhost ~]# edquota -p wanghan1 -u wanghan5
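The block limits in edquota are counted in 1 KiB blocks on ext3, so the numbers above translate to roughly 7.8 MiB (soft) and 9.8 MiB (hard):

```shell
# Convert the edquota block limits (1 KiB units) to MiB.
awk 'BEGIN {
    printf "soft: %.1f MiB\n",  8000 / 1024
    printf "hard: %.1f MiB\n", 10000 / 1024
}'
```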
 
5 Turn quotas on
 quotaon -a
6 Give the users write permission
 chmod 777 /root /root/wanghan  # users must be able to traverse /root and write in the mount point
7 Switch user
 su - wanghan1
8 Change into the /root/wanghan/wanghan1 directory
 [wanghan1@localhost ~]$ cd /root/wanghan/wanghan1
9 Copy in a file to check whether the (soft) limit is exceeded
 [wanghan1@localhost wanghan1]$ dd if=/dev/zero of=wanghan1.txt bs=1M count=9
        Check the result:
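The expected outcome can be reasoned out from the limits set above: dd writes 9 × 1024 = 9216 KiB, which is over the 8000-block soft limit but under the 10000-block hard limit, so the write should succeed while logging a soft-limit warning:

```shell
# 9 MiB in 1 KiB quota blocks, compared against the soft limit from edquota.
written_kib=$((9 * 1024))    # 9216 KiB
soft=8000
[ "$written_kib" -gt "$soft" ] && echo "soft limit exceeded"
```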

10 Verify whether the user can exceed the hard limit
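The original post leaves this step blank. A sketch, assuming the 10000-block hard limit configured above (the file name is illustrative): attempting an 11 MiB write should be cut off once the hard limit is hit, so dd reports a short file.

```shell
# Hypothetical follow-up as wanghan1 (file name is illustrative):
# [wanghan1@localhost wanghan1]$ dd if=/dev/zero of=wanghan1_big.txt bs=1M count=11
# 11 MiB = 11264 KiB exceeds the 10000-block hard limit, so the kernel
# should refuse the blocks past the limit and dd should write a short file.
attempt_kib=$((11 * 1024))   # 11264 KiB requested
hard=10000
if [ "$attempt_kib" -gt "$hard" ]; then
    echo "write stops at ${hard} KiB"
fi
```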
 

 
