raid+lvm+quota

Add four hard disks and partition them:

[root@localhost ~]# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
 

[root@localhost ~]# partprobe
[root@localhost ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        6527    52323705   8e  Linux LVM

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   83  Linux

Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1305    10482381   83  Linux

Disk /dev/sde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        1305    10482381   83  Linux
[root@localhost ~]#
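The interactive fdisk session above has to be repeated for /dev/sdc, /dev/sdd and /dev/sde. It can be scripted; a sketch using a dry-run wrapper (`run` only prints the commands — drop it to execute for real, which is destructive to the listed disks):

```shell
#!/bin/bash
# Create one full-size primary partition on each disk, non-interactively.
# run() only prints each command; remove the wrapper to really execute.
run() { printf '+ %s\n' "$*"; }

for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    # fdisk answers: n(ew), p(rimary), partition 1, default start, default end, w(rite)
    run "printf 'n\np\n1\n\n\nw\n' | fdisk $disk"
done
run partprobe
```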
Create the RAID 10 array:
[root@localhost ~]# mdadm  --create  --auto=yes  /dev/md10  --level=10  --raid-devices=4  /dev/sdb1  /dev/sdc1  /dev/sdd1  /dev/sde1
mdadm: array /dev/md10 started.
[root@localhost ~]#

Set up automatic assembly and mounting:
[root@localhost ~]# mdadm --detail /dev/md10

UUID : 7f69985d:9b421c68:81720cb7:5e9d83f1

[root@localhost ~]# vim /etc/mdadm.conf

ARRAY  /dev/md10  UUID=7f69985d:9b421c68:81720cb7:5e9d83f1
[root@localhost ~]# vim /etc/fstab
/dev/md10  /mnt/raid10  ext3  defaults   0   0
(Note: /dev/md10 is turned into an LVM PV below, so it should not also be mounted directly as ext3; this fstab entry conflicts with the rest of the setup and is better omitted.)
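Copying the UUID by hand invites typos. It can be extracted with awk; a sketch, fed a sample line from the `mdadm --detail` output above (in practice, pipe the live command instead, or simply use `mdadm --detail --scan`, which prints a ready-made ARRAY line):

```shell
# Sample of the `mdadm --detail /dev/md10` output shown above:
detail='UUID : 7f69985d:9b421c68:81720cb7:5e9d83f1'

# Split on " : " and keep the value; the result can be appended to /etc/mdadm.conf.
uuid=$(printf '%s\n' "$detail" | awk -F' : ' '/UUID/ { print $2 }')
printf 'ARRAY  /dev/md10  UUID=%s\n' "$uuid"
# prints: ARRAY  /dev/md10  UUID=7f69985d:9b421c68:81720cb7:5e9d83f1
```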

Create the PV:
[root@localhost ~]# pvcreate /dev/md10
  Physical volume "/dev/md10" successfully created
[root@localhost ~]# pvscan
  PV /dev/sda2   VG VolGroup00      lvm2 [59.88 GB / 0    free]
  PV /dev/md10                      lvm2 [19.99 GB]
  Total: 2 [79.87 GB] / in use: 1 [59.88 GB] / in no VG: 1 [19.99 GB]
[root@localhost ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               59.90 GB / not usable 22.10 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1916
  Free PE               0
  Allocated PE          1916
  PV UUID               E27Joj-Dxph-p2af-RiLb-ZIzC-CjIT-BjqROE
  
  "/dev/md10" is a new physical volume of "19.99 GB"
  --- NEW Physical volume ---
  PV Name               /dev/md10
  VG Name              
  PV Size               19.99 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               nbJfDj-t3ak-L6B2-Gazj-jtnF-vWEN-8V0f3k
  
Create the VG:
[root@localhost ~]# vgcreate -s 16m vfastvg /dev/md10
  /dev/cdrom: open failed: Read-only file system
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "vfastvg" successfully created
[root@localhost ~]#

[root@localhost ~]# vgscan
[root@localhost ~]# pvscan
[root@localhost ~]# pvdisplay /dev/md10
  --- Physical volume ---
  PV Name               /dev/md10
  VG Name               vfastvg
  PV Size               19.99 GB / not usable 9.25 MB
  Allocatable           yes
  PE Size (KByte)       16384
  Total PE              1279
  Free PE               1279
  Allocated PE          0
  PV UUID               nbJfDj-t3ak-L6B2-Gazj-jtnF-vWEN-8V0f3k
  
[root@localhost ~]#
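The 1279 extents reported above can be sanity-checked from the partition sizes: RAID 10 stripes over two mirrored pairs, so only two of the four partitions' worth of space is usable.

```shell
# Each partition is 10482381 1K-blocks (from fdisk -l above).
# RAID 10 over four disks => roughly 2 partitions of usable space (minus md metadata).
usable_kb=$((2 * 10482381))
pe_kb=$((16 * 1024))              # vgcreate -s 16m => 16384 KB extents
echo $((usable_kb / pe_kb))       # prints: 1279, matching Total PE
```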

Create the LV:
[root@localhost ~]# lvcreate -l 200 -n vfastlv vfastvg
  /dev/cdrom: open failed: Read-only file system
  Logical volume "vfastlv" created
[root@localhost ~]#
[root@localhost ~]# lvdisplay /dev/vfastvg/vfastlv
  --- Logical volume ---
  LV Name                /dev/vfastvg/vfastlv
  VG Name                vfastvg
  LV UUID                oGw2O6-c25i-bgK1-chkR-fE3N-MfPn-FwgsVB
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.12 GB
  Current LE             200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           253:2
  
[root@localhost ~]#
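The reported size follows directly from the extent count: `-l 200` with the VG's 16 MB extents.

```shell
# -l 200 extents * 16 MB per extent (vgcreate -s 16m)
echo "$((200 * 16)) MB"    # prints: 3200 MB  (= 3.12 GB, as lvdisplay shows)
```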

Check the device node:
[root@localhost ~]# ll /dev/vfastvg/vfastlv
lrwxrwxrwx 1 root root 27 01-04 17:48 /dev/vfastvg/vfastlv -> /dev/mapper/vfastvg-vfastlv
[root@localhost ~]#

Format the LV:
[root@localhost ~]# mkfs.ext3 /dev/vfastvg/vfastlv

Extend the LV:

[root@localhost ~]# lvresize  -l  +100  /dev/vfastvg/vfastlv
  /dev/cdrom: open failed: Read-only file system
  Extending logical volume vfastlv to 4.69 GB
  Logical volume vfastlv successfully resized
[root@localhost ~]# lvdisplay  /dev/vfastvg/vfastlv

[root@localhost ~]# resize2fs  /dev/vfastvg/vfastlv
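Note the order: when growing, extend the LV first and the filesystem second (`resize2fs` with no size argument grows ext3 to fill the LV, and can do so online); shrinking would require the reverse order on an unmounted filesystem. The new size again follows from the extents:

```shell
# (200 + 100) extents * 16 MB per extent
echo "$(((200 + 100) * 16)) MB"    # prints: 4800 MB  (= 4.69 GB, as lvdisplay shows)
```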

Create a snapshot:
[root@localhost ~]# lvcreate -l 100 -s -n vfastlvs /dev/vfastvg/vfastlv
  /dev/cdrom: open failed: Read-only file system
  Logical volume "vfastlvs" created
[root@localhost ~]# lvdisplay /dev/vfastvg/vfastlvs
[root@localhost ~]# lvdisplay /dev/vfastvg/vfastlvs
[root@localhost ~]# mkdir -pv /mnt/snapshot
[root@localhost ~]# mount /dev/vfastvg/vfastlvs /mnt/snapshot
[root@localhost ~]#
[root@localhost ~]# cd /mnt/snapshot/
[root@localhost snapshot]#
[root@localhost snapshot]# ls
lost+found
[root@localhost snapshot]# cd /etc/
[root@localhost etc]#
[root@localhost etc]# cat fstab
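`-s -l 100` reserves 100 extents of copy-on-write space for the snapshot. The snapshot is not a full copy: it starts out consuming nothing and fills only as blocks on the origin change, so the reservation just has to cover the expected churn during the snapshot's lifetime.

```shell
# CoW reservation: 100 extents * 16 MB per extent
echo "$((100 * 16)) MB"    # prints: 1600 MB
```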

Disk quotas:

Create the users:
[root@localhost ~]# vim useradd.sh

#!/bin/bash
groupadd wu
for username in w1 w2
do
    useradd -g wu $username
    echo "123456" |passwd --stdin $username
done

[root@localhost ~]# sh useradd.sh
Changing password for user w1.
passwd: all authentication tokens updated successfully.
Changing password for user w2.
passwd: all authentication tokens updated successfully.
[root@localhost ~]#

Create a mount point and mount the LV:
[root@localhost ~]# mkdir /mnt/vfastlv
[root@localhost ~]#
[root@localhost ~]# mount /dev/vfastvg/vfastlv /mnt/vfastlv
[root@localhost ~]#
[root@localhost ~]# cd /mnt/vfastlv

[root@localhost vfastlv]# mount -o remount,usrquota,grpquota /mnt/vfastlv
 

Make the quota mount options persistent:
[root@localhost vfastlv]# vim /etc/fstab
/dev/vfastvg/vfastlv  /mnt/vfastlv  ext3   defaults,usrquota,grpquota  0    0
[root@localhost vfastlv]# umount /mnt/vfastlv
[root@localhost vfastlv]# mount /mnt/vfastlv
[root@localhost vfastlv]# mount
/dev/mapper/vfastvg-vfastlv on /mnt/vfastlv type ext3 (rw,usrquota,grpquota)
[root@localhost vfastlv]# ls
lost+found
[root@localhost vfastlv]# quotacheck -avug
[root@localhost vfastlv]# ls
aquota.group  aquota.user  lost+found
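`quotacheck` only builds the aquota files; enforcement still has to be switched on with `quotaon` (a step not shown in the log above). For scripts, `setquota` is a non-interactive alternative to the `edquota` editor session below; a dry-run sketch using the same limits (the `run` wrapper only prints — drop it to execute):

```shell
run() { printf '+ %s\n' "$*"; }   # dry-run wrapper

# setquota arguments: block-soft block-hard inode-soft inode-hard (1K blocks)
run setquota -u w1 20000 30000 0 0 /mnt/vfastlv
run setquota -g wu 40000 60000 0 0 /mnt/vfastlv
run quotaon -avug                 # actually enforce the limits
```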

Edit the quotas:
[root@localhost vfastlv]# edquota -u w1

Disk quotas for user w1 (uid 510):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/mapper/vfastvg-vfastlv          0     20000     30000          0    0        0
~                                                                                            
[root@localhost vfastlv]# edquota -p w1 -u w2
[root@localhost vfastlv]# edquota -g wu
Disk quotas for group wu (gid 501):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/mapper/vfastvg-vfastlv          0     40000    60000          0        0        0
~                                                                                               
[root@localhost vfastlv]# edquota -t
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem             Block grace period     Inode grace period
  /dev/mapper/vfastvg-vfastlv                  14days                  7days

Quota report:
[root@localhost vfastlv]# repquota -auvs
*** Report for user quotas on device /dev/mapper/vfastvg-vfastlv
Block grace time: 14days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --   72000       0       0              4     0     0      
w1        --       0   20000   30000              0     0     0      
w2        --       0   20000   30000              0     0     0      

Statistics:
Total blocks: 7
Data blocks: 1
Entries: 3
Used average: 3.000000

Test:

[root@localhost vfastlv]# chmod  o+w  /mnt/vfastlv
[root@localhost vfastlv]#
[root@localhost vfastlv]# su - w1
[w1@localhost ~]$ cd  /mnt/vfastlv
[w1@localhost vfastlv]$
[w1@localhost vfastlv]$ dd  if=/dev/zero of=w1 bs=1M count=18
18+0 records in
18+0 records out
18874368 bytes (19 MB) copied, 0.210064 seconds, 89.9 MB/s
[w1@localhost vfastlv]$ dd if=/dev/zero of=w11 bs=1M count=10
dm-2: warning, user block quota exceeded.
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.179783 seconds, 58.3 MB/s
[w1@localhost vfastlv]$ dd if=/dev/zero of=w111 bs=1M count=50
dm-2: write failed, user block limit reached.
dd: writing `w111': Disk quota exceeded
2+0 records in
1+0 records out
1314816 bytes (1.3 MB) copied, 0.0535413 seconds, 24.6 MB/s
[w1@localhost vfastlv]$
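The numbers add up: the hard limit is 30000 1K-blocks, and the first two dd runs consume (18 + 10) × 1024 = 28672 KB, leaving about 1328 KB — which is why the third write stops after roughly 1.3 MB.

```shell
# Space left under the 30000-block hard limit after writing 18 MB + 10 MB
awk 'BEGIN { used = (18 + 10) * 1024; print (30000 - used) " KB left" }'
# prints: 1328 KB left
```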
