Simple LVM Usage

Author: skate
Date: 2009/04/03

 


OS: CentOS 4.7
RAM: 32G
Disk: 4 x 300G hard drives

 

 

For a system designer, up-front planning matters: the layout has to meet today's requirements while staying easy to expand later.

My disks are configured as RAID 10. With future growth in mind, I put the data areas that are likely to change into LVM volume groups, which makes later expansion and management easier. Since it is not yet entirely clear how much space each area will actually need, I have reserved about 175G as unallocated space.


Check the current memory:

[root@ticketb ~]# free -g
             total       used       free     shared    buffers     cached
Mem:            31          0         31          0          0          0
-/+ buffers/cache:          0         31
Swap:           30          0         30

 

Check the current disk space:

 

[root@ticketb ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             981M  202M  730M  22% /
none                   16G     0   16G   0% /dev/shm
/dev/sda9             2.9G   37M  2.7G   2% /tmp
/dev/sda7             4.9G  1.9G  2.7G  42% /usr
/dev/sda8             2.9G  116M  2.7G   5% /var

 

Check the disk partition layout (the disk was already partitioned during OS installation):

 

[root@ticketb ~]# fdisk -l

Disk /dev/sda: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         127     1020096   83  Linux
/dev/sda2             128       25623   204796620   8e  Linux LVM
/dev/sda3           25624       44745   153597465   8e  Linux LVM
/dev/sda4           44746       72809   225424080    5  Extended
/dev/sda5           44746       47295    20482843+  8e  Linux LVM
/dev/sda6           47296       49207    15358108+  82  Linux swap
/dev/sda7           49208       49844     5116671   83  Linux
/dev/sda8           49845       50226     3068383+  83  Linux
/dev/sda9           50227       50608     3068383+  83  Linux
/dev/sda10          50609       52676    16611178+  82  Linux swap


Creating the LVM volumes

 

1. Create the physical volumes

 

[root@ticketb ~]# pvcreate /dev/sda2
  Physical volume "/dev/sda2" successfully created
[root@ticketb ~]# pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created
[root@ticketb ~]# pvcreate /dev/sda5
  Physical volume "/dev/sda5" successfully created

  View the newly created physical volumes:

 

[root@ticketb ~]# pvscan
  PV /dev/sda2                      lvm2 [195.31 GB]
  PV /dev/sda3                      lvm2 [146.48 GB]
  PV /dev/sda5                      lvm2 [19.53 GB]
  Total: 3 [361.33 GB] / in use: 0 [0   ] / in no VG: 3 [361.33 GB]
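
  For more detail on a single physical volume (its size, extent usage, and UUID), pvdisplay can be run against the device. This is just an optional check, and the output will differ from system to system:

[root@ticketb ~]# pvdisplay /dev/sda2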

2. Create the volume groups:

 

[root@ticketb ~]# vgcreate -s 256m vghome /dev/sda5
  Volume group "vghome" successfully created
[root@ticketb ~]# vgcreate -s 256m vgoradata /dev/sda3
  Volume group "vgoradata" successfully created
[root@ticketb ~]# vgcreate -s 256m vgbackup /dev/sda2
  Volume group "vgbackup" successfully created

  View the newly created volume groups:

 

[root@ticketb ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vghome" using metadata type lvm2
  Found volume group "vgoradata" using metadata type lvm2
  Found volume group "vgbackup" using metadata type lvm2

  Show the details of a volume group:

 

[root@ticketb ~]# vgdisplay vghome
  --- Volume group ---
  VG Name               vghome
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.50 GB
  PE Size               256.00 MB
  Total PE              78
  Alloc PE / Size       0 / 0  
  Free  PE / Size       78 / 19.50 GB
  VG UUID               s6X9cb-ajYj-E66N-w8EZ-Lbpc-mmL7-GqDZ5p
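
  Note how the numbers fit together: with -s 256m each physical extent (PE) is 256 MB, so this VG holds 78 x 256 MB = 19,968 MB = 19.50 GB. Logical volumes are always allocated in whole extents, so any size you request is rounded up to a multiple of 256 MB, which matters in the next step.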
  
3. Create the logical volumes

 

[root@ticketb ~]# lvcreate -L 19.5G -n lvhome vghome
  Logical volume "lvhome" created

  The following attempt failed because there was not enough free space:


[root@ticketb ~]# lvcreate -L 195.31G -n lvbackup vgbackup
  Rounding up size to full physical extent 195.50 GB
  Insufficient free extents (781) in volume group vgbackup: 782 required
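
  The requested 195.31G was rounded up to 782 extents of 256 MB, but the VG has only 781 free extents, hence the failure. The successful command below asks for 195.25G, which is exactly 781 x 256 MB. An alternative (just a sketch, not what was run here) is to size the LV in extents with -l, using the free-extent count reported by vgdisplay, so no GB arithmetic is needed:

[root@ticketb ~]# lvcreate -l 781 -n lvbackup vgbackup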

[root@ticketb ~]# lvcreate -L 195.25G -n lvbackup vgbackup
  Logical volume "lvbackup" created

[root@ticketb ~]# lvcreate -L 146.25G -n lvoradata vgoradata
  Logical volume "lvoradata" created

  View the newly created logical volumes:


[root@ticketb ~]# lvscan
  ACTIVE            '/dev/vghome/lvhome' [19.50 GB] inherit
  ACTIVE            '/dev/vgoradata/lvoradata' [146.25 GB] inherit
  ACTIVE            '/dev/vgbackup/lvbackup' [195.25 GB] inherit


[root@ticketb ~]# vgdisplay vgbackup
  --- Volume group ---
  VG Name               vgbackup
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               195.25 GB
  PE Size               256.00 MB
  Total PE              781
  Alloc PE / Size       0 / 0  
  Free  PE / Size       781 / 195.25 GB
  VG UUID               kQBMDg-6M24-69bm-gc3I-bIk9-Jg00-XyXklk
  

[root@ticketb ~]# vgdisplay vgoradata
  --- Volume group ---
  VG Name               vgoradata
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               146.25 GB
  PE Size               256.00 MB
  Total PE              585
  Alloc PE / Size       0 / 0  
  Free  PE / Size       585 / 146.25 GB
  VG UUID               m4ZW7x-oaAt-5g0X-J6El-Yv62-wxEc-OojYbQ
  
4. Create the filesystems

 

[root@ticketb ~]# mkfs.ext3 /dev/vghome/lvhome
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2555904 inodes, 5111808 blocks
255590 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
156 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@ticketb ~]# mkfs.ext3 /dev/vgoradata/lvoradata
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
19169280 inodes, 38338560 blocks
1916928 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1170 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@ticketb ~]# mkfs.ext3 /dev/vgbackup/lvbackup
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
25591808 inodes, 51183616 blocks
2559180 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1562 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
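
As the mke2fs output notes, each filesystem will be force-checked every 21-29 mounts or every 180 days. If those periodic checks are unwanted (for example on a database volume), they can be disabled with tune2fs; a sketch for one volume, to be repeated on the others if desired:

[root@ticketb ~]# tune2fs -c 0 -i 0 /dev/vghome/lvhome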


5. Mount the filesystems:


[root@ticketb ~]# mkdir -p /u01/oradata
[root@ticketb ~]# mkdir -p /u01/backup
[root@ticketb ~]# ls
anaconda-ks.cfg  Desktop  install.log  install.log.syslog
[root@ticketb ~]# mount -t ext3 /dev/vghome/lvhome /home
[root@ticketb ~]# mount -t ext3 /dev/vgoradata/lvoradata /u01/oradata
[root@ticketb ~]# mount -t ext3 /dev/vgbackup/lvbackup /u01/backup

Check the mounts:

 

[root@ticketb ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             981M  202M  730M  22% /
none                   16G     0   16G   0% /dev/shm
/dev/sda9             2.9G   37M  2.7G   2% /tmp
/dev/sda7             4.9G  1.9G  2.7G  42% /usr
/dev/sda8             2.9G  116M  2.7G   5% /var
/dev/mapper/vghome-lvhome
                       20G   76M   19G   1% /home
/dev/mapper/vgoradata-lvoradata
                      144G   92M  137G   1% /u01/oradata
/dev/mapper/vgbackup-lvbackup
                      193G   92M  183G   1% /u01/backup
[root@ticketb ~]#

 

To have the LVM volumes mounted automatically at boot, edit /etc/fstab.

My system's /etc/fstab looks like this:


[root@ticketA ~]# more /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
LABEL=/                 /                       ext3    defaults        1 1
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /dev/shm                tmpfs   defaults        0 0
none                    /proc                   proc    defaults        0 0
none                    /sys                    sysfs   defaults        0 0
LABEL=/tmp              /tmp                    ext3    defaults        1 2
LABEL=/usr              /usr                    ext3    defaults        1 2
LABEL=/var              /var                    ext3    defaults        1 2
LABEL=SWAP-sda6         swap                    swap    defaults        0 0
/dev/sda10              swap                    swap    defaults        0 0
/dev/mapper/vghome-lvhome /home                 ext3    defaults        1 2
/dev/mapper/vgoradata-lvoradata /u01/oradata    ext3    defaults        1 2
/dev/mapper/vgbackup-lvbackup /u01/backup       ext3    defaults        1 2
/dev/scd0               /media/cdrom            auto    pamconsole,fscontext=system_u:object_r:removable_t,exec,noauto,managed 0 0
[root@ticketA ~]#
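
Since the whole point of using LVM here is easier expansion later, this is roughly how a volume would be grown once more disk space becomes available (a sketch only; /dev/sdb1 is a hypothetical new partition, and on CentOS 4 a mounted ext3 filesystem is normally grown online with ext2online, while resize2fs can be used when it is unmounted):

[root@ticketb ~]# pvcreate /dev/sdb1                         # initialize the new partition as a PV
[root@ticketb ~]# vgextend vgoradata /dev/sdb1               # add it to the volume group
[root@ticketb ~]# lvextend -L +50G /dev/vgoradata/lvoradata  # grow the logical volume by 50G
[root@ticketb ~]# ext2online /dev/vgoradata/lvoradata        # grow the mounted ext3 filesystem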


---end---
