Linux disk allocation

I. Partitioning a disk with fdisk
[root@localhost ~]# fdisk -l    (show the current disk partition layout)

Disk /dev/hdb: 5368 MB, 5368709120 bytes
16 heads, 63 sectors/track, 10402 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   
   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

The output above shows two disks, one SCSI and one IDE. The IDE disk /dev/hdb is 5 GB, with 16 heads, 63 sectors per track and 10402 cylinders; each cylinder holds 516096 bytes, roughly 0.5 MB. A partition's size in the Blocks column is roughly (End − Start) × the per-cylinder capacity; for example /dev/sda1 ≈ (13 − 1) × 8225.280 KB ≈ 98703 KB ≈ 98 MB (disk capacities are counted in units of 1000, so there is some deviation from the exact Blocks value shown). There is no /dev/hda because that device name is already taken by the CD-ROM drive.
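As a quick sanity check, the same arithmetic can be done directly in the shell; the cylinder counts and the 8225280-bytes-per-cylinder figure are taken from the fdisk output above:

[root@localhost ~]# echo $(( 13 * 8225280 / 1024 / 1024 ))                (rough size of /dev/sda1 in MB)
101
[root@localhost ~]# echo $(( (2610 - 14 + 1) * 8225280 / 1024 / 1024 ))   (rough size of /dev/sda2 in MB)
20371

Both values roughly match the Blocks column (104391 KB and 20860402 KB).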

Next, partition /dev/hdb according to the following plan:
/dev/hdb1   primary partition,  1 GB
/dev/hdb2   extended partition, 4 GB
/dev/hdb5   logical partition,  1 GB
/dev/hdb6   logical partition,  1 GB
/dev/hdb7   logical partition,  1 GB
/dev/hdb8   logical partition,  1 GB

Note: the total number of primary partitions (the extended partition counts as one of them) cannot exceed four, and an extended partition must not be sandwiched between primary partitions. The system reserves partition numbers 1-4 for primary and extended partitions and numbers 5-16 for logical partitions, so a disk can hold at most 16 partitions under Linux.
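Before touching the partition table it is worth keeping a copy of the current one. A minimal sketch using sfdisk's dump option (the backup file name is just an example):

sfdisk -d /dev/hdb > /root/hdb-partition-table.backup    (dump the current table to a file)
sfdisk /dev/hdb < /root/hdb-partition-table.backup       (restore it later if something goes wrong)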

[root@localhost ~]# fdisk /dev/hdb    (start partitioning /dev/hdb)

The number of cylinders for this disk is set to 10402.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)


Command (m for help): n   (create a new partition)
Command action
   e   extended
   p   primary partition (1-4)
p   (make it a primary partition)
Partition number (1-4): 1    (choose the partition number)
First cylinder (1-10402, default 1):    (press Enter to accept the default starting cylinder)
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10402, default 10402): +1G   (make the partition 1 GB)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e   (make it the extended partition)
Partition number (1-4): 2
First cylinder (1940-10402, default 1940):
Using default value 1940
Last cylinder or +size or +sizeM or +sizeK (1940-10402, default 10402):
Using default value 10402
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l   (create logical partition 1)
First cylinder (1940-10402, default 1940):
Using default value 1940
Last cylinder or +size or +sizeM or +sizeK (1940-10402, default 10402): +1G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l   (create logical partition 2)
First cylinder (3879-10402, default 3879):
Using default value 3879
Last cylinder or +size or +sizeM or +sizeK (3879-10402, default 10402): +1G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l   (create logical partition 3)
First cylinder (5818-10402, default 5818):
Using default value 5818
Last cylinder or +size or +sizeM or +sizeK (5818-10402, default 10402): +1G
Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l   (create logical partition 4)
First cylinder (7757-10402, default 7757): +1G    (a size cannot be entered at this prompt, so fdisk rejects it)
Value out of range.
First cylinder (7757-10402, default 7757):
Using default value 7757
Last cylinder or +size or +sizeM or +sizeK (7757-10402, default 10402): +1G

Command (m for help): p   (print the partition table)

Disk /dev/hdb: 5368 MB, 5368709120 bytes
16 heads, 63 sectors/track, 10402 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1               1        1939      977224+  83  Linux
/dev/hdb2            1940       10402     4265352    5  Extended
/dev/hdb5            1940        3878      977224+  83  Linux
/dev/hdb6            3879        5817      977224+  83  Linux
/dev/hdb7            5818        7756      977224+  83  Linux
/dev/hdb8            7757        9695      977224+  83  Linux

partprobe /dev/hdb   (re-read the new partition table without rebooting Linux; run this after writing the table with w)
This completes the partitioning.

References:
http://www.pconline.com.cn/pcjob/system/linux/others/0512/743298.html
http://www.360doc.com/content/07/0808/00/9144_659342.shtml
http://server.zol.com.cn/127/1271401.html

II. LVM (Logical Volume Manager)

LVM (Logical Volume Manager) exists to solve the problem of running out of disk space; the only prerequisite for creating logical volumes is that some disk or partition still has spare space. Three terms come up constantly when working with LVM (a condensed end-to-end sketch follows the definitions):
PV - physical volume: a whole disk or a single partition handed over to LVM
VG - volume group: a storage pool built from one or more PVs
LV - logical volume: a slice of a VG, used like an ordinary partition
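Putting the three layers together, the whole of Scenario 1 below boils down to a handful of commands. A minimal sketch (the device, VG and mount-point names match the example that follows; the -n flag just gives the LV an explicit name):

pvcreate /dev/hdb5               (turn the partition into a physical volume)
vgcreate vg1 /dev/hdb5           (build a volume group on top of it)
lvcreate -L 500M -n lvol0 vg1    (carve out a 500 MB logical volume)
mkfs.ext3 /dev/vg1/lvol0         (create a filesystem on the LV)
mount /dev/vg1/lvol0 /tmp/test1  (mount and use it)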

Scenario 1: create a new 500 MB LVM volume and put it to use
1. Create the physical volume
[root@localhost ~]# pvcreate /dev/hdb5     (create the PV)
  Physical volume "/dev/hdb5" successfully created

[root@localhost ~]# pvdisplay /dev/hdb5    (show PV information)
  --- Physical volume ---
  PV Name               /dev/hdb5
  VG Name               vg1
  PV Size               954.32 MB / not usable 2.32 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              238
  Free PE               113
  Allocated PE          125
  PV UUID               c0230M-LDop-9P6h-urrm-PVfS-wF5O-7RsxnR

2. Create the volume group
[root@localhost ~]# vgcreate vg1 /dev/hdb5   (vg1 is the volume group name; any name will do)
  Volume group "vg1" successfully created
[root@localhost ~]# vgdisplay vg1            (show VG information)
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               952.00 MB
  PE Size               4.00 MB
  Total PE              238
  Alloc PE / Size       125 / 500.00 MB
  Free  PE / Size       113 / 452.00 MB
  VG UUID               erhVbI-5Ckz-befx-QpFZ-RbhV-Hl9K-qZ0x9Z
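These numbers are consistent with each other: VG Size = Total PE × PE Size = 238 × 4 MB = 952 MB, the allocated part is 125 PE × 4 MB = 500 MB, and the free part is 113 PE × 4 MB = 452 MB. LVM always hands out space in whole physical extents (PE), which is why every size here is a multiple of 4 MB.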

3. Create the logical volume (500 MB)
[root@localhost ~]# lvcreate -L 500M vg1
  Logical volume "lvol0" created
[root@localhost ~]# lvdisplay vg1
  --- Logical volume ---
  LV Name                /dev/vg1/lvol0
  VG Name                vg1
  LV UUID                X0t8to-qvNP-smxr-xPOH-I8p9-c3Ht-V01Fy2
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                500.00 MB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

4. Create a filesystem on the volume (ext3)
[root@localhost ~]# mkfs.ext3 /dev/vg1/lvol0    (equivalently: mkfs.ext3 /dev/mapper/vg1-lvol0)
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 512000 blocks
25600 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
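The last message refers to ext3's periodic checks. If they are not wanted on this volume they can be switched off with the tune2fs options the message itself mentions (a sketch; whether to disable them is a policy choice):

[root@localhost ~]# tune2fs -c 0 -i 0 /dev/vg1/lvol0    (disable the mount-count and time-interval checks)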

Re-read the partition table: partprobe /dev/hdb

5. Mount and use the volume:
[root@localhost ~]# mkdir -p /tmp/test1                     (create a mount point, or pick your own)
[root@localhost ~]# mount /dev/mapper/vg1-lvol0 /tmp/test1  (mount the volume)
[root@localhost ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  2.1G   15G  13% /
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/mapper/vg1-lvol0
                      485M   11M  449M   3% /tmp/test1
[root@localhost test1]# mkdir 123 && touch 456
[root@localhost test1]# ll -trh
total 14K
drwx------ 2 root root  12K 10-23 20:51 lost+found
drwxr-xr-x 2 root root 1.0K 10-23 20:54 test
-rw-r--r-- 1 root root    0 10-23 21:13 456
drwxr-xr-x 2 root root 1.0K 10-23 21:13 123

The test succeeds.
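The mount above lasts only until the next reboot. To make it permanent, an entry along these lines can be added to /etc/fstab (a sketch; the device path and mount point are the ones used above, the options are generic defaults):

/dev/mapper/vg1-lvol0   /tmp/test1   ext3   defaults   0 0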

Scenario 2: in an existing LVM setup, grow the logical volume by 300 MB

1. Check how much free space is left in the VG
[root@localhost /]# vgdisplay vg1
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               952.00 MB
  PE Size               4.00 MB
  Total PE              238
  Alloc PE / Size       125 / 500.00 MB   (space already allocated)
  Free  PE / Size       113 / 452.00 MB   (free space)
  VG UUID               erhVbI-5Ckz-befx-QpFZ-RbhV-Hl9K-qZ0x9Z

The output shows that the whole VG is about 1 GB (952 MB usable), of which 500 MB is already allocated, leaving 452 MB available for growth.
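The same information is available in a more compact form through LVM2's summary reporting commands (vgs and lvs are standard LVM2 tools; the exact columns vary slightly between versions):

[root@localhost /]# vgs vg1    (one line per VG, including VSize and VFree)
[root@localhost /]# lvs vg1    (one line per LV in the group, including LSize)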

2. Grow the logical volume by 300 MB
[root@localhost tmp]# lvextend -L +300M /dev/mapper/vg1-lvol0
  Logical volume lvol0 successfully resized

3. Grow the filesystem to match the new volume size
[root@localhost tmp]# resize2fs -p /dev/mapper/vg1-lvol0
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/vg1-lvol0 is mounted on /tmp/test1; on-line resizing required
Performing an on-line resize of /dev/mapper/vg1-lvol0 to 819200 (1k) blocks.
The filesystem on /dev/mapper/vg1-lvol0 is now 819200 blocks long.

4. Check the result
[root@localhost ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  2.1G   15G  13% /
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/mapper/vg1-lvol0
                      775M   11M  725M   2% /tmp/test1    (the volume has grown by 300 MB and now shows 775M)
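On LVM2 versions that support it, steps 2 and 3 can be combined: lvextend's -r (--resizefs) option runs the filesystem resize after extending the volume. A sketch, assuming the option exists in your LVM2 release; otherwise run resize2fs separately as above:

[root@localhost tmp]# lvextend -r -L +300M /dev/mapper/vg1-lvol0    (extend the LV and resize the filesystem in one step)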

Scenario 3: shrink the logical volume to 200 MB
1. Create the PV, VG and LV (omitted), then check the current usage
[root@localhost ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  2.1G   15G  13% /
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/mapper/vg2-lvol0
                      194M  5.6M  179M   4% /tmp/test2
/dev/mapper/vg1-lvol0
                      591M   17M  545M   3% /tmp/test1

2. Unmount the filesystem
(For ext2/ext3, resize2fs cannot shrink a filesystem online. "Online" means operating on a filesystem while it is mounted and in use, as was done when growing the volume above; to shrink it, the filesystem on this volume must be unmounted first.)

[root@localhost ~]# umount /tmp/test1
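If umount complains that the device is busy, something still has files open under the mount point. The offending processes can be located with standard tools before retrying (see also references 9 and 10 below):

[root@localhost ~]# fuser -mv /tmp/test1    (list the processes using the filesystem)
[root@localhost ~]# lsof /tmp/test1         (alternative view of the open files on that mount)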

3. Check the filesystem (a key step):
[root@localhost ~]# e2fsck -f /dev/mapper/vg1-lvol0    (-f forces a full check; -p would repair automatically without prompting)
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg1-lvol0: 11/76800 files (9.1% non-contiguous), 6635/153600 blocks

If the filesystem check is skipped, the shrink in the next step refuses to run and prints the message below, which would block the rest of the procedure:
[root@localhost ~]# resize2fs /dev/mapper/vg1-lvol0 50M
resize2fs 1.39 (29-May-2006)
Please run 'e2fsck -f /dev/mapper/vg1-lvol0' first.

4. Shrink the filesystem (a key step):
[root@localhost ~]# resize2fs /dev/mapper/vg1-lvol0 200M    (the -p option can be added to show progress)
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/mapper/vg1-lvol0 to 51200 (4k) blocks.
The filesystem on /dev/mapper/vg1-lvol0 is now 51200 blocks long.

5. Shrink the logical volume (a key step):
[root@localhost ~]# lvreduce -L 200M /dev/mapper/vg1-lvol0
  WARNING: Reducing active logical volume to 200.00 MB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol0? [y/n]: y
  Reducing logical volume lvol0 to 200.00 MB
  Logical volume lvol0 successfully resized

6. Check the result:

[root@localhost ~]# vgdisplay vg1
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               952.00 MB
  PE Size               4.00 MB
  Total PE              238
  Alloc PE / Size       50 / 200.00 MB
  Free  PE / Size       188 / 752.00 MB
  VG UUID               erhVbI-5Ckz-befx-QpFZ-RbhV-Hl9K-qZ0x9Z

[root@localhost ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  2.1G   15G  13% /
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/mapper/vg2-lvol0
                      194M  5.6M  179M   4% /tmp/test2
(vg1-lvol0 no longer appears in df because it is still unmounted; the vgdisplay output above confirms it has shrunk from the original 591M to 200M)
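Condensed, the whole shrink sequence looks like this; the order matters, because the filesystem must be reduced to no more than the new LV size before lvreduce runs (sizes match the example above):

umount /tmp/test1                             (the filesystem cannot be shrunk while mounted)
e2fsck -f /dev/mapper/vg1-lvol0               (check it first, or resize2fs refuses to run)
resize2fs /dev/mapper/vg1-lvol0 200M          (shrink the filesystem)
lvreduce -L 200M /dev/mapper/vg1-lvol0        (then shrink the logical volume to the same size)
mount /dev/mapper/vg1-lvol0 /tmp/test1        (remount and continue using it)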

References:
1. LVM in detail: http://tech.foolpig.com/2010/04/01/lvm/
2. e2fsck usage: http://wiki.chinaunix.net/index.php/E2fsck
3. LVM resizing for beginners: http://world77.blog.51cto.com/414605/382230
4. Successfully growing a disk under VMware: http://blog.csdn.net/junglyfine/archive/2009/12/09/4974269.aspx
5. Managing LVM logical partitions: http://blog.chinaunix.net/u1/33254/showart_371203.html
6. LVM (logical volume manager): http://blog.csdn.net/xjtuse_mal/archive/2010/05/09/5572335.aspx
7. Growing LVM on RHEL5: http://blog.sina.com.cn/s/blog_3f7e47f20100iy53.html
8. Assorted Linux inspection commands: http://www.360doc.com/content/07/0808/00/9144_659342.shtml
9. Fixing "umount: device is busy": http://blog.csdn.net/yunshine/archive/2009/04/07/4055509.aspx
10. More on umount: http://www.linuxforum.net/forum/showflat.php?Board=linuxK&Number=598759
11. RAID: http://www.chinaeda.cn/show.aspx?id=18290&cid=46

III. Creating a software RAID array on Linux
1. Add and partition the disks, following the earlier steps (omitted)

2. Create a RAID 5 array (using four partitions: hdb6, hdb7, hdb8 and hdb9; -l 5 sets the RAID level, -n 3 the number of active devices, -x 1 the number of spares)
[root@localhost ~]# mdadm --create /dev/md0 -l 5 -n 3 -x 1 /dev/hdb7 /dev/hdb6 /dev/hdb8 /dev/hdb9
mdadm: /dev/hdb7 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  3 03:41:34 2010
mdadm: /dev/hdb6 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  3 03:41:34 2010
mdadm: /dev/hdb8 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  3 03:41:34 2010
mdadm: /dev/hdb9 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Nov  3 03:41:34 2010
Continue creating array? Y
mdadm: array /dev/md0 started.
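While the array is initializing, the rebuild progress can also be watched through the kernel's md status file:

[root@localhost ~]# cat /proc/mdstat              (shows the recovery percentage for md0)
[root@localhost ~]# watch -n 5 cat /proc/mdstat   (refresh the view every 5 seconds)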

3. Check the RAID information
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Nov  3 06:54:15 2010
     Raid Level : raid5
     Array Size : 1954304 (1908.82 MiB 2001.21 MB)
  Used Dev Size : 977152 (954.41 MiB 1000.60 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Nov  3 06:54:15 2010
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 21% complete

           UUID : a4ec0ae2:2a155628:7f52fc73:b72a2dad
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       3       71        0      active sync   /dev/hdb7
       1       3       70        1      active sync   /dev/hdb6
       4       3       72        2      spare rebuilding   /dev/hdb8

       3       3       73        -      spare   /dev/hdb9

4. Stop and remove the RAID array
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -D /dev/md0   (confirm the array is gone)

To create a RAID 1 array instead: mdadm --create /dev/md0 -l 1 -n 2 /dev/hdb8 /dev/hdb9
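For the array to be assembled automatically at boot and actually hold data, the usual follow-up is to record its definition and create a filesystem on it. A sketch (the config file path and the mount point are the conventional ones on RHEL-style systems; adjust to your distribution):

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf          (record the array definition for reassembly at boot)
[root@localhost ~]# mkfs.ext3 /dev/md0                                (create a filesystem on the array)
[root@localhost ~]# mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid    (mount it; /mnt/raid is just an example)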

References:
Building RAID on Linux AS5: http://www.examda.com/linux/fudao/20090601/091219752.html
Configuring RAID on Linux: http://david0341.javaeye.com/blog/382399
Solving common RAID problems: http://tech.watchstor.com/storage-systems-125644.htm
mdadm operations: http://hi.baidu.com/haigang/blog/item/e4fbd9339a0b2d4aac4b5f60.html
