1) Expanding a partition in a virtual machine

Reference (reposted from): https://segmentfault.com/a/1190000007645451 (tested and confirmed working)


CentOS 7 is installed in VirtualBox and serves as a Kafka and ZooKeeper test server for a colleague. Last night Kafka terminated unexpectedly; the logs showed that the root filesystem (the "root" LV) had only been allocated 1 GiB and was nearly full. After some searching, here is a summary of the steps:

List the current filesystem usage:

# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs       997M  994M  2.9M 100% /

devtmpfs                devtmpfs  915M     0  915M   0% /dev

tmpfs                   tmpfs     921M     0  921M   0% /dev/shm

tmpfs                   tmpfs     921M   17M  905M   2% /run

tmpfs                   tmpfs     921M     0  921M   0% /sys/fs/cgroup

/dev/mapper/centos-usr  xfs       4.9G  1.6G  3.4G  33% /usr

/dev/sda1               xfs        97M   66M   31M  69% /boot

/dev/mapper/centos-var  xfs       2.4G  473M  1.9G  21% /var

As shown above, /dev/mapper/centos-root is at 100% usage, so we are going to expand it.
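
Before expanding, it can also be worth checking what is actually consuming the space. A general check (not part of the original steps) that stays on the root filesystem only:

# du -xh --max-depth=1 / | sort -h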


Add a new virtual disk in VirtualBox:

Shut down the guest OS first. In VirtualBox, select the VM, go to Settings -> Storage -> Controller: SATA, click the "Add Hard Disk" icon, and create a new virtual hard disk. I added a 3 GiB virtual disk, saved the settings, and booted the VM again.
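
The same disk can also be created and attached from the host's command line with VBoxManage. A minimal sketch, assuming the VM is named "CentOS7", its SATA controller is named "SATA", and port 1 is free (names and paths are placeholders; on older VirtualBox versions the first subcommand is createhd):

$ VBoxManage createmedium disk --filename extra.vdi --size 3072          # size is in MB
$ VBoxManage storageattach "CentOS7" --storagectl "SATA" --port 1 --device 0 --type hdd --medium extra.vdi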



Create a new partition

First, take a look at the existing disks:

# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x000940ec


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048      206847      102400   83  Linux

/dev/sda2          206848    41943039    20868096   8e  Linux LVM


Disk /dev/sdb: 3221 MB, 3221225472 bytes, 6291456 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes



Disk /dev/mapper/centos-swap: 2097 MB, 2097152000 bytes, 4096000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes



Disk /dev/mapper/centos-usr: 5242 MB, 5242880000 bytes, 10240000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes



Disk /dev/mapper/centos-root: 1048 MB, 1048576000 bytes, 2048000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

...

The new disk /dev/sdb is now visible.


Partition the new disk:

# fdisk /dev/sdb

In fdisk's interactive mode, enter the following, in order:

n        // create a new partition

p        // make it a primary partition

<Enter>  // accept the default partition number

<Enter>  // accept the default first sector

<Enter>  // accept the default last sector

w        // write the partition table and exit

This leaves the whole disk as a single partition.
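
If you prefer a non-interactive tool, parted can create the same single partition; a minimal sketch, assuming /dev/sdb is the new, still-empty disk:

# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%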


Check the disks again:

# partprobe    // force the kernel to re-read the partition table; otherwise the next command will not show the new partition

# fdisk -l

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1            2048     6291455     3144704   83  Linux


Use the new partition to extend the root filesystem

First, look at the volume group:

# vgdisplay -v

Finding all volume groups

    Finding volume group "centos"

  --- Volume group ---

  VG Name               centos            // note this VG name for later use

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  6

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                5

  Open LV               5

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               19.90 GiB

  PE Size               4.00 MiB

  Total PE              5094

  Alloc PE / Size       5093 / 19.89 GiB

  Free  PE / Size       1 / 4.00 MiB

  VG UUID               vtJL08-7Jxi-5IqK-3fUg-Pben-682a-wiv2GL

   

  --- Logical volume ---

  LV Path                /dev/centos/root            // the LV to be extended; note this path for later

  LV Name                root

  VG Name                centos

  LV UUID                ZWTgoT-AMWs-g54v-dZA1-NQUj-mqGa-8tmr4U

  LV Write Access        read/write

  LV Creation host, time localhost, 2016-07-03 21:59:31 -0400

  LV Status              available

  # open                 1

  LV Size                1000.00 MiB

  Current LE             250

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

   currently set to     256

  Block device           253:2

  ...

  --- Physical volumes ---

  PV Name               /dev/sda2     

  PV UUID               fiVH1e-lwfi-63Lr-oIlK-GDZI-dcuZ-T04VlC

  PV Status             allocatable

  Total PE / Free PE    5094 / 1

  ...


Create a physical volume on the newly added partition:

# pvcreate /dev/sdb1

Check the result:

# pvdisplay

--- Physical volume ---

  PV Name               /dev/sda2

  VG Name               centos

  PV Size               19.90 GiB / not usable 3.00 MiB

  Allocatable           yes 

  PE Size               4.00 MiB

  Total PE              5094

  Free PE               1

  Allocated PE          5093

  PV UUID               fiVH1e-lwfi-63Lr-oIlK-GDZI-dcuZ-T04VlC

   

  "/dev/sdb1" is a new physical volume of "3.00 GiB"

  --- NEW Physical volume ---

  PV Name               /dev/sdb1

  VG Name               

  PV Size               3.00 GiB

  Allocatable           NO

  PE Size               0   

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               VGXSAn-UiZ0-Fy40-eQxb-53xA-5hZM-3eGPg0


Extend the volume group; "centos" is the VG name found with vgdisplay above:

# vgextend centos /dev/sdb1

To verify the extension, look at the volume group again; Free PE / Size should now include roughly 3 GiB from the new disk:

# vgdisplay
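
The compact pvs output also confirms that /dev/sdb1 now belongs to the centos volume group (an optional quick check):

# pvs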


Now extend the logical volume /dev/centos/root:

# lvextend -L +3G /dev/centos/root

Then grow the XFS filesystem into the new space:

# xfs_growfs /dev/centos/root
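
A note on this pair of commands: xfs_growfs operates on a mounted filesystem, so it can equally be given the mount point (xfs_growfs /). The two steps can also be combined, since lvextend's -r (--resizefs) flag grows the filesystem right after extending the LV; a sketch with the same LV path:

# lvextend -r -L +3G /dev/centos/root    // extend the LV and grow the filesystem in one step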

Finally, check the result:

# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs       4.0G  995M  3.1G  25% /

devtmpfs                devtmpfs  915M     0  915M   0% /dev

tmpfs                   tmpfs     921M     0  921M   0% /dev/shm

tmpfs                   tmpfs     921M  8.4M  913M   1% /run

tmpfs                   tmpfs     921M     0  921M   0% /sys/fs/cgroup

/dev/mapper/centos-usr  xfs       4.9G  1.6G  3.4G  33% /usr

/dev/mapper/centos-home xfs       9.8G  391M  9.4G   4% /home

/dev/mapper/centos-var  xfs       2.4G  469M  1.9G  20% /var

/dev/sda1               xfs        97M   66M   31M  69% /boot



2) Adjusting partition sizes on a physical machine

CentOS 7 uses XFS as its default filesystem. XFS can only be grown, never shrunk, so to shrink /home and grow / the data on /home has to be backed up, the filesystem re-created at the smaller size, and the data restored. This requires xfsdump; the steps are recorded below:
# yum install -y xfsdump 

Check the current filesystems: # df -h
/dev/mapper/cl_hadoop-root   50G  3.1G   47G   7% /
devtmpfs                    7.7G     0  7.7G   0% /dev
tmpfs                       7.8G     0  7.8G   0% /dev/shm
tmpfs                       7.8G  8.4M  7.8G   1% /run
tmpfs                       7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/cl_hadoop-home  142G  424M  141G   1% /home
/dev/vda1                  1014M  139M  876M  14% /boot
tmpfs                       1.6G     0  1.6G   0% /run/user/0
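
Since the dump will be written to /opt (which lives on the root filesystem here), it is worth confirming beforehand that there is enough free space for the current contents of /home (an extra check, not part of the original steps):

# du -sh /home
# df -h /opt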

We want to shrink /home and grow /, so first back up /home:  # xfsdump -l 0 -L home -M home -f /opt/home.xfsdump /home
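
Optionally, list the dump's contents before going any further to confirm the backup is readable (an extra check, same dump file): # xfsrestore -t -f /opt/home.xfsdump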

Unmount /home: # umount /home
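
If umount complains that the target is busy, something still has files open under /home; lsof can show what needs to be closed first (not part of the original steps): # lsof +D /home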

Shrink the home LV to 5G:  # lvreduce -L 5G /dev/mapper/cl_hadoop-home
  WARNING: Reducing active logical volume to 5.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce cl_hadoop/home? [y/n]: y
  Size of logical volume cl_hadoop/home changed from 141.18 GiB (36142 extents) to 5.00 GiB (1280 extents).
  Logical volume cl_hadoop/home successfully resized.

Grow the root LV with all of the freed space: # lvextend -l +100%FREE /dev/cl_hadoop/root
  Size of logical volume cl_hadoop/root changed from 50.00 GiB (12800 extents) to 186.18 GiB (47663 extents).
  Logical volume cl_hadoop/root successfully resized.

Grow the root filesystem to fill the LV: # xfs_growfs /dev/cl_hadoop/root
meta-data=/dev/mapper/cl_hadoop-root isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 48806912

Re-create the filesystem on the now-smaller home LV: # mkfs.xfs -f /dev/mapper/cl_hadoop-home
meta-data=/dev/mapper/cl_hadoop-home isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount /home again:  # mount /home
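
Note that mount /home relies on the existing /etc/fstab entry; if /etc/fstab references /home by UUID, update it first, because mkfs.xfs generated a new UUID. Alternatively, mount the device path directly (same LV as above):

# mount /dev/mapper/cl_hadoop-home /home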

Restore the backup into /home: # xfsrestore -f /opt/home.xfsdump /home
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: hadoop
xfsrestore: mount point: /home
xfsrestore: volume: /dev/mapper/cl_hadoop-home
xfsrestore: session time: Sat Aug  5 19:49:06 2017
xfsrestore: level: 0
xfsrestore: session label: "home"
xfsrestore: media label: "home"
xfsrestore: file system id: f716ff51-3556-491a-a324-de943e23277b
xfsrestore: session id: 47a21c87-c24e-47fe-a501-cd5f5565c6c5
xfsrestore: media id: 553706c1-917a-440a-9cd8-31779fa17089
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 4 directories and 14 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 1 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /opt/home.xfsdump OK (success)
xfsrestore: Restore Status: SUCCESS
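
As a final check (optional, not in the original steps), confirm the new sizes and remove the temporary dump file once the restored data has been verified:

# df -h / /home
# rm -f /opt/home.xfsdump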

That completes the resize.