Installing and Using ZFS on CentOS 7

Introduction

ZFS (Zettabyte File System), also known as the Dynamic File System, was the first 128-bit file system. It was originally developed by Sun Microsystems for the Solaris 10 operating system. Released in November 2005 as part of the OpenSolaris open-source project, it was described by Sun as the ultimate file system. After ten years of active development, work moved fully into the open and the project was renamed OpenZFS.

ZFS and OpenZFS

Shortly after Oracle acquired Sun, OpenSolaris became closed source, and all further ZFS development was closed along with it. Many ZFS developers were unhappy with this decision; as a result, two-thirds of the core ZFS developers, including Ahrens and Bonwick, left Oracle. Together with several companies they founded the OpenZFS project in September 2013, which has led open-source development of ZFS ever since.

Back to the licensing issue mentioned above. Since the OpenZFS project is independent of Oracle, some may wonder why it does not simply relicense the code under something GPL-compatible so it could be included in the Linux kernel. According to the OpenZFS website, changing the license would mean contacting everyone who has ever contributed code to the current OpenZFS implementation (including the original common ZFS code going back to OpenSolaris) and obtaining their permission. Since that is practically impossible (some contributors may have died or be hard to find), they decided to keep the license they have.

Features

ZFS is an advanced, highly scalable file system originally developed by Sun and now maintained as part of the OpenZFS project. Unlike other file systems, it is not just a file system but also a logical volume manager. The features that make ZFS popular include:

  • Data integrity: consistency and integrity are guaranteed through copy-on-write and checksumming.
  • Pooled storage: available drives are combined into a single pool called a zpool.
  • Software RAID: a raidz array can be built with a single command.
  • Built-in volume manager: ZFS acts as its own volume manager.
  • Snapshots, clones, compression: some of the advanced features ZFS provides.
  • Maximum single file size of 16 EB (1 EB = 1024 PB).
  • Maximum storage capacity of 256 quadrillion ZB (256 × 10^15; 1 ZB = 1024 EB).

Terminology

Before we continue, let's go over some common ZFS terms.

Term         Description
Pool         A logical group of storage drives. It is the basic building block of ZFS, from which storage space is allocated to datasets.
Datasets     The components of a ZFS file system: file systems, clones, snapshots, and volumes are all called datasets.
Mirror       A virtual device that stores identical copies of data on two or more disks; if one disk fails, the same data remains available on the mirror's other disks.
Resilvering  The process of copying data from one disk to another when a device is being restored.
Scrub        Used for consistency checking in ZFS, much as fsck is used in other file systems.
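
A quick sketch of how these terms fit together. The pool name tank and the device names are hypothetical, not part of the walkthrough below:

# A pool built from one mirror vdev (two disks holding identical copies)
zpool create tank mirror /dev/sdb /dev/sdc
# A dataset (file system) inside the pool
zfs create tank/data
# A snapshot of that dataset
zfs snapshot tank/data@before-upgrade
# A scrub checks the whole pool for consistency
zpool scrub tank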

Installation

Install the EPEL repository

EPEL (RHEL 7)

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

EPEL (RHEL 6)

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo

EPEL (RHEL 5)

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-5.repo

Install the kernel development package


# Upgrade the kernel first
yum update kernel

# Install the kernel development package
yum install kernel-devel

After updating the kernel, it is best to reboot the system so the new kernel is actually running.
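
Before installing ZFS, it is also worth confirming that kernel-devel matches the running kernel, since a mismatch is a common cause of DKMS build failures. A quick check:

# The two version strings should match
uname -r
rpm -q kernel-devel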

Install the ZFS repository

yum localinstall --nogpgcheck http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm

Install the yum repository that matches your system release (check the CentOS version with: cat /etc/redhat-release):
https://github.com/zfsonlinux/zfs/wiki/RHEL-and-CentOS
https://zfsonlinux.org/

Install ZFS

# The default install is DKMS-based and depends on kernel-devel, so that package is required. If you built your own kernel, install the matching devel package yourself.
# On CentOS 6, make sure the kernel was compiled with the distribution's stock GCC: a kernel built with a newer GCC cannot load the .ko modules compiled on the fly during the zfs install, and things will break (don't ask how I know).
yum install zfs -y

Verify that the zfs module has been loaded into the kernel with the lsmod command; if it has not, load it manually with modprobe.

[root@localhost ~]# lsmod | grep zfs
[root@localhost ~]# modprobe zfs
[root@localhost ~]# lsmod | grep zfs
zfs                  3564468  0 
zunicode              331170  1 zfs
zavl                   15236  1 zfs
icp                   270148  1 zfs
zcommon                73440  1 zfs
znvpair                89131  2 zfs,zcommon
spl                   102412  4 icp,zfs,zcommon,znvpair

Check that the zfs commands are usable:

[root@localhost ~]# zfs list
no datasets available
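
On some systems the module is not loaded automatically at boot. One standard systemd approach (an addition beyond the original steps) is:

# Ask systemd-modules-load to insert zfs at every boot
echo zfs > /etc/modules-load.d/zfs.conf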

Using ZFS

Here are some simple ZFS commands:

# Show datasets and their mount status
zfs list

# Show pool status
zpool status

# Create a pool named senra-zfs from disks sdb, sdc, and sdd
zpool create senra-zfs sdb sdc sdd
# -f enables force mode. It is unnecessary for a normal create, but when building a raidz or mirror pool it lets you override the error raised when the member disks are of unequal size.

# Show all properties of pool senra-zfs
zpool get all senra-zfs

# Add disk sde to pool senra-zfs
zpool add senra-zfs sde

# Replace sde in pool senra-zfs with sdf
zpool replace senra-zfs sde sdf

# Check pool senra-zfs for problems
zpool scrub senra-zfs

# Show I/O statistics for pool senra-zfs; add -v for per-disk detail
zpool iostat senra-zfs
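
For comparison, a mirrored pool is created the same way; the pool name senra-mirror here is a placeholder:

# Two-way mirror across two disks
zpool create senra-mirror mirror sdb sdc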

For detailed usage, see Oracle's documentation:
zfs command ——> https://docs.oracle.com/cd/E26926_01/html/E29115/zfs-1m.html
zpool command ——> https://docs.oracle.com/cd/E26926_01/html/E29115/zpool-1m.html

ZFS usage walkthrough

Create virtual disks

Create four virtual disks of 64 MB each and attach each one to a loop device:

dd if=/dev/zero of=disk0.img bs=64M count=1;losetup /dev/loop0 ./disk0.img
dd if=/dev/zero of=disk1.img bs=64M count=1;losetup /dev/loop1 ./disk1.img
dd if=/dev/zero of=disk2.img bs=64M count=1;losetup /dev/loop2 ./disk2.img
dd if=/dev/zero of=disk3.img bs=64M count=1;losetup /dev/loop3 ./disk3.img

[root@localhost zfs_img]# dd if=/dev/zero of=disk0.img bs=64M count=1;losetup /dev/loop0 ./disk0.img
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.0515194 s, 1.3 GB/s
[root@localhost zfs_img]# dd if=/dev/zero of=disk1.img bs=64M count=1;losetup /dev/loop1 ./disk1.img
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.0504093 s, 1.3 GB/s
[root@localhost zfs_img]# dd if=/dev/zero of=disk2.img bs=64M count=1;losetup /dev/loop2 ./disk2.img
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.0509505 s, 1.3 GB/s
[root@localhost zfs_img]# dd if=/dev/zero of=disk3.img bs=64M count=1;losetup /dev/loop3 ./disk3.img
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.0532667 s, 1.3 GB/s

Create a ZFS pool

zpool create mypool raidz /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
zfs list

[root@localhost zfs_img]# zpool create mypool raidz /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
[root@localhost zfs_img]# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
mypool   105K  83.7M  32.9K  /mypool
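
zfs list shows the dataset view; the pool itself can be inspected with zpool list (an extra step not in the original transcript):

zpool list mypool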

View the properties of the pool (its root dataset)

zfs get all mypool

[root@localhost zfs_img]# zfs get all mypool
NAME    PROPERTY              VALUE                  SOURCE
mypool  type                  filesystem             -
mypool  creation              Sun Jan  6 17:42 2019  -
mypool  used                  105K                   -
mypool  available             83.7M                  -
mypool  referenced            32.9K                  -
mypool  compressratio         1.00x                  -
mypool  mounted               yes                    -
mypool  quota                 none                   default
mypool  reservation           none                   default
mypool  recordsize            128K                   default
mypool  mountpoint            /mypool                default
mypool  sharenfs              off                    default
mypool  checksum              on                     default
mypool  compression           off                    default
mypool  atime                 on                     default
mypool  devices               on                     default
mypool  exec                  on                     default
mypool  setuid                on                     default
mypool  readonly              off                    default
mypool  zoned                 off                    default
mypool  snapdir               hidden                 default
mypool  aclinherit            restricted             default
mypool  createtxg             1                      -
mypool  canmount              on                     default
mypool  xattr                 on                     default
mypool  copies                1                      default
mypool  version               5                      -
mypool  utf8only              off                    -
mypool  normalization         none                   -
mypool  casesensitivity       sensitive              -
mypool  vscan                 off                    default
mypool  nbmand                off                    default
mypool  sharesmb              off                    default
mypool  refquota              none                   default
mypool  refreservation        none                   default
mypool  guid                  15540430041160261066   -
mypool  primarycache          all                    default
mypool  secondarycache        all                    default
mypool  usedbysnapshots       0B                     -
mypool  usedbydataset         32.9K                  -
mypool  usedbychildren        71.8K                  -
mypool  usedbyrefreservation  0B                     -
mypool  logbias               latency                default
mypool  dedup                 off                    default
mypool  mlslabel              none                   default
mypool  sync                  standard               default
mypool  dnodesize             legacy                 default
mypool  refcompressratio      1.00x                  -
mypool  written               32.9K                  -
mypool  logicalused           31.5K                  -
mypool  logicalreferenced     12K                    -
mypool  volmode               default                default
mypool  filesystem_limit      none                   default
mypool  snapshot_limit        none                   default
mypool  filesystem_count      none                   default
mypool  snapshot_count        none                   default
mypool  snapdev               hidden                 default
mypool  acltype               off                    default
mypool  context               none                   default
mypool  fscontext             none                   default
mypool  defcontext            none                   default
mypool  rootcontext           none                   default
mypool  relatime              off                    default
mypool  redundant_metadata    all                    default
mypool  overlay               off                    default

Enable ZFS compression

Compression is disabled by default; it can be enabled with the following commands:

zfs create mypool/myzdev1
zfs list
zfs set compression=on mypool/myzdev1
zfs get all mypool/myzdev1

[root@localhost zfs_img]# zfs create mypool/myzdev1
[root@localhost zfs_img]# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
mypool           147K  83.6M  32.9K  /mypool
mypool/myzdev1  32.9K  83.6M  32.9K  /mypool/myzdev1
[root@localhost zfs_img]# zfs set compression=on mypool/myzdev1
[root@localhost zfs_img]# zfs get all mypool/myzdev1
NAME            PROPERTY              VALUE                  SOURCE
mypool/myzdev1  type                  filesystem             -
mypool/myzdev1  creation              Sun Jan  6 17:47 2019  -
mypool/myzdev1  used                  32.9K                  -
mypool/myzdev1  available             83.6M                  -
mypool/myzdev1  referenced            32.9K                  -
mypool/myzdev1  compressratio         1.00x                  -
mypool/myzdev1  mounted               yes                    -
mypool/myzdev1  quota                 none                   default
mypool/myzdev1  reservation           none                   default
mypool/myzdev1  recordsize            128K                   default
mypool/myzdev1  mountpoint            /mypool/myzdev1        default
mypool/myzdev1  sharenfs              off                    default
mypool/myzdev1  checksum              on                     default
mypool/myzdev1  compression           on                     local
mypool/myzdev1  atime                 on                     default
mypool/myzdev1  devices               on                     default
mypool/myzdev1  exec                  on                     default
mypool/myzdev1  setuid                on                     default
mypool/myzdev1  readonly              off                    default
mypool/myzdev1  zoned                 off                    default
mypool/myzdev1  snapdir               hidden                 default
mypool/myzdev1  aclinherit            restricted             default
mypool/myzdev1  createtxg             66                     -
mypool/myzdev1  canmount              on                     default
mypool/myzdev1  xattr                 on                     default
mypool/myzdev1  copies                1                      default
mypool/myzdev1  version               5                      -
mypool/myzdev1  utf8only              off                    -
mypool/myzdev1  normalization         none                   -
mypool/myzdev1  casesensitivity       sensitive              -
mypool/myzdev1  vscan                 off                    default
mypool/myzdev1  nbmand                off                    default
mypool/myzdev1  sharesmb              off                    default
mypool/myzdev1  refquota              none                   default
mypool/myzdev1  refreservation        none                   default
mypool/myzdev1  guid                  6186521002636645406    -
mypool/myzdev1  primarycache          all                    default
mypool/myzdev1  secondarycache        all                    default
mypool/myzdev1  usedbysnapshots       0B                     -
mypool/myzdev1  usedbydataset         32.9K                  -
mypool/myzdev1  usedbychildren        0B                     -
mypool/myzdev1  usedbyrefreservation  0B                     -
mypool/myzdev1  logbias               latency                default
mypool/myzdev1  dedup                 off                    default
mypool/myzdev1  mlslabel              none                   default
mypool/myzdev1  sync                  standard               default
mypool/myzdev1  dnodesize             legacy                 default
mypool/myzdev1  refcompressratio      1.00x                  -
mypool/myzdev1  written               32.9K                  -
mypool/myzdev1  logicalused           12K                    -
mypool/myzdev1  logicalreferenced     12K                    -
mypool/myzdev1  volmode               default                default
mypool/myzdev1  filesystem_limit      none                   default
mypool/myzdev1  snapshot_limit        none                   default
mypool/myzdev1  filesystem_count      none                   default
mypool/myzdev1  snapshot_count        none                   default
mypool/myzdev1  snapdev               hidden                 default
mypool/myzdev1  acltype               off                    default
mypool/myzdev1  context               none                   default
mypool/myzdev1  fscontext             none                   default
mypool/myzdev1  defcontext            none                   default
mypool/myzdev1  rootcontext           none                   default
mypool/myzdev1  relatime              off                    default
mypool/myzdev1  redundant_metadata    all                    default
mypool/myzdev1  overlay               off                    default
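
Note that compression=on selects the default algorithm. On pools that support it, lz4 is generally the better choice; this is a suggestion beyond the original text:

# Use lz4 instead of the default compression algorithm
zfs set compression=lz4 mypool/myzdev1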

Check the compression ratio

To see compression working, download a file into the compressed dataset, then query the ratio:

zfs get compressratio mypool

[root@localhost zfs_img]# wget -O /mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm https://mirrors.aliyun.com/centos/7/os/x86_64/Packages/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm
--2019-01-06 17:53:41--  https://mirrors.aliyun.com/centos/7/os/x86_64/Packages/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 183.232.156.251, 183.232.156.246, 183.232.156.245, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|183.232.156.251|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17581280 (17M) [application/x-redhat-package-manager]
Saving to: ‘/mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm’

100%[=========================================================>] 17,581,280  6.41MB/s   in 2.6s   

2019-01-06 17:53:44 (6.41 MB/s) - ‘/mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm’ saved [17581280/17581280]

[root@localhost zfs_img]# ls -al /mypool/myzdev1/
total 11472
drwxr-xr-x. 2 root root        3 Jan  6 17:53 .
drwxr-xr-x. 3 root root        3 Jan  6 17:47 ..
-rw-r--r--. 1 root root 17581280 Nov 12 22:30 kernel-debug-devel-3.10.0-957.el7.x86_64.rpm
[root@localhost zfs_img]# du -ah /mypool/myzdev1/
12M     /mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm
12M     /mypool/myzdev1/
[root@localhost zfs_img]# zfs get compressratio mypool
NAME    PROPERTY       VALUE  SOURCE
mypool  compressratio  1.49x  -
[root@localhost zfs_img]# 

Check the pool's status

zpool status mypool

[root@localhost zfs_img]# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop3   ONLINE       0     0     0

errors: No known data errors

Damage the ZFS pool

Simulate a disk failure by overwriting one of the backing image files with zeros:

dd if=/dev/zero of=/zfs_img/disk3.img bs=64M count=1

[root@localhost zfs_img]# dd if=/dev/zero of=/zfs_img/disk3.img bs=64M count=1
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.549323 s, 122 MB/s
[root@localhost zfs_img]# 

Scrub and check the pool

zpool scrub mypool
zpool status mypool

[root@localhost zfs_img]# zpool scrub mypool
[root@localhost zfs_img]# zpool status mypool
  pool: mypool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:00:55 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop3   UNAVAIL      0     0     0  corrupted data

errors: No known data errors
[root@localhost zfs_img]# wc -l /mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm 
113894 /mypool/myzdev1/kernel-debug-devel-3.10.0-957.el7.x86_64.rpm
[root@localhost zfs_img]# 

Repair the pool

When a device fails or is damaged, it can be swapped out with the replace command. Create a new backing disk first, then replace loop3 with loop4:

zpool replace mypool /dev/loop3 /dev/loop4

[root@localhost zfs_img]# dd if=/dev/zero of=disk4.img bs=64M count=1;losetup /dev/loop4 ./disk4.img
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 1.24482 s, 53.9 MB/s
[root@localhost zfs_img]# zpool replace mypool /dev/loop3 /dev/loop4
[root@localhost zfs_img]# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: resilvered 3.79M in 0h0m with 0 errors on Sun Jan  6 18:04:33 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0

errors: No known data errors
[root@localhost zfs_img]# zpool scrub mypool 
[root@localhost zfs_img]# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:05:17 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0

errors: No known data errors

Add a hot spare

Add a new disk to the pool as a hot spare:

zpool add mypool spare /dev/loop5

[root@localhost ~]# zpool status
  pool: mypool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:05:17 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0

errors: No known data errors
[root@localhost ~]# dd if=/dev/zero of=disk5.img bs=64M count=1;losetup /dev/loop5 ./disk5.img 
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 1.46401 s, 45.8 MB/s
[root@localhost ~]# zpool add mypool spare /dev/loop5 
[root@localhost ~]# zpool status
  pool: mypool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:05:17 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0
        spares
          loop5     AVAIL   

errors: No known data errors
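
A spare sits idle until a device fails. As far as I know, automatic activation on Linux relies on the ZFS Event Daemon (zed); the spare can also be attached by hand. A sketch, assuming loop4 had failed:

# Manually swap a failed device for the spare
zpool replace mypool /dev/loop4 /dev/loop5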

Remove a disk from the pool

zpool remove mypool /dev/loop5

[root@localhost ~]# zpool status
  pool: mypool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:05:17 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0
        spares
          loop5     AVAIL   

errors: No known data errors
[root@localhost ~]# zpool remove mypool /dev/loop5 
[root@localhost ~]# zpool status
  pool: mypool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan  6 18:05:17 2019
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            loop0   ONLINE       0     0     0
            loop1   ONLINE       0     0     0
            loop2   ONLINE       0     0     0
            loop4   ONLINE       0     0     0

errors: No known data errors

View pool I/O statistics

zpool iostat -v mypool

[root@localhost ~]# zpool iostat -v mypool  
              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      15.3M   209M      0      3  16.8K  19.9K
  raidz1    15.3M   209M      0      3  16.8K  19.9K
    loop0       -      -      0      0  5.01K  5.41K
    loop1       -      -      0      0  5.01K  5.24K
    loop2       -      -      0      0  5.06K  5.41K
    loop4       -      -      0      0  3.50K  7.82K
----------  -----  -----  -----  -----  -----  -----
[root@localhost ~]# 

Change a dataset's default mount point

By default, datasets are mounted under the pool name starting at the root (/). The default mount point can be changed with the following commands:

zfs umount -a
zfs set mountpoint=/testpoint/myzdev1 mypool/myzdev1
zfs mount -a

[root@localhost ~]# df
Filesystem              1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root  17811456 1828584  15982872  11% /
devtmpfs                  1918800       0   1918800   0% /dev
tmpfs                     1930760       0   1930760   0% /dev/shm
tmpfs                     1930760   11956   1918804   1% /run
tmpfs                     1930760       0   1930760   0% /sys/fs/cgroup
/dev/sda1                 1038336  193792    844544  19% /boot
tmpfs                      386152       0    386152   0% /run/user/0
mypool                      74112       0     74112   0% /mypool
mypool/myzdev1              85632   11520     74112  14% /mypool/myzdev1
[root@localhost ~]# zfs umount -a
[root@localhost ~]# zfs set mountpoint=/testpoint/myzdev1 mypool/myzdev1
[root@localhost ~]# zfs mount -a
[root@localhost ~]# df
Filesystem              1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root  17811456 1828584  15982872  11% /
devtmpfs                  1918800       0   1918800   0% /dev
tmpfs                     1930760       0   1930760   0% /dev/shm
tmpfs                     1930760   11956   1918804   1% /run
tmpfs                     1930760       0   1930760   0% /sys/fs/cgroup
/dev/sda1                 1038336  193792    844544  19% /boot
tmpfs                      386152       0    386152   0% /run/user/0
mypool                      74112       0     74112   0% /mypool
mypool/myzdev1              85632   11520     74112  14% /testpoint/myzdev1
[root@localhost ~]# 
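
Instead of scanning df output, the mount point can also be confirmed directly from the property (a small extra check, not in the original transcript):

zfs get mountpoint mypool/myzdev1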

Create a snapshot

zfs snapshot mypool/myzdev1@2019-1-6
zfs list -t snapshot

[root@localhost ~]# echo "text1">/testpoint/myzdev1/snapshottest.txt
[root@localhost ~]# cat /testpoint/myzdev1/snapshottest.txt
text1
[root@localhost ~]# zfs snapshot mypool/myzdev1@2019-1-6
[root@localhost ~]# zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
mypool/myzdev1@2019-1-6     0B      -  11.2M  -
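
A snapshot can also be turned into a writable clone; the clone name below is a placeholder:

# Create a writable file system from the snapshot
zfs clone mypool/myzdev1@2019-1-6 mypool/myzdev1_clone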

Roll back a snapshot

zfs rollback mypool/myzdev1@2019-1-6

[root@localhost ~]# echo "text2">/testpoint/myzdev1/snapshottest.txt
[root@localhost ~]# cat /testpoint/myzdev1/snapshottest.txt         
text2
[root@localhost ~]# zfs rollback mypool/myzdev1@2019-1-6
[root@localhost ~]# cat /testpoint/myzdev1/snapshottest.txt
text1
[root@localhost ~]# 

Destroy the pool

zpool destroy mypool

[root@localhost ~]# zpool destroy mypool
[root@localhost ~]# zpool status
no pools available
[root@localhost ~]# cat /testpoint/myzdev1/snapshottest.txt
cat: /testpoint/myzdev1/snapshottest.txt: No such file or directory
[root@localhost ~]# 
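
Since this walkthrough used loop devices, they can be detached and the backing files removed once the pool is destroyed. A cleanup sketch; adjust the device and file list to what you actually attached:

# Detach the loop devices and delete the image files
losetup -d /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop4 /dev/loop5
rm -f /zfs_img/disk*.img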

Closing notes

This article only covers installing ZFS on CentOS 7 and some basic usage. For a comprehensive, in-depth treatment, read the official documentation pages linked above.
