Docker Data Persistence: the Convoy Container Storage Plugin

I. Introduction to Convoy

Convoy is a Docker volume plugin written in Go that supports multiple storage backends and provides snapshot, backup, and restore capabilities. Supported backend storage types include Device Mapper, NFS, DigitalOcean, and Amazon EBS, among others.

II. Lab Plan

For this lab we use two CentOS 7.x machines: one with Docker already installed, and one running the NFS network file sharing service. The machine with Docker acts as the NFS client; the other one acts as the NFS server.

1. Lab Environment

host1 (Docker installed):

IP: 192.168.10.11, firewall disabled

OS: CentOS Linux release 7.6.1810 (Core)

Docker version: Docker version 18.09.2, build 6247962

host2 (NFS server):

IP: 192.168.10.15, firewall disabled

OS: CentOS Linux release 7.6.1810 (Core)

2. Prerequisites

The two hosts must be able to reach each other over the network, and Docker needs to be installed on host1 (the detailed installation steps are not repeated here; a brief sketch follows below). Working knowledge of docker and docker-compose is also assumed.
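For reference only, a minimal sketch of installing Docker CE on CentOS 7 from Docker's official yum repository (this assumes the host can reach download.docker.com; any other supported installation method works just as well):

# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl enable --now docker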

III. NFS Network File Sharing Service

Note:
Unless stated otherwise, the following steps are performed on host2.

1. Installation

On CentOS/RHEL 6:

# yum install -y rpcbind
# yum install -y nfs-utils

On CentOS/RHEL 7:

# yum install -y nfs-utils

This is because rpcbind is already installed by default on CentOS 7.
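If you want to confirm that on your own system, a quick optional check (not required for the remaining steps):

# rpm -q rpcbind nfs-utils
# systemctl status rpcbind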

2. Create the Shared Directory

# mkdir /data

3. Configure the Export

# vim /etc/exports
/data/ 192.168.10.0/24(rw,no_root_squash,sync)
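If the NFS service is already running when /etc/exports is edited, the export table can be reloaded without restarting anything (a standard nfs-utils command, shown here as an optional step):

# exportfs -rav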

4. NFS Permission Options

(1) Regular users

With all_squash set:

All client users are mapped to the anonymous user (nfsnobody).

With no_all_squash set:

Client users are mapped to the server-side user with the same UID, so the client should have users whose UIDs match those on the server; otherwise they are also mapped to nfsnobody. root is the exception, because root_squash is the default option unless no_root_squash is specified.

(2) The root user

With root_squash set:

A client accessing the NFS server as root is mapped to the nfsnobody user.

With no_root_squash set:

A client accessing the NFS server as root is mapped to root. Other users are still mapped to the server user with the matching UID, because no_all_squash is the default option.

Summary of NFS export options (a sample /etc/exports line combining several of them follows this list):
ro: export the directory read-only
rw: export the directory read-write
all_squash: map all accessing users to the anonymous user/group
no_all_squash (default): first try to match the accessing user to a local user; map to the anonymous user/group only if no match is found
root_squash (default): map the accessing root user to the anonymous user/group
no_root_squash: let the accessing root user keep root privileges
anonuid=: UID of the local user used for anonymous access, default nfsnobody (65534)
anongid=: GID of the local group used for anonymous access, default nfsnobody (65534)
secure (default): only accept client connections from TCP/IP ports below 1024
insecure: allow client connections from ports above 1024
sync: write data to the buffer cache and to disk synchronously; slower, but guarantees consistency
async: keep data in the buffer cache and write it to disk only when necessary
wdelay (default): look for related write operations and batch them together, which improves efficiency
no_wdelay: perform write operations immediately; should be used together with sync
subtree_check (default): if the exported directory is a subdirectory, the NFS server also checks the permissions of its parent directory
no_subtree_check: skip the parent-directory check even for subdirectory exports, which improves efficiency
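As mentioned above, here is a purely illustrative /etc/exports entry combining several of these options (the paths and option choices are examples, not part of this lab):

/data/public  192.168.10.0/24(ro,all_squash,anonuid=65534,anongid=65534,sync)
/data/upload  192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)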

5. Start the Services

(1) Start the rpcbind service

CentOS/RHEL 6:
# service rpcbind start
CentOS/RHEL 7:
# systemctl start rpcbind

(2) Start the NFS service

CentOS/RHEL 6:
# service nfs start
CentOS/RHEL 7:
# systemctl start nfs
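To have both services start again after a reboot on CentOS 7 (optional; nfs-server is the unit behind the nfs alias used above):

# systemctl enable rpcbind
# systemctl enable nfs-server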

(3) Check the registered RPC services

# rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  44241  status
    100024    1   tcp  59150  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  35427  nlockmgr
    100021    3   udp  35427  nlockmgr
    100021    4   udp  35427  nlockmgr
    100021    1   tcp  36870  nlockmgr
    100021    3   tcp  36870  nlockmgr
    100021    4   tcp  36870  nlockmgr

(4) List the configured exports

First check the exported directory locally on the NFS server itself.

# showmount -e localhost
Export list for localhost:
/data 192.168.10.0/24

Note:
Unless stated otherwise, the following steps are performed on host1.

6. Mount and Use the Export on the Client

(1) Verify that the exports can be listed

As on the server, use showmount with the server's IP to check the exports.

# showmount -e 192.168.10.15
Export list for 192.168.10.15:
/data 192.168.10.0/24

(2) Create a mount point on the client

# mkdir /nfs

(3) Mount the export

# mount -t nfs 192.168.10.15:/data/ /nfs

(4) Check the mount

# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   56G   52G  3.8G  94% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   35M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda5               1014M  234M  781M  24% /boot
/dev/sda1                 98M   36M   63M  36% /boot/efi
tmpfs                    790M   40K  790M   1% /run/user/1000
tmpfs                    790M     0  790M   0% /run/user/0
192.168.10.15:/data       13G  1.7G   11G  14% /nfs
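To make the mount survive a reboot, an entry like the following can be added to /etc/fstab on host1 (a sketch; the _netdev option makes the system wait for the network before mounting):

# echo "192.168.10.15:/data  /nfs  nfs  defaults,_netdev  0 0" >> /etc/fstab
# mount -a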

IV. The Convoy Docker Volume Plugin

Project page: https://github.com/rancher/convoy

Note:
Unless stated otherwise, the following steps are performed on host1.

1. Download and Install

Download the release tarball:
# wget https://github.com/rancher/convoy/releases/download/v0.5.2/convoy.tar.gz
Extract it:
# tar -xvf convoy.tar.gz 
convoy/
convoy/convoy-pdata_tools
convoy/SHA1SUMS
convoy/convoy
Enter the extracted directory and list its contents:
# cd convoy/
# ls -l
total 39728
-rwxr-xr-x 1 root root 17607008 Dec  8 10:32 convoy
-rwxr-xr-x 1 root root 23068627 Dec  8 10:32 convoy-pdata_tools
-rw-r--r-- 1 root root      124 Dec  8 10:32 SHA1SUMS

The extracted directory contains two executables and a SHA1SUMS checksum file. You can inspect the checksums and compare them with the ones published by the project to make sure the files have not been tampered with.

# cat SHA1SUMS 
569b079b2dd867f659b86ac8beda2ad6d348d4aa *convoy/convoy-pdata_tools
6bf6eb6b3b4f042ed2088367e9eea956e9154b9f *convoy/convoy
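If you do want to verify the files, note that the paths in SHA1SUMS are relative to the directory above; one way to run the check (a sketch, executed from inside the convoy directory) is:

# (cd .. && sha1sum -c convoy/SHA1SUMS)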

We skip the integrity check here and install directly. Installation is also very simple: just copy the executables onto the PATH:

# cp convoy convoy-pdata_tools /usr/local/bin/

Note:
Relative paths are used here because we are already inside the extracted directory, so cp operates on the executables directly without any directory prefix.

2. Configure Docker to Use the Convoy Volume Plugin

The steps below assume Docker is already installed; otherwise they will have no effect.

(1) Create the plugin configuration directory

# mkdir /etc/docker/plugins/

(2) Create the Convoy plugin spec file

# bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

3. Starting and Stopping the Convoy Daemon

(1) Start the Convoy daemon

# convoy daemon --drivers vfs --driver-opts vfs.path=/nfs &
[1] 20318
[root@ops ~]# DEBU[0000] Found existing config. Ignoring command line opts, loading config from /var/lib/rancher/convoy  pkg=daemon
DEBU[0000]                                               driver=vfs driver_opts=map[vfs.path:/nfs] event=init pkg=daemon reason=prepare root=/var/lib/rancher/convoy
DEBU[0000]                                               driver=vfs event=init pkg=daemon reason=complete
DEBU[0000] Registering GET, /info                        pkg=daemon
DEBU[0000] Registering GET, /volumes/list                pkg=daemon
DEBU[0000] Registering GET, /volumes/                    pkg=daemon
DEBU[0000] Registering GET, /snapshots/                  pkg=daemon
DEBU[0000] Registering GET, /backups/list                pkg=daemon
DEBU[0000] Registering GET, /backups/inspect             pkg=daemon
DEBU[0000] Registering POST, /volumes/mount              pkg=daemon
DEBU[0000] Registering POST, /volumes/umount             pkg=daemon
DEBU[0000] Registering POST, /snapshots/create           pkg=daemon
DEBU[0000] Registering POST, /backups/create             pkg=daemon
DEBU[0000] Registering POST, /volumes/create             pkg=daemon
DEBU[0000] Registering DELETE, /backups                  pkg=daemon
DEBU[0000] Registering DELETE, /volumes/                 pkg=daemon
DEBU[0000] Registering DELETE, /snapshots/               pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Remove  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Mount  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Path  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Get  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Capabilities  pkg=daemon
DEBU[0000] Registering plugin handler POST, /Plugin.Activate  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Create  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.Unmount  pkg=daemon
DEBU[0000] Registering plugin handler POST, /VolumeDriver.List  pkg=daemon
WARN[0000] Remove previous sockfile at /var/run/convoy/convoy.sock  pkg=daemon

After the start command is executed the output above is printed and the terminal appears to hang; simply press Enter to get the prompt back and continue using the terminal.

Note:
The daemon argument starts convoy as a daemon process.
--drivers specifies the backend storage type: Device Mapper corresponds to devicemapper and NFS corresponds to vfs; for other types, see the bottom of the project page linked above.

(2) Stop the Convoy daemon

Because convoy does not provide a way to stop the daemon, the only option is to kill the process.

Find the PID:

# ps -ef|grep convoy
root     15461     1  0 Feb26 ?        00:00:00 convoy daemon --drivers vfs --driver-opts vfs.path=/nfs
root     21438   534  0 15:37 pts/1    00:00:00 grep --color=auto convoy

Kill the process:

# kill -9 15461

Or combine it with awk to kill it in one line:

# kill -9 $(ps -ef|grep convoy|awk -F" " 'NR==1{print $2}')
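Killing the process by hand works, but for anything long-running it may be more convenient to let systemd supervise the daemon, which also gives you a proper stop command. The unit below is only a sketch under this lab's assumptions (binary in /usr/local/bin, NFS mounted at /nfs); it is not something convoy ships with. Alternatively, pkill -f "convoy daemon" is a slightly less fragile one-liner than the grep/awk pipeline above.

# cat > /etc/systemd/system/convoy.service <<'EOF'
[Unit]
Description=Convoy volume plugin daemon
After=network-online.target remote-fs.target docker.service

[Service]
ExecStart=/usr/local/bin/convoy daemon --drivers vfs --driver-opts vfs.path=/nfs
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable --now convoy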

4. Using Convoy

(1) Creating, listing, and deleting volumes

Create a volume:

convoy create <volume_name>

For example, to create a volume named docker_test:

# convoy create docker_test
docker_test

Note:
After the daemon starts, a config directory is generated automatically inside the directory given by vfs.path. Do not delete it.

If the config directory inside the vfs.path directory is deleted by mistake, or the system never generated it, creating a volume fails with an error like this:

# convoy create test
DEBU[0038] Calling: POST, /volumes/create, request: POST, /v1/volumes/create  pkg=daemon
DEBU[0038]                                               event=create object=volume opts=map[VolumeIOPS:0 PrepareForVM:false Size:0 BackupURL: VolumeName:test VolumeType: EndpointURL: VolumeDriverID:] pkg=daemon reason=prepare volume=test
ERRO[0038] Handler for POST /volumes/create returned error: Couldn't get flock. Error: open /nfs/config/vfs_volume_test.json.lock: no such file or directory  pkg=daemon
ERRO[0000] Error response from server, Couldn't get flock. Error: open /nfs/config/vfs_volume_test.json.lock: no such file or directory
 
{
    "Error": "Error response from server, Couldn't get flock. Error: open /nfs/config/vfs_volume_test.json.lock: no such file or directory\n"
}

When this error occurs, create the config directory manually inside the vfs.path directory; after that, the volume-creation command works without errors.
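For example, with the daemon started as above (vfs.path=/nfs), recreating the directory is a one-liner:

# mkdir -p /nfs/config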

List all volumes on the system:

convoy list

For example, list all volumes on the system; the result is printed as JSON:

# convoy list
{
    "docker_test": {
        "Name": "docker_test",
        "Driver": "vfs",
        "MountPoint": "",
        "CreatedTime": "Wed Feb 27 10:37:56 +0800 2019",
        "DriverInfo": {
            "Driver": "vfs",
            "MountPoint": "",
            "Path": "/nfs/docker_test",
            "PrepareForVM": "false",
            "Size": "0",
            "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
            "VolumeName": "docker_test"
        },
        "Snapshots": {}
    },
    "test": {
        "Name": "test",
        "Driver": "vfs",
        "MountPoint": "",
        "CreatedTime": "Wed Feb 27 10:37:11 +0800 2019",
        "DriverInfo": {
            "Driver": "vfs",
            "MountPoint": "",
            "Path": "/nfs/test",
            "PrepareForVM": "false",
            "Size": "0",
            "VolumeCreatedAt": "Wed Feb 27 10:37:11 +0800 2019",
            "VolumeName": "test"
        },
        "Snapshots": {}
    }
}

Inspect a specific volume:

convoy inspect <volume_name>

For example, to view the docker_test volume just created (the example below happens to query it through Docker, which shows the volume as managed by the convoy driver):

# docker inspect docker_test
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "convoy",
        "Labels": null,
        "Mountpoint": "",
        "Name": "docker_test",
        "Options": null,
        "Scope": "local"
    }
]

Delete a volume:

convoy delete <volume_name>

For example, to delete the test volume:

# convoy delete test

List the volumes again after the deletion:

# convoy list
{
    "docker_test": {
        "Name": "docker_test",
        "Driver": "vfs",
        "MountPoint": "",
        "CreatedTime": "Wed Feb 27 10:37:56 +0800 2019",
        "DriverInfo": {
            "Driver": "vfs",
            "MountPoint": "",
            "Path": "/nfs/docker_test",
            "PrepareForVM": "false",
            "Size": "0",
            "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
            "VolumeName": "docker_test"
        },
        "Snapshots": {}
    }
}

The output above shows that the test volume has been deleted successfully.

(2) Creating, inspecting, and deleting snapshots

Create a snapshot:

convoy snapshot create <volume_name> --name <snapshot_name>

For example, to take a snapshot of the docker_test volume named docker_test-01:

# convoy snapshot create docker_test --name docker_test-01
docker_test-01

Inspect a snapshot:

convoy snapshot inspect <snapshot_name>

For example, to view the details of the docker_test-01 snapshot:

# convoy snapshot inspect docker_test-01
{
    "Name": "docker_test-01",
    "VolumeName": "docker_test",
    "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
    "CreatedTime": "Wed Feb 27 11:54:53 +0800 2019",
    "DriverInfo": {
        "Driver": "vfs",
        "FilePath": "/var/lib/rancher/convoy/vfs/snapshots/docker_test_docker_test-01.tar.gz",
        "SnapshotCreatedAt": "Wed Feb 27 11:54:53 +0800 2019",
        "SnapshotName": "docker_test-01",
        "VolumeUUID": "docker_test"
    }
}

Delete a snapshot:

convoy snapshot delete <snapshot_name>

For example, to delete the docker_test-01 snapshot:

# convoy snapshot delete docker_test-01

(3) Backing up snapshots; listing, restoring from, and deleting backups

Create a backup of a snapshot:

convoy backup create <snapshot_name> --dest vfs:///<backup_dir>

For example, to back up the docker_test-01 snapshot:

# convoy backup create docker_test-01 --dest vfs:///backup
vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test

List snapshot backups:

convoy backup list <dest_url>

For example, to view the backup created in the previous step:

# convoy backup list vfs:///backup
{
    "vfs:///backup?backup=backup-2bd89ea77a144dbc\u0026volume=docker_test": {
        "BackupName": "backup-2bd89ea77a144dbc",
        "BackupURL": "vfs:///backup?backup=backup-2bd89ea77a144dbc\u0026volume=docker_test",
        "CreatedTime": "Wed Feb 27 15:59:07 +0800 2019",
        "DriverName": "vfs",
        "SnapshotCreatedAt": "Wed Feb 27 15:58:44 +0800 2019",
        "SnapshotName": "docker_test-02",
        "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
        "VolumeName": "docker_test",
        "VolumeSize": "0"
    },
    "vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test": {
        "BackupName": "backup-c0fd6c56ac6c4b03",
        "BackupURL": "vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test",
        "CreatedTime": "Wed Feb 27 13:40:24 +0800 2019",
        "DriverName": "vfs",
        "SnapshotCreatedAt": "Wed Feb 27 11:54:53 +0800 2019",
        "SnapshotName": "docker_test-01",
        "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
        "VolumeName": "docker_test",
        "VolumeSize": "0"
    }
}

The output shows that there are currently two backups under the dest_url vfs:///backup, with their details printed to the terminal.

Inspect one specific backup:

convoy backup inspect <backup_url>

Tip:
<backup_url> here means the URL of one specific backup. It is printed to the terminal when the backup is created successfully, and it can also be found with the list command from the previous step: in the returned JSON, every key of the form "vfs:///backup?backup=backup-xxx..." is such a <backup_url>. Use the details in the JSON, together with the backup creation time, to work out which one you need to operate on.
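Because these URLs contain characters the shell may treat specially (the ? and the backslash escape), it helps to keep them quoted, for example in a variable, so they are passed to convoy exactly as printed (just a shell convenience, not a convoy feature; the ID below is the one from the earlier example):

# BACKUP_URL='vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test'
# convoy backup inspect "$BACKUP_URL"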

For example, to inspect the backup made above for the docker_test-01 snapshot:

# convoy backup inspect vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test  
# Note: the above is a single command on one line
{
    "BackupName": "backup-c0fd6c56ac6c4b03",
    "BackupURL": "vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test",
    "CreatedTime": "Wed Feb 27 13:40:24 +0800 2019",
    "DriverName": "vfs",
    "SnapshotCreatedAt": "Wed Feb 27 11:54:53 +0800 2019",
    "SnapshotName": "docker_test-01",
    "VolumeCreatedAt": "Wed Feb 27 10:37:56 +0800 2019",
    "VolumeName": "docker_test",
    "VolumeSize": "0"
}

Note:
Both methods show backup details, but they differ. list takes the dest_url that was given when the backup was created (convoy creates the matching directory on the system at that time, /backup in this case), and it returns the details of every backup under that URL, i.e. everything under vfs:///backup. inspect takes a backup_url and returns the details of one specific backup rather than all of them; passing vfs:///backup to inspect produces an error. The relationship is easiest to picture like this: the dest_url (vfs:///backup) is a collection, and a backup_url (vfs:///backup?backup=backup-c0fd6c56ac6c4b03\u0026volume=docker_test) is one element of that collection.
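Because the backup URLs are simply the top-level keys of the JSON returned by convoy backup list, they can also be extracted mechanically, for example with jq (assuming jq is installed; purely an optional convenience):

# convoy backup list vfs:///backup | jq -r 'keys[]'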

Restoring from a backup:

First create a file a.txt inside the docker_test volume:

# cd /nfs/docker_test
# echo "hello world\!" > a.txt
# cat a.txt
hello world\!

Create a snapshot named aaa:

# convoy snapshot create docker_test --name aaa
DEBU[0354] Calling: POST, /snapshots/create, request: POST, /v1/snapshots/create  pkg=daemon
DEBU[0354]                                               event=create object=snapshot pkg=daemon reason=prepare snapshot=aaa volume=docker_test
DEBU[0354]                                               event=create object=snapshot pkg=daemon reason=complete snapshot=aaa volume=docker_test
DEBU[0354] Response:  aaa                                pkg=daemon
aaa

Back up the aaa snapshot:

# convoy backup create aaa --dest vfs:///backup
DEBU[0466] Calling: POST, /backups/create, request: POST, /v1/backups/create  pkg=daemon
DEBU[0466]                                               dest_url=vfs:///backup driver=vfs endpoint_url= event=backup object=snapshot pkg=daemon reason=prepare snapshot=aaa volume=docker_test
DEBU[0466] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[0466]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[0466]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[0466] Creating backup                               event=backup filepath=/var/lib/rancher/convoy/vfs/snapshots/docker_test_aaa.tar.gz object=snapshot pkg=objectstore reason=start snapshot=aaa
DEBU[0466]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[0466]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[0466] Created backup                                event=backup object=snapshot pkg=objectstore reason=complete snapshot=aaa
DEBU[0466]                                               dest_url=vfs:///backup driver=vfs endpoint_url= event=backup object=snapshot pkg=daemon reason=complete snapshot=aaa volume=docker_test
DEBU[0466] Response:  vfs:///backup?backup=backup-f2c4c175df4d45b5\u0026volume=docker_test  pkg=daemon
vfs:///backup?backup=backup-f2c4c175df4d45b5\u0026volume=docker_test

Modify a.txt:

# echo "123456abcde" > a.txt
# cat a.txt 
123456abcde

Restore the file to its state before the modification, i.e. restore from the snapshot backup:

Syntax:
# convoy create <new_volume_name> --backup <backup_url>
Restore from the backup to the pre-modification state:
# convoy create docker_test1 --backup vfs:///backup?backup=backup-f2c4c175df4d45b5\u0026volume=docker_test
DEBU[2131] Calling: POST, /volumes/create, request: POST, /v1/volumes/create  pkg=daemon
DEBU[2131]                                               event=create object=volume opts=map[VolumeType: Size:0 BackupURL:vfs:///backup?backup=backup-f2c4c175df4d45b5&volume=docker_test VolumeName:docker_test1 VolumeDriverID: EndpointURL: VolumeIOPS:0 PrepareForVM:false] pkg=daemon reason=prepare volume=docker_test1
DEBU[2131] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[2131] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[2131]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[2131] Created volume                                event=create object=volume pkg=daemon reason=complete volume=docker_test1
DEBU[2131] Response:  docker_test1                       pkg=daemon
docker_test1

Note:
Restoring from a snapshot backup cannot write directly back into the original docker_test volume; what the operation actually does is create a brand-new volume from the backup.

Trying to restore the backup into an existing volume fails with an error, as shown below:

# convoy create docker_test --backup vfs:///backup?backup=backup-f2c4c175df4d45b5\u0026volume=docker_test
DEBU[2096] Calling: POST, /volumes/create, request: POST, /v1/volumes/create  pkg=daemon
ERRO[2096] Handler for POST /volumes/create returned error: Volume docker_test already exists   pkg=daemon
ERRO[0000] Error response from server, Volume docker_test already exists 
 
{
    "Error": "Error response from server, Volume docker_test already exists \n"
}

Check the result of the restore:

# cd /nfs/docker_test1/
# ll
total 4
-rw-r--r-- 1 root root 14 Mar  6 11:22 a.txt
# cat a.txt 
hello world\!
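The restore produced a new volume rather than touching docker_test itself, so if you want the original volume to contain the restored data again, one straightforward option (plain file copying, not a convoy command) is:

# cp -a /nfs/docker_test1/. /nfs/docker_test/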

Deleting a backup:

Syntax:
convoy backup delete <backup_url>
To delete the backup created earlier for snapshot aaa:
# convoy backup delete vfs:///backup?backup=backup-f2c4c175df4d45b5\u0026volume=docker_test

DEBU[11513] Calling: DELETE, /backups, request: DELETE, /v1/backups  pkg=daemon
DEBU[11513] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[11513] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[11513]                                               dest_url=vfs:///backup?backup=backup-f2c4c175df4d45b5&volume=docker_test driver=vfs endpoint_url= event=remove object=snapshot pkg=daemon reason=prepare
DEBU[11513] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[11513] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[11513]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[11513] Removed convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f2c4c175df4d45b5.cfg on objectstore  pkg=objectstore
DEBU[11513]                                               dest_url=vfs:///backup?backup=backup-f2c4c175df4d45b5&volume=docker_test driver=vfs endpoint_url= event=remove object=snapshot pkg=daemon reason=complete

(4) Some questions

Where are snapshots actually stored?

First inspect a snapshot:

# convoy snapshot inspect test-01
DEBU[16148] Calling: GET, /snapshots/, request: GET, /v1/snapshots/  pkg=daemon
{
    "Name": "test-01",
    "VolumeName": "test",
    "VolumeCreatedAt": "Wed Mar 06 11:19:53 +0800 2019",
    "CreatedTime": "Wed Mar 06 15:02:31 +0800 2019",
    "DriverInfo": {
        "Driver": "vfs",
        "FilePath": "/var/lib/rancher/convoy/vfs/snapshots/test_test-01.tar.gz",
        "SnapshotCreatedAt": "Wed Mar 06 15:02:31 +0800 2019",
        "SnapshotName": "test-01",
        "VolumeUUID": "test"
    }
}

Note:
This snapshot was created separately and has no relation to the earlier ones. If you have not created a snapshot named test-01, this command will fail; substitute the name of a snapshot you created yourself.

In the output above, the value of FilePath is the snapshot itself, which we can look at directly:

# ll /var/lib/rancher/convoy/vfs/snapshots/test_test-01.tar.gz
-rw-r--r-- 1 root root 128 Mar  6 15:02 /var/lib/rancher/convoy/vfs/snapshots/test_test-01.tar.gz

Every snapshot we create is stored under /var/lib/rancher/convoy/vfs/snapshots/. If you no longer remember a snapshot's name, list this directory to see everything that has been created:

# cd /var/lib/rancher/convoy/vfs/snapshots/
# ll
total 16
-rw-r--r-- 1 root root 185 Mar  6 11:24 docker_test_aaa.tar.gz
-rw-r--r-- 1 root root 142 Feb 27 11:54 docker_test_docker_test-01.tar.gz
-rw-r--r-- 1 root root 182 Feb 27 15:58 docker_test_docker_test-02.tar.gz
-rw-r--r-- 1 root root 128 Mar  6 15:02 test_test-01.tar.gz

If a volume and its snapshots have been deleted, can the data still be restored from a backup of those snapshots? (In other words, are backups created from now-deleted snapshots still usable for restores?)

First, create a snapshot of docker_test:

# convoy snapshot create docker_test --name aaa
DEBU[17187] Calling: POST, /snapshots/create, request: POST, /v1/snapshots/create  pkg=daemon
DEBU[17187]                                               event=create object=snapshot pkg=daemon reason=prepare snapshot=aaa volume=docker_test
DEBU[17187]                                               event=create object=snapshot pkg=daemon reason=complete snapshot=aaa volume=docker_test
DEBU[17187] Response:  aaa                                pkg=daemon
aaa

Then back up the snapshot:

# convoy backup create aaa --dest vfs:///backup
DEBU[17254] Calling: POST, /backups/create, request: POST, /v1/backups/create  pkg=daemon
DEBU[17254]                                               dest_url=vfs:///backup driver=vfs endpoint_url= event=backup object=snapshot pkg=daemon reason=prepare snapshot=aaa volume=docker_test
DEBU[17254] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[17254]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[17254]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[17254] Creating backup                               event=backup filepath=/var/lib/rancher/convoy/vfs/snapshots/docker_test_aaa.tar.gz object=snapshot pkg=objectstore reason=start snapshot=aaa
DEBU[17254]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f9f54dab546449da.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[17254]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f9f54dab546449da.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[17254] Created backup                                event=backup object=snapshot pkg=objectstore reason=complete snapshot=aaa
DEBU[17254]                                               dest_url=vfs:///backup driver=vfs endpoint_url= event=backup object=snapshot pkg=daemon reason=complete snapshot=aaa volume=docker_test
DEBU[17254] Response:  vfs:///backup?backup=backup-f9f54dab546449da\u0026volume=docker_test  pkg=daemon
vfs:///backup?backup=backup-f9f54dab546449da\u0026volume=docker_test

Delete the docker_test volume:

# convoy delete docker_test
DEBU[18130] Calling: DELETE, /volumes/, request: DELETE, /v1/volumes/  pkg=daemon
DEBU[18130]                                               event=delete object=volume pkg=daemon reason=prepare volume=docker_test
DEBU[18130] Cleaning up /nfs/docker_test for volume docker_test  pkg=vfs
DEBU[18130]                                               event=delete object=volume pkg=daemon reason=complete volume=docker_test

Delete the aaa snapshot:

# convoy snapshot delete aaa
DEBU[17327] Calling: DELETE, /snapshots/, request: DELETE, /v1/snapshots/  pkg=daemon
DEBU[17327]                                               event=delete object=snapshot pkg=daemon reason=prepare snapshot=aaa volume=docker_test
DEBU[17327]                                               event=delete object=snapshot pkg=daemon reason=complete snapshot=aaa volume=docker_test

Note:
A snapshot can also be deleted by removing its file from the snapshot directory with ordinary system commands. The same approach can be used for volumes, with one difference: after a volume is deleted that way, the volume-listing command still shows its metadata.

Now try to restore from the backup of snapshot aaa (create a new volume named aaa from the backup):

# convoy create aaa --backup vfs:///backup?backup=backup-f9f54dab546449da\u0026volume=docker_test
DEBU[17332] Calling: POST, /volumes/create, request: POST, /v1/volumes/create  pkg=daemon
DEBU[17332]                                               event=create object=volume opts=map[Size:0 VolumeName:aaa BackupURL:vfs:///backup?backup=backup-f9f54dab546449da&volume=docker_test EndpointURL: VolumeDriverID: VolumeType: VolumeIOPS:0 PrepareForVM:false] pkg=daemon reason=prepare volume=aaa
DEBU[17332] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[17332] Loaded driver for %vvfs:///backup             pkg=vfs
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/volume.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f9f54dab546449da.cfg kind=vfs object=config pkg=objectstore reason=start
DEBU[17332]                                               filepath=convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-f9f54dab546449da.cfg kind=vfs object=config pkg=objectstore reason=complete
DEBU[17332] Created volume                                event=create object=volume pkg=daemon reason=complete volume=aaa
DEBU[17332] Response:  aaa                                pkg=daemon
aaa

This experiment shows that even after deleting the backup's parent snapshot and the snapshot's parent volume, the backup remains valid and can still be used for restores.

Which raises a question...

After a system has been running for a long time, some backups go unused, take up a fair amount of space, and their exact URLs get forgotten, which makes deleting them with the command awkward. How can unused backups be cleaned up in one go?

The most practical approach is to go into the backup directory and delete the corresponding backup files and configuration files directly.

Backups live under the dest_url specified when they were created; convoy creates that directory on the system automatically. The directory structure of a dest_url looks like this:

# cd /backup
# tree
.
├── convoy-objectstore
│   └── volumes
│       ├── do
│       │   └── ck
│       │       └── docker_test
│       │           ├── BackupFiles
│       │           │   ├── backup-2bd89ea77a144dbc.bak
│       │           │   ├── backup-c0fd6c56ac6c4b03.bak
│       │           │   └── backup-f9f54dab546449da.bak
│       │           ├── backups
│       │           │   ├── backup_backup-2bd89ea77a144dbc.cfg
│       │           │   ├── backup_backup-c0fd6c56ac6c4b03.cfg
│       │           │   └── backup_backup-f9f54dab546449da.cfg
│       │           └── volume.cfg
│       └── te
│           └── st
│               └── test
│                   ├── BackupFiles
│                   │   └── backup-f8f4ad07043244eb.bak
│                   ├── backups
│                   │   └── backup_backup-f8f4ad07043244eb.cfg
│                   └── volume.cfg
└── home.tar

12 directories, 11 files

Note:
All backup files are grouped by volume: under volumes, the first four letters of the volume name are split into two two-letter subdirectories. For example, every backup file for the docker_test volume lives under volumes/do/ck/docker_test/BackupFiles.
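Building on that layout, a bulk cleanup for a backup whose ID you know could look like the sketch below (the ID is one of those visible in the tree above; always review the matches before deleting):

# BACKUP_ID=backup-2bd89ea77a144dbc
# find /backup/convoy-objectstore -name "*${BACKUP_ID}*"
/backup/convoy-objectstore/volumes/do/ck/docker_test/BackupFiles/backup-2bd89ea77a144dbc.bak
/backup/convoy-objectstore/volumes/do/ck/docker_test/backups/backup_backup-2bd89ea77a144dbc.cfg
# find /backup/convoy-objectstore -name "*${BACKUP_ID}*" -delete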

V. Using the Convoy Plugin with Docker for Persistent Data Storage

1. Persisting Docker Data with Convoy on the Command Line

Note:
An nginx container is used as the example here; adapt the steps to other services as needed.

(1) Pull the nginx image

# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
f7e2b70d04ae: Pull complete 
08dd01e3f3ac: Pull complete 
d9ef3a1eb792: Pull complete 
Digest: sha256:98efe605f61725fd817ea69521b0eeb32bef007af0e3d0aeb6258c6e6fe7fc1a
Status: Downloaded newer image for nginx:latest

(2) List the images

# docker images
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
nginx                                           latest              881bd08c0b08        29 hours ago        109MB
php                                             7.2.13              a1c0790840ba        2 months ago        367MB
ubuntu                                          latest              1d9c17228a9e        2 months ago        86.7MB                  

(3) Create a www volume and an index.html file

Create the volume:
# convoy create www
DEBU[22537] Calling: POST, /volumes/create, request: POST, /v1/volumes/create  pkg=daemon
DEBU[22537]                                               event=create object=volume opts=map[BackupURL: VolumeDriverID: PrepareForVM:false VolumeIOPS:0 Size:0 EndpointURL: VolumeName:www VolumeType:] pkg=daemon reason=prepare volume=www
DEBU[22537] Created volume                                event=create object=volume pkg=daemon reason=complete volume=www
DEBU[22537] Response:  www                                pkg=daemon
www

Create the index file:
# echo 'hello convoy!' > /nfs/www/index.html
# cat /nfs/www/index.html
hello convoy!

(4) Start an nginx container with the volume mounted

# docker run -itd --name TestConvoy -p 80:80 -v /nfs/www/:/usr/share/nginx/html --volume-driver=convoy nginx
f0254ddb0225625d9d95cdb58b54557644f50901c058469901dda43093a6f6cc

(5) Test access

# curl 127.0.0.1
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>

An error like this indicates a permission problem on the volume. A volume created as root has the following permissions by default:

# ll
total 0
drwx------ 2 root root  19 Mar  6 11:22 aaa
drwx------ 2 root root  19 Mar  6 11:22 aaa1
drwxr-xr-x 2 root root 116 Mar  6 17:34 config
drwx------ 2 root root   6 Mar  6 11:19 test
drwx------ 2 root root  24 Mar  6 17:37 www

So the volume's permissions need to be changed so that others have read and execute access, as follows:

# chmod -R 755 www/
# ll
total 0
drwx------ 2 root root  19 Mar  6 11:22 aaa
drwx------ 2 root root  19 Mar  6 11:22 aaa1
drwxr-xr-x 2 root root 116 Mar  6 17:34 config
drwx------ 2 root root   6 Mar  6 11:19 test
drwxr-xr-x 2 root root  24 Mar  6 17:37 www

Test access again:

# curl 127.0.0.1
hello convoy!

(6) Let the plugin create the mapped directory automatically instead of creating it beforehand

In the previous experiment we created the directory holding the index file before starting the container, and mounted it into the container by absolute path. This time we do not create the directory in advance and let the plugin create it automatically.

Start the container:

# docker run -itd --name TestConvoy -p 80:80 -v wwwroot:/usr/share/nginx/html --volume-driver=convoy nginx
586e1fecf03d30321fe838394b138fbd93da40b8575bc3d2dc17d752987b4143

Test access:

# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This is clearly nginx's default index page.

Now switch to the /nfs directory and have a look:

# cd /nfs
# ll
total 4
drwx------ 2 root root  19 Mar  6 11:22 aaa
drwx------ 2 root root  19 Mar  6 11:22 aaa1
drwxr-xr-x 2 root root 179 Mar  7 10:38 config
drwx------ 2 root root   6 Mar  6 11:19 test
drwxr-xr-x 2 root root  24 Mar  6 17:37 www
drwxr-xr-x 2 root root  40 Mar  7 17:48 wwwroot

The wwwroot directory has been created automatically. The reason is that we specified convoy as the volume driver, so while starting the container Docker called the convoy plugin, which created a convoy volume and mounted it at the container path we specified. Also, unlike the previous experiment, the container path was not masked by the host directory; instead, because the new volume started out empty, the contents of the container path were copied out to the host directory. We can look inside:

# cd /nfs/wwwroot/
# ll
total 8
-rw-r--r-- 1 root root 494 Feb 26 22:13 50x.html
-rw-r--r-- 1 root root 612 Feb 26 22:13 index.html
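Since the plugin created this volume on our behalf, it should also appear in convoy's own records; a quick way to confirm is the inspect command introduced earlier:

# convoy inspect wwwroot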

2. Persisting Data via the Convoy Plugin with docker-compose

(1) Write the docker-compose.yml file

# vim docker-compose.yml
version: '3'
services:
  TestConvoy:
    image: nginx:latest
    container_name: nginx
    volumes:
      - html:/usr/share/nginx/html
    ports:
      - 80:80
volumes:
  html:
    driver: convoy

(2) Start the nginx container with docker-compose

# docker-compose up -d
DEBU[26106] Handle plugin get volume: POST /VolumeDriver.Get  pkg=daemon
DEBU[26106] Request from docker: &{nfs_html map[]}        pkg=daemon
DEBU[26106]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=nfs_html
DEBU[26106]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=nfs_html
DEBU[26106] Found volume nfs_html for docker              pkg=daemon
DEBU[26106] Response:  {
    "Volume": {
        "Name": "nfs_html"
    }
}  pkg=daemon
DEBU[26106] Handle plugin get volume: POST /VolumeDriver.Get  pkg=daemon
DEBU[26106] Request from docker: &{nfs_html map[]}        pkg=daemon
DEBU[26106]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=nfs_html
DEBU[26106]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=nfs_html
DEBU[26106] Found volume nfs_html for docker              pkg=daemon
DEBU[26106] Response:  {
    "Volume": {
        "Name": "nfs_html"
    }
}  pkg=daemon
DEBU[26106] Handle plugin get volume: POST /VolumeDriver.Get  pkg=daemon
DEBU[26106] Request from docker: &{nfs_html map[]}        pkg=daemon
DEBU[26106]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=nfs_html
DEBU[26106]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=nfs_html
DEBU[26106] Found volume nfs_html for docker              pkg=daemon
DEBU[26106] Response:  {
    "Volume": {
        "Name": "nfs_html"
    }
}  pkg=daemon
Starting nginx ... 
DEBU[26106] Handle plugin get volume: POST /VolumeDriver.Get  pkg=daemon
DEBU[26106] Request from docker: &{nfs_html map[]}        pkg=daemon
DEBU[26106]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=nfs_html
DEBU[26106]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=nfs_html
DEBU[26106] Found volume nfs_html for docker              pkg=daemon
DEBU[26106] Response:  {
    "Volume": {
        "Name": "nfs_html"
    }
}  pkg=daemon
DEBU[26106] Handle plugin mount volume: POST /VolumeDriver.Mount  pkg=daemon
DEBU[26106] Request from docker: &{nfs_html map[]}        pkg=daemon
DEBU[26106] Mount volume: nfs_html for docker             pkg=daemon
DEBU[26106]                                               event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=nfs_html
DEBU[26106]                                               event=list mountpoint=/nfs/nfs_html object=volume pkg=daemon reason=complete volume=nfs_html
DEBU[26106] Response:  {
    "Mountpoint": "/nfs/nfs_html"
Starting nginx ... done

During startup the /nfs/nfs_html directory was created automatically (Compose prefixes volume names with the project name, which defaults to the name of the directory containing docker-compose.yml, hence nfs_html). We can look inside:

# cd /nfs/nfs_html/
# ll
total 8
-rw-r--r-- 1 root root 494 Feb 26 22:13 50x.html
-rw-r--r-- 1 root root 612 Feb 26 22:13 index.html

(3) Test access

As the listing above shows, the mounted directory was automatically populated with the index and 50x files that ship with nginx.

# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Clearly, this is nginx's stock index page. We can also copy our own index file over it, like this:

# cp www/index.html nfs_html/
cp: overwrite ‘nfs_html/index.html’? yes
# curl 127.0.0.1
hello convoy!

3. Summary

To summarize: when we pre-create the host directory and bind-mount it into the container by absolute path with docker run, the host directory's contents mask whatever the image has at that path, so the container effectively sees and serves the host's files. When we do not pre-create the directory, or go through docker-compose, the convoy plugin is called during container startup to create a volume inside the NFS-mounted directory, and because that volume starts out empty, the data flows the other way: the contents of the container path are copied out into the host directory.

Note:
My knowledge is limited, so this document inevitably contains inaccuracies and awkward descriptions; please bear with me and feel free to point out any mistakes.
