Docker is an open-source application container engine. Developers can easily package an application together with its dependencies into a portable container and ship it to any Linux host. Docker is a lightweight virtualization solution originally built on Linux Container (LXC) technology; users work directly with containers to build their applications, so application developers do not need to concern themselves with container internals. Docker's goal is "Build, Ship and Run Any App, Anywhere": as long as your application is built and deployed with Docker, it can run at any time on any Linux distribution that supports Docker, which makes applications portable and convenient and is very developer-friendly.
Docker is written in Go and the code is open source; you can browse it at https://github.com/docker/docker.git.
Basic Architecture
Docker uses a client-server architecture: the Docker daemon is the server and the Docker client is the client. The basic architecture of Docker is shown in the figure below:
Besides showing the relationships among the Docker client, server, containers, images, and the Registry, the figure above highlights the Docker daemon and the Docker client, which we describe here; the other components are covered in detail later:
- Docker daemon
The Docker daemon is a long-running process on the host; users interact with it through the Docker client.
- Docker client
The Docker client provides the interface for interacting with the Docker daemon; it is installed together with Docker and is driven through the docker command. A Docker client can talk to the Docker daemon on the same host, or to a daemon running on a remote host.
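For example, assuming a remote daemon has been configured to listen on a TCP socket (the host name and port below are placeholders, not values from this article), the client can be pointed at it with the -H option or the DOCKER_HOST environment variable:
docker -H tcp://remote-host:2375 info
# or, equivalently
export DOCKER_HOST=tcp://remote-host:2375
docker info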
Basic Concepts
Registry
A Registry is a service that manages one or more repositories, and repositories in turn can be public or private. The default Registry is Docker Hub, which hosts a large number of public repositories organized by purpose. Anyone can browse Docker Hub for the image they need, or search with the docker search command. For example, to search for the keyword hadoop:
docker search hadoop
The query returns results like the following:
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
sequenceiq/hadoop-docker An easy way to try Hadoop 428 [OK]
sequenceiq/hadoop-ubuntu An easy way to try Hadoop on Ubuntu 40 [OK]
uhopper/hadoop Base Hadoop image with dynamic configurati... 16 [OK]
ruo91/hadoop Apache hadoop 2.x - Pseudo-Distributed Mode 12 [OK]
harisekhon/hadoop Apache Hadoop (HDFS + Yarn, tags 2.5 - 2.7) 8 [OK]
gelog/hadoop Use at your own risk. 5 [OK]
athlinks/hadoop Distributed Highly Available Hadoop Cluste... 3 [OK]
dockmob/hadoop Docker images for Apache Hadoop (YARN, HDF... 3 [OK]
uhopper/hadoop-resourcemanager Hadoop resourcemanager 3 [OK]
harisekhon/hadoop-dev Apache Hadoop (HDFS + Yarn) + Dev Tools + ... 3 [OK]
izone/hadoop Hadoop 2.7.3 Ecosystem fully distributed, ... 3 [OK]
uhopper/hadoop-namenode Hadoop namenode 2 [OK]
singularities/hadoop Apache Hadoop 2 [OK]
uhopper/hadoop-datanode Hadoop datanode 2 [OK]
uhopper/hadoop-nodemanager Hadoop nodemanager 2 [OK]
lewuathe/hadoop-master Multiple node hadoop cluster on Docker. 2 [OK]
robingu/hadoop hadoop 2.7 1 [OK]
mcapitanio/hadoop Docker image running Hadoop in psedo-distr... 1 [OK]
takaomag/hadoop docker image of archlinux (hadoop) 1 [OK]
ymian/hadoop hadoop 0 [OK]
2breakfast/hadoop hadoop in docker 0 [OK]
ading1977/hadoop Docker image for multi-node hadoop cluster. 0 [OK]
meteogroup/hadoop Apache™ Hadoop® in a docker image. 0 [OK]
hegand/hadoop-base Hadoop base docker image 0 [OK]
elek/hadoop Base image for hadoop components (yarn/hdfs) 0 [OK]
As shown above, all images related to hadoop are listed; you can pick the one you need, pull it, and build your application on top of it.
Image
A Docker image is the basis of a Docker container. An image is an ordered collection of root filesystem changes, together with the execution parameters to be used inside a container at runtime.
An image is static and stateless, and it is immutable: modifying an image actually creates a new image, a copy derived from the original. Therefore, when building an image we usually start from an existing base image, build the new image on top of it, and then push it to a repository.
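You can inspect the layers that make up an existing image with the docker history command; for example, for the hello-world image used later in this article:
docker history hello-world
Each line of the output corresponds to one instruction that was committed as a layer while the image was being built.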
Repository
A repository is a collection of Docker images. It can be pushed to a Registry to be shared; on Docker Hub you can find many images contributed by organizations and individuals, and you can also push your own images to a private repository. Within a repository, different images are identified by tags, such as latest or 5.5.0.
Container
A container is a runtime instance of a Docker image; from one image you can create many containers running that application. A container consists of the following parts (see the example after the list):
- A Docker image
- An execution environment
- A standard set of instructions
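As a minimal illustration, the following commands create an interactive container from the ubuntu image (pulled from Docker Hub if it is not present locally), run a shell in it, and then list and remove the stopped container; the container name mytest is just an example:
docker run -it --name mytest ubuntu /bin/bash
# ... work inside the container, then exit ...
docker ps -a
docker rm mytest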
Installing and Starting Docker
I used CentOS 7, on which Docker is very easy to install. Assuming all of the following steps are performed as the root user, run these preparation commands first:
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://docs.docker.com/engine/installation/linux/repo_files/centos/docker.repo
yum makecache fast
This first installs yum-utils, which provides the yum-config-manager tool, then adds the repository file for the latest stable release, and finally refreshes the yum package index.
Install the latest version of Docker, currently 1.13.1, with the following command:
sudo yum -y install docker-engine
Installing docker-engine for the first time produces log output similar to the following:
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.btte.net
* extras: mirrors.btte.net
* updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package docker-engine.x86_64 0:1.13.1-1.el7.centos will be installed
--> Processing Dependency: docker-engine-selinux >= 1.13.1-1.el7.centos for package: docker-engine-1.13.1-1.el7.centos.x86_64
--> Running transaction check
---> Package docker-engine-selinux.noarch 0:1.13.1-1.el7.centos will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================================
Installing:
docker-engine x86_64 1.13.1-1.el7.centos docker-main 19 M
Installing for dependencies:
docker-engine-selinux noarch 1.13.1-1.el7.centos docker-main 28 k
Transaction Summary
=================================================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
Total download size: 19 M
Installed size: 65 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/docker-main/packages/docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 2c52609d: NOKEY ] 1.2 MB/s | 944 kB 00:00:14 ETA
Public key for docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm is not installed
(1/2): docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm | 28 kB 00:00:01
(2/2): docker-engine-1.13.1-1.el7.centos.x86_64.rpm | 19 MB 00:00:04
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 4.5 MB/s | 19 MB 00:00:04
Retrieving key from https://yum.dockerproject.org/gpg
Importing GPG key 0x2C52609D:
Userid : "Docker Release Tool (releasedocker)
Fingerprint: 5811 8e89 f3a9 1289 7c07 0adb f762 2157 2c52 609d
From : https://yum.dockerproject.org/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : docker-engine-selinux-1.13.1-1.el7.centos.noarch 1/2
libsemanage.semanage_direct_install_info: Overriding docker module at lower priority 100 with module at priority 400.
restorecon: lstat(/var/lib/docker) failed: No such file or directory
warning: %post(docker-engine-selinux-1.13.1-1.el7.centos.noarch) scriptlet failed, exit status 255
Non-fatal POSTIN scriptlet failure in rpm package docker-engine-selinux-1.13.1-1.el7.centos.noarch
Installing : docker-engine-1.13.1-1.el7.centos.x86_64 2/2
Verifying : docker-engine-selinux-1.13.1-1.el7.centos.noarch 1/2
Verifying : docker-engine-1.13.1-1.el7.centos.x86_64 2/2
Installed:
docker-engine.x86_64 0:1.13.1-1.el7.centos
Dependency Installed:
docker-engine-selinux.noarch 0:1.13.1-1.el7.centos
Complete!
As you can see, Docker has been installed successfully. Now we can start Docker (the Docker Engine) with the following command:
systemctl start docker
Check the processes on the system by running ps -ef | grep docker to confirm that Docker has started:
root 2717 1 8 21:52 ? 00:00:00 /usr/bin/dockerd
root 2723 2717 1 21:52 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 2920 2645 0 21:52 pts/0 00:00:00 grep --color=auto docker
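Another quick sanity check is to ask the client and daemon for their version and configuration information:
docker version
docker info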
Next, let's verify that with Docker running we can run a prepared application inside a container. Execute the following command:
docker run hello-world
This starts a container from an image named hello-world and runs it. The startup output looks like this:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide
Because the image has not been downloaded locally before, it is first pulled from Docker Hub, using the tag latest. The message "Hello from Docker!" confirms that the environment is configured correctly and that we can start containers to run applications. The output also describes the basic mechanics of running this hello-world image in a container:
- The Docker client connected to the Docker daemon.
- The Docker daemon pulled the hello-world image from Docker Hub.
- The Docker daemon created a new container from that image and ran the application, which printed "Hello from Docker!".
- The Docker daemon streamed that output to the Docker client, i.e., to our terminal.
At this point you may wonder how the hello-world image was built so that it can run in a Docker container; read on.
Building an Image
An image is built from a Dockerfile: Docker reads a series of instructions from a Dockerfile and assembles them into an image. A Dockerfile is a text file containing the commands needed to build that image.
Let's look at how the hello-world image mentioned above is built. Its source is on GitHub: https://github.com/docker-library/hello-world.
hello-world has a Dockerfile, with the following content:
FROM scratch
COPY hello /
CMD ["/hello"]
These three instructions make up the hello-world image:
The first line, the FROM instruction, builds the new image from a known base image; here scratch is an explicitly empty image.
The second line, the COPY instruction, copies a new file or directory into the specified path in the container; here the executable hello is copied to the root directory / of the container.
The third line, the CMD instruction, specifies the command to run, including the command name and its arguments.
The hello executable above is a pre-compiled binary built from a C source file (GitHub link: https://github.com/docker-library/hello-world/blob/master/hello.c). The source file hello.c is shown below:
#include <unistd.h>
#include <sys/syscall.h>

#ifndef DOCKER_IMAGE
#define DOCKER_IMAGE "hello-world"
#endif

#ifndef DOCKER_GREETING
#define DOCKER_GREETING "Hello from Docker!"
#endif

const char message[] =
    "\n"
    DOCKER_GREETING "\n"
    "This message shows that your installation appears to be working correctly.\n"
    "\n"
    "To generate this message, Docker took the following steps:\n"
    " 1. The Docker client contacted the Docker daemon.\n"
    " 2. The Docker daemon pulled the \"" DOCKER_IMAGE "\" image from the Docker Hub.\n"
    " 3. The Docker daemon created a new container from that image which runs the\n"
    "    executable that produces the output you are currently reading.\n"
    " 4. The Docker daemon streamed that output to the Docker client, which sent it\n"
    "    to your terminal.\n"
    "\n"
    "To try something more ambitious, you can run an Ubuntu container with:\n"
    " $ docker run -it ubuntu bash\n"
    "\n"
    "Share images, automate workflows, and more with a free Docker ID:\n"
    " https://cloud.docker.com/\n"
    "\n"
    "For more examples and ideas, visit:\n"
    " https://docs.docker.com/engine/userguide/\n"
    "\n";

void _start() {
    //write(1, message, sizeof(message) - 1);
    syscall(SYS_write, 1, message, sizeof(message) - 1);

    //_exit(0);
    syscall(SYS_exit, 0);
}
After compiling hello.c into the executable hello, the image can be built with Docker's build command:
docker build -t hello-world .
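For reference, one plausible way to compile hello.c into a small static binary is shown below; the exact flags used by the official image are defined in its own Makefile and may differ, so treat these as an assumption:
# static link so the binary has no runtime dependencies inside the empty scratch image;
# -nostartfiles lets the program's own _start be the entry point instead of the C runtime startup files
gcc -static -nostartfiles -o hello hello.c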
Now it is clear how the hello-world image is built. Next, following the official user guide, let's write a Dockerfile and build an image of our own to see how to implement our own application:
- Writing the Dockerfile
First, create a separate directory to hold the Dockerfile we are about to build:
mkdir mydockerbuild
cd mydockerbuild
vi Dockerfile
Put the following content into the Dockerfile:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
The FROM instruction tells Docker to build the new image on top of docker/whalesay:latest. This image lives on Docker Hub (https://hub.docker.com/r/docker/whalesay/) and its source is on GitHub (https://github.com/docker/whalesay). The RUN instruction installs the fortunes package, and the final CMD instruction runs the /usr/games/fortune command.
- Building the Image
Save the three lines above to the file, then run the build command in the mydockerbuild directory:
docker build -t docker-whale .
The build process prints output like the following:
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM docker/whalesay:latest
---> 6b362a9f73eb
Step 2/3 : RUN apt-get -y update && apt-get install -y fortunes
---> Running in bfddc2134d23
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://archive.ubuntu.com trusty-security InRelease [65.9 kB]
Hit http://archive.ubuntu.com trusty Release.gpg
Get:3 http://archive.ubuntu.com trusty-updates/main Sources [485 kB]
Get:4 http://archive.ubuntu.com trusty-updates/restricted Sources [5957 B]
Get:5 http://archive.ubuntu.com trusty-updates/universe Sources [220 kB]
Get:6 http://archive.ubuntu.com trusty-updates/main amd64 Packages [1197 kB]
Get:7 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [20.4 kB]
Get:8 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [516 kB]
Get:9 http://archive.ubuntu.com trusty-security/main Sources [160 kB]
Get:10 http://archive.ubuntu.com trusty-security/restricted Sources [4667 B]
Get:11 http://archive.ubuntu.com trusty-security/universe Sources [59.4 kB]
Get:12 http://archive.ubuntu.com trusty-security/main amd64 Packages [730 kB]
Get:13 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [17.0 kB]
Get:14 http://archive.ubuntu.com trusty-security/universe amd64 Packages [199 kB]
Hit http://archive.ubuntu.com trusty Release
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/restricted Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Fetched 3745 kB in 55s (67.1 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
fortune-mod fortunes-min librecode0
Suggested packages:
x11-utils bsdmainutils
The following NEW packages will be installed:
fortune-mod fortunes fortunes-min librecode0
0 upgraded, 4 newly installed, 0 to remove and 92 not upgraded.
Need to get 1961 kB of archives.
After this operation, 4817 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main librecode0 amd64 3.6-21 [771 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ trusty/universe fortune-mod amd64 1:1.99.1-7 [39.5 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty/universe fortunes-min all 1:1.99.1-7 [61.8 kB]
Get:4 http://archive.ubuntu.com/ubuntu/ trusty/universe fortunes all 1:1.99.1-7 [1089 kB]
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Fetched 1961 kB in 5s (340 kB/s)
Selecting previously unselected package librecode0:amd64.
(Reading database ... 13116 files and directories currently installed.)
Preparing to unpack .../librecode0_3.6-21_amd64.deb ...
Unpacking librecode0:amd64 (3.6-21) ...
Selecting previously unselected package fortune-mod.
Preparing to unpack .../fortune-mod_1%3a1.99.1-7_amd64.deb ...
Unpacking fortune-mod (1:1.99.1-7) ...
Selecting previously unselected package fortunes-min.
Preparing to unpack .../fortunes-min_1%3a1.99.1-7_all.deb ...
Unpacking fortunes-min (1:1.99.1-7) ...
Selecting previously unselected package fortunes.
Preparing to unpack .../fortunes_1%3a1.99.1-7_all.deb ...
Unpacking fortunes (1:1.99.1-7) ...
Setting up librecode0:amd64 (3.6-21) ...
Setting up fortune-mod (1:1.99.1-7) ...
Setting up fortunes-min (1:1.99.1-7) ...
Setting up fortunes (1:1.99.1-7) ...
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
---> 98403143b081
Removing intermediate container bfddc2134d23
Step 3/3 : CMD /usr/games/fortune -a | cowsay
---> Running in 8831a7231adc
---> 08d234c4ee26
Removing intermediate container 8831a7231adc
Successfully built 08d234c4ee26
Alternatively, the -f option lets you point directly at the Dockerfile by its absolute path:
docker build -f ~/mydockerbuild/Dockerfile -t docker-whale .
Our own image, named docker-whale, is now built. The basic flow of this build is:
- Docker checks the build context and makes sure the Dockerfile has something to build.
- Docker resolves the base image docker/whalesay (image ID 6b362a9f73eb), pulling it if it is not already present.
- For the RUN instruction, Docker starts a temporary container (bfddc2134d23) from that image, executes the RUN command line inside it to install the fortunes package, commits the result as a new layer (98403143b081), and then removes the temporary container.
- For the CMD instruction, another intermediate container (8831a7231adc) is created, a layer recording the CMD is committed (08d234c4ee26), and the intermediate container 8831a7231adc is then removed.
When building an image, the images it depends on are downloaded automatically; you can also pull an image ahead of time with a command like the following:
docker pull mysql:5.5
This downloads the MySQL 5.5 image to the local host.
- Viewing the Built Image
To list the local images, including the one we just built, run docker images. The result looks like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-whale latest 08d234c4ee26 9 minutes ago 256 MB
ubuntu latest f49eec89601e 5 weeks ago 129 MB
hello-world latest 48b5124b2768 6 weeks ago 1.84 kB
docker/whalesay latest 6b362a9f73eb 21 months ago 247 MB
The first entry, docker-whale, is the image we just created.
- Starting a Docker Container
Next, run the application in a container based on the image we just built:
docker run docker-whale
The output of the run is shown below:
______________________________
/ IBM: \
| |
| I've Been Moved |
| |
| Idiots Become Managers |
| |
| Idiots Buy More |
| |
| Impossible to Buy Machine |
| |
| Incredibly Big Machine |
| |
| Industry's Biggest Mistake |
| |
| International Brotherhood of |
| Mercenaries |
| |
| It Boggles the Mind |
| |
| It's Better Manually |
| |
\ Itty-Bitty Machines /
------------------------------
\
\
\
## .
## ## ## ==
## ## ## ## ===
/""""""""""""""""___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\______/
In addition, you can go to Docker Hub (https://hub.docker.com), create an account of your own, and publish your own images; you will also find plenty of freely shared images there that you can use as the basis for your own. The Docker Hub page is shown below:
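If you want to share the docker-whale image we built through your own Docker Hub account, the usual workflow is to log in, tag the image under your account's namespace, and push it (replace your-dockerhub-id with your actual account name; it is only a placeholder here):
docker login
docker tag docker-whale your-dockerhub-id/docker-whale
docker push your-dockerhub-id/docker-whale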
Here is another example: when starting a container you can pass argument values directly to the application inside the container on the command line:
docker run docker/whalesay cowsay boo
docker run docker/whalesay cowsay boo-boo
You can see that the output changes according to the arguments passed when the container is started.
- Viewing Docker Containers
To list the containers on the current host in every state, run any of the following equivalent commands:
docker ps -a
docker container ps -a
docker container ls -a
Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ab157767bbd training/postgres "su postgres -c '/..." 6 seconds ago Up 5 seconds 5432/tcp pgdb
da91889d6313 training/postgres "su postgres -c '/..." 49 seconds ago Up 2 seconds 5432/tcp webappdb
5d86616e9a1d docker-whale "/bin/sh -c '/usr/..." 24 minutes ago Exited (0) 7 seconds ago elastic_mcnulty
abec6410bcac docker/whalesay "cowsay boo" 27 minutes ago Exited (0) 27 minutes ago upbeat_edison
72d0b2bb5d6a training/postgres "su postgres -c '/..." 4 hours ago Up 4 hours 5432/tcp db
fc9b0bb6ae8e ubuntu "/bin/bash" 4 hours ago Up 4 hours networktest
fc9b0bb6ae8e ubuntu "/bin/bash" 7 days ago Exited (255) 3 days ago networktest
To list only the running containers, use any of the following equivalent commands:
docker ps
docker container ps
docker container ls
Docker Networking
Docker lets containers communicate with each other over networks. It ships with two network drivers, bridge and overlay, and you can also write your own network driver plugin to manage container networking. There are many Docker networking solutions out there, such as Flannel, Weave, Pipework, and libnetwork, which are worth studying if you are interested.
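As a quick illustration of the overlay driver (a sketch only: the network name my-overlay is arbitrary, attaching a standalone container to an overlay network requires the --attachable flag, and the host must be part of a swarm, which docker swarm init sets up):
docker swarm init
docker network create -d overlay --attachable my-overlay
docker run -d --name web1 --network my-overlay training/webapp python app.py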
After Docker Engine is installed, three default networks are present. You can list all networks with the following command:
docker network ls
The result is as follows:
NETWORK ID NAME DRIVER SCOPE
b92d9ca4d992 bridge bridge local
6d33880bf521 host host local
a200b158f39c none null local
The network named host represents the host machine's network stack: if a container is started on this network, it shares the host's network namespace, i.e., the container uses the host's network interfaces, IP addresses, and ports directly.
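For example (an illustration only, reusing the training/webapp sample image that appears later in this article), a container started like this binds its application port directly on the host, so no -p port mapping is needed:
docker run -d --network host training/webapp python app.py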
If no network is specified when a container is started, Docker uses the default bridge network. In this mode, Docker gives the container its own network namespace, separate from the host, with its own IP range, and connects the container to a virtual bridge, by default the docker0 bridge. The virtual bridge works much like a switch: the containers attached to it form a layer-2 network.
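On the host you can look at the docker0 bridge itself; the first command below uses iproute2 (available on CentOS 7 by default), and the second requires the bridge-utils package to be installed:
ip addr show docker0
brctl show docker0
Each running container on the bridge network shows up as a veth* interface attached to docker0.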
To make this more concrete, here is a Docker network diagram found online, shown in the figure below:
Next, let's see how to connect containers together using Docker's networking features.
- Creating a Docker Network
Create a Docker network named my-bridge-network with the following command:
docker network create -d bridge my-bridge-network
The command prints the ID of the newly created network:
fc19452525e5d2f5f1fc109656f0385bf2f268b47788353c3d9ee672da31b33a
The value fc19452525e5d2f5f1fc109656f0385bf2f268b47788353c3d9ee672da31b33a is the ID of the new network my-bridge-network, which you can confirm with:
docker network ls
All Docker networks on the current host are now:
NETWORK ID NAME DRIVER SCOPE
b92d9ca4d992 bridge bridge local
6d33880bf521 host host local
fc19452525e5 my-bridge-network bridge local
a200b158f39c none null local
- Inspecting a Docker Network
To see the details of a Docker network, for example the default bridge network, run:
docker network inspect bridge
The output is as follows:
[
{
"Name": "bridge",
"Id": "2872de41fddddc22420eecad253107e09a305f3512ade31d4172d3b80723d8b6",
"Created": "2017-03-05T21:46:12.413438219+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"5ab157767bbd991401c351cfb452d663f5cd93dd1edc56767372095a5c2e7f73": {
"Name": "pgdb",
"EndpointID": "e0368c3219bcafea7c2839b7ede628fa67ad0a5350d150fdf55a4aa88c01c480",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"da91889d63139019bbdcc6266704fb21e0a1800d0ae63b3448e65d1e17ef7368": {
"Name": "webappdb",
"EndpointID": "422ab05dd2cbb55266964b31f0dd9292688f1459e3a687662d1b119875d4ce44",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
From this output we can see that the ID of the bridge network is 2872de41fddddc22420eecad253107e09a305f3512ade31d4172d3b80723d8b6, and that two containers are currently running on it, pgdb and webappdb, with container IP addresses 172.17.0.2 and 172.17.0.3 respectively; because they are on the same bridge network, they share the same IP address range.
Alternatively, we can print a specific container's network settings in a formatted way:
docker inspect --format='{{json .NetworkSettings.Networks}}' pgdb
The output is shown below (pretty-printed):
{
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "2872de41fddddc22420eecad253107e09a305f3512ade31d4172d3b80723d8b6",
"EndpointID": "e0368c3219bcafea7c2839b7ede628fa67ad0a5350d150fdf55a4aa88c01c480",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
As you can see, similar to the previous command, this prints the container's basic settings on the bridge network.
- Disconnecting a Container from a Network
To remove a container from a Docker network, simply specify the network name and the container name (or the container ID):
docker network disconnect bridge pgdb
or
docker network disconnect bridge 5ab157767bbd991401c351cfb452d663f5cd93dd1edc56767372095a5c2e7f73
- Connecting Containers in Two Different Subnets
Now, run a web application; by default it uses the bridge network:
docker run -d --name myweb training/webapp python app.py
With the command:
docker inspect --format='{{json .NetworkSettings.Networks}}' myweb
we can check how this application is connected to the network, as shown below (pretty-printed):
{
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "2872de41fddddc22420eecad253107e09a305f3512ade31d4172d3b80723d8b6",
"EndpointID": "a4e66b540e632c346f345c7972617ccdfaa4ef36eefbdc3a298d524b5cf13897",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:04"
}
}
Alternatively, to get just the container's IP address, run:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myweb
The result:
172.17.0.4
Next, start another container, named mydb, on the my-bridge-network network:
docker run -d --name mydb --network my-bridge-network training/postgres
Check the network status of the mydb application (pretty-printed):
{
"my-bridge-network": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"fbfbad9e0bd3"
],
"NetworkID": "fc19452525e5d2f5f1fc109656f0385bf2f268b47788353c3d9ee672da31b33a",
"EndpointID": "49c7afbf24be165b98ea29dbfd7b1e2c0eecd9c1ef16a7efde00ab92d0563985",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02"
}
}
The mydb application is on the my-bridge-network network, with IP address 172.18.0.2.
Now let's test connectivity from the container running mydb to the container running myweb; this actually spans two subnets, from my-bridge-network to bridge. Run the following command, which attaches myweb (currently on the default bridge network) to my-bridge-network so the two containers can reach each other:
docker network connect my-bridge-network myweb
Now we can enter the mydb container on my-bridge-network and use ping to reach the myweb application that was originally on the default bridge network:
[root@localhost mydockerbuild]# docker exec -it mydb bash
root@fbfbad9e0bd3:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3530 (3.5 KB)  TX bytes:1124 (1.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:26 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2274 (2.2 KB)  TX bytes:2274 (2.2 KB)

root@fbfbad9e0bd3:/# ping myweb
PING myweb (172.18.0.3) 56(84) bytes of data.
64 bytes from myweb.my-bridge-network (172.18.0.3): icmp_seq=1 ttl=64 time=0.318 ms
64 bytes from myweb.my-bridge-network (172.18.0.3): icmp_seq=2 ttl=64 time=2.06 ms
64 bytes from myweb.my-bridge-network (172.18.0.3): icmp_seq=3 ttl=64 time=0.506 ms
64 bytes from myweb.my-bridge-network (172.18.0.3): icmp_seq=4 ttl=64 time=0.404 ms
^C
--- myweb ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.318/0.822/2.061/0.718 ms
As you can see, the two containers on different Docker networks can now reach each other.
Docker Data Volumes
A data volume is a specially designated directory within one or more containers that bypasses the Union File System and provides persistent or shared data.
Add a data volume with the following command:
docker run -d -P --name vweb -v /webapp training/webapp python app.py
The -v option adds a data volume at the directory /webapp, a path inside the container. You can see the corresponding information for this container by running docker inspect vweb, as shown below:
[
    {
        "Id": "fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33",
        "Created": "2017-03-05T16:53:12.614318467Z",
        "Path": "python",
        "Args": [
            "app.py"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 7555,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-03-05T16:53:13.380982103Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:6fae60ef344644649a39240b94d73b8ba9c67f898ede85cf8e947a887b3e6557",
        "ResolvConfPath": "/var/lib/docker/containers/fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33/hostname",
        "HostsPath": "/var/lib/docker/containers/fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33/hosts",
        "LogPath": "/var/lib/docker/containers/fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33/fcea99542d4d2838102fc4b627c68a201b868d85f229722325d83968b32c8b33-json.log",
        "Name": "/vweb",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": true,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/docker/overlay/59f20340fa5232f5b13300a715b6d422acc32d21385f48336cead00c3227c63a/root",
                "MergedDir": "/var/lib/docker/overlay/9c602e4263c42984824b7f1e3c62416cb6056332e6447e65c3d08de7c1f50cd6/merged",
                "UpperDir": "/var/lib/docker/overlay/9c602e4263c42984824b7f1e3c62416cb6056332e6447e65c3d08de7c1f50cd6/upper",
                "WorkDir": "/var/lib/docker/overlay/9c602e4263c42984824b7f1e3c62416cb6056332e6447e65c3d08de7c1f50cd6/work"
            }
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "228bc2018d65523797450822a068550fb8afbdf6ca2e4010a32cbb36961e3d5f",
                "Source": "/var/lib/docker/volumes/228bc2018d65523797450822a068550fb8afbdf6ca2e4010a32cbb36961e3d5f/_data",
                "Destination": "/webapp",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "fcea99542d4d",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5000/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "python",
                "app.py"
            ],
            "Image": "training/webapp",
            "Volumes": {
                "/webapp": {}
            },
            "WorkingDir": "/opt/webapp",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "3f2f86ae96ec76c08e8841c7b8eb75e586000397a8acef9a0098ddf02f2c7da7",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5000/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "32768"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/3f2f86ae96ec",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "39693d7b104dab973e7ed27d16bb71b290be39aa83cce5e78f8b80de35309c5a",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.5",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:05",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "2872de41fddddc22420eecad253107e09a305f3512ade31d4172d3b80723d8b6",
                    "EndpointID": "39693d7b104dab973e7ed27d16bb71b290be39aa83cce5e78f8b80de35309c5a",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.5",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:05"
                }
            }
        }
    }
]
From the output above you can see that the data volume inside the vweb container is mounted at /webapp.
You can also mount a directory from the host as a data volume for the container:
docker run -d -P --name vvweb -v /src/webapp:/webapp training/webapp python app.py
In this command the value of the -v option is split by a colon: the part before the colon is a directory on the host (it must be an absolute path), and the part after it is the absolute path inside the container where the directory is mounted; whatever already exists at that path in the container is hidden by the mounted host directory.
A Docker data volume is read-write by default; you can make it read-only by appending :ro, as in the following command:
docker run -d -P --name web -v /src/webapp:/webapp:ro training/webapp python app.py
You can also create a dedicated data volume container so that multiple application containers can share its data. For example, create a data volume container for database data:
docker create -v /dbdata --name dbstore training/postgres /bin/true
This creates a container named dbstore. If other applications want to share this data volume container, specify it when starting their containers; for example, the following two containers both use dbstore as their shared data volume:
docker run -d --volumes-from dbstore --name db1 training/postgres
docker run -d --volumes-from dbstore --name db2 training/postgres
The containers db1 and db2 share the dbstore data volume container. To check the volume information of the two containers, run:
docker inspect db1
docker inspect db2
Comparing the Mounts section extracted from the two containers' output, the content is identical, as shown below:
"Mounts": [
{
"Name": "741950cc3ef8d901dc6cfdbebf8450082a0d22b07957f43bd0de73d05447b365",
"Source": "/var/lib/docker/volumes/741950cc3ef8d901dc6cfdbebf8450082a0d22b07957f43bd0de73d05447b365/_data",
"Destination": "/dbdata",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
As you can see, a container that exists purely to hold data volumes can be shared by multiple other application containers.
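A common use of such a data-only container is backing up its volume: a throwaway container mounts the volume via --volumes-from, also bind-mounts a host directory, and archives the volume into it. The following is a sketch based on the pattern in the official volume tutorial; the archive name backup.tar is arbitrary:
docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata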
Reference Links
- https://hub.docker.com/explore/
- https://docs.docker.com/engine/understanding-docker/
- https://docs.docker.com/engine/reference/glossary
- https://docs.docker.com/engine/installation/linux/centos/
- https://docs.docker.com/engine/installation/linux/linux-postinstall/
- https://docs.docker.com/engine/userguide/
- https://docs.docker.com/engine/userguide/intro/
- https://docs.docker.com/engine/getstarted/step_three/
- https://docs.docker.com/engine/examples/
- https://opskumu.gitbooks.io/docker/content/chapter6.html
- https://docs.docker.com/engine/getstarted/step_four/
- https://docs.docker.com/engine/reference/builder/
- https://docs.docker.com/engine/tutorials/networkingcontainers/
- https://docs.docker.com/engine/tutorials/dockervolumes/