Container data volumes come in two flavors: local storage and shared storage. With a local volume, a failure of the machine holding the volume can, in the worst case, lose the data for good.
Portworx (px-dev) keeps multiple replicas of the data across the nodes, which solves that problem, and it also pools the disks of all the servers into one shared storage pool.
The experiment below: on a Docker Swarm, create a service with a data volume mounted, simulate a node failure (power off the server the service is running on), then check whether the service is still running and whether the data is still there.
I. Environment
1. Docker cluster:
Create three VMs in VirtualBox, each with an extra 3 GB virtual disk attached:
a. master  public: 192.168.5.172  private: 10.2.2.2
b. node1   public: 192.168.5.173  private: 10.2.2.3
c. node2   public: 192.168.5.174  private: 10.2.2.4
2. OS: CentOS 7.2 x86_64
3. Docker version: 1.12.3
II. Set up etcd
On the master:
1. Download a recent release:
[root@master ~]# wget https://github.com/coreos/etcd/releases/download/v3.0.14/etcd-v3.0.14-linux-amd64.tar.gz
2. Extract it and start a single-node etcd cluster:
[root@master ~]# tar -xzvf ./etcd-v3.0.14-linux-amd64.tar.gz
[root@master ~]# cd etcd-v3.0.14-linux-amd64
[root@master etcd-v3.0.14-linux-amd64]# ./etcd --name infra0 --initial-advertise-peer-urls http://10.2.2.2:2380 --listen-peer-urls http://10.2.2.2:2380 --listen-client-urls http://10.2.2.2:2379,http://127.0.0.1:2379 --advertise-client-urls http://10.2.2.2:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://10.2.2.2:2380 --initial-cluster-state new
3. Check that etcd works:
[root@master etcd-v3.0.14-linux-amd64]# export ETCDCTL_API=3
[root@master etcd-v3.0.14-linux-amd64]# ./etcdctl put foo bar
OK
[root@master etcd-v3.0.14-linux-amd64]# ./etcdctl --endpoints=[10.2.2.2:2379] get foo
foo
bar
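px-dev will not start without a reachable kvdb, so before installing it on the other nodes it can be handy to wait until etcd answers from the network. A small helper sketch, assuming etcd's /health endpoint on the client port (the default endpoint URL below is this lab's master address):

```shell
# wait_for_etcd: poll etcd's /health endpoint until it reports healthy,
# or give up after ~30 seconds. The default endpoint is an assumption
# matching this lab's master node.
wait_for_etcd() {
  endpoint="${1:-http://10.2.2.2:2379}"
  i=0
  while [ "$i" -lt 30 ]; do
    if curl -fsS "$endpoint/health" 2>/dev/null | grep -q '"health"'; then
      echo "etcd is healthy at $endpoint"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "etcd did not become healthy at $endpoint" >&2
  return 1
}
```

Run `wait_for_etcd http://10.2.2.2:2379` on each node before starting the px-dev container there.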
III. Create the Docker Swarm cluster
1. On the master:
[root@master ~]# docker swarm init --advertise-addr 10.2.2.2
Swarm initialized: current node (dsc27q1loiyq96zuuxcfz8ax0) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2cpw3vkhjkm66hacnmuc6cflzp5xrtmnwu7y3jv1zbqcize4ag-dwmys6qun4zwlhxx9o533nocu \
10.2.2.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
2. On node1 and node2, join the swarm:
[root@node1 ~]# docker swarm join \
> --token SWMTKN-1-2cpw3vkhjkm66hacnmuc6cflzp5xrtmnwu7y3jv1zbqcize4ag-dwmys6qun4zwlhxx9o533nocu \
> 10.2.2.2:2377
This node joined a swarm as a worker.
3. On the master, check the cluster state:
[root@master ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
14diukpd9nyx654v2bn3gsg3p node2 Ready Active
4wwjunj89u8dpulckr914qj2f node1 Ready Active
dsc27q1loiyq96zuuxcfz8ax0 * master Ready Active Leader
IV. Run px-dev on Docker
1. On the master:
a. Check the storage devices:
[root@master ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 7.5G 0 part
├─centos-root 253:0 0 6.7G 0 lvm /
└─centos-swap 253:1 0 820M 0 lvm [SWAP]
sdb 8:16 0 3G 0 disk
b. Create the configuration directory:
[root@master ~]# mkdir -p /etc/pwx
c. Download the sample configuration file (note the -P flag; without it, wget would treat /etc/pwx/ as a second URL):
[root@master ~]# wget -P /etc/pwx/ https://raw.githubusercontent.com/portworx/px-dev/master/conf/config.json
d. Edit config.json:
[root@master ~]# cat /etc/pwx/config.json
{
  "clusterid": "7ac2ed6f-7e4e-4e1d-8e8c-3a6df1fb61a5",
  "kvdb": [
    "etcd:http://10.2.2.2:2379"
  ],
  "storage": {
    "devices": [
      "/dev/sdb"
    ]
  }
}
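The clusterid must be identical on every node of the same storage cluster, and a brand-new cluster needs a fresh UUID. A sketch that generates one and renders the file (the /tmp/pwx-demo path is just for illustration; on a real node write to /etc/pwx, and adjust the etcd address and device to your environment):

```shell
# Generate a fresh clusterid and render config.json for px-dev.
# PWX_DIR defaults to a demo path; use /etc/pwx on a real node.
PWX_DIR="${PWX_DIR:-/tmp/pwx-demo}"
mkdir -p "$PWX_DIR"
CLUSTER_ID="$(cat /proc/sys/kernel/random/uuid)"   # any UUID generator works
cat > "$PWX_DIR/config.json" <<EOF
{
  "clusterid": "$CLUSTER_ID",
  "kvdb": [
    "etcd:http://10.2.2.2:2379"
  ],
  "storage": {
    "devices": [
      "/dev/sdb"
    ]
  }
}
EOF
# Sanity-check that the result is valid JSON before starting the container
python3 -m json.tool "$PWX_DIR/config.json" > /dev/null && echo "config.json OK"
```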
e. Run the px-dev container:
[root@master ~]# docker pull portworx/px-dev
[root@master ~]# docker run --restart=always --name px -d --net=host \
--privileged=true \
-v /run/docker/plugins:/run/docker/plugins \
-v /var/lib/osd:/var/lib/osd:shared \
-v /dev:/dev \
-v /etc/pwx:/etc/pwx \
-v /opt/pwx/bin:/export_bin:shared \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/cores:/var/cores \
-v /usr/src:/usr/src \
--ipc=host \
portworx/px-dev
867fd9a4485afd3e1eb1789525f12b65f0176e457871ad4da67578134f684f39
f. Check the status:
[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
867fd9a4485a portworx/px-dev "/docker-entry-point." 3 minutes ago Up 3 minutes px
[root@master ~]# docker exec -it px sh
sh-4.2# /opt/pwx/bin/pxctl status
Status: PX is operational
Node ID: 3ac5bb0c-b15d-42eb-a614-4e1e611b0cb2
IP: 192.168.5.172
Local Storage Pool: 1 device
Device  Path      Media Type               Size     Last-Scan
1       /dev/sdb  STORAGE_MEDIUM_MAGNETIC  3.0 GiB  08 Nov 16 13:56 UTC
total   -                                  3.0 GiB
Cluster Summary
  Cluster ID: 7ac2ed6f-7e4e-4e1d-8e8c-3a6df1fb61a5
  Node IP: 192.168.5.172 - Capacity: 17 MiB/3.0 GiB Online (This node)
Global Storage Pool
  Total Used     : 17 MiB
  Total Capacity : 3.0 GiB
2. Run px-dev on node1 and node2 (same procedure as on the master: repeat steps a through f).
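Repeating steps a through f on every node by hand is error-prone, so the per-node setup can be collected into one script. A sketch that only writes and syntax-checks such a script (the node-setup.sh name is hypothetical; it assumes /dev/sdb as the spare disk and that /etc/pwx/config.json, with the shared clusterid, has already been copied to the node):

```shell
# Write a hypothetical per-node setup script mirroring steps a-f above,
# then syntax-check it. Run it as root on each node once config.json
# (with the cluster-wide clusterid) is in place under /etc/pwx.
cat > /tmp/node-setup.sh <<'EOF'
#!/bin/sh
set -e
mkdir -p /etc/pwx /var/lib/osd /opt/pwx/bin /var/cores
docker pull portworx/px-dev
docker run --restart=always --name px -d --net=host \
  --privileged=true \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /var/lib/osd:/var/lib/osd:shared \
  -v /dev:/dev \
  -v /etc/pwx:/etc/pwx \
  -v /opt/pwx/bin:/export_bin:shared \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/cores:/var/cores \
  -v /usr/src:/usr/src \
  --ipc=host \
  portworx/px-dev
EOF
sh -n /tmp/node-setup.sh && echo "node-setup.sh syntax OK"
```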
3. Check the px-dev status on the master again; node1 and node2 have now joined the storage cluster:
[root@master ~]# docker exec -it px sh
sh-4.2# /opt/pwx/bin/pxctl status
Status: PX is operational
Node ID: 3ac5bb0c-b15d-42eb-a614-4e1e611b0cb2
IP: 192.168.5.172
Local Storage Pool: 1 device
Device  Path      Media Type               Size     Last-Scan
1       /dev/sdb  STORAGE_MEDIUM_MAGNETIC  3.0 GiB  08 Nov 16 14:16 UTC
total   -                                  3.0 GiB
Cluster Summary
  Cluster ID: 7ac2ed6f-7e4e-4e1d-8e8c-3a6df1fb61a5
  Node IP: 192.168.5.172 - Capacity: 17 MiB/3.0 GiB Online (This node)
  Node IP: 192.168.5.173 - Capacity: 17 MiB/3.0 GiB Online
  Node IP: 192.168.5.174 - Capacity: 17 MiB/3.0 GiB Online
Global Storage Pool
  Total Used     : 51 MiB
  Total Capacity : 9.0 GiB
V. Verify data availability
1. Create a shared volume:
[root@master ~]# docker volume create -d pxd --name share-vol-1
2. Create a service that mounts it:
[root@master ~]# docker service create --name test1 --mount type=volume,source=share-vol-1,destination=/data,volume-label="color=red",volume-label="shape=round" busybox:latest ping www.baidu.com
dn33f5wr8l8np7s0tk1mg2f2z
[root@master ~]# docker service ps test1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
9h3l0cppxufgq0t2sgka7g5aj test1.1 busybox:latest node2 Running Preparing 41 seconds ago
3. Write some data to the share-vol-1 volume (on node2, where the task is running):
[root@node2 ~]# docker exec test1.1.9h3l0cppxufgq0t2sgka7g5aj touch /data/test.log
4. Power off node2, then check the node list:
[root@master ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
14diukpd9nyx654v2bn3gsg3p node2 Down Active
4wwjunj89u8dpulckr914qj2f node1 Ready Active
dsc27q1loiyq96zuuxcfz8ax0 * master Ready Active Leader
node2's status is now Down.
5. Check the service test1 again:
[root@master ~]# docker service ps test1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
cs8p4knjarn5e8il6p0k028xe test1.1 busybox:latest master Running Running 28 seconds ago
9h3l0cppxufgq0t2sgka7g5aj \_ test1.1 busybox:latest node2 Shutdown Running 9 minutes ago
test1 has been rescheduled and is now running on the master.
6. Check whether the data is still there:
[root@master ~]# docker exec test1.1.cs8p4knjarn5e8il6p0k028xe ls /data
test.log
test.log is still there; nothing was lost.
Reference:
http://docs.portworx.com/run-with-docker.html