Docker Swarm Cluster Setup and Testing

Docker Swarm

Documentation

https://docs.docker.com/engine/swarm/

References

https://qiita.com/Brutus/items/b3dfe5957294caa82669

https://docs.docker.jp/swarm/overview.html

Architecture overview

A manager node can both manage the cluster and run containers.

A worker node can only run containers.

[Architecture diagram]

Setup

Docker environment

OS settings

# Disable SELinux and firewalld

# Network settings
[root@vm1 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.100/24

[root@vm2 ~]# ip -br a | grep 0s8 | awk '{print $3}'
192.168.50.120/24
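The `awk` one-liner above keeps the `/24` prefix length. When the bare address is needed (for example for `--advertise-addr` later), the prefix can be stripped too. A minimal sketch, parsing a saved sample of `ip -br a` output (the interface name and addresses here are hypothetical):

```shell
# Hypothetical sample of `ip -br a` output; on a live host, pipe the command directly.
sample='lo               UNKNOWN        127.0.0.1/8 ::1/128
enp0s8           UP             192.168.50.100/24 fe80::a00:27ff:fe4e:66a1/64'

# Match the interface, take the first address column, drop the prefix length.
addr=$(printf '%s\n' "$sample" | awk '/0s8/ {sub(/\/.*/, "", $3); print $3}')
echo "$addr"
```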

Install Docker and Docker Compose

# docker
[root@vm1 ~]# cat install-docker.sh
yum remove docker* -y
rm -rf /var/lib/docker
yum -y install wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
docker --version
systemctl enable docker --now
docker run hello-world
[root@vm1 ~]# bash install-docker.sh

# Install docker-compose (optional)
[root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
[root@vm1 ~]# chmod +x /usr/local/bin/docker-compose

[root@vm1 ~]# docker -v
Docker version 20.10.12, build e91ed57

[root@vm1 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

[root@vm2 ~]# docker -v
Docker version 20.10.12, build e91ed57

[root@vm2 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

The docker0 bridge network (no changes needed; each host's bridge subnet is local and independent of the Swarm overlay)

[root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.80.0/24  192.168.80.1 map[]}]

[root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"
[{192.168.90.0/24  192.168.90.1 map[]}]

Building the Swarm cluster

Initialize node #1 (manager)

[root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100
Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# The join token can also be retrieved manually

[root@vm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
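For scripted worker joins, the full join command can be extracted from this output (alternatively, `docker swarm join-token -q worker` prints just the token). A sketch against a saved sample with a placeholder token:

```shell
# Saved sample of `docker swarm join-token worker` output; the token is a placeholder.
sample='To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xxxx 192.168.50.100:2377'

# Grab the join line and strip the leading indentation.
join_cmd=$(printf '%s\n' "$sample" | grep 'docker swarm join' | sed 's/^ *//')
echo "$join_cmd"
```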

# List the cluster nodes

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
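`docker node ls` also accepts `--format` with Go-template placeholders, which is handier for scripts than parsing the table. A sketch that finds the Leader from `{{.Hostname}} {{.ManagerStatus}}` pairs; the sample output is hardcoded here (at this point vm2 is still a worker, so its manager status is empty):

```shell
# What `docker node ls --format '{{.Hostname}} {{.ManagerStatus}}'` might print
# on this two-node cluster (hardcoded sample).
sample='vm1 Leader
vm2 '

# Print the hostname whose manager status is Leader.
leader=$(printf '%s\n' "$sample" | awk '$2=="Leader" {print $1}')
echo "$leader"
```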

Join the worker to the cluster

[root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377
This node joined a swarm as a worker.

List the cluster nodes

[root@vm2 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

# Workers cannot view cluster state; run the command on the manager (vm1) instead:
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

Add labels

Note: node labels are arbitrary key/value metadata used for scheduling (e.g. service placement constraints); they are not DNS host aliases.

[root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1
vm1

[root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2
vm2

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

Inspect the labels

[root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}"
map[name:swarm-master-1]

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]

The label has no effect here: as the attempts below show, `docker node promote` resolves node IDs and hostnames, not label values.

[root@vm1 ~]# docker node update --help
Usage:  docker node update [OPTIONS] NODE
Update a node
Options:
      --availability string   Availability of the node ("active"|"pause"|"drain")
      --label-add list        Add or update a node label (key=value)
      --label-rm list         Remove a node label if exists
      --role string           Role of the node ("worker"|"manager")

[root@vm1 ~]# docker node update --label-add name=master-2 vm2
vm2
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12

[root@vm1 ~]# docker node promote master-2
Error: No such node: master-2

[root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2
vm2
[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active                          20.10.12
[root@vm1 ~]# docker node promote master-2
Error: No such node: master-2

Promote the worker to manager

# Promoting by hostname (the actual node name) succeeds:
[root@vm1 ~]# docker node promote vm2
Node vm2 promoted to a manager in the swarm.

[root@vm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0 *   vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq     vm2        Ready     Active         Reachable        20.10.12

Node 2 can now view the cluster state

[root@vm2 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kdcrkd6sqteevq9jgy70fd0h0     vm1        Ready     Active         Leader           20.10.12
4hh92oj2meotbi0etnje15bzq *   vm2        Ready     Active         Reachable        20.10.12

Inspect node details

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"
map[HOSTNAME:master-2 name:master-2]
[root@vm1 ~]# docker node inspect vm2
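The Go-template output `map[key:value ...]` is compact but awkward to read. It can be flattened to one label per line; a sketch against the map string shown above:

```shell
# The label map as printed by `docker node inspect -f "{{.Spec.Labels}}"`.
labels='map[HOSTNAME:master-2 name:master-2]'

# Strip the map[...] wrapper and put each key:value pair on its own line.
parsed=$(printf '%s\n' "$labels" | sed 's/^map\[//; s/\]$//' | tr ' ' '\n')
echo "$parsed"
```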

Create the network

Use the overlay driver

[root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net
xywzrf7ftwenaxbu0zmewh183

Verify

[root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}"
{default map[] [{192.168.82.0/24  192.168.82.1 map[]}]}

Create a service and verify it

Create

[root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx
r4v6w094yxl370bynyzghh37a
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@vm1 ~]#

Check

[root@vm1 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
r4v6w094yxl3   nginx-cluster   replicated   3/3        nginx:latest   *:10080->80/tcp

[root@vm1 ~]# ss -ntl | grep 10080
LISTEN 0      128                *:10080            *:*

[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 7 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

# `docker port` prints nothing: the port is published at the service level by the ingress routing mesh, not mapped on the individual container.
[root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l
[root@vm1 ~]# echo $?
0
[root@vm1 ~]#

Access the service

[root@vm1 ~]# curl 192.168.50.100:10080
...
Welcome to nginx!
...

On node 2

[root@vm2 ~]# docker service  ls
ID             NAME            MODE         REPLICAS   IMAGE          PORTS
r4v6w094yxl3   nginx-cluster   replicated   3/3        nginx:latest   *:10080->80/tcp
[root@vm2 ~]# ss -ntl
State         Recv-Q        Send-Q               Local Address:Port                Peer Address:Port        Process
LISTEN        0             128                        0.0.0.0:22                       0.0.0.0:*
LISTEN        0             128                              *:10080                          *:*
LISTEN        0             128                              *:2377                           *:*
LISTEN        0             128                              *:7946                           *:*
LISTEN        0             128                           [::]:22                          [::]:*
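The listing shows the Swarm control ports: 2377/tcp (cluster management) and 7946/tcp (node gossip; 7946/udp and the 4789/udp overlay VXLAN port don't appear in `ss -ntl`, which lists TCP only). A sketch that pulls the listening TCP ports out of saved `ss -ntl` output to confirm the expected ones are open (sample lines hardcoded, header omitted):

```shell
# Saved sample of `ss -ntl` output (header and IPv6 rows omitted).
sample='LISTEN 0 128 0.0.0.0:22    0.0.0.0:*
LISTEN 0 128       *:10080       *:*
LISTEN 0 128       *:2377        *:*
LISTEN 0 128       *:7946        *:*'

# Field 4 is Local Address:Port; take the part after the last colon.
ports=$(printf '%s\n' "$sample" | awk '{n=split($4,a,":"); print a[n]}' | sort -n | xargs)
echo "$ports"
```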


[root@vm2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
6b1e246bdc34   nginx:latest   "/docker-entrypoint.…"   8 minutes ago   Up 8 minutes   80/tcp    nginx-cluster.3.yofvioldzci3k4geve7lykyrs
0d6709372322   nginx:latest   "/docker-entrypoint.…"   8 minutes ago   Up 8 minutes   80/tcp    nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga
[root@vm2 ~]#

# The remaining replica runs on node 1
[root@vm1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   9 minutes ago   Up 9 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

The overlay network is also visible on node 2

[root@vm2 ~]# docker network ls | grep swarm-net
xywzrf7ftwen   swarm-net         overlay   swarm

Access node 2's IP to confirm the routing mesh forwards requests

[root@vm2 ~]# curl 192.168.50.120:10080
...
Welcome to nginx!