10 Docker Networking

  • Understanding docker0
  • The --link flag
  • Custom networks
  • Connecting networks
  • Hands-on: deploying a Redis cluster

Understanding docker0

Clear out the whole environment first, to make the networking examples easier to follow

# Remove all containers
[hekai@localhost ~]$ docker rm -f $(docker ps -aq)

# Remove all images
[hekai@localhost ~]$ docker rmi -f $(docker images -aq)

Three networks

# Question: how does Docker handle network access for containers?
[hekai@localhost ~]$ docker run -d -P --name tomcat01 tomcat

# Check the container's internal address
[hekai@localhost ~]$ docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
131: eth0@if132: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[hekai@localhost ~]$

# Can the Linux host ping into the container?
[hekai@localhost ~]$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.112 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.073 ms
# Clearly it can!

# But can two containers ping each other?

How it works

1. Every time a Docker container starts, Docker assigns it an IP address. As soon as Docker is installed, the host gains a docker0 interface running in bridge mode; the underlying mechanism is veth-pair technology!

2. Check the host's IP addresses again and you will find a new vethxxx interface

# Notice the container side is interface 131 and the host side is 132: the interfaces a container brings along always come in pairs
# A veth-pair is a pair of virtual device interfaces that are always created together: one end plugs into the protocol stack, and the two ends connect to each other
# The veth-pair acts as a bridge between virtual network devices: OpenStack, links between Docker containers,
# and OVS connections are all built on veth-pair technology

3. Test whether two containers can ping each other

# First start another container, tomcat02
docker run -d -P --name tomcat02 tomcat

# Try to ping
[hekai@localhost ~]$ docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.124 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.082 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 22ms
rtt min/avg/max/mdev = 0.082/0.103/0.124/0.021 ms

# Conclusion: containers can ping each other; tomcat01 and tomcat02 share the same "router", docker0

When no network is specified, every container is routed through docker0, and Docker assigns each one a default IP

Summary

Docker uses Linux bridging: the host carries a network bridge for Docker containers named docker0

All network interfaces in Docker are virtual, and virtual interfaces forward traffic efficiently

When a container is deleted, its pair of virtual interfaces disappears with it (docker0 itself remains)

Looking inside the networks

[hekai@localhost ~]$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1d62f1dd42f9   bridge    bridge    local
d3c7ee72696e   host      host      local
a8f0c9a8baa0   none      null      local
[hekai@localhost ~]$ docker network inspect 1d62f1dd42f9

The --link flag

Think about it: we wrote a microservice with database url=ip hard-coded. The database IP changes while the project keeps running, and we do not want to restart it. Could we handle this by accessing the container by name instead?

springcloud feign = service name

a MySQL service may sit behind many changing IPs

# Pinging by name directly does not work
[hekai@localhost ~]$  docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known

# How do we solve this?
# Start a tomcat03 with --link
[hekai@localhost ~]$ docker run -d -P --name tomcat03 --link tomcat02 tomcat
6a37229c4b6a8413231480376d053240fca653c38ab9204a68dc4c68d426de0b
[hekai@localhost ~]$

# Ping tomcat02 from tomcat03
[hekai@localhost ~]$ docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.079 ms
^C
--- tomcat02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 32ms
rtt min/avg/max/mdev = 0.079/0.097/0.115/0.018 ms
[hekai@localhost ~]$
# It works

# Does the reverse direction work?
[hekai@localhost ~]$ docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
# Clearly not, because nothing was configured on tomcat02's side

Digging in: inspect


But running docker inspect on tomcat03 prints so much information that it is hard to see how the link to tomcat02 works

Has tomcat03 perhaps stored tomcat02's address in a local configuration?

# Check the hosts file: the secret behind --link revealed!
[hekai@localhost ~]$ docker exec -i tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      tomcat02 4ab5dad3ff46
172.17.0.4      6a37229c4b6a
# A line was added: 172.17.0.3      tomcat02 4ab5dad3ff46
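What --link does can be mimicked outside Docker: name resolution is nothing more than a static line in the linked container's hosts file. A minimal bash sketch (a temp file stands in for /etc/hosts; the values are copied from the output above):

```shell
# Simulate the --link mechanism: resolution is a plain lookup in a static
# hosts file, which is why it is one-way and goes stale if the IP changes.
hosts=$(mktemp)
cat << 'EOF' > "$hosts"
127.0.0.1       localhost
172.17.0.3      tomcat02 4ab5dad3ff46
EOF
# "resolving" tomcat02 is just a text search over that file:
awk '$2 == "tomcat02" { print $1 }' "$hosts"   # prints 172.17.0.3
```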

Using --link is no longer recommended in modern Docker!
Use custom networks instead of docker0.
The problem with docker0: it does not support access by container name.

Custom networks

List all Docker networks

Network modes
bridge: bridged mode (the default; networks you create yourself also use bridge)
none: no network configured
host: share the host's network stack
container: join another container's network directly (very limited, rarely used)

Testing

# When starting a container, --net bridge is the implicit default,
# so these two commands are equivalent
[hekai@localhost ~]$ docker run -d -P --name tomcat01 tomcat
[hekai@localhost ~]$ docker run -d -P --name tomcat01 --net bridge tomcat

# docker0 characteristics: it is the default, container names cannot be used for access, and --link is needed to bridge that gap

# We can create our own network: docker network create
[hekai@localhost ~]$ docker network create --help

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which to copy the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment

# --subnet: the network's address range in CIDR notation (not a bare netmask); always set it explicitly (and make sure the gateway matches the subnet: a 172.x gateway with a 192.x subnet, or a range overlapping existing networks, can break connectivity later!!)

[hekai@localhost ~]$ docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
5590469f454a0fada3e9614bd541655988950929a77c4a39069e0e63d33e313a
[hekai@localhost ~]$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1d62f1dd42f9   bridge    bridge    local
d3c7ee72696e   host      host      local
5590469f454a   mynet     bridge    local
a8f0c9a8baa0   none      null      local
[hekai@localhost ~]$
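The overlap warning above can be checked mechanically before creating a network. A bash sketch (the helpers ip_to_int and in_subnet are my own names, not Docker features) that tests whether an address falls inside a CIDR range:

```shell
# Check whether an IPv4 address falls inside a CIDR range, to spot
# overlaps between a planned --subnet and existing networks.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_subnet() {  # usage: in_subnet IP CIDR; exit 0 if IP is inside CIDR
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}
in_subnet 192.168.0.3 192.168.0.0/16 && echo "inside mynet"
in_subnet 192.168.0.3 172.17.0.0/16 || echo "does not clash with docker0"
```

Since mynet uses 192.168.0.0/16 and docker0 uses 172.17.0.0/16, the two ranges do not overlap, which is exactly what we want.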

# Inspect mynet's configuration
[hekai@localhost ~]$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "5590469f454a0fada3e9614bd541655988950929a77c4a39069e0e63d33e313a",
        "Created": "2021-03-16T12:07:37.248233125+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
# Our own network is now configured
# Start two containers on it
[hekai@localhost ~]$ docker run -d -P --name tomcat-net-01 --net mynet tomcat
b32fc7ec5a6e1319e8e7910632c80a57eb5927f93313f1d4a074fd9023aada81
[hekai@localhost ~]$ docker run -d -P --name tomcat-net-02 --net mynet tomcat
77a9ca3cdc097796be715c0e6440c63db7bfee37e6a119086b140dece46a8b1c
[hekai@localhost ~]$

[hekai@localhost ~]$ docker inspect mynet
[
    {
        "Name": "mynet",
        "Id": "5590469f454a0fada3e9614bd541655988950929a77c4a39069e0e63d33e313a",
        "Created": "2021-03-16T12:07:37.248233125+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "77a9ca3cdc097796be715c0e6440c63db7bfee37e6a119086b140dece46a8b1c": {
                "Name": "tomcat-net-02",
                "EndpointID": "0f29727d7b36b8757bf9d40350d12fe3d6455297d7983b3bf01120aa2a7bd577",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "b32fc7ec5a6e1319e8e7910632c80a57eb5927f93313f1d4a074fd9023aada81": {
                "Name": "tomcat-net-01",
                "EndpointID": "1955652ae224c426c79771670a396750e570ce56792640faf0664eda30ab70e9",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

# Test ping again
[hekai@localhost ~]$ docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.088 ms
^C
--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 27ms
rtt min/avg/max/mdev = 0.088/0.109/0.130/0.021 ms

[hekai@localhost ~]$ docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.091 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=4 ttl=64 time=0.095 ms
^C
--- tomcat-net-02 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 59ms
rtt min/avg/max/mdev = 0.084/0.093/0.102/0.006 ms
# Even without --link, we can ping by container name!

On a custom network, Docker maintains the name-to-IP mapping for us; this is the recommended way to use Docker networking

Benefits
redis # put different clusters on different networks to keep each cluster isolated and healthy
mysql # put different clusters on different networks to keep each cluster isolated and healthy

Connecting networks

[hekai@localhost ~]$ docker exec -it tomcat01 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
[hekai@localhost ~]$
# A direct ping obviously fails
# Try bridging tomcat01 to tomcat-net-01
[hekai@localhost ~]$ docker network connect mynet tomcat01
[hekai@localhost ~]$ docker inspect mynet
[
    {
        "Name": "mynet",
        "Id": "5590469f454a0fada3e9614bd541655988950929a77c4a39069e0e63d33e313a",
        "Created": "2021-03-16T12:07:37.248233125+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "51e200a1d99e3c348b22d1231778ee417f9c092a4ac46459f505f19b142c9057": {
                "Name": "tomcat01",
                "EndpointID": "ea5ab2336fd3240bb43ac9defdd68c7a45fdebb49af2d51dfedeee15c74efe8e",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "77a9ca3cdc097796be715c0e6440c63db7bfee37e6a119086b140dece46a8b1c": {
                "Name": "tomcat-net-02",
                "EndpointID": "0f29727d7b36b8757bf9d40350d12fe3d6455297d7983b3bf01120aa2a7bd577",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "b32fc7ec5a6e1319e8e7910632c80a57eb5927f93313f1d4a074fd9023aada81": {
                "Name": "tomcat-net-01",
                "EndpointID": "1955652ae224c426c79771670a396750e570ce56792640faf0664eda30ab70e9",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
[hekai@localhost ~]$
# After connecting, tomcat01 has been attached to the mynet network as well

[hekai@localhost ~]$ docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.114 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.102 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.103 ms
^C
--- tomcat-net-01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 38ms
rtt min/avg/max/mdev = 0.102/0.106/0.114/0.010 ms
# tomcat01 above is now connected, while tomcat02 below is not
[hekai@localhost ~]$ docker exec -it tomcat02 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
[hekai@localhost ~]$

Conclusion: to reach containers on another network, connect them with docker network connect!

Hands-on: deploying a Redis cluster

# Create a dedicated network for the cluster

# First, clean up all containers
[hekai@localhost ~]$ docker rm -f $(docker ps -aq)
3e76cf8b4dac
51e200a1d99e
77a9ca3cdc09
b32fc7ec5a6e
# Create a redis network
[hekai@localhost ~]$ docker network create redis --subnet 172.16.0.0/16
f82a6409056fa1b3f3a5c96d1a03b6b6195a18593552ab744169e28d625cca92
[hekai@localhost ~]$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
1d62f1dd42f9   bridge    bridge    local
d3c7ee72696e   host      host      local
5590469f454a   mynet     bridge    local
a8f0c9a8baa0   none      null      local
f82a6409056f   redis     bridge    local
[hekai@localhost ~]$ docker network inspect redis
[
    {
        "Name": "redis",
        "Id": "f82a6409056fa1b3f3a5c96d1a03b6b6195a18593552ab744169e28d625cca92",
        "Created": "2021-03-16T14:45:22.229751861+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.16.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
# Generate six Redis configurations in a loop (run as root)
for port in $(seq 1 6);\
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.16.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
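Before writing under /mydata as root, the loop can be dry-run against a temp directory to confirm the per-node substitution works; an abbreviated sketch (only the config lines that vary per node):

```shell
# Dry run of the config loop above under a temp dir (abbreviated config):
# each node must announce its own static IP, 172.16.0.11 .. 172.16.0.16.
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "${base}/redis/node-${port}/conf"
  cat << EOF > "${base}/redis/node-${port}/conf/redis.conf"
port 6379
cluster-enabled yes
cluster-announce-ip 172.16.0.1${port}
EOF
done
grep -h cluster-announce-ip "${base}"/redis/node-*/conf/redis.conf
# prints cluster-announce-ip 172.16.0.11 through 172.16.0.16, one per node
```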

# Start the first node of the cluster
docker run -p 6371:6379 -p 16371:16379 --name redis-1 -v /mydata/redis/node-1/data:/data -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.16.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

# Start the rest in a loop
for port in $(seq 2 6);\
do \
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} -v /mydata/redis/node-${port}/data:/data -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.16.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
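To see exactly what the two run commands parameterize, here is a harmless preview (no containers are started) of the name/port/IP plan derived from the single loop variable:

```shell
# Preview the container plan the loops above create: name, host ports, and
# static IP are all derived from the loop variable by string concatenation.
plan=$(for port in $(seq 1 6); do
  printf 'redis-%s maps host 637%s/1637%s, ip 172.16.0.1%s\n' \
    "$port" "$port" "$port" "$port"
done)
echo "$plan"
# first line: redis-1 maps host 6371/16371, ip 172.16.0.11
```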

root@X8DT6-ubuntu:/mydata/redis/node-1/conf# for port in $(seq 2 6);\
> do \
> docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} -v /mydata/redis/node-${port}/data:/data -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.16.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
> done
ef0b410eea0a5d0b2dcf1c2b31b0a10f18eece31e57a1a4404965a671a97a195
98412d407ae720649bffe462b8f9e8a3a089001e56d3b3bf4b574331a35a9012
db977fd9874126a202374fdeeffafdabf16e047662b517fc80f10deb9e279efa
b8da639aa947337b2819d8ecd15051ca5e8922812baf5aa6a2c951d13a5cf658
623a4a69e8f97262156b78b785fbe94906e4cf254c0633c8f3280884856a836a
root@X8DT6-ubuntu:/mydata/redis/node-1/conf#

# All six are up
root@X8DT6-ubuntu:/mydata/redis/node-1/conf# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                              NAMES
623a4a69e8f9   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   48 seconds ago       Up 46 seconds       0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
b8da639aa947   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   50 seconds ago       Up 48 seconds       0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
db977fd98741   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   51 seconds ago       Up 49 seconds       0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
98412d407ae7   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   52 seconds ago       Up 50 seconds       0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
ef0b410eea0a   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   54 seconds ago       Up 52 seconds       0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
2efafd9aed20   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1
root@X8DT6-ubuntu:/mydata/redis/node-1/conf#

# Enter redis-1
root@X8DT6-ubuntu:/mydata/redis/node-1/conf# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof  nodes.conf
/data # pwd
/data
/data #

# Create the cluster
/data # redis-cli --cluster create 172.16.0.11:6379 172.16.0.12:6379 172.16.0.13:6379 172.16.0.14:6379 172.16.0.15:6379 172.16.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.16.0.15:6379 to 172.16.0.11:6379
Adding replica 172.16.0.16:6379 to 172.16.0.12:6379
Adding replica 172.16.0.14:6379 to 172.16.0.13:6379
M: 56648fb39aa6568c1fe8c71ebede683ed6dd47e3 172.16.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 49255ecdc8ba9a02eb542979ea7d08aeda9e0013 172.16.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: a5fb7a42122ec6eedded0293dc2ffa551add96d4 172.16.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: b7282a9754d2079c57c8550aafb1ebd989d43a4e 172.16.0.14:6379
   replicates a5fb7a42122ec6eedded0293dc2ffa551add96d4
S: fa55de0b00dc485074fd8f7496d5761c32fc1e6e 172.16.0.15:6379
   replicates 56648fb39aa6568c1fe8c71ebede683ed6dd47e3
S: 698771267efc489c4a0d7b85cf5a7267a66e0cbc 172.16.0.16:6379
   replicates 49255ecdc8ba9a02eb542979ea7d08aeda9e0013
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.16.0.11:6379)
M: 56648fb39aa6568c1fe8c71ebede683ed6dd47e3 172.16.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 49255ecdc8ba9a02eb542979ea7d08aeda9e0013 172.16.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fa55de0b00dc485074fd8f7496d5761c32fc1e6e 172.16.0.15:6379
   slots: (0 slots) slave
   replicates 56648fb39aa6568c1fe8c71ebede683ed6dd47e3
M: a5fb7a42122ec6eedded0293dc2ffa551add96d4 172.16.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 698771267efc489c4a0d7b85cf5a7267a66e0cbc 172.16.0.16:6379
   slots: (0 slots) slave
   replicates 49255ecdc8ba9a02eb542979ea7d08aeda9e0013
S: b7282a9754d2079c57c8550aafb1ebd989d43a4e 172.16.0.14:6379
   slots: (0 slots) slave
   replicates a5fb7a42122ec6eedded0293dc2ffa551add96d4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# Cluster created

# Connect to the cluster in cluster mode: redis-cli -c
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:135
cluster_stats_messages_pong_sent:124
cluster_stats_messages_sent:259
cluster_stats_messages_ping_received:119
cluster_stats_messages_pong_received:135
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:259

127.0.0.1:6379> cluster nodes
49255ecdc8ba9a02eb542979ea7d08aeda9e0013 172.16.0.12:6379@16379 master - 0 1615883052570 2 connected 5461-10922
fa55de0b00dc485074fd8f7496d5761c32fc1e6e 172.16.0.15:6379@16379 slave 56648fb39aa6568c1fe8c71ebede683ed6dd47e3 0 1615883052000 5 connected
a5fb7a42122ec6eedded0293dc2ffa551add96d4 172.16.0.13:6379@16379 master - 0 1615883051000 3 connected 10923-16383
698771267efc489c4a0d7b85cf5a7267a66e0cbc 172.16.0.16:6379@16379 slave 49255ecdc8ba9a02eb542979ea7d08aeda9e0013 0 1615883051000 6 connected
56648fb39aa6568c1fe8c71ebede683ed6dd47e3 172.16.0.11:6379@16379 myself,master - 0 1615883050000 1 connected 0-5460
b7282a9754d2079c57c8550aafb1ebd989d43a4e 172.16.0.14:6379@16379 slave a5fb7a42122ec6eedded0293dc2ffa551add96d4 0 1615883051000 4 connected
127.0.0.1:6379>

# Set a value
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.16.0.13:6379
OK
# The write was redirected to .13, i.e. redis-3; now open another terminal and stop redis-3
root@X8DT6-ubuntu:/mydata/redis# docker stop redis-3
redis-3
root@X8DT6-ubuntu:/mydata/redis#

# get a now fetches the value from .14, which was redis-3's replica
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.16.0.14:6379
"b"
172.16.0.14:6379>
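The slot number 15495 above is not arbitrary: Redis Cluster maps every key to CRC16(key) mod 16384 (CRC16-XMODEM). A bash sketch of the computation (slot_for_key is my own helper name, and it skips Redis's {hash tag} extraction):

```shell
# Compute the Redis Cluster hash slot for a key: CRC16-XMODEM(key) % 16384.
# (Simplified: real Redis first extracts a {hash tag} if the key has one.)
slot_for_key() {
  local key=$1 crc=0 i b bit
  for ((i = 0; i < ${#key}; i++)); do
    printf -v b '%d' "'${key:i:1}"          # byte value of the character
    crc=$(( (crc ^ (b << 8)) & 0xFFFF ))
    for bit in 1 2 3 4 5 6 7 8; do          # MSB-first, polynomial 0x1021
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}
slot_for_key a   # prints 15495, matching "Redirected to slot [15495]" above
```

Slot 15495 falls in the 10923-16383 range, which is why the key landed on the third master (.13) and failed over to its replica (.14).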

# Run cluster nodes again: .14 has been promoted to master
172.16.0.14:6379> cluster nodes
a5fb7a42122ec6eedded0293dc2ffa551add96d4 172.16.0.13:6379@16379 master,fail - 1615883321508 1615883319000 3 connected
fa55de0b00dc485074fd8f7496d5761c32fc1e6e 172.16.0.15:6379@16379 slave 56648fb39aa6568c1fe8c71ebede683ed6dd47e3 0 1615883507410 5 connected
49255ecdc8ba9a02eb542979ea7d08aeda9e0013 172.16.0.12:6379@16379 master - 0 1615883507511 2 connected 5461-10922
698771267efc489c4a0d7b85cf5a7267a66e0cbc 172.16.0.16:6379@16379 slave 49255ecdc8ba9a02eb542979ea7d08aeda9e0013 0 1615883506406 2 connected
b7282a9754d2079c57c8550aafb1ebd989d43a4e 172.16.0.14:6379@16379 myself,master - 0 1615883506000 7 connected 10923-16383
56648fb39aa6568c1fe8c71ebede683ed6dd47e3 172.16.0.11:6379@16379 master - 0 1615883506000 1 connected 0-5460

Once you adopt Docker, every technology gradually becomes simpler!
