Docker Networking Models Explained

Table of Contents

1. Docker Networking Basics

1.1 Port Mapping

1.2 Port Exposure

1.3 Container Linking

2. Docker Network Modes

2.1 Host Mode

2.2 Container Mode

2.3 None Mode

2.4 Bridge Mode

2.5 Overlay Mode


Networking is what brings the Docker ecosystem to life: without a solid container networking story, Docker would not have the competitive position it enjoys today. Docker's early networking solutions were far from ideal, but after several years of development, and with many cloud computing providers joining in, SDN solutions have sprung up in large numbers.

Install docker-ce:

[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@localhost ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root@localhost ~]# ls /etc/yum.repos.d/
backup  Centos-aliyun.repo  CentOS-Media.repo  docker-ce.repo

[root@localhost ~]# yum -y install docker-ce
[root@localhost ~]# systemctl start docker
[root@localhost ~]# systemctl enable docker

Configure the Alibaba Cloud registry mirror (accelerator):

[root@localhost ~]# cat << END > /etc/docker/daemon.json
{
        "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]
}
END
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
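
To confirm the mirror is in effect, docker info lists it (a quick sanity check; exact output formatting may vary by version):

[root@localhost ~]# docker info | grep -A1 'Registry Mirrors'
 Registry Mirrors:
  https://nyakyfun.mirror.aliyuncs.com/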

[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:27:04 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:25:42 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

[root@docker ~]# docker pull nginx
[root@docker ~]# docker pull busybox
[root@docker ~]# docker pull mysql

Enable IP forwarding, which container port publishing relies on:

[root@docker ~]# vim /etc/sysctl.conf 
net.ipv4.ip_forward = 1
[root@docker ~]# sysctl -p

1. Docker Networking Basics

On a single node, Docker currently provides two networking services: mapping container ports onto the host, and linking containers to one another.

1.1 Port Mapping

By default, services inside a container cannot be reached from outside the host; the appropriate options must be passed at startup to publish container ports to the outside world.

When a web service runs in a Docker container, the application's port inside the container must be mapped to a port on the local host. Accessing the specified host port is then equivalent to accessing the web service port inside the container.

1. With the -P option, Docker maps a random host port to the container's exposed port:

[root@localhost ~]# docker run -d -P --name test1 nginx

372d4b33a7c4e29c507a636c8ddcb9a59c1aa9780d786dc67ad854a7d9ed1812

Use docker port to check the mapping:

[root@localhost ~]# docker port test1

80/tcp -> 0.0.0.0:32768
80/tcp -> [::]:32768
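
A quick way to verify the mapping from the host is to request the published port (32768 here comes from the docker port output above):

[root@localhost ~]# curl -I http://127.0.0.1:32768/

An HTTP/1.1 200 OK response from Nginx confirms that the host port reaches the container.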

Use docker logs to view the Nginx access log:


[root@localhost ~]# docker logs test1

192.168.2.1 - - [04/Aug/2023:05:49:03 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.1901.188" "-"

2023/08/04 05:49:03 [error] 31#31: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 192.168.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.2.118:32768", referrer: "http://192.168.2.118:32768/"

192.168.2.1 - - [04/Aug/2023:05:49:03 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.2.118:32768/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.1901.188" "-"

Check the range from which the random host ports are drawn:

[root@localhost ~]# cat /proc/sys/net/ipv4/ip_local_port_range 
32768	60999

2. With -p you can specify which local (host) port to map to.

                Local_Port:Container_Port

The mapping below binds host port 800 to port 80 inside the container; the -p option can be repeated to publish multiple ports.

[root@localhost ~]# docker run -d -p800:80 --name test2 nginx

19e1a09c75c609a8bb3ac5daf798a03e8aa87da115a2d3c057927040749d60e1

This form binds on all interfaces, so any client can reach the container through that port on any of the host's IP addresses.

                Local_IP:Local_Port:Container_Port

Map to a specific address and a specific port:

[root@localhost ~]# docker run -d -p192.168.2.118:900:80 --name test3 nginx
c5bf1e2d5d8457bc51505af9f4d97cbc3ea02527a4c57571aa4b2676b4ec46c9


                Local_IP::Container_Port

Map to a specific address, with the host port assigned at random:

[root@localhost ~]# docker run -d -p192.168.2.118::80 --name test4 nginx

1586ce7373d6cbe6bb734a953f32da8062c777de8f52c0764dababda07d7e927

[root@localhost ~]# docker port test4

80/tcp -> 192.168.2.118:32768


Specify the transport protocol:

[root@localhost ~]# docker run -d -p 80:80/tcp --name test5 nginx

b92208c1d7574336733781bcdb4820f2aaed10ce56c6be0d39493d33f69f66fa

[root@localhost ~]# docker port test5

80/tcp -> 0.0.0.0:80
80/tcp -> [::]:80
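
UDP mappings use the same syntax with a /udp suffix (a hedged example; port 5353 is arbitrary and busybox merely holds the mapping open):

[root@localhost ~]# docker run -itd -p 5353:5353/udp --name test6 busybox
[root@localhost ~]# docker port test6
5353/udp -> 0.0.0.0:5353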


1.2 Port Exposure

We covered the EXPOSE instruction earlier; port exposure is often confused with port mapping. There are currently two ways to expose ports, the --expose flag and the EXPOSE instruction. They do the same thing, except that --expose also accepts a port range as its argument, e.g. --expose=2000-3000.

Dockerfile authors generally include EXPOSE rules only as a hint about which port serves which service; to make the service reachable, an operator still has to publish it with a port mapping. --expose and EXPOSE are merely metadata that other commands can consume.
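
A small sketch of this metadata-only behavior (the port 8000 and the container name exposed are made up): the exposed port appears in the container's config, but docker port reports no mapping until one is published.

[root@localhost ~]# docker run -itd --expose=8000 --name exposed busybox
[root@localhost ~]# docker inspect -f '{{.Config.ExposedPorts}}' exposed
map[8000/tcp:{}]
[root@localhost ~]# docker port exposed
[root@localhost ~]#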

View the network configuration with docker inspect container_name:

[root@localhost ~]# docker inspect test1

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "972edda9e908f2f1cf678f91d4f5c510ffcbb28547cf45a9f825e400d601c8ee",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "32769"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "32769"
                    }
                ]
            },
[root@localhost ~]# docker inspect test2

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4f077987d1877ced0407cd4901c822ba36d0ca118372ae763bbd9d238dc939dd",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "800"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "800"
                    }
                ]
            },
[root@localhost ~]# docker inspect test3

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "3dd6eb5e28997310053d82c6437417c0a2af0d137e389628bd8591608c6f2837",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "192.168.2.118",
                        "HostPort": "900"
                    }
                ]
            },
[root@localhost ~]# docker inspect test4

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "0a5805fe41823cc9ad6ad1d8fc767f36d3ea07b42ba819c0893a0cc95a8c36be",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "192.168.2.118",
                        "HostPort": "32768"
                    }
                ]
            },
[root@localhost ~]# docker inspect test5

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4f077987d1877ced0407cd4901c822ba36d0ca118372ae763bbd9d238dc939dd",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "80"
                    }
                ]
            },

1.3 Container Linking

Linking is the other way, besides port mapping, to communicate with a container. Port mapping handles traffic between the host network and a container, while linking handles traffic between containers.

There are currently two ways to connect containers: placing both containers on a user-defined network, or using the --link flag (deprecated and scheduled for removal).

Why connect two containers over a dedicated network? Imagine a backend that needs a database. If the database container and the backend container communicated through exposed or mapped ports as described above, the database port would inevitably be reachable from outside, greatly reducing the database container's security. To solve this, Docker lets you create a separate network to hold the relevant containers: only containers on that network can talk to each other, and containers outside it cannot get in.

A container can join several networks at once and is reachable at a different address on each of them.

User-defined networks

First, create two containers named container1 and container2:

[root@localhost ~]# docker run -itd --name=container1 busybox

3e2c5410167796c81e6d5d45c37f4ea3392be4988a49715c7841d5f860c9e30c

[root@localhost ~]# docker run -itd --name=container2 busybox

441f07a0d9d1b38a532688656ffccb518992b932f54942ddd687a969301be1ce

Next, create a dedicated container network. Here we use the bridge driver (bridged mode); other possible drivers are overlay and macvlan.

[root@localhost ~]# docker network create -d bridge --subnet 172.25.0.0/16 demo_net

48374a5085af63a71af6d9e2407069c8524f8d10b32dab5221430d290cac7beb

[root@localhost ~]# docker network ls

NETWORK ID     NAME                DRIVER    SCOPE
d4ad71936d91   bridge              bridge    local
87fb4f081b8d   compose_lnmp_lnmp   bridge    local
48374a5085af   demo_net            bridge    local
03c3bf444251   host                host      local
3d3885ad93ce   none                null      local
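
Besides --subnet, which we used above, the --gateway flag pins the gateway address when the network is created; a hedged example (demo_net2 and its addresses are made up):

[root@localhost ~]# docker network create -d bridge --subnet 172.26.0.0/16 --gateway 172.26.0.1 demo_net2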

Now attach container2 to demo_net:

[root@localhost ~]# docker network connect demo_net container2
[root@localhost ~]# docker network inspect demo_net

        "Containers": {
            "441f07a0d9d1b38a532688656ffccb518992b932f54942ddd687a969301be1ce": {
                "Name": "container2",
                "EndpointID": "fccef14a0f858bddac423b6ef18efed7cecb10f2673044291e4e556c444ea7d9",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },

docker network inspect shows the containers attached to a network. container2 is now on demo_net; note that its IP address was assigned automatically.

Start a third container:

[root@localhost ~]# docker run --network=demo_net --ip=172.25.3.3 -itd --name=container3 busybox

5c75c9849d122e5ec7eca053213553b75a738bfbc695ce218788065858e44ffe

[root@localhost ~]# docker network inspect demo_net

        "Containers": {
            "441f07a0d9d1b38a532688656ffccb518992b932f54942ddd687a969301be1ce": {
                "Name": "container2",
                "EndpointID": "fccef14a0f858bddac423b6ef18efed7cecb10f2673044291e4e556c444ea7d9",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            },
            "5c75c9849d122e5ec7eca053213553b75a738bfbc695ce218788065858e44ffe": {
                "Name": "container3",
                "EndpointID": "4a91ab4b43d3d7a0c642583dc3af2ee1f5979ba213a16c9bf1a4098c04780093",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },


Inspect the network configuration inside each of the three containers:

[root@localhost ~]# docker exec -it container1 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:07  
          inet addr:172.17.0.7  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


[root@localhost ~]# docker exec -it container2 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:08  
          inet addr:172.17.0.8  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:19:00:02  
          inet addr:172.25.0.2  Bcast:172.25.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1102 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)



[root@localhost ~]# docker exec -it container3 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:19:03:03  
          inet addr:172.25.3.3  Bcast:172.25.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


[root@localhost ~]# docker exec -it container2 ping 172.17.0.7
PING 172.17.0.7 (172.17.0.7): 56 data bytes
64 bytes from 172.17.0.7: seq=0 ttl=64 time=0.091 ms
64 bytes from 172.17.0.7: seq=1 ttl=64 time=0.063 ms
64 bytes from 172.17.0.7: seq=2 ttl=64 time=0.062 ms
^C
--- 172.17.0.7 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.072/0.091 ms

[root@localhost ~]# docker exec -it container2 ping 172.25.3.3
PING 172.25.3.3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.073 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.063 ms
^C
--- 172.25.3.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.068/0.073 ms


[root@localhost ~]# docker exec -it container2 ping container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.061 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.063 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.057 ms
^C
--- container3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.060/0.063 ms

Pinging container3 by name works because user-defined networks provide built-in DNS resolution between attached containers; the default bridge network does not.

Using the --link flag

The container linking system is another way, besides port mapping, to interact with an application inside a container. It creates a tunnel between a source container and a recipient container, through which the recipient can see information the source exposes.

To use it, the source container must have a name, i.e. the value set with --name.

[root@localhost ~]# docker run -itd --name test busybox
93ce1c55db603cec4130a9c7536b217905b208c8704b3bf5b7a603f262e85af4

The --link flag has the form --link name:alias, where name is the name of the container to link to and alias is an alias for the link.

[root@localhost ~]# docker run -itd --name=link --link test:test busybox

edf8fca7bd5cb18166025e8cd7d57a711535b255a305efd598cc3462755d9d61

[root@localhost ~]# docker exec -it link ping test

PING test (172.17.0.9): 56 data bytes
64 bytes from 172.17.0.9: seq=0 ttl=64 time=0.109 ms
64 bytes from 172.17.0.9: seq=1 ttl=64 time=0.063 ms
^C
--- test ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.086/0.109 ms

If you forgot to name a container, docker rename can rename it; container names must be unique.

--link also passes environment variables, allowing the two containers to share them.
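
For instance, variables from the source appear in the recipient prefixed with the alias (a hedged sketch; Docker's --link convention names them ALIAS_NAME, ALIAS_ENV_*, and ALIAS_PORT_*):

[root@localhost ~]# docker run --rm --link test:test busybox env | grep ^TEST

Entries such as TEST_NAME show up here, along with TEST_ENV_* entries for anything the source image declared with ENV.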

2. Docker Network Modes

Installing Docker automatically creates three networks, which the docker network ls command lists:

[root@docker ~]# docker network ls

NETWORK ID          NAME                DRIVER              SCOPE
2479ceca4e84        bridge              bridge              local
39e2de843b38        host                host                local
2621f28b1641        none                null                local

When creating a container with docker run, the --net option selects the container's network mode. Docker has the following four network modes:

  1. Host mode, specified with --net=host.
  2. Container mode, specified with --net=container:NAME_or_ID.
  3. None mode, specified with --net=none.
  4. Bridge mode, specified with --net=bridge; this is the default.

2.1 Host Mode

Under the hood, Docker uses Linux namespaces for resource isolation: a PID namespace isolates processes, a mount namespace isolates filesystems, a network namespace isolates networking, and so on. A network namespace provides an independent network environment; its interfaces, routes, iptables rules, and the rest are isolated from every other network namespace. A Docker container is normally given its own network namespace. A container started in host mode, however, does not get one; it shares the root network namespace with the host. The container gets no virtual interface and no IP of its own, and instead uses the host's IP addresses and ports directly. For security reasons, this mode is not recommended.

Suppose we start a container running a web application in host mode on a machine at 192.168.2.118/24, listening on TCP port 80. Running ifconfig or any similar command inside the container shows the host's network environment, and external clients reach the application directly at 192.168.2.118:80, with no NAT involved, exactly as if it ran on the host itself. Other aspects of the container, such as its filesystem and process list, remain isolated from the host.

[root@localhost ~]# docker run -itd --net=host --name=host busybox

4b233a47a491e5229f75b37eeee036f4df6036ffcd593e53343114630318568c

[root@localhost ~]# docker exec -it host ifconfig

br-48374a5085af Link encap:Ethernet  HWaddr 02:42:3D:56:1D:4F  
          inet addr:172.25.0.1  Bcast:172.25.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:3dff:fe56:1d4f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:252 (252.0 B)  TX bytes:726 (726.0 B)

br-87fb4f081b8d Link encap:Ethernet  HWaddr 02:42:29:F6:00:41  
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:9A:81:12:AF  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:9aff:fe81:12af/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:119 errors:0 dropped:0 overruns:0 frame:0
          TX packets:140 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:13256 (12.9 KiB)  TX bytes:17939 (17.5 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:63:D5:CD  
          inet addr:192.168.2.118  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::f96a:fa95:aee6:3313/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6211 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4486 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:489662 (478.1 KiB)  TX bytes:603611 (589.4 KiB)

ens36     Link encap:Ethernet  HWaddr 00:0C:29:63:D5:D7  
          inet addr:192.168.108.160  Bcast:192.168.108.255  Mask:255.255.255.0
          inet6 addr: fe80::a926:5eaf:574f:8d2b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:146078 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38560 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:218715288 (208.5 MiB)  TX bytes:2379733 (2.2 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth04b43b1 Link encap:Ethernet  HWaddr 32:2E:29:28:60:34  
          inet6 addr: fe80::302e:29ff:fe28:6034/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3171 (3.0 KiB)  TX bytes:5086 (4.9 KiB)

veth1cb072a Link encap:Ethernet  HWaddr A2:84:1C:23:D5:E8  
          inet6 addr: fe80::a084:1cff:fe23:d5e8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1186 (1.1 KiB)  TX bytes:1919 (1.8 KiB)

veth31cf25c Link encap:Ethernet  HWaddr 76:01:7A:DF:AB:B5  
          inet6 addr: fe80::7401:7aff:fedf:abb5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1420 (1.3 KiB)  TX bytes:2064 (2.0 KiB)

veth4a339f9 Link encap:Ethernet  HWaddr 4A:36:37:3F:C9:A4  
          inet6 addr: fe80::4836:37ff:fe3f:c9a4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:658 (658.0 B)  TX bytes:1802 (1.7 KiB)

veth51cba05 Link encap:Ethernet  HWaddr D6:8F:9A:FD:CC:27  
          inet6 addr: fe80::d48f:9aff:fefd:cc27/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:732 (732.0 B)  TX bytes:1782 (1.7 KiB)

veth6d65dd9 Link encap:Ethernet  HWaddr D6:40:5C:F4:6C:D9  
          inet6 addr: fe80::d440:5cff:fef4:6cd9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:280 (280.0 B)  TX bytes:1076 (1.0 KiB)

veth9bd7dd3 Link encap:Ethernet  HWaddr 8E:16:F3:7D:22:97  
          inet6 addr: fe80::8c16:f3ff:fe7d:2297/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:938 (938.0 B)  TX bytes:1594 (1.5 KiB)

vethab1b3fc Link encap:Ethernet  HWaddr CA:2F:1F:F3:B2:A5  
          inet6 addr: fe80::c82f:1fff:fef3:b2a5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:280 (280.0 B)  TX bytes:1076 (1.0 KiB)

vethb705f49 Link encap:Ethernet  HWaddr EE:66:4A:41:AE:38  
          inet6 addr: fe80::ec66:4aff:fe41:ae38/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3321 (3.2 KiB)  TX bytes:5446 (5.3 KiB)

vethd6128f3 Link encap:Ethernet  HWaddr 0E:81:14:31:F2:18  
          inet6 addr: fe80::c81:14ff:fe31:f218/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:922 (922.0 B)

vethe042114 Link encap:Ethernet  HWaddr 7E:59:DD:43:4B:72  
          inet6 addr: fe80::7c59:ddff:fe43:4b72/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:922 (922.0 B)

[root@localhost ~]# ifconfig

br-48374a5085af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.0.1  netmask 255.255.0.0  broadcast 172.25.255.255
        inet6 fe80::42:3dff:fe56:1d4f  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3d:56:1d:4f  txqueuelen 0  (Ethernet)
        RX packets 146115  bytes 218717508 (208.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38562  bytes 2379883 (2.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-87fb4f081b8d: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether 02:42:29:f6:00:41  txqueuelen 0  (Ethernet)
        RX packets 6256  bytes 493238 (481.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4529  bytes 616807 (602.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:9aff:fe81:12af  prefixlen 64  scopeid 0x20<link>
        ether 02:42:9a:81:12:af  txqueuelen 0  (Ethernet)
        RX packets 119  bytes 13256 (12.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 140  bytes 17939 (17.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.118  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fe80::f96a:fa95:aee6:3313  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:63:d5:cd  txqueuelen 1000  (Ethernet)
        RX packets 6256  bytes 493238 (481.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4529  bytes 616807 (602.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens36: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.108.160  netmask 255.255.255.0  broadcast 192.168.108.255
        inet6 fe80::a926:5eaf:574f:8d2b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:63:d5:d7  txqueuelen 1000  (Ethernet)
        RX packets 146115  bytes 218717508 (208.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38562  bytes 2379883 (2.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth04b43b1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::302e:29ff:fe28:6034  prefixlen 64  scopeid 0x20<link>
        ether 32:2e:29:28:60:34  txqueuelen 0  (Ethernet)
        RX packets 20  bytes 3171 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42  bytes 5086 (4.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1cb072a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a084:1cff:fe23:d5e8  prefixlen 64  scopeid 0x20<link>
        ether a2:84:1c:23:d5:e8  txqueuelen 0  (Ethernet)
        RX packets 17  bytes 1186 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 1919 (1.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth31cf25c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::7401:7aff:fedf:abb5  prefixlen 64  scopeid 0x20<link>
        ether 76:01:7a:df:ab:b5  txqueuelen 0  (Ethernet)
        RX packets 20  bytes 1420 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27  bytes 2064 (2.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth4a339f9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::4836:37ff:fe3f:c9a4  prefixlen 64  scopeid 0x20<link>
        ether 4a:36:37:3f:c9:a4  txqueuelen 0  (Ethernet)
        RX packets 9  bytes 658 (658.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 1802 (1.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth51cba05: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::d48f:9aff:fefd:cc27  prefixlen 64  scopeid 0x20<link>
        ether d6:8f:9a:fd:cc:27  txqueuelen 0  (Ethernet)
        RX packets 14  bytes 732 (732.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30  bytes 1782 (1.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth6d65dd9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::d440:5cff:fef4:6cd9  prefixlen 64  scopeid 0x20<link>
        ether d6:40:5c:f4:6c:d9  txqueuelen 0  (Ethernet)
        RX packets 4  bytes 280 (280.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1076 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9bd7dd3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::8c16:f3ff:fe7d:2297  prefixlen 64  scopeid 0x20<link>
        ether 8e:16:f3:7d:22:97  txqueuelen 0  (Ethernet)
        RX packets 13  bytes 938 (938.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 1594 (1.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethab1b3fc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c82f:1fff:fef3:b2a5  prefixlen 64  scopeid 0x20<link>
        ether ca:2f:1f:f3:b2:a5  txqueuelen 0  (Ethernet)
        RX packets 4  bytes 280 (280.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14  bytes 1076 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethb705f49: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::ec66:4aff:fe41:ae38  prefixlen 64  scopeid 0x20<link>
        ether ee:66:4a:41:ae:38  txqueuelen 0  (Ethernet)
        RX packets 19  bytes 3321 (3.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38  bytes 5446 (5.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethd6128f3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c81:14ff:fe31:f218  prefixlen 64  scopeid 0x20<link>
        ether 0e:81:14:31:f2:18  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 922 (922.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethe042114: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::7c59:ddff:fe43:4b72  prefixlen 64  scopeid 0x20<link>
        ether 7e:59:dd:43:4b:72  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 922 (922.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

2.2 Container Mode

This mode makes a newly created container share a network namespace with an existing container instead of with the host. The new container creates no interfaces and configures no IP of its own; it shares the specified container's IP address and port range. Apart from networking, the two containers remain isolated from each other in areas such as filesystems and process lists. Processes in the two containers can communicate through the lo loopback device.

Specify it with --net=container:container_id (or a container name). Containers sharing a network stack see the same IP address.

[root@localhost ~]# docker run -itd --name=con1 busybox

96fcea253c1a321d117113e861bf116830f9bb62051adfef352b9c7399e2ed7c

[root@localhost ~]# docker exec -it con1 ifconfig

eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:0B  
          inet addr:172.17.0.11  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)



[root@localhost ~]# docker run -itd --net=container:con1 --name=con2 busybox

88c3987bb39e74e4cc83066adb495b3c5c1adbe50e538ccf5732410ed5ff7d84

[root@localhost ~]# docker exec -it con2 ifconfig

eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:0B  
          inet addr:172.17.0.11  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:656 (656.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


2.3 None Mode

In this mode the container has its own network namespace, but Docker performs no network configuration whatsoever: the container has no interface, no IP address, and no routes. We have to add interfaces and configure IPs ourselves.

Specify it with --net=none; no networking is configured in this mode.

[root@localhost ~]# docker run -itd --name=none --net=none busybox

40c8b9a5a60b4b2aa5cbee883eba2376e90d2bd29f3d430f227a3a968253a6bb

[root@localhost ~]# docker exec -it none ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
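
Such a container can be wired up by hand with a veth pair and the ip tool; a hedged sketch run as root on the host (the namespace name and addresses are made up):

[root@localhost ~]# pid=$(docker inspect -f '{{.State.Pid}}' none)
[root@localhost ~]# mkdir -p /var/run/netns && ln -sf /proc/$pid/ns/net /var/run/netns/none-ns
[root@localhost ~]# ip link add veth-host type veth peer name veth-cont
[root@localhost ~]# ip link set veth-cont netns none-ns
[root@localhost ~]# ip netns exec none-ns ip addr add 172.30.0.2/24 dev veth-cont
[root@localhost ~]# ip netns exec none-ns ip link set veth-cont up
[root@localhost ~]# ip addr add 172.30.0.1/24 dev veth-host
[root@localhost ~]# ip link set veth-host up

The container can then reach the host at 172.30.0.1 through its new veth-cont interface.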

2.4 Bridge Mode

Bridge mode is Docker's default network setting and is a NAT network model. When the Docker daemon starts, it creates the docker0 bridge (the -b option can designate a different one). For every container started in bridge mode, Docker creates a pair of virtual network interfaces (a veth pair): one end lives in the container's network namespace, the other is attached to docker0. This is how containers and the host communicate.
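
One way to see the pairing for a given container (container1 from section 1.3 here): each veth end records its peer's interface index, so reading iflink inside the container identifies the host-side device.

[root@localhost ~]# docker exec container1 cat /sys/class/net/eth0/iflink

The number printed is the ifindex of the host-side peer; matching it against ip -o link on the host reveals the corresponding vethXXXXXXX device.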


In bridge mode, all traffic between containers and external networks is controlled by iptables rules, which is one significant reason Docker's network performance is comparatively low. iptables -vnL -t nat shows the NAT table; the DOCKER chain contains the bridging rules for the containers.

[root@localhost ~]# iptables -vnL -t nat

Chain PREROUTING (policy ACCEPT 24 packets, 3632 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  150  7800 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 12 packets, 2748 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 31 packets, 3636 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 36 packets, 4056 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      !br-48374a5085af  172.25.0.0/16        0.0.0.0/0           
    7   464 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  all  --  *      !br-87fb4f081b8d  172.18.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.3           172.17.0.3           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.4           172.17.0.4           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.5           172.17.0.5           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.6           172.17.0.6           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  br-48374a5085af *       0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  br-87fb4f081b8d *       0.0.0.0/0            0.0.0.0/0           
    2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            192.168.2.118        tcp dpt:900 to:172.17.0.2:80
    2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            192.168.2.118        tcp dpt:32768 to:172.17.0.3:80
    2   104 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.4:80
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:32769 to:172.17.0.5:80
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:800 to:172.17.0.6:80


2.5 Overlay Mode

This is Docker's native model for multi-host, multi-subnet networking. When a new overlay network is created, Docker creates a network namespace on the host containing a bridge with a vxlan interface; each network occupies one vxlan ID. When a container is added to the network, Docker allocates a veth pair: as in bridge mode, one end goes inside the container and the other stays in the local network namespace.

Suppose containers A, B, and C run on host A while containers D and E run on host B. With the overlay model, containers A, B, and D can sit on one subnet while containers C and E sit on another.


Each overlay network carries a vxlan ID, with values in the range 256-1000; the vxlan tunnel connects every network sandbox with the same ID into a single subnet.
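
On this Docker release, creating an overlay network requires swarm mode (the legacy driver instead needed an external key-value store); a hedged sketch on the first host (the names and subnet are made up, and --attachable lets standalone containers join):

[root@localhost ~]# docker swarm init
[root@localhost ~]# docker network create -d overlay --attachable --subnet 10.0.9.0/24 demo_overlay
[root@localhost ~]# docker run -itd --net=demo_overlay --name=web busybox

Additional hosts join with docker swarm join and can then attach their containers to the same demo_overlay network.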
