Various operations on a Network Namespace can be performed with the ip netns command. ip netns comes from the iproute package, which is usually installed by default; if it is missing, install it yourself.
Note: the ip netns command requires sudo privileges when it modifies network configuration.
You can view the command's help information with ip netns help:
[root@localhost yum.repos.d]# ip netns help
Usage: ip netns list
ip netns add NAME
ip netns attach NAME PID
ip netns set NAME NETNSID
ip [-all] netns delete [NAME]
ip netns identify [PID]
ip netns pids NAME
ip [-all] netns exec [NAME] cmd ...
ip netns monitor
ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
[root@localhost yum.repos.d]#
By default, a Linux system has no Network Namespaces, so ip netns list returns nothing.
Create a namespace named ns0:
[root@localhost yum.repos.d]# ip netns list
[root@localhost yum.repos.d]# ip netns add ns0
[root@localhost yum.repos.d]# ip netns list
ns0
The newly created Network Namespace appears under the /var/run/netns/ directory. If a namespace with the same name already exists, the command fails with Cannot create namespace file "/var/run/netns/ns0": File exists.
[root@localhost ~]# cd /var/run/netns/
[root@localhost netns]# ls
ns0
[root@localhost netns]#
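For completeness, a namespace created this way can be removed again with ip netns delete (listed in the help output above). A minimal sketch of the full lifecycle, assuming a scratch namespace named demo that does not already exist:
# create, inspect, and remove a throwaway namespace
ip netns add demo
ip netns list                 # demo now appears
ip netns exec demo ip a       # only a down lo interface inside
ip netns delete demo          # removes /var/run/netns/demo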
Each Network Namespace has its own independent network-related resources: interfaces, routing table, ARP table, iptables rules, and so on.
The ip command provides the ip netns exec subcommand to run a command inside a given Network Namespace.
View the interface information of the newly created Network Namespace:
# Two ways to run a command inside an isolated network environment
First, docker exec, for a container:
[root@localhost docker]# docker run -d --name web --rm httpd
[root@localhost docker]# docker exec -it web ls
bin build cgi-bin conf error htdocs icons include logs modules
[root@localhost docker]#
Second, ip netns exec, for a Network Namespace:
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 // isolated, interface still down
As you can see, a newly created Network Namespace has a lo loopback interface by default, but the interface is down. Trying to ping the lo interface at this point reports Network is unreachable.
# At this point the ping fails
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
# To make it pingable, bring the lo loopback interface up
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
^Z
[1]+ Stopped ip netns exec ns0 ping 127.0.0.1
# Check again
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 // now up
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
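Aside: newer iproute2 releases also accept -n (short for -netns) as a shorthand for ip netns exec when running ip subcommands, assuming your version supports it:
# equivalent to: ip netns exec ns0 ip link set lo up
ip -n ns0 link set lo up
ip -n ns0 addr show lo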
# Now create another namespace and test it
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ls /var/run/netns/
ns0 ns1
[root@localhost ~]# ip netns list
ns1
ns0
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]# ip netns exec ns1 ip link set lo up
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 // also up now
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[root@localhost ~]#
Devices (such as veth) can be moved between Network Namespaces. Since a device can belong to only one Network Namespace at a time, it is no longer visible in the original namespace after the move.
Put simply: create a pair of interfaces on the host, keep one on the host and put the other into a namespace, and the host and the namespace gain a channel through which they can communicate.
veth devices are movable, while many other device types (lo, vxlan, ppp, bridge, and so on) are not; this can be checked as sketched below.
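Whether a device is tied to its namespace is exposed as the netns-local feature flag, which ethtool can read; a quick check, assuming ethtool is installed (veth0 is created just below):
# lo is namespace-local ("on [fixed]"), so it cannot be moved
ethtool -k lo | grep netns-local
# a veth interface reports "off", so it can be moved
ethtool -k veth0 | grep netns-local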
A veth pair (Virtual Ethernet Pair) is a pair of connected ports: every packet that enters one end comes out the other, and vice versa.
veth pairs were introduced precisely to let different Network Namespaces communicate; with one you can wire two Network Namespaces directly together.
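Below, ip link add type veth is used, which lets the kernel auto-name the pair (veth0/veth1). The peer names can also be chosen explicitly; a minimal sketch with hypothetical names:
# create a veth pair with explicit names instead of kernel-assigned ones
ip link add ns0-eth type veth peer name ns1-eth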
# To avoid interference, remove the container first
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cbf5fb5a33a4 httpd "httpd-foreground" 23 minutes ago Up 23 minutes 80/tcp web
[root@localhost ~]# docker rm -f web
web
# Create a veth pair
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:70:02:d8 brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.136.142/24 brd 192.168.136.255 scope global dynamic noprefixroute ens160
valid_lft 1200sec preferred_lft 1200sec
inet6 fe80::20c:29ff:fe70:2d8/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:69:e1:26:85 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:69ff:fee1:2685/64 scope link
valid_lft forever preferred_lft forever
6: veth0@veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 // one end: veth0
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff
7: veth1@veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 // the other end: veth1
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff
[root@localhost ~]#
As you can see, the system now has a new veth pair connecting the two virtual interfaces veth0 and veth1, and the pair is currently down.
Next we use the veth pair to make two different Network Namespaces communicate. We already created the two Network Namespaces ns0 and ns1 above:
[root@localhost ~]# ip netns list
ns1
ns0
[root@localhost ~]#
Then we put veth0 into ns0 (veth1 will be moved into ns1 later; for now it stays on the host).
# Put veth0 into ns0
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: veth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 // moved in successfully; the host and ns0 are now linked, but a ping would still fail because there is no IP yet
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netnsid 0
# Bring it up
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: veth0@if7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000 // the UP flag shows the interface is up (the peer end is still down), but there is still no IP
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netnsid 0
# Assign it an IP address
[root@localhost ~]# ip netns exec ns0 ip addr add 1.1.1.1/24 dev veth0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: veth0@if7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 1.1.1.1/24 scope global veth0 // now it has an IP
valid_lft forever preferred_lft forever
[root@localhost ~]#
# Configure veth1 on the host, in the same subnet
[root@localhost ~]# ip link set veth1 up
[root@localhost ~]# ip addr add 1.1.1.2/24 dev veth1
[root@localhost ~]# ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:69:e1:26:85 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:69ff:fee1:2685/64 scope link
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 1.1.1.2/24 scope global veth1
# ping 1.1.1.1 inside ns0
[root@localhost ~]# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.150 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.094 ms
^Z
[2]+ Stopped ping 1.1.1.1
At this point the host and the namespace ns0 are connected, but ns1 and ns0 are not.
# To connect ns1 and ns0, move veth1 into ns1
[root@localhost ~]# ip link set veth1 netns ns1
# Check the host: veth1 is gone
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:70:02:d8 brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.136.142/24 brd 192.168.136.255 scope global dynamic noprefixroute ens160
valid_lft 995sec preferred_lft 995sec
inet6 fe80::20c:29ff:fe70:2d8/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:69:e1:26:85 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:69ff:fee1:2685/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
# Check ns1: after the move, veth1 was automatically brought down
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff link-netns ns0
# Bring veth1 up; note that the 1.1.1.2 address we configured on the host is gone
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff link-netns ns0
inet6 fe80::6c1c:c5ff:fe3f:bddf/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
# The IP has to be configured again
[root@localhost ~]# ip netns exec ns1 ip addr add 1.1.1.2/24 dev veth1
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 1.1.1.2/24 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::6c1c:c5ff:fe3f:bddf/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
Check the state of this veth pair:
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: veth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 1.1.1.1/24 scope global veth0
valid_lft forever preferred_lft forever
inet6 fe80::88ff:35ff:feba:2f61/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6e:1c:c5:3f:bd:df brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 1.1.1.2/24 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::6c1c:c5ff:fe3f:bddf/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
As shown above, the veth pair is now up and each veth device has its own IP address. Let's try to reach ns0's address from ns1:
# ping ns0's address 1.1.1.1
[root@localhost ~]# ip netns exec ns1 ping 1.1.1.1 // the ping succeeds, so ns0 and ns1 are connected
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.119 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.078 ms
^Z
[3]+ Stopped ip netns exec ns1 ping 1.1.1.1
[root@localhost ~]#
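To summarize the steps above, here is a compact sketch that wires two fresh namespaces together end to end (nsA/nsB and the interface names are hypothetical; the addresses reuse the 1.1.1.0/24 subnet from above):
# create two namespaces and a veth pair, one end in each
ip netns add nsA
ip netns add nsB
ip link add vethA type veth peer name vethB
ip link set vethA netns nsA
ip link set vethB netns nsB
# bring the interfaces up and address them
ip netns exec nsA ip link set vethA up
ip netns exec nsB ip link set vethB up
ip netns exec nsA ip addr add 1.1.1.1/24 dev vethA
ip netns exec nsB ip addr add 1.1.1.2/24 dev vethB
# verify connectivity
ip netns exec nsA ping -c 2 1.1.1.2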
An interface inside a namespace can also be renamed; the device has to be down first.
# First bring the device down
[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
# Rename it
[root@localhost ~]# ip netns exec ns0 ip link set veth0 name eth0
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 1.1.1.1/24 scope global eth0
valid_lft forever preferred_lft forever
# Bring the device back up
[root@localhost ~]# ip netns exec ns0 ip link set eth0 up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 // renamed to eth0 successfully
link/ether 8a:ff:35:ba:2f:61 brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 1.1.1.1/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::88ff:35ff:feba:2f61/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
Bridge mode is Docker's default: a container started without a --network option is attached to the docker0 bridge through a veth pair.
[root@localhost ~]# docker run -it --name t1 --rm busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
5cc84ad355aa: Pull complete
Digest: sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:876 (876.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # exit
[root@localhost ~]# docker container ls -a
# Creating a container with --network bridge is the same as omitting the --network option
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:806 (806.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # exit
[root@localhost ~]#
In none mode, the container gets only the loopback interface and has no external connectivity:
[root@localhost ~]# docker run -it --name t1 --network none --rm busybox
/ # ifconfig -a
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # exit
[root@localhost ~]#
In container mode, a new container shares the network namespace of an existing container instead of getting its own. Start the first container:
[root@localhost ~]# docker run -it --name b1 --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:736 (736.0 B) TX bytes:0 (0.0 B)
Start the second container:
[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:516 (516.0 B) TX bytes:0 (0.0 B)
The container named b2 has IP address 172.17.0.3, different from the first container's, which means they do not share a network. If we change how the second container is started, b2 can get the same IP as b1, i.e. they share the IP (network namespace) but not the filesystem.
[root@localhost ~]# docker run -it --name b2 --rm --network container:b1 busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:876 (876.0 B) TX bytes:0 (0.0 B)
Now create a directory in the b1 container:
/ # mkdir /tmp/data
/ # ls /tmp/
data
/ #
Check the /tmp directory in the b2 container and the directory is not there, because the filesystems are isolated; only the network is shared.
Deploy a site in the b2 container:
/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp/
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::80 :::* LISTEN
/ #
Access this site from the b1 container via the local address:
/ # wget -O - -q 127.0.0.1:80
hello world
/ #
In container mode, the relationship between the containers is like that of two different processes on one host; one way to verify the shared namespace is sketched below.
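To confirm the two containers really share a single network namespace, compare the net namespace links of their processes, assuming b1 and b2 are both still running:
# the two readlink outputs should show the same net:[inode]
readlink /proc/$(docker inspect -f '{{.State.Pid}}' b1)/ns/net
readlink /proc/$(docker inspect -f '{{.State.Pid}}' b2)/ns/net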
In host mode, the container shares the host's network namespace; specify the mode directly when starting the container:
[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:69:E1:26:85
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:fee1:2685/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:3139 errors:0 dropped:0 overruns:0 frame:0
TX packets:4910 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:130970 (127.9 KiB) TX bytes:29454939 (28.0 MiB)
ens160 Link encap:Ethernet HWaddr 00:0C:29:70:02:D8
inet addr:192.168.136.142 Bcast:192.168.136.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe70:2d8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:324391 errors:0 dropped:0 overruns:0 frame:0
TX packets:106607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:459987830 (438.6 MiB) TX bytes:7479990 (7.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Now if we start an HTTP site in this container, we can reach the site in the container directly in a browser via the host's IP, as sketched below.
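A minimal sketch of that idea (the image and port are assumptions; any web server image works):
# host mode: the container binds directly to the host's interfaces
docker run -d --name web --network host --rm nginx
# reachable on the host IP with no -p mapping at all
curl http://192.168.136.142:80/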
By default a container's hostname is its container ID; it can be set at startup with --hostname:
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
8047bd0f07b3
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ttq --rm busybox
/ # hostname
ttq
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 ttq // injecting a hostname automatically creates a hostname-to-IP mapping in /etc/hosts
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.136.2 // DNS is automatically set to the host's DNS
/ # ping www.baidu.com
PING www.baidu.com (153.3.238.102): 56 data bytes
64 bytes from 153.3.238.102: seq=0 ttl=127 time=49.525 ms
64 bytes from 153.3.238.102: seq=1 ttl=127 time=134.583 ms
^Z
[1]+ Stopped ping www.baidu.com
/ #
The DNS server can be set explicitly with --dns:
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ttq --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server: 114.114.114.114
Address: 114.114.114.114:53
Non-authoritative answer:
www.baidu.com canonical name = www.a.shifen.com
Name: www.a.shifen.com
Address: 153.3.238.110
Name: www.a.shifen.com
Address: 153.3.238.102
An extra hosts entry can be injected with --add-host:
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ttq --add-host www.a.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.a.com
172.17.0.2 ttq
/ #
docker run has a -p option that maps an application port inside the container to a host port, so that external hosts can reach the application in the container by accessing that port on the host.
The -p option can be used multiple times, and the container port it exposes must be one the container is actually listening on.
The -p option supports four formats, each demonstrated below:
-p <containerPort>: map the container port to a dynamic port on all host addresses
-p <hostPort>:<containerPort>: map the container port to the given host port on all host addresses
-p <ip>::<containerPort>: map the container port to a dynamic port on the given host IP
-p <ip>:<hostPort>:<containerPort>: map the container port to the given port on the given host IP
A dynamic port means a random port; the actual mapping can be inspected with the docker port command.
[root@localhost ~]# docker run --name web --rm -p 8080:80 httpd
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 2048 0.0.0.0:8080 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 2048 [::]:8080 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@localhost ~]#
[root@localhost ~]# docker run --name web --rm -p 192.168.136.142::80 httpd
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 2048 192.168.136.142:32769 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@localhost ~]# docker run --name web --rm -p 192.168.136.142:8080:80 httpd
[root@localhost ~]# docker run --name web --rm -p 80 nginx
The commands above keep the terminal occupied in the foreground; open another terminal connection to see which host port the container's port 80 was mapped to:
[root@localhost ~]# docker port web
80/tcp -> 192.168.136.142:8080
As shown, the container's port 80 is exposed on a host port; now access that port on the host to check whether we reach the site inside the container:
[root@localhost ~]# curl http://127.0.0.1:32770
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
The iptables firewall rules are generated automatically when a container is created and removed automatically when the container is deleted; they can be inspected as sketched below.
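To see the rules a published port generates, list the nat table's DOCKER chain while a mapped container is running (the output will vary with your mappings):
# each -p mapping shows up as a DNAT rule here
iptables -t nat -nL DOCKER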
Map the container port to a random port on a specific IP:
[root@localhost ~]# docker run --name web --rm -p 192.168.136.142::80 nginx
Check the port mapping from another terminal:
[root@localhost ~]# docker port web
80/tcp -> 192.168.136.142:32766
Map the container port to a specific host port:
[root@localhost ~]# docker run --name web --rm -p 80:80 nginx
Check the port mapping from another terminal:
[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:80
See the official documentation for the related configuration options.
To customize the network properties of the docker0 bridge, modify the /etc/docker/daemon.json configuration file:
[root@lvs docker]# cd /etc/docker/
[root@lvs docker]# ls
daemon.json
[root@lvs docker]# vim daemon.json
[root@lvs docker]# systemctl daemon-reload
[root@lvs docker]# cat daemon.json
{
"bip": "192.168.1.1/24",
"registry-mirrors": ["https://8iitoqxc.mirror.aliyuncs.com"]
}
[root@lvs docker]# systemctl restart docker
[root@lvs docker]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:47:40:10 brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.136.154/24 brd 192.168.136.255 scope global dynamic noprefixroute ens160
valid_lft 1571sec preferred_lft 1571sec
inet6 fe80::20c:29ff:fe47:4010/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:16:fe:e8:54 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:16ff:fefe:e854/64 scope link
valid_lft forever preferred_lft forever
[root@lvs docker]#
The core option is bip (bridge IP), which specifies the IP address of the docker0 bridge itself; other options can be derived from this address.
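Other bridge-related keys can sit next to bip in daemon.json; a hedged sketch with placeholder values (only bip comes from the example above, the rest are illustrative):
[root@lvs docker]# cat daemon.json
{
"bip": "192.168.1.1/24",
"fixed-cidr": "192.168.1.0/25",
"mtu": 1500,
"dns": ["114.114.114.114", "8.8.8.8"]
}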
The dockerd daemon's client/server interface listens by default only on a Unix socket (/var/run/docker.sock). To use a TCP socket as well, modify the /etc/docker/daemon.json configuration file, add the following, and restart the docker service:
"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
On the client side, pass the -H|--host option to docker to specify which host's Docker containers to control:
docker -H 192.168.136.142:2375 ps
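Instead of typing -H every time, the client can also read the endpoint from the DOCKER_HOST environment variable; a minimal sketch:
# equivalent to docker -H 192.168.136.142:2375 ps
export DOCKER_HOST="tcp://192.168.136.142:2375"
docker ps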
Create an additional custom bridge, distinct from docker0:
[root@localhost ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
e7de76acb324a28c22675864a604142719d722f1c23c802cdcf50368e7b67d04
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e7de76acb324 br0 bridge local
b300920c51bd bridge bridge local
8a48a37f125b host host local
d8d81abec0d9 none null local
[root@localhost ~]#
Create a container using the newly created custom bridge:
[root@localhost ~]# docker run -it --name b1 --network br0 busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:02:02
inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1042 (1.0 KiB) TX bytes:0 (0.0 B)
Create another container, using the default bridge:
[root@localhost ~]# docker run --name b2 -it busybox
/ # ls
bin dev etc home proc root sys tmp usr var
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:806 (806.0 B) TX bytes:0 (0.0 B)
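b1 (on br0, 192.168.2.0/24) and b2 (on docker0, 172.17.0.0/16) sit on different bridges and cannot reach each other directly. One way to join them, assuming both containers are still running, is to attach b2 to br0 as a second network:
# give b2 an extra interface on br0; afterwards b2 can ping 192.168.2.2
docker network connect br0 b2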