The problem
In the previous section we deleted eth0, created br0, and used pipework to configure container networking, which worked fine. However, when building an image whose Dockerfile contains `RUN yum makecache` or `RUN yum install -y xxx`, `docker build` hangs.
The reasons:
- With `--network=none`, pipework cannot be used during the build, so the build container has no network; downloads fail and the build hangs.
- With `--network=host`, docker0 has already been deleted, so the build still has no working network; downloads fail and the build hangs.
Configuring container communication between different hosts
Keep the default virtual bridge docker0 as-is
- Configure the bridge NIC
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-eth0 ifcfg-eth0.bak
[root@localhost network-scripts]# cp ifcfg-eth0 ifcfg-br0
[root@localhost network-scripts]# vi ifcfg-eth0
Add `BRIDGE=br0` and remove the `IPADDR`, `NETMASK`, `GATEWAY`, and `DNS` settings:
TYPE=Ethernet
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
#static ip
DEVICE=eth0
NM_CONTROLLED=yes
ONBOOT=yes
#BOOTPROTO=static
BRIDGE=br0
[root@localhost network-scripts]# vim ifcfg-br0
Change `DEVICE` to `br0` and `TYPE` to `Bridge`, and move eth0's network settings here (the IP address, gateway, netmask, and DNS):
TYPE=Bridge
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br0
#static ip
DEVICE=br0
NM_CONTROLLED=yes
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.175.211
NETMASK=255.255.255.0
GATEWAY=192.168.175.2
DNS1=192.168.175.2
- Restarting the network service fails with:
# systemctl restart network
......
Nov 23 22:09:08 hdcoe02 systemd[1]: network.service: control process exited, code=exited status=1
Nov 23 22:09:08 hdcoe02 systemd[1]: Failed to start LSB: Bring up/down networking.
Nov 23 22:09:08 hdcoe02 systemd[1]: Unit network.service entered failed state.
Workaround:
# systemctl enable NetworkManager-wait-online.service
# systemctl stop NetworkManager
# systemctl restart network.service
In actual testing, restarting the network service at this point still failed; the host had to be rebooted before the new configuration took effect.
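After the reboot, the bridge can be sanity-checked from the host. A quick verification sketch (assumes the bridge-utils package is installed; addresses are the ones configured above):

```shell
# Verify the bridge setup after reboot
brctl show br0        # eth0 should appear in the "interfaces" column
ip addr show br0      # br0 should hold 192.168.175.211/24
ip addr show eth0     # eth0 should carry no IPv4 address of its own
ip route show         # the default route should point at 192.168.175.2 via br0
```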
[root@docker2 network-scripts]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.175.211 netmask 255.255.255.0 broadcast 192.168.175.255
inet6 fe80::20c:29ff:febf:c635 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:bf:c6:35 txqueuelen 0 (Ethernet)
RX packets 505 bytes 37474 (36.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 131 bytes 25535 (24.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
ether 00:0c:29:bf:c6:35 txqueuelen 1000 (Ethernet)
RX packets 508 bytes 45303 (44.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 135 bytes 31779 (31.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@docker2 network-scripts]#
- Start Docker and pull a CentOS image
[root@docker2 network-scripts]# service docker restart
Redirecting to /bin/systemctl restart docker.service
[root@docker2 network-scripts]# docker search centos6
[root@docker2 network-scripts]# docker pull docker.io/guyton/centos6
Start a container with network mode none (this way the container will not get an IP from docker0 automatically; pipework will assign one instead):
docker run -itd --net=none --name=mycs6 docker.io/guyton/centos6 /bin/bash
The container is named mycs6.
Configure the container's network with pipework:
pipework br0 -i eth0 mycs6 192.168.175.88/24@192.168.175.211
The container's IP is 192.168.175.88.
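For reference, the pipework invocation above is roughly equivalent to the following manual veth setup (a simplified sketch only; the veth names are made up, and error handling is omitted):

```shell
# Rough manual equivalent of the pipework command (run as root; sketch only)
PID=$(docker inspect --format '{{ .State.Pid }}' mycs6)
mkdir -p /var/run/netns
ln -sf "/proc/$PID/ns/net" "/var/run/netns/$PID"

# Create a veth pair; put one end on br0, move the other into the container
ip link add veth-host type veth peer name veth-cont
brctl addif br0 veth-host
ip link set veth-host up
ip link set veth-cont netns "$PID"

# Inside the container's namespace: rename, address, and bring up the interface
ip netns exec "$PID" ip link set veth-cont name eth0
ip netns exec "$PID" ip addr add 192.168.175.88/24 dev eth0
ip netns exec "$PID" ip link set eth0 up
ip netns exec "$PID" ip route add default via 192.168.175.211
```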
Attach to the container and inspect its NICs and routes:
[root@docker2 ~]# docker attach mycs6
[root@d303c996f022 /]# ifconfig
eth0 Link encap:Ethernet HWaddr DE:42:63:53:6D:B4
inet addr:192.168.175.88 Bcast:192.168.175.255 Mask:255.255.255.0
inet6 addr: fe80::dc42:63ff:fe53:6db4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:648 (648.0 b) TX bytes:690 (690.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@d303c996f022 /]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.175.211 0.0.0.0 UG 0 0 0 eth0
192.168.175.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
[root@d303c996f022 /]#
Note that pipework cannot add static routes. If you need them, you can start the container with --privileged=true and add the routes manually inside it, but that weakens security. Alternatively, you can add static routes from the host with ip netns (see ip netns --help), avoiding the unnecessary security risk of running the container with --privileged=true:
- Get the PID of the container
docker inspect --format="{{ .State.Pid }}" mycs6
- Set the default route
[root@docker2 ~]# docker inspect --format="{{ .State.Pid}}" mycs6
5324
[root@docker2 ~]# ln -s /proc/5324/ns/net /var/run/netns/5324
[root@docker2 ~]# ip netns exec 5324 ip route del default #delete the default route
[root@docker2 ~]# ip netns exec 5324 ip route add default dev eth0 via 192.168.175.2 #add a default route; 192.168.175.2 is the network gateway, adjust to your environment
- Additional commands
- Show all routes:
ip netns exec 5324 ip route show
- Delete a route:
ip netns exec 5324 ip route del 192.168.0.0/16 via 192.168.175.211 dev eth0
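The route-manipulation steps above can be wrapped in a small helper script. A sketch, using the container name and gateway from this walkthrough (adjust both to your environment):

```shell
#!/bin/sh
# Set a container's default route from the host via ip netns (no --privileged needed)
CONTAINER=mycs6            # example container name
GATEWAY=192.168.175.2      # example gateway

PID=$(docker inspect --format '{{ .State.Pid }}' "$CONTAINER")
mkdir -p /var/run/netns
ln -sf "/proc/$PID/ns/net" "/var/run/netns/$PID"

ip netns exec "$PID" ip route del default 2>/dev/null
ip netns exec "$PID" ip route add default via "$GATEWAY" dev eth0

rm -f "/var/run/netns/$PID"   # clean up the symlink afterwards
```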
Verify the network configuration
- Ping Baidu from the container
[root@d303c996f022 /]# ping www.baidu.com
PING www.a.shifen.com (163.177.151.109) 56(84) bytes of data.
64 bytes from 163.177.151.109: icmp_seq=1 ttl=128 time=9.46 ms
64 bytes from 163.177.151.109: icmp_seq=2 ttl=128 time=9.52 ms
......
- Ping the host from the container
[root@d303c996f022 /]# ping 192.168.175.211
PING 192.168.175.211 (192.168.175.211) 56(84) bytes of data.
64 bytes from 192.168.175.211: icmp_seq=1 ttl=64 time=0.357 ms
64 bytes from 192.168.175.211: icmp_seq=2 ttl=64 time=0.119 ms
......
Following the same steps, set up a centos6 container (IP 192.168.175.89) in Docker on another LAN host (IP 192.168.175.212).
Ping 192.168.175.88 from 192.168.175.89:
[root@d303c996f022 /]# ifconfig
eth0 Link encap:Ethernet HWaddr DE:42:63:53:6D:B4
inet addr:192.168.175.89 Bcast:192.168.175.255 Mask:255.255.255.0
inet6 addr: fe80::dc42:63ff:fe53:6db4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11172 errors:0 dropped:0 overruns:0 frame:0
TX packets:9717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:32938316 (31.4 MiB) TX bytes:534712 (522.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:840 (840.0 b) TX bytes:840 (840.0 b)
[root@d303c996f022 /]# ping 192.168.175.88
PING 192.168.175.88 (192.168.175.88) 56(84) bytes of data.
64 bytes from 192.168.175.88: icmp_seq=1 ttl=64 time=0.764 ms
64 bytes from 192.168.175.88: icmp_seq=2 ttl=64 time=0.539 ms
......
Ping 192.168.175.89 from 192.168.175.88:
[root@7c6fa69d45e1 /]# ifconfig
eth0 Link encap:Ethernet HWaddr 66:F1:47:6F:50:A3
inet addr:192.168.175.88 Bcast:192.168.175.255 Mask:255.255.255.0
inet6 addr: fe80::64f1:47ff:fe6f:50a3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12391 errors:0 dropped:0 overruns:0 frame:0
TX packets:9487 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33046798 (31.5 MiB) TX bytes:520252 (508.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@7c6fa69d45e1 /]# ping 192.168.175.89
PING 192.168.175.89 (192.168.175.89) 56(84) bytes of data.
64 bytes from 192.168.175.89: icmp_seq=1 ttl=64 time=0.716 ms
64 bytes from 192.168.175.89: icmp_seq=2 ttl=64 time=0.554 ms
......
This shows that the two containers on different hosts can reach each other.
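The cross-host check can also be scripted. A minimal sketch, run from inside either container, using the two container IPs from this walkthrough:

```shell
# Quick reachability check between the two containers (IPs from this walkthrough)
for ip in 192.168.175.88 192.168.175.89; do
    if ping -c 2 -W 1 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip unreachable"
    fi
done
```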