pipework can connect Docker containers not only through a Linux bridge, but also in combination with Open vSwitch, which makes it possible to place Docker containers into separate VLANs. The following is a simple demonstration of layer-2 isolation between Docker containers in a single-host environment.
To demonstrate the isolation, we put four containers into the same IP subnet. In reality, however, they belong to two layer-2-isolated networks with different broadcast domains.
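The topology built in the steps below looks like this (addresses and VLAN IDs as used later in this walkthrough):
ovs1 (Open vSwitch bridge, 192.168.1.105/24, uplink ens33)
├─ duyuheng1  192.168.1.1/24  VLAN 100
├─ duyuheng2  192.168.1.2/24  VLAN 100
├─ duyuheng3  192.168.1.3/24  VLAN 200
└─ duyuheng4  192.168.1.4/24  VLAN 200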
Installing Open vSwitch
Install the base build environment
[root@localhost ~]# yum -y install gcc make python-devel openssl-devel kernel-devel graphviz kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool
Download the Open vSwitch source package
[root@localhost src]# wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz
Unpack the source and build the RPM
[root@localhost src]# tar zxvf openvswitch-2.3.1.tar.gz
[root@localhost src]# mkdir -p ~/rpmbuild/SOURCES
[root@localhost src]# cp openvswitch-2.3.1.tar.gz ~/rpmbuild/SOURCES/
[root@localhost src]# sed 's/openvswitch-kmod, //g' openvswitch-2.3.1/rhel/openvswitch.spec > openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
[root@localhost src]# rpmbuild -bb --without check openvswitch-2.3.1/rhel/openvswitch_no_kmod.spec
Afterwards, two files will be present in ~/rpmbuild/RPMS/x86_64/:
[root@localhost src]# ls -l ~/rpmbuild/RPMS/x86_64/
total 9552
-rw-r--r--. 1 root root 2013568 Aug 16 15:47 openvswitch-2.3.1-1.x86_64.rpm
-rw-r--r--. 1 root root 7763632 Aug 16 15:47 openvswitch-debuginfo-2.3.1-1.x86_64.rpm
Only the first one needs to be installed:
[root@localhost ~]# yum localinstall /root/rpmbuild/RPMS/x86_64/openvswitch-2.3.1-1.x86_64.rpm
Answer y at the prompt:
Is this ok [y/d/N]: y
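Once the package is installed, a quick sanity check is to confirm the RPM is present and that the client tools run (these are standard commands; ovs-vsctl --version works even before the service is started):
[root@localhost ~]# rpm -qa | grep openvswitch
[root@localhost ~]# ovs-vsctl --version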
Enable and start the service
[root@localhost ~]# /sbin/chkconfig openvswitch on
[root@localhost ~]# /sbin/service openvswitch start
Starting openvswitch (via systemctl): [ OK ]
Or, equivalently:
[root@localhost ~]# systemctl start openvswitch
Check the status
[root@localhost ~]# /sbin/service openvswitch status
ovsdb-server is running with pid 39963
ovs-vswitchd is running with pid 39976
Or:
[root@localhost ~]# systemctl status openvswitch
● openvswitch.service - LSB: Open vSwitch switch
Loaded: loaded (/etc/rc.d/init.d/openvswitch; bad; vendor preset: disabled)
Active: active (running) since Wed 2017-08-16 15:55:58 CST; 2min 27s ago
Docs: man:systemd-sysv-generator(8)
Process: 39936 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/openvswitch.service
├─39962 ovsdb-server: monitoring pid 39963 (healthy)
├─39963 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/d...
├─39975 ovs-vswitchd: monitoring pid 39976 (healthy)
└─39976 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-...
Aug 16 15:55:57 localhost.localdomain systemd[1]: Starting LSB: Open vSwitch switch...
Aug 16 15:55:57 localhost.localdomain openvswitch[39936]: /etc/openvswitch/conf.db does not exist ... (warning).
Aug 16 15:55:57 localhost.localdomain openvswitch[39936]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Aug 16 15:55:57 localhost.localdomain openvswitch[39936]: Starting ovsdb-server [ OK ]
Aug 16 15:55:57 localhost.localdomain openvswitch[39936]: Configuring Open vSwitch system IDs [ OK ]
Aug 16 15:55:57 localhost.localdomain openvswitch[39936]: Inserting openvswitch module [ OK ]
Aug 16 15:55:58 localhost.localdomain openvswitch[39936]: Starting ovs-vswitchd [ OK ]
Aug 16 15:55:58 localhost.localdomain openvswitch[39936]: Enabling remote OVSDB managers [ OK ]
Aug 16 15:55:58 localhost.localdomain systemd[1]: Started LSB: Open vSwitch switch.
As shown above, the service is running normally.
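As an extra check, ovs-vsctl show should connect to the database and return without errors; on a fresh installation it prints only the database UUID and the OVS version, since no bridges have been created yet:
[root@localhost ~]# ovs-vsctl show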
Installing pipework
[root@localhost ~]# cd /usr/src/
[root@localhost src]# ls
centos6.tar debug kernels openvswitch-2.3.1 pipework-master.zip
centos7.tar docker-jdeathe.tar nginx-1.11.2.tar.gz openvswitch-2.3.1.tar.gz registry.tar
[root@localhost src]# unzip pipework-master.zip
[root@localhost src]# cp -p pipework-master/pipework /usr/local/bin/
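pipework is a single shell script, so installation is just a matter of copying it onto the PATH. If the execute bit was not preserved, add it; running pipework with no arguments should print its usage text, which is an easy way to confirm the installation:
[root@localhost src]# chmod +x /usr/local/bin/pipework
[root@localhost src]# pipework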
Create the switch and add the physical NIC to ovs1. Because the host's IP address is moved from ens33 to ovs1, the Xshell (SSH) session may drop; if it does, continue from the physical machine's console.
[root@localhost ~]# ovs-vsctl add-br ovs1; ovs-vsctl add-port ovs1 ens33; ip link set ovs1 up; ifconfig ens33 0; ifconfig ovs1 192.168.1.105
[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::20c:29ff:fef8:acec prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:f8:ac:ec txqueuelen 1000 (Ethernet)
RX packets 20837 bytes 23279865 (22.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15371 bytes 2103761 (2.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ovs1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.105 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fef8:acec prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:f8:ac:ec txqueuelen 0 (Ethernet)
RX packets 27 bytes 2900 (2.8 KiB)
RX errors 0 dropped 2 overruns 0 frame 0
TX packets 47 bytes 5451 (5.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
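At this point ens33 should appear as a port of ovs1; this can be confirmed directly from Open vSwitch, where the first command should simply list ens33:
[root@localhost ~]# ovs-vsctl list-ports ovs1
[root@localhost ~]# ovs-vsctl show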
Create four Docker containers, duyuheng1 through duyuheng4, on host A.
[root@localhost ~]# docker run -itd --name duyuheng1 docker.nmgkj.com /bin/bash
9da698d33081e03dceda4c3c1e8f08408af2d031c2f16c24a6ec3c4ab15b1538
[root@localhost ~]# docker run -itd --name duyuheng2 docker.nmgkj.com /bin/bash
ef1b3dc379f14b44835f5c9ff9076a365eddd0fcc5cfab5fa75c741fd13c1876
[root@localhost ~]# docker run -itd --name duyuheng3 docker.nmgkj.com /bin/bash
94f2c6ad810ad75497b6019baecb5b68e5887374ae4d5824d08aa8f2bd9a5554
[root@localhost ~]# docker run -itd --name duyuheng4 docker.nmgkj.com /bin/bash
c099c7de1e5fc12359013dc56d3a7b6d0dc2e4c1d7a2ce2b347babfcf9e2cce6
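Before wiring the containers to the switch, a quick docker ps confirms that all four are running:
[root@localhost ~]# docker ps --format "{{.Names}}: {{.Status}}"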
Assign duyuheng1 and duyuheng2 to one VLAN. pipework takes the VLAN ID after an @ appended to the MAC-address argument; the MAC address itself is omitted here, so only @100 is given.
[root@localhost ~]# pipework ovs1 duyuheng1 192.168.1.1/24 @100
[root@localhost ~]# pipework ovs1 duyuheng2 192.168.1.2/24 @100
Assign duyuheng3 and duyuheng4 to another VLAN.
[root@localhost ~]# pipework ovs1 duyuheng3 192.168.1.3/24 @200
[root@localhost ~]# pipework ovs1 duyuheng4 192.168.1.4/24 @200
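After these four pipework calls, ovs-vsctl show should list one additional port on ovs1 per container, with tag: 100 on the duyuheng1/duyuheng2 ports and tag: 200 on the duyuheng3/duyuheng4 ports (the port names themselves are generated by pipework):
[root@localhost ~]# ovs-vsctl show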
After completing the steps above, connect to a container (with docker attach or, as below, docker exec) and test connectivity with ping: duyuheng1 and duyuheng2 can reach each other, but both are isolated from duyuheng3 and duyuheng4. With that, a simple VLAN-isolated container network is complete.
Verify from inside a container
[root@localhost ~]# docker exec -it 9da698d33081 /bin/bash
[root@9da698d33081 /]# ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.709 ms
^C
--- 192.168.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1029ms
rtt min/avg/max/mdev = 0.087/0.398/0.709/0.311 ms
[root@9da698d33081 /]# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
^C
--- 192.168.1.4 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1813ms
Since Open vSwitch itself supports VLANs, the work pipework does here is essentially the same as described earlier, except that the Linux bridge is replaced by Open vSwitch and a tag is specified when one end of the veth pair is added to the ovs1 bridge. The underlying operation is:
ovs-vsctl add-port ovs1 veth* tag=100
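For reference, here is a rough sketch of the equivalent manual steps for duyuheng1 (VLAN 100). The interface names vethA/vethB are illustrative only; pipework generates its own names and defaults to eth1 inside the container:
PID=$(docker inspect --format '{{.State.Pid}}' duyuheng1)   # PID of the container's init process
mkdir -p /var/run/netns
ln -s /proc/$PID/ns/net /var/run/netns/$PID                  # make the container's netns visible to `ip netns`
ip link add vethA type veth peer name vethB                  # create a veth pair (illustrative names)
ovs-vsctl add-port ovs1 vethA tag=100                        # host end joins ovs1 with VLAN tag 100
ip link set vethA up
ip link set vethB netns $PID                                 # move the container end into the container
ip netns exec $PID ip link set vethB name eth1
ip netns exec $PID ip addr add 192.168.1.1/24 dev eth1
ip netns exec $PID ip link set eth1 up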