OpenStack Neutron Networking Experiments - 03

This article continues the series by looking at the network topology formed when OpenStack instances run across multiple compute nodes.

The previous article created the following in the OpenStack environment:

  1. Tenant network test-net with subnet test-net-subnet, DHCP enabled.
  2. External network external-net with subnet external-net-subnet.
  3. Instance test-vm1, attached to network test-net.
  4. Router router01, attached to the two subnets test-net-subnet and external-net-subnet.

1. Adding a Compute Node

Add a compute node by following the earlier article "Building an OpenStack cluster with vagrant and virtualbox".
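
With packstack this usually comes down to appending the new host's management IP to CONFIG_COMPUTE_HOSTS in the existing answer file and re-running packstack. A rough sketch, assuming the answer file is named packstack-answers.txt (the file name is an assumption, the IPs are those of this setup):

[root@os-ctl1 ~]# grep CONFIG_COMPUTE_HOSTS packstack-answers.txt
CONFIG_COMPUTE_HOSTS=192.168.56.15,192.168.56.16
[root@os-ctl1 ~]# packstack --answer-file=packstack-answers.txt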

After the compute node has been added, the topology on os-ctl1 changes as follows:

[root@os-ctl1 ~]# ovs-vsctl show 
...
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a83810"
            Interface "vxlan-c0a83810"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.15", out_key=flow, remote_ip="192.168.56.16"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
...
[root@os-ctl1 ~]# ip add | grep ^[0-9]    
...
18: vxlan_sys_4789:  mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000

The output shows that a new port, vxlan-c0a83810, has appeared on the br-tun bridge, along with a new vxlan_sys_4789 device.
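
A quick way to see just the tunnel side of this change (a convenience check, not part of the original capture) is to list the ports of br-tun directly:

[root@os-ctl1 ~]# ovs-vsctl list-ports br-tun
patch-int
vxlan-c0a83810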

The network topology on the newly added os-cpu1 looks like this:

[root@os-cpu1 nova]# ovs-vsctl show
34d348ce-5a92-4971-bfe8-846967a95d1e
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a8380f"
            Interface "vxlan-c0a8380f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.16", out_key=flow, remote_ip="192.168.56.15"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.9.0"
[root@os-cpu1 nova]# brctl show    
bridge name     bridge id               STP enabled     interfaces
[root@os-cpu1 nova]# ip address | grep ^[0-9]
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
4: ovs-system:  mtu 1500 qdisc noop state DOWN qlen 1000
5: br-int:  mtu 1500 qdisc noop state DOWN qlen 1000
6: br-tun:  mtu 1500 qdisc noop state DOWN qlen 1000
7: vxlan_sys_4789:  mtu 65000 qdisc noqueue master ovs-system state UNKNOWN qlen 1000

Compared with the all-in-one node os-ctl1, the new compute node os-cpu1 has only the two OVS bridges br-tun and br-int, with no br-ex. The reason is straightforward: a compute node does not reach the external network directly but routes through the network node, so it has no need for br-ex. In addition, the vxlan-c0a8380f port on its br-tun bridge pairs with the vxlan-c0a83810 port on the all-in-one node to form the two VTEP (VXLAN Tunnel Endpoint) devices of a VXLAN tunnel.

Note the local_ip setting of this VTEP pair: it is simply the eth1 IP address of each machine, 192.168.56.15 and 192.168.56.16. local_ip is set in the Neutron configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini. In other words, packstack by default uses the interface holding the IP supplied at install time as the tunnel endpoint for traffic between nodes. That interface is usually reserved for management, so if you want a different interface to carry tenant-network traffic, change the local_ip parameter on the network node and on every compute node.

os-ctl1:

# grep local_ip /etc/neutron/plugins/ml2/openvswitch_agent.ini
local_ip=192.168.56.15

os-cpu1:

# grep local_ip /etc/neutron/plugins/ml2/openvswitch_agent.ini
local_ip=192.168.56.16
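
For example, if each host had a dedicated tenant-traffic NIC such as eth2 on a separate subnet (192.168.57.0/24 here is purely hypothetical, not part of this setup), you would point local_ip at that interface's address and restart the OVS agent on every node; a sketch for os-ctl1:

# sed -i 's/^local_ip=.*/local_ip=192.168.57.15/' /etc/neutron/plugins/ml2/openvswitch_agent.ini
# systemctl restart neutron-openvswitch-agent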

The vxlan_sys_4789 device is identical to the one on the network node; it is the virtual port of the VXLAN overlay network. VXLAN is an overlay technology: Layer-2 frames of the overlay network are encapsulated into UDP packets of the underlying underlay network for transmission. It is precisely through the vxlan_sys_4789 interface that the overlay's Layer-2 frames are sent and received: packets going out through it are encapsulated by the underlay network, and incoming packets are decapsulated and then handed back to the receiver through it. 4789 is the UDP port VXLAN uses for tunnel traffic.
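
Once instances are running on both hosts (as in the next section), this is easy to verify with a packet capture (a quick check, not from the original article): ping between the instances while capturing UDP port 4789 on the underlay interface eth1, and the VXLAN-encapsulated frames appear between the two local_ip addresses:

[root@os-ctl1 ~]# tcpdump -i eth1 -nn udp port 4789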

2. Creating a New Instance

Create one more OpenStack instance, test-vm2, attached to network test-net. Because os-ctl1 has more free memory than os-cpu1, this instance again ends up running on os-ctl1. To experiment with cross-host communication, it has to be migrated manually to os-cpu1.
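
A rough CLI equivalent of creating the instance (the original steps used Horizon; the flavor and image names here are assumptions):

# openstack server create --flavor m1.tiny --image cirros --network test-net test-vm2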

Menu path: Admin -> Compute -> Instances -> "test-vm2" -> "Migrate Instance"

Note: instances can only be migrated from the Admin menu.
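
The migration can also be driven from the CLI (a sketch; this experiment used Horizon): openstack server migrate performs a cold migration to a scheduler-chosen host, which then has to be confirmed once the instance reaches VERIFY_RESIZE (Horizon's "Confirm Resize/Migrate"), and the admin-only OS-EXT-SRV-ATTR:host field shows where the instance ended up:

# openstack server migrate test-vm2
# openstack server show test-vm2 -c OS-EXT-SRV-ATTR:host -c status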

After the migration completes, the two instances are running on the two hosts respectively:

[Figure: 2vm-on-2host.PNG]

Logging in to the two instances shows that they can already communicate with each other, and test-vm2, now migrated to os-cpu1, can also reach the external network.

[root@os-ctl1 ~]# virsh console 1
Connected to domain instance-00000004
Escape character is ^]

$ hostname
test-vm1
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:21:6a:bb brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.11/24 brd 172.16.2.255 scope global eth0
    inet6 fe80::f816:3eff:fe21:6abb/64 scope link 
       valid_lft forever preferred_lft forever
$ ping 172.16.2.6
PING 172.16.2.6 (172.16.2.6): 56 data bytes
64 bytes from 172.16.2.6: seq=0 ttl=64 time=4.679 ms
^C
--- 172.16.2.6 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 4.679/4.679/4.679 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=36 time=27.441 ms
^C
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 27.441/27.441/27.441 ms
[root@os-cpu1 ~]# virsh console 1
Connected to domain instance-00000003
Escape character is ^]

$ hostname
test-vm2
$ ip a
1: lo:  mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:b8:0d:8a brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.6/24 brd 172.16.2.255 scope global eth0
    inet6 fe80::f816:3eff:feb8:d8a/64 scope link 
       valid_lft forever preferred_lft forever
$ ping 172.16.2.11
PING 172.16.2.11 (172.16.2.11): 56 data bytes
64 bytes from 172.16.2.11: seq=0 ttl=64 time=10.496 ms
^C
--- 172.16.2.11 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 10.496/10.496/10.496 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=36 time=28.747 ms
^C
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 28.747/28.747/28.747 ms

Finally, the network topology formed by the two hosts and the two Nova instances is as follows:

[Figure: neutron-6.jpg]

3. About VXLAN VTEP Endpoints

If another compute node or network node is added to the OpenStack cluster, br-tun on every existing node gains one more VTEP port forming a tunnel to the new node. In other words, in a VXLAN network, VTEP tunnels are established between every pair of nodes, a full mesh.

All-in-one node:

# ovs-vsctl show
...
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a83810"
            Interface "vxlan-c0a83810"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.15", out_key=flow, remote_ip="192.168.56.16"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a83811"
            Interface "vxlan-c0a83811"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.15", out_key=flow, remote_ip="192.168.56.17"}

Compute node:

# ovs-vsctl show
...
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a8380f"
            Interface "vxlan-c0a8380f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.16", out_key=flow, remote_ip="192.168.56.15"}
        Port "vxlan-c0a83811"
            Interface "vxlan-c0a83811"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.56.16", out_key=flow, remote_ip="192.168.56.17"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
...
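
Incidentally, each tunnel port's name encodes the remote VTEP IP as eight hex digits: vxlan-c0a8380f points at 192.168.56.15, vxlan-c0a83810 at 192.168.56.16, and vxlan-c0a83811 at 192.168.56.17. A quick shell check (not from the original article) decodes the suffix:

# printf '%d.%d.%d.%d\n' 0xc0 0xa8 0x38 0x11
192.168.56.17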
