A journey of a packet within OpenContrail

In this post we will see how a packet generated by a VM reaches another VM or an external resource, and what the key concepts/components are in the context of Neutron using the OpenContrail plugin. We will focus on OpenContrail: how it implements the overlay, and the tools it provides to check/troubleshoot how packets are forwarded. Before getting started, I’ll give a short overview of the key concepts of OpenContrail.

Virtual networks, Overlay with OpenContrail

For the overlay, OpenContrail uses MPLS L3VPNs and MPLS EVPNs in order to address both the L3 overlay and the L2 overlay. There are a lot of components within OpenContrail; however, we will focus on two key ones: the controller and the vRouter.

For the control plane, each controller acts as a BGP Route Reflector using the BGP and XMPP protocols. BGP is used between the controllers and the physical routers. XMPP is used between the controllers and the vRouters. The XMPP protocol transports BGP route announcements, but also some other information for non-routing needs.

For the data plane, OpenContrail supports MPLS over GRE, MPLS over UDP and VXLAN for the tunneling. OpenContrail requires the following features to be supported by the gateway router:

  • L3VPN
    • http://tools.ietf.org/html/rfc4364
  • MP-BGP
    • http://tools.ietf.org/html/rfc4760
  • Dynamic Tunneling

In this post we will focus on the data plane.

The packet’s journey

In order to show the journey of a packet, let’s play with the following topology, where we have two VMs on two different networks connected by a router.

Assuming we have allowed ICMP packets by setting the security groups accordingly, we can start a ping from vm1 toward vm2.

There are a lot of introspection tools within OpenContrail which can be used to get a clear view of how the packets are forwarded.

Initiating a ping between vm1 and vm2, we can check step by step where the packets go.

Since the VMs are not on the same network, they will both use their default gateway. The local vRouter answers the ARP request for the default gateway IP with its own MAC address.

Now that we have seen that the packets will be forwarded to the local vRouter, we are going to check how the vRouter will forward them.

So let’s start at the data plane layer by browsing the vRouter agent introspect web interface running on the compute nodes hosting our VMs, at http://<compute-node>:8085/agent.xml

There are plenty of sub-interfaces, but we will only use three of them:

  • VrfListReq, http://<compute-node>:8085/Snh_VrfListReq, which gives you the networks and their related VRFs. For a given VRF – let’s say the Unicast VRF (ucindex) – we can see all the routes.
  • ItfReq, http://<compute-node>:8085/Snh_ItfReq, which gives you all the interfaces handled by the vRouter.
  • MplsReq, http://<compute-node>:8085/Snh_MplsReq, which gives all the MPLS label/next-hop associations for the given vRouter.

These interfaces are just XML documents rendered thanks to an XSL stylesheet, so they can easily be processed by monitoring scripts, for example.
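For example, fetching one of these documents from a shell and pretty-printing it is enough to get started (the compute node address 10.0.0.10 is just a placeholder here, and xmllint is only used for readability):

    # Fetch the interface list from the vRouter agent introspect (port 8085)
    # and pretty-print the XML document.
    curl -s "http://10.0.0.10:8085/Snh_ItfReq" | xmllint --format -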

We can start with the interfaces (ItfReq) introspect page to find the TAP interface corresponding to vm1. The name of the TAP interface contains a part of the Neutron port ID.
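As a rough illustration (the names below are just placeholders), the TAP name can be cross-checked against the Neutron port list:

    # Find the Neutron port of vm1 from its fixed IP, then look for the TAP
    # interface whose name embeds the first characters of that port UUID.
    neutron port-list | grep <vm1-ip>
    ip link | grep tap<first-chars-of-port-id>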

Besides the interface, we see the name of the VRF associated with the network that the interface belongs to. On the same line we have some other information: security groups, floating IPs, VM ID, etc.

Clicking on the VRF link brings us to the index page of this VRF. We see that we have links to VRFs according to their type: Unicast, Multicast, Layer 2. By default, OpenContrail doesn’t handle Layer 2 forwarding. As said before, most of the Layer 2 traffic from the virtual machines is trapped by the local vRouter, which acts as an ARP responder. But some specific packets, like broadcasts, still need to be handled; that’s why there is a specific Layer 2 VRF.

Clicking on the link in the ucindex (Unicast) column, we can see all the unicast L3 routes of our virtual network handled by this vRouter. Since vm1 should be able to reach vm2, we should see a route with the IP of vm2.
Thanks to this interface, we see that in order to reach the IP 192.168.0.3, which is the IP of our vm2, the packet is going to be forwarded through a GRE tunnel whose endpoint is the IP of the compute node hosting vm2. That’s what we see in the “dip” (destination IP) field. We also see that the packet will be encapsulated in an MPLS packet, with label 16, as shown in the label column.

Ok, so we saw at the agent level how the packet is going to be forwarded, but we may want to check on the datapath side. OpenContrail provides command line tools for that purpose.

As with the agent introspect, we can see the interfaces handled by the vRouter kernel module and the VRF associated with each of them.
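On the compute node hosting vm1, something like the following lists the datapath interfaces (the output is omitted here, and the indexes will differ on your setup):

    # List the interfaces known by the vRouter kernel module; the "Vrf" field
    # gives the VRF index associated with each interface, e.g. our TAP device.
    vif --list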

We find our TAP interface at index 3, and the associated VRF is number 1.

Let’s now check the routes for this VRF. For that purpose we use the rt command line tool.
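Assuming the VRF index found above is 1, the routes can be dumped like this (a sketch; the exact flags and output depend on the vRouter version):

    # Dump the unicast routes of VRF 1 and look for the /32 entry of vm2's IP
    # (192.168.0.3) to get its MPLS label and next-hop id.
    rt --dump 1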

We see that the MPLS label used is 16. In order to know how the packet will be forwarded, we have to check the next-hop used for this route.
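The next-hop id shown in the route entry can then be resolved (the id is a placeholder):

    # Display the next-hop details: its type (a GRE tunnel here), the tunnel
    # source/destination IPs and the outgoing interface index ("Oif").
    nh --get <nh-id>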

We have almost the same information that the agent gave us. Here, in the “Oif” (outgoing interface) field, we have the interface through which the packet will be sent to the other compute node. Thanks to the vif command line tool, we can get the details of this interface.
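The interface index found in the “Oif” field can be inspected directly (again, the index is a placeholder):

    # Show the details of the outgoing interface, typically the physical
    # interface (eth0 here) used to reach the remote compute node.
    vif --get <oif-index>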

As the packet will go through the eth0 interface, a tcpdump should confirm what we described above.
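For instance, capturing the GRE traffic towards the remote compute node while the ping is running (the remote address is a placeholder, and IP protocol 47 is GRE):

    # MPLS-over-GRE encapsulated ICMP packets should show up between the two
    # compute nodes while vm1 pings vm2.
    tcpdump -ni eth0 'ip proto 47 and host <remote-compute-node>'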

As the tunnel endpoint shows, the packet will be forwarded directly to the compute node hosting the destination VM, without going through a third-party routing device.

On the other side, the vRouter on the second compute node will receive the encapsulated packet. Based on the MPLS label, it does a lookup in its MPLS label/next-hop table, as we can see in its introspect.


As we can see here, the next-hop for label 16 is the TAP interface of our second VM. On the datapath side we can check the same information, starting with the MPLS label/next-hop table:
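Something along these lines dumps the label table on the second compute node (again a sketch, output omitted):

    # Dump the MPLS incoming-label map of the vRouter; the entry for label 16
    # points to the next-hop of vm2's TAP interface.
    mpls --dump
    # Or query a single label directly:
    mpls --get 16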

…and finally the next-hop and the interface with the following commands:
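With placeholder indexes, and hedging on the exact output format, that would be:

    # Resolve the next-hop referenced by label 16, then display the interface
    # it points to: the TAP device of vm2.
    nh --get <nh-id>
    vif --get <tap-index>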

 

This post was just an overview of how packets are forwarded from one node to another and of the interfaces/tools you can use for troubleshooting purposes. One of the interesting things with OpenContrail is that almost all the components have their own introspect interface, which helps a lot during troubleshooting sessions. As we saw, routing is fully distributed in OpenContrail: each vRouter handles its part of the routing using well-known protocols like BGP and MPLS, which have proved their ability to scale.
