2-Walkthrough Part 1

Command syntax conventions used in this walkthrough:

  • $ precedes Linux commands executed in a regular shell
  • mininet> precedes Mininet commands executed in the Mininet CLI
  • # precedes Linux commands executed in a root shell

Display Startup Options

To see Mininet's startup options:
$ sudo mn -h

This walkthrough will cover typical usage of the majority of options listed.
Start Wireshark
To view control traffic using the OpenFlow Wireshark dissector, first open wireshark in the background:
$ sudo wireshark &

In the Wireshark filter box, enter this filter, then click Apply:
of

In Wireshark, click Capture, then Interfaces, then select Start on the loopback interface (lo).
For now, there should be no OpenFlow packets displayed in the main window.
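If you prefer to watch the control channel from a terminal instead of the Wireshark GUI, one option (an illustrative alternative, not part of the original walkthrough; it assumes the reference controller is listening on the classic OpenFlow port 6633) is to capture on the loopback interface with tcpdump:
$ sudo tcpdump -i lo tcp port 6633
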
Note: Wireshark is installed by default in the Mininet VM image. If the system you are using does not have Wireshark and the OpenFlow plugin installed, you may be able to install both of them using Mininet’s install.sh
script as follows:

$ cd ~
$ git clone https://github.com/mininet/mininet  # if it's not already there
$ mininet/util/install.sh -w

If Wireshark is installed but you cannot run it (e.g. you get an error like $DISPLAY not set), please consult the FAQ: https://github.com/mininet/mininet/wiki/FAQ#wiki-x11-forwarding.
Setting X11 up correctly will enable you to run other GUI programs and the xterm terminal emulator, used later in this walkthrough.
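If you are logging into the Mininet VM over SSH, one common way to make $DISPLAY available (an example setup, not mandated by the walkthrough; the default VM user name is typically mininet) is to enable X forwarding on login:
$ ssh -X mininet@<VM IP address>
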
Interact with Hosts and Switches
Start a minimal topology and enter the CLI:
$ sudo mn

The default topology is the minimal topology, which includes one OpenFlow kernel switch connected to two hosts, plus the OpenFlow reference controller. This topology could also be specified on the command line with --topo=minimal. Other topologies are also available out of the box; see the --topo section in the output of mn -h.
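For instance (example invocations only, built from the option names listed by mn -h), a single switch with three hosts, or a linear chain of four switches, could be started like this:
$ sudo mn --topo single,3
$ sudo mn --topo linear,4
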
All four entities (2 host processes, 1 switch process, 1 basic controller) are now running in the VM. The controller can be outside the VM, and instructions for that are at the bottom.
If no specific test is passed as a parameter, the Mininet CLI comes up.
In the Wireshark window, you should see the kernel switch connect to the reference controller.
Display Mininet CLI commands:
mininet> help

Display nodes:
mininet> nodes

Display links:
mininet> net

Dump information about all nodes:
mininet> dump

You should see the switch and two hosts listed.
If the first string typed into the Mininet CLI is a host, switch or controller name, the command is executed on that node. Run a command on a host process:
mininet> h1 ifconfig -a

You should see the host’s h1-eth0 and loopback (lo) interfaces. Note that this interface (h1-eth0) is not seen by the primary Linux system when ifconfig is run, because it is specific to the network namespace of the host process.
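You can check this yourself (a quick sanity check, not part of the original walkthrough): in a second terminal on the VM, while Mininet is running, search the root namespace's interface list for the host interface; it should not be found:
# ifconfig -a | grep h1-eth0
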
In contrast, the switch by default runs in the root network namespace, so running a command on the “switch” is the same as running it from a regular terminal:
mininet> s1 ifconfig -a

This will show the switch interfaces, plus the VM’s connection out (eth0).
For other examples highlighting that the hosts have isolated network state, run arp and route on both s1 and h1.
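For example:
mininet> h1 arp
mininet> h1 route
mininet> s1 arp
mininet> s1 route

The output for h1 reflects only that host's private network state, while s1, which lives in the root namespace, shows the VM's own ARP and routing tables.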
It would be possible to place every host, switch and controller in its own isolated network namespace, but there’s no real advantage to doing so, unless you want to replicate a complex multiple-controller network. Mininet does support this; see the --innamespace option.
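To experiment with it (shown only to illustrate the option named above):
$ sudo mn --innamespace
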
Note that only the network is virtualized; each host process sees the same set of processes and directories. For example, print the process list from a host process:
mininet> h1 ps -a

This should be the exact same as that seen by the root network namespace:
mininet> s1 ps -a

It would be possible to use separate process spaces with Linux containers, but currently Mininet doesn’t do that. Having everything run in the “root” process namespace is convenient for debugging, because it allows you to see all of the processes from the console using ps, kill, etc.
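As an illustration (a hypothetical check; process naming details may differ between Mininet versions), you can usually spot the per-host shells that Mininet starts from any regular terminal on the VM:
$ ps aux | grep mininet:
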
Test connectivity between hosts
Now, verify that you can ping from the first host (h1) to the second host (h2):
mininet> h1 ping -c 1 h2

When a node name appears later in the command, it is replaced by that node’s IP address; this happened here for h2.
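In the default minimal topology, Mininet normally assigns 10.0.0.1 to h1 and 10.0.0.2 to h2, so the command above effectively expands to something like:
mininet> h1 ping -c 1 10.0.0.2
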
You should see OpenFlow control traffic. The first host ARPs for the MAC address of the second, which causes a packet_in
message to go to the controller. The controller then sends a packet_out
message to flood the broadcast packet to other ports on the switch (in this example, the only other data port). The second host sees the ARP request and sends a reply. This reply goes to the controller, which sends it to the first host and pushes down a flow entry.
Now the first host knows the MAC address of the second, and can send its ping via an ICMP Echo Request. This request, along with its corresponding reply from the second host, both go to the controller and result in a flow entry being pushed down (along with the actual packets getting sent out).
Repeat the last ping:
mininet> h1 ping -c 1 h2

You should see a much lower ping time for the second try (< 100us). A flow entry covering ICMP ping traffic was previously installed in the switch, so no control traffic was generated, and the packets immediately pass through the switch.
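To confirm that flow entries were actually installed (a quick check; the exact output depends on the switch type and OpenFlow version in use), you can dump the switch flow tables from the Mininet CLI:
mininet> dpctl dump-flows
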
An easier way to run this test is to use the Mininet CLI built-in pingall command, which does an all-pairs ping:
mininet> pingall

Run a simple web server and client
Remember that ping isn’t the only command you can run on a host! Mininet hosts can run any command or application that is available to the underlying Linux system (or VM) and its file system. You can also enter any bash command, including job control (&, jobs, kill, etc.).
Next, try starting a simple HTTP server on h1, making a request from h2, then shutting down the web server:
mininet> h1 python -m SimpleHTTPServer 80 &
mininet> h2 wget -O - h1
...
mininet> h1 kill %python
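
Note that SimpleHTTPServer is a Python 2 module; if python on your system is Python 3 (an adaptation of the example above, not part of the original walkthrough), the equivalent server is:
mininet> h1 python -m http.server 80 &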

Exit the CLI:
mininet> exit

Cleanup
If Mininet crashes for some reason, clean it up:
$ sudo mn -c

Part 2: Advanced Startup Options
