I want to change link bandwidth dynamically in Mininet. Through Google I found a way to achieve this in [1]. After several hours of work I finally got my test running, and the iperf results confirm that the bandwidth does change. But I also encounter an error:
Error: qdisc htb 5: root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
qdisc netem 10: parent 5:1 limit 1000 delay 20.0ms
This message does not seem to affect the results much; it looks like tc echoing the existing qdisc configuration when the link is reconfigured rather than a fatal error, but I have not found a real fix yet.
bandwidth.json
[
  {
    "time": 0,
    "type": "iperf",
    "params": {
      "src": "hl1",
      "dst": "hr1",
      "duration": 5
    }
  },
  {
    "time": 7,
    "type": "editLink",
    "params": {
      "src": "hl1",
      "dst": "s1",
      "bw": 2
    }
  },
  {
    "time": 10,
    "type": "iperf",
    "params": {
      "src": "hl1",
      "dst": "hr1",
      "duration": 5
    }
  }
]
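The event file drives the whole experiment, so it is worth sanity-checking before a long run. A minimal sketch (the `validate` helper and the inline `SCHEDULE` string are illustrative, not part of the project):

```python
import json

# Same schema as bandwidth.json: a trigger time, an event type
# ("iperf" or "editLink"), and type-specific params.
SCHEDULE = """
[
  {"time": 0,  "type": "iperf",    "params": {"src": "hl1", "dst": "hr1", "duration": 5}},
  {"time": 7,  "type": "editLink", "params": {"src": "hl1", "dst": "s1",  "bw": 2}},
  {"time": 10, "type": "iperf",    "params": {"src": "hl1", "dst": "hr1", "duration": 5}}
]
"""

def validate(events):
    """Check that each event has the fields the dispatcher expects."""
    known = {'iperf', 'editLink'}
    for ev in events:
        assert ev['type'] in known, 'unknown event type: %s' % ev['type']
        assert ev['time'] >= 0, 'negative trigger time'
        assert 'src' in ev['params'] and 'dst' in ev['params']
    # Return the events in dispatch order.
    return sorted(events, key=lambda ev: ev['time'])

events = validate(json.loads(SCHEDULE))
print([ev['type'] for ev in events])  # ['iperf', 'editLink', 'iperf']
```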
minisched.py
"""
This is the scheduler class from Python 3's sched module.
Previous versions are not valid since we need to use kwargs with the
event functions. We explicitly add the source code to make minievents
compatible with python 2.x versions.
A generally useful event scheduler class.
Each instance of this class manages its own queue.
No multi-threading is implied; you are supposed to hack that
yourself, or use a single instance per application.
Each instance is parametrized with two functions, one that is
supposed to return the current time, one that is supposed to
implement a delay. You can implement real-time scheduling by
substituting time and sleep from built-in module time, or you can
implement simulated time by writing your own functions. This can
also be used to integrate scheduling with STDWIN events; the delay
function is allowed to modify the queue. Time can be expressed as
integers or floating point numbers, as long as it is consistent.
Events are specified by tuples (time, priority, action, argument, kwargs).
As in UNIX, lower priority numbers mean higher priority; in this
way the queue can be maintained as a priority queue. Execution of the
event means calling the action function, passing it the argument
sequence in "argument" (remember that in Python, multiple function
arguments are packed in a sequence) and keyword parameters in "kwargs".
The action function may be an instance method so it
has another way to reference private data (besides global variables).
"""
# https://github.com/liuheng92/DASH_NET/blob/master/mininet/minisched.py
# XXX The timefunc and delayfunc should have been defined as methods
# XXX so you can define new kinds of schedulers using subclassing
# XXX instead of having to define a module or class just to hold
# XXX the global state of your particular time and delay functions.
import time
import heapq
from collections import namedtuple
try:
    import threading
except ImportError:
    import dummy_threading as threading
try:
    from time import monotonic as _time
except ImportError:
    from time import time as _time
__all__ = ["scheduler"]
class Event(namedtuple('Event', 'time, priority, action, argument, kwargs')):
    def __eq__(s, o): return (s.time, s.priority) == (o.time, o.priority)
    def __ne__(s, o): return (s.time, s.priority) != (o.time, o.priority)
    def __lt__(s, o): return (s.time, s.priority) < (o.time, o.priority)
    def __le__(s, o): return (s.time, s.priority) <= (o.time, o.priority)
    def __gt__(s, o): return (s.time, s.priority) > (o.time, o.priority)
    def __ge__(s, o): return (s.time, s.priority) >= (o.time, o.priority)
_sentinel = object()
class scheduler:

    def __init__(self, timefunc=_time, delayfunc=time.sleep):
        """Initialize a new instance, passing the time and delay
        functions"""
        self._queue = []
        self._lock = threading.RLock()
        self.timefunc = timefunc
        self.delayfunc = delayfunc

    def enterabs(self, time, priority, action, argument=(), kwargs=_sentinel):
        """Enter a new event in the queue at an absolute time.
        Returns an ID for the event which can be used to remove it,
        if necessary.
        """
        if kwargs is _sentinel:
            kwargs = {}
        event = Event(time, priority, action, argument, kwargs)
        with self._lock:
            heapq.heappush(self._queue, event)
        return event  # The ID

    def enter(self, delay, priority, action, argument=(), kwargs=_sentinel):
        """A variant that specifies the time as a relative time.
        This is actually the more commonly used interface.
        """
        time = self.timefunc() + delay
        return self.enterabs(time, priority, action, argument, kwargs)

    def cancel(self, event):
        """Remove an event from the queue.
        This must be presented the ID as returned by enter().
        If the event is not in the queue, this raises ValueError.
        """
        with self._lock:
            self._queue.remove(event)
            heapq.heapify(self._queue)

    def empty(self):
        """Check whether the queue is empty."""
        with self._lock:
            return not self._queue

    def run(self, blocking=True):
        """Execute events until the queue is empty.
        If blocking is False executes the scheduled events due to
        expire soonest (if any) and then return the deadline of the
        next scheduled call in the scheduler.
        When there is a positive delay until the first event, the
        delay function is called and the event is left in the queue;
        otherwise, the event is removed from the queue and executed
        (its action function is called, passing it the argument). If
        the delay function returns prematurely, it is simply
        restarted.
        It is legal for both the delay function and the action
        function to modify the queue or to raise an exception;
        exceptions are not caught but the scheduler's state remains
        well-defined so run() may be called again.
        A questionable hack is added to allow other threads to run:
        just after an event is executed, a delay of 0 is executed, to
        avoid monopolizing the CPU when other threads are also
        runnable.
        """
        # localize variable access to minimize overhead
        # and to improve thread safety
        lock = self._lock
        q = self._queue
        delayfunc = self.delayfunc
        timefunc = self.timefunc
        pop = heapq.heappop
        while True:
            with lock:
                if not q:
                    break
                time, priority, action, argument, kwargs = q[0]
                now = timefunc()
                if time > now:
                    delay = True
                else:
                    delay = False
                    pop(q)
            if delay:
                if not blocking:
                    return time - now
                delayfunc(time - now)
            else:
                action(*argument, **kwargs)
                delayfunc(0)  # Let other threads run

    @property
    def queue(self):
        """An ordered list of upcoming events.
        Events are named tuples with fields for:
        time, priority, action, arguments, kwargs
        """
        # Use heapq to sort the queue rather than using 'sorted(self._queue)'.
        # With heapq, two events scheduled at the same time will show in
        # the actual order they would be retrieved.
        with self._lock:
            events = self._queue[:]
        return list(map(heapq.heappop, [events]*len(events)))
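The class above is essentially Python 3's sched module backported so that enter() can forward keyword arguments on Python 2. On Python 3 the standard library scheduler already accepts kwargs, so the dispatch pattern can be sketched with simulated time (the FakeClock is illustrative, not part of the project):

```python
import sched

# Simulated clock: "sleeping" just advances the clock, so the
# 7-second schedule below runs instantly.
class FakeClock:
    def __init__(self):
        self.now = 0.0
    def time(self):
        return self.now
    def sleep(self, delay):
        self.now += delay

clock = FakeClock()
s = sched.scheduler(clock.time, clock.sleep)
log = []

def fire(name, **params):
    # Record when each event actually ran and with which params.
    log.append((clock.time(), name, params))

# Same pattern as the event loader: relative time, priority 1,
# and the JSON "params" dict forwarded as kwargs.
s.enter(7, 1, fire, argument=('editLink',), kwargs={'src': 'hl1', 'dst': 's1', 'bw': 2})
s.enter(0, 1, fire, argument=('iperf',), kwargs={'src': 'hl1', 'dst': 'hr1'})
s.run()
print(log)
```

Events fire in time order regardless of insertion order, and each action receives its params as keyword arguments, which is exactly what the editLink/iperf handlers below rely on.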
dumbbell.py
#!/usr/bin/python
# CMU 18731 HW2
# Code referenced from: [email protected]:huangty/cs144_bufferbloat.git
# Edited by: Deepti Sunder Prakash
# https://github.com/dhruvityagi/netsec/blob/master/dumbbell.py
from mininet.topo import Topo
from mininet.node import CPULimitedHost
from mininet.link import TCLink
from mininet.net import Mininet
from mininet.log import info
from mininet.cli import CLI
from time import sleep
import time
import os
import json
import threading
from minisched import scheduler
class Trigger(threading.Thread):
    def __init__(self, task):
        threading.Thread.__init__(self)
        self.scheduler = task
        self.running = True

    def stop_task(self):
        self.running = False

    def run(self):
        self.scheduler.run()
class DumbbellTopo(Topo):
    "Dumbbell topology for Shrew experiment"

    def build(self, n=6, bw_net=100, delay='20ms', bw_host=10):
        # Two switches joined by the bottleneck link
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        self.addLink(s1, s2, bw=bw_net, delay=delay)
        # Left side: 10 Mbps, 20 ms delay access links
        a1 = self.addHost('a1')
        hl1 = self.addHost('hl1')
        hl2 = self.addHost('hl2')
        self.addLink(hl1, s1, bw=bw_host, delay=delay)
        self.addLink(hl2, s1, bw=bw_host, delay=delay)
        self.addLink(a1, s1, bw=bw_host, delay=delay)
        # Right side: 10 Mbps, 20 ms delay access links
        a2 = self.addHost('a2')
        hr1 = self.addHost('hr1')
        hr2 = self.addHost('hr2')
        self.addLink(hr1, s2, bw=bw_host, delay=delay)
        self.addLink(hr2, s2, bw=bw_host, delay=delay)
        self.addLink(a2, s2, bw=bw_host, delay=delay)
class DumbbellTest:
    def __init__(self, event_file=None):
        self.event_file = event_file
        self.perf_count = 1
        self.perf_port = 5000
        self.topo = DumbbellTopo()
        self.net = Mininet(topo=self.topo, host=CPULimitedHost, link=TCLink,
                           autoPinCpus=True)
        self.net.start()
        self.scheduler = scheduler(time.time, time.sleep)
        self.trigger = Trigger(self.scheduler)
        self.load_event()
        self.trigger.start()
    def myiperf(self, **kwargs):
        kwargs.setdefault('protocol', 'TCP')
        kwargs.setdefault('duration', 5)
        kwargs.setdefault('bw', 100000)
        info('*** iperf event at t={time}: {args}\n'.format(time=time.time(), args=kwargs))
        if not os.path.exists("output"):
            os.makedirs("output")
        pre_out = "output/iperf-%s" % str(self.perf_count)
        self.perf_count += 1
        print("iperf run")
        server_output = pre_out + "-{protocol}-server-{src}-{dst}.txt".format(**kwargs)
        client_output = pre_out + "-{protocol}-client-{src}-{dst}.txt".format(**kwargs)
        client, server = self.net.get(kwargs['src'], kwargs['dst'])
        iperf_server_cmd = 'iperf -s -i 1 -p %s' % str(self.perf_port)
        iperf_client_cmd = 'iperf -c %s -p %s -t %s' % (
            server.IP(), str(self.perf_port), kwargs['duration'])
        server.sendCmd('{cmd} &>{output} &'.format(cmd=iperf_server_cmd,
                                                   output=server_output))
        info('iperf server command: {cmd} &>{output} &\n'.format(cmd=iperf_server_cmd,
                                                                 output=server_output))
        # Patch to allow sendCmd while iperf runs in the background.
        # Downside: we cannot tell when iperf finishes or collect its output.
        server.waiting = False
        if kwargs['protocol'].lower() == 'tcp':
            while 'Connected' not in client.cmd(
                    'sh -c "echo A | telnet -e A %s %s"' % (server.IP(), str(self.perf_port))):
                info('Waiting for iperf to start up...\n')
                sleep(.5)
        self.perf_port = self.perf_port + 1
        info('iperf client command: {cmd} &>{output} &\n'.format(
            cmd=iperf_client_cmd, output=client_output))
        client.sendCmd('{cmd} &>{output} &'.format(
            cmd=iperf_client_cmd, output=client_output))
        # Same patch on the client side.
        client.waiting = False
    # *** Error: qdisc htb 5: root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
    # qdisc netem 10: parent 5:1 limit 1000 delay 20.0ms
    # Printed by tc when the link is reconfigured; cause not identified yet.
    def editLink(self, **kwargs):
        n1, n2 = self.net.get(kwargs['src'], kwargs['dst'])
        intf_pairs = n1.connectionsTo(n2)
        # info('*** editLink event at t={time}: {args}\n'.format(time=time.time(), args=kwargs))
        for n1_intf, n2_intf in intf_pairs:
            n1_intf.config(**kwargs)
            n2_intf.config(**kwargs)
    def load_event(self):
        event_type_to_f = {'editLink': self.editLink, 'iperf': self.myiperf}
        if self.event_file and os.path.exists(self.event_file):
            with open(self.event_file) as f:
                json_events = json.load(f)
            for event in json_events:
                event_type = event['type']
                print(event_type)
                self.scheduler.enter(event['time'], 1,
                                     event_type_to_f[event_type], kwargs=event['params'])
    def bbnet(self):
        # kwargs = {'src': 'hl1', 'dst': 'hr1'}
        # self.myiperf(**kwargs)
        time.sleep(20)
        # CLI(self.net)
        self.trigger.join()
        self.net.stop()

if __name__ == '__main__':
    dumbbell = DumbbellTest('bandwidth.json')
    dumbbell.bbnet()
## Run mn -c afterwards to clean up leftover Mininet state.
Next, try the method in [4], in which tc is used directly to change the bandwidth dynamically.
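A rough sketch of what that tc-based approach looks like, to be run on the host carrying the switch interface. The interface name (s1-eth1) and the 5:0/5:1 handle/classid are assumptions based on how Mininet's TCLink names its qdiscs; verify them with the two "show" commands before changing anything.

```shell
# Inspect the qdisc hierarchy Mininet installed on the link
tc qdisc show dev s1-eth1
tc class show dev s1-eth1

# Change the HTB rate in place, without tearing down the qdisc
# (2 Mbit matches the "bw": 2 editLink event above)
tc class change dev s1-eth1 parent 5:0 classid 5:1 htb rate 2Mbit burst 15k
```

This avoids re-running intf.config(), so it may also avoid the "qdisc htb 5: root refcnt 2" message printed by the editLink event.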
Reference:
[1] DASH_NET
[2] Changing bandwidth limits of TCIntf at runtime
[3] smooth change link
[4] bbr measurement framework