Neutron currently has a QoS proposal (https://wiki.openstack.org/wiki/Neutron/QoS#Documents), but only the Cisco and NVP plugins implement the QoS feature; the other plugins do not yet. So if you want network QoS in Neutron, some extra work is still required.
Since the proposal's design and interfaces already exist, you can implement the QoS feature yourself:
1. Create a QoS-Rules table that stores the QoS rules, with qos_id as the primary key.
2. Create a QoS-Port-Binding table that records the binding between port_id and qos_id.
3. When a VM is created, nova calls the API exposed by Quantum to write the binding into the database.
4. The ovs-agent fetches the QoS rules from the ovs-plugin via a remote call (with port_id as the parameter).
5. The ovs-agent applies the rules to the interface.
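Steps 1-4 can be sketched with plain SQL; the following is a minimal sketch using sqlite3, where the table and column names (qos_rules, qos_port_bindings, rate, burst) and the helper get_qos_for_port are assumptions for illustration, not the proposal's actual schema or RPC interface:

```python
import sqlite3
import uuid

# Steps 1-2: a QoS-Rules table keyed by qos_id, and a
# QoS-Port-Binding table mapping port_id to qos_id.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE qos_rules (
        qos_id TEXT PRIMARY KEY,
        rate   INTEGER NOT NULL,  -- kbps
        burst  INTEGER NOT NULL   -- kb
    )""")
conn.execute("""
    CREATE TABLE qos_port_bindings (
        port_id TEXT PRIMARY KEY,
        qos_id  TEXT NOT NULL REFERENCES qos_rules(qos_id)
    )""")

# Step 3: nova writes the rule and the binding at VM creation time.
qos_id = str(uuid.uuid4())
conn.execute("INSERT INTO qos_rules VALUES (?, ?, ?)", (qos_id, 10240, 1024))
conn.execute("INSERT INTO qos_port_bindings VALUES (?, ?)", ("port-1", qos_id))

# Step 4: the plugin-side lookup the ovs-agent would invoke remotely,
# with port_id as the parameter.
def get_qos_for_port(conn, port_id):
    return conn.execute(
        """SELECT r.rate, r.burst
           FROM qos_rules r
           JOIN qos_port_bindings b ON b.qos_id = r.qos_id
           WHERE b.port_id = ?""", (port_id,)).fetchone()

print(get_qos_for_port(conn, "port-1"))  # → (10240, 1024)
```

In step 5 the agent then feeds the returned (rate, burst) pair to the ovs-vsctl wrapper shown below.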
For example, OVS QoS (ingress policing) can be applied with the following agent code, which wraps ovs-vsctl:
def set_interface_qos(self, interface, rate, burst):
    ingress_policing_rate = "ingress_policing_rate=%s" % rate
    ingress_policing_burst = "ingress_policing_burst=%s" % burst
    args = ["set", "interface", interface,
            ingress_policing_rate, ingress_policing_burst]
    self.run_vsctl(args)

def clear_interface_qos(self, interface):
    ingress_policing_rate = "ingress_policing_rate=0"
    ingress_policing_burst = "ingress_policing_burst=0"
    args = ["set", "interface", interface,
            ingress_policing_rate, ingress_policing_burst]
    self.run_vsctl(args)
See the following posts for a concrete implementation:
http://blog.csdn.net/spch2008/article/details/9279445
http://blog.csdn.net/spch2008/article/details/9281947
http://blog.csdn.net/spch2008/article/details/9281779
http://blog.csdn.net/spch2008/article/details/9283561
http://blog.csdn.net/spch2008/article/details/9283627
http://blog.csdn.net/spch2008/article/details/9283927
http://blog.csdn.net/spch2008/article/details/9287311
How to set server CPU, disk I/O, and bandwidth consumption limits for instances using a new nova feature: through cgroups, libvirt can cap each instance's CPU time percentage, as well as its read_iops, read_byteps, write_iops, and write_byteps; libvirt can also limit an instance's inbound/outbound bandwidth. (https://wiki.openstack.org/wiki/InstanceResourceQuota)
Bandwidth parameters: vif_inbound_average, vif_inbound_peak, vif_inbound_burst, vif_outbound_average, vif_outbound_peak, vif_outbound_burst
Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these child elements out results in no QoS applied on that traffic direction. So, when you want to shape only a network's incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average, which specifies the average bit rate on the interface being shaped. There are also two optional attributes: peak, which specifies the maximum rate at which the bridge can send data, and burst, the amount of bytes that can be sent in a single burst at peak speed. Accepted values for the attributes are integer numbers. The units for average and peak are kilobytes per second, and for burst just kilobytes. The rate is shared equally within domains connected to the network.
Configuring a bandwidth limit for instance network traffic:
nova-manage flavor set_key --name m1.small --key quota:vif_inbound_average --value 10240
nova-manage flavor set_key --name m1.small --key quota:vif_outbound_average --value 10240

or, using python-novaclient with admin credentials:

nova flavor-key m1.small set quota:vif_inbound_average=10240
nova flavor-key m1.small set quota:vif_outbound_average=10240
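With these flavor keys set, nova passes the limits down to libvirt, which renders them as a <bandwidth> element on the instance's interface in the domain XML, roughly like the following (an illustrative fragment, not captured output; values are in kilobytes per second):

```xml
<interface type='bridge'>
  ...
  <bandwidth>
    <inbound average='10240'/>
    <outbound average='10240'/>
  </bandwidth>
</interface>
```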
The network QoS here is implemented directly with the parameters libvirt provides (http://www.libvirt.org/formatnetwork.html):
...
<forward mode='nat' dev='eth0'/>
<bandwidth>
  <inbound average='1000' peak='5000' burst='5120'/>
  <outbound average='128' peak='256' burst='256'/>
</bandwidth>
...
The <bandwidth> element allows setting quality of service for a particular network (since 0.9.4). Setting bandwidth for a network is supported only for networks with a <forward> mode of route, nat, or no mode at all (i.e. an "isolated" network). Setting bandwidth is not supported for forward modes of bridge, passthrough, private, or hostdev. Attempts to do this will lead to a failure to define the network or to create a transient network.

The <bandwidth> element can only be a subelement of a domain's <interface>, a subelement of a <network>, or a subelement of a <portgroup> in a <network>.

As a subelement of a domain's <interface>, the bandwidth only applies to that one interface of the domain. As a subelement of a <network>, the bandwidth is a total aggregate bandwidth to/from all guest interfaces attached to that network, not to each guest interface individually. If a domain's <interface> has <bandwidth> element values higher than the aggregate for the entire network, then the aggregate bandwidth for the <network> takes precedence. This is because the two choke points are independent of each other: the domain's <interface> bandwidth control is applied on the interface's tap device, while the <network> bandwidth control is applied on the interface part of the bridge device created for that network.

As a subelement of a <portgroup> in a <network>, if a domain's <interface> has a portgroup attribute in its <source> element and if the <interface> itself has no <bandwidth> element, then the <bandwidth> element of the portgroup will be applied individually to each guest interface defined to be a member of that portgroup. Any <bandwidth> element in the domain's <interface> definition will override the setting in the portgroup (since 1.0.1).
Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these child elements out results in no QoS applied for that traffic direction. So, when you want to shape only incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average (or floor as described below). The attributes are as follows; accepted values for each attribute are integer numbers.

- average: specifies the average bit rate for the interface being shaped, in kilobytes per second.
- peak: optional attribute specifying the maximum rate at which the bridge can send data, in kilobytes per second. Note: this attribute in the outbound element is ignored (as Linux ingress filters don't know it yet).
- burst: optional attribute specifying the amount of kilobytes that can be transmitted in a single burst at peak speed.
- floor: optional attribute available only for the inbound element. This attribute guarantees minimal throughput for shaped interfaces. This, however, requires that all traffic goes through one point where QoS decisions can take place, hence why this attribute works only for virtual networks for now (that is, <interface type='network'/> with a forward type of route, nat, or no forward at all). Moreover, the virtual network the interface is connected to is required to have at least inbound QoS set (average at least). If using the floor attribute, users don't need to specify average. However, the peak and burst attributes still require average. Currently, the Linux kernel doesn't allow ingress qdiscs to have any classes, therefore floor can be applied only on inbound and not outbound.

The average, peak, and burst attributes are available since 0.9.4, while the floor attribute is available since 1.0.1.
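As an illustration of the floor attribute described above, a domain interface attached to a virtual network could guarantee a minimum inbound throughput like this (illustrative values; per the rules above, the network itself must have at least an inbound average set):

```xml
<interface type='network'>
  <source network='default'/>
  <bandwidth>
    <!-- floor guarantees 200 KB/s inbound; average caps it at 1000 KB/s -->
    <inbound floor='200' average='1000' peak='5000' burst='1024'/>
    <outbound average='128'/>
  </bandwidth>
</interface>
```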
Network QoS in libvirt is actually implemented on top of tc; running tc -s -d qdisc makes it easy to see the resulting tc configuration.
This approach requires that the VMs are libvirt-based, and that the operating system on the compute and network nodes supports Linux Advanced Routing & Traffic Control.
This approach is essentially a combination of the two above: QoS is still exposed through an API in neutron, but the OVS ingress_policing_rate settings are replaced with tc.
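In the style of the agent code earlier, the ingress policing calls could be swapped for tc commands. The sketch below is an assumption for illustration, not actual Neutron code: it only builds the argument lists for a classless TBF (Token Bucket Filter) qdisc, and a real agent would execute them with its command helper. Note that tc shapes egress, so limiting a guest's inbound traffic means shaping the host-side tap device:

```python
def build_tc_commands(interface, rate_kbit, burst_kb, latency_ms=50):
    """Build the tc commands that rate-limit traffic leaving
    `interface` with a TBF qdisc."""
    return [
        # Remove any existing root qdisc (the error on first run is ignorable).
        ["tc", "qdisc", "del", "dev", interface, "root"],
        # Attach a TBF qdisc: simple classless rate limiting.
        ["tc", "qdisc", "add", "dev", interface, "root", "tbf",
         "rate", "%dkbit" % rate_kbit,
         "burst", "%dkb" % burst_kb,
         "latency", "%dms" % latency_ms],
    ]

for cmd in build_tc_commands("tap-port-1", rate_kbit=10240, burst_kb=1024):
    print(" ".join(cmd))
```

The same (rate, burst) pair looked up per port from the QoS database would simply be routed to this builder instead of to ovs-vsctl.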
For details on using tc, see http://lartc.org/lartc.html, which is very thorough.
This approach requires that the operating system on the compute and network nodes supports Linux Advanced Routing & Traffic Control.