note: this is a re-layout, done in a Markdown editor, of an earlier blog post of mine with the same title.

Question 1: How do network devices use the Linux kernel's DCB subsystem to satisfy the varied QoS requirements of converged network traffic?

Question 2: Can converged NICs and storage traffic also make use of the DCB subsystem, and how do they work with it?

This post answers these two questions. It first gives a broad overview of the DCB mechanism and the environment it is used in; it then looks at lldpad as an example of an application that uses DCB; after that it introduces an important data structure of the DCB subsystem; finally it covers the concrete implementation of the DCB kernel module and of the drivers.
First of all, what is DCB (Data Center Bridging)?

The whole point of DCB is to handle many kinds of traffic — FCoE traffic, ordinary TCP traffic, video traffic, and so on — whose QoS requirements all differ: some must not lose packets, some need bandwidth guarantees, yet all of them may have to leave through the same network interface on the way to their destinations. Recognizing this need, the Linux kernel gained the TC (traffic control) module long ago (around 2.4, or perhaps even earlier), which applies different treatment to different traffic types. As mentioned before, the driver is the agent of the network interface, and the network device itself may well implement per-traffic-type QoS just as the kernel does. If the device lacks multiple queues and the related processing, QoS still works; the work just cannot be offloaded, and the NIC may become the bottleneck. A NIC with multiple queues may carry an internal processor to handle this multi-queue complexity, and the DCB part of its driver will likely need to map the kernel's TC classes onto the hardware queues. But even without multi-queue hardware, the kernel's QoS can still do its job (at the cost of CPU cycles, of course).
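For the mapping part, the kernel offers a small API in include/linux/netdevice.h that a multiqueue driver can use to describe its hardware traffic classes to the stack. A minimal sketch — the counts below are invented for illustration, not taken from any real driver:

```
#include <linux/netdevice.h>

/* Hedged sketch: advertise 8 hardware traffic classes, map 802.1p
 * priority n to TC n, and give each TC a single TX queue. */
static void example_setup_tc(struct net_device *dev)
{
	u8 tc;

	netdev_set_num_tc(dev, 8);

	for (tc = 0; tc < 8; tc++) {
		netdev_set_prio_tc_map(dev, tc, tc);	/* priority -> TC    */
		netdev_set_tc_queue(dev, tc, 1, tc);	/* TC -> queue range */
	}
}
```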
The driver is the NIC's agent: the NIC is, so to speak, deaf and mute, and the host cannot talk to it directly; the driver is what lets the two interact. When the host wants to send a packet out of an interface, it calls the driver's transmit function. When the NIC's physical layer receives packets, the driver's interrupt handler processes them (for efficiency, NAPI may also be used nowadays). Transmit and receive by themselves are simple: just process packets as fast as possible. But as traffic types grow more complex, different traffic needs different handling. Say the host has airplanes, tanks and bicycles that all must leave through this one interface; we give airplanes the highest priority and bicycles the lowest, so a bicycle may pass only after all airplanes and tanks have gone (our host has plenty of memory: one queue for airplanes, one for tanks, one for bicycles). If the interface is wide enough, we can also give the interface itself several queues, and outgoing traffic lines up right at the interface (after smoothing through memory buffers, of course). The wire is assumed to carry only one thing at a time.
The user's traffic characteristics can be configured through lldpad. The configured values have to be exchanged with the peer switch, so they go out through that interface; they also have to reach the kernel, because when the kernel protocol stack receives this information it must configure the corresponding queues and algorithms (it also has to query the hardware capabilities, and, if necessary, map tc queues onto NIC queues). Another reason to interact with the driver is that the DCB-LLDP exchange needs the NIC's DCB capabilities and requirements, which the driver itself reports.
As for storage traffic, FCoE traffic for example, it does not pass through the kernel's Qdisc; it goes through the FCoE module instead, and the FCoE module also uses the DCB module — presumably achieving the same end as the Qdisc path?
Q: Why does the FCoE module also use functions from the DCB module?
A: As the protocol handler for FCoE, it needs to configure the DCB parameters of FCoE-type and FIP-type packets. (Is this because those frames no longer pass through the 802.1p layer of the regular stack?)

Q: What does this accomplish?
A: It obtains the priorities of the FCoE and FIP packet types: up (the FCoE priority) and fup (the FIP priority), as in fcoe_dcb_create() below.
```
static void fcoe_dcb_create(struct fcoe_interface *fcoe)
{
#ifdef CONFIG_DCB
	int dcbx;
	u8 fup, up;
	struct net_device *netdev = fcoe->realdev;
	struct fcoe_port *port = lport_priv(fcoe->ctlr.lp);
	struct dcb_app app = {
		.priority = 0,
		.protocol = ETH_P_FCOE
	};
	...
	app.selector = DCB_APP_IDTYPE_ETHTYPE;
	up = dcb_getapp(netdev, &app);
	app.protocol = ETH_P_FIP;
	fup = dcb_getapp(netdev, &app);

	port->priority = ffs(up) ? ffs(up) - 1 : 0;
	fcoe->ctlr.priority = ffs(fup) ? ffs(fup) - 1 : port->priority;
#endif
}
```
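dcb_getapp() returns a bitmask of user priorities, so ffs(up) - 1 picks the lowest priority set in the mask (e.g. up == 0x08 yields priority 3). The resolved priority is later stamped onto outgoing frames; condensed from fcoe_xmit() in the same driver, to the best of my reading:

```
	/* fcoe_xmit(), condensed: the DCB-negotiated priority becomes the
	 * 802.1p priority carried in the frame's VLAN tag */
	skb->priority = port->priority;
```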
The Linux kernel DCB subsystem
------------------------------

lldpad talks to the kernel over an rtnetlink socket (PF_NETLINK, NETLINK_ROUTE), so let us start from the netlink socket plumbing. The netlink protocol family is registered at boot:

```
static int __init netlink_proto_init(void)
{
	...
	sock_register(&netlink_family_ops);
	...
}
```
```
static const struct net_proto_family netlink_family_ops = {
	.family = PF_NETLINK,
	.create = netlink_create,
	.owner	= THIS_MODULE,	/* for consistency 8) */
};
```
netlink_create() in turn calls __netlink_create():

```
static int __netlink_create(struct net *net, struct socket *sock,
			    struct mutex *cb_mutex, int protocol)
{
	struct sock *sk;
	struct netlink_sock *nlk;

	sock->ops = &netlink_ops;
	sk = sk_alloc(net, PF_NETLINK, GFP_KERNEL, &netlink_proto);
	...
}
```
```
static const struct proto_ops netlink_ops = {
	.family =	PF_NETLINK,
	.owner =	THIS_MODULE,
	.release =	netlink_release,
	.bind =		netlink_bind,
	.connect =	netlink_connect,
	.socketpair =	sock_no_socketpair,
	.accept =	sock_no_accept,
	.getname =	netlink_getname,
	.poll =		datagram_poll,
	.ioctl =	sock_no_ioctl,
	.listen =	sock_no_listen,
	.shutdown =	sock_no_shutdown,
	.setsockopt =	netlink_setsockopt,
	.getsockopt =	netlink_getsockopt,
	.sendmsg =	netlink_sendmsg,	/* these back the corresponding socket system calls */
	.recvmsg =	netlink_recvmsg,
	.mmap =		sock_no_mmap,
	.sendpage =	sock_no_sendpage,
};
```
When lldpad calls sendmsg() on such a socket, netlink_sendmsg() delivers the message to the rtnetlink receive path, which dispatches it to the handler registered for that message type. The dcbnl_ops seen below can thus be regarded as a structure the rtnetlink subsystem relies on. To understand this better, let us trace the whole process. How lldpad itself operates has by now been mostly sorted out; to see how lldpad and the kernel interact, we should look at which parts of the kernel struct dcbnl_rtnl_ops appears in.
The structure is defined in include/net/dcbnl.h, at line 46:
```
/* Ops struct for the netlink callbacks. Used by DCB-enabled drivers through
 * the netdevice struct.
 */
struct dcbnl_rtnl_ops {
	/* IEEE 802.1Qaz std */
	...
	int (*ieee_delapp) (struct net_device *, struct dcb_app *);

	/* CEE std */
	u8   (*getstate)(struct net_device *);
	...
	int (*peer_getapptable)(struct net_device *, struct dcb_app *);

	/* CEE peer */
	int (*cee_peer_getpg) (struct net_device *, struct cee_pg *);
	int (*cee_peer_getpfc) (struct net_device *, struct cee_pfc *);
};

#endif /* __NET_DCBNL_H__ */
```
It is referenced by DCB-enabled drivers and by the DCB module — in quite a few places, in fact. Many drivers use it: Broadcom's bnx2x (hardly something you need to select when building a kernel), mellanox/mlx4, qlogic/qlcnic, intel/ixgbe, and so on. These drivers define and implement the structure, and attach it to the net_device structure as a member.
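The driver-side pattern looks roughly like this — a minimal sketch, not taken from any one driver; my_getstate, my_adapter and my_dcbnl_ops are made-up names, only the shape of the code follows the real drivers:

```
/* Hypothetical driver sketch: implement a CEE callback or two and hand
 * the ops table to the stack through struct net_device. */
static u8 my_getstate(struct net_device *dev)
{
	struct my_adapter *adapter = netdev_priv(dev);

	return adapter->dcb_enabled;	/* 1 = DCB on, 0 = off */
}

static const struct dcbnl_rtnl_ops my_dcbnl_ops = {
	.getstate	= my_getstate,
	/* .setstate, .getpfccfg, .setpfccfg, ... as the hardware allows */
};

/* in the probe path, before register_netdev(): */
netdev->dcbnl_ops = &my_dcbnl_ops;
```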
Then in net/dcb/dcbnl.c (the DCB module) there are functions such as:

```
static int dcbnl_build_peer_app(struct net_device *netdev, struct sk_buff *skb,
				int app_nested_type, int app_info_type,
				int app_entry_type)
```

Inside it, a pointer variable ops is assigned the netdev's dcbnl_ops:

```
	const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
```
```
static int dcbnl_ieee_fill(struct sk_buff *skb, struct net_device *netdev)
{
	struct nlattr *ieee, *app;
	struct dcb_app_type *itr;
	const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
	int dcbx;
	...

static int dcbnl_cee_pg_fill(struct sk_buff *skb, struct net_device *dev,
			     int dir)
{
	u8 pgid, up_map, prio, tc_pct;
	const struct dcbnl_rtnl_ops *ops = dev->dcbnl_ops;
	...
```
The same pattern appears in dcbnl_notify(), dcbnl_cee_fill(), and dcbnl_cee_get() (which handles the CEE DCBX GET commands).
"se-preview-section-delimiter">
DCB模块
-----
"se-preview-section-delimiter">
Net/dcb目录
---------
主要是和应用程序交互,解析应用程序的包,执行相关的功能,然后去调用变量的callback函数进行get或者set操作,再将结果反馈给应用程序lldpad。
"se-preview-section-delimiter">
How the DCB subsystem registers with rtnetlink
----------------------------------------------
The 3.2 kernel code for this reads quite easily, and it is pretty much what that paper described. Note: the following is translated from the code and its comments — and translating it is actually rather fun.
The __rtnl_register() function registers an rtnetlink message type (it is exported precisely so that modules can register their own).

Parameters:
@protocol: protocol family, or PF_UNSPEC
@msgtype: rtnetlink message type
@doit: function pointer called for each request message
@dumpit: function pointer called for each NLM_F_DUMP request
@calcit: function pointer that calculates the size of a dump message
"se-preview-section-delimiter">
```
static struct rtnl_link *rtnl_msg_handlers[RTNL_FAMILY_MAX + 1];

int __rtnl_register(int protocol, int msgtype,
		    rtnl_doit_func doit, rtnl_dumpit_func dumpit,
		    rtnl_calcit_func calcit)
{
	struct rtnl_link *tab;
	int msgindex;

	BUG_ON(protocol < 0 || protocol > RTNL_FAMILY_MAX);
	msgindex = rtm_msgindex(msgtype);

	tab = rtnl_msg_handlers[protocol];
	if (tab == NULL) {
		tab = kcalloc(RTM_NR_MSGTYPES, sizeof(*tab), GFP_KERNEL);
		if (tab == NULL)
			return -ENOBUFS;

		rtnl_msg_handlers[protocol] = tab;
	}

	if (doit)
		tab[msgindex].doit = doit;
	if (dumpit)
		tab[msgindex].dumpit = dumpit;
	if (calcit)
		tab[msgindex].calcit = calcit;

	return 0;
}
EXPORT_SYMBOL_GPL(__rtnl_register);

void rtnl_register(int protocol, int msgtype,
		   rtnl_doit_func doit, rtnl_dumpit_func dumpit,
		   rtnl_calcit_func calcit)
{
	if (__rtnl_register(protocol, msgtype, doit, dumpit, calcit) < 0)
		panic("Unable to register rtnetlink message handler, "
		      "protocol = %d, message type = %d\n",
		      protocol, msgtype);
}
EXPORT_SYMBOL_GPL(rtnl_register);
```
When a message arrives, the registered doit is invoked. The DCB module registers its handlers in net/dcb/dcbnl.c:

```
static int __init dcbnl_init(void)
{
	INIT_LIST_HEAD(&dcb_app_list);

	rtnl_register(PF_UNSPEC, RTM_GETDCB, dcb_doit, NULL, NULL);
	rtnl_register(PF_UNSPEC, RTM_SETDCB, dcb_doit, NULL, NULL);

	return 0;
}
```
This dcb_doit() does quite a lot of work: it parses the skb and the message headers, then dispatches to the requested operation:

```
static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
{
	struct net *net = sock_net(skb->sk);
	struct net_device *netdev;
	struct dcbmsg *dcb = (struct dcbmsg *)NLMSG_DATA(nlh);
	struct nlattr *tb[DCB_ATTR_MAX + 1];
	u32 pid = skb ? NETLINK_CB(skb).pid : 0;
	int ret = -EINVAL;

	if (!net_eq(net, &init_net))
		return -EINVAL;

	ret = nlmsg_parse(nlh, sizeof(*dcb), tb, DCB_ATTR_MAX,
			  dcbnl_rtnl_policy);
	if (ret < 0)
		return ret;

	if (!tb[DCB_ATTR_IFNAME])
		return -EINVAL;

	netdev = dev_get_by_name(&init_net, nla_data(tb[DCB_ATTR_IFNAME]));
	if (!netdev)
		return -EINVAL;

	if (!netdev->dcbnl_ops)
		goto errout;

	switch (dcb->cmd) {
	case DCB_CMD_GSTATE:
		ret = dcbnl_getstate(netdev, tb, pid, nlh->nlmsg_seq,
				     nlh->nlmsg_flags);
	...
```
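Each command handler is then a thin wrapper around the corresponding driver callback. dcbnl_getstate(), for instance, simply forwards to ops->getstate and wraps the returned value in a reply (3.2 kernel, lightly condensed; dcbnl_reply() is shown further below):

```
static int dcbnl_getstate(struct net_device *netdev, struct nlattr **tb,
			  u32 pid, u32 seq, u16 flags)
{
	int ret = -EINVAL;

	/* only devices that implement the callback can answer */
	if (!netdev->dcbnl_ops->getstate)
		return ret;

	ret = dcbnl_reply(netdev->dcbnl_ops->getstate(netdev),
			  RTM_GETDCB, DCB_CMD_GSTATE, DCB_ATTR_STATE,
			  pid, seq, flags);

	return ret;
}
```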
The top-level attributes are documented in include/linux/dcbnl.h:

```
/**
 * enum dcbnl_attrs - DCB top-level netlink attributes
 *
 * @DCB_ATTR_UNDEFINED: unspecified attribute to catch errors
 * @DCB_ATTR_IFNAME: interface name of the underlying device (NLA_STRING)
 * @DCB_ATTR_STATE: enable state of DCB in the device (NLA_U8)
 * @DCB_ATTR_PFC_STATE: enable state of PFC in the device (NLA_U8)
 * @DCB_ATTR_PFC_CFG: priority flow control configuration (NLA_NESTED)
 * @DCB_ATTR_NUM_TC: number of traffic classes supported in the device (NLA_U8)
 * @DCB_ATTR_PG_CFG: priority group configuration (NLA_NESTED)
 * @DCB_ATTR_SET_ALL: bool to commit changes to hardware or not (NLA_U8)
 * @DCB_ATTR_PERM_HWADDR: MAC address of the physical device (NLA_NESTED)
 * @DCB_ATTR_CAP: DCB capabilities of the device (NLA_NESTED)
 * @DCB_ATTR_NUMTCS: number of traffic classes supported (NLA_NESTED)
 * @DCB_ATTR_BCN: backward congestion notification configuration (NLA_NESTED)
 * @DCB_ATTR_IEEE: IEEE 802.1Qaz supported attributes (NLA_NESTED)
 * @DCB_ATTR_DCBX: DCBX engine configuration in the device (NLA_U8)
 * @DCB_ATTR_FEATCFG: DCBX features flags (NLA_NESTED)
 * @DCB_ATTR_CEE: CEE std supported attributes (NLA_NESTED)
 */
```
These attributes ride in the standard netlink attribute and message headers:

```
struct nlattr {
	__u16 nla_len;
	__u16 nla_type;
};

struct nlmsghdr {
	__u32 nlmsg_len;	/* Length of message including header */
	__u16 nlmsg_type;	/* Message content */
	__u16 nlmsg_flags;	/* Additional flags */
	__u32 nlmsg_seq;	/* Sequence number */
	__u32 nlmsg_pid;	/* Sending process port ID */
};
```
The DCB module defines a validation policy for its own attributes:
```
/* DCB netlink attributes policy */
static const struct nla_policy dcbnl_rtnl_policy[DCB_ATTR_MAX + 1] = {
	[DCB_ATTR_IFNAME]      = {.type = NLA_NUL_STRING, .len = IFNAMSIZ - 1},
	[DCB_ATTR_STATE]       = {.type = NLA_U8},
	[DCB_ATTR_PFC_CFG]     = {.type = NLA_NESTED},
	[DCB_ATTR_PG_CFG]      = {.type = NLA_NESTED},
	[DCB_ATTR_SET_ALL]     = {.type = NLA_U8},
	[DCB_ATTR_PERM_HWADDR] = {.type = NLA_FLAG},
	[DCB_ATTR_CAP]         = {.type = NLA_NESTED},
	[DCB_ATTR_PFC_STATE]   = {.type = NLA_U8},
	[DCB_ATTR_BCN]         = {.type = NLA_NESTED},
	[DCB_ATTR_APP]         = {.type = NLA_NESTED},
	[DCB_ATTR_IEEE]        = {.type = NLA_NESTED},
	[DCB_ATTR_DCBX]        = {.type = NLA_U8},
	[DCB_ATTR_FEATCFG]     = {.type = NLA_NESTED},
};
```
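Nested attributes (NLA_NESTED) get a second parsing pass inside the handler; dcbnl_getpfccfg(), for example, unpacks the per-priority PFC settings roughly like this (condensed):

```
	struct nlattr *data[DCB_PFC_UP_ATTR_MAX + 1];

	/* unpack the nested DCB_ATTR_PFC_CFG attribute against its own policy */
	ret = nla_parse_nested(data, DCB_PFC_UP_ATTR_MAX,
			       tb[DCB_ATTR_PFC_CFG], dcbnl_pfc_up_policy);
```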
Parsing of the rtnetlink message is implemented like this:
```
/**
 * nlmsg_parse - parse attributes of a netlink message
 * @nlh: netlink message header
 * @hdrlen: length of family specific header
 * @tb: destination array with maxtype+1 elements
 * @maxtype: maximum attribute type to be expected
 * @policy: validation policy
 *
 * See nla_parse()
 */
static inline int nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
			      struct nlattr *tb[], int maxtype,
			      const struct nla_policy *policy)
{
	if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen))
		return -EINVAL;

	return nla_parse(tb, maxtype, nlmsg_attrdata(nlh, hdrlen),
			 nlmsg_attrlen(nlh, hdrlen), policy);
}
```
```
/* Parses a stream of attributes and stores a pointer to each attribute in
 * the tb array accessible via the attribute type. Attributes with a type
 * exceeding maxtype will be silently ignored for backwards compatibility
 * reasons. policy may be set to NULL if no validation is required.
 *
 * Returns 0 on success or a negative error code.
 */
int nla_parse(struct nlattr **tb, int maxtype, const struct nlattr *head,
	      int len, const struct nla_policy *policy)
{
	const struct nlattr *nla;
	int rem, err;

	memset(tb, 0, sizeof(struct nlattr *) * (maxtype + 1));

	nla_for_each_attr(nla, head, len, rem) {
		u16 type = nla_type(nla);

		if (type > 0 && type <= maxtype) {
			if (policy) {
				err = validate_nla(nla, maxtype, policy);
				if (err < 0)
					goto errout;
			}

			tb[type] = (struct nlattr *)nla;
		}
	}

	return 0;

errout:
	return err;
}
```
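Once parsing has filled tb, a handler reads the typed payloads straight out of it; for a u8 attribute such as DCB_ATTR_STATE that is just:

```
	/* in a SET handler: the policy above already validated this as NLA_U8 */
	if (tb[DCB_ATTR_STATE])
		value = nla_get_u8(tb[DCB_ATTR_STATE]);
```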
The dcbmsg structure is defined in dcbnl.h:

```
struct dcbmsg {
	__u8  dcb_family;
	__u8  cmd;
	__u16 dcb_pad;
};
```
The supported commands are:

```
/**
 * enum dcbnl_commands - supported DCB commands
 *
 * @DCB_CMD_UNDEFINED: unspecified command to catch errors
 * @DCB_CMD_GSTATE: request the state of DCB in the device
 * @DCB_CMD_SSTATE: set the state of DCB in the device
 * @DCB_CMD_PGTX_GCFG: request the priority group configuration for Tx
 * @DCB_CMD_PGTX_SCFG: set the priority group configuration for Tx
 * @DCB_CMD_PGRX_GCFG: request the priority group configuration for Rx
 * @DCB_CMD_PGRX_SCFG: set the priority group configuration for Rx
 * @DCB_CMD_PFC_GCFG: request the priority flow control configuration
 * @DCB_CMD_PFC_SCFG: set the priority flow control configuration
 * @DCB_CMD_SET_ALL: apply all changes to the underlying device
 * @DCB_CMD_GPERM_HWADDR: get the permanent MAC address of the underlying
 *                        device. Only useful when using bonding.
 * @DCB_CMD_GCAP: request the DCB capabilities of the device
 * @DCB_CMD_GNUMTCS: get the number of traffic classes currently supported
 * @DCB_CMD_SNUMTCS: set the number of traffic classes
 * @DCB_CMD_GBCN: get backward congestion notification configuration
 * @DCB_CMD_SBCN: set backward congestion notification configuration
 * @DCB_CMD_GAPP: get application protocol configuration
 * @DCB_CMD_SAPP: set application protocol configuration
 * @DCB_CMD_IEEE_SET: set IEEE 802.1Qaz configuration
 * @DCB_CMD_IEEE_GET: get IEEE 802.1Qaz configuration
 * @DCB_CMD_GDCBX: get DCBX engine configuration
 * @DCB_CMD_SDCBX: set DCBX engine configuration
 * @DCB_CMD_GFEATCFG: get DCBX features flags
 * @DCB_CMD_SFEATCFG: set DCBX features negotiation flags
 * @DCB_CMD_CEE_GET: get CEE aggregated configuration
 * @DCB_CMD_IEEE_DEL: delete IEEE 802.1Qaz configuration
 */
```
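To see the whole message format in one place, here is a hedged user-space sketch that queries the DCB state of an interface the way lldpad would: over a NETLINK_ROUTE socket, with RTM_GETDCB, DCB_CMD_GSTATE and a DCB_ATTR_IFNAME attribute. Error handling is minimal and "eth0" is just an example name:

```
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/dcbnl.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct dcbmsg dcb;
		char attrbuf[64];	/* room for DCB_ATTR_IFNAME */
	} req;
	char buf[512], ifname[] = "eth0";
	struct rtattr *rta;
	struct nlmsghdr *nlh;
	int len, fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0)
		return 1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct dcbmsg));
	req.nlh.nlmsg_type = RTM_GETDCB;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.dcb.dcb_family = AF_UNSPEC;
	req.dcb.cmd = DCB_CMD_GSTATE;

	/* append DCB_ATTR_IFNAME so dcb_doit() can find the net_device */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nlh.nlmsg_len));
	rta->rta_type = DCB_ATTR_IFNAME;
	rta->rta_len = RTA_LENGTH(sizeof(ifname));
	memcpy(RTA_DATA(rta), ifname, sizeof(ifname));
	req.nlh.nlmsg_len = NLMSG_ALIGN(req.nlh.nlmsg_len) + rta->rta_len;

	if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0)
		return 1;

	/* the reply built by dcbnl_reply() carries DCB_ATTR_STATE (NLA_U8) */
	len = recv(fd, buf, sizeof(buf), 0);
	nlh = (struct nlmsghdr *)buf;
	if (len > 0 && nlh->nlmsg_type == RTM_GETDCB) {
		rta = (struct rtattr *)((char *)NLMSG_DATA(nlh) +
					NLMSG_ALIGN(sizeof(struct dcbmsg)));
		if (rta->rta_type == DCB_ATTR_STATE)
			printf("DCB state: %u\n",
			       *(unsigned char *)RTA_DATA(rta));
	}

	close(fd);
	return 0;
}
```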
A standard netlink reply looks like the following; the GET commands essentially all end up calling it:

```
static int dcbnl_reply(u8 value, u8 event, u8 cmd, u8 attr, u32 pid,
		       u32 seq, u16 flags)
{
	struct sk_buff *dcbnl_skb;
	struct dcbmsg *dcb;
	struct nlmsghdr *nlh;
	int ret = -EINVAL;

	dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
	if (!dcbnl_skb)
		return ret;

	nlh = NLMSG_NEW(dcbnl_skb, pid, seq, event, sizeof(*dcb), flags);

	dcb = NLMSG_DATA(nlh);
	dcb->dcb_family = AF_UNSPEC;
	dcb->cmd = cmd;
	dcb->dcb_pad = 0;

	ret = nla_put_u8(dcbnl_skb, attr, value);
	if (ret)
		goto err;

	/* end the message, assign the nlmsg_len. */
	nlmsg_end(dcbnl_skb, nlh);

	ret = rtnl_unicast(dcbnl_skb, &init_net, pid);
	...

err:
	kfree_skb(dcbnl_skb);
	return ret;
}
```
Drivers — ixgbe for the 82599, say, or a netfpga driver — define the function pointers in dcbnl_ops according to their own needs; the function bodies are implemented in the driver's dcb file (dcb.c, or something similarly named). For netfpga the point is chiefly the interaction with lldpad; the 82599 driver additionally programs the DCB configuration into hardware, which netfpga's hardware cannot do. The DCB module also interacts with the FCoE module, which appears to call functions of the DCB module directly (whether those functions in turn go through the dcbnl_ops callbacks is the interesting question; and the recurring question of app priorities is best studied together with lldpad).
setall commits the configuration: per the command list above, DCB_CMD_SET_ALL applies all accumulated changes to the underlying device. The other callbacks are the get/set handlers for PFC, the number of TCs, the app table, and so on. The hardware side of the configuration lives in adapter->dcb_cfg (a hardware-specific structure), which holds the NIC's DCB configuration.
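Conceptually, the commit path then looks like this — a hypothetical sketch loosely modeled on ixgbe; my_adapter, temp_dcb_cfg and my_dcb_hw_config() are made-up names:

```
/* Hypothetical setall callback: promote the staged DCB configuration to
 * adapter->dcb_cfg and program it into the NIC. */
static u8 my_dcbnl_set_all(struct net_device *netdev)
{
	struct my_adapter *adapter = netdev_priv(netdev);

	/* lldpad sent DCB_CMD_SET_ALL: commit the staged settings */
	memcpy(&adapter->dcb_cfg, &adapter->temp_dcb_cfg,
	       sizeof(adapter->dcb_cfg));

	/* write PG/PFC/app parameters into the hardware registers;
	 * nonzero return reports failure (the exact codes are driver-specific) */
	return my_dcb_hw_config(adapter) ? 1 : 0;
}
```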
References:
- the lldpad README
- the lldpad source code
- the ixgbe source code
- related kernel material