Analysis of the Linux NAPI Mechanism

1. Overview

Before NAPI, the NIC raised an interrupt for every received packet to tell the CPU to read it. Under heavy traffic the sheer number of interrupts inevitably hurts CPU performance, so Linux introduced NAPI: once an interrupt fires, the driver switches the NIC into poll mode, and after polling has drained the pending packets it switches back to interrupt mode. This article analyzes how NAPI is implemented in Linux.

The main NAPI flow is shown in the figure below. When the physical NIC receives packets it raises an IRQ to notify the CPU (once the interrupt fires it is disabled by default). The interrupt top half adds the device's napi->poll_list to softnet_data->poll_list and raises the RX softirq; the softirq handler then calls the device's own poll function (ixgbe_poll) via napi_poll.

[Figure 1: NAPI receive flow]

In NAPI mode the system grants a quota both to the softirq run and to each NAPI instance (the softirq quota is netdev_budget, default 300, shared by all NAPIs; each NAPI has its own weight, n->weight, which is 64 for ixgbe, and the per-CPU backlog NAPI uses weight_p, which also defaults to 64). Within one poll round, ixgbe_poll consumes one unit of quota per received packet. If ixgbe_poll uses up the NAPI's entire weight, the NIC still has plenty of packets pending, so another poll round is needed; the quota consumed by each napi_poll call is accumulated, and once it exceeds the softirq quota the softirq run ends. If ixgbe_poll consumes less than the NAPI's weight, traffic is light, so the queue interrupt is re-enabled and the device returns to interrupt mode. A minimal driver-side sketch of this pattern follows.
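
To make the pattern concrete, here is a minimal, hypothetical driver sketch. The mydrv_* helpers and struct mydrv_priv are made up for illustration; only the NAPI calls (napi_schedule, napi_gro_receive, napi_complete_done, netif_napi_add) are real kernel API.

#include <linux/netdevice.h>
#include <linux/interrupt.h>

struct mydrv_priv {
	struct napi_struct napi;
	struct net_device *netdev;
	/* ... hardware state ... */
};

/* hypothetical hardware helpers -- not real kernel API */
static void mydrv_disable_rx_irq(struct mydrv_priv *priv);
static void mydrv_enable_rx_irq(struct mydrv_priv *priv);
static struct sk_buff *mydrv_fetch_rx_skb(struct mydrv_priv *priv);

/* Top half: mask the RX interrupt and hand the work to NAPI. */
static irqreturn_t mydrv_rx_irq(int irq, void *data)
{
	struct mydrv_priv *priv = data;

	mydrv_disable_rx_irq(priv);
	napi_schedule(&priv->napi);	/* queue the NAPI and raise NET_RX_SOFTIRQ */
	return IRQ_HANDLED;
}

/* Bottom half: poll up to 'budget' packets from the RX ring. */
static int mydrv_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_priv *priv = container_of(napi, struct mydrv_priv, napi);
	int work_done = 0;

	while (work_done < budget) {
		struct sk_buff *skb = mydrv_fetch_rx_skb(priv);

		if (!skb)
			break;
		napi_gro_receive(napi, skb);	/* hand the packet to the stack */
		work_done++;
	}

	/* Quota not exhausted: the ring is drained, go back to interrupt mode. */
	if (work_done < budget) {
		napi_complete_done(napi, work_done);
		mydrv_enable_rx_irq(priv);
	}
	return work_done;
}

/* During probe, register the poll function with a weight of 64: */
/* netif_napi_add(netdev, &priv->napi, mydrv_poll, 64); */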

2. Detailed flow analysis

ixgbe_msix_clean_rings

The driver registers ixgbe_msix_clean_rings as the entry point of its MSI-X interrupt handler; when the NIC raises the IRQ, execution enters ixgbe_msix_clean_rings:

static irqreturn_t ixgbe_msix_clean_rings(int irq, void *data)
{
	struct ixgbe_q_vector *q_vector = data;

	/* EIAM disabled interrupts (on this vector) for us */

	if (q_vector->rx.ring || q_vector->tx.ring)
		napi_schedule_irqoff(&q_vector->napi);

	return IRQ_HANDLED;
}
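
For context, this handler is registered once per MSI-X vector during interrupt setup. A simplified sketch of that registration, roughly following the ixgbe_request_msix_irqs() path (surrounding loop, naming and error handling omitted, and details vary by kernel version):

	/* inside ixgbe_request_msix_irqs(), one request per queue vector (abridged) */
	struct msix_entry *entry = &adapter->msix_entries[vector];

	err = request_irq(entry->vector, ixgbe_msix_clean_rings, 0,
			  q_vector->name, q_vector);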

The interrupt handler ultimately calls napi_schedule_irqoff, which boils down to ____napi_schedule: it adds napi->poll_list to sd->poll_list and then raises the RX softirq.

static inline void ____napi_schedule(struct softnet_data *sd,
				     struct napi_struct *napi)
{
	list_add_tail(&napi->poll_list, &sd->poll_list);
	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
}
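
For reference, the path from napi_schedule_irqoff down to ____napi_schedule is a thin wrapper chain; in kernels of this vintage it looks roughly like this, its only extra job being to claim the NAPI_STATE_SCHED bit before queuing the NAPI:

/* include/linux/netdevice.h (roughly, kernels of this era) */
static inline bool napi_schedule_prep(struct napi_struct *n)
{
	return !napi_disable_pending(n) &&
		!test_and_set_bit(NAPI_STATE_SCHED, &n->state);
}

static inline void napi_schedule_irqoff(struct napi_struct *n)
{
	if (napi_schedule_prep(n))
		__napi_schedule_irqoff(n);
}

/* net/core/dev.c */
void __napi_schedule_irqoff(struct napi_struct *n)
{
	____napi_schedule(this_cpu_ptr(&softnet_data), n);
}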

net_rx_action

After the interrupt flow raises the softirq, the top half finishes and processing continues in the bottom half. The RX softirq handler is net_rx_action. It first takes a quota for this softirq run (netdev_budget, default 300), then calls napi_poll; the quota consumed by each napi_poll call accumulates, and if it exceeds netdev_budget or the run has lasted more than 2 ticks, net_rx_action exits. Before exiting it splices the unfinished NAPIs back onto sd->poll_list to wait for the next scheduling round.

static void net_rx_action(struct softirq_action *h)
{
	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
	unsigned long time_limit = jiffies + 2;
	//quota for one softirq run
	int budget = netdev_budget;
	LIST_HEAD(list);
	LIST_HEAD(repoll);

	local_irq_disable();
	list_splice_init(&sd->poll_list, &list);
	local_irq_enable();

	for (;;) {
		struct napi_struct *n;

		if (list_empty(&list)) {
			if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
				return;
			break;
		}

		n = list_first_entry(&list, struct napi_struct, poll_list);
		budget -= napi_poll(n, &repoll);

		/* If softirq window is exhausted then punt.
		 * Allow this to run for 2 jiffies since which will allow
		 * an average latency of 1.5/HZ.
		 */
		//if the softirq quota is used up, or polling has run for more than 2 ticks, exit the softirq handler
		if (unlikely(budget <= 0 ||
			     time_after_eq(jiffies, time_limit))) {
			sd->time_squeeze++;
			break;
		}
	}

	__kfree_skb_flush();
	local_irq_disable();

	//splice the unfinished NAPIs (and the repoll list) back onto the head of sd->poll_list, to be polled by the next softirq run
	list_splice_tail_init(&sd->poll_list, &list);
	list_splice_tail(&repoll, &list);
	list_splice(&list, &sd->poll_list);
	if (!list_empty(&sd->poll_list))
		//if poll_list is not empty, raise the softirq again
		__raise_softirq_irqoff(NET_RX_SOFTIRQ);

	net_rps_action_and_irq_enable(sd);
}

napi_poll

napi_poll mainly calls the device's own poll function, e.g. ixgbe_poll. Each napi_poll call also has its own quota, the NAPI weight (64 for ixgbe); ixgbe_poll returns how much of that quota this call consumed. At the start of napi_poll the napi->poll_list entry is removed from the list, and the quota reported back by ixgbe_poll then decides whether the NAPI is put back onto the repoll list.
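
The poll callback and the weight used here are set when the driver registers its NAPI instance; in ixgbe this happens in the queue-vector allocation path with a call of roughly the following form:

	/* in ixgbe_alloc_q_vector() (abridged): ixgbe_poll becomes n->poll,
	 * and 64 becomes n->weight, the per-poll quota used by napi_poll() */
	netif_napi_add(adapter->netdev, &q_vector->napi, ixgbe_poll, 64);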

If ixgbe_poll did not use its whole quota (in that case ixgbe_poll has already delivered everything it polled to the protocol stack and switched back to interrupt mode), the NAPI does not need to be added to repoll. If the quota was fully used, the NIC still has packets to process: with GRO enabled, napi_poll first flushes any gro_skb whose age exceeds one tick up to the protocol stack, then puts the NAPI back onto the repoll list. Back in net_rx_action, the repoll list is merged back into sd->poll_list, and before returning net_rx_action checks whether sd->poll_list is empty; if not, it raises the RX softirq again.

static int napi_poll(struct napi_struct *n, struct list_head *repoll)
{
	void *have;
	int work, weight;

	//remove napi->poll_list from the list first
	list_del_init(&n->poll_list);

	have = netpoll_poll_lock(n);

	//quota for one napi poll
	weight = n->weight;

	/* This NAPI_STATE_SCHED test is for avoiding a race
	 * with netpoll's poll_napi().  Only the entity which
	 * obtains the lock and sees NAPI_STATE_SCHED set will
	 * actually make the ->poll() call.  Therefore we avoid
	 * accidentally calling ->poll() when NAPI is not scheduled.
	 */
	work = 0;
	if (test_bit(NAPI_STATE_SCHED, &n->state)) {
		work = n->poll(n, weight);
		trace_napi_poll(n);
	}

	WARN_ON_ONCE(work > weight);

	//this poll did not use its whole quota: nothing left to poll, just return
	if (likely(work < weight))
		goto out_unlock;

	/* Drivers must not modify the NAPI state if they
	 * consume the entire weight.  In such cases this code
	 * still "owns" the NAPI instance and therefore can
	 * move the instance around on the list at-will.
	 */
	if (unlikely(napi_disable_pending(n))) {
		napi_complete(n);
		goto out_unlock;
	}

	//the whole quota was used; flush skbs on the GRO list whose age exceeds one tick up to the stack
	if (n->gro_list) {
		/* flush too old packets
		 * If HZ < 1000, flush all packets.
		 */
		napi_gro_flush(n, HZ >= 1000);
	}

	/* Some drivers may have called napi_schedule
	 * prior to exhausting their budget.
	 */
	if (unlikely(!list_empty(&n->poll_list))) {
		pr_warn_once("%s: Budget exhausted after napi rescheduled\n",
			     n->dev ? n->dev->name : "backlog");
		goto out_unlock;
	}

	//quota exhausted and more polling is needed: put napi->poll_list back onto the repoll list
	list_add_tail(&n->poll_list, repoll);

out_unlock:
	netpoll_poll_unlock(have);

	return work;
}

ixgbe_poll

ixgbe_poll splits the quota granted to this NAPI evenly across its RX queues, then polls each RX queue in turn; if any RX queue uses up its share of the quota, this poll round is marked as not yet complete:

int ixgbe_poll(struct napi_struct *napi, int budget)
{
	struct ixgbe_q_vector *q_vector =
				container_of(napi, struct ixgbe_q_vector, napi);
	struct ixgbe_adapter *adapter = q_vector->adapter;
	struct ixgbe_ring *ring;
	int per_ring_budget, work_done = 0;
	bool clean_complete = true;

#ifdef CONFIG_IXGBE_DCA
	if (adapter->flags & IXGBE_FLAG_DCA_ENABLED)
		ixgbe_update_dca(q_vector);
#endif

	ixgbe_for_each_ring(ring, q_vector->tx) {
		if (!ixgbe_clean_tx_irq(q_vector, ring, budget))
			clean_complete = false;
	}

	/* Exit if we are called by netpoll or busy polling is active */
	if ((budget <= 0) || !ixgbe_qv_lock_napi(q_vector))
		return budget;

	/* attempt to distribute budget to each queue fairly, but don't allow
	 * the budget to go below 1 because we'll exit polling */
	//split the quota evenly across the rx queues
	if (q_vector->rx.count > 1)
		per_ring_budget = max(budget/q_vector->rx.count, 1);
	else
		per_ring_budget = budget;

	ixgbe_for_each_ring(ring, q_vector->rx) {
		int cleaned = ixgbe_clean_rx_irq(q_vector, ring,
						 per_ring_budget);

		work_done += cleaned;
		//if this ring used up its share of the quota, mark clean_complete as false
		if (cleaned >= per_ring_budget)
			clean_complete = false;
	}

	ixgbe_qv_unlock_napi(q_vector);
	/* If all work not completed, return budget and keep polling */
	
	//if any rx queue used up the share of the quota the napi gave it, there are still packets to
	//process, so cleaning is not finished; return to napi_poll, which first flushes gro_list skbs
	//older than one tick up to the stack (to bound their latency) and puts napi->poll_list back
	//onto the repoll list; before the softirq handler exits it splices repoll back into
	//sd->poll_list and raises the softirq again
	if (!clean_complete)
		return budget;

	//no rx queue used up its quota, so there are no more packets to process; flush all gro_list skbs up to the stack
	/* all work done, exit the polling mode */
	napi_complete_done(napi, work_done);
	if (adapter->rx_itr_setting & 1)
		ixgbe_set_itr(q_vector);
	if (!test_bit(__IXGBE_DOWN, &adapter->state))
		//re-enable the rx queue interrupts
		ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));

	return min(work_done, budget - 1);
}

If none of the queues used up their quota, execution enters napi_complete_done: with GRO enabled, all pending gro_skbs are flushed up to the protocol stack, and afterwards ixgbe_irq_enable_queues re-enables the RX queue interrupts, returning to interrupt-driven receive.

void napi_complete_done(struct napi_struct *n, int work_done)
{
	unsigned long flags;

	/*
	 * don't let napi dequeue from the cpu poll list
	 * just in case its running on a different cpu
	 */
	if (unlikely(test_bit(NAPI_STATE_NPSVC, &n->state)))
		return;

	if (n->gro_list) {
		unsigned long timeout = 0;

		if (work_done)
			timeout = n->dev->gro_flush_timeout;

		//timeout defaults to 0, so all gro skbs are flushed up to the stack here
		if (timeout && NAPI_STRUCT_HAS(n, timer))
			hrtimer_start(&n->timer, ns_to_ktime(timeout),
				      HRTIMER_MODE_REL_PINNED);
		else
			napi_gro_flush(n, false);
	}
	if (likely(list_empty(&n->poll_list))) {
		WARN_ON_ONCE(!test_and_clear_bit(NAPI_STATE_SCHED, &n->state));
	} else {
		/* If n->poll_list is not empty, we need to mask irqs */
		local_irq_save(flags);
		//remove napi->poll_list from sd->poll_list and clear the NAPI's SCHED state
		__napi_complete(n);
		local_irq_restore(flags);
	}
}
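
For reference, __napi_complete in kernels of this era is a small helper that does just what the comment above says (slightly abridged):

/* net/core/dev.c */
void __napi_complete(struct napi_struct *n)
{
	BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));

	/* drop the NAPI from the per-cpu poll list and clear SCHED */
	list_del_init(&n->poll_list);
	smp_mb__before_atomic();
	clear_bit(NAPI_STATE_SCHED, &n->state);
}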

3. Open questions

When does receive processing switch from poll mode back to interrupt mode?

1. In poll mode, once all pending packets have been processed, the driver actively switches back to interrupt mode;

2. ????
