Linux kernel functions: blk_plug

Source: https://blog.csdn.net/liumangxiong/article/details/10279089

 

Usage:

/**
 * generic_writepages - walk the list of dirty pages of the given address space and writepage() all of them.
 * @mapping: address space structure to write
 * @wbc: subtract the number of written pages from *@wbc->nr_to_write
 *
 * This is a library function, which implements the writepages()
 * address_space_operation.
 *
 * Return: %0 on success, negative error code otherwise
 */
int generic_writepages(struct address_space *mapping,
		       struct writeback_control *wbc)
{
	struct blk_plug plug;
	int ret;

	/* deal with chardevs and other special file */
	if (!mapping->a_ops->writepage)
		return 0;

	blk_start_plug(&plug);
	ret = write_cache_pages(mapping, wbc, __writepage, mapping);
	blk_finish_plug(&plug);
	return ret;
}

blk_plug sets up a per-task queue that buffers small I/O requests so that adjacent, sequential requests can be merged into larger ones. blk_start_plug() records the plug in current->plug, so subsequent block-layer submissions from this task land on the plug's list instead of going straight to the device; when the plug is finished, the merged requests are moved from the per-task list to the device request queue in one batch, which reduces contention on the device queue lock and improves efficiency.

Using blk_plug is simple (a minimal sketch follows the list):

1. blk_start_plug — enable request plugging (merging) for the current task.

2. blk_finish_plug — disable plugging and flush the buffered requests.

How the requests are merged and when they are dispatched is handled entirely by the kernel.
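
As an illustration, here is a minimal sketch (not code from the post) of the same pattern generic_writepages() uses above; my_issue_requests() is a hypothetical stand-in for whatever submits the individual bios, e.g. a loop of submit_bio() calls:

#include <linux/blkdev.h>

static void my_plugged_submit(void)
{
	struct blk_plug plug;

	/* Requests this task submits now collect on a per-task plug list. */
	blk_start_plug(&plug);

	my_issue_requests();	/* hypothetical: many small, contiguous bios */

	/* Flush: merged requests move to the device queue in one batch. */
	blk_finish_plug(&plug);
}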

 

So what workloads does blk_plug suit? Since it exists specifically to optimize request merging, it is best suited to streams of small, contiguous requests.

Below are the results of one test, shown as blkparse-style per-device summaries for device (8,16):

Test environment:

SATA controller: Intel 82801JI

OS: Linux 3.6, Red Hat

RAID5: four ST31000524NS drives

Without blk_plug:

Total (8,16):
 Reads Queued:      309811,   1239MiB    Writes Queued:          0,     0KiB
 Read Dispatches:   283583,   1189MiB    Write Dispatches:       0,     0KiB
 Reads Requeued:         0               Writes Requeued:        0
 Reads Completed:   273351,   1149MiB    Writes Completed:       0,     0KiB
 Read Merges:        23533,  94132KiB    Write Merges:           0,     0KiB
 IO unplugs:             0               Timer unplugs:          0

 

With blk_plug:

Total (8,16):
 Reads Queued:      428697,   1714MiB    Writes Queued:          0,     0KiB
 Read Dispatches:     3954,   1714MiB    Write Dispatches:       0,     0KiB
 Reads Requeued:         0               Writes Requeued:        0
 Reads Completed:     3956,   1715MiB    Writes Completed:       0,     0KiB
 Read Merges:       424743,   1698MiB    Write Merges:           0,     0KiB
 IO unplugs:             0               Timer unplugs:       3384

The effect is clear: reads were merged heavily before being dispatched. 424743 of the 428697 queued reads were merged, so only 3954 dispatches reached the device, versus 283583 dispatches without plugging.

The other fields of struct blk_plug (the struct layout of that era is sketched below):

magic: used to check whether the blk_plug is valid (initialized)

list: the list on which requests are buffered

cb_list: a list of callbacks invoked when the requests are dispatched

should_sort: whether to sort the requests before dispatching them
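
For reference, this is the struct blk_plug definition those fields come from in kernels of that era (around Linux 3.6, include/linux/blkdev.h); later kernels reworked this layout:

struct blk_plug {
	unsigned long magic;		/* detect uninitialized use-cases */
	struct list_head list;		/* requests */
	struct list_head cb_list;	/* md requires an unplug callback */
	unsigned int should_sort;	/* list to be sorted before flushing? */
};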

 
