blk_plug in the Linux kernel

Analysis:

/*
* blk_plug permits building a queue of related requests by holding the I/O
* fragments for a short period. This allows merging of sequential requests
* into a single larger request. As the requests are moved from a per-task list to
* the device's request_queue in a batch, this results in improved scalability
* as the lock contention for request_queue lock is reduced.
*
* It is ok not to disable preemption when adding the request to the plug list
* or when attempting a merge, because blk_schedule_flush_list() will only flush
* the plug list when the task sleeps by itself. For details, please see
* schedule() where blk_schedule_flush_plug() is called.
*/
struct blk_plug {
     unsigned long magic; /* detect uninitialized use-cases */
     struct list_head list; /* requests */
     struct list_head cb_list; /* md requires an unplug callback */
     unsigned int should_sort; /* list to be sorted before flushing? */
};
#define BLK_MAX_REQUEST_COUNT 16

struct blk_plug_cb;
typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
struct blk_plug_cb {
     struct list_head list;
     blk_plug_cb_fn callback;
     void *data;
};
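
The cb_list field is what lets a driver such as md run its own callback when the plug is flushed. Below is a minimal sketch of registering such a callback through blk_check_plugged(); my_device, my_dispatch_now and my_unplug are made-up names for illustration, not from the post or the kernel:

#include <linux/blkdev.h>
#include <linux/slab.h>

struct my_device;                                    /* hypothetical driver context */
static void my_dispatch_now(struct my_device *dev);  /* hypothetical dispatch helper */

/*
 * Called by the block layer when the plug is flushed (blk_finish_plug() or
 * the task going to sleep).  The callback owns the blk_plug_cb allocated by
 * blk_check_plugged(), so it is responsible for freeing it, as md's
 * md_unplug() does.
 */
static void my_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
    struct my_device *dev = cb->data;

    kfree(cb);
    my_dispatch_now(dev);
}

/*
 * Submission path: hook a callback onto the current task's plug, if any.
 * blk_check_plugged() returns NULL when no plug is active (or the allocation
 * fails), in which case the driver dispatches immediately.
 */
static void my_queue_io(struct my_device *dev)
{
    if (!blk_check_plugged(my_unplug, dev, sizeof(struct blk_plug_cb)))
        my_dispatch_now(dev);
}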

blk_plug builds a queue that holds small, fragmented I/O requests for a short time so that sequential requests can be merged into a single larger request. The merged requests are then moved in a batch from the per-task list to the device's request queue, which reduces contention on the request_queue lock and thus improves efficiency.
Using blk_plug is simple:
1. blk_start_plug - enable request plugging (merging) for the current task
2. blk_finish_plug - end plugging for the task and flush the accumulated requests
How the requests are merged and how they are dispatched is handled entirely by the kernel; see the sketch below.
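
A minimal sketch of that pattern as it would look in kernel code (the bios array and count are assumed to be built by the caller; this is an illustration, not code from the post, and uses the two-argument submit_bio() of the 3.x kernels):

#include <linux/blkdev.h>
#include <linux/bio.h>

static void submit_read_batch(struct bio **bios, int nr)
{
    struct blk_plug plug;              /* lives on the stack of the submitting task */
    int i;

    blk_start_plug(&plug);             /* current->plug = &plug: start collecting */
    for (i = 0; i < nr; i++)
        submit_bio(READ, bios[i]);     /* held and merged on the per-task plug list */
    blk_finish_plug(&plug);            /* flush the merged requests to the device */
}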

So when is blk_plug useful? Since it exists specifically to optimize request merging, it is best suited to sequential small-block requests.
Below is the result of a test:
Test environment:
SATA controller: Intel 82801JI
OS: Linux 3.6, Red Hat
RAID5: 4 x ST31000524NS disks
Without blk_plug:
Total (8,16):
 Reads Queued:      309811,     1239MiB  Writes Queued:           0,        0KiB
 Read Dispatches:   283583,     1189MiB  Write Dispatches:        0,        0KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:   273351,     1149MiB  Writes Completed:        0,        0KiB
 Read Merges:        23533,    94132KiB  Write Merges:            0,        0KiB
 IO unplugs:             0               Timer unplugs:           0

With blk_plug:
Total (8,16):
 Reads Queued:      428697,     1714MiB  Writes Queued:           0,        0KiB
 Read Dispatches:     3954,     1714MiB  Write Dispatches:        0,        0KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:     3956,     1715MiB  Writes Completed:        0,        0KiB
 Read Merges:       424743,     1698MiB  Write Merges:            0,        0KiB
 IO unplugs:             0               Timer unplugs:        3384
 
    
As the counters show, reads were overwhelmingly merged before being dispatched: Read Merges rose from 23,533 to 424,743 while Read Dispatches dropped from 283,583 to 3,954.
Notes on the other blk_plug fields:
magic: used to detect an uninitialized or invalid blk_plug
list: the list on which plugged requests are held
cb_list: list of callbacks that are invoked when the plug is flushed and the requests are dispatched
should_sort: whether the request list should be sorted before it is flushed
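
As the header comment quoted above notes, the plug list is also flushed implicitly when the plugging task goes to sleep. The hook looks roughly like this (a simplified paraphrase of the 3.x scheduler code, not a verbatim excerpt):

/* Before a voluntary context switch, schedule() submits any plugged I/O so
 * that other tasks are not left waiting on requests this task still holds. */
static inline void sched_submit_work(struct task_struct *tsk)
{
    if (!tsk->state || tsk_is_pi_blocked(tsk))
        return;
    /* going to sleep with plugged I/O queued: flush it to avoid deadlocks */
    if (blk_needs_flush_plug(tsk))
        blk_schedule_flush_plug(tsk);  /* flushes tsk->plug with from_schedule = true */
}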
 
  
