1. Introduction to GCD
1.1 GCD
GCD (Grand Central Dispatch) boils down to adding tasks to queues and specifying the function that executes those tasks.
GCD is implemented in pure C and provides a very powerful set of functions. Its advantages:
- It is Apple's solution for parallel computing on multi-core hardware.
- It automatically makes use of the available CPU cores (dual-core, quad-core, and so on).
- It automatically manages the thread lifecycle (creating threads, scheduling tasks, destroying threads).
- The programmer only tells GCD what tasks to run and never writes any thread-management code.
A minimal example:
dispatch_block_t block = ^{
    NSLog(@"Hello GCD");
};
// serial queue
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", NULL);
// the function that submits the task to the queue
dispatch_async(queue, block);
- The task is wrapped in a block; the block itself is the task. It takes no parameters and returns nothing.
- queue is the serial queue we created.
- The dispatch function is what associates the task with the queue.
1.2 The Role of GCD
2. Functions and Queues
2.1 Synchronous vs. Asynchronous
- dispatch_async — the function for executing a task asynchronously.
  - It does not wait for the submitted block to finish; the next statement runs immediately.
  - It can spin up a thread to execute the block.
  - "Asynchronous" is effectively a synonym for multithreading here.
- dispatch_sync — the synchronous function.
  - It waits for the submitted block to finish before the next statement runs.
  - It does not spin up a thread; the block runs on the current thread.
  - The block is executed inside the dispatch_sync call itself.
Consider the following example:
- (void)test {
CFAbsoluteTime time = CFAbsoluteTimeGetCurrent();
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
NSLog(@"1: %f",CFAbsoluteTimeGetCurrent() - time);
method();
});
dispatch_sync(queue, ^{
NSLog(@"2: %f",CFAbsoluteTimeGetCurrent() - time);
method();
});
method();
NSLog(@"3: %f",CFAbsoluteTimeGetCurrent() - time);
}
void method() {
sleep(3);
}
Output:
1: 0.000055
2: 3.000264
3: 9.001459
This shows the difference: dispatch_async returns immediately (1 prints at ~0s), while dispatch_sync must first wait for the async block ahead of it on the serial queue (~3s) and then blocks the caller until its own block finishes (~6s); the final method() call on the current thread adds another 3s, so 3 prints at ~9s.
2.2 Serial Queues vs. Concurrent Queues
- Queue: a data structure that follows the FIFO principle.
- Serial queue: only one task can be dequeued at a time (DQF_WIDTH = 1); tasks line up and run in order, so task 1 finishes before task 2 starts.
- Concurrent queue: several tasks can be dequeued at a time (dequeuing several is not the same as executing several — a queue cannot execute tasks, only threads can). Task 1 being dequeued first does not guarantee it finishes before task 2; that depends on how the thread pool schedules them.
⚠️ Queues and threads are unrelated concepts: queues store tasks, threads execute them. A quick demo of the ordering difference follows below.
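A minimal sketch to see the difference (the queue labels are made up for illustration): the serial queue always logs its tasks in submission order, while the concurrent queue's order depends on how the thread pool schedules them.

dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

for (int i = 1; i <= 3; i++) {
    dispatch_async(serial, ^{ NSLog(@"serial %d", i); });         // always 1, 2, 3
}
for (int i = 1; i <= 3; i++) {
    dispatch_async(concurrent, ^{ NSLog(@"concurrent %d", i); }); // order not guaranteed
}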
2.2.1 Case 1
Given the following code:
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1");
dispatch_async(queue, ^{
NSLog(@"2");
dispatch_sync(queue, ^{
NSLog(@"3");
});
NSLog(@"4");
});
NSLog(@"5");
Output: 1 5 2 3 4.
Analysis: queue is a concurrent queue and dispatch_async is asynchronous, so 1 and 5 print first. Inside the async block, 2 prints; then, because dispatch_sync is synchronous, 3 must finish before 4 can run. Hence the output 1 5 2 3 4.
2.2.2 Case 2
Change DISPATCH_QUEUE_CONCURRENT in the example above to NULL (i.e. DISPATCH_QUEUE_SERIAL):
This time a deadlock occurs when execution reaches dispatch_sync.
dispatch_sync blocks everything that follows its block — here the NSLog(@"4") statement — until the block has run. Consider what is sitting in queue at that moment (for simplicity the outer async block is not drawn as a separate task):
Because queue is a serial (FIFO) queue, the dispatch_sync call must wait for task 3 to execute. But task 3 was enqueued behind the outer block that is already running, so it can only start once that outer block finishes — and the outer block cannot finish, because it is stuck inside dispatch_sync waiting for task 3 (task 4 never even becomes reachable). This circular wait is the deadlock. If the queue is made concurrent (3 and 4 no longer have to wait on each other) or if task 3 is dispatched asynchronously, the deadlock disappears.
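For reference, a minimal sketch of the two fixes just mentioned (queue labels are illustrative): keep the serial queue but dispatch the inner block asynchronously, or keep the inner dispatch_sync but use a concurrent queue.

// Fix 1: serial queue, inner dispatch is async — nothing waits on anything, no deadlock (order: 2, 4, 3).
dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serial, ^{
    NSLog(@"2");
    dispatch_async(serial, ^{ NSLog(@"3"); }); // enqueued; runs after the outer block finishes
    NSLog(@"4");
});

// Fix 2: concurrent queue, inner dispatch_sync is fine — 3 does not have to wait for the outer block.
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrent, ^{
    NSLog(@"2");
    dispatch_sync(concurrent, ^{ NSLog(@"3"); });
    NSLog(@"4");
});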
2.2.3 Case 3
Continuing to modify the code, delete task 4:
- (void)test{
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_SERIAL);
NSLog(@"1");
dispatch_async(queue, ^{
NSLog(@"2");
dispatch_sync(queue, ^{
NSLog(@"3");
});
});
NSLog(@"5");
}
A deadlock still occurs. From queue's point of view there are two block tasks plus task 2's log: the inner dispatch_sync (block 2) prevents the outer async block (block 1) from finishing; block 2 cannot return until task 3 runs; and task 3, queued behind block 1 on the serial queue, cannot run until block 1 finishes. Again a circular wait, hence a deadlock.
2.2.4 Case 4
- (void)test {
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
NSLog(@"1");
});
dispatch_async(queue, ^{
NSLog(@"2");
});
dispatch_sync(queue, ^{ NSLog(@"3"); });
NSLog(@"0");
dispatch_async(queue, ^{
NSLog(@"7");
});
dispatch_async(queue, ^{
NSLog(@"8");
});
dispatch_async(queue, ^{
NSLog(@"9");
});
}
Output options:
A: 1230789
B: 1237890
C: 3120798
D: 2137890
queue is a concurrent queue, but task 3 is dispatched synchronously, so everything after it is blocked until it finishes: task 0 must come after task 3, and tasks 7, 8, 9 must come after task 0. That gives 1, 2, 3 — 0 — 7, 8, 9. Because the queue is concurrent, 1, 2, 3 are unordered among themselves, as are 7, 8, 9. So both A and C are possible.
Now add some time-consuming work inside the dispatch_async blocks:
- (void)test {
dispatch_queue_t queue = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
sleep(8);
NSLog(@"1");
});
dispatch_async(queue, ^{
sleep(7);
NSLog(@"2");
});
dispatch_sync(queue, ^{
sleep(1);
NSLog(@"3");
});
NSLog(@"0");
dispatch_async(queue, ^{
sleep(3);
NSLog(@"7");
});
dispatch_async(queue, ^{
sleep(2);
NSLog(@"8");
});
dispatch_async(queue, ^{
sleep(1);
NSLog(@"9");
});
}
With long-running work in the tasks, 1 and 2 are no longer guaranteed to print before 0. The key point is that dispatch_sync only guarantees that task 3 runs before the caller continues; it guarantees nothing about 1 and 2. Task 0 still comes before 7, 8, 9.
So the only guarantees are: 3 before 0, and 0 before 7, 8, 9 (1, 2, 7, 8, 9 are mutually unordered, and 3 is unordered relative to 1 and 2). With the sleeps above, this code prints 3 0 9 8 7 2 1.
If queue is changed to a serial queue (DISPATCH_QUEUE_SERIAL), then 1, 2, 3 and 7, 8, 9 run strictly in order, the output is 1230789, and the answer is A.
Summary:
- Synchronous function + serial queue:
  - Does not spawn a thread; tasks run on the current thread.
  - Tasks run serially, one after another.
  - Can block (and, as shown above, can deadlock).
- Synchronous function + concurrent queue:
  - Does not spawn a thread; tasks run on the current thread.
  - Tasks run one after another.
- Asynchronous function + serial queue:
  - Spawns a thread — exactly one new thread.
  - Tasks run one after another.
- Asynchronous function + concurrent queue:
  - Spawns threads and runs the tasks on them.
  - Tasks run asynchronously with no fixed order; it depends on CPU scheduling.
(A quick way to observe the threading behavior is sketched below.)
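An illustrative way to see the "spawns a thread / stays on the current thread" distinction is simply to log the current thread:

dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

NSLog(@"caller thread: %@", [NSThread currentThread]);

dispatch_sync(serial, ^{
    // sync: runs on the caller's thread
    NSLog(@"sync  thread: %@", [NSThread currentThread]);
});

dispatch_async(serial, ^{
    // async: runs on a newly spawned thread
    NSLog(@"async thread: %@", [NSThread currentThread]);
});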
3. Main Queue and Global Concurrent Queue
Queues are generally created through GCD in one of the following 4 ways:
- (void)test {
//serial queue
dispatch_queue_t serial = dispatch_queue_create("com.HotpotCat.cat", DISPATCH_QUEUE_SERIAL);
//concurrent queue
dispatch_queue_t concurrent = dispatch_queue_create("com.HotpotCat.zai", DISPATCH_QUEUE_CONCURRENT);
//main queue
dispatch_queue_t mainQueue = dispatch_get_main_queue();
//global queue
dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
NSLog(@"serial:%@\nconcurrent:%@\nmainQueue:%@\nglobalQueue:%@",serial,concurrent,mainQueue,globalQueue);
}
3.1 The Main Queue
dispatch_get_main_queue returns a queue with the following characteristics:
- It is a special serial queue, bound to the UI (main) thread.
- It is created automatically before main() runs.
3.1.1 dispatch_get_main_queue Source Analysis
So when exactly is the main queue created? Setting a breakpoint inside a block dispatched to the main queue and running bt shows the call originates in libdispatch:
Download the latest libdispatch source (libdispatch-1271.120.2 at the time of writing) from Apple's open source site.
The libdispatch source has few comments and a great many macros.
dispatch_get_main_queue
dispatch_get_main_queue is defined as follows:
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
//dispatch_queue_main_t is the type; the actual object is _dispatch_main_q
return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}
dispatch_get_main_queue returns DISPATCH_GLOBAL_OBJECT, passing dispatch_queue_main_t and _dispatch_main_q as arguments.
The DISPATCH_GLOBAL_OBJECT macro is defined as:
#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))
So dispatch_queue_main_t is just the type; the real object is object, i.e. _dispatch_main_q.
_dispatch_main_q
Searching for a function named _dispatch_main_q finds nothing — it is a global variable, and its definition can be located directly through its initialization (or by searching for the label com.apple.main-thread):
struct dispatch_queue_static_s _dispatch_main_q = {
DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
.do_targetq = _dispatch_get_default_queue(true),
#endif
.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
DISPATCH_QUEUE_ROLE_BASE_ANON,
.dq_label = "com.apple.main-thread",
//DQF_WIDTH(1): serial queue
.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
.dq_serialnum = 1,
};
- DISPATCH_GLOBAL_OBJECT_HEADER(queue_main) is passed queue_main.
- DQF_WIDTH is what distinguishes serial from concurrent queues — not dq_serialnum.
- The returned type is dispatch_queue_main_t, although client code always receives it as dispatch_queue_t. (A quick way to see the label from client code is sketched below.)
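As a small illustration from the client side, the dq_label above is exactly what dispatch_queue_get_label reports (the global-queue label in the comment is just what typically prints; treat it as an example):

NSLog(@"%s", dispatch_queue_get_label(dispatch_get_main_queue())); // com.apple.main-thread
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // DISPATCH_CURRENT_QUEUE_LABEL asks for the label of the queue the block is running on
    NSLog(@"%s", dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)); // e.g. com.apple.root.default-qos
});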
dispatch_queue_static_s is defined as follows:
typedef struct dispatch_lane_s {
DISPATCH_LANE_CLASS_HEADER(lane);
/* 32bit hole on LP64 */
} DISPATCH_ATOMIC64_ALIGN *dispatch_lane_t;
// Cache aligned type for static queues (main queue, manager)
struct dispatch_queue_static_s {
struct dispatch_lane_s _as_dl[0]; \
DISPATCH_LANE_CLASS_HEADER(lane);
} DISPATCH_CACHELINE_ALIGN;
Internally it is effectively a dispatch_lane_s.
3.2 The Global Queue
dispatch_get_global_queue is implemented as follows:
dispatch_queue_global_t
dispatch_get_global_queue(intptr_t priority, uintptr_t flags)
{
dispatch_assert(countof(_dispatch_root_queues) ==
DISPATCH_ROOT_QUEUE_COUNT);
//any flag other than DISPATCH_QUEUE_OVERCOMMIT is rejected as bad input
if (flags & ~(unsigned long)DISPATCH_QUEUE_OVERCOMMIT) {
return DISPATCH_BAD_INPUT;
}
//map the priority to a qos value
dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
#if !HAVE_PTHREAD_WORKQUEUE_QOS
if (qos == QOS_CLASS_MAINTENANCE) {
qos = DISPATCH_QOS_BACKGROUND;
} else if (qos == QOS_CLASS_USER_INTERACTIVE) {
qos = DISPATCH_QOS_USER_INITIATED;
}
#endif
if (qos == DISPATCH_QOS_UNSPECIFIED) {
return DISPATCH_BAD_INPUT;
}
//hand off to _dispatch_get_root_queue
return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
}
- Returns a dispatch_queue_global_t.
- Obtains the queue via _dispatch_get_root_queue.
_dispatch_get_root_queue implementation:
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
}
return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}
- The qos is validated first.
- The queue is then fetched from the _dispatch_root_queues array. For example, the default qos (4) without overcommit maps to index 2 * (4 - 1) + 0 = 6, the com.apple.root.default-qos entry.
The _dispatch_root_queues array:
//a static array of global variables — the root queues exist up front and are simply looked up
struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
[_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
.dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
.do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
.dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
.dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
_dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
_dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
__VA_ARGS__ \
}
_DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
.dq_label = "com.apple.root.maintenance-qos",
.dq_serialnum = 4,
),
_DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.maintenance-qos.overcommit",
.dq_serialnum = 5,
),
_DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
.dq_label = "com.apple.root.background-qos",
.dq_serialnum = 6,
),
_DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.background-qos.overcommit",
.dq_serialnum = 7,
),
_DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
.dq_label = "com.apple.root.utility-qos",
.dq_serialnum = 8,
),
_DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.utility-qos.overcommit",
.dq_serialnum = 9,
),
_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
.dq_label = "com.apple.root.default-qos",
.dq_serialnum = 10,
),
_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.default-qos.overcommit",
.dq_serialnum = 11,
),
_DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
.dq_label = "com.apple.root.user-initiated-qos",
.dq_serialnum = 12,
),
_DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.user-initiated-qos.overcommit",
.dq_serialnum = 13,
),
_DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
.dq_label = "com.apple.root.user-interactive-qos",
.dq_serialnum = 14,
),
_DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
.dq_label = "com.apple.root.user-interactive-qos.overcommit",
.dq_serialnum = 15,
),
};
- The queue is fetched from the array by index.
- _dispatch_root_queues is a static array of global variables, so the global queues always exist and are simply looked up.
- Client code receives them as dispatch_queue_t.
- DISPATCH_GLOBAL_OBJECT_HEADER(queue_global) is passed queue_global.
Summary:
The main queue (dispatch_get_main_queue()):
- A serial queue dedicated to scheduling tasks onto the main thread.
- Does not spawn threads.
- If the main thread is currently busy executing something, nothing added to the main queue gets scheduled until it is free.
The global concurrent queue:
- For convenience, GCD provides the global queue dispatch_get_global_queue(0, 0).
- The global queue is a concurrent queue.
- If you have no special requirements for a queue, the global queue is the natural choice for running asynchronous tasks.
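A small sketch of the first argument: it can also be a QoS class, and dispatch_get_global_queue(0, 0) lands on the default-qos root queue discussed above.

dispatch_queue_t defaultQueue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);       // same root queue as (0, 0)
dispatch_queue_t backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);

dispatch_async(backgroundQueue, ^{
    // long-running, low-priority work — com.apple.root.background-qos
});
dispatch_async(defaultQueue, ^{
    // normal-priority work — com.apple.root.default-qos
});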
4. dispatch_queue_create
Ordinary serial and concurrent queues are created with dispatch_queue_create, implemented as follows:
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
return _dispatch_lane_create_with_target(label, attr,
DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
It returns a dispatch_queue_t and simply forwards to _dispatch_lane_create_with_target.
4.1 _dispatch_lane_create_with_target
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
//attribute unpacking: serial returns an empty dqai immediately, concurrent fills it in
dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
// Step 1: Normalize arguments (qos, overcommit, tq) — priority-related handling
……
// Step 2: Initialize the queue
……
const void *vtable;
dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
if (dqai.dqai_concurrent) {
//concurrent: pick the concurrent vtable (class)
vtable = DISPATCH_VTABLE(queue_concurrent);
} else {
//serial
vtable = DISPATCH_VTABLE(queue_serial);
}
……
//allocate dq; note it is received as a dispatch_lane_t, not a queue type
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
sizeof(struct dispatch_lane_s));
//initialize: dqai_concurrent decides the width — DISPATCH_QUEUE_WIDTH_MAX for concurrent, 1 for serial
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
//assign the label
dq->dq_label = label;
//set the priority
dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
dqai.dqai_relpri);
if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
}
if (!dqai.dqai_inactive) {
_dispatch_queue_priority_inherit_from_target(dq, tq);
_dispatch_lane_inherit_wlh_from_target(dq, tq);
}
_dispatch_retain(tq);
dq->do_targetq = tq;
_dispatch_object_debug(dq, "%s", __func__);
//return the _dq from the _dispatch_trace_queue_create (introspection/trace) call
return _dispatch_trace_queue_create(dq)._dq;
}
- _dispatch_queue_attr_to_info builds a dqai from dqa (serial vs. concurrent).
- The returned dqai drives the priority-related preparation work.
- The queue is then initialized:
  - Based on dqai_concurrent, the vtable (class name) is fetched, with queue_concurrent or queue_serial as the parameter.
  - _dispatch_object_alloc allocates the memory.
  - _dispatch_queue_init initializes it, passing DISPATCH_QUEUE_WIDTH_MAX or 1 depending on dqai_concurrent — this is what distinguishes serial from concurrent.
  - The label is assigned and the priority is set.
- _dispatch_trace_queue_create wraps the result for tracing and returns _dq; the object of interest is dq.
Note that the queue here is not created or received through a "queue" object at all — it is allocated as a dispatch_lane_t. (A sketch of the attribute-based creation API follows below.)
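At the API level, the dqa attribute that _dispatch_queue_attr_to_info decodes can also carry a QoS. A small, illustrative sketch (queue label made up) of creating a serial queue with utility QoS:

dispatch_queue_attr_t attr =
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_UTILITY, 0);
dispatch_queue_t queue = dispatch_queue_create("com.example.worker", attr);

dispatch_async(queue, ^{
    NSLog(@"%s", dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)); // com.example.worker
});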
4.2 _dispatch_queue_attr_to_info
dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
dispatch_queue_attr_info_t dqai = { };
//serial queue (dqa is NULL): return the zeroed dqai immediately
if (!dqa) return dqai;
……
//concurrent queue: decode the attribute index into the dqai fields
size_t idx = (size_t)(dqa - _dispatch_queue_attrs);
dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;
dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
dqai.dqai_relpri = -(int)(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;
dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;
dqai.dqai_autorelease_frequency =
idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
return dqai;
}
- A serial queue (dqa is NULL) returns immediately with an empty dqai.
- A concurrent queue has its fields decoded from the attribute.
4.3 _dispatch_object_alloc
_dispatch_object_alloc is passed the vtable and sizeof(struct dispatch_lane_s), which confirms that the allocated object really is a dispatch_lane_s.
void *
_dispatch_object_alloc(const void *vtable, size_t size)
{
#if OS_OBJECT_HAVE_OBJC1
const struct dispatch_object_vtable_s *_vtable = vtable;
dispatch_object_t dou;
dou._os_obj = _os_object_alloc_realized(_vtable->_os_obj_objc_isa, size);
dou._do->do_vtable = vtable;
return dou._do;
#else
return _os_object_alloc_realized(vtable, size);
#endif
}
- It calls _os_object_alloc_realized to allocate the memory.
4.3.1 _os_object_alloc_realized
inline _os_object_t
_os_object_alloc_realized(const void *cls, size_t size)
{
_os_object_t obj;
dispatch_assert(size >= sizeof(struct _os_object_s));
while (unlikely(!(obj = calloc(1u, size)))) {
_dispatch_temporary_resource_shortage();
}
obj->os_obj_isa = cls;
return obj;
}
Internally it simply calls calloc to allocate the space.
4.4 _dispatch_queue_init
DISPATCH_QUEUE_WIDTH_MAX or 1 is passed depending on whether the queue is concurrent or serial.
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)
#define DISPATCH_QUEUE_WIDTH_MAX (DISPATCH_QUEUE_WIDTH_FULL - 2)
static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
uint16_t width, uint64_t initial_state_bits)
{
uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
dispatch_queue_t dq = dqu._dq;
……
dq_state |= initial_state_bits;
dq->do_next = DISPATCH_OBJECT_LISTLESS;
//set DQF_WIDTH — this is what marks the queue serial or concurrent
dqf |= DQF_WIDTH(width);
os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
dq->dq_state = dq_state;
//serial number (identifier)
dq->dq_serialnum =
os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
return dqu;
}
- The width determines the initial dq_state.
- The width also sets DQF_WIDTH (serial vs. concurrent), and dq_serialnum is assigned from _dispatch_queue_serial_numbers.
unsigned long volatile _dispatch_queue_serial_numbers =
DISPATCH_QUEUE_SERIAL_NUMBER_INIT;
// skip zero
// 1 - main_q
// 2 - mgr_q
// 3 - mgr_root_q
// 4,5,6,7,8,9,10,11,12,13,14,15 - global queues
// 17 - workloop_fallback_q
// we use 'xadd' on Intel, so the initial value == next assigned
#define DISPATCH_QUEUE_SERIAL_NUMBER_INIT 17
extern unsigned long volatile _dispatch_queue_serial_numbers;
- 0 is skipped.
- 1 is the main queue.
- 2 is the manager queue.
- 3 is the manager queue's root (target) queue.
- 4–15 are the global queues, one per qos / overcommit combination.
- 17 is listed as the workloop fallback queue; DISPATCH_QUEUE_SERIAL_NUMBER_INIT is 17, and since xadd returns the old value, that is also the first serial number handed out to user-created queues.
os_atomic_inc_orig is called with _dispatch_queue_serial_numbers (p) and relaxed (m):
#define os_atomic_inc_orig(p, m) \
os_atomic_add_orig((p), 1, m)
#define os_atomic_add_orig(p, v, m) \
_os_atomic_c11_op_orig((p), (v), m, add, +)
//## is the token-pasting operator; it disappears after preprocessing
#define _os_atomic_c11_op_orig(p, v, m, o, op) \
atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), v, \
memory_order_##m)
- The expansion ultimately calls atomic_fetch_add_explicit(_os_atomic_c11_atomic(&_dispatch_queue_serial_numbers), 1, memory_order_relaxed): an atomic replace (p + 1 -> p) that returns the previous value of p — essentially an atomic i++.
- The layers of macros exist to stay compatible with different C/C++ versions.
atomic_fetch_add_explicit is the standard C11 atomic:
C atomic_fetch_add_explicit(volatile A *obj, M arg, memory_order order);
enum memory_order {
    memory_order_relaxed, // only atomicity is guaranteed; no ordering constraints
    memory_order_consume, // subsequent operations on this atomic in this thread happen after this one
    memory_order_acquire, // subsequent reads in this thread happen after this operation completes
    memory_order_release, // all earlier writes in this thread complete before this operation
    memory_order_acq_rel, // both acquire and release
    memory_order_seq_cst  // fully sequentially consistent
};
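A tiny illustrative snippet (drop it into any test method; #import <stdatomic.h> is required, and the variable name is made up) showing that atomic_fetch_add_explicit returns the value before the increment — exactly how the serial numbers are handed out:

static _Atomic unsigned long fakeSerialNumbers = 17; // mimics DISPATCH_QUEUE_SERIAL_NUMBER_INIT
unsigned long mine = atomic_fetch_add_explicit(&fakeSerialNumbers, 1, memory_order_relaxed);
NSLog(@"got %lu, counter is now %lu", mine, (unsigned long)fakeSerialNumbers); // got 17, counter is now 18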
4.5 _dispatch_trace_queue_create
#define _dispatch_trace_queue_create _dispatch_introspection_queue_create
dispatch_queue_class_t
_dispatch_introspection_queue_create(dispatch_queue_t dq)
{
dispatch_queue_introspection_context_t dqic;
size_t sz = sizeof(struct dispatch_queue_introspection_context_s);
if (!_dispatch_introspection.debug_queue_inversions) {
sz = offsetof(struct dispatch_queue_introspection_context_s,
__dqic_no_queue_inversion);
}
dqic = _dispatch_calloc(1, sz);
dqic->dqic_queue._dq = dq;
if (_dispatch_introspection.debug_queue_inversions) {
LIST_INIT(&dqic->dqic_order_top_head);
LIST_INIT(&dqic->dqic_order_bottom_head);
}
dq->do_introspection_ctxt = dqic;
_dispatch_unfair_lock_lock(&_dispatch_introspection.queues_lock);
LIST_INSERT_HEAD(&_dispatch_introspection.queues, dqic, dqic_list);
_dispatch_unfair_lock_unlock(&_dispatch_introspection.queues_lock);
DISPATCH_INTROSPECTION_INTERPOSABLE_HOOK_CALLOUT(queue_create, dq);
if (DISPATCH_INTROSPECTION_HOOK_ENABLED(queue_create)) {
_dispatch_introspection_queue_create_hook(dq);
}
return upcast(dq)._dqu;
}
- Introspection / tracing bookkeeping for the newly created queue.
5. dispatch_queue_t
Since every queue is received as a dispatch_queue_t, the type itself is a good entry point. Jumping to the definition of dispatch_queue_t lands on:
DISPATCH_DECL(dispatch_queue);
5.1 DISPATCH_DECL Source Analysis
The DISPATCH_DECL macro is defined as:
#define DISPATCH_DECL(name) OS_OBJECT_DECL_SUBCLASS(name, dispatch_object)
So the real work is in OS_OBJECT_DECL_SUBCLASS, which has two definitions in the source. The Objective-C one is:
#define OS_OBJECT_DECL_SUBCLASS(name, super) \
OS_OBJECT_DECL_IMPL(name, NSObject, )
OS_OBJECT_DECL_IMPL is defined as:
#define OS_OBJECT_DECL_IMPL(name, adhere, ...) \
OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \
typedef adhere \
* OS_OBJC_INDEPENDENT_CLASS name##_t
OS_OBJECT_DECL_PROTOCOL is defined as:
#define OS_OBJECT_DECL_PROTOCOL(name, ...) \
@protocol OS_OBJECT_CLASS(name) __VA_ARGS__ \
@end
- In other words, it declares a @protocol.
OS_OBJC_INDEPENDENT_CLASS is defined as:
#if __has_attribute(objc_independent_class)
#define OS_OBJC_INDEPENDENT_CLASS __attribute__((objc_independent_class))
#endif // __has_attribute(objc_independent_class)
#ifndef OS_OBJC_INDEPENDENT_CLASS
#define OS_OBJC_INDEPENDENT_CLASS
#endif
For simplicity, assume here that OS_OBJC_INDEPENDENT_CLASS expands to nothing.
OS_OBJECT_CLASS is defined as:
#define OS_OBJECT_CLASS(name) OS_##name
- It is simply an OS_ prefix concatenation.
With name = dispatch_queue and super = dispatch_object, the fully expanded macro becomes:
@protocol OS_dispatch_queue
@end
typedef NSObject *dispatch_queue_t
Searching the source for #define DISPATCH_DECL turns up several definitions; one of them is:
#define DISPATCH_DECL(name) \
typedef struct name##_s : public dispatch_object_s {} *name##_t
After substitution:
typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t
- dispatch_queue_t is a pointer to the struct dispatch_queue_s.
- dispatch_queue_s inherits from dispatch_object_s.
- The chain dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s is analogous to Class -> objc_class -> objc_object.
- In essence, dispatch_queue_t is the dispatch_queue_s struct type.
5.2 dispatch_queue_s Analysis
To understand dispatch_queue_t we need to look at its underlying type, dispatch_queue_s, defined as:
struct dispatch_queue_s {
DISPATCH_QUEUE_CLASS_HEADER(queue, void *__dq_opaque1);
/* 32bit hole on LP64 */
} DISPATCH_ATOMIC64_ALIGN;
The DISPATCH_QUEUE_CLASS_HEADER macro:
#define DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__) \
_DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__); \
/* LP64 global queue cacheline boundary */ \
unsigned long dq_serialnum; \
const char *dq_label; \
DISPATCH_UNION_LE(uint32_t volatile dq_atomic_flags, \
const uint16_t dq_width, \
const uint16_t __dq_opaque2 \
); \
dispatch_priority_t dq_priority; \
union { \
struct dispatch_queue_specific_head_s *dq_specific_head; \
struct dispatch_source_refs_s *ds_refs; \
struct dispatch_timer_source_refs_s *ds_timer_refs; \
struct dispatch_mach_recv_refs_s *dm_recv_refs; \
struct dispatch_channel_callbacks_s const *dch_callbacks; \
}; \
int volatile dq_sref_cnt
It in turn builds on _DISPATCH_QUEUE_CLASS_HEADER:
#define _DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__) \
DISPATCH_OBJECT_HEADER(x); \
__pointer_sized_field__; \
DISPATCH_UNION_LE(uint64_t volatile dq_state, \
dispatch_lock dq_state_lock, \
uint32_t dq_state_bits \
)
#endif
_DISPATCH_QUEUE_CLASS_HEADER builds on DISPATCH_OBJECT_HEADER:
#define DISPATCH_OBJECT_HEADER(x) \
struct dispatch_object_s _as_do[0]; \
_DISPATCH_OBJECT_HEADER(x)
- This is where the dispatch_object_s type comes in.
DISPATCH_OBJECT_HEADER in turn uses _DISPATCH_OBJECT_HEADER:
#define _DISPATCH_OBJECT_HEADER(x) \
struct _os_object_s _as_os_obj[0]; \
OS_OBJECT_STRUCT_HEADER(dispatch_##x); \
struct dispatch_##x##_s *volatile do_next; \
struct dispatch_queue_s *do_targetq; \
void *do_ctxt; \
union { \
dispatch_function_t DISPATCH_FUNCTION_POINTER do_finalizer; \
void *do_introspection_ctxt; \
}
So the layout ultimately bottoms out in the _os_object_s type.
The OS_OBJECT_STRUCT_HEADER macro:
#define OS_OBJECT_STRUCT_HEADER(x) \
_OS_OBJECT_HEADER(\
const struct x##_vtable_s *__ptrauth_objc_isa_pointer do_vtable, \
do_ref_cnt, \
do_xref_cnt)
#endif
_OS_OBJECT_HEADER contributes 3 member variables:
#define _OS_OBJECT_HEADER(isa, ref_cnt, xref_cnt) \
isa; /* must be pointer-sized and use __ptrauth_objc_isa_pointer */ \
int volatile ref_cnt; \
int volatile xref_cnt
So the full chain is: dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s -> _os_object_s.
5.3 dispatch_object_s Analysis
The dispatch_object_s / dispatch_object_t source:
typedef struct dispatch_object_s {
private:
dispatch_object_s();
~dispatch_object_s();
dispatch_object_s(const dispatch_object_s &);
void operator=(const dispatch_object_s &);
} *dispatch_object_t;
#define DISPATCH_DECL(name) \
typedef struct name##_s : public dispatch_object_s {} *name##_t
#define DISPATCH_DECL_SUBCLASS(name, base) \
typedef struct name##_s : public base##_s {} *name##_t
#define DISPATCH_GLOBAL_OBJECT(type, object) (static_cast(&(object)))
#define DISPATCH_RETURNS_RETAINED
#else /* Plain C */
#ifndef __DISPATCH_BUILDING_DISPATCH__
typedef union {
struct _os_object_s *_os_obj;
struct dispatch_object_s *_do;
struct dispatch_queue_s *_dq;
struct dispatch_queue_attr_s *_dqa;
struct dispatch_group_s *_dg;
struct dispatch_source_s *_ds;
struct dispatch_channel_s *_dch;
struct dispatch_mach_s *_dm;
struct dispatch_mach_msg_s *_dmsg;
struct dispatch_semaphore_s *_dsema;
struct dispatch_data_s *_ddata;
struct dispatch_io_s *_dchannel;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
- dispatch_object_t is a union, so it can be any one of the member types.
- One of those members is struct dispatch_object_s *, consistent with queues being dispatch_object_s underneath.
- _os_object_s also matches the analysis above.
Summary: a queue can be viewed through the chain dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s -> _os_object_s, with dispatch_object_t as the transparent union that can hold any of these.
6. GCD Function Call Flow
We never invoke a GCD block ourselves — so when does it get called, and what does the task's execution path look like?
6.1 The Synchronous Path
A typical synchronous task:
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
NSLog(@"test");
});
6.1.1 dispatch_sync Source Analysis
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
uintptr_t dc_flags = DC_FLAG_BLOCK;
if (unlikely(_dispatch_block_has_private_data(work))) {
return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
}
_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
#define _dispatch_Block_invoke(bb) \
((dispatch_function_t)((struct Block_layout *)bb)->invoke)
- The core call is _dispatch_sync_f; the arguments are the queue, the block, the block wrapped into its invoke function pointer, and DC_FLAG_BLOCK.
- The key question is when this invoke gets called.
_dispatch_sync_f:
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
uintptr_t dc_flags)
{
_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
- _dispatch_sync_f is just a wrapper around _dispatch_sync_f_inline.
6.1.2 _dispatch_sync_f_inline
_dispatch_sync_f_inline has many branches, so the easiest way to trace the flow is with symbolic breakpoints:
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
//serial queue
if (likely(dq->dq_width == 1)) {
return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
}
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
dispatch_lane_t dl = upcast(dq)._dl;
// Global concurrent queues and queues bound to non-dispatch threads
// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
//global concurrent queue
if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
}
if (unlikely(dq->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
}
_dispatch_introspection_sync_begin(dl);
//custom concurrent queue
_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
- Serial queues take the _dispatch_barrier_sync_f path.
- The global concurrent queue takes the _dispatch_sync_f_slow path.
- Custom concurrent queues take the _dispatch_sync_invoke_and_complete path.
6.1.3 _dispatch_barrier_sync_f (serial)
Typical call:
dispatch_queue_t queue = dispatch_queue_create("test", NULL);
dispatch_sync(queue, ^{
NSLog(@"test");
});
_dispatch_barrier_sync_f implementation:
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}
Internally it calls _dispatch_barrier_sync_f_inline:
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
dispatch_tid tid = _dispatch_tid_self();
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
dispatch_lane_t dl = upcast(dq)._dl;
if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
DC_FLAG_BARRIER | dc_flags);
}
if (unlikely(dl->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func,
DC_FLAG_BARRIER | dc_flags);
}
_dispatch_introspection_sync_begin(dl);
_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}
This ends up in _dispatch_lane_barrier_sync_invoke_and_complete:
static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
……
}
which calls _dispatch_sync_function_invoke_inline:
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_thread_frame_s dtf;
_dispatch_thread_frame_push(&dtf, dq);
_dispatch_client_callout(ctxt, func);
_dispatch_perfmon_workitem_inc();
_dispatch_thread_frame_pop(&dtf);
}
which calls _dispatch_client_callout.
A breakpoint placed inside the block shows the following stack:
* frame #0: 0x000000010f94d477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x000000010f950038) at ViewController.m:30:9
frame #1: 0x000000010fbc39c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #2: 0x000000010fbd2bfe libdispatch.dylib`_dispatch_lane_barrier_sync_invoke_and_complete + 132
frame #3: 0x000000010f94d44a GCDDemo`-[ViewController viewDidLoad](self=0x00007f8f32c09ae0, _cmd="viewDidLoad") at ViewController.m:29:5
which matches the path exactly.
6.1.4 _dispatch_sync_f_slow (global concurrent)
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
NSLog(@"test");
});
_dispatch_sync_f_slow source:
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
dispatch_function_t func, uintptr_t top_dc_flags,
dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
dispatch_queue_t top_dq = top_dqu._dq;
dispatch_queue_t dq = dqu._dq;
if (unlikely(!dq->do_targetq)) {
return _dispatch_sync_function_invoke(dq, ctxt, func);
}
pthread_priority_t pp = _dispatch_get_priority();
struct dispatch_sync_context_s dsc = {
.dc_flags = DC_FLAG_SYNC_WAITER | dc_flags,
.dc_func = _dispatch_async_and_wait_invoke,
.dc_ctxt = &dsc,
.dc_other = top_dq,
.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
.dc_voucher = _voucher_get(),
.dsc_func = func,
.dsc_ctxt = ctxt,
.dsc_waiter = _dispatch_tid_self(),
};
_dispatch_trace_item_push(top_dq, &dsc);
__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
if (dsc.dsc_func == NULL) {
// dsc_func being cleared means that the block ran on another thread ie.
// case (2) as listed in _dispatch_async_and_wait_f_slow.
dispatch_queue_t stop_dq = dsc.dc_other;
return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
}
_dispatch_introspection_sync_begin(top_dq);
_dispatch_trace_item_pop(top_dq, &dsc);
_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,top_dc_flags
DISPATCH_TRACE_ARG(&dsc));
}
Continuing with breakpoints shows this path goes through _dispatch_sync_function_invoke (which demonstrates that the system global queues have no do_targetq, while custom queues do); it is a thin wrapper around _dispatch_sync_function_invoke_inline:
static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
}
_dispatch_sync_function_invoke_inline implementation:
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_thread_frame_s dtf;
_dispatch_thread_frame_push(&dtf, dq);
_dispatch_client_callout(ctxt, func);
_dispatch_perfmon_workitem_inc();
_dispatch_thread_frame_pop(&dtf);
}
- _dispatch_client_callout is passed the block; its implementation:
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
_dispatch_get_tsd_base();
void *u = _dispatch_get_unwind_tsd();
if (likely(!u)) return f(ctxt);
_dispatch_set_unwind_tsd(NULL);
f(ctxt);
_dispatch_free_unwind_tsd();
_dispatch_set_unwind_tsd(u);
}
- It calls f(ctxt) directly — that is the invocation of the block.
- There are several _dispatch_client_callout variants, but they all boil down to calling f(ctxt).
Verification: breaking inside the block gives this stack:
* frame #0: 0x000000010c232477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x000000010c235038) at ViewController.m:33:9
frame #1: 0x000000010c4a89c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #2: 0x000000010c4ad2ab libdispatch.dylib`_dispatch_sync_function_invoke + 127
frame #3: 0x000000010c232440 GCDDemo`-[ViewController viewDidLoad](self=0x00007fbc29c0b660, _cmd="viewDidLoad") at ViewController.m:32:5
- The stack confirms the block is ultimately invoked by _dispatch_client_callout.
6.1.5 _dispatch_sync_invoke_and_complete (custom concurrent)
dispatch_queue_t queue = dispatch_queue_create("test", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(queue, ^{
NSLog(@"test");
});
_dispatch_sync_invoke_and_complete source:
static void
_dispatch_sync_invoke_and_complete(dispatch_lane_t dq, void *ctxt,
dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
_dispatch_trace_item_complete(dc);
_dispatch_lane_non_barrier_complete(dq, 0);
}
- Here dispatch_function_t func DISPATCH_TRACE_ARG(void *dc) looks like a single parameter but may expand to one or two, matching the caller's func DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)).
DISPATCH_TRACE_ARG definition:
#if DISPATCH_USE_DTRACE_INTROSPECTION || DISPATCH_INTROSPECTION
......
#define DISPATCH_TRACE_ARG(arg) , arg
......
#else
......
#define DISPATCH_TRACE_ARG(arg)
......
#endif
So it expands either to dispatch_function_t func, _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags) or just to dispatch_function_t func — the trailing argument is conditionally compiled in.
_dispatch_sync_function_invoke_inline:
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
dispatch_function_t func)
{
dispatch_thread_frame_s dtf;
_dispatch_thread_frame_push(&dtf, dq);
_dispatch_client_callout(ctxt, func);
_dispatch_perfmon_workitem_inc();
_dispatch_thread_frame_pop(&dtf);
}
It again calls _dispatch_client_callout; a breakpoint in the block gives this stack:
* frame #0: 0x000000010a7aa477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x000000010a7ad040) at ViewController.m:29:9
frame #1: 0x000000010aa209c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #2: 0x000000010aa302fb libdispatch.dylib`_dispatch_sync_invoke_and_complete + 132
frame #3: 0x000000010a7aa440 GCDDemo`-[ViewController viewDidLoad](self=0x00007fd5dc506a90, _cmd="viewDidLoad") at ViewController.m:28:5
⚠️ A synchronous dispatch does not spawn a thread: the block is executed directly during the call, and whether the queue is serial or concurrent the tasks run in order.
6.2 The Asynchronous Path
A typical asynchronous call:
dispatch_async(dispatch_get_global_queue(0, 0), ^{
NSLog(@"test");
});
6.2.1 dispatch_async Source Analysis
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
dispatch_continuation_t dc = _dispatch_continuation_alloc();
uintptr_t dc_flags = DC_FLAG_CONSUME;
dispatch_qos_t qos;
qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
- Internally the work block is packed into a dispatch_continuation_t and a qos is derived for it.
- The call then goes to _dispatch_continuation_async.
_dispatch_continuation_init implementation:
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, dispatch_block_t work,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
void *ctxt = _dispatch_Block_copy(work);
dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
if (unlikely(_dispatch_block_has_private_data(work))) {
dc->dc_flags = dc_flags;
dc->dc_ctxt = ctxt;
// will initialize all fields but requires dc_flags & dc_ctxt to be set
return _dispatch_continuation_init_slow(dc, dqu, flags);
}
//wrap the block's invoke pointer into func
dispatch_function_t func = _dispatch_Block_invoke(work);
if (dc_flags & DC_FLAG_CONSUME) {
func = _dispatch_call_block_and_release;
}
return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
- The work block is copied into ctxt and its invoke pointer is wrapped into func.
- It then calls _dispatch_continuation_init_f.
_dispatch_continuation_init_f implementation:
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
pthread_priority_t pp = 0;
dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
//store f in dc_func
dc->dc_func = f;
//store ctxt in dc_ctxt
dc->dc_ctxt = ctxt;
// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
// should not be propagated, only taken from the handler if it has one
if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
pp = _dispatch_priority_propagate();
}
_dispatch_continuation_voucher_set(dc, flags);
return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}
- The invoke function and the block context are stored in the corresponding dc fields.
- _dispatch_continuation_priority_set then handles the priority.
_dispatch_continuation_priority_set implementation:
static inline dispatch_qos_t
_dispatch_continuation_priority_set(dispatch_continuation_t dc,
dispatch_queue_class_t dqu,
pthread_priority_t pp, dispatch_block_flags_t flags)
{
dispatch_qos_t qos = DISPATCH_QOS_UNSPECIFIED;
#if HAVE_PTHREAD_WORKQUEUE_QOS
dispatch_queue_t dq = dqu._dq;
if (likely(pp)) {
bool enforce = (flags & DISPATCH_BLOCK_ENFORCE_QOS_CLASS);
bool is_floor = (dq->dq_priority & DISPATCH_PRIORITY_FLAG_FLOOR);
bool dq_has_qos = (dq->dq_priority & DISPATCH_PRIORITY_REQUESTED_MASK);
if (enforce) {
pp |= _PTHREAD_PRIORITY_ENFORCE_FLAG;
qos = _dispatch_qos_from_pp_unsafe(pp);
} else if (!is_floor && dq_has_qos) {
pp = 0;
} else {
qos = _dispatch_qos_from_pp_unsafe(pp);
}
}
dc->dc_priority = pp;
#else
(void)dc; (void)dqu; (void)pp; (void)flags;
#endif
return qos;
}
- A qos is produced and the priority is configured.
Why does the asynchronous path bother with priority?
- Asynchronous dispatch means out-of-order execution; priority is what the scheduler uses to arbitrate.
- The callback is necessarily asynchronous.
- And since the block has been packed away together with its qos, there must be a later point at which it is taken out again and invoked.
6.2.2 _dispatch_continuation_async
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
_dispatch_trace_item_push(dqu, dc);
}
#else
(void)dc_flags;
#endif
return dx_push(dqu._dq, dc, qos);
}
Internally it calls dx_push:
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
qos is the third argument z, so the core is dq_push (dx_vtable resolves the queue's class). Searching for where dq_push is assigned gives:
This lines up with queue_main from 3.1, queue_global from 3.2, and queue_concurrent / queue_serial from 4.1:
- queue_global: _dispatch_root_queue_push.
- queue_main: _dispatch_main_queue_push.
- queue_serial: _dispatch_lane_push.
- queue_concurrent: _dispatch_lane_concurrent_push.
6.2.3 _dispatch_root_queue_push (global queue)
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
dispatch_qos_t qos)
{
#if DISPATCH_USE_KEVENT_WORKQUEUE
dispatch_deferred_items_t ddi = _dispatch_deferred_items_get();
if (unlikely(ddi && ddi->ddi_can_stash)) {……}
#endif
#if HAVE_PTHREAD_WORKQUEUE_QOS
if (_dispatch_root_queue_push_needs_override(rq, qos)) {
return _dispatch_root_queue_push_override(rq, dou, qos);
}
#else
(void)qos;
#endif
_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}
Internally it calls _dispatch_root_queue_push_override:
static void
_dispatch_root_queue_push_override(dispatch_queue_global_t orig_rq,
dispatch_object_t dou, dispatch_qos_t qos)
{
bool overcommit = orig_rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
dispatch_queue_global_t rq = _dispatch_get_root_queue(qos, overcommit);
dispatch_continuation_t dc = dou._dc;
if (_dispatch_object_is_redirection(dc)) {
// no double-wrap is needed, _dispatch_async_redirect_invoke will do
// the right thing
dc->dc_func = (void *)orig_rq;
} else {
dc = _dispatch_continuation_alloc();
dc->do_vtable = DC_VTABLE(OVERRIDE_OWNING);
dc->dc_ctxt = dc;
dc->dc_other = orig_rq;
dc->dc_data = dou._do;
dc->dc_priority = DISPATCH_NO_PRIORITY;
dc->dc_voucher = DISPATCH_NO_VOUCHER;
}
_dispatch_root_queue_push_inline(rq, dc, dc, 1);
}
which goes straight to _dispatch_root_queue_push_inline:
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
dispatch_object_t _head, dispatch_object_t _tail, int n)
{
struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
//call _dispatch_root_queue_poke
return _dispatch_root_queue_poke(dq, n, 0);
}
}
which calls _dispatch_root_queue_poke:
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
……
return _dispatch_root_queue_poke_slow(dq, n, floor);
}
_dispatch_root_queue_poke_slow implementation:
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
……
_dispatch_root_queues_init();
……
}
_dispatch_root_queues_init implementation:
static inline void
_dispatch_root_queues_init(void)
{
dispatch_once_f(&_dispatch_root_queues_pred, NULL,
_dispatch_root_queues_init_once);
}
This is a dispatch_once-style call into _dispatch_root_queues_init_once, which:
- assigns _dispatch_worker_thread2 to cfg.workq_cb, to be invoked by pthread;
- hands the configuration to _pthread_workqueue_init_with_workloop, so from this point the OS (CPU scheduling) drives the worker threads.
A breakpoint placed directly inside the block's implementation shows the following stack:
* thread #4, queue = 'com.apple.root.default-qos', stop reason = breakpoint 1.1
* frame #0: 0x0000000104614477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x0000000104617038) at ViewController.m:33:9
frame #1: 0x00000001048897ec libdispatch.dylib`_dispatch_call_block_and_release + 12
frame #2: 0x000000010488a9c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #3: 0x000000010488ce46 libdispatch.dylib`_dispatch_queue_override_invoke + 1032
frame #4: 0x000000010489c508 libdispatch.dylib`_dispatch_root_queue_drain + 351
frame #5: 0x000000010489ce6d libdispatch.dylib`_dispatch_worker_thread2 + 135
frame #6: 0x00007fff611639f7 libsystem_pthread.dylib`_pthread_wqthread + 220
frame #7: 0x00007fff61162b77 libsystem_pthread.dylib`start_wqthread + 15
The stack shows the system driving the call from the pthread workqueue layer — GCD is ultimately built on top of pthread.
_dispatch_worker_thread2 calls _dispatch_root_queue_drain, which pops continuations via _dispatch_continuation_pop_inline; the dx_invoke macro there resolves to _dispatch_queue_override_invoke, which internally calls _dispatch_continuation_invoke_inline:
- _dispatch_client_callout finally invokes the block.
6.2.4 _dispatch_main_queue_push (main queue)
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"test");
});
By the same reasoning as for the global queue, when the target is the main queue the call goes into _dispatch_main_queue_push:
void
_dispatch_main_queue_push(dispatch_queue_main_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
// Same as _dispatch_lane_push() but without the refcounting due to being
// a global object
if (_dispatch_queue_push_item(dq, dou)) {
return dx_wakeup(dq, qos, DISPATCH_WAKEUP_MAKE_DIRTY);
}
qos = _dispatch_queue_push_qos(dq, qos);
if (_dispatch_queue_need_override(dq, qos)) {
return dx_wakeup(dq, qos, 0);
}
}
Internally it calls dx_wakeup, defined as:
#define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)
qos is the second argument, so the core is dq_wakeup; for the main queue the implementation is _dispatch_main_queue_wakeup:
void
_dispatch_main_queue_wakeup(dispatch_queue_main_t dq, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags)
{
#if DISPATCH_COCOA_COMPAT
if (_dispatch_queue_is_thread_bound(dq)) {
return _dispatch_runloop_queue_wakeup(dq->_as_dl, qos, flags);
}
#endif
return _dispatch_lane_wakeup(dq, qos, flags);
}
Symbolic breakpoints show the _dispatch_runloop_queue_wakeup path is taken:
void
_dispatch_runloop_queue_wakeup(dispatch_lane_t dq, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags)
{
if (unlikely(_dispatch_queue_atomic_flags(dq) & DQF_RELEASED)) {
//
return _dispatch_lane_wakeup(dq, qos, flags);
}
if (flags & DISPATCH_WAKEUP_MAKE_DIRTY) {
os_atomic_or2o(dq, dq_state, DISPATCH_QUEUE_DIRTY, release);
}
if (_dispatch_queue_class_probe(dq)) {
return _dispatch_runloop_queue_poke(dq, qos, flags);
}
qos = _dispatch_runloop_queue_reset_max_qos(dq);
if (qos) {
mach_port_t owner = DISPATCH_QUEUE_DRAIN_OWNER(dq);
if (_dispatch_queue_class_probe(dq)) {
_dispatch_runloop_queue_poke(dq, qos, flags);
}
_dispatch_thread_override_end(owner, dq);
return;
}
if (flags & DISPATCH_WAKEUP_CONSUME_2) {
return _dispatch_release_2_tailcall(dq);
}
}
Further symbolic breakpoints lead into _dispatch_runloop_queue_poke:
static void
_dispatch_runloop_queue_poke(dispatch_lane_t dq, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags)
{
……
if (dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE) {
dispatch_once_f(&_dispatch_main_q_handle_pred, dq,
_dispatch_runloop_queue_handle_init);
}
……
}
The core is in _dispatch_runloop_queue_handle_init:
static void
_dispatch_runloop_queue_handle_init(void *ctxt)
{
dispatch_lane_t dq = (dispatch_lane_t)ctxt;
//create the runloop handle (a mach port)
dispatch_runloop_handle_t handle;
_dispatch_fork_becomes_unsafe();
#if TARGET_OS_MAC
mach_port_options_t opts = {
.flags = MPO_CONTEXT_AS_GUARD | MPO_STRICT | MPO_INSERT_SEND_RIGHT,
};
mach_port_context_t guard = (uintptr_t)dq;
kern_return_t kr;
mach_port_t mp;
if (dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE) {
opts.flags |= MPO_QLIMIT;
opts.mpl.mpl_qlimit = 1;
}
kr = mach_port_construct(mach_task_self(), &opts, guard, &mp);
DISPATCH_VERIFY_MIG(kr);
(void)dispatch_assume_zero(kr);
handle = mp;
#elif defined(__linux__)
……
#else
#error "runloop support not implemented on this platform"
#endif
_dispatch_runloop_queue_set_handle(dq, handle);
_dispatch_program_is_probably_callback_driven = true;
}
A handle is created first, then assigned with handle = mp, and _dispatch_runloop_queue_set_handle stores the handle in the queue's do_ctxt:
static inline dispatch_runloop_handle_t
_dispatch_runloop_queue_get_handle(dispatch_lane_t dq)
{
#if TARGET_OS_MAC
return ((dispatch_runloop_handle_t)(uintptr_t)dq->do_ctxt);
#elif defined(__linux__)
// decode: 0 is a valid fd, so offset by 1 to distinguish from NULL
return ((dispatch_runloop_handle_t)(uintptr_t)dq->do_ctxt) - 1;
#elif defined(_WIN32)
return ((dispatch_runloop_handle_t)(uintptr_t)dq->do_ctxt);
#else
#error "runloop support not implemented on this platform"
#endif
}
A breakpoint inside the block gives the following call stack:
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000102fef477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x0000000102ff2040) at ViewController.m:33:9
frame #1: 0x00000001032647ec libdispatch.dylib`_dispatch_call_block_and_release + 12
frame #2: 0x00000001032659c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #3: 0x0000000103273e75 libdispatch.dylib`_dispatch_main_queue_callback_4CF + 1152
frame #4: 0x00007fff2038fdbb CoreFoundation`__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 9
frame #5: 0x00007fff2038a63e CoreFoundation`__CFRunLoopRun + 2685
So the callback comes in from CoreFoundation. The _dispatch_main_queue_callback_4CF implementation:
void
_dispatch_main_queue_callback_4CF(
void *ignored DISPATCH_UNUSED)
{
// the main queue cannot be suspended and no-one looks at this bit
// so abuse it to avoid dirtying more memory
if (_dispatch_main_q.dq_side_suspend_cnt) {
return;
}
_dispatch_main_q.dq_side_suspend_cnt = true;
_dispatch_main_queue_drain(&_dispatch_main_q);
_dispatch_main_q.dq_side_suspend_cnt = false;
}
It calls _dispatch_main_queue_drain, and from there the logic is the same as for the global queue.
One remaining question: who hands _dispatch_main_queue_callback_4CF to CF in the first place? Searching the CoreFoundation binary shows the call is hard-coded there.
6.2.5 _dispatch_lane_push (custom serial queue)
dispatch_queue_t queue = dispatch_queue_create("test", NULL);
dispatch_async(queue, ^{
NSLog(@"test");
});
The call goes into _dispatch_lane_push:
void
_dispatch_lane_push(dispatch_lane_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
dispatch_wakeup_flags_t flags = 0;
struct dispatch_object_s *prev;
if (unlikely(_dispatch_object_is_waiter(dou))) {
return _dispatch_lane_push_waiter(dq, dou._dsc, qos);
}
dispatch_assert(!_dispatch_object_is_global(dq));
qos = _dispatch_queue_push_qos(dq, qos);
……
os_mpsc_push_update_prev(os_mpsc(dq, dq_items), prev, dou._do, do_next);
if (flags) {
//dx_wakeup resolves to _dispatch_lane_wakeup here
return dx_wakeup(dq, qos, flags);
}
}
Breakpoints show it enters _dispatch_lane_wakeup:
void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags)
{
dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
//the barrier case goes here
return _dispatch_lane_barrier_complete(dqu, qos, flags);
}
if (_dispatch_queue_class_probe(dqu)) {
target = DISPATCH_QUEUE_WAKEUP_TARGET;
}
//otherwise go here
return _dispatch_queue_wakeup(dqu, qos, flags, target);
}
Continuing, it enters _dispatch_queue_wakeup, but from here the symbolic breakpoints no longer hit. A breakpoint inside the block shows this stack:
* thread #7, queue = 'test', stop reason = breakpoint 1.1
* frame #0: 0x0000000103765477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x0000000103768038) at ViewController.m:34:9
frame #1: 0x00000001039da7ec libdispatch.dylib`_dispatch_call_block_and_release + 12
frame #2: 0x00000001039db9c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #3: 0x00000001039e2296 libdispatch.dylib`_dispatch_lane_serial_drain + 796
frame #4: 0x00000001039e2f67 libdispatch.dylib`_dispatch_lane_invoke + 439
frame #5: 0x00000001039eede2 libdispatch.dylib`_dispatch_workloop_worker_thread + 882
frame #6: 0x00007fff61163a3d libsystem_pthread.dylib`_pthread_wqthread + 290
frame #7: 0x00007fff61162b77 libsystem_pthread.dylib`start_wqthread + 15
_dispatch_workloop_worker_thread is assigned in _dispatch_root_queues_init_once. The symbolic breakpoint there did not fire during this dispatch; adding the breakpoint before launching the App shows the assignment actually happens right after main, during app start-up:
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 3.1
* frame #0: 0x000000010bb53d0e libdispatch.dylib`_dispatch_root_queues_init_once
frame #1: 0x000000010bb419c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #2: 0x000000010bb42f33 libdispatch.dylib`_dispatch_once_callout + 66
frame #3: 0x000000010bb4e5c3 libdispatch.dylib`_dispatch_root_queue_poke_slow + 363
frame #4: 0x00007fff2467a0a1 UIKitCore`_UIApplicationMainPreparations + 91
frame #5: 0x00007fff2467a01c UIKitCore`UIApplicationMain + 73
frame #6: 0x000000010b8cbf72 GCDDemo`main(argc=1, argv=0x00007ffee4333cd8) at main.m:17:12
frame #7: 0x00007fff20256409 libdyld.dylib`start + 1
frame #8: 0x00007fff20256409 libdyld.dylib`start + 1
UIKitCore triggers _dispatch_root_queue_poke_slow:
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
......
_dispatch_root_queues_init();
_dispatch_debug_root_queue(dq, __func__);
_dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);
......
}
whose core is _dispatch_root_queues_init:
static inline void
_dispatch_root_queues_init(void)
{
dispatch_once_f(&_dispatch_root_queues_pred, NULL,
_dispatch_root_queues_init_once);
}
and that is how _dispatch_root_queues_init_once gets called.
6.2.6 _dispatch_lane_concurrent_push (custom concurrent queue)
dispatch_queue_t queue = dispatch_queue_create("test", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
NSLog(@"test");
});
The call enters the _dispatch_lane_concurrent_push flow:
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
// reserving non barrier width
// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
// width equivalent), so we have to check that this thread hasn't
// enqueued anything ahead of this call or we can break ordering
if (dq->dq_items_tail == NULL &&
!_dispatch_object_is_waiter(dou) &&
!_dispatch_object_is_barrier(dou) &&
_dispatch_queue_try_acquire_async(dq)) {
return _dispatch_continuation_redirect_push(dq, dou, qos);
}
_dispatch_lane_push(dq, dou, qos);
}
Note that _dispatch_lane_push also appears here, meaning a concurrent queue can fall back to the serial-queue path in some situations — in particular when a barrier is involved (barriers will be analyzed in a later article).
In the common case the flow enters _dispatch_continuation_redirect_push:
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
dispatch_object_t dou, dispatch_qos_t qos)
{
if (likely(!_dispatch_object_is_redirection(dou))) {
dou._dc = _dispatch_async_redirect_wrap(dl, dou);
} else if (!dou._dc->dc_ctxt) {
// find first queue in descending target queue order that has
// an autorelease frequency set, and use that as the frequency for
// this continuation.
dou._dc->dc_ctxt = (void *)
(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
}
//the target queue changes here
dispatch_queue_t dq = dl->do_targetq;
if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
dx_push(dq, dou, qos);
}
- do_targetq is assigned to dq, so at this point dq_push is no longer _dispatch_lane_concurrent_push but _dispatch_root_queue_push of the queue_pthread_root type. From here the logic is identical to the global concurrent queue (and the internal call flow was verified to match).
do_targetq is the tq that was assigned back in _dispatch_lane_create_with_target.
A breakpoint inside the block gives this stack:
* thread #3, queue = 'test', stop reason = breakpoint 2.1
* frame #0: 0x0000000101ca7477 GCDDemo`__29-[ViewController viewDidLoad]_block_invoke(.block_descriptor=0x0000000101caa040) at ViewController.m:34:9
frame #1: 0x0000000101f1c7ec libdispatch.dylib`_dispatch_call_block_and_release + 12
frame #2: 0x0000000101f1d9c8 libdispatch.dylib`_dispatch_client_callout + 8
frame #3: 0x0000000101f20316 libdispatch.dylib`_dispatch_continuation_pop + 557
frame #4: 0x0000000101f1f71c libdispatch.dylib`_dispatch_async_redirect_invoke + 779
frame #5: 0x0000000101f2f508 libdispatch.dylib`_dispatch_root_queue_drain + 351
frame #6: 0x0000000101f2fe6d libdispatch.dylib`_dispatch_worker_thread2 + 135
frame #7: 0x00007fff611639f7 libsystem_pthread.dylib`_pthread_wqthread + 220
frame #8: 0x00007fff61162b77 libsystem_pthread.dylib`start_wqthread + 15
Putting it together, the whole asynchronous flow is: dispatch_async wraps the block into a continuation, dx_push routes it by queue type, the wakeup/poke path hands it to a pthread workqueue worker thread, the drain functions pop the continuation, and _dispatch_client_callout finally invokes the block.
7. Case Studies
7.1 Case 1
@property (atomic, assign) int num;
- (void)test {
while (self.num < 5) {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
}
- The while loop exits once num >= 5, so the final value is at least 5.
- The num++ runs asynchronously on the global concurrent queue and does not block the loop.
- Because the loop re-enters whenever num < 5, more iterations can be dispatched before earlier num++ blocks have finished, so num ends up >= 5. In effect the task is dispatched at least 5 times.
- If dispatch_async is changed to dispatch_sync, this becomes an ordinary loop and num ends up exactly 5.
- Using a custom serial queue does not fix it either: the increments merely run in order, and the loop can still dispatch more than 5 of them.
- Dispatching the increments asynchronously to the main queue makes the while loop spin forever and the increments never run, because the main queue cannot schedule them until the while loop itself finishes.
Since the increments run asynchronously, is the num that NSLog prints the final value? Not necessarily — at the time of the log some tasks may not have finished yet. Add a delay to check:
- (void)test {
while (self.num < 5) {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(3 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
NSLog(@"after result: %d",self.num);
});
}
Output:
result: 7
after result: 16
7.2 Case 2
@property (atomic, assign) int num;
- (void)test {
for (int i = 0; i < 10000; i++) {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
}
- The loop runs exactly 10000 times (it ends when i reaches 10000).
- num++ runs asynchronously on the global concurrent queue.
- Because the tasks are asynchronous, the logged result is <= 10000 (some tasks have not come back yet).
By the same token, does adding a delay make num reach 10000?
- (void)test {
for (int i = 0; i < 10000; i++) {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
NSLog(@"after result: %d",self.num);
});
}
Output:
result: 9956
after result: 9964
Why? Although the task ran 10000 times, concurrent blocks can read the same value of num before incrementing it, so increments get lost and the final value ends up below 10000.
Changing dispatch_async to dispatch_sync turns this back into an ordinary loop, and num ends up at 10000.
Using a custom serial queue also ends with num = 10000, because the increments run one at a time, in order:
- (void)test {
dispatch_queue_t queue = dispatch_queue_create("test", NULL);
for (int i = 0; i < 10000; i++) {
dispatch_async(queue, ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
NSLog(@"after result: %d",self.num);
});
}
Output:
result: 3049
after result: 10000
- Dispatching the increments asynchronously to the main queue:
- (void)test {
for (int i = 0; i < 10000; i++) {
dispatch_async(dispatch_get_main_queue(), ^{
self.num++;
});
}
NSLog(@"result: %d",self.num);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
NSLog(@"after result: %d",self.num);
});
}
Output:
result: 0
after result: 10000
Because everything is on the main queue, the for loop finishes first, result is logged (still 0), and only then do the queued increments run. Hence 0 first and 10000 after the delay.
⚠️ atomic only makes the setter and getter individually atomic; a read-modify-write like num++ (a get followed by a set) is not atomic, so concurrent increments are still unsafe.
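A minimal sketch (names are illustrative) of two common ways to make the increment in these examples safe: funnel the read-modify-write through one serial queue, or make the counter itself a C11 atomic.

// Option 1: serialize the read-modify-write on a private serial queue.
dispatch_queue_t syncQueue = dispatch_queue_create("com.example.num.sync", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 10000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        dispatch_async(syncQueue, ^{
            self.num++; // only one increment runs at a time
        });
    });
}

// Option 2: make the counter itself atomic (#import <stdatomic.h>).
static _Atomic int counter = 0;
for (int i = 0; i < 10000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed); // lock-free atomic i++
    });
}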