iOS Internals Exploration 23: How GCD Works

libdispatch source code download

I. GCD Overview

1. Features of GCD

GCD (Grand Central Dispatch) is Apple's solution for parallel computation on multi-core hardware. Its main characteristics:

  • Written in C, and it exposes a rich set of powerful functions;
  • GCD automatically takes advantage of the available CPU cores (dual-core, quad-core, etc.) to improve performance;
  • GCD manages the thread life cycle for you - thread creation, task scheduling, and destruction;
  • Developers mostly just describe the tasks to execute, so with GCD you have comparatively little direct control over the threads themselves (a minimal usage sketch follows this list).
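
A minimal sketch of the usage pattern these points describe: hand the heavy work to GCD on a global concurrent queue, then hop back to the main queue for anything that must run on the main thread (the log statements are only for illustration):

    // GCD decides which thread actually runs this block.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        NSLog(@"doing work on %@", [NSThread currentThread]);
        // Work that must run on the main thread goes back to the main queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"back on the main thread: %@", [NSThread currentThread]);
        });
    });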

2. GCD Functions

  • block encapsulation
  • Async function - dispatch_async()
    1. Does not wait for the submitted block to finish; the statements after the call keep executing;
    2. May open a thread to execute the block;
    3. Async - the block normally runs on a background thread.
  • Sync function - dispatch_sync()
    1. The statements after the call only run once the submitted block has finished;
    2. Does not open a new thread;
    3. The block runs on the current thread (see the ordering sketch after this list).
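
A small sketch that makes the ordering difference visible (the queue here is an arbitrary custom serial queue, created only for the demo):

    dispatch_queue_t queue = dispatch_queue_create("demo.order", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queue, ^{
        NSLog(@"async block");   // usually printed AFTER "after async"
    });
    NSLog(@"after async");

    dispatch_sync(queue, ^{
        NSLog(@"sync block");    // always printed BEFORE "after sync"
    });
    NSLog(@"after sync");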

Timing cost

    CFAbsoluteTime time = CFAbsoluteTimeGetCurrent(); // CFAbsoluteTime is a typedef of CFTimeInterval
    
    dispatch_queue_t queue1 = dispatch_queue_create("my_queue001", DISPATCH_QUEUE_SERIAL); // elapsed ≈ 0.000007 (queue creation only)

//    dispatch_async(queue1, ^{
//
//    });// queue creation + async dispatch ≈ 0.000019

    dispatch_sync(queue1, ^{

    });// queue creation + sync dispatch ≈ 0.000009
    
    NSLog(@"耗时 == %f",CFAbsoluteTimeGetCurrent()-time);

Both sync and async dispatch have a small cost of their own --> but they are the tools we use to solve concurrency and multithreading problems.

3. Queues


A queue is a FIFO (first in, first out) structure.

  • Main queue - dispatch_get_main_queue()
    1. A serial queue dedicated to scheduling tasks on the main thread;
    2. Does not open a new thread - there is only one main thread;
    3. If the main thread is currently busy executing a task, nothing added to the main queue will be scheduled until that task finishes.
  • Global concurrent queue - dispatch_get_global_queue()
    • The 4 priority options for the global queue
    // e.g. dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
    
    #define DISPATCH_QUEUE_PRIORITY_HIGH 2
    #define DISPATCH_QUEUE_PRIORITY_DEFAULT 0
    #define DISPATCH_QUEUE_PRIORITY_LOW (-2)
    #define DISPATCH_QUEUE_PRIORITY_BACKGROUND INT16_MIN
    
  • Creating a queue (a barrier sketch follows after this list)
    /**
     Serial queue: DISPATCH_QUEUE_SERIAL
            DISPATCH_QUEUE_SERIAL_INACTIVE - created inactive by default
     
     Concurrent queue: DISPATCH_QUEUE_CONCURRENT  - supports barriers (dispatch_barrier_*)
            DISPATCH_QUEUE_CONCURRENT_INACTIVE - created inactive by default
     */
    dispatch_queue_t queue1 = dispatch_queue_create("my_queue001", DISPATCH_QUEUE_CONCURRENT);
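
Since DISPATCH_QUEUE_CONCURRENT is the attribute that enables barriers, here is a minimal barrier sketch (queue name and log strings are just for illustration). Note that dispatch_barrier_async only acts as a barrier on a custom concurrent queue, not on the global queues:

    dispatch_queue_t q = dispatch_queue_create("my_barrier_queue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(q, ^{ NSLog(@"read 1"); });
    dispatch_async(q, ^{ NSLog(@"read 2"); });
    // The barrier block waits for the blocks submitted above to finish,
    // runs exclusively, and only then lets later blocks run concurrently again.
    dispatch_barrier_async(q, ^{ NSLog(@"barrier - exclusive write"); });
    dispatch_async(q, ^{ NSLog(@"read 3"); });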

Combining the functions with the queues:

API usage for each function/queue combination:
// async dispatch + serial queue
- (void)my_asyncSerial {
    
    dispatch_queue_t serial = dispatch_queue_create("my_asyncSerial", DISPATCH_QUEUE_SERIAL);
    for (int i=0; i<20; i++) {
        dispatch_async(serial, ^{
            NSLog(@"%d -- %@",i,[NSThread currentThread]);
        });
    }
    NSLog(@"hello 串行异步");
}
/** Output:
                hello 串行异步
                0 -- {number = 6, name = (null)}
 entries 1~19 all print the same pattern: i -- {number = 6, name = (null)}
 */

// async dispatch + concurrent queue
- (void)my_asyncConcurrent {
    
    dispatch_queue_t serial = dispatch_queue_create("my_asyncConcurrent", DISPATCH_QUEUE_CONCURRENT);
    for (int i=0; i<20; i++) {
        dispatch_async(serial, ^{
            NSLog(@"%d -- %@",i,[NSThread currentThread]);
        });
    }
    NSLog(@"hello 并发异步");
}
/** Output:
 12:38:03.386791+0800 DemoEmpty_iOS[18284:1566736] 0 -- {number = 3, name = (null)}
 12:38:03.386807+0800 DemoEmpty_iOS[18284:1566616] hello 并发异步
 12:38:03.386849+0800 DemoEmpty_iOS[18284:1568901] 1 -- {number = 7, name = (null)}
 12:38:03.386886+0800 DemoEmpty_iOS[18284:1568902] 2 -- {number = 8, name = (null)}
 12:38:03.386927+0800 DemoEmpty_iOS[18284:1568903] 3 -- {number = 9, name = (null)}
 12:38:03.386971+0800 DemoEmpty_iOS[18284:1566736] 5 -- {number = 3, name = (null)}
 12:38:03.386986+0800 DemoEmpty_iOS[18284:1568904] 4 -- {number = 10, name = (null)}
 12:38:03.387029+0800 DemoEmpty_iOS[18284:1568901] 7 -- {number = 7, name = (null)}
 12:38:03.387053+0800 DemoEmpty_iOS[18284:1568905] 6 -- {number = 11, name = (null)}
 12:38:03.387084+0800 DemoEmpty_iOS[18284:1568906] 8 -- {number = 12, name = (null)}
 12:38:03.387094+0800 DemoEmpty_iOS[18284:1568902] 10 -- {number = 8, name = (null)}
 12:38:03.387117+0800 DemoEmpty_iOS[18284:1568907] 9 -- {number = 13, name = (null)}
 12:38:03.387127+0800 DemoEmpty_iOS[18284:1568903] 11 -- {number = 9, name = (null)}
 ... ...
*/

// sync dispatch + serial queue
- (void)my_syncSerial {
    
    dispatch_queue_t serial = dispatch_queue_create("my_syncSerial", DISPATCH_QUEUE_SERIAL);
    for (int i=0; i<20; i++) {
        dispatch_sync(serial, ^{
            NSLog(@"%d -- %@",i,[NSThread currentThread]);
        });
    }
    NSLog(@"hello 串行同步");
}
/** Output  // number = 1 is the main thread
 12:42:47.728512+0800 DemoEmpty_iOS[18284:1566616] 0 -- {number = 1, name = main}
 12:42:47.728665+0800 DemoEmpty_iOS[18284:1566616] 1 -- {number = 1, name = main}
 ... the entries in between follow the same pattern ...
 12:42:47.738477+0800 DemoEmpty_iOS[18284:1566616] 18 -- {number = 1, name = main}
 12:42:47.738577+0800 DemoEmpty_iOS[18284:1566616] 19 -- {number = 1, name = main}
 12:42:47.738681+0800 DemoEmpty_iOS[18284:1566616] hello 串行同步
 */

// sync dispatch + concurrent queue
- (void)my_syncConcurrent {
    
    dispatch_queue_t serial = dispatch_queue_create("my_syncConcurrent", DISPATCH_QUEUE_CONCURRENT);
    for (int i=0; i<20; i++) {
        dispatch_sync(serial, ^{
            NSLog(@"%d -- %@",i,[NSThread currentThread]);
        });
    }
    NSLog(@"hello 并发同步");
}
/** Output
 12:45:43.056685+0800 DemoEmpty_iOS[18284:1566616] 0 -- {number = 1, name = main}
 12:45:43.056852+0800 DemoEmpty_iOS[18284:1566616] 1 -- {number = 1, name = main}
 ... ...
 12:45:43.066501+0800 DemoEmpty_iOS[18284:1566616] 18 -- {number = 1, name = main}
 12:45:43.066636+0800 DemoEmpty_iOS[18284:1566616] 19 -- {number = 1, name = main}
 12:45:43.066739+0800 DemoEmpty_iOS[18284:1566616] hello 并发同步
 */

II. GCD Internals Analysis

1. Queues

1. Creating queues

/** Create a queue
     (lldb) p  queue01
     (OS_dispatch_queue_concurrent *) $0 = 0x0000600000501180
*/
dispatch_queue_t queue01 = dispatch_queue_create("create_my_queue01", DISPATCH_QUEUE_CONCURRENT);
// Main queue - serial
dispatch_queue_t mainQueue = dispatch_get_main_queue();
// Global concurrent queue
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
NSLog(@"%@ ~ %@ ~ %@",queue01,mainQueue,globalQueue);
// Prints roughly: <OS_dispatch_queue_concurrent: create_my_queue01> ~ <OS_dispatch_queue_main: com.apple.main-thread> ~ <OS_dispatch_queue_global: com.apple.root.default-qos>

1.1 The main queue - dispatch_get_main_queue()

We never pass a label when getting the main queue, so the label com.apple.main-thread must be set somewhere inside libdispatch:

/*!
 * @function dispatch_get_main_queue
 *
 * @abstract
 * Returns the default queue that is bound to the main thread.
 *
 * @discussion
 * In order to invoke blocks submitted to the main queue, the application must
 * call dispatch_main(), NSApplicationMain(), or use a CFRunLoop on the main
 * thread.
 *
 * The main queue is meant to be used in application context to interact with
 * the main thread and the main runloop.
 *
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI apps
 * (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
// The main queue is created automatically before main() is called; it is used in the application context to interact with the main thread and the main run loop.
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}
// The _dispatch_main_q structure:
struct dispatch_queue_static_s _dispatch_main_q = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
    .do_targetq = _dispatch_get_default_queue(true),
#endif
    .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
            DISPATCH_QUEUE_ROLE_BASE_ANON,
    .dq_label = "com.apple.main-thread",
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
    .dq_serialnum = 1,
};

dispatch_get_main_queue is created before main() is called --> dyld --> see the earlier article "OC 底层探索 12、应用程序加载" (application loading).
In the source, the initialization happens in libdispatch_init:

DISPATCH_EXPORT DISPATCH_NOTHROW
void
libdispatch_init(void)
{

    // ...... more code omitted ......
    #if DISPATCH_USE_RESOLVERS // rdar://problem/8541707
    _dispatch_main_q.do_targetq = _dispatch_get_default_queue(true);
#endif
    // Bind the main queue to the main thread
    _dispatch_queue_set_current(&_dispatch_main_q);
    _dispatch_queue_set_bound_thread(&_dispatch_main_q);

#if DISPATCH_USE_PTHREAD_ATFORK
    (void)dispatch_assume_zero(pthread_atfork(dispatch_atfork_prepare,
            dispatch_atfork_parent, dispatch_atfork_child));
#endif
    _dispatch_hw_config_init();
    _dispatch_time_init();
    _dispatch_vtable_init();
    _os_object_init();
    _voucher_init();
    _dispatch_introspection_init();
}

DQF_WIDTH()

// DQF_WIDTH() :
#define DQF_FLAGS_MASK        ((dispatch_queue_flags_t)0xffff0000)
#define DQF_WIDTH_MASK        ((dispatch_queue_flags_t)0x0000ffff)
#define DQF_WIDTH(n)          ((dispatch_queue_flags_t)(uint16_t)(n))

The main queue is a serial queue: its DQF_WIDTH(1) means a dispatch width (stored in the dispatch_queue_flags) of 1.
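
A quick way to sanity-check that label from app code (dispatch_queue_get_label and DISPATCH_CURRENT_QUEUE_LABEL are public APIs in <dispatch/queue.h>):

    // On the main thread this prints "com.apple.main-thread".
    NSLog(@"current queue label: %s", dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL));
    // Or read the label directly off the queue object:
    NSLog(@"main queue label: %s", dispatch_queue_get_label(dispatch_get_main_queue()));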

1.2 The global concurrent queue - dispatch_get_global_queue()

1.2.1) Verifying that dispatch_get_global_queue() returns a concurrent queue

In the source project, do a global search for com.apple.root.default-qos:

#if DISPATCH_USE_INTERNAL_WORKQUEUE
static struct dispatch_pthread_root_queue_context_s
        _dispatch_pthread_root_queue_contexts[DISPATCH_ROOT_QUEUE_COUNT];
#define _dispatch_root_queue_ctxt(n) &_dispatch_pthread_root_queue_contexts[n]
#else
#define _dispatch_root_queue_ctxt(n) NULL
#endif // DISPATCH_USE_INTERNAL_WORKQUEUE

// 6618342 Contact the team that owns the Instrument DTrace probe before
//         renaming this symbol
struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
        ((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
        DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
    [_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
        DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
        .dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
        .do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
        .dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
        .dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
                _dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
                _dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
        __VA_ARGS__ \
    }
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
        .dq_label = "com.apple.root.maintenance-qos",
        .dq_serialnum = 4,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.maintenance-qos.overcommit",
        .dq_serialnum = 5,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
        .dq_label = "com.apple.root.background-qos",
        .dq_serialnum = 6,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.background-qos.overcommit",
        .dq_serialnum = 7,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
        .dq_label = "com.apple.root.utility-qos",
        .dq_serialnum = 8,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.utility-qos.overcommit",
        .dq_serialnum = 9,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
        .dq_label = "com.apple.root.default-qos",
        .dq_serialnum = 10,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
            DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.default-qos.overcommit",
        .dq_serialnum = 11,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
        .dq_label = "com.apple.root.user-initiated-qos",
        .dq_serialnum = 12,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-initiated-qos.overcommit",
        .dq_serialnum = 13,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
        .dq_label = "com.apple.root.user-interactive-qos",
        .dq_serialnum = 14,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-interactive-qos.overcommit",
        .dq_serialnum = 15,
    ),
};

From the above, _dispatch_root_queues[] is a static array containing a global concurrent queue for every QoS level (plus their overcommit variants).
The global concurrent queues use DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL)
--> so their maximum width is DISPATCH_QUEUE_WIDTH_POOL = 0x1000ull - 1 = 4096 - 1 = 4095.

1.2.2) Source for creating the global concurrent queue
dispatch_queue_global_t
dispatch_get_global_queue(long priority, unsigned long flags)
{
    dispatch_assert(countof(_dispatch_root_queues) ==
            DISPATCH_ROOT_QUEUE_COUNT);

    if (flags & ~(unsigned long)DISPATCH_QUEUE_OVERCOMMIT) {
        return DISPATCH_BAD_INPUT;
    }
    // qos (quality of service) <-- derived from the legacy priority
    dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == QOS_CLASS_MAINTENANCE) {
        qos = DISPATCH_QOS_BACKGROUND;
    } else if (qos == QOS_CLASS_USER_INTERACTIVE) {
        qos = DISPATCH_QOS_USER_INITIATED;
    }
#endif
    if (qos == DISPATCH_QOS_UNSPECIFIED) {
        return DISPATCH_BAD_INPUT;
    }
    // fetched via _dispatch_get_root_queue
    return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
}
// _dispatch_get_root_queue()
DISPATCH_ALWAYS_INLINE DISPATCH_CONST
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
    if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
        DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
    }
    return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}
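
A worked example of that index math (using the DISPATCH_QOS_* values in this libdispatch version, where MAINTENANCE = 1 through USER_INTERACTIVE = 6): DISPATCH_QUEUE_PRIORITY_DEFAULT maps to DISPATCH_QOS_DEFAULT = 4, and with overcommit = 0 the index is 2 * (4 - 1) + 0 = 6, which is exactly the "com.apple.root.default-qos" entry (dq_serialnum = 10) in the _dispatch_root_queues[] table above.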

The _dispatch_root_queues[] definition is shown above; in it we can find:
.do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags))
which expands to:

#if DISPATCH_USE_INTERNAL_WORKQUEUE
static struct dispatch_pthread_root_queue_context_s
        _dispatch_pthread_root_queue_contexts[DISPATCH_ROOT_QUEUE_COUNT];
#define _dispatch_root_queue_ctxt(n) &_dispatch_pthread_root_queue_contexts[n]
#else
#define _dispatch_root_queue_ctxt(n) NULL
#endif // DISPATCH_USE_INTERNAL_WORKQUEUE

1.3 Custom queues - dispatch_queue_create()

Open the libdispatch source project and do a global search for "dispatch_queue_create(const":

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    return _dispatch_lane_create_with_target(label, attr,
            DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

_dispatch_lane_create_with_target()

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // dqa: the serial/concurrent attribute passed in by the caller
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    //
    // Step 1: Normalize arguments (qos, overcommit, tq)
    //

    dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
    }
    if (qos == DISPATCH_QOS_MAINTENANCE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
    }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

    _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
        if (tq->do_targetq) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                    "a non-global target queue");
        }
    }

    if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
        // Handle discrepancies between attr and target queue, attributes win
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                overcommit = _dispatch_queue_attr_overcommit_enabled;
            } else {
                overcommit = _dispatch_queue_attr_overcommit_disabled;
            }
        }
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            qos = _dispatch_priority_qos(tq->dq_priority);
        }
        tq = NULL;
    } else if (tq && !tq->do_targetq) {
        // target is a pthread or runloop root queue, setting QoS or overcommit
        // is disallowed
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                    "and use this kind of target queue");
        }
    } else {
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // Serial queues default to overcommit!
            overcommit = dqai.dqai_concurrent ?
                    _dispatch_queue_attr_overcommit_disabled :
                    _dispatch_queue_attr_overcommit_enabled;
        }
    }

    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
        if (unlikely(!tq)) {
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }

    //
    // Step 2: Initialize the queue
    //

    if (legacy) {
        // if any of these attributes is specified, use non legacy classes
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;

     /**  vtable name:
     #define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
     #define DISPATCH_OBJC_CLASS(name)  (&DISPATCH_CLASS_SYMBOL(name))
     #define DISPATCH_CLASS_SYMBOL(name) OS_dispatch_##name##_class
     */
    if (dqai.dqai_concurrent) {
        // OS_dispatch_##name --> OS_dispatch_queue_concurrent
        // same class name as queue01 created earlier: OS_dispatch_queue_concurrent
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    switch (dqai.dqai_autorelease_frequency) {
    case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
        dqf |= DQF_AUTORELEASE_NEVER;
        break;
    case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
        dqf |= DQF_AUTORELEASE_ALWAYS;
        break;
    }
    if (label) {
        const char *tmp = _dispatch_strdup_if_mutable(label);
        if (tmp != label) {
            dqf |= DQF_LABEL_NEEDS_FREE;
            label = tmp;
        }
    }

    // The function eventually returns ._dq; dq is created below
    // alloc
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // init
    /**
     dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1
     If the queue is concurrent, its width is DISPATCH_QUEUE_WIDTH_MAX; otherwise (serial) it is 1.
     
     #define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2)
     #define DISPATCH_QUEUE_WIDTH_FULL          0x1000ull
    */
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
    
    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq; // tq is the target/template queue
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}
1.3.1 Creating the queue object
1) alloc

1.1 The class name
_dispatch_object_alloc(vtable, sizeof(struct dispatch_lane_s))
-->vtable-->DISPATCH_VTABLE(queue_concurrent)

-->`#define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)`
-->`#define DISPATCH_OBJC_CLASS(name)   (&DISPATCH_CLASS_SYMBOL(name))`
-->`#define DISPATCH_CLASS_SYMBOL(name) OS_dispatch_##name##_class`

So when the queue object is created, its class name is spliced together from these macro definitions.

1.2 The alloc flow

void *
_dispatch_object_alloc(const void *vtable, size_t size)
{
#if OS_OBJECT_HAVE_OBJC1
    const struct dispatch_object_vtable_s *_vtable = vtable;
    dispatch_object_t dou;
    dou._os_obj = _os_object_alloc_realized(_vtable->_os_obj_objc_isa, size);
    dou._do->do_vtable = vtable;
    return dou._do;
#else
    return _os_object_alloc_realized(vtable, size);
#endif
}
inline _os_object_t
_os_object_alloc_realized(const void *cls, size_t size)
{
    _os_object_t obj;
    dispatch_assert(size >= sizeof(struct _os_object_s));
    while (unlikely(!(obj = calloc(1u, size)))) {
        _dispatch_temporary_resource_shortage();
    }
    obj->os_obj_isa = cls;
    return obj;
}

As shown above, creating a queue works just like creating an ordinary object:
a queue is also an object --> created via alloc + init
--> during alloc, the class (cls) differs only in the name spliced together by the macro definitions;
the object's os_obj_isa points to that cls.

2)init

_dispatch_queue_init(): --> the init step for dq:

// Note to later developers: ensure that any initialization changes are
// made for statically allocated queues (i.e. _dispatch_main_q).
static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
        uint16_t width, uint64_t initial_state_bits)
{
    uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
    dispatch_queue_t dq = dqu._dq;

    dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
            DISPATCH_QUEUE_INACTIVE)) == 0);

    if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
        dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
        if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
            dq->do_ref_cnt++; // released when DSF_DELETED is set
        }
    }

    dq_state |= initial_state_bits;
    dq->do_next = DISPATCH_OBJECT_LISTLESS;
    dqf |= DQF_WIDTH(width);
    os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
    dq->dq_state = dq_state;
    dq->dq_serialnum =
            os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
    return dqu;
}
#define _dispatch_queue_alloc(name, dqf, w, initial_state_bits) \
        _dispatch_queue_init(_dispatch_object_alloc(DISPATCH_VTABLE(name),\
                sizeof(struct dispatch_##name##_s)), dqf, w, initial_state_bits)

After dq has gone through alloc/init and its label and priority (qos) have been set, processing continues.

dqai.dqai_concurrent decides whether the queue is concurrent.

The type of dqai is the struct dispatch_queue_attr_info_t:

typedef struct dispatch_queue_attr_info_s {
    dispatch_qos_t dqai_qos : 8;
    int      dqai_relpri : 8;
    uint16_t dqai_overcommit:2;
    uint16_t dqai_autorelease_frequency:2;
    uint16_t dqai_concurrent:1;
    uint16_t dqai_inactive:1;
} dispatch_queue_attr_info_t;

The function that builds dqai:

dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
    dispatch_queue_attr_info_t dqai = { };

    if (!dqa) return dqai;

#if DISPATCH_VARIANT_STATIC
    if (dqa == &_dispatch_queue_attr_concurrent) {
        dqai.dqai_concurrent = true; // concurrent only in this case; everything else ends up serial
        return dqai;
    }
#endif

    ... more code omitted ...
    return dqai;
}
3)create

_dispatch_trace_queue_create()
-->_dispatch_introspection_queue_create()
-->_dispatch_introspection_queue_create_hook
-->dispatch_introspection_queue_get_info
-->_dispatch_introspection_lane_get_info

DISPATCH_ALWAYS_INLINE
static inline dispatch_introspection_queue_s
_dispatch_introspection_lane_get_info(dispatch_lane_class_t dqu)
{
    dispatch_lane_t dq = dqu._dl;
    bool global = _dispatch_object_is_global(dq);
    uint64_t dq_state = os_atomic_load2o(dq, dq_state, relaxed);

    dispatch_introspection_queue_s diq = {
        .queue = dq->_as_dq,
        .target_queue = dq->do_targetq,
        .label = dq->dq_label,
        .serialnum = dq->dq_serialnum,
        .width = dq->dq_width,
        .suspend_count = _dq_state_suspend_cnt(dq_state) + dq->dq_side_suspend_cnt,
        .enqueued = _dq_state_is_enqueued(dq_state) && !global,
        .barrier = _dq_state_is_in_barrier(dq_state) && !global,
        .draining = (dq->dq_items_head == (void*)~0ul) ||
                (!dq->dq_items_head && dq->dq_items_tail),
        .global = global,
        .main = dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE,
    };
    return diq;
}

From the above we can also see that every queue is created against a template (its target queue).

§ Queue summary:

1:

  • dispatch_queue_create()
    1. Whether a serial or a concurrent queue is created depends on the DISPATCH_QUEUE_xxx attribute passed in; the source turns it into dqai_concurrent being true or false
      --> which then determines the width: dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1
    2. Maximum width of a custom queue: DISPATCH_QUEUE_WIDTH_MAX = 0x1000ull - 2 = 4094
  • dispatch_get_global_queue()
    1. Global concurrent queues - a static array holding one queue per qos (plus overcommit variants)
    2. Maximum width: DISPATCH_QUEUE_WIDTH_POOL = 0x1000ull - 1 = 4095
  • dispatch_get_main_queue()
    1. The main queue - a serial queue - DQF_WIDTH(1)
    2. Created automatically before main(); used in the application context to interact with the main thread and the main run loop.

2:

  • A queue is also an object
  • Every queue is created from a template (its target queue) - a quick class-name check is sketched below.
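
A small check of both points - printing each queue's Objective-C class with the runtime (requires #import <objc/runtime.h>; the expected class names below are based on the lldb output earlier in this article and may vary across OS versions):

    dispatch_queue_t serialQ = dispatch_queue_create("demo.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concurrentQ = dispatch_queue_create("demo.concurrent", DISPATCH_QUEUE_CONCURRENT);
    // Expected: OS_dispatch_queue_serial / OS_dispatch_queue_concurrent /
    //           OS_dispatch_queue_main / OS_dispatch_queue_global
    NSLog(@"%s / %s / %s / %s",
          object_getClassName(serialQ),
          object_getClassName(concurrentQ),
          object_getClassName(dispatch_get_main_queue()),
          object_getClassName(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)));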

2. Functions

Source analysis of dispatch_async().

#ifdef __BLOCKS__
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;
    // qos (quality of service) comes from _dispatch_continuation_init
    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
#endif

work: the block task;
dq: the queue it is submitted to.

2.1 How the block is called back internally - stack analysis

Check the stack trace:

dispatch_async(queue01, ^{
        NSLog(@"123456789");// 将断点打在这 然后 bt 一下 当前的堆栈信息
});

/*
(lldb) bt
* thread #4, queue = 'create_my_queue01', stop reason = breakpoint 2.1
  * frame #0: 0x0000000103cc6bd7 DemoEmpty_iOS`__32-[MyGCDController my_aboutQueue]_block_invoke(.block_descriptor=0x0000000103cca0c8) at MyGCDController.m:165:9
    frame #1: 0x0000000103fe8dd4 libdispatch.dylib`_dispatch_call_block_and_release + 12
    frame #2: 0x0000000103fe9d48 libdispatch.dylib`_dispatch_client_callout + 8
    frame #3: 0x0000000103fec6ba libdispatch.dylib`_dispatch_continuation_pop + 552
    frame #4: 0x0000000103febac5 libdispatch.dylib`_dispatch_async_redirect_invoke + 849
    frame #5: 0x0000000103ffb28c libdispatch.dylib`_dispatch_root_queue_drain + 351
    frame #6: 0x0000000103ffbb96 libdispatch.dylib`_dispatch_worker_thread2 + 132
    frame #7: 0x00007fff524636b6 libsystem_pthread.dylib`_pthread_wqthread + 220
    frame #8: 0x00007fff52462827 libsystem_pthread.dylib`start_wqthread + 15
(lldb) 
*/

Flow: starting from _dispatch_worker_thread2 in libdispatch.dylib
--> _dispatch_root_queue_drain:

DISPATCH_NOT_TAIL_CALLED // prevent tailcall (for Instrument DTrace probe)
static void
_dispatch_root_queue_drain(dispatch_queue_global_t dq,
        dispatch_priority_t pri, dispatch_invoke_flags_t flags)
{
    _dispatch_queue_set_current(dq);
    _dispatch_init_basepri(pri);
    _dispatch_adopt_wlh_anon();

    struct dispatch_object_s *item;
    bool reset = false;
    dispatch_invoke_context_s dic = { };
#if DISPATCH_COCOA_COMPAT
    _dispatch_last_resort_autorelease_pool_push(&dic);
#endif // DISPATCH_COCOA_COMPAT
    _dispatch_queue_drain_init_narrowing_check_deadline(&dic, pri);
    _dispatch_perfmon_start();

    // key loop: drain items off the root queue
    while (likely(item = _dispatch_root_queue_drain_one(dq))) {
        if (reset) _dispatch_wqthread_override_reset();
        // 
        _dispatch_continuation_pop_inline(item, &dic, flags, dq);
        reset = _dispatch_reset_basepri_override();
        if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
            break;
        }
    }

    // overcommit or not. worker thread
    if (pri & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
        _dispatch_perfmon_end(perfmon_thread_worker_oc);
    } else {
        _dispatch_perfmon_end(perfmon_thread_worker_non_oc);
    }

#if DISPATCH_COCOA_COMPAT
    // 
    _dispatch_last_resort_autorelease_pool_pop(&dic);
#endif // DISPATCH_COCOA_COMPAT
    _dispatch_reset_wlh();
    _dispatch_clear_basepri();
    _dispatch_queue_set_current(NULL);
}

_dispatch_continuation_pop_inline():

DISPATCH_ALWAYS_INLINE_NDEBUG
static inline void
_dispatch_continuation_pop_inline(dispatch_object_t dou,
        dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
        dispatch_queue_class_t dqu)
{
    dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
            _dispatch_get_pthread_root_queue_observer_hooks();
    if (observer_hooks) observer_hooks->queue_will_execute(dqu._dq);
    flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
    if (_dispatch_object_has_vtable(dou)) {
        dx_invoke(dou._dq, dic, flags);
    } else {
        _dispatch_continuation_invoke_inline(dou, flags, dqu);
    }
    if (observer_hooks) observer_hooks->queue_did_execute(dqu._dq);
}

_dispatch_continuation_invoke_inline():

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou,
        dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
{
    dispatch_continuation_t dc = dou._dc, dc1;
    dispatch_invoke_with_autoreleasepool(flags, {
        uintptr_t dc_flags = dc->dc_flags;
        // Add the item back to the cache before calling the function. This
        // allows the 'hot' continuation to be used for a quick callback.
        //
        // The ccache version is per-thread.
        // Therefore, the object has not been reused yet.
        // This generates better assembly.
        _dispatch_continuation_voucher_adopt(dc, dc_flags);
        if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
            _dispatch_trace_item_pop(dqu, dou);
        }
        if (dc_flags & DC_FLAG_CONSUME) {
            dc1 = _dispatch_continuation_free_cacheonly(dc);
        } else {
            dc1 = NULL;
        }
        if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
            _dispatch_continuation_with_group_invoke(dc);
        } else {
            _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
            _dispatch_trace_item_complete(dc);
        }
        if (unlikely(dc1)) {
            _dispatch_continuation_free_to_cache_limit(dc1);
        }
    });
    _dispatch_perfmon_workitem_inc();
}

_dispatch_client_callout():

#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
    @try {
        return f(ctxt); // invoke the function (the block)
    }
    @catch (...) {
        objc_terminate();
    }
}

The above verifies how a GCD block is invoked in the underlying source.

* Execution flow of a GCD block task:

Starting from _dispatch_worker_thread2 in libdispatch.dylib
--> _dispatch_root_queue_drain
--> _dispatch_continuation_pop_inline --> _dispatch_continuation_invoke_inline
--> _dispatch_client_callout(): return f(ctxt); // the block callback

2.2 Source analysis

2.2.1 qos: _dispatch_continuation_init()
  • --> wraps the block task once and returns the qos;
  • --> the block task work is processed and stored in dc:
    dc->dc_func = f; dc->dc_ctxt = ctxt;

The source:

DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu, dispatch_block_t work,
        dispatch_block_flags_t flags, uintptr_t dc_flags)
{
    // ctxt = a copy of the block work
    void *ctxt = _dispatch_Block_copy(work);

    dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
    if (unlikely(_dispatch_block_has_private_data(work))) {
    // this branch is taken relatively rarely
        dc->dc_flags = dc_flags;
        // store ctxt in dc->dc_ctxt (dc was allocated by the caller)
        dc->dc_ctxt = ctxt;
        // will initialize all fields but requires dc_flags & dc_ctxt to be set
        return _dispatch_continuation_init_slow(dc, dqu, flags);
    }

    // get the invoke function pointer of work via _dispatch_Block_invoke
    dispatch_function_t func = _dispatch_Block_invoke(work);
    if (dc_flags & DC_FLAG_CONSUME) {
        // swap in the wrapper that calls the block and then releases it
        func = _dispatch_call_block_and_release;
    }
    return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

// _dispatch_continuation_init_f() 
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
        dispatch_block_flags_t flags, uintptr_t dc_flags)
{
    pthread_priority_t pp = 0;
    dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
    
    // wrap func and ctxt -->
    dc->dc_func = f;    // store the function for work
    dc->dc_ctxt = ctxt; // store the copied block
    // in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
    // should not be propagated, only taken from the handler if it has one
    if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
        pp = _dispatch_priority_propagate();
    }
    _dispatch_continuation_voucher_set(dc, flags);
    return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}
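
For reference, _dispatch_call_block_and_release - the func swapped in when DC_FLAG_CONSUME is set - is essentially the following in the libdispatch source:

    void
    _dispatch_call_block_and_release(void *block)
    {
        dispatch_block_t b = block;
        b();               // invoke the copied block
        Block_release(b);  // balance the _dispatch_Block_copy in _dispatch_continuation_init
    }
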
2.2.2 _dispatch_continuation_async():
// _dispatch_continuation_async() :
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos);
}

dx_push-->dq_push:

#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

Search globally for dq_push:


Taking the concurrent queue as an example, search for _dispatch_lane_concurrent_push:

DISPATCH_NOINLINE
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
        dispatch_qos_t qos)
{
    //  reserving non barrier width
    // doesn't fail if only the ENQUEUED bit is set (unlike its barrier
    // width equivalent), so we have to check that this thread hasn't
    // enqueued anything ahead of this call or we can break ordering
    // fast path: nothing queued yet, not a waiter, not a barrier, and the queue can acquire async (non-barrier) width
    if (dq->dq_items_tail == NULL &&
            !_dispatch_object_is_waiter(dou) &&
            !_dispatch_object_is_barrier(dou) &&
            _dispatch_queue_try_acquire_async(dq)) {
        return _dispatch_continuation_redirect_push(dq, dou, qos);
    }

    _dispatch_lane_push(dq, dou, qos);
}

1)_dispatch_continuation_redirect_push():

DISPATCH_NOINLINE
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
        dispatch_object_t dou, dispatch_qos_t qos)
{
    if (likely(!_dispatch_object_is_redirection(dou))) {
        dou._dc = _dispatch_async_redirect_wrap(dl, dou);
    } else if (!dou._dc->dc_ctxt) {
        // find first queue in descending target queue order that has
        // an autorelease frequency set, and use that as the frequency for
        // this continuation.
        dou._dc->dc_ctxt = (void *)
        (uintptr_t)_dispatch_queue_autorelease_frequency(dl);
    }

    dispatch_queue_t dq = dl->do_targetq;
    if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
    dx_push(dq, dou, qos);
    // dx_push again - recursing toward the target (root) queue
}

Given how dx_push() keeps redirecting to the target queue, and what we saw about queues above, this resembles method lookup walking class --> superclass --> NSObject.

How to verify: set symbolic breakpoints and watch which low-level functions get hit along the way (a sketch of the lldb commands follows).
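
A sketch of that verification, assuming the app is stopped at a dispatch_async call to a custom concurrent queue; the function names are the ones found in the source above:

    (lldb) breakpoint set -n _dispatch_lane_concurrent_push
    (lldb) breakpoint set -n _dispatch_continuation_redirect_push
    (lldb) breakpoint set -n _dispatch_root_queue_push
    (lldb) continue

Continue repeatedly and note the order in which these breakpoints fire.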

a) After enough dx_push() hops, execution should land in --> _dispatch_root_queue_push:

DISPATCH_NOINLINE
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
        dispatch_qos_t qos)
{
#if DISPATCH_USE_KEVENT_WORKQUEUE
    dispatch_deferred_items_t ddi = _dispatch_deferred_items_get();
    if (unlikely(ddi && ddi->ddi_can_stash)) {
        dispatch_object_t old_dou = ddi->ddi_stashed_dou;
        dispatch_priority_t rq_overcommit;
        rq_overcommit = rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;

        if (likely(!old_dou._do || rq_overcommit)) {
            dispatch_queue_global_t old_rq = ddi->ddi_stashed_rq;
            dispatch_qos_t old_qos = ddi->ddi_stashed_qos;
            ddi->ddi_stashed_rq = rq;
            ddi->ddi_stashed_dou = dou;
            ddi->ddi_stashed_qos = qos;
            _dispatch_debug("deferring item %p, rq %p, qos %d",
                    dou._do, rq, qos);
            if (rq_overcommit) {
                ddi->ddi_can_stash = false;
            }
            if (likely(!old_dou._do)) {
                return;
            }
            // push the previously stashed item
            qos = old_qos;
            rq = old_rq;
            dou = old_dou;
        }
    }
#endif
#if HAVE_PTHREAD_WORKQUEUE_QOS
    if (_dispatch_root_queue_push_needs_override(rq, qos)) {
        return _dispatch_root_queue_push_override(rq, dou, qos);
    }
#else
    (void)qos;
#endif
    _dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

_dispatch_root_queue_push_inline()
--> _dispatch_root_queue_poke()

b) --> _dispatch_root_queue_poke_slow()

DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    int remaining = n;
    int r = ENOSYS;

    _dispatch_root_queues_init();
    _dispatch_debug_root_queue(dq, __func__);
    _dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);

#if !DISPATCH_USE_INTERNAL_WORKQUEUE // is the internal workqueue implementation in use?
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) // is this a global root queue?
#endif
    {
        _dispatch_root_queue_debug("requesting new worker thread for global "
                "queue: %p", dq);
        // ask the pthread workqueue subsystem to add worker threads
        r = _pthread_workqueue_addthreads(remaining,
                _dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
        (void)dispatch_assume_zero(r);
        return;
    }
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL // fall back to libdispatch's own pthread pool
    dispatch_pthread_root_queue_context_t pqc = dq->do_ctxt;
    if (likely(pqc->dpq_thread_mediator.do_vtable)) {
        while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) {
            _dispatch_root_queue_debug("signaled sleeping worker for "
                    "global queue: %p", dq);
            if (!--remaining) {
                return;
            }
        }
    }

    bool overcommit = dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    if (overcommit) {
        os_atomic_add2o(dq, dgq_pending, remaining, relaxed);
    } else {
        if (!os_atomic_cmpxchg2o(dq, dgq_pending, 0, remaining, relaxed)) {
            _dispatch_root_queue_debug("worker thread request still pending for "
                    "global queue: %p", dq);
            return;
        }
    }

    int can_request, t_count;
    // seq_cst with atomic store to tail 
    t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
    do {
        can_request = t_count < floor ? 0 : t_count - floor;
        if (remaining > can_request) {
            _dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
                    remaining, can_request);
            os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
            remaining = can_request;
        }
        if (remaining == 0) {
            _dispatch_root_queue_debug("pthread pool is full for root queue: "
                    "%p", dq);
            return;
        }
    } while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
            t_count - remaining, &t_count, acquire));

#if !defined(_WIN32)
    pthread_attr_t *attr = &pqc->dpq_thread_attr;
    pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        pthr = _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
        // create a worker thread via pthread_create
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
#else // defined(_WIN32)
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
#if DISPATCH_DEBUG
        unsigned dwStackSize = 0;
#else
        unsigned dwStackSize = 64 * 1024;
#endif
        uintptr_t hThread = 0;
        while (!(hThread = _beginthreadex(NULL, dwStackSize, _dispatch_worker_thread_thunk, dq, STACK_SIZE_PARAM_IS_A_RESERVATION, NULL))) {
            if (errno != EAGAIN) {
                (void)dispatch_assume(hThread);
            }
            _dispatch_temporary_resource_shortage();
        }
        if (_dispatch_mgr_sched.prio > _dispatch_mgr_sched.default_prio) {
            (void)dispatch_assume_zero(SetThreadPriority((HANDLE)hThread, _dispatch_mgr_sched.prio) == TRUE);
        }
        CloseHandle((HANDLE)hThread);
    } while (--remaining);
#endif // defined(_WIN32)
#else
    (void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}

The source above shows where worker threads are requested and created.

Question: we already saw above how the block task gets called back, but why does the callback path start at _dispatch_worker_thread2?

Look at _dispatch_root_queues_init():

DISPATCH_STATIC_GLOBAL(dispatch_once_t _dispatch_root_queues_pred);
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queues_init(void)
{
    dispatch_once_f(&_dispatch_root_queues_pred, NULL,
            _dispatch_root_queues_init_once);
}

_dispatch_root_queues_init_once
--> in every branch (except the crash paths), the work callback handler is set to _dispatch_worker_thread2:

static void
_dispatch_root_queues_init_once(void *context DISPATCH_UNUSED)
{
    // ... large amount of code omitted ...

    // ... various setup work ...
    
    cfg.workq_cb = _dispatch_worker_thread2;
    // ... more code omitted ...
}

That covers the async function; the sync function is analyzed in the next article, "OC底层原理24、GCD 的应用" (GCD in practice).
