Thread Synchronization in iOS, Part 1
Thread Synchronization in iOS, Part 2
NSLock
An object-oriented wrapper around an ordinary (non-recursive) pthread_mutex; the object interface makes it more convenient to use:
- (void)lock;
- (void)unlock;
- (BOOL)tryLock;
- (BOOL)lockBeforeDate:(NSDate *)limit;
NSRecursiveLock
A wrapper around a recursive pthread_mutex, so the same thread can acquire it repeatedly without deadlocking; its methods are identical to NSLock's.
NSCondition
A wrapper around pthread_cond. A raw pthread_cond must be used together with a pthread_mutex; NSCondition packages the pair, so a single object does both jobs:
NSCondition *lock = [[NSCondition alloc] init];
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:^{
    [lock lock];
    while (self.condition_value <= 0) { // while the condition is not met, release the lock and wait
        [lock wait];
    }
    NSLog(@"%@===read===start",[NSThread currentThread]);
    sleep(2);
    NSLog(@"%@===read===end",[NSThread currentThread]);
    [lock unlock];
}];
[queue addOperationWithBlock:^{
    [lock lock];
    NSLog(@"%@===write===start",[NSThread currentThread]);
    sleep(3);
    self.condition_value = 1; // the condition must change, or the read thread would wake and wait again
    NSLog(@"%@===write===end",[NSThread currentThread]);
    [lock signal]; // wake one waiting thread, before unlocking
    // [lock broadcast]; // wake all waiting threads
    [lock unlock];
}];
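For comparison, here is the raw pthread pattern that NSCondition wraps: a mutex paired with a condition variable, with the wait inside a while loop to guard against spurious wakeups (a minimal sketch; the reader/writer roles mirror the blocks above):

```c
#include <pthread.h>

// The mutex/condvar pair that NSCondition bundles into one object.
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int condition_value = 0;

// "read" side: wait until condition_value becomes positive
static void *reader(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mutex);
    while (condition_value <= 0) {        // loop guards against spurious wakeups
        pthread_cond_wait(&cond, &mutex); // atomically unlocks, waits, relocks
    }
    int seen = condition_value;
    pthread_mutex_unlock(&mutex);
    return (void *)(long)seen;
}

// "write" side: change the condition, then signal
static void *writer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mutex);
    condition_value = 1;                  // the predicate must actually change
    pthread_cond_signal(&cond);           // wake one waiter
    pthread_mutex_unlock(&mutex);
    return NULL;
}

static int condition_demo(void) {
    pthread_t r, w;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    void *seen = NULL;
    pthread_join(r, &seen);
    pthread_join(w, NULL);
    return (int)(long)seen;
}
```

-wait, -signal, and -broadcast map directly onto pthread_cond_wait, pthread_cond_signal, and pthread_cond_broadcast.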
NSConditionLock
A further wrapper on top of NSCondition that adds a controllable integer condition; by matching condition values you decide which waiting thread is woken next:
@property (readonly) NSInteger condition;
NSConditionLock *lock = [[NSConditionLock alloc] initWithCondition:1]; // initialize with condition = 1
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:^{
    [lock lockWhenCondition:1]; // acquires the lock only when condition == 1, otherwise waits
    NSLog(@"%@===A===start",[NSThread currentThread]);
    sleep(2);
    NSLog(@"%@===A===end",[NSThread currentThread]);
    // the condition passed to unlockWithCondition: decides which waiting thread runs next
    [lock unlockWithCondition:2]; // unlock, set condition = 2, and signal
    // [lock unlockWithCondition:3];
}];
[queue addOperationWithBlock:^{
    [lock lockWhenCondition:2];
    NSLog(@"%@===B===start",[NSThread currentThread]);
    sleep(1);
    NSLog(@"%@===B===end",[NSThread currentThread]);
    [lock unlock];
}];
[queue addOperationWithBlock:^{
    [lock lockWhenCondition:3];
    NSLog(@"%@===C===start",[NSThread currentThread]);
    sleep(1);
    NSLog(@"%@===C===end",[NSThread currentThread]);
    [lock unlock];
}];
When thread A unlocks, it can pass any condition value; whichever waiting thread's condition matches is woken. Here the value 2 runs task B; passing 3 instead would run task C; any other value leaves both B and C waiting forever:
<NSThread: 0x282b66340>{number = 6, name = (null)}===A===start
<NSThread: 0x282b66340>{number = 6, name = (null)}===A===end
<NSThread: 0x282b68240>{number = 3, name = (null)}===B===start
<NSThread: 0x282b68240>{number = 3, name = (null)}===B===end
@synchronized
A wrapper around a recursive mutex.
@synchronized(obj) internally looks up (or creates) a recursive lock associated with obj, then wraps the block in lock/unlock; one object maps to one lock.
NSObject *obj = [[NSObject alloc] init];
@synchronized (obj) {
    // ...
}
GCD
dispatch_semaphore
This works much like the semaphore covered in the previous article.
// create a semaphore with an initial value
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
// if the value is 0, wait; otherwise decrement it and continue
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
// signal: increment the value
dispatch_semaphore_signal(sem);
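dispatch_semaphore is a counting semaphore: wait decrements the count (blocking at zero), signal increments it. To make those semantics concrete, here is a toy counting semaphore built from a mutex and a condition variable (purely illustrative; this is not how libdispatch implements it, and the toy_sem names are invented):

```c
#include <pthread.h>

// Toy counting semaphore: toy_sem_wait/toy_sem_signal model what
// dispatch_semaphore_wait/dispatch_semaphore_signal do to the count.
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    long            value;
} toy_sem;

static void toy_sem_init(toy_sem *s, long value) {
    pthread_mutex_init(&s->mutex, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->value = value;
}

// like dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER)
static void toy_sem_wait(toy_sem *s) {
    pthread_mutex_lock(&s->mutex);
    while (s->value == 0)                 // block while the count is zero
        pthread_cond_wait(&s->cond, &s->mutex);
    s->value--;                           // take one slot
    pthread_mutex_unlock(&s->mutex);
}

// like dispatch_semaphore_signal(sem)
static void toy_sem_signal(toy_sem *s) {
    pthread_mutex_lock(&s->mutex);
    s->value++;                           // release one slot
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->mutex);
}
```

With an initial value of 1 this degenerates into a mutex, which is why dispatch_semaphore_create(1) is so often used for simple mutual exclusion.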
DISPATCH_QUEUE_SERIAL
Tasks submitted to a serial queue run one at a time in FIFO order, which by itself serializes access to shared state:
dispatch_queue_t queue = dispatch_queue_create("serial_queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    // task A does something...
});
dispatch_async(queue, ^{
    // task B does something...
});
dispatch_group
Groups tasks together; the tasks in a group run asynchronously, and once they have all finished, another block can be notified to run:
// a privately created concurrent queue; a global concurrent queue also works with groups
dispatch_queue_t queue = dispatch_queue_create("concurrent_queue", DISPATCH_QUEUE_CONCURRENT);
// dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, queue, ^{
    sleep(1);
    NSLog(@"%@===TaskA",[NSThread currentThread]);
});
dispatch_group_async(group, queue, ^{
    sleep(1);
    NSLog(@"%@===TaskB",[NSThread currentThread]);
});
dispatch_group_notify(group, queue, ^{
    NSLog(@"%@===TaskC",[NSThread currentThread]);
});
// dispatch_async(queue, ^{
//     dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC))); // a wait timeout can be specified
//     NSLog(@"%@===TaskC",[NSThread currentThread]);
// });
The scenario here: tasks A and B may run concurrently, but task C must run only after both have finished.
dispatch_group_notify can be replaced with dispatch_group_wait, which blocks the calling thread until the group finishes; dispatch_group_wait also accepts a timeout, after which it stops blocking and execution continues.
One more note: a serial queue works but defeats the purpose, since the grouped tasks could no longer run concurrently. A global concurrent queue is fine for groups; it is dispatch_barrier, below, that requires a privately created concurrent queue.
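Conceptually, a dispatch group is just a counter of outstanding tasks: entering increments it, leaving decrements it, and notify/wait fire when it reaches zero. A toy model in C makes that visible (illustrative only; the toy_group names are invented and this is not libdispatch's implementation):

```c
#include <pthread.h>

// Toy model of dispatch_group as a counter of in-flight tasks:
// toy_group_enter ~ dispatch_group_enter, toy_group_leave ~
// dispatch_group_leave, toy_group_wait ~ dispatch_group_wait.
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             outstanding;
} toy_group;

static void toy_group_init(toy_group *g) {
    pthread_mutex_init(&g->mutex, NULL);
    pthread_cond_init(&g->cond, NULL);
    g->outstanding = 0;
}

static void toy_group_enter(toy_group *g) {
    pthread_mutex_lock(&g->mutex);
    g->outstanding++;                 // one more task in flight
    pthread_mutex_unlock(&g->mutex);
}

static void toy_group_leave(toy_group *g) {
    pthread_mutex_lock(&g->mutex);
    if (--g->outstanding == 0)        // last task finished
        pthread_cond_broadcast(&g->cond);
    pthread_mutex_unlock(&g->mutex);
}

static void toy_group_wait(toy_group *g) {
    pthread_mutex_lock(&g->mutex);
    while (g->outstanding > 0)        // block until the group drains
        pthread_cond_wait(&g->cond, &g->mutex);
    pthread_mutex_unlock(&g->mutex);
}
```

dispatch_group_async is then just enter + run-the-block + leave bundled together.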
dispatch_barrier
As its name suggests, dispatch_barrier acts as a fence on a concurrent queue: tasks submitted before the barrier may run concurrently with each other; the barrier block runs only after all of them have finished; and tasks submitted after the barrier start only once the barrier block completes.
In effect, the barrier briefly serializes the queue.
The same effect could be achieved with dispatch_group, but a barrier is more convenient:
dispatch_queue_t queue = dispatch_queue_create("concurrent_queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    sleep(1);
    NSLog(@"%@===TaskA",[NSThread currentThread]);
});
dispatch_async(queue, ^{
    sleep(1);
    NSLog(@"%@===TaskB",[NSThread currentThread]);
});
// the async variant does not block the current (main) thread
dispatch_barrier_async(queue, ^{
    NSLog(@"%@===Barrier",[NSThread currentThread]);
});
// the sync variant blocks the current thread until the barrier block finishes
// dispatch_barrier_sync(queue, ^{
//     NSLog(@"%@===Barrier",[NSThread currentThread]);
// });
dispatch_async(queue, ^{
    sleep(1);
    NSLog(@"%@===TaskC",[NSThread currentThread]);
});
dispatch_async(queue, ^{
    sleep(1);
    NSLog(@"%@===TaskD",[NSThread currentThread]);
});
NSLog(@"%@===MainTask",[NSThread currentThread]);
{number = 1, name = main}===MainTask
{number = 3, name = (null)}===TaskB
{number = 4, name = (null)}===TaskA
{number = 4, name = (null)}===Barrier
{number = 3, name = (null)}===TaskD
{number = 4, name = (null)}===TaskC
dispatch_barrier comes in two variants:
- dispatch_barrier_async
- dispatch_barrier_sync
The async variant returns immediately, while the sync variant also blocks the calling thread until the barrier block has run. If the code above used dispatch_barrier_sync, MainTask would be logged after Barrier.
This behavior makes it easy to build a read-write lock: reads go in plain dispatch_async blocks and may run concurrently; writes go in barrier blocks, so they run exclusively, and no read can overlap a write:
dispatch_queue_t queue = dispatch_queue_create("concurrent_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 3; i++) {
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"%@===read",[NSThread currentThread]);
    });
}
for (int i = 0; i < 3; i++) {
    dispatch_barrier_async(queue, ^{
        sleep(1);
        NSLog(@"%@===write",[NSThread currentThread]);
    });
}
for (int i = 0; i < 3; i++) {
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"%@===read",[NSThread currentThread]);
    });
}
{number = 4, name = (null)}===read
{number = 6, name = (null)}===read
{number = 5, name = (null)}===read
{number = 1, name = main}===write
{number = 1, name = main}===write
{number = 1, name = main}===write
{number = 5, name = (null)}===read
{number = 7, name = (null)}===read
{number = 4, name = (null)}===read
NSOperation
NSOperation is an object-oriented layer built on top of GCD.
Maximum concurrency
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
// with maxConcurrentOperationCount = 1, the queue's operations run one at a time
queue.maxConcurrentOperationCount = 1;
Barriers
// works similarly to the `dispatch_barrier_async` function
[queue addBarrierBlock:^{
}];
Dependencies
Scenario: task B must wait until task A has finished, i.e., taskB depends on taskA:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSBlockOperation *taskA = [NSBlockOperation blockOperationWithBlock:^{
    sleep(2);
    NSLog(@"%@===TaskA",[NSThread currentThread]);
}];
NSBlockOperation *taskB = [NSBlockOperation blockOperationWithBlock:^{
    usleep(500000); // 0.5s; sleep() only takes whole seconds
    NSLog(@"%@===TaskB",[NSThread currentThread]);
}];
[taskB addDependency:taskA];
[queue addOperation:taskA];
[queue addOperation:taskB];
{number = 6, name = (null)}===TaskA
{number = 6, name = (null)}===TaskB
Spin locks vs. mutexes
Earlier we looked at how spin locks and mutexes differ in mechanism; each has its strengths. How should you choose in practice?
A spin lock fits when:
- threads wait only briefly (so busy-waiting stays short and cheap)
- the critical section is entered frequently, but contention is rare
- CPU is not a scarce resource (spinning burns CPU cycles)
Conversely, a mutex fits when:
- threads may wait a long time
- the critical section is complex, loops heavily, or performs I/O
- the critical section is heavily contended
Performance comparison of synchronization primitives
For this, I'll simply quote the benchmark chart from another author:
Also, os_unfair_lock has the best performance of all, but it requires iOS 10 or later.
Full demo
References:
不再安全的 OSSpinLock (OSSpinLock is no longer safe)