In day-to-day development, the GCD APIs we use most are async and sync — running a task asynchronously or synchronously. But GCD offers far more than that. This article looks at its other features: barrier functions, semaphores, dispatch groups, dispatch_once, and more.
dispatch_once
Singletons come up constantly in daily development, and the canonical implementation is familiar:
+ (instancetype)shareManager {
    static dispatch_once_t once;
    static FMUserManager *shareManager;
    dispatch_once(&once, ^{
        shareManager = [[FMUserManager alloc] init];
    });
    return shareManager;
}
The implementation relies on dispatch_once, which guarantees the block runs exactly once. How does it guarantee that? Start with the source:
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
    dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
    if (likely(v == DLOCK_ONCE_DONE)) {
        return;
    }
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    if (likely(DISPATCH_ONCE_IS_GEN(v))) {
        return _dispatch_once_mark_done_if_quiesced(l, v);
    }
#endif
#endif
    if (_dispatch_once_gate_tryenter(l)) {
        return _dispatch_once_callout(l, ctxt, func);
    }
    return _dispatch_once_wait(l);
}
dispatch_once first wraps the block with _dispatch_Block_invoke and hands it to dispatch_once_f, which distinguishes three states:
- If the block has already run, return immediately.
- If it has not run yet, execute it by calling _dispatch_once_callout.
- If it is running on another thread right now, loop in _dispatch_once_wait, polling the state until it changes.
First, the execution path, _dispatch_once_callout:
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
        dispatch_function_t func)
{
    _dispatch_client_callout(ctxt, func);
    _dispatch_once_gate_broadcast(l);
}

static inline void
_dispatch_once_gate_broadcast(dispatch_once_gate_t l)
{
    dispatch_lock value_self = _dispatch_lock_value_for_self();
    uintptr_t v;
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    v = _dispatch_once_mark_quiescing(l);
#else
    v = _dispatch_once_mark_done(l);
#endif
    if (likely((dispatch_lock)v == value_self)) return;
    _dispatch_gate_broadcast_slow(&l->dgo_gate, (dispatch_lock)v);
}

static inline uintptr_t
_dispatch_once_mark_done(dispatch_once_gate_t dgo)
{
    return os_atomic_xchg(&dgo->dgo_once, DLOCK_ONCE_DONE, release);
}

#define os_atomic_xchg(p, v, m) \
        atomic_exchange_explicit(_os_atomic_c11_atomic(p), v, memory_order_##m)
Once _dispatch_once_callout is reached, _dispatch_client_callout invokes the block (for a singleton, this is where the instance is created), and _dispatch_once_gate_broadcast then records that it ran: the os_atomic_xchg macro swaps DLOCK_ONCE_DONE into the dispatch_once_t token, marking the block as executed.
On a second call to dispatch_once_f, the first step is the load os_atomic_load(&l->dgo_once, acquire):
#define os_atomic_load(p, m) \
        atomic_load_explicit(_os_atomic_c11_atomic(p), memory_order_##m)
Combined with the os_atomic_xchg macro above, this load reads back DLOCK_ONCE_DONE, so the if (v == DLOCK_ONCE_DONE) check passes: once the block has executed, every later call to dispatch_once_f returns immediately.
What if dispatch_once_f is called while the block is still executing? That call takes _dispatch_once_wait, which spins reading dgo->dgo_once until the state changes (or the wait times out) and it can break out of the loop.
Barrier functions
Start with an example:
dispatch_queue_t t = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(t, ^{
    NSLog(@"1");
});
dispatch_async(t, ^{
    NSLog(@"2");
});
// barrier
dispatch_barrier_async(t, ^{
    NSLog(@"3");
});
NSLog(@"4");
dispatch_async(t, ^{
    NSLog(@"5");
});
Barrier functions are a family of GCD APIs for fencing off the tasks in a queue. As the name suggests, they place a barrier in the queue so you can impose an order on the tasks around it. The two common ones are dispatch_barrier_async and dispatch_barrier_sync. Since a synchronous call runs in place, the barrier logic is a little simpler to trace than in the asynchronous case, so we focus on dispatch_barrier_sync here.
Here is the call flow of a synchronous barrier on a serial queue:
void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    // ... code omitted ...
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }
    // ... code omitted ...
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
        dispatch_function_t func, uintptr_t top_dc_flags,
        dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
    // ... code omitted ...
    _dispatch_trace_item_push(top_dq, &dsc);
    __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
    if (dsc.dsc_func == NULL) {
        // dsc_func being cleared means that the block ran on another thread ie.
        // case (2) as listed in _dispatch_async_and_wait_f_slow.
        dispatch_queue_t stop_dq = dsc.dc_other;
        return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
    }
    _dispatch_introspection_sync_begin(top_dq);
    _dispatch_trace_item_pop(top_dq, &dsc);
    _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
            DISPATCH_TRACE_ARG(&dsc));
}
The source shows the overall call chain dispatch_barrier_sync -> _dispatch_barrier_sync_f -> _dispatch_sync_f_slow (easy to confirm with symbolic breakpoints). Inside _dispatch_sync_f_slow, __DISPATCH_WAIT_FOR_QUEUE__ checks for deadlock and waits on the queue (a serial queue must finish the previous task before starting the next); then _dispatch_sync_invoke_and_complete_recurse executes the barrier task synchronously.
static void
_dispatch_sync_invoke_and_complete_recurse(dispatch_queue_class_t dq,
        void *ctxt, dispatch_function_t func, uintptr_t dc_flags
        DISPATCH_TRACE_ARG(void *dc))
{
    _dispatch_sync_function_invoke_inline(dq, ctxt, func);
    _dispatch_trace_item_complete(dc);
    _dispatch_sync_complete_recurse(dq._dq, NULL, dc_flags);
}

static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
        uintptr_t dc_flags)
{
    bool barrier = (dc_flags & DC_FLAG_BARRIER);
    do {
        if (dq == stop_dq) return;
        if (barrier) {
            dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
        } else {
            _dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
        }
        dq = dq->do_targetq;
        barrier = (dq->dq_width == 1);
    } while (unlikely(dq->do_targetq));
}
_dispatch_sync_function_invoke_inline runs the barrier block through a callout. After it completes, _dispatch_sync_complete_recurse walks the target-queue chain; for a barrier it invokes the dx_wakeup macro, which dispatches through the queue's vtable to its dq_wakeup function:
DISPATCH_VTABLE_INSTANCE(workloop,
    // ... code omitted ...
    .dq_wakeup = _dispatch_workloop_wakeup,
    .dq_push = _dispatch_workloop_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
    // ... code omitted ...
    .dq_wakeup = _dispatch_lane_wakeup,
    .dq_push = _dispatch_lane_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
    // ... code omitted ...
    .dq_wakeup = _dispatch_lane_wakeup,
    .dq_push = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
    // ... code omitted ...
    .dq_wakeup = _dispatch_root_queue_wakeup,
    .dq_push = _dispatch_root_queue_push,
);
Different queue classes install different dq_wakeup functions:
- A global concurrent queue uses _dispatch_root_queue_wakeup, whose source warns: "Don't try to wake up or override a root queue." The reason is simple: the system itself schedules work on the global queues, so if a barrier could fence them, system code would stall too. In short: a barrier cannot fence a global concurrent queue.
- A custom concurrent queue or a serial queue uses _dispatch_lane_wakeup:
void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
        dispatch_wakeup_flags_t flags)
{
    dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
    if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
        // called once the barrier task has finished
        return _dispatch_lane_barrier_complete(dqu, qos, flags);
    }
    if (_dispatch_queue_class_probe(dqu)) {
        target = DISPATCH_QUEUE_WAKEUP_TARGET;
    }
    // wake the queue so the tasks enqueued after the barrier can run
    return _dispatch_queue_wakeup(dqu, qos, flags, target);
}
_dispatch_lane_barrier_complete then runs; it ends in _dispatch_lane_class_barrier_complete, which removes the barrier, so control returns through _dispatch_lane_wakeup and the tasks enqueued after the barrier can execute.
Summary:
What a barrier does: every task added to the queue before the barrier finishes first, then the barrier task runs, and only after the barrier task completes do the tasks added after it begin.
Points to note:
- A barrier only affects its own queue.
- A barrier has no effect on a global concurrent queue.
Why, then, do barriers come in both sync and async flavors? It comes down to whether the caller should block while the barrier task runs: dispatch_barrier_sync executes it on the current thread, while dispatch_barrier_async lets it run on another thread.
Dispatch groups
As noted above, a barrier only works within a single queue; as soon as the tasks span multiple queues it is powerless, and that is where dispatch groups (dispatch_group) come in. Code like the following is common:
- (void)test {
    dispatch_group_t g = dispatch_group_create();
    dispatch_queue_t que1 = dispatch_queue_create("lg1", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t que2 = dispatch_queue_create("lg2", DISPATCH_QUEUE_CONCURRENT);

    dispatch_group_enter(g);
    dispatch_async(que1, ^{
        sleep(2);
        NSLog(@"1");
        dispatch_group_leave(g);
    });

    dispatch_group_enter(g);
    dispatch_async(que2, ^{
        sleep(3);
        NSLog(@"2");
        dispatch_group_leave(g);
    });

    dispatch_group_enter(g);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        sleep(4);
        NSLog(@"3");
        dispatch_group_leave(g);
    });

    dispatch_group_enter(g);
    dispatch_async(dispatch_get_main_queue(), ^{
        sleep(5);
        NSLog(@"4");
        dispatch_group_leave(g);
    });

    dispatch_group_notify(g, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"5");
    });
}
This pairing of dispatch_group_enter and dispatch_group_leave is equivalent to the form below.
dispatch_group_t g = dispatch_group_create();
dispatch_queue_t que1 = dispatch_queue_create("lg1", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t que2 = dispatch_queue_create("lg2", DISPATCH_QUEUE_CONCURRENT);
dispatch_group_async(g, que1, ^{
    NSLog(@"the other calling style");
});
The source confirms this:
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_block_t db)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
    _dispatch_continuation_group_async(dg, dq, dc, qos);
}

static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dc, dispatch_qos_t qos)
{
    dispatch_group_enter(dg);
    dc->dc_data = dg;
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

// ...

static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
    struct dispatch_object_s *dou = dc->dc_data;
    unsigned long type = dx_type(dou);
    if (type == DISPATCH_GROUP_TYPE) {
        _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
        _dispatch_trace_item_complete(dc);
        dispatch_group_leave((dispatch_group_t)dou);
    } else {
        DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
    }
}
- dispatch_group_async calls _dispatch_continuation_group_async, which invokes dispatch_group_enter(dg) as soon as it is entered.
- When the asynchronous work reaches _dispatch_continuation_invoke_inline -> _dispatch_continuation_with_group_invoke, it runs _dispatch_client_callout (the block itself) and then calls dispatch_group_leave() to leave the group.
Apple's documentation defines dispatch_group_async as follows, which makes the equivalence even clearer:
void
dispatch_group_async(dispatch_group_t group, dispatch_queue_t queue, dispatch_block_t block)
{
    dispatch_retain(group);
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        block();
        dispatch_group_leave(group);
        dispatch_release(group);
    });
}
So what do dispatch_group_enter and dispatch_group_leave actually do?
void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many nested calls to dispatch_group_enter()");
    }
}
Apple's documentation says of dispatch_group_enter: "Calling this function increments the current count of outstanding tasks in the group."
- The source and its comment show that os_atomic_sub_orig2o performs a subtraction; counting downward is how the number of outstanding tasks in the group goes up.
- The two arguments dg, dg_bits expand through the os_atomic_sub_orig2o macro into os_atomic_sub_orig(&(dg)->dg_bits, (v), m), so the operation really targets dg_bits.
void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);
    }

    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}
dispatch_group_leave is essentially an addition on dg_state.
- dg_state is the same storage as the dg_bits used by dispatch_group_enter: they are members of a union sharing one block of memory, defined as:
DISPATCH_UNION_LE(uint64_t volatile dg_state,
        uint32_t dg_bits,
        uint32_t dg_gen
) DISPATCH_ATOMIC64_ALIGN
- When dg_state is incremented, i.e. a task leaves the group, and the count returns to balance, _dispatch_group_wake is called to wake the waiters and notify blocks.
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dsn)
{
    uint64_t old_state, new_state;
    dispatch_continuation_t prev;

    dsn->dc_data = dq;
    _dispatch_retain(dq);

    prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
    os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) {
        os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
            new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
            if ((uint32_t)old_state == 0) {
                os_atomic_rmw_loop_give_up({
                    return _dispatch_group_wake(dg, new_state, false);
                });
            }
        });
    }
}
In dispatch_group_notify, the old_state == 0 check is how the group decides whether every enter has already been balanced by a leave; only then does the notify block fire.
Semaphores
GCD offers one more way to order task execution: semaphores, although what they really control is the degree of concurrency. Three functions cover most uses:
- dispatch_semaphore_create(long value) creates a dispatch_semaphore_t with the given initial value.
- dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout) waits on the semaphore. If its value is 0, the call blocks the current thread until the value becomes greater than or equal to 1 (or the timeout expires); it then decrements the value by 1 and returns.
- dispatch_semaphore_signal(dispatch_semaphore_t dsema) signals the semaphore, incrementing its value by 1.
With these three calls you can bound GCD's maximum concurrency. Example:
dispatch_semaphore_t sem = dispatch_semaphore_create(0);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"1");
    dispatch_semaphore_signal(sem);
});
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"2");
    dispatch_semaphore_signal(sem);
});
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"3");
    dispatch_semaphore_signal(sem);
});
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"4");
    dispatch_semaphore_signal(sem);
});
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"5");
    dispatch_semaphore_signal(sem);
});
Because the semaphore here starts at 0, each wait blocks until the matching signal fires, so the code behaves much like a serial queue.
One caveat: dispatch_semaphore_wait and dispatch_semaphore_signal must appear in balanced pairs. When a semaphore is released while its initial value (dsema_orig) is greater than its current value (dsema_value, as adjusted by wait/signal), libdispatch deliberately crashes.
dispatch_source
dispatch_source is used to monitor events; you create sources of different types to watch for different kinds of events, such as DISPATCH_SOURCE_TYPE_TIMER, DISPATCH_SOURCE_TYPE_DATA_ADD, DISPATCH_SOURCE_TYPE_SIGNAL, DISPATCH_SOURCE_TYPE_READ, DISPATCH_SOURCE_TYPE_WRITE, and DISPATCH_SOURCE_TYPE_VNODE.
Its core APIs are dispatch_source_create, dispatch_source_set_timer, dispatch_source_set_event_handler, dispatch_source_cancel, and dispatch_resume / dispatch_suspend.
Example: a 60-second countdown timer.
- (void)iTimer {
    __block int timeout = 60;
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_source_t _timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    dispatch_source_set_timer(_timer, dispatch_walltime(NULL, 0), 1.0 * NSEC_PER_SEC, 0);
    dispatch_source_set_event_handler(_timer, ^{
        if (timeout <= 0) {
            dispatch_source_cancel(_timer);
        } else {
            timeout--;
            NSLog(@"countdown: %d", timeout);
        }
    });
    dispatch_resume(_timer);
}