iOS Multithreading: Using GCD

This article first appeared on my personal blog.

GCD, short for Grand Central Dispatch, is a multi-core programming solution developed by Apple. It was first introduced in Mac OS X 10.6 Snow Leopard and later brought to iOS 4.0. GCD is an alternative to NSThread, NSOperationQueue, and similar threading technologies.

A Preliminary Study of GCD

// 1. Create a queue
dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_CONCURRENT);
// 2. Create the task (a block):
void(^block)(void) = ^{
     NSLog(@"task executed");
};
// 3. Hand the task to the queue via a dispatch function
dispatch_async(queue, block);

Here, in order to distinguish tasks from functions, I have written the block separately; this makes GCD simpler and more intuitive to understand. To sum up:

GCD adds a specified task (a block) to a specified queue, then executes it with a specified dispatch function. Task execution follows the queue's FIFO principle: first in, first out.
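
In everyday code the block is usually written inline rather than stored in a variable first; a minimal sketch of the same call in its more common form (the log string is illustrative):

    dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        // The task: dequeued FIFO and executed by the dispatch function
        NSLog(@"task executed");
    });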

Queues

  • Serial queue (Serial)

Tasks in the queue are executed one at a time; the next task starts only after the previous one completes. A custom serial queue is created as follows:

    dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_SERIAL);

The first parameter is a string label that identifies the queue; the second specifies the queue type. To create a serial queue, we pass DISPATCH_QUEUE_SERIAL:

    #define DISPATCH_QUEUE_SERIAL NULL

PS: Passing NULL as the second parameter has the same effect as passing DISPATCH_QUEUE_SERIAL.

  • Concurrent queue (Concurrent)

Multiple tasks are allowed to execute in parallel (simultaneously), but the order in which tasks execute is not guaranteed; it depends on CPU scheduling, which we will discuss later.

    dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_CONCURRENT);

The call is the same; we just pass DISPATCH_QUEUE_CONCURRENT as the second parameter.

  • System queue

    • dispatch_get_main_queue(): the main queue is the only serial queue automatically created by the system when the application starts (before main runs), and it is bound to the main thread.
    • dispatch_get_global_queue(0, 0): the global concurrent queue. If we have no special requirements, we can simply pass 0 for both parameters. The global concurrent queue can only be obtained, never created.
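
A common pattern built on these two system queues is to do time-consuming work on the global queue and then hop back to the main queue for UI updates; a minimal sketch (the log strings are illustrative):

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Time-consuming work off the main thread
        NSLog(@"background work");
        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main thread, where UI updates are safe
            NSLog(@"update UI");
        });
    });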

Now that we know what the queues mean and how tasks behave once added to them, let's look at how the dispatch functions affect the execution of queued tasks.

Functions

  • Synchronous function dispatch_sync()

    • Must wait for the current statement to complete before executing the next statement
    • Does not start a new thread; the block executes on the current thread
    • Execution order still follows the current queue's FIFO principle
  • Asynchronous function dispatch_async()

    • The next statement can execute without waiting for the current one to finish
    • Can start a new thread to execute the block's task
    • Asynchronous execution is effectively synonymous with multithreading
  • The relationship between threads and queues

    You can download the Demo to test every combination yourself; the results are summarized in the table below.

    | Dispatch function | Serial queue | Concurrent queue | Main queue |
    | --- | --- | --- | --- |
    | Synchronous | No new thread; tasks run on the current thread, one by one; can block | No new thread; tasks run on the current thread, one by one | Deadlock: stuck, tasks not executed (when called from the main thread) |
    | Asynchronous | Starts one new thread; tasks run one after another | Starts threads; tasks run asynchronously in no fixed order, depending on CPU scheduling | No new thread; tasks still run serially on the main thread |

    As the table shows, threads and queues are not directly tied to one another.
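
    To make the table concrete, here is a minimal sketch contrasting the two functions on a custom serial queue (which thread the async block lands on is up to GCD):

    dispatch_queue_t serial = dispatch_queue_create("Typeco.serial", DISPATCH_QUEUE_SERIAL);

    dispatch_sync(serial, ^{
        // Runs on the current thread; the caller blocks until it finishes
        NSLog(@"sync task on %@", [NSThread currentThread]);
    });

    dispatch_async(serial, ^{
        // Runs later on a GCD-managed thread; the caller does not wait
        NSLog(@"async task on %@", [NSThread currentThread]);
    });
    NSLog(@"submitted"); // May print before the async task runs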

Low-level implementation of queues and functions

The analysis below is based on the GCD (libdispatch) source code, which you can download and read alongside this section.

Queue creation

Let's take a rough look at how a queue is created. The first parameter of dispatch_queue_create is a string used to identify the queue; the second indicates serial or concurrent, so the second parameter is the focus of our investigation:

    /*
       The call we start from: create a serial queue
     */
    dispatch_queue_create("sync_serial", DISPATCH_QUEUE_SERIAL);

    /*
       dispatch_queue_create forwards to _dispatch_lane_create_with_target;
       dqa is the queue attribute we passed in, here DISPATCH_QUEUE_SERIAL
     */
    _dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
                                      dispatch_queue_t tq, bool legacy)

    // ... (intermediate details omitted)

    // Allocate the queue object
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
                                                sizeof(struct dispatch_lane_s));

    /*
       If dqai_concurrent is set, the queue is initialized with width
       DISPATCH_QUEUE_WIDTH_MAX; otherwise with width 1.
       A width of 1 is exactly what makes a queue serial.
     */
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
                         DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
                         (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    // Store the label we passed in
    dq->dq_label = label;
    // Derive the queue's priority from the requested QoS
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
                                              dqai.dqai_relpri);
    /*
       Retain the target queue; queues created through this API must be
       balanced with dispatch_release when not using ARC
     */
    _dispatch_retain(tq);

    return dq;

The above is only a rough look at how dispatch_queue_create creates queues and how serial and concurrent queues are distinguished. Please download the source code to read the details.
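
One detail from this path is easy to verify from the API side: the dq_label assignment above is exactly what dispatch_queue_get_label() reads back later. A quick sketch:

    dispatch_queue_t queue = dispatch_queue_create("sync_serial", DISPATCH_QUEUE_SERIAL);
    // The dq->dq_label = label assignment in the source is what this call returns
    NSLog(@"%s", dispatch_queue_get_label(queue)); // prints "sync_serial"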

Semaphores

Studying semaphores comes down to focusing on three functions:

dispatch_semaphore_create

 dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
    dispatch_semaphore_t dsema;

    // If the internal value is negative, then the absolute of the value is
    // equal to the number of waiting threads. Therefore it is bogus to
    // initialize the semaphore with a negative value.
    if (value < 0) {
        return DISPATCH_BAD_INPUT;
    }

    dsema = _dispatch_object_alloc(DISPATCH_VTABLE(semaphore),
            sizeof(struct dispatch_semaphore_s));
    dsema->do_next = DISPATCH_OBJECT_LISTLESS;
    dsema->do_targetq = _dispatch_get_default_queue(false);
    dsema->dsema_value = value;
    _dispatch_sema4_init(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    dsema->dsema_orig = value;
    return dsema;
}

In fact, dispatch_semaphore_create is just a process of allocating a dispatch_semaphore_t and assigning its fields. The key assignment is **dsema_value**; remember this field, as we will need it in the analysis that follows.

dispatch_semaphore_signal

 long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}

os_atomic_inc2o ----> os_atomic_add2o(p, f, 1, m) ----> os_atomic_add(&(p)->f, (v), m) ----> _os_atomic_c11_op((p), (v), m, add, +). The result is dsema_value + 1; in other words, os_atomic_inc2o returns the previous value plus 1, and if that result is > 0, the function returns 0 immediately.

dispatch_semaphore_wait

long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    // Atomically decrement dsema_value: value--
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}

os_atomic_dec2o works just like os_atomic_inc2o above, except the operator is -. In short: subtract 1 from the semaphore value, and return 0 if the result is greater than or equal to 0.

Looking at these two functions, all we see is a number (the semaphore value) being incremented and decremented. How does that affect our threads under the hood?

do {
    _dispatch_trace_runtime_event(worker_unpark, dq, 0);
    _dispatch_root_queue_drain(dq, pri, DISPATCH_INVOKE_REDIRECTING_DRAIN);
    _dispatch_reset_priority_and_voucher(pp, NULL);
    _dispatch_trace_runtime_event(worker_park, NULL, 0);
} while (dispatch_semaphore_wait(&pqc->dpq_thread_mediator,
        dispatch_time(0, timeout)) == 0);

I found the above code in queue.c; it is essentially a do...while loop. As long as wait returns 0, the worker keeps draining the queue and parking again. Combined with our reading of the wait function above, we can conclude: as long as the semaphore is >= 0, the current queue keeps executing tasks FIFO; otherwise the thread waits until wait returns 0. When signal is called, the semaphore is incremented by 1, and the cycle is broken.

In summary, we usually use semaphores like this:

- (void)demo {
    dispatch_semaphore_t sema = dispatch_semaphore_create(1);
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Time-consuming operation
        sleep(5);
        dispatch_semaphore_signal(sema);
    });
}

We initialize the semaphore to 1, meaning the resource can be accessed by only one thread at a time. Suppose thread 1 has executed wait, so the semaphore is now 0. When another thread 2 reaches this asynchronous time-consuming operation, it waits in the do...while loop described above until thread 1 executes signal and increments the semaphore by 1; the loop is then broken and thread 2 can access the operation.
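
A minimal sketch of that two-thread scenario, with the "resource" reduced to a log statement (the sleep is only there to make the contention visible):

    dispatch_semaphore_t sema = dispatch_semaphore_create(1);
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    // Thread 1 (typically scheduled first) takes the semaphore
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER); // value: 1 -> 0
        NSLog(@"thread 1 using the resource");
        sleep(2);
        dispatch_semaphore_signal(sema);                      // value: 0 -> 1, wakes thread 2
    });

    // Thread 2 blocks in wait until thread 1 signals
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
        NSLog(@"thread 2 using the resource");
        dispatch_semaphore_signal(sema);
    });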

This prevents multiple threads from accessing the same resource simultaneously and thus avoids the resulting safety problems. We can also use a semaphore to play the role of a barrier:

- (void)demo {
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Task 1
        NSLog(@"task 1");
        sleep(5);
        dispatch_semaphore_signal(sema);
    });
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Task 2
        NSLog(@"task 2");
    });
}

The only difference here is that the semaphore is initialized to 0, so the wait blocks by default. Only after the asynchronous block signals can the subsequent task get past the wait, which achieves the effect of a fence.
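
For comparison, GCD also ships a dedicated fence primitive, dispatch_barrier_async; a minimal sketch of the same "task 1 before task 2" ordering (note that barriers only take effect on a custom concurrent queue, not on the global queues):

    dispatch_queue_t queue = dispatch_queue_create("Typeco.concurrent", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        NSLog(@"task 1");
        sleep(5);
    });
    dispatch_barrier_async(queue, ^{
        // Waits for everything submitted before it, then runs alone
        NSLog(@"barrier");
    });
    dispatch_async(queue, ^{
        NSLog(@"task 2"); // Starts only after the barrier completes
    });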

Dispatch group

The following is based on analysis of the libdispatch source code (see the source reference). The source is lengthy, so please download it to view the detailed code; I paste as little code as possible here and try to show the general flow with a flowchart:

The dispatch_group code lives in the dispatch_semaphore.c module, which shows that dispatch_group is a synchronization mechanism built on semaphores. The core functions are the following:

  • dispatch_group_enter
  • dispatch_group_leave
  • dispatch_group_wait
  • dispatch_group_async
  • dispatch_group_notify

There are four control flows in the figure:

  • At the top are two parallel asyncs, which execute asynchronously and implicitly call the enter and leave methods internally
  • In the upper right corner is a plain enter and leave pair
  • In the lower left corner is the wait control flow, which blocks until the signal condition is met
  • The notify in the lower right corner has two branches: if the signal condition is already met, it directly wakes the wait and processes the notify; otherwise the notify block is stored on the group's queue and triggered later, once the semaphore condition is met

The crux is waking up notify. Both the async path and the plain path go through enter and leave, so the signal check comes down to enter/leave pairing. As the API documentation states, enter and leave must appear in pairs. If you use dispatch_group_async you don't need to worry about this, because it pairs them for you internally:

void
dispatch_group_async(dispatch_group_t group, dispatch_queue_t queue, dispatch_block_t block)
{
    dispatch_retain(group);
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        block();
        dispatch_group_leave(group);
        dispatch_release(group);
    });
}

So whenever a control flow executes leave, it must check whether the semaphore condition is satisfied; if so, it executes notify, otherwise it keeps waiting. Since the leave call is what triggers notify, we can focus on the implementation of leave:

void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);
    }

    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}

There is a do...while loop that compares the state on each pass; once the state matches, _dispatch_group_wake is called to wake up the group.

Using dispatch groups

  • (Asynchronous request 1 + asynchronous request 2) ==> asynchronous request 3: request 3 depends on the results of requests 1 and 2

    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"Asynchronous request 1");
    });
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        sleep(3);
        NSLog(@"Asynchronous request 2");
    });

    dispatch_group_notify(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1 and 2 finished asynchronous request 3");
    });

    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"Asynchronous request 4");
    });

    GCD_Demo[19428:6087378] Asynchronous request 1

    GCD_Demo[19428:6087375] Asynchronous request 4

    GCD_Demo[19428:6087377] Asynchronous request 2

    GCD_Demo[19428:6087377] 1 and 2 finished asynchronous request 3

    Here I deliberately added a request 4 after the notify; the log shows that notify does not simply run in submission order — it fires only after the group's tasks (including 4) have completed.

  • enter + leave

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"Asynchronous request 1");
        dispatch_group_leave(group);
    });

    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(3);
        NSLog(@"Asynchronous request 2");
        dispatch_group_leave(group);
    });

    dispatch_group_notify(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1 and 2 finished asynchronous request 3");
    });

    The effect achieved is the same; the only difference is that group_async is not used here. For the blocking counterpart, see the dispatch_group_wait sketch below.
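
    dispatch_group_wait, listed among the core functions above but not yet shown, is the blocking alternative to notify; a minimal sketch (avoid calling it on the main thread in real code, since it blocks the caller):

    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        sleep(3);
        NSLog(@"Asynchronous request 1");
    });
    // Blocks the current thread until the group empties or the timeout fires
    long result = dispatch_group_wait(group,
            dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)));
    if (result == 0) {
        NSLog(@"all requests finished");
    } else {
        NSLog(@"timed out waiting for the group");
    }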

At this point, our GCD analysis comes to an end. Comments and exchanges are welcome.