LinkedBlockingQueue - Understand in Seconds - Illustrated




Note: Before reading this article, please make sure you have mastered its prerequisite: BlockingQueue - Understand in Seconds - Illustrated.

1 Basic overview of LinkedBlockingQueue

LinkedBlockingQueue is a blocking queue backed by a linked list: internally it maintains a linked-list data queue, and every API operation on a LinkedBlockingQueue ultimately manipulates that internal list.

1.1 The LinkedBlockingQueue constructors

LinkedBlockingQueue is a bounded blocking queue implemented with a linked list, but its default capacity is Integer.MAX_VALUE. It is therefore recommended to pass an explicit capacity when creating a LinkedBlockingQueue, so that an over-long queue does not overload the machine or exhaust memory. Its constructors are as follows:

/** 
 * By default, create a LinkedBlockingQueue with a capacity of Integer.MAX_VALUE.
 */  
public LinkedBlockingQueue() {  
    this(Integer.MAX_VALUE);  
}  

/** 
 * Create a LinkedBlockingQueue with the given (fixed) capacity.
 */  
public LinkedBlockingQueue(int capacity) {  
    if (capacity <= 0)  
        throw new IllegalArgumentException();  
    this.capacity = capacity;  
    last = head = new Node<E>(null);  
}  

/** 
 * Create a LinkedBlockingQueue with a capacity of Integer.MAX_VALUE,
 * initially containing the elements of the given collection,
 * added in the traversal order of the collection's iterator.
 */  
public LinkedBlockingQueue(Collection<? extends E> c) {  
    this(Integer.MAX_VALUE);  
    for (E e : c)  
        add(e);  
}  

The description of the three constructors is as follows:

  • The default constructor creates a LinkedBlockingQueue instance with a capacity of Integer.MAX_VALUE.

  • The second constructor takes an explicit capacity. It first checks that the capacity is greater than zero (otherwise an IllegalArgumentException is thrown), assigns it to the capacity field, and finally creates an empty node that both head and last point to; at this moment its item and next are both null.

  • The third constructor loops over the given collection and adds its elements to the queue.

Like ArrayBlockingQueue, LinkedBlockingQueue orders its elements FIFO (first in, first out): the head of the queue is the element that has been enqueued the longest, and the tail is the element enqueued most recently. Retrieval operations take the element at the head of the queue, and new elements are inserted at the tail.
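As a quick illustration (a minimal sketch, not taken from the original article), the following snippet constructs a bounded LinkedBlockingQueue and shows the FIFO ordering described above:

import java.util.concurrent.LinkedBlockingQueue;

public class FifoDemo {
    public static void main(String[] args) {
        // explicit capacity, as recommended above, to avoid the Integer.MAX_VALUE default
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(3);

        queue.offer("A");   // enqueued first -> will be dequeued first
        queue.offer("B");
        queue.offer("C");

        System.out.println(queue.poll()); // A (head = longest-waiting element)
        System.out.println(queue.poll()); // B
        System.out.println(queue.poll()); // C
    }
}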

Description
The APIs of LinkedBlockingQueue and ArrayBlockingQueue are almost identical, but their internal implementations differ.

A producer-consumer model can also be implemented with LinkedBlockingQueue: simply replace the blocking-queue object in the earlier ArrayBlockingQueue example with a LinkedBlockingQueue. To save space, the full example is not repeated here.
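Instead, here is a minimal, self-contained sketch of the producer-consumer idea using LinkedBlockingQueue; the class and variable names are illustrative and not taken from the original example:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);                     // blocks if the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    Integer value = queue.take();     // blocks if the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}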

1.2 Internal members

As noted above, LinkedBlockingQueue internally maintains a linked-list data queue that all API operations ultimately manipulate. Let's first look at its internal member variables.


public class LinkedBlockingQueue<E> extends AbstractQueue<E>
        implements BlockingQueue<E>, java.io.Serializable {

    /**
     * Node class used to store the data.
     */
    static class Node<E> {
        E item;

        /**
         * One of:
         * - the real successor Node
         * - this Node, meaning the successor is head.next
         * - null, meaning there is no successor (this is the last node)
         */
        Node<E> next;

        Node(E x) { item = x; }
    }

    /** Capacity of the blocking queue; Integer.MAX_VALUE by default */
    private final int capacity;

    /** Current number of elements in the queue */
    private final AtomicInteger count = new AtomicInteger();

    /** Head node of the queue */
    transient Node<E> head;

    /** Tail node of the queue */
    private transient Node<E> last;

    /** Lock held by operations that retrieve and remove elements, such as take, poll, etc. */
    private final ReentrantLock takeLock = new ReentrantLock();

    /** notEmpty condition object: suspends removing threads while the queue has no data */
    private final Condition notEmpty = takeLock.newCondition();

    /** Lock held by operations that add elements, such as put, offer, etc. */
    private final ReentrantLock putLock = new ReentrantLock();

    /** notFull condition object: suspends adding threads while the queue is full */
    private final Condition notFull = putLock.newCondition();

}

Under normal circumstances, the throughput of LinkedBlockingQueue is higher than that of the array-based ArrayBlockingQueue. Why? Because the former controls the add and remove operations with two separate explicit locks (ReentrantLock), whereas ArrayBlockingQueue uses a single ReentrantLock to control all concurrent access.

Of the two explicit locks (ReentrantLock), one controls the head of the list, i.e. the removal of elements (takeLock), and the other controls the tail, i.e. the insertion of new elements (putLock). This can loosely be thought of as a separation of reads and writes.

Each piece of data added to a LinkedBlockingQueue is wrapped in a Node and appended to the linked list; head and last point to the head and tail of the queue respectively. Unlike ArrayBlockingQueue, LinkedBlockingQueue uses two internal locks, takeLock and putLock, to control concurrency. In other words, add and remove operations are not mutually exclusive and can run at the same time, which greatly improves throughput.

To repeat the earlier warning: if no capacity is specified, the default is Integer.MAX_VALUE, so if elements are added faster than they are removed, memory may eventually overflow. Consider this carefully before use. Apart from that, the implementation principle of LinkedBlockingQueue is similar to that of ArrayBlockingQueue: in addition to the separate locks for the add and remove paths, both use distinct Condition objects as the waiting queues for suspended take and put threads.

Now let's look at how the internal add and remove operations are implemented.

2 Non-blocking adding elements: principle of add and offer methods

Let's first look at the non-blocking add methods: add and offer.

public boolean add(E e) {
    if (offer(e))
        return true;
    else
        throw new IllegalStateException("Queue full");
}

As the source shows, add simply delegates to offer: if offer fails to add the element, add throws an IllegalStateException; if offer succeeds, add returns true.
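A small sketch (illustrative, not from the original article) showing the practical difference between the two methods on a full queue:

import java.util.concurrent.LinkedBlockingQueue;

public class AddVersusOfferDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.offer("first");                       // fills the capacity-1 queue

        System.out.println(queue.offer("second"));  // false: non-blocking, no exception

        try {
            queue.add("third");                     // add() delegates to offer() ...
        } catch (IllegalStateException e) {
            System.out.println("add failed: " + e.getMessage()); // ... and throws "Queue full"
        }
    }
}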

Implementation of offer

The offer() method does two things:

(1) First, it checks whether the queue is full. If it is, it returns false directly (releasing the lock if it had already been acquired). If it is not full, it wraps the element in a node and enqueues it; after the insertion it checks again whether the queue is still not full, and if so it wakes up the next adding thread waiting on the notFull condition object.

(2) Second, it decides whether a consumer thread waiting on the notEmpty condition object needs to be woken up.

Let's look directly at the implementation of offer:

public boolean offer(E e) {
    // Adding a null element throws immediately
    if (e == null) throw new NullPointerException();
    // Get the element count of the queue
    final AtomicInteger count = this.count;
    // If the queue is already full, fail fast
    if (count.get() == capacity)
        return false;
    int c = -1;
    // Build the node
    Node<E> node = new Node<E>(e);
    final ReentrantLock putLock = this.putLock;
    putLock.lock();
    try {
        // Check again whether the queue is full, to account for concurrent adds
        if (count.get() < capacity) {
            enqueue(node);                 // append the element
            c = count.getAndIncrement();   // c is the queue size BEFORE this insertion
            // If there is still free capacity after this insertion
            if (c + 1 < capacity)
                notFull.signal();          // wake up the next adding thread
        }
    } finally {
        putLock.unlock();
    }

    // Because add and remove use separate locks and each keeps waking its own waiters,
    // count keeps changing. c == 0 means the queue was empty before this insertion,
    // i.e. it now contains exactly one element.
    if (c == 0)
        signalNotEmpty();  // there is data now, so wake up a waiting consumer

    return c >= 0;  // true if the add succeeded, false otherwise
}

The enqueue operation

// Enqueue operation
private void enqueue(Node<E> node) {
    // The tail node of the queue points to the new node, which becomes the new tail
    last = last.next = node;
}

Figure: elements A and B are enqueued in sequence (figure omitted).

signalNotEmpty: waking up a removing thread (i.e. a consumer thread)

// The signalNotEmpty method
private void signalNotEmpty() {
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        // Wake up a thread waiting to retrieve and remove an element
        notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
}

At this point one might wonder: why, after the insertion completes, does offer wake up another adding thread on the notFull condition object instead of directly waking up a consumer thread on notEmpty, as ArrayBlockingQueue does? And why is the consumer thread only woken up when c == 0?

The reason for waking up another adding thread is this: after a new element has been added, the queue is checked again, and if it is still not full another adding thread waiting on notFull is signalled. This differs from ArrayBlockingQueue, which wakes up a consumer thread directly after an insertion completes. ArrayBlockingQueue uses a single ReentrantLock for both adding and consuming threads, so if it kept waking up adding threads after each insertion, consumer threads might never get a chance to run. LinkedBlockingQueue is different: adding threads and consuming threads are controlled by their own ReentrantLocks, so they are not mutually exclusive. The put lock therefore only needs to manage its own adding threads: an adding thread directly wakes up another waiting adding thread, and if none is waiting it simply finishes. A waiting adding thread only stays suspended while the queue is full; the offer method never suspends at all but returns immediately, and only the put method blocks when the queue is full. The same pattern applies symmetrically to the consumer side. This is why the throughput of LinkedBlockingQueue is comparatively high.

Why is the consumer thread woken up only when c == 0?

if (c == 0)           // c is the queue size before the new element was added
    signalNotEmpty(); // the queue now has data, so wake up a consumer

This is because once a consuming thread has been woken up it keeps consuming (as long as there is data), so the value of count changes continuously, while c is the queue size before the element was added. Here c can only be 0 or greater. If c == 0, the queue was empty, which means consumers had stopped earlier and one may be waiting on the notEmpty condition object; after the insertion the size becomes c + 1 = 1, so a waiting consumer (if any) is woken up directly, and if none is waiting the call simply ends and the data waits for the next consume operation. If c > 0, no consumer is woken and the element waits for the next consume call (poll, take, remove). Why not signal when c > 0? Remember that once a consumer thread is running it keeps waking other consumer threads, just as adding threads do. If c was already greater than 0 before this insertion, it is very likely that data left over from the last consume call has not yet been consumed and that no consumer is waiting on the condition queue, so signalling in that case would achieve little. And if adding threads keep inserting elements so that c stays above 0, the elements simply wait for the next consume operation (poll, take, remove).

3 Blocking adding elements: the principle of the put method

The element-adding methods are add, offer and put; having covered the non-blocking add and offer above, we now look at the blocking put method.

/** 
 * Insert the specified element at the tail of this queue,
 * waiting if necessary for space to become available.
 */  
public void put(E e) throws InterruptedException {  
    // Adding a null element is not allowed
    if (e == null)  
        throw new NullPointerException();  
    int c = -1;  
    final ReentrantLock putLock = this.putLock;  
    final AtomicInteger count = this.count;  
    // Acquire the put lock interruptibly
    putLock.lockInterruptibly();  
    try {  
        try {  
            // While the queue is full...
            while (count.get() == capacity)  
                // ...block the adding thread
                notFull.await();  
        } catch (InterruptedException ie) {  
            // If interrupted, pass the signal on to another adding thread
            notFull.signal();   
            throw ie;  
        }  
        // Add the element (older JDK versions call insert; newer ones call enqueue)
        insert(e);  
        // c is the queue size before this insertion
        c = count.getAndIncrement();  
        // If the queue is still not full after this insertion...
        if (c + 1 < capacity)  
            // ...wake up the next adding thread
            notFull.signal();  
    } finally {  
        // Release the put lock
        putLock.unlock();  
    }  
    // c == 0 means the queue was empty before this insertion, so wake up a consumer
    if (c == 0)  
        signalNotEmpty();  
}  

Summary of the add (put) flow:

1. Acquire putLock

2. If the queue is full, wait (notFull.await())

3. Enqueue the element

4. After adding the element, if the queue is still not full, notify other producers that they can add elements (notFull.signal())

5. Release putLock

6. If the queue was empty before the insertion (c == 0), notify a consumer (signalNotEmpty())
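To see this blocking behaviour from the caller's side, here is a hedged sketch (the capacity, timing and names are illustrative assumptions): a capacity-1 queue is filled, and the second put only returns after another thread takes an element and frees a slot:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PutBlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.put("A"); // the queue is now full

        // a helper thread frees one slot after roughly one second
        new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(1);
                System.out.println("taken: " + queue.take()); // this take signals notFull
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        long start = System.nanoTime();
        queue.put("B"); // blocks on notFull.await() until the take above runs
        long waitedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.println("put returned after ~" + waitedMs + " ms");
    }
}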

Illustration: blocking process of put thread

When adding elements, if the queue is full, then the newly arrived put thread will be added to the notFull condition waiting queue, as shown in the following figure:


Figure: When the queue is full, the put thread joins the notFull waiting queue

The notFull condition queue is associated with the putLock explicit lock, not with the takeLock explicit lock. The putLock explicit lock is responsible for synchronizing element insertion. The relevant code is as follows:

  /** putLock explicit lock */
  private final ReentrantLock putLock = new ReentrantLock();

  /** The notFull condition queue is associated with the putLock explicit lock */
  private final Condition notFull = putLock.newCondition();

4 Non-blocking removal: principle of poll method

The poll method is relatively simple: if the queue has no data it returns null.

If the queue has data, poll removes and returns the head element.

After removing the element, if there is still data in the queue, it wakes up the next consumer thread waiting on the notEmpty condition object so that it can also fetch data.

Finally, it checks if (c == capacity); if that is true, it wakes up a producing (adding) thread. The reasoning mirrors the earlier if (c == 0) check: only when the queue was full can there be adding threads waiting on the notFull condition object.


public E poll() {
    // Get the current size of the queue
    final AtomicInteger count = this.count;
    if (count.get() == 0)  // no elements: return null immediately
        return null;
    E x = null;
    int c = -1;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        // Check whether the queue has data
        if (count.get() > 0) {
            // If so, remove the head node and take its value
            x = dequeue();
            // c is the queue size before this removal
            c = count.getAndDecrement();
            // If the queue is still not empty, wake up the next consumer waiting on notEmpty
            if (c > 1)
                notEmpty.signal();
        }
    } finally {
        takeLock.unlock();
    }
    // If c equals capacity, the queue was full before this removal,
    // so there may be adding threads waiting on the notFull condition object
    if (c == capacity)
        signalNotFull();
    return x;
}
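From the caller's perspective, the behaviour looks like the following minimal sketch (illustrative only); the timed poll(timeout, unit) overload shown at the end is a separate blocking variant that is not analyzed above:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(10);

        System.out.println(queue.poll());   // null: empty queue, returns immediately

        queue.offer("A");
        System.out.println(queue.poll());   // "A": removes and returns the head

        // the timed overload waits up to the given timeout before giving up
        System.out.println(queue.poll(200, TimeUnit.MILLISECONDS)); // null after ~200 ms
    }
}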

The dequeue operation removes an element from the head of the list:



private E dequeue() {
    Node<E> h = head;        // the current head node (a dummy node carrying no data)
    Node<E> first = h.next;  // the node after the head: the one to remove
    h.next = h;              // help GC: the old head points to itself, unlinking it
    head = first;            // the removed node's successor becomes the new head
    E x = first.item;        // take the value of the removed node
    first.item = null;       // clear it: as the new dummy head it must not carry data,
                             // which effectively removes the first data-carrying node
    return x;
}


5 Blocking removal of elements: the principle of the take method

The take method is a blocking, interruptible removal method. It mainly does two things:

First, if the queue has no data, the current thread is suspended in the waiting queue of the notEmpty condition object; if there is data, the head node is removed and its value returned, and a subsequent consumer thread is woken up if data remains.

Second, it tries to wake up an adding thread waiting on the notFull condition object when the queue was full before this removal. Of remove, poll and take, only take has blocking behaviour.

  public E take() throws InterruptedException {
      E x;
      int c = -1;
      // Get the current size of the queue
      final AtomicInteger count = this.count;
      final ReentrantLock takeLock = this.takeLock;
      takeLock.lockInterruptibly();  // interruptible
      try {
          // If the queue has no data, suspend the current thread
          // in the waiting queue of the notEmpty condition object
          while (count.get() == 0) {
              notEmpty.await();
          }
          // There is data: remove the head node and take its value
          x = dequeue();
          c = count.getAndDecrement();  // c is the size before this removal
          if (c > 1)
              notEmpty.signal();  // data remains, so wake up the next consumer thread
      } finally {
          takeLock.unlock();
      }
      // If the queue was full before this removal, wake up an adding thread
      // waiting on the notFull condition object
      if (c == capacity)
          signalNotFull();
      return x;
  }

The remove(Object) method returns true on success and false on failure; the poll method returns the removed value on success and null if it fails or there is no data.

Illustration: the blocking process of the take thread


Figure: The take thread is blocked when the queue is empty

The notEmpty condition queue is associated with the takeLock explicit lock, not with the putLock explicit lock. The takeLock explicit lock is responsible for synchronizing element removal. The relevant code is as follows:

  /** takeLock explicit lock */
  private final ReentrantLock takeLock = new ReentrantLock();
  /** The notEmpty condition queue is associated with the takeLock explicit lock */
  private final Condition notEmpty = takeLock.newCondition();

6 Extract elements: peek and element

Let's look at two more methods that retrieve (but do not remove) the head element: peek and element.

  public E element() {
      E x = peek();  // simply delegates to peek
      if (x != null)
          return x;
      else
          throw new NoSuchElementException();  // throws if there is no data
  }

The peek method reads the first added element directly through the head node, which is efficient; if no element exists it returns null.


/** 
 * Retrieve, but do not remove, the head of this queue;
 * return null if this queue is empty.
 */  
public E peek() {  
    // If the element count is 0, return null
    if (count.get() == 0)  
        return null;  
    final ReentrantLock takeLock = this.takeLock;  
    // Acquire the take lock
    takeLock.lock();  
    try {  
        // The node after the head node is the first added element
        Node<E> first = head.next;  
        // Return its value if it exists, otherwise null
        if (first == null)  
            return null;  
        else  
            return first.item;  
    } finally {  
        // Release the lock
        takeLock.unlock();  
    }  
} 

As the code shows, the head node itself carries no data after initialization; it only serves as a dummy head that makes operations on the linked list easier. peek therefore reads the node after the head node and returns its value: if that node does not exist it returns null, otherwise it returns the node's item.
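A minimal sketch (illustrative, not from the original article) contrasting peek and element on an empty and a non-empty queue:

import java.util.NoSuchElementException;
import java.util.concurrent.LinkedBlockingQueue;

public class PeekElementDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(10);

        System.out.println(queue.peek());  // null: empty queue, no exception

        try {
            queue.element();               // element() calls peek() and ...
        } catch (NoSuchElementException e) {
            System.out.println("element() threw NoSuchElementException"); // ... throws when empty
        }

        queue.offer("A");
        System.out.println(queue.peek());  // "A", but the element is NOT removed
        System.out.println(queue.size());  // 1: peek does not dequeue
    }
}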

7 The implementation principle of the remove method

The removal methods are remove, poll and take; poll and take were analyzed above, so only remove(Object) remains.


public boolean remove(Object o) {
    if (o == null) return false;
    fullyLock();  // lock both putLock and takeLock at the same time
    try {
        // Loop over the list looking for the element to remove
        for (Node<E> trail = head, p = trail.next;
             p != null;
             trail = p, p = p.next) {
            if (o.equals(p.item)) {   // found the node to remove
                unlink(p, trail);     // unlink it
                return true;
            }
        }
        return false;
    } finally {
        fullyUnlock();  // release both locks
    }
}

// Acquire both locks
void fullyLock() {
    putLock.lock();
    takeLock.lock();
}

// Release both locks
void fullyUnlock() {
    takeLock.unlock();
    putLock.unlock();
}

 

The remove method deletes the specified object. Why are putLock and takeLock both locked at the same time? Because the node to be removed may be anywhere in the list, including at the head or the tail; to avoid thread-safety problems with concurrent puts and takes, both locks must be held.
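A small illustrative sketch of remove(Object) from the caller's side (the element values are assumptions made for the example):

import java.util.concurrent.LinkedBlockingQueue;

public class RemoveDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(10);
        queue.offer("A");
        queue.offer("B");
        queue.offer("C");

        System.out.println(queue.remove("B"));  // true: "B" found and unlinked (both locks held internally)
        System.out.println(queue.remove("X"));  // false: not present, nothing removed
        System.out.println(queue);              // [A, C]
    }
}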

8 Differences between LinkedBlockingQueue and ArrayBlockingQueue

Through the above analysis, we are familiar with the basic use and internal implementation principles of LinkedBlockingQueue and ArrayBlockingQueue. Here we will summarize the differences between them.

1. The queue size differs. ArrayBlockingQueue is bounded and its size must be specified at construction, while LinkedBlockingQueue can be bounded or effectively unbounded (Integer.MAX_VALUE). In the unbounded case, if elements are added faster than they are removed, problems such as memory overflow may occur.

2. The data storage container differs. ArrayBlockingQueue stores its data in an array, while LinkedBlockingQueue stores it in a linked list whose elements are connected by Node objects.

3. Since ArrayBlockingQueue uses an array as its storage container, inserting or removing elements creates or destroys no additional object instances, whereas LinkedBlockingQueue creates an extra Node object for every insertion. When large volumes of data are processed concurrently over long periods, this can put noticeably more pressure on the GC.

4. The locking of add and remove operations differs. In ArrayBlockingQueue the locks are not separated: add and remove share the same ReentrantLock. In LinkedBlockingQueue they are separated: putLock is used for adding and takeLock for removing. This greatly improves the queue's throughput, because under high concurrency producers and consumers can operate on the queue in parallel, improving the concurrency performance of the queue as a whole.
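A minimal sketch (illustrative) of the first difference, the capacity rules of the two queues:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueComparisonDemo {
    public static void main(String[] args) {
        // ArrayBlockingQueue: the capacity is mandatory and fixed at construction time
        BlockingQueue<String> arrayQueue = new ArrayBlockingQueue<>(1000);

        // LinkedBlockingQueue: capacity is optional; without it the bound is Integer.MAX_VALUE,
        // which is effectively unbounded and risks OutOfMemoryError if producers outpace consumers
        BlockingQueue<String> unboundedLinkedQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> boundedLinkedQueue = new LinkedBlockingQueue<>(1000);

        System.out.println(arrayQueue.remainingCapacity());           // 1000
        System.out.println(unboundedLinkedQueue.remainingCapacity()); // 2147483647
        System.out.println(boundedLinkedQueue.remainingCapacity());   // 1000
    }
}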



