Common multithreading interview questions

1. Common locking strategies

  1. Optimistic locking vs. pessimistic locking
    Pessimistic locking assumes the worst case: every time a thread goes to access the data, it assumes someone else may modify it, so it locks before every access. Any other thread that wants the data then blocks until the lock is released.
    Optimistic locking assumes that, under normal circumstances, the data will not suffer concurrent conflicts, so it does not lock up front. Only when the update is committed does it check whether a conflict actually occurred; if one is found, it returns an error and lets the caller decide what to do.
    synchronized initially uses an optimistic locking strategy; when lock contention turns out to be frequent, it automatically switches to a pessimistic strategy.

  2. Read-write lock
    A readers-writer lock is, as the name implies, a lock where the locking operation must additionally declare its read or write intent. Multiple readers do not exclude one another, while a writer is mutually exclusive with everyone.
    In other words, a read-write lock treats read operations and write operations differently. The Java standard library provides the ReentrantReadWriteLock class, which implements a read-write lock.

    1. The ReentrantReadWriteLock.ReadLock class represents the read lock. It provides lock / unlock methods for locking and unlocking.
    2. The ReentrantReadWriteLock.WriteLock class represents the write lock. It likewise provides lock / unlock methods.

    Read-write locks are especially suitable for "frequent reads, infrequent writes" scenarios.
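The "frequent reads, infrequent writes" idea can be sketched with ReentrantReadWriteLock; the class and method names below are illustrative, not from the original text:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch: a counter guarded by a read-write lock.
public class RwCounter {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value = 0;

    public int get() {
        rwLock.readLock().lock();       // readers do not block each other
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void increment() {
        rwLock.writeLock().lock();      // a writer excludes readers and writers
        try {
            value++;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwCounter c = new RwCounter();
        c.increment();
        System.out.println(c.get());    // prints 1
    }
}
```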

  3. Heavyweight locks vs. lightweight locks
    The operating system implements mutex locks on top of atomic CPU instructions, and the JVM implements keywords and classes such as synchronized and ReentrantLock on top of the mutexes the OS provides.
    Heavyweight lock: the locking mechanism relies heavily on the mutex provided by the OS.
    Lightweight lock: the locking mechanism avoids the OS mutex as much as possible and tries to do its work in user-mode code, falling back to the mutex only when it really has to.
    synchronized starts out as a lightweight lock; if lock contention becomes severe, it turns into a heavyweight lock.


  4. Spin lock
    Under the traditional approach, a thread that fails to grab a lock enters the blocked state and gives up the CPU, and it may take a long time before it is scheduled again. In practice, though, in most cases the lock is released shortly after a failed attempt, so giving up the CPU is often unnecessary. A spin lock handles exactly this situation.
    Pseudocode for a spin lock: while (tryAcquire(lock) == FAILURE) { /* spin */ }
    If acquisition fails, the thread immediately tries again, looping until it succeeds. The second attempt comes a very short time after the first failure, so as soon as another thread releases the lock, the spinning thread can acquire it at the first opportunity.
    A spin lock is a typical implementation of a lightweight lock.
    Advantage: the CPU is never given up and no thread blocking or scheduling is involved, so the lock is acquired the moment it is released.
    Disadvantage: if another thread holds the lock for a long time, the spinner keeps consuming CPU resources (whereas a suspended, waiting thread consumes none).
    The lightweight-lock strategy inside synchronized is most likely implemented with spin locks.
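The pseudocode above can be made concrete with a CAS-based spin lock; this is an illustrative sketch, not a production lock:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock built on CAS: lock() busy-waits instead of blocking.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Retry the CAS until it succeeds, never giving up the CPU.
        while (!locked.compareAndSet(false, true)) {
            // spin
        }
    }

    public void unlock() {
        locked.set(false);
    }

    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                counter++;          // critical section
                lock.unlock();
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);   // prints 20000
    }
}
```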

  5. Fair lock vs. unfair lock
    Fair lock: obeys "first come, first served". If B arrives before C, then when A releases the lock, B acquires it before C.
    Unfair lock: does not obey "first come, first served"; either B or C may acquire the lock.
    Note: thread scheduling inside the operating system can be regarded as random, so without extra constraints a lock is an unfair lock. To build a fair lock, you need an additional data structure that records the order in which threads arrived.
    synchronized is an unfair lock.
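Unlike synchronized, ReentrantLock lets you choose fairness at construction time; a minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

// ReentrantLock(true) builds a fair (FIFO) lock; the default is unfair.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fair = new ReentrantLock(true);   // first come, first served
        ReentrantLock unfair = new ReentrantLock();     // default: unfair
        System.out.println(fair.isFair());    // prints true
        System.out.println(unfair.isFair());  // prints false
    }
}
```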

  6. Reentrant lock vs. non-reentrant lock
    Reentrant lock: the same thread is allowed to acquire the same lock multiple times.
    In Java, every lock whose name starts with Reentrant is a reentrant lock, and all of the ready-made Lock implementation classes provided by the JDK, as well as the synchronized keyword, are reentrant. The mutex provided by Linux, by contrast, is a non-reentrant lock.
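Reentrancy of synchronized can be demonstrated with two nested synchronized methods; a non-reentrant lock would deadlock here:

```java
// outer() already holds the monitor on `this`, yet it can call inner(),
// which acquires the same monitor again, without deadlocking.
public class ReentrancyDemo {
    public synchronized void outer() {
        inner();                 // re-acquires the same lock; no deadlock
    }

    public synchronized void inner() {
        System.out.println("entered inner while holding the lock");
    }

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
    }
}
```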

Related interview questions

  1. How do you understand optimistic locking and pessimistic locking, and how do you implement them?

Pessimistic locking assumes the probability that multiple threads accessing the same shared variable will conflict is high, so it actually locks before every access to the shared variable.
Optimistic locking assumes that probability is low, so it does not actually lock; it accesses the data directly and, while doing so, checks whether a conflict has occurred.
Pessimistic locking is implemented by locking first (for example, with the mutex provided by the operating system) and operating on the data only after the lock is obtained, waiting if the lock cannot be obtained. Optimistic locking can be implemented by introducing a version number and using it to detect whether the current access conflicts with another.
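The version-number scheme can be sketched as follows; the class and method names (VersionedValue, tryUpdate) are illustrative assumptions, not from the original text:

```java
import java.util.concurrent.atomic.AtomicLong;

// Optimistic update: read the version, compute, then commit only if the
// version is still the one we read; otherwise report a conflict.
public class VersionedValue {
    private final AtomicLong version = new AtomicLong(0);
    private volatile int value = 0;

    // Returns true if the update committed without a conflict.
    public synchronized boolean tryUpdate(long expectedVersion, int newValue) {
        if (version.get() != expectedVersion) {
            return false;              // someone else committed first
        }
        value = newValue;
        version.incrementAndGet();     // bump the version on every commit
        return true;
    }

    public long currentVersion() { return version.get(); }
    public int get() { return value; }

    public static void main(String[] args) {
        VersionedValue v = new VersionedValue();
        long seen = v.currentVersion();
        System.out.println(v.tryUpdate(seen, 42));   // true: no conflict
        System.out.println(v.tryUpdate(seen, 99));   // false: version is stale
        System.out.println(v.get());                 // 42
    }
}
```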

  2. Describe read-write locks.

A read-write lock locks read operations and write operations separately: read locks do not exclude each other, write locks are mutually exclusive with write locks, and write locks are mutually exclusive with read locks. It is mainly used in "frequent read, infrequent write" scenarios.

  3. What is a spin lock, why use a spin-lock strategy, and what are the disadvantages?

If lock acquisition fails, try again immediately and loop until the lock is acquired; the retry comes a very short time after the failure, so once another thread releases the lock it can be grabbed at the first opportunity.
Compared with suspending and waiting for the lock:
Advantage: no CPU resources are given up, and the lock is acquired the moment it is released, which is more efficient. This is very useful when locks are held only briefly.
Disadvantage: if the lock is held for a long time, CPU resources are wasted on spinning.

2. CAS

CAS is short for compare-and-swap. Given a memory location V, an expected old value A, and a new value B, a CAS operation performs three steps:

  1. Compare the value at V with A (compare).
  2. If they are equal, write B into V (swap).
  3. Return whether the operation succeeded.

When multiple threads perform CAS on the same resource at the same time, only one thread's operation can succeed, but the others are not blocked; they simply receive a failure result.
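The JDK exposes CAS through the atomic classes; compareAndSet maps directly onto the three steps above (the atomic's current value plays the role of V):

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet(expected, update): write only if the current value == expected.
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);
        System.out.println(v.compareAndSet(10, 11)); // true: 10 == 10, V becomes 11
        System.out.println(v.compareAndSet(10, 12)); // false: V is now 11, nothing written
        System.out.println(v.get());                 // 11
    }
}
```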

Related interview questions

  1. Explain the CAS mechanism you understand

CAS stands for compare-and-swap. It is equivalent to completing the three steps "read memory, compare for equality, modify memory" as a single atomic operation. Essentially, it relies on CPU instruction support.

  2. How do you solve the ABA problem?

Attach a version number to the data being modified. When CAS compares the data's current value with the old value, it also checks whether the version number matches expectations. If the current version number equals the previously read one, the modification is actually performed and the version number is incremented; if the current version number is greater than the one previously read, the operation is considered to have failed.
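The JDK implements exactly this idea in AtomicStampedReference, which pairs the value with a stamp (version) so an A → B → A change is still detected:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// CAS succeeds only when BOTH the value and the stamp match expectations.
public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();                        // 0

        // Another thread does A -> B -> A, bumping the stamp each time:
        ref.compareAndSet(100, 200, stamp, stamp + 1);
        ref.compareAndSet(200, 100, stamp + 1, stamp + 2);

        // The value is back to 100, but our stamp (0) is stale, so CAS fails:
        System.out.println(ref.compareAndSet(100, 300, stamp, stamp + 1)); // false
        System.out.println(ref.getReference());                            // 100
    }
}
```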

3. Synchronized principle

Combining the lock strategies above, we can summarize synchronized's characteristics (considering only JDK 1.8):

  1. It starts as an optimistic lock; under frequent lock conflicts it converts to a pessimistic lock.
  2. It starts as a lightweight lock; if the lock is held for a long time, it converts into a heavyweight lock.
  3. The lightweight-lock implementation most likely uses a spin-lock strategy.
  4. It is an unfair lock.
  5. It is a reentrant lock.
  6. It is not a read-write lock.

The JVM divides synchronized locks into four states: lock-free, biased lock, lightweight lock, and heavyweight lock, upgrading through them in turn as contention increases.
Lock elimination:
Some code uses synchronized even though it never runs in a multi-threaded environment (StringBuffer, for example). If such code only ever executes on a single thread, its locking and unlocking operations are unnecessary and wasteful. The compiler and JVM together judge whether the lock can be eliminated, and if so, remove it outright.
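A sketch of the StringBuffer case: the buffer below never escapes the method, so the JIT can prove no other thread ever touches it and elide its synchronized locking (the class and method names are illustrative):

```java
// StringBuffer's methods are synchronized, but `sb` is purely local here,
// so escape analysis lets the JIT remove the lock/unlock pairs entirely.
public class LockElisionDemo {
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();   // never shared across threads
        sb.append(a);                           // synchronized, but elidable
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("hello, ", "world"));
    }
}
```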

Related interview questions

  1. What is a biased lock?

A biased lock does not really lock at all; it merely records a mark in the lock object's header noting which thread the lock belongs to. As long as no other thread competes for the lock, no actual locking operation is performed, which reduces program overhead. The moment another thread genuinely contends for it, the biased state is revoked and the lock enters the lightweight-lock state.

4. Callable interface

Callable is an interface. It effectively encapsulates a task with a "return value".
Steps to use:

  1. Create an anonymous inner class implementing the Callable interface. Callable takes a generic parameter indicating the type of the return value.
  2. Override Callable's call method to perform the computation, returning the result directly via the return value.
  3. Wrap the Callable instance in a FutureTask.
  4. Create a thread, passing the FutureTask into the Thread constructor. The new thread then executes the call method of the Callable inside the FutureTask and stores the result in the FutureTask object.
  5. Calling futureTask.get() in the main thread blocks until the new thread finishes the computation, then retrieves the result from the FutureTask.

Example: compute the sum from 1 to 1000

Callable<Integer> callable = new Callable<Integer>() {
    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (int i = 1; i <= 1000; i++) {
            sum += i;
        }
        return sum;
    }
};
FutureTask<Integer> futureTask = new FutureTask<>(callable);
Thread t = new Thread(futureTask);
t.start();
int result = futureTask.get();   // throws InterruptedException / ExecutionException
System.out.println(result);      // prints 500500

Related interview questions

Introduce what Callable is

Callable is an interface; it effectively encapsulates a task with a "return value".
Callable and Runnable both describe a "task": Callable describes a task that returns a value, while Runnable describes one that does not.
Callable usually needs to be used together with FutureTask, which holds Callable's return result. Because the Callable often runs on another thread and it is uncertain when it will finish, FutureTask takes care of waiting for the result.
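The same summing Callable can also be submitted to an ExecutorService, which wraps it in a Future for you; this sketch assumes a single-thread pool purely for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// submit(Callable) returns a Future; get() blocks until the result is ready.
public class CallablePoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(() -> {
            int sum = 0;
            for (int i = 1; i <= 1000; i++) {
                sum += i;
            }
            return sum;
        });
        System.out.println(future.get());   // prints 500500
        pool.shutdown();
    }
}
```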

Origin blog.csdn.net/m0_71645055/article/details/131947872