Concurrency principles behind common keywords

Understanding synchronized
 
The synchronized keyword solves the problem of synchronizing access to a resource among multiple threads: it guarantees that, at any moment, at most one thread executes the method or code block it modifies.
In early versions of Java, synchronized was a heavyweight lock and therefore inefficient. The monitor lock it relies on is implemented via the operating system's Mutex Lock, and Java threads are mapped onto the operating system's native threads. Suspending or waking a thread therefore requires the operating system's help, which means switching between user mode and kernel mode; this transition takes relatively long and is costly, which is why early synchronized performed poorly. Since Java 6, the JVM has heavily optimized synchronized at the virtual-machine level, so its lock efficiency is now quite good. The JDK 1.6 lock implementation introduced a number of optimizations, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locking, and lightweight locks, all of which reduce the overhead of lock operations.
The synchronized keyword is mainly used in three ways:
  • On an instance method: locks the current object instance; a thread must acquire the instance's lock before entering the synchronized code.
  • On a static method: locks the current class, so it applies to all instances of the class, because static members belong to the class rather than to any instance (static marks a resource of the class; no matter how many objects are created, there is only one copy). Therefore, if thread A calls a non-static synchronized method on an object instance, thread B is still allowed to call a static synchronized method of that object's class; no mutual exclusion occurs, because the static synchronized method takes the lock of the current class while the non-static synchronized method takes the lock of the current object instance.
  • On a code block: locks the specified object; a thread must acquire the given object's lock before entering the synchronized block.
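The three forms above can be sketched as follows (the class and field names here are illustrative, not from the original post):

```java
public class SyncForms {
    private static int classCounter = 0;
    private int instanceCounter = 0;
    private int blockCounter = 0;
    private final Object lock = new Object();

    // 1. Instance method: implicitly locks the current instance (this)
    public synchronized void incInstance() {
        instanceCounter++;
    }

    // 2. Static method: implicitly locks the Class object (SyncForms.class)
    public static synchronized void incClass() {
        classCounter++;
    }

    // 3. Block: locks the explicitly given object
    public void incBlock() {
        synchronized (lock) {
            blockCounter++;
        }
    }

    public int instanceCount() { return instanceCounter; }
    public int blockCount() { return blockCounter; }
    public static int classCount() { return classCounter; }

    public static void main(String[] args) {
        SyncForms s = new SyncForms();
        s.incInstance();       // takes the instance lock
        SyncForms.incClass();  // takes the class lock: does not exclude incInstance
        s.incBlock();          // takes the lock object's monitor
        System.out.println(s.instanceCount() + " " + classCount() + " " + s.blockCount());
    }
}
```

Because incInstance locks the instance and incClass locks the class, a thread inside one does not block a thread inside the other, exactly as described above.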
Common methods used with synchronized
wait: after calling wait, the calling thread releases the lock and blocks, waiting to be woken.
notify: wakes up one thread in the wait state; which thread is woken is effectively random, decided by the JVM.
notifyAll: wakes up all threads in the wait state.
sleep: the thread does not participate in CPU scheduling for the specified time, but it does not release any lock it holds; when the sleep time ends, the thread becomes runnable again and continues once it gets CPU time.
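The wait/notify pair can be sketched with a one-shot handoff between two threads (a minimal illustration; class and method names are made up for this example):

```java
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;
    private int result = 0;

    public int awaitResult() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {      // loop guards against spurious wakeups
                monitor.wait();   // releases the monitor while waiting
            }
            return result;
        }
    }

    public void publish(int value) {
        synchronized (monitor) {
            result = value;
            ready = true;
            monitor.notify();     // wakes one thread waiting on this monitor
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread waiter = new Thread(() -> {
            try {
                System.out.println("got: " + demo.awaitResult());
            } catch (InterruptedException ignored) { }
        });
        waiter.start();
        demo.publish(42);
        waiter.join();
    }
}
```

Note that wait releases the lock while the thread waits, whereas sleep would keep holding it.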
 
Having covered the basics of synchronized, let us look at its underlying implementation.
A synchronized block is implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. When a thread executes monitorenter, it attempts to acquire the lock, i.e. to take ownership of the object's monitor (a monitor exists in the object header of every Java object; synchronized acquires locks this way, which is why any Java object can serve as a lock). Acquisition succeeds when the lock counter is 0, and the counter is then set to 1; each reentrant acquisition adds 1. When the corresponding monitorexit executes, the counter is decremented, and when it reaches 0 the lock is released. If a thread fails to acquire the object's lock, it blocks until another thread releases the lock.
A synchronized method uses no monitorenter/monitorexit instructions; instead the method carries the ACC_SYNCHRONIZED flag, which marks it as a synchronized method. The JVM checks this access flag and, if it is set, performs the corresponding monitor acquisition and release around the method call.
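Both cases can be seen in one small class (illustrative names): compiling it and running `javap -c` on the result shows monitorenter/monitorexit inside increment(), while get() has no such instructions and is instead flagged ACC_SYNCHRONIZED:

```java
public class MonitorDemo {
    private int count = 0;

    public void increment() {
        synchronized (this) {   // compiles to monitorenter
            count++;
        }                       // compiles to monitorexit (plus one more on the exception path)
    }

    public synchronized int get() {  // no monitorenter/exit: the method is flagged ACC_SYNCHRONIZED
        return count;
    }

    public static void main(String[] args) {
        MonitorDemo d = new MonitorDemo();
        d.increment();
        System.out.println(d.get()); // prints 1
    }
}
```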
 
As mentioned, the JDK 1.6 lock implementation introduced a number of optimizations, such as biased locking, lightweight locks, spin locks, adaptive spin locks, lock elimination, and lock coarsening, to reduce the overhead of lock operations.
There are four main lock states, in order: unlocked, biased lock, lightweight lock, and heavyweight lock. As contention intensifies, the lock escalates through these states. Note that a lock can be upgraded but not downgraded; this strategy improves the efficiency of acquiring and releasing locks.
① biased locking
The purpose of introducing the purpose of biased locking and lock much like the introduction of lightweight, they are for the absence of multi-threaded premise of competition, reducing the traditional heavyweight performance operating system lock mutex produce consumption. But the difference is: lightweight lock using CAS operates without competition case to replace the use mutex. The biased locking in the absence of competitive situation would have eliminated the entire synchronization.
Biased locking of "bias" is the eccentric side, it means will tend to get it the first thread, if in the next execution, the lock is not acquired by another thread, then the thread holding the lock bias You do not need to be synchronized! About biased locking principle can be viewed "in-depth understanding of the Java Virtual Machine: JVM advanced features and best practices," second edition, Chapter 13 Section lock optimization.
But the more intense competition for places locks, lock bias on the failure, because this situation is likely to lock a thread for each application are different, and therefore should not be used biased locking In this case, otherwise would be wasted, you need to pay attention that tend to lock after the failure, and will not expand immediately heavyweight lock, but the first upgrade to lightweight lock.
② Lightweight locks
If biased locking fails, the virtual machine does not immediately upgrade to a heavyweight lock; it first tries an optimization called a lightweight lock (added in 1.6). A lightweight lock is not intended to replace the heavyweight lock; its premise is the absence of multi-threaded contention, and its goal is to reduce the performance cost of the traditional heavyweight lock's operating-system mutex, since a lightweight lock does not need to acquire a mutex. Both locking and unlocking a lightweight lock use CAS operations. For the details of lightweight lock acquisition and release, see "Understanding the Java Virtual Machine: Advanced JVM Features and Best Practices", 2nd edition, Chapter 13, the section on lock optimization.
The empirical observation on which lightweight locks improve synchronization performance is: "for the vast majority of locks, there is no contention during the entire synchronization period." If there is no contention, a lightweight lock uses CAS operations and avoids the overhead of a mutex. But if there is contention, CAS operations are performed in addition to the mutex cost, so under contention a lightweight lock is actually slower than a traditional heavyweight lock. If contention is heavy, the lightweight lock quickly inflates into a heavyweight lock.
③ Spin locks and adaptive spinning
When lightweight locking fails, the virtual machine applies another optimization, known as spinning, to avoid actually suspending the thread at the operating-system level.
The biggest performance impact of blocking-based mutex synchronization is that suspending and resuming a thread must be completed in kernel mode (and the transition from user mode to kernel mode takes time).
In general a lock is not held for very long, so suspending and resuming a thread just for that short time is not worth the cost. The virtual machine developers therefore reasoned: "Can we make the threads that request the lock later wait a little while, without being suspended, to see whether the lock-holding thread releases the lock soon?" To make a thread wait, we simply have it execute a busy loop (spin); this technique is called spinning.
Baidu Encyclopedia's explanation of spin locks:
What is a spin lock? It is a locking mechanism proposed to protect shared resources. A spin lock is quite similar to a mutex: both resolve exclusive use of a resource. Whether mutex or spin lock, at any moment there can be at most one holder, that is, at most one execution unit can hold the lock at any time. But the two differ slightly in scheduling. With a mutex, if the resource is already occupied, the requester can only go to sleep. A spin lock does not put the caller to sleep: if the lock is held by another execution unit, the caller loops, repeatedly checking whether the holder has released the lock; this is where the word "spin" comes from.
Spin locks were in fact introduced before JDK 1.6, but were off by default and had to be enabled with the -XX:+UseSpinning parameter; from JDK 1.6 onward they are enabled by default. Note that spin-waiting cannot completely replace blocking, because it still consumes processor time. If the lock is held only briefly, the effect is of course very good; if it is held long, the opposite is true. Spin-waiting must therefore be bounded: if the spin exceeds the limit without acquiring the lock, the thread should be suspended. The default spin count is 10, and it can be changed with -XX:PreBlockSpin.
In addition, JDK 1.6 introduced adaptive spin locks. The improvement they bring is that the spin time is no longer fixed; it is decided by the previous spin time on the same lock and the state of the lock's owner, so the virtual machine becomes "smarter" about when to spin.
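The spinning idea can be sketched at the user level with a CAS loop on an AtomicBoolean (this illustrates the concept only; it is not how the JVM implements its internal spin locks):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spin lock built on CAS: a thread that fails to acquire the
// lock busy-loops instead of being suspended by the operating system.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        held.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try {
                    counter[0]++;   // protected by the spin lock
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter[0]); // prints 20000
    }
}
```

As the text notes, this is only worthwhile when locks are held briefly; a real implementation would fall back to suspending the thread after a bounded number of spins.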
④ Lock elimination
Lock elimination is easy to understand: when the JIT compiler runs, if it detects that some locked shared data cannot possibly be contended, it eliminates the lock. Lock elimination saves the time spent on pointless lock requests.
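A classic illustration (a standard JVM-textbook example, not taken from this post) is a StringBuffer that never escapes a method: every append call is synchronized, but since the buffer is a local variable visible to only one thread, the JIT can prove there is no contention and elide the locks:

```java
public class LockElision {
    // sb is local and never escapes this method, so the synchronization
    // inside StringBuffer.append can be eliminated by the JIT compiler.
    public static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();
        sb.append(a).append(b).append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("a", "b", "c")); // prints "abc"
    }
}
```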
⑤ Lock coarsening
In principle, when writing code we always recommend keeping the scope of a synchronized block as small as possible: synchronize only over the actual scope of the shared data, so that the number of operations performed while holding the lock is as small as possible and, if there is contention, waiting threads can acquire the lock as soon as possible.
In most cases this principle is right. But if a series of consecutive operations repeatedly locks and unlocks the same object, that brings a lot of unnecessary performance overhead, and in that case the virtual machine coarsens the lock, widening the synchronization scope to cover the whole sequence so the lock is acquired and released only once.
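For example (again a standard textbook illustration, not from the original post), repeated append calls on the same StringBuffer each lock and unlock the buffer; the JIT can coarsen them into a single acquire/release spanning the whole sequence:

```java
public class LockCoarsening {
    // Each append() locks and unlocks sb. The JIT may coarsen these
    // four acquire/release pairs into one spanning all the calls.
    public static String build(String s) {
        StringBuffer sb = new StringBuffer();
        sb.append(s);
        sb.append(s);
        sb.append(s);
        sb.append(s);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("ab")); // prints "abababab"
    }
}
```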
 
 
 
An introduction to ReentrantLock
ReentrantLock is a reentrant lock, meaning it supports a thread repeatedly locking the same resource: a thread that has called lock() can call lock() again and acquire the lock without blocking. It also supports choosing between fair and unfair acquisition. Implementing reentrancy mainly means solving two problems. First, when a thread acquires the lock again, the lock must determine whether the acquiring thread is the thread that currently holds it; if so, the acquisition succeeds again. Second, the lock must eventually be fully released: if a thread acquired the lock n times, then only after the n-th release can other threads acquire it. This is done with a counter that is incremented on each acquisition and decremented on each release; when the count reaches 0, the lock has been released successfully. ReentrantLock can implement both fair and unfair locking; it is unfair by default, and fairness can be specified via the ReentrantLock(boolean fair) constructor. The fair implementation works by calling hasQueuedPredecessors in the tryAcquire method to check whether the synchronization queue contains a predecessor node for the lock: if it returns true, some thread requested the lock earlier than the current thread, which must wait until the predecessor threads have acquired and released the lock before it can acquire it. A fair lock avoids lock starvation, but causes many thread switches and has a large overhead; an unfair lock may cause starvation, but delivers greater throughput.
 
ReentrantLock lock = new ReentrantLock();
lock.lock();   // acquire before the try, so unlock() in finally only runs after a successful lock()
try {
     // ...
} finally {
     lock.unlock();
}
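Reentrancy can be seen directly with getHoldCount(), which reports the nesting depth for the owning thread (lock, unlock, getHoldCount, and isLocked are all real methods of java.util.concurrent.locks.ReentrantLock):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        try {
            lock.lock();           // same thread: does not block, count goes to 2
            try {
                System.out.println(lock.getHoldCount()); // prints 2
            } finally {
                lock.unlock();     // count back to 1
            }
        } finally {
            lock.unlock();         // count 0: the lock is fully released
        }
        System.out.println(lock.isLocked()); // prints false
    }
}
```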
 
The differences between ReentrantLock and synchronized
 
① Both are reentrant locks
Both are reentrant locks. A "reentrant lock" is one a thread can acquire again while already holding it. For example, if a thread has acquired an object's lock and the lock has not yet been released, it can still succeed when it tries to acquire the same object's lock again; if the lock were not reentrant, this would cause a deadlock. Each time the same thread acquires the lock, the lock counter is incremented by one, and the lock is not released until the counter drops back to zero.
② synchronized depends on the JVM, while ReentrantLock depends on the API
synchronized is implemented by the JVM; as discussed above, the virtual-machine team made many optimizations to the synchronized keyword in JDK 1.6, but those optimizations live at the virtual-machine level and are not exposed to us directly. ReentrantLock is implemented at the JDK level (i.e. at the API level: we must complete the locking with lock() and unlock() calls together with a try/finally block), so we can read its source code to see how it is implemented.
③ ReentrantLock adds advanced features that synchronized lacks
Compared with synchronized, ReentrantLock adds some advanced features, mainly these three: ① waiting can be interrupted; ② fair locking can be chosen; ③ selective notification is possible (one lock can be bound to multiple conditions).
  • ReentrantLock provides a mechanism to interrupt a thread waiting for the lock, implemented via lock.lockInterruptibly(). That is, a waiting thread can choose to give up waiting and do something else instead.
  • ReentrantLock can be specified as a fair or an unfair lock, whereas synchronized is only unfair. A fair lock means the thread that has waited longest acquires the lock first. ReentrantLock is unfair by default; fairness can be specified via the ReentrantLock(boolean fair) constructor.
  • The synchronized keyword, combined with the wait() and notify()/notifyAll() methods, can implement a wait/notification mechanism; the ReentrantLock class can do the same, but it needs the help of the Condition interface and the newCondition() method. Condition has been available since JDK 1.5 and is very flexible: for example, multiple Condition instances (i.e. object monitors) can be created on a single Lock object, and threads can register with a specified Condition, so threads can be notified selectively and scheduled more flexibly. With notify()/notifyAll(), the thread to be notified is chosen by the JVM; with ReentrantLock plus Condition instances, "selective notification" can be implemented, an important feature the Condition interface provides by default. The synchronized keyword is equivalent to a Lock object with a single Condition instance with which all threads are registered: calling notifyAll() notifies every waiting thread, which is very inefficient, whereas Condition's signalAll() method wakes only the waiting threads registered with that Condition instance.
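Selective notification can be sketched with a bounded buffer that uses two Condition instances on one lock, so put() wakes only consumers and take() wakes only producers (a simplified version of the pattern shown in the Condition Javadoc; names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();   // only producers queue on this condition
            }
            items.addLast(x);
            notEmpty.signal();     // selectively wake a consumer, not the producers
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            int x = items.removeFirst();
            notFull.signal();      // selectively wake a producer
            return x;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer buf = new BoundedBuffer(1);
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) buf.put(i); }
            catch (InterruptedException ignored) { }
        });
        producer.start();
        System.out.println(buf.take() + " " + buf.take() + " " + buf.take()); // prints "1 2 3"
        producer.join();
    }
}
```

With a single synchronized monitor, put would have to notifyAll and wake waiting producers too; the two Condition instances avoid those useless wakeups.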
 
Let's talk about the volatile keyword
 
The volatile keyword has two main roles: it guarantees the visibility of a variable, and it prevents instruction reordering. Let us explain both in detail.
Before JDK 1.2, the Java memory model implementation always read variables from main memory (i.e. shared memory), and no special care was needed. Under the current Java memory model, a thread may keep a variable in local memory (such as a machine register) instead of reading and writing it directly in main memory. This can lead to one thread modifying the value of a variable in main memory while another thread keeps using a stale copy of that variable's value in a register, resulting in inconsistent data.
To solve this problem, we declare the variable as volatile. When a volatile variable is written, the JMM flushes the shared variable's value in the thread's local memory to main memory; when a volatile variable is read, the JMM invalidates the thread's corresponding local memory, so the thread next reads the shared variable from main memory. This guarantees the variable's visibility.
Next, preventing instruction reordering.
To improve performance when executing a program, compilers and processors typically reorder instructions:
  1. Compiler reordering: without changing the semantics of a single-threaded program, the compiler may rearrange the execution order of statements;
  2. Processor reordering: if no data dependency exists, the processor may change the execution order of the machine instructions corresponding to the statements.
Instruction reordering has little impact on a single thread, since it will not affect the program's results, but it does affect the correctness of multi-threaded execution.
volatile can forbid instruction reordering. The reordering rules the JMM defines for volatile can be stated as follows. Suppose there are two operations, each of which may be an ordinary read/write, a volatile read, or a volatile write. To summarize: if the second operation is a volatile write, then no matter what the first operation is, it cannot be reordered after the volatile write; if the first operation is a volatile read, then no matter what the second operation is, it cannot be reordered before the volatile read; and if the first operation is a volatile write and the second is a volatile read, they cannot be reordered either. To implement these semantics, when generating bytecode the compiler inserts memory barriers into the instruction sequence to forbid particular kinds of reordering:
A StoreStore barrier is inserted before each volatile write.
A StoreLoad barrier is inserted after each volatile write.
A LoadLoad barrier is inserted after each volatile read.
A LoadStore barrier is inserted after each volatile read.
Together these barriers guarantee correct volatile semantics in any program, because:
StoreStore barrier: in the sequence "statement 1, StoreStore barrier, statement 2 (volatile write)", the barrier guarantees that statement 1's writes are visible to other processors before statement 2's write is performed; that is, it forbids reordering the ordinary writes above it with the volatile write below it.
StoreLoad barrier: in the sequence "statement 1 (volatile write), StoreLoad barrier, statement 2", the barrier guarantees that statement 1's write is visible to all processors before statement 2's subsequent reads are performed; it prevents the volatile write above it from being reordered with a possible volatile read/write below it.
LoadLoad barrier: in the sequence "statement 1 (volatile read), LoadLoad barrier, statement 2", the barrier guarantees that the data read in statement 1 has been read before the reads in statement 2 access it; that is, it forbids reordering the ordinary reads below it with the volatile read above it.
LoadStore barrier: in the sequence "statement 1 (volatile read), LoadStore barrier, statement 2", the barrier guarantees that the data read in statement 1 has been read before statement 2's writes are flushed; it forbids reordering the ordinary writes below it with the volatile read above it.
 
class ReorderExample {
    int a = 0;
    volatile boolean flag = false;   // volatile: establishes the ordering discussed above

    public void writer() {
        a = 1;                   // 1: ordinary write
        flag = true;             // 2: volatile write; operation 1 cannot be reordered after it
    }

    public void reader() {
        if (flag) {              // 3: volatile read
            int i = a * a;       // 4: if 3 saw flag == true, this is guaranteed to see a == 1
            // ...
        }
    }
}
 
 
Comparing the synchronized keyword and the volatile keyword
  • volatile is a lightweight form of thread synchronization, so its performance is certainly better than synchronized's. But volatile can only be applied to variables, while synchronized can modify methods and code blocks. After JavaSE 1.6, synchronized was significantly improved, mainly by the introduction of biased locking, lightweight locks, and other optimizations that reduce the cost of acquiring and releasing locks, so in actual development synchronized is still used in more scenarios.
  • Multi-threaded access to a volatile variable never blocks, whereas synchronized may block.
  • volatile guarantees the visibility of data but cannot guarantee atomicity, while synchronized guarantees both.
  • volatile mainly solves the visibility of a variable across multiple threads, while synchronized solves the synchronization of access to a resource among multiple threads.
Under what circumstances is volatile suitable?
Writes to the variable do not depend on its current value: if the computation depends on the current value, the write becomes a three-step read-compute-write operation, those three steps are not atomic, and volatile cannot guarantee atomicity.
Reads and writes of the variable are not otherwise locked: locking itself already guarantees visibility, so in that case there is no need to declare the variable volatile.
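A typical case that fits both conditions is a stop flag: the writer only assigns a constant (the write does not depend on the current value), and no lock is involved. This is a common idiom, sketched here with illustrative names:

```java
public class StopFlagDemo implements Runnable {
    // volatile guarantees the worker thread sees the update promptly;
    // without it, the loop could keep reading a stale cached value.
    private volatile boolean running = true;
    private long iterations = 0;

    @Override
    public void run() {
        while (running) {
            iterations++;   // simulated work
        }
    }

    public void stop() { running = false; }

    public static void main(String[] args) throws InterruptedException {
        StopFlagDemo worker = new StopFlagDemo();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);
        worker.stop();      // volatile write, visible to the worker
        t.join();           // the loop exits promptly
        System.out.println("stopped after " + worker.iterations + " iterations");
    }
}
```

Note that a counter like `count++` would not be safe under volatile alone, exactly because of the read-compute-write problem described above.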
 
 
 
Let's talk about ThreadLocal
 
public class ThreadLocalTest {
    static ThreadLocal<String> local = new ThreadLocal<>();

    static void print(String str) {
        System.out.println(str + ":" + local.get());
        local.remove();
    }

    public static void main(String[] args) {
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                local.set("thread1 local");
                print("thread1");
                System.out.println("thread1 after remove" + ":" + local.get());
            }
        });

        Thread thread2 = new Thread(new Runnable() {
            @Override
            public void run() {
                local.set("thread2 local");
                print("thread2");
                System.out.println("thread2 after remove" + ":" + local.get());
            }
        });
        thread1.start();
        thread2.start();
    }
}
 
public class Thread implements Runnable {
......
    // ThreadLocal values related to this thread; maintained by the ThreadLocal class
    ThreadLocal.ThreadLocalMap threadLocals = null;

    // InheritableThreadLocal values related to this thread; maintained by the InheritableThreadLocal class
    ThreadLocal.ThreadLocalMap inheritableThreadLocals = null;
......
}
Normally, a variable we create can be accessed and modified by any thread. What if we want each thread to have its own dedicated local copy of a variable? The ThreadLocal class in the JDK was provided to solve exactly this problem. ThreadLocal lets each thread bind its own value; a common metaphor is that ThreadLocal is a box that stores data, and the box holds each thread's private data.
If you create a ThreadLocal variable, each thread that accesses it gets its own local copy of the variable, which is the origin of the name ThreadLocal. Threads can use the get() and set() methods to obtain the default value or change the value of their own stored copy, thereby avoiding thread-safety issues.
The Thread class has a threadLocals and an inheritableThreadLocals field, both of type ThreadLocalMap, which we can think of as a HashMap customized for the ThreadLocal class. By default both fields are null; they are created only when the current thread first calls the set or get method of a ThreadLocal. In fact, calling those two methods ends up calling the corresponding get() and set() methods of the ThreadLocalMap.
A thread's local variables are not stored in the ThreadLocal instance, but in the thread's own threadLocals field; that is, values of ThreadLocal type are stored in the memory space of the specific thread. When ThreadLocal's set method is called, it first obtains the current thread via Thread.currentThread(), then obtains that thread's threadLocals via getMap(Thread t) (as noted above, threadLocals is a variable of type ThreadLocalMap). It then calls set on that map, with the ThreadLocal object as the key and the value passed to ThreadLocal's set method as the value. ThreadLocalMap is a map structure precisely so that each thread can be associated with multiple ThreadLocal variables. This also explains why a ThreadLocal variable gives each thread its own dedicated local value. The get method likewise first obtains the current thread, then its threadLocals, then the corresponding local value. The remove method removes the entry for this ThreadLocal instance from the current thread's threadLocals.
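This per-thread storage can be exercised with ThreadLocal.withInitial, which supplies a per-thread default value (withInitial, get, and set are real java.lang.ThreadLocal API; the example values are illustrative):

```java
public class PerThreadCounter {
    // Each thread that touches COUNTER gets its own int[] holder,
    // stored in that thread's threadLocals map under this ThreadLocal key.
    private static final ThreadLocal<int[]> COUNTER =
            ThreadLocal.withInitial(() -> new int[]{0});

    static int incrementAndGet() {
        int[] c = COUNTER.get();   // reads this thread's own copy
        c[0]++;
        return c[0];
    }

    public static void main(String[] args) throws InterruptedException {
        incrementAndGet();
        incrementAndGet();
        System.out.println("main: " + incrementAndGet());   // main: 3

        // A fresh thread sees the initial value, not main's count.
        Thread other = new Thread(() ->
                System.out.println("other: " + incrementAndGet())); // other: 1
        other.start();
        other.join();
    }
}
```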
 
ThreadLocalMap uses weak references to the ThreadLocal objects as its keys, while the references to the values are strong. So if a ThreadLocal no longer has any external strong reference, its key will be cleared during garbage collection, but the value will not. ThreadLocalMap will then contain Entries whose key is null. If we take no measures, the value can never be reclaimed by the GC, which may cause a memory leak. The ThreadLocalMap implementation already accounts for this: when set(), get(), or remove() is called, it cleans out the records whose key is null. Even so, the best practice is to call the ThreadLocal's remove() method manually after use.
Why use weak references for the keys?
Consider the converse: if strong references were used, then when the reference to a ThreadLocal object (say ThreadLocal@123456, pointed to by a strong reference TL_INT) goes away, the ThreadLocalMap itself would still hold a strong reference to ThreadLocal@123456. Unless the key is deleted manually, ThreadLocal@123456 would never be reclaimed; as long as the current thread stays alive, the objects referenced by the ThreadLocalMap would never be reclaimed, which amounts to an Entry memory leak.
 
So what is the benefit of using a weak reference?
 
With a weak reference, two references point to the ThreadLocal@123456 object: the strong reference TL_INT and the weak reference in the ThreadLocalMap's Entry. Once TL_INT is reclaimed, only the weak reference points to ThreadLocal@123456, and at the next GC, ThreadLocal@123456 will be reclaimed.
 
But here is the problem: ThreadLocal@123456 existed only as a key in the ThreadLocalMap, and now it has been reclaimed while its corresponding value has not, so the memory leak still exists! And once the key is gone and becomes null, the value can no longer be accessed. To solve this, the design of ThreadLocalMap itself already includes a remedy: on every get()/set()/remove(), the ThreadLocalMap automatically cleans up the values whose key is null. Thus the values can also be reclaimed.
 
Since weak references are used for the keys to make them auto-reclaimable, why not use weak references for the values too? The answer is obvious: if a value saved in the ThreadLocalMap disappeared after a GC, the ThreadLocalMap could not achieve its purpose of storing a variable for the whole lifetime of the thread. (Note, however, that if the key is accessed again after such a cleanup, a value can still be obtained; it is the initial value. That is, after deletion, accessing it again yields null internally and the initialization method is called anew.)
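The manual-cleanup advice matters most with thread pools, where worker threads are reused and a stale value would otherwise leak into the next task running on the same thread. A common pattern (a sketch; the pool setup and names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RemoveAfterUse {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String handle(String user) {
        CONTEXT.set(user);
        try {
            return "handled by " + CONTEXT.get();
        } finally {
            CONTEXT.remove(); // clear the pooled thread's slot so the next task starts clean
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // a single reused worker thread
        String first = pool.submit(() -> handle("alice")).get();
        // The same worker runs the next task; thanks to remove(), no "alice" leaks in.
        String leftover = pool.submit(() -> String.valueOf(CONTEXT.get())).get();
        pool.shutdown();
        System.out.println(first + " / leftover=" + leftover); // handled by alice / leftover=null
    }
}
```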
 

Origin: www.cnblogs.com/jiangtunan/p/11416090.html