JUC summary

1 Ways to create a thread

A common interview question; the answers fall into four categories:
(1) Extend the Thread class
(2) Implement the Runnable interface
(3) Implement the Callable interface: compared with implementing Runnable, a Callable can return a value and throw checked exceptions, and it is usually wrapped in a FutureTask
(4) Use a thread pool

2 Difference between the start() and run() methods

Only calling start() exhibits multi-threaded behaviour: the run() bodies of different threads execute interleaved by the scheduler. If you just call run(), the code executes synchronously on the calling thread: one thread must finish all the code inside its run() method before another can execute its own run() body.
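A minimal sketch of the difference (class and thread names are illustrative): run() is an ordinary method call on the current thread, while start() spawns a new thread that then invokes run().

```java
public class StartVsRun {
    // Records which thread actually executed the task body.
    static volatile String executedOn;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> executedOn = Thread.currentThread().getName();
        Thread worker = new Thread(task, "worker");

        // run() is an ordinary method call: the body runs on the current (main) thread.
        worker.run();
        System.out.println("after run():   " + executedOn);

        // start() creates a new thread, which then calls run() for us.
        worker.start();
        worker.join();
        System.out.println("after start(): " + executedOn);
    }
}
```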

3 What is thread safety

If your code always produces the same results when executed by multiple threads as it does when executed by a single thread, then your code is thread-safe. Thread safety comes in several levels:
(1) Immutable
Classes such as String, Integer and Long are final and immutable: no thread can change their state, so a "change" always means creating a new object. Immutable objects can therefore be used directly in a multithreaded environment without any synchronization.
Why design immutable objects:
Because an immutable object's data cannot be modified after creation, errors caused by concurrent modification are eliminated. Since the object never changes, reading it in a multitasking environment needs no locking, and reads can never observe inconsistent data. When writing programs, if an object can be designed as immutable, prefer to design it that way.
(2) Absolute thread safety
No matter what the runtime environment, the caller never needs additional synchronization measures. Achieving this usually costs a great deal of extra overhead. The vast majority of classes that Java documents as thread-safe are in fact not absolutely thread-safe, but absolutely thread-safe classes do exist in Java, for example CopyOnWriteArrayList and CopyOnWriteArraySet.
(3) Relative thread safety
Relative thread safety is thread safety in the usual sense. For example, Vector's add and remove methods are atomic operations that cannot be interrupted, but the guarantee goes no further: if one thread traverses a Vector while another thread adds to it at the same time, a ConcurrentModificationException will be thrown in 99% of cases; this is the fail-fast mechanism.
(4) Not thread-safe
Nothing more to say here: ArrayList, LinkedList and HashMap are all non-thread-safe classes.

4 Volatile

When a shared variable is declared volatile, every write to it is flushed to main memory immediately, and any thread that subsequently reads it reads the new value from main memory.
Once a shared variable (a class member variable or a static member variable) is declared volatile, it carries two layers of semantics:
1) Visibility is guaranteed when different threads operate on the variable: as soon as one thread modifies its value, the new value is immediately visible to the other threads.
2) Instruction reordering around it is prohibited.
An important use of volatile is in combination with CAS, which together can guarantee atomicity.
Compared with synchronized:
1) volatile does not provide mutual exclusion between threads
2) volatile cannot guarantee that compound operations on a variable are atomic
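A minimal sketch of the visibility guarantee (class and field names are illustrative): a worker spins on a volatile flag, and exits promptly once the main thread sets it, because the write is immediately visible. Without volatile, the JIT may hoist the read out of the loop and the worker could spin forever.

```java
public class VolatileFlag {
    // volatile guarantees the worker sees the write to 'stop'.
    static volatile boolean stop = false;
    static boolean workerExited = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { /* busy-wait until the flag change becomes visible */ }
        });
        worker.start();
        Thread.sleep(50);  // let the worker enter its loop
        stop = true;       // the new value is flushed to main memory immediately
        worker.join(2000); // the worker exits promptly because it reads the new value
        workerExited = !worker.isAlive();
        System.out.println("worker exited: " + workerExited);
    }
}
```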

5 CAS algorithm

CAS (Compare-And-Swap) is a special hardware instruction designed for multiprocessor systems to manage concurrent access to shared data.
CAS is a non-blocking, lock-free algorithm.
CAS involves three operands:
the memory value V to be read and written
the expected value A to compare against
the new value B to be written
If and only if the value of V equals A, CAS atomically updates V to the new value B; otherwise it does nothing.
The ABA problem is solved by attaching a version stamp, as in AtomicStampedReference.
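A minimal sketch of both points using the JDK's atomic classes (method names are illustrative): a CAS retry loop on AtomicInteger, and AtomicStampedReference rejecting a stale CAS after an A -> B -> A sequence because the stamp has moved on.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicStampedReference;

public class CasDemo {

    // Plain CAS: succeeds only when the current value V equals the expected value A;
    // on conflict we simply re-read and retry.
    static int casIncrement(AtomicInteger value) {
        int current;
        do {
            current = value.get();                            // read V
        } while (!value.compareAndSet(current, current + 1)); // retry on conflict
        return value.get();
    }

    // ABA demonstration: returns true when the stale CAS is rejected.
    static boolean staleCasRejected() {
        // Small integers are used deliberately so autoboxing reuses cached Integer
        // objects (AtomicStampedReference compares references, not equals()).
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stamp = ref.getStamp();
        ref.compareAndSet(1, 2, stamp, stamp + 1);     // 1 -> 2
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2); // back to 1, but stamp is now 2
        // A caller that remembered the old stamp fails even though the value is 1 again.
        return !ref.compareAndSet(1, 3, stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(casIncrement(new AtomicInteger(10)));
        System.out.println(staleCasRejected());
    }
}
```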

6 How to share data between two threads

Simply share an object between the threads, then use wait()/notify()/notifyAll() or await()/signal()/signalAll() to suspend and wake them; for example, a BlockingQueue is designed precisely for sharing data between threads.

7 Difference between the sleep and wait methods in Java

The biggest difference is that wait releases the lock while waiting, whereas sleep keeps holding the lock. wait is typically used for inter-thread interaction; sleep is typically used to pause execution.

8 What is ThreadLocal used for

Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap, a hash map implemented with open addressing, which isolates data per thread: since the data is not shared, there is naturally no thread-safety issue.
Because ThreadLocal creates a copy of the variable in each thread, each thread holds its own internal copy that can be used anywhere inside that thread without interfering with other threads. This avoids thread-safety problems without seriously affecting program performance.
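A minimal sketch of the per-thread isolation (class and method names are illustrative): a worker thread writes to its own copy, and the calling thread's copy is unaffected.

```java
public class ThreadLocalDemo {
    // Every thread that touches BUFFER gets its own lazily created copy.
    static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    // Returns what the calling thread sees in its copy after a worker
    // thread has written to its own copy: the two never interfere.
    static String mainCopyAfterWorkerWrites() throws InterruptedException {
        BUFFER.remove(); // start the calling thread from a fresh copy
        Thread worker = new Thread(() -> BUFFER.get().append("worker data"));
        worker.start();
        worker.join();
        return BUFFER.get().toString(); // still empty: the worker wrote to its own copy
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("[" + mainCopyAfterWorkerWrites() + "]");
    }
}
```

Calling remove() when done is good hygiene, especially on pooled threads, to avoid stale values leaking into the next task.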

9 Why must the wait() and notify()/notifyAll() methods be invoked inside synchronized blocks

This is mandated by the JDK: a thread must own the object's monitor lock before calling wait(), notify() or notifyAll() on it, otherwise an IllegalMonitorStateException is thrown.

10 How wait() and notify()/notifyAll() differ in giving up the object monitor

The difference is: wait() releases the object monitor immediately, whereas a thread that calls notify()/notifyAll() only gives up the object monitor after the rest of its synchronized code has finished executing.

11 Why use a thread pool

To avoid frequently creating and destroying threads by reusing thread objects. In addition, a thread pool lets you flexibly control the number of concurrent threads according to the project's needs.

12 The difference between synchronized and ReentrantLock

synchronized is a keyword, like if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Because ReentrantLock is a class, it offers more flexible features than synchronized: it can be extended, it has methods, and it can hold class variables of various kinds. ReentrantLock's advantages over synchronized show in these points:
(1) ReentrantLock can set a waiting time for acquiring the lock, which helps avoid deadlock
(2) ReentrantLock can report various kinds of information about the lock
(3) ReentrantLock can flexibly implement multi-way notification via multiple Conditions
In addition, their underlying locking mechanisms actually differ: ReentrantLock ultimately calls Unsafe's park method to block, while synchronized operates on the mark word in the object header.
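A minimal sketch of point (1), the timed acquisition that synchronized cannot express (class and method names are illustrative): tryLock with a timeout lets a thread back off instead of blocking forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock LOCK = new ReentrantLock();

    // Unlike synchronized, a ReentrantLock acquisition can time out,
    // which lets the caller back off instead of deadlocking.
    static boolean withTimeout(Runnable critical) throws InterruptedException {
        if (LOCK.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                critical.run();
                return true;
            } finally {
                LOCK.unlock(); // always release in finally
            }
        }
        return false; // could not get the lock in time: caller decides what to do
    }

    public static void main(String[] args) throws Exception {
        System.out.println(withTimeout(() -> System.out.println("in critical section")));
    }
}
```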

13 What is the concurrency level of ConcurrentHashMap

In JDK 1.7, ConcurrentHashMap's concurrency level is the number of segments, 16 by default and at most 65536 (2^16), which means by default at most 16 threads can operate on a ConcurrentHashMap simultaneously. This is ConcurrentHashMap's biggest advantage over Hashtable: in no case can two threads fetch data from a Hashtable at the same time.
JDK 1.8 abandoned the Segment concept and implements the map directly with a Node array plus linked lists plus red-black trees, using synchronized and CAS for concurrency control; overall it looks like an optimized, thread-safe HashMap.
Comparison of volatile and synchronized:
1. volatile is a lightweight form of thread synchronization, so its performance is better than synchronized; volatile can only modify variables, while synchronized can modify methods and code blocks. As the JDK evolves, synchronized's efficiency has been greatly improved, so synchronized remains common in projects;
2. multithreaded access to a volatile variable never blocks, whereas synchronized can block;
3. volatile guarantees synchronization of a variable between working memory and main memory, but cannot guarantee atomicity; synchronized can guarantee atomicity;
4. volatile addresses the visibility of variables between threads; synchronized addresses the synchronization of access to resources between threads.
For volatile variables, visibility on reads is solved but atomicity is not guaranteed; when multiple threads access the same instance variable, locking is still required for synchronization.

14 What is FutureTask

As mentioned earlier, FutureTask is an implementation class of the Future interface that represents a task performing an asynchronous computation. A FutureTask can be constructed with a Callable implementation, and it allows waiting for the result of the asynchronous computation, querying whether the task has completed, and cancelling the task. Of course, since FutureTask is also an implementation of the Runnable interface, it can be submitted to a thread pool as well. task.get() returns the value produced by the Callable.
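A minimal sketch of the points above (class and method names are illustrative): a Callable wrapped in a FutureTask, submitted to a pool because FutureTask is also a Runnable, with get() blocking until the result is ready.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    static int computeSquare(int n) throws Exception {
        Callable<Integer> job = () -> n * n; // a Callable returns a value and may throw
        FutureTask<Integer> task = new FutureTask<>(job);

        // FutureTask is also a Runnable, so it can go to a Thread or a pool.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(task);
        try {
            return task.get(); // blocks until the Callable finishes, then returns its result
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeSquare(7));
    }
}
```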

15 How to wake up a blocked thread

If the thread is blocked because it called wait(), sleep() or join(), you can interrupt it with t.interrupt(), which wakes it by throwing InterruptedException. If the thread is blocked on I/O, interruption is powerless, because I/O is implemented in the operating system and Java code has no direct way to reach it; for socket I/O, close the underlying socket and handle the resulting exception; for synchronous NIO, close the underlying Channel and handle the exception. For Lock.lock(), we can switch to Lock.lockInterruptibly() to make the wait interruptible.

16 What is ReadWriteLock

First, to be clear: this is not to say ReentrantLock is bad, only that ReentrantLock sometimes has limitations. ReentrantLock may be used to prevent data inconsistency caused by thread A writing data while thread B is reading it; but if thread C and thread D are both only reading, the reads do not change the data and locking is unnecessary, yet the lock is taken anyway, reducing the program's performance.
Because of this, the read-write lock ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. It separates reads from writes: the read lock is shared while the write lock is exclusive, so reads do not block each other, but read-write and write-write access are mutually exclusive, improving both read and write performance.
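A minimal sketch of the read/write separation (the cache class and its names are illustrative): many readers may hold the read lock at once, while a writer takes the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    // Many readers may hold the read lock simultaneously; no writer can enter meanwhile.
    public String get(String key) {
        rw.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    // The write lock is exclusive: it blocks all readers and other writers.
    public void put(String key, String value) {
        rw.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwCache cache = new RwCache();
        cache.put("k", "v");
        System.out.println(cache.get("k"));
    }
}
```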

17 If you submit a task when the thread pool's queue is full, what happens

If you are using a LinkedBlockingQueue, i.e. an unbounded queue, it does not matter: tasks keep being added to the blocking queue to wait for execution, because a LinkedBlockingQueue can almost be considered an infinite queue that stores tasks without limit. If you are using a bounded queue such as an ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when it is full, new threads are created up to maxPoolSize, and beyond that the rejection strategy RejectedExecutionHandler handles the overflow, with AbortPolicy as the default.

18 What thread scheduling algorithm is used in Java

Preemptive scheduling. After a thread uses up its CPU time, the operating system computes a total priority based on thread priority, thread starvation and other data, and assigns the next time slice to a particular thread.

19 What Thread.sleep(0) is for

This question is related to the one above, so I put them together. Because Java uses preemptive thread scheduling, one thread may frequently win control of the CPU. To let lower-priority threads also get CPU time, you can call Thread.sleep(0) to manually trigger an operating-system time-slice allocation, which balances control of the CPU.

20 What are optimistic and pessimistic locking

(1) Optimistic locking: as the name suggests, it takes an optimistic attitude toward the thread-safety problems produced by concurrent operations. Optimistic locking assumes that contention does not always happen, so it does not hold a lock; instead it attempts to modify the variable in memory with a compare-and-set pair performed as one atomic operation, and a failure signals a conflict, for which there should be corresponding retry logic.
Optimistic locking (Optimistic Lock) assumes that the data it picks up has not been modified by others, so it does not lock; only at update time does it check whether anyone else updated the data in the meantime, typically using a mechanism such as a version number. Optimistic locking suits read-heavy applications and can improve throughput; some databases provide optimistic locking through mechanisms similar to write_condition.
(2) Pessimistic locking: likewise as the name suggests, it takes a pessimistic attitude toward the thread-safety problems produced by concurrent operations. Pessimistic locking assumes that contention will always happen, so every time it operates on a resource it holds an exclusive lock, just like synchronized: regardless of the situation, it locks the resource before operating on it.
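A minimal sketch of the optimistic style described in (1) (the account example and its names are hypothetical): read the value without locking, compute the result, and try to swap it in; if another thread changed the value in between, retry with the fresh value.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticUpdate {
    private final AtomicInteger balance = new AtomicInteger(100);

    // Optimistic style: no lock is held; on conflict we simply retry.
    public int withdraw(int amount) {
        while (true) {
            int current = balance.get();     // optimistic read
            if (current < amount) return -1; // hypothetical business rule: insufficient funds
            int next = current - amount;
            if (balance.compareAndSet(current, next)) {
                return next;                 // nobody changed it in between
            }
            // someone else updated the balance: loop and retry with the fresh value
        }
    }

    public static void main(String[] args) {
        OptimisticUpdate account = new OptimisticUpdate();
        System.out.println(account.withdraw(30));
    }
}
```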

21 Producers and consumers have many implementations:

  • The wait()/notify() method: the condition must be re-checked in a while loop rather than an if, otherwise a woken thread proceeds without re-testing the condition; changing if to while solves the producer-consumer problem correctly
  • The Lock method with multiple Conditions
  • The BlockingQueue blocking queue method
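A minimal sketch of the BlockingQueue approach (class and method names are illustrative): put() blocks when the buffer is full and take() blocks when it is empty, so no explicit wait/notify is needed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PcQueue {
    static List<Integer> runOnce(int items) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // small buffer
        List<Integer> consumed = new ArrayList<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) queue.put(i); // blocks when full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) consumed.add(queue.take()); // blocks when empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return consumed; // safe to read after join: only the consumer wrote to it
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce(5));
    }
}
```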
22 What is the difference between CyclicBarrier and CountDownLatch in Java?

Both live under java.util.concurrent and can be used to indicate that code has run to a certain point. The differences between them are:
(1) After a thread running with a CyclicBarrier reaches the barrier point, it stops running until all threads have reached that point, and only then do all the threads resume; CountDownLatch is not like this: after a thread reaches the point, it simply decrements the count by 1 and continues running
(2) A CyclicBarrier can only wake one task, while a CountDownLatch can wake multiple tasks
(3) A CyclicBarrier is reusable, but a CountDownLatch is not: once the count reaches 0, the CountDownLatch cannot be used again
To summarize the difference between CountDownLatch and join: calling thread.join() must wait for the thread to finish completely before the current thread can continue, whereas CountDownLatch provides more flexible control through a counter: as soon as the counter reaches zero the current thread can continue, regardless of whether the corresponding threads have finished.
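A minimal sketch of the CountDownLatch behaviour described in (1) (class and method names are illustrative): each worker counts down and keeps running, while the waiting thread is released as soon as the count reaches zero.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    static int runWorkers(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                completed.incrementAndGet(); // the worker's "task"
                done.countDown();            // decrement by 1 and keep running
            }).start();
        }
        done.await(); // the waiting thread is released once the count hits zero
        // Note: the latch cannot be reset; a reusable barrier would need CyclicBarrier.
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(3));
    }
}
```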

23 Synchronized blocks and synchronized methods: which is the better choice

Synchronized blocks: the code outside the block executes without holding the lock, which improves overall efficiency compared with synchronizing the whole method. Keep this principle in mind: the smaller the synchronized scope, the better.
With that, let me add one extra point: although a smaller synchronized scope is generally better, the Java virtual machine has an optimization called lock coarsening, which enlarges the synchronized scope. This is useful, for example, with StringBuffer: it is a thread-safe class, and its most commonly used append() method is naturally a synchronized method. Code we write often appends strings repeatedly, which would mean repeated lock -> unlock cycles and a performance penalty, because the JVM would repeatedly switch the thread between kernel mode and user mode. So the JVM applies lock coarsening to code that calls append repeatedly: it extends the lock from the first through the last append call, turning them into one large synchronized block, which reduces the number of lock -> unlock cycles and effectively improves execution efficiency.

24 How should a thread pool be used for high-concurrency, short-execution-time tasks? For low-concurrency, long-execution-time tasks? For high-concurrency, long-execution-time tasks?

This is a question I saw on a concurrent-programming site. I put it last hoping everyone reads and thinks about it, because it is very good, very practical and very professional. On this question, my personal view is:
(1) High concurrency, short task execution time: the number of threads in the pool can be set to the number of CPU cores + 1 to reduce thread context switching
(2) Low concurrency, long task execution time: look at where the time goes:
a) If the time is spent mostly on I/O, i.e. I/O-intensive tasks, then since I/O operations do not occupy the CPU, do not let the CPU sit idle: increase the number of threads in the pool so the CPU can handle more work
b) If the time is spent mostly on computation, i.e. CPU-intensive tasks, then as in (1), keep the number of threads in the pool small to reduce thread context switching
(3) High concurrency, long task execution time: the key to handling this kind of task lies not in the thread pool but in the overall architecture design. Step one is to see whether some of the data in these tasks can be cached; step two is to add servers. For the thread-pool settings themselves, refer to (2). Finally, the long execution time itself may also need analysis: see whether middleware can be used to split and decouple the tasks.

25 Thread pool

1. The important parameters of ThreadPoolExecutor
2. Reference: https://www.cnblogs.com/waytobestcoder/p/5323130.html
3. Reference: https://blog.csdn.net/zhouhl_cn/article/details/7392607

  • corePoolSize: number of core threads
    core threads stay alive permanently, even when there are no tasks to execute
    when the number of threads is below corePoolSize, the pool creates a new thread to handle a task even if idle threads exist
    when allowCoreThreadTimeout = true (default false), core threads are closed when they time out
  • queueCapacity: task queue capacity (the blocking queue)
    when the number of core threads has reached its maximum, new tasks are placed in the queue to wait for execution
  • maxPoolSize: maximum number of threads
    when the number of threads >= corePoolSize and the task queue is full, the pool creates new threads to handle tasks
    when the number of threads = maxPoolSize and the task queue is full, the pool refuses the task and throws an exception
  • keepAliveTime: thread idle time
    when a thread has been idle for keepAliveTime, it exits, until the number of threads = corePoolSize
    if allowCoreThreadTimeout = true, threads keep exiting until the count reaches 0
  • allowCoreThreadTimeout: whether core threads may time out
  • rejectedExecutionHandler: handler for rejected tasks
    tasks are refused in two cases:
    when the number of threads has reached maxPoolSize and the queue is full, new tasks are rejected
    when shutdown() has been called on the pool, it waits for running tasks to finish before actually shutting down; tasks submitted between the call to shutdown() and the actual shutdown are rejected
    the pool calls rejectedExecutionHandler to handle the rejected task; if none is set, the default AbortPolicy throws an exception
    the ThreadPoolExecutor class has several built-in implementations for these situations:
    AbortPolicy: discards the task and throws a runtime exception
    CallerRunsPolicy: runs the task on the caller's own thread
    DiscardPolicy: silently ignores the task; nothing happens
    DiscardOldestPolicy: evicts the oldest task at the head of the queue, then enqueues this task (executed last)
    a custom handler can be implemented via the RejectedExecutionHandler interface
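A minimal sketch tying the parameters above together (the concrete numbers are illustrative, not recommendations): a ThreadPoolExecutor built explicitly with core size, max size, keep-alive time, a bounded queue and a rejection policy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                     // corePoolSize: kept alive even when idle
                4,                                     // maxPoolSize: upper bound once the queue fills
                60, TimeUnit.SECONDS,                  // keepAliveTime for threads above the core size
                new ArrayBlockingQueue<>(10),          // bounded task queue (queueCapacity = 10)
                new ThreadPoolExecutor.AbortPolicy()); // rejection handler: throw on overload
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPool();
        pool.execute(() -> System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```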


Origin www.cnblogs.com/monkay/p/11372097.html