Java Concurrency Summary

1. Threads and processes
A thread is a unit of execution within a process; a process can contain many threads, each performing a different task concurrently. Different processes use separate memory spaces, while all threads within a process share the same memory space. Do not confuse this with stack memory: each thread has its own stack, which stores its local data.
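As a minimal sketch of the point above (class and variable names are my own), the program below shares one heap array between two threads, while each thread keeps a local variable on its own private stack:

```java
public class SharedHeapDemo {
    static int[] shared = new int[1]; // heap data: visible to every thread

    static int run() throws InterruptedException {
        Runnable task = () -> {
            int local = 1;          // lives on each thread's own stack
            synchronized (shared) { // shared heap data needs synchronization
                shared[0] += local;
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 2
    }
}
```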
2. Parallelism and concurrency
Concurrent programming means multiple tasks make progress in overlapping time periods, possibly by interleaving on a single core; parallelism means tasks literally execute at the same instant on different cores.
3. Thread safety
Code is thread-safe if, when multiple threads in a process run it simultaneously, each run produces the same result as a single-threaded run would, and the values of all other variables also match expectations.
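A small illustration of this definition (names are illustrative): `count++` is a non-atomic read-modify-write, so a plain counter is not thread-safe, while the synchronized version below always matches the single-threaded result:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CounterDemo {
    private int unsafe = 0;
    private int safe = 0;

    void incUnsafe() { unsafe++; }          // read-modify-write, not atomic: lost updates possible
    synchronized void incSafe() { safe++; } // one thread at a time: thread-safe

    static int run() throws InterruptedException {
        CounterDemo c = new CounterDemo();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10_000; i++) pool.execute(c::incSafe);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return c.safe; // always 10_000; c.unsafe would often fall short under the same load
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```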
4. Race conditions
5. synchronized
6. Thread states
7. Producers and consumers; dining philosophers
8. Communication between threads
9. Java memory model (JMM)
Main memory versus working memory (the thread's stack): when processing data, a thread loads the value from main memory into its local working memory, completes the operation, and then saves the value back. (The role of the volatile keyword: every operation on the variable triggers a fresh load and an immediate save.)
The Java memory model provides the guarantee that changes made by one thread can be made visible to other threads, via the happens-before relationship.
10. volatile
If a variable used by multiple threads is not declared volatile or final, the program can produce unpredictable results: one thread modifies the value, but another thread afterwards still sees the value from before the modification. In theory a field of a given instance exists in only one copy, but in multithreaded code each thread may cache its own value; volatile forbids this caching and makes every access read or write the value directly. Adding volatile where thread safety requires it trades some performance for visibility.
The underlying principle of volatile's visibility guarantee
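A minimal demo of the visibility guarantee (names are my own): without `volatile` on `running`, the worker below could legally cache the flag and spin forever; with it, the write in the main thread becomes visible and the worker stops:

```java
public class VolatileFlagDemo {
    // Without volatile, the reader thread might never re-read 'running'.
    private static volatile boolean running = true;
    private static volatile long iterations;

    static long run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            long n = 0;
            while (running) n++; // re-reads 'running' from main memory each pass
            iterations = n;
        });
        worker.start();
        Thread.sleep(50);
        running = false;         // this write becomes visible to the worker
        worker.join();
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped after " + run() + " iterations");
    }
}
```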
11. Runnable and Callable
The main difference is that Callable's call() method can throw a checked exception and return a value, while Runnable's run() method has neither feature. A Callable's result can be returned wrapped in a Future object.
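A short side-by-side sketch of the difference (class name is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {
    static int run() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Runnable r = () -> System.out.println("Runnable: no result, no checked exception");
        pool.execute(r);

        Callable<Integer> c = () -> 21 * 2;  // returns a value and may throw
        Future<Integer> future = pool.submit(c);
        int result = future.get();           // blocks until the Callable finishes

        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints 42
    }
}
```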
12. The Future pattern
isDone() and get()
13. FutureTask
In concurrent Java programs, a FutureTask represents a cancellable asynchronous computation. It has methods to start and cancel the computation, to query whether it is complete, and to retrieve the result. The result can only be retrieved once the computation has completed; if it has not, get() blocks. A FutureTask can wrap a Callable or a Runnable, and because FutureTask itself implements the Runnable interface, it can be submitted to an Executor for execution.
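A minimal sketch of both sides of FutureTask (names are my own): it is handed to a Thread as a Runnable, and its result is read through its Future side:

```java
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    static int run() throws Exception {
        // Wrap a Callable; FutureTask is both a Runnable and a Future.
        FutureTask<Integer> task = new FutureTask<>(() -> {
            Thread.sleep(100);    // simulate a slow computation
            return 7;
        });
        new Thread(task).start(); // Runnable side: hand it to a Thread or Executor
        int value = task.get();   // Future side: blocks until the computation is done
        return value;             // task.isDone() is now true
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints 7
    }
}
```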
14. ThreadLocal
Use: stores variables that are independent per thread (each thread being an instance of Thread or a subclass).
When a variable is maintained with ThreadLocal, each thread that uses it gets its own independent copy, so each thread can modify its copy without affecting the copies held by other threads. Commonly used for login state and per-session information.
Implementation: each Thread holds a variable of type ThreadLocalMap (a lightweight map; same function as a regular map, except that each bucket holds a single entry rather than a list of entries). The ThreadLocal object itself serves as the key, and the target value as the value.
The main methods are get() and set(T a): set maintains a threadLocal -> a mapping in the map, and get returns a. A ThreadLocal is thus a special kind of container.
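A small illustrative demo (names are my own) showing that each thread's copy is independent — the other thread's write never appears in the main thread's copy:

```java
public class ThreadLocalDemo {
    // Each thread gets its own independent copy, initialized lazily.
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    static String run() throws InterruptedException {
        Thread other = new Thread(() -> buffer.get().append("other"));
        other.start();
        other.join();
        buffer.get().append("main");
        return buffer.get().toString(); // the other thread's append is invisible here
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints "main"
    }
}
```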
15. CAS: compareAndSet
16. AtomicInteger and the atomic classes
17. Lock classes
ReentrantLock
ReentrantReadWriteLock.ReadLock
ReentrantReadWriteLock.WriteLock
tryLock, and the Condition interface for condition waits
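A minimal one-slot hand-off (names are illustrative) showing the idiomatic lock/try/finally pattern together with a Condition wait loop:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockConditionDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private Integer slot = null; // single-element "queue"

    void put(int value) {
        lock.lock();
        try {
            slot = value;
            notEmpty.signal();   // wake a waiting consumer
        } finally {
            lock.unlock();       // always unlock in finally
        }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) notEmpty.await(); // await releases the lock while waiting
            int v = slot;
            slot = null;
            return v;
        } finally {
            lock.unlock();
        }
    }

    static int run() throws InterruptedException {
        LockConditionDemo d = new LockConditionDemo();
        new Thread(() -> d.put(99)).start();
        return d.take();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 99
    }
}
```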
18. Pessimistic and optimistic locking
Pessimists and optimists approach things in completely different ways. A pessimist only does something once it is one hundred percent under his control; otherwise he assumes it will go wrong. An optimist, on the contrary, just tries first regardless of the final outcome, accepting that it may ultimately fail. That is the difference between pessimistic and optimistic locking: a pessimistic lock locks the entire object for exclusive use before operating on it, while an optimistic lock performs the operation without acquiring any lock and then decides, through some detection mechanism, whether to commit the update. This section discusses optimistic locking in depth.
The mutual exclusion of synchronized is a form of pessimistic locking, and it has an obvious drawback: it locks regardless of whether there is actual contention on the data. As concurrency increases, and especially if locks are held for a long time, the performance overhead becomes very large. Is there a way around this? The answer is optimistic locking based on conflict detection. In this mode there is no lock as such: each thread performs its operation directly and, after the computation completes, checks whether other threads contended on the shared data. If not, the operation succeeds; if there was contention, the operation is re-executed and re-checked until it succeeds. This is also called CAS spinning.
The core algorithm of optimistic locking is CAS (Compare-and-Swap), which involves three operands: the memory value, the expected value, and the new value. If and only if the expected value equals the memory value is the memory value set to the new value. The logic is: first check whether a memory location still holds the same value as when I read it. If not, the value has been changed by another thread in the meantime, so abandon this operation; otherwise no other thread has touched it, and the memory location can safely be set to the new value.
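The CAS spin described above can be sketched with AtomicInteger.compareAndSet (class and method names below are my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinDemo {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read, compute, then swap only if nobody changed the value.
    int addTen() {
        while (true) {
            int expected = value.get();   // memory value as we read it
            int next = expected + 10;     // new value we want to install
            if (value.compareAndSet(expected, next)) {
                return next;              // no contention, or we won the race
            }
            // another thread changed the value in between; spin and retry
        }
    }

    static int run() throws InterruptedException {
        CasSpinDemo d = new CasSpinDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) ts[i] = new Thread(d::addTen);
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        return d.value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 40
    }
}
```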
19. Container classes
BlockingQueue
ConcurrentHashMap
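A minimal producer/consumer sketch with a bounded BlockingQueue (names are my own); put blocks when the queue is full and take blocks when it is empty, so no explicit locking is needed:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    static int run() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded capacity

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks when full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take(); // blocks when empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 15
    }
}
```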
20. ThreadPoolExecutor (thread pools)
corePoolSize: the initial and minimum size of the pool; this many threads are kept alive even when idle.
maximumPoolSize: the maximum number of threads; the pool can grow, but never beyond this value.
keepAliveTime: when the pool holds more threads than corePoolSize, how long an excess idle thread waits before being reclaimed; until reclaimed it stays in the waiting state.
unit: the time unit for keepAliveTime, a TimeUnit constant such as TimeUnit.MILLISECONDS.
workQueue: where pending tasks (Runnable) wait; this parameter mainly affects the scheduling policy, for example fairness and whether starvation can occur.
threadFactory: the thread factory; a default implementation exists, and if customization is needed, implement the ThreadFactory interface and pass your own instance as this parameter.
RejectedExecutionHandler: the handler invoked for tasks that cannot be accepted.
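The parameters above map directly onto the full ThreadPoolExecutor constructor; a small sketch (the concrete numbers are arbitrary choices of mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolConfigDemo {
    static int run() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                // corePoolSize: kept even when idle
                4,                                // maximumPoolSize: hard upper bound
                30, TimeUnit.SECONDS,             // keepAliveTime + unit for excess threads
                new ArrayBlockingQueue<>(8),      // workQueue: pending tasks wait here
                Executors.defaultThreadFactory(), // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // RejectedExecutionHandler

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 10; i++) pool.execute(done::incrementAndGet);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 10
    }
}
```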
21. AQS: AbstractQueuedSynchronizer
22. AbstractExecutorService, ExecutorService, Executor
newSingleThreadExecutor
newCachedThreadPool
newFixedThreadPool
23. CountDownLatch, CyclicBarrier, Semaphore
Both CyclicBarrier and CountDownLatch can be used to make a group of threads wait for other threads. The difference is that a CountDownLatch cannot be reused, while a CyclicBarrier can.
1) Both CountDownLatch and CyclicBarrier implement waiting between threads, but with different emphases:
a CountDownLatch is generally used when one thread A waits for several other threads to finish their tasks before it proceeds;
a CyclicBarrier is generally used when a group of threads wait for each other to reach a certain state and then execute simultaneously;
moreover, a CountDownLatch cannot be reused, whereas a CyclicBarrier can.
2) A Semaphore is somewhat similar to a lock; it is generally used to control access to a pool of resources.
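The first pattern — one thread waiting for several workers — can be sketched with a CountDownLatch (names are my own):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    static int run() throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers); // one-shot countdown
        int[] results = new int[workers];

        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                results[id] = id + 1; // do some work
                latch.countDown();    // signal completion
            }).start();
        }

        latch.await(); // main thread blocks until the count reaches zero
        return results[0] + results[1] + results[2];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 6
    }
}
```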
24. Distributed locks
Redis SETNX
ZooKeeper ???
database ??
25. To be added...

Origin blog.csdn.net/beyondxiaohu15/article/details/90241199