40+ Java Multithreading Interview Questions and Answers (Recommended for Bookmarking)

Foreword

I have written a series of more than twenty articles on Java multithreading, and they cover a lot of ground. In my view, the more material you learn and the more complex the knowledge, the more important it is to summarize it thoroughly; only then does it stick and truly become your own. This article is such a summary of multithreading topics, so it lists more than 40 multithreading questions.

Some of these questions come from major websites, and some come from my own thinking. Some of them, and their answers, may already exist online, and some readers may have seen them before. But the point of this article is that every question is answered according to my own understanding, without looking at answers online, so some answers may be wrong; if so, please feel free to correct me.

Summary of the 40 Questions

1. What is multithreading good for?

A question that may seem pointless to many people: I know how to use multithreading, so why should I care what it is good for? In my view, that attitude is the pointless one. As the saying goes, "know the how and know the why." Being able to use something is only knowing the how; understanding why to use it is knowing the why, and only when you know both can you say you have truly mastered a piece of knowledge. With that said, here is my view on this question:

(1) Take advantage of multi-core CPUs

As the industry has progressed, laptops, desktops, and even ordinary commercial application servers are at least dual-core now, and 4-core, 8-core, or 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core CPU. "Multithreading" on a single-core CPU is really fake multithreading: the processor handles only one piece of logic at any moment, and the threads switch so quickly that it merely looks as if they run simultaneously. Multithreading on a multi-core CPU is true multithreading: it lets different cores work on different pieces of logic at the same time, which is how you actually exploit the advantage of multiple cores and make full use of the CPU.

(2) Avoid blocking

From the point of view of raw efficiency, a single-core CPU gains nothing from multithreading; on the contrary, running multiple threads on a single core causes thread context switches, which lowers the program's overall efficiency. Yet we still use multithreading on single-core CPUs, and the reason is to avoid blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, say on a remote read where the peer has not yet replied and no timeout is set, the entire program stops until the data comes back. Multithreading prevents this problem: with several threads running, even if one thread blocks while reading data, the other tasks keep executing.

(3) Easier modeling

This is another, less obvious benefit. Suppose you have a big task A. With single-threaded programming there is a lot to consider, and building a model for the whole program is troublesome. But if you break the big task into several small ones, say task B, task C, and task D, build a program model for each, and run them in separate threads through multithreading, everything becomes much simpler.

2. Ways to create a thread

A fairly common question; there are generally two ways:

(1) Extend the Thread class

(2) Implement the Runnable interface

As for which is better, the latter obviously is, because implementing an interface is more flexible than extending a class and reduces coupling between components; programming to interfaces is also at the core of the six principles of design patterns. A minimal sketch of both approaches follows.
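
A minimal sketch of both approaches (the class names and printed messages are illustrative):

```java
public class CreateThreadDemo {

    // (1) Extend the Thread class.
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("running in " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        new MyThread().start();

        // (2) Implement Runnable (here as a lambda) and hand it to a Thread.
        Runnable task = () ->
                System.out.println("running in " + Thread.currentThread().getName());
        new Thread(task).start();
    }
}
```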

3. The difference between the start() method and the run() method

Only when you call the start() method do you get multithreaded behavior, with the code inside the run() methods of different threads executing in alternation. If you only call the run() method, the code executes synchronously: one thread must finish everything inside its run() method before another thread can execute the code in its own run() method.
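
A tiny illustration of the difference; what matters is which thread name gets printed:

```java
public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("executed by " + Thread.currentThread().getName()));

        t.run();    // plain method call: runs in the main thread, prints "main"
        t.start();  // starts a new thread: prints something like "Thread-0"
    }
}
```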

4. The difference between the Runnable interface and the Callable interface

A slightly deeper question, and one that shows how broadly a Java programmer has studied.

The run() method of the Runnable interface returns void; it simply executes the code in run() and nothing more. The call() method of the Callable interface returns a value, and it is a generic type; combined with Future or FutureTask, it can be used to obtain the result of an asynchronous computation.

This is actually a very useful feature, because one important reason multithreading is harder than single-threading is that multithreading is full of unknowns: has a particular thread finished? How long has it been running? Has it finished the work we expected by the time we need the data? We cannot know; all we can do is wait for the task to finish. With Callable plus Future/FutureTask, however, you can conveniently get the result of a thread's work, and you can cancel the task if you have waited too long without getting the data you need.
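
A hedged sketch of Callable plus FutureTask; the task body and the one-second timeout are made up for illustration:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> {
            TimeUnit.MILLISECONDS.sleep(100);   // pretend to do some work
            return 42;
        };

        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();             // FutureTask is also a Runnable

        try {
            Integer result = future.get(1, TimeUnit.SECONDS);  // wait at most one second
            System.out.println("result = " + result);
        } catch (TimeoutException e) {
            future.cancel(true);                // waited too long, cancel the task
        }
    }
}
```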

5. The difference between CyclicBarrier and CountDownLatch

Two classes that look somewhat similar; both live in java.util.concurrent and both can be used to make code wait at a certain point. Their differences (a short demo follows the list):

(1) With CyclicBarrier, after a thread reaches a certain point it stops running, and only when all threads have reached that point do they all continue. With CountDownLatch, a thread that reaches a certain point simply decrements the count by 1 and keeps running.

(2) CyclicBarrier can wake up only one follow-up task (its barrier action); CountDownLatch can wake up multiple waiting tasks.

(3) CyclicBarrier is reusable; CountDownLatch is not. Once its count reaches 0, a CountDownLatch cannot be used again.
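
A short, hedged demo of both classes; the thread count of 3 and the printed messages are illustrative:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class BarrierVsLatch {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: the main thread waits until 3 workers have counted down.
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();           // decrement and keep running
            }).start();
        }
        latch.await();                       // returns once the count reaches 0
        System.out.println("all workers finished");

        // CyclicBarrier: 3 threads wait for each other, then all proceed together.
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("everyone arrived"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();         // block until all 3 parties arrive
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```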

6. The role of the volatile keyword

An extremely important question, one that every Java programmer who studies and uses multithreading must master. Understanding volatile presupposes understanding the Java memory model, which is not covered here; see question 31 first. volatile plays two main roles:

(1) Multithreading revolves mainly around two properties, visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads; that is, every read of a volatile variable sees the latest written value.

(2) Code is not executed as plainly as the high-level Java we read. Its execution path is roughly: Java code -> bytecode -> the C/C++ code the JVM runs for that bytecode -> compiled assembly -> interaction with the hardware. To get better performance, the JVM may reorder instructions, which can cause unexpected problems under multithreading. volatile semantics forbid such reordering, which of course also sacrifices some execution efficiency.

From a practical point of view, an important role of volatile is to combine with CAS to guarantee atomicity; for details see the classes in the java.util.concurrent.atomic package, such as AtomicInteger.
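
A minimal sketch of the classic visibility use case, a volatile stop flag (the names and timings are illustrative):

```java
public class VolatileFlagDemo {
    // Without volatile, the worker thread might never see the update made by main.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy doing work
            }
            System.out.println("worker saw running == false and stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;     // this write is immediately visible to the worker thread
        worker.join();
    }
}
```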

7. What is thread safety

A theoretical question with many possible answers; I will give the one I find most telling: if your code always produces the same results when executed by multiple threads as when executed by a single thread, then your code is thread-safe.

One thing worth mentioning about this question is that thread safety comes in several levels:

(1) Immutable

Classes like String, Integer, and Long are final classes whose values no thread can change; to change a value you must create a new object. Such immutable objects can therefore be used directly in a multithreaded environment without any synchronization.

(2) Absolute thread safety

Regardless of the runtime environment, callers need no additional synchronization. Achieving this usually costs a great deal, and most of the classes Java labels thread-safe are in fact not absolutely thread-safe; Java does, however, have some absolutely thread-safe classes, such as CopyOnWriteArrayList and CopyOnWriteArraySet.

(3) Relative thread safety

Relative thread safety is what we usually mean by thread safety. Take Vector: its add and remove methods are atomic operations that cannot be interrupted, but that is as far as the guarantee goes. If one thread is traversing a Vector while another thread is adding to it, a ConcurrentModificationException will occur in 99% of cases; that is the fail-fast mechanism.

(4) Not thread-safe

There is nothing much to say here; ArrayList, LinkedList, HashMap, and so on are all classes that are not thread-safe.

8. How to get a thread dump file in Java

For infinite loops, deadlocks, blocking, slow page loads, and similar problems, taking a thread dump is the best way to locate the cause. A thread dump is simply the collection of thread stacks, and obtaining it takes two steps:

(1) Get the pid of the Java process; you can use the jps command, or in a Linux environment, ps -ef | grep java

(2) Print the thread stacks; you can use the jstack pid command, or in a Linux environment, kill -3 pid

One more thing worth mentioning: the Thread class provides a getStackTrace() method that can also be used to obtain a thread's stack. It is an instance method, bound to a specific thread instance, and each call returns the stack of that particular thread as it is currently running.
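
A small sketch of collecting stacks from inside the JVM rather than from the command line:

```java
import java.util.Map;

public class StackTraceDemo {
    public static void main(String[] args) {
        // Stack of the current thread only.
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println("\tat " + frame);
        }

        // Stacks of all live threads, similar in spirit to a jstack dump.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        all.forEach((thread, frames) ->
                System.out.println(thread.getName() + ": " + frames.length + " frames"));
    }
}
```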

9. What happens if an exception occurs while a thread is running

If the exception is not caught, the thread stops executing. Another important point: if the thread held the monitor of some object, that object's monitor is released immediately.

10. How to share data between two threads

Simply share an object between the threads, and then use wait/notify/notifyAll or await/signal/signalAll to wait and wake up; the BlockingQueue family, for example, is designed precisely for sharing data between threads. A short sketch using a blocking queue follows.
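
A hedged sketch of sharing data through a BlockingQueue; the queue capacity and the values are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);                                // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("got " + queue.take());   // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```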

11. What is the difference between the sleep method and the wait method

A frequently asked question. Both sleep and wait can be used to give up CPU time; the difference is that if the thread holds an object's monitor, sleep does not release that monitor, while wait does.

12. What is the producer-consumer model good for

A very theoretical question, but an important one:

(1) It improves the efficiency of the whole system by balancing the producers' production capacity against the consumers' consumption capacity; this is its most important role.

(2) Decoupling, which is a side benefit of the producer-consumer model. Decoupling means producers and consumers have fewer connections to each other, and the fewer the connections, the more independently each side can evolve without being constrained by the other.

13. What is ThreadLocal for

Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap, implemented with open addressing, that isolates data per thread. Since the data is not shared, there is naturally no thread-safety problem.
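
A minimal sketch of per-thread state with ThreadLocal; SimpleDateFormat is used because it is not thread-safe and therefore must not be shared between threads:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadLocalDemo {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static void main(String[] args) {
        Runnable task = () -> {
            // Each thread gets and reuses its own SimpleDateFormat copy.
            String now = FORMAT.get().format(new Date());
            System.out.println(Thread.currentThread().getName() + " -> " + now);
            FORMAT.remove();   // avoid leaks when threads are pooled and reused
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}
```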

14. Why must wait() and notify()/notifyAll() be called inside a synchronized block

This is mandated by the JDK: the caller of wait(), notify(), or notifyAll() must first hold the object's lock.

15. How do wait() and notify()/notifyAll() differ in when they give up the object's monitor

The difference is this: wait() releases the object's monitor immediately, whereas a thread that calls notify()/notifyAll() only gives up the monitor after it finishes executing the rest of its synchronized code.

16. Why use a thread pool

To avoid frequently creating and destroying threads and to reuse thread objects. In addition, a thread pool lets a project flexibly control the number of concurrent threads according to its needs.

17. How to detect whether a thread holds an object's monitor

I only learned there was a way to tell whether a thread holds an object's monitor when I saw this interview question online: the Thread class provides a holdsLock(Object obj) method, which returns true if and only if the monitor of obj is held by a thread; note that it is a static method, and "a thread" here means the current thread.
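
A tiny sketch of Thread.holdsLock; the lock object is illustrative:

```java
public class HoldsLockDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        System.out.println(Thread.holdsLock(LOCK));      // false: main does not hold it yet
        synchronized (LOCK) {
            System.out.println(Thread.holdsLock(LOCK));  // true: main holds the monitor here
        }
    }
}
```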

18. The difference between synchronized and ReentrantLock

synchronized is a keyword, just like if, else, for, and while; ReentrantLock is a class. That is the essential difference between the two. Being a class, ReentrantLock offers more flexible features than synchronized: it can be extended, it has methods, and it can carry all kinds of instance state. ReentrantLock's greater extensibility over synchronized shows up in these points:

(1) ReentrantLock can set a wait time for acquiring the lock, which avoids deadlock

(2) ReentrantLock can report various kinds of information about the lock

(3) ReentrantLock can flexibly implement multi-way notification (via multiple Condition objects)

In addition, the two actually use different locking mechanisms: ReentrantLock ultimately calls Unsafe's park method to block a thread, while synchronized should operate on the mark word in the object header, though I cannot say that for certain.
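
A hedged sketch of point (1), acquiring a ReentrantLock with a timeout; the 500 ms value is illustrative:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        if (LOCK.tryLock(500, TimeUnit.MILLISECONDS)) {  // give up after 500 ms
            try {
                System.out.println("lock acquired, held by current thread: "
                        + LOCK.isHeldByCurrentThread());
            } finally {
                LOCK.unlock();                           // always unlock in finally
            }
        } else {
            System.out.println("could not get the lock in time, doing something else");
        }
    }
}
```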

19. What is the concurrency level of ConcurrentHashMap

In the segment-based implementation of ConcurrentHashMap (JDK 7 and earlier), the concurrency level is the number of segments, 16 by default, which means up to 16 threads can operate on the ConcurrentHashMap at the same time. That is ConcurrentHashMap's biggest advantage over Hashtable: can two threads ever fetch data from a Hashtable at the same time, in any situation? No.

20. What is ReadWriteLock

First, to be clear: it is not that ReentrantLock is bad, only that it sometimes has limitations. Using ReentrantLock may be intended to prevent data inconsistency caused by thread A writing while thread B reads, but if thread C and thread D are both only reading, the data does not change, so locking is unnecessary; locking anyway reduces the program's performance.

That is why the read-write lock, ReadWriteLock, was introduced. ReadWriteLock is the read-write lock interface, and ReentrantReadWriteLock is its concrete implementation. It separates reads from writes: the read lock is shared while the write lock is exclusive, so reads do not block other reads, but write vs. read, read vs. write, and write vs. write all exclude one another. This improves read and write performance.
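
A brief sketch of ReentrantReadWriteLock guarding a shared value; the class is illustrative:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        lock.readLock().lock();          // many readers may hold this at once
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        lock.writeLock().lock();         // exclusive: blocks readers and other writers
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```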

21. What is FutureTask

This was actually mentioned earlier. FutureTask represents a task for asynchronous computation. It can be given an implementation of Callable, and it lets you wait for the result of the asynchronous computation, check whether the task has completed, and cancel the task. Since FutureTask also implements Runnable, it can be submitted to a thread pool as well.

22. How to find, in a Linux environment, the thread that has used the most CPU time

This is a rather practical question, the kind I find quite meaningful. You can do the following:

(1) Get the pid of the project, using jps or ps -ef | grep java, as discussed earlier

(2) Run top -H -p pid; the order of the options must not be changed

This prints the percentage of CPU time each thread of the project is using. Note that what it shows is the LWP, the ID of the operating system's native thread. I do not have a Java project deployed on Linux on my laptop, so I cannot include a demo screenshot here; if your company deploys projects in a Linux environment, you can give it a try.

With "top -H -p pid" plus "jstack pid" you can easily locate the thread stack of a thread with high CPU usage and thereby pinpoint the cause, which is usually an infinite loop created by careless code.

One last note: the LWP printed by "top -H -p pid" is in decimal, while the native thread id printed by "jstack pid" is in hexadecimal; convert between them and you can map the high-CPU thread to its current thread stack.
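
A tiny sketch of that conversion (the LWP value is made up for illustration):

```java
public class LwpToNid {
    public static void main(String[] args) {
        int lwp = 12345;                                     // illustrative LWP taken from top -H -p
        System.out.println("0x" + Integer.toHexString(lwp)); // prints 0x3039; grep for this nid in the jstack output
    }
}
```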

23. Write a program that causes a deadlock

The first time I saw this question, I thought it was an excellent one. Many people know what a deadlock is: thread A and thread B wait forever for locks held by each other, so the program hangs indefinitely. But knowing only that, and not knowing how to write a program that deadlocks, means the understanding is purely theoretical; in practice you would probably not recognize a deadlock when you actually ran into one.

If you really understand what a deadlock is, this question is not hard; it only takes a few steps:

(1) Two threads each use two Object objects, lock1 and lock2, as the locks of their synchronized blocks;

(2) In thread 1's run() method, the synchronized block first acquires lock1, then calls Thread.sleep(xxx); the time does not need to be long, around 50 milliseconds is enough, and then it acquires lock2. The sleep is mainly there to prevent thread 1 from grabbing both lock1 and lock2 the moment it starts;

(3) In thread 2's run() method, the synchronized block first acquires lock2 and then tries to acquire lock1; by then lock1 is already held by thread 1, so thread 2 is bound to wait for thread 1 to release it.

This way, when thread 1 finishes sleeping, thread 2 has already acquired lock2; thread 1 now tries to acquire lock2 and blocks, and a deadlock is formed. I will not write out the full code here to save space; the article "Java multithreading 7: Deadlock" has code for the steps above, and a minimal sketch is given below.
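
A minimal sketch following the steps above (the class and lock names are illustrative):

```java
public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {                    // thread 1: take lock1 first
                sleep(50);                            // give thread 2 time to take lock2
                synchronized (lock2) {                // then wait forever for lock2
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock2) {                    // thread 2: take lock2 first
                sleep(50);
                synchronized (lock1) {                // then wait forever for lock1
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```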

24. How to wake up a blocked thread

If the thread is blocked because it called wait(), sleep(), or join(), you can interrupt it, which wakes it up by throwing InterruptedException. If the thread is blocked on IO, there is nothing you can do, because IO is implemented by the operating system and Java code has no way to reach into the operating system directly.
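
A small sketch of waking a sleeping thread with interrupt(); the durations are illustrative:

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);                    // would block for a minute
            } catch (InterruptedException e) {
                System.out.println("woken up early by interrupt");
            }
        });
        sleeper.start();

        Thread.sleep(100);
        sleeper.interrupt();                             // causes sleep() to throw immediately
    }
}
```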

25. How do immutable objects help with multithreading

As mentioned in an earlier question, immutable objects guarantee the memory visibility of an object's state; reading an immutable object requires no extra synchronization, which improves code execution efficiency.

26. What is a thread context switch

A thread context switch is the process by which control of the CPU is taken from a currently running thread and handed to another thread that is waiting to get CPU time.

27. What happens if the thread pool's queue is already full when you submit a task

We need to distinguish two cases here (a construction sketch follows the list):

  1. If you are using an unbounded queue such as LinkedBlockingQueue, it does not matter: tasks keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be regarded as an almost infinite queue that can hold an unlimited number of tasks.
  2. If you are using a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when it is full, the pool grows the number of threads up to the maximumPoolSize value; if even with the extra threads the queue stays full, the saturation policy, a RejectedExecutionHandler, is applied to the rejected tasks; the default is AbortPolicy.
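
A hedged sketch of constructing a pool with a bounded queue and the default AbortPolicy; all of the sizes and sleep times are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                 // corePoolSize, maximumPoolSize
                60, TimeUnit.SECONDS,                 // keep-alive for the extra threads
                new ArrayBlockingQueue<>(10),         // bounded work queue
                new ThreadPoolExecutor.AbortPolicy()  // default saturation policy
        );

        // Order of events: fill core threads -> fill queue -> add extra threads -> reject.
        for (int i = 0; i < 20; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " was rejected");   // AbortPolicy throws
            }
        }
        pool.shutdown();
    }
}
```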

28. What thread scheduling algorithm does Java use

Preemptive scheduling. After a thread uses up its CPU time, the operating system computes an overall priority from data such as thread priority and thread starvation, and assigns the next time slice to a particular thread.

29. What is Thread.sleep(0) for

This question is related to the previous one, so I put them together. Because Java uses preemptive scheduling, one thread may end up getting control of the CPU too often; to let some lower-priority threads also get a chance at the CPU, you can call Thread.sleep(0) to manually trigger one round of time-slice allocation by the operating system, which rebalances control of the CPU.

30. What is spinning

A lot of synchronized code consists of just a few simple statements that execute very quickly, so blocking every waiting thread may not be worth it, because blocking a thread involves switching between user mode and kernel mode. Since the code inside the synchronized block runs so fast, it can be better not to block the waiting threads but to let them busy-loop at the boundary of the synchronized block; that is spinning. If many spins go by without obtaining the lock, then block; this combination may be a better strategy.

31. What is the Java Memory Model

The Java memory model defines a specification for how multiple Java threads access memory. Explaining the whole Java memory model in a few sentences here is not realistic, so I will briefly summarize some of its parts:

(1) The Java memory model divides memory into main memory and working memory. Class state, that is, variables shared between threads, is stored in main memory. Whenever a Java thread uses such variables, it reads them from main memory into a copy in its own working memory and operates on that copy while running its own code. After the code finishes, it writes the latest value back to main memory.

(2) It defines several atomic operations for moving variables between main memory and working memory.

(3) It defines the rules for using volatile variables.

(4) happens-before, that is, the "happens before" principle, defines rules under which operation A is guaranteed to happen before operation B; for example, within the same thread, code earlier in control flow happens before code later in control flow, and an unlock on a lock happens before a subsequent lock on the same lock, and so on. As long as such a rule is satisfied, no extra synchronization is needed; if a piece of code satisfies none of the happens-before rules, then it is not thread-safe.

32. What is CAS

CAS stands for Compare and Swap, i.e. compare-and-replace. Suppose there are three operands: the memory value V, the old expected value A, and the new value B to write. If and only if the expected value A equals the memory value V is the memory value set to B and true returned; otherwise nothing is done and false is returned. Of course, a variable used with CAS must be volatile, so that every read gets the latest value from main memory; otherwise the old expected value A could, for some thread, remain a stale value A that never changes, and as long as a CAS keeps failing it would never succeed.
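
A small sketch of CAS through AtomicInteger.compareAndSet; the retry loop is the usual pattern:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // Increment with an explicit CAS loop (roughly what incrementAndGet does internally).
        int expected;
        do {
            expected = counter.get();                               // read the current value V
        } while (!counter.compareAndSet(expected, expected + 1));   // retry if another thread raced us

        System.out.println(counter.get());                          // prints 1
    }
}
```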

33. What are optimistic locking and pessimistic locking

(1) Optimistic locking: as the name suggests, it takes an optimistic view of the thread-safety problems caused by concurrent operations. It assumes contention does not always happen, so it does not hold a lock; instead it performs compare-and-replace as a single atomic operation to try to modify the variable in memory, and if that fails it means there was a conflict, for which there should be corresponding retry logic.

(2) Pessimistic locking: again as the name suggests, it takes a pessimistic view of the thread-safety problems caused by concurrent operations. It assumes contention will always happen, so every time it operates on a resource it holds an exclusive lock, just like synchronized: no matter what, it locks the resource before operating on it.

34. What is AQS

A brief word on AQS. AQS is short for AbstractQueuedSynchronizer, which should be translated as "abstract queued synchronizer".

If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole package; ReentrantLock, CountDownLatch, Semaphore, and others all use it. AQS actually links all the Entry nodes into a doubly linked queue. Take ReentrantLock: all waiting threads are placed as Entry nodes in this doubly linked queue, and after the thread in front finishes with the ReentrantLock, the thread in the first Entry of the queue starts running.

AQS defines all the operations on this doubly linked queue, but only opens the tryAcquire and tryRelease hooks (and their shared variants) to developers, who can override them in their own implementations to build their own concurrency utilities.
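
A hedged sketch of a minimal, non-reentrant mutex built on AQS, in the spirit of the example in the AbstractQueuedSynchronizer javadoc; the class and method names here are illustrative:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignored) {
            // State 0 means unlocked; CAS it to 1 to take the lock.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int ignored) {
            setState(0);            // back to unlocked
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // AQS queues the thread if tryAcquire fails
    public void unlock() { sync.release(1); }   // AQS wakes the next queued thread
}
```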

35. Thread safety of the singleton pattern

A perennial question. The first thing to say is what a thread-safe singleton means: in a multithreaded environment, exactly one instance of the class is ever created. There are many ways to write a singleton; here is my summary (a sketch of the double-checked version follows the list):

(1) Eager ("hungry") singleton: thread-safe

(2) Lazy singleton: not thread-safe

(3) Double-checked locking singleton: thread-safe
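
A sketch of the double-checked locking version; note that the field must be volatile, otherwise instruction reordering could expose a half-constructed instance:

```java
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, with the lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```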

36. What is Semaphore for

A Semaphore is a counting semaphore whose job is to limit how many threads can be inside a block of code at once. Semaphore has a constructor that takes an int n, meaning at most n threads may access that code at the same time; beyond n, a thread must wait until one of the threads currently inside finishes before the next thread may enter. You can see that if you pass n = 1 to the Semaphore constructor, it becomes equivalent to a synchronized block.
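
A brief sketch limiting a block to 3 concurrent threads; the thread counts and sleep time are illustrative:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore PERMITS = new Semaphore(3);  // at most 3 threads inside at once

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    PERMITS.acquire();                           // blocks when 3 threads are already inside
                    System.out.println(Thread.currentThread().getName() + " entered");
                    Thread.sleep(200);                           // pretend to do work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    PERMITS.release();                           // let the next thread in
                }
            }).start();
        }
    }
}
```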

37. Hashtable's size() method contains only one statement, "return count"; why does it still need synchronization?

This is something I was confused about for a while; I do not know whether you have ever thought about it. If a method contains multiple statements all operating on the same class variable, then leaving it unsynchronized in a multithreaded environment will of course cause thread-safety problems; that is easy to understand. But the size() method contains only one statement, so why lock it?

Over time, through work and study, I came to understand it; there are two main reasons:

(1) At any moment, only one thread can execute the synchronized methods of a given class instance, but its non-synchronized methods can be accessed by multiple threads at the same time. So a problem can arise: thread A may be in the middle of Hashtable's put method adding data, and thread B could then call size() normally and read the current element count, but the value it reads might not be current. Thread A may have added the data but not yet executed size++, so the size thread B reads is inaccurate. Once size() is synchronized, thread B can only call size() after thread A's put call has completed, which guarantees thread safety.

(2) The CPU executes machine code, not Java code; this is crucial and worth remembering. Java code is eventually translated into machine code, which is what actually runs and interacts with the hardware. Even if you see only one line of Java code, and even if the bytecode generated from it is only one instruction, that does not mean the underlying operation is a single step. Suppose "return count" is translated into three assembly instructions, each corresponding to its machine code; the thread could be switched out after executing only the first of them.

38. Which thread runs the constructor and static block of a Thread subclass

This is a subtle and tricky question. Remember: a Thread subclass's constructor and static block are executed by the thread in which the Thread object is newed, while the code in its run() method is executed by the new thread itself.

If that statement is confusing, an example: suppose Thread1 is newed inside Thread2 (in its run() method), and Thread2 is newed in the main function; then:

(1) Thread2's constructor and static block are called by the main thread; Thread2's run() method is called by Thread2 itself

(2) Thread1's constructor and static block are called by Thread2; Thread1's run() method is called by Thread1 itself

39. Synchronized method or synchronized block: which is the better choice

The synchronized block. It means the code outside the block runs without the lock, which is more efficient than synchronizing the whole method. Please remember one principle: the smaller the synchronized scope, the better.

Taking this a little further: although a smaller synchronized scope is better, the Java virtual machine has an optimization called lock coarsening, which enlarges the synchronized scope. It is useful, for example, with StringBuffer, a thread-safe class whose most commonly used method, append(), is synchronized. When our code appends strings repeatedly, that means repeated lock -> unlock cycles, which hurts performance because the JVM repeatedly switches the thread between kernel mode and user mode. So the JVM coarsens the lock across the repeated append() calls, extending the lock from before the first append to after the last one, turning them into one large synchronized block. That reduces the number of lock -> unlock cycles and effectively improves code execution efficiency.

40. How should a thread pool be used for high-concurrency, short-running tasks? For low-concurrency, long-running tasks? For high-concurrency, long-running tasks?

This is a question I saw on an online forum about concurrent programming. I put it last in the hope that everyone reads it and thinks about it, because it is a very good, very practical, very professional question. My personal view:

(1) For high concurrency with short task execution times, set the number of threads in the pool to the number of CPU cores + 1, to reduce thread context switching.

(2) For low concurrency with long task execution times, distinguish two cases:

a) If the long time is spent mostly on IO operations, i.e. IO-intensive tasks, the CPU is not occupied during IO, so do not let the cores sit idle: increase the number of threads in the pool and let the CPU handle more work.

b) If the long time is spent mostly on computation, i.e. CPU-intensive tasks, there is no way around it: as in (1), keep the number of threads in the pool small to reduce thread context switching.

(3) For high concurrency with long execution times, the key to handling such tasks lies not in the thread pool but in the overall architecture design: see whether some of the data these tasks use can be cached as a first step, and add servers as a second step; as for the thread pool settings, refer to (2). Finally, the long execution time itself may also need analysis, to see whether the work can be split up and decoupled. A rough sizing sketch based on these rules of thumb follows.
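
A rough, hedged sizing sketch based on the rules of thumb above; the 2x multiplier for IO-bound work is an assumption, not a fixed rule:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // (1) / (2b) CPU-bound or short, high-concurrency tasks: about cores + 1 threads.
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cores + 1);

        // (2a) IO-bound tasks: more threads than cores, since they mostly wait on IO.
        ExecutorService ioBoundPool = Executors.newFixedThreadPool(cores * 2);

        System.out.println("cores = " + cores);
        cpuBoundPool.shutdown();
        ioBoundPool.shutdown();
    }
}
```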



 


Origin blog.csdn.net/weixin_46329358/article/details/104480256