Java Concurrent Programming Interview Perspective

This article collects questions that come up in Java concurrent programming interviews and will be updated continuously. The accompanying diagram shows what you need to master in concurrent programming.

Concurrency basics

Question: Why do you need multithreading?

Answer: Using multithreading can improve performance, mainly by reducing latency and increasing throughput.
To improve performance there are two main directions: one is to optimize the algorithm, the other is to get the most out of the hardware. In the field of concurrent programming, improving performance essentially means improving hardware utilization — more specifically, I/O utilization and CPU utilization. Concurrent programming is not about the utilization of a single device; the operating system already handles CPU utilization and I/O utilization individually. What multithreading solves is the combined utilization of the CPU and I/O devices. Here is an example:
Suppose CPU computation and I/O operations execute alternately and take equal time, a 1:1 ratio (unrealistic in practice, but convenient for illustration). With a single thread, CPU utilization and I/O device utilization are both 50%.

Now suppose there are two threads: while thread A performs CPU computation, thread B performs I/O; while thread A performs I/O, thread B performs CPU computation. CPU utilization and I/O device utilization both reach 100%.

In the single-core era, multithreading was mainly used to balance the CPU and I/O devices.

Question: How many ways are there to create a thread? Which way do you think is better?

Answer: Strictly speaking, there is only one way to create a thread: through the Thread class. But we usually distinguish two forms: implementing the Runnable interface, or inheriting the Thread class. There are other ways to create threads — thread pools, timers, lambdas, inner classes, and so on — but in essence they all boil down to these two (see the sketch after this list).
Implementing the Runnable interface is better than inheriting Thread, because:

  • From the perspective of code architecture, the task to execute (the code in the run method) should be decoupled from the creation of the thread (Thread); the creation of threads should not be mixed with the execution of tasks.
  • If you inherit Thread, a new thread must be created for every task, and creating a thread is relatively expensive (create, execute, destroy). With Runnable, tasks can be handed to a thread pool, greatly reducing thread creation — another benefit of separating tasks from threads: resource savings.
  • Java allows only single inheritance but multiple interface implementation, so implementing Runnable leaves the class free to extend something else.
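A minimal sketch of the two forms (class and task names are illustrative):

public class CreateThreadDemo {

    public static void main(String[] args) {
        // Form 1: implement Runnable (here as a lambda) and hand it to a Thread — preferred
        Thread viaRunnable = new Thread(() -> System.out.println("running via Runnable"));
        viaRunnable.start();

        // Form 2: inherit Thread and override run()
        Thread viaSubclass = new Thread() {
            @Override
            public void run() {
                System.out.println("running via Thread subclass");
            }
        };
        viaSubclass.start();
    }
}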

Question: What happens if a thread calls the start() method twice? Why?

Answer: An IllegalThreadStateException will be thrown, because start() checks the thread's state before starting it; if the state is found to have already changed, an error is reported. start() changes the thread state from NEW to RUNNABLE, so the second call sees a state that is no longer NEW and fails.
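A small sketch that reproduces this (illustrative):

public class StartTwiceDemo {

    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println("run once"));
        t.start(); // state changes from NEW to RUNNABLE
        t.start(); // the state is no longer NEW: throws IllegalThreadStateException
    }
}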

Question: Since the start() method calls the run() method, why do we choose to call the start() method instead of calling the run() method directly?

Answer: Because calling start() actually starts a new thread and takes it through the thread life cycle, while calling run() directly just executes an ordinary method on the current thread.
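A quick sketch of the difference — run() executes on the calling thread, start() on a new one (names illustrative):

public class StartVsRunDemo {

    public static void main(String[] args) {
        Runnable task = () -> System.out.println("executed by " + Thread.currentThread().getName());
        new Thread(task, "worker").run();   // ordinary method call: prints "executed by main"
        new Thread(task, "worker").start(); // real thread start: prints "executed by worker"
    }
}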

Question: Can a thread that is halfway through its work be forcibly killed in Java? How can it be closed gracefully?

Answer: No. Java does provide methods such as stop() and destroy(), but they are deprecated and not recommended. The reason is that forcibly killing a thread gives it no chance to release the resources it is using — file descriptors, network connections, and so on. The reasonable way to end a thread is to let it finish its run, release its resources, and then exit. If the thread runs in a loop, a thread communication mechanism is needed to notify it to exit.

Question: How do you stop a thread correctly? Can a volatile flag bit be used to stop it?

Answer: There is no safe way to forcibly stop a thread in Java. stop(), destroy(), and similar methods provided by early Java are all deprecated, and their use is not recommended. Stopping a thread can only be done through notification and cooperation.
There are generally two ways:
1. Use interrupt to notify the thread (recommended)
2. Use a volatile flag field, and exit the thread by checking whether it is true/false (has limitations)

  • Use Interrupt to notify

while (!Thread.currentThread().isInterrupted() && more work to do) {
    do more work
}

First check whether the thread has been interrupted via Thread.currentThread().isInterrupted(), then check whether there is more work to do.

public class StopThread implements Runnable {

    @Override
    public void run() {
        int count = 0;
        while (!Thread.currentThread().isInterrupted() && count < 1000) {
            System.out.println("count = " + count++);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new StopThread());
        thread.start();
        Thread.sleep(5);
        thread.interrupt();
    }
}

When using this method, note that blocking methods such as sleep() and wait() put the thread to sleep, and if a sleeping thread is interrupted, the thread can feel the interrupt signal: it throws an InterruptedException, clears the interrupt signal, and resets the interrupt flag to false. When handling this with try {} catch (InterruptedException e) {}, do not swallow the exception (catch it and do nothing); instead, either propagate the exception directly or re-interrupt in the catch block so the interrupt is not lost.
Throw exception case:

/**
 * Description: Best practice: the preferred choice after catching InterruptedException:
 * declaring the exception in the method signature forces a try/catch in run().
 */
public class RightWayStopThreadInProd implements Runnable {

    @Override
    public void run() {
        while (true && !Thread.currentThread().isInterrupted()) {
            System.out.println("go");
            try {
                throwInMethod();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                // Save the log, stop the program
                System.out.println("Save log");
                e.printStackTrace();
            }
        }
    }

    private void throwInMethod() throws InterruptedException {
        Thread.sleep(2000);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new RightWayStopThreadInProd());
        thread.start();
        Thread.sleep(1000);
        thread.interrupt();
    }
}

Restore-the-interrupt case:

/**
 * Description: Best practice 2: call Thread.currentThread().interrupt() in the catch clause
 * to restore the interrupt status, so that subsequent checks can still see that an interrupt
 * occurred. Compared to RightWayStopThreadInProd, the interrupt is restored here so the loop
 * can break out.
 */
public class RightWayStopThreadInProd2 implements Runnable {

    @Override
    public void run() {
        while (true) {
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Interrupted, program ends");
                break;
            }
            reInterrupt();
        }
    }

    private void reInterrupt() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new RightWayStopThreadInProd2());
        thread.start();
        Thread.sleep(1000);
        thread.interrupt();
    }
}
  • Use a volatile flag

/**
 * Description: Demonstrates the limitations of volatile: part 1 — it seems feasible.
 */
public class WrongWayVolatile implements Runnable {

    private volatile boolean canceled = false;

    @Override
    public void run() {
        int num = 0;
        try {
            while (num <= 100000 && !canceled) {
                if (num % 100 == 0) {
                    System.out.println(num + " is a multiple of 100.");
                }
                num++;
                Thread.sleep(1);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WrongWayVolatile r = new WrongWayVolatile();
        Thread thread = new Thread(r);
        thread.start();
        Thread.sleep(5000);
        r.canceled = true;
    }
}

This method is feasible here, but it has limitations.

    • Limitation: a volatile boolean cannot stop a thread that is blocked for a long time. The following producer-consumer example demonstrates this:
/**
 * Description: Demonstrates the limitation of volatile, part 2 — when the thread is blocked,
 * volatile cannot stop it. In this example the producer produces quickly and the consumer
 * consumes slowly, so once the blocking queue is full the producer blocks, waiting for the
 * consumer to consume.
 */
public class WrongWayVolatileCantStop {

    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue storage = new ArrayBlockingQueue(10);

        Producer producer = new Producer(storage);
        Thread producerThread = new Thread(producer);
        producerThread.start();
        Thread.sleep(1000);

        Consumer consumer = new Consumer(storage);
        while (consumer.needMoreNums()) { // (1)
            System.out.println(consumer.storage.take() + " consumed");
            Thread.sleep(100);
        }
        System.out.println("The consumer does not need more data.");

        // Once no more data is needed, we should stop the producer as well,
        // but in practice this flag has no effect because the producer is blocked.
        producer.canceled = true;
        System.out.println(producer.canceled);
    }
}

class Producer implements Runnable {

    public volatile boolean canceled = false;

    BlockingQueue storage;

    public Producer(BlockingQueue storage) {
        this.storage = storage;
    }

    @Override
    public void run() {
        int num = 0;
        try {
            while (!canceled) {
                if (num % 100 == 0) {
                    storage.put(num); // (2)
                    System.out.println(num + " is a multiple of 100 and is put in the warehouse.");
                }
                num++;
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("The producer finishes running"); // (3)
        }
    }
}

class Consumer {

    BlockingQueue storage;

    public Consumer(BlockingQueue storage) {
        this.storage = storage;
    }

    public boolean needMoreNums() {
        if (Math.random() > 0.95) {
            return false;
        }
        return true;
    }
}

After running the above code, you will find that at (1), once the consumer no longer needs data, canceled is set to true; the producer should then stop and execution should reach (3), printing "The producer finishes running". In fact, that line is never printed — the code does not finish running.
So volatile cannot stop a thread that is blocked. Why does this happen?
The code is blocked at (2), so the loop never comes around again and never re-checks whether canceled is true. Java's designers anticipated this situation, which is why interrupt is the authentic way to interrupt a thread: the interrupt mechanism can respond even while the thread is blocked.

Question: The following main(...) function starts a thread. When the main function ends, does the thread exit forcibly? Does the process exit forcibly?

public static void main(String[] args) {
    System.out.println("main thread start");
    Thread t1 = new Thread(() -> {
        while (true) {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    t1.start();
    System.out.println("main thread end");
}

Answer: No. But if t1.setDaemon(true) is added before t1.start(), then after the main(...) function exits, thread t1 will exit and the entire process will exit. Threads running in the JVM fall into two categories: daemon threads and non-daemon threads. Threads are non-daemon by default. Java stipulates that the JVM process exits when all non-daemon threads have exited; daemon threads do not affect the exit of the JVM process.

Question: Will the following situations throw an InterruptedException?

public void run() {
    while (!stopped) {
        int a = 1, b = 2;
        int c = a + b;
        System.out.println("thread is executing");
    }
}

For code like the above, if t.interrupt() is called from the main thread, will the thread throw an exception? If the code is modified so that the thread blocks on a synchronized keyword, waiting to acquire the lock, as in the following code:

public void run() {
    while (!stopped) {
        synchronized (this) {
            int a = 1, b = 2;
            int c = a + b;
            System.out.println("thread is executing");
        }
    }
}

At this time, call t.interrupt() in the main thread. Will the thread throw an exception?

Answer: Neither of the two pieces of code above will throw an exception. Only functions declared to throw InterruptedException will throw it, i.e., the following commonly used functions:

public static void sleep(long millis) throws InterruptedException {}
public final void wait() throws InterruptedException {}
public final void join() throws InterruptedException {}

Question: What states does a thread have? How do they change?

Answer: The thread state transitions are shown in the figure above.
Blocking that can be interrupted is called lightweight blocking, and the corresponding thread states are WAITING or TIMED_WAITING; blocking that cannot be interrupted, such as waiting on synchronized, is called heavyweight blocking, and the corresponding state is BLOCKED.
A thread in the NEW state enters RUNNING or READY after start() is called. If it never calls a blocking function, the thread only switches between RUNNING and READY, driven by the system's time-slice scheduling. A thread can call yield() to give up its hold on the CPU; no other method can intervene in these two state changes.
Calling a blocking function moves the thread into WAITING or TIMED_WAITING. The difference between the two is that the former blocks indefinitely, while the latter takes a time parameter and blocks for a finite time. Waiting on synchronized puts the thread into the BLOCKED state.
The difference between Blocked and Waiting is that Blocked is waiting for another thread to release the monitor lock, while Waiting is waiting for a certain condition, such as a join-ed thread finishing execution, or notify()/notifyAll().
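A small sketch (illustrative) that observes some of these states through Thread.getState():

public class ThreadStateDemo {

    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(); // releases the lock and waits indefinitely
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        System.out.println(t.getState()); // NEW
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState()); // WAITING (parked inside lock.wait())
        synchronized (lock) {
            lock.notify();
        }
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}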

Question: What is the difference between t.isInterrupted() and Thread.interrupted()?

Answer: t.interrupt() is equivalent to sending a wake-up signal to the thread: if the thread happens to be in the WAITING or TIMED_WAITING state, an InterruptedException is thrown and the thread is awakened; if the thread is not blocked, only its interrupt flag is set and the thread itself does nothing. The two functions in the question are how a thread determines whether it has received an interrupt signal. The former is a non-static method, the latter a static one. The difference is that the former only reads the interrupt status without modifying it, while the latter both reads the interrupt status and resets the interrupt flag.
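A quick sketch of the difference — interrupted() clears the flag, isInterrupted() does not:

public class InterruptFlagDemo {

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // set the interrupt flag on the main thread

        System.out.println(Thread.currentThread().isInterrupted()); // true: flag read, not cleared
        System.out.println(Thread.currentThread().isInterrupted()); // still true

        System.out.println(Thread.interrupted()); // true: flag read and cleared
        System.out.println(Thread.interrupted()); // false: the flag was cleared above
    }
}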

Question: How do you understand synchronized?

Answer:
1. What synchronized locks: synchronized effectively adds a lock to an object. For a non-static member method, the lock is the current instance this; for a static member method, the lock is the Class object of the current class.

2. The essence of a lock: a lock is actually an object, and that object needs to accomplish the following:
1) It must hold a flag (a state variable) recording whether it is occupied by a thread. In the simplest case this state has two values, 0 and 1: 0 means no thread holds the lock, 1 means some thread holds it.
2) It needs to record the ID of the thread that occupies the lock (to know which thread holds it).
3) It needs to maintain a list of thread IDs recording the other, blocked threads; when the current thread releases the lock, one thread is taken from the blocked list and woken up.
The lock is an object, and the shared resource is also an object, so the lock and the shared resource can be the same object:

synchronized (this) { ... }

or two different objects — the lock on obj, the shared resource elsewhere:

synchronized (obj) { ... }
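A small sketch of the three lock targets (class and method names illustrative):

public class SyncTargets {

    // Lock is the current instance (this)
    public synchronized void instanceMethod() {
        // equivalent to: synchronized (this) { ... }
    }

    // Lock is the Class object of the current class
    public static synchronized void staticMethod() {
        // equivalent to: synchronized (SyncTargets.class) { ... }
    }

    // Lock and shared resource as two different objects
    private final Object lock = new Object();

    public void blockMethod() {
        synchronized (lock) {
            // critical section accessing some shared resource
        }
    }
}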

Question: How do you design a producer-consumer model?

Answer:
As shown in the figure above: a memory queue, multiple producer threads put data in the memory queue; multiple consumer threads fetch data from the memory queue.

    • The memory queue itself must be locked to achieve thread safety.
    • Blocking. When the memory queue is full, the producer cannot put data in and blocks; when the memory queue is empty, the consumer has nothing to take and blocks.
    • Two-way notification. After the consumer is blocked, the producer puts in the new data and needs to notify() the consumer; conversely, after the producer is blocked, the consumer consumes the data and needs to notify() the producer.
How to block?
  • The thread blocks itself, that is, the producer and consumer threads call wait() and notify() respectively
  • Use a blocking queue whose enqueue/dequeue functions themselves block when data cannot be put in or taken out. This is how BlockingQueue is implemented.
How to notify in both directions?
  • wait() and notify() mechanism
  • Condition mechanism

Pseudocode (note: still has problems):

public void enqueue() {
    synchronized (queue) {
        while (queue.full()) {
            queue.wait();
        }
        // enqueue operation
        queue.notify();
    }
}

public void dequeue() {
    synchronized (queue) {
        while (queue.empty()) {
            queue.wait();
        }
        // dequeue operation
        queue.notify();
    }
}
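For comparison, a minimal runnable version built on the JDK's BlockingQueue, which implements the locking, blocking, and two-way notification internally (a sketch; capacity and counts are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {

    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i); // blocks when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take()); // blocks when empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}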

Question: Why must the wait method be used inside code protected by synchronized?

Answer: When describing wait() in Javadoc, it looks like this:

* As in the one argument version, interrupts and spurious wakeups are
* possible, and this method should always be used in a loop:
* <pre>
*     synchronized (obj) {
*         while (<condition does not hold>)
*             obj.wait();
*         ... // Perform action appropriate to condition
*     }
* </pre>
* This method should only be called by a thread that is the owner
* of this object's monitor. See the {@code notify} method for a
* description of the ways in which a thread can become the owner of
* a monitor.

The Javadoc above says that interrupts and spurious wakeups are possible when using this method, so it should always be used in a loop: the while loop re-checks the condition, so if the thread is spuriously awakened, the condition is evaluated again and the thread keeps waiting if the condition does not hold.
So why is it designed to require synchronized?

class BlockingQueue {
    Queue<String> buffer = new LinkedList<String>();

    public void give(String data) {
        buffer.add(data);
        notify(); // since someone may be waiting in take
    }

    public String take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();
        }
        return buffer.remove();
    }
}

This code is a typical producer-consumer model.

1. The consumer thread calls the take method and checks whether buffer.isEmpty() returns true. It does — the buffer is empty — so the thread intends to wait, but the scheduler suspends it before it can call the wait method. So wait() has not executed yet.
2. The producer now runs, executing the entire give method: it adds data to the buffer and calls notify(). But notify() has no effect, because the consumer's wait() has not executed yet, so no thread is waiting to be awakened.
3. The consumer thread that was suspended by the scheduler resumes, executes the wait method, and enters the wait — even though data is already in the buffer, it may now wait forever.

The consumer must first check the condition and then wait — two separate operations, not one atomic operation. If it is interrupted in between, as above, the result is thread-unsafe. In addition, wait() needs to release the lock, and the premise of releasing a lock is holding it, so wait() must be used together with synchronized.

Question: Why is wait/notify/notifyAll defined in the Object class, while sleep is defined in the Thread class?

Answer: There are two main reasons:

  • Every object in Java has a monitor lock. Since every object can be locked, a location in the object header is needed to store the lock information. The lock is at the object level rather than the thread level, and wait/notify/notifyAll are lock-level operations whose locks belong to objects, so defining them in the Object class is the most natural choice: Object is the parent class of all objects.
  • If wait/notify/notifyAll were defined in the Thread class, it would impose great limitations. For example, a thread may hold multiple locks in order to implement logic in which locks cooperate with each other. If wait were defined on Thread, how could a thread wait on multiple locks? How would we know which lock the thread is waiting for? Since we let the current thread wait on the lock of a specific object, this should naturally be done by operating on the object, not the thread.

Question: What are the similarities and differences between wait/notify and sleep methods?

Answer:
Similarities:

  • Both can block a thread.
  • Both respond to interrupts: if an interrupt signal arrives while waiting, both can respond and throw InterruptedException.

Differences:

  • The wait method must be used in code protected by synchronized; the sleep method has no such requirement.
  • sleep executed inside synchronized code does not release the monitor lock, whereas wait actively releases the monitor lock (demonstrated in the sketch after this list).
  • sleep requires a time to be specified, after which the thread resumes actively (or is awakened early by an interrupt). The no-argument wait waits forever until interrupted or awakened; it never resumes on its own.
  • wait/notify are methods of Object, while sleep is a method of the Thread class.
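A small sketch (illustrative) showing that wait releases the monitor lock while sleep holds it:

public class WaitVsSleepDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(1000); // releases the lock while waiting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        Thread sleeper = new Thread(() -> {
            synchronized (lock) {
                try {
                    Thread.sleep(1000); // keeps holding the lock while sleeping
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        waiter.start();
        Thread.sleep(100);
        synchronized (lock) { // acquired immediately: waiter released the lock in wait()
            System.out.println("got lock while waiter is waiting");
        }
        waiter.join();

        sleeper.start();
        Thread.sleep(100);
        synchronized (lock) { // blocks until sleeper wakes: sleep holds the lock
            System.out.println("got lock only after sleeper woke up");
        }
    }
}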

Question: Why must wait() release the lock?

Answer:
When thread A enters synchronized (obj1), it locks obj1. If calling wait() put A into the blocked state without releasing the lock, A could never exit the synchronized block, so thread B could never enter the synchronized (obj1) block and would never get the chance to call notify(). Wouldn't that be a deadlock?
So inside wait():

wait() {
    // release the lock obj1
    // block, waiting to be notified by another thread
    // reacquire the lock
}

Question: How to share data between two threads?

Answer: This can be achieved by sharing objects, or by using concurrent data structures such as blocking queues. For example, the producer-consumer model can be implemented with the wait and notify methods.

Question: What is context-switching in multithreading?

Answer: Context switching is the process of storing and restoring CPU state, which enables thread execution to resume execution from the point of interruption. Context switching is a basic feature of multitasking operating systems and multithreaded environments.

Question: What is the difference between Callable and Runnable?

Answer: The difference:

  • Method name: Callable's execution method is call(), while Runnable's is run();
  • Return value: a Callable task returns a value after execution, while a Runnable task does not;
  • Exceptions: the call() method can throw checked exceptions, while the run() method cannot;
  • The Future class cooperates with Callable: through a Future you can observe the task's execution, cancel it, or get its result. Runnable has none of these capabilities; Callable is more powerful than Runnable (see the sketch after this list).
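A minimal sketch of Callable with Future (names illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Callable<Integer> task = () -> {
            Thread.sleep(500); // call() may throw checked exceptions
            return 42;         // call() returns a value
        };

        Future<Integer> future = pool.submit(task);
        System.out.println("result = " + future.get()); // blocks until the result is ready
        pool.shutdown();
    }
}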

Thread Pool

Question: What is a thread pool?

Answer: To avoid the system frequently creating and destroying threads, we can reuse and cache created threads: "creating" a thread becomes taking an idle thread from the pool, and "closing" a thread becomes returning the thread to the pool.

Question: Why do you need a thread pool?

Answer: This question can be rephrased as: what are the advantages of thread pools, or what are the disadvantages of not using them?

Disadvantages of not using thread pool :

  • The system overhead of repeatedly creating threads is relatively large; every creation and destruction takes time. If tasks are simple, creating and destroying threads may consume more resources than executing the tasks themselves.
  • Too many threads will take up too much memory and other resources, will also bring too many context switches, and will also lead to system instability.

Advantages of thread pool :

  • Reduce resource consumption. Reduce the consumption caused by thread creation and destruction by reusing the created threads.
  • Improve response speed. When the task arrives, the task can be executed immediately without waiting for the thread to be created.
  • The thread pool manages resources uniformly. For example, it can manage the task queue and the threads in a unified way, and start or stop tasks uniformly, which is more convenient and easier to manage than handling tasks one by one with individual threads. It also helps with statistics: for example, we can easily count the number of executed tasks.

Question: What are the parameters of the thread pool? What is the workflow of the thread pool?

Answer: The thread pool has the following parameters:

  • corePoolSize (number of core threads): core threads are not destroyed once created
  • maximumPoolSize (maximum number of threads): threads beyond the core count are destroyed after staying idle for keepAliveTime
  • keepAliveTime + time unit: how long an idle non-core thread survives
  • ThreadFactory (thread factory)
  • workQueue (blocking queue)
  • handler (rejection strategy)

The execution flow is as follows (a construction sketch follows the list):

  • If the current number of threads is less than corePoolSize, create a new thread to execute the task;
  • If the current number of threads is greater than or equal to corePoolSize, add the task to the BlockingQueue;
  • If the queue is full and the current number of threads is less than maximumPoolSize, create a new thread to process the task;
  • If the number of threads would exceed maximumPoolSize, the task is rejected and RejectedExecutionHandler.rejectedExecution() is called.
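A sketch constructing a pool with these parameters set explicitly (all values illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConstructionDemo {

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime + time unit for idle non-core threads
                new ArrayBlockingQueue<>(100),       // workQueue: bounded, to avoid unbounded buildup
                Thread::new,                         // threadFactory (a real one would name threads, etc.)
                new ThreadPoolExecutor.AbortPolicy() // handler: the rejection policy
        );

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}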

Question: What rejection policies does the thread pool provide? When does rejection happen?

Answer:
The thread pool rejects tasks mainly in two situations:

  • After we call the shutdown method to close the pool: even if unfinished tasks remain at that point, the pool is closed, so any task submitted after that is rejected.
  • The pool has no capacity left to handle new tasks, i.e., its workload is saturated.

Rejection strategies (a small demo follows this list):

  • AbortPolicy: when rejecting a task, this policy directly throws a RejectedExecutionException (a RuntimeException), letting you perceive that the task was rejected, so you can choose to retry or abandon the submission according to business logic.
  • DiscardPolicy: a newly submitted task is silently discarded, with no notification to you — there is some risk, since data may be lost.
  • DiscardOldestPolicy: after a new task is submitted, the task that has been waiting longest is discarded; this likewise carries some risk of data loss.
  • CallerRunsPolicy: a new task is handed back to the thread that submitted it — whoever submits the task is responsible for executing it. This has two main advantages:
    • First, newly submitted tasks are not discarded, so there is no business loss.
    • Second, because the submitting thread must execute the task itself, and execution takes time, the submitter is occupied during that period and cannot submit new tasks. This slows the rate of submission — a form of negative feedback — while the threads in the pool use that time to work through existing tasks and free up capacity, giving the pool a buffer period.
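A small sketch that triggers rejection with AbortPolicy (all sizes illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {

    public static void main(String[] args) {
        // 1 core thread, 1 max thread, queue of 1: the third task must be rejected
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable slow = () -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        pool.execute(slow); // occupies the single worker thread
        pool.execute(slow); // fills the queue
        try {
            pool.execute(slow); // no capacity left: AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected: " + e.getMessage());
        }
        pool.shutdown();
    }
}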

Question: Why does the Alibaba Java development manual recommend against automatically created thread pools, i.e., thread pools created through Executors?

Answer: Because creating a thread pool through Executors is not safe. Before answering this question, we should first know which thread pools Executors can create.

  • FixedThreadPool: a thread pool with a fixed number of threads; the core thread count and the maximum thread count are the same. newFixedThreadPool actually calls the ThreadPoolExecutor constructor internally. The problem is that it uses a LinkedBlockingQueue with no capacity bound, so if the threads process tasks slowly while many tasks keep arriving, the queue accumulates a large number of tasks, which may cause an OutOfMemoryError.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
}
  • SingleThreadExecutor: a single-thread pool, the same as newFixedThreadPool except that both the core and maximum thread counts are set to 1. The task queue is still an unbounded LinkedBlockingQueue, so it carries the same memory-overflow risk as newFixedThreadPool.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
            new ThreadPoolExecutor(1, 1,
                    0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<Runnable>()));
}
  • CachedThreadPool: a cacheable thread pool. The queue it uses is a SynchronousQueue, which does not store tasks itself but hands them off directly — that part is fine. However, the second constructor parameter is Integer.MAX_VALUE, meaning the maximum number of threads is effectively unlimited: when there are too many tasks it creates a huge number of threads, which can exhaust memory.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
            60L, TimeUnit.SECONDS,
            new SynchronousQueue<Runnable>());
}
  • ScheduledThreadPool: a thread pool for tasks executed after a delay or periodically. Its task queue is a DelayedWorkQueue — a delayed, unbounded queue — so it also carries the memory-overflow risk mentioned above.

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE,
            0, NANOSECONDS,
            new DelayedWorkQueue());
}
  • SingleThreadScheduledExecutor: carries the same risk as ScheduledThreadPool.

So thread pools created through Executors are risky. In comparison, creating the pool manually is better: we are explicit about the pool's operating rules, can choose a thread count that suits us, and can reject new task submissions when necessary, avoiding the risk of resource exhaustion.

Question: How to set the number of threads in the thread pool?

Answer: Setting the number of threads should be guided by these conclusions:

  • The higher the proportion of time a thread spends working (computing), the fewer threads are needed.
  • The higher the proportion of time a thread spends waiting, the more threads are needed.
  • For a specific program, actual stress testing is needed to find a suitable value.

For CPU-intensive tasks (encryption, decryption, compression, computation, and other tasks that consume a lot of CPU resources), set the thread count to 1-2 times the number of CPU cores. Such tasks keep the CPU close to full load; configuring many more threads only causes unnecessary context switching and hurts performance.

For IO-intensive tasks (database access, file reads and writes, network communication, and so on), the tasks do not consume much CPU but spend a long time on IO, so the CPU waits a lot. The thread count can then be set much higher, using the formula: thread count = number of CPU cores * (1 + thread wait time / thread working time). Other system load must also be considered, and the final number should be determined by actual stress testing.
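A hypothetical worked example of the formula: on an 8-core machine, if each task spends on average 90 ms waiting on IO and 10 ms computing, then thread count = 8 * (1 + 90 / 10) = 8 * 10 = 80 threads. A pure CPU-bound task (wait time near zero) degenerates to 8 * (1 + 0) = 8 threads, which matches the 1-2x-cores guideline above.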

Question: How to properly close the thread pool? What is the difference between shutdown and shutdownNow?

Answer: ThreadPoolExecutor provides several methods related to closing the thread pool:

void shutdown();
boolean isShutdown();
boolean isTerminated();
boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException;
List<Runnable> shutdownNow();

shutdown() safely closes the thread pool. After shutdown is called, the pool does not close immediately: if tasks are executing or waiting in the queue, it waits for all of them to finish and only then closes. But once this method is called, no new tasks can be submitted; tasks submitted afterwards are rejected according to the rejection policy.

isShutdown() tells whether the pool has started shutting down; it cannot tell whether the shutdown is complete.

isTerminated() tells whether the pool has completely shut down. So if tasks are still executing after shutdown is called, isShutdown() returns true at that point while isTerminated() returns false.

awaitTermination determines whether the pool has completely closed, similar to isTerminated, except that it accepts a waiting time. Calling it can lead to the following situations:

  • During the waiting period (including before entering the waiting state) the pool closes and all submitted tasks (both executing and queued) finish — the pool has "fully terminated" — and the method returns true;
  • The timeout elapses without the pool fully terminating, and the method returns false;
  • The thread is interrupted while waiting, and the method throws InterruptedException.

shutdownNow() closes the pool immediately: it first sends an interrupt signal to the pool's threads to try to interrupt them, then returns the tasks still waiting in the queue to the caller, so the caller can take remedial action on them.
We can therefore pick the method that fits the business: normally shutdown() is used so that all submitted tasks get to finish; if the situation is urgent, shutdownNow() speeds up the pool's termination.
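A common graceful-shutdown pattern combining these methods (a sketch; the timeouts are illustrative):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {

    static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown(); // stop accepting new tasks, let submitted ones finish
        try {
            if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                // took too long: interrupt workers and collect tasks that never started
                List<Runnable> dropped = pool.shutdownNow();
                System.out.println(dropped.size() + " queued tasks were never started");
                if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                    System.out.println("pool did not terminate");
                }
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();                 // re-cancel if we ourselves are interrupted
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }
}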

Question: What is the two-level scheduling model of the Executor framework?

Answer: Simply put, Java threads map one-to-one to native operating system threads: when a Java thread starts, a native OS thread starts; when the Java thread terminates, the OS thread is recycled. The operating system schedules all threads and assigns them to the available CPUs.

Thus the application controls the upper level through the Executor framework (mapping tasks to threads), while the lower level — scheduling threads onto CPUs — is controlled by the operating system kernel and is not under the application's control. Together these form the two-level scheduling model, shown below:

Various locks in Java

Question: What are the types of locks in Java?

Answer: According to the classification standard, we divide locks into the following 7 categories.

  • Biased lock/lightweight lock/heavyweight lock;
  • Reentrant lock/non-reentrant lock;
  • Shared lock/exclusive lock;
  • Fair lock/unfair lock;
  • Pessimistic lock/optimistic lock;
  • Spin lock/non-spin lock;
  • Interruptible lock/non-interruptible lock.

Question: What is the essence of pessimistic locking and optimistic locking?

Answer:
A pessimistic lock, as the name suggests, is pessimistic: it assumes that without locking, the resource will be contended by other threads, so it locks the data every time it acquires or modifies the data.
Typical cases of pessimistic locking: the synchronized keyword and the Lock interface.

An optimistic lock is more optimistic: it assumes no other thread will interfere while it operates on the resource, so it does not lock the object; instead, when updating the resource, it compares whether another thread has modified the data in the meantime. If not, the update proceeds normally; if the data has been modified by another thread, the update is abandoned and an error or a retry is chosen. This is a concurrency strategy based on conflict detection, and its implementation does not require suspending threads, so it is non-blocking synchronization. Optimistic locks are generally implemented with the CAS algorithm.
Typical cases of optimistic locking: the Atomic classes in the Java concurrency package, and the version mechanism in databases.

Comparison of pessimistic and optimistic locking: a pessimistic lock blocks threads that fail to acquire it — a fixed overhead — so its baseline cost is higher than an optimistic lock's. An optimistic lock starts out cheaper, but if the lock cannot be obtained for a long time, or concurrency is high and contention fierce, it retries endlessly and can end up consuming more resources than a pessimistic lock would.

Use scenarios: a pessimistic lock suits scenarios with many concurrent writes, complex critical-section code, and fierce contention, where it avoids a lot of useless repeated attempts.
An optimistic lock suits read-mostly scenarios with few modifications, and also scenarios with many reads and writes but mild contention.

Question: What does the CAS algorithm look like?

Answer: CAS (compare and swap) means to compare and exchange. It involves three operands: the memory value, the expected value, and the new value. The memory value is modified to the new value only if the memory value equals the expected value. CAS is atomic; its atomicity is guaranteed by CPU hardware instructions.
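A small sketch of CAS via the JDK's AtomicInteger, written as the classic CAS retry loop (illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    private static final AtomicInteger counter = new AtomicInteger(0);

    // Increment implemented as an explicit CAS retry loop
    static int increment() {
        while (true) {
            int expected = counter.get();
            int newValue = expected + 1;
            // succeeds only if the value is still `expected`; otherwise retry
            if (counter.compareAndSet(expected, newValue)) {
                return newValue;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> { for (int i = 0; i < 10000; i++) increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // always 20000: no update is lost
    }
}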

Question: What is the principle of the synchronized lock?

Answer: Omitted here; to be covered in a dedicated article.

Question: Compared with synchronized, how does the implementation principle of the reentrant lock ReentrantLock differ?

Answer: The implementation principle of a lock basically serves one goal: to make all threads see a certain mark.
synchronized achieves this by setting a mark in the object header and is the JVM's native lock implementation, while ReentrantLock, like all implementation classes of the Lock interface, uses a volatile-modified int variable and guarantees that every thread sees and atomically modifies that int; in essence it is built on the AQS framework.

Question: Since Java already has synchronized, why does it also have ReentrantLock? What are the similarities and differences between them?

Answer: ReentrantLock is not meant to replace synchronized, but to supplement it.
Similarities:

  • Both are used to protect resources for thread safety.
  • Both are reentrant locks.

Differences:

  • In terms of where they live, synchronized is at the JVM level, while ReentrantLock is at the Java API level.
  • In terms of usage, synchronized needs no explicit locking and unlocking, while ReentrantLock requires you to lock yourself and to release the lock in finally.
  • In terms of synchronization mechanism, synchronized synchronizes through the lock mark in the Java object header and the monitor object, while ReentrantLock synchronizes through CAS, AQS (AbstractQueuedSynchronizer), and LockSupport (used for blocking and unblocking).
  • Functionally, ReentrantLock adds advanced features such as trying to acquire the lock, interruptible waiting, and fair locks.

How to choose?

  • If you can avoid using either, prefer neither ReentrantLock nor synchronized. In many cases the mechanisms in the java.util.concurrent package handle all locking and unlocking for you; i.e., prefer the utility classes.
  • If the synchronized keyword suits your program, prefer it: it reduces the amount of code and the probability of errors. If you ever forget to unlock in finally, the code can go badly wrong; synchronized is safer.
  • If you need ReentrantLock's special features — attempting to acquire the lock, interruptibility, timeouts, and so on — then use ReentrantLock.
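The standard lock-in-try/finally idiom the points above refer to (a sketch; names illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class LockIdiomDemo {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();       // lock outside the try block
        try {
            count++;       // critical section
        } finally {
            lock.unlock(); // always release, even if the critical section throws
        }
    }
}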

Question: What are the commonly used methods for Lock?

Answer: An overview of the methods:

public interface Lock {

    // Lock; block and wait if the lock is not acquired
    void lock();

    // Lock; keep trying to acquire until acquired, unless the current thread
    // is interrupted while acquiring the lock
    void lockInterruptibly() throws InterruptedException;

    // Try to lock; return true if the lock is acquired, false otherwise,
    // without blocking, so there is no deadlock problem
    boolean tryLock();

    // Try to lock; if the lock is not acquired within the given timeout,
    // the thread actively gives up acquiring it, to avoid waiting forever
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;

    // Unlock
    void unlock();

    Condition newCondition();
}

Question: What is a fair lock? What is an unfair lock? Why do we need unfair locks?

Answer:
A fair lock allocates the lock in the order in which threads requested it.
An unfair lock does not strictly follow request order: under certain circumstances queue jumping is allowed (note: not completely at random, only at suitable moments). A suitable moment means: if another thread releases the lock exactly as the current thread requests it, the requester may jump the queue immediately, regardless of the waiting threads; but if the lock has not been released when the current thread applies, the current thread still joins the waiting queue.
Unfair locks exist to improve throughput. Consider this situation: thread A holds the lock and thread B requests it; since A holds the lock, B can only block and wait. When A releases the lock, B should be awakened to acquire it — but if C jumps in and applies for the lock at that moment, in unfair mode C will get it, because waking B is relatively expensive. Quite possibly, before B even wakes up, C has already acquired the lock, executed its task, and released the lock, after which B acquires the lock as before: a win-win.
In terms of code, a fair lock first checks whether any thread is waiting in the queue and, if so, does not try to acquire the lock; an unfair lock simply grabs at the lock directly, and only joins the queue if the grab fails.

Question: Why do I need a read-write lock? What are the rules?

Answer: Read-write locks exist to improve performance. We know that multiple read operations have no thread-safety issues, so multiple threads can be allowed to read at the same time, improving efficiency.
The design idea is two locks, a read lock and a write lock. Holding the read lock, a thread may only read the data, not modify it; holding the write lock, it may both read and modify. The read lock can be held by multiple threads, while the write lock can be held by only one thread.

/**
 * Description: Demonstrates the usage of the read-write lock.
 */
public class ReadWriteLockDemo {

    private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
    private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
    private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

    private static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock and is reading");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " releases the read lock");
            readLock.unlock();
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock and is writing");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " releases the write lock");
            writeLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read()).start();
        new Thread(() -> read()).start();
        new Thread(() -> write()).start();
        new Thread(() -> write()).start();
    }
}

Result:

Thread-0 got the read lock and is reading
Thread-1 got the read lock and is reading
Thread-0 releases the read lock
Thread-1 releases the read lock
Thread-2 got the write lock and is writing
Thread-2 releases the write lock
Thread-3 got the write lock and is writing
Thread-3 releases the write lock

Question: Following on from the above: since reading data has no thread-safety issue, why add a read lock at all? Wouldn't it work without locking?

Answer: Reading alone has no safety problem, but when shared variables are both read and written (one method writes a shared variable while another reads it), if the read is not locked, the shared variable may be written while it is being read, and the reader may not see the expected value.

Question: Can the read lock jump in the queue?

Answer:
ReentrantReadWriteLock can be set to fair mode or unfair mode:

// Fair mode
ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(true);

// Unfair mode (the default)
ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
  • In fair mode, readerShouldBlock() is checked before acquiring the read lock, and writerShouldBlock() is checked before acquiring the write lock, to decide whether to queue up or jump the queue:

final boolean writerShouldBlock() {
    return hasQueuedPredecessors();
}
final boolean readerShouldBlock() {
    return hasQueuedPredecessors();
}

Obviously, in fair mode, while threads are queued, neither the read lock nor the write lock may jump the queue.

  • In unfair mode, writerShouldBlock() and readerShouldBlock() are implemented as:

final boolean writerShouldBlock() {
    return false; // writers can always barge
}
final boolean readerShouldBlock() {
    return apparentlyFirstQueuedIsExclusive();
}

An unfair lock may always jump the queue when acquiring the write lock; when acquiring the read lock, a policy decides.
Suppose threads 1 and 2 hold the read lock and thread 3 wants the write lock, so it can only wait in the queue. Now thread 4 arrives wanting the read lock. What happens depends on the strategies below.

  1. First strategy: queue jumping allowed.

Because threads 1 and 2 hold the read lock, thread 4 can share it, so it jumps the queue and reads alongside them. This improves thread 4's efficiency, but there is a big problem: if many threads wanting the read lock are allowed to jump the queue, thread 3 — which asked for the write lock first — may be left waiting in a "starved" state.

  2. Second strategy: no queue jumping.

This strategy holds that since thread 3 queued up first, thread 4 must queue as well. Thread 3 will then get the lock, avoiding the "starvation" phenomenon.

ReentrantReadWriteLock chooses the second strategy: the read lock may not jump the queue in this situation.

Question: Talk about the upgrading and downgrading of read-write locks.

Answer: The downgrade strategy can only be downgraded from a write lock to a read lock, but cannot be upgraded from a read lock to a write lock.

  • Lock downgrading is supported. A cache-update code case:

public class CachedData {

    Object data;
    volatile boolean cacheValid;
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    void processCachedData() {
        // Take the read lock before checking the data
        rwl.readLock().lock();
        if (!cacheValid) {
            // The cache is invalid; the read lock must be released before acquiring the write lock
            rwl.readLock().unlock();
            // To update the cached data, first acquire the write lock
            rwl.writeLock().lock();
            try {
                // Check validity again: in the gap between releasing the read lock and acquiring
                // the write lock, another thread may already have modified the data
                if (!cacheValid) {
                    data = new Object();
                    cacheValid = true;
                }
                // Acquire the read lock without releasing the write lock: this is the downgrade
                rwl.readLock().lock();
            } finally {
                // Release the write lock while still holding the read lock
                rwl.writeLock().unlock();
            }
        }
        try {
            // Only the read lock is held here
            System.out.println(data);
        } finally {
            // Release the read lock
            rwl.readLock().unlock();
        }
    }
}

    The code above shows read-write lock downgrading.
    Why downgrade at all? Wouldn't it be fine to just hold the write lock to the end and read the data there? Why make it so complicated?
    It is for performance: the write lock is exclusive, so if it were held throughout a time-consuming read, other threads wanting to read the data could only queue up, even though the thread holding the write lock is by then only reading. Downgrading to the read lock at that point improves overall performance.

  • Lock upgrading is not supported. Code case:

final static ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

public static void main(String[] args) {
    upgrade();
}

public static void upgrade() {
    rwl.readLock().lock();
    System.out.println("Read lock acquired");
    rwl.writeLock().lock(); // blocks forever: upgrading is not supported
    System.out.println("Successfully upgraded");
}

    The code above never prints "Successfully upgraded". Why is upgrading unsupported? Upgrading means turning a read lock into a write lock, but the read lock is a shared lock that may be held by several threads at once. If one holder upgraded to the write lock, what about the other read-lock holders? Different threads would then hold the read lock and the write lock at the same time, contradicting the write lock's exclusivity.

Question: What is a spin lock? What are its advantages and disadvantages?

Answer: A spin lock keeps trying to acquire the lock until it succeeds; the implementation is a continuous loop of acquisition attempts until the lock is obtained.
Comparing the spin and non-spin flows: a spin lock does not yield its CPU time slice — it keeps trying until it succeeds — while a non-spin lock puts the thread to sleep and yields the CPU when the lock cannot be acquired, trying again when another thread releases the lock. So the biggest difference is that when the lock cannot be obtained, a non-spin lock blocks the thread until it is awakened, while a spin lock keeps trying.

Benefit: blocking and waking a thread is expensive. If the synchronized code is not complex, the cost of executing it may be lower than the cost of switching threads, so spinning for a short while avoids the overhead of context switching and improves efficiency.

Disadvantage: although context-switch overhead is avoided, spinning brings new overhead — it burns CPU time slices doing useless work. The cost of spinning starts low, but the longer the lock remains unacquired, the larger it grows.

How to choose: spinning suits cases where the degree of concurrency is not especially high and the synchronized code executes quickly; spinning then avoids thread switches and improves efficiency.
If the synchronized code takes long to execute and threads hold the lock for a long time, spinning only wastes CPU resources.

Question: What optimizations has the JVM made to locks?

Answer: Since JDK 1.6, the HotSpot virtual machine has made many optimizations to the performance of the built-in synchronized lock, including adaptive spin, lock elimination, lock coarsening, biased locks, and lightweight locks.

  • Adaptive spin. The drawback of a plain spin lock is that when the lock is unavailable for a long time, it keeps trying and wastes CPU resources. Adaptive spinning solves this: the spin time is not fixed but determined by factors such as the success and failure rates of recent spins and the current state of the lock owner — the spin lock becomes smarter.

  • Lock elimination. If the compiler determines that certain objects cannot be accessed by other threads, locking them must be unnecessary, so such locks are removed automatically.

  • Lock coarsening. For example, the following code:

public void lockCoarsening() {
    synchronized (this) {
        // do something
    }
    synchronized (this) {
        // do something
    }
    synchronized (this) {
        // do something
    }
}

    The repeated release and re-acquisition of the lock above is meaningless, so the synchronized region can be expanded:

public void lockCoarsening() {
    synchronized (this) {
        // do something
        // do something
        // do something
    }
}

    However, lock coarsening is not suitable for loops: coarsening the synchronized region across loop iterations would make the thread hold the lock for a long time, and other threads could not acquire it.

  • Biased locks / lightweight locks / heavyweight locks. These three describe the state of a synchronized lock, which is recorded in the mark word of the object header.

    Biased lock: if a lock is never contended from beginning to end, there is no need to really lock it; marking it is enough. After an object is initialized and before any thread has acquired its lock, it is biasable. When the first thread accesses it and tries to take the lock, that thread is recorded; if later acquisitions come from the same thread (the owner of the bias), the lock is obtained directly.

    Lightweight lock: when a lock that is currently biased is accessed by a second thread, the biased lock is upgraded to a lightweight lock, and threads acquire it by spinning instead of blocking.

    Heavyweight lock: when multiple threads actually compete for the lock and the competition lasts long, biased and lightweight locks no longer suffice, and the lock inflates into a heavyweight lock. Threads that fail to acquire it block.

Question: What is a deadlock? What are the necessary conditions for deadlock? What should I do if there is a deadlock?

Answer: The definition of deadlock: A group of threads competing for resources wait for each other, leading to "permanent" blocking .

There are four necessary conditions for deadlock:

  • Mutually exclusive, shared resources X and Y can only be occupied by one thread
  • Occupy and wait, thread T1 has already obtained the shared resource X, while waiting for the shared resource Y, the shared resource X is not released;
  • Cannot preempt, other threads cannot forcefully preempt the resources occupied by thread T1;
  • Circular waiting. Thread T1 waits for the resource occupied by thread T2, and thread T2 waits for the resource occupied by thread T1, which is called circular waiting.

If a deadlock happens in production, save the "scene of the crime" (JVM dumps, logs, and so on), then restart the service immediately to recover. The best approach, though, is to avoid deadlock in the first place: as long as one of the necessary conditions is broken, deadlock cannot occur.

  • For "occupy and wait": apply for all resources at once, so there is no waiting (introduce a resource-manager role; to obtain resources atomically, go through the manager first).
  • For "no preemption": when a thread that holds some resources fails to acquire further resources, it can actively release what it already holds, which breaks this condition (in Java, try to acquire a Lock and give up if the attempt fails).
  • For "circular waiting": acquire resources in a fixed order. Give resources a linear ordering and always request the lower-numbered resource before the higher-numbered one; once linearized, no cycle can form (see the sketch below).
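
A sketch of the third approach (ordered resource acquisition), assuming each account carries a unique numeric id used only to impose a global lock order:

class Account {
    private final long id;      // unique id, used only to order lock acquisition
    private int balance;

    Account(long id, int balance) {
        this.id = id;
        this.balance = balance;
    }

    void transfer(Account target, int amt) {
        // always lock the account with the smaller id first,
        // which breaks the "circular waiting" condition
        Account first = this.id < target.id ? this : target;
        Account second = this.id < target.id ? target : this;
        synchronized (first) {
            synchronized (second) {
                if (this.balance >= amt) {
                    this.balance -= amt;
                    target.balance += amt;
                }
            }
        }
    }
}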

Question: What are the main problems that synchronized solves?

Answer: synchronized addresses the two core problems of concurrent programming. One is mutual exclusion: only one thread at a time is allowed to access a shared resource. The other is synchronization: how threads communicate and cooperate with each other.
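
A minimal sketch (hypothetical Mailbox class): synchronized provides the mutual exclusion, while wait()/notifyAll() provide the cooperation:

class Mailbox {
    private String message;

    // synchronization: wait until a message arrives
    synchronized String take() throws InterruptedException {
        while (message == null) {
            wait();             // releases the lock and waits for a notify
        }
        String m = message;
        message = null;
        notifyAll();            // wake producers waiting for an empty box, if any
        return m;
    }

    // mutual exclusion: only one thread touches `message` at a time
    synchronized void put(String m) throws InterruptedException {
        while (message != null) {
            wait();             // box full: wait until it is emptied
        }
        message = m;
        notifyAll();            // wake waiting consumers
    }
}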

Question: Is synchronized used correctly in the following code?

class Account {
    // account balance
    private Integer balance;
    // account password
    private String password;

    // withdraw money
    void withdraw(Integer amt) {
        synchronized (balance) {
            if (this.balance > amt) {
                this.balance -= amt;
            }
        }
    }

    // change the password
    void updatePassword(String pw) {
        synchronized (password) {
            this.password = pw;
        }
    }
}

Answer: A lock object should be private, immutable, and not reused, so the lock objects in the code above are wrong. Integer caches values in the range -128 to 127 and String has a string constant pool, so Integer and String objects may be reused inside the JVM (Boolean is another type that may be reused). What does reuse mean? It means your lock may also be used by other code: if that code synchronizes on the same object and never releases it, your program will never acquire the lock. On top of that, this.balance -= amt reassigns balance, so the lock object itself keeps changing. These are hidden risks.
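
One possible fix, as a sketch: lock on dedicated private final objects rather than on the mutable, reusable fields themselves:

class Account {
    // dedicated lock objects: private, final, and never reused elsewhere
    private final Object balanceLock = new Object();
    private final Object passwordLock = new Object();

    private Integer balance;
    private String password;

    void withdraw(Integer amt) {
        synchronized (balanceLock) {
            if (this.balance > amt) {
                this.balance -= amt;
            }
        }
    }

    void updatePassword(String pw) {
        synchronized (passwordLock) {
            this.password = pw;
        }
    }
}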

Question: Lock is already provided in the Java SDK. Why also provide Semaphore?

Answer: Semaphore translates as "semaphore". The semaphore model can be summarized as one counter, one waiting queue, and three methods (init, down, up), all atomic:

  • init(): Set the initial value of the counter.
  • down(): The value of the counter is reduced by 1; if the value of the counter is less than 0 at this time, the current thread will be blocked, otherwise the current thread can continue to execute.
  • up(): The value of the counter is increased by 1; if the value of the counter is less than or equal to 0 at this time, a thread in the waiting queue is awakened and removed from the waiting queue.

Both Lock and Semaphore can act as a mutual-exclusion lock, but Semaphore can also allow multiple threads into a critical section at the same time. The common need is pooled resources (connection pools, object pools, thread pools), where several threads may use the pooled resources concurrently; this is not easy to achieve with Lock alone.
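
A minimal sketch of admitting several threads into a critical section at once (a fuller object-pool example appears at the end of this article):

import java.util.concurrent.Semaphore;

public class SemaphoreSketch {
    // at most 3 threads may be inside the critical section at the same time
    private static final Semaphore permits = new Semaphore(3);

    static void useResource() throws InterruptedException {
        permits.acquire();      // down(): blocks while no permit is available
        try {
            System.out.println(Thread.currentThread().getName() + " is using the resource");
            Thread.sleep(100);
        } finally {
            permits.release();  // up(): returns the permit, waking a waiter if any
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    useResource();
                } catch (InterruptedException ignored) {
                }
            }).start();
        }
    }
}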

Java concurrent containers

Question: Why isn't HashMap thread safe?

Answer: HashMap actually has many unsafe aspects:

  • HashMap's put() method contains the line modCount++, which at a glance is not thread safe.

  • Values read during resizing can be wrong. When HashMap resizes, it creates a new empty array and moves the old entries into it; a get at that moment may return null even though the key exists. The following code will eventually print Exception in thread "main" java.lang.RuntimeException: HashMap is not thread safe.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.stream.IntStream;

    public class HashMapNotSafe {
        public static void main(String[] args) {
            final Map<Integer, String> map = new HashMap<>();
            final Integer targetKey = 65535;
            final String targetValue = "v";
            map.put(targetKey, targetValue);

            // keep putting other keys to trigger resizing while the main thread reads
            new Thread(() ->
                    IntStream.range(0, targetKey).forEach(key -> map.put(key, "someValue"))
            ).start();

            while (true) {
                if (null == map.get(targetKey)) {
                    throw new RuntimeException("HashMap is not thread safe.");
                }
            }
        }
    }
  • Simultaneous puts can collide and lose data. If multiple threads put elements at the same time and two puts happen to have colliding keys, both threads may see the target slot as empty and write their values into the same position, so one of the two entries is lost.

  • Visibility is not guaranteed. If one thread puts a new value for a key, another thread that later reads that key is not guaranteed to see the new value.

  • An infinite loop can drive the CPU to 100%. In older implementations (before JDK 8), concurrent resizing could turn a bucket's linked list into a ring, after which get() spins forever.

Question: Why does ConcurrentHashMap convert a bucket into a red-black tree only when it holds more than 8 nodes?

Answer: ConcurrentHashMap stores data as an array of buckets, each holding a linked list; when a list grows longer than 8, it is converted into a red-black tree.
First, why convert to a red-black tree at all? A red-black tree is a balanced binary search tree, so lookups are fast: a linked list is O(n) while a red-black tree is O(log n), so the conversion improves performance.
Then why not use red-black trees from the start? Because a single TreeNode occupies roughly twice the space of a regular Node, trees are used only when a bin contains enough Nodes to be worth it; only past the threshold does the list become TreeNodes, saving space the rest of the time. The Java source explains this:

Because TreeNodes are about twice the size of regular nodes, use them only when bins contain enough nodes to warrant use (see TREEIFY_THRESHOLD). And when they become too small (due to removal or resizing) they are converted back to plain bins.

A list is treeified when its length reaches 8 and converted back when it shrinks to 6; this reflects the trade-off between time and space.
The threshold of 8 itself is also explained in the Javadoc:

In usages with well-distributed user hashCodes, tree bins are rarely used. Ideally, under random hashCodes, the frequency of nodes in bins follows a Poisson distribution (http://en.wikipedia.org/wiki/Poisson_distribution) with a parameter of about 0.5 on average for the default resizing threshold of 0.75, although with a large variance because of resizing granularity. Ignoring variance, the expected occurrences of list size k are (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values are:

0: 0.60653066
1: 0.30326533
2: 0.07581633
3: 0.01263606
4: 0.00157952
5: 0.00015795
6: 0.00001316
7: 0.00000094
8: 0.00000006
more: less than 1 in ten million

That is, when hashCode is well distributed and the hash results are discrete and balanced, tree bins are rarely used. Ideally, bucket lengths follow a Poisson distribution and the probability of a list reaching length 8 is less than one in ten million, so in general lists do not turn into red-black trees. If the lists in your map do become trees, you should consider whether your hashCode method is appropriate.

Question: What are the differences between ConcurrentHashMap and Hashtable?

Answer: The main differences are:

  • Thread safety is achieved differently

    Hashtable also uses an array + linked-list structure and makes its key methods thread safe with the synchronized keyword. ConcurrentHashMap has different implementations in Java 7 and Java 8: Java 7 uses Segment-based segmented locking (Segment extends ReentrantLock) to ensure safety, while Java 8 abandons the Segment design and uses Node + CAS + synchronized to ensure thread safety.

  • Performance differs

    Because Hashtable relies on synchronized, performance drops sharply as the number of threads grows: only one thread can operate on the object at a time while the others block, with context-switching overhead on top. ConcurrentHashMap in Java 7 uses segment locks, so its maximum concurrency is the number of segments (16 by default), far more efficient than Hashtable. Java 8 uses Node + CAS + synchronized: taking put() as an example, when the hash locates a slot that turns out to be empty, the value is set with CAS, so the array level is lock-free; synchronized is used only when the computed slot already contains data (a hash collision) and the value must be attached to the linked list or red-black tree. Its concurrency is then on the order of the array size.

  • Modification during iteration behaves differently

    Hashtable throws ConcurrentModificationException when it is modified during iteration. It mainly checks whether modCount equals expectedModCount: expectedModCount is captured when the iterator is created and cannot change afterwards, while modifying the data updates modCount, so the two values diverge.

    public T next() {
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        return nextElement();
    }

    ConcurrentHashMap does not throw this exception when its data is modified during iteration.
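
    A quick sketch of the difference (single-threaded, just to trip the fail-fast check):

    import java.util.Hashtable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class IterationDemo {
        public static void main(String[] args) {
            Map<String, String> chm = new ConcurrentHashMap<>();
            chm.put("a", "1");
            chm.put("b", "2");
            for (String key : chm.keySet()) {
                chm.put("c", "3");      // fine: the iterator is weakly consistent
            }

            Map<String, String> table = new Hashtable<>();
            table.put("a", "1");
            table.put("b", "2");
            for (String key : table.keySet()) {
                table.put("c", "3");    // throws ConcurrentModificationException
            }
        }
    }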

Question: What is CopyOnWriteArrayList?

Answer:

  • Definition

    CopyOnWriteArrayList is a concurrent container provided in the Java concurrency package: a thread-safe ArrayList whose read operations are lock-free and whose write operations work on a fresh copy of the underlying array, a concurrency strategy that separates reads from writes. Containers of this kind are called "copy-on-write" containers. CopyOnWriteArrayList allows concurrent, unlocked reads, and most importantly writing does not affect reading: a write copies the original array and operates on the copy, leaving the original untouched; only writes are synchronized with one another. It is quite similar to the multi-version concurrency control (MVCC) of databases.

  • The add() source

    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock();
        }
    }
    • Lock with ReentrantLock
    • Copy the original array into a new array one element longer and append the new element
    • Point the array reference at the new array
    • Unlock
  • Advantages

    • High read performance
    • Modifying data during iteration does not throw a concurrent modification exception
  • Disadvantages

    • High memory usage: every write copies a new array, and with large data volumes this can cause frequent GC
    • When elements are numerous or complex, copying is expensive
    • Reads are not real-time: during a write, readers still see the old data, and the new data becomes readable only after the write completes
  • Applicable scenarios:
    read-mostly, write-rarely workloads where strict real-time reads are not required

Concurrency tools

Question: What is the principle of AQS?

Answer: I refer here to a blog post that explains it clearly: "A line-by-line source-code analysis of AbstractQueuedSynchronizer".

Atomic class

Question: What is atomicity? Why is there an atomicity problem? How to deal with it?

Answer: Atomicity means that a group of operations either all succeed or all fail.
An atomic operation is therefore indivisible; its essence is a consistency requirement across multiple resources, where the intermediate state of the operation is invisible to the outside.

If a thread could not be interrupted while executing one or more instructions, that is, if no thread switch could happen in the middle of a group of operations, would atomicity problems disappear? Yes.
The root cause of atomicity problems is therefore thread switching. Thread switching relies on CPU interrupts, so disabling interrupts did solve atomicity on early single-core CPUs; on multi-core CPUs, however, even with interrupts disabled, threads on different cores can still operate on the same resource simultaneously.
So the real requirement is that only one thread executes the operations at a time, which is called mutual exclusion, and it can be achieved with locks.

Question: What is an atomic class? What is it for?

Answer: Before explaining atomic classes, recall atomicity: a group of operations either all succeed or all fail. A class whose operations have this property is called an atomic class; it can perform atomic operations such as get-and-set, increment, and decrement.
Atomic classes play a role similar to locks: they guarantee thread safety under concurrency.
Compared with locks, their advantages are:

  • Finer-grained control. An atomic variable confines contention to the level of a single variable, whereas a lock usually protects a wider region.
  • Higher efficiency. Atomic classes use CAS underneath and do not block threads; except under heavy contention, CAS is more efficient than locking.

Question: What are the atomic classes?

Answer:

  • Basic type atomic classes: AtomicInteger, AtomicLong, AtomicBoolean

    //Common AtomicInteger methods
    public final int get()                    //get the current value
    public final int getAndSet(int newValue)  //get the current value and set a new value
    public final int getAndIncrement()        //get the current value and increment
    public final int getAndDecrement()        //get the current value and decrement
    public final int getAndAdd(int delta)     //get the current value and add delta
  • Array type atomic classes: AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray

  • Reference type atomic classes: AtomicReference, AtomicStampedReference, AtomicMarkableReference. They are similar to the basic type atomic classes, but guarantee atomicity for object references.

  • Field-updater ("upgrade") atomic classes: AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, AtomicReferenceFieldUpdater

    They turn an existing ordinary (volatile) field into an atomic one. Taking AtomicIntegerFieldUpdater as an example:

    public class AtomicIntegerFieldUpdaterDemo implements Runnable {

        static class Score {
            volatile int score;
        }

        static Score computer;
        static Score math;

        private final AtomicIntegerFieldUpdater<Score> atomicIntegerFieldUpdater =
                AtomicIntegerFieldUpdater.newUpdater(Score.class, "score");

        @Override
        public void run() {
            for (int i = 0; i < 1000; i++) {
                // ordinary ++ on a volatile field: not atomic
                computer.score++;
                // atomic ++ through the field updater
                atomicIntegerFieldUpdater.incrementAndGet(math);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            computer = new Score();
            math = new Score();
            AtomicIntegerFieldUpdaterDemo r = new AtomicIntegerFieldUpdaterDemo();
            Thread t1 = new Thread(r);
            Thread t2 = new Thread(r);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Result of the ordinary variable: " + computer.score);
            System.out.println("Result with the field updater: " + math.score);
        }
    }
    // Typical output:
    // Result of the ordinary variable: 1998
    // Result with the field updater: 2000

    Since these fields need atomicity, why not declare them as AtomicInteger from the start?

    • For historical reasons the field may already be declared as an ordinary variable and be widely used, so the cost of changing it is high
    • If atomicity is needed in only one or two places and not elsewhere, there is no need to make the field an atomic class everywhere; after all, atomic classes consume more resources than ordinary variables
  • Adders: LongAdder, DoubleAdder

  • Accumulators: LongAccumulator, DoubleAccumulator

Question: Compare AtomicInteger with synchronized.

Answer:

  • What they share: both guarantee thread safety.

  • Differences:

    • Different underlying principles: synchronized uses a monitor (acquire the monitor, execute, then release the monitor), while AtomicInteger is based on CAS.
    • Different scope: synchronized can modify methods and code blocks, so its synchronization scope is wider; an atomic class is just a single object.
    • Different performance: this is essentially the difference between pessimistic and optimistic locking. A pessimistic lock has a fixed cost; an optimistic lock is cheap at first, but if the update keeps failing, the continued spinning becomes expensive. The choice therefore depends on the scenario: synchronized suits fierce contention and complex critical sections, while atomic classes are better when contention is light; and synchronized after the JVM's lock optimizations also performs well.

Question: Atomic classes are built on CAS. What exactly is CAS?

Answer: CAS is short for Compare-And-Swap, "compare and exchange"; it is both an idea and an algorithm. CAS involves three operands: the memory value V, the expected value A, and the new value B. The idea is to update the memory value to B only if the expected value A equals the current memory value V; otherwise this attempt is abandoned and the next one is made. The compare-and-swap itself is performed by a single CPU instruction, so it is atomic.
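
As a hedged sketch of how an atomic increment can be built on a CAS retry loop (conceptually what AtomicInteger's incrementAndGet() does):

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {

    private static final AtomicInteger counter = new AtomicInteger(0);

    // increment written as an explicit CAS retry loop
    static int increment() {
        int expected;
        int next;
        do {
            expected = counter.get();   // read the current value (V)
            next = expected + 1;        // compute the new value (B)
            // the swap succeeds only if the value still equals `expected` (A == V)
        } while (!counter.compareAndSet(expected, next));
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always 20000
    }
}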

Question: What are the disadvantages of CAS?

Answer: There are the following disadvantages:

  • The biggest drawback of CAS is the ABA problem: the value changes from A to B and then back to A. Although it equals the original value, it has in fact been modified, and since CAS only checks whether the current value matches the expected value, it cannot detect that the value changed in between. The fix is to add a version number that is incremented on every modification, and to compare the version number as well as the value (see the sketch below).
  • Spinning can last too long. CAS usually comes with a loop, even an endless one; under high concurrency, if the update keeps failing, the loop keeps retrying and consumes CPU.
  • The scope cannot be controlled flexibly. CAS can only modify a single variable, whether primitive or reference; performing a CAS over a group of shared variables is difficult.
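
A sketch of the version-number fix using AtomicStampedReference, where the stamp plays the role of the version:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value "A" with initial stamp (version) 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int stamp = ref.getStamp();     // remember version 0

        // elsewhere, the value goes A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

        // a plain value comparison would succeed here, but the stamp exposes the ABA change
        boolean swapped = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(swapped);    // false: the stamp is now 2, not 0
    }
}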

ThreadLocal

Question: In what scenarios is ThreadLocal typically used?

Answer: Two typical scenarios:

  • ThreadLocal stores a per-thread exclusive object: a copy is created for each thread, and each thread can modify only its own copy without affecting the others, turning a situation that would be thread-unsafe under concurrency into a thread-safe one.
  • ThreadLocal provides thread-local variables that live for the lifetime of the thread, reducing the complexity of passing common variables between functions or components within the same thread.

From the thread's point of view, the target variable behaves like a local variable of the thread, which is what the "Local" in the class name expresses.
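
A sketch of the first scenario: SimpleDateFormat is not thread safe, so give every thread its own copy (hypothetical ThreadLocalDemo class):

import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadLocalDemo {

    // one formatter per thread; threads never share an instance
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    static String format(Date date) {
        // each thread gets and uses only its own copy
        return FORMATTER.get().format(date);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            new Thread(() -> System.out.println(
                    Thread.currentThread().getName() + ": " + format(new Date()))).start();
        }
    }
}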

Question: Is ThreadLocal meant to solve the problem of multiple threads accessing a shared resource?

Answer: Not really. ThreadLocal does avoid thread-safety problems under multithreading, but the key point is that the resource is not shared at all: it is exclusive to each thread. ThreadLocal makes a copy per thread, so there is no contention for the resource in the first place. If what you put into a ThreadLocal is a static shared object, ThreadLocal cannot guarantee its thread safety.

Question: What is the relationship between ThreadLocal and synchronized?

Answer: Both can solve thread-safety problems, but their designs differ.

  • ThreadLocal avoids contention by giving each thread its own copy of the resource.
  • synchronized achieves thread safety by locking: the critical section is restricted so that only one thread can access the resource at a time.

ThreadLocal also has another usage scenario: conveniently retrieving the thread's own saved information from anywhere in the current thread's task, that is, using ThreadLocal to avoid passing parameters around.

Question: What is the relationship between Thread, ThreadLocal, and ThreadLocalMap?

Answer: A Thread holds one ThreadLocalMap; one ThreadLocalMap can store multiple ThreadLocals; and each ThreadLocal corresponds to one value.

(Diagrams in the original article illustrate the relationship between Thread and ThreadLocalMap and the ThreadLocal class hierarchy.)

Question: How does ThreadLocal maintain a separate copy of the variable for each thread?

Answer: Each Thread object holds its own ThreadLocalMap. The keys of that map are ThreadLocal instances and the values are the thread's copies of the variables. When ThreadLocal.get() is called, it finds the map of the current thread and looks up the value stored under this ThreadLocal, so every thread only ever reads and writes its own copy.
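
Simplified and lightly paraphrased from the JDK source, get() looks roughly like this:

public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);          // the map is a field of the Thread
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);  // keyed by this ThreadLocal
        if (e != null) {
            return (T) e.value;              // the current thread's copy
        }
    }
    return setInitialValue();                // first access: create this thread's copy
}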

Question: How can ThreadLocal lead to memory leaks (OOM)?

Answer: The entries of ThreadLocalMap hold the ThreadLocal key through a weak reference, but the value through a strong reference. If the thread lives for a long time (for example, a thread-pool worker) and remove() is never called, entries whose keys have been garbage collected still keep their values strongly reachable, so the values accumulate and can eventually exhaust memory. The convention is to call remove(), typically in a finally block, once the value is no longer needed.

Java memory model

Question: What is the CPU multi-level cache model?

Question: What are the bus locking mechanism and the MESI cache coherence protocol?

Question: Why are there problems of visibility, atomicity, and ordering in concurrent programming?

Answer:

  • Visibility problems come from caching: there are multiple levels of cache between the CPU and main memory, and cached data is not instantly synchronized between CPUs. JMM abstracts this as each thread having its own working memory, so data in one thread's working memory may not be visible to other threads.
  • Atomicity problems come from switching between threads.
  • Ordering problems (reordering) come from compiler and processor optimizations.

Question: Can volatile guarantee visibility, ordering, and atomicity?

Answer: volatile guarantees visibility and ordering; atomicity it guarantees only in a limited sense. The first two points are beyond doubt, but the atomicity claim needs care: volatile guarantees that each single read or write of the variable is atomic, but it cannot make a combination of atomic operations atomic. For example, i++ on a volatile variable is typically not atomic.
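
A sketch demonstrating the last point: the field is volatile, yet ++ still loses updates because it is a read-modify-write sequence:

public class VolatileNotAtomic {

    // visibility and ordering are guaranteed, but ++ is still three steps
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;    // read, add, write: increments can interleave and get lost
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count); // usually less than 20000
    }
}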

Question: Compare the JVM memory structure with the Java memory model.

Answer:

  • The JVM memory structure concerns the runtime data areas of the Java virtual machine. The virtual machine divides the memory it manages into different regions; according to the Java Virtual Machine Specification, these are the heap, the virtual machine stacks, the method area, the native method stacks, the program counter, and the runtime constant pool.
  • The Java memory model concerns Java concurrent programming. It is a set of specifications that mainly addresses the unpredictable results caused by CPU multi-level caches, processor optimization, and instruction reordering.

Question: Why do we need the JMM (Java memory model)?

Answer: Early on there was no concept of a memory model, so the final result of a program depended on the processor, and each processor's rules differed: the same code could produce different results on different processors, and different JVM implementations introduced further differences. Java therefore needed a standard that developers, compiler writers, and JVM engineers could all agree on. That standard is the JMM.

Question: What is the JMM?

Answer: The JMM is a set of multithreading-related specifications that JVM implementations are required to obey. It concerns processors, caches, concurrency, and compilers, and is exposed to developers through concrete mechanisms: the three keywords volatile, synchronized, and final, and the happens-before rules. It solves the unpredictability caused by CPU multi-level caches, processor optimization, and instruction reordering. Communication between Java threads is governed by the JMM, which determines when one thread's write to a shared variable becomes visible to another thread.

Question: What is instruction reordering?

Answer: The statements of a Java program do not necessarily execute in the order we wrote them: the compiler, the JVM, and the CPU may all adjust the instruction order for optimization purposes. This is reordering.

Question: What are the benefits of reordering? Why reorder at all?

Answer: Reordering improves processing speed. For example, Load reads data from main memory and Store writes data to main memory. If a variable a is operated on twice in a row, the unoptimized code may perform a Load and a Store for a twice, while after reordering a needs only one Load and one Store, improving overall speed. Reordering is not arbitrary, though: it must never change the semantics within a single thread.
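
A sketch of the classic reordering litmus test. Without synchronization, the two statements in each thread may be reordered, so (r1, r2) == (0, 0) can occasionally be observed; how often (if at all) depends on the JIT and the hardware:

public class ReorderingDemo {

    static int x, y, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        for (long run = 0; ; run++) {
            x = 0; y = 0; r1 = 0; r2 = 0;
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            if (r1 == 0 && r2 == 0) {
                // only possible if the reads and writes were reordered
                System.out.println("reordering observed on run " + run);
                break;
            }
        }
    }
}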

Question: Which operations in Java are atomic?

Answer: The following operations are atomic:

  • Read/write operations on basic types other than long and double (int, byte, boolean, short, char, float) are naturally atomic.
  • Read/write operations on all references (reference).
  • Read/write operations on variables declared volatile (including long and double).
  • Some methods of the classes in the java.util.concurrent.atomic package, such as AtomicInteger's incrementAndGet().

But pay attention: atomic operation + atomic operation != atomic operation. For example, for a volatile int variable i, reading i and assigning to i are each atomic, but "read, increment, then assign" (i++) is not an atomic operation.

Question: Why are reads and writes of long and double not atomic?

Answer: A long or double value occupies 64 bits of memory, and a 64-bit write may be split into two 32-bit operations, one for the low 32 bits and one for the high 32 bits; under multithreading this can expose wrong values. Declaring the variable volatile makes its reads and writes atomic. In practice, however, volatile is rarely used on long and double for this purpose, because mainstream virtual machine implementations already treat their reads and writes as atomic operations.

Question: Why are there memory visibility problems?

Answer: This question involves modern CPU architecture (illustrated by a diagram in the original article).

The CPU cache coherence protocol keeps the caches of multiple CPUs from diverging. Coherence costs performance, however, so CPU designers optimized on top of it: for example, a Store Buffer and a Load Buffer were added between the execution units and L1, and the Store Buffer is not synchronized with L1. Mapping this onto Java, the Java memory model abstracts the CPU's multi-level caches, and JMM stipulates the following:

  1. All variables are stored in main memory, and each thread has its own independent working memory; the variables in working memory are copies of the variables in main memory;

  2. A thread cannot directly read or write variables in main memory; it manipulates the variables in its own working memory and then synchronizes them back to main memory, so that other threads can see the modification;

  3. Main memory is shared by all threads, but threads do not share each other's working memory; if threads need to communicate, they must do so via main memory.

Question: What is the happens-before relationship?

Answer: happens-before describes visibility: if operation one happens-before operation two, then the result of operation one is guaranteed to be visible to operation two.
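
One illustration, using the volatile variable rule (a sketch with hypothetical field names):

public class HappensBeforeDemo {

    private int data = 0;
    private volatile boolean ready = false;

    void writer() {
        data = 42;        // (1) ordinary write
        ready = true;     // (2) volatile write: happens-before any later volatile read of `ready`
    }

    void reader() {
        if (ready) {      // (3) volatile read
            // program order gives (1) happens-before (2), the volatile rule gives
            // (2) happens-before (3), so by transitivity data is guaranteed to be 42 here
            System.out.println(data);
        }
    }
}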

Programming

Question: What is the difference between asynchronous and synchronous? How does Dubbo convert asynchronous calls into synchronous ones? Can you design such a mechanism?

Answer: In short, the difference between asynchronous and synchronous is whether the caller needs to wait for the result: if it must wait, the call is synchronous; if it does not wait, the call is asynchronous.

Dubbo is a well-known RPC framework. At the TCP protocol level, after an RPC request is sent, the thread does not wait for the RPC response. Yet most of the RPC frameworks we use feel synchronous, because the framework converts asynchronous into synchronous, roughly like this:

// create the lock and the condition variable
private final Lock lock = new ReentrantLock();
private final Condition done = lock.newCondition();

// the calling thread invokes get() to wait for the RPC result
Object get(long timeout) throws InterruptedException, TimeoutException {
    long start = System.currentTimeMillis();
    lock.lock();
    try {
        while (!isDone()) {
            // no result yet: wait, but never longer than the timeout
            done.await(timeout, TimeUnit.MILLISECONDS);
            long cur = System.currentTimeMillis();
            if (isDone() || cur - start > timeout) {
                break;
            }
        }
    } finally {
        lock.unlock();
    }
    if (!isDone()) {
        throw new TimeoutException();
    }
    return returnFromResponse();
}

// has the RPC response arrived?
boolean isDone() {
    return response != null;
}

// called by the I/O thread when the RPC response arrives
private void doReceived(Response res) {
    lock.lock();
    try {
        response = res;
        // signal the waiting caller
        done.signal();
    } finally {
        lock.unlock();
    }
}

Question: Can you write a reentrant spin lock by hand?

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantSpinLock {

    private final AtomicReference<Thread> owner = new AtomicReference<>();
    // reentrancy count
    private int count = 0;

    // lock
    public void lock() {
        Thread current = Thread.currentThread();
        if (owner.get() == current) {
            // reentrant acquisition: just bump the count
            count++;
            return;
        }
        while (!owner.compareAndSet(null, current)) {
            System.out.println("--I'm spinning--");
        }
    }

    // unlock
    public void unLock() {
        Thread current = Thread.currentThread();
        // only the thread holding the lock may unlock
        if (owner.get() == current) {
            if (count > 0) {
                count--;
            } else {
                // no CAS needed here: only the owner can reach this point
                owner.set(null);
            }
        }
    }

    public static void main(String[] args) {
        ReentrantSpinLock spinLock = new ReentrantSpinLock();
        Runnable runnable = () -> {
            System.out.println(Thread.currentThread().getName() + " starts trying to acquire the spin lock");
            spinLock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + " acquired the spin lock");
                Thread.sleep(4000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                spinLock.unLock();
                System.out.println(Thread.currentThread().getName() + " released the spin lock");
            }
        };
        Thread thread1 = new Thread(runnable);
        Thread thread2 = new Thread(runnable);
        thread1.start();
        thread2.start();
    }
}

Question: How can Semaphore be used to quickly implement a rate limiter (a bounded object pool)?

Answer:

import java.util.List;
import java.util.Vector;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

public class SemaphoreDemo {

    static class Link {
    }

    // a bounded pool: at most `size` threads may hold a pooled object at once
    static class ObjPool<T, R> {
        final List<T> pool;
        final Semaphore semaphore;

        ObjPool(int size, T t) {
            pool = new Vector<>(size);
            // for demo purposes the same instance is pooled `size` times
            for (int i = 0; i < size; i++) {
                pool.add(t);
            }
            semaphore = new Semaphore(size);
        }

        public R exec(Function<T, R> func) throws Exception {
            T t = null;
            semaphore.acquire();    // blocks while all pooled objects are in use
            try {
                System.out.println(Thread.currentThread().getName() + " --------- competing for an object ---------");
                t = pool.remove(0);
                System.out.println(Thread.currentThread().getName() + " got an object, executing");
                return func.apply(t);
            } finally {
                pool.add(t);
                semaphore.release();
            }
        }
    }

    public static void main(String[] args) {
        ObjPool<Link, String> objPool = new ObjPool<>(5, new Link());
        for (int i = 0; i < 30; i++) {
            new Thread(() -> {
                try {
                    objPool.exec(Link::toString);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}