The Past and Present of AQS & ReentrantLock in One Article


1. Introduction

Hello everyone, long time no see. Work has kept me too busy to blog lately, but now that I finally have some free time I want to cover another important part of Java that I haven't shared yet: locks. This article starts from AbstractQueuedSynchronizer (hereinafter AQS), the basic data structure of the JUC package, and analyzes the lock mechanism in the JDK. AQS is the cornerstone of many synchronization classes in the JUC package (Lock, Semaphore, ReentrantLock, etc.), so learning AQS matters a great deal for understanding JUC. While introducing AQS, I will also walk you through ReentrantLock. I hope it helps.

2. AQS

AQS is a simple framework that provides atomic management of synchronization state, blocking and waking of threads, and a queue model. This article works its way from the application layer down to the principle layer: starting from the basic characteristics of ReentrantLock and its relationship with AQS, it takes a deep look at the exclusive-lock side of AQS.

The core of this article is the analysis of how AQS works; ReentrantLock only gets a brief introduction here, and a dedicated column on it will follow. Subscribe to my Java column so you don't miss it~

2.1. AQS overview

2.1.1. AQS class diagram

Image source: tech.meituan.com/2019/12/05/

2.1.2. Core variables

//The head node of the doubly linked queue; its thread is the one currently holding the state variable
private transient volatile Node head;

//The tail node; when lock contention occurs, threads that fail to grab the lock are queued at the tail
private transient volatile Node tail;

/**
 * The state variable. AQS controls access to the critical resource by maintaining the value of state.
 * It is initialized to 0, meaning no thread holds it. With an exclusive lock, the thread that grabs the
 * lock increases state by 1. Reentrant locking is built on this field: if the current thread id equals
 * the id of the thread holding the state variable, state is incremented by 1 again, and on release it is
 * decremented by 1 the same number of times. This will be demonstrated later with ReentrantLock.
 */
private volatile int state;

Description:

AQS controls access to the critical resource by maintaining the value of the volatile state variable. The following methods are provided for accessing state:

  • getState()

  • setState()

  • compareAndSetState(): compare-and-set via CAS

//CAS compare-and-set. This is the familiar Unsafe method: it ultimately calls native code, and the
//Java memory model guarantees that the value compared against is the latest value in main memory,
//even in a multi-threaded environment.
protected final boolean compareAndSetState(int expect, int update) {
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}

For an analysis of CAS, see https://www.jianshu.com/p/ae25eb3cfb5d . CAS is often used to implement optimistic locking in Java; it also came up in another post on ConcurrentHashMap: https://juejin.cn/post/6960898411314823204

AQS defines two modes of sharing the resource:

1. Exclusive (only one thread can hold the resource at a time, e.g. ReentrantLock)

2. Share (multiple threads can hold the resource at the same time, e.g. Semaphore/CountDownLatch)

Different custom synchronizers contend for the shared resource in different ways. When implementing a custom synchronizer, you only need to implement the acquisition and release of the shared resource's state; the maintenance of the thread waiting queue itself (enqueuing threads that fail to acquire the resource, waking them up, and so on) is already implemented by AQS at the top level. A custom synchronizer mainly implements the following methods:

  • isHeldExclusively(): whether the current thread holds the resource exclusively. Only needs to be implemented if Condition is used.
  • tryAcquire(int): exclusive mode. Tries to acquire the resource; returns true on success, false on failure.
  • tryRelease(int): exclusive mode. Tries to release the resource; returns true on success, false on failure.
  • tryAcquireShared(int): shared mode. Tries to acquire the resource. A negative value means failure; 0 means success with no resources left; a positive value means success with resources remaining.
  • tryReleaseShared(int): shared mode. Tries to release the resource; returns true if waiting nodes are allowed to be woken after the release, false otherwise.

Take ReentrantLock as an example: state is initialized to 0, the unlocked state. When thread A calls lock(), tryAcquire() is invoked to take exclusive ownership of the lock and increment state to 1. From then on, other threads fail whenever they call tryAcquire(), until thread A calls unlock() enough times to bring state back to 0 (i.e. releases the lock); only then do other threads get a chance to acquire it. Of course, before releasing the lock, thread A itself can acquire it again repeatedly (state keeps accumulating), which is exactly the concept of reentrancy. Note, however, that the lock must be released as many times as it was acquired, so that state can return to zero.
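
To make this reentrancy counting concrete, here is a tiny illustrative snippet (the ReentrancyDemo class is my own, not JDK code): every lock() must be matched by an unlock() before the lock is actually released.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) {
        LOCK.lock();                                     // state: 0 -> 1
        try {
            LOCK.lock();                                 // same thread re-enters, state: 1 -> 2
            try {
                System.out.println(LOCK.getHoldCount()); // prints 2
            } finally {
                LOCK.unlock();                           // state: 2 -> 1, lock still held
            }
        } finally {
            LOCK.unlock();                               // state: 1 -> 0, lock fully released
        }
    }
}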

Taking CountDownLatch as an example: a task is split across N child threads, and state is also initialized to N (note that N must match the number of child threads). The N child threads run in parallel; each one calls countDown() once when it finishes, which CAS-decrements state by 1. Once all child threads have finished (i.e. state = 0), the waiting main thread is unpark()-ed, returns from await(), and continues with the rest of its work.
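
As a quick usage sketch of that flow (the thread count and printed messages are made up for illustration), using the standard java.util.concurrent.CountDownLatch:

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final int n = 3;                           // state is initialized to N = 3
        CountDownLatch latch = new CountDownLatch(n);

        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("child " + id + " finished");
                latch.countDown();                 // CAS-decrements state by 1
            }).start();
        }

        latch.await();                             // main thread parks until state reaches 0
        System.out.println("all children done, main thread continues");
    }
}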

Generally speaking, a custom synchronizer is either exclusive or shared, so it only needs to implement one of the pairs tryAcquire/tryRelease or tryAcquireShared/tryReleaseShared. However, AQS also allows a custom synchronizer to support both exclusive and shared modes at the same time, as ReentrantReadWriteLock does.
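
To see just how little a custom synchronizer has to implement, here is a minimal sketch of a non-reentrant exclusive lock built on AQS. The Mutex class below is my own illustration, modeled on the pattern the AQS documentation suggests, not code taken from the JDK:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    //The synchronizer: state == 0 means free, state == 1 means held.
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            //Succeed only if state can be CAS-ed from 0 to 1.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); //Plain volatile write: only the owner thread releases, so no CAS is needed
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   //Queuing and parking are handled entirely by AQS
    public void unlock() { sync.release(1); }   //Wakes the next queued thread, if any
}

Everything else, the waiting queue, parking and waking, comes for free from acquire(1) and release(1).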

2.1.3. The AQS doubly linked queue

Picture source: www.cnblogs.com/waterystone...

//Node is the basic data structure of the AQS internal doubly linked queue
static final class Node {
    //Marker indicating that a node is waiting in shared mode
    static final Node SHARED = new Node();
    //Marker indicating that a node is waiting in exclusive mode
    static final Node EXCLUSIVE = null;

    //The current node has cancelled scheduling. Set on timeout or interruption (when interruption is
    //honored); a node that enters this state never changes state again.
    static final int CANCELLED = 1;
    //The successor node is waiting for the current node to wake it up. When a successor is enqueued,
    //it updates its predecessor's status to SIGNAL.
    static final int SIGNAL = -1;
    //The node is waiting on a Condition. When another thread calls signal() on that Condition, the
    //node is moved from the condition queue to the synchronization queue to wait for the lock.
    static final int CONDITION = -2;
    //In shared mode the wake-up is propagated: a release may wake not only the immediate successor
    //but further successors as well.
    static final int PROPAGATE = -3;

    //The waiting status of the current node; takes one of the values above (or 0)
    volatile int waitStatus;
    //The predecessor of the current node
    volatile Node prev;
    //The successor of the current node
    volatile Node next;
    //The thread owning the current node
    volatile Thread thread;
    //The next node waiting on a condition, or the SHARED/EXCLUSIVE marker
    Node nextWaiter;

    //Whether the node is waiting in shared mode
    final boolean isShared() {
        return nextWaiter == SHARED;
    }

    //Return the predecessor node
    final Node predecessor() throws NullPointerException {
        Node p = prev;
        if (p == null)
            throw new NullPointerException();
        else
            return p;
    }

    Node() { }

    Node(Thread thread, Node mode) {
        this.nextWaiter = mode;
        this.thread = thread;
    }

    Node(Thread thread, int waitStatus) {
        this.waitStatus = waitStatus;
        this.thread = thread;
    }
}

Internally, AQS controls access to the critical resource by maintaining this doubly linked list: the head node belongs to the thread that currently has permission to access the shared variable, and the subsequent nodes are the threads waiting to be woken up for access.

2.2. Core method analysis

Having said all that, AQS is really a complete API library: many of the utility classes in the JUC package are implemented on top of its resource acquisition and release methods.

2.2.1. acquire(int)

The acquire(int) method is the top-level entry point for a thread to acquire the shared resource in exclusive mode, i.e. the entry point for locking. If acquisition succeeds, the thread has gained access to the state variable and can execute its business logic; if it fails, the thread's node is enqueued. Let's look at the code.

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
//Empty method; the concrete synchronizer overrides it to define how access to the state variable is controlled
protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}
private Node addWaiter(Node mode) {
    //Construct a node in the given mode. There are two modes: EXCLUSIVE and SHARED.
    Node node = new Node(Thread.currentThread(), mode);
    //Fast path: try to append directly to the tail of the queue.
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    //If the fast path fails, enqueue via enq().
    enq(node);
    return node;
}
private Node enq(final Node node) {
    //CAS "spin" until the node is successfully appended to the tail
    for (;;) {
        Node t = tail;
        //The queue is empty: create an empty marker node as the head, and point tail at it.
        if (t == null) {
            if (compareAndSetHead(new Node()))
                tail = head;
        //Normal path: append to the tail of the queue.
        } else {
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}
final boolean acquireQueued(final Node node, int arg) {
    //Marks whether acquisition failed
    boolean failed = true;
    try {
        //Marks whether the thread was interrupted while waiting
        boolean interrupted = false;
        //Spin
        for (;;) {
            //Get the predecessor node
            final Node p = node.predecessor();
            //If the predecessor is head, this node is "second in line" and is entitled to try to
            //acquire the resource (it may have been woken by the head releasing, or by an interrupt).
            if (p == head && tryAcquire(arg)) {
                //Got the resource: point head at this node. The node referenced by head is therefore
                //either the node currently holding the resource or null.
                setHead(node);
                //node.prev was already cleared in setHead(); clearing head.next here helps the GC
                //reclaim the previous head node. It also means the node that previously held the
                //resource has left the queue.
                p.next = null;
                //Acquisition succeeded
                failed = false;
                //Return whether the thread was interrupted while waiting
                return interrupted;
            }
            //If it is safe to rest, park() until unpark()-ed. If woken by an interrupt but the
            //resource is still unavailable, the loop simply parks again and keeps waiting.
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                //If the wait was interrupted, even once, mark interrupted as true
                interrupted = true;
        }
    } finally {
        //If the resource was never acquired (e.g. timeout, or an interruptible acquire was
        //interrupted), cancel this node's wait in the queue.
        if (failed)
            cancelAcquire(node);
    }
}
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    //Get the predecessor's status
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        //The predecessor has already been asked to signal us when it is done, so we can rest safely
        return true;
    if (ws > 0) {
        //The predecessor has cancelled: keep looking forward until the nearest node in a normal
        //waiting state is found, and attach behind it.
        //Note: the abandoned nodes skipped here end up on an unreachable chain and will be GC-ed.
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        //The predecessor is in a normal state: set its status to SIGNAL so it will notify us after
        //taking its turn. This CAS may fail, e.g. if the predecessor has just finished releasing.
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
private final boolean parkAndCheckInterrupt() {
    //park() puts the thread into a waiting state
    LockSupport.park(this);
    //Once woken, check whether we were interrupted. Thread.interrupted() also clears the current
    //thread's interrupt flag.
    return Thread.interrupted();
}

To summarize after reading the source code:

  1. Call tryAcquire() of the custom synchronizer to try to acquire the resource directly; return immediately on success.
  2. On failure, addWaiter() appends the thread to the tail of the waiting queue and marks it as exclusive mode.
  3. acquireQueued() lets the thread rest in the waiting queue and try to acquire the resource whenever it gets the chance (when its turn comes it will be unpark()-ed), returning only once the resource has been acquired. It returns true if the thread was interrupted at any point while waiting, false otherwise.
  4. If the thread is interrupted while waiting, it does not respond immediately; only after the resource has been acquired does it call selfInterrupt() to re-assert the missed interrupt.

Picture source: www.cnblogs.com/waterystone...

2.2.2. release(int)

Mirroring acquire(int), release(int) is the top-level entry point for a thread to release the shared resource in exclusive mode, i.e. the entry point for unlocking. If the resource is fully released (state = 0), it wakes up another thread in the waiting queue to acquire the resource. Let's look at the source code.

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        //Find the head node
        Node h = head;
        if (h != null && h.waitStatus != 0)
            //Wake up the next thread in the waiting queue
            unparkSuccessor(h);
        return true;
    }
    return false;
}
//Likewise an empty method, implemented by the concrete synchronizer
protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}
private void unparkSuccessor(Node node) {
    //Here node is normally the node of the current (releasing) thread
    int ws = node.waitStatus;
    //Zero the status of the current node; failure is allowed
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);
    //Find the next node s that needs to be woken up
    Node s = node.next;
    //If it is null or cancelled
    if (s == null || s.waitStatus > 0) {
        s = null;
        //Search from the tail towards the front
        for (Node t = tail; t != null && t != node; t = t.prev)
            //As can be seen here, nodes with waitStatus <= 0 are all valid nodes
            if (t.waitStatus <= 0)
                s = t;
    }
    //Wake it up
    if (s != null)
        LockSupport.unpark(s.thread);
}

The logic of release is relatively simple: the old head node is detached and left for the GC, and then the nearest node that has not been cancelled (one with waitStatus <= 0) is found and woken up so that it can become the new head and gain access to state.

2.2.3. acquireShared(int)

This method is the top-level entry point for a thread to acquire the shared resource in shared mode. It acquires the specified amount of the resource, returns immediately if acquisition succeeds, and otherwise enters the waiting queue until the resource is acquired, ignoring interrupts the whole time. Let's look at the source code.

public final void acquireShared(int arg) {
    //AQS defines the semantics of tryAcquireShared's return value: a negative value means failure;
    //0 means success with no resources left; a positive value means success with resources remaining,
    //so other threads can still acquire.
    //Try to acquire the resource; return immediately on success.
    if (tryAcquireShared(arg) < 0)
        //On failure, enter the waiting queue via doAcquireShared() and return only once acquired.
        doAcquireShared(arg);
}
//Likewise, implemented by the concrete synchronizer
protected int tryAcquireShared(int arg) {
    throw new UnsupportedOperationException();
}
private void doAcquireShared(int arg) {
    //Append to the tail of the queue, in shared mode
    final Node node = addWaiter(Node.SHARED);
    //Failure flag
    boolean failed = true;
    try {
        //Whether the thread was interrupted while waiting
        boolean interrupted = false;
        for (;;) {
            //Predecessor
            final Node p = node.predecessor();
            //If this node is right after head: head is the thread holding the resource, so if this
            //node is woken up it is most likely because head released some resource when it finished.
            if (p == head) {
                //Try to acquire the resource
                int r = tryAcquireShared(arg);
                //Success
                if (r >= 0) {
                    setHeadAndPropagate(node, r);
                    p.next = null; //help GC
                    //If the wait was interrupted, re-assert the interrupt now
                    if (interrupted)
                        selfInterrupt();
                    failed = false;
                    return;
                }
            }
            //Check the status, find a safe point, and park until unpark()-ed or interrupted
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
private void setHeadAndPropagate(Node node, int propagate) {
    Node h = head; //Record the old head for the check below
    //Point head at this node
    setHead(node);
    //If there is remaining capacity, continue waking the next neighbouring thread
    if (propagate > 0 || h == null || h.waitStatus < 0 ||
        (h = head) == null || h.waitStatus < 0) {
        Node s = node.next;
        if (s == null || s.isShared())
            doReleaseShared();
    }
}

The overall enqueueing logic is the same as for the exclusive lock.

Compared with exclusive mode, one extra point deserves attention: a thread only tries to acquire the resource when it is head.next (the "second in line"), and if there is capacity left over it wakes the teammates behind it. Here a question arises. Suppose the head releases 5 units of resource when it finishes, while the second in line needs 6, the third needs 1 and the fourth needs 2. The head wakes the second, which sees that the resource is not enough; does it pass the resource on to the third? The answer is no! The second simply park()s again and waits for other threads to release more resource, and it does not wake the third or the fourth. In exclusive mode only one thread runs at a time, so this is not an issue; but in shared mode multiple threads can run at the same time, and here the third and fourth get stuck even though enough resource is available for them. Of course, this is not a bug: AQS guarantees that threads are woken strictly in enqueue order (ensuring fairness at the cost of some concurrency).
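
To get a feel for shared mode in practice, here is a small sketch using Semaphore, which is built on tryAcquireShared/tryReleaseShared (the permit count and task body are made up for illustration):

import java.util.concurrent.Semaphore;

public class SharedModeDemo {
    public static void main(String[] args) {
        //2 permits: at most two threads hold the resource at the same time (shared mode)
        Semaphore semaphore = new Semaphore(2);

        for (int i = 0; i < 4; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    semaphore.acquire();          //acquireShared(1) under the hood; may park the thread
                    try {
                        System.out.println("thread " + id + " got a permit");
                        Thread.sleep(100);        //simulate some work
                    } finally {
                        semaphore.release();      //releaseShared(1); may wake a waiting thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}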

2.2.4. releaseShared()

The reverse operation: releasing the resource in shared mode.

public final boolean releaseShared(int arg) {
    //Try to release the resource
    if (tryReleaseShared(arg)) {
        //Wake up successor nodes
        doReleaseShared();
        return true;
    }
    return false;
}
private void doReleaseShared() {
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;
                //Wake up the successor
                unparkSuccessor(h);
            }
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;
        }
        //If head has not changed, we are done; otherwise loop again
        if (h == head)
            break;
    }
}

The flow of this method is also fairly simple; in one sentence: after releasing the resource, wake up the successors. It is similar to release() in exclusive mode, but one point is worth noting: in exclusive mode, tryRelease() only returns true and wakes other threads once the resource is fully released (state = 0), mainly because of reentrancy; releaseShared() in shared mode has no such requirement. The essence of shared mode is to let a bounded number of threads run concurrently, so a thread that owns part of the resource may wake the waiting nodes as soon as it releases some of it. For example, suppose the total resource is 13, A (holding 5) and B (holding 7) run concurrently, and C (needing 4) has to wait because only 1 unit is left. A releases 2 units while running, tryReleaseShared(2) returns true and C is woken; C sees only 3 units, not enough, and waits again. Then B releases 2 more units, tryReleaseShared(2) returns true and C is woken again; now there are 5 units, enough for C, so C runs alongside A and B. By contrast, the tryReleaseShared() of the ReentrantReadWriteLock read lock only returns true when the resource is completely released (state = 0), so a custom synchronizer is free to decide the return value of tryReleaseShared() as it sees fit.

3. Introduction and use of ReentrantLock

3.1. Introduction

ReentrantLock means reentrant lock: a thread may lock the same critical resource repeatedly. It is frequently compared with the synchronized keyword. This section ties together the explanation of AQS above and helps you understand the role AQS plays inside ReentrantLock.

The difference between Synchronized and ReentrantLock: juejin.cn/post/684490...

3.1.1. Fair lock and unfair lock

ReentrantLock supports two locking modes: fair lock and unfair lock.

Let's use the scenario of buying KFC to illustrate the fair lock and the unfair lock in ReentrantLock.

Fair lock: I want to eat KFC and see people already queuing in front of me, so I honestly join the end of the existing queue and buy only after everyone ahead of me has bought theirs.

Unfair lock: I want to eat KFC and see people queuing in front of me, but I don't feel like queuing, so I cut straight in and try to order. If my order goes through, I save the queuing time and get my KFC; if it fails, I go back and queue honestly.

Fair lock
  Advantage: every thread eventually gets the resource and will not starve in the queue.
  Disadvantage: throughput drops a lot. Except for the first thread in the queue, every other thread is blocked, and the CPU overhead of waking blocked threads is large.

Unfair lock
  Advantage: the overhead of waking threads is reduced and overall throughput is higher, since the CPU does not have to wake every waiting thread, which reduces the number of threads that get scheduled.
  Disadvantage: threads in the middle of the queue may wait a very long time for the lock, or never obtain it at all, resulting in starvation.

3.2. Using ReentrantLock

3.2.1. ReentrantLock constructors

ReentrantLock uses the unfair lock by default.

//The default constructor uses the unfair lock
public ReentrantLock() {
    sync = new NonfairSync();
}

//Choose between the unfair and the fair lock yourself
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
/**
 * Unfair lock
 */
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

/**
 * Fair lock
 */
static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;

    final void lock() {
        acquire(1);
    }

    //Try to acquire the resource
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            if (!hasQueuedPredecessors() &&
                compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}

3.2.2. Locking and unlocking with ReentrantLock

At this point you probably still can't see how ReentrantLock and AQS are actually connected.

Let's start with a quick look at ReentrantLock in use.

Without the lock

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Demo {
    int count = 1;

    public void add() {
        this.count++;
    }

    public static void main(String[] args) throws Exception {
        Demo demo = new Demo();
        ExecutorService executorService = Executors.newFixedThreadPool(30);
        List<Runnable> runnables = new ArrayList<>();
        List<Future<?>> futures = new ArrayList<Future<?>>();
        for (int i = 0; i < 1000; i++) {
            runnables.add(() -> {
                demo.add();
            });
        }
        runnables.forEach(e -> {
            futures.add(executorService.submit(e));
        });
        //Wait for all tasks to complete
        for (Future<?> f : futures) {
            try {
                f.get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        System.out.println(demo.count);
        executorService.shutdown(); //Shut the pool down so the JVM can exit
    }
}

Console output: 987, which is basically never equal to the expected 1001.

With the lock

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.locks.ReentrantLock;

public class Demo {
    ReentrantLock reentrantLock = new ReentrantLock();
    int count = 1;

    public void add() {
        reentrantLock.lock();
        try {
            this.count++;
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            reentrantLock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Demo demo = new Demo();
        ExecutorService executorService = Executors.newFixedThreadPool(30);
        List<Runnable> runnables = new ArrayList<>();
        List<Future<?>> futures = new ArrayList<Future<?>>();
        for (int i = 0; i < 1000; i++) {
            runnables.add(() -> {
                demo.add();
            });
        }
        runnables.forEach(e -> {
            futures.add(executorService.submit(e));
        });
        //Wait for all tasks to complete
        for (Future<?> f : futures) {
            try {
                f.get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        System.out.println(demo.count);
        executorService.shutdown(); //Shut the pool down so the JVM can exit
    }
}

3.2.3. Analysis of ReentrantLock locking and unlocking

From the code in 3.2.2 we can see that locking and unlocking with ReentrantLock rely on the lock() and unlock() methods. Let's see what these two methods actually do.

1. The lock() method

public void lock() {
    sync.lock();
}
abstract static class Sync extends AbstractQueuedSynchronizer {
    //Irrelevant code omitted
    abstract void lock();
    //Irrelevant code omitted
}

Looking at the implementations of the lock() method, there are two: one for the fair lock and one for the unfair lock.

For convenience, let's take the fair lock implementation as the example.

//In the FairSync class
final void lock() {
    acquire(1);
}
//The AQS resource acquisition method
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

There is also a tryAcquire method in the FairSync class:

protected final boolean tryAcquire(int acquires) {
    //Get the current thread
    final Thread current = Thread.currentThread();
    //Read the shared state variable
    int c = getState();
    //Not occupied
    if (c == 0) {
        //This is where fairness shows: the lock is taken only if no thread is queued ahead of us
        //and the CAS on state succeeds.
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    //Occupied, but by the current thread: add to state, which is exactly the reentrant lock
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

As you can see, it is an override of the hook method below, consistent with the "implemented by the concrete synchronizer" mechanism mentioned earlier:

protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}

At this point we have connected the ReentrantLock methods with the AQS methods. Sorting out the steps of locking with the fair lock: lock() calls acquire(1); acquire() first calls FairSync.tryAcquire(), which takes the lock only if no thread is queued ahead of the current one and the CAS on state succeeds (or simply increments state if the current thread already owns the lock); if tryAcquire() fails, addWaiter() wraps the thread in an exclusive-mode node and acquireQueued() parks it in the queue until its predecessor becomes head and wakes it up.

2. The unlock() method

//ReentrantLock method
public void unlock() {
    sync.release(1);
}
//AQS internal method
public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

3.2.4. Implementation differences between fair and unfair locks

We know that ReentrantLock is an exclusive lock, so what is the difference between the fair lock and the unfair lock in it?

Before diving into the source code, based on the analysis above of AQS and the fair lock, where do you think the difference shows up?

It has to be the lock() method. With the fair lock, if lock() finds that the shared variable is held by a thread other than the current one, or that there are already threads waiting in the queue, it honestly joins the back of the queue; think back to the KFC scene. The unfair lock, even when it knows that state is occupied and that threads are queued, simply tries to jump the queue. OK, with this idea in mind, let's look at the implementation of the unfair lock.

final void lock() {
    //First attempt to jump the queue: regardless of whether state is currently occupied, try a CAS
    //straight away to see whether we can grab access to the resource.
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}
protected final boolean tryAcquire(int acquires) {
    return nonfairTryAcquire(acquires);
}
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    //Not occupied: try to set the value
    if (c == 0) {
        //Grab it again, without checking the queue
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    //Reentrant lock implementation
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) //overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
  1. After lock() is called, the unfair lock first tries a CAS to grab the lock. If the lock happens to be free at that moment, it is acquired directly and the method returns.

  2. Once the CAS fails, the unfair lock goes down the same path as the fair lock and ends up in tryAcquire(). Inside tryAcquire(), if the lock turns out to have just been released (state == 0), the unfair lock again grabs it directly with CAS, whereas the fair lock first checks whether there are threads already waiting in the queue; if there are, it does not grab the lock and obediently queues at the back.

4. References and Acknowledgements

Image and text reference: tech.meituan.com/2019/12/05/...

Main reference for this article: www.cnblogs.com/waterystone...

5. Contact me

If you think this article is well written, a like, a comment and a follow would be much appreciated~

DingTalk: louyanfeng25

WeChat: baiyan_lou