Java Concurrent Programming: Must-Know Knowledge Points Explained



Outline of this article

1. The three elements of concurrent programming

Atomicity

An atom is a particle that cannot be divided further. Atomicity in Java means that one or more operations either all execute successfully or all fail; no partially completed state is ever visible.

Orderliness

The program executes in the order the code is written. (In practice, the processor may reorder instructions.)

Visibility

When multiple threads access the same variable and one thread modifies it, the other threads can immediately see the latest value.

2. The five states of threads

New (creation) state

The thread has been created with the new operator

Ready state

After the start method is called; a thread in the ready state does not necessarily execute its run method immediately, it must first wait for CPU scheduling

Running state

The CPU schedules the thread and begins executing its run method

Blocking state

During its execution, the thread enters a blocked state for some reason

For example: calling the sleep method, waiting to acquire a lock, etc.

Dead state

The run method has finished, or an uncaught exception was thrown during execution

3. Pessimistic lock and optimistic lock

Pessimistic lock: each operation takes the lock first, which causes other threads to block.

Optimistic locking: each operation proceeds without locking, on the assumption that there is no conflict; if it fails because of a conflict, it retries until it succeeds, so threads are never blocked.

4. Cooperation between threads

4.1 wait/notify/notifyAll

These are methods of the Object class

Note that these three methods must be called while holding the object's monitor, i.e. within a synchronized block or method

wait

Blocks the current thread until notify or notifyAll wakes it up

wait has three overloads. wait() must be woken up by notify or notifyAll. wait(long timeout) wakes up automatically if no notify or notifyAll arrives within the specified time. wait(long timeout, int nanos) essentially just delegates to the one-parameter version, as its JDK source shows:

```java
public final void wait(long timeout, int nanos) throws InterruptedException {
    if (timeout < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }
    if (nanos < 0 || nanos > 999999) {
        throw new IllegalArgumentException("nanosecond timeout value out of range");
    }
    if (nanos > 0) {
        timeout++;
    }
    wait(timeout);
}
```

notify

Wakes up a single thread that is waiting on this object's monitor

notifyAll

Wakes up all threads that are waiting on this object's monitor
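As a minimal sketch of how this group of methods is used (class and method names here are illustrative, not from any library): one thread waits on a shared monitor until another thread signals it, and the loop around wait guards against spurious wakeups.

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    // Blocks the calling thread until another thread calls signal()
    public void await() throws InterruptedException {
        synchronized (lock) {          // wait() must be called while holding the monitor
            while (!ready) {           // loop guards against spurious wakeups
                lock.wait();
            }
        }
    }

    // Wakes up all threads blocked in await()
    public void signal() {
        synchronized (lock) {          // notifyAll() also requires the monitor
            ready = true;
            lock.notifyAll();
        }
    }
}
```

Because the waiter rechecks the ready flag, it does not matter whether signal() runs before or after the waiter reaches wait().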

4.2 sleep/yield/join

These are methods of the Thread class

sleep

Suspends the current thread for a specified time, giving up the CPU, but does not release any lock it holds

yield

Pauses the execution of the current thread, i.e. gives up its current use of the CPU so that other threads get a chance to run; the pause time cannot be specified. It moves the current thread from the running state back to the ready state. This method is rarely used in production, and the official Javadoc says as much:

```java
/**
 * A hint to the scheduler that the current thread is willing to yield
 * its current use of a processor. The scheduler is free to ignore this
 * hint.
 *
 * Yield is a heuristic attempt to improve relative progression
 * between threads that would otherwise over-utilise a CPU. Its use
 * should be combined with detailed profiling and benchmarking to
 * ensure that it actually has the desired effect.
 *
 * It is rarely appropriate to use this method. It may be useful
 * for debugging or testing purposes, where it may help to reproduce
 * bugs due to race conditions. It may also be useful when designing
 * concurrency control constructs such as the ones in the
 * {@link java.util.concurrent.locks} package.
 */
```

join

Waits for the thread on which join is called to finish before the code that follows is executed

join must be called after the start method (as can be seen from the source code)

Usage scenario: join is used when a parent thread needs to wait for a child thread to finish before continuing, or when it needs a child thread's execution result
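A minimal sketch of the parent-waits-for-child scenario (names are illustrative): the parent starts a child thread, joins on it, and only then reads the child's result; join also establishes the happens-before edge that makes the read safe.

```java
public class JoinDemo {
    public static int compute() throws InterruptedException {
        final int[] result = new int[1];
        Thread child = new Thread(() -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;   // child computes 1 + 2 + ... + 100
            result[0] = sum;
        });
        child.start();
        child.join();       // parent blocks here until the child finishes
        return result[0];   // safe to read: join() establishes happens-before
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(compute());   // 5050
    }
}
```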

5. The volatile keyword

5.1 Definition

The Java programming language allows threads to access shared variables. To ensure that shared variables are updated accurately and consistently, a thread would ordinarily have to acquire an exclusive lock on them. The Java language also provides volatile, which in some cases is more convenient than a lock. If a field is declared volatile, the Java thread memory model ensures that all threads see a consistent value for the variable.

volatile is a lightweight synchronized: it does not cause thread context switching and scheduling, so its execution overhead is smaller.

5.2 Principle

1. A variable modified with volatile gains an extra lock-prefixed instruction at the assembly stage

2. That instruction acts as a memory barrier: during instruction reordering, later instructions cannot be moved before the barrier, and earlier instructions cannot be moved after it; in other words, by the time the barrier instruction executes, all operations before it have completed

3. It forces modifications in the CPU cache to be written back to main memory immediately

4. For a write operation, it invalidates the copies of that memory address cached by other CPUs

5.3 Function

Memory visibility

In multi-threaded execution, when one thread modifies the value of a variable, other threads can immediately see the modified value
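A small sketch of this visibility guarantee (names are illustrative): a worker spins on a volatile flag and terminates promptly once another thread flips it; without volatile, the worker might never observe the write.

```java
public class VisibilityDemo {
    private static volatile boolean stop = false;   // without volatile, the worker may spin forever

    // Returns true if the worker observed the volatile write and exited
    public static boolean run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait on the volatile flag
            }
        });
        worker.start();
        stop = true;          // this volatile write is immediately visible to the worker
        worker.join(5000);    // wait up to 5s for the worker to notice and exit
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // true
    }
}
```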

Prevent reordering

That is, the program executes in the order the code is written (the processor may otherwise reorder code to improve execution efficiency)

volatile does not guarantee the atomicity of operations (for example, the following code will usually print a result less than 100000)

```java
public class TestVolatile {
    public volatile int inc = 0;

    public void increase() {
        inc = inc + 1;   // not atomic: read, add and write are three separate steps
    }

    public static void main(String[] args) {
        final TestVolatile test = new TestVolatile();
        for (int i = 0; i < 100; i++) {
            new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    test.increase();
                }
            }).start();
        }
        while (Thread.activeCount() > 2) {   // wait until the worker threads have finished
            Thread.yield();
        }
        System.out.println(test.inc);   // usually less than 100000
    }
}
```

6. The synchronized keyword

Ensures that threads access synchronized code mutually exclusively

6.1 Definition

synchronized is a lock implemented by the JVM, in which lock acquisition and release correspond to the monitorenter and monitorexit instructions respectively. In terms of implementation it is divided into biased locks, lightweight locks, and heavyweight locks. Biased locking is enabled by default since Java 1.6; under contention a lightweight lock inflates into a heavyweight lock, and the lock data is stored in the object header.

6.2 Principle

For a code block marked with the synchronized keyword, the generated bytecode contains two extra instructions, monitorenter and monitorexit (visible by running javap -verbose on the class file). The JVM documentation describes these two instructions as follows:

monitorenter

Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:

If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor.

If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.

If another thread already owns the monitor associated with objectref, the thread blocks until the monitor's entry count is zero, then tries again to gain ownership.

monitorexit

The thread that executes monitorexit must be the owner of the monitor associated with the instance referenced by objectref.

The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.

For a method marked with the synchronized keyword, the generated bytecode instead carries an ACC_SYNCHRONIZED flag. When the method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If it is, the executing thread first acquires the monitor, executes the method body only after acquiring it successfully, and releases the monitor when the method completes. While the method is executing, no other thread can obtain the same monitor object. In essence there is no difference from the block form; method synchronization is simply achieved implicitly, without explicit monitor bytecodes.

6.3 About use

Modifying an instance method

The synchronization object is the instance (this)

Modifying a static method

The synchronization object is the Class object itself

Modifying a code block

You can choose the synchronization object yourself
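The three forms above can be sketched as follows (the class name is illustrative):

```java
public class SyncForms {
    private final Object lock = new Object();
    private int count = 0;
    private static int staticCount = 0;

    // 1. Instance method: the lock is the instance itself (this)
    public synchronized void instanceMethod() {
        count++;
    }

    // 2. Static method: the lock is the Class object (SyncForms.class)
    public static synchronized void staticMethod() {
        staticCount++;
    }

    // 3. Code block: the lock object is chosen explicitly
    public void blockMethod() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }
}
```

Note that the instance-method and static-method forms lock different monitors, so they do not exclude each other.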

6.4 Disadvantages

A thread that fails to acquire the lock enters the Blocked state, and only returns to the Running state after winning the lock contention. This process involves switching between the operating system's user mode and kernel mode, which is relatively expensive. Java 1.6 optimized synchronized by adding the escalation from biased lock to lightweight lock to heavyweight lock, but once a lock has finally inflated to a heavyweight lock, performance is still low.

7. CAS

Classes such as AtomicBoolean, AtomicInteger, AtomicLong, and the Lock-related classes are implemented on top of CAS underneath, and to some extent CAS performs better than synchronized.

7.1 What is CAS

The full name of CAS is Compare And Swap, a technique for implementing concurrent applications. The operation involves three operands: the memory location (V), the expected original value (A), and the new value (B). If the value at the memory location matches the expected original value, the processor atomically updates the location to the new value; otherwise it does nothing.
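These semantics can be seen directly with AtomicInteger's compareAndSet, which is implemented with CAS underneath (the demo method is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Performs one successful and one failed CAS, returning the final value
    public static int demo() {
        AtomicInteger n = new AtomicInteger(1);

        boolean first = n.compareAndSet(1, 2);    // succeeds: current value matches the expected 1
        boolean second = n.compareAndSet(1, 3);   // fails: current value is now 2, not 1

        System.out.println(first + " " + second); // true false
        return n.get();
    }

    public static void main(String[] args) {
        System.out.println(demo());   // 2
    }
}
```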

7.2 Why is there a CAS

If synchronized alone is used to ensure synchronization, the following problems arise:

synchronized is a kind of pessimistic lock, which causes certain performance problems. Under multi-thread contention, acquiring and releasing locks causes extra context switches and scheduling delays. While one thread holds the lock, every other thread that needs it is suspended.

7.3 Implementation Principle

Java cannot access the underlying operating system directly; it does so through native methods (JNI). The underlying CAS implementation performs atomic operations through the Unsafe class.

7.4 Existing problems

ABA problem

What is the ABA problem? For example, suppose there is an int variable N whose value is 1

There are three threads that want to change it at this time:

Thread A: wants to set N to 2

Thread B: wants to set N to 2

Thread C: wants to set N to 1

At this point, threads A and B read the value of N at the same time. Thread A obtains the CPU first and sets N to 2, while thread B is blocked for some reason. Thread C reads N after thread A has finished, obtaining the current value 2.

Thread state at this time

Thread A has successfully set N to 2

Thread B read N's value as 1 and hopes to set it to 2; it is currently blocked

Thread C reads N's current value 2 and hopes to set it to 1

Thread C then successfully sets N to 1

Finally, thread B obtains the CPU and resumes running. Before blocking, thread B had read N's value as 1. Its compare step finds that the current value of N equals the value it read (both are 1), so it successfully sets N to 2.

Throughout this process, the value of N that thread B read is stale: although it happens to equal the current value, N has actually undergone the change 1 → 2 → 1.

The above example is a typical ABA problem

How to solve the ABA problem

Add a version number to the variable: when comparing, check not only the variable's current value but also its current version number. AtomicStampedReference in Java solves the problem this way.
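A compact sketch of the version-number fix (the multi-thread scenario above is compressed into one thread here for determinism; names are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Simulates thread B's stale CAS after the value has gone 1 -> 2 -> 1
    public static boolean staleCasSucceeds() {
        // value 1 with initial stamp (version number) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int oldStamp = ref.getStamp();   // "thread B" records stamp 0

        // Meanwhile the value goes 1 -> 2 -> 1, bumping the stamp each time
        ref.compareAndSet(1, 2, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(2, 1, ref.getStamp(), ref.getStamp() + 1);

        // The stale CAS fails even though the value is 1 again,
        // because the recorded stamp 0 no longer matches the current stamp 2
        return ref.compareAndSet(1, 2, oldStamp, oldStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds());   // false
    }
}
```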

Long cycle time and high overhead

Under high contention, if many threads repeatedly try to update the same variable but keep failing and looping round for another attempt, the spinning puts a lot of pressure on the CPU.

CAS can only guarantee an atomic operation on a single shared variable

8. AbstractQueuedSynchronizer(AQS)

AQS, the abstract queued synchronizer, manages synchronization state together with a linked queue of waiting threads; the state is modified with CAS. It is the most important cornerstone of the java.util.concurrent package and the key to understanding its contents: ReentrantLock, CountDownLatch, and Semaphore are all implemented on top of AQS. To learn how it works, see: https://www.cnblogs.com/waterystone/p/4920797.html

9. Future

In concurrent programming we generally use Runnable to execute asynchronous tasks, but that way we cannot obtain a task's return value. With Future we can: simply replace the Runnable with a FutureTask. It is quite simple to use.
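A brief sketch for completeness (names are illustrative): wrap a Callable in a FutureTask, run it on a thread, and read the result with get, which blocks until the result is ready.

```java
import java.util.concurrent.FutureTask;

public class FutureDemo {
    public static int sumViaFuture() throws Exception {
        // A Callable returns a value, unlike Runnable's void run()
        FutureTask<Integer> task = new FutureTask<>(() -> {
            int sum = 0;
            for (int i = 1; i <= 10; i++) sum += i;
            return sum;
        });
        new Thread(task).start();   // FutureTask is itself a Runnable
        return task.get();          // blocks until the result is available
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumViaFuture());   // 55
    }
}
```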

10. Thread Pool

Creating a thread each time we need one is simple but problematic: if there are many concurrent threads and each runs a short task and exits, frequently creating and destroying threads greatly reduces system efficiency, because creation and destruction take time. By reusing threads, a thread pool greatly reduces this performance loss.

Java's thread pool implementation class is ThreadPoolExecutor. The meaning of each constructor parameter is clearly documented in its comments; a few key parameters are briefly described here.

corePoolSize: the number of core threads, which are kept in the pool even when idle and are not destroyed. If allowCoreThreadTimeOut is set to true, idle core threads are destroyed as well.

maximumPoolSize: the maximum number of threads allowed in the thread pool

keepAliveTime: the maximum time a non-core thread is allowed to stay idle, after which it is destroyed.

workQueue: The queue used to store tasks.

SynchronousQueue: this queue hands a newly added task directly to a thread for immediate execution. If every thread in the pool is busy, a new thread is created to run the task. When using this queue, maximumPoolSize is generally set to an effectively unbounded value such as Integer.MAX_VALUE.

LinkedBlockingQueue: an unbounded queue, meaning every submitted task eventually runs. If the pool has fewer than corePoolSize threads, a new thread is created to execute the task; once the pool reaches corePoolSize threads, further tasks are placed in the queue to wait. Because the queue has no size limit, it is called an unbounded queue. With this queue, maximumPoolSize never takes effect (the pool never grows beyond corePoolSize), so it is conventionally set equal to corePoolSize, as Executors.newFixedThreadPool does.

ArrayBlockingQueue: a bounded queue whose maximum capacity can be set. When all corePoolSize threads are busy, new tasks are placed in this queue; when the queue is full, additional threads are created up to maximumPoolSize; and when the queue is full and the pool has already reached maximumPoolSize, the task is handed to the RejectedExecutionHandler.
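Putting the parameters together, a minimal construction might look like this (the concrete numbers are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Runs n trivial tasks on a small bounded pool and returns how many completed
    public static int runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                      // corePoolSize
                4,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(10),           // bounded work queue
                new ThreadPoolExecutor.AbortPolicy());  // rejection handler

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(5));   // 5
    }
}
```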


Finally, this article gives a brief overview of the knowledge points needed for Java concurrent programming. Each topic here could fill an article of its own, so space does not permit a detailed treatment of every one. I hope this article has brought you a closer understanding of concurrency in Java. If you find any omissions or errors, please add them in the comment section. Thank you.