Several basic issues that you must know about concurrent programming

Concurrent programming knowledge system

Concurrent programming is an important topic in computer science, and building a knowledge system is key to outlining and mastering it. Based on my own understanding of concurrent programming and a compilation of public material, this article tries to clear the fog and introduce concurrent programming as a whole.

The main contents include:

  1. Basic concepts of concurrent programming:
  • The difference between concurrency and parallelism
  • Advantages of multithreading
  • Three basic problems of multithreading
  2. Concurrent programming in practice:
  • JUC framework
    • Executor framework
    • Fork/Join framework
  • Discussion of two core issues of concurrent programming

Basic concepts of concurrent programming

The emergence of concurrent programming

Concurrent programming emerged along with the development of computers.

The first generation of computers used cumbersome punched-card machines: operators fed calculation tasks into the computer by punching holes in cards. To increase computing throughput, batch operating systems appeared, followed by time-sharing operating systems.

Gordon Moore, one of the founders of Intel, observed that the number of transistors that can fit on an integrated circuit doubles roughly every two years. This is the famous Moore's Law. The semiconductor industry has developed roughly in accordance with Moore's Law for more than half a century, contributing to world economic growth in the second half of the 20th century.

CPUs have gradually grown from single-core to dual-core, quad-core, and beyond. CPUs keep getting faster, but memory and IO speeds grow much more slowly. The cache alleviates the speed mismatch between the CPU and memory, and memory alleviates the speed mismatch between the CPU and IO devices.

The computer storage structure is as shown in the figure below:

Multithreading and multiprocessing arose to make full use of the computing power of multi-core CPUs and to reduce IO blocking.

Advantages of concurrent programming

Multithreaded programs have many advantages:

  1. Faster processing: multithreaded programs take full advantage of multi-core CPUs by distributing calculations across CPU cores, increasing computation speed.

  2. Reduced IO blocking: network requests and local IO are relatively slow operations. With multithreading, a new thread can carry out other work (asynchronously) while IO is in progress, reducing the impact of IO blocking on response time. Since an operating system can only create a limited number of threads, it also provides efficient IO mechanisms such as multiplexed IO and asynchronous IO, and thread pool technology can be used to further improve processing capability.

  3. Improved responsiveness of interface (GUI) systems:

Many traditional GUI systems are single-threaded and process the events of interface components through a main event loop, so a time-consuming event handler easily freezes the main interface. Modern GUI systems use multithreading: an Event Dispatch Thread replaces the main event loop and calls the corresponding handler when an event occurs, which keeps the interface responsive (see the sketch after this list).

  4. Simplified modeling:

Having one thread handle one thing is easier to model than handling several things at once. A multithreading framework isolates event handling from resource scheduling, interleaved operations, asynchronous IO, and resource waiting. Today's concurrency frameworks and components (Servlet, RMI, etc.) make modeling even easier.
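As a concrete illustration of points 2 and 3, here is a minimal Swing sketch (the class and method names are my own, not from the original article): slow IO-like work runs on a background thread, and the result is published back to the Event Dispatch Thread so the interface stays responsive.

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

// Minimal sketch: run slow work off the Event Dispatch Thread,
// then publish the result back to it so the UI stays responsive.
public class ResponsiveUi {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JButton button = new JButton("Load");
            button.addActionListener(e ->
                // Do the slow (IO-like) work on a background thread ...
                new Thread(() -> {
                    String result = slowIo();
                    // ... and update the component back on the EDT.
                    SwingUtilities.invokeLater(() -> button.setText(result));
                }).start());
            frame.add(button);
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }

    private static String slowIo() {
        try { Thread.sleep(2000); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
        return "Done";
    }
}
```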

Three issues that need attention in concurrent programming: safety, liveness, and performance

Multithreading is a double-edged sword: while improving performance, it also brings safety, liveness, and performance problems.

Take hotel cleaning as an example:

The Kuntai Hotel has many rooms that need to be cleaned. All room cleaning is done under the direction of a cleaning supervisor. The company's funds are limited, so there is only a limited set of cleaning tools that the cleaners take turns using.

In this analogy, a room to be cleaned is a computing task waiting to be handled, a cleaner is a thread that processes tasks, the supervisor is the thread scheduler, and the limited cleaning tools are CPU resources or other shared resources.

When the hotel was small, the entire Kuntai Hotel had only one cleaner, who simply cleaned the rooms one by one. Processing was slow, but nothing went wrong: the cleaning tools were monopolized by this single cleaner, and with only one cleaner there was no need to hire a cleaning supervisor.

As the hotel grew, one cleaner could no longer keep up. The company hired more cleaning staff and assigned a cleaning supervisor to manage them.

After cleaning a room, each cleaner notifies the supervisor that the room is done. When a cleaner finishes with a tool, it has to be tidied up and the job number of its current user recorded before it is handed over to the next cleaner (a context switch).

The way the cleaners cooperate is very similar to a multithreaded system.

Safety issues

If a cleaner is cleaning a room without leaving any mark or notice, another cleaner may walk in and start cleaning it too, wasting resources (the task is executed twice). Worse, a new guest might enter the room at that moment and find it in a mess. What would the guest think?

This is equivalent to the thread safety problems that arise when accessing shared resources. Sometimes thread safety problems merely waste resources; more often they cause serious errors.

After discovering the problem, the cleaning supervisor asked the cleaners to place a cleaning sign (a lock) at the door before entering a room. Another cleaner who sees the sign (fails to acquire the lock) knows that the room is being cleaned, and the problem is solved.

The cleaning-in-progress sign on a room corresponds to a lock in the Java language. A lock not only provides mutually exclusive access, but also indirectly enables communication between threads.
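A minimal sketch of the "cleaning sign" idea in Java (the class and field names are hypothetical): the synchronized keyword takes the room's monitor lock, so only one cleaner thread can be inside at a time, and the state change is visible to whoever locks the room next.

```java
// Minimal sketch: the "cleaning in progress" sign as a synchronized method.
// Only one cleaner (thread) can hold the room's monitor at a time.
public class Room {
    private boolean cleaned = false;

    public synchronized void clean(String cleaner) {
        if (cleaned) {
            return;             // another cleaner already finished this room
        }
        System.out.println(cleaner + " is cleaning the room");
        cleaned = true;         // the state change is visible to later lock holders
    }
}
```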

To use the limited tools (resources) efficiently, the cleaning supervisor designed a set of resource allocation rules (the JMM, Java Memory Model). The rules govern where different tools are placed (memory layout), how tools are used (so that the usage status of a tool is visible to everyone), how procedures may be optimized (reordering), and how work is divided (atomicity).

This set of resource allocation rules corresponds to the Java memory model.

The Java memory model includes Java memory area division and memory usage rules (visibility, atomicity, and instruction reordering).
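A minimal sketch of the visibility rule, assuming a simple stop-flag scenario: without volatile the worker thread might never observe the main thread's write; with it, the write happens-before the worker's subsequent read.

```java
// Minimal sketch of the visibility rule: without volatile, the worker thread
// may never observe the write made by the main thread.
public class VisibilityDemo {
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // busy loop; volatile guarantees the flag update becomes visible
            }
            System.out.println("worker observed stop flag");
        });
        worker.start();
        Thread.sleep(100);
        stopped = true;   // happens-before the worker's subsequent read
        worker.join();
    }
}
```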

Liveness issues

When, due to scarce resources or defects in the program itself, a thread remains in a non-runnable state forever, or is runnable but its task can never make progress, the phenomenon is called a liveness failure. Common liveness failures include deadlock, livelock, and starvation.

Deadlock: cleaner A is holding a mop and needs a rag, while cleaner B is holding a rag and needs a mop. If neither gives way to the other, a deadlock occurs.

Four conditions are required for a deadlock: mutual exclusion, hold and wait (request and hold), no preemption, and circular waiting. Breaking any one of the four conditions resolves the deadlock.
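A minimal sketch of circular waiting with the mop-and-rag example (the names are illustrative): each thread holds one tool and waits for the other, so neither can proceed. Acquiring the two locks in a fixed global order would break the cycle.

```java
// Minimal sketch of circular waiting: cleaner A holds the mop and wants the rag,
// cleaner B holds the rag and wants the mop. Always locking MOP before RAG
// in both threads would break the cycle.
public class DeadlockDemo {
    private static final Object MOP = new Object();
    private static final Object RAG = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (MOP) {
                sleep(100);
                synchronized (RAG) { System.out.println("A cleaned"); }
            }
        }, "cleaner-A").start();

        new Thread(() -> {
            synchronized (RAG) {
                sleep(100);
                synchronized (MOP) { System.out.println("B cleaned"); }
            }
        }, "cleaner-B").start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```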

Livelock: no thread is blocked, yet because of some recurring problem, execution still cannot make progress. Two typical cases:

  1. Message retry: when processing a message fails, it keeps being retried, but for some reason, for example an incorrect message format, parsing keeps failing and the retry never succeeds.

In this case, the usual remedy is not to retry unrepairable errors, or to limit the number of retries.

  2. Livelock caused by mutual cooperation.

For example, two very polite people meet on a narrow road and each steps aside for the other, only to end up blocking each other again, over and over.

Here, introducing some randomness into when each side gives way breaks the symmetry.
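A minimal sketch of both remedies, assuming a hypothetical handleMessage step: the retry count is capped, and a random backoff removes the symmetry that keeps two "polite" threads colliding.

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of the two livelock remedies mentioned above:
// cap the number of retries, and add a random pause so two "polite"
// threads stop mirroring each other. handleMessage is a stand-in.
public class RetryWithBackoff {
    private static final int MAX_RETRIES = 3;

    public static boolean process(String message) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            if (handleMessage(message)) {
                return true;                               // processed successfully
            }
            // random backoff breaks the symmetry between competing threads
            Thread.sleep(ThreadLocalRandom.current().nextLong(50, 200));
        }
        return false;                                      // give up: unrepairable error
    }

    private static boolean handleMessage(String message) {
        return message != null && !message.isEmpty();      // stand-in for real parsing
    }
}
```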

Starvation

Starvation occurs when low-priority threads never get to execute because they are constantly preempted by high-priority threads.

Introducing a fairness mechanism solves starvation, for example the fair lock provided by ReentrantLock.
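A minimal sketch of a fair lock, constructed with ReentrantLock's fairness flag: waiting threads acquire the lock roughly in arrival order, so none of them starves.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: a fair ReentrantLock hands the lock to the longest-waiting
// thread, so low-priority threads are not starved by constant preemption.
public class FairLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair mode
    private int counter;

    public void increment() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();   // always release in finally
        }
    }
}
```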

Performance issue

Concurrent programs can suffer performance problems such as reduced throughput and slower execution caused by contention for mutually exclusive resources.

Locking can solve the problem of thread resource contention, but it also brings overhead.

Synchronized lock optimization

Since JDK 1.6, various lock optimization techniques have been introduced, such as lightweight locks, biased locks, adaptive spinning, lock coarsening, and lock elimination.

A lock guarded by the synchronized keyword selects a different lock implementation depending on how contended the mutually exclusive resource is:

  • When there is no lock contention, the lock becomes a biased lock, and there is no additional cost for locking and unlocking.

  • When there is lock contention, the biased lock will be upgraded to a lightweight lock, the thread will not be blocked, and the response speed of the program will be improved by spinning.

  • When more threads compete for lock resources, lightweight locks will be upgraded to heavyweight locks, and threads will block waiting for the lock to be released.

Avoiding locks

Synchronization is usually needed when accessing shared data. To avoid synchronization, simply do not share the data: if data is only ever accessed from a single thread, no synchronization is required. This technique is called thread confinement.

There are three main ways to achieve thread confinement:

  • Ad-hoc thread confinement: the responsibility for keeping data confined to a thread rests entirely on the implementation. It is very fragile, because no language feature such as visibility modifiers or local variables helps confine the object to a specific thread.

  • Stack confinement: put simply, thread confinement achieved through local variables. When multiple threads call the same method of an object, the method's local variables live on each thread's own stack, can only be accessed by the current thread, and do not interfere with one another, so local variables are never shared between threads.

  • The ThreadLocal class, which gives each thread its own copy of a value.
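A minimal sketch of stack confinement and ThreadLocal together (the formatter example is mine): the local variable lives only on the calling thread's stack, and ThreadLocal gives each thread its own SimpleDateFormat, so neither needs synchronization.

```java
import java.text.SimpleDateFormat;

// Minimal sketch of thread confinement: the local variable (stack confinement)
// and the per-thread SimpleDateFormat (ThreadLocal) are never shared, so no
// synchronization is needed even though SimpleDateFormat itself is not thread-safe.
public class ConfinementDemo {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(long millis) {
        java.util.Date date = new java.util.Date(millis); // stack-confined local variable
        return FORMAT.get().format(date);                 // each thread gets its own formatter
    }
}
```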

In addition, the JDK provides the Atomic classes (AtomicXXX), which use CAS to achieve thread-safe updates without locks.
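A minimal sketch of a CAS-based counter using AtomicLong: incrementAndGet performs an atomic read-modify-write without any lock.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch: AtomicLong uses CAS internally, so the counter can be
// updated from many threads without any lock.
public class CasCounter {
    private final AtomicLong count = new AtomicLong();

    public long increment() {
        return count.incrementAndGet();   // atomic read-modify-write via CAS
    }

    public long current() {
        return count.get();
    }
}
```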

Concurrent programming model

JUC mode

JUC refers to the concurrency classes under the java.util.concurrent package.

The classes in the JUC package fall into these groups:

  1. Thread safety related classes
  2. Thread pool framework
  3. Fork/Join framework

Thread safety-related classes include the following three categories:

  1. Atomic classes: AtomicLong, etc.
  2. Lock objects: ReentrantLock, ReadWriteLock, etc.
  3. Thread-safe containers: ConcurrentHashMap, CopyOnWriteArrayList, etc.
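A minimal sketch showing the thread-safe containers in use (the hotel-flavored names are mine): ConcurrentHashMap.merge performs an atomic per-key update, and CopyOnWriteArrayList supports safe concurrent appends.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of the thread-safe containers: both can be used from
// multiple threads without external synchronization.
public class JucContainers {
    private final Map<String, Integer> roomCleanCount = new ConcurrentHashMap<>();
    private final List<String> cleanedRooms = new CopyOnWriteArrayList<>();

    public void markCleaned(String room) {
        roomCleanCount.merge(room, 1, Integer::sum); // atomic per-key update
        cleanedRooms.add(room);                      // safe concurrent append
    }
}
```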

The class diagram is as follows:

ExecutorService mode

To improve the efficiency of room cleaning and staffing, the supervisor proposes a dynamic staffing scheme for cleaners (a thread pool). The hotel has at least 2 cleaners on duty every day (coreSize), and these 2 cleaners are dispatched whenever guests check out. To improve the service experience, if more than 2 guests check out at once, the supervisor calls in cleaners from other branches. Unfortunately, headquarters stipulates that each hotel may have at most 10 cleaners (maxSize). If no room is available, newly arriving guests can only wait (the queue). Hotel resources are limited, so the waiting line cannot grow without bound (a bounded queue), otherwise the hotel would be overwhelmed, though in that case the hotel would certainly be earning money. When the waiting line is also full, a reasonable strategy (rejection policy) is needed. The front desk can choose among: quietly turning away new guests (DiscardPolicy), turning them away and notifying headquarters (AbortPolicy, which throws a RejectedExecutionException), letting guests clean a room themselves (CallerRunsPolicy), or turning away the guest who has waited longest so the new guest can join the line (DiscardOldestPolicy).

This is the thread pool model. Its core concerns are three capacity settings (coreSize, maxSize, queueSize) and four rejection policies (DiscardPolicy, AbortPolicy, CallerRunsPolicy, DiscardOldestPolicy). Headquarters configures the pool parameters (coreSize, maxSize, and queueSize) according to the hotel's guest traffic, which is exactly how thread pool parameters should be tuned.

With the default ThreadPoolExecutor.AbortPolicy, the handler throws a RejectedExecutionException when a task is rejected. With ThreadPoolExecutor.CallerRunsPolicy, the thread that called execute runs the task itself, providing a simple feedback mechanism that slows down the submission of new tasks. With ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply discarded. With ThreadPoolExecutor.DiscardOldestPolicy, the task at the head of the queue is discarded and the new task is submitted again.
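A minimal sketch mapping the hotel analogy onto a ThreadPoolExecutor (the numbers are illustrative): 2 core cleaners, at most 10, a bounded queue of 100 waiting rooms, and CallerRunsPolicy as the rejection strategy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the hotel analogy as a thread pool: 2 cleaners on duty
// (core), at most 10 cleaners (max), a bounded waiting queue, and the
// caller-runs rejection policy ("guests clean the room themselves").
public class CleaningPool {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // coreSize: cleaners always on duty
                10,                                  // maxSize: headquarters' upper limit
                60L, TimeUnit.SECONDS,               // idle extra cleaners go home after 60s
                new ArrayBlockingQueue<>(100),       // bounded queue of waiting rooms
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection strategy

        for (int i = 0; i < 5; i++) {
            final int room = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " cleans room " + room));
        }
        pool.shutdown();
    }
}
```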

Fork/Join mode

When there are not many rooms to clean, the ExecutorService mode works very efficiently. But after 12 noon, a large number of vacated rooms need cleaning. If tasks are still distributed in the thread pool style, a cleaner who has just finished a room on the 4th floor may get a call from the supervisor telling her the next room is on the 1st floor. The supervisor found that this model is inefficient when the cleaning workload grows (many tasks).

To solve this problem, the cleaning supervisor came up with a plan: assign one floor to each cleaner (one thread, one work queue). Since some floors have many check-outs and others few, to balance fairness and efficiency a cleaner who finishes her own floor randomly goes to help on other floors (work stealing).

What the supervisor invented is the Fork/Join model. Fork/Join is similar to ExecutorService but differs in important ways: it splits a big task (the hotel) into small tasks (floors/rooms). The two main differences between the Fork/Join model and the ExecutorService model are:

(1) One thread, one work queue: each cleaner is responsible for one floor. (2) Work stealing: after finishing their own floor, cleaners randomly go to other floors to help.
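A minimal Fork/Join sketch (the CleaningTask class and its threshold are my own illustration): a RecursiveTask splits the room range in half until the pieces are small enough, forks one half onto the worker's own deque, and joins the results; idle workers steal queued halves.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal sketch: the whole hotel is split into floors (subtasks), each worker
// has its own deque, and idle workers steal work from the others.
public class CleaningTask extends RecursiveTask<Integer> {
    private final int fromRoom;
    private final int toRoom;

    CleaningTask(int fromRoom, int toRoom) {
        this.fromRoom = fromRoom;
        this.toRoom = toRoom;
    }

    @Override
    protected Integer compute() {
        if (toRoom - fromRoom <= 10) {          // small enough: clean this batch directly
            return toRoom - fromRoom;
        }
        int mid = (fromRoom + toRoom) / 2;      // otherwise fork into two halves
        CleaningTask left = new CleaningTask(fromRoom, mid);
        CleaningTask right = new CleaningTask(mid, toRoom);
        left.fork();                            // push one half onto this worker's queue
        return right.compute() + left.join();   // compute the other half, then join
    }

    public static void main(String[] args) {
        int cleaned = new ForkJoinPool().invoke(new CleaningTask(0, 200));
        System.out.println("rooms cleaned: " + cleaned);
    }
}
```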

Two core issues of concurrent programming

Multithreaded programming is the main way to realize concurrent programming. Threads can greatly improve system throughput and performance, but two core problems must be solved:

  1. Thread communication problem: how to exchange data and state between threads.
  2. Thread synchronization problem: How to ensure the correct operation of the program under race conditions.

For thread communication we can use wait/notify, shared variables, pipes, and other mechanisms, such as Object's wait/notify, Condition's await/signal, LockSupport's park/unpark, volatile, InheritableThreadLocal, PipedInputStream/PipedOutputStream, and so on.
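A minimal wait/notify sketch (the RoomStatus class is illustrative): one thread waits in a loop on the shared monitor until another thread changes the state and calls notifyAll.

```java
// Minimal sketch of wait/notify communication: the supervisor thread waits
// until a cleaner reports that the room is ready.
public class RoomStatus {
    private boolean ready = false;

    public synchronized void waitUntilReady() throws InterruptedException {
        while (!ready) {        // always re-check the condition in a loop
            wait();             // releases the monitor while waiting
        }
    }

    public synchronized void markReady() {
        ready = true;
        notifyAll();            // wake up any waiting supervisor threads
    }
}
```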

For thread synchronization, we can use lock-based techniques (synchronized, Lock) or lock-free techniques (CAS). For reasons of space, I will cover thread communication and thread synchronization in detail in a separate article.
