(1) High concurrency, short task execution time: the pool size can be set to roughly the number of CPU cores + 1, which keeps every core busy while minimizing thread context switching.
(2) Low concurrency, long task execution time: distinguish two cases:
a) If the time is spent mostly on IO (IO-intensive tasks), the IO operations do not occupy the CPU, so don't leave the CPU idle: increase the number of threads in the pool so the CPU can handle more work.
b) If the time is spent mostly on computation (CPU-intensive tasks), extra threads don't help; as in (1), keep the pool small to reduce context switching. A sizing sketch for both cases follows this list.
(3) High concurrency, long task execution time: the key to this kind of workload is not the thread pool but the overall architecture. The first step is to see whether some of the data these tasks use can be cached; the second is to add servers. For the thread pool settings themselves, refer to (2). Finally, long execution times may also warrant analyzing whether middleware can be used to split and decouple the tasks.
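A minimal Java sketch of the two workload types, assuming fixed-size pools are acceptable; the IO-bound multiplier of 2 is only an illustrative assumption and should be tuned to how much time a task actually spends blocked.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingByWorkload {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work (cases 1 and 2b): about cores + 1 threads keeps every
        // core busy with minimal context switching.
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cores + 1);

        // IO-bound work (case 2a): threads spend most of their time blocked on IO,
        // so more threads than cores are needed to keep the CPU busy. The factor
        // of 2 is an assumption for illustration, not a rule.
        ExecutorService ioBoundPool = Executors.newFixedThreadPool(cores * 2);

        cpuBoundPool.shutdown();
        ioBoundPool.shutdown();
    }
}
```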
How to set it up
The settings need to be worked out from a few values:
tasks: the number of tasks per second; assume 500~1000
taskcost: the time each task takes; assume 0.1s
responsetime: the maximum response time the system allows; assume 1s
Do the calculations
corePoolSize = how many threads are needed to handle the load each second?
threadcount = tasks/(1/taskcost) = tasks*taskcost = (500~1000)*0.1 = 50~100 threads, so corePoolSize should be set above 50.
Applying the 80/20 rule: since 80% of the time the number of tasks per second is below 800, corePoolSize can be set to 800*0.1 = 80.
queueCapacity = (corePoolSize/taskcost)*responsetime
Calculating: queueCapacity = 80/0.1*1 = 800. This means a task can sit in the queue for at most 1s; once that is exceeded, a new thread has to be created to execute it.
Never set the queue capacity to Integer.MAX_VALUE: the queue would be effectively unbounded, the thread count would stay at corePoolSize, and when tasks surge no new threads would be created, so response times would rise sharply.
maxPoolSize = (max(tasks) - queueCapacity)/(1/taskcost), i.e. (maximum number of tasks per second - queue capacity) / (tasks processed per second per thread) = maximum number of threads
Calculating: maxPoolSize = (1000 - 800)/10 = 20. Since ThreadPoolExecutor requires maximumPoolSize to be at least corePoolSize, read this as 20 extra threads on top of the 80 core threads to absorb the peak, i.e. a maximum pool size of around 100.
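The same arithmetic in code, as a sketch; the variable names are mine, and the final maxPoolSize of core + burst threads reflects the interpretation above rather than anything prescribed by the formula itself.

```java
public class PoolSizeCalculation {
    public static void main(String[] args) {
        int typicalTasksPerSecond = 800;   // 80/20 estimate of the per-second load
        int maxTasksPerSecond = 1000;      // peak load
        int taskCostMillis = 100;          // 0.1s of work per task
        int responseTimeMillis = 1000;     // maximum allowed response time: 1s

        // corePoolSize = tasks * taskcost
        int corePoolSize = typicalTasksPerSecond * taskCostMillis / 1000;                 // 80

        // queueCapacity = (corePoolSize / taskcost) * responsetime
        int queueCapacity = corePoolSize * responseTimeMillis / taskCostMillis;           // 800

        // maxPoolSize = (max(tasks) - queueCapacity) / (1 / taskcost)
        int tasksPerSecondPerThread = 1000 / taskCostMillis;                              // 10
        int burstThreads = (maxTasksPerSecond - queueCapacity) / tasksPerSecondPerThread; // 20

        // ThreadPoolExecutor requires maximumPoolSize >= corePoolSize, so treat
        // the 20 as headroom on top of the core pool.
        int maxPoolSize = corePoolSize + burstThreads;                                    // 100

        System.out.printf("corePoolSize=%d, queueCapacity=%d, maxPoolSize=%d%n",
                corePoolSize, queueCapacity, maxPoolSize);
    }
}
```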
rejectedExecutionHandler: decide based on the specific situation. If tasks are not important they can simply be discarded; if they are important, handle rejection with some buffering mechanism.
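A sketch of two built-in handlers that map onto those cases; which one is appropriate depends entirely on how much a lost task costs.

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class RejectionPolicies {
    // Unimportant tasks: silently drop whatever cannot be queued or executed.
    static final RejectedExecutionHandler DROP = new ThreadPoolExecutor.DiscardPolicy();

    // Important tasks: CallerRunsPolicy acts as a simple buffer -- the submitting
    // thread runs the task itself, which also applies back-pressure to producers.
    // Persisting rejected tasks for later replay is another common option.
    static final RejectedExecutionHandler BUFFER = new ThreadPoolExecutor.CallerRunsPolicy();
}
```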
keepAliveTime and allowCoreThreadTimeOut: the defaults are usually sufficient.
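Putting all of the values together, a sketch of the resulting pool; the 60-second keep-alive and the choice of CallerRunsPolicy are illustrative assumptions, not prescriptions from the text.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSetup {
    public static ThreadPoolExecutor build() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                80,                                    // corePoolSize from the calculation above
                100,                                   // maximumPoolSize: 80 core + 20 burst threads
                60, TimeUnit.SECONDS,                  // keepAliveTime for the non-core threads
                new LinkedBlockingQueue<>(800),        // bounded queue -- never Integer.MAX_VALUE
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection handling for important tasks

        // Default behaviour: core threads never time out. Enable this only if
        // idle core threads should also be reclaimed.
        // pool.allowCoreThreadTimeOut(true);

        return pool;
    }
}
```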