Executors is a convenience utility class for creating thread pools; it provides several factory methods, for example:

Create a fixed-size thread pool:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
Create a thread pool that grows and shrinks with demand:
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
Create a thread pool with a single thread:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
As these methods show, they are all thin wrappers around ThreadPoolExecutor. Here is one of ThreadPoolExecutor's constructors:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         threadFactory, defaultHandler);
}
The parameters mean the following:
corePoolSize: the number of core threads kept in the pool;
maximumPoolSize: the maximum number of threads the pool may hold;
keepAliveTime: when the pool holds more than corePoolSize threads, the time an excess idle thread may remain alive before being terminated;
unit: the time unit of keepAliveTime;
workQueue: the queue holding tasks that have been submitted but not yet executed;
threadFactory: the factory used to create new threads; the default is usually fine;
handler: the rejection policy, i.e. what to do when tasks arrive faster than they can be handled. (The constructor shown above passes defaultHandler; the full seven-argument constructor accepts the handler explicitly.)
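To make the parameters concrete, here is a minimal sketch that calls the full seven-argument constructor directly. The class name and all sizing values (2 core threads, 4 maximum, a queue of 10) are hypothetical choices for illustration, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    // Hypothetical configuration, one argument per constructor parameter.
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),         // workQueue (bounded)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (the default policy)
        );
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getCorePoolSize());    // 2
        System.out.println(pool.getMaximumPoolSize()); // 4
        pool.shutdown();
    }
}
```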
The workQueue parameter holds tasks that have been submitted but not yet executed. It is an object implementing the BlockingQueue interface and stores only Runnable objects. Classified by behavior, several kinds of BlockingQueue can be used in the ThreadPoolExecutor constructor:
1. SynchronousQueue: a direct-handoff queue. It has no capacity; every insert must wait for a corresponding remove, and vice versa. With a SynchronousQueue, submitted tasks are never actually stored: each new task is handed straight to a thread. If no thread is idle, the pool tries to create a new one; if the thread count has already reached the maximum, the rejection policy runs. A SynchronousQueue therefore usually requires a very large maximumPoolSize, otherwise rejection is triggered easily.
2. ArrayBlockingQueue: a bounded task queue. Its constructor requires a capacity argument giving the queue's maximum size:
public ArrayBlockingQueue(int capacity)
With a bounded queue, when a new task arrives and the pool currently has fewer than corePoolSize threads, a new thread is created first; at or above corePoolSize, the task is appended to the queue instead. If the task cannot be enqueued (the queue is full) and the total thread count is still below maximumPoolSize, a new thread is created to run it; once maximumPoolSize is reached, the rejection policy runs. In other words, a bounded queue only raises the thread count above corePoolSize when the queue is full; unless the system is very busy, the pool stays at corePoolSize threads.
3. LinkedBlockingQueue: an unbounded task queue. Short of exhausting system resources, enqueueing never fails. When a new task arrives and the pool has fewer than corePoolSize threads, a new thread is created; once corePoolSize is reached, the thread count stops growing. Any further task that finds no idle thread simply waits in the queue. If tasks arrive much faster than they are processed, the unbounded queue keeps growing until it exhausts system memory.
4. PriorityBlockingQueue: a priority task queue that controls the order in which tasks are executed. It is a special unbounded queue: instead of first-in-first-out, tasks are taken according to their priority (natural ordering or a supplied Comparator).
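The priority ordering is easy to demonstrate with the queue on its own; regardless of insertion order, take() always returns the smallest element first (under natural ordering):

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Integer> q = new PriorityBlockingQueue<>();
        // Insert out of order...
        q.put(3);
        q.put(1);
        q.put(2);
        // ...and elements come out by priority, not insertion order.
        System.out.println(q.take()); // 1
        System.out.println(q.take()); // 2
        System.out.println(q.take()); // 3
    }
}
```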
The official documentation explains the workQueue parameter as follows:
Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:
If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.
There are three general strategies for queuing:
Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn’t have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
In addition, ThreadPoolExecutor ships four built-in rejection policies:
AbortPolicy (the default): throws a RejectedExecutionException;
CallerRunsPolicy: runs the rejected task directly in the thread that called execute;
DiscardOldestPolicy: drops the oldest task at the head of the queue, then retries the submission;
DiscardPolicy: silently discards the rejected task.
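As a quick illustration of rejection in action, the sketch below (class and method names are hypothetical) uses a single-thread pool with a SynchronousQueue: while the one worker is blocked, a second submission finds neither an idle thread nor a queue slot, so the default AbortPolicy throws RejectedExecutionException:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Returns true if the second submission is rejected while the single
    // worker is busy and the SynchronousQueue cannot hold the task.
    static boolean secondTaskRejected() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.AbortPolicy()); // default policy: throw
        CountDownLatch hold = new CountDownLatch(1);
        pool.execute(() -> {
            try { hold.await(); } catch (InterruptedException ignored) { }
        });
        boolean rejected = false;
        try {
            pool.execute(() -> { }); // no idle worker, no queue slot -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        hold.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(secondTaskRejected()); // true
    }
}
```

Swapping in DiscardPolicy would make the same scenario drop the task silently instead of throwing.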
Next, let us take a brief look at how the thread pool actually executes tasks.
We normally submit work through the execute method; here is roughly what it does:
public void execute(Runnable command) {
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
As the code shows, when the current thread count is below corePoolSize, the task is scheduled for execution directly via addWorker(). Otherwise workQueue.offer(command) tries to place it in the waiting queue; if enqueueing fails, the task is submitted to the pool again via addWorker(command, false). If the thread count has already reached maximumPoolSize, that submission also fails and the rejection policy runs.
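This three-step decision (core threads first, then the queue, then extra threads up to the maximum) can be observed directly. In the sketch below (class and method names are hypothetical), every task blocks on a latch so workers stay busy, making each observation deterministic:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecuteFlowDemo {
    // Submits blocking tasks to a pool (core=2, max=4, queue capacity=2)
    // and records {pool size after 2 tasks, queue size after 4, pool size after 5}.
    static int[] observe() {
        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException ignored) { }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(2));

        pool.execute(blocker);
        pool.execute(blocker);
        int afterCore = pool.getPoolSize();   // 2: core threads are created first
        pool.execute(blocker);
        pool.execute(blocker);
        int queued = pool.getQueue().size();  // 2: further tasks go to the queue
        pool.execute(blocker);
        int afterBurst = pool.getPoolSize();  // 3: queue full, an extra thread is added
        hold.countDown();
        pool.shutdown();
        return new int[] { afterCore, queued, afterBurst };
    }

    public static void main(String[] args) {
        int[] r = observe();
        System.out.printf("core=%d queued=%d burst=%d%n", r[0], r[1], r[2]);
    }
}
```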
private boolean addWorker(Runnable firstTask, boolean core) {
    // ... some code omitted ...
    boolean workerStarted = false;
    boolean workerAdded = false;
    Worker w = null;
    try {
        w = new Worker(firstTask); // wrap the task in a Worker; a Worker is itself a Runnable
        final Thread t = w.thread;
        if (t != null) {
            // ... some code omitted ...
            if (workerAdded) {
                t.start(); // start the worker thread
                workerStarted = true;
            }
        }
    } finally {
        // ... some code omitted ...
    }
    return workerStarted;
}
As this method shows, the submitted task is actually wrapped in a Worker object, and a Worker is itself a Runnable. The worker's thread field is retrieved and its start() method launches the task. Here is Worker's constructor:
private final class Worker
        extends AbstractQueuedSynchronizer
        implements Runnable {
    final Thread thread;
    Runnable firstTask;

    Worker(Runnable firstTask) {
        this.firstTask = firstTask;
        this.thread = getThreadFactory().newThread(this); // note: `this` — the Worker itself is the Runnable
    }

    public void run() {
        runWorker(this);
    }
}
The constructor shows that this thread is created by the ThreadFactory passed into the ThreadPoolExecutor constructor, and that `this` — the Worker object — is handed to the factory as the thread's Runnable. So when addWorker() calls t.start(), the new thread invokes the worker's run() method, which in turn calls runWorker(this). Let us look at that method:
final void runWorker(Worker w) {
    Thread wt = Thread.currentThread();
    Runnable task = w.firstTask;
    w.firstTask = null;
    w.unlock(); // allow interrupts
    boolean completedAbruptly = true;
    try {
        while (task != null || (task = getTask()) != null) {
            // ... some code omitted ...
            try {
                beforeExecute(wt, task);
                Throwable thrown = null;
                try {
                    task.run(); // invoke run() directly to execute the task
                } catch (RuntimeException x) {
                    thrown = x; throw x;
                } catch (Error x) {
                    thrown = x; throw x;
                } catch (Throwable x) {
                    thrown = x; throw new Error(x);
                } finally {
                    afterExecute(task, thrown);
                }
            } finally {
                task = null;
                w.completedTasks++;
                w.unlock();
            }
        }
        completedAbruptly = false;
    } finally {
        processWorkerExit(w, completedAbruptly);
    }
}
Note the line task.run(): this task is the Runnable we originally passed to execute. You may wonder why run() can be called directly here, since that is not how we usually start a task. I had the same doubt at first, but the answer is that all of this code already runs inside the thread that addWorker() started; that thread is scheduled by the operating system, so calling run() directly simply executes the task on the worker thread rather than spawning yet another one.
Also visible here are the beforeExecute(wt, task) and afterExecute(task, thrown) hooks, both empty by default. If you need to record when each task starts and finishes, or add other custom instrumentation, you can extend ThreadPoolExecutor and override these two methods.
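A minimal sketch of that extension point (the class name and the ThreadLocal-based timing scheme are illustrative choices, not the only way to do it): beforeExecute records a start time on the worker thread, and afterExecute computes the elapsed time for the task that just ran.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TimingPool extends ThreadPoolExecutor {
    // Per-worker-thread start time; both hooks run on the worker thread.
    private final ThreadLocal<Long> startNanos = new ThreadLocal<>();
    final AtomicLong lastElapsedNanos = new AtomicLong(-1);

    public TimingPool(int core, int max) {
        super(core, max, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startNanos.set(System.nanoTime()); // just before task.run()
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        lastElapsedNanos.set(System.nanoTime() - startNanos.get());
        super.afterExecute(r, t);
    }

    // Convenience helper: run one task and return how long it took.
    static long timeOneTask(Runnable task) {
        TimingPool pool = new TimingPool(1, 1);
        pool.execute(task);
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return pool.lastElapsedNanos.get();
    }

    public static void main(String[] args) {
        System.out.println("task took " + timeOneTask(() -> { }) + " ns");
    }
}
```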
Note: from the Alibaba Java Development Manual:
Thread pools must not be created with Executors; use the ThreadPoolExecutor constructor instead. This makes the pool's operating rules explicit to whoever writes the code and avoids the risk of resource exhaustion.
Explanation:
① FixedThreadPool and SingleThreadPool:
the allowed request queue length is Integer.MAX_VALUE, so requests can pile up and cause an OOM.
② CachedThreadPool and ScheduledThreadPool:
the allowed number of threads is Integer.MAX_VALUE, so a huge number of threads can be created and cause an OOM.
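Both claims in the manual can be verified directly against the factory methods (the class and method names below are hypothetical): the fixed pool's backing LinkedBlockingQueue has no capacity bound, and the cached pool's maximum thread count is Integer.MAX_VALUE.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ExecutorsRiskDemo {
    // newFixedThreadPool backs onto an unbounded LinkedBlockingQueue.
    static boolean fixedQueueIsUnbounded() {
        ThreadPoolExecutor fixed = (ThreadPoolExecutor) Executors.newFixedThreadPool(1);
        boolean unbounded = fixed.getQueue().remainingCapacity() == Integer.MAX_VALUE;
        fixed.shutdown();
        return unbounded;
    }

    // newCachedThreadPool allows up to Integer.MAX_VALUE threads.
    static boolean cachedThreadsAreUnbounded() {
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        boolean unbounded = cached.getMaximumPoolSize() == Integer.MAX_VALUE;
        cached.shutdown();
        return unbounded;
    }

    public static void main(String[] args) {
        System.out.println(fixedQueueIsUnbounded());     // true
        System.out.println(cachedThreadsAreUnbounded()); // true
    }
}
```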
Reference: Java High-Concurrency Programming in Practice (《实战Java高并发程序设计》), Ge Yiming and Guo Chao.