/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.*;

/**
 * An {@link ExecutorService} that executes each submitted task using
 * one of possibly several pooled threads, normally configured
 * using {@link Executors} factory methods.
 *
 * <p>Thread pools address two different problems: they usually
 * provide improved performance when executing large numbers of
 * asynchronous tasks, due to reduced per-task invocation overhead,
 * and they provide a means of bounding and managing the resources,
 * including threads, consumed when executing a collection of tasks.
 * Each {@code ThreadPoolExecutor} also maintains some basic
 * statistics, such as the number of completed tasks.
 *
 * <p>To be useful across a wide range of contexts, this class
 * provides many adjustable parameters and extensibility hooks.
 * However, programmers are urged to use the more convenient
 * {@link Executors} factory methods
 * {@link Executors#newCachedThreadPool} (unbounded thread pool, with
 * automatic thread reclamation),
 * {@link Executors#newFixedThreadPool} (fixed size thread pool) and
 * {@link Executors#newSingleThreadExecutor} (single background thread),
 * that preconfigure settings for the most common usage
 * scenarios. Otherwise, use the following guide when manually
 * configuring and tuning this class:
 *
 * <dl>
 *
 * <dt>Core and maximum pool sizes</dt>
 *
 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
 * pool size (see {@link #getPoolSize})
 * according to the bounds set by
 * corePoolSize (see {@link #getCorePoolSize}) and
 * maximumPoolSize (see {@link #getMaximumPoolSize}).
 *
 * When a new task is submitted in method {@link #execute(Runnable)},
 * and fewer than corePoolSize threads are running, a new thread is
 * created to handle the request, even if other worker threads are
 * idle.  If there are more than corePoolSize but less than
 * maximumPoolSize threads running, a new thread will be created only
 * if the queue is full.  By setting corePoolSize and maximumPoolSize
 * the same, you create a fixed-size thread pool.  By setting
 * maximumPoolSize to an essentially unbounded value such as {@code
 * Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
 * number of concurrent tasks.  Most typically, core and maximum pool
 * sizes are set only upon construction, but they may also be changed
 * dynamically using {@link #setCorePoolSize} and
 * {@link #setMaximumPoolSize}.
 *
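 * <p>For example, a pool that keeps four core threads but may grow to
 * eight under load could be configured as follows; the sizes and the
 * bounded queue are chosen only for illustration:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4,                                      // corePoolSize
 *     8,                                      // maximumPoolSize
 *     60L, TimeUnit.SECONDS,                  // keep-alive for threads beyond the core
 *     new ArrayBlockingQueue<Runnable>(100));
 * pool.setMaximumPoolSize(16);                // bounds may also be changed dynamically
 * }</pre>
 *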
 * <dt>On-demand construction</dt>
 *
 * <dd>By default, even core threads are initially created and
 * started only when new tasks arrive, but this can be overridden
 * dynamically using method {@link #prestartCoreThread} or
 * {@link #prestartAllCoreThreads}.  You probably want to prestart
 * threads if you construct the pool with a non-empty queue.
 *
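 * <p>For example (an illustrative sketch; {@code someTask} stands for
 * any {@code Runnable}), a pool built over a pre-filled queue can be
 * prestarted so that the queued work begins draining immediately:
 *
 * <pre> {@code
 * BlockingQueue<Runnable> preloaded = new LinkedBlockingQueue<Runnable>();
 * preloaded.add(someTask);                   // queue already holds work
 * ThreadPoolExecutor pool =
 *     new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS, preloaded);
 * pool.prestartAllCoreThreads();             // start both core threads now
 * }</pre>
 *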
 * <dt>Creating new threads</dt>
 *
 * <dd>New threads are created using a {@link ThreadFactory}.  If not
 * otherwise specified, a {@link Executors#defaultThreadFactory} is
 * used, which creates threads that are all in the same
 * {@link ThreadGroup}, with the same {@code NORM_PRIORITY} priority
 * and non-daemon status.  By supplying a different ThreadFactory, you
 * can alter the thread's name, thread group, priority, daemon status,
 * etc.  If a {@code ThreadFactory} fails to create a thread when
 * asked, by returning null from {@code newThread}, the executor will
 * continue, but might not be able to execute any tasks.  Threads
 * should possess the "modifyThread" {@code RuntimePermission}.  If
 * worker threads or other threads using the pool do not possess this
 * permission, service may be degraded: configuration changes may not
 * take effect in a timely manner, and a shutdown pool may remain in a
 * state in which termination is possible but not completed.
 *
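 * <p>For illustration, a factory that only changes the thread name and
 * daemon status (the name prefix here is arbitrary) might look like:
 *
 * <pre> {@code
 * ThreadFactory factory = new ThreadFactory() {
 *   private final AtomicInteger seq = new AtomicInteger();
 *   public Thread newThread(Runnable r) {
 *     Thread t = new Thread(r, "worker-" + seq.incrementAndGet());
 *     t.setDaemon(true);
 *     return t;
 *   }
 * };
 * ExecutorService pool = new ThreadPoolExecutor(
 *     2, 4, 30L, TimeUnit.SECONDS,
 *     new LinkedBlockingQueue<Runnable>(), factory);
 * }</pre>
 *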
 * <dt>Keep-alive times</dt>
 *
 * <dd>If the pool currently has more than corePoolSize threads,
 * excess threads will be terminated if they have been idle for more
 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
 * This provides a means of reducing resource consumption when the
 * pool is not being actively used.  If the pool becomes more active
 * later, new threads will be constructed.  This parameter can also be
 * changed dynamically using method {@link #setKeepAliveTime(long,
 * TimeUnit)}.  Using a value of {@code Long.MAX_VALUE} {@link
 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
 * terminating prior to shut down.  By default, the keep-alive policy
 * applies only when there are more than corePoolSize threads.  But
 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
 * apply this time-out policy to core threads as well, so long as the
 * keepAliveTime value is non-zero.
 *
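 * <p>For example (timeout values chosen only for illustration):
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 16, 30L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(64));
 * pool.setKeepAliveTime(2, TimeUnit.MINUTES);  // change the timeout dynamically
 * pool.allowCoreThreadTimeOut(true);           // let idle core threads expire too
 * }</pre>
 *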
 * <dt>Queuing</dt>
 *
 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
* submitted tasks. The use of this queue interacts with pool sizing:
 *
 * <ul>
 *
 * <li>If fewer than corePoolSize threads are running, the Executor
 * always prefers adding a new thread
 * rather than queuing.</li>
 *
 * <li>If corePoolSize or more threads are running, the Executor
 * always prefers queuing a request rather than adding a new
 * thread.</li>
 *
 * <li>If a request cannot be queued, a new thread is created unless
 * this would exceed maximumPoolSize, in which case, the task will be
 * rejected.</li>
 *
 * </ul>
 *
 * There are three general strategies for queuing:
 * <ol>
 *
 * <li><em>Direct handoffs.</em> A good default choice for a work
 * queue is a {@link SynchronousQueue} that hands off tasks to threads
 * without otherwise holding them.  Here, an attempt to queue a task
 * will fail if no threads are immediately available to run it, so a
 * new thread will be constructed.  This policy avoids lockups when
 * handling sets of requests that might have internal dependencies.
 * Direct handoffs generally require unbounded maximumPoolSizes to
 * avoid rejection of new submitted tasks.  This in turn admits the
 * possibility of unbounded thread growth when commands continue to
 * arrive on average faster than they can be processed.</li>
 *
 * <li><em>Unbounded queues.</em> Using an unbounded queue (for
 * example a {@link LinkedBlockingQueue} without a predefined
 * capacity) will cause new tasks to wait in the queue when all
 * corePoolSize threads are busy.  Thus, no more than corePoolSize
 * threads will ever be created.  (And the value of the maximumPoolSize
 * therefore doesn't have any effect.)  This may be appropriate when
 * each task is completely independent of others, so tasks cannot
 * affect each others execution; for example, in a web page server.
 * While this style of queuing can be useful in smoothing out
 * transient bursts of requests, it admits the possibility of
 * unbounded work queue growth when commands continue to arrive on
 * average faster than they can be processed.</li>
 *
 * <li><em>Bounded queues.</em> A bounded queue (for example, an
 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
 * used with finite maximumPoolSizes, but can be more difficult to
 * tune and control.  Queue sizes and maximum pool sizes may be traded
 * off for each other: Using large queues and small pools minimizes
 * CPU usage, OS resources, and context-switching overhead, but can
 * lead to artificially low throughput.  If tasks frequently block (for
 * example if they are I/O bound), a system may be able to schedule
 * time for more threads than you otherwise allow.  Use of small queues
 * generally requires larger pool sizes, which keeps CPUs busier but
 * may encounter unacceptable scheduling overhead, which also
 * decreases throughput.  (Illustrative configurations for these three
 * strategies are sketched after this list.)</li>
 *
 * </ol>
 *
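 * <p>Illustrative configurations for the three strategies above (all
 * sizes and capacities below are arbitrary examples):
 *
 * <pre> {@code
 * // Direct handoff: grow threads on demand, up to an effectively unbounded maximum
 * new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
 *                        new SynchronousQueue<Runnable>());
 *
 * // Unbounded queue: never more than corePoolSize threads; maximumPoolSize is ignored
 * new ThreadPoolExecutor(8, 8, 0L, TimeUnit.MILLISECONDS,
 *                        new LinkedBlockingQueue<Runnable>());
 *
 * // Bounded queue: queue capacity traded against maximum pool size
 * new ThreadPoolExecutor(8, 32, 60L, TimeUnit.SECONDS,
 *                        new ArrayBlockingQueue<Runnable>(256));
 * }</pre>
 *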
 * <dt>Rejected tasks</dt>
 *
 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
* rejected when the Executor has been shut down, and also when
* the Executor uses finite bounds for both maximum threads and work queue
* capacity, and is saturated. In either case, the {@code execute} method
* invokes the {@link
* RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
* method of its {@link RejectedExecutionHandler}. Four predefined handler
* policies are provided:
 *
 * <ol>
 *
 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the
 * handler throws a runtime {@link RejectedExecutionException} upon
 * rejection.</li>
 *
 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
 * that invokes {@code execute} itself runs the task.  This provides a
 * simple feedback control mechanism that will slow down the rate that
 * new tasks are submitted.</li>
 *
 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
 * cannot be executed is simply dropped.</li>
 *
 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
 * executor is not shut down, the task at the head of the work queue
 * is dropped, and then execution is retried (which can fail again,
 * causing this to be repeated.)  An illustrative configuration
 * follows this list.</li>
 *
 * </ol>
 *
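 * <p>For illustration, a handler can be chosen at construction time or
 * replaced later (the sizes here are arbitrary):
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 8, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(32),
 *     Executors.defaultThreadFactory(),
 *     new ThreadPoolExecutor.CallerRunsPolicy()); // throttle submitters when saturated
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
 * }</pre>
 *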
 * <dt>Hook methods</dt>
 *
 * <dd>This class provides {@code protected} overridable
* {@link #beforeExecute(Thread, Runnable)} and
* {@link #afterExecute(Runnable, Throwable)} methods that are called
* before and after execution of each task. These can be used to
* manipulate the execution environment; for example, reinitializing
* ThreadLocals, gathering statistics, or adding log entries.
* Additionally, method {@link #terminated} can be overridden to perform
* any special processing that needs to be done once the Executor has
* fully terminated.
 *
 * <p>If hook or callback methods throw exceptions, internal worker
 * threads may in turn fail and abruptly terminate.
 *
 * <dt>Queue maintenance</dt>
 *
 * <dd>Method {@link #getQueue()} allows access to the work queue
 * for purposes of monitoring and debugging.  Use of this method for
 * any other purpose is strongly discouraged.  Two supplied methods,
 * {@link #remove(Runnable)} and {@link #purge} are available to
 * assist in storage reclamation when large numbers of queued tasks
 * become cancelled.
 *
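 * <p>For example, given a {@code ThreadPoolExecutor pool} and a
 * long-running {@code Runnable task} defined elsewhere (both assumed
 * here only for illustration), cancelled futures can be cleaned out
 * proactively:
 *
 * <pre> {@code
 * Future<?> f = pool.submit(task);
 * f.cancel(false);   // the cancelled Future may linger in the work queue
 * pool.purge();      // drops cancelled Future tasks from the queue now
 * }</pre>
 *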
 * <dt>Finalization</dt>
 *
 * <dd>A pool that is no longer referenced in a program <em>AND</em>
 * has no remaining threads will be {@code shutdown} automatically.  If
 * you would like to ensure that unreferenced pools are reclaimed even
 * if users forget to call {@link #shutdown}, then you must arrange
 * that unused threads eventually die, by setting appropriate
 * keep-alive times, using a lower bound of zero core threads and/or
 * setting {@link #allowCoreThreadTimeOut(boolean)}.
 *
 * </dl>
 *
 * <p><b>Extension example</b>. Most extensions of this class
 * override one or more of the protected hook methods. For example,
 * here is a subclass that adds a simple pause/resume feature:
 *
 * <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...) { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public class ThreadPoolExecutor extends AbstractExecutorService {
    /**
     * The main pool control state, ctl, is an atomic integer packing
     * two conceptual fields:
     *   workerCount, indicating the effective number of threads
     *   runState,    indicating whether running, shutting down etc
     *
     * In order to pack them into one int, we limit workerCount to
     * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
     * billion) otherwise representable.  If this is ever an issue in
     * the future, the variable can be changed to be an AtomicLong,
     * and the shift/mask constants below adjusted.  But until the
     * need arises, this code is a bit faster and simpler using an int.
     *
     * The workerCount is the number of workers that have been
     * permitted to start and not permitted to stop.  The value may be
     * transiently different from the actual number of live threads,
     * for example when a ThreadFactory fails to create a thread when
     * asked, and when exiting threads are still performing
     * bookkeeping before terminating.  The user-visible pool size is
     * reported as the current size of the workers set.
     *
     * The runState provides the main lifecycle control, taking on values:
     *
     *   RUNNING:  Accept new tasks and process queued tasks
     *   SHUTDOWN: Don't accept new tasks, but process queued tasks
     *   STOP:     Don't accept new tasks, don't process queued tasks,
     *             and interrupt in-progress tasks
     *   TIDYING:  All tasks have terminated, workerCount is zero,
     *             the thread transitioning to state TIDYING
     *             will run the terminated() hook method
     *   TERMINATED: terminated() has completed
     *
     * The numerical order among these values matters, to allow
     * ordered comparisons. The runState monotonically increases over
     * time, but need not hit each state. The transitions are:
     *
     * RUNNING -> SHUTDOWN
     *    On invocation of shutdown(), perhaps implicitly in finalize()
     * (RUNNING or SHUTDOWN) -> STOP
     *    On invocation of shutdownNow()
     * SHUTDOWN -> TIDYING
     *    When both queue and pool are empty
     * STOP -> TIDYING
     *    When pool is empty
     * TIDYING -> TERMINATED
     *    When the terminated() hook method has completed
     *
     * Threads waiting in awaitTermination() will return when the
     * state reaches TERMINATED.
     *
     * Detecting the transition from SHUTDOWN to TIDYING is less
     * straightforward than you'd like because the queue may become
     * empty after non-empty and vice versa during SHUTDOWN state, but
     * we can only terminate if, after seeing that it is empty, we see
     * that workerCount is 0 (which sometimes entails a recheck -- see
     * below).
     */
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    /*
     * Bit field accessors that don't require unpacking ctl.
     * These depend on the bit layout and on workerCount being never negative.
     */

    private static boolean runStateLessThan(int c, int s) {
        return c < s;
    }

    private static boolean runStateAtLeast(int c, int s) {
        return c >= s;
    }

    private static boolean isRunning(int c) {
        return c < SHUTDOWN;
    }

    /**
     * Attempts to CAS-increment the workerCount field of ctl.
     */
    private boolean compareAndIncrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect + 1);
    }

    /**
     * Attempts to CAS-decrement the workerCount field of ctl.
     */
    private boolean compareAndDecrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect - 1);
    }

    /**
     * Decrements the workerCount field of ctl. This is called only on
     * abrupt termination of a thread (see processWorkerExit). Other
     * decrements are performed within getTask.
     */
    private void decrementWorkerCount() {
        do {} while (! compareAndDecrementWorkerCount(ctl.get()));
    }

    /**
     * The queue used for holding tasks and handing off to worker
     * threads.  We do not require that workQueue.poll() returning
     * null necessarily means that workQueue.isEmpty(), so rely
     * solely on isEmpty to see if the queue is empty (which we must
     * do for example when deciding whether to transition from
     * SHUTDOWN to TIDYING).  This accommodates special-purpose
     * queues such as DelayQueues for which poll() is allowed to
     * return null even if it may later return non-null when delays
     * expire.
     */
    private final BlockingQueue<Runnable> workQueue;
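
    /*
     * Illustration of the ctl packing above: with COUNT_BITS == 29 the
     * run state occupies the top three bits and the worker count the
     * low 29 bits.  For a RUNNING pool with three workers:
     *
     *   int c = ctlOf(RUNNING, 3);   // RUNNING | 3
     *   workerCountOf(c);            // == 3          (c & CAPACITY)
     *   runStateOf(c);               // == RUNNING    (c & ~CAPACITY)
     *   isRunning(c);                // true, since RUNNING < SHUTDOWN
     */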
    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters, the default thread factory and the default rejected
     * execution handler.
     *
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), defaultHandler);
    }
    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters and the default rejected execution handler.
     *
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code threadFactory} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             threadFactory, defaultHandler);
    }
    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters and the default thread factory.
     *
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code handler} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              RejectedExecutionHandler handler) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), handler);
    }
    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters.
     *
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code threadFactory} or {@code handler} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }
    /**
     * Initiates an orderly shutdown in which previously submitted
     * tasks are executed, but no new tasks will be accepted.
     * Invocation has no additional effect if already shut down.
     *
     * <p>This method does not wait for previously submitted tasks to
     * complete execution.  Use {@link #awaitTermination awaitTermination}
     * to do that.
     *
     * @throws SecurityException {@inheritDoc}
     */
    public void shutdown() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            advanceRunState(SHUTDOWN);
            interruptIdleWorkers();
            onShutdown(); // hook for ScheduledThreadPoolExecutor
        } finally {
            mainLock.unlock();
        }
        tryTerminate();
    }

    /**
     * Attempts to stop all actively executing tasks, halts the
     * processing of waiting tasks, and returns a list of the tasks
     * that were awaiting execution. These tasks are drained (removed)
     * from the task queue upon return from this method.
     *
     * <p>This method does not wait for actively executing tasks to
     * terminate.  Use {@link #awaitTermination awaitTermination} to
     * do that.
     *
     * <p>There are no guarantees beyond best-effort attempts to stop
* processing actively executing tasks. This implementation
* cancels tasks via {@link Thread#interrupt}, so any task that
* fails to respond to interrupts may never terminate.
*
* @throws SecurityException {@inheritDoc}
*/
    public List<Runnable> shutdownNow() {
        List<Runnable> tasks;
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            advanceRunState(STOP);
            interruptWorkers();
            tasks = drainQueue();
        } finally {
            mainLock.unlock();
        }
        tryTerminate();
        return tasks;
    }

    /**
     * Removes this task from the executor's internal queue if it is
     * present, thus causing it not to be run if it has not already
     * started.
     *
     * <p>This method may be useful as one part of a cancellation
* scheme. It may fail to remove tasks that have been converted
* into other forms before being placed on the internal queue. For
* example, a task entered using {@code submit} might be
* converted into a form that maintains {@code Future} status.
* However, in such cases, method {@link #purge} may be used to
* remove those Futures that have been cancelled.
*
* @param task the task to remove
* @return {@code true} if the task was removed
*/
public boolean remove(Runnable task) {
boolean removed = workQueue.remove(task);
tryTerminate(); // In case SHUTDOWN and now empty
return removed;
}
/**
* Tries to remove from the work queue all {@link Future}
* tasks that have been cancelled. This method can be useful as a
* storage reclamation operation, that has no other impact on
* functionality. Cancelled tasks are never executed, but may
* accumulate in work queues until worker threads can actively
* remove them. Invoking this method instead tries to remove them now.
* However, this method may fail to remove tasks in
* the presence of interference by other threads.
*/
public void purge() {
        final BlockingQueue<Runnable> q = workQueue;
        try {
            Iterator<Runnable> it = q.iterator();
            while (it.hasNext()) {
                Runnable r = it.next();
                if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
                    it.remove();
            }
        } catch (ConcurrentModificationException fallThrough) {
            // Take slow path if we encounter interference during traversal.
            // Make a copy for traversal and call remove for cancelled entries.
            // The slow path is more likely to be O(N*N).
            for (Object r : q.toArray())
                if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
                    q.remove(r);
        }
        tryTerminate(); // In case SHUTDOWN and now empty
    }

    /**
     * Method invoked prior to executing the given Runnable in the
     * given thread.  This method is invoked by thread {@code t} that
     * will execute task {@code r}, and may be used to re-initialize
     * ThreadLocals, or to perform logging.
     *
     * <p>This implementation does nothing, but may be customized in
* subclasses. Note: To properly nest multiple overridings, subclasses
* should generally invoke {@code super.beforeExecute} at the end of
* this method.
*
* @param t the thread that will run task {@code r}
* @param r the task that will be executed
*/
protected void beforeExecute(Thread t, Runnable r) { }
/**
* Method invoked upon completion of execution of the given Runnable.
* This method is invoked by the thread that executed the task. If
* non-null, the Throwable is the uncaught {@code RuntimeException}
* or {@code Error} that caused execution to terminate abruptly.
*
     * <p>This implementation does nothing, but may be customized in
* subclasses. Note: To properly nest multiple overridings, subclasses
* should generally invoke {@code super.afterExecute} at the
* beginning of this method.
*
     * <p><b>Note:</b> When actions are enclosed in tasks (such as
* {@link FutureTask}) either explicitly or via methods such as
* {@code submit}, these task objects catch and maintain
* computational exceptions, and so they do not cause abrupt
* termination, and the internal exceptions are not
* passed to this method. If you would like to trap both kinds of
* failures in this method, you can further probe for such cases,
* as in this sample subclass that prints either the direct cause
* or the underlying exception if a task has been aborted:
*
     * <pre> {@code
     * class ExtendedExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void afterExecute(Runnable r, Throwable t) {
     *     super.afterExecute(r, t);
     *     if (t == null && r instanceof Future<?>) {
     *       try {
     *         Object result = ((Future<?>) r).get();
     *       } catch (CancellationException ce) {
     *         t = ce;
     *       } catch (ExecutionException ee) {
     *         t = ee.getCause();
     *       } catch (InterruptedException ie) {
     *         Thread.currentThread().interrupt(); // ignore/reset
     *       }
     *     }
     *     if (t != null)
     *       System.out.println(t);
     *   }
     * }}</pre>
*
* @param r the runnable that has completed
* @param t the exception that caused termination, or null if
* execution completed normally
*/
protected void afterExecute(Runnable r, Throwable t) { }
/**
* Method invoked when the Executor has terminated. Default
* implementation does nothing. Note: To properly nest multiple
* overridings, subclasses should generally invoke
* {@code super.terminated} within this method.
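     *
     * <p>For example (an illustrative sketch, following the same
     * abbreviated constructor convention as the other examples in this
     * class), a subclass could perform cleanup or logging once the pool
     * has fully terminated:
     *
     * <pre> {@code
     * class CleanupThreadPoolExecutor extends ThreadPoolExecutor {
     *   public CleanupThreadPoolExecutor(...) { super(...); }
     *   protected void terminated() {
     *     try {
     *       System.out.println("pool terminated");
     *     } finally {
     *       super.terminated();
     *     }
     *   }
     * }}</pre>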
*/
protected void terminated() { }
/* Predefined RejectedExecutionHandlers */
/**
* A handler for rejected tasks that runs the rejected task
* directly in the calling thread of the {@code execute} method,
* unless the executor has been shut down, in which case the task
* is discarded.
*/
public static class CallerRunsPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code CallerRunsPolicy}.
*/
public CallerRunsPolicy() { }
/**
* Executes task r in the caller's thread, unless the executor
* has been shut down, in which case the task is discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
r.run();
}
}
}
/**
* A handler for rejected tasks that throws a
* {@code RejectedExecutionException}.
*/
public static class AbortPolicy implements RejectedExecutionHandler {
/**
* Creates an {@code AbortPolicy}.
*/
public AbortPolicy() { }
/**
* Always throws RejectedExecutionException.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
* @throws RejectedExecutionException always
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
throw new RejectedExecutionException("Task " + r.toString() +
" rejected from " +
e.toString());
}
}
/**
* A handler for rejected tasks that silently discards the
* rejected task.
*/
public static class DiscardPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardPolicy}.
*/
public DiscardPolicy() { }
/**
* Does nothing, which has the effect of discarding task r.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}
}
/**
* A handler for rejected tasks that discards the oldest unhandled
* request and then retries {@code execute}, unless the executor
* is shut down, in which case the task is discarded.
*/
public static class DiscardOldestPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardOldestPolicy} for the given executor.
*/
public DiscardOldestPolicy() { }
/**
* Obtains and ignores the next task that the executor
* would otherwise execute, if one is immediately available,
* and then retries execution of task r, unless the executor
* is shut down, in which case task r is instead discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
e.getQueue().poll();
e.execute(r);
}
}
}
}