Overview
The previous chapter, "Java多线程系列--“JUC线程池”02之 线程池原理(一)", covered the thread pool's data structures. This chapter walks through the thread pool's source code itself. It covers:
- Thread pool example
- Reference code (based on JDK 1.7.0_40)
- Thread pool source analysis
  - (1) Creating the thread pool
  - (2) Adding tasks to the thread pool
  - (3) Shutting down the thread pool
Please credit the source when reposting: http://www.cnblogs.com/skywang12345/p/3509954.html
Thread pool example
Before analyzing the thread pool, let's look at a simple example.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolDemo1 {

    public static void main(String[] args) {
        // Create a thread pool that reuses a fixed number of threads
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Create objects implementing Runnable (Thread of course implements Runnable too)
        Thread ta = new MyThread();
        Thread tb = new MyThread();
        Thread tc = new MyThread();
        Thread td = new MyThread();
        Thread te = new MyThread();
        // Submit the tasks to the pool for execution
        pool.execute(ta);
        pool.execute(tb);
        pool.execute(tc);
        pool.execute(td);
        pool.execute(te);
        // Shut down the pool
        pool.shutdown();
    }
}

class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is running.");
    }
}
```
Sample output (the interleaving of the two worker threads may differ from run to run):
```
pool-1-thread-1 is running.
pool-1-thread-2 is running.
pool-1-thread-1 is running.
pool-1-thread-2 is running.
pool-1-thread-1 is running.
```
The example covers the three main steps: creating the thread pool, adding tasks to it, and shutting it down. We will analyze ThreadPoolExecutor along these three lines below.
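The same three steps can be sketched with an explicit wait for completion. Note that shutdown() does not block; the 5-second awaitTermination timeout below is an arbitrary value chosen for this demo:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        // Step 1: create the pool
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Step 2: submit tasks
        for (int i = 0; i < 5; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(Thread.currentThread().getName() + " is running.");
                }
            });
        }
        // Step 3: shut down -- stop accepting new tasks, let queued ones finish
        pool.shutdown();
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();  // interrupt tasks still running after the timeout
        }
    }
}
```

Unlike the first example, this variant guarantees the main thread does not exit before the queued tasks have run (or the timeout elapses).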
Reference code (based on JDK 1.7.0_40)
Executors source
```java
/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 *
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.security.PrivilegedExceptionAction;
import java.security.PrivilegedActionException;
import java.security.AccessControlException;
import sun.security.util.SecurityConstants;

/**
 * Factory and utility methods for {@link Executor}, {@link ExecutorService},
 * {@link ScheduledExecutorService}, {@link ThreadFactory}, and {@link Callable}
 * classes defined in this package. It provides methods that create
 * preconfigured ExecutorService and ScheduledExecutorService instances,
 * "wrapped" services whose configuration cannot be changed, ThreadFactory
 * objects that set newly created threads to a known state, and Callable
 * objects built out of other closure-like forms.
 *
 * @since 1.5
 * @author Doug Lea
 */
public class Executors {

    /**
     * Creates a thread pool that reuses a fixed number of threads operating
     * off a shared unbounded queue. At any point, at most nThreads threads
     * will be active processing tasks; additional tasks wait in the queue
     * until a thread is available. If any thread terminates due to a failure
     * prior to shutdown, a new one takes its place if needed.
     *
     * @throws IllegalArgumentException if {@code nThreads <= 0}
     */
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

    /**
     * Like {@link #newFixedThreadPool(int)}, but uses the provided
     * ThreadFactory to create new threads when needed.
     *
     * @throws NullPointerException if threadFactory is null
     * @throws IllegalArgumentException if {@code nThreads <= 0}
     */
    public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>(),
                                      threadFactory);
    }

    /**
     * Creates an Executor that uses a single worker thread operating off an
     * unbounded queue. Tasks are guaranteed to execute sequentially, and no
     * more than one task will be active at any given time. Unlike the
     * otherwise equivalent newFixedThreadPool(1), the returned executor is
     * guaranteed not to be reconfigurable to use additional threads.
     */
    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

    /**
     * Like {@link #newSingleThreadExecutor()}, but uses the provided
     * ThreadFactory to create the worker thread.
     *
     * @throws NullPointerException if threadFactory is null
     */
    public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>(),
                                    threadFactory));
    }

    /**
     * Creates a thread pool that creates new threads as needed, but reuses
     * previously constructed threads when they are available. Threads that
     * have not been used for sixty seconds are terminated and removed from
     * the cache, so a pool that remains idle for long enough will not
     * consume any resources.
     */
    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    /**
     * Like {@link #newCachedThreadPool()}, but uses the provided
     * ThreadFactory to create new threads when needed.
     *
     * @throws NullPointerException if threadFactory is null
     */
    public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>(),
                                      threadFactory);
    }

    /**
     * Creates a single-threaded executor that can schedule commands to run
     * after a given delay, or to execute periodically. Unlike the otherwise
     * equivalent newScheduledThreadPool(1), the returned executor is
     * guaranteed not to be reconfigurable to use additional threads.
     */
    public static ScheduledExecutorService newSingleThreadScheduledExecutor() {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1));
    }

    /**
     * Like {@link #newSingleThreadScheduledExecutor()}, but uses the provided
     * ThreadFactory to create the worker thread.
     *
     * @throws NullPointerException if threadFactory is null
     */
    public static ScheduledExecutorService newSingleThreadScheduledExecutor(ThreadFactory threadFactory) {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1, threadFactory));
    }

    /**
     * Creates a thread pool that can schedule commands to run after a given
     * delay, or to execute periodically.
     *
     * @param corePoolSize the number of threads to keep in the pool,
     *        even if they are idle
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     */
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    /**
     * Like {@link #newScheduledThreadPool(int)}, but uses the provided
     * ThreadFactory when the executor creates a new thread.
     *
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     * @throws NullPointerException if threadFactory is null
     */
    public static ScheduledExecutorService newScheduledThreadPool(
            int corePoolSize, ThreadFactory threadFactory) {
        return new ScheduledThreadPoolExecutor(corePoolSize, threadFactory);
    }

    /**
     * Returns an object that delegates all defined {@link ExecutorService}
     * methods to the given executor, but not any other methods that might
     * otherwise be accessible using casts. This provides a way to safely
     * "freeze" configuration and disallow tuning of a given concrete
     * implementation.
     *
     * @throws NullPointerException if executor null
     */
    public static ExecutorService unconfigurableExecutorService(ExecutorService executor) {
        if (executor == null)
            throw new NullPointerException();
        return new DelegatedExecutorService(executor);
    }

    /**
     * The {@link ScheduledExecutorService} analogue of
     * {@link #unconfigurableExecutorService}.
     *
     * @throws NullPointerException if executor null
     */
    public static ScheduledExecutorService unconfigurableScheduledExecutorService(ScheduledExecutorService executor) {
        if (executor == null)
            throw new NullPointerException();
        return new DelegatedScheduledExecutorService(executor);
    }

    /**
     * Returns a default thread factory used to create new threads. This
     * factory creates all new threads used by an Executor in the same
     * {@link ThreadGroup}. Each new thread is created as a non-daemon thread
     * with priority set to the smaller of Thread.NORM_PRIORITY and the
     * maximum priority permitted in the thread group, and is named
     * pool-N-thread-M, where N is the sequence number of this factory and M
     * the sequence number of the thread created by this factory.
     */
    public static ThreadFactory defaultThreadFactory() {
        return new DefaultThreadFactory();
    }

    /**
     * Returns a thread factory used to create new threads that have the same
     * permissions as the current thread: the same settings as
     * {@link #defaultThreadFactory}, additionally setting the
     * AccessControlContext and contextClassLoader of new threads to be the
     * same as the thread invoking this method.
     *
     * @throws AccessControlException if the current access control context
     *         does not have permission to both get and set context class loader
     */
    public static ThreadFactory privilegedThreadFactory() {
        return new PrivilegedThreadFactory();
    }

    /**
     * Returns a {@link Callable} object that, when called, runs the given
     * task and returns the given result. This can be useful when applying
     * methods requiring a Callable to an otherwise resultless action.
     *
     * @throws NullPointerException if task null
     */
    public static <T> Callable<T> callable(Runnable task, T result) {
        if (task == null)
            throw new NullPointerException();
        return new RunnableAdapter<T>(task, result);
    }

    /**
     * Returns a {@link Callable} object that, when called, runs the given
     * task and returns null.
     *
     * @throws NullPointerException if task null
     */
    public static Callable<Object> callable(Runnable task) {
        if (task == null)
            throw new NullPointerException();
        return new RunnableAdapter<Object>(task, null);
    }

    // ... (remaining privilegedCallable adapters and non-public helper
    //      classes omitted from this excerpt)
}
```
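Each factory method above is a thin wrapper over a ThreadPoolExecutor (or ScheduledThreadPoolExecutor) constructor. As a sketch, the fixed pool from the demo can be built by hand with the exact arguments that newFixedThreadPool(2) passes internally:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolByHand {
    public static void main(String[] args) {
        // corePoolSize == maximumPoolSize == 2, zero keep-alive (irrelevant,
        // since the pool never grows past its core size), unbounded FIFO queue.
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.execute(new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getName() + " is running.");
            }
        });
        pool.shutdown();
    }
}
```

Constructing the executor directly like this exposes the parameters the factory method hides, which is useful when you later want a bounded queue or a custom RejectedExecutionHandler.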
ThreadPoolExecutor source
```java
/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 *
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.*;

/**
 * An {@link ExecutorService} that executes each submitted task using one of
 * possibly several pooled threads, normally configured using {@link Executors}
 * factory methods.
 *
 * Thread pools address two different problems: they usually provide improved
 * performance when executing large numbers of asynchronous tasks, due to
 * reduced per-task invocation overhead, and they provide a means of bounding
 * and managing the resources, including threads, consumed when executing a
 * collection of tasks. Each {@code ThreadPoolExecutor} also maintains some
 * basic statistics, such as the number of completed tasks.
 *
 * Programmers are urged to use the more convenient {@link Executors} factory
 * methods newCachedThreadPool (unbounded thread pool with automatic thread
 * reclamation), newFixedThreadPool (fixed-size thread pool) and
 * newSingleThreadExecutor (single background thread), which preconfigure
 * settings for the most common usage scenarios. When configuring this class
 * manually (tuning guide, abridged from the full javadoc):
 *
 * Core and maximum pool sizes. When a task is submitted via execute() and
 * fewer than corePoolSize threads are running, a new thread is created even
 * if other workers are idle. If more than corePoolSize but fewer than
 * maximumPoolSize threads are running, a new thread is created only if the
 * queue is full. Both bounds may be changed dynamically via setCorePoolSize
 * and setMaximumPoolSize.
 *
 * On-demand construction. By default, even core threads are created and
 * started only when new tasks arrive; prestartCoreThread and
 * prestartAllCoreThreads override this, which matters if the pool is
 * constructed with a non-empty queue.
 *
 * Creating new threads. New threads are created by a {@link ThreadFactory};
 * by default Executors.defaultThreadFactory (same ThreadGroup, NORM_PRIORITY,
 * non-daemon). If newThread returns null the executor continues but might
 * not be able to execute any tasks.
 *
 * Keep-alive times. Threads in excess of corePoolSize are terminated after
 * being idle longer than keepAliveTime; allowCoreThreadTimeOut(boolean)
 * extends this policy to core threads as long as keepAliveTime is non-zero.
 *
 * Queuing. Any {@link BlockingQueue} may be used. If fewer than corePoolSize
 * threads are running, the Executor always prefers adding a new thread over
 * queuing; at corePoolSize or more, it always prefers queuing; only if the
 * queue is full is a new thread added, and if that would exceed
 * maximumPoolSize the task is rejected. Three general strategies:
 * direct handoffs ({@link SynchronousQueue}; generally needs an unbounded
 * maximumPoolSize), unbounded queues ({@link LinkedBlockingQueue} without a
 * capacity; maximumPoolSize then has no effect), and bounded queues
 * ({@link ArrayBlockingQueue}; queue size and pool size trade off against
 * each other).
 *
 * Rejected tasks. Tasks submitted to a shut-down or saturated executor are
 * passed to the {@link RejectedExecutionHandler}. Four predefined policies:
 * AbortPolicy (default; throws RejectedExecutionException),
 * CallerRunsPolicy (the thread invoking execute runs the task itself,
 * throttling the submission rate), DiscardPolicy (the task is silently
 * dropped), and DiscardOldestPolicy (the task at the head of the queue is
 * dropped and execution is retried).
 *
 * Hook methods. Overridable protected beforeExecute and afterExecute methods
 * run around each task; terminated() can be overridden to run once the
 * Executor has fully terminated. If hook methods throw exceptions, internal
 * worker threads may in turn fail and abruptly terminate.
 *
 * Queue maintenance. getQueue() exposes the work queue for monitoring and
 * debugging only; remove() and purge() assist storage reclamation when large
 * numbers of queued tasks become cancelled.
 *
 * Finalization. A pool that is no longer referenced AND has no remaining
 * threads will be shutdown automatically; to guarantee that, arrange that
 * unused threads eventually die via keep-alive times, zero core threads,
 * and/or allowCoreThreadTimeOut(boolean).
 *
 * Extension example. Most extensions override one or more of the protected
 * hook methods. For example, here is a subclass that adds a simple
 * pause/resume feature:
 *
 * <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...) { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public class ThreadPoolExecutor extends AbstractExecutorService {
    /**
     * The main pool control state, ctl, is an atomic integer packing
     * two conceptual fields:
     *   workerCount, indicating the effective number of threads
     *   runState,    indicating whether running, shutting down etc
     *
     * In order to pack them into one int, workerCount is limited to
     * (2^29)-1 (about 500 million) threads rather than the (2^31)-1
     * otherwise representable. If this is ever an issue, the variable can
     * be changed to an AtomicLong and the shift/mask constants adjusted;
     * until then, an int is a bit faster and simpler.
     *
     * The workerCount is the number of workers that have been permitted to
     * start and not permitted to stop. The value may be transiently
     * different from the actual number of live threads, for example when a
     * ThreadFactory fails to create a thread when asked, or when exiting
     * threads are still performing bookkeeping before terminating. The
     * user-visible pool size is reported as the current size of the
     * workers set.
     *
     * The runState provides the main lifecycle control, taking on values:
     *   RUNNING:    accept new tasks and process queued tasks
     *   SHUTDOWN:   don't accept new tasks, but process queued tasks
     *   STOP:       don't accept new tasks, don't process queued tasks,
     *               and interrupt in-progress tasks
     *   TIDYING:    all tasks have terminated, workerCount is zero; the
     *               thread transitioning to TIDYING runs terminated()
     *   TERMINATED: terminated() has completed
     *
     * The numerical order among these values matters, to allow ordered
     * comparisons. The runState monotonically increases over time, but need
     * not hit each state. The transitions are:
     *   RUNNING -> SHUTDOWN            on shutdown(), perhaps implicitly
     *                                  in finalize()
     *   (RUNNING or SHUTDOWN) -> STOP  on shutdownNow()
     *   SHUTDOWN -> TIDYING            when both queue and pool are empty
     *   STOP -> TIDYING                when pool is empty
     *   TIDYING -> TERMINATED          when the terminated() hook completes
     *
     * Threads waiting in awaitTermination() return when the state reaches
     * TERMINATED. Detecting the SHUTDOWN -> TIDYING transition is less
     * straightforward than you'd like because the queue may become empty
     * after non-empty and vice versa during SHUTDOWN; we can only terminate
     * if, after seeing that it is empty, we see that workerCount is 0
     * (which sometimes entails a recheck).
     */
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    /*
     * Bit field accessors that don't require unpacking ctl.
     * These depend on the bit layout and on workerCount being never negative.
     */

    private static boolean runStateLessThan(int c, int s) {
        return c < s;
    }

    private static boolean runStateAtLeast(int c, int s) {
        return c >= s;
    }

    private static boolean isRunning(int c) {
        return c < SHUTDOWN;
    }

    /**
     * Attempts to CAS-increment the workerCount field of ctl.
     */
    private boolean compareAndIncrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect + 1);
    }

    /**
     * Attempts to CAS-decrement the workerCount field of ctl.
     */
    private boolean compareAndDecrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect - 1);
    }

    /**
     * Decrements the workerCount field of ctl. This is called only on
     * abrupt termination of a thread (see processWorkerExit). Other
     * decrements are performed within getTask.
     */
    private void decrementWorkerCount() {
        do {} while (! compareAndDecrementWorkerCount(ctl.get()));
    }

    /**
     * The queue used for holding tasks and handing off to worker threads.
     * We do not require that workQueue.poll() returning null necessarily
     * means that workQueue.isEmpty(), so we rely solely on isEmpty to see
     * if the queue is empty (which we must do, for example, when deciding
     * whether to transition from SHUTDOWN to TIDYING). This accommodates
     * special-purpose queues such as DelayQueues, for which poll() is
     * allowed to return null even if it may later return non-null when
     * delays expire.
     */
    private final BlockingQueue<Runnable> workQueue;
```
1187 * {@code corePoolSize < 0}
1188 * {@code keepAliveTime < 0}
1189 * {@code maximumPoolSize <= 0}
1190 * {@code maximumPoolSize < corePoolSize} 1191 * @throws NullPointerException if {@code workQueue} is null 1192 */ 1193 public ThreadPoolExecutor(int corePoolSize, 1194 int maximumPoolSize, 1195 long keepAliveTime, 1196 TimeUnit unit, 1197 BlockingQueue
1220 * {@code corePoolSize < 0}
1221 * {@code keepAliveTime < 0}
1222 * {@code maximumPoolSize <= 0}
1223 * {@code maximumPoolSize < corePoolSize} 1224 * @throws NullPointerException if {@code workQueue} 1225 * or {@code threadFactory} is null 1226 */ 1227 public ThreadPoolExecutor(int corePoolSize, 1228 int maximumPoolSize, 1229 long keepAliveTime, 1230 TimeUnit unit, 1231 BlockingQueue
1255 * {@code corePoolSize < 0}
1256 * {@code keepAliveTime < 0}
1257 * {@code maximumPoolSize <= 0}
1258 * {@code maximumPoolSize < corePoolSize} 1259 * @throws NullPointerException if {@code workQueue} 1260 * or {@code handler} is null 1261 */ 1262 public ThreadPoolExecutor(int corePoolSize, 1263 int maximumPoolSize, 1264 long keepAliveTime, 1265 TimeUnit unit, 1266 BlockingQueue
1292 * {@code corePoolSize < 0}
1293 * {@code keepAliveTime < 0}
1294 * {@code maximumPoolSize <= 0}
1295 * {@code maximumPoolSize < corePoolSize} 1296 * @throws NullPointerException if {@code workQueue} 1297 * or {@code threadFactory} or {@code handler} is null 1298 */ 1299 public ThreadPoolExecutor(int corePoolSize, 1300 int maximumPoolSize, 1301 long keepAliveTime, 1302 TimeUnit unit, 1303 BlockingQueue
This method does not wait for previously submitted tasks to
1381 * complete execution. Use {@link #awaitTermination awaitTermination} 1382 * to do that. 1383 * 1384 * @throws SecurityException {@inheritDoc} 1385 */ 1386 public void shutdown() { 1387 final ReentrantLock mainLock = this.mainLock; 1388 mainLock.lock(); 1389 try { 1390 checkShutdownAccess(); 1391 advanceRunState(SHUTDOWN); 1392 interruptIdleWorkers(); 1393 onShutdown(); // hook for ScheduledThreadPoolExecutor 1394 } finally { 1395 mainLock.unlock(); 1396 } 1397 tryTerminate(); 1398 } 1399 1400 /** 1401 * Attempts to stop all actively executing tasks, halts the 1402 * processing of waiting tasks, and returns a list of the tasks 1403 * that were awaiting execution. These tasks are drained (removed) 1404 * from the task queue upon return from this method. 1405 * 1406 *This method does not wait for actively executing tasks to
1407 * terminate. Use {@link #awaitTermination awaitTermination} to 1408 * do that. 1409 * 1410 *There are no guarantees beyond best-effort attempts to stop
1411 * processing actively executing tasks. This implementation 1412 * cancels tasks via {@link Thread#interrupt}, so any task that 1413 * fails to respond to interrupts may never terminate. 1414 * 1415 * @throws SecurityException {@inheritDoc} 1416 */ 1417 public ListThis method may be useful as one part of a cancellation
1742 * scheme. It may fail to remove tasks that have been converted 1743 * into other forms before being placed on the internal queue. For 1744 * example, a task entered using {@code submit} might be 1745 * converted into a form that maintains {@code Future} status. 1746 * However, in such cases, method {@link #purge} may be used to 1747 * remove those Futures that have been cancelled. 1748 * 1749 * @param task the task to remove 1750 * @return true if the task was removed 1751 */ 1752 public boolean remove(Runnable task) { 1753 boolean removed = workQueue.remove(task); 1754 tryTerminate(); // In case SHUTDOWN and now empty 1755 return removed; 1756 } 1757 1758 /** 1759 * Tries to remove from the work queue all {@link Future} 1760 * tasks that have been cancelled. This method can be useful as a 1761 * storage reclamation operation, that has no other impact on 1762 * functionality. Cancelled tasks are never executed, but may 1763 * accumulate in work queues until worker threads can actively 1764 * remove them. Invoking this method instead tries to remove them now. 1765 * However, this method may fail to remove tasks in 1766 * the presence of interference by other threads. 1767 */ 1768 public void purge() { 1769 final BlockingQueueThis implementation does nothing, but may be customized in
1937 * subclasses. Note: To properly nest multiple overridings, subclasses 1938 * should generally invoke {@code super.beforeExecute} at the end of 1939 * this method. 1940 * 1941 * @param t the thread that will run task {@code r} 1942 * @param r the task that will be executed 1943 */ 1944 protected void beforeExecute(Thread t, Runnable r) { } 1945 1946 /** 1947 * Method invoked upon completion of execution of the given Runnable. 1948 * This method is invoked by the thread that executed the task. If 1949 * non-null, the Throwable is the uncaught {@code RuntimeException} 1950 * or {@code Error} that caused execution to terminate abruptly. 1951 * 1952 *This implementation does nothing, but may be customized in
1953 * subclasses. Note: To properly nest multiple overridings, subclasses 1954 * should generally invoke {@code super.afterExecute} at the 1955 * beginning of this method. 1956 * 1957 *Note: When actions are enclosed in tasks (such as
1958 * {@link FutureTask}) either explicitly or via methods such as 1959 * {@code submit}, these task objects catch and maintain 1960 * computational exceptions, and so they do not cause abrupt 1961 * termination, and the internal exceptions are not 1962 * passed to this method. If you would like to trap both kinds of 1963 * failures in this method, you can further probe for such cases, 1964 * as in this sample subclass that prints either the direct cause 1965 * or the underlying exception if a task has been aborted: 1966 * 1967 *{@code 1968 * class ExtendedExecutor extends ThreadPoolExecutor { 1969 * // ... 1970 * protected void afterExecute(Runnable r, Throwable t) { 1971 * super.afterExecute(r, t); 1972 * if (t == null && r instanceof Future>) { 1973 * try { 1974 * Object result = ((Future>) r).get(); 1975 * } catch (CancellationException ce) { 1976 * t = ce; 1977 * } catch (ExecutionException ee) { 1978 * t = ee.getCause(); 1979 * } catch (InterruptedException ie) { 1980 * Thread.currentThread().interrupt(); // ignore/reset 1981 * } 1982 * } 1983 * if (t != null) 1984 * System.out.println(t); 1985 * } 1986 * }} 1987 * 1988 * @param r the runnable that has completed 1989 * @param t the exception that caused termination, or null if 1990 * execution completed normally 1991 */ 1992 protected void afterExecute(Runnable r, Throwable t) { } 1993 1994 /** 1995 * Method invoked when the Executor has terminated. Default 1996 * implementation does nothing. Note: To properly nest multiple 1997 * overridings, subclasses should generally invoke 1998 * {@code super.terminated} within this method. 1999 */ 2000 protected void terminated() { } 2001 2002 /* Predefined RejectedExecutionHandlers */ 2003 2004 /** 2005 * A handler for rejected tasks that runs the rejected task 2006 * directly in the calling thread of the {@code execute} method, 2007 * unless the executor has been shut down, in which case the task 2008 * is discarded. 
2009 */ 2010 public static class CallerRunsPolicy implements RejectedExecutionHandler { 2011 /** 2012 * Creates a {@code CallerRunsPolicy}. 2013 */ 2014 public CallerRunsPolicy() { } 2015 2016 /** 2017 * Executes task r in the caller's thread, unless the executor 2018 * has been shut down, in which case the task is discarded. 2019 * 2020 * @param r the runnable task requested to be executed 2021 * @param e the executor attempting to execute this task 2022 */ 2023 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2024 if (!e.isShutdown()) { 2025 r.run(); 2026 } 2027 } 2028 } 2029 2030 /** 2031 * A handler for rejected tasks that throws a 2032 * {@code RejectedExecutionException}. 2033 */ 2034 public static class AbortPolicy implements RejectedExecutionHandler { 2035 /** 2036 * Creates an {@code AbortPolicy}. 2037 */ 2038 public AbortPolicy() { } 2039 2040 /** 2041 * Always throws RejectedExecutionException. 2042 * 2043 * @param r the runnable task requested to be executed 2044 * @param e the executor attempting to execute this task 2045 * @throws RejectedExecutionException always. 2046 */ 2047 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2048 throw new RejectedExecutionException("Task " + r.toString() + 2049 " rejected from " + 2050 e.toString()); 2051 } 2052 } 2053 2054 /** 2055 * A handler for rejected tasks that silently discards the 2056 * rejected task. 2057 */ 2058 public static class DiscardPolicy implements RejectedExecutionHandler { 2059 /** 2060 * Creates a {@code DiscardPolicy}. 2061 */ 2062 public DiscardPolicy() { } 2063 2064 /** 2065 * Does nothing, which has the effect of discarding task r. 
2066 * 2067 * @param r the runnable task requested to be executed 2068 * @param e the executor attempting to execute this task 2069 */ 2070 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2071 } 2072 } 2073 2074 /** 2075 * A handler for rejected tasks that discards the oldest unhandled 2076 * request and then retries {@code execute}, unless the executor 2077 * is shut down, in which case the task is discarded. 2078 */ 2079 public static class DiscardOldestPolicy implements RejectedExecutionHandler { 2080 /** 2081 * Creates a {@code DiscardOldestPolicy} for the given executor. 2082 */ 2083 public DiscardOldestPolicy() { } 2084 2085 /** 2086 * Obtains and ignores the next task that the executor 2087 * would otherwise execute, if one is immediately available, 2088 * and then retries execution of task r, unless the executor 2089 * is shut down, in which case task r is instead discarded. 2090 * 2091 * @param r the runnable task requested to be executed 2092 * @param e the executor attempting to execute this task 2093 */ 2094 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2095 if (!e.isShutdown()) { 2096 e.getQueue().poll(); 2097 e.execute(r); 2098 } 2099 } 2100 } 2101 }
Thread Pool Source Code Analysis
(1) Creating the thread pool
Creating a thread pool requires the following parameters:
1) corePoolSize (basic pool size): when a task is submitted to the pool, the pool creates a new thread to run it, even if existing core threads are idle and could handle it, until the number of threads reaches corePoolSize; after that, no further core threads are created. If the pool's prestartAllCoreThreads() method is called, the pool creates and starts all core threads in advance.
2) runnableTaskQueue (work queue): a blocking queue that holds tasks waiting to be executed. The following blocking queues are common choices:
- ArrayBlockingQueue: a bounded, array-based blocking queue that orders elements FIFO (first in, first out).
- LinkedBlockingQueue: a linked-list-based blocking queue, also FIFO; its throughput is usually higher than ArrayBlockingQueue's. The static factory method Executors.newFixedThreadPool() uses this queue.
- SynchronousQueue: a blocking queue that stores no elements. Each insert operation must wait for a remove operation by another thread, otherwise the insert blocks. Its throughput is usually higher than LinkedBlockingQueue's. The static factory method Executors.newCachedThreadPool() uses this queue.
- PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
3) maximumPoolSize (maximum pool size): the maximum number of threads the pool is allowed to create. If the queue is full and fewer threads than the maximum exist, the pool creates additional threads to run tasks. Note that this parameter has no effect when an unbounded work queue is used.
4) ThreadFactory: the factory used to create threads. A custom factory can give every thread the pool creates a more meaningful name. With the ThreadFactoryBuilder from the open-source Guava library, meaningful names can be set quickly:
new ThreadFactoryBuilder().setNameFormat("XX-task-%d").build();
5) RejectedExecutionHandler (saturation policy): when both the queue and the pool are full, the pool is saturated and a policy must decide how newly submitted tasks are handled. The default is AbortPolicy, which throws an exception when a new task cannot be processed. Since JDK 1.5 the Java thread pool framework has provided four policies:
- AbortPolicy: throw an exception directly.
- CallerRunsPolicy: run the task in the caller's own thread.
- DiscardOldestPolicy: drop the task at the head of the queue (the oldest) and run the current task.
- DiscardPolicy: do nothing; silently drop the task.
You can of course implement the RejectedExecutionHandler interface yourself to suit your application, for example logging or persisting tasks that cannot be handled.
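As a sketch of such a custom policy, the hypothetical handler below counts and logs rejected tasks instead of aborting (the class names are illustrative, not from the JDK):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical custom saturation policy: record rejected tasks instead of throwing.
public class LoggingRejectedHandlerDemo {
    static class CountingHandler implements RejectedExecutionHandler {
        final AtomicInteger rejected = new AtomicInteger();
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            rejected.incrementAndGet();              // count instead of aborting
            System.out.println("rejected: " + r);    // a real handler might persist r here
        }
    }

    // Saturate a 1-thread pool with a 1-slot queue and return the rejection count.
    static int run() throws Exception {
        CountingHandler handler = new CountingHandler();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1), handler);
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                try { release.await(); } catch (InterruptedException ignored) {}
            }
        };
        pool.execute(blocker);   // occupies the single worker thread
        pool.execute(blocker);   // fills the single queue slot
        pool.execute(blocker);   // pool saturated -> handed to CountingHandler
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return handler.rejected.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rejections: " + run());
    }
}
```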
6) keepAliveTime (thread keep-alive time): how long idle worker threads are kept alive. If there are many tasks and each one finishes quickly, raising this value improves thread reuse.
7) TimeUnit (unit of the keep-alive time): the available units are days (DAYS), hours (HOURS), minutes (MINUTES), milliseconds (MILLISECONDS), microseconds (MICROSECONDS, one thousandth of a millisecond), and nanoseconds (NANOSECONDS, one thousandth of a microsecond).
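Putting the seven parameters together, a minimal sketch of constructing a pool explicitly (the specific sizes and queue capacity are illustrative choices, not recommendations):

```java
import java.util.concurrent.*;

// Sketch: wiring all seven ThreadPoolExecutor constructor parameters by hand.
public class ExplicitPoolDemo {
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                                      // corePoolSize
                4,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime + its TimeUnit
                new ArrayBlockingQueue<Runnable>(100),  // bounded work queue
                Executors.defaultThreadFactory(),       // ThreadFactory
                new ThreadPoolExecutor.AbortPolicy());  // saturation policy
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```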
The creation process is illustrated below with newFixedThreadPool().
1. newFixedThreadPool()
newFixedThreadPool() is defined in Executors.java; its source is:
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
Explanation: newFixedThreadPool(int nThreads) creates a thread pool with a fixed number of threads, nThreads.
When newFixedThreadPool() calls the ThreadPoolExecutor constructor, it passes a LinkedBlockingQueue object; LinkedBlockingQueue is a blocking queue implemented as a singly linked list. The pool uses this blocking queue to realize the behavior "when more tasks are submitted than the pool can run at once, the excess tasks block and wait".
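That queueing behavior can be observed directly: in the sketch below (using latches to keep the workers busy), the pool size never exceeds nThreads while the surplus tasks sit in the LinkedBlockingQueue:

```java
import java.util.concurrent.*;

// Sketch: a fixed pool caps its threads at nThreads; extra tasks wait in the queue.
public class FixedPoolBacklogDemo {
    static int[] run() throws Exception {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        final CountDownLatch started = new CountDownLatch(2);
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                started.countDown();   // signal that a worker picked this task up
                try { release.await(); } catch (InterruptedException ignored) {}
            }
        };
        for (int i = 0; i < 5; i++) pool.execute(blocker);
        started.await();                        // both workers are now busy
        int poolSize = pool.getPoolSize();      // capped at nThreads = 2
        int queued   = pool.getQueue().size();  // the remaining 3 tasks wait here
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[]{poolSize, queued};
    }

    public static void main(String[] args) throws Exception {
        int[] r = run();
        System.out.println("poolSize=" + r[0] + ", queued=" + r[1]);
    }
}
```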
2. ThreadPoolExecutor()
ThreadPoolExecutor() is defined in ThreadPoolExecutor.java; its source is:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}
Explanation: this constructor simply delegates to another ThreadPoolExecutor constructor, whose source is:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    // Core pool size.
    this.corePoolSize = corePoolSize;
    // Maximum pool size.
    this.maximumPoolSize = maximumPoolSize;
    // The pool's work queue.
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    // The thread factory.
    this.threadFactory = threadFactory;
    // The handle to the rejection policy.
    this.handler = handler;
}
Explanation: the ThreadPoolExecutor constructor only performs initialization.
The values of corePoolSize, maximumPoolSize, unit, keepAliveTime, and workQueue are known: they are passed through from newFixedThreadPool(). The remaining two objects, threadFactory and handler, are examined next.
2.1 ThreadFactory
The pool's ThreadFactory is a thread factory: every thread the pool creates is produced through this factory object (threadFactory).
The threadFactory object mentioned above is returned by Executors.defaultThreadFactory(). The defaultThreadFactory() source in Executors.java is:
public static ThreadFactory defaultThreadFactory() {
    return new DefaultThreadFactory();
}
defaultThreadFactory() returns a DefaultThreadFactory object. The DefaultThreadFactory source in Executors.java is:
static class DefaultThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup() :
                              Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" +
                      poolNumber.getAndIncrement() +
                     "-thread-";
    }

    // The API for creating threads.
    public Thread newThread(Runnable r) {
        // The new thread's task is the Runnable object r.
        Thread t = new Thread(group, r,
                              namePrefix + threadNumber.getAndIncrement(),
                              0);
        // Make it a non-daemon thread.
        if (t.isDaemon())
            t.setDaemon(false);
        // Set its priority to Thread.NORM_PRIORITY.
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}
Explanation: a ThreadFactory is simply a factory that provides the ability to create threads, which it does through newThread(). Every thread that newThread() creates runs a given Runnable object as its task, is a non-daemon thread, and has priority Thread.NORM_PRIORITY.
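These three properties of DefaultThreadFactory can be checked directly, as in this small sketch:

```java
import java.util.concurrent.*;

// Sketch: verify what DefaultThreadFactory promises -- a non-daemon thread,
// NORM_PRIORITY, and a "pool-N-thread-M" name.
public class DefaultFactoryDemo {
    static Thread make() {
        ThreadFactory factory = Executors.defaultThreadFactory();
        return factory.newThread(new Runnable() {
            public void run() { /* no-op task */ }
        });
    }

    public static void main(String[] args) {
        Thread t = make();
        System.out.println(t.getName()
                + " daemon=" + t.isDaemon()
                + " priority=" + t.getPriority());
    }
}
```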
2.2 RejectedExecutionHandler
handler is the ThreadPoolExecutor's handle to its rejection policy. The rejection policy determines what the pool does when it refuses a task that is being added to it.
By default the pool uses the defaultHandler policy, i.e. AbortPolicy: when the pool rejects a task, it throws an exception. defaultHandler is defined as:

private static final RejectedExecutionHandler defaultHandler = new AbortPolicy();

The AbortPolicy source is:
public static class AbortPolicy implements RejectedExecutionHandler {
    public AbortPolicy() { }

    // Throw an exception.
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        throw new RejectedExecutionException("Task " + r.toString() +
                                             " rejected from " +
                                             e.toString());
    }
}
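The sketch below triggers that exception deliberately: a single-thread pool over a SynchronousQueue (which has no capacity) is saturated by a second task while the first is still running:

```java
import java.util.concurrent.*;

// Sketch: saturate a minimal pool and observe AbortPolicy's RejectedExecutionException.
public class AbortPolicyDemo {
    static boolean saturateAndCatch() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>());   // no queue slack; default AbortPolicy
        final CountDownLatch release = new CountDownLatch(1);
        pool.execute(new Runnable() {                // occupies the single worker
            public void run() {
                try { release.await(); } catch (InterruptedException ignored) {}
            }
        });
        boolean rejected = false;
        try {
            pool.execute(new Runnable() { public void run() { } });
        } catch (RejectedExecutionException e) {
            rejected = true;                         // the default policy throws
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rejected: " + saturateAndCatch());
    }
}
```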
(2) Submitting tasks to the thread pool
Tasks can be submitted to a pool with two methods: execute() and submit().
execute() submits a task that needs no return value, so there is no way to tell whether the task was executed successfully.
submit() submits a task that does need a return value: the pool returns a Future object, through which you can check whether the task completed successfully and, via its get() method, obtain the return value. get() blocks the current thread until the task finishes; get(long timeout, TimeUnit unit) waits at most the given time and throws a TimeoutException if the task has not completed by then.
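Both behaviors of get() can be seen in a short sketch (the 2-second sleep and 50 ms timeout are illustrative values chosen so the second get() reliably times out):

```java
import java.util.concurrent.*;

// Sketch: submit() returns a Future; get() blocks for the result, and
// get(timeout, unit) throws TimeoutException if the task is not done in time.
public class SubmitDemo {
    static int compute() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        Future<Integer> future = pool.submit(new Callable<Integer>() {
            public Integer call() { return 21 * 2; }
        });
        int result = future.get();               // blocks until the task completes

        Future<?> slow = pool.submit(new Runnable() {
            public void run() {
                try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
            }
        });
        boolean timedOut = false;
        try {
            slow.get(50, TimeUnit.MILLISECONDS); // waits at most 50 ms
        } catch (TimeoutException e) {
            timedOut = true;                     // task was still running
        }
        slow.cancel(true);                       // interrupt the sleeping task
        pool.shutdown();
        System.out.println("result=" + result + " timedOut=" + timedOut);
        return timedOut ? result : -1;
    }

    public static void main(String[] args) throws Exception {
        compute();
    }
}
```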
1. execute()
execute() is defined in ThreadPoolExecutor.java; its source is:
public void execute(Runnable command) {
    // Throw an exception if the task is null.
    if (command == null)
        throw new NullPointerException();
    // Read the int value of ctl, which packs together the pool's
    // "worker thread count" and "run state".
    int c = ctl.get();
    // If the number of worker threads is below corePoolSize, create a new
    // worker thread via addWorker(command, true) with command as its first
    // task, and start it.
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // Otherwise, if the pool is in the RUNNING state, try to append the
    // task to the blocking queue.
    if (isRunning(c) && workQueue.offer(command)) {
        // Re-check the pool state: if the pool is no longer running, remove
        // the task and apply the rejection policy via reject().
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        // Otherwise, if there are no worker threads at all, create one via
        // addWorker(null, false); its first task is null, so it will pull
        // tasks from the queue.
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    // Otherwise, try to create a new worker thread via
    // addWorker(command, false) with command as its first task, and start
    // it; if that fails, apply the rejection policy via reject().
    else if (!addWorker(command, false))
        reject(command);
}
Explanation: execute() adds a task to the pool for execution, distinguishing three cases:
Case 1 -- the number of worker threads is below corePoolSize: create a new worker thread and hand it the task to execute.
Case 2 -- the worker count has reached corePoolSize and the pool is in the RUNNING state: append the task to the blocking queue, where it waits its turn. In this case the pool state is read a second time; if the second read differs from the first, the task is removed from the queue again.
Case 3 -- neither of the above: try to create a new worker thread (bounded by maximumPoolSize) and hand it the task to execute. If that fails, reject the task via reject().
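The three cases can be traced with a deliberately tiny pool (corePoolSize = 1, maximumPoolSize = 2, queue capacity 1; all values illustrative). Each of the three execute() calls below takes a different path:

```java
import java.util.concurrent.*;

// Sketch: trace execute()'s three paths -- core worker creation, queueing,
// and growth toward maximumPoolSize.
public class ExecutePathsDemo {
    static int[] run() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1));
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                try { release.await(); } catch (InterruptedException ignored) {}
            }
        };
        pool.execute(blocker);                 // case 1: below corePoolSize -> new core worker
        pool.execute(blocker);                 // case 2: core full -> task goes to the queue
        pool.execute(blocker);                 // case 3: queue full -> second worker created
        int poolSize = pool.getPoolSize();     // 2 worker threads now exist
        int queued   = pool.getQueue().size(); // 1 task still queued
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[]{poolSize, queued};
    }

    public static void main(String[] args) throws Exception {
        int[] r = run();
        System.out.println("poolSize=" + r[0] + " queued=" + r[1]);
    }
}
```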
2. addWorker()
The addWorker() source is:
private boolean addWorker(Runnable firstTask, boolean core) {
    retry:
    // Update the "run state + worker count" word, i.e. ctl.
    for (;;) {
        // Read the int value of ctl, which packs together the pool's
        // "worker thread count" and "run state".
        int c = ctl.get();
        // Extract the pool's run state.
        int rs = runStateOf(c);

        // Validity check.
        if (rs >= SHUTDOWN &&
            ! (rs == SHUTDOWN &&
               firstTask == null &&
               ! workQueue.isEmpty()))
            return false;

        for (;;) {
            // Extract the worker thread count.
            int wc = workerCountOf(c);
            // If the worker count exceeds the applicable limit, return false.
            if (wc >= CAPACITY ||
                wc >= (core ? corePoolSize : maximumPoolSize))
                return false;
            // CAS-increment the worker count in c; on success, leave the loop.
            if (compareAndIncrementWorkerCount(c))
                break retry;
            c = ctl.get();  // Re-read ctl
            // If the run state changed, start over from retry.
            if (runStateOf(c) != rs)
                continue retry;
            // else CAS failed due to workerCount change; retry inner loop
        }
    }

    boolean workerStarted = false;
    boolean workerAdded = false;
    Worker w = null;
    // Add the worker to the pool and start its thread.
    try {
        final ReentrantLock mainLock = this.mainLock;
        // Create a Worker, with firstTask as its first task.
        w = new Worker(firstTask);
        // The thread backing this Worker.
        final Thread t = w.thread;
        if (t != null) {
            // Acquire the lock.
            mainLock.lock();
            try {
                int c = ctl.get();
                int rs = runStateOf(c);

                // Re-check the pool's run state.
                if (rs < SHUTDOWN ||
                    (rs == SHUTDOWN && firstTask == null)) {
                    if (t.isAlive()) // precheck that t is startable
                        throw new IllegalThreadStateException();
                    // Add the Worker (w) to the pool's worker set (workers).
                    workers.add(w);
                    // Update largestPoolSize.
                    int s = workers.size();
                    if (s > largestPoolSize)
                        largestPoolSize = s;
                    workerAdded = true;
                }
            } finally {
                // Release the lock.
                mainLock.unlock();
            }
            // If the worker was added successfully, start its thread.
            if (workerAdded) {
                t.start();
                workerStarted = true;
            }
        }
    } finally {
        if (! workerStarted)
            addWorkerFailed(w);
    }
    // Return whether the worker was started.
    return workerStarted;
}
Explanation:
addWorker(Runnable firstTask, boolean core) adds a new worker thread to the pool, with firstTask as its first task, and starts it.
When core is true the limit is corePoolSize: if the pool already has corePoolSize or more worker threads, it returns false. When core is false the limit is maximumPoolSize: if the pool already has maximumPoolSize or more worker threads, it returns false.
addWorker() first loops, repeatedly trying to update ctl, which records the pool's worker count and run state. Once that update succeeds, the try block adds the worker to the pool and starts its thread.
From addWorker() we can clearly see how the pool accepts a task: it creates a Worker object for the task, and each Worker object contains a Thread object. (01) Adding the Worker object to the pool's workers set is what puts the task into the pool. (02) Starting the Worker's Thread is what executes the task.
3. submit()
As a supplementary note, submit() is in fact implemented on top of execute(); its source is:
public Future<?> submit(Runnable task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<Void> ftask = newTaskFor(task, null);
    execute(ftask);
    return ftask;
}
(3) Shutting down the thread pool
A pool can be closed by calling either its shutdown() or its shutdownNow() method. Both work by walking the pool's worker threads and calling each thread's interrupt() method, so tasks that cannot respond to interruption may never terminate. They differ in that shutdownNow() first sets the pool state to STOP, then tries to stop all threads, including those currently executing or waiting on tasks, and returns the list of tasks still awaiting execution, whereas shutdown() only sets the pool state to SHUTDOWN and interrupts the threads that are not currently executing a task.
Once either method has been called, isShutdown() returns true. The pool counts as successfully closed only when all tasks have finished, at which point isTerminated() returns true. Which method to call depends on the nature of the tasks submitted to the pool: normally shutdown() is used; if the remaining tasks need not run to completion, shutdownNow() may be called instead.
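The difference is visible in a small sketch: with one long-running task occupying the single worker and two tasks queued behind it, shutdownNow() interrupts the worker and hands back the two pending tasks:

```java
import java.util.List;
import java.util.concurrent.*;

// Sketch: shutdownNow() interrupts workers and returns the tasks still queued.
public class ShutdownNowDemo {
    static int run() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        final CountDownLatch started = new CountDownLatch(1);
        pool.execute(new Runnable() {
            public void run() {
                started.countDown();
                // Long sleep; ends early when shutdownNow() interrupts the worker.
                try { Thread.sleep(60000); } catch (InterruptedException ignored) {}
            }
        });
        Runnable noop = new Runnable() { public void run() { } };
        pool.execute(noop);                           // queued behind the running task
        pool.execute(noop);
        started.await();                              // the first task is now running
        List<Runnable> pending = pool.shutdownNow();  // interrupts workers, drains queue
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("pending=" + pending.size()
                + " shutdown=" + pool.isShutdown()
                + " terminated=" + pool.isTerminated());
        return pending.size();
    }

    public static void main(String[] args) throws Exception {
        run();
    }
}
```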
The shutdown() source is:
public void shutdown() {
    final ReentrantLock mainLock = this.mainLock;
    // Acquire the lock.
    mainLock.lock();
    try {
        // Check whether the calling thread has permission to shut the pool down.
        checkShutdownAccess();
        // Advance the pool state to SHUTDOWN.
        advanceRunState(SHUTDOWN);
        // Interrupt the pool's idle worker threads.
        interruptIdleWorkers();
        // Hook method; does nothing in ThreadPoolExecutor itself.
        onShutdown(); // hook for ScheduledThreadPoolExecutor
    } finally {
        // Release the lock.
        mainLock.unlock();
    }
    // Attempt to terminate the pool.
    tryTerminate();
}
Explanation: shutdown() closes the thread pool: it stops the pool from accepting new tasks while letting already-submitted tasks run to completion.
References:
http://www.cnblogs.com/skywang12345/p/3509954.html
《Java并发编程的艺术》 (The Art of Java Concurrency Programming)