The MINA framework provides several different threading models. Threading is highly customizable: you can run with a single thread, a single thread pool, or multiple thread pools.
A thread pool's task queue (taskQueue) holds the tasks that have not yet been processed; it acts as a buffer between task submission and task execution.
For the basic principles of thread pools, see http://blog.csdn.net/hsuxu/article/details/8985931
If you want to use the thread pool that MINA provides, you will certainly need to understand OrderedThreadPoolExecutor.
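In practice OrderedThreadPoolExecutor is usually plugged into the filter chain through an ExecutorFilter. The following is only a minimal sketch, assuming MINA 2.x; the filter name, the port and the pool size of 16 are arbitrary example values:

import java.net.InetSocketAddress;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.filter.executor.OrderedThreadPoolExecutor;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class OrderedPoolServer {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        // Events belonging to the same session are executed in order by this pool;
        // 16 is just an illustrative maximum pool size.
        OrderedThreadPoolExecutor pool = new OrderedThreadPoolExecutor(16);
        acceptor.getFilterChain().addLast("threadPool", new ExecutorFilter(pool));
        acceptor.setHandler(new IoHandlerAdapter()); // replace with your real handler
        acceptor.bind(new InetSocketAddress(8080));
    }
}

(As far as I know, the no-argument ExecutorFilter constructor already creates an OrderedThreadPoolExecutor internally, so passing one explicitly mainly matters when you want to tune its parameters.)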
Let's take a look at the source code:
public class OrderedThreadPoolExecutor extends ThreadPoolExecutor
As you can see, it extends ThreadPoolExecutor. It offers several constructors; the one with the most parameters is:
    /**
     * Creates a new instance of a OrderedThreadPoolExecutor.
     *
     * @param corePoolSize The initial pool size
     * @param maximumPoolSize The maximum pool size
     * @param keepAliveTime Default duration for a thread
     * @param unit Time unit used for the keepAlive value
     * @param threadFactory The factory used to create threads
     * @param eventQueueHandler The queue used to store events
     */
    public OrderedThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit,
            ThreadFactory threadFactory, IoEventQueueHandler eventQueueHandler) {
        // We have to initialize the pool with default values (0 and 1) in order to
        // handle the exception in a better way. We can't add a try {} catch() {}
        // around the super() call.
        super(DEFAULT_INITIAL_THREAD_POOL_SIZE, 1, keepAliveTime, unit,
                new SynchronousQueue<Runnable>(), threadFactory, new AbortPolicy());

        if (corePoolSize < DEFAULT_INITIAL_THREAD_POOL_SIZE) {
            throw new IllegalArgumentException("corePoolSize: " + corePoolSize);
        }

        if ((maximumPoolSize == 0) || (maximumPoolSize < corePoolSize)) {
            throw new IllegalArgumentException("maximumPoolSize: " + maximumPoolSize);
        }

        // Now, we can setup the pool sizes
        super.setCorePoolSize(corePoolSize);
        super.setMaximumPoolSize(maximumPoolSize);

        // The queueHandler might be null.
        if (eventQueueHandler == null) {
            this.eventQueueHandler = IoEventQueueHandler.NOOP;
        } else {
            this.eventQueueHandler = eventQueueHandler;
        }
    }
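The last parameter is worth noting: the IoEventQueueHandler is the hook that lets MINA throttle or reject events when too many are pending (the execute() method below consults it). A rough sketch of building the executor with one, assuming MINA 2.x ships an IoEventQueueThrottle handler (from org.apache.mina.filter.executor; TimeUnit and Executors come from java.util.concurrent); every numeric value here is just an illustration:

// Sketch only: 4 core / 16 max threads, 60s keep-alive, default thread factory,
// and a queue handler that throttles submission once roughly 1 MB of events is pending.
OrderedThreadPoolExecutor executor = new OrderedThreadPoolExecutor(
        4, 16, 60L, TimeUnit.SECONDS,
        Executors.defaultThreadFactory(),
        new IoEventQueueThrottle(1024 * 1024));

The execute() method is where the per-session ordering is actually enforced: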
    public void execute(Runnable task) {
        if (shutdown) {
            rejectTask(task);
        }

        // Check that it's a IoEvent task
        checkTaskType(task);

        IoEvent event = (IoEvent) task;

        // Get the associated session
        IoSession session = event.getSession();

        // Get the session's queue of events
        SessionTasksQueue sessionTasksQueue = getSessionTasksQueue(session);
        Queue<Runnable> tasksQueue = sessionTasksQueue.tasksQueue;

        boolean offerSession;

        // propose the new event to the event queue handler. If we
        // use a throttle queue handler, the message may be rejected
        // if the maximum size has been reached.
        boolean offerEvent = eventQueueHandler.accept(this, event);

        if (offerEvent) {
            // Ok, the message has been accepted
            synchronized (tasksQueue) {
                // Inject the event into the executor taskQueue
                tasksQueue.offer(event);

                if (sessionTasksQueue.processingCompleted) {
                    sessionTasksQueue.processingCompleted = false;
                    offerSession = true;
                } else {
                    offerSession = false;
                }

                if (LOGGER.isDebugEnabled()) {
                    print(tasksQueue, event);
                }
            }
        } else {
            offerSession = false;
        }

        if (offerSession) {
            // As the tasksQueue was empty, the task has been executed
            // immediately, so we can move the session to the queue
            // of sessions waiting for completion.
            waitingSessions.offer(session);
        }

        addWorkerIfNecessary();

        if (offerEvent) {
            eventQueueHandler.offered(this, event);
        }
    }

    private class SessionTasksQueue {
        /** A queue of ordered event waiting to be processed */
        private final Queue<Runnable> tasksQueue = new ConcurrentLinkedQueue<Runnable>();

        /** The current task state */
        private boolean processingCompleted = true;
    }
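The trick that guarantees ordering is easy to miss in the full source, so here is a stripped-down sketch of the same idea. This is illustrative code, not MINA's: the class names, the generic key type and the 4-thread backing pool are all made up; only the mechanism (one pending queue plus a "draining" flag per key) mirrors SessionTasksQueue and execute() above.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: tasks submitted under the same key run one at a
// time, in submission order; tasks under different keys may run in parallel.
public class PerKeyOrderedExecutor {

    /** Per-key pending tasks plus a "drain in progress" flag, like SessionTasksQueue. */
    private static final class KeyQueue {
        final Queue<Runnable> tasks = new ArrayDeque<Runnable>();
        boolean draining;
    }

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<Object, KeyQueue> queues = new HashMap<Object, KeyQueue>();

    public void execute(final Object key, Runnable task) {
        boolean schedule;
        synchronized (queues) {
            KeyQueue q = queues.get(key);
            if (q == null) {
                q = new KeyQueue();
                queues.put(key, q);
            }
            q.tasks.offer(task);
            // Only hand the key to the pool if no worker is already draining it
            // (the role played by processingCompleted in SessionTasksQueue).
            schedule = !q.draining;
            q.draining = true;
        }
        if (schedule) {
            pool.execute(new Runnable() {
                public void run() {
                    drain(key);
                }
            });
        }
    }

    private void drain(Object key) {
        while (true) {
            Runnable next;
            synchronized (queues) {
                KeyQueue q = queues.get(key);
                next = q.tasks.poll();
                if (next == null) {
                    q.draining = false; // done; the next execute() will reschedule
                    return;
                }
            }
            next.run(); // run outside the lock, in submission order
        }
    }
}

Because a key is handed to the pool only when no worker is already draining it, tasks for one key never run concurrently or out of order, while different keys still spread across the pool; that is the guarantee OrderedThreadPoolExecutor gives per IoSession.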
We know that Java offers several different kinds of Queue implementations. Let's look at what the ThreadPoolExecutor documentation says about queuing:
Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing; there are three general strategies for queuing:

Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.

Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created, and the value of maximumPoolSize therefore has no effect. This may be appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.

Bounded queues. A bounded queue (for example an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you would otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

Separately, ConcurrentLinkedQueue (the queue that SessionTasksQueue uses above) is an appropriate choice when many threads will share access to a common collection; like most other concurrent collection implementations, it does not permit null elements.
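To make the three strategies concrete, here is a small sketch showing how each queue choice is handed to a ThreadPoolExecutor; the pool sizes and the queue capacity of 100 are arbitrary example values:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueStrategies {
    public static void main(String[] args) {
        // 1. Direct handoff: nothing is buffered; a task either goes straight to an
        //    idle thread or forces a new one, so the maximum pool size is left unbounded.
        ThreadPoolExecutor handoff = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // 2. Unbounded queue: never more than corePoolSize (4) threads are created,
        //    maximumPoolSize has no effect, and the queue itself can grow without limit.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        // 3. Bounded queue: up to 100 tasks are buffered, then the pool grows toward
        //    maximumPoolSize (16), and further submissions are rejected.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                4, 16, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));

        handoff.shutdown();
        unbounded.shutdown();
        bounded.shutdown();
    }
}

Note how OrderedThreadPoolExecutor's constructor above passes a SynchronousQueue to super(): the events themselves are buffered in each session's tasksQueue rather than in the executor's own work queue.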
For concrete test examples, see http://blog.csdn.net/shixing_11/article/details/7109471