As concurrency levels rise, the traditional NIO approach of using a single Selector to manage and dispatch events for a large number of connections has become a bottleneck. Newer versions of the various NIO frameworks therefore adopt a structure with multiple coexisting Selectors, spreading the large number of connections evenly across them. The implementations in Mina and Grizzly serve as examples here.
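Before diving into the two frameworks, here is a minimal sketch of the structure both of them converge on: one Selector dedicated to accepting connections, plus several Selectors sharing the read/write work. This is my own illustration in plain java.nio, not code from either framework, and it only shows the registration layout (the select loops and event handling are omitted):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal illustration of the "one accept Selector + N I/O Selectors" layout.
public class MultiSelectorSketch {
    private final Selector acceptSelector;   // handles OP_ACCEPT only
    private final Selector[] ioSelectors;    // handle OP_READ / OP_WRITE
    private final AtomicInteger counter = new AtomicInteger();

    public MultiSelectorSketch(int ioSelectorCount) throws IOException {
        acceptSelector = Selector.open();
        ioSelectors = new Selector[ioSelectorCount];
        for (int i = 0; i < ioSelectorCount; i++) {
            ioSelectors[i] = Selector.open();
        }
    }

    public void bind(int port) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(port));
        // the listening channel is registered only with the accept Selector
        server.register(acceptSelector, SelectionKey.OP_ACCEPT);
    }

    // Called from the accept loop: spread new connections round-robin across the
    // I/O Selectors so that no single Selector carries every connection. Real
    // frameworks queue the registration and wakeup() the target Selector instead
    // of registering directly from another thread.
    void dispatch(SocketChannel channel) throws IOException {
        channel.configureBlocking(false);
        Selector io = ioSelectors[(counter.getAndIncrement() & Integer.MAX_VALUE) % ioSelectors.length];
        channel.register(io, SelectionKey.OP_READ);
    }
}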
In Mina 2.0, Selector management is handled by org.apache.mina.transport.socket.nio.NioProcessor. Each NioProcessor object holds one Selector and takes care of the concrete work: select, wakeup, registering and cancelling channels, registering and checking read/write interest, and the actual I/O reads and writes. The core code is as follows:
public NioProcessor(Executor executor) {
    super(executor);
    try {
        // Open a new selector
        selector = Selector.open();
    } catch (IOException e) {
        throw new RuntimeIoException("Failed to open a selector.", e);
    }
}

protected int select(long timeout) throws Exception {
    return selector.select(timeout);
}

protected boolean isInterestedInRead(NioSession session) {
    SelectionKey key = session.getSelectionKey();
    return key.isValid() && (key.interestOps() & SelectionKey.OP_READ) != 0;
}

protected boolean isInterestedInWrite(NioSession session) {
    SelectionKey key = session.getSelectionKey();
    return key.isValid() && (key.interestOps() & SelectionKey.OP_WRITE) != 0;
}

protected int read(NioSession session, IoBuffer buf) throws Exception {
    return session.getChannel().read(buf.buf());
}

protected int write(NioSession session, IoBuffer buf, int length) throws Exception {
    if (buf.remaining() <= length) {
        return session.getChannel().write(buf.buf());
    } else {
        int oldLimit = buf.limit();
        buf.limit(buf.position() + length);
        try {
            return session.getChannel().write(buf.buf());
        } finally {
            buf.limit(oldLimit);
        }
    }
}
These methods are invoked through AbstractPollingIoProcessor, and in that class you can see the core logic of an NIO framework: register, select, dispatch. Since the details fall outside the topic of this article, I will not expand on them here. NioProcessor initialization is triggered from the NioSocketAcceptor constructor:
public NioSocketAcceptor() {
    super(new DefaultSocketSessionConfig(), NioProcessor.class);
    ((DefaultSocketSessionConfig) getSessionConfig()).init(this);
}
This directly calls the constructor of the parent class AbstractPollingIoAcceptor, where we can see that, by default, a SimpleIoProcessorPool is created to wrap the NioProcessor:
protected AbstractPollingIoAcceptor(IoSessionConfig sessionConfig,
        Class<? extends IoProcessor<T>> processorClass) {
    this(sessionConfig, null, new SimpleIoProcessorPool<T>(processorClass), true);
}
This is essentially the Composite pattern: SimpleIoProcessorPool and NioProcessor both implement the IoProcessor interface, one being a pool composed of processors and the other a standalone implementation. The SimpleIoProcessorPool constructor being called looks like this:
private static final int DEFAULT_SIZE = Runtime.getRuntime().availableProcessors() + 1;

public SimpleIoProcessorPool(Class<? extends IoProcessor<T>> processorType) {
    this(processorType, null, DEFAULT_SIZE);
}
As you can see, the default pool size is the number of CPUs plus 1, which means cpu+1 Selector objects are created. The overloaded constructor it delegates to creates an array, starts a CachedThreadPool to run the NioProcessors, and instantiates the concrete processor objects via reflection; I will not list that code here.
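For reference, here is a rough sketch of what that construction boils down to. This is a simplified illustration, not the actual SimpleIoProcessorPool source; the helper name buildPool and the exact reflection call are my own assumptions, based on the NioProcessor(Executor) constructor shown above:

// Rough sketch of the idea only -- NOT the actual SimpleIoProcessorPool source.
// 'processorType' is the IoProcessor class passed in (NioProcessor.class above).
@SuppressWarnings("unchecked")
static IoProcessor<NioSession>[] buildPool(
        Class<? extends IoProcessor<NioSession>> processorType, int size) throws Exception {
    // shared executor that runs each processor's select loop (Mina uses a cached thread pool)
    Executor executor = Executors.newCachedThreadPool();
    IoProcessor<NioSession>[] pool = new IoProcessor[size];   // size defaults to cpu + 1
    for (int i = 0; i < size; i++) {
        // reflectively invoke the (Executor) constructor -- e.g. NioProcessor(Executor) --
        // so that every slot in the pool opens its own Selector
        pool[i] = processorType.getConstructor(Executor.class).newInstance(executor);
    }
    return pool;
}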
When a new connection is established, Mina creates an NioSocketSession and passes in the SimpleIoProcessorPool described above; when the connection is initialized, the session is added to the SimpleIoProcessorPool:
protected NioSession accept(IoProcessor<NioSession> processor,
        ServerSocketChannel handle) throws Exception {
    SelectionKey key = handle.keyFor(selector);
    if ((key == null) || (!key.isValid()) || (!key.isAcceptable())) {
        return null;
    }
    // accept the connection from the client
    SocketChannel ch = handle.accept();
    if (ch == null) {
        return null;
    }
    return new NioSocketSession(this, processor, ch);
}

private void processHandles(Iterator<H> handles) throws Exception {
    while (handles.hasNext()) {
        H handle = handles.next();
        handles.remove();
        // Associates a new created connection to a processor,
        // and get back a session
        T session = accept(processor, handle);
        if (session == null) {
            break;
        }
        initSession(session, null, null);
        // add the session to the SocketIoProcessor
        session.getProcessor().add(session);
    }
}
The add operation increments an integer counter, takes it modulo the array size, and registers the corresponding NioProcessor with the session:
private IoProcessor<T> nextProcessor() {
    checkDisposal();
    return pool[Math.abs(processorDistributor.getAndIncrement()) % pool.length];
}

if (p == null) {
    p = nextProcessor();
    IoProcessor<T> oldp =
            (IoProcessor<T>) session.setAttributeIfAbsent(PROCESSOR, p);
    if (oldp != null) {
        p = oldp;
    }
}
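A side note on this round-robin pattern: in Java, Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE, so on the one increment where the counter wraps around, the computed index can go negative when the pool size is not a power of two. A sketch of a variant that avoids this edge case (my own illustration, not Mina code):

// Illustrative variant, not Mina code: mask off the sign bit so the index
// stays non-negative even after the counter overflows past Integer.MAX_VALUE.
int index = (processorDistributor.getAndIncrement() & Integer.MAX_VALUE) % pool.length;
return pool[index];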
As a result, every connection is associated with one NioProcessor, and therefore with one Selector object, avoiding the situation where all connections share a single overloaded Selector and the server's responsiveness degrades. Note, however, that NioSocketAcceptor also has a Selector of its own. What is it for? It is the Selector that centrally handles OP_ACCEPT events: it is used for accepting incoming connections and is kept separate from the Selectors that handle read/write events. So by default Mina opens cpu+2 Selectors in total.
Having looked at Mina 2.0, let's see how Grizzly 2.0 handles this. Grizzly is more conservative: by default it starts just two Selectors, one dedicated to accept and the other managing the connections' read/write events. In Grizzly 2.0, Selector management goes through the SelectorRunner class, which encapsulates the Selector object together with the core dispatch and registration logic; you can think of it as the counterpart of Mina's NioProcessor. The core code is as follows:
protected boolean doSelect() {
    selectorHandler = transport.getSelectorHandler();
    selectionKeyHandler = transport.getSelectionKeyHandler();
    strategy = transport.getStrategy();
    try {
        if (isResume) {
            // If resume SelectorRunner - finish postponed keys
            isResume = false;
            if (keyReadyOps != 0) {
                if (!iterateKeyEvents()) return false;
            }
            if (!iterateKeys()) return false;
        }
        lastSelectedKeysCount = 0;
        selectorHandler.preSelect(this);
        readyKeys = selectorHandler.select(this);
        if (stateHolder.getState(false) == State.STOPPING) return false;
        lastSelectedKeysCount = readyKeys.size();
        if (lastSelectedKeysCount != 0) {
            iterator = readyKeys.iterator();
            if (!iterateKeys()) return false;
        }
        selectorHandler.postSelect(this);
    } catch (ClosedSelectorException e) {
        notifyConnectionException(key,
                "Selector was unexpectedly closed", e,
                Severity.TRANSPORT, Level.SEVERE, Level.FINE);
    } catch (Exception e) {
        notifyConnectionException(key,
                "doSelect exception", e,
                Severity.UNKNOWN, Level.SEVERE, Level.FINE);
    } catch (Throwable t) {
        logger.log(Level.SEVERE, "doSelect exception", t);
        transport.notifyException(Severity.FATAL, t);
    }
    return true;
}
This is essentially a reactor implementation. The AbstractNIOTransport class maintains an array of SelectorRunners, and TCPNIOTransport, the class Grizzly uses to create a TCP server, extends AbstractNIOTransport. Its start method calls startSelectorRunners to create and start the SelectorRunner array:
private static final int DEFAULT_SELECTOR_RUNNERS_COUNT = 2;

@Override
public void start() throws IOException {
    if (selectorRunnersCount <= 0) {
        selectorRunnersCount = DEFAULT_SELECTOR_RUNNERS_COUNT;
    }
    startSelectorRunners();
}

protected void startSelectorRunners() throws IOException {
    selectorRunners = new SelectorRunner[selectorRunnersCount];
    synchronized (selectorRunners) {
        for (int i = 0; i < selectorRunnersCount; i++) {
            SelectorRunner runner =
                    new SelectorRunner(this, SelectorFactory.instance().create());
            runner.start();
            selectorRunners[i] = runner;
        }
    }
}
Clearly, Grizzly does not use a dedicated pool object to manage the SelectorRunners; it simply keeps them in an array, with a default size of 2. SelectorRunner implements the Runnable interface, and its start method submits itself to a thread pool to run. I mentioned earlier that Grizzly uses a separate Selector for accept; where does that show up in the code? The answer lies in the RoundRobinConnectionDistributor class, which dispatches registration events to the appropriate SelectorRunner. Its dispatch logic looks like this:
public Future<RegisterChannelResult> registerChannelAsync(
        SelectableChannel channel, int interestOps, Object attachment,
        CompletionHandler completionHandler)
        throws IOException {
    SelectorRunner runner = getSelectorRunner(interestOps);
    return transport.getSelectorHandler().registerChannelAsync(
            runner, channel, interestOps, attachment, completionHandler);
}

private SelectorRunner getSelectorRunner(int interestOps) {
    SelectorRunner[] runners = getTransportSelectorRunners();
    int index;
    if (interestOps == SelectionKey.OP_ACCEPT || runners.length == 1) {
        index = 0;
    } else {
        index = (counter.incrementAndGet() % (runners.length - 1)) + 1;
    }
    return runners[index];
}
The getSelectorRunner method gives away the secret: for OP_ACCEPT the first SelectorRunner in the array is always used; for anything else, the modulo result plus 1 picks one of the remaining SelectorRunners for registration. With three runners, for example, accept registrations always land on runners[0], while read/write registrations alternate between runners[1] and runners[2].
Having analyzed how Mina 2.0 and Grizzly 2.0 manage Selectors, we can draw a few lessons:
1. When handling a large number of connections, multiple Selectors are better than a single Selector.
2. With multiple Selectors, the Selectors handling OP_READ and OP_WRITE should be separated from the Selector handling OP_ACCEPT; that is, accepting connections deserves its own dedicated Selector object, so that I/O read/write events do not slow down connection acceptance.
3. On the number of Selectors: Mina defaults to cpu+2, while Grizzly uses just 2 in total. I lean toward Mina's strategy, but I think the CPU count should be taken into account: with more than 8 CPUs, additional Selector threads may incur noticeable thread-switching overhead, so Mina's default is not always appropriate. Fortunately, the value is configurable (see the sketch after this list).
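As a rough illustration of that last point, the number of I/O processors (and therefore read/write Selectors) can be set explicitly instead of relying on the cpu+1 default. This sketch assumes the Mina 2.0 API walked through above, including the NioSocketAcceptor(int processorCount) constructor; the class name and the cap of 8 are my own illustrative choices:

import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class AcceptorFactory {
    // Hedged sketch: cap the number of I/O processors rather than always using cpu + 1.
    public static NioSocketAcceptor newAcceptor() {
        int cpus = Runtime.getRuntime().availableProcessors();
        int ioProcessors = Math.min(cpus + 1, 8); // illustrative cap, tune for your workload
        return new NioSocketAcceptor(ioProcessors);
    }
}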