5.2 AbstractNioChannel Source Code Analysis
As the name suggests, AbstractNioChannel is the NIO-oriented abstraction of Channel. Let's first look at the class's NioUnsafe interface:
/**
* Special {@link Unsafe} sub-type which allows to access the underlying {@link SelectableChannel}
*/
public interface NioUnsafe extends Unsafe {
/**
* Return underlying {@link SelectableChannel}
*/
SelectableChannel ch(); // the underlying JDK NIO channel
/**
* Finish connect
*/
void finishConnect(); // complete the connect operation
/**
* Read from underlying {@link SelectableChannel}
*/
void read(); // read data from the JDK channel
void forceFlush();
}
Recall the three core NIO concepts: Channel, Buffer and Selector. Netty's Channel wraps the JDK Channel to provide richer functionality. The JDK channel can be obtained via ch() on the Unsafe and via javaChannel() on a NioChannel. The interface defines finishConnect() because, when a SelectableChannel is in non-blocking mode, connect() returns immediately and the connection may not yet be established; in that case the JDK's finishConnect() must be called later to complete the connection. You may have noticed that AbstractUnsafe has no connect event framework. That is because not every Channel has a standard connect procedure; Netty's LocalChannel and EmbeddedChannel, for example, do not. NIO connections, however, follow a fairly standard flow. Before walking through the connect event framework, let's look at the related fields, which are defined in AbstractNioChannel:
/**
* The future of the current connection attempt. If not null, subsequent
* connection attempts will fail.
*/
private ChannelPromise connectPromise; // asynchronous result of the connect operation
private ScheduledFuture<?> connectTimeoutFuture; // future of the connect-timeout detection task
private SocketAddress requestedRemoteAddress; // the requested remote address
The connect event framework:
@Override
public final void connect(
final SocketAddress remoteAddress, final SocketAddress localAddress, final ChannelPromise promise) {
if (!promise.setUncancellable() || !ensureOpen(promise)) {
return; // the channel has already been closed
}
try {
if (connectPromise != null) {
// Already a connect in process.
throw new ConnectionPendingException(); // a connect operation is already in progress
}
boolean wasActive = isActive();
// template method; subclasses supply the details
if (doConnect(remoteAddress, localAddress)) {
fulfillConnectPromise(promise, wasActive); // the connect operation completed immediately
} else {
// the connect operation has not completed yet
connectPromise = promise;
requestedRemoteAddress = remoteAddress;
// this is Netty's connect-timeout mechanism
// Schedule connect timeout.
int connectTimeoutMillis = config().getConnectTimeoutMillis();
if (connectTimeoutMillis > 0) {
connectTimeoutFuture = eventLoop().schedule(new Runnable() {
@Override
public void run() {
ChannelPromise connectPromise = AbstractNioChannel.this.connectPromise;
ConnectTimeoutException cause =
new ConnectTimeoutException("connection timed out: " + remoteAddress);
if (connectPromise != null && connectPromise.tryFailure(cause)) {
close(voidPromise());
}
}
}, connectTimeoutMillis, TimeUnit.MILLISECONDS);
}
promise.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
// if the connect operation is cancelled, cancel the timeout detection task as well
if (future.isCancelled()) {
if (connectTimeoutFuture != null) {
connectTimeoutFuture.cancel(false);
}
connectPromise = null;
close(voidPromise());
}
}
});
}
} catch (Throwable t) {
promise.tryFailure(annotateConnectException(t, remoteAddress));
closeIfClosed();
}
}
The connect event framework includes Netty's connect-timeout detection: a scheduled task is submitted to the EventLoop, and when the configured timeout expires it fails the connect promise and closes the connection. fulfillConnectPromise() marks the promise as successful and fires the channel's Active event:
private void fulfillConnectPromise(ChannelPromise promise, boolean wasActive) {
if (promise == null) {
// Closed via cancellation and the promise has been notified already.
return; // the operation was cancelled or the promise has already been notified
}
// Get the state as trySuccess() may trigger an ChannelFutureListener that will close the Channel.
// We still need to ensure we call fireChannelActive() in this case.
boolean active = isActive();
// trySuccess() will return false if a user cancelled the connection attempt.
boolean promiseSet = promise.trySuccess();
// Regardless if the connection attempt was cancelled, channelActive() event should be triggered,
// because what happened is what happened.
if (!wasActive && active) {
pipeline().fireChannelActive();
}
// If a user cancelled the connection attempt, close the channel, which is followed by channelInactive().
if (!promiseSet) {
close(voidPromise());
}
}
The finishConnect event framework:
@Override
public final void finishConnect() {
// Note this method is invoked by the event loop only if the connection attempt was
// neither cancelled nor timed out.
assert eventLoop().inEventLoop();
try {
boolean wasActive = isActive();
doFinishConnect();
fulfillConnectPromise(connectPromise, wasActive); // fires the Active event the first time the channel becomes active
} catch (Throwable t) {
fulfillConnectPromise(connectPromise, annotateConnectException(t, requestedRemoteAddress));
} finally {
// Check for null as the connectTimeoutFuture is only created if a connectTimeoutMillis > 0 is used
// See https://github.com/netty/netty/issues/1770
if (connectTimeoutFuture != null) {
connectTimeoutFuture.cancel(false); // the connection is complete, cancel the timeout detection task
}
connectPromise = null;
}
}
finishConnect() is only called when the EventLoop handles a ready selection key's OP_CONNECT event, completing the connect operation. Note: it is not called if the connect attempt was cancelled or timed out.
The flush event details:
@Override
protected final void flush0() {
// Flush immediately only when there's no pending flush.
// If there's a pending flush operation, event loop will call forceFlush() later,
// and thus there's no need to call it now.
if (!isFlushPending()) {
super.flush0(); // delegate to the parent method, which guards against re-entrant flushes
}
}
@Override
public final void forceFlush() {
// directly call super.flush0() to force a flush now
super.flush0();
}
private boolean isFlushPending() {
SelectionKey selectionKey = selectionKey();
return selectionKey.isValid() && (selectionKey.interestOps() & SelectionKey.OP_WRITE) != 0;
}
forceFlush() is called by the EventLoop when it processes a ready selection key's OP_WRITE event; it writes the data in the outbound buffer to the channel. isFlushPending() can be confusing: why does interest in OP_WRITE mean a flush is in progress? OP_WRITE means the channel is writable, and a healthy channel is writable most of the time, so if a selection key were permanently interested in OP_WRITE, select() would return constantly and the event loop would spin. Netty therefore uses a write buffer: write operations put data into the buffer, and flush registers interest in OP_WRITE, which is removed again once the flush completes. So if the selection key is interested in OP_WRITE, a flush is currently in progress.
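To make the OP_WRITE discussion concrete, here is a minimal plain-JDK sketch (not Netty source; the method and variable names are illustrative) of the usual pattern: keep OP_WRITE registered only while there is unsent data, and clear it as soon as the buffer drains.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

final class WriteInterestExample {
    /** Writes as much of {@code pending} as the socket accepts and toggles OP_WRITE accordingly. */
    static void flush(SelectionKey key, ByteBuffer pending) throws IOException {
        SocketChannel ch = (SocketChannel) key.channel();
        ch.write(pending); // non-blocking: may write only part of the buffer, or nothing at all
        if (pending.hasRemaining()) {
            // Data left over: watch OP_WRITE so the selector wakes us when the socket is writable again.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // Buffer drained: stop watching OP_WRITE, otherwise select() would keep returning
            // because a healthy socket is writable almost all the time.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}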
AbstractNioUnsafe has one last method, removeReadOp():
protected final void removeReadOp() {
SelectionKey key = selectionKey();
// Check first if the key is still valid as it may be canceled as part of the deregistration
// from the EventLoop
// See https://github.com/netty/netty/issues/2104
if (!key.isValid()) {
return; // the selection key has already been cancelled
}
int interestOps = key.interestOps();
if ((interestOps & readInterestOp) != 0) {
// only remove readInterestOp if needed
key.interestOps(interestOps & ~readInterestOp); // no longer interested in the read event
}
}
Netty unifies the server's OP_ACCEPT and the client's OP_READ under a single abstract Read event. At the NIO level, I/O events are represented as a bitmap: each bit corresponds to one I/O event, and a bit set to 1 means that event is of interest. readInterestOp has exactly one bit set, so interestOps & ~readInterestOp clears just that bit; removeReadOp() therefore makes the SelectionKey stop caring about the Read event. There are analogous helpers such as setReadOp(), removeWriteOp() and setWriteOp().
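As a quick illustration of that bit arithmetic (a standalone sketch, not Netty code), each interest op occupies its own bit, so & ~op clears exactly one event and | op sets it back:
import java.nio.channels.SelectionKey;

final class InterestOpsExample {
    public static void main(String[] args) {
        int readInterestOp = SelectionKey.OP_READ;                       // a single bit
        int interestOps = SelectionKey.OP_READ | SelectionKey.OP_WRITE;  // interested in read and write

        int withoutRead = interestOps & ~readInterestOp; // clears only the read bit, keeps OP_WRITE
        int withRead = withoutRead | readInterestOp;     // sets the read bit back

        System.out.println((withoutRead & SelectionKey.OP_READ) != 0); // false
        System.out.println((withRead & SelectionKey.OP_READ) != 0);    // true
    }
}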
Having finished with AbstractNioUnsafe, let's turn to AbstractNioChannel itself, starting with the fields we have not covered yet:
private final SelectableChannel ch; // the wrapped JDK channel
protected final int readInterestOp; // the Read event: OP_ACCEPT for server channels, OP_READ otherwise
volatile SelectionKey selectionKey; // the selection key of the JDK channel
boolean readPending; // marks that a low-level read is in progress
Next, the constructor:
/**
* Create a new instance
*
* @param parent the parent {@link Channel} by which this instance was created. May be {@code null}
* @param ch the underlying {@link SelectableChannel} on which it operates
* @param readInterestOp the ops to set to receive data from the {@link SelectableChannel}
*/
protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
super(parent);
this.ch = ch;
this.readInterestOp = readInterestOp;
try {
ch.configureBlocking(false); // switch to non-blocking mode
} catch (IOException e) {
try {
ch.close();
} catch (IOException e2) {
if (logger.isWarnEnabled()) {
logger.warn(
"Failed to close a partially initialized socket.", e2);
}
}
throw new ChannelException("Failed to enter non-blocking mode.", e);
}
}
The call ch.configureBlocking(false) puts the channel into non-blocking mode, which is what gives Netty the ability to handle I/O events without blocking.
Among AbstractNioChannel's methods, we focus on the doXXX() methods that fill in the details of the I/O event frameworks.
@Override
protected void doRegister() throws Exception {
boolean selected = false;
for (;;) {
try {
selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
return;
} catch (CancelledKeyException e) {
if (!selected) {
// Force the Selector to select now as the "canceled" SelectionKey may still be
// cached and not removed because no Select.select(..) operation was called yet.
// the key was cancelled; call selectNow() to flush the cancelled keys still cached by the selector
eventLoop().selectNow();
selected = true;
} else {
// We forced a select operation on the selector before but the SelectionKey is still cached
// for whatever reason. JDK bug ?
throw e;
}
}
}
}
@Override
protected void doDeregister() throws Exception {
eventLoop().cancel(selectionKey());
}
For the register event, once we know the channel is an NIO channel, every detail of the operation is determined: the channel is simply registered with the given NioEventLoop's selector. Note that the second argument, 0, means no events are of interest at registration time, and the third argument is the Netty NioChannel object itself, attached to the key. For the deregister event, the selection key is cancelled. A selection key represents the relationship between a JDK channel and a selector; calling cancel() ends that relationship and thereby deregisters the channel from the NioEventLoop. Be aware that cancellation does not take effect immediately: the cancelled key is moved into the selector's cancelled-key set and is only removed on the next select call (or when an in-progress select call finishes), at which point deregistration is really complete. A cancelled key is invalid, and invoking its methods throws CancelledKeyException.
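The same register/deregister mechanics can be reproduced with plain JDK NIO; the following is a minimal sketch (not Netty source, names are illustrative) showing the ops-0 registration with an attachment, and why a selectNow() is needed after cancel():
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

final class RegisterExample {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);

        Object attachment = new Object();                         // Netty attaches the NioChannel itself here
        SelectionKey key = ch.register(selector, 0, attachment);  // ops = 0: interested in nothing yet

        key.cancel();                       // deregister: the key moves into the cancelled-key set
        System.out.println(key.isValid());  // false, but the key has not been removed yet
        selector.selectNow();               // the next select call actually flushes the cancelled-key set

        ch.close();
        selector.close();
    }
}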
@Override
protected void doBeginRead() throws Exception {
// Channel.read() or ChannelHandlerContext.read() was called
final SelectionKey selectionKey = this.selectionKey;
if (!selectionKey.isValid()) {
return; // the selection key has been cancelled and is no longer valid
}
readPending = true; // mark that a low-level read is in progress
final int interestOps = selectionKey.interestOps();
if ((interestOps & readInterestOp) == 0) {
// make the selection key interested in the Read event
selectionKey.interestOps(interestOps | readInterestOp);
}
}
For a NioChannel, the beginRead event simply adds the Read event to the selection key's interest set; after that, whenever a later select() call finds the channel's Read event ready, Netty's read() operation is triggered.
@Override
protected void doClose() throws Exception {
ChannelPromise promise = connectPromise;
if (promise != null) {
// Use tryFailure() instead of setFailure() to avoid the race against cancel().
// a connect operation is still in progress but the user has called close
promise.tryFailure(DO_CLOSE_CLOSED_CHANNEL_EXCEPTION);
connectPromise = null;
}
ScheduledFuture<?> future = connectTimeoutFuture;
if (future != null) { // cancel the connect-timeout detection task if there is one
future.cancel(false);
connectTimeoutFuture = null;
}
}
doClose() here only deals with the aftermath of a pending connect operation; it does not actually close the channel, so subclasses must add the remaining details. AbstractNioChannel also contains methods for creating direct buffers, which we will analyze later when needed; the remaining methods are straightforward and are not listed here. Finally, isCompatible() shows that a NioChannel can only be used with a NioEventLoop:
@Override
protected boolean isCompatible(EventLoop loop) {
return loop instanceof NioEventLoop;
}
AbstractNioChannel has two lines of subclasses: AbstractNioMessageChannel on the server side and AbstractNioByteChannel on the client side. We start with the server-side AbstractNioMessageChannel.
5.3 AbstractNioMessageChannel Source Code Analysis
AbstractNioMessageChannel is a NioChannel whose unit of data is a message. In Netty, a channel accepted by the server counts as one message, and so does a UDP datagram. This class mainly supplies the doWrite details of the flush event framework and implements the read event framework (in the inner class NioMessageUnsafe). First, the read event framework:
@Override
public void read() {
assert eventLoop().inEventLoop();
final ChannelConfig config = config();
final ChannelPipeline pipeline = pipeline();
final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
allocHandle.reset(config);
boolean closed = false;
Throwable exception = null;
try {
try {
do {
int localRead = doReadMessages(readBuf); // template method: read messages
if (localRead == 0) { // nothing to read
break;
}
if (localRead < 0) { // read error
closed = true;
break;
}
allocHandle.incMessagesRead(localRead);
} while (allocHandle.continueReading());
} catch (Throwable t) {
exception = t;
}
int size = readBuf.size();
for (int i = 0; i < size; i ++) {
readPending = false; // no low-level read pending any more
pipeline.fireChannelRead(readBuf.get(i)); // fire ChannelRead for user handlers
}
readBuf.clear();
allocHandle.readComplete();
// if autoRead is enabled, channelReadComplete calls beginRead again, so reads keep going
pipeline.fireChannelReadComplete(); // fire ChannelReadComplete for user handlers
if (exception != null) {
closed = closeOnReadError(exception);
pipeline.fireExceptionCaught(exception);
}
if (closed) {
inputShutdown = true;
if (isOpen()) {
close(voidPromise()); // the channel is still open, so close it
}
}
} finally {
// Check if there is a readPending which was not processed yet.
// This could be for two reasons:
// * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
// * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
//
// See https://github.com/netty/netty/issues/2254
if (!readPending && !config.isAutoRead()) {
// neither autoRead is enabled nor is a low-level read pending
removeReadOp(); // stop being interested in the read event
}
}
}
}
The flow of the read event framework is annotated in the code; note that the detail of reading messages, doReadMessages(readBuf), is left to subclasses.
Our main interest is NioServerSocketChannel, which does not support write operations, so we skip the doWrite detail method of this class's flush framework and move straight to the next target: NioServerSocketChannel.
5.4 NioServerSocketChannel Source Code Analysis
You have certainly used NioServerSocketChannel already; Netty's examples rely on it heavily. As a bottom-most Channel subclass, NioServerSocketChannel implements the low-level details of the I/O event frameworks. Note first that NioServerSocketChannel only supports the bind, read and close operations.
@Override
protected void doBind(SocketAddress localAddress) throws Exception {
if (PlatformDependent.javaVersion() >= 7) { // JDK 7 or above
javaChannel().bind(localAddress, config.getBacklog());
} else {
javaChannel().socket().bind(localAddress, config.getBacklog());
}
}
@Override
protected void doClose() throws Exception {
javaChannel().close();
}
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
SocketChannel ch = SocketUtils.accept(javaChannel()); // accept one connection; may return null
try {
if (ch != null) {
buf.add(new NioSocketChannel(this, ch)); // wrap the accepted JDK channel as a NioSocketChannel
return 1; // at most one message (one accepted connection) per call
}
} catch (Throwable t) {
logger.warn("Failed to create a new channel from an accepted socket.", t);
try {
ch.close();
} catch (Throwable t2) {
logger.warn("Failed to close a socket.", t2);
}
}
return 0;
}
All of these simply delegate to the JDK channel, supplying the lowest-level details. Note that doReadMessages() returns at most one message (one accepted connection) per call, so a single read operation on NioServerSocketChannel processes at most config.getMaxMessagesPerRead() connections, i.e. the value of the MAX_MESSAGES_PER_READ option. Also, doClose() here overrides AbstractNioChannel's implementation, because NioServerSocketChannel does not support connect and therefore needs no connect-timeout handling.
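For example, this per-read limit can be tuned through the server bootstrap. The sketch below is illustrative (the value 16 and the port are arbitrary), and note that recent Netty 4.1 releases mark MAX_MESSAGES_PER_READ as deprecated in favor of configuring the channel's RecvByteBufAllocator:
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

final class MaxMessagesPerReadExample {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             // accept at most 16 connections per read() invocation of the server channel
             .option(ChannelOption.MAX_MESSAGES_PER_READ, 16)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) { /* add handlers here */ }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}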
Finally, the key constructors:
/**
* Create a new instance
*/
public NioServerSocketChannel() {
this(newSocket(DEFAULT_SELECTOR_PROVIDER));
}
/**
* Create a new instance using the given {@link SelectorProvider}.
*/
public NioServerSocketChannel(SelectorProvider provider) {
this(newSocket(provider));
}
/**
* Create a new instance using the given {@link ServerSocketChannel}.
*/
public NioServerSocketChannel(ServerSocketChannel channel) {
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
The crucial part is SelectionKey.OP_ACCEPT: this is where Netty defines NioServerSocketChannel's read event as the NIO-level OP_ACCEPT, completing the unified abstraction of the read event.
That concludes the server side of our two lines of analysis; next we turn to the client side, starting with AbstractNioChannel's other subclass, AbstractNioByteChannel.
5.5 AbstractNioByteChannel Source Code Analysis
As the name implies, AbstractNioByteChannel's unit of data is bytes. First the constructor:
/**
* Create a new instance
*
* @param parent the parent {@link Channel} by which this instance was created. May be {@code null}
* @param ch the underlying {@link SelectableChannel} on which it operates
*/
protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
super(parent, ch, SelectionKey.OP_READ);
}
The SelectionKey.OP_READ argument shows that AbstractNioByteChannel's read event is the NIO-level OP_READ event.
Now let's look at the read event framework:
@Override
public final void read() {
final ChannelConfig config = config();
if (shouldBreakReadReady(config)) {
clearReadPending();
return;
}
final ChannelPipeline pipeline = pipeline();
final ByteBufAllocator allocator = config.getAllocator();
final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
allocHandle.reset(config);
ByteBuf byteBuf = null; // holds the buffer used for each read below
boolean close = false;
try {
do {
byteBuf = allocHandle.allocate(allocator); // allocate a ByteBuf
allocHandle.lastBytesRead(doReadBytes(byteBuf)); // doReadBytes is a template method; subclasses supply the details
if (allocHandle.lastBytesRead() <= 0) { // nothing was read
// nothing was read. release the buffer.
byteBuf.release();
byteBuf = null;
close = allocHandle.lastBytesRead() < 0; // a negative read count means the remote peer has closed the connection
if (close) {
// There is nothing left to read as we received an EOF.
readPending = false;
}
break;
}
allocHandle.incMessagesRead(1);
readPending = false; // no low-level read pending any more
pipeline.fireChannelRead(byteBuf); // fire ChannelRead for user handlers
byteBuf = null;
} while (allocHandle.continueReading());
allocHandle.readComplete();
// if autoRead is enabled, readComplete calls beginRead again, so reading can continue
pipeline.fireChannelReadComplete();
if (close) {
closeOnRead(pipeline);
}
} catch (Throwable t) {
handleReadException(pipeline, byteBuf, t, close, allocHandle);
} finally {
// Check if there is a readPending which was not processed yet.
// This could be for two reasons:
// * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
// * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
//
// See https://github.com/netty/netty/issues/2254
if (!readPending && !config.isAutoRead()) {
// neither autoRead is enabled nor is a low-level read pending
removeReadOp();
}
}
}
}
AbstractNioByteChannel's read event framework differs slightly from AbstractNioMessageChannel's: the message-oriented channel reads all messages first and then fires ChannelRead for each of them, whereas the byte-oriented channel fires ChannelRead as soon as it has read a chunk of bytes. The reason is that AbstractNioMessageChannel is tuned for throughput, especially ServerSocketChannel, which should accept as many connections as possible, while AbstractNioByteChannel is tuned for latency and should respond to the remote peer as quickly as possible.
Please follow the code and its comments for the detailed flow of the read event. Note the code dealing with the receive-buffer allocator; that part deserves its own section and will be analyzed later. When the read count is negative, the remote peer has closed the connection, and closeOnRead(pipeline) is called:
private void closeOnRead(ChannelPipeline pipeline) {
if (!isInputShutdown0()) {
if (isAllowHalfClosure(config())) {
shutdownInput();
pipeline.fireUserEventTriggered(ChannelInputShutdownEvent.INSTANCE);
} else {
close(voidPromise()); // close the channel directly
}
} else {
inputClosedSeenErrorOnRead = true;
pipeline.fireUserEventTriggered(ChannelInputShutdownReadComplete.INSTANCE);
}
}
This code is exactly what the ALLOW_HALF_CLOSURE channel option describes: when it is true, the user event ChannelInputShutdownEvent is fired; otherwise the channel is simply closed (a usage sketch follows at the end of this exception-handling discussion). When an exception is thrown, handleReadException(pipeline, byteBuf, t, close, allocHandle) is called:
private void handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, Throwable cause, boolean close,
RecvByteBufAllocator.Handle allocHandle) {
if (byteBuf != null) { // some data was read before the exception
if (byteBuf.isReadable()) { // the data is readable
readPending = false;
pipeline.fireChannelRead(byteBuf);
} else { // nothing readable, release the buffer
byteBuf.release();
}
}
allocHandle.readComplete();
pipeline.fireChannelReadComplete();
pipeline.fireExceptionCaught(cause);
if (close || cause instanceof IOException) {
closeOnRead(pipeline);
}
}
So when an exception is thrown, any data that was already read is still delivered via ChannelRead just as in the normal path; the only difference is that an ExceptionCaught event is fired at the end for the user to handle.
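As promised above, here is a usage sketch of half-closure handling (the handler, payload and option wiring are illustrative, not Netty source): enable ALLOW_HALF_CLOSURE on the child channel and react to ChannelInputShutdownEvent in userEventTriggered.
import java.nio.charset.StandardCharsets;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.socket.ChannelInputShutdownEvent;

final class HalfClosureHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof ChannelInputShutdownEvent) {
            // The peer has shut down its output (we read -1), but our side can still write:
            // send the remaining response, then close our side explicitly.
            byte[] bye = "bye".getBytes(StandardCharsets.US_ASCII);
            ctx.writeAndFlush(ctx.alloc().buffer().writeBytes(bye))
               .addListener(future -> ctx.close());
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
// Enabled when bootstrapping, e.g.: bootstrap.childOption(ChannelOption.ALLOW_HALF_CLOSURE, true);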
That concludes the read event framework. Next we analyze the write details in doWrite(); before that, let's look at filterOutboundMessage(), which filters the data to be written:
@Override
protected final Object filterOutboundMessage(Object msg) {
if (msg instanceof ByteBuf) {
ByteBuf buf = (ByteBuf) msg;
if (buf.isDirect()) {
return msg;
}
return newDirectBuffer(buf); // convert a heap buffer into a direct buffer
}
if (msg instanceof FileRegion) {
return msg;
}
throw new UnsupportedOperationException(
"unsupported message type: " + StringUtil.simpleClassName(msg) + EXPECTED_TYPES);
}
As we can see, Netty only supports two kinds of outbound data here: direct ByteBufs and FileRegions. Now let's see how this data is written to the channel, that is, the doWrite() method:
/**
* Write objects to the OS.
* @param in the collection which contains objects to write.
* @return The value that should be decremented from the write quantum which starts at
* {@link ChannelConfig#getWriteSpinCount()}. The typical use cases are as follows:
*
* - 0 - if no write was attempted. This is appropriate if an empty {@link ByteBuf} (or other empty content)
* is encountered
* - 1 - if a single call to write data was made to the OS
* - {@link ChannelUtils#WRITE_STATUS_SNDBUF_FULL} - if an attempt to write data was made to the OS, but no
* data was accepted
*
* @throws Exception if an I/O exception occurs during write.
*/
protected final int doWrite0(ChannelOutboundBuffer in) throws Exception {
Object msg = in.current();
if (msg == null) {
// Directly return here so incompleteWrite(...) is not called.
return 0;
}
return doWriteInternal(in, in.current());
}
private int doWriteInternal(ChannelOutboundBuffer in, Object msg) throws Exception {
if (msg instanceof ByteBuf) {
ByteBuf buf = (ByteBuf) msg;
if (!buf.isReadable()) {
in.remove();
return 0;
}
final int localFlushedAmount = doWriteBytes(buf); // template method; subclasses supply the details
if (localFlushedAmount > 0) {
in.progress(localFlushedAmount); // record progress
if (!buf.isReadable()) {
in.remove(); // fully written: remove it from the outbound buffer
}
return 1; // one write call was made to the OS; decrements the spin count
}
} else if (msg instanceof FileRegion) {
FileRegion region = (FileRegion) msg;
if (region.transferred() >= region.count()) {
in.remove();
return 0; // the region has already been fully transferred, nothing to write
}
long localFlushedAmount = doWriteFileRegion(region);
if (localFlushedAmount > 0) {
in.progress(localFlushedAmount); // record progress
if (region.transferred() >= region.count()) {
in.remove();
}
return 1;
}
} else {
// Should not reach here.
throw new Error(); // other message types are not supported
}
return WRITE_STATUS_SNDBUF_FULL;
}
@Override
protected void doWrite(ChannelOutboundBuffer in) throws Exception {
int writeSpinCount = config().getWriteSpinCount();
do {
Object msg = in.current();
if (msg == null) { // all data has been written
// Wrote all messages.
clearOpWrite(); // no longer interested in OP_WRITE
// Directly return here so incompleteWrite(...) is not called.
return;
}
writeSpinCount -= doWriteInternal(in, msg);
} while (writeSpinCount > 0);
incompleteWrite(writeSpinCount < 0);
}
We will not go into the FileRegion branch in detail here. FileRegion is Netty's wrapper around NIO's FileChannel and is responsible for writing a file's data into a WritableChannel; its default implementation is DefaultFileRegion, whose source you can study on your own if you are interested.
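As a usage sketch (the file handling and names are illustrative), a FileRegion is typically written like this to get zero-copy file transfer:
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.FileRegion;

final class FileRegionExample {
    /** Writes the whole file to the channel as a single FileRegion (transferTo under the hood). */
    static void sendFile(ChannelHandlerContext ctx, File file) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        FileRegion region = new DefaultFileRegion(raf.getChannel(), 0, raf.length());
        ctx.writeAndFlush(region).addListener(future -> raf.close()); // release the file when the write completes
    }
}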
We focus on the ByteBuf handling. The doWrite() flow is concise: the core operation is the template method doWriteBytes(buf), which writes the ByteBuf's data to the channel. Since the underlying NIO write returns the number of bytes actually written, which may be 0 in non-blocking mode, incompleteWrite() is called in that case:
protected final void incompleteWrite(boolean setOpWrite) {
// Did not write completely.
if (setOpWrite) {
setOpWrite(); // 设置继续关心OP_WRITE事件
} else {
// It is possible that we have set the write OP, woken up by NIO because the socket is writable, and then
// use our write quantum. In this case we no longer want to set the write OP because the socket is still
// writable (as far as we know). We will find out next time we attempt to write if the socket is writable
// and set the write OP if necessary.
clearOpWrite();
// Schedule flush again later so other tasks can be picked up in the meantime
eventLoop().execute(flushTask); // 再次提交一个flush()任务
}
}
This method distinguishes two cases. In the first case mentioned above (the OS accepted zero bytes), the SelectionKey is set to keep watching OP_WRITE so that writing can resume once the socket becomes writable. In the second case, the number of write attempts has reached the configured writeSpinCount but the data is not yet fully written; a new flush task is then submitted to the EventLoop so that other requests can be served in the meantime, improving response time. This prevents a large write from monopolizing the event loop and starving other requests, which is a reasonably fair policy. It also raises a question: how would you build a high-performance file server with Netty?
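The spin count itself is configurable per channel. The sketch below is illustrative (the value 8 is arbitrary); for a file server one would typically combine this with FileRegion or a ChunkedWriteHandler (from the netty-handler module) so that large payloads are streamed in chunks:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.stream.ChunkedWriteHandler;

final class WriteSpinCountExample {
    public static void main(String[] args) {
        Bootstrap b = new Bootstrap();
        b.group(new NioEventLoopGroup())
         .channel(NioSocketChannel.class)
         // at most 8 write() calls per flush before the channel yields back to the event loop
         .option(ChannelOption.WRITE_SPIN_COUNT, 8)
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) {
                 // ChunkedWriteHandler streams large payloads in chunks instead of one huge write
                 ch.pipeline().addLast(new ChunkedWriteHandler());
             }
         });
    }
}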
At this point we have covered the read event and the doWrite details for byte data. Next, we continue with NioSocketChannel to complete the details of the various event frameworks.
5.6 NioSocketChannel Source Code Analysis
NioSocketChannel, as a bottom-most Channel subclass, implements the lowest-level NIO socket details. First, doBind():
@Override
protected void doBind(SocketAddress localAddress) throws Exception {
doBind0(localAddress);
}
private void doBind0(SocketAddress localAddress) throws Exception {
if (PlatformDependent.javaVersion() >= 7) { // JDK 7 or above
SocketUtils.bind(javaChannel(), localAddress);
} else {
SocketUtils.bind(javaChannel().socket(), localAddress);
}
}
This code is the same as in NioServerSocketChannel: the bind operation is delegated to the JDK channel.
Next, the doConnect() and doFinishConnect() methods:
@Override
protected boolean doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception {
if (localAddress != null) {
doBind0(localAddress);
}
boolean success = false;
try {
boolean connected = SocketUtils.connect(javaChannel(), remoteAddress);
if (!connected) {
// register interest in OP_CONNECT; finishConnect() will be called when the event becomes ready
selectionKey().interestOps(SelectionKey.OP_CONNECT);
}
success = true;
return connected;
} finally {
if (!success) {
doClose();
}
}
}
@Override
protected void doFinishConnect() throws Exception {
if (!javaChannel().finishConnect()) {
throw new Error();
}
}
In non-blocking mode, the JDK channel's connect() returns immediately: true if the connection was established right away, false if the operation is still in progress. In the latter case, finishConnect() must be called once the low-level OP_CONNECT event becomes ready in order to complete the connection.
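For reference, the same connect/finishConnect handshake in plain JDK NIO looks roughly like this (a standalone sketch, not Netty code; host and port are illustrative):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

final class NonBlockingConnectExample {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);

        boolean connected = ch.connect(new InetSocketAddress("example.com", 80));
        if (!connected) {
            // still connecting: wait for OP_CONNECT, then finish the handshake
            ch.register(selector, SelectionKey.OP_CONNECT);
            while (selector.select() > 0) {
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isConnectable() && ch.finishConnect()) {
                        key.interestOps(SelectionKey.OP_READ); // connected: switch to reading
                    }
                }
                selector.selectedKeys().clear();
                if (ch.isConnected()) {
                    break;
                }
            }
        }
        ch.close();
        selector.close();
    }
}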
Next, the doDisconnect() and doClose() methods:
@Override
protected void doDisconnect() throws Exception {
doClose();
}
@Override
protected void doClose() throws Exception {
super.doClose(); // AbstractNioChannel's connect-timeout cleanup
javaChannel().close();
}
Then the core doReadBytes() and doWriteXXX() methods:
@Override
protected int doReadBytes(ByteBuf byteBuf) throws Exception {
final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
allocHandle.attemptedBytesRead(byteBuf.writableBytes());
return byteBuf.writeBytes(javaChannel(), allocHandle.attemptedBytesRead());
}
@Override
protected int doWriteBytes(ByteBuf buf) throws Exception {
final int expectedWrittenBytes = buf.readableBytes();
return buf.readBytes(javaChannel(), expectedWrittenBytes);
}
@Override
protected long doWriteFileRegion(FileRegion region) throws Exception {
final long position = region.transferred();
return region.transferTo(javaChannel(), position);
}
The read and write operations are delegated to ByteBuf; a dedicated chapter later on will cover those details.
The most important part of NioSocketChannel is that it overrides the parent's doWrite() with a more efficient write implementation:
@Override
protected void doWrite(ChannelOutboundBuffer in) throws Exception {
SocketChannel ch = javaChannel();
int writeSpinCount = config().getWriteSpinCount();
do {
if (in.isEmpty()) {
// All written so clear OP_WRITE
clearOpWrite(); // everything has been written; stop watching OP_WRITE
// Directly return here so incompleteWrite(...) is not called.
return;
}
// Ensure the pending writes are made of ByteBufs only.
int maxBytesPerGatheringWrite = ((NioSocketChannelConfig) config).getMaxBytesPerGatheringWrite();
ByteBuffer[] nioBuffers = in.nioBuffers(1024, maxBytesPerGatheringWrite);
int nioBufferCnt = in.nioBufferCount();
// Always use nioBuffers() to workaround data-corruption.
// See https://github.com/netty/netty/issues/2761
switch (nioBufferCnt) {
case 0: // no ByteBuffer at all, i.e. only FileRegions
// We have something else beside ByteBuffers to write so fallback to normal writes.
writeSpinCount -= doWrite0(in); // fall back to the parent's normal handling
break;
case 1: { // a single ByteBuffer: equivalent to the parent's handling
// Only one ByteBuf so use non-gathering write
// Zero length buffers are not added to nioBuffers by ChannelOutboundBuffer, so there is no need
// to check if the total size of all the buffers is non-zero.
ByteBuffer buffer = nioBuffers[0];
int attemptedBytes = buffer.remaining();
final int localWrittenBytes = ch.write(buffer);
if (localWrittenBytes <= 0) {
incompleteWrite(true);
return;
}
adjustMaxBytesPerGatheringWrite(attemptedBytes, localWrittenBytes, maxBytesPerGatheringWrite);
in.removeBytes(localWrittenBytes);
--writeSpinCount;
break;
}
default: { // multiple ByteBuffers: use a gathering write
// Zero length buffers are not added to nioBuffers by ChannelOutboundBuffer, so there is no need
// to check if the total size of all the buffers is non-zero.
// We limit the max amount to int above so cast is safe
long attemptedBytes = in.nioBufferSize();
// gathering write: several ByteBuffers are written in a single call
final long localWrittenBytes = ch.write(nioBuffers, 0, nioBufferCnt);
if (localWrittenBytes <= 0) {
incompleteWrite(true);
return;
}
// Casting to int is safe because we limit the total amount of data in the nioBuffers to int above.
adjustMaxBytesPerGatheringWrite((int) attemptedBytes, (int) localWrittenBytes,
maxBytesPerGatheringWrite);
in.removeBytes(localWrittenBytes); // clean up the outbound buffer
--writeSpinCount;
break;
}
}
} while (writeSpinCount > 0);
incompleteWrite(writeSpinCount < 0);
}
Once the parent's doWrite() is understood, this code is easy to follow. The optimization is that when the outbound buffer holds multiple buffers, a gathering write is used to write the data from all of them to the channel in one call.
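For comparison, the underlying JDK gathering-write API that this optimization relies on looks like this (a standalone sketch, not Netty code):
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

final class GatheringWriteExample {
    /** Writes several buffers to the socket with a single gathering write call. */
    static long writeAll(SocketChannel ch) throws IOException {
        ByteBuffer header = ByteBuffer.wrap("HEADER".getBytes(StandardCharsets.US_ASCII));
        ByteBuffer body = ByteBuffer.wrap("BODY".getBytes(StandardCharsets.US_ASCII));
        ByteBuffer[] buffers = { header, body };
        // one call writes from both buffers in order and returns the total number of bytes written
        return ch.write(buffers, 0, buffers.length);
    }
}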
In the earlier analysis of AbstractUnsafe's close event framework there was a prepareToClose() method, which performs the preparation needed for closing and, when necessary, returns an Executor on which doClose() is executed; the default implementation returns null. NioSocketChannelUnsafe overrides it as follows:
@Override
protected Executor prepareToClose() {
try {
if (javaChannel().isOpen() && config().getSoLinger() > 0) {
// We need to cancel this key of the channel so we may not end up in a eventloop spin
// because we try to read or write until the actual close happens which may be later due
// SO_LINGER handling.
// See https://github.com/netty/netty/issues/4449
doDeregister(); // cancel the selection key
return GlobalEventExecutor.INSTANCE;
}
} catch (Throwable ignore) {
// Ignore the error as the underlying channel may be closed in the meantime and so
// getSoLinger() may produce an exception. In this case we just return null.
// See https://github.com/netty/netty/issues/4449
}
return null;
}
SO_LINGER is the socket close delay: during that time the kernel keeps sending the data remaining in the TCP buffer to the peer, and the thread executing close blocks until the data has been sent. Netty's rule is that I/O threads must never block, so in this case an Executor is returned to run the blocking doClose(). doDeregister() cancels the selection key because, during the lingering close, if the key were still interested in OP_WRITE while the outbound buffer is already null, write operations would return immediately without ever reaching clearOpWrite() to drop the OP_WRITE interest; since the channel is normally writable, OP_WRITE would keep firing and burn CPU, so the key is cancelled to remove the registered interest.
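SO_LINGER itself is configured per connection; a small illustrative sketch (the 5-second value is arbitrary):
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

final class SoLingerExample {
    public static void main(String[] args) {
        ServerBootstrap b = new ServerBootstrap();
        b.group(new NioEventLoopGroup(1), new NioEventLoopGroup())
         .channel(NioServerSocketChannel.class)
         // close() will linger for up to 5 seconds while unsent data is flushed to the peer;
         // with SO_LINGER > 0, NioSocketChannelUnsafe.prepareToClose() hands doClose() to
         // GlobalEventExecutor so the event loop itself never blocks.
         .childOption(ChannelOption.SO_LINGER, 5)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) { /* add handlers here */ }
         });
    }
}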