Netty Source Code: Analysis of the Netty Server Processing Flow
Note: Figure 2 is taken from the Baidu image library; the source code in this article comes from Netty 4.0.15.
Threads in the NIO processing model work as follows:
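Since the accompanying figure may not render here, the model can be sketched with plain JDK NIO (a standalone illustration, not Netty code): one thread owns a Selector, channels register with it, and that thread repeatedly selects and dispatches the ready keys. The class and method names below (`MiniEventLoop`, `pollOnce`) are illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

// A minimal sketch of the single-threaded NIO event loop that Netty's
// NioEventLoop is built around: register channels with one Selector,
// then repeatedly select and dispatch ready keys on one thread.
public class MiniEventLoop {
    public static int pollOnce(Selector selector) throws IOException {
        int ready = selector.selectNow();           // non-blocking select
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();                            // processed keys must be removed
            if (key.isAcceptable()) {
                // accept the new connection, register it for OP_READ ...
            } else if (key.isReadable()) {
                // read from the channel, hand the data to handlers ...
            }
        }
        return ready;
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.register(selector, SelectionKey.OP_ACCEPT);
        System.out.println("ready keys: " + pollOnce(selector)); // no client yet
        server.close();
        selector.close();
    }
}
```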
An EventLoopGroup represents a group of EventLoops. Typical server initialization code looks like this:
```java
bossGroup = new NioEventLoopGroup(bossGroupCount);
workerGroup = new NioEventLoopGroup(workGroupCount);
ServerBootstrap serverBootstrap = new ServerBootstrap();
// Set the event loop groups: the former handles accept events,
// the latter handles I/O on established connections
serverBootstrap.group(bossGroup, workerGroup);
// Factory class used to construct the ServerSocketChannel for newly accepted connections
serverBootstrap.channel(NioServerSocketChannel.class);
// Inbound handler pre-added to the pipeline of each accepted channel
serverBootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    // Called when a new connection is accepted
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();
        // Data encoder/decoder
        pipeline.addLast("codec", new MessageCodec());
        // Disconnect idle clients after a timeout
        if (idleTimeout > 0) {
            pipeline.addLast("timeout",
                new IdleStateHandler(0, 0, idleTimeout, TimeUnit.SECONDS));
        }
        // Business data handler
        pipeline.addLast("handler", new DataHandler(socketLogFlag));
    }
});
......
serverBootstrap.bind(serverPort).sync();
```
Focusing on the first few lines, we see that a ServerBootstrap is initialized with two NioEventLoopGroups; these two event loop groups form the core of Netty's NIO threading model.
When the server binds an address, the call flow is: AbstractBootstrap.bind(SocketAddress) → AbstractBootstrap.doBind(SocketAddress) → AbstractBootstrap.initAndRegister().
initAndRegister mainly initializes a Netty Channel and registers it with the first NioEventLoopGroup.
```java
final ChannelFuture initAndRegister() {
    final Channel channel = channelFactory().newChannel();
    try {
        init(channel);
    } catch (Throwable t) {
        channel.unsafe().closeForcibly();
        return channel.newFailedFuture(t);
    }

    ChannelFuture regFuture = group().register(channel);
    if (regFuture.cause() != null) {
        if (channel.isRegistered()) {
            channel.close();
        } else {
            channel.unsafe().closeForcibly();
        }
    }
    return regFuture;
}
```
In other words, when a Netty server starts, it first registers a Channel with the first NioEventLoopGroup to listen for NIO events on the bound port.
Let us look at the class hierarchy of NioEventLoopGroup:
It extends MultithreadEventLoopGroup, and as the name suggests, the group's members are multiple threads. In the constructor of MultithreadEventExecutorGroup:
```java
protected MultithreadEventExecutorGroup(int nThreads, ThreadFactory threadFactory, Object... args) {
    if (nThreads <= 0) {
        throw new IllegalArgumentException(String.format("nThreads: %d (expected: > 0)", nThreads));
    }

    if (threadFactory == null) {
        threadFactory = newDefaultThreadFactory();
    }

    children = new SingleThreadEventExecutor[nThreads];
    for (int i = 0; i < nThreads; i ++) {
        boolean success = false;
        try {
            children[i] = newChild(threadFactory, args);
            success = true;
        } catch (Exception e) {
            // TODO: Think about if this is a good exception type
            throw new IllegalStateException("failed to create a child event loop", e);
        } finally {
            ......
        }
```
So the number we pass when creating a NioEventLoopGroup is actually the number of members in the group, and each of the children is ultimately a NioEventLoop that processes events along the flow shown in the opening figure.
The registration code in MultithreadEventLoopGroup:
```java
@Override
public EventLoop next() {
    return (EventLoop) super.next();
}

@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel);
}
```
MultithreadEventExecutorGroup.next():
```java
@Override
public EventExecutor next() {
    return children[Math.abs(childIndex.getAndIncrement() % children.length)];
}
```
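The selection logic above can be reproduced in isolation. The sketch below (`RoundRobinChooser` is an illustrative name, not a Netty class) shows the round-robin behavior of the counter-modulo-length expression:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of MultithreadEventExecutorGroup.next()'s round-robin child
// selection: an ever-incrementing counter modulo the child count.
public class RoundRobinChooser {
    private final AtomicInteger childIndex = new AtomicInteger();
    private final int nChildren;

    public RoundRobinChooser(int nChildren) {
        this.nChildren = nChildren;
    }

    public int nextIndex() {
        // Math.abs guards against the counter overflowing into negatives
        return Math.abs(childIndex.getAndIncrement() % nChildren);
    }

    public static void main(String[] args) {
        RoundRobinChooser chooser = new RoundRobinChooser(3);
        for (int i = 0; i < 6; i++) {
            System.out.print(chooser.nextIndex() + " "); // 0 1 2 0 1 2
        }
    }
}
```

Each call to register therefore lands on the next EventLoop in turn, spreading channels evenly across the group's threads.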
At this point we can conclude: after a Netty server starts and binds to a port, it registers a corresponding Channel with the NioEventLoopGroup, and this process is simply choosing one EventLoop to execute the main flow described in the first section.
The code has now reached the EventLoop. First, look at EventLoop's inheritance hierarchy:
It is immediately clear that NioEventLoop is a single-threaded event loop (SingleThreadEventLoop).
NioEventLoop.run()
```java
protected void run() {
    for (;;) {
        oldWakenUp = wakenUp.getAndSet(false);
        try {
            if (hasTasks()) {
                selectNow();
            } else {
                select();

                if (wakenUp.get()) {
                    selector.wakeup();
                }
            }

            cancelledKeys = 0;

            final long ioStartTime = System.nanoTime();
            needsToSelectAgain = false;
            if (selectedKeys != null) {
                processSelectedKeysOptimized(selectedKeys.flip());
            } else {
                processSelectedKeysPlain(selector.selectedKeys());
            }
            final long ioTime = System.nanoTime() - ioStartTime;

            final int ioRatio = this.ioRatio;
            runAllTasks(ioTime * (100 - ioRatio) / ioRatio);

            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    break;
                }
            }
        } catch (Throwable t) {
            logger.warn("Unexpected exception in the selector loop.", t);

            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Ignore.
            }
        }
    }
}
```
This code is essentially the NIO thread flow from the opening figure: once the server listens on a port and registers a Channel, a NioEventLoop is started to serve that registered Channel. Next, let us see what NioEventLoop actually does.
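The runAllTasks budget in the run() loop is simple arithmetic: after spending ioTime on I/O, queued non-I/O tasks are granted ioTime * (100 - ioRatio) / ioRatio. A small sketch (`IoRatioBudget` is an illustrative name, not a Netty class):

```java
// The run() loop splits time between I/O and queued tasks via ioRatio:
// after spending ioTime on I/O, it grants tasks ioTime * (100 - ioRatio) / ioRatio.
public class IoRatioBudget {
    public static long taskBudgetNanos(long ioTimeNanos, int ioRatio) {
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }

    public static void main(String[] args) {
        // ioRatio 50 (Netty's default): tasks get as much time as I/O just took.
        System.out.println(taskBudgetNanos(1_000_000, 50)); // 1000000
        // ioRatio 80: tasks get only a quarter of the I/O time.
        System.out.println(taskBudgetNanos(1_000_000, 80)); // 250000
    }
}
```

A higher ioRatio thus biases the loop toward I/O processing at the expense of queued tasks.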
SingleThreadEventLoop.register
```java
@Override
public ChannelFuture register(Channel channel) {
    return register(channel, new DefaultChannelPromise(channel, this));
}

@Override
public ChannelFuture register(final Channel channel, final ChannelPromise promise) {
    if (channel == null) {
        throw new NullPointerException("channel");
    }
    if (promise == null) {
        throw new NullPointerException("promise");
    }

    channel.unsafe().register(this, promise);
    return promise;
}
```
AbstractNioChannel.doRegister
```java
protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
        try {
            selectionKey = javaChannel().register(eventLoop().selector, 0, this);
            return;
        } catch (CancelledKeyException e) {
            if (!selected) {
                eventLoop().selectNow();
                selected = true;
            } else {
                // We forced a select operation on the selector before but the SelectionKey is still cached
                // for whatever reason. JDK bug ?
                throw e;
            }
        }
    }
}
```
In other words, the underlying Java channel is registered with the NioEventLoop's selector, and NioEventLoop.run() then continuously selects the corresponding NIO events and processes them.
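What doRegister() does can be reproduced with the plain JDK API: register with interest set 0, attach a context object, and raise the interest ops only later (Netty does this in doBeginRead()). The sketch below uses only java.nio; `RegisterSketch` is an illustrative name:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Sketch of doRegister() at the JDK level: register the channel with
// interest set 0 (no events yet) and attach a context object; the
// interest ops are raised later when the channel is ready to read.
public class RegisterSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel ch = ServerSocketChannel.open();
        ch.configureBlocking(false);

        Object attachment = new Object();       // Netty attaches the AbstractNioChannel itself
        SelectionKey key = ch.register(selector, 0, attachment);
        System.out.println(key.interestOps());  // 0: registered but not listening for events

        key.interestOps(SelectionKey.OP_ACCEPT); // now the selector will report accept events
        System.out.println(key.interestOps() == SelectionKey.OP_ACCEPT); // true

        ch.close();
        selector.close();
    }
}
```

Registering with ops 0 lets the channel join the selector without receiving events before its pipeline is fully set up.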
Let us continue following the NioEventLoop code.
NioEventLoop.processSelectedKey()
```java
private static void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        // close the channel if the key is not valid anymore
        unsafe.close(unsafe.voidPromise());
        return;
    }

    try {
        int readyOps = k.readyOps();
        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
            if (!ch.isOpen()) {
                // Connection already closed - no need to handle write.
                return;
            }
        }
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
            // See https://github.com/netty/netty/issues/924
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }
    } catch (CancelledKeyException e) {
        unsafe.close(unsafe.voidPromise());
    }
}
```
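The readyOps checks above are plain bit tests against the SelectionKey OP_* masks. The hypothetical helper below (`ReadyOpsDemo` and `wantsRead` are illustrative names) isolates the read/accept test and the OP_CONNECT-clearing trick:

```java
import java.nio.channels.SelectionKey;

// The readyOps checks in processSelectedKey are plain bit tests against
// the OP_* masks defined by java.nio.channels.SelectionKey.
public class ReadyOpsDemo {
    static boolean wantsRead(int readyOps) {
        // OP_READ or OP_ACCEPT (or readyOps == 0, the JDK spin-loop workaround)
        // triggers unsafe.read()
        return (readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0;
    }

    public static void main(String[] args) {
        System.out.println(wantsRead(SelectionKey.OP_ACCEPT)); // true
        System.out.println(wantsRead(SelectionKey.OP_WRITE));  // false
        System.out.println(wantsRead(0));                      // true

        // Clearing OP_CONNECT the way the code above does:
        int ops = SelectionKey.OP_CONNECT | SelectionKey.OP_READ;
        ops &= ~SelectionKey.OP_CONNECT;
        System.out.println(ops == SelectionKey.OP_READ);       // true
    }
}
```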
When the Channel in the first event loop group receives an accept event, the NioUnsafe.read() method of the corresponding channel is invoked. For an NIO server this channel is the NioServerSocketChannel in the io.netty.channel.socket.nio package, whose private inner class NioMessageUnsafe implements the NioUnsafe interface.
NioMessageUnsafe.read()
```java
public void read() {
    // Other code omitted here; only the key parts are shown
    ......
    try {
        for (;;) {
            int localRead = doReadMessages(readBuf);
            if (localRead == 0) {
                break;
            }
            if (localRead < 0) {
                closed = true;
                break;
            }

            if (readBuf.size() >= maxMessagesPerRead | !autoRead) {
                break;
            }
        }
    } catch (Throwable t) {
        exception = t;
    }
    ......
}
```
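Note the single `|` in the loop's exit condition: on boolean operands Java's `|` is a valid logical OR that, unlike `||`, always evaluates both sides. A small demonstration (class and method names are illustrative):

```java
// On booleans, Java's `|` is a logical OR that always evaluates both
// operands, unlike the short-circuiting `||` used more commonly.
public class BitwiseOrOnBooleans {
    static int calls = 0;

    static boolean touched() {
        calls++;
        return false;
    }

    public static void main(String[] args) {
        boolean a = true | touched();   // right side is still evaluated
        boolean b = true || touched();  // right side is skipped
        System.out.println(a + " " + b + " calls=" + calls); // true true calls=1
    }
}
```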
NioMessageUnsafe.read() calls NioServerSocketChannel's doReadMessages method.
NioServerSocketChannel.doReadMessages()
```java
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = javaChannel().accept();

    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}
```
Here we finally see the main job of the first NioEventLoopGroup: accept. When a new client connects to the server, NioServerSocketChannel accepts it and creates a NioSocketChannel for the newly established connection. After the NioSocketChannel object is read, it is processed by the handler chain that AbstractBootstrap.initAndRegister installed on the channel, namely the channelRead method of the ServerBootstrapAcceptor class.
ServerBootstrapAcceptor.channelRead()
```java
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    for (Entry<ChannelOption<?>, Object> e : childOptions) {
        try {
            if (!child.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
                logger.warn("Unknown channel option: " + e);
            }
        } catch (Throwable t) {
            logger.warn("Failed to set a channel option: " + child, t);
        }
    }

    for (Entry<AttributeKey<?>, Object> e : childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
```
We can see that this channel is registered with childGroup, which is exactly the second NioEventLoopGroup passed in at initialization.
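The handoff from the boss to the worker group can be sketched without Netty at all, using only java.util.concurrent. This is a loose analogy (all names illustrative), not Netty's implementation: the "boss" produces an accepted connection and its only duty is to hand it to the worker pool, mirroring childGroup.register(child) above.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A library-free sketch of the boss/worker handoff: the boss accepts a
// connection and hands the resulting channel to a worker pool.
public class BossWorkerSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService workerGroup = Executors.newFixedThreadPool(2);
        BlockingQueue<String> handled = new ArrayBlockingQueue<>(10);

        // Stand-in for an accepted connection; in Netty this is the NioSocketChannel
        String acceptedChannel = "conn-1";

        // The boss's only job: register the new channel with the worker group
        workerGroup.submit(() -> handled.add("registered " + acceptedChannel));

        System.out.println(handled.poll(5, TimeUnit.SECONDS));
        workerGroup.shutdown();
    }
}
```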
Having explored the first NioEventLoopGroup's code in detail, it is clear that the second NioEventLoopGroup is responsible for the NIO events of the connections established after accept. The execution flow is similar: NioEventLoop.run() → NioByteUnsafe.read() → pipeline.fireChannelRead(byteBuf), which then enters the ChannelPipeline handler chain configured for the server, such as application encoding/decoding and the concrete business logic.
Figure 2: business processing flow