To understand Netty, you should first be comfortable with the underlying NIO concepts; for background, see: Java NIO
A Netty server is typically bootstrapped like this:
ChannelFactory serverChannelFactory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(), Executors.newCachedThreadPool());
ServerBootstrap bootstrap = new ServerBootstrap(serverChannelFactory);
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new HttpRequestDecoder());
        pipeline.addLast("encoder", new HttpResponseEncoder());
        pipeline.addLast("handler", new YourHandler());
        return pipeline;
    }
});
bootstrap.bind(new InetSocketAddress(8080)); // entry point
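YourHandler above is just a placeholder. Purely for illustration, a minimal implementation built on the org.jboss.netty.handler.codec.http classes might look like the sketch below (the class name and behavior are hypothetical, not taken from the Netty sources):

// Hypothetical minimal handler: answer every HTTP request with 200 OK.
public class YourHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpResponse response = new DefaultHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        response.setContent(ChannelBuffers.copiedBuffer("hello", CharsetUtil.UTF_8));
        response.setHeader(HttpHeaders.Names.CONTENT_LENGTH,
                response.getContent().readableBytes());
        // Write the response and close the connection when the write completes.
        e.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
    }
}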
Clearly, the entry point is bootstrap.bind(). Inside the bind method, the main steps are the following:
public Channel bind(final SocketAddress localAddress) {
    ......
    ChannelHandler binder = new Binder(localAddress, futureQueue);
    ChannelHandler parentHandler = getParentHandler();
    ChannelPipeline bossPipeline = pipeline();
    bossPipeline.addLast("binder", binder);
    if (parentHandler != null) {
        bossPipeline.addLast("userHandler", parentHandler);
    }
    Channel channel = getFactory().newChannel(bossPipeline);
    ......
}
getFactory() returns the NioServerSocketChannelFactory we passed in when constructing the ServerBootstrap, and its newChannel() creates and returns a new NioServerSocketChannel, taking the bossPipeline built above as its pipeline. So this pipeline can be understood as the boss thread's pipeline, and its only handler is the binder.
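For reference, newChannel() in NioServerSocketChannelFactory is essentially a one-liner (simplified; based on the Netty 3.x source):

public ServerSocketChannel newChannel(ChannelPipeline pipeline) {
    // "sink" is the NioServerSocketPipelineSink created in the factory's constructor.
    return new NioServerSocketChannel(this, pipeline, sink);
}

The NioServerSocketChannel constructor it invokes: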
NioServerSocketChannel(
        ChannelFactory factory,
        ChannelPipeline pipeline,
        ChannelSink sink) {
    super(factory, pipeline, sink);
    try {
        socket = ServerSocketChannel.open();
    } catch (IOException e) {
        throw new ChannelException(
                "Failed to open a server socket.", e);
    }
    ......
    config = new DefaultServerSocketChannelConfig(socket.socket());
    fireChannelOpen(this);
}
The key call here is fireChannelOpen(): almost everything Netty does is implemented as events flowing through a chain of responsibility.
public static void fireChannelOpen(Channel channel) {
    // Notify the parent handler.
    if (channel.getParent() != null) {
        fireChildChannelStateChanged(channel.getParent(), channel);
    }
    channel.getPipeline().sendUpstream(
            new UpstreamChannelStateEvent(
                    channel, ChannelState.OPEN, Boolean.TRUE));
}
fireChannelOpen creates an UpstreamChannelStateEvent with state ChannelState.OPEN and passes it to the pipeline's upstream handlers for processing.
For the pipeline's sendUpstream() method, let's look at the DefaultChannelPipeline implementation:
public void sendUpstream(ChannelEvent e) {
    DefaultChannelHandlerContext head = getActualUpstreamContext(this.head);
    if (head == null) {
        logger.warn(
                "The pipeline contains no upstream handlers; discarding: " + e);
        return;
    }
    sendUpstream(head, e);
}

void sendUpstream(DefaultChannelHandlerContext ctx, ChannelEvent e) {
    try {
        ((ChannelUpstreamHandler) ctx.getHandler()).handleUpstream(ctx, e);
    } catch (Throwable t) {
        notifyHandlerException(e, t);
    }
}

DefaultChannelHandlerContext getActualUpstreamContext(DefaultChannelHandlerContext ctx) {
    if (ctx == null) {
        return null;
    }
    DefaultChannelHandlerContext realCtx = ctx;
    while (!realCtx.canHandleUpstream()) {
        realCtx = realCtx.next;
        if (realCtx == null) {
            return null;
        }
    }
    return realCtx;
}
The relevant methods are shown above. When the pipeline is initialized, each upstream or downstream ChannelHandler is wrapped, in insertion order, in a DefaultChannelHandlerContext node of a doubly linked list. getActualUpstreamContext() walks forward from the given node until it finds one whose handler can handle upstream events; after that handler's handleUpstream() finishes, the handler calls sendUpstream() on the ChannelHandlerContext passed to it, which forwards the event to the next upstream node. This is how an event is propagated along the chain of responsibility.
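The forwarding step itself lives in DefaultChannelHandlerContext, an inner class of DefaultChannelPipeline; in Netty 3.x it looks roughly like this (simplified):

public void sendUpstream(ChannelEvent e) {
    // Find the next context after this one whose handler accepts upstream
    // events, then let the pipeline invoke that handler.
    DefaultChannelHandlerContext next = getActualUpstreamContext(this.next);
    if (next != null) {
        DefaultChannelPipeline.this.sendUpstream(next, e);
    }
}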
Back to Netty's startup: the fireChannelOpen call eventually reaches ServerBootstrap.Binder, an inner class extending SimpleChannelUpstreamHandler, and the ChannelState.OPEN event ends up in Binder.channelOpen():
public void channelOpen(
        ChannelHandlerContext ctx,
        ChannelStateEvent evt) {
    try {
        ......
        // Apply parent options.
        evt.getChannel().getConfig().setOptions(parentOptions);
    } finally {
        ctx.sendUpstream(evt);
    }
    boolean finished = futureQueue.offer(evt.getChannel().bind(localAddress));
    assert finished;
}
As you can see, once the chain traversal ctx.sendUpstream(evt) has completed, evt.getChannel().bind(localAddress) is invoked; getChannel() returns the NioServerSocketChannel the channel factory created earlier. That bind call ultimately lands in the static bind method of the Channels class.
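The hop from the channel to Channels is a plain delegation; in AbstractChannel (Netty 3.x) the instance method is roughly:

public ChannelFuture bind(SocketAddress localAddress) {
    // Delegate to the static helper in Channels.
    return Channels.bind(this, localAddress);
}

And Channels.bind itself: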
public static ChannelFuture bind(Channel channel, SocketAddress localAddress) {
    if (localAddress == null) {
        throw new NullPointerException("localAddress");
    }
    ChannelFuture future = future(channel);
    channel.getPipeline().sendDownstream(new DownstreamChannelStateEvent(
            channel, future, ChannelState.BOUND, localAddress));
    return future;
}
bind() sends a DownstreamChannelStateEvent with state ChannelState.BOUND through the downstream handlers.
public void sendDownstream(ChannelEvent e) {
    DefaultChannelHandlerContext tail = getActualDownstreamContext(this.tail);
    if (tail == null) {
        try {
            getSink().eventSunk(this, e);
            return;
        } catch (Throwable t) {
            notifyHandlerException(e, t);
            return;
        }
    }
    sendDownstream(tail, e);
}
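For symmetry, getActualDownstreamContext() mirrors the upstream version shown earlier, except that it walks the prev pointers, i.e. from tail toward head (simplified from the Netty 3.x source):

DefaultChannelHandlerContext getActualDownstreamContext(DefaultChannelHandlerContext ctx) {
    if (ctx == null) {
        return null;
    }
    // Walk backwards (tail -> head) until a context that can handle
    // downstream events is found.
    DefaultChannelHandlerContext realCtx = ctx;
    while (!realCtx.canHandleDownstream()) {
        realCtx = realCtx.prev;
        if (realCtx == null) {
            return null;
        }
    }
    return realCtx;
}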
One thing to note here: the downstream chain is processed from tail to head, the opposite of upstream. Since the boss pipeline built during startup contains only the binder, an upstream handler, execution falls through to getSink().eventSunk(this, e). The sink is the NioServerSocketPipelineSink initialized in the NioServerSocketChannelFactory constructor, and its eventSunk does the following:
public void eventSunk(
        ChannelPipeline pipeline, ChannelEvent e) throws Exception {
    Channel channel = e.getChannel();
    if (channel instanceof NioServerSocketChannel) {
        handleServerSocket(e);
    } else if (channel instanceof NioSocketChannel) {
        handleAcceptedSocket(e);
    }
}

private void handleServerSocket(ChannelEvent e) {
    if (!(e instanceof ChannelStateEvent)) {
        return;
    }
    ChannelStateEvent event = (ChannelStateEvent) e;
    NioServerSocketChannel channel =
            (NioServerSocketChannel) event.getChannel();
    ChannelFuture future = event.getFuture();
    ChannelState state = event.getState();
    Object value = event.getValue();
    switch (state) {
    case OPEN:
        if (Boolean.FALSE.equals(value)) {
            close(channel, future);
        }
        break;
    case BOUND:
        if (value != null) {
            bind(channel, future, (SocketAddress) value);
        } else {
            close(channel, future);
        }
        break;
    }
}
Because the event passed down carries state ChannelState.BOUND, bind(channel, future, (SocketAddress) value) is executed. So what does the sink's bind method actually do?
private void bind(
        NioServerSocketChannel channel, ChannelFuture future,
        SocketAddress localAddress) {
    boolean bound = false;
    boolean bossStarted = false;
    try {
        channel.socket.socket().bind(localAddress, channel.getConfig().getBacklog());
        bound = true;
        future.setSuccess();
        fireChannelBound(channel, channel.getLocalAddress());
        Executor bossExecutor =
                ((NioServerSocketChannelFactory) channel.getFactory()).bossExecutor;
        DeadLockProofWorker.start(
                bossExecutor,
                new ThreadRenamingRunnable(
                        new Boss(channel),
                        "New I/O server boss #" + id + " (" + channel + ')'));
        bossStarted = true;
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    } finally {
        if (!bossStarted && bound) {
            close(channel, future);
        }
    }
}
Just as fireChannelOpen fires an upstream ChannelState.OPEN event once the channel is opened, fireChannelBound fires an upstream ChannelState.BOUND event to all handlers once the channel's socket is bound to its port. The binder does not react to BOUND events, so fireChannelBound has no visible effect during startup.
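For completeness, fireChannelBound in Channels is the exact analogue of fireChannelOpen (simplified; based on the Netty 3.x source):

public static void fireChannelBound(Channel channel, SocketAddress localAddress) {
    channel.getPipeline().sendUpstream(
            new UpstreamChannelStateEvent(
                    channel, ChannelState.BOUND, localAddress));
}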
Then a Boss instance is created and started on the bossExecutor thread pool:
private final class Boss implements Runnable {
    private final Selector selector;
    private final NioServerSocketChannel channel;

    Boss(NioServerSocketChannel channel) throws IOException {
        this.channel = channel;
        selector = Selector.open();
        boolean registered = false;
        try {
            channel.socket.register(selector, SelectionKey.OP_ACCEPT);
            registered = true;
        } finally {
            if (!registered) {
                closeSelector();
            }
        }
        channel.selector = selector;
    }

    public void run() {
        final Thread currentThread = Thread.currentThread();
        channel.shutdownLock.lock();
        try {
            for (;;) {
                try {
                    if (selector.select(1000) > 0) {
                        selector.selectedKeys().clear();
                    }
                    SocketChannel acceptedSocket = channel.socket.accept();
                    if (acceptedSocket != null) {
                        registerAcceptedChannel(acceptedSocket, currentThread);
                    }
                } catch (SocketTimeoutException e) {
                    ......
                }
            }
        } finally {
            channel.shutdownLock.unlock();
            closeSelector();
        }
    }

    private void registerAcceptedChannel(SocketChannel acceptedSocket, Thread currentThread) {
        ......
    }

    ......
}
In DeadLockProofWorker you can see that the bossExecutor is recorded in a ThreadLocal before the boss Runnable runs; this is what lets Netty detect, and refuse, an attempt to shut an executor down from one of its own threads, which would deadlock (hence the class name). The Boss constructor opens a Selector and registers the channel with it for OP_ACCEPT; run() then loops forever, accepting incoming connections. With that, Netty's startup is complete. Each accepted connection is handed to registerAcceptedChannel(), which the next part covers in detail.
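For reference, DeadLockProofWorker.start is short (simplified; based on the Netty 3.x source):

public final class DeadLockProofWorker {
    public static final ThreadLocal<Executor> PARENT = new ThreadLocal<Executor>();

    public static void start(final Executor parent, final Runnable runnable) {
        parent.execute(new Runnable() {
            public void run() {
                // Remember which executor owns this thread, so a later attempt
                // to shut that executor down from this same thread can be
                // detected and rejected (it would deadlock).
                PARENT.set(parent);
                try {
                    runnable.run();
                } finally {
                    PARENT.remove();
                }
            }
        });
    }
}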