This article shares some of the classic Netty issues we ran into while upgrading our infrastructure software. The official documentation is, of course, a good complement. If you have questions about Netty, the Grizzly architecture, or their source code, feel free to reach out. In a follow-up post I will share the load-test analysis of the RPC framework we built on Grizzly and Netty. I hope you enjoy it!
Now, down to business.
Starting around 3.3.0 (the Netty author had left JBoss by then), the team changed the Maven coordinates from

<groupId>org.jboss.netty</groupId>
<artifactId>netty</artifactId>
<version>3.2.10.Final</version>

to

<groupId>io.netty</groupId>
<artifactId>netty</artifactId>
<version>3.3.0.Final</version>

and in the 4.x line the all-in-one artifact is

<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.0.23.Final</version>
Note, however, that starting with Netty 4 the team modularized its dependencies: like Grizzly, the project is split into many functionally independent packages. If you only need Netty's buffer component, for example, you can depend on just that one artifact. Still, to be thorough, let's look at exactly what the all-in-one netty-all artifact pulls in:
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-buffer</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-codec</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-codec-http</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-codec-socks</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-common</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-handler</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-transport</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-transport-rxtx</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-transport-sctp</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-transport-udt</artifactId>
  <version>${project.version}</version>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
What does each module provide? Briefly:
netty-buffer: the ByteBuf buffer API
netty-codec: the encoder/decoder framework
netty-codec-http: HTTP and WebSocket codecs
netty-codec-socks: SOCKS protocol codecs
netty-common: shared utility and concurrency classes
netty-handler: ready-made handlers such as SslHandler, IdleStateHandler and traffic shaping
netty-transport: the core Channel API plus the NIO, OIO and local transports
netty-transport-rxtx: serial-port transport (RXTX)
netty-transport-sctp: SCTP transport
netty-transport-udt: UDT transport
After analyzing these dependencies, I settled on a trimmed-down set. Since netty-handler transitively pulls in netty-codec, netty-transport, netty-buffer and netty-common, two artifacts cover everything we need:
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-handler</artifactId>
  <version>4.0.23.Final</version>
</dependency>
<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>netty-transport-native-epoll</artifactId>
  <version>${project.version}</version>
  <classifier>${os.detected.classifier}</classifier>
  <scope>compile</scope>
  <optional>true</optional>
</dependency>
More details can be found here.
You can specify your own EventExecutor to run a particular handler. Usually that EventExecutor is single-threaded; if you supply a multi-threaded EventExecutor or EventLoop instead, thread stickiness guarantees that one of its threads keeps serving the handler for the channel's lifetime, unless the channel is deregistered. If two handlers are registered on different EventExecutors, you now have to watch out for thread-safety between them.
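As an illustration, here is a minimal sketch, assuming Netty 4, of pinning slow business logic onto its own DefaultEventExecutorGroup so the I/O EventLoop is never blocked. The handler bodies and names are placeholders of mine, not code from our framework:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;

public class PipelineSetup extends ChannelInitializer<SocketChannel> {

    // 16 threads; each channel's handler sticks to one of them
    private static final DefaultEventExecutorGroup BUSINESS_GROUP =
            new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        // runs on the channel's own EventLoop (the I/O thread)
        ch.pipeline().addLast("io", new ChannelInboundHandlerAdapter());
        // runs on a BUSINESS_GROUP thread, so slow work cannot stall I/O
        ch.pipeline().addLast(BUSINESS_GROUP, "business", new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                // slow business logic would go here
                ctx.fireChannelRead(msg);
            }
        });
    }
}
```

Because of the stickiness described above, all events for one channel's "business" handler stay on the same group thread until deregistration.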
Netty 4's threading model still leaves room for optimization. For example, an EventLoop does not currently balance its attention across its channels evenly. These issues were slated to be fixed in Netty 5; see the official issues if you are interested (note that the separate Netty 5 branch was ultimately abandoned, with improvements folded back into the 4.x line).
First, two diagrams: the first shows Netty 3's Channel state model, the second Netty 4's streamlined model. As you can see, channelOpen, channelBound and channelConnected have been merged into channelActive, and channelDisconnected, channelUnbound and channelClosed have been merged into channelInactive.
(Figure: Netty 3 channel state model)
(Figure: Netty 4 channel state model)
This raises two questions.
First, channelRegistered and channelUnregistered are not equivalents of channelOpen and channelClosed. They are new states introduced in Netty 4 to support dynamic registration, deregistration and re-registration of a Channel.
Second, since the callbacks were merged, how do you migrate code that hooked channelOpen and friends? In simple cases you can move the logic straight into the replacement callback.
// Netty 3: pipelines are created through the Channels factory
ChannelPipeline cp = Channels.pipeline();
// Netty 4: every Channel already carries its pipeline
ChannelPipeline cp = ch.pipeline();
// Netty 3: per-channel stats collected from raw upstream/downstream events
public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e)
        throws Exception {
    if (e instanceof ChannelStateEvent) {
        ChannelStateEvent cse = (ChannelStateEvent) e;
        switch (cse.getState()) {
            case OPEN:
                if (Boolean.TRUE.equals(cse.getValue())) {
                    // connect
                    channelCount.incrementAndGet();
                    allChannels.add(e.getChannel());
                } else {
                    // disconnect
                    channelCount.decrementAndGet();
                    allChannels.remove(e.getChannel());
                }
                break;
            case BOUND:
                break;
        }
    }
    if (e instanceof UpstreamMessageEvent) {
        UpstreamMessageEvent ume = (UpstreamMessageEvent) e;
        if (ume.getMessage() instanceof ChannelBuffer) {
            ChannelBuffer cb = (ChannelBuffer) ume.getMessage();
            int readableBytes = cb.readableBytes();
            // compute stats here, bytes read from remote
            bytesRead.getAndAdd(readableBytes);
        }
    }
    ctx.sendUpstream(e);
}

public void handleDownstream(ChannelHandlerContext ctx, ChannelEvent e)
        throws Exception {
    if (e instanceof DownstreamMessageEvent) {
        DownstreamMessageEvent dme = (DownstreamMessageEvent) e;
        if (dme.getMessage() instanceof ChannelBuffer) {
            ChannelBuffer cb = (ChannelBuffer) dme.getMessage();
            int readableBytes = cb.readableBytes();
            // compute stats here, bytes written to remote
            bytesWritten.getAndAdd(readableBytes);
        }
    }
    ctx.sendDownstream(e);
}
Suspending reads has changed as well:
// Netty 3 (before):
ctx.getChannel().setReadable(false);
// Netty 4 (after):
ctx.channel().config().setAutoRead(false);
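A common Netty 4 idiom built on setAutoRead is backpressure: stop reading while the outbound side is congested and resume once it drains. A minimal sketch (this handler is hypothetical, not part of the migration above):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical handler: throttles reads based on outbound writability
public class BackpressureHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
        // stop reading while the peer cannot keep up; resume when writable again
        ctx.channel().config().setAutoRead(ctx.channel().isWritable());
        super.channelWritabilityChanged(ctx);
    }
}
```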
2. TCP parameter tuning
// Before:
cfg.setOption("tcpNoDelay", true);
cfg.setOption("tcpNoDelay", 0); // Runtime ClassCastException
cfg.setOption("tcpNoDelays", true); // Typo in the option name - ignored silently
// After:
cfg.setOption(ChannelOption.TCP_NODELAY, true);
cfg.setOption(ChannelOption.TCP_NODELAY, 0); // Compile error
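With the type-safe ChannelOption constants, TCP tuning in Netty 4 is usually done on the bootstrap rather than on a per-channel config. A sketch, assuming Netty 4; the option values are examples only, not recommendations:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class ServerSetup {
    // builds a server bootstrap with typical TCP options pre-set
    public static ServerBootstrap bootstrap(EventLoopGroup boss, EventLoopGroup worker) {
        return new ServerBootstrap()
                .group(boss, worker)
                .channel(NioServerSocketChannel.class)
                // accept-queue length for the listening socket itself
                .option(ChannelOption.SO_BACKLOG, 1024)
                // per-connection options applied to accepted child channels
                .childOption(ChannelOption.TCP_NODELAY, true)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
    }
}
```

A typo like "tcpNoDelays" is now impossible, and passing a 0 where a boolean is expected fails at compile time instead of at runtime.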
3. The CodecEmbedder frequently used in unit tests has been replaced by EmbeddedChannel
@Test
public void testMultipleLinesStrippedDelimiters() {
EmbeddedChannel ch = new EmbeddedChannel(new DelimiterBasedFrameDecoder(8192, true,
Delimiters.lineDelimiter()));
ch.writeInbound(Unpooled.copiedBuffer("TestLine\r\ng\r\n", Charset.defaultCharset()));
assertEquals("TestLine", releaseLater((ByteBuf) ch.readInbound()).toString(Charset.defaultCharset()));
assertEquals("g", releaseLater((ByteBuf) ch.readInbound()).toString(Charset.defaultCharset()));
assertNull(ch.readInbound());
ch.finish();
}
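EmbeddedChannel exercises the outbound path just as easily. A small sketch, assuming Netty 4's bundled StringEncoder; the helper class is mine, for illustration:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

public class EncoderTestSketch {
    // pushes a String through StringEncoder and returns the encoded bytes as text
    public static String roundTrip(String input) {
        EmbeddedChannel ch = new EmbeddedChannel(new StringEncoder(CharsetUtil.UTF_8));
        ch.writeOutbound(input);
        ByteBuf encoded = (ByteBuf) ch.readOutbound();
        String out = encoded.toString(CharsetUtil.UTF_8);
        encoded.release();
        ch.finish();
        return out;
    }
}
```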
4. Simplified shutdown. Here is how I used to stop things:
// Netty 3 version:
public void stop() throws InterruptedException {
    if (serverChannel != null) {
        log.info("stopping transport {}:{}", getName(), port);
        // first stop accepting
        final CountDownLatch latch = new CountDownLatch(1);
        serverChannel.close().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                // stop and process remaining in-flight invocations
                if (def.getExecutor() instanceof ExecutorService) {
                    ExecutorService exe = (ExecutorService) def.getExecutor();
                    ShutdownUtil.shutdownExecutor(exe, "dispatcher");
                }
                latch.countDown();
            }
        });
        latch.await();
        serverChannel = null;
    }
    // If the channelFactory was created by us, we should also clean it up. If the
    // channelFactory was passed in by Bootstrap, then it may be shared so don't clean it up.
    if (channelFactory != null) {
        ShutdownUtil.shutdownChannelFactory(channelFactory, bossExecutor, ioWorkerExecutor, allChannels);
    }
}
In Netty 4, graceful shutdown is built into the event loop groups:
public void stop() throws InterruptedException {
    // Shut down all event loops to terminate all threads.
    bossGroup.shutdownGracefully();
    workerGroup.shutdownGracefully();
    // Wait until all threads are terminated.
    bossGroup.terminationFuture().sync();
    workerGroup.terminationFuture().sync();
}
5. IdleStateHandler no longer needs a Timer
// Netty 3: an external Timer drives the idle checks
cp.addLast("idleTimeoutHandler", new IdleStateHandler(getTimer(),
        getClientIdleTimeout().toMillis(),
        NO_WRITER_IDLE_TIMEOUT,
        NO_ALL_IDLE_TIMEOUT,
        TimeUnit.MILLISECONDS));
cp.addLast("heartbeatHandler", new HeartbeatHandler());
// Netty 4: the channel's EventLoop does the scheduling, so the Timer argument is gone
// (first argument is the reader idle timeout, matching the Netty 3 version above)
cp.addLast("idleTimeoutHandler", new IdleStateHandler(
        getClientIdleTimeout().toMillis(), NO_WRITER_IDLE_TIMEOUT,
        NO_ALL_IDLE_TIMEOUT, TimeUnit.MILLISECONDS));
cp.addLast("heartbeatHandler", new HeartbeatHandler());
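In Netty 4, IdleStateHandler delivers idleness to the next handler as a user event rather than via dedicated callbacks. The HeartbeatHandler named above could therefore be sketched like this; its body is my assumption, not the original implementation:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

// Hypothetical heartbeat policy: close connections that go silent
public class HeartbeatHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() == IdleState.READER_IDLE) {
            // no data from the client within the idle timeout: drop the connection
            ctx.close();
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```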