This test uses the JBoss Netty echo benchmark. The environment is two Linux machines, each with 4 cores, 16 GB of RAM and a 2.6 kernel, connected over the company intranet with 1 Gbps of bandwidth, running JDK 1.6.0_07. The comparison is between MINA 2.0M6 and yanf4j 1.0-stable. With 16K messages at 5000 concurrent connections the benchmark client exited for both frameworks, so the 16K curve in the charts below drops to 0 at 5000 concurrent connections; in reality only a few connections failed, but the benchmark client simply discarded that data point. I also tested 10,000 concurrent connections, but because the test client exited too easily, the maximum concurrency was finally fixed at 5000. Note that this does not mean MINA and yanf4j cannot sustain 10,000 connections; it was caused by the benchmark client's own handling, plus the fact that the kernel TCP parameters were not tuned.
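As context for the remark about untuned kernel TCP parameters: on a Linux 2.6 client this kind of tuning usually means raising the per-process file-descriptor limit and a few sysctl values. The following is only an illustrative sketch of typical settings, not the configuration used (or deliberately skipped) in this test:

ulimit -n 65535                                        # allow enough open sockets per process
sysctl -w net.ipv4.ip_local_port_range="1024 65535"    # more ephemeral ports for outgoing connections
sysctl -w net.ipv4.tcp_tw_reuse=1                      # reuse sockets stuck in TIME_WAIT
sysctl -w net.core.somaxconn=1024                      # larger accept backlog on the server side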
First, let's look at the source code. The MINA Echo Server:
package org.jboss.netty.benchmark.echo.server;

import java.net.InetSocketAddress;

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.SocketAcceptor;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;
import org.jboss.netty.benchmark.echo.Constant;

/**
 * @author The Netty Project ([email protected])
 * @author Trustin Lee ([email protected])
 *
 * @version $Rev: 394 $, $Date: 2008-10-03 12:55:27 +0800 (Fri, 03 Oct 2008) $
 */
public class MINA {

    public static void main(String[] args) throws Exception {
        boolean threadPoolDisabled = args.length > 0 && args[0].equals("nothreadpool");

        SocketAcceptor acceptor = new NioSocketAcceptor(Runtime.getRuntime().availableProcessors());
        acceptor.getSessionConfig().setMinReadBufferSize(Constant.MIN_READ_BUFFER_SIZE);
        acceptor.getSessionConfig().setReadBufferSize(Constant.INITIAL_READ_BUFFER_SIZE);
        acceptor.getSessionConfig().setMaxReadBufferSize(Constant.MAX_READ_BUFFER_SIZE);
        acceptor.getSessionConfig().setThroughputCalculationInterval(0);
        acceptor.getSessionConfig().setTcpNoDelay(true);
        acceptor.setDefaultLocalAddress(new InetSocketAddress(Constant.PORT));

        if (!threadPoolDisabled) {
            // Throttling has been disabled because it causes a dead lock.
            // Also, it doesn't have per-channel memory limit.
            acceptor.getFilterChain().addLast(
                    "executor",
                    new ExecutorFilter(Constant.THREAD_POOL_SIZE, Constant.THREAD_POOL_SIZE));
        }

        acceptor.setHandler(new EchoHandler());
        acceptor.bind();

        System.out.println("MINA EchoServer is ready to serve at port " + Constant.PORT + ".");
        System.out.println("Enter 'ant benchmark' on the client side to begin.");
        System.out.println("Thread pool: " + (threadPoolDisabled ? "DISABLED" : "ENABLED"));
    }

    private static class EchoHandler extends IoHandlerAdapter {

        EchoHandler() {
            super();
        }

        @Override
        public void messageReceived(IoSession session, Object message) throws Exception {
            session.write(((IoBuffer) message).duplicate());
        }

        @Override
        public void exceptionCaught(IoSession session, Throwable cause) throws Exception {
            session.close();
        }
    }
}
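Both servers import org.jboss.netty.benchmark.echo.Constant, which is not reproduced above. Judging from how it is used, it is just a holder for the port, the thread pool size and the read buffer sizes; a rough sketch follows, with illustrative values only (the actual numbers live in the Netty benchmark sources; the 16-thread pool size matches the setup described later):

package org.jboss.netty.benchmark.echo;

// Sketch of the constants the two servers rely on. These numbers are
// illustrative guesses, not the benchmark's actual values.
public final class Constant {
    public static final int PORT = 9999;                      // illustrative port
    public static final int THREAD_POOL_SIZE = 16;            // 16 worker threads, as used in this test
    public static final int MIN_READ_BUFFER_SIZE = 1024;      // illustrative
    public static final int INITIAL_READ_BUFFER_SIZE = 4096;  // illustrative
    public static final int MAX_READ_BUFFER_SIZE = 65536;     // illustrative

    private Constant() {
    }
}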
Now the yanf4j Echo Server, which is not much different:
package org.jboss.netty.benchmark.echo.server;

import java.nio.ByteBuffer;

import org.jboss.netty.benchmark.echo.Constant;

import com.google.code.yanf4j.config.Configuration;
import com.google.code.yanf4j.core.Session;
import com.google.code.yanf4j.core.impl.HandlerAdapter;
import com.google.code.yanf4j.core.impl.StandardSocketOption;
import com.google.code.yanf4j.nio.TCPController;

public class Yanf4j {

    public static void main(String[] args) throws Exception {
        boolean threadPoolDisabled = args.length > 0 && args[0].equals("nothreadpool");

        Configuration configuration = new Configuration();
        configuration.setCheckSessionTimeoutInterval(0);
        configuration.setSessionIdleTimeout(0);
        configuration.setSessionReadBufferSize(Constant.INITIAL_READ_BUFFER_SIZE);

        TCPController controller = new TCPController(configuration);
        controller.setSocketOption(StandardSocketOption.SO_REUSEADDR, true);
        controller.setSocketOption(StandardSocketOption.TCP_NODELAY, true);
        controller.setHandler(new EchoHandler());
        if (!threadPoolDisabled) {
            controller.setReadThreadCount(Constant.THREAD_POOL_SIZE);
        }
        controller.bind(Constant.PORT);

        System.out.println("Yanf4j EchoServer is ready to serve at port " + Constant.PORT + ".");
        System.out.println("Enter 'ant benchmark' on the client side to begin.");
        System.out.println("Thread pool: " + (threadPoolDisabled ? "DISABLED" : "ENABLED"));
    }

    static class EchoHandler extends HandlerAdapter {

        @Override
        public void onMessageReceived(final Session session, final Object msg) {
            session.write(((ByteBuffer) msg).duplicate());
        }

        @Override
        public void onExceptionCaught(Session session, Throwable t) {
            session.close();
        }
    }
}
Both servers enable a thread pool (16 threads) and the TCP_NODELAY option, and the client runs in SYNC mode. The benchmark results are as follows (for reference only): throughput comparison charts, as the number of concurrent clients increases, for message sizes of 128 bytes, 1K, 4K and 16K:
In terms of system resource consumption, MINA's load is relatively higher.
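For clarity, SYNC mode means the benchmark client writes one message and then blocks until the complete echo has been read back before sending the next one; throughput is how many such round trips complete per second. The sketch below is not the actual benchmark client, just a minimal blocking-socket illustration of that loop (host, port and message size are placeholders):

import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.Socket;

// Minimal sketch of a SYNC-mode echo client: send a block, wait for the
// full echo, repeat. Not the Netty benchmark client itself.
public class SyncEchoClient {
    public static void main(String[] args) throws Exception {
        byte[] message = new byte[1024];                 // placeholder size (128B/1K/4K/16K in the test)
        byte[] echo = new byte[message.length];
        Socket socket = new Socket("localhost", 9999);   // placeholder host/port
        socket.setTcpNoDelay(true);
        OutputStream out = socket.getOutputStream();
        DataInputStream in = new DataInputStream(socket.getInputStream());

        int rounds = 10000;
        long start = System.currentTimeMillis();
        for (int i = 0; i < rounds; i++) {
            out.write(message);        // send one message
            in.readFully(echo);        // SYNC: block until the full echo is back before the next send
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Echoed " + rounds + " messages of " + message.length
                + " bytes in " + elapsed + " ms");
        socket.close();
    }
}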