Netty

Netty workflow diagram

  1. Netty abstracts two thread groups, BossGroup and WorkerGroup: the Boss group is dedicated to accepting client connections, while the Worker group handles network reads and writes.
  2. BossGroup and WorkerGroup are both of type NioEventLoopGroup.
  3. A NioEventLoopGroup is an event-loop thread group; it contains multiple event-loop threads, and each event-loop thread is a NioEventLoop.
  4. Each NioEventLoop owns a selector, which monitors the network I/O of the socketChannels registered on it.
  5. Each Boss NioEventLoop thread loops over three steps:
  • handle accept events, establish the connection with the client, and create a NioSocketChannel
  • register the NioSocketChannel onto the selector of some worker NioEventLoop
  • process the tasks in its task queue, i.e. runAllTasks
  6. Each worker NioEventLoop thread loops over these steps:
  • poll the read/write events of all NioSocketChannels registered on its selector
  • process the I/O events (read/write) and run the business logic on the corresponding NioSocketChannel
  • runAllTasks drains the TaskQueue; time-consuming business logic can be dropped into the TaskQueue and processed there so that it does not block the data flowing through the pipeline (see the sketch after this list)
  7. When a worker NioEventLoop processes a NioSocketChannel's business, it uses a pipeline; the pipeline maintains a chain of handlers that process the data flowing through the channel.
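
As a minimal sketch of step 6 (the handler name SlowLogicHandler is made up here and is not part of the original example), a slow piece of business logic can be handed from inside a handler to the channel's event loop task queue, so it is executed later by runAllTasks instead of blocking the pipeline:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical handler, only to illustrate offloading work into the TaskQueue.
public class SlowLogicHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
        // push the expensive work into the NioEventLoop's TaskQueue;
        // it will be drained later by runAllTasks()
        ctx.channel().eventLoop().execute(new Runnable() {
            @Override
            public void run() {
                // time-consuming business logic would go here, then respond
                ctx.writeAndFlush(msg);
            }
        });
    }
}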

Netty's main components

  • NioEventLoopGroup: mainly handles connections, message processing, and other event-loop work


  • Bootstrap, ServerBootstrap:
    Bootstrap means "to boot/guide". A Netty application usually starts from a Bootstrap, whose main job is to configure the whole
    Netty program and wire its components together. In Netty, Bootstrap is the startup class for client programs, and
    ServerBootstrap is the startup class for server programs.

  • Channel:
    Netty's network-communication component, used to perform network I/O operations.
    A Channel contains a ChannelPipeline,
    the ChannelPipeline contains a list of ChannelHandlerContexts,
    and each ChannelHandlerContext wraps a ChannelHandler.


  • ChannelPipeline:
    Holds the list of ChannelHandlerContexts and is used to handle or intercept the Channel's inbound events and outbound operations. A Channel contains one ChannelPipeline, the ChannelPipeline maintains a doubly linked list of ChannelHandlerContexts, and each ChannelHandlerContext is associated with a ChannelHandler.
    Execution order of the read/write interceptor chain: read (inbound) events and write (outbound) operations travel along the same doubly linked list. Inbound events propagate from the head towards the last inbound handler, while outbound operations propagate from the tail towards the first outbound handler; the two kinds of handlers do not interfere with each other (see the ordering sketch after this component list).

  • ChannelHandlerContext:
    Holds all the context information related to the Channel and is associated with one ChannelHandler object.

  • ChannelHandler:
    The place where I/O events are actually handled or I/O operations are intercepted, before being forwarded to the next handler in its ChannelPipeline (the business processing chain).
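
As a small illustrative sketch of the ordering described above (the handler names are made up for this example): inbound events traverse the pipeline from head to tail, outbound operations from tail back towards the head.

import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.socket.SocketChannel;

// Illustrative only: for an inbound (read) event the traversal order is inboundA -> inboundB
// (head to tail); outbound (write) operations fire from the tail towards the head.
public class OrderingInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast("inboundA", new ChannelInboundHandlerAdapter())
          .addLast("inboundB", new ChannelInboundHandlerAdapter())
          .addLast("outboundC", new ChannelOutboundHandlerAdapter());
    }
}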

Main flow diagram

A diagram from the web is referenced here (image not reproduced).


Netty thread model: source code walkthrough

Let's start with the server side. The server-side entry point (a simple Netty server):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServer {
    public static void main(String[] args) throws Exception {
        //create the two thread groups bossGroup and workerGroup; the number of NioEventLoop child threads defaults to twice the number of CPU cores
        //bossGroup only handles connection requests; the actual business with the client is handed over to workerGroup
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            //create the server-side startup object
            ServerBootstrap bootstrap = new ServerBootstrap();
            //configure the parameters with the fluent (chained) API
            bootstrap.group(bossGroup, workerGroup) //set the two thread groups
                    .channel(NioServerSocketChannel.class) //use NioServerSocketChannel as the server channel implementation
                    // set the server connection backlog; the server accepts client connections sequentially, so only one can be handled at a time.
                    // when several clients arrive at once, the connections that cannot be handled yet are queued here
                    .option(ChannelOption.SO_BACKLOG, 1024)
                    .childHandler(new ChannelInitializer<SocketChannel>() {//create the channel initializer and set initialization parameters
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            //set the handler on the workerGroup's SocketChannel
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    });
            System.out.println("netty server start..");
            //bind a port and sync; this returns a ChannelFuture, and methods such as isDone() report the state of the asynchronous operation
            //start the server (and bind the port); bind is asynchronous, sync() waits for the operation to complete
            ChannelFuture cf = bootstrap.bind(9000).sync();
            //listen for the channel being closed; closeFuture is asynchronous
            //sync() blocks here until the channel-close processing has finished
            cf.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
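The NettyServerHandler used above is not shown in the original post; as an assumption, a minimal version of it might look like the sketch below (the later walkthrough refers to its channelRead / channelReadComplete / exceptionCaught callbacks):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical handler, only to make the walkthrough concrete; not from the original post.
public class NettyServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // invoked via pipeline.fireChannelRead(...) when data arrives;
        // msg is typically a ByteBuf, which a real handler would release or forward
        System.out.println("server received: " + msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        // invoked via pipeline.fireChannelReadComplete()
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // invoked via pipeline.fireExceptionCaught(...)
        cause.printStackTrace();
        ctx.close();
    }
}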
Let's first look at the parameter-initialization code that runs before ChannelFuture cf = bootstrap.bind(9000).sync();

EventLoopGroup bossGroup = new NioEventLoopGroup();

Creating bossGroup and workerGroup ultimately calls into this MultithreadEventExecutorGroup constructor:
    protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
                                            EventExecutorChooserFactory chooserFactory, Object... args) {
        if (executor == null) {
            //create a new executor (one new thread per task)
            executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
        }
        //create the children array according to the number of child threads passed in
        children = new EventExecutor[nThreads];
        for (int i = 0; i < nThreads; i ++) {
            boolean success = false;
            try {
                //this mainly creates a NioEventLoop (see the structure diagram at the top); it is important and detailed below
                children[i] = newChild(executor, args);
                success = true;
            } catch (Exception e) {
                // TODO: Think about if this is a good exception type
                throw new IllegalStateException("failed to create a child event loop", e);
            } finally {
                if (!success) {
                    for (int j = 0; j < i; j ++) {
                        children[j].shutdownGracefully();
                    }

                    for (int j = 0; j < i; j ++) {
                        EventExecutor e = children[j];
                        try {
                            while (!e.isTerminated()) {
                                e.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
                            }
                        } catch (InterruptedException interrupted) {
                            // Let the caller handle the interruption.
                            Thread.currentThread().interrupt();
                            break;
                        }
                    }
                }
            }
        }

        chooser = chooserFactory.newChooser(children);

        final FutureListener<Object> terminationListener = new FutureListener<Object>() {
            @Override
            public void operationComplete(Future<Object> future) throws Exception {
                if (terminatedChildren.incrementAndGet() == children.length) {
                    terminationFuture.setSuccess(null);
                }
            }
        };

        for (EventExecutor e: children) {
            e.terminationFuture().addListener(terminationListener);
        }

        Set<EventExecutor> childrenSet = new LinkedHashSet<EventExecutor>(children.length);
        Collections.addAll(childrenSet, children);
        readonlyChildren = Collections.unmodifiableSet(childrenSet);
    }
//----------------------------------✂------------------------------------------

Creating each child's structure: children[i] = newChild(executor, args);
    protected EventLoop newChild(Executor executor, Object... args) throws Exception {
        return new NioEventLoop(this, executor, (SelectorProvider) args[0],
            ((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2]);
    }
//----------------------------------✂------------------------------------------

Creating a NioEventLoop, which is also the most important component:
    NioEventLoop(NioEventLoopGroup parent, Executor executor, SelectorProvider selectorProvider,
                 SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler) {
        super(parent, executor, false, DEFAULT_MAX_PENDING_TASKS, rejectedExecutionHandler);
        if (selectorProvider == null) {
            throw new NullPointerException("selectorProvider");
        }
        if (strategy == null) {
            throw new NullPointerException("selectStrategy");
        }
        provider = selectorProvider;
        //mainly creates an NIO selector; this uses the plain JDK NIO API, and the details are too long to explain here
        final SelectorTuple selectorTuple = openSelector();
        selector = selectorTuple.selector;
        unwrappedSelector = selectorTuple.unwrappedSelector;
        selectStrategy = strategy;
    }
 
 
  • Initialize the bootstrap
ServerBootstrap bootstrap = new ServerBootstrap();
This just creates the ServerBootstrap and initializes a few structural fields:
    private final Map<ChannelOption<?>, Object> childOptions = new LinkedHashMap<ChannelOption<?>, Object>();
    private final Map<AttributeKey<?>, Object> childAttrs = new LinkedHashMap<AttributeKey<?>, Object>();
    private final ServerBootstrapConfig config = new ServerBootstrapConfig(this);
    private volatile EventLoopGroup childGroup;
    private volatile ChannelHandler childHandler;
    public ServerBootstrap() { }
  • Configure the bootstrap parameters
            bootstrap.group(bossGroup, workerGroup) //set the two thread groups
                    .channel(NioServerSocketChannel.class) //use NioServerSocketChannel as the server channel implementation
                    // set the server connection backlog; the server accepts client connections sequentially, so only one can be handled at a time.
                    // when several clients arrive at once, the connections that cannot be handled yet are queued here
                    .option(ChannelOption.SO_BACKLOG, 1024)
                    .childHandler(new ChannelInitializer<SocketChannel>() {//create the channel initializer and set initialization parameters
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            //set the handler on the workerGroup's SocketChannel
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    });
//----------------------------------✂------------------------------------------

bootstrap.group(bossGroup, workerGroup) //set the boss and worker thread groups; this is essentially just two assignments
    public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) {
        super.group(parentGroup);//the key line is this.group = group; it assigns bossGroup to the bootstrap's group
        if (childGroup == null) {
            throw new NullPointerException("childGroup");
        }
        if (this.childGroup != null) {
            throw new IllegalStateException("childGroup set already");
        }
        this.childGroup = childGroup; //assign the workerGroup to childGroup
        return this;
    }
//----------------------------------✂------------------------------------------

Initialize the bootstrap's channel type
bootstrap.channel(NioServerSocketChannel.class) //use NioServerSocketChannel as the server channel implementation
    public B channel(Class<? extends C> channelClass) {
        if (channelClass == null) {
            throw new NullPointerException("channelClass");
        }
      //wrap NioServerSocketChannel.class in a reflective channel factory
        return channelFactory(new ReflectiveChannelFactory<C>(channelClass));
    }

   @Deprecated
    public B channelFactory(ChannelFactory<? extends C> channelFactory) {
        if (channelFactory == null) {
            throw new NullPointerException("channelFactory");
        }
        if (this.channelFactory != null) {
            throw new IllegalStateException("channelFactory set already");
        }
      //store the channelFactory; it will later create the NioServerSocketChannel
        this.channelFactory = channelFactory;
        return self();
    }
//----------------------------------✂------------------------------------------

Set parameters according to the values passed to option
bootstrap.option(ChannelOption.SO_BACKLOG, 1024)
Nothing special here: the passed-in value is simply stored in the options map
    public <T> B option(ChannelOption<T> option, T value) {
        if (option == null) {
            throw new NullPointerException("option");
        }
        if (value == null) {
            synchronized (options) {
                options.remove(option);
            }
        } else {
            synchronized (options) {
                options.put(option, value);
            }
        }
        return self();
    }
//----------------------------------✂------------------------------------------

Set the handler
bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {//create the channel initializer and set initialization parameters
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            //set the handler on the workerGroup's SocketChannel
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    });

After the basic components are initialized, the bind method runs; this is the most important part:

ChannelFuture cf = bootstrap.bind(9000).sync();
//----------------------------------✂------------------------------------------

Create an InetSocketAddress from the port number
    public ChannelFuture bind(int inetPort) {
        return bind(new InetSocketAddress(inetPort));
    }
Internally it just stores these fields:
        private InetSocketAddressHolder(String hostname, InetAddress addr, int port) {
            this.hostname = hostname;
            this.addr = addr;
            this.port = port;
        }
//----------------------------------✂------------------------------------------

//this mainly ends up in the doBind method
    private ChannelFuture doBind(final SocketAddress localAddress) {
//this call initializes everything, so let's look at it first
        final ChannelFuture regFuture = initAndRegister();
        final Channel channel = regFuture.channel();
        if (regFuture.cause() != null) {
            return regFuture;
        }

        if (regFuture.isDone()) {
            ChannelPromise promise = channel.newPromise();
            doBind0(regFuture, channel, localAddress, promise);
            return promise;
        } else {
            final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
            regFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    Throwable cause = future.cause();
                    if (cause != null) {
                        promise.setFailure(cause);
                    } else {
                        promise.registered();
                        doBind0(regFuture, channel, localAddress, promise);
                    }
                }
            });
            return promise;
        }
    }
  1. Initialization via initAndRegister
//----------------------------------✂------------------------------------------

    final ChannelFuture initAndRegister() {
        Channel channel = null;
        try {
            //use the channelFactory to create a new channel (a NioServerSocketChannel)
            channel = channelFactory.newChannel();
            //initialize this channel
            init(channel);
        } catch (Throwable t) {
            if (channel != null) {
                channel.unsafe().closeForcibly();
                return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t);
            }
            return new DefaultChannelPromise(new FailedChannel(), GlobalEventExecutor.INSTANCE).setFailure(t);
        }
        ChannelFuture regFuture = config().group().register(channel);
        if (regFuture.cause() != null) {
            if (channel.isRegistered()) {
                channel.close();
            } else {
                channel.unsafe().closeForcibly();
            }
        }
        return regFuture;
    }

channelFactory.newChannel() uses reflection to create the NioServerSocketChannel instance
    public T newChannel() {
//this constructor is the NioServerSocketChannel.class we passed in during initialization
        return constructor.newInstance();
    }
The NioServerSocketChannel constructor initializes quite a lot of state; a structure diagram follows the code below
//----------------------------------✂------------------------------------------
Calling the init(channel) method
    @Override
    void init(Channel channel) throws Exception {
        //fetch the options passed in via bootstrap.option(...)
        final Map<ChannelOption<?>, Object> options = options0();
        synchronized (options) {
        //apply the options to the NIO channel
            setChannelOptions(channel, options, logger);
        }
        //fetch the attributes passed in via bootstrap.attr(...)
        final Map<AttributeKey<?>, Object> attrs = attrs0();
        synchronized (attrs) {
            for (Entry<AttributeKey<?>, Object> e: attrs.entrySet()) {
                @SuppressWarnings("unchecked")
                AttributeKey<Object> key = (AttributeKey<Object>) e.getKey();
        //set the attribute on the NIO channel
                channel.attr(key).set(e.getValue());
            }
        }
        //get the current channel's pipeline
        ChannelPipeline p = channel.pipeline();

        final EventLoopGroup currentChildGroup = childGroup;
        final ChannelHandler currentChildHandler = childHandler;
        final Entry<ChannelOption<?>, Object>[] currentChildOptions;
        final Entry<AttributeKey<?>, Object>[] currentChildAttrs;
        synchronized (childOptions) {
            currentChildOptions = childOptions.entrySet().toArray(newOptionArray(0));
        }
        synchronized (childAttrs) {
            currentChildAttrs = childAttrs.entrySet().toArray(newAttrArray(0));
        }
        //add a ChannelInitializer to the pipeline
        p.addLast(new ChannelInitializer<Channel>() {
            @Override
            public void initChannel(final Channel ch) throws Exception {
                final ChannelPipeline pipeline = ch.pipeline();
                ChannelHandler handler = config.handler();
                if (handler != null) {
                    pipeline.addLast(handler);
                }
                ch.eventLoop().execute(new Runnable() {
                    @Override
                    public void run() {
                        pipeline.addLast(new ServerBootstrapAcceptor(
                                ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                    }
                });
            }
        });
    }
 
 

After this code runs, the resulting structure looks like this (diagram omitted here):


Next, ChannelFuture regFuture = config().group().register(channel); is executed:

    @Override
    public ChannelFuture register(Channel channel) {
        //take one event loop from the bossGroup to handle the channel's registration request
        return next().register(channel);
    }
//----------------------------------✂------------------------------------------

    @Override
    public ChannelFuture register(Channel channel) {
        return register(new DefaultChannelPromise(channel, this));
    }
//----------------------------------✂------------------------------------------

    @Override
    public ChannelFuture register(final ChannelPromise promise) {
        ObjectUtil.checkNotNull(promise, "promise");
        promise.channel().unsafe().register(this, promise);
        return promise;
    }
//----------------------------------✂------------------------------------------

        @Override
        public final void register(EventLoop eventLoop, final ChannelPromise promise) {
            // ... validation code omitted
            AbstractChannel.this.eventLoop = eventLoop;

            if (eventLoop.inEventLoop()) {
                register0(promise);
            } else {
                try {
                    //submit register0 to the eventLoop to execute asynchronously
                    eventLoop.execute(new Runnable() {
                        @Override
                        public void run() {
                            register0(promise);
                        }
                    });
                } catch (Throwable t) {
                    closeForcibly();
                    closeFuture.setClosed();
                    safeSetFailure(promise, t);
                }
            }
        }
//----------------------------------✂------------------------------------------

The register0 task is submitted to SingleThreadEventExecutor#execute
    @Override
    public void execute(Runnable task) {
        //check whether the current thread is the event loop thread; here this is normally false
        boolean inEventLoop = inEventLoop();
        //add the task to the taskQueue
        addTask(task);
        if (!inEventLoop) {
            startThread();
            if (isShutdown()) {
                boolean reject = false;
                try {
                    if (removeTask(task)) {
                        reject = true;
                    }
                } catch (UnsupportedOperationException e) {
                }
                if (reject) {
                    reject();
                }
            }
        }

        if (!addTaskWakesUp && wakesUpForTask(task)) {
            wakeup(inEventLoop);
        }
    }
//----------------------------------✂------------------------------------------

   private void startThread() {
        if (state == ST_NOT_STARTED) {
            if (STATE_UPDATER.compareAndSet(this, ST_NOT_STARTED, ST_STARTED)) {
                try {
                    doStartThread();
                } catch (Throwable cause) {
                    STATE_UPDATER.set(this, ST_NOT_STARTED);
                    PlatformDependent.throwException(cause);
                }
            }
        }
    }
//----------------------------------✂------------------------------------------

    private void doStartThread() {
        assert thread == null;
        executor.execute(new Runnable() {
            @Override
            public void run() {
                thread = Thread.currentThread();
                if (interrupted) {
                    thread.interrupt();
                }

                boolean success = false;
                updateLastExecutionTime();
                try {
                    SingleThreadEventExecutor.this.run();
                    success = true;
                }
                // ... remaining code omitted; the part we care about is SingleThreadEventExecutor.this.run()
            }
        });
    }
//----------------------------------✂------------------------------------------

    @Override
    protected void run() {
//infinite loop
        for (;;) {
            try {
                try {
//check whether the taskQueue has any tasks
//calculateStrategy is: return hasTasks ? selectSupplier.get() : SelectStrategy.SELECT; so with no tasks it returns SELECT
                    switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
                    case SelectStrategy.CONTINUE:
                        continue;
                    case SelectStrategy.BUSY_WAIT:
// with no pending tasks we fall through to here
                    case SelectStrategy.SELECT:
// key point: this mainly calls NIO's int selectedKeys = selector.select(timeoutMillis),
// with a timeout added, so even if no event arrives for a long time the logic below still runs
                        select(wakenUp.getAndSet(false));
                        if (wakenUp.get()) {
                            selector.wakeup();
                        }
                    default:
                    }
                } catch (IOException e) {
                    rebuildSelector0();
                    handleLoopException(e);
                    continue;
                }

                cancelledKeys = 0;
                needsToSelectAgain = false;
                final int ioRatio = this.ioRatio;
                if (ioRatio == 100) {
                    try {
                       //this method handles the client I/O events; it is explained in the connection section below
                        processSelectedKeys();
                    } finally {
                        runAllTasks();
                    }
                } else {
                    final long ioStartTime = System.nanoTime();
                    try {
                        processSelectedKeys();
                    } finally {
                        final long ioTime = System.nanoTime() - ioStartTime;
                      //run every task in the queue, including the register0 task we submitted earlier
                        runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                    }
                }
            } catch (Throwable t) {
                handleLoopException(t);
            }
            try {
                if (isShuttingDown()) {
                    closeAll();
                    if (confirmShutdown()) {
                        return;
                    }
                }
            } catch (Throwable t) {
                handleLoopException(t);
            }
        }
    }
//----------------------------------✂------------------------------------------

On the first pass there are no I/O events yet, so the loop goes to runAllTasks, which runs the register0 task:
        private void register0(ChannelPromise promise) {
            try {
                boolean firstRegistration = neverRegistered;
                //this call is the key one: it registers the channel with the selector
                doRegister();
                neverRegistered = false;
                registered = true;
                //invoke the handlerAdded method of every handler in the pipeline,
                //i.e. the callbacks we implement in our own handlers;
                //the ChannelInitializer we added during init() above is the important one and is detailed below
                pipeline.invokeHandlerAddedIfNeeded();

                safeSetSuccess(promise);
                //invoke the channelRegistered method of every handler in the pipeline
                pipeline.fireChannelRegistered();
                if (isActive()) {
                    if (firstRegistration) {
                //invoke the channelActive method of every handler in the pipeline
                        pipeline.fireChannelActive();
                    } else if (config().isAutoRead()) {
                        beginRead();
                    }
                }
            } catch (Throwable t) {
                closeForcibly();
                closeFuture.setClosed();
                safeSetFailure(promise, t);
            }
        }
//----------------------------------✂------------------------------------------

This registers the channel with the selector; from here on it is plain JDK NIO, see the NIO code for details
    @Override
    protected void doRegister() throws Exception {
        boolean selected = false;
        for (;;) {
            // ... exception handling omitted; register the JDK channel with the event loop's selector, with interestOps 0
            selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
            return;
        }
    }
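For reference, here is a plain-JDK sketch of what this boils down to (a simplified assumption, no Netty types): the channel is registered with interestOps 0 first, and the real interest op is only enabled later, when the channel becomes active (beginRead in register0 above).

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class PlainNioRegisterSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.configureBlocking(false);
        // register with interestOps 0, just like javaChannel().register(..., 0, this)
        SelectionKey key = serverChannel.register(selector, 0);
        // later, once the channel is active, the accept interest is enabled:
        key.interestOps(key.interestOps() | SelectionKey.OP_ACCEPT);
    }
}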
//----------------------------------✂------------------------------------------
When the handler is added, the ChannelInitializer.handlerAdded method is invoked:
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        //check whether the channel has already been registered, because channelRegistered will also trigger the initialization
        if (ctx.channel().isRegistered()) {
            if (initChannel(ctx)) {
                removeState(ctx);
            }
        }
    }

    private boolean initChannel(ChannelHandlerContext ctx) throws Exception {
        if (initMap.add(ctx)) { 
            try {
                initChannel((C) ctx.channel());
            } catch (Throwable cause) {
                exceptionCaught(ctx, cause);
            } finally {
                ChannelPipeline pipeline = ctx.pipeline();
                if (pipeline.context(this) != null) {
                    //remove the ChannelInitializer from the pipeline once it has done its job
                    pipeline.remove(this);
                }
            }
            return true;
        }
        return false;
    }

The initChannel here mainly adds a ServerBootstrapAcceptor handler.
This handler's job is to hand the client channel over to the workerGroup once the client registration completes:
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));

At this point the server-side data initialization is basically complete, and the ServerSocketChannel's pipeline becomes (diagram omitted here):


Now let's look at the client-side initialization

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class NettyClient {
    public static void main(String[] args) throws Exception {
        //create one worker thread group; this works the same as on the server side
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            //note that the client uses Bootstrap, not ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group) //set the thread group
                    .channel(NioSocketChannel.class) // use NioSocketChannel as the client channel implementation
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new NettyClientHandler());
                        }
                    });
            ChannelFuture cf = bootstrap.connect("127.0.0.1", 9000).sync();
            cf.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}

There are three main differences between the server and the client:

  1. The client creates a Bootstrap, while the server creates a ServerBootstrap.
  2. The channel class passed to the bootstrap is NioSocketChannel, while the server uses NioServerSocketChannel.
  3. The client ends with bootstrap.connect(...) while the server ends with bootstrap.bind(port).
    The first two are only parameter differences, so let's follow the connect method.
Stepping into it, we land in this method:
    private ChannelFuture doResolveAndConnect(final SocketAddress remoteAddress, final SocketAddress localAddress) {
        //the channel created here is a NioSocketChannel, and the listener registration also happens here
        //the only difference from the server side is that no ServerBootstrapAcceptor is added
        final ChannelFuture regFuture = initAndRegister();
        final Channel channel = regFuture.channel();

        if (regFuture.isDone()) {
            if (!regFuture.isSuccess()) {
                return regFuture;
            }
            return doResolveAndConnect0(channel, remoteAddress, localAddress, channel.newPromise());
        } else {
            final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
            regFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    Throwable cause = future.cause();
                    if (cause != null) {
                        promise.setFailure(cause);
                    } else {
                        promise.registered();
                        //this is the main path of the client initialization; the code that connects to the server is inside
                        doResolveAndConnect0(channel, remoteAddress, localAddress, promise);
                    }
                }
            });
            return promise;
        }
    }
//----------------------------------✂------------------------------------------

Following doResolveAndConnect0 all the way down, the method that actually connects to the server is the one below; the rest of the code is not analyzed here
NioSocketChannel#doConnect()
    @Override
    protected boolean doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception {
        if (localAddress != null) {
            doBind0(localAddress);
        }
        boolean success = false;
        try {
            //this is where the real connection to the server happens
            boolean connected = SocketUtils.connect(javaChannel(), remoteAddress);
            if (!connected) {
                selectionKey().interestOps(SelectionKey.OP_CONNECT);
            }
            success = true;
            return connected;
        } finally {
            if (!success) {
                doClose();
            }
        }
    }

The server accepts the client connection and hands it over to the workerGroup

When a new event arrives, the server side handles it here:

    private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        if (!k.isValid()) {
            final EventLoop eventLoop;
            try {
                eventLoop = ch.eventLoop();
            } catch (Throwable ignored) {
                return;
            }
            if (eventLoop != this || eventLoop == null) {
                return;
            }
            unsafe.close(unsafe.voidPromise());
            return;
        }
        try {
            int readyOps = k.readyOps();
            if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
                int ops = k.interestOps();
                ops &= ~SelectionKey.OP_CONNECT;
                k.interestOps(ops);
                unsafe.finishConnect();
            }
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                ch.unsafe().forceFlush();
            }
            //this is the method that actually handles the event
            if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                unsafe.read();
            }
        } catch (CancelledKeyException ignored) {
            unsafe.close(unsafe.voidPromise());
        }
    }
//----------------------------------✂------------------------------------------


    private final class NioMessageUnsafe extends AbstractNioUnsafe {
        private final List<Object> readBuf = new ArrayList<Object>();
        @Override
        public void read() {
            assert eventLoop().inEventLoop();
            final ChannelConfig config = config();
            final ChannelPipeline pipeline = pipeline();
            final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
            allocHandle.reset(config);
            boolean closed = false;
            Throwable exception = null;
            try {
                try {
                    do {
                        int localRead = doReadMessages(readBuf);
                        if (localRead == 0) {
                            break;
                        }
                        if (localRead < 0) {
                            closed = true;
                            break;
                        }
                        allocHandle.incMessagesRead(localRead);
                    } while (allocHandle.continueReading());
                } catch (Throwable t) {
                    exception = t;
                }

                int size = readBuf.size();
                for (int i = 0; i < size; i ++) {
                    readPending = false;
                    //this calls the channelRead method of the handlers in the server-side pipeline
                    pipeline.fireChannelRead(readBuf.get(i));
                }
                readBuf.clear();
                allocHandle.readComplete();
                //this calls the channelReadComplete method of the handlers in the server-side pipeline
                pipeline.fireChannelReadComplete();

                if (exception != null) {
                    closed = closeOnReadError(exception);
                    //when an error occurs, this calls the exceptionCaught method of the handlers we wrote
                    pipeline.fireExceptionCaught(exception);
                }
                if (closed) {
                    inputShutdown = true;
                    if (isOpen()) {
                        close(voidPromise());
                    }
                }
            } finally {
                if (!readPending && !config.isAutoRead()) {
                    removeReadOp();
                }
            }
        }
    }
//----------------------------------✂------------------------------------------

Here the NIO connection is accepted
    protected int doReadMessages(List<Object> buf) throws Exception {
        SocketChannel ch = SocketUtils.accept(javaChannel());

        try {
            if (ch != null) {
                //wrap the client's channel into a NioSocketChannel
                //NOTE: this NioSocketChannel is different from the server-side channel; it does not have that initChannel
                //interceptor, so the methods executed later for read/write requests also differ from the server's
                buf.add(new NioSocketChannel(this, ch));
                return 1;
            }
        } catch (Throwable t) {
            logger.warn("Failed to create a new channel from an accepted socket.", t);

            try {
                ch.close();
            } catch (Throwable t2) {
                logger.warn("Failed to close a socket.", t2);
            }
        }
        return 0;
    }
//----------------------------------✂------------------------------------------


After the accept completes, execution continues to pipeline.fireChannelRead(readBuf.get(i));
Earlier, during server initialization, a ServerBootstrapAcceptor was registered into the pipeline; this is important.
Let's look at its channelRead method:
        @Override
        @SuppressWarnings("unchecked")
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            //cast the message to a Channel
            final Channel child = (Channel) msg;
            //add the childHandler to the child's pipeline
            //this handler is the one we passed to bootstrap.childHandler() at the very beginning
            child.pipeline().addLast(childHandler);
            //set the options on the child channel
            setChannelOptions(child, childOptions, logger);

            for (Entry<AttributeKey<?>, Object> e: childAttrs) {
                child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
            }
            }

            try {
                //this call is important: it hands the channel over to the workerGroup
                //the logic is exactly the same as the server-side register, so refer to the server registration above
                childGroup.register(child).addListener(new ChannelFutureListener() {
                    @Override
                    public void operationComplete(ChannelFuture future) throws Exception {
                        if (!future.isSuccess()) {
                            forceClose(child, future.cause());
                        }
                    }
                });
            } catch (Throwable t) {
                forceClose(child, t);
            }
        }
 
 

At this point the Netty server-side initialization is complete; the resulting NioSocketChannel structure is as follows (diagram omitted here):


The server receives data from the client

When data is sent, execution reaches this same method; the only difference is that the unsafe it dispatches to is different:

    private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        if (!k.isValid()) {
            final EventLoop eventLoop;
            try {
                eventLoop = ch.eventLoop();
            } catch (Throwable ignored) {
                return;
            }
            if (eventLoop != this || eventLoop == null) {
                return;
            }
            unsafe.close(unsafe.voidPromise());
            return;
        }
        try {
            int readyOps = k.readyOps();
            if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
                int ops = k.interestOps();
                ops &= ~SelectionKey.OP_CONNECT;
                k.interestOps(ops);
                unsafe.finishConnect();
            }
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                ch.unsafe().forceFlush();
            }
            //this is the method that actually handles the event
            if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                unsafe.read();
            }
        } catch (CancelledKeyException ignored) {
            unsafe.close(unsafe.voidPromise());
        }
    }
Because the channel created for the client side is a NioSocketChannel,
the unsafe used when it receives read/write messages is NioByteUnsafe, whereas the unsafe the server uses to accept connections is NioMessageUnsafe.
//----------------------------------✂------------------------------------------


    protected class NioByteUnsafe extends AbstractNioUnsafe {
        // this read() method is invoked here; it is very similar to the accept path above,
        // the difference being that the server-side accept path adds a doReadMessages step to accept the client channel

        @Override
        public final void read() {
            final ChannelConfig config = config();
            if (shouldBreakReadReady(config)) {
                clearReadPending();
                return;
            }
            final ChannelPipeline pipeline = pipeline();
            final ByteBufAllocator allocator = config.getAllocator();
            final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
            allocHandle.reset(config);

            ByteBuf byteBuf = null;
            boolean close = false;
            try {
                do {
                    byteBuf = allocHandle.allocate(allocator);
                    allocHandle.lastBytesRead(doReadBytes(byteBuf));
                    if (allocHandle.lastBytesRead() <= 0) {
                        byteBuf.release();
                        byteBuf = null;
                        close = allocHandle.lastBytesRead() < 0;
                        if (close) {
                            readPending = false;
                        }
                        break;
                    }

                    allocHandle.incMessagesRead(1);
                    readPending = false;
                    pipeline.fireChannelRead(byteBuf);
                    byteBuf = null;
                } while (allocHandle.continueReading());

                allocHandle.readComplete();
                pipeline.fireChannelReadComplete();

                if (close) {
                    closeOnRead(pipeline);
                }
            } catch (Throwable t) {
                handleReadException(pipeline, byteBuf, t, close, allocHandle);
            } finally {
                if (!readPending && !config.isAutoRead()) {
                    removeReadOp();
                }
            }
        }
    }

Q&A

  1. bossGroup and workerGroup only initialize a handful of executors; what happens when they are all in use?
    A single executor can monitor multiple channels; every time a channel is registered, the chooser picks an executor (see the sketch after the next() code below).
        public EventExecutor next() {
            return executors[Math.abs(idx.getAndIncrement() % executors.length)];
        }
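As a rough sketch of that answer (an assumption, not from the original post): a two-thread group can register many channels, and each register() call simply goes through next(), so the channels are spread round-robin across the event loops.

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class ChooserSketch {
    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup(2); // only two event loops
        for (int i = 0; i < 8; i++) {
            // each register() goes through next(); with 8 channels and 2 loops,
            // each NioEventLoop ends up monitoring 4 channels on its selector
            group.register(new NioSocketChannel());
        }
        group.shutdownGracefully();
    }
}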
  2. When a client connects or a message arrives, how does the server become aware of it? That is, when a client comes in, how does the server's selected-key count go from zero to one?
    As the run() loop above shows: selector.select(timeoutMillis) returns as soon as a registered channel has a ready key (OP_ACCEPT for a new connection, OP_READ for data), so the selected-key set becomes non-empty and processSelectedKeys() dispatches the event to the channel's unsafe.read().
  3. Why is NioMessageUnsafe used when a client connects, while NioByteUnsafe is used when data is written?
    Because the server was initialized with a NioServerSocketChannel, and client connections land on that channel.
    Once a client connection completes, a NioSocketChannel is registered for that client and handed to the workerGroup; from then on, communication and message reception with that client go through the NioSocketChannel. The two channel types initialize different unsafe implementations, so different code paths are invoked.
