Kafka Request Processing and RPC (Part 4)

After the Kafka server starts, it listens on a set of ports and begins accepting requests to do its day-to-day work.
The components involved in request handling are SocketServer, KafkaApis, and KafkaRequestHandlerPool, all of which are initialized and started during server startup. SocketServer is an NIO server built on an N+M thread model: N Acceptor threads plus M Processor threads, somewhat like Netty's network model. The N Acceptor threads only listen for connection events; once a connection is established, it is handed to one of the M Processor threads, which then watches it for read events. This thread model lets Kafka handle highly concurrent workloads with ease.

Kafka request processing architecture diagram


  1. When the Kafka server starts, it calls SocketServer#startup(), which initializes N Acceptors that listen for OP_ACCEPT events and wait for client connections. The number of Acceptors equals the number of entries in the listeners configuration. While each Acceptor is initialized, M Processors are also created and assigned to it. The number of Processors per Acceptor is controlled by num.network.threads, which defaults to 3.
  2. When an Acceptor receives a new connection, it assigns it to one of its Processors in round-robin fashion.
  3. When a Processor receives a connection, it starts listening for OP_READ events on it.
  4. When a Processor finds an incoming request, it places the request in the request queue to await processing. The queue's capacity is set by queued.max.requests, which defaults to 500.
  5. During startup, the Kafka server also initializes KafkaRequestHandlerPool, which constructs and starts a number of KafkaRequestHandler threads. The thread count is set by num.io.threads, which defaults to 8.
  6. Once started, each KafkaRequestHandler thread spins continuously, taking requests from the request queue and handing them to KafkaApis, which processes each request according to its type.
  7. When KafkaApis finishes processing, it puts the result into the response queue of the corresponding Processor.
  8. A Processor is also a continuously spinning thread; on each iteration it checks its own response queue for new results, and if one is found, takes it off the queue to send back to the client.
  9. The Processor writes the result back to the client through the NioChannel, which completes one round of communication.
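The steps above can be modeled as a small runnable sketch. The names here are illustrative, not Kafka's actual classes: an acceptor round-robins connections across processors, and the processors feed one shared bounded request queue that the handler threads drain.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative model of the Acceptor -> Processor -> request-queue handoff.
public class HandoffSketch {

    // Distribute numConnections across numProcessors in round-robin order,
    // returning how many connections each processor ends up with.
    public static int[] roundRobin(int numConnections, int numProcessors) {
        int[] counts = new int[numProcessors];
        int current = 0;
        for (int c = 0; c < numConnections; c++) {
            counts[current]++;                       // hand connection c to processor `current`
            current = (current + 1) % numProcessors; // same rotation as Acceptor.run()
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        // 3 processors per acceptor: the num.network.threads default
        int[] counts = roundRobin(10, 3);
        System.out.println(counts[0] + "," + counts[1] + "," + counts[2]); // 4,3,3

        // a bounded request queue, like queued.max.requests = 500
        BlockingQueue<String> requestQueue = new ArrayBlockingQueue<>(500);
        requestQueue.put("produce-request");
        System.out.println(requestQueue.take()); // produce-request
    }
}
```

Because the queue is bounded, a put blocks once 500 requests are pending, which is what throttles the network threads when the I/O threads fall behind.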

Starting the SocketServer

def startup() {
    this.synchronized {
      connectionQuotas = new ConnectionQuotas(maxConnectionsPerIp, maxConnectionsPerIpOverrides)

      val sendBufferSize = config.socketSendBufferBytes
      val recvBufferSize = config.socketReceiveBufferBytes
      val brokerId = config.brokerId

      var processorBeginIndex = 0
      config.listeners.foreach { endpoint =>
        val listenerName = endpoint.listenerName
        val securityProtocol = endpoint.securityProtocol
        val processorEndIndex = processorBeginIndex + numProcessorThreads

        // create this listener's Processors (num.network.threads of them)
        for (i <- processorBeginIndex until processorEndIndex)
          processors(i) = newProcessor(i, connectionQuotas, listenerName, securityProtocol)

        // create the Acceptor, hand it its slice of Processors, and start its thread
        val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
          processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
        acceptors.put(endpoint, acceptor)
        Utils.newThread(s"kafka-socket-acceptor-$listenerName-$securityProtocol-${endpoint.port}", acceptor, false).start()
        acceptor.awaitStartup()

        processorBeginIndex = processorEndIndex
      }
    }

    info("Started " + acceptors.size + " acceptor threads")
  }

When the SocketServer starts, it initializes N Acceptors, assigns each its share of Processors, and then starts the Acceptor threads.
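The index bookkeeping in startup() can be sketched on its own: the flat processors array is cut into one contiguous slice of num.network.threads entries per listener, using a moving begin/end pair just like processorBeginIndex and processorEndIndex in the Scala code (class and method names here are illustrative).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how startup() partitions the flat processor array among listeners.
public class ProcessorSlices {
    // Returns [begin, end) index pairs, one per listener.
    public static List<int[]> assign(int numListeners, int numProcessorThreads) {
        List<int[]> slices = new ArrayList<>();
        int begin = 0;
        for (int l = 0; l < numListeners; l++) {
            int end = begin + numProcessorThreads;
            slices.add(new int[]{begin, end}); // processors[begin until end) go to this acceptor
            begin = end;                       // next listener starts where this one ended
        }
        return slices;
    }

    public static void main(String[] args) {
        for (int[] s : assign(2, 3))
            System.out.println(s[0] + ".." + s[1]); // 0..3 then 3..6
    }
}
```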

The Acceptor's listening code

def run() {
  // register for OP_ACCEPT events on the selector
  serverChannel.register(nioSelector, SelectionKey.OP_ACCEPT)
  startupComplete()
  try {
    var currentProcessor = 0
    while (isRunning) {
      try {
        // start polling
        val ready = nioSelector.select(500)
        if (ready > 0) {
          val keys = nioSelector.selectedKeys()
          val iter = keys.iterator()
          while (iter.hasNext && isRunning) {
            try {
              val key = iter.next
              iter.remove()
              // if a new connection arrives, hand it to the designated Processor
              if (key.isAcceptable)
                accept(key, processors(currentProcessor))
              else
                throw new IllegalStateException("Unrecognized key state for acceptor thread.")

              // round robin to the next processor thread
              currentProcessor = (currentProcessor + 1) % processors.length
            } catch {
              case e: Throwable => error("Error while accepting connection", e)
            }
          }
        }
      }
      catch {
        case e: ControlThrowable => throw e
        case e: Throwable => error("Error occurred", e)
      }
    }
  } finally {
    debug("Closing server socket and selector.")
    swallowError(serverChannel.close())
    swallowError(nioSelector.close())
    shutdownComplete()
  }
}
def accept(key: SelectionKey, processor: Processor) {
  // get the underlying server channel
  val serverSocketChannel = key.channel().asInstanceOf[ServerSocketChannel]
  val socketChannel = serverSocketChannel.accept()
  try {
    connectionQuotas.inc(socketChannel.socket().getInetAddress)
    socketChannel.configureBlocking(false)
    socketChannel.socket().setTcpNoDelay(true)
    socketChannel.socket().setKeepAlive(true)
    if (sendBufferSize != Selectable.USE_DEFAULT_BUFFER_SIZE)
      socketChannel.socket().setSendBufferSize(sendBufferSize)
    // hand the channel to the Processor
    processor.accept(socketChannel)
  } catch {
    case e: TooManyConnectionsException =>
      info("Rejected connection from %s, address already has the configured maximum of %d connections.".format(e.ip, e.count))
      close(socketChannel)
  }
}

Once the Acceptor thread starts, it listens on the port for incoming connections, using NIO for non-blocking accepts. Each new connection is dispatched to one of the Processor threads it manages.
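The same accept loop can be written as a self-contained Java NIO example (the class and method names are hypothetical, but it mirrors Acceptor.run() and accept() above): register OP_ACCEPT, select with a timeout, then accept the socket and configure it non-blocking before it would be handed to a Processor.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal standalone NIO accept loop mirroring the Scala code above.
public class AcceptorSketch {
    public static SocketChannel acceptOne(ServerSocketChannel server, Selector selector) throws Exception {
        while (true) {
            if (selector.select(500) == 0) continue;  // same 500 ms poll as the Scala code
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    ch.configureBlocking(false);       // Processors use non-blocking reads
                    ch.socket().setTcpNoDelay(true);
                    ch.socket().setKeepAlive(true);
                    return ch;                         // real code: processor.accept(ch)
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        Selector selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);

        SocketChannel client = SocketChannel.open(server.socket().getLocalSocketAddress());
        SocketChannel accepted = acceptOne(server, selector);
        System.out.println(accepted != null);
        client.close(); accepted.close(); server.close(); selector.close();
    }
}
```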

def accept(socketChannel: SocketChannel) {
  // take over a new connection; newConnections holds the connections this Processor manages
  newConnections.add(socketChannel)
  wakeup()
}
override def run() {
  startupComplete()
  // the Processor thread spins continuously
  while (isRunning) {
    try {
      // register newly accepted connections for OP_READ events
      configureNewConnections()
      // take responses from this Processor's response queue and act on them; not every response
      // is sent to the client: some need no reply, and some close the connection instead.
      // Note that this method does not actually transmit anything either: even when a response must
      // go out, it only registers OP_WRITE on the connection; the real send happens later in poll()
      processNewResponses()
      // select() blocks up to 300 ms waiting for OP_READ and OP_WRITE events, then handles them.
      // A ready OP_READ event means new requests have arrived; they end up in the selector.completedReceives list.
      // A ready OP_WRITE event means a response is pending; only now is it written to the client,
      // after which the connection is added to completedSends
      poll()
      // process selector.completedReceives: each receive is wrapped into a RequestChannel.Request and put on the request queue
      processCompletedReceives()
      // walk completedSends and remove the finished sends from the inflightResponses collection
      processCompletedSends()
      // remove disconnected connections from inflightResponses
      processDisconnected()
    } catch {
      // We catch all the throwables here to prevent the processor thread from exiting. We do this because
      // letting a processor exit might cause a bigger impact on the broker. Usually the exceptions thrown would
      // be either associated with a specific socket channel or a bad request. We just ignore the bad socket channel
      // or request. This behavior might need to be reviewed if we see an exception that need the entire broker to stop.
      case e: ControlThrowable => throw e
      case e: Throwable =>
        error("Processor got uncaught exception.", e)
    }
  }

  debug("Closing selector - processor " + id)
  swallowError(closeAll())
  shutdownComplete()
}

After a Processor takes over a connection from the Acceptor, it starts listening for read events on that connection. It also does much more along the way: sending responses, reading requests, closing connections, and so on.
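The deferred-write behavior described in the comments above is worth a standalone sketch: queuing a response only registers OP_WRITE interest, and the bytes actually go out later, when the selector reports the channel writable (Kafka's poll() step). Here a java.nio Pipe stands in for the client socket, and the class and method names are illustrative.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Sketch of the register-OP_WRITE-now, send-in-poll()-later pattern.
public class DeferredWriteSketch {
    public static int sendWhenWritable(Pipe.SinkChannel sink, byte[] response) throws Exception {
        Selector selector = Selector.open();
        // "processNewResponses": don't write yet, just express interest in OP_WRITE
        SelectionKey key = sink.register(selector, SelectionKey.OP_WRITE);
        int written = 0;
        // "poll": the write happens only once the channel is reported writable
        if (selector.select(500) > 0 && key.isWritable()) {
            written = sink.write(ByteBuffer.wrap(response));
            key.interestOps(0); // response sent; stop watching OP_WRITE
        }
        selector.close();
        return written;
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        System.out.println(sendWhenWritable(pipe.sink(), "ok".getBytes())); // 2
    }
}
```

Splitting "decide to respond" from "actually write" keeps the Processor's loop non-blocking even when a client is slow to read.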

KafkaRequestHandler thread code

When the Kafka server starts, it initializes KafkaRequestHandlerPool, which constructs and starts a number of KafkaRequestHandler threads. The thread count is set by num.io.threads, which defaults to 8.

Below is the KafkaRequestHandler thread's run method.

def run() {
  while(true) {
    try {
      var req : RequestChannel.Request = null
      while (req == null) {
        // take a request from the request queue
        val startSelectTime = time.nanoseconds
        req = requestChannel.receiveRequest(300)
        val idleTime = time.nanoseconds - startSelectTime
        aggregateIdleMeter.mark(idleTime / totalHandlerThreads)
      }

      if(req eq RequestChannel.AllDone) {
        debug("Kafka request handler %d on broker %d received shut down command".format(
          id, brokerId))
        return
      }
      req.requestDequeueTimeMs = time.milliseconds
      trace("Kafka request handler %d on broker %d handling request %s".format(id, brokerId, req))
      // hand the request to KafkaApis
      apis.handle(req)
    } catch {
      case e: Throwable => error("Exception when handling request", e)
    }
  }
}

A KafkaRequestHandler thread continuously takes requests off the request queue; the actual processing is delegated to KafkaApis.
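The handler loop above boils down to: poll the request queue with a timeout, spin until a request arrives, stop on a sentinel (Kafka's RequestChannel.AllDone), and otherwise delegate to the handler. A minimal sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the KafkaRequestHandler run loop.
public class HandlerLoopSketch {
    static final String ALL_DONE = new String("ALL_DONE"); // sentinel object, compared by identity

    public static List<String> runLoop(BlockingQueue<String> requestQueue) throws InterruptedException {
        List<String> handled = new ArrayList<>();
        while (true) {
            String req = null;
            while (req == null)                                      // spin until a request shows up
                req = requestQueue.poll(300, TimeUnit.MILLISECONDS); // 300 ms, like receiveRequest(300)
            if (req == ALL_DONE)                                     // identity check, like `req eq AllDone`
                return handled;
            handled.add("handled:" + req);                           // stands in for apis.handle(req)
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("produce"); queue.put("fetch"); queue.put(ALL_DONE);
        System.out.println(runLoop(queue)); // [handled:produce, handled:fetch]
    }
}
```

Using a dedicated sentinel object (rather than a null or a flag) is how the real pool shuts down each handler thread cleanly: one AllDone per thread is enqueued at shutdown.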

KafkaApis code

def handle(request: RequestChannel.Request) {
  try {
    trace("Handling request:%s from connection %s;securityProtocol:%s,principal:%s".
      format(request.requestDesc(true), request.connectionId, request.securityProtocol, request.session.principal))
    // dispatch on the request type
    ApiKeys.forId(request.requestId) match {
      case ApiKeys.PRODUCE => handleProducerRequest(request)
      case ApiKeys.FETCH => handleFetchRequest(request)
      case ApiKeys.LIST_OFFSETS => handleOffsetRequest(request)
      case ApiKeys.METADATA => handleTopicMetadataRequest(request)
      case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
      case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
      case ApiKeys.UPDATE_METADATA_KEY => handleUpdateMetadataRequest(request)
      case ApiKeys.CONTROLLED_SHUTDOWN_KEY => handleControlledShutdownRequest(request)
      case ApiKeys.OFFSET_COMMIT => handleOffsetCommitRequest(request)
      case ApiKeys.OFFSET_FETCH => handleOffsetFetchRequest(request)
      case ApiKeys.GROUP_COORDINATOR => handleGroupCoordinatorRequest(request)
      case ApiKeys.JOIN_GROUP => handleJoinGroupRequest(request)
      case ApiKeys.HEARTBEAT => handleHeartbeatRequest(request)
      case ApiKeys.LEAVE_GROUP => handleLeaveGroupRequest(request)
      case ApiKeys.SYNC_GROUP => handleSyncGroupRequest(request)
      case ApiKeys.DESCRIBE_GROUPS => handleDescribeGroupRequest(request)
      case ApiKeys.LIST_GROUPS => handleListGroupsRequest(request)
      case ApiKeys.SASL_HANDSHAKE => handleSaslHandshakeRequest(request)
      case ApiKeys.API_VERSIONS => handleApiVersionsRequest(request)
      case ApiKeys.CREATE_TOPICS => handleCreateTopicsRequest(request)
      case ApiKeys.DELETE_TOPICS => handleDeleteTopicsRequest(request)
      case requestId => throw new KafkaException("Unknown api code " + requestId)
    }
  } catch {
    case e: Throwable =>
      if (request.requestObj != null) {
        request.requestObj.handleError(e, requestChannel, request)
        error("Error when handling request %s".format(request.requestObj), e)
      } else {
        val response = request.body.getErrorResponse(e)

        /* If request doesn't have a default error response, we just close the connection.
           For example, when produce request has acks set to 0 */
        if (response == null)
          requestChannel.closeConnection(request.processor, request)
        else
          requestChannel.sendResponse(new Response(request, response))

        error("Error when handling request %s".format(request.body), e)
      }
  } finally
    request.apiLocalCompleteTimeMs = time.milliseconds
}

KafkaApis performs a different operation for each request type.
In version 0.10.2, KafkaApis handles 21 request types.
