RocketMQ学习(五)——RocketMQ消费之pull和push

消费者重要参数

  • ConsumeFromWhere

    代表消费者首次消费时从哪个位置开始;当消费者再次启动时,会根据已保存的消费进度(集群模式保存在Broker,广播模式保存在本地)继续消费。分成三种:CONSUME_FROM_LAST_OFFSET(从队列最新位置开始消费)、CONSUME_FROM_FIRST_OFFSET(从队列起始位置开始消费)、CONSUME_FROM_TIMESTAMP(从指定时间点开始消费)。

  • AllocateMessageQueueStrategy

    代表消息队列分配给消费者的策略,有如下几种:平均分配策略(默认)(AllocateMessageQueueAveragely)、环形分配策略(AllocateMessageQueueAveragelyByCircle)、手动配置分配策略(AllocateMessageQueueByConfig)、机房分配策略(AllocateMessageQueueByMachineRoom)、一致性哈希分配策略(AllocateMessageQueueConsistentHash)。具体参照:Rocketmq之消息队列分配策略算法实现的源码分析

  • offsetStore

    有两种实现:RemoteBrokerOffsetStore和LocalFileOffsetStore。
    RocketMQ有两种消费模式:
    1、默认是CLUSTERING模式,也就是同一个Consumer group里的多个消费者每人消费一部分,各自收到的消息内容不一样。这种情况下,由Broker端存储和控制Offset的值,使用RemoteBrokerOffsetStore结构。
    2、BROADCASTING模式下,每个Consumer都收到这个Topic的全部消息,各个Consumer间相互没有干扰,RocketMQ使用LocalFileOffsetStore,把Offset存到本地。

  • consumeThreadMin/consumeThreadMax

    消费者消费线程池的最小/最大线程数,用于并发地消费消息。

  • consumeConcurrentlyMaxSpan/pullThresholdForQueue

    这两个参数用于限流:前者表示单个队列并行消费允许的最大偏移量跨度,后者表示单个队列本地缓存消息的最大条数,超过阈值会暂缓拉取。

  • consumeMessageBatchMaxSize

    消费者批量消费时每次传给listener的最大消息条数,默认是1,即每次回调只处理一条消息;每次拉取的消息条数则由pullBatchSize控制。
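
下面给出一个把上述参数集中配置到DefaultMQPushConsumer上的示例(仅为演示参数的设置方式,nameserver地址、topic、group等均为假设值,各参数的默认值以实际使用的客户端版本为准):

    import java.util.List;

    import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
    import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
    import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
    import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
    import org.apache.rocketmq.client.consumer.rebalance.AllocateMessageQueueAveragely;
    import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
    import org.apache.rocketmq.common.message.MessageExt;
    import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;

    public class ConsumerConfigDemo {
        public static void main(String[] args) throws Exception {
            DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerConfigDemoGroup");
            consumer.setNamesrvAddr("127.0.0.1:9876");
            //首次消费的起始位置
            consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
            //消息队列分配策略,默认即为平均分配
            consumer.setAllocateMessageQueueStrategy(new AllocateMessageQueueAveragely());
            //集群消费(offset存Broker)或广播消费(offset存本地)
            consumer.setMessageModel(MessageModel.CLUSTERING);
            //消费线程池的最小/最大线程数
            consumer.setConsumeThreadMin(20);
            consumer.setConsumeThreadMax(20);
            //单个队列的限流:并行消费最大跨度、本地缓存消息最大条数
            consumer.setConsumeConcurrentlyMaxSpan(2000);
            consumer.setPullThresholdForQueue(1000);
            //每次回调listener的最大消息条数
            consumer.setConsumeMessageBatchMaxSize(1);

            consumer.subscribe("TOPIC_TEST", "*");
            consumer.registerMessageListener(new MessageListenerConcurrently() {
                public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
                    return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                }
            });
            consumer.start();
        }
    }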

MQ中Pull和Push的两种消费方式

对于任何一款消息中间件而言,消费者客户端一般有两种方式从消息中间件获取消息并消费。严格意义上来讲,RocketMQ并没有实现PUSH模式,而是对拉模式做了一层包装:名字虽然以Push开头,实际实现时使用的是Pull方式,通过Pull不断轮询Broker获取消息。当不存在新消息时,Broker会挂起请求,直到有新消息产生才取消挂起、返回新消息。这样基本能做到和Broker主动Push接近的实时性(当然,还是有相应的实时性损失),原理类似长轮询(Long-Polling)。
(1)Pull方式
由消费者客户端主动向消息中间件(MQ消息服务器代理)拉取消息;采用Pull方式,如何设置Pull消息的频率需要重点去考虑,举个例子来说,可能1分钟内连续来了1000条消息,然后2小时内没有新消息产生(概括起来说就是“消息延迟与忙等待”)。如果每次Pull的时间间隔比较久,会增加消息的延迟,即消息到达消费者的时间加长,MQ中消息的堆积量变大;若每次Pull的时间间隔较短,但是在一段时间内MQ中并没有任何消息可以消费,那么会产生很多无效的Pull请求的RPC开销,影响MQ整体的网络性能;
(2)Push方式
由消息中间件(MQ消息服务器代理)主动地将消息推送给消费者;采用Push方式,可以尽可能实时地将消息发送给消费者进行消费。但是,在消费者的处理消息的能力较弱的时候(比如,消费者端的业务系统处理一条消息的流程比较复杂,其中的调用链路比较多导致消费时间比较久。概括起来地说就是“慢消费问题”),而MQ不断地向消费者Push消息,消费者端的缓冲区可能会溢出,导致异常;
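
上面提到RocketMQ的"Push"实际是用Pull加长轮询实现的。为了直观理解长轮询"没有消息就挂起请求、有消息立即返回"的思路,下面给出一段与RocketMQ无关的极简Java示意(用BlockingQueue模拟Broker侧的挂起等待,仅作概念演示,不代表RocketMQ的真实实现):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class LongPollingSketch {

        //模拟Broker上的一个消息队列
        private static final BlockingQueue<String> brokerQueue = new LinkedBlockingQueue<String>();

        public static void main(String[] args) throws InterruptedException {
            //消费者线程:每次"请求"最多挂起5秒,期间一旦有消息则立即返回
            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            String msg = brokerQueue.poll(5, TimeUnit.SECONDS);
                            if (msg != null) {
                                System.out.println("consume: " + msg);
                            } else {
                                System.out.println("no new message, send next long-polling request");
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                }
            });
            consumer.start();

            //2秒后才产生新消息,消费者的"请求"会一直挂起到消息到达
            Thread.sleep(2000);
            brokerQueue.put("hello long polling");
            Thread.sleep(1000);
            consumer.interrupt();
        }
    }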

RocketMQ中两种消费方式的demo代码

(1)Pull模式的Consumer端代码如下
在示例代码中,可以看到业务工程在Consumer启动后,Consumer主动获取MessageQueue的Set集合,遍历该集合中的每一个队列,发送Pull请求(参数中带有队列中的消息偏移量),同时需要Consumer端自己把消息消费的offset偏移量保存到本地变量中。在Pull模式下,需要业务应用代码自身去完成比较多的事情,因此在实际应用中用得较少。

public class ConsumerPullTest  {

    private String namesrvAddr = "192.168.152.129:9876;192.168.152.130:9876";

    private String TOPIC_TEST = "TOPIC_TEST";

    private String TAG_TEST = "TAG_TEST";

    //pull消费模式
    private DefaultMQPullConsumer consumer = new DefaultMQPullConsumer("ConsumerTest");

    private static final Map<MessageQueue, Long> offseTable = new HashMap<MessageQueue, Long>();

    @PostConstruct
    public void start() {
        try {
            System.out.println("MQ:ConsumerPullTest");
            //设置nameserver地址
            consumer.setNamesrvAddr(namesrvAddr);
            //消费模式 集群消费
            consumer.setMessageModel(MessageModel.CLUSTERING);
            //启动消费
            consumer.start();

            consumeMessage();
            System.out.println("\n\t MQ:start  ConsumerTest is success ! \n\t"
                    + "    topic is " + TOPIC_TEST + "  \n\t"
                    + "    tag is " + TAG_TEST + " \n\t");
        } catch (MQClientException e) {
            System.out.println("MQ:start ConsumerTest is fail" + e.getResponseCode() + e.getErrorMessage());
            throw new RuntimeException(e.getMessage(), e);
        }
    }

    public void consumeMessage() throws MQClientException {
        //根据topic查询queue
        Set<MessageQueue> mqs = consumer.fetchSubscribeMessageQueues(TOPIC_TEST);
        for(MessageQueue mq : mqs) {
            System.out.printf("Consume from the queue: %s%n", mq);
            SINGLE_MQ:while (true) {
                try {
                    PullResult pullResult =
                            consumer.pullBlockIfNotFound(mq, null, getMessageQueueOffset(mq), 32);
                    putMessageQueueOffset(mq, pullResult.getNextBeginOffset());
                    switch (pullResult.getPullStatus()) {
                        case FOUND:
                            //如果找到
                            for(Message message : pullResult.getMsgFoundList()) {
                                System.out.println(new String(message.getBody()));
                            }
                            break;
                        case NO_MATCHED_MSG:
                            break;
                        case NO_NEW_MSG:
                            break SINGLE_MQ;
                        case OFFSET_ILLEGAL:
                            break;
                        default:
                            break;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void putMessageQueueOffset(MessageQueue mq, long offset) {
        offseTable.put(mq, offset);
    }

    private static long getMessageQueueOffset(MessageQueue mq) {
        Long offset = offseTable.get(mq);
        if (offset != null) {
            return offset;
        }
        return 0;

    }

    @PreDestroy
    public void stop() {
        if (consumer != null) {
            consumer.shutdown();
            System.out.println("MQ:stop ConsumerTest success! ");
        }
    }

    public static void main(String [] args) {
        ConsumerPullTest consumerTest = new ConsumerPullTest();
        consumerTest.start();
    }
}

上面是官网例子里的代码,但是我们通常使用另一种方式实现:

public class ConsumerPullDemo {
    public static void main(String[] args) throws MQClientException {
        String groupName = "schedule_consumer";
        String TOPIC_TEST = "TOPIC_TEST";
        final MQPullConsumerScheduleService scheduleService = new MQPullConsumerScheduleService(groupName);
        scheduleService.getDefaultMQPullConsumer().setNamesrvAddr("192.168.152.129:9876;192.168.152.130:9876");
        scheduleService.setMessageModel(MessageModel.CLUSTERING);
        scheduleService.registerPullTaskCallback(TOPIC_TEST, new PullTaskCallback() {
            public void doPullTask(MessageQueue mq, PullTaskContext context) {
                MQPullConsumer consumer = context.getPullConsumer();
                try {
                    //获取从哪里开始拉取
                    long offset = consumer.fetchConsumeOffset(mq, false);
                    if(offset < 0) {
                        offset = 0;
                    }
                    PullResult pullResult = consumer.pull(mq, "*", offset, 32);
                    switch (pullResult.getPullStatus()) {
                        case FOUND:
                            List<MessageExt> list = pullResult.getMsgFoundList();
                            for (MessageExt msg : list) {
                                System.out.println(new String(msg.getBody()));
                            }
                            break;
                        case NO_MATCHED_MSG:
                            break;
                        case NO_NEW_MSG:
                        case OFFSET_ILLEGAL:
                            break;
                        default:
                            break;
                    }
                    //存储offset,客户端每隔5s会定时刷新到broker
                    consumer.updateConsumeOffset(mq, pullResult.getNextBeginOffset());
                    //重新拉取 建议超过5s这样就不会重复获取
                    context.setPullNextDelayTimeMillis(10000);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        //注册回调后需要启动调度服务,否则不会真正开始拉取消息
        scheduleService.start();
    }
}

(2)Push模式的Consumer端代码如下
在示例代码中,业务工程的应用程序使用Push方式进行消费时,Consumer端注册了一个监听器,Consumer在收到消息后主动调用这个监听器完成消费并进行对应的业务逻辑处理。由此可见,业务应用代码只需要完成消息消费即可,无需参与MQ本身的一些任务处理(ps:业务代码显得更为简洁一些)。

public class ConsumerTest implements MessageListenerConcurrently {

    private String namesrvAddr = "192.168.152.131:9876;192.168.152.133:9876";

    private String TOPIC_TEST = "TOPIC_TEST";

    private String TAG_TEST = "TAG_TEST";

    private DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerTest");

    @PostConstruct
    public void start() {
        try {
            System.out.println("MQ:启动ConsumerTest消费者");
            //设置nameserver地址
            consumer.setNamesrvAddr(namesrvAddr);
            //从队列最后开始消费
            consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
            //消费模式 集群消费
            consumer.setMessageModel(MessageModel.CLUSTERING);
            //订阅的topic
            consumer.subscribe(TOPIC_TEST, TAG_TEST);
            //push消费设置监听
            consumer.registerMessageListener(this);
            //启动消费
            consumer.start();
            System.out.println("\n\t MQ:start  ConsumerTest is success ! \n\t"
                    + "    topic is " + TOPIC_TEST + "  \n\t"
                    + "    tag is " + TAG_TEST + " \n\t");
        } catch (MQClientException e) {
            System.out.println("MQ:start ConsumerTest is fail" + e.getResponseCode() + e.getErrorMessage());
            throw new RuntimeException(e.getMessage(), e);
        }
    }

    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        //push方法 消费者先启动的话 每次读取到一条数据就会返回
        MessageExt msg = msgs.get(0);
        String messageBody = "";
        try {
            messageBody = new String(msg.getBody(), RemotingHelper.DEFAULT_CHARSET);
            System.out.println("MQ:ConsumerTest consume msg is "+
                    msg.getMsgId()+ "topic:" + msg.getTopic() + "tag:" + msg.getTags() + "key:" + msg.getKeys() + "message:" + messageBody);
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        } catch (Exception e) {
            System.out.println("errorQueue send fail e:" + e);
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;// 重试
        }
    }

    @PreDestroy
    public void stop() {
        if (consumer != null) {
            consumer.shutdown();
            System.out.println("MQ:stop ConsumerTest success! ");
        }
    }

    public static void main(String [] args) {
        ConsumerTest consumerTest = new ConsumerTest();
        consumerTest.start();
    }
}

实现原理

与消息生产者的启动类似,消费者也需要先指定group,再调用start方法启动,然后根据pull或push的消费模式对消息进行处理。让我们看看消费者是如何实现的。

核心类

无论是pull还是push消费模式,都会用到下面这些核心类。

(1)RebalanceImpl

字面上的意思(重新平衡)就是消费者与消息队列的重新分布,它决定消息应该分配给哪个消费者消费,负责分配当前Consumer可消费的消息队列(MessageQueue)。当有新的Consumer加入或移除时,都会重新分配消息队列。启动MQClientInstance实例的时候,会完成负载均衡服务线程RebalanceService的启动(每隔20s执行一次)。

  • Rebalance的平衡粒度

    Rebalance是以Topic+ConsumerGroup为粒度进行的。在创建consumer的过程中会订阅topic(包括重试主题%RETRY%+consumerGroup),Rebalance就是将这些Topic下的所有MessageQueue按照一定的规则分配给consumerGroup下的各个consumer进行消费。
    doRebalance是主要的处理方法。

      public void doRebalance(boolean isOrder) {
      	//subTable的key是consumerGroup,value是topic的订阅信息
        Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
        if (subTable != null) {
            Iterator i$ = subTable.entrySet().iterator();
    
            while(i$.hasNext()) {
                Entry<String, SubscriptionData> entry = (Entry)i$.next();
                String topic = (String)entry.getKey();
    
                try {
                	//根据topic进行rebalance
                    this.rebalanceByTopic(topic, isOrder);
                } catch (Throwable var7) {
                    if (!topic.startsWith("%RETRY%")) {
                        log.warn("rebalanceByTopic Exception", var7);
                    }
                }
            }
        }
    
        this.truncateMessageQueueNotMyTopic();
    }
    
  • Rebalance的平衡过程

    Rebalance的过程分为三个步骤:
    1、获取messageQueue队列和consumer列表
    2、进行Rebalance操作
    3、如果分配的结果改变了,需要做更新

    private void rebalanceByTopic(String topic, boolean isOrder) {
        Set mqSet;
        switch(this.messageModel) {
        case BROADCASTING:
            mqSet = (Set)this.topicSubscribeInfoTable.get(topic);
            if (mqSet != null) {
                boolean changed = this.updateProcessQueueTableInRebalance(topic, mqSet, isOrder);
                if (changed) {
                    this.messageQueueChanged(topic, mqSet, mqSet);
                    log.info("messageQueueChanged {} {} {} {}", new Object[]{this.consumerGroup, topic, mqSet, mqSet});
                }
            } else {
                log.warn("doRebalance, {}, but the topic[{}] not exist.", this.consumerGroup, topic);
            }
            break;
        //只分析集群模式    
        case CLUSTERING:
        	//根据topic获取messageQueue
            mqSet = (Set)this.topicSubscribeInfoTable.get(topic);
            //获取consumerGroup中所有的consumer
            List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, this.consumerGroup);
            if (null == mqSet && !topic.startsWith("%RETRY%")) {
                log.warn("doRebalance, {}, but the topic[{}] not exist.", this.consumerGroup, topic);
            }
    
            if (null == cidAll) {
                log.warn("doRebalance, {} {}, get consumer id list failed", this.consumerGroup, topic);
            }
    
            if (mqSet != null && cidAll != null) {
                List<MessageQueue> mqAll = new ArrayList();
                mqAll.addAll(mqSet);
                Collections.sort(mqAll);
                Collections.sort(cidAll);
                //获取处理rebalance的逻辑类
                AllocateMessageQueueStrategy strategy = this.allocateMessageQueueStrategy;
                List allocateResult = null;
    
                try {
                	//进行Rebalance,返回当前消费者需要处理的messageQueue
                    allocateResult = strategy.allocate(this.consumerGroup, this.mQClientFactory.getClientId(), mqAll, cidAll);
                } catch (Throwable var10) {
                    log.error("AllocateMessageQueueStrategy.allocate Exception. allocateMessageQueueStrategyName={}", strategy.getName(), var10);
                    return;
                }
    
                Set<MessageQueue> allocateResultSet = new HashSet();
                if (allocateResult != null) {
                    allocateResultSet.addAll(allocateResult);
                }
    			//更新消费者应该消费的队列,返回是否消费列表改变
                boolean changed = this.updateProcessQueueTableInRebalance(topic, allocateResultSet, isOrder);
                if (changed) {
                    log.info("rebalanced result changed. allocateMessageQueueStrategyName={}, group={}, topic={}, clientId={}, mqAllSize={}, cidAllSize={}, rebalanceResultSize={}, rebalanceResultSet={}", new Object[]{strategy.getName(), this.consumerGroup, topic, this.mQClientFactory.getClientId(), mqSet.size(), cidAll.size(), allocateResultSet.size(), allocateResultSet});
                    //判断状态,真正处理逻辑
                    this.messageQueueChanged(topic, mqSet, allocateResultSet);
                }
            }
        }
    
    }
    
  • Rebalance策略

    分配策略有多个实现类,这里以默认的平均分配策略(AllocateMessageQueueAveragely)为例说明:这个策略考虑当前consumerId的位置、consumer的数量和MessageQueue的数量,根据consumerId所处的位置决定分配哪些消费队列。例如:8个队列、2个consumer,第一个consumer处理下标为0-3的队列,第二个consumer处理下标为4-7的队列。
    类似于分页的算法:将所有MessageQueue排好序类似于记录,将所有消费端Consumer排好序类似于页,求出每一页需要包含的平均size和每个页面记录的范围range,最后遍历整个range计算出当前Consumer应该分配到的记录(这里即为MessageQueue)。

        public List<MessageQueue> allocate(String consumerGroup, String currentCID, List<MessageQueue> mqAll, List<String> cidAll) {
        if (currentCID != null && currentCID.length() >= 1) {
            if (mqAll != null && !mqAll.isEmpty()) {
                if (cidAll != null && !cidAll.isEmpty()) {
                    List<MessageQueue> result = new ArrayList();
                    if (!cidAll.contains(currentCID)) {
                        this.log.info("[BUG] ConsumerGroup: {} The consumerId: {} not in cidAll: {}", new Object[]{consumerGroup, currentCID, cidAll});
                        return result;
                    } else {
                        int index = cidAll.indexOf(currentCID);
                        int mod = mqAll.size() % cidAll.size();
                        int averageSize = mqAll.size() <= cidAll.size() ? 1 : (mod > 0 && index < mod ? mqAll.size() / cidAll.size() + 1 : mqAll.size() / cidAll.size());
                        int startIndex = mod > 0 && index < mod ? index * averageSize : index * averageSize + mod;
                        int range = Math.min(averageSize, mqAll.size() - startIndex);
    
                        for(int i = 0; i < range; ++i) {
                            result.add(mqAll.get((startIndex + i) % mqAll.size()));
                        }
    
                        return result;
                    }
                } else {
                    throw new IllegalArgumentException("cidAll is null or cidAll empty");
                }
            } else {
                throw new IllegalArgumentException("mqAll is null or mqAll empty");
            }
        } else {
            throw new IllegalArgumentException("currentCID is empty");
        }
    }
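
    结合上面的源码,可以直接调用AllocateMessageQueueAveragely验证"8个队列、2个consumer"的分配结果。下面是一个独立的小示例(假设使用Apache RocketMQ 4.x客户端依赖,topic、broker名称和clientId均为演示用的假设值):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        import org.apache.rocketmq.client.consumer.rebalance.AllocateMessageQueueAveragely;
        import org.apache.rocketmq.common.message.MessageQueue;

        public class AllocateDemo {
            public static void main(String[] args) {
                //构造同一topic下的8个队列(topic、brokerName为演示用的假设值)
                List<MessageQueue> mqAll = new ArrayList<MessageQueue>();
                for (int i = 0; i < 8; i++) {
                    mqAll.add(new MessageQueue("TOPIC_TEST", "broker-a", i));
                }
                //两个消费者的clientId(实际由客户端生成,这里手工指定并已排好序)
                List<String> cidAll = Arrays.asList("192.168.0.1@consumer-1", "192.168.0.2@consumer-2");

                AllocateMessageQueueAveragely strategy = new AllocateMessageQueueAveragely();
                //分别以两个consumer的身份计算各自分到的队列
                for (String cid : cidAll) {
                    List<MessageQueue> result = strategy.allocate("ConsumerTest", cid, mqAll, cidAll);
                    StringBuilder queueIds = new StringBuilder();
                    for (MessageQueue mq : result) {
                        queueIds.append(mq.getQueueId()).append(' ');
                    }
                    //预期输出:第一个consumer分到0 1 2 3,第二个consumer分到4 5 6 7
                    System.out.println(cid + " -> " + queueIds.toString().trim());
                }
            }
        }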
    
  • Rebalance更新Queue

    调用updateProcessQueueTableInRebalance()方法,具体的做法是,先将分配到的消息队列集合(mqSet)与processQueueTable做一个过滤比对,具体的过滤比对方式如下图:
    (原图:processQueueTable与分配到的mqSet的过滤比对示意,红色为互不包含的部分,绿色为两者的交集)

        private boolean updateProcessQueueTableInRebalance(String topic, Set<MessageQueue> mqSet, boolean isOrder) {
        boolean changed = false;
        //因为是在定时任务里不断做更新,所以先取出已经存在的mq队列集合
        Iterator it = this.processQueueTable.entrySet().iterator();
    
        while(it.hasNext()) {
        	//将已经存在的queue和刚刚分配的queue做比对
            Entry<MessageQueue, ProcessQueue> next = (Entry)it.next();
            MessageQueue mq = (MessageQueue)next.getKey();
            ProcessQueue pq = (ProcessQueue)next.getValue();
            if (mq.getTopic().equals(topic)) {
            	//没有匹配上,从processQueueTable中删除
                if (!mqSet.contains(mq)) {
                    pq.setDropped(true);
                    if (this.removeUnnecessaryMessageQueue(mq, pq)) {
                        it.remove();
                        changed = true;
                        log.info("doRebalance, {}, remove unnecessary mq, {}", this.consumerGroup, mq);
                    }
                } else if (pq.isPullExpired()) {
                	//匹配上后判断是否过期,如果过期且为push模式也进行删除
                    switch(this.consumeType()) {
                    case CONSUME_ACTIVELY:
                    default:
                        break;
                    case CONSUME_PASSIVELY:
                        pq.setDropped(true);
                        if (this.removeUnnecessaryMessageQueue(mq, pq)) {
                            it.remove();
                            changed = true;
                            log.error("[BUG]doRebalance, {}, remove unnecessary mq, {}, because pull is pause, so try to fixed it", this.consumerGroup, mq);
                        }
                    }
                }
            }
        }
    	//为每个队列创建pullRequest请求,放到pullRequestQueue中,待该服务线程取出后向Broker端发起Pull消息的请求
        List<PullRequest> pullRequestList = new ArrayList();
        Iterator i$ = mqSet.iterator();
    
        while(true) {
            while(true) {
                MessageQueue mq;
                do {
                    if (!i$.hasNext()) {
                        //最后调用该方法,将pullRequest请求放到pullRequestQueue中
                        //需要注意的是,RebalancePullImpl对这个方法是空实现,也就是说pull模式不会往队列中放入pullRequest
                        this.dispatchPullRequest(pullRequestList);
                        return changed;
                    }
    
                    mq = (MessageQueue)i$.next();
                } while(this.processQueueTable.containsKey(mq));
    
                if (isOrder && !this.lock(mq)) {
                    log.warn("doRebalance, {}, add a new mq failed, {}, because lock failed", this.consumerGroup, mq);
                } else {
                    this.removeDirtyOffset(mq);
                    ProcessQueue pq = new ProcessQueue();
                    long nextOffset = this.computePullFromWhere(mq);
                    if (nextOffset >= 0L) {
                        ProcessQueue pre = (ProcessQueue)this.processQueueTable.putIfAbsent(mq, pq);
                        if (pre != null) {
                            log.info("doRebalance, {}, mq already exists, {}", this.consumerGroup, mq);
                        } else {
                            log.info("doRebalance, {}, add a new mq, {}", this.consumerGroup, mq);
                            PullRequest pullRequest = new PullRequest();
                            pullRequest.setConsumerGroup(this.consumerGroup);
                            pullRequest.setNextOffset(nextOffset);
                            pullRequest.setMessageQueue(mq);
                            pullRequest.setProcessQueue(pq);
                            pullRequestList.add(pullRequest);
                            changed = true;
                        }
                    } else {
                        log.warn("doRebalance, {}, add new mq failed, {}", this.consumerGroup, mq);
                    }
                }
            }
        }
    }
    

    这里可以分如下两种情况来筛选过滤:
    a.图中processQueueTable标注的红色部分,表示与分配到的消息队列集合mqSet互不包含的队列。将这些队列的Dropped属性设置为true,然后查看这些队列是否可以移出processQueueTable缓存变量,这里具体执行removeUnnecessaryMessageQueue()方法:尝试在1s内获取当前消费处理队列的锁,拿到则返回true;等待1s后仍拿不到锁则返回false。如果返回true,则从processQueueTable缓存变量中移除对应的Entry;
    b.图中processQueueTable的绿色部分,表示与分配到的消息队列集合mqSet的交集。判断该ProcessQueue是否已经过期,Pull模式下不用处理;如果是Push模式,设置Dropped属性为true,并调用removeUnnecessaryMessageQueue()方法,像上面一样尝试移除Entry;
    最后,为过滤后的消息队列集合(mqSet)中的每个MessageQueue创建一个ProcessQueue对象并存入RebalanceImpl的processQueueTable中(其中调用RebalanceImpl实例的computePullFromWhere(MessageQueue mq)方法获取该MessageQueue的下一个消费进度offset,随后填充至接下来要创建的pullRequest对象属性中),并创建拉取请求对象pullRequest添加到拉取列表pullRequestList中,最后执行dispatchPullRequest()方法,将Pull消息的请求对象PullRequest依次放入PullMessageService服务线程的阻塞队列pullRequestQueue中,待该服务线程取出后向Broker端发起Pull消息的请求。这里可以重点对比RebalancePushImpl和RebalancePullImpl两个实现类的dispatchPullRequest()方法:RebalancePullImpl里该方法为空实现,所以这一步只为push模式服务。

(2)PullMessageService

由于push模式只是对pull的一种封装,这个后台线程会不断从队列中获取pullRequest对象并向broker发送请求获取message。但这个后台线程只对push模式有作用,因为pull模式并不会在队列中放置pullRequest对象,而是需要消费者自己去pull数据和保存偏移量。

public class PullMessageService extends ServiceThread {
	//存放pull请求类的队列,每一个队列会对应一个pull请求对象
    private final LinkedBlockingQueue<PullRequest> pullRequestQueue = new LinkedBlockingQueue();
    ...
    //服务线程循环从队列中取出pullRequest并向broker发送pull请求拉取数据
    public void run() {
        this.log.info(this.getServiceName() + " service started");

        while(!this.isStopped()) {
            try {
                PullRequest pullRequest = (PullRequest)this.pullRequestQueue.take();
                if (pullRequest != null) {
                    this.pullMessage(pullRequest);
                }
            } catch (InterruptedException var2) {
                ;
            } catch (Exception var3) {
                this.log.error("Pull Message Service Run Method exception", var3);
            }
        }

        this.log.info(this.getServiceName() + " service end");
    }
	//真正的调用DefaultMQPushConsumerImpl去发送pull请求
	//需要注意的是,pull模式不会往队列中放入pull请求,所以只有push模式才会走到这里
	private void pullMessage(PullRequest pullRequest) {
        MQConsumerInner consumer = this.mQClientFactory.selectConsumer(pullRequest.getConsumerGroup());
        if (consumer != null) {
            DefaultMQPushConsumerImpl impl = (DefaultMQPushConsumerImpl)consumer;
            impl.pullMessage(pullRequest);
        } else {
            this.log.warn("No matched consumer for the PullRequest {}, drop it", pullRequest);
        }

    }
    ...
}

从上面的方法可以看到,具体实现是调用DefaultMQPushConsumerImpl的pullMessage方法。

public class DefaultMQPushConsumerImpl implements MQConsumerInner {
	...
	public void pullMessage(final PullRequest pullRequest) {
		//首先获取PullRequest的 处理队列ProcessQueue,然后更新该消息队列最后一次拉取的时间。
        final ProcessQueue processQueue = pullRequest.getProcessQueue();
        if (processQueue.isDropped()) {
            this.log.info("the pull request[{}] is dropped.", pullRequest.toString());
        } else {
            pullRequest.getProcessQueue().setLastPullTimestamp(System.currentTimeMillis());
			//如果消费者 服务状态不为ServiceState.RUNNING,或当前处于暂停状态,默认延迟3秒再执行
            try {
                this.makeSureStateOK();
            } catch (MQClientException var20) {
                this.log.warn("pullMessage exception, consumer state not ok", var20);
                this.executePullRequestLater(pullRequest, 3000L);
                return;
            }
			//流量控制,两个维度:缓存消息数量达到阈值(默认1000条),或者缓存消息总大小达到阈值(默认100MB)
            if (this.isPause()) {
                this.log.warn("consumer was paused, execute pull request later. instanceName={}, group={}", this.defaultMQPushConsumer.getInstanceName(), this.defaultMQPushConsumer.getConsumerGroup());
                this.executePullRequestLater(pullRequest, 1000L);
            } else {
                long cachedMessageCount = processQueue.getMsgCount().get();
                long cachedMessageSizeInMiB = processQueue.getMsgSize().get() / 1048576L;
                if (cachedMessageCount > (long)this.defaultMQPushConsumer.getPullThresholdForQueue()) {
                    this.executePullRequestLater(pullRequest, 50L);
                    if (this.queueFlowControlTimes++ % 1000L == 0L) {
                        this.log.warn("the cached message count exceeds the threshold {}, so do flow control, minOffset={}, maxOffset={}, count={}, size={} MiB, pullRequest={}, flowControlTimes={}", new Object[]{this.defaultMQPushConsumer.getPullThresholdForQueue(), processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), cachedMessageCount, cachedMessageSizeInMiB, pullRequest, this.queueFlowControlTimes});
                    }

                } else if (cachedMessageSizeInMiB > (long)this.defaultMQPushConsumer.getPullThresholdSizeForQueue()) {
                    this.executePullRequestLater(pullRequest, 50L);
                    if (this.queueFlowControlTimes++ % 1000L == 0L) {
                        this.log.warn("the cached message size exceeds the threshold {} MiB, so do flow control, minOffset={}, maxOffset={}, count={}, size={} MiB, pullRequest={}, flowControlTimes={}", new Object[]{this.defaultMQPushConsumer.getPullThresholdSizeForQueue(), processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), cachedMessageCount, cachedMessageSizeInMiB, pullRequest, this.queueFlowControlTimes});
                    }

                } else {
                    if (!this.consumeOrderly) {
                        if (processQueue.getMaxSpan() > (long)this.defaultMQPushConsumer.getConsumeConcurrentlyMaxSpan()) {
                            this.executePullRequestLater(pullRequest, 50L);
                            if (this.queueMaxSpanFlowControlTimes++ % 1000L == 0L) {
                                this.log.warn("the queue's messages, span too long, so do flow control, minOffset={}, maxOffset={}, maxSpan={}, pullRequest={}, flowControlTimes={}", new Object[]{processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), processQueue.getMaxSpan(), pullRequest, this.queueMaxSpanFlowControlTimes});
                            }

                            return;
                        }
                    } else {
                        if (!processQueue.isLocked()) {
                            this.executePullRequestLater(pullRequest, 3000L);
                            this.log.info("pull message later because not locked in broker, {}", pullRequest);
                            return;
                        }

                        if (!pullRequest.isLockedFirst()) {
                            long offset = this.rebalanceImpl.computePullFromWhere(pullRequest.getMessageQueue());
                            boolean brokerBusy = offset < pullRequest.getNextOffset();
                            this.log.info("the first time to pull message, so fix offset from broker. pullRequest: {} NewOffset: {} brokerBusy: {}", new Object[]{pullRequest, offset, brokerBusy});
                            if (brokerBusy) {
                                this.log.info("[NOTIFYME]the first time to pull message, but pull request offset larger than broker consume offset. pullRequest: {} NewOffset: {}", pullRequest, offset);
                            }

                            pullRequest.setLockedFirst(true);
                            pullRequest.setNextOffset(offset);
                        }
                    }
					//获取主题订阅信息
                    final SubscriptionData subscriptionData = (SubscriptionData)this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
                    if (null == subscriptionData) {
                        this.executePullRequestLater(pullRequest, 3000L);
                        this.log.warn("find the consumer's subscription failed, {}", pullRequest);
                    } else {
                    	//创建异步处理pullRequest的类,push模式都是异步的
                        final long beginTimestamp = System.currentTimeMillis();
                        PullCallback pullCallback = new PullCallback() {
                            public void onSuccess(PullResult pullResult) {
                                if (pullResult != null) {
                                	//首先对PullResult进行处理,主要完成如下3件事:1)对消息体解码成一条条消息 2)执行消息过滤 3)执行回调
                                    pullResult = DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(pullRequest.getMessageQueue(), pullResult, subscriptionData);
                                    switch(pullResult.getPullStatus()) {
                                    case FOUND:
                                        long prevRequestOffset = pullRequest.getNextOffset();
                                        pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                                        long pullRT = System.currentTimeMillis() - beginTimestamp;
                                        DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullRT(pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(), pullRT);
                                        long firstMsgOffset = 9223372036854775807L;
                                        if (pullResult.getMsgFoundList() != null && !pullResult.getMsgFoundList().isEmpty()) {
                                            firstMsgOffset = ((MessageExt)pullResult.getMsgFoundList().get(0)).getQueueOffset();
                                            DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullTPS(pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(), (long)pullResult.getMsgFoundList().size());
                                            //拉取到消息,首先放入到处理队列中;然后是消费消息服务提交
                                            boolean dispathToConsume = processQueue.putMessage(pullResult.getMsgFoundList());
                                            DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(pullResult.getMsgFoundList(), processQueue, pullRequest.getMessageQueue(), dispathToConsume);
                                            if (DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval() > 0L) {
                                                DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest, DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval());
                                            } else {
                                                DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                                            }
                                        } else {
                                            DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                                        }

                                        if (pullResult.getNextBeginOffset() < prevRequestOffset || firstMsgOffset < prevRequestOffset) {
                                            DefaultMQPushConsumerImpl.this.log.warn("[BUG] pull message result maybe data wrong, nextBeginOffset: {} firstMsgOffset: {} prevRequestOffset: {}", new Object[]{pullResult.getNextBeginOffset(), firstMsgOffset, prevRequestOffset});
                                        }
                                        break;
                                    case NO_NEW_MSG:
                                        pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                                        DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
                                        DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                                        break;
                                    case NO_MATCHED_MSG:
                                        pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                                        DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
                                        DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                                        break;
                                    case OFFSET_ILLEGAL:
                                        DefaultMQPushConsumerImpl.this.log.warn("the pull request offset illegal, {} {}", pullRequest.toString(), pullResult.toString());
                                        pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                                        pullRequest.getProcessQueue().setDropped(true);
                                        DefaultMQPushConsumerImpl.this.executeTaskLater(new Runnable() {
                                            public void run() {
                                                try {
                                                    DefaultMQPushConsumerImpl.this.offsetStore.updateOffset(pullRequest.getMessageQueue(), pullRequest.getNextOffset(), false);
                                                    DefaultMQPushConsumerImpl.this.offsetStore.persist(pullRequest.getMessageQueue());
                                                    DefaultMQPushConsumerImpl.this.rebalanceImpl.removeProcessQueue(pullRequest.getMessageQueue());
                                                    DefaultMQPushConsumerImpl.this.log.warn("fix the pull request offset, {}", pullRequest);
                                                } catch (Throwable var2) {
                                                    DefaultMQPushConsumerImpl.this.log.error("executeTaskLater Exception", var2);
                                                }

                                            }
                                        }, 10000L);
                                    }
                                }

                            }

                            public void onException(Throwable e) {
                                if (!pullRequest.getMessageQueue().getTopic().startsWith("%RETRY%")) {
                                    DefaultMQPushConsumerImpl.this.log.warn("execute the pull request exception", e);
                                }

                                DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest, 3000L);
                            }
                        };
                        //如果是集群消费模式,从内存中读取该MessageQueue的消费进度offset。
                        boolean commitOffsetEnable = false;
                        long commitOffsetValue = 0L;
                        if (MessageModel.CLUSTERING == this.defaultMQPushConsumer.getMessageModel()) {
                            commitOffsetValue = this.offsetStore.readOffset(pullRequest.getMessageQueue(), ReadOffsetType.READ_FROM_MEMORY);
                            if (commitOffsetValue > 0L) {
                                commitOffsetEnable = true;
                            }
                        }
						//构建拉取消息系统Flag:是否支持commitOffset、suspend、subExpression、classFilter
                        String subExpression = null;
                        boolean classFilter = false;
                        SubscriptionData sd = (SubscriptionData)this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
                        if (sd != null) {
                            if (this.defaultMQPushConsumer.isPostSubscriptionWhenPull() && !sd.isClassFilterMode()) {
                                subExpression = sd.getSubString();
                            }

                            classFilter = sd.isClassFilterMode();
                        }

                        int sysFlag = PullSysFlag.buildSysFlag(commitOffsetEnable, true, subExpression != null, classFilter);

                        try {
                            //真正向broker发送请求,处理结果会回调到上面创建的异步处理类。
                            this.pullAPIWrapper.pullKernelImpl(pullRequest.getMessageQueue(), subExpression, subscriptionData.getExpressionType(), subscriptionData.getSubVersion(), pullRequest.getNextOffset(), this.defaultMQPushConsumer.getPullBatchSize(), sysFlag, commitOffsetValue, 15000L, 30000L, CommunicationMode.ASYNC, pullCallback);
                        } catch (Exception var19) {
                            this.log.error("pullKernelImpl exception", var19);
                            this.executePullRequestLater(pullRequest, 3000L);
                        }

                    }
                }
            }
        }
    }
    ...
}

这里我们总结一下PullMessageService的作用:它是一个后台服务线程,不断从pullRequestQueue中取出pullRequest并处理,这个线程只为push模式服务。真正向broker发送pull请求的逻辑由PullAPIWrapper完成;需要注意的是,PullAPIWrapper的pullKernelImpl方法在异步发送pullRequest时需要传入回调处理类,push模式为异步调用,所以上面构建了异步处理类PullCallback。PullCallback主要负责判断返回的状态和结果:如果成功拉取到消息,会调用我们配置的listener业务处理类去消费消息,并根据消费结果决定更新broker上的消费进度还是稍后重试。pullKernelImpl方法的发送逻辑下文会单独介绍。

(3)MQClientInstance

消息客户端实例,负责与MQ服务器(Broker、NameServer)交互的网络实现。这里面包含了很多后台线程和定时任务。

public class MQClientInstance {
	...
	//启动方法,这里面会启动多个后台线程和定时任务
    public void start() throws MQClientException {
        synchronized(this) {
            switch(this.serviceState) {
            case CREATE_JUST:
                this.serviceState = ServiceState.START_FAILED;
                if (null == this.clientConfig.getNamesrvAddr()) {
                    this.mQClientAPIImpl.fetchNameServerAddr();
                }
				//netty request-response开启
                this.mQClientAPIImpl.start();
                //启动定时任务
                this.startScheduledTask();
                this.pullMessageService.start();
                this.rebalanceService.start();
                this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
                this.log.info("the client factory [{}] start OK", this.clientId);
                this.serviceState = ServiceState.RUNNING;
            case RUNNING:
            case SHUTDOWN_ALREADY:
            default:
                return;
            case START_FAILED:
                throw new MQClientException("The Factory object[" + this.getClientId() + "] has been created before, and failed.", (Throwable)null);
            }
        }
    }
    
    private void startScheduledTask() {
    	//每隔2分钟尝试获取一次NameServer地址
        if (null == this.clientConfig.getNamesrvAddr()) {
            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        MQClientInstance.this.mQClientAPIImpl.fetchNameServerAddr();
                    } catch (Exception var2) {
                        MQClientInstance.this.log.error("ScheduledTask fetchNameServerAddr exception", var2);
                    }

                }
            }, 10000L, 120000L, TimeUnit.MILLISECONDS);
        }
		//每隔30S尝试更新主题路由信息
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    MQClientInstance.this.updateTopicRouteInfoFromNameServer();
                } catch (Exception var2) {
                    MQClientInstance.this.log.error("ScheduledTask updateTopicRouteInfoFromNameServer exception", var2);
                }

            }
        }, 10L, (long)this.clientConfig.getPollNameServerInterval(), TimeUnit.MILLISECONDS);
        //每隔30S 进行Broker心跳检测
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    MQClientInstance.this.cleanOfflineBroker();
                    MQClientInstance.this.sendHeartbeatToAllBrokerWithLock();
                } catch (Exception var2) {
                    MQClientInstance.this.log.error("ScheduledTask sendHeartbeatToAllBroker exception", var2);
                }

            }
        }, 1000L, (long)this.clientConfig.getHeartbeatBrokerInterval(), TimeUnit.MILLISECONDS);
        //默认每隔5秒持久化ConsumeOffset
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    MQClientInstance.this.persistAllConsumerOffset();
                } catch (Exception var2) {
                    MQClientInstance.this.log.error("ScheduledTask persistAllConsumerOffset exception", var2);
                }

            }
        }, 10000L, (long)this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
        //默认每隔1分钟检查一次线程池适配
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    MQClientInstance.this.adjustThreadPool();
                } catch (Exception var2) {
                    MQClientInstance.this.log.error("ScheduledTask adjustThreadPool exception", var2);
                }

            }
        }, 1L, 1L, TimeUnit.MINUTES);
    }
	...
}

(4)PullAPIWrapper

这个类是发送pullRequest到broker的处理类,如果是异步调用,会用传入的回调类处理返回值;对于顺序消费和非顺序消费也有不同的处理。让我们看下pullKernelImpl方法:
这个方法主要是获取broker信息和封装请求参数,然后调用pullMessage方法。这里需要注意的参数是:1)brokerSuspendMaxTimeMillis(允许broker挂起请求的最长时间,单位毫秒,默认为15s);2)timeoutMillis(超时时间,默认为30s);3)communicationMode(SYNC、ASYNC、ONEWAY);4)pullCallback(pull回调)。

    public PullResult pullKernelImpl(MessageQueue mq, String subExpression, String expressionType, long subVersion, long offset, int maxNums, int sysFlag, long commitOffset, long brokerSuspendMaxTimeMillis, long timeoutMillis, CommunicationMode communicationMode, PullCallback pullCallback) throws MQClientException, RemotingException, MQBrokerException, InterruptedException {
    	//根据MessageQueue的Broker名称查找Broker信息,封装成FindBrokerResult。
        FindBrokerResult findBrokerResult = this.mQClientFactory.findBrokerAddressInSubscribe(mq.getBrokerName(), this.recalculatePullFromWhichNode(mq), false);
        if (null == findBrokerResult) {
            this.mQClientFactory.updateTopicRouteInfoFromNameServer(mq.getTopic());
            findBrokerResult = this.mQClientFactory.findBrokerAddressInSubscribe(mq.getBrokerName(), this.recalculatePullFromWhichNode(mq), false);
        }

        if (findBrokerResult != null) {
            if (!ExpressionType.isTagType(expressionType) && findBrokerResult.getBrokerVersion() < Version.V4_1_0_SNAPSHOT.ordinal()) {
                throw new MQClientException("The broker[" + mq.getBrokerName() + ", " + findBrokerResult.getBrokerVersion() + "] does not upgrade to support for filter message by " + expressionType, (Throwable)null);
            } else {
                int sysFlagInner = sysFlag;
                if (findBrokerResult.isSlave()) {
                    sysFlagInner = PullSysFlag.clearCommitOffsetFlag(sysFlag);
                }

				//然后通过网络去拉取具体的消息,也就是消息体中的数据
                PullMessageRequestHeader requestHeader = new PullMessageRequestHeader();
                requestHeader.setConsumerGroup(this.consumerGroup);
                requestHeader.setTopic(mq.getTopic());
                requestHeader.setQueueId(mq.getQueueId());
                requestHeader.setQueueOffset(offset);
                requestHeader.setMaxMsgNums(maxNums);
                requestHeader.setSysFlag(sysFlagInner);
                requestHeader.setCommitOffset(commitOffset);
                requestHeader.setSuspendTimeoutMillis(brokerSuspendMaxTimeMillis);
                requestHeader.setSubscription(subExpression);
                requestHeader.setSubVersion(subVersion);
                requestHeader.setExpressionType(expressionType);
                String brokerAddr = findBrokerResult.getBrokerAddr();
                if (PullSysFlag.hasClassFilterFlag(sysFlagInner)) {
                    brokerAddr = this.computPullFromWhichFilterServer(mq.getTopic(), brokerAddr);
                }

                PullResult pullResult = this.mQClientFactory.getMQClientAPIImpl().pullMessage(brokerAddr, requestHeader, timeoutMillis, communicationMode, pullCallback);
                return pullResult;
            }
        } else {
            throw new MQClientException("The broker[" + mq.getBrokerName() + "] not exist", (Throwable)null);
        }
    }

接下来我们看下pullMessage拉取消息方法,它会根据通信模式(同步还是异步)决定是直接返回结果还是通过回调处理。

    public PullResult pullMessage(String addr, PullMessageRequestHeader requestHeader, long timeoutMillis, CommunicationMode communicationMode, PullCallback pullCallback) throws RemotingException, MQBrokerException, InterruptedException {
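        //创建pull请求命令,这里的11对应RequestCode.PULL_MESSAGE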
        RemotingCommand request = RemotingCommand.createRequestCommand(11, requestHeader);
        switch(communicationMode) {
        case ONEWAY:
            assert false;

            return null;
        case ASYNC:
            this.pullMessageAsync(addr, request, timeoutMillis, pullCallback);
            return null;
        case SYNC:
            return this.pullMessageSync(addr, request, timeoutMillis);
        default:
            assert false;

            return null;
        }
    }

processPullResult方法是在PullCallback类处理成功时先进行调用,它的作用是:1)对消息体解码成一条条消息 2)执行消息过滤

    public PullResult processPullResult(MessageQueue mq, PullResult pullResult, SubscriptionData subscriptionData) {
    	//对消息体解码成一条条消息
        PullResultExt pullResultExt = (PullResultExt)pullResult;
        this.updatePullFromWhichNode(mq, pullResultExt.getSuggestWhichBrokerId());
        if (PullStatus.FOUND == pullResult.getPullStatus()) {
            ByteBuffer byteBuffer = ByteBuffer.wrap(pullResultExt.getMessageBinary());
            List<MessageExt> msgList = MessageDecoder.decodes(byteBuffer);
            List<MessageExt> msgListFilterAgain = msgList;
            Iterator i$;
            MessageExt msg;
            if (!subscriptionData.getTagsSet().isEmpty() && !subscriptionData.isClassFilterMode()) {
                msgListFilterAgain = new ArrayList(msgList.size());
                i$ = msgList.iterator();

                while(i$.hasNext()) {
                    msg = (MessageExt)i$.next();
                    if (msg.getTags() != null && subscriptionData.getTagsSet().contains(msg.getTags())) {
                        ((List)msgListFilterAgain).add(msg);
                    }
                }
            }
			//执行消息过滤
            if (this.hasHook()) {
                FilterMessageContext filterMessageContext = new FilterMessageContext();
                filterMessageContext.setUnitMode(this.unitMode);
                filterMessageContext.setMsgList((List)msgListFilterAgain);
                this.executeHook(filterMessageContext);
            }

            i$ = ((List)msgListFilterAgain).iterator();

            while(i$.hasNext()) {
                msg = (MessageExt)i$.next();
                MessageAccessor.putProperty(msg, "MIN_OFFSET", Long.toString(pullResult.getMinOffset()));
                MessageAccessor.putProperty(msg, "MAX_OFFSET", Long.toString(pullResult.getMaxOffset()));
            }

            pullResultExt.setMsgFoundList((List)msgListFilterAgain);
        }

        pullResultExt.setMessageBinary((byte[])null);
        return pullResult;
    }

(5)PullCallback

PullCallback是发送pull请求异步回调的类,会对pull请求从broker返回的值做处理,最后交给我们设置的listener消息业务处理类进行处理。首先调用我们上面提到过的processPullResult方法,将消息体转换成一条条消息。然后判断状态,成功返回消息时会先将拉取到的消息放到处理队列中,然后是消费消息服务提交。

 PullCallback pullCallback = new PullCallback() {
     public void onSuccess(PullResult pullResult) {
         if (pullResult != null) {
             pullResult = DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(pullRequest.getMessageQueue(), pullResult, subscriptionData);
             switch(pullResult.getPullStatus()) {
             case FOUND:
                 long prevRequestOffset = pullRequest.getNextOffset();
                 pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                 long pullRT = System.currentTimeMillis() - beginTimestamp;
                 DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullRT(pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(), pullRT);
                 long firstMsgOffset = 9223372036854775807L;
                 if (pullResult.getMsgFoundList() != null && !pullResult.getMsgFoundList().isEmpty()) {
                     firstMsgOffset = ((MessageExt)pullResult.getMsgFoundList().get(0)).getQueueOffset();
                     DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullTPS(pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(), (long)pullResult.getMsgFoundList().size());
                     //拉取到消息,首先放入到处理队列中
                     boolean dispathToConsume = processQueue.putMessage(pullResult.getMsgFoundList());
                     //消费消息服务提交
                     DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(pullResult.getMsgFoundList(), processQueue, pullRequest.getMessageQueue(), dispathToConsume);
                     if (DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval() > 0L) {
                         DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest, DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval());
                     } else {
                         DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                     }
                 } else {
                     DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                 }

                 if (pullResult.getNextBeginOffset() < prevRequestOffset || firstMsgOffset < prevRequestOffset) {
                     DefaultMQPushConsumerImpl.this.log.warn("[BUG] pull message result maybe data wrong, nextBeginOffset: {} firstMsgOffset: {} prevRequestOffset: {}", new Object[]{pullResult.getNextBeginOffset(), firstMsgOffset, prevRequestOffset});
                 }
                 break;
             case NO_NEW_MSG:
                 pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                 DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
                 DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                 break;
             case NO_MATCHED_MSG:
                 pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                 DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
                 DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                 break;
             case OFFSET_ILLEGAL:
                 DefaultMQPushConsumerImpl.this.log.warn("the pull request offset illegal, {} {}", pullRequest.toString(), pullResult.toString());
                 pullRequest.setNextOffset(pullResult.getNextBeginOffset());
                 pullRequest.getProcessQueue().setDropped(true);
                 DefaultMQPushConsumerImpl.this.executeTaskLater(new Runnable() {
                     public void run() {
                         try {
                             DefaultMQPushConsumerImpl.this.offsetStore.updateOffset(pullRequest.getMessageQueue(), pullRequest.getNextOffset(), false);
                             DefaultMQPushConsumerImpl.this.offsetStore.persist(pullRequest.getMessageQueue());
                             DefaultMQPushConsumerImpl.this.rebalanceImpl.removeProcessQueue(pullRequest.getMessageQueue());
                             DefaultMQPushConsumerImpl.this.log.warn("fix the pull request offset, {}", pullRequest);
                         } catch (Throwable var2) {
                             DefaultMQPushConsumerImpl.this.log.error("executeTaskLater Exception", var2);
                         }

                     }
                 }, 10000L);
             }
         }

     }

     public void onException(Throwable e) {
         if (!pullRequest.getMessageQueue().getTopic().startsWith("%RETRY%")) {
             DefaultMQPushConsumerImpl.this.log.warn("execute the pull request exception", e);
         }

         DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest, 3000L);
     }
 };

submitConsumeRequest方法会调用我们自己实现的consumeMessage方法,对消息进行消费。这个方法有两个实现,分为顺序消费和非顺序消费。
1、非顺序消息:
非顺序消费的消费者内部使用一个线程池来并发消费消息,一个线程一批次最多处理consumeMessageBatchMaxSize条消息。
如果此次拉取的消息条数大于consumeMessageBatchMaxSize,则分批消费。

    public void submitConsumeRequest(List<MessageExt> msgs, ProcessQueue processQueue, MessageQueue messageQueue, boolean dispatchToConsume) {
        int consumeBatchSize = this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
        if (msgs.size() <= consumeBatchSize) {
            ConsumeMessageConcurrentlyService.ConsumeRequest consumeRequest = new ConsumeMessageConcurrentlyService.ConsumeRequest(msgs, processQueue, messageQueue);

            try {
                this.consumeExecutor.submit(consumeRequest);
            } catch (RejectedExecutionException var10) {
                this.submitConsumeRequestLater(consumeRequest);
            }
        } else {
            int total = 0;

            while(total < msgs.size()) {
                List<MessageExt> msgThis = new ArrayList(consumeBatchSize);

                for(int i = 0; i < consumeBatchSize && total < msgs.size(); ++total) {
                    msgThis.add(msgs.get(total));
                    ++i;
                }

                ConsumeMessageConcurrentlyService.ConsumeRequest consumeRequest = new ConsumeMessageConcurrentlyService.ConsumeRequest(msgThis, processQueue, messageQueue);

                try {
                    this.consumeExecutor.submit(consumeRequest);
                } catch (RejectedExecutionException var11) {
                    while(total < msgs.size()) {
                        msgThis.add(msgs.get(total));
                        ++total;
                    }

                    this.submitConsumeRequestLater(consumeRequest);
                }
            }
        }

    }

具体消费的逻辑写在ConsumeRequest的run方法中:

    class ConsumeRequest implements Runnable {
        private final List<MessageExt> msgs;
        private final ProcessQueue processQueue;
        private final MessageQueue messageQueue;

        public ConsumeRequest(List<MessageExt> msgs, ProcessQueue processQueue, MessageQueue messageQueue) {
            this.msgs = msgs;
            this.processQueue = processQueue;
            this.messageQueue = messageQueue;
        }

        public List<MessageExt> getMsgs() {
            return this.msgs;
        }

        public ProcessQueue getProcessQueue() {
            return this.processQueue;
        }

        public void run() {
            if (this.processQueue.isDropped()) {
                ConsumeMessageConcurrentlyService.log.info("the message queue not be able to consume, because it's dropped. group={} {}", ConsumeMessageConcurrentlyService.this.consumerGroup, this.messageQueue);
            } else {
            	//获取业务系统定义的消息消费监听器,负责具体消息的消费
                MessageListenerConcurrently listener = ConsumeMessageConcurrentlyService.this.messageListener;
                ConsumeConcurrentlyContext context = new ConsumeConcurrentlyContext(this.messageQueue);
                ConsumeConcurrentlyStatus status = null;
                ConsumeMessageContext consumeMessageContext = null;
               	//如果消费者注册了消息消费者hook(钩子函数,在消息消费之前,消费之后执行相关方法)
                if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                    consumeMessageContext = new ConsumeMessageContext();
                    consumeMessageContext.setConsumerGroup(ConsumeMessageConcurrentlyService.this.defaultMQPushConsumer.getConsumerGroup());
                    consumeMessageContext.setProps(new HashMap());
                    consumeMessageContext.setMq(this.messageQueue);
                    consumeMessageContext.setMsgList(this.msgs);
                    consumeMessageContext.setSuccess(false);
                    ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
                }

                long beginTimestamp = System.currentTimeMillis();
                boolean hasException = false;
                ConsumeReturnType returnType = ConsumeReturnType.SUCCESS;

                // Reset the retry topic on the messages, then consume the batch and collect the result
                try {
                    ConsumeMessageConcurrentlyService.this.resetRetryTopic(this.msgs);
                    if (this.msgs != null && !this.msgs.isEmpty()) {
                        Iterator i$ = this.msgs.iterator();

                        while(i$.hasNext()) {
                            MessageExt msg = (MessageExt)i$.next();
                            MessageAccessor.setConsumeStartTimeStamp(msg, String.valueOf(System.currentTimeMillis()));
                        }
                    }

                    status = listener.consumeMessage(Collections.unmodifiableList(this.msgs), context);
                } catch (Throwable var11) {
                    ConsumeMessageConcurrentlyService.log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}", new Object[]{RemotingHelper.exceptionSimpleDesc(var11), ConsumeMessageConcurrentlyService.this.consumerGroup, this.msgs, this.messageQueue});
                    hasException = true;
                }
	
                // Determine the return type based on exceptions, timeouts, and the listener's status
                long consumeRT = System.currentTimeMillis() - beginTimestamp;
                if (null == status) {
                    if (hasException) {
                        returnType = ConsumeReturnType.EXCEPTION;
                    } else {
                        returnType = ConsumeReturnType.RETURNNULL;
                    }
                } else if (consumeRT >= ConsumeMessageConcurrentlyService.this.defaultMQPushConsumer.getConsumeTimeout() * 60L * 1000L) {
                    returnType = ConsumeReturnType.TIME_OUT;
                } else if (ConsumeConcurrentlyStatus.RECONSUME_LATER == status) {
                    returnType = ConsumeReturnType.FAILED;
                } else if (ConsumeConcurrentlyStatus.CONSUME_SUCCESS == status) {
                    returnType = ConsumeReturnType.SUCCESS;
                }

                if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                    consumeMessageContext.getProps().put("ConsumeContextType", returnType.name());
                }

                if (null == status) {
                    ConsumeMessageConcurrentlyService.log.warn("consumeMessage return null, Group: {} Msgs: {} MQ: {}", new Object[]{ConsumeMessageConcurrentlyService.this.consumerGroup, this.msgs, this.messageQueue});
                    status = ConsumeConcurrentlyStatus.RECONSUME_LATER;
                }

                if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                    consumeMessageContext.setStatus(status.toString());
                    consumeMessageContext.setSuccess(ConsumeConcurrentlyStatus.CONSUME_SUCCESS == status);
                    ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
                }

                ConsumeMessageConcurrentlyService.this.getConsumerStatsManager().incConsumeRT(ConsumeMessageConcurrentlyService.this.consumerGroup, this.messageQueue.getTopic(), consumeRT);
                if (!this.processQueue.isDropped()) {
                    ConsumeMessageConcurrentlyService.this.processConsumeResult(status, context, this);
                } else {
                    ConsumeMessageConcurrentlyService.log.warn("processQueue is dropped without process consume result. messageQueue={}, msgs={}", this.messageQueue, this.msgs);
                }

            }
        }

        public MessageQueue getMessageQueue() {
            return this.messageQueue;
        }
    }

To summarize the main flow of concurrent (non-orderly) consumption:
1) Store the messages to be consumed in the ProcessQueue and execute the before-consume hook.
2) Rewrite the topic of the messages to the consumer group's retry topic.
3) Consume in batches: the maximum number of messages handed to the business listener per call is the configured consumeMessageBatchMaxSize.
4) Execute the after-consume hook, take the result returned by the business listener (success or retry) as the ACK, then update the consume offset and remove the corresponding messages from the ProcessQueue. A minimal listener sketch follows this list.
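
To make the ACK semantics in step 4 concrete, here is a minimal push-consumer listener sketch, assuming the org.apache.rocketmq client (use the com.alibaba.rocketmq packages on older releases); the group name, name-server address, and topic are placeholders. Returning CONSUME_SUCCESS acknowledges the batch and lets the offset advance, while RECONSUME_LATER sends it back through the retry topic set in step 2.

    import java.util.List;

    import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
    import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
    import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
    import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
    import org.apache.rocketmq.common.message.MessageExt;

    public class ConcurrentConsumerSketch {
        public static void main(String[] args) throws Exception {
            DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerConcurrentTest"); // hypothetical group
            consumer.setNamesrvAddr("192.168.152.129:9876");      // placeholder name server
            consumer.setConsumeMessageBatchMaxSize(1);            // max messages handed to the listener per call
            consumer.subscribe("TOPIC_TEST", "*");                // hypothetical topic
            consumer.registerMessageListener(new MessageListenerConcurrently() {
                @Override
                public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                                ConsumeConcurrentlyContext context) {
                    try {
                        for (MessageExt msg : msgs) {
                            System.out.printf("consume offset=%d body=%s%n",
                                msg.getQueueOffset(), new String(msg.getBody()));
                        }
                        // ACK: this batch's offset is updated and the messages are removed from the ProcessQueue
                        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                    } catch (Exception e) {
                        // NACK: the batch is sent back to the broker and redelivered via the retry topic
                        return ConsumeConcurrentlyStatus.RECONSUME_LATER;
                    }
                }
            });
            consumer.start();
        }
    }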
2. Orderly consumption
The difference from concurrent consumption is that here a ConsumeRequest is built only from a ProcessQueue and a MessageQueue (it does not carry the message list). Next we focus on the ConsumeRequest (the encapsulated consume task) in ConsumeMessageOrderlyService.

    public void submitConsumeRequest(List<MessageExt> msgs, ProcessQueue processQueue, MessageQueue messageQueue, boolean dispathToConsume) {
        if (dispathToConsume) {
            ConsumeMessageOrderlyService.ConsumeRequest consumeRequest = new ConsumeMessageOrderlyService.ConsumeRequest(processQueue, messageQueue);
            this.consumeExecutor.submit(consumeRequest);
        }

    }

The consumption logic itself is similar to concurrent consumption; the key point is the ordering of message retrieval and consumption, and guaranteeing order inevitably introduces locking.
Within a consumer, the lock granularity for the threads in the consume thread pool is the MessageQueue (consume queue). In other words, RocketMQ implements ordered consumption per MessageQueue and cannot provide global ordering across multiple MessageQueues; if a topic needs globally ordered consumption, that topic must be limited to a single queue. Ordered consumption will be covered in detail later.
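
For comparison, a minimal orderly-consumer sketch with hypothetical group/topic names, again assuming the org.apache.rocketmq client. Because the lock granularity is the MessageQueue, the listener sees the messages of any single queue strictly in order; on failure it should return SUSPEND_CURRENT_QUEUE_A_MOMENT rather than skipping ahead.

    import java.util.List;

    import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
    import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyContext;
    import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
    import org.apache.rocketmq.client.consumer.listener.MessageListenerOrderly;
    import org.apache.rocketmq.common.message.MessageExt;

    public class OrderlyConsumerSketch {
        public static void main(String[] args) throws Exception {
            DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerOrderlyTest"); // hypothetical group
            consumer.setNamesrvAddr("192.168.152.129:9876");   // placeholder name server
            consumer.subscribe("TOPIC_ORDER_TEST", "*");       // hypothetical topic
            consumer.registerMessageListener(new MessageListenerOrderly() {
                @Override
                public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
                    // Messages from the same MessageQueue arrive here strictly in order,
                    // because the queue is locked while this callback runs.
                    for (MessageExt msg : msgs) {
                        System.out.printf("orderly consume queueId=%d offset=%d%n",
                            msg.getQueueId(), msg.getQueueOffset());
                    }
                    return ConsumeOrderlyStatus.SUCCESS;
                    // On failure, return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT
                    // so the queue is retried without breaking the order.
                }
            });
            consumer.start();
        }
    }

The consume thread pool still contains multiple threads in this mode, but each MessageQueue is locked while it is being processed, which is exactly the lock granularity discussed above.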

RocketMQ consumer startup flow

This section walks through the startup flow of a RocketMQ consumer to see what is actually done during startup. The startup flows of DefaultMQPushConsumer and DefaultMQPullConsumer are largely similar, and DefaultMQPushConsumer is the more complex of the two, so the discussion focuses on the DefaultMQPushConsumer startup flow.

(1) Set the consumerGroup, the NameServer address, and the initial consume offset policy, and build the Consumer-side SubscriptionData (subscription relationship) from the Topic parameter;
(2) Register the message listener on the Consumer side, which consumes messages when they arrive (a minimal code sketch follows this list);
(3) Start the defaultMQPushConsumerImpl instance. This mainly performs pre-checks, copies the subscription relationships (from defaultMQPushConsumer into rebalanceImpl, including the subscription for the retryTopic), creates the MQClientInstance, sets the properties of rebalanceImpl, initializes the pullAPIWrapper wrapper object, initializes the offsetStore instance and loads the consume progress, starts the consume-message service thread, and registers the consumer with the MQClientInstance;
(4) Start the MQClientInstance, which starts the client network-communication threads, the pull-message service thread, the rebalance service thread, and several scheduled tasks;
(5) Send a heartbeat to all Brokers (with a lock held);
(6) Finally, wake up the rebalance service thread so that load balancing starts on the Consumer side;
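
Steps (1) and (2) map directly onto a few lines of application code, and everything from step (3) onward happens inside consumer.start(). A minimal sketch with placeholder values (assuming the org.apache.rocketmq client):

    import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
    import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
    import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
    import org.apache.rocketmq.common.consumer.ConsumeFromWhere;

    public class ConsumerStartupSketch {
        public static void main(String[] args) throws Exception {
            DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerStartupTest");  // (1) consumer group
            consumer.setNamesrvAddr("192.168.152.129:9876;192.168.152.130:9876");               // (1) name server list
            consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);            // (1) initial offset policy
            consumer.subscribe("TOPIC_TEST", "TAG_TEST");                                       // (1) builds SubscriptionData
            consumer.registerMessageListener(                                                   // (2) register the listener
                (MessageListenerConcurrently) (msgs, context) -> ConsumeConcurrentlyStatus.CONSUME_SUCCESS);
            consumer.start();                                                                   // (3)-(6) run inside start()
        }
    }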

Flow analysis of the Pull and Push consumption modes in RocketMQ

RocketMQ offers two consumption modes, Push and Pull; Push is used in most scenarios. In the source code they correspond to the DefaultMQPushConsumer and DefaultMQPullConsumer classes. Push is actually implemented on top of Pull: the client keeps polling the Broker for messages, and when there is no new message the Broker holds the Pull request until a new message arrives, then releases the hold and returns the new message.

PULL mode

RocketMQ's Pull mode is comparatively simple. As the demo code above shows, the application pulls messages directly from the MessageQueues obtained for a Topic (what ultimately runs is PullAPIWrapper's pullKernelImpl() method, which sends a pull-message RPC request to the Broker). The consume offset has to be maintained by the Consumer itself.
In the demo above we fetch the queues for the topic and loop over them, calling pullBlockIfNotFound on each queue; this method sends a pull request to the broker synchronously. After getting the return value, we have to store the consume offset ourselves, check the status, and process the data. pullBlockIfNotFound mainly delegates to pullSyncImpl to send the request to the broker; let's look at how that method works. Its parameters are: 1) the message queue (calling the consumer's fetchSubscribeMessageQueues(topic) returns the message queues of the given topic); 2) a filter expression (pass null if no filtering is needed); 3) the offset, i.e. the consume progress of the queue; 4) the maximum number of messages to pull at once; 5) the timeout (the default can be used).

    private PullResult pullSyncImpl(MessageQueue mq, String subExpression, long offset, int maxNums, boolean block, long timeout) throws MQClientException, RemotingException, MQBrokerException, InterruptedException {
        // Ensure the consumer is in a running state
        this.makeSureStateOK();
        // Validate the arguments
        if (null == mq) {
            throw new MQClientException("mq is null", (Throwable)null);
        } else if (offset < 0L) {
            throw new MQClientException("offset < 0", (Throwable)null);
        } else if (maxNums <= 0) {
            throw new MQClientException("maxNums <= 0", (Throwable)null);
        } else {
            // Check whether the topic of the queue being pulled is already subscribed; if not, create and store a subscription entry
            this.subscriptionAutomatically(mq.getTopic());
            // Build the sysFlag for this pull operation
            int sysFlag = PullSysFlag.buildSysFlag(false, block, true, false);

            SubscriptionData subscriptionData;
            try {
                // Build the SubscriptionData from the consumer group, topic, and filter expression
                subscriptionData = FilterAPI.buildSubscriptionData(this.defaultMQPullConsumer.getConsumerGroup(), mq.getTopic(), subExpression);
            } catch (Exception var15) {
                throw new MQClientException("parse subscription error", var15);
            }
            // Determine the timeout
            long timeoutMillis = block ? this.defaultMQPullConsumer.getConsumerTimeoutMillisWhenSuspend() : timeout;
            // Pull messages via pullAPIWrapper.pullKernelImpl
            PullResult pullResult = this.pullAPIWrapper.pullKernelImpl(mq, subscriptionData.getSubString(), 0L, offset, maxNums, sysFlag, 0L, this.defaultMQPullConsumer.getBrokerSuspendMaxTimeMillis(), timeoutMillis, CommunicationMode.SYNC, (PullCallback)null);
            // Process the result: 1) decode the body into individual messages, 2) apply message filtering
            this.pullAPIWrapper.processPullResult(mq, pullResult, subscriptionData);
            // Execute the consume-message hooks, if any are registered
            if (!this.consumeMessageHookList.isEmpty()) {
                ConsumeMessageContext consumeMessageContext = null;
                consumeMessageContext = new ConsumeMessageContext();
                consumeMessageContext.setConsumerGroup(this.groupName());
                consumeMessageContext.setMq(mq);
                consumeMessageContext.setMsgList(pullResult.getMsgFoundList());
                consumeMessageContext.setSuccess(false);
                this.executeHookBefore(consumeMessageContext);
                consumeMessageContext.setStatus(ConsumeConcurrentlyStatus.CONSUME_SUCCESS.toString());
                consumeMessageContext.setSuccess(true);
                this.executeHookAfter(consumeMessageContext);
            }
            // Return the pull result
            return pullResult;
        }
    }

The method above mainly tidies up the parameters before sending the request to the broker and converts the return value after the broker responds. The pull request itself is sent in pullKernelImpl, which was introduced earlier; through its parameters you can choose between synchronous and asynchronous request modes. In synchronous mode the consumer waits for the broker's response or a timeout. It is not described further here.
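
As a usage note on the asynchronous path just mentioned, the pull consumer also exposes a callback-based pull(...) overload. The sketch below is illustrative only: the helper name and the batch size of 32 are made up, and the org.apache.rocketmq client is assumed.

    import org.apache.rocketmq.client.consumer.DefaultMQPullConsumer;
    import org.apache.rocketmq.client.consumer.PullCallback;
    import org.apache.rocketmq.client.consumer.PullResult;
    import org.apache.rocketmq.common.message.MessageQueue;

    public class AsyncPullSketch {
        // Hypothetical helper: issues one asynchronous pull against a single queue
        static void pullAsync(DefaultMQPullConsumer consumer, MessageQueue mq, long offset) throws Exception {
            consumer.pull(mq, (String) null, offset, 32, new PullCallback() {
                @Override
                public void onSuccess(PullResult pullResult) {
                    // Runs on a client callback thread once the broker responds
                    System.out.printf("status=%s nextOffset=%d%n",
                        pullResult.getPullStatus(), pullResult.getNextBeginOffset());
                }

                @Override
                public void onException(Throwable e) {
                    e.printStackTrace();   // network error or broker exception
                }
            });
        }
    }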

PUSH mode

As mentioned earlier, strictly speaking RocketMQ does not implement a true Push mode for message consumption; it is an optimization layered on top of Pull. On the Consumer side, a dedicated background thread, PullMessageService, keeps taking PullRequest objects from a blocking queue, pullRequestQueue, and sends pull-message RPC requests to the Broker through the network-communication module. Meanwhile another background thread, RebalanceService, balances the load according to the number of message queues in the Topic and the number of consumers in the consumer group, and puts the resulting PullRequest instances into pullRequestQueue. This is a fairly typical producer-consumer model that achieves near-real-time automatic message pulling, and the consume progress is then advanced according to whether the business reports successful consumption.
On the Broker side, when the PullMessageProcessor receives the pull-message RPC request, it fetches messages from the commitLog through the MessageStore instance. If the first pull attempt fails (for example, the Broker has no messages to consume), the request is held and suspended via the long-polling mechanism, and is then retried by the Broker's background thread PullRequestHoldService and reprocessed with the help of the background thread ReputMessageService.
Since the classes and methods involved have already been introduced above, the code is not examined again here.
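
Instead, the client-side producer-consumer loop described above can be summed up with a simplified model. This is not the literal source; the placeholder PullRequest type and method names only mirror the structure described in the prose.

    import java.util.concurrent.LinkedBlockingQueue;

    // Simplified model of the push consumer's pull loop: RebalanceService produces
    // PullRequest objects, PullMessageService consumes them and issues pulls.
    public class PullLoopSketch {
        static class PullRequest { String topic; int queueId; long nextOffset; }   // placeholder, not the real class

        private final LinkedBlockingQueue<PullRequest> pullRequestQueue = new LinkedBlockingQueue<>();

        // What RebalanceService does after (re)assigning queues to this consumer
        void executePullRequestImmediately(PullRequest request) {
            pullRequestQueue.offer(request);
        }

        // What PullMessageService's run() loop does
        void runLoop() throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                PullRequest request = pullRequestQueue.take();     // blocks until rebalancing enqueues work
                pullMessage(request);                              // sends the pull RPC; in the real client the
                                                                   // async callback re-enqueues the request with
                                                                   // nextOffset, keeping the loop going
            }
        }

        private void pullMessage(PullRequest request) {
            // In the real client this delegates to DefaultMQPushConsumerImpl.pullMessage(pullRequest),
            // which builds and sends the pull request to the broker.
            System.out.printf("pull topic=%s queueId=%d from offset=%d%n",
                request.topic, request.queueId, request.nextOffset);
        }
    }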

RocketMQ's long-polling pull mechanism

This is an optimization of the consumer's pull requests to the broker. The main idea is: if the Consumer's first pull attempt fails (for example, the Broker has no messages to consume), the Broker does not immediately return a Response to the client; instead it holds and suspends the request. On the Broker side, a dedicated background thread, PullRequestHoldService, then repeatedly retries the held pull requests to fetch messages. At the same time, another thread, ReputMessageService, keeps building the ConsumeQueue/IndexFile data and takes out the held pull requests for a second round of processing. This long-polling mechanism avoids the problem of the Consumer flooding the cluster with useless polling pull requests and driving up the load on the Brokers.

How the Consumer sends pull requests to the Broker

On the RocketMQ Consumer side, the background service thread PullMessageService is the initiator of pull requests. It keeps trying to take PullRequest elements from a LinkedBlockingQueue and, based on the parameters in the pullRequest and the subscription information, calls pullAPIWrapper's pullKernelImpl() method to send the encapsulated pull request, a PullMessageRequestHeader, to the Broker to fetch messages (the actual RPC call for one pull request is made by the pullMessage() method of MQClientAPIImpl).
The key method for sending a pull request is DefaultMQPushConsumerImpl.pullMessage(pullRequest):
(1) Check whether the ProcessQueue has been "dropped"; if dropped is true, return immediately (dropped was introduced in the "Rebalance" discussion; true means the queue has been removed);
(2) Set the pull timestamp on the ProcessQueue;
(3) Apply flow control: if any of the following holds, the pull request is postponed and retried later: 1) in the queue being consumed, the number or total size of unconsumed messages exceeds the thresholds (by default 1000 messages per queue / 100 MB of cached messages); 2) for concurrent (non-orderly) consumption, the offset span of the messages being consumed in the queue exceeds the threshold (default 2000). A simplified sketch of this check follows the list;
(4) Get the subscription relationship, SubscriptionData, for the topic;
(5) Build the pull callback object, PullCallback. The result returned from the Broker is handled asynchronously (the RPC request is sent asynchronously): if the Broker returns messages successfully, the callback first puts them into the ProcessQueue (into its msgTreeMap container) and then, through the consume-message service thread consumeMessageService, submits the wrapped ConsumeRequest to the consumer's thread pool consumeExecutor for asynchronous processing (concretely, the message listener registered on the DefaultMQPushConsumer instance by the business application consumes the messages);
(6) Read the commitOffsetValue from the Consumer's memory;
(7) Send the pull-message RPC request to the Broker through RocketMQ's Remoting communication layer;
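
The flow-control check in step (3) can be expressed as a simplified model; the threshold values follow the defaults quoted above, and the class and field names here are illustrative, not the real client fields.

    // Simplified model of the flow-control decision made before each pull
    public class PullFlowControlSketch {
        static final int PULL_THRESHOLD_FOR_QUEUE = 1000;        // max cached messages per queue (default)
        static final int PULL_THRESHOLD_SIZE_MB = 100;           // max cached message size per queue, in MB (default)
        static final int CONSUME_CONCURRENTLY_MAX_SPAN = 2000;   // max offset span for concurrent consumption (default)

        static boolean shouldDelayPull(long cachedMsgCount, long cachedMsgSizeInMB,
                                       long offsetSpan, boolean orderly) {
            if (cachedMsgCount > PULL_THRESHOLD_FOR_QUEUE) {
                return true;                                      // too many unconsumed messages cached
            }
            if (cachedMsgSizeInMB > PULL_THRESHOLD_SIZE_MB) {
                return true;                                      // cached messages occupy too much memory
            }
            if (!orderly && offsetSpan > CONSUME_CONCURRENTLY_MAX_SPAN) {
                return true;                                      // offsets being consumed are spread too far apart
            }
            return false;                                         // OK to send the pull request now
        }
    }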

How the Broker handles a pull request in the normal case

Let's first look at the normal case (i.e. the messages to be pulled already exist on the Broker and can generally be pulled) and how the Broker handles the pull request. processRequest is the broker's entry point for handling the request.

    private RemotingCommand processRequest(final Channel channel, RemotingCommand request, boolean brokerAllowSuspend)
        throws RemotingCommandException {
        RemotingCommand response = RemotingCommand.createResponseCommand(PullMessageResponseHeader.class);
        final PullMessageResponseHeader responseHeader = (PullMessageResponseHeader) response.readCustomHeader();
        final PullMessageRequestHeader requestHeader =
            (PullMessageRequestHeader) request.decodeCommandCustomHeader(PullMessageRequestHeader.class);

        response.setOpaque(request.getOpaque());

        log.debug("receive PullMessage request command, {}", request);

        if (!PermName.isReadable(this.brokerController.getBrokerConfig().getBrokerPermission())) {
            response.setCode(ResponseCode.NO_PERMISSION);
            response.setRemark(String.format("the broker[%s] pulling message is forbidden", this.brokerController.getBrokerConfig().getBrokerIP1()));
            return response;
        }

        SubscriptionGroupConfig subscriptionGroupConfig =
            this.brokerController.getSubscriptionGroupManager().findSubscriptionGroupConfig(requestHeader.getConsumerGroup());
        if (null == subscriptionGroupConfig) {
            response.setCode(ResponseCode.SUBSCRIPTION_GROUP_NOT_EXIST);
            response.setRemark(String.format("subscription group [%s] does not exist, %s", requestHeader.getConsumerGroup(), FAQUrl.suggestTodo(FAQUrl.SUBSCRIPTION_GROUP_NOT_EXIST)));
            return response;
        }

        if (!subscriptionGroupConfig.isConsumeEnable()) {
            response.setCode(ResponseCode.NO_PERMISSION);
            response.setRemark("subscription group no permission, " + requestHeader.getConsumerGroup());
            return response;
        }

        final boolean hasSuspendFlag = PullSysFlag.hasSuspendFlag(requestHeader.getSysFlag());
        final boolean hasCommitOffsetFlag = PullSysFlag.hasCommitOffsetFlag(requestHeader.getSysFlag());
        final boolean hasSubscriptionFlag = PullSysFlag.hasSubscriptionFlag(requestHeader.getSysFlag());

        final long suspendTimeoutMillisLong = hasSuspendFlag ? requestHeader.getSuspendTimeoutMillis() : 0;

        TopicConfig topicConfig = this.brokerController.getTopicConfigManager().selectTopicConfig(requestHeader.getTopic());
        if (null == topicConfig) {
            log.error("the topic {} not exist, consumer: {}", requestHeader.getTopic(), RemotingHelper.parseChannelRemoteAddr(channel));
            response.setCode(ResponseCode.TOPIC_NOT_EXIST);
            response.setRemark(String.format("topic[%s] not exist, apply first please! %s", requestHeader.getTopic(), FAQUrl.suggestTodo(FAQUrl.APPLY_TOPIC_URL)));
            return response;
        }

        if (!PermName.isReadable(topicConfig.getPerm())) {
            response.setCode(ResponseCode.NO_PERMISSION);
            response.setRemark("the topic[" + requestHeader.getTopic() + "] pulling message is forbidden");
            return response;
        }

        if (requestHeader.getQueueId() < 0 || requestHeader.getQueueId() >= topicConfig.getReadQueueNums()) {
            String errorInfo = String.format("queueId[%d] is illegal, topic:[%s] topicConfig.readQueueNums:[%d] consumer:[%s]",
                requestHeader.getQueueId(), requestHeader.getTopic(), topicConfig.getReadQueueNums(), channel.remoteAddress());
            log.warn(errorInfo);
            response.setCode(ResponseCode.SYSTEM_ERROR);
            response.setRemark(errorInfo);
            return response;
        }

        SubscriptionData subscriptionData = null;
        ConsumerFilterData consumerFilterData = null;
        if (hasSubscriptionFlag) {
            try {
                subscriptionData = FilterAPI.build(
                    requestHeader.getTopic(), requestHeader.getSubscription(), requestHeader.getExpressionType()
                );
                if (!ExpressionType.isTagType(subscriptionData.getExpressionType())) {
                    consumerFilterData = ConsumerFilterManager.build(
                        requestHeader.getTopic(), requestHeader.getConsumerGroup(), requestHeader.getSubscription(),
                        requestHeader.getExpressionType(), requestHeader.getSubVersion()
                    );
                    assert consumerFilterData != null;
                }
            } catch (Exception e) {
                log.warn("Parse the consumer's subscription[{}] failed, group: {}", requestHeader.getSubscription(),
                    requestHeader.getConsumerGroup());
                response.setCode(ResponseCode.SUBSCRIPTION_PARSE_FAILED);
                response.setRemark("parse the consumer's subscription failed");
                return response;
            }
        } else {
            ConsumerGroupInfo consumerGroupInfo =
                this.brokerController.getConsumerManager().getConsumerGroupInfo(requestHeader.getConsumerGroup());
            if (null == consumerGroupInfo) {
                log.warn("the consumer's group info not exist, group: {}", requestHeader.getConsumerGroup());
                response.setCode(ResponseCode.SUBSCRIPTION_NOT_EXIST);
                response.setRemark("the consumer's group info not exist" + FAQUrl.suggestTodo(FAQUrl.SAME_GROUP_DIFFERENT_TOPIC));
                return response;
            }

            if (!subscriptionGroupConfig.isConsumeBroadcastEnable()
                && consumerGroupInfo.getMessageModel() == MessageModel.BROADCASTING) {
                response.setCode(ResponseCode.NO_PERMISSION);
                response.setRemark("the consumer group[" + requestHeader.getConsumerGroup() + "] can not consume by broadcast way");
                return response;
            }

            subscriptionData = consumerGroupInfo.findSubscriptionData(requestHeader.getTopic());
            if (null == subscriptionData) {
                log.warn("the consumer's subscription not exist, group: {}, topic:{}", requestHeader.getConsumerGroup(), requestHeader.getTopic());
                response.setCode(ResponseCode.SUBSCRIPTION_NOT_EXIST);
                response.setRemark("the consumer's subscription not exist" + FAQUrl.suggestTodo(FAQUrl.SAME_GROUP_DIFFERENT_TOPIC));
                return response;
            }

            if (subscriptionData.getSubVersion() < requestHeader.getSubVersion()) {
                log.warn("The broker's subscription is not latest, group: {} {}", requestHeader.getConsumerGroup(),
                    subscriptionData.getSubString());
                response.setCode(ResponseCode.SUBSCRIPTION_NOT_LATEST);
                response.setRemark("the consumer's subscription not latest");
                return response;
            }
            if (!ExpressionType.isTagType(subscriptionData.getExpressionType())) {
                consumerFilterData = this.brokerController.getConsumerFilterManager().get(requestHeader.getTopic(),
                    requestHeader.getConsumerGroup());
                if (consumerFilterData == null) {
                    response.setCode(ResponseCode.FILTER_DATA_NOT_EXIST);
                    response.setRemark("The broker's consumer filter data is not exist!Your expression may be wrong!");
                    return response;
                }
                if (consumerFilterData.getClientVersion() < requestHeader.getSubVersion()) {
                    log.warn("The broker's consumer filter data is not latest, group: {}, topic: {}, serverV: {}, clientV: {}",
                        requestHeader.getConsumerGroup(), requestHeader.getTopic(), consumerFilterData.getClientVersion(), requestHeader.getSubVersion());
                    response.setCode(ResponseCode.FILTER_DATA_NOT_LATEST);
                    response.setRemark("the consumer's consumer filter data not latest");
                    return response;
                }
            }
        }

        if (!ExpressionType.isTagType(subscriptionData.getExpressionType())
            && !this.brokerController.getBrokerConfig().isEnablePropertyFilter()) {
            response.setCode(ResponseCode.SYSTEM_ERROR);
            response.setRemark("The broker does not support consumer to filter message by " + subscriptionData.getExpressionType());
            return response;
        }

        MessageFilter messageFilter;
        if (this.brokerController.getBrokerConfig().isFilterSupportRetry()) {
            messageFilter = new ExpressionForRetryMessageFilter(subscriptionData, consumerFilterData,
                this.brokerController.getConsumerFilterManager());
        } else {
            messageFilter = new ExpressionMessageFilter(subscriptionData, consumerFilterData,
                this.brokerController.getConsumerFilterManager());
        }

        final GetMessageResult getMessageResult =
            this.brokerController.getMessageStore().getMessage(requestHeader.getConsumerGroup(), requestHeader.getTopic(),
                requestHeader.getQueueId(), requestHeader.getQueueOffset(), requestHeader.getMaxMsgNums(), messageFilter);
        if (getMessageResult != null) {
            response.setRemark(getMessageResult.getStatus().name());
            responseHeader.setNextBeginOffset(getMessageResult.getNextBeginOffset());
            responseHeader.setMinOffset(getMessageResult.getMinOffset());
            responseHeader.setMaxOffset(getMessageResult.getMaxOffset());

            if (getMessageResult.isSuggestPullingFromSlave()) {
                responseHeader.setSuggestWhichBrokerId(subscriptionGroupConfig.getWhichBrokerWhenConsumeSlowly());
            } else {
                responseHeader.setSuggestWhichBrokerId(MixAll.MASTER_ID);
            }

            switch (this.brokerController.getMessageStoreConfig().getBrokerRole()) {
                case ASYNC_MASTER:
                case SYNC_MASTER:
                    break;
                case SLAVE:
                    if (!this.brokerController.getBrokerConfig().isSlaveReadEnable()) {
                        response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
                        responseHeader.setSuggestWhichBrokerId(MixAll.MASTER_ID);
                    }
                    break;
            }

            if (this.brokerController.getBrokerConfig().isSlaveReadEnable()) {
                // consume too slow ,redirect to another machine
                if (getMessageResult.isSuggestPullingFromSlave()) {
                    responseHeader.setSuggestWhichBrokerId(subscriptionGroupConfig.getWhichBrokerWhenConsumeSlowly());
                }
                // consume ok
                else {
                    responseHeader.setSuggestWhichBrokerId(subscriptionGroupConfig.getBrokerId());
                }
            } else {
                responseHeader.setSuggestWhichBrokerId(MixAll.MASTER_ID);
            }

            switch (getMessageResult.getStatus()) {
                case FOUND:
                    response.setCode(ResponseCode.SUCCESS);
                    break;
                case MESSAGE_WAS_REMOVING:
                    response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
                    break;
                case NO_MATCHED_LOGIC_QUEUE:
                case NO_MESSAGE_IN_QUEUE:
                    if (0 != requestHeader.getQueueOffset()) {
                        response.setCode(ResponseCode.PULL_OFFSET_MOVED);

                        // XXX: warn and notify me
                        log.info("the broker store no queue data, fix the request offset {} to {}, Topic: {} QueueId: {} Consumer Group: {}",
                            requestHeader.getQueueOffset(),
                            getMessageResult.getNextBeginOffset(),
                            requestHeader.getTopic(),
                            requestHeader.getQueueId(),
                            requestHeader.getConsumerGroup()
                        );
                    } else {
                        response.setCode(ResponseCode.PULL_NOT_FOUND);
                    }
                    break;
                case NO_MATCHED_MESSAGE:
                    response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
                    break;
                case OFFSET_FOUND_NULL:
                    response.setCode(ResponseCode.PULL_NOT_FOUND);
                    break;
                case OFFSET_OVERFLOW_BADLY:
                    response.setCode(ResponseCode.PULL_OFFSET_MOVED);
                    // XXX: warn and notify me
                    log.info("the request offset: {} over flow badly, broker max offset: {}, consumer: {}",
                        requestHeader.getQueueOffset(), getMessageResult.getMaxOffset(), channel.remoteAddress());
                    break;
                case OFFSET_OVERFLOW_ONE:
                    response.setCode(ResponseCode.PULL_NOT_FOUND);
                    break;
                case OFFSET_TOO_SMALL:
                    response.setCode(ResponseCode.PULL_OFFSET_MOVED);
                    log.info("the request offset too small. group={}, topic={}, requestOffset={}, brokerMinOffset={}, clientIp={}",
                        requestHeader.getConsumerGroup(), requestHeader.getTopic(), requestHeader.getQueueOffset(),
                        getMessageResult.getMinOffset(), channel.remoteAddress());
                    break;
                default:
                    assert false;
                    break;
            }

            if (this.hasConsumeMessageHook()) {
                ConsumeMessageContext context = new ConsumeMessageContext();
                context.setConsumerGroup(requestHeader.getConsumerGroup());
                context.setTopic(requestHeader.getTopic());
                context.setQueueId(requestHeader.getQueueId());

                String owner = request.getExtFields().get(BrokerStatsManager.COMMERCIAL_OWNER);

                switch (response.getCode()) {
                    case ResponseCode.SUCCESS:
                        int commercialBaseCount = brokerController.getBrokerConfig().getCommercialBaseCount();
                        int incValue = getMessageResult.getMsgCount4Commercial() * commercialBaseCount;

                        context.setCommercialRcvStats(BrokerStatsManager.StatsType.RCV_SUCCESS);
                        context.setCommercialRcvTimes(incValue);
                        context.setCommercialRcvSize(getMessageResult.getBufferTotalSize());
                        context.setCommercialOwner(owner);

                        break;
                    case ResponseCode.PULL_NOT_FOUND:
                        if (!brokerAllowSuspend) {

                            context.setCommercialRcvStats(BrokerStatsManager.StatsType.RCV_EPOLLS);
                            context.setCommercialRcvTimes(1);
                            context.setCommercialOwner(owner);

                        }
                        break;
                    case ResponseCode.PULL_RETRY_IMMEDIATELY:
                    case ResponseCode.PULL_OFFSET_MOVED:
                        context.setCommercialRcvStats(BrokerStatsManager.StatsType.RCV_EPOLLS);
                        context.setCommercialRcvTimes(1);
                        context.setCommercialOwner(owner);
                        break;
                    default:
                        assert false;
                        break;
                }

                this.executeConsumeMessageHookBefore(context);
            }

            switch (response.getCode()) {
                case ResponseCode.SUCCESS:

                    this.brokerController.getBrokerStatsManager().incGroupGetNums(requestHeader.getConsumerGroup(), requestHeader.getTopic(),
                        getMessageResult.getMessageCount());

                    this.brokerController.getBrokerStatsManager().incGroupGetSize(requestHeader.getConsumerGroup(), requestHeader.getTopic(),
                        getMessageResult.getBufferTotalSize());

                    this.brokerController.getBrokerStatsManager().incBrokerGetNums(getMessageResult.getMessageCount());
                    if (this.brokerController.getBrokerConfig().isTransferMsgByHeap()) {
                        final long beginTimeMills = this.brokerController.getMessageStore().now();
                        final byte[] r = this.readGetMessageResult(getMessageResult, requestHeader.getConsumerGroup(), requestHeader.getTopic(), requestHeader.getQueueId());
                        this.brokerController.getBrokerStatsManager().incGroupGetLatency(requestHeader.getConsumerGroup(),
                            requestHeader.getTopic(), requestHeader.getQueueId(),
                            (int) (this.brokerController.getMessageStore().now() - beginTimeMills));
                        response.setBody(r);
                    } else {
                        try {
                            FileRegion fileRegion =
                                new ManyMessageTransfer(response.encodeHeader(getMessageResult.getBufferTotalSize()), getMessageResult);
                            channel.writeAndFlush(fileRegion).addListener(new ChannelFutureListener() {
                                @Override
                                public void operationComplete(ChannelFuture future) throws Exception {
                                    getMessageResult.release();
                                    if (!future.isSuccess()) {
                                        log.error("transfer many message by pagecache failed, {}", channel.remoteAddress(), future.cause());
                                    }
                                }
                            });
                        } catch (Throwable e) {
                            log.error("transfer many message by pagecache exception", e);
                            getMessageResult.release();
                        }

                        response = null;
                    }
                    break;
                case ResponseCode.PULL_NOT_FOUND:

                    if (brokerAllowSuspend && hasSuspendFlag) {
                        long pollingTimeMills = suspendTimeoutMillisLong;
                        if (!this.brokerController.getBrokerConfig().isLongPollingEnable()) {
                            pollingTimeMills = this.brokerController.getBrokerConfig().getShortPollingTimeMills();
                        }

                        String topic = requestHeader.getTopic();
                        long offset = requestHeader.getQueueOffset();
                        int queueId = requestHeader.getQueueId();
                        PullRequest pullRequest = new PullRequest(request, channel, pollingTimeMills,
                            this.brokerController.getMessageStore().now(), offset, subscriptionData, messageFilter);
                        this.brokerController.getPullRequestHoldService().suspendPullRequest(topic, queueId, pullRequest);
                        response = null;
                        break;
                    }

                case ResponseCode.PULL_RETRY_IMMEDIATELY:
                    break;
                case ResponseCode.PULL_OFFSET_MOVED:
                    if (this.brokerController.getMessageStoreConfig().getBrokerRole() != BrokerRole.SLAVE
                        || this.brokerController.getMessageStoreConfig().isOffsetCheckInSlave()) {
                        MessageQueue mq = new MessageQueue();
                        mq.setTopic(requestHeader.getTopic());
                        mq.setQueueId(requestHeader.getQueueId());
                        mq.setBrokerName(this.brokerController.getBrokerConfig().getBrokerName());

                        OffsetMovedEvent event = new OffsetMovedEvent();
                        event.setConsumerGroup(requestHeader.getConsumerGroup());
                        event.setMessageQueue(mq);
                        event.setOffsetRequest(requestHeader.getQueueOffset());
                        event.setOffsetNew(getMessageResult.getNextBeginOffset());
                        this.generateOffsetMovedEvent(event);
                        log.warn(
                            "PULL_OFFSET_MOVED:correction offset. topic={}, groupId={}, requestOffset={}, newOffset={}, suggestBrokerId={}",
                            requestHeader.getTopic(), requestHeader.getConsumerGroup(), event.getOffsetRequest(), event.getOffsetNew(),
                            responseHeader.getSuggestWhichBrokerId());
                    } else {
                        responseHeader.setSuggestWhichBrokerId(subscriptionGroupConfig.getBrokerId());
                        response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
                        log.warn("PULL_OFFSET_MOVED:none correction. topic={}, groupId={}, requestOffset={}, suggestBrokerId={}",
                            requestHeader.getTopic(), requestHeader.getConsumerGroup(), requestHeader.getQueueOffset(),
                            responseHeader.getSuggestWhichBrokerId());
                    }

                    break;
                default:
                    assert false;
            }
        } else {
            response.setCode(ResponseCode.SYSTEM_ERROR);
            response.setRemark("store getMessage return null");
        }

        boolean storeOffsetEnable = brokerAllowSuspend;
        storeOffsetEnable = storeOffsetEnable && hasCommitOffsetFlag;
        storeOffsetEnable = storeOffsetEnable
            && this.brokerController.getMessageStoreConfig().getBrokerRole() != BrokerRole.SLAVE;
        if (storeOffsetEnable) {
            this.brokerController.getConsumerOffsetManager().commitOffset(RemotingHelper.parseChannelRemoteAddr(channel),
                requestHeader.getConsumerGroup(), requestHeader.getTopic(), requestHeader.getQueueId(), requestHeader.getCommitOffset());
        }
        return response;
    }

The key points are:
(1) The pull-message business processor, PullMessageProcessor, handles pull requests in processRequest. After setting the opaque value on the response, it performs a series of pre-checks (whether the Broker is readable, whether the Topic/ConsumerGroup exists, whether the requested queueId is within the range configured for the Topic);
(2) It then calls the MessageStore instance's getMessage() method with "ConsumerGroup", "Topic", "queueId", and "offset" to try to read messages on the Broker;
(3) Inside getMessage(), findConsumeQueue() is used to obtain the logical consume queue, ConsumeQueue;
(4) By comparing the requested offset with the maxOffset and minOffset of the consume queue, the status value is set and the start offset of the next pull, nextBeginOffset, is computed; the ConsumeQueue's buffer is then obtained through its MappedFile mapping;
(5) Using the computed offsetPy (physical offset) and sizePy (physical message size), the corresponding message buffer is read from the commitLog, filled into the GetMessageResult return object, and returned along with the result fields (status / next start offset / maxOffset / minOffset);
(6) Depending on isTransferMsgByHeap (true by default), one of the following two approaches is used to actually read the message content from GetMessageResult and return it to the Consumer:
Approach 1: use JDK NIO ByteBuffers to copy the messageBufferList containing the message bodies into heap memory in a loop, producing a byte[] that is set as the response body; the response is then sent to the Consumer through the RPC communication component, NettyRemotingServer;
Approach 2: use Netty's zero-copy FileRegion, which wraps FileChannel.transferTo to transfer the data directly from the file buffer to the target Channel, avoiding the memory copies caused by looping over write calls; this approach performs better;
(7) Finally, the PullMessageProcessor commits and persists the consume offset progress;

How the Broker suspends and revisits pull requests

Having covered the normal pull flow, let's look at what the Broker's PullMessageProcessor does when there is no message to pull yet (i.e. PULL_NOT_FOUND); this is the key to RocketMQ's long-polling mechanism.
Long polling is an optimization of plain polling that balances the respective drawbacks of the traditional Push and Pull models: if the Server currently has no data matching the Client's pull request, it holds the request until data becomes available or a timeout expires, and only then returns. After the response comes back, the Client immediately issues the next long-polling request. RocketMQ's push mode follows exactly this design: if the first pull attempt fails (for example, the Broker temporarily has no messages to consume), the request is held and suspended (the response is set to null, so nothing is sent back to the Consumer and the response is not processed at this point), and it is then retried by the Broker's background thread PullRequestHoldService and reprocessed with the help of the background thread ReputMessageService. On the Broker side, these two background service threads, PullRequestHoldService and ReputMessageService, are the key to the long-polling mechanism. They work as follows:
(1) The PullRequestHoldService thread: this service thread takes PullRequest entries from the local pullRequestTable cache and checks the polling condition, "is the offset to pull from smaller than the maximum offset of the consume queue?". If the condition holds, new messages have arrived at the Broker, and the pull RPC is re-attempted through PullMessageProcessor's executeRequestWhenWakeup() method (the check is retried every 5 s, and the overall long-polling timeout defaults to 30 s);
(2) The ReputMessageService thread: this service thread continuously parses data from the storage object, the commitLog, and dispatches it to build the ConsumeQueue (logical consume queue) and IndexFile (message index) data. At the same time it takes the held PullRequest entries out of the pullRequestTable cache and triggers their second round of processing (concretely, PullMessageProcessor's executeRequestWhenWakeup() submits a re-pull task to the pullMessageExecutor business thread pool, which calls the processor's processRequest() method again). The ReputMessageService thread sleeps 1 ms (Thread.sleep(1)) between iterations. A simplified model of this hold-and-notify loop follows the list.
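
Finally, here is a simplified model of the hold-and-notify loop formed by these two threads. This is not the literal broker source; the placeholder types and the "topic@queueId" key are only meant to mirror the behaviour described above.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Simplified model of PullRequestHoldService: suspended pull requests are grouped by
    // "topic@queueId" and re-examined either periodically (every 5 s with long polling enabled)
    // or whenever ReputMessageService dispatches new ConsumeQueue entries and notifies arrivals.
    public class PullHoldSketch {
        static class HeldRequest { long pullFromOffset; Runnable retryPull; }     // placeholder type

        private final Map<String, List<HeldRequest>> pullRequestTable = new ConcurrentHashMap<>();

        // What PullMessageProcessor does on PULL_NOT_FOUND instead of replying
        void suspendPullRequest(String topic, int queueId, HeldRequest request) {
            pullRequestTable.computeIfAbsent(topic + "@" + queueId, k -> new CopyOnWriteArrayList<>()).add(request);
        }

        // Called by the periodic check and by ReputMessageService when new messages arrive
        void notifyMessageArriving(String topic, int queueId, long maxOffsetInQueue) {
            List<HeldRequest> held = pullRequestTable.get(topic + "@" + queueId);
            if (held == null) {
                return;
            }
            for (HeldRequest request : held) {
                if (maxOffsetInQueue > request.pullFromOffset) {
                    held.remove(request);
                    request.retryPull.run();   // models executeRequestWhenWakeup(): processRequest runs again
                }
            }
        }
    }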
