rocketmq: understanding writeQueueNum and readQueueNum - part 2

rocketmq:4.3.2

Test environment: a small cluster, 2 masters and 2 slaves (2m-2s)

1. The entry point of message consumption. A simple example:

import java.util.List;
import java.util.concurrent.TimeUnit;

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.MessageExt;

DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("group_name_4"); //step1
consumer.setNamesrvAddr(nameserver); // nameserver: your namesrv address, e.g. "192.168.x.x:9876"
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
consumer.subscribe("topic_store_test4", "*"); //step2
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                    ConsumeConcurrentlyContext context) {
        System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), msgs);
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});
consumer.start(); //step3
System.out.printf("Consumer Started.%n");
TimeUnit.MINUTES.sleep(10L);

A detailed walkthrough of the flow:

1.1 At step1, the object actually created under the hood is a DefaultMQPushConsumerImpl, which contains a field:

private final RebalanceImpl rebalanceImpl = new RebalancePushImpl(this);

1.2 The parent class of that RebalancePushImpl, RebalanceImpl, declares a member field:

protected final ConcurrentMap<String/* topic */, Set<MessageQueue>> topicSubscribeInfoTable =
        new ConcurrentHashMap<String, Set<MessageQueue>>();

1.3 Continue down to step3. Inside the start method, the fourth call from the end is the one that matters here; the outline is as follows, note step4:

start() {
// do xxx
mQClientFactory.start(); //step3.5
this.updateTopicSubscribeInfoWhenSubscriptionChanged(); //step4
this.mQClientFactory.checkClientInBroker();
this.mQClientFactory.sendHeartbeatToAllBrokerWithLock();
this.mQClientFactory.rebalanceImmediately();
}

1.4.1 At step3.5, MQClientInstance.start() is actually invoked, which also starts a RebalanceService task thread (this is what performs queue load balancing). The actual balancing work is in turn delegated to the RebalanceImpl class mentioned in 1.1. At the end of that whole sequence, a list of PullRequest objects is created, and these objects are added into the pullRequestQueue (a blocking queue) of the PullMessageService object.
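To make that concrete, here is a condensed sketch of RebalanceImpl.rebalanceByTopic for CLUSTERING mode (trimmed from the 4.3.2 source; null checks and logging omitted):

// mqSet comes straight from topicSubscribeInfoTable (the field from 1.2), so it holds
// readQueueNums queues per broker -- this is how queueId 2 and 3 get assigned later on.
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, consumerGroup);
if (mqSet != null && cidAll != null) {
    List<MessageQueue> mqAll = new ArrayList<MessageQueue>(mqSet);
    Collections.sort(mqAll);
    Collections.sort(cidAll);

    // default strategy: AllocateMessageQueueAveragely
    List<MessageQueue> allocateResult = this.allocateMessageQueueStrategy.allocate(
        this.consumerGroup, this.mQClientFactory.getClientId(), mqAll, cidAll);

    // updateProcessQueueTableInRebalance(...) then creates one PullRequest per newly
    // assigned MessageQueue and hands the list over to PullMessageService.
    this.updateProcessQueueTableInRebalance(topic, new HashSet<MessageQueue>(allocateResult), isOrder);
}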

1.4.2 PullMessageService is also a thread task; in its run method, it only goes to the broker to actually pull messages once it can take a task from the pullRequestQueue.
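Its run method, slightly trimmed from the 4.3.2 source, shows why: take() blocks until rebalance (or a requeued request) offers work.

@Override
public void run() {
    while (!this.isStopped()) {
        try {
            PullRequest pullRequest = this.pullRequestQueue.take(); // blocks while the queue is empty
            this.pullMessage(pullRequest);                          // the actual pull from the broker
        } catch (InterruptedException ignored) {
        } catch (Exception e) {
            log.error("Pull Message Service Run Method exception", e);
        }
    }
}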

1.4.3 Keep debugging down inside the step4 method and you eventually land in updateTopicRouteInfoFromNameServer. The step4 method itself is short; in the 4.3.2 source it simply walks the consumer's subscription table and refreshes the route info of each topic:
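private void updateTopicSubscribeInfoWhenSubscriptionChanged() {
    Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
    if (subTable != null) {
        for (final Map.Entry<String, SubscriptionData> entry : subTable.entrySet()) {
            final String topic = entry.getKey();
            // refresh route info -> eventually updates topicSubscribeInfoTable (see step5 below)
            this.mQClientFactory.updateTopicRouteInfoFromNameServer(topic);
        }
    }
}

And updateTopicRouteInfoFromNameServer itself, pasted below (a bit long):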

public boolean updateTopicRouteInfoFromNameServer(final String topic, boolean isDefault,
        DefaultMQProducer defaultMQProducer) {
        try {
            if (this.lockNamesrv.tryLock(LOCK_TIMEOUT_MILLIS, TimeUnit.MILLISECONDS)) {
                try {
                    TopicRouteData topicRouteData;
                    if (isDefault && defaultMQProducer != null) {
                        topicRouteData = this.mQClientAPIImpl.getDefaultTopicRouteInfoFromNameServer(defaultMQProducer.getCreateTopicKey(),
                            1000 * 3);
                        if (topicRouteData != null) {
                            for (QueueData data : topicRouteData.getQueueDatas()) {
                                int queueNums = Math.min(defaultMQProducer.getDefaultTopicQueueNums(), data.getReadQueueNums());
                                data.setReadQueueNums(queueNums);
                                data.setWriteQueueNums(queueNums);
                            }
                        }
                    } else {
                        topicRouteData = this.mQClientAPIImpl.getTopicRouteInfoFromNameServer(topic, 1000 * 3);
                    }
                    if (topicRouteData != null) {
                        TopicRouteData old = this.topicRouteTable.get(topic);
                        boolean changed = topicRouteDataIsChange(old, topicRouteData);
                        if (!changed) {
                            changed = this.isNeedUpdateTopicRouteInfo(topic);
                        } else {
                            log.info("the topic[{}] route info changed, old[{}] ,new[{}]", topic, old, topicRouteData);
                        }

                        if (changed) {
                            TopicRouteData cloneTopicRouteData = topicRouteData.cloneTopicRouteData();

                            for (BrokerData bd : topicRouteData.getBrokerDatas()) {
                                this.brokerAddrTable.put(bd.getBrokerName(), bd.getBrokerAddrs());
                            }

                            // Update Pub info
                            {
                                TopicPublishInfo publishInfo = topicRouteData2TopicPublishInfo(topic, topicRouteData);
                                publishInfo.setHaveTopicRouterInfo(true);
                                Iterator<Entry<String, MQProducerInner>> it = this.producerTable.entrySet().iterator();
                                while (it.hasNext()) {
                                    Entry<String, MQProducerInner> entry = it.next();
                                    MQProducerInner impl = entry.getValue();
                                    if (impl != null) {
                                        impl.updateTopicPublishInfo(topic, publishInfo); 
                                    }
                                }
                            }

                            // Update sub info
                            {
                                Set<MessageQueue> subscribeInfo = topicRouteData2TopicSubscribeInfo(topic, topicRouteData);
                                Iterator<Entry<String, MQConsumerInner>> it = this.consumerTable.entrySet().iterator();
                                while (it.hasNext()) {
                                    Entry<String, MQConsumerInner> entry = it.next();
                                    MQConsumerInner impl = entry.getValue();
                                    if (impl != null) {
                                        impl.updateTopicSubscribeInfo(topic, subscribeInfo); // step5
                                    }
                                }
                            }
                            log.info("topicRouteTable.put. Topic = {}, TopicRouteData[{}]", topic, cloneTopicRouteData);
                            this.topicRouteTable.put(topic, cloneTopicRouteData);
                            return true;
                        }
                    } else {
                        log.warn("updateTopicRouteInfoFromNameServer, getTopicRouteInfoFromNameServer return null, Topic: {}", topic);
                    }
                } catch (Exception e) {
                    if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX) && !topic.equals(MixAll.AUTO_CREATE_TOPIC_KEY_TOPIC)) {
                        log.warn("updateTopicRouteInfoFromNameServer Exception", e);
                    }
                } finally {
                    this.lockNamesrv.unlock();
                }
            } else {
                log.warn("updateTopicRouteInfoFromNameServer tryLock timeout {}ms", LOCK_TIMEOUT_MILLIS);
            }
        } catch (InterruptedException e) {
            log.warn("updateTopicRouteInfoFromNameServer Exception", e);
        }

        return false;
    }

At the // Update sub info comment, a set of MessageQueue objects is built from the route data: readQueueNums queues for every broker serving the topic, i.e. readQueueNums * brokerCount entries in total.
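The helper doing this is MQClientInstance.topicRouteData2TopicSubscribeInfo (4.3.2):

public static Set<MessageQueue> topicRouteData2TopicSubscribeInfo(final String topic, final TopicRouteData route) {
    Set<MessageQueue> mqList = new HashSet<MessageQueue>();
    List<QueueData> qds = route.getQueueDatas();
    for (QueueData qd : qds) {
        if (PermName.isReadable(qd.getPerm())) {
            // one MessageQueue per read-queue index, repeated for every broker
            for (int i = 0; i < qd.getReadQueueNums(); i++) {
                MessageQueue mq = new MessageQueue(topic, qd.getBrokerName(), i);
                mqList.add(mq);
            }
        }
    }
    return mqList;
}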

Then at step5, the field mentioned in 1.2 (topicSubscribeInfoTable) gets populated; only after that can the pull in 1.4.2 actually happen.
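step5 resolves to DefaultMQPushConsumerImpl.updateTopicSubscribeInfo, which writes into exactly that field (4.3.2):

@Override
public void updateTopicSubscribeInfo(String topic, Set<MessageQueue> info) {
    Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
    if (subTable != null) {
        if (subTable.containsKey(topic)) {
            // the table later read by RebalanceImpl.rebalanceByTopic
            this.rebalanceImpl.topicSubscribeInfoTable.put(topic, info);
        }
    }
}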

 

2. The basic flow of the broker handling a client's pull request

In the broker module, the handler for the pull-message request sent by the client (the place where the request is ultimately sent out: the NettyRemotingAbstract.invokeSyncImpl method) is PullMessageProcessor.processRequest.

Per verification case-2 of the previous post (https://blog.csdn.net/shuipinglp/article/details/101058449):

when writeQueueNum=2 and readQueueNum=4, writing messages produced only 2 consumeQueue files.

Further down is the code position where messages are read. It is not complicated; a quick look is enough.

final GetMessageResult getMessageResult =
            this.brokerController.getMessageStore().getMessage(requestHeader.getConsumerGroup(), requestHeader.getTopic(),
                requestHeader.getQueueId(), requestHeader.getQueueOffset(), requestHeader.getMaxMsgNums(), messageFilter);
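Inside DefaultMessageStore.getMessage, the path hit for queueId=2/3 looks roughly like this (condensed from the 4.3.2 source): findConsumeQueue lazily creates an empty ConsumeQueue for the requested queueId, and since nothing was ever written to it, maxOffset is 0.

ConsumeQueue consumeQueue = findConsumeQueue(topic, queueId); // created lazily; empty for queueId 2/3
if (consumeQueue != null) {
    minOffset = consumeQueue.getMinOffsetInQueue();
    maxOffset = consumeQueue.getMaxOffsetInQueue();

    if (maxOffset == 0) {
        // no message was ever written to this logic queue
        status = GetMessageStatus.NO_MESSAGE_IN_QUEUE;
        nextBeginOffset = nextOffsetCorrection(offset, 0);
    }
    // ... further offset checks and the actual read follow
}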

At this point, for a pull on queueId=2 or 3, getMessageResult looks like this:

GetMessageResult [status=NO_MESSAGE_IN_QUEUE, nextBeginOffset=0, minOffset=0, maxOffset=0, bufferTotalSize=0, suggestPullingFromSlave=false]

Because nothing is found and the pull request carries the suspend flag, the broker does not answer immediately but holds the request (long polling). The concrete call flow:

PullRequestHoldService (a task thread) .run method --> which then dispatches an async task:
this.brokerController.getPullMessageProcessor().executeRequestWhenWakeup(request.getClientChannel(),
                                request.getRequestCommand());
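For context, the run loop of PullRequestHoldService, condensed from the 4.3.2 source: with long polling enabled it wakes up every 5 seconds, rechecks the held requests, and re-dispatches those whose data has arrived or whose suspend timeout has expired.

while (!this.isStopped()) {
    if (this.brokerController.getBrokerConfig().isLongPollingEnable()) {
        this.waitForRunning(5 * 1000); // long polling: recheck every 5s
    } else {
        this.waitForRunning(this.brokerController.getBrokerConfig().getShortPollingTimeMills());
    }
    // checkHoldRequest -> notifyMessageArriving -> executeRequestWhenWakeup
    this.checkHoldRequest();
}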

The flow then comes back to PullMessageProcessor.processRequest (this time with suspension no longer allowed) and enters the switch statement below:

switch (getMessageResult.getStatus()) {
                case FOUND:
                    response.setCode(ResponseCode.SUCCESS);
                    break;
                case MESSAGE_WAS_REMOVING:
                    response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
                    break;
                case NO_MATCHED_LOGIC_QUEUE:
                case NO_MESSAGE_IN_QUEUE: 
                    if (0 != requestHeader.getQueueOffset()) {
                        response.setCode(ResponseCode.PULL_OFFSET_MOVED);

                        // XXX: warn and notify me
                        log.info("the broker store no queue data, fix the request offset {} to {}, Topic: {} QueueId: {} Consumer Group: {}",
                            requestHeader.getQueueOffset(),
                            getMessageResult.getNextBeginOffset(),
                            requestHeader.getTopic(),
                            requestHeader.getQueueId(),
                            requestHeader.getConsumerGroup()
                        );
                    } else {
                        response.setCode(ResponseCode.PULL_NOT_FOUND);
                    }
                    break;
                case NO_MATCHED_MESSAGE:
// omitted

In the NO_MESSAGE_IN_QUEUE case, the request's queueOffset is 0, so the response code is set to PULL_NOT_FOUND.

Next, the client-side logic that handles the pull result:

Where the call originates:

DefaultMQPushConsumerImpl.pullMessage calls

PullAPIWrapper.pullKernelImpl (a PullCallback is passed along at the same time), which goes on to call

MQClientAPIImpl.pullMessage; the method processPullResponse then handles the returned result. Partial code:

switch (response.getCode()) {
            case ResponseCode.SUCCESS:
                pullStatus = PullStatus.FOUND;
                break;
            case ResponseCode.PULL_NOT_FOUND:
                pullStatus = PullStatus.NO_NEW_MSG;
                break;
            case ResponseCode.PULL_RETRY_IMMEDIATELY:
                pullStatus = PullStatus.NO_MATCHED_MSG;
                break;
            case ResponseCode.PULL_OFFSET_MOVED:
                pullStatus = PullStatus.OFFSET_ILLEGAL;
                break;

            default:
                throw new MQBrokerException(response.getCode(), response.getRemark());
        }

So pullStatus ends up set to NO_NEW_MSG.

After the result is obtained, the PullCallback mentioned above is triggered. Partial code:

case NO_NEW_MSG:
                            pullRequest.setNextOffset(pullResult.getNextBeginOffset());

                            DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);

                            DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                            break;

It resets the relevant offset, then the PullRequest goes back into the queue and the client keeps pulling from the broker (fetching nothing, in practice).
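The requeueing is a single line in PullMessageService (4.3.2), which closes the loop back to 1.4.2:

public void executePullRequestImmediately(final PullRequest pullRequest) {
    try {
        // back into the blocking queue that PullMessageService.run takes from
        this.pullRequestQueue.put(pullRequest);
    } catch (InterruptedException e) {
        log.error("executePullRequestImmediately pullRequestQueue.put", e);
    }
}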

 

3. Conclusion: with writeQueueNum=2 and readQueueNum=4, all messages can still be consumed. Rebalance does assign queueId=2 and 3 to consumers, but no error is raised on the client side; as traced above, pulls on those queues simply keep returning NO_NEW_MSG.

Note: you can use the code values in RequestCode to look up the correspondence between client-side and server-side handling; it would be even better if those code values carried comments.
