RocketMQ Source Code Analysis: Message Consumption and Offset Storage

After messages are pulled into the local ProcessQueue, a consume task is generated and handed to the concrete consume service for processing.

Concurrent Consumption

ConsumeMessageConcurrentlyService

public ConsumeMessageConcurrentlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
    MessageListenerConcurrently messageListener) {
    // client-side push-mode implementation
    this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
    // business message listener, specified when the consumer is created
    this.messageListener = messageListener;
    // the consumer
    this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
    // consumer group
    this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
    // unbounded blocking queue feeding the consume thread pool
    this.consumeRequestQueue = new LinkedBlockingQueue<Runnable>();
    // consume thread pool
    this.consumeExecutor = new ThreadPoolExecutor(
        this.defaultMQPushConsumer.getConsumeThreadMin(),
        this.defaultMQPushConsumer.getConsumeThreadMax(),
        1000 * 60,
        TimeUnit.MILLISECONDS,
        this.consumeRequestQueue,
        new ThreadFactoryImpl("ConsumeMessageThread_"));
}

As shown in the previous post, the pull callback returns as soon as it submits the consume request to ConsumeMessageConcurrentlyService; the request is then processed by its internal consume thread pool, consumeExecutor.

public void submitConsumeRequest(
    final List<MessageExt> msgs,
    final ProcessQueue processQueue,
    final MessageQueue messageQueue,
    final boolean dispatchToConsume) {
    // maximum number of messages handed to the listener per call, default 1
    final int consumeBatchSize = this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
    if (msgs.size() <= consumeBatchSize) {
        // build a single consume request
        ConsumeRequest consumeRequest = new ConsumeRequest(msgs, processQueue, messageQueue);
        try {
            // submit it to the thread pool
            this.consumeExecutor.submit(consumeRequest);
        } catch (RejectedExecutionException e) {
            // the blocking queue is unbounded, so rejection should not actually happen;
            // as a fallback, resubmit to the thread pool after a 5s delay
            this.submitConsumeRequestLater(consumeRequest);
        }
    } else {
        // more messages than the batch size: split them into multiple consume requests,
        // as sketched below
    }
}
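
The elided else branch splits the message list into chunks of at most consumeBatchSize and submits each chunk as its own ConsumeRequest. A rough sketch of that splitting logic, simplified from the actual branch:

// Sketch of the elided branch: split msgs into chunks of consumeBatchSize and
// submit each chunk as a separate ConsumeRequest.
for (int total = 0; total < msgs.size(); ) {
    List<MessageExt> msgThis = new ArrayList<MessageExt>(consumeBatchSize);
    for (int i = 0; i < consumeBatchSize; i++, total++) {
        if (total < msgs.size()) {
            msgThis.add(msgs.get(total));
        } else {
            break;
        }
    }
    ConsumeRequest consumeRequest = new ConsumeRequest(msgThis, processQueue, messageQueue);
    try {
        this.consumeExecutor.submit(consumeRequest);
    } catch (RejectedExecutionException e) {
        // on rejection the remaining messages are folded into this request
        // and the whole request is resubmitted after a delay
        this.submitConsumeRequestLater(consumeRequest);
    }
}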

The thread pool then executes ConsumeRequest's run method:

public void run() {
    if (this.processQueue.isDropped()) {
        // the process queue has been dropped, stop consuming
        return;
    }
    // execute the before-consume hooks
    try {
        // restore the real topic of retry messages: a retry message carries the retry topic,
        // and its original topic is kept in the RETRY_TOPIC property
        ConsumeMessageConcurrentlyService.this.resetRetryTopic(msgs);
        if (msgs != null && !msgs.isEmpty()) {
            for (MessageExt msg : msgs) {
                // record the consume start time in the message properties
                MessageAccessor.setConsumeStartTimeStamp(msg, String.valueOf(System.currentTimeMillis()));
            }
        }
        // the actual consume logic: invoke the listener specified when the consumer was created
        status = listener.consumeMessage(Collections.unmodifiableList(msgs), context);
    } catch (Throwable e) {
        hasException = true;
    }
    // classify the result: success, timeout, exception, null return, failure
    // execute the after-consume hooks
    // update consume statistics
    // check whether the queue has been dropped
    if (!processQueue.isDropped()) {
        // handle the consume result
        ConsumeMessageConcurrentlyService.this.processConsumeResult(status, context, this);
    } else {
        // the queue was dropped while consuming; the offset is not updated
    }
}
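
For context, the listener invoked by listener.consumeMessage(...) above is the one registered when the consumer is built. A minimal usage sketch; the group, topic, and nameserver address below are illustrative only:

// Minimal usage sketch of a concurrent consumer (illustrative names).
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_consumer_group");
consumer.setNamesrvAddr("127.0.0.1:9876");
consumer.subscribe("demo_topic", "*");
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                    ConsumeConcurrentlyContext context) {
        for (MessageExt msg : msgs) {
            // business logic here
        }
        // returning RECONSUME_LATER instead would trigger the retry path described below
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});
consumer.start();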

Depending on the result returned by the message listener, the consume offset is updated:

public void processConsumeResult(
    final ConsumeConcurrentlyStatus status,
    final ConsumeConcurrentlyContext context,
    final ConsumeRequest consumeRequest
) {
    // ackIndex marks the last message in the batch that is considered successfully consumed
    int ackIndex = context.getAckIndex();

    if (consumeRequest.getMsgs().isEmpty())
        return;

    switch (status) {
        case CONSUME_SUCCESS:
            if (ackIndex >= consumeRequest.getMsgs().size()) {
                ackIndex = consumeRequest.getMsgs().size() - 1;
            }
            break;
        case RECONSUME_LATER:
            ackIndex = -1;
            break;
        default:
            break;
    }

    switch (this.defaultMQPushConsumer.getMessageModel()) {
        case BROADCASTING:
            // broadcasting mode: failed messages are only logged, not retried (logging omitted)
            break;
        case CLUSTERING:
            List<MessageExt> msgBackFailed = new ArrayList<MessageExt>(consumeRequest.getMsgs().size());
            // messages after ackIndex need to be retried
            for (int i = ackIndex + 1; i < consumeRequest.getMsgs().size(); i++) {
                MessageExt msg = consumeRequest.getMsgs().get(i);
                // send the message back to the Broker as a retry message, identical to the original
                boolean result = this.sendMessageBack(msg, context);
                if (!result) {
                    // sending back failed: increment the reconsume count and keep the message locally
                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
                    msgBackFailed.add(msg);
                }
            }

            if (!msgBackFailed.isEmpty()) {
                // remove the messages that could not be sent back from the request list,
                // so the offset update below does not advance past them
                consumeRequest.getMsgs().removeAll(msgBackFailed);
                // resubmit them to the thread pool after a 5s delay
                this.submitConsumeRequestLater(msgBackFailed, consumeRequest.getProcessQueue(), consumeRequest.getMessageQueue());
            }
            break;
        default:
            break;
    }
    // remove the processed messages from the ProcessQueue; this returns the smallest offset
    // still held in the ProcessQueue, or queueOffsetMax + 1 if no messages remain locally
    long offset = consumeRequest.getProcessQueue().removeMessage(consumeRequest.getMsgs());
    if (offset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
        // update the consume progress
        this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(consumeRequest.getMessageQueue(), offset, true);
    }
}
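
ackIndex defaults to Integer.MAX_VALUE in ConsumeConcurrentlyContext, so a plain CONSUME_SUCCESS acknowledges the whole batch; a listener can report partial success by setting it explicitly. A hedged sketch, where handle(...) stands in for a hypothetical business method:

// Sketch: acknowledge only the first part of the batch; messages after ackIndex
// are sent back to the Broker for retry by processConsumeResult above.
public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                ConsumeConcurrentlyContext context) {
    int consumedOk = 0;
    for (MessageExt msg : msgs) {
        if (!handle(msg)) {   // handle(...) is a hypothetical business method
            break;
        }
        consumedOk++;
    }
    // messages [0, consumedOk - 1] are acked, the rest will be retried
    context.setAckIndex(consumedOk - 1);
    return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}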

When a retry message is sent back to the Broker, its topic is replaced with the retry topic while the other properties stay unchanged; once the maximum retry count is exceeded, the message is moved to the dead-letter queue instead. The Broker's scheduled (delayed-delivery) task later writes the retry message back into the CommitLog, where it waits to be pulled by the client again. This is how the consume progress can keep moving forward even when some messages need to be retried.
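
For reference, both the retry topic and the dead-letter topic are derived from the consumer group name. A small example using the MixAll helpers; the group name is illustrative:

// The retry topic and dead-letter topic are per consumer group.
// For a group named "demo_consumer_group" (illustrative) they look like:
String retryTopic = MixAll.getRetryTopic("demo_consumer_group"); // %RETRY%demo_consumer_group
String dlqTopic = MixAll.getDLQTopic("demo_consumer_group");     // %DLQ%demo_consumer_group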

Ordered Consumption

ConsumeMessageOrderlyService
After rebalancing, a newly allocated message queue must first be locked on the Broker before a pull task is created for it.
org.apache.rocketmq.client.impl.consumer.RebalanceImpl#updateProcessQueueTableInRebalance

if (isOrder && !this.lock(mq)) {
    continue;
}

When actually pulling messages, check whether the process queue is locked; if it is not, delay the pull by 3s and try again.
org.apache.rocketmq.client.impl.consumer.DefaultMQPushConsumerImpl#pullMessage

if (processQueue.isLocked()) {
    if (!pullRequest.isLockedFirst()) {
        // compute and set the offset for the first pull, as sketched below
    }
} else {
    this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
    return;
}
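
The elided branch above roughly recomputes the pull offset the first time the locked queue is pulled. A simplified sketch, with log statements and error handling omitted:

// Simplified sketch: on the first pull after the queue is locked, recompute the
// pull offset from the committed consume progress so ordered consumption starts
// from the right position rather than a stale nextOffset.
final long offset = this.rebalanceImpl.computePullFromWhere(pullRequest.getMessageQueue());
pullRequest.setLockedFirst(true);
pullRequest.setNextOffset(offset);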

A scheduled task runs every 20s to lock all message queues currently allocated to this consumer, by sending a batch-lock request to the Broker.
org.apache.rocketmq.client.impl.consumer.RebalanceImpl#lockAll

this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    public void run() {
        ConsumeMessageOrderlyService.this.lockMQPeriodically();
    }
}, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);

...
Set<MessageQueue> lockOKMQSet =
	this.mQClientFactory.getMQClientAPIImpl().lockBatchMQ(findBrokerResult.getBrokerAddr(), requestBody, 1000);
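
After lockBatchMQ returns, lockAll applies the result to the local process queues. Roughly, as a simplified sketch of the remainder of the method:

// Rough sketch: queues that were locked on the Broker are marked locked locally
// and their lock timestamp is refreshed; the others are marked unlocked so that
// consumption pauses on them.
for (MessageQueue mq : mqs) {
    ProcessQueue processQueue = this.processQueueTable.get(mq);
    if (processQueue != null) {
        if (lockOKMQSet.contains(mq)) {
            processQueue.setLocked(true);
            processQueue.setLastLockTimestamp(System.currentTimeMillis());
        } else {
            processQueue.setLocked(false);
        }
    }
}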

If a message queue has been reassigned to another consumer, the local process queue must be marked as dropped and the lock on the Broker released.
org.apache.rocketmq.client.impl.consumer.RebalanceImpl#unlock

this.mQClientFactory.getMQClientAPIImpl().unlockBatchMQ(findBrokerResult.getBrokerAddr(), requestBody, 1000, oneway);

Before actually consuming, check the state of the process queue: whether it has been dropped, whether it is locked, and whether the lock has expired (in addition, after consuming this queue continuously for 60s, it is handed over to another thread).
org.apache.rocketmq.client.impl.consumer.ConsumeMessageOrderlyService.ConsumeRequest#run

public void run() {
    if (this.processQueue.isDropped()) {
        return;
    }
    // fetch the per-queue lock object, so that only one thread in the pool
    // consumes this queue at a time, which keeps consumption ordered
    final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
    synchronized (objLock) {
        if (MessageModel.BROADCASTING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())
            || (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
            ...
            // before invoking the listener, acquire the process queue's consume lock
            // and re-check the queue state
            this.processQueue.getLockConsume().lock();
            if (this.processQueue.isDropped()) {
                break;
            }
            status = messageListener.consumeMessage(Collections.unmodifiableList(msgs), context);
            ...

When consumption fails, the current queue is suspended for a short while (suspendCurrentQueueTimeMillis, 1s by default) and then consumption continues, until the reconsume count reaches Integer.MAX_VALUE (see the usage sketch after the method below).

private boolean checkReconsumeTimes(List<MessageExt> msgs) {
    boolean suspend = false;
    if (msgs != null && !msgs.isEmpty()) {
        for (MessageExt msg : msgs) {
            // for ordered consumption the default max reconsume count is Integer.MAX_VALUE
            if (msg.getReconsumeTimes() >= getMaxReconsumeTimes()) {
                MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
                // max reconsume count reached: send the message back to the Broker
                // (it ends up in the dead-letter queue) and treat it as consumed
                if (!sendMessageBack(msg)) {
                    suspend = true;
                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
                }
            } else {
                suspend = true;
                msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
            }
        }
    }
    return suspend;
}
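
A minimal usage sketch of ordered consumption, showing where the suspend interval and the SUSPEND_CURRENT_QUEUE_A_MOMENT status come in; the group, topic, and nameserver address are illustrative:

// Minimal usage sketch of an ordered consumer (illustrative names).
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_order_group");
consumer.setNamesrvAddr("127.0.0.1:9876");
consumer.setSuspendCurrentQueueTimeMillis(5000); // pause 5s before retrying a failed queue
consumer.subscribe("demo_order_topic", "*");
consumer.registerMessageListener(new MessageListenerOrderly() {
    @Override
    public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
        try {
            // business logic; messages of one queue arrive here strictly in order
            return ConsumeOrderlyStatus.SUCCESS;
        } catch (Exception e) {
            // suspend this queue and retry later instead of skipping the message
            return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
        }
    }
});
consumer.start();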

Drawbacks of ordered messages
  • Sending ordered messages cannot take advantage of the cluster's FailOver capability
  • The parallelism of ordered consumption is limited by the number of queues
  • Hot-queue problem: uneven hashing can route too many messages to a few queues; if consumption cannot keep up, messages pile up there
  • A message that keeps failing cannot be skipped, so consumption of the current queue is suspended
Consume Offset Storage

In broadcasting mode the consume offset is stored locally on the client; in clustering mode it is stored on the Broker.
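
The choice between the two stores is made when the consumer starts. A simplified sketch of that selection, based on DefaultMQPushConsumerImpl#start with custom-store handling omitted:

// Simplified sketch of how the offset store is chosen at consumer start-up:
// broadcasting keeps offsets in a local file, clustering keeps them on the Broker.
switch (this.defaultMQPushConsumer.getMessageModel()) {
    case BROADCASTING:
        this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
        break;
    case CLUSTERING:
        this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
        break;
    default:
        break;
}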

Clustering mode: after a message is consumed successfully, the local offset cache is updated first.

public void updateOffset(MessageQueue mq, long offset, boolean increaseOnly) {
    if (mq != null) {
        AtomicLong offsetOld = this.offsetTable.get(mq);
        if (null == offsetOld) {
            offsetOld = this.offsetTable.putIfAbsent(mq, new AtomicLong(offset));
        }

        if (null != offsetOld) {
            if (increaseOnly) {
                MixAll.compareAndIncreaseOnly(offsetOld, offset);
            } else {
                offsetOld.set(offset);
            }
        }
    }
}
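
For the increaseOnly branch, MixAll.compareAndIncreaseOnly only ever moves the offset forward. A sketch of its CAS-loop semantics (not necessarily the exact source):

// Sketch of the increase-only update: the stored offset is only replaced when
// the new value is larger, so a slower thread cannot roll the progress back.
public static boolean compareAndIncreaseOnly(final AtomicLong target, final long value) {
    long prev = target.get();
    while (value > prev) {
        if (target.compareAndSet(prev, value)) {
            return true;
        }
        prev = target.get();
    }
    return false;
}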

The client instance then periodically persists all consume offsets to the Broker, every 5s by default (persistConsumerOffsetInterval).
org.apache.rocketmq.client.impl.factory.MQClientInstance#scheduledExecutorService

this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    public void run() {
        try {
            MQClientInstance.this.persistAllConsumerOffset();
        } catch (Exception e) {
        }
    }
}, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);

Request code: RequestCode#UPDATE_CONSUMER_OFFSET
The Broker first stores the offset in its in-memory offsetTable:
org.apache.rocketmq.broker.offset.ConsumerOffsetManager#commitOffset

public void commitOffset(final String clientHost, final String group, final String topic, final int queueId,
    final long offset) {
    // topic@group
    String key = topic + TOPIC_GROUP_SEPARATOR + group;
    this.commitOffset(clientHost, key, queueId, offset);
}

private void commitOffset(final String clientHost, final String key, final int queueId, final long offset) {
    ConcurrentMap<Integer, Long> map = this.offsetTable.get(key);
    if (null == map) {
        map = new ConcurrentHashMap<Integer, Long>(32);
        map.put(queueId, offset);
        this.offsetTable.put(key, map);
    } else {
        Long storeOffset = map.put(queueId, offset);
    }
}

A scheduled task then flushes the table to disk, every 5s by default (flushConsumerOffsetInterval):

this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        try {
            BrokerController.this.consumerOffsetManager.persist();
        } catch (Throwable e) {
            log.error("schedule persist consumerOffset error.", e);
        }
    }
}, 1000 * 10, this.brokerConfig.getFlushConsumerOffsetInterval(), TimeUnit.MILLISECONDS);

The offsets are persisted to the file consumerOffset.json:

public synchronized void persist() {
    // serialize this object to a JSON string
    String jsonString = this.encode(true);
    if (jsonString != null) {
        // resolve the storage path
        String fileName = this.configFilePath();
        try {
            // write the JSON string to the file
            MixAll.string2File(jsonString, fileName);
        } catch (IOException e) {
        }
    }
}

public String encode(final boolean prettyFormat) {
    return RemotingSerializable.toJson(this, prettyFormat);
}
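
The resulting file is a JSON snapshot of offsetTable keyed by topic@group, mapping queue id to offset. An illustrative fragment; the topic, group, and offset values below are made up:

{
    "offsetTable": {
        "demo_topic@demo_consumer_group": {
            0: 1000,
            1: 998,
            2: 1003,
            3: 1001
        }
    }
}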
