RocketMQ Source Code: Building the ConsumeQueue


1 Overview
2 Entry Method
3 ConsumeQueue Index Structure
4 Index Construction

1 Overview

A single RocketMQ broker can host multiple topics, and each topic can have multiple queues. When the broker receives messages from producers, it appends them to the same CommitLog file in arrival order, regardless of topic or queue. Each CommitLog file has a default size of 1 GB; once a file reaches that limit, writing continues in a new file that picks up exactly where the previous one left off.
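
For reference, each CommitLog file is named after the starting physical offset it covers, so a global physical offset maps directly to a file and a position inside it. The following is a minimal sketch of that mapping, assuming the default 1 GB file size (the class and constant names here are illustrative, not RocketMQ's):

public class CommitLogOffsetSketch {
    //Default CommitLog file size: 1 GB
    private static final long MAPPED_FILE_SIZE = 1024L * 1024 * 1024;

    public static void locate(long physicalOffset) {
        //Start offset of the file that contains this physical offset
        long fileStartOffset = physicalOffset / MAPPED_FILE_SIZE * MAPPED_FILE_SIZE;
        //Position of the message inside that file
        long positionInFile = physicalOffset % MAPPED_FILE_SIZE;
        //CommitLog files are named with the 20-digit, zero-padded start offset
        String fileName = String.format("%020d", fileStartOffset);
        System.out.println("file " + fileName + ", position " + positionInFile);
    }
}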

Because the data of all topics and all queues is mixed together, querying messages or consuming them would otherwise require a large amount of random reads, or even a scan of the data from beginning to end. To avoid such full scans when querying or when consumers pull messages, RocketMQ builds two kinds of indexes on top of the message data: one is the global Index index covered in the author's earlier article on the Index index (RocketMQ源码-Index索引介绍), and the other is the per-queue ConsumeQueue index covered in this article.

The Index index and the ConsumeQueue differ in three main ways. First, the Index is built from the message's MessageConst.PROPERTY_UNIQ_CLIENT_MESSAGE_ID_KEYIDX property, while the ConsumeQueue is built from the hash code of the message tags. Second, the Index is a global index that does not distinguish topics or queues, so all message indexes go into the same set of files, whereas a ConsumeQueue corresponds to one queue of one topic, and every queue of every topic has its own ConsumeQueue index. Third, the Index is mainly used for message queries, while the ConsumeQueue is used during consumption, when consumers pull messages. This also explains why the Index is designed as a global index and the ConsumeQueue as a per-queue index: a message query usually searches all messages for those matching a given condition, whereas a consumer usually pulls only the messages of the specific topic and queue it subscribes to (or is assigned after rebalancing).

2 Entry Method

As in the author's article on the Index index (RocketMQ源码-Index索引介绍), the construction of the ConsumeQueue is also triggered by the reput operation performed in the run method of the ReputMessageService. The class that builds the ConsumeQueue is CommitLogDispatcherBuildConsumeQueue, which is likewise an inner class of DefaultMessageStore. Its source code is as follows:

class CommitLogDispatcherBuildConsumeQueue implements CommitLogDispatcher {

    @Override
    public void dispatch(DispatchRequest request) {
        final int tranType = MessageSysFlag.getTransactionValue(request.getSysFlag());
        switch (tranType) {
            //Only ordinary non-transactional messages and committed
            //transactional messages are indexed
            case MessageSysFlag.TRANSACTION_NOT_TYPE:
            case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
                DefaultMessageStore.this.putMessagePositionInfo(request);
                break;
            //Prepared (uncommitted) or rolled-back transactional
            //messages are not indexed
            case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
            case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
                break;
        }
    }
}

Before going into how the ConsumeQueue is actually built, let us first look at the structure of the ConsumeQueue index.

3 ConsumeQueue Index Structure

The structure of the ConsumeQueue is fairly simple, as shown below:

(Figure: ConsumeQueue index structure)

As shown in the figure above, each index entry occupies 20 bytes in the file; its fields are listed below, followed by a small decoding sketch:

  • CommitLog Offset: the physical offset of the message in the CommitLog, long, 8 bytes;
  • Size: the size of the message, int, 4 bytes;
  • Message Tags HashCode: the hash code of the message tags, long, 8 bytes.
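
To make the 20-byte layout concrete, here is a minimal decoding sketch (illustrative only, not RocketMQ code) that reads one index entry from a ByteBuffer using the fixed 8 + 4 + 8 byte layout described above:

import java.nio.ByteBuffer;

public class CqUnitDecoder {
    //One ConsumeQueue entry: 8-byte CommitLog offset + 4-byte size + 8-byte tags hash code
    public static final int CQ_STORE_UNIT_SIZE = 20;

    public static void decode(ByteBuffer buffer) {
        long commitLogOffset = buffer.getLong(); //physical offset of the message in the CommitLog
        int size = buffer.getInt();              //total size of the message
        long tagsCode = buffer.getLong();        //tags hash code (or an ext address, see below)
        System.out.printf("offset=%d, size=%d, tagsCode=%d%n", commitLogOffset, size, tagsCode);
    }
}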

Note that each ConsumeQueue may also have a ConsumeQueueExt instance that records extended index information. If the ConsumeQueue extension is enabled in the configuration, the Message Tags HashCode field of an entry does not hold the hash code of the message tags; instead it holds the physical offset of that entry's extended record in the ConsumeQueueExt file, and the real Message Tags HashCode is stored there. So when reading the ConsumeQueue, how do we tell whether the Message Tags HashCode field holds a tags hash code or an extension offset? The ConsumeQueue method isExtAddr(long tagsCode) implements exactly this check:

//ConsumeQueue
/**
* Check {@code tagsCode} is address of extend file or tags code.
*/
public boolean isExtAddr(long tagsCode) {
    return ConsumeQueueExt.isExtAddr(tagsCode);
}

//ConsumeQueueExt
/**
* Check whether {@code address} point to extend file.
* <p>
* Just test {@code address} is less than 0.
*/
public static boolean isExtAddr(final long address) {
    //MAX_ADDR = Integer.MIN_VALUE - 1L;
    //That is, if the tagsCode is no greater than Integer.MIN_VALUE - 1,
    //it is an extension offset address rather than the tags hash code
    return address <= MAX_ADDR;
}

Besides the message's tags code, the extended index ConsumeQueueExt also records the message's bitmap information and store timestamp. The bitmap is mainly used for message filtering and is not covered here. The basic storage unit of ConsumeQueueExt is ConsumeQueueExt.CqExtUnit.
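
As a rough illustration of what such an extended unit carries, the record can be modeled as below. The field names mirror the setters used later in putMessagePositionInfoWrapper; this is a simplified sketch, not the real CqExtUnit class:

//Simplified model of an extended ConsumeQueue record (not the actual class)
public class CqExtUnitSketch {
    private long tagsCode;        //the real tags hash code of the message
    private long msgStoreTime;    //store timestamp of the message
    private byte[] filterBitMap;  //bitmap used for message filtering on the pull path

    public void setTagsCode(long tagsCode) { this.tagsCode = tagsCode; }
    public void setMsgStoreTime(long msgStoreTime) { this.msgStoreTime = msgStoreTime; }
    public void setFilterBitMap(byte[] filterBitMap) { this.filterBitMap = filterBitMap; }
}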

4 Index Construction

Continuing from the entry method of Section 2: it builds the index by calling DefaultMessageStore.this.putMessagePositionInfo(request), whose implementation is as follows:

//DefaultMessageStore
public void putMessagePositionInfo(DispatchRequest dispatchRequest) {
    //First find (or create) the ConsumeQueue for the message's topic and queueId,
    //then delegate the actual write to it
    ConsumeQueue cq = this.findConsumeQueue(dispatchRequest.getTopic(), dispatchRequest.getQueueId());
    cq.putMessagePositionInfoWrapper(dispatchRequest);
}
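
findConsumeQueue looks the queue up in a two-level table keyed by topic and then by queueId, creating a new ConsumeQueue if none exists yet. The following is a minimal sketch of that lookup pattern (simplified; apart from findConsumeQueue itself, the names here are illustrative rather than the exact RocketMQ code):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConsumeQueueTableSketch {
    //topic -> (queueId -> ConsumeQueue); a plain Object stands in for ConsumeQueue here
    private final ConcurrentMap<String, ConcurrentMap<Integer, Object>> consumeQueueTable =
        new ConcurrentHashMap<>();

    public Object findConsumeQueue(String topic, int queueId) {
        ConcurrentMap<Integer, Object> queueMap =
            consumeQueueTable.computeIfAbsent(topic, t -> new ConcurrentHashMap<>());
        //In RocketMQ the created value is a ConsumeQueue bound to its own index file directory
        return queueMap.computeIfAbsent(queueId, id -> new Object());
    }
}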

The concrete implementation in ConsumeQueue is as follows:

//ConsumeQueue
public void putMessagePositionInfoWrapper(DispatchRequest request) {
    final int maxRetries = 30;
    boolean canWrite = this.defaultMessageStore.getRunningFlags().isCQWriteable();
    //On write failure, retry up to 30 times
    for (int i = 0; i < maxRetries && canWrite; i++) {
        long tagsCode = request.getTagsCode();
        //If the extended index is enabled, first build the extended
        //storage unit CqExtUnit, filling in the bitmap, the store
        //timestamp and the real message tags hash code
        if (isExtWriteEnable()) {
            ConsumeQueueExt.CqExtUnit cqExtUnit = new ConsumeQueueExt.CqExtUnit();
            cqExtUnit.setFilterBitMap(request.getBitMap());
            cqExtUnit.setMsgStoreTime(request.getStoreTimestamp());
            cqExtUnit.setTagsCode(request.getTagsCode());
            //After the write, the offset address of the newly written extended record is returned
            long extAddr = this.consumeQueueExt.put(cqExtUnit);
            if (isExtAddr(extAddr)) {
                //Reset tagsCode to the extended index offset address
                tagsCode = extAddr;
            } else {
                log.warn("Save consume queue extend fail, So just save tagsCode! {}, topic:{}, queueId:{}, offset:{}", cqExtUnit,
                    topic, queueId, request.getCommitLogOffset());
            }
        }
        //Perform the actual write
        boolean result = this.putMessagePositionInfo(request.getCommitLogOffset(),
            request.getMsgSize(), tagsCode, request.getConsumeQueueOffset());
        if (result) {
            this.defaultMessageStore.getStoreCheckpoint().setLogicsMsgTimestamp(request.getStoreTimestamp());
            return;
        } else {
            // XXX: warn and notify me
            log.warn("[BUG]put commit log position info to " + topic + ":" + queueId + " " + request.getCommitLogOffset()
                + " failed, retry " + i + " times");

            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                log.warn("", e);
            }
        }
    }

    // XXX: warn and notify me
    log.error("[BUG]consume queue can not write, {} {}", this.topic, this.queueId);
    this.defaultMessageStore.getRunningFlags().makeLogicsQueueError();
}


private boolean putMessagePositionInfo(final long offset, final int size, final long tagsCode,
    final long cqOffset) {

    if (offset + size <= this.maxPhysicOffset) {
        log.warn("Maybe try to build consume queue repeatedly maxPhysicOffset={} phyOffset={}", maxPhysicOffset, offset);
        return true;
    }
    //Write the message's physical offset, its size and the tagsCode;
    //tagsCode may be an extended index offset or the real tags hash code
    this.byteBufferIndex.flip();
    this.byteBufferIndex.limit(CQ_STORE_UNIT_SIZE);
    this.byteBufferIndex.putLong(offset);
    this.byteBufferIndex.putInt(size);
    this.byteBufferIndex.putLong(tagsCode);

    final long expectLogicOffset = cqOffset * CQ_STORE_UNIT_SIZE;

    MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile(expectLogicOffset);
    if (mappedFile != null) {

        if (mappedFile.isFirstCreateInQueue() && cqOffset != 0 && mappedFile.getWrotePosition() == 0) {
            this.minLogicOffset = expectLogicOffset;
            this.mappedFileQueue.setFlushedWhere(expectLogicOffset);
            this.mappedFileQueue.setCommittedWhere(expectLogicOffset);
            this.fillPreBlank(mappedFile, expectLogicOffset);
            log.info("fill pre blank space " + mappedFile.getFileName() + " " + expectLogicOffset + " "
                + mappedFile.getWrotePosition());
        }

        if (cqOffset != 0) {
            long currentLogicOffset = mappedFile.getWrotePosition() + mappedFile.getFileFromOffset();

            if (expectLogicOffset < currentLogicOffset) {
                log.warn("Build  consume queue repeatedly, expectLogicOffset: {} currentLogicOffset: {} Topic: {} QID: {} Diff: {}",
                    expectLogicOffset, currentLogicOffset, this.topic, this.queueId, expectLogicOffset - currentLogicOffset);
                return true;
            }

            if (expectLogicOffset != currentLogicOffset) {
                LOG_ERROR.warn(
                    "[BUG]logic queue order maybe wrong, expectLogicOffset: {} currentLogicOffset: {} Topic: {} QID: {} Diff: {}",
                    expectLogicOffset,
                    currentLogicOffset,
                    this.topic,
                    this.queueId,
                    expectLogicOffset - currentLogicOffset
                );
            }
        }
        this.maxPhysicOffset = offset + size;
        return mappedFile.appendMessage(this.byteBufferIndex.array());
    }
    return false;
}
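
The key relation in this write path is expectLogicOffset = cqOffset * CQ_STORE_UNIT_SIZE: the N-th message of a queue lands at byte position N * 20 in the ConsumeQueue. Reading works backwards from the consume offset. The sketch below (illustrative only, not RocketMQ's pull code) locates the 20-byte entry for a given consume offset and applies the extension-address check from Section 3:

import java.nio.ByteBuffer;

public class ConsumeQueueReadSketch {
    private static final int CQ_STORE_UNIT_SIZE = 20;
    private static final long MAX_ADDR = Integer.MIN_VALUE - 1L;

    //Interpret the 20-byte entry for the given consume offset;
    //its byte position inside the ConsumeQueue is consumeOffset * 20
    public static void readEntry(ByteBuffer consumeQueueData, long consumeOffset) {
        consumeQueueData.position((int) (consumeOffset * CQ_STORE_UNIT_SIZE));

        long commitLogOffset = consumeQueueData.getLong(); //where to read the message in the CommitLog
        int size = consumeQueueData.getInt();              //how many bytes to read from the CommitLog
        long tagsCode = consumeQueueData.getLong();

        if (tagsCode <= MAX_ADDR) {
            //Extension address: the real tags hash code lives in the ConsumeQueueExt file
            System.out.println("ext address = " + tagsCode);
        } else {
            //Plain tags hash code, usable directly for tag filtering
            System.out.println("tags hash code = " + tagsCode);
        }
        System.out.println("read " + size + " bytes at CommitLog offset " + commitLogOffset);
    }
}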
