Preface: this post is fairly long and fairly detailed, so please bear with me. Note that I am only covering ordered consumption on the consumer side; there is nothing special to say about the producer side, it simply writes messages into the queue in order. If you have questions, leave a comment below. The analysis is mainly based on version 3.2.6 and ordered consumption under load-balanced (clustering) mode, with a few notes about 4.4.0 along the way. Let's start.
First, let's look at the consumer startup code:
public static void main(String[] args) throws MQClientException {
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("please_rename_unique_group_name_3");
/**
* Sets whether the Consumer starts from the head or the tail of the queue on its very first start.
* If this is not the first start, consumption resumes from the last consumed position.
*/
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
consumer.subscribe("TopicTest", "TagA || TagC || TagD");
consumer.registerMessageListener(new MessageListenerOrderly() {
AtomicLong consumeTimes = new AtomicLong(0);
@Override
public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
context.setAutoCommit(false);
System.out.println(Thread.currentThread().getName() + " Receive New Messages: " + msgs);
this.consumeTimes.incrementAndGet();
if ((this.consumeTimes.get() % 2) == 0) {
return ConsumeOrderlyStatus.SUCCESS;
}
else if ((this.consumeTimes.get() % 3) == 0) {
return ConsumeOrderlyStatus.ROLLBACK;
}
else if ((this.consumeTimes.get() % 4) == 0) {
return ConsumeOrderlyStatus.COMMIT;
}
else if ((this.consumeTimes.get() % 5) == 0) {
context.setSuspendCurrentQueueTimeMillis(3000);
return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
}
return ConsumeOrderlyStatus.SUCCESS;
}
});
consumer.start();
System.out.println("Consumer Started.");
}
public DefaultMQPushConsumer(final String consumerGroup) {
this(consumerGroup, null, new AllocateMessageQueueAveragely());
}
public DefaultMQPushConsumer(final String consumerGroup, RPCHook rpcHook,
AllocateMessageQueueStrategy allocateMessageQueueStrategy) {
this.consumerGroup = consumerGroup;
this.allocateMessageQueueStrategy = allocateMessageQueueStrategy;
defaultMQPushConsumerImpl = new DefaultMQPushConsumerImpl(this, rpcHook);
}
The default strategy is the averaging strategy (AllocateMessageQueueAveragely). Keep it in mind; it plays an important part later.
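As an aside, the three-argument constructor above is also how you would plug in a strategy explicitly. A minimal sketch, assuming the 3.2.6 com.alibaba.rocketmq packages and an example group name; every instance in the same group should use the same strategy:

import com.alibaba.rocketmq.client.consumer.DefaultMQPushConsumer;
import com.alibaba.rocketmq.client.consumer.rebalance.AllocateMessageQueueAveragely;

public class OrderlyConsumerFactory {
    // Passing AllocateMessageQueueAveragely explicitly is equivalent to the one-argument constructor;
    // swap in a different AllocateMessageQueueStrategy here if needed, but make sure every instance
    // in the same consumer group uses the same strategy.
    public static DefaultMQPushConsumer create(String group) {
        return new DefaultMQPushConsumer(group, null, new AllocateMessageQueueAveragely());
    }
}

With the consumer constructed, start() on DefaultMQPushConsumerImpl does the real setup: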
public void start() throws MQClientException {
switch (this.serviceState) {
case CREATE_JUST:
log.info("the consumer [{}] start beginning. messageModel={}, isUnitMode={}",
this.defaultMQPushConsumer.getConsumerGroup(), this.defaultMQPushConsumer.getMessageModel(),
this.defaultMQPushConsumer.isUnitMode());
this.serviceState = ServiceState.START_FAILED;
this.checkConfig();
//add the retry topic %RETRY% + consumerGroup to the subscription set
this.copySubscription();
if (this.defaultMQPushConsumer.getMessageModel() == MessageModel.CLUSTERING) {
this.defaultMQPushConsumer.changeInstanceNameToPID();
}
this.mQClientFactory =
MQClientManager.getInstance().getAndCreateMQClientInstance(this.defaultMQPushConsumer,
this.rpcHook);
this.rebalanceImpl.setConsumerGroup(this.defaultMQPushConsumer.getConsumerGroup());
this.rebalanceImpl.setMessageModel(this.defaultMQPushConsumer.getMessageModel());
this.rebalanceImpl.setAllocateMessageQueueStrategy(this.defaultMQPushConsumer
.getAllocateMessageQueueStrategy());
this.rebalanceImpl.setmQClientFactory(this.mQClientFactory);
this.pullAPIWrapper = new PullAPIWrapper(//
mQClientFactory,//
this.defaultMQPushConsumer.getConsumerGroup(), isUnitMode());
this.pullAPIWrapper.registerFilterMessageHook(filterMessageHookList);
if (this.defaultMQPushConsumer.getOffsetStore() != null) {
this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
}
else {
switch (this.defaultMQPushConsumer.getMessageModel()) {
case BROADCASTING:
this.offsetStore =
new LocalFileOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
case CLUSTERING:
this.offsetStore =
new RemoteBrokerOffsetStore(this.mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup());
break;
default:
break;
}
}
this.offsetStore.load();
if (this.getMessageListenerInner() instanceof MessageListenerOrderly) {
this.consumeOrderly = true;
this.consumeMessageService =
new ConsumeMessageOrderlyService(this,
(MessageListenerOrderly) this.getMessageListenerInner());
}
else if (this.getMessageListenerInner() instanceof MessageListenerConcurrently) {
this.consumeOrderly = false;
this.consumeMessageService =
new ConsumeMessageConcurrentlyService(this,
(MessageListenerConcurrently) this.getMessageListenerInner());
}
this.consumeMessageService.start();
boolean registerOK =
mQClientFactory.registerConsumer(this.defaultMQPushConsumer.getConsumerGroup(), this);
if (!registerOK) {
this.serviceState = ServiceState.CREATE_JUST;
this.consumeMessageService.shutdown();
throw new MQClientException("The consumer group["
+ this.defaultMQPushConsumer.getConsumerGroup()
+ "] has been created before, specify another name please."
+ FAQUrl.suggestTodo(FAQUrl.GROUP_NAME_DUPLICATE_URL), null);
}
mQClientFactory.start();
log.info("the consumer [{}] start OK.", this.defaultMQPushConsumer.getConsumerGroup());
this.serviceState = ServiceState.RUNNING;
break;
case RUNNING:
case START_FAILED:
case SHUTDOWN_ALREADY:
throw new MQClientException("The PushConsumer service state not OK, maybe started once, "//
+ this.serviceState//
+ FAQUrl.suggestTodo(FAQUrl.CLIENT_SERVICE_NOT_OK), null);
default:
break;
}
//fetch the message queues for the subscribed topics
this.updateTopicSubscribeInfoWhenSubscriptionChanged();
this.mQClientFactory.sendHeartbeatToAllBrokerWithLock();
this.mQClientFactory.rebalanceImmediately();
}
public void start() throws MQClientException {
//check the fastjson version for package conflicts
PackageConflictDetect.detectFastjson();
synchronized (this) {
switch (this.serviceState) {
case CREATE_JUST:
this.serviceState = ServiceState.START_FAILED;
//If not specified,looking address from name server
if (null == this.clientConfig.getNamesrvAddr()) {
this.clientConfig.setNamesrvAddr(this.mQClientAPIImpl.fetchNameServerAddr());
}
//Start request-response channel
this.mQClientAPIImpl.start();
//Start various schedule tasks
this.startScheduledTask();
//start the pull message service
this.pullMessageService.start();
//start the rebalance service, which keeps the topic/queue allocation with the broker up to date
this.rebalanceService.start();
//Start push service
this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
log.info("the client factory [{}] start OK", this.clientId);
this.serviceState = ServiceState.RUNNING;
break;
case RUNNING:
break;
case SHUTDOWN_ALREADY:
break;
case START_FAILED:
throw new MQClientException("The Factory object[" + this.getClientId()
+ "] has been created before, and failed.", null);
default:
break;
}
}
}
private void startScheduledTask() {
//if no name server address is configured, periodically invoke the addressing lookup
if (null == this.clientConfig.getNamesrvAddr()) {
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.mQClientAPIImpl.fetchNameServerAddr();
} catch (Exception e) {
log.error("ScheduledTask fetchNameServerAddr exception", e);
}
}
}, 1000 * 10, 1000 * 60 * 2, TimeUnit.MILLISECONDS);
}
//periodically update producer/consumer route info from the name server
//this refreshes the topic info, but not the queues assigned under the topic
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.updateTopicRouteInfoFromNameServer();
} catch (Exception e) {
log.error("ScheduledTask updateTopicRouteInfoFromNameServer exception", e);
}
}
}, 10, this.clientConfig.getPollNameServerInteval(), TimeUnit.MILLISECONDS);
//periodically remove brokers that have gone offline (their addresses no longer appear in the route info fetched from the name server),
// and send heartbeats to all brokers that are still online.
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.cleanOfflineBroker();
MQClientInstance.this.sendHeartbeatToAllBrokerWithLock();
} catch (Exception e) {
log.error("ScheduledTask sendHeartbeatToAllBroker exception", e);
}
}
}, 1000, this.clientConfig.getHeartbeatBrokerInterval(), TimeUnit.MILLISECONDS);
//periodically persist the consume offset of each consumer queue.
//only used by consumer clients
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.persistAllConsumerOffset();
} catch (Exception e) {
log.error("ScheduledTask persistAllConsumerOffset exception", e);
}
}
}, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
//periodically adjust the consume thread pool size.
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.adjustThreadPool();
} catch (Exception e) {
log.error("ScheduledTask adjustThreadPool exception", e);
}
}
}, 1, 1, TimeUnit.MINUTES);
}
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
try {
MQClientInstance.this.updateTopicRouteInfoFromNameServer();
} catch (Exception e) {
log.error("ScheduledTask updateTopicRouteInfoFromNameServer exception", e);
}
}
}, 10, this.clientConfig.getPollNameServerInteval(), TimeUnit.MILLISECONDS);
This task periodically refreshes the queues under each topic. Every client can see how many queues a topic has. For example, topic Test123 has 8 queues; if c1 and c2 are two consumer machines, each of them syncs all 8 queues of Test123 into its own local cache.
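What gets cached locally is, per topic, a set of MessageQueue entries (a MessageQueue is identified by topic, brokerName and queueId). A minimal sketch of that cache; the table name is borrowed from RebalanceImpl, and the broker name plus the hard-coded 8 queues are purely illustrative, since the real data comes from the name server route info:

import com.alibaba.rocketmq.common.message.MessageQueue;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TopicQueueCacheSketch {
    // topic -> the queues of that topic, as synced from the route info
    private final ConcurrentHashMap<String, Set<MessageQueue>> topicSubscribeInfoTable =
            new ConcurrentHashMap<String, Set<MessageQueue>>();

    public void refresh(String topic) {
        // In the real client this comes from updateTopicRouteInfoFromNameServer();
        // here we just hand-build 8 queues on a hypothetical broker "broker-a".
        Set<MessageQueue> mqSet = new HashSet<MessageQueue>();
        for (int queueId = 0; queueId < 8; queueId++) {
            mqSet.add(new MessageQueue(topic, "broker-a", queueId));
        }
        this.topicSubscribeInfoTable.put(topic, mqSet);
    }
}

Now back to the source: rebalanceByTopic() is where this cached data gets used for allocation.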
private void rebalanceByTopic(final String topic) {
switch (messageModel) {
case BROADCASTING: {
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
if (mqSet != null) {
boolean changed = this.updateProcessQueueTableInRebalance(topic, mqSet);
if (changed) {
this.messageQueueChanged(topic, mqSet, mqSet);
log.info("messageQueueChanged {} {} {} {}",consumerGroup,topic,mqSet,mqSet);
}
} else {
log.warn("doRebalance, {}, but the topic[{}] not exist.", consumerGroup, topic);
}
break;
}
case CLUSTERING: {
/**
* 1. First look up the MessageQueue set for this topic.
* If, say, there are 2 consumers and 4 queues in total,
* mqSet here holds all 4 queues on every consumer.
*/
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
/**
* 2. Look up all consumers in the group that subscribe to this topic.
* For example, if two machines (IPs) are consuming this topic,
* cidAll will contain those two consumers' client ids.
*/
List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, consumerGroup);
if (null == mqSet) {
if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn("doRebalance, {}, but the topic[{}] not exist.", consumerGroup, topic);
}
}
if (null == cidAll) {
log.warn("doRebalance, {} {}, get consumer id list failed", consumerGroup, topic);
}
/**
* 3. Next, allocate() is called on the message queue allocation strategy configured earlier to re-allocate the queues.
*/
if (mqSet != null && cidAll != null) {
// Sort both the message queues and the consumer id list. Each Consumer allocates queues locally,
// so only after sorting do all Consumers see the same order and reach a consistent allocation.
List<MessageQueue> mqAll = new ArrayList<MessageQueue>();
mqAll.addAll(mqSet);
Collections.sort(mqAll);
Collections.sort(cidAll);
AllocateMessageQueueStrategy strategy = this.allocateMessageQueueStrategy;
// Allocate the message queues according to the strategy: this is where all of the topic's queues are divided up,
// so that each machine ends up with the queues it should consume.
List<MessageQueue> allocateResult = null;
try {
allocateResult = strategy.allocate(//
this.consumerGroup, //
this.mQClientFactory.getClientId(), //
mqAll,//
cidAll);
} catch (Throwable e) {
log.error(
"AllocateMessageQueueStrategy.allocate Exception. allocateMessageQueueStrategyName={}",
strategy.getName(), e);
return;
}
Set<MessageQueue> allocateResultSet = new HashSet<MessageQueue>();
if (allocateResult != null) {
allocateResultSet.addAll(allocateResult);
}
/**
* 4. Then updateProcessQueueTableInRebalance() is called with the rebalance result to update the topic's message queues, returning whether anything changed.
*/
boolean changed = this.updateProcessQueueTableInRebalance(topic, allocateResultSet);
if (changed) {
log.info(
"rebalanced allocate source. allocateMessageQueueStrategyName={}, group={}, topic={}, mqAllSize={}, cidAllSize={}, mqAll={}, cidAll={}",
strategy.getName(), consumerGroup, topic, mqSet.size(), cidAll.size(), mqSet, cidAll);
log.info(
"rebalanced result changed. allocateMessageQueueStrategyName={}, group={}, topic={}, ConsumerId={}, rebalanceSize={}, rebalanceMqSet={}",
strategy.getName(), consumerGroup, topic, this.mQClientFactory.getClientId(),
allocateResultSet.size(), allocateResultSet);
//If the allocated queue set changed during this rebalance, messageQueueChanged() is called to handle the change; the concrete implementations live in RebalancePushImpl / RebalancePullImpl.
this.messageQueueChanged(topic, mqSet, allocateResultSet);
}
}
break;
}
default:
break;
}
}
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
This line fetches the locally cached queues for this topic; as mentioned above (in point five), a scheduled task keeps syncing the topics on the broker and the queues under each topic.
List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, consumerGroup);
This line asks how many consumers in the same group are consuming this topic.
Then, using the strategy from the constructor in point one, each consumer machine computes the queues it may consume and wraps them into PullRequests to pull messages.
There is one important point here; let me give an example first.
Topic Test123 has 8 queues.
Consumers c1 and c2 are two machines consuming it.
c1 decides that queue0, queue1, queue2 and queue3 are its to consume, builds PullRequests for them and pulls from the broker.
c2 likewise computes queue4, queue5, queue6 and queue7 from the strategy, builds PullRequests and pulls from the broker (see the allocation sketch after this discussion).
Then someone asks: could one consumer machine be running one strategy while another machine runs a different one?
I think that is a good question.
But go back to a normal production environment: the same project is deployed to multiple machines, and the code on all of them is the same build, from the same git repository and the same branch.
So where would multiple strategies come from? They are all the same strategy.
(PS: divergent thinking is good, but bring it back to the real environment and think about it in practice; I used to wonder about this too.)
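For intuition, here is a small, self-contained sketch of average allocation for the Test123 example. It mirrors the idea behind AllocateMessageQueueAveragely (sorted queues cut into contiguous ranges by the consumer's index in the sorted consumer list); it is not the library code itself:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AverageAllocationSketch {

    // Contiguous-range average allocation: the consumer at position `index` among `cidAll`
    // takes roughly mqAll.size() / cidAll.size() queues, with the first (size % consumers)
    // consumers taking one extra queue each.
    static List<String> allocate(String currentCid, List<String> cidAll, List<String> mqAll) {
        Collections.sort(cidAll);   // every consumer sorts the same way,
        Collections.sort(mqAll);    // so they all agree on the split
        int index = cidAll.indexOf(currentCid);
        int mod = mqAll.size() % cidAll.size();
        int average = mqAll.size() / cidAll.size() + (index < mod ? 1 : 0);
        int start = index * (mqAll.size() / cidAll.size()) + Math.min(index, mod);
        List<String> result = new ArrayList<String>();
        for (int i = start; i < start + average && i < mqAll.size(); i++) {
            result.add(mqAll.get(i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> queues = new ArrayList<String>();
        for (int i = 0; i < 8; i++) queues.add("Test123-queue" + i);
        List<String> consumers = new ArrayList<String>();
        consumers.add("c1");
        consumers.add("c2");
        System.out.println("c1 -> " + allocate("c1", consumers, new ArrayList<String>(queues)));
        System.out.println("c2 -> " + allocate("c2", consumers, new ArrayList<String>(queues)));
        // Expected: c1 gets queue0..queue3, c2 gets queue4..queue7, matching the example above.
    }
}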
Based on the new set of messageQueues each machine may consume, it updates the ProcessQueue table and assembles PullRequests.
/**
*
* Description: First, it walks the existing mapping from message queues to process queues. If, after the new
* allocation, a message queue is no longer assigned to this consumer for the topic, the mapping is removed and
* that messageQueue is dropped from the consumer's local state.
* Since stale message queues are removed, newly assigned message queues need new processQueues created and
* mapped to them. A pullRequest is generated here to establish each new mapping, and computePullFromWhere()
* determines the position to start pulling from (RebalancePullImpl simply returns 0). The new processQueue and
* its messageQueue are then put into the map.
*
* When rebalancing, update the process queue table:
* - remove message queues that are in processQueueTable but not in mqSet
* - add message queues that are in mqSet but not in processQueueTable
*
* @param topic Topic
* @param mqSet the message queues assigned to this consumer after rebalancing
* @return whether anything changed
*
* @author miaomiao
* @date 19/1/21 11:47 AM
*/
private boolean updateProcessQueueTableInRebalance(final String topic, final Set<MessageQueue> mqSet) {
boolean changed = false;
// remove message queues that are in processQueueTable but no longer in mqSet
Iterator<Entry<MessageQueue, ProcessQueue>> it = this.processQueueTable.entrySet().iterator();
while (it.hasNext()) {
Entry<MessageQueue, ProcessQueue> next = it.next();
MessageQueue mq = next.getKey();
ProcessQueue pq = next.getValue();
if (mq.getTopic().equals(topic)) {
if (!mqSet.contains(mq)) {// queue no longer assigned to this consumer
pq.setDropped(true);
//unlock the queue on the broker side
if (this.removeUnnecessaryMessageQueue(mq, pq)) {
it.remove();
changed = true;
log.info("doRebalance, {}, remove unnecessary mq, {}", consumerGroup, mq);
}
}
else if (pq.isPullExpired()) {// pulling from this queue has expired, clean it up
switch (this.consumeType()) {
case CONSUME_ACTIVELY:
break;
case CONSUME_PASSIVELY:
pq.setDropped(true);
if (this.removeUnnecessaryMessageQueue(mq, pq)) {
it.remove();
changed = true;
log.error(
"[BUG]doRebalance, {}, remove unnecessary mq, {}, because pull is pause, so try to fixed it",
consumerGroup, mq);
}
break;
default:
break;
}
}
}
}
// add message queues that are in mqSet but not yet in processQueueTable.
List<PullRequest> pullRequestList = new ArrayList<PullRequest>(); // pull requests to dispatch
for (MessageQueue mq : mqSet) {
if (!this.processQueueTable.containsKey(mq)) {
PullRequest pullRequest = new PullRequest();
pullRequest.setConsumerGroup(consumerGroup);
pullRequest.setMessageQueue(mq);
pullRequest.setProcessQueue(new ProcessQueue());
long nextOffset = this.computePullFromWhere(mq);
if (nextOffset >= 0) {
pullRequest.setNextOffset(nextOffset);
pullRequestList.add(pullRequest);
changed = true;
this.processQueueTable.put(mq, pullRequest.getProcessQueue());
log.info("doRebalance, {}, add a new mq, {}", consumerGroup, mq);
} else {
log.warn("doRebalance, {}, add new mq failed, {}", consumerGroup, mq);
}
}
}
// dispatch the pull requests
this.dispatchPullRequest(pullRequestList);
return changed;
}
Keep an eye on the following part; I will come back to it later.
if (mq.getTopic().equals(topic)) {
if (!mqSet.contains(mq)) {// queue no longer assigned to this consumer
pq.setDropped(true);
//unlock the queue on the broker side
if (this.removeUnnecessaryMessageQueue(mq, pq)) {
it.remove();
changed = true;
log.info("doRebalance, {}, remove unnecessary mq, {}", consumerGroup, mq);
}
}
else if (pq.isPullExpired()) {// pulling from this queue has expired, clean it up
switch (this.consumeType()) {
case CONSUME_ACTIVELY:
break;
case CONSUME_PASSIVELY:
pq.setDropped(true);
if (this.removeUnnecessaryMessageQueue(mq, pq)) {
it.remove();
changed = true;
log.error(
"[BUG]doRebalance, {}, remove unnecessary mq, {}, because pull is pause, so try to fixed it",
consumerGroup, mq);
}
break;
default:
break;
}
}
}
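About the "unlock the queue on the broker side" comment above: removeUnnecessaryMessageQueue() is where that happens. I am not pasting the original method here, but for an orderly push consumer it roughly looks like the sketch below; the offsetStore calls and the 1-second tryLock timeout are from memory, so treat the details as assumptions:

// Rough shape of RebalancePushImpl.removeUnnecessaryMessageQueue(mq, pq) for orderly consumption
public boolean removeUnnecessaryMessageQueue(final MessageQueue mq, final ProcessQueue pq) {
    // 1. persist and drop the local offset of the queue we are giving up
    this.defaultMQPushConsumerImpl.getOffsetStore().persist(mq);
    this.defaultMQPushConsumerImpl.getOffsetStore().removeOffset(mq);
    // 2. for orderly + clustering, only release the broker-side lock once no thread is in the
    //    middle of consuming this ProcessQueue (grab its consume lock first)
    if (this.defaultMQPushConsumerImpl.isConsumeOrderly()
        && MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
        try {
            if (pq.getLockConsume().tryLock(1000, TimeUnit.MILLISECONDS)) {
                try {
                    this.unlock(mq, true);   // tell the broker to release the queue lock
                } finally {
                    pq.getLockConsume().unlock();
                }
            } else {
                return false;                // still consuming; try again on the next rebalance
            }
        } catch (Exception e) {
            return false;
        }
    }
    return true;
}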
private final LinkedBlockingQueue<PullRequest> pullRequestQueue = new LinkedBlockingQueue<PullRequest>();
public void start() {
if (MessageModel.CLUSTERING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.messageModel())) {
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
ConsumeMessageOrderlyService.this.lockMQPeriodically();
}
}, 1000 * 1, ProcessQueue.RebalanceLockInterval, TimeUnit.MILLISECONDS);
}
}
/**
*
* Description: lockAll() iterates over all broker names, resolves each broker's address on the client side,
* and sends the consumer group name, this consumer's client id, and the message queues under that broker to the Broker.
* The request goes through the client API instance, and the Broker returns the consume queues that were locked.
*
* @author miaomiao
* @date 19/1/22 8:32 PM
*/
public void lockAll() {
//Group the currently assigned message queues by broker name into a Map.
// The assigned queues were distributed by the RebalanceService according to the allocation algorithm, based on the current consumers and message queues; the consumer then tries to lock each assigned queue, and only queues locked successfully are added to the pull tasks.
HashMap<String, Set<MessageQueue>> brokerMqs = this.buildProcessQueueTableByBrokerName();
Iterator<Entry<String, Set<MessageQueue>>> it = brokerMqs.entrySet().iterator();
while (it.hasNext()) {
Entry<String, Set<MessageQueue>> entry = it.next();
final String brokerName = entry.getKey();
final Set<MessageQueue> mqs = entry.getValue();
if (mqs.isEmpty())
continue;
//resolve the address of the master node for this broker
FindBrokerResult findBrokerResult =
this.mQClientFactory.findBrokerAddressInSubscribe(brokerName, MixAll.MASTER_ID, true);
if (findBrokerResult != null) {
LockBatchRequestBody requestBody = new LockBatchRequestBody();
requestBody.setConsumerGroup(this.consumerGroup);
requestBody.setClientId(this.mQClientFactory.getClientId());
requestBody.setMqSet(mqs);
try {
//Send a lock-batch request to the Broker; it returns the message queues locked successfully in this round (the Broker-side locking is analyzed below).
Set<MessageQueue> lockOKMQSet =
this.mQClientFactory.getMQClientAPIImpl().lockBatchMQ(
findBrokerResult.getBrokerAddr(), requestBody, 1000);
//For each queue locked successfully, update the locked flag of its ProcessQueue: if locked was false, set it to true and refresh the lock timestamp.
for (MessageQueue mq : lockOKMQSet) {
ProcessQueue processQueue = this.processQueueTable.get(mq);
if (processQueue != null) {
if (!processQueue.isLocked()) {
log.info("the message queue locked OK, Group: {} {}", this.consumerGroup, mq);
}
processQueue.setLocked(true);
processQueue.setLastLockTimestamp(System.currentTimeMillis());
}
}
//For each queue in mqs that was not locked successfully, set its ProcessQueue's locked flag to false; this consumer stops pulling from that message queue until it manages to lock it again.
for (MessageQueue mq : mqs) {
if (!lockOKMQSet.contains(mq)) {
ProcessQueue processQueue = this.processQueueTable.get(mq);
if (processQueue != null) {
processQueue.setLocked(false);
log.warn("the message queue locked Failed, Group: {} {}", this.consumerGroup,
mq);
}
}
}
} catch (Exception e) {
log.error("lockBatchMQ exception, " + mqs, e);
}
}
}
}
The code above tells the broker: "I want to lock this queue of this topic under this group." If locking succeeds, the broker reports success back to the consumer, and the consumer can then mark the corresponding processQueue as locked.
Now let's look at the broker-side code.
/**
* Lock queues in batch; returns the set of queues locked successfully.
*
* @return the queues that were locked successfully
*/
public Set<MessageQueue> tryLockBatch(final String group, final Set<MessageQueue> mqs,
final String clientId) {
Set<MessageQueue> lockedMqs = new HashSet<MessageQueue>(mqs.size());
Set<MessageQueue> notLockedMqs = new HashSet<MessageQueue>(mqs.size());
// First, without taking the lock, check which queues are already locked by this client and which are not
for (MessageQueue mq : mqs) {
if (this.isLocked(group, mq, clientId)) {
lockedMqs.add(mq);
}
else {
notLockedMqs.add(mq);
}
}
if (!notLockedMqs.isEmpty()) {
try {
this.lock.lockInterruptibly();
try {
ConcurrentHashMap<MessageQueue, LockEntry> groupValue = this.mqLockTable.get(group);
if (null == groupValue) {
groupValue = new ConcurrentHashMap<MessageQueue, LockEntry>(32);
this.mqLockTable.put(group, groupValue);
}
// iterate over the queues that are not yet locked
for (MessageQueue mq : notLockedMqs) {
LockEntry lockEntry = groupValue.get(mq);
if (null == lockEntry) {
lockEntry = new LockEntry();
lockEntry.setClientId(clientId);
groupValue.put(mq, lockEntry);
log.info(
"tryLockBatch, message queue not locked, I got it. Group: {} NewClientId: {} {}", //
group, //
clientId, //
mq);
}
// already locked by this client
if (lockEntry.isLocked(clientId)) {
lockEntry.setLastUpdateTimestamp(System.currentTimeMillis());
lockedMqs.add(mq);
continue;
}
String oldClientId = lockEntry.getClientId();
// the lock has expired, take it over
if (lockEntry.isExpired()) {
lockEntry.setClientId(clientId);
lockEntry.setLastUpdateTimestamp(System.currentTimeMillis());
log.warn(
"tryLockBatch, message queue lock expired, I got it. Group: {} OldClientId: {} NewClientId: {} {}", //
group, //
oldClientId, //
clientId, //
mq);
lockedMqs.add(mq);
continue;
}
// the lock is held by another client
log.warn(
"tryLockBatch, message queue locked by other client. Group: {} OtherClientId: {} NewClientId: {} {}", //
group, //
oldClientId, //
clientId, //
mq);
}
}
finally {
this.lock.unlock();
}
}
catch (InterruptedException e) {
log.error("putMessage exception", e);
}
}
return lockedMqs;
}
Here the broker uses the clientId to decide whether this client already owns the lock for this queue (group + topic + queue), checks whether the existing lock has expired, and then locks it. The code is straightforward and worth a read.
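The isLocked/isExpired checks above live in the broker's LockEntry. A minimal sketch of how I understand that inner class; the 60-second REBALANCE_LOCK_MAX_LIVE_TIME default is an assumption from memory (it is configurable on the broker):

// Sketch of the broker-side LockEntry used by tryLockBatch
static class LockEntry {
    private static final long REBALANCE_LOCK_MAX_LIVE_TIME = 60 * 1000; // assumed default: 60s
    private String clientId;
    private volatile long lastUpdateTimestamp = System.currentTimeMillis();

    // the lock counts as held by `clientId` only if it is the same client AND the lock is not expired
    public boolean isLocked(final String clientId) {
        return this.clientId != null && this.clientId.equals(clientId) && !this.isExpired();
    }

    public boolean isExpired() {
        return (System.currentTimeMillis() - this.lastUpdateTimestamp) > REBALANCE_LOCK_MAX_LIVE_TIME;
    }

    public String getClientId() { return this.clientId; }
    public void setClientId(String clientId) { this.clientId = clientId; }
    public void setLastUpdateTimestamp(long ts) { this.lastUpdateTimestamp = ts; }
}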
private final LinkedBlockingQueue<PullRequest> pullRequestQueue = new LinkedBlockingQueue<PullRequest>();
This is the queue that the PullRequests we just built were put into; the pull service takes them out and goes to the broker to pull messages.
In 4.4.0 it works like this:
1. take requests from the LinkedBlockingQueue
2. before pulling from the broker, check whether the processQueue is locked
3. if it is not locked, put the request into a delay queue and re-add it to pullRequestQueue after 3s, so that step 1 can take it again
In 3.2.6 it works like this:
1. take requests from the LinkedBlockingQueue
2. pull the messages right away
3. at consume time, check whether the processQueue is locked; if not, hand the task to a delayed thread pool and consume a little later
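Either way, the take-loop itself is simple. A hedged sketch of what PullMessageService.run() does with pullRequestQueue, simplified from memory rather than the verbatim source:

// Sketch of the pull service loop: block on the queue, then hand each request to the consumer impl
@Override
public void run() {
    while (!this.isStopped()) {
        try {
            PullRequest pullRequest = this.pullRequestQueue.take();   // blocks until a request is enqueued
            if (pullRequest != null) {
                // looks up the DefaultMQPushConsumerImpl for pullRequest.getConsumerGroup()
                // and calls its pullMessage(pullRequest)
                this.pullMessage(pullRequest);
            }
        } catch (InterruptedException ignored) {
            // stopped or interrupted: the loop condition handles exit
        } catch (Exception e) {
            log.error("Pull Message Service Run Method exception", e);
        }
    }
}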
Below is the 3.2.6 code; the comparison above was only to point out the difference.
public void pullMessage(final PullRequest pullRequest) {
final ProcessQueue processQueue = pullRequest.getProcessQueue();
if (processQueue.isDropped()) {
log.info("the pull request[{}] is droped.", pullRequest.toString());
return;
}
//refresh the ProcessQueue in this pullRequest with the current time
//i.e. record the queue's last message pull time
pullRequest.getProcessQueue().setLastPullTimestamp(System.currentTimeMillis());
// Check whether the consumer is in the RUNNING state; if not, delay the pull.
try {
this.makeSureStateOK();
}
catch (MQClientException e) {
log.warn("pullMessage exception, consumer state not ok", e);
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenException);
return;
}
// Check whether the consumer is paused.
if (this.isPause()) {
log.warn("consumer was paused, execute pull request later. instanceName={}",
this.defaultMQPushConsumer.getInstanceName());
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenSuspend);
return;
}
/**
* A series of checks on flow control, consumer state, cached message count and so on follow. If the current
* conditions do not satisfy the consumer's configuration, the request is dropped into the scheduled thread pool
* and retried after a certain timeDelay.
*/
// Check whether the number of locally held messages exceeds the maximum (default 1000).
long size = processQueue.getMsgCount().get();
if (size > this.defaultMQPushConsumer.getPullThresholdForQueue()) {
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenFlowControl);// re-submit the pull request after a delay (50ms).
if ((flowControlTimes1++ % 1000) == 0) {
log.warn("the consumer message buffer is full, so do flow control, {} {} {}", size,
pullRequest, flowControlTimes1);
}
return;
}
if (!this.consumeOrderly) {// check whether the message span is too large.
//Only for concurrent consumption: flow control kicks in when the span of held messages is too large (span = offset of the last held message minus offset of the first; default limit: 2000)
if (processQueue.getMaxSpan() > this.defaultMQPushConsumer.getConsumeConcurrentlyMaxSpan()) {
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenFlowControl);// re-submit the pull request after a delay (50ms).
if ((flowControlTimes2++ % 1000) == 0) {
log.warn("the queue's messages, span too long, so do flow control, {} {} {}",
processQueue.getMaxSpan(), pullRequest, flowControlTimes2);
}
return;
}
}
// Get the subscription data for the topic; if it is missing, delay the pull
final SubscriptionData subscriptionData =
this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
if (null == subscriptionData) {
// Because of concurrency the subscription may be temporarily missing; retry later so the PullRequest is not lost
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenException);
log.warn("find the consumer's subscription failed, {}", pullRequest);
return;
}
final long beginTimestamp = System.currentTimeMillis();
PullCallback pullCallback = new PullCallback() {
@Override
public void onSuccess(PullResult pullResult) {
if (pullResult != null) {
pullResult =
DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(
pullRequest.getMessageQueue(), pullResult, subscriptionData);
switch (pullResult.getPullStatus()) {
case FOUND:
// record the offset for the next pull from this queue
long prevRequestOffset = pullRequest.getNextOffset();
// System.out.println("next pull offset>>>" + prevRequestOffset + ", topic: " + pullRequest.getMessageQueue().getTopic() + ", queueId: " + pullRequest.getMessageQueue().getQueueId());
pullRequest.setNextOffset(pullResult.getNextBeginOffset());
// stats
long pullRT = System.currentTimeMillis() - beginTimestamp;
DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullRT(
pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(), pullRT);
long firstMsgOffset = Long.MAX_VALUE;
if (pullResult.getMsgFoundList() == null || pullResult.getMsgFoundList().isEmpty()) {
DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
}
else {
firstMsgOffset = pullResult.getMsgFoundList().get(0).getQueueOffset();
// stats
DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullTPS(
pullRequest.getConsumerGroup(), pullRequest.getMessageQueue().getTopic(),
pullResult.getMsgFoundList().size());
/**
* Put the messages into the processQueue, where they wait to be consumed.
*/
boolean dispathToConsume = processQueue.putMessage(pullResult.getMsgFoundList());
/**
* After the messages are stored, consumption follows:
*
* submitConsumeRequest() on the ConsumeMessageService held by defaultMQPushConsumerImpl is called to consume them.
*/
DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(//
pullResult.getMsgFoundList(), //
processQueue, //
pullRequest.getMessageQueue(), //
dispathToConsume);
// Depending on the pull interval (pullInterval), submit the next pull request immediately or after a delay. The default interval is 0ms, i.e. pull again immediately.
if (DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval() > 0) {
DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval());
}
else {
DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
}
}
// If the next pull offset, or the first message's offset, is smaller than the previous request offset, treat it as a BUG and log it
if (pullResult.getNextBeginOffset() < prevRequestOffset//
|| firstMsgOffset < prevRequestOffset) {
log.warn(
"[BUG] pull message result maybe data wrong, nextBeginOffset: {} firstMsgOffset: {} prevRequestOffset: {}",//
pullResult.getNextBeginOffset(),//
firstMsgOffset,//
prevRequestOffset);
}
break;
case NO_NEW_MSG:
// set the offset for the next pull
pullRequest.setNextOffset(pullResult.getNextBeginOffset());
// persist the consume offset
DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
// re-submit the pull request immediately
DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
break;
case NO_MATCHED_MSG:
// set the offset for the next pull
pullRequest.setNextOffset(pullResult.getNextBeginOffset());
// persist the consume offset
DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);
// re-submit the pull request immediately
DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
break;
case OFFSET_ILLEGAL: //the offset requested for this queue is illegal
log.warn("the pull request offset illegal, {} {}",//
pullRequest.toString(), pullResult.toString());
// set the offset for the next pull
pullRequest.setNextOffset(pullResult.getNextBeginOffset());
// mark the process queue as dropped
pullRequest.getProcessQueue().setDropped(true);
// Submit a delayed task to remove the process queue. It is not removed immediately because some code may still be using it.
DefaultMQPushConsumerImpl.this.executeTaskLater(new Runnable() {
@Override
public void run() {
try {
// update the consume offset and sync it to the Broker
DefaultMQPushConsumerImpl.this.offsetStore.updateOffset(
pullRequest.getMessageQueue(), pullRequest.getNextOffset(), false);
DefaultMQPushConsumerImpl.this.offsetStore.persist(pullRequest
.getMessageQueue());
// remove the process queue
DefaultMQPushConsumerImpl.this.rebalanceImpl
.removeProcessQueue(pullRequest.getMessageQueue());
log.warn("fix the pull request offset, {}", pullRequest);
}
catch (Throwable e) {
log.error("executeTaskLater Exception", e);
}
}
}, 10000);
break;
default:
break;
}
}
}
@Override
public void onException(Throwable e) {
if (!pullRequest.getMessageQueue().getTopic().startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn("execute the pull request exception", e);
}
// re-submit the pull request after a delay
DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
PullTimeDelayMillsWhenException);
}
};
// Under the clustering model, read the local consume offset so it can be committed along with the pull.
boolean commitOffsetEnable = false;
long commitOffsetValue = 0L;
if (MessageModel.CLUSTERING == this.defaultMQPushConsumer.getMessageModel()) {
commitOffsetValue =
this.offsetStore.readOffset(pullRequest.getMessageQueue(),
ReadOffsetType.READ_FROM_MEMORY);
if (commitOffsetValue > 0) {
commitOffsetEnable = true;
}
}
// Work out the subscription expression for the request and whether filtersrv class filtering applies
String subExpression = null;
boolean classFilter = false;
SubscriptionData sd =
this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
if (sd != null) {
if (this.defaultMQPushConsumer.isPostSubscriptionWhenPull() && !sd.isClassFilterMode()) {
subExpression = sd.getSubString();
}
classFilter = sd.isClassFilterMode();
}
// build the system flag for the pull request
int sysFlag = PullSysFlag.buildSysFlag(//
commitOffsetEnable, // commitOffset
true, // suspend
subExpression != null,// subscription
classFilter // class filter
);
// Execute the pull. If the pull request throws an exception, re-submit it after a delay.
try {
this.pullAPIWrapper.pullKernelImpl(//
pullRequest.getMessageQueue(), // 1
subExpression, // 2
subscriptionData.getSubVersion(), // 3
pullRequest.getNextOffset(), // 4
this.defaultMQPushConsumer.getPullBatchSize(), // 5
sysFlag, // 6
commitOffsetValue,// 7
BrokerSuspendMaxTimeMillis, // 8
ConsumerTimeoutMillisWhenSuspend, // 9
CommunicationMode.ASYNC, // 10
pullCallback// 11
);
}
catch (Exception e) {
log.error("pullKernelImpl exception", e);
this.executePullRequestLater(pullRequest, PullTimeDelayMillsWhenException);
}
}
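One small aside before moving on: executePullRequestLater(), which pullMessage() uses everywhere above for flow control and error handling, is nothing more than a delayed re-enqueue into pullRequestQueue. A rough sketch of how PullMessageService implements it, simplified from memory:

// Sketch: delayed and immediate re-enqueue of a PullRequest in PullMessageService
public void executePullRequestLater(final PullRequest pullRequest, final long timeDelay) {
    this.scheduledExecutorService.schedule(new Runnable() {
        @Override
        public void run() {
            PullMessageService.this.executePullRequestImmediately(pullRequest);
        }
    }, timeDelay, TimeUnit.MILLISECONDS);
}

public void executePullRequestImmediately(final PullRequest pullRequest) {
    try {
        this.pullRequestQueue.put(pullRequest);   // the run() loop will take it and pull again
    } catch (InterruptedException e) {
        log.error("executePullRequestImmediately pullRequestQueue.put", e);
    }
}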
/**
* Put the messages into the processQueue, where they wait to be consumed.
*/
boolean dispathToConsume = processQueue.putMessage(pullResult.getMsgFoundList());
/**
* After the messages are stored, consumption follows:
*
* submitConsumeRequest() on the ConsumeMessageService held by defaultMQPushConsumerImpl is called to consume them.
*/
DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(//
pullResult.getMsgFoundList(), //
processQueue, //
pullRequest.getMessageQueue(), //
dispathToConsume);
These are the two pieces we mainly care about:
putting the pulled messages into the processQueue, and then submitting them for consumption.
Let's look at how the processQueue stores the messages first.
/**
*
* Description: The pulled messages are stored in the processQueue's internal TreeMap (the whole operation is
* done under a lock for thread safety), and the message count statistics are updated afterwards.
*
* Adds messages and returns whether a consume task should be dispatched:
* returns true when new messages were added and the queue was not already being consumed.
*
* @param msgs the messages
* @return whether to dispatch to the consumer
*/
public boolean putMessage(final List<MessageExt> msgs) {
boolean dispatchToConsume = false;
try {
this.lockTreeMap.writeLock().lockInterruptibly();
try {
// add the messages
int validMsgCnt = 0;
for (MessageExt msg : msgs) {
MessageExt old = msgTreeMap.put(msg.getQueueOffset(), msg);
if (null == old) {
validMsgCnt++;
this.queueOffsetMax = msg.getQueueOffset();
}
}
msgCount.addAndGet(validMsgCnt);
// Decide whether to dispatch a consume task: msgTreeMap is not empty and consuming is still false
// (consuming starts out false, and is set back to false when msgTreeMap becomes empty)
if (!msgTreeMap.isEmpty() && !this.consuming) {
dispatchToConsume = true;
this.consuming = true;
}
// number of messages still accumulated on the Broker for this queue
if (!msgs.isEmpty()) {
MessageExt messageExt = msgs.get(msgs.size() - 1);
String property = messageExt.getProperty(MessageConst.PROPERTY_MAX_OFFSET);
if (property != null) {
long accTotal = Long.parseLong(property) - messageExt.getQueueOffset();
if (accTotal > 0) {
this.msgAccCnt = accTotal;
}
}
}
}
finally {
this.lockTreeMap.writeLock().unlock();
}
}
catch (InterruptedException e) {
log.error("putMessage exception", e);
}
return dispatchToConsume;
}
A quick analysis of the code above:
1. lockTreeMap: every put, take and rollback of messages in the processQueue is guarded by this lock for safety.
2. dispatchToConsume: I think this flag matters a lot; it is what decides whether a thread is started to consume this queue in order.
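The counterpart of putMessage() is takeMessags(), which the orderly consume task calls to pull a batch out of the TreeMap; it is also where consuming is reset to false once the map runs empty. A rough sketch of the 3.2.6 logic as I understand it (msgTreeMapTemp is the staging map used for commit/rollback; treat the details as an approximation):

public List<MessageExt> takeMessags(final int batchSize) {
    List<MessageExt> result = new ArrayList<MessageExt>(batchSize);
    try {
        this.lockTreeMap.writeLock().lockInterruptibly();
        try {
            for (int i = 0; i < batchSize && !this.msgTreeMap.isEmpty(); i++) {
                // poll the lowest-offset message and stage it for later commit/rollback
                Map.Entry<Long, MessageExt> entry = this.msgTreeMap.pollFirstEntry();
                result.add(entry.getValue());
                this.msgTreeMapTemp.put(entry.getKey(), entry.getValue());
            }
            if (result.isEmpty()) {
                // nothing left to take: mark the queue as not consuming, so the next
                // putMessage() call can dispatch a fresh consume task
                this.consuming = false;
            }
        } finally {
            this.lockTreeMap.writeLock().unlock();
        }
    } catch (InterruptedException e) {
        log.error("takeMessags exception", e);
    }
    return result;
}

With that in mind, here is submitConsumeRequest():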
@Override
public void submitConsumeRequest(//
final List msgs, //
final ProcessQueue processQueue, //
final MessageQueue messageQueue, //
final boolean dispathToConsume) {
if (dispathToConsume) {
ConsumeRequest consumeRequest = new ConsumeRequest(processQueue, messageQueue);
this.consumeExecutor.submit(consumeRequest);
}
}
/**
* ordered consumption (ConsumeRequest.run())
*/
@Override
public void run() {
if (this.processQueue.isDropped()) {
log.warn("run, the message queue not be able to consume, because it's dropped. {}",
this.messageQueue);
return;
}
// Acquire the consumer-side lock for this message queue
/**
* A lock object is fetched per MessageQueue and taken before consuming that queue.
* This means the concurrency of the consume thread pool inside one consumer is at the message-queue level:
* the same queue is consumed by only one thread at a time, and other threads wait their turn.
*/
final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
synchronized (objLock) {
// (broadcasting model) or (clustering model && the Broker-side message queue lock is valid)
if (MessageModel.BROADCASTING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())
|| (this.processQueue.isLocked() && !this.processQueue.isLockExpired())) {
final long beginTime = System.currentTimeMillis();
// loop
for (boolean continueConsume = true; continueConsume;) {
if (this.processQueue.isDropped()) {
log.warn("the message queue not be able to consume, because it's dropped. {}",
this.messageQueue);
break;
}
// The distributed message queue lock is not held: submit a delayed lock-then-consume request
if (MessageModel.CLUSTERING
.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.messageModel())
&& !this.processQueue.isLocked()) {
log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue,
this.processQueue, 10);
break;
}
// The distributed message queue lock has expired: submit a delayed lock-then-consume request
if (MessageModel.CLUSTERING
.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.messageModel())
&& this.processQueue.isLockExpired()) {
log.warn("the message queue lock expired, so consume later, {}",
this.messageQueue);
ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue,
this.processQueue, 10);
break;
}
// This round of consumption has exceeded the continuous limit (default 60s): submit a delayed consume request. By default that works out to roughly 10ms of rest per minute of consuming.
/**
* An orderly ConsumeRequest is not bounded by a number of messages but by time:
* once it has been consuming for longer than MAX_TIME_CONSUME_CONTINUOUSLY (default 60s),
* the current task ends and another thread in the consumer's pool continues consuming this queue.
*/
long interval = System.currentTimeMillis() - beginTime;
if (interval > MaxTimeConsumeContinuously) {
ConsumeMessageOrderlyService.this.submitConsumeRequestLater(processQueue,
messageQueue, 10);
break;
}
// Take the messages to consume. Unlike the concurrent path, the request here does not carry the messages; they are taken from the processQueue.
final int consumeBatchSize =
ConsumeMessageOrderlyService.this.defaultMQPushConsumer
.getConsumeMessageBatchMaxSize();
List<MessageExt> msgs = this.processQueue.takeMessags(consumeBatchSize);
if (!msgs.isEmpty()) {
final ConsumeOrderlyContext context =
new ConsumeOrderlyContext(this.messageQueue);
ConsumeOrderlyStatus status = null;
ConsumeMessageContext consumeMessageContext = null;
if (ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.hasHook()) {
consumeMessageContext = new ConsumeMessageContext();
consumeMessageContext
.setConsumerGroup(ConsumeMessageOrderlyService.this.defaultMQPushConsumer
.getConsumerGroup());
consumeMessageContext.setMq(messageQueue);
consumeMessageContext.setMsgList(msgs);
consumeMessageContext.setSuccess(false);
ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.executeHookBefore(consumeMessageContext);
}
// perform the consumption
long beginTimestamp = System.currentTimeMillis();
try {
// acquire the queue's consume lock
this.processQueue.getLockConsume().lock();
if (this.processQueue.isDropped()) {
log.warn(
"consumeMessage, the message queue not be able to consume, because it's dropped. {}",
this.messageQueue);
break;
}
status =
messageListener.consumeMessage(Collections.unmodifiableList(msgs),
context);
}
catch (Throwable e) {
log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}",//
RemotingHelper.exceptionSimpleDesc(e),//
ConsumeMessageOrderlyService.this.consumerGroup,//
msgs,//
messageQueue);
}
finally {
// release the queue's consume lock
this.processQueue.getLockConsume().unlock();
}
if (null == status //
|| ConsumeOrderlyStatus.ROLLBACK == status//
|| ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
log.warn("consumeMessage Orderly return not OK, Group: {} Msgs: {} MQ: {}",//
ConsumeMessageOrderlyService.this.consumerGroup,//
msgs,//
messageQueue);
}
long consumeRT = System.currentTimeMillis() - beginTimestamp;
if (null == status) {
status = ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
}
if (ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.hasHook()) {
consumeMessageContext.setStatus(status.toString());
consumeMessageContext.setSuccess(ConsumeOrderlyStatus.SUCCESS == status
|| ConsumeOrderlyStatus.COMMIT == status);
ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.executeHookAfter(consumeMessageContext);
}
ConsumeMessageOrderlyService.this.getConsumerStatsManager().incConsumeRT(
ConsumeMessageOrderlyService.this.consumerGroup, messageQueue.getTopic(),
consumeRT);
// process the consume result
continueConsume =
ConsumeMessageOrderlyService.this.processConsumeResult(msgs, status,
context, this);
}
else {
continueConsume = false;
}
}
}
else {
if (this.processQueue.isDropped()) {
log.warn("the message queue not be able to consume, because it's dropped. {}",
this.messageQueue);
return;
}
ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue,
this.processQueue, 100);
}
}
}
Let's analyze this from a high level first.
1. As the code above shows, a processQueue is consumed by only one thread at a time.
2. A processQueue is only consumed after it has been locked.
// The distributed message queue lock is not held: submit a delayed lock-then-consume request
if (MessageModel.CLUSTERING
.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl
.messageModel())
&& !this.processQueue.isLocked()) {
log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue,
this.processQueue, 10);
break;
}
2-1: The processQueue is not locked; try to lock it. If locking succeeds, start another consume task for this processQueue 10 ms later, and the current thread exits.
2-2: The processQueue is not locked; try to lock it. If locking fails, start another consume task for this processQueue 3 seconds later, and the current thread exits. (See the tryLockLaterAndReconsume sketch after the snippet below.)
3. A single thread works on one processQueue for at most 60s; when the time is up, another consume task is started 10 ms later for the same processQueue, and the current thread exits.
4. this.processQueue.takeMessags(consumeBatchSize) takes out the batch size configured by the consumer and consumes it. When no messages are left, consuming is set back to false.
5. When taking messages to consume, it also checks whether the processQueue is still usable. A new consumer machine may have joined and this processQueue may have been re-assigned to it, making the local processQueue dropped; that is exactly the spot we flagged earlier (the part to pay attention to, in point seven).
// acquire the queue's consume lock
this.processQueue.getLockConsume().lock();
if (this.processQueue.isDropped()) {
log.warn(
"consumeMessage, the message queue not be able to consume, because it's dropped. {}",
this.messageQueue);
break;
}
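For 2-1/2-2 above, the 10 ms versus 3 s difference comes out of tryLockLaterAndReconsume(). The original method is not pasted in this post; a sketch of its shape follows, where lockOneMQ and submitConsumeRequestLater are the helper names as I remember them, so treat them as assumptions:

// Sketch of ConsumeMessageOrderlyService.tryLockLaterAndReconsume(mq, pq, delayMills)
public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
                                     final long delayMills) {
    this.scheduledExecutorService.schedule(new Runnable() {
        @Override
        public void run() {
            // ask the broker for the queue lock again
            boolean lockOK = ConsumeMessageOrderlyService.this.lockOneMQ(mq);
            if (lockOK) {
                // lock acquired: re-submit a consume task for this processQueue after 10 ms
                ConsumeMessageOrderlyService.this.submitConsumeRequestLater(processQueue, mq, 10);
            } else {
                // lock still held elsewhere: back off and retry consuming after 3 s
                ConsumeMessageOrderlyService.this.submitConsumeRequestLater(processQueue, mq, 3000);
            }
        }
    }, delayMills, TimeUnit.MILLISECONDS);
}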
So, to sum up: when is a new thread started to consume a processQueue?
1. The first time, when dispathToConsume is true.
2. When the processQueue already holds data and new messages arrive from the broker, no extra thread is started, because dispathToConsume is false.
(Where does dispathToConsume get its value? In processQueue.putMessage(), as shown above.)
3. When a thread has been consuming a processQueue for 60s, it ends itself and a new thread is started to keep consuming that processQueue.
And the rule that one queue is consumed by only one thread at a time is enforced by a per-MessageQueue lock object; see the sketch below.
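That per-MessageQueue lock object is fetched via MessageQueueLock.fetchLockObject() in ConsumeRequest.run(). A minimal sketch reconstructed from memory:

import com.alibaba.rocketmq.common.message.MessageQueue;
import java.util.concurrent.ConcurrentHashMap;

public class MessageQueueLock {
    // one plain Object per MessageQueue; ConsumeRequest.run() synchronizes on it,
    // so at most one consume thread works on a given queue at any moment
    private final ConcurrentHashMap<MessageQueue, Object> mqLockTable =
            new ConcurrentHashMap<MessageQueue, Object>();

    public Object fetchLockObject(final MessageQueue mq) {
        Object objLock = this.mqLockTable.get(mq);
        if (null == objLock) {
            objLock = new Object();
            Object prevLock = this.mqLockTable.putIfAbsent(mq, objLock);
            if (prevLock != null) {
                objLock = prevLock;   // another thread registered the lock first; use that one
            }
        }
        return objLock;
    }
}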
OK, that covers the whole flow.
Let me summarize:
1. First, fetch the queues under the topic for this group (from the route info).
2. Each consumer machine uses the allocation strategy to work out which of the topic's queues it will consume; this happens on the consumer side.
2-1. The consumer first asks the broker how many consumers in this group are consuming this topic.
2-2. For example, topic Test123 has 4 queues and two consumers, c1 and c2.
c1 fetches the consumer list from the broker, sees two machines, and finds that it registered first, so its position is the first;
c2 registered later, so its position is the second.
Then the queues are allocated (every consumer knows there are 4 queues and claims its own share locally): since c1 is first it gets queue0 and queue1,
and c2, being second, gets queue2 and queue3.
3. With the queues divided up, each consumer locks its own queues on the broker, telling the broker "I am consuming this queue"; if another consumer machine asks for it, the broker turns it down.
4. Each queue then pulls its own messages from the broker, and each queue is consumed by one thread at a time.
This summary is rougher than the source walkthrough above. That's it.