The official definition of a consumer is as follows:
Similar to previously mentioned producer group, consumers of the exactly same role are grouped together and named Consumer Group.
Consumer Group is a great concept with which achieving goals of load-balance and fault-tolerance, in terms of message consuming, is super easy.
Warning: consumer instances of a consumer group must have exactly the same topic subscription(s).
The gist: consumers with exactly the same role are grouped together into a consumer group; the point of grouping is load balancing and fault tolerance (failover); and the warning says that consumer instances in the same group must have exactly the same topic subscription(s).
The comment on the consumerGroup field in the source code says much the same:
Consumers of the same role is required to have exactly same subscriptions and consumerGroup to correctly achieve load balance. It's required and needs to be globally unique.
From these definitions we know for certain that consumers in the same group must subscribe to the same topics. But a subscription consists of more than a topic: it also carries tags. Does the tag matter too? I could not find any documentation on this, so I ran a test myself.
Test goal: determine whether, when consumers in the same consumer group subscribe to the same topic with different tags, consumption and load balancing are affected.
Test plan:
1. Create a producer and send 100 messages;
2. Create two consumers:

| instanceName | groupName | Topic | Tag |
|---|---|---|---|
| A | GroupA | TopicA | TagA |
| B | GroupA | TopicA | TagB |

3. Observe which messages get consumed and how the queues are allocated.
Sending the messages:
```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.remoting.common.RemotingHelper;

public class SimpleProducer {
    public static void sendSync() throws Exception {
        // Use the public producer API directly instead of going through the
        // internal MQClientManager/MQClientInstance classes.
        DefaultMQProducer producer = new DefaultMQProducer("GroupA");
        producer.setNamesrvAddr("localhost:9876");
        producer.start();
        for (int i = 0; i < 100; i++) {
            Message msg = new Message("TopicA", "TagA",
                    ("Hello mq" + i).getBytes(RemotingHelper.DEFAULT_CHARSET));
            SendResult sendResult = producer.send(msg); // synchronous send
            System.out.println("send " + i + " , result:" + sendResult.getMsgId());
        }
        producer.shutdown();
    }
}
```
The consumer:
```java
import java.util.List;

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerOrderly;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.MessageExt;

public class SimpleConsumer {
    public static void pushConsume(final String instanceName, final String group,
                                   final String topic, final String tag) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer(group);
        consumer.setNamesrvAddr("localhost:9876");
        consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
        // instanceName must differ between the two consumers, otherwise they
        // would share a single MQClientInstance inside the same JVM.
        consumer.setInstanceName(instanceName);
        consumer.subscribe(topic, tag);
        consumer.registerMessageListener(new MessageListenerOrderly() {
            @Override
            public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
                for (MessageExt msg : msgs) {
                    System.out.println("[" + instanceName + "," + group + "," + topic + "," + tag
                            + "] consume: " + new String(msg.getBody()));
                }
                return ConsumeOrderlyStatus.SUCCESS;
            }
        });
        consumer.start();
    }
}
```
The test driver:
```java
public static void main(String[] args) throws Exception {
    try {
        SimpleProducer.sendSync();
    } catch (Exception e) {
        e.printStackTrace();
    }
    Thread t2 = new Thread() {
        @Override
        public void run() {
            try {
                SimpleConsumer.pushConsume("A", "GroupA", "TopicA", "TagA");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    t2.start();
    Thread t3 = new Thread() {
        @Override
        public void run() {
            try {
                SimpleConsumer.pushConsume("B", "GroupA", "TopicA", "TagB");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    t3.start();
    t2.join();
    t3.join();
}
```
Results:
....
send 97 , result:AC1100013B2118B4AAC2808264CC0061
send 98 , result:AC1100013B2118B4AAC2808264CD0062
send 99 , result:AC1100013B2118B4AAC2808264CE0063
[A,GroupA,TopicA,TagA] consume: Hello mq1
[A,GroupA,TopicA,TagA] consume: Hello mq2
[A,GroupA,TopicA,TagA] consume: Hello mq5
....
Check the queue distribution:
Consumer A:
./bin/mqadmin consumerStatus -n "localhost:9876" -g "GroupA" -i "172.17.0.1@A"
#Consumer MQ Detail#
#Topic #Broker Name #QID #ProcessQueueInfo
%RETRY%GroupA mo-x 0 ProcessQueueInfo [commitOffset=0, cachedMsgMinOffset=0, cachedMsgMaxOffset=0, cachedMsgCount=0, cachedMsgSizeInMiB=0, transactionMsgMinOffset=0, transactionMsgMaxOffset=0, transactionMsgCount=0, locked=true, tryUnlockTimes=0, lastLockTimestamp=20180625231130503, droped=false, lastPullTimestamp=20180625231132580, lastConsumeTimestamp=20180625231129554]
TopicA mo-x 0 ProcessQueueInfo [commitOffset=50, cachedMsgMinOffset=0, cachedMsgMaxOffset=0, cachedMsgCount=0, cachedMsgSizeInMiB=0, transactionMsgMinOffset=0, transactionMsgMaxOffset=0, transactionMsgCount=0, locked=true, tryUnlockTimes=0, lastLockTimestamp=20180625231130503, droped=true, lastPullTimestamp=20180625231132552, lastConsumeTimestamp=20180625231129547]
TopicA mo-x 1 ProcessQueueInfo [commitOffset=50, cachedMsgMinOffset=0, cachedMsgMaxOffset=0, cachedMsgCount=0, cachedMsgSizeInMiB=0, transactionMsgMinOffset=0, transactionMsgMaxOffset=0, transactionMsgCount=0, locked=true, tryUnlockTimes=0, lastLockTimestamp=20180625231130503, droped=true, lastPullTimestamp=20180625231132572, lastConsumeTimestamp=20180625231129549]
Consumer B:
./bin/mqadmin consumerStatus -n "localhost:9876" -g "GroupA" -i "172.17.0.1@B"
#Consumer MQ Detail#
#Topic #Broker Name #QID #ProcessQueueInfo
TopicA mo-x 2 ProcessQueueInfo [commitOffset=25, cachedMsgMinOffset=0, cachedMsgMaxOffset=0, cachedMsgCount=0, cachedMsgSizeInMiB=0, transactionMsgMinOffset=0, transactionMsgMaxOffset=0, transactionMsgCount=0, locked=true, tryUnlockTimes=0, lastLockTimestamp=20180625231150500, droped=false, lastPullTimestamp=20180625231209016, lastConsumeTimestamp=20180625231149571]
TopicA mo-x 3 ProcessQueueInfo [commitOffset=25, cachedMsgMinOffset=0, cachedMsgMaxOffset=0, cachedMsgCount=0, cachedMsgSizeInMiB=0, transactionMsgMinOffset=0, transactionMsgMaxOffset=0, transactionMsgCount=0, locked=true, tryUnlockTimes=0, lastLockTimestamp=20180625231150500, droped=false, lastPullTimestamp=20180625231209016, lastConsumeTimestamp=20180625231149565]
./bin/mqadmin consumerProgress -n localhost:9876 -g "GroupA"
#Topic #Broker Name #QID #Broker Offset #Consumer Offset #Client IP #Diff #LastTime
%RETRY%GroupA mo-x 0 0 0 172.17.0.1 0 1970-01-01 08:00:00
TopicA mo-x 0 25 25 172.17.0.1 0 2018-06-25 23:11:29
TopicA mo-x 1 25 25 172.17.0.1 0 2018-06-25 23:11:29
TopicA mo-x 2 25 25 172.17.0.1 0 2018-06-25 23:11:29
TopicA mo-x 3 25 25 172.17.0.1 0 2018-06-25 23:11:29
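The even 25-per-queue split above comes from RocketMQ's default allocation strategy, AllocateMessageQueueAveragely, which divides a topic's queues across the group's sorted client IDs. The class below is my own simplified local sketch of that averaging logic (illustrative names, not the real RocketMQ class), just to show why A got queues 0-1 and B got queues 2-3:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of RocketMQ's default averaging allocation: each consumer
// computes its own slice of the queue list from its index among the group's
// sorted client IDs. Illustrative code, not the real AllocateMessageQueueAveragely.
public class AverageAllocation {
    public static List<Integer> allocate(int queueCount, List<String> cids, String currentCid) {
        int index = cids.indexOf(currentCid);
        int mod = queueCount % cids.size();
        // Each consumer takes queueCount / cids.size() queues; the first
        // `mod` consumers take one extra when the division is uneven.
        int average = queueCount <= cids.size() ? 1
                : (mod > 0 && index < mod ? queueCount / cids.size() + 1
                                          : queueCount / cids.size());
        int start = (mod > 0 && index < mod) ? index * average : index * average + mod;
        List<Integer> result = new ArrayList<>();
        for (int i = start; i < Math.min(start + average, queueCount); i++) {
            result.add(i); // queue id assigned to this consumer
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> cids = List.of("172.17.0.1@A", "172.17.0.1@B");
        System.out.println("A -> " + allocate(4, cids, "172.17.0.1@A")); // A -> [0, 1]
        System.out.println("B -> " + allocate(4, cids, "172.17.0.1@B")); // B -> [2, 3]
    }
}
```

With 4 queues and 2 instances, each instance gets exactly 2 queues, matching the consumerStatus output.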
Test result summary:
1. The producer sent 100 TagA messages to TopicA.
2. Consumers A and B are both in GroupA, and both subscribe to TopicA.
3. Consumer A subscribes to TagA; consumer B subscribes to TagB.
4. Consumer A received only part of the messages.
5. Consumer A was allocated two of GroupA's TopicA queues.
6. Consumer B was allocated the other two TopicA queues.
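Why did consumer A see only about half of the messages while consumerProgress shows diff 0 on every queue? With TAG-type subscriptions, messages on a queue are filtered against the subscription of whichever consumer that queue is allocated to, and the consume offset advances past non-matching messages. A toy simulation of that behavior (my own made-up names, not real RocketMQ code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of per-queue tag filtering: only messages whose tag matches the
// pulling consumer's subscription are delivered, but the offset still moves
// past every message in the queue, matched or not.
public class TagFilterSimulation {
    static List<String> pull(List<String[]> queue, String subscribedTag) {
        List<String> delivered = new ArrayList<>();
        for (String[] msg : queue) {              // msg = {tag, body}
            if (msg[0].equals(subscribedTag)) {
                delivered.add(msg[1]);
            }
            // offset advances here for every message, delivered or skipped
        }
        return delivered;
    }

    public static void main(String[] args) {
        // A queue allocated to consumer B holds TagA messages only.
        List<String[]> queue2 = List.of(
                new String[]{"TagA", "Hello mq3"},
                new String[]{"TagA", "Hello mq7"});
        // B subscribed to TagB, so nothing is delivered, yet the offset
        // ends up past both messages (diff 0 in consumerProgress).
        System.out.println(pull(queue2, "TagB")); // prints []
    }
}
```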
Conclusion:
Tags do affect consumers in the same group subscribing to the same topic. When the tags differ, consumption becomes inconsistent: the TagA messages that land on the queues allocated to the TagB consumer are filtered out against its subscription, their offsets are committed anyway, and they are never actually consumed.
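The practical takeaway is that every consumer instance in a group must use an identical subscription expression. If one group genuinely needs both tags, each instance can subscribe to the union and branch on the tag inside the listener. A one-line sketch against the SimpleConsumer code above ("||" is RocketMQ's tag-expression OR):

```java
// Identical subscription for every instance of GroupA; dispatch on
// msg.getTags() in the listener if the two tag streams need different handling.
consumer.subscribe("TopicA", "TagA || TagB");
```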