ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: cause and solution

 

Kafka version: kafka_2.12-2.3.0

The full error:

2019-12-10 15:27:36.006[main] ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator[843] [Consumer clientId=consumer-1, groupId=01] Offset commit failed on partition test-0 at offset 175256: The coordinator is not aware of this member.
2019-12-10 15:27:36.010[main] WARN  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator[737] [Consumer clientId=consumer-1, groupId=01] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=175256, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
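
As a side note, the warning itself suggests the usual poll-loop tuning: give the consumer more time between poll() calls, or make each batch smaller. A minimal sketch of the settings it refers to, with purely illustrative values (assumptions, not recommendations):

Properties tuning = new Properties();
// Allow more time between successive poll() calls before the member is considered failed
tuning.put("max.poll.interval.ms", "600000"); // illustrative value; the default is 300000 (5 minutes)
// Return fewer records per poll() so each batch finishes sooner
tuning.put("max.poll.records", "100");        // illustrative value; the default is 500
// session.timeout.ms bounds how long the coordinator waits for heartbeats
tuning.put("session.timeout.ms", "10000");    // illustrative value

In this case, though, tuning these settings does not help; the real cause is different, as explained below.
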
Cause:

Note that it isn't possible to mix manual partition assignment (i.e. using assign) with dynamic partition assignment through topic subscription (i.e. using subscribe).

In other words: manual partition assignment (i.e. assign) and dynamic partition assignment via topic subscription (i.e. subscribe) cannot be mixed.

Reproducing the error:

First, start a consumer that uses assign, configured as follows:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "10.20.87.23:9092");
props.put("group.id", "01");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
String topic = "test";
TopicPartition partition0 = new TopicPartition(topic, 0);
// Manually assign partition 0; this consumer never joins the group via the coordinator.
consumer.assign(Arrays.asList(partition0));

Then start a second consumer that uses subscribe, configured as follows:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "10.20.87.23:9092");
props.put("group.id", "01"); // same group.id as the assign() consumer above
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// Subscribe to the topic; partition assignment is negotiated through the group coordinator.
consumer.subscribe(Arrays.asList("test"));
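
Both snippets would normally be followed by a poll loop. A minimal sketch, assuming a simple print-only processing step:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

while (true) {
    // With enable.auto.commit=true, offsets are committed in the background between poll() calls
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("partition=%d, offset=%d, value=%s%n",
                record.partition(), record.offset(), record.value());
    }
}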

When the second consumer starts, the error above is logged. Both consumers keep running nonetheless, but each of them consumes all of partition 0.

Solution:

Change the second consumer's group.id to 02 (that is, anything different from the group.id used by the first, assign-based consumer), restart it, and the error disappears.
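
For reference, the only change in the second consumer is the group.id; the rest of the configuration stays the same as above:

Properties props = new Properties();
props.put("bootstrap.servers", "10.20.87.23:9092");
props.put("group.id", "02"); // no longer collides with the group used by the assign() consumer
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("test"));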
