Notes on a Kafka concurrency configuration pitfall: javax.management.InstanceAlreadyExistsException

12:51:28.426 [pool-1-thread-218] WARN org.apache.kafka.common.utils.AppInfoParser - Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=DemoProducer
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
	at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:451)
	at org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:304)
	at com.study.kafka.ProducerNew.(ProducerNew.java:36)
	at com.study.kafka.ProducerNew.run(ProducerNew.java:76)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

The above is a problem I ran into while using Spring Kafka; this post is a record of it. From the error log, the exception is thrown when AppInfoParser.registerAppInfo is called, and the stack trace points to Repository.addMBean().

From the addMBean() source one can see that each clientId maps to exactly one registered MBean: duplicate ObjectNames are rejected with InstanceAlreadyExistsException.
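The behavior can be reproduced with plain JMX, independent of Kafka: registering two MBeans under the same ObjectName makes the platform MBean server throw InstanceAlreadyExistsException. A minimal sketch (the AppInfo class below is a stand-in for illustration, not Kafka's actual class):

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DuplicateMBeanDemo {
    // A trivial standard MBean: JMX requires an interface named <ClassName>MBean
    public interface AppInfoMBean { String getVersion(); }
    public static class AppInfo implements AppInfoMBean {
        public String getVersion() { return "demo"; }
    }

    // Registers the same ObjectName twice; returns true if the duplicate was rejected
    public static boolean duplicateRejected() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Same ObjectName pattern as in the error log above
        ObjectName name = new ObjectName("kafka.producer:type=app-info,id=DemoProducer");
        server.registerMBean(new AppInfo(), name);      // first registration succeeds
        try {
            server.registerMBean(new AppInfo(), name);  // same name again
            return false;
        } catch (InstanceAlreadyExistsException e) {
            return true;                                // the exact exception from the log
        } finally {
            server.unregisterMBean(name);               // clean up
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("duplicate rejected: " + duplicateRejected());
    }
}
```

This is exactly what happens when several KafkaProducer instances share one configured client.id: they all try to register kafka.producer:type=app-info,id=&lt;client.id&gt;.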

Finally, looking at ProducerNew, the cause was that I had configured client.id myself. If it is left unset, the KafkaProducer source shows that a clientId is generated for each producer: "producer-" plus an atomically incremented sequence number.

    String clientId = config.getString("client.id");
    if (clientId.length() <= 0) {
        clientId = "producer-" + PRODUCER_CLIENT_ID_SEQUENCE.getAndIncrement();
    }
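The default naming scheme above can be sketched in isolation: every producer that does not set client.id draws from a shared atomic counter, so concurrent constructions never collide (class and method names here are illustrative, not Kafka's):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ClientIdDemo {
    // Mirrors KafkaProducer's PRODUCER_CLIENT_ID_SEQUENCE, which starts at 1
    private static final AtomicInteger PRODUCER_CLIENT_ID_SEQUENCE = new AtomicInteger(1);

    // Fall back to "producer-<n>" only when no client.id was configured
    public static String clientId(String configured) {
        if (configured == null || configured.isEmpty()) {
            return "producer-" + PRODUCER_CLIENT_ID_SEQUENCE.getAndIncrement();
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(clientId(""));      // producer-1
        System.out.println(clientId(""));      // producer-2
        System.out.println(clientId("fixed")); // fixed -- the same for every thread
    }
}
```

The last line shows the pitfall: a hard-coded client.id is returned unchanged for every caller, so with concurrency greater than 1 the MBean name collides.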
From this we can conclude: the problem above appears when the concurrency is greater than 1 and Kafka's client.id property is configured at the same time; with a concurrency of 1 the log does not show up. Solution: leave client.id unset, and Kafka will generate a unique id for each producer by default.
Original article: https://blog.csdn.net/qq_38286618/article/details/103443896

If you really need a custom client.id, you can adjust the configuration yourself at the point of use (independent of the config file):

	@Bean("kafkaConsumerProp")
	@ConfigurationProperties(prefix = "daemon.consumer.kafka.properties")
	public Properties getKafkaConsumeProperties() {
		return new Properties();
	}

	@Autowired
	@Qualifier("kafkaConsumerProp")
	private Properties properties;

	@PostConstruct
	private void init() {
		rateLimiter = RateLimiter.create(ratelimiterProperties.getUpdateReportNotify());
		// Work on a copy so the shared Properties bean is left untouched
		propertiesCopy = (Properties) properties.clone();
		propertiesCopy.put("key.deserializer", org.apache.kafka.common.serialization.StringDeserializer.class);
		propertiesCopy.put("value.deserializer", org.apache.kafka.common.serialization.ByteArrayDeserializer.class);
		String bootstrapServers = StringUtils.collectionToDelimitedString(kafkaProperties.getBootstrapServers(), ",");
		propertiesCopy.put("bootstrap.servers", bootstrapServers);
		// Append a per-purpose suffix so this consumer's client.id and group.id are unique
		propertiesCopy.put("client.id", properties.get("client.id") + "-update-report-notify");
		propertiesCopy.put("group.id", properties.get("group.id") + "-update-report-notify");
		startPoll();
	}
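Note that the fixed suffix above only works because a single consumer is created per purpose. If several consumers are built from the same base Properties in one JVM, you still need a per-instance component in the id; one option is an atomic counter, sketched below (the class and method names are assumptions for illustration):

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueClientId {
    // One sequence shared by all consumers created in this JVM
    private static final AtomicInteger INSTANCE_SEQ = new AtomicInteger(1);

    // Copy the shared config and give this consumer a unique client.id
    public static Properties withUniqueClientId(Properties base, String suffix) {
        Properties copy = (Properties) base.clone();
        copy.put("client.id",
                base.get("client.id") + "-" + suffix + "-" + INSTANCE_SEQ.getAndIncrement());
        return copy;
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.put("client.id", "daemon-consumer");
        // Each call yields a distinct client.id, so the MBean names never collide
        System.out.println(withUniqueClientId(base, "update-report-notify").get("client.id"));
        System.out.println(withUniqueClientId(base, "update-report-notify").get("client.id"));
    }
}
```

This keeps the human-readable prefix from the config file while guaranteeing the uniqueness that the JMX registration requires.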
