spring-kafka consumer integration in practice (2020 edition): the raw listener approach, without the @KafkaListener annotation

spring-kafka consumer-side integration code

1. pom file dependencies


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.0.RELEASE</version>
    </parent>
    <groupId>com.spring.shiro</groupId>
    <artifactId>shiro-springboot</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>shiro-springboot</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>5.2.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.11.0</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.11.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>2.5.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.0.1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

2. application.properties

#################consumer configuration (start)#################
#If 'enable.auto.commit' is true, the frequency (in milliseconds) at which consumer offsets are auto-committed to Kafka. Default: 5000.
spring.kafka.consumer.auto-commit-interval=5000

#What to do when there is no initial offset in Kafka, or the current offset no longer exists on the server.
#Default: latest, i.e. automatically reset the offset to the latest offset.
#Valid values: latest, earliest, none

#spring.kafka.consumer.auto-offset-reset=latest

#Comma-separated list of host:port pairs used to establish the initial connection to the Kafka cluster.
spring.kafka.consumer.bootstrap-servers=localhost:9092

#ID passed to the server when making requests; used for server-side logging.
spring.kafka.consumer.client-id=test-id

#If true, the consumer's offsets are periodically committed in the background. Default: true.
spring.kafka.consumer.enable-auto-commit=true

#Maximum time (in milliseconds) the server blocks before answering a fetch request when there is not enough data
#to satisfy the requirement given by "fetch.min.bytes". Default: 500.
#spring.kafka.consumer.fetch-max-wait=

#Minimum amount of data (in bytes) the server should return for a fetch request. Default: 1. Maps to the Kafka parameter fetch.min.bytes.
#spring.kafka.consumer.fetch-min-size=

#Unique string identifying the consumer group this consumer belongs to.
spring.kafka.consumer.group-id=venus

#Expected time (in milliseconds) between heartbeats to the consumer coordinator. Default: 3000.
spring.kafka.consumer.heartbeat-interval=3000

#Deserializer class for keys; implementations implement the interface org.apache.kafka.common.serialization.Deserializer.
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer

#Deserializer class for values; implementations implement the interface org.apache.kafka.common.serialization.Deserializer.
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

#Maximum number of records returned by a single call to poll(). Default: 500.
spring.kafka.consumer.max-poll-records=500

#################consumer configuration (end)#################
#################producer configuration (start)#################
#Number of acknowledgments the producer requires the leader to have received before considering a request complete;
#this controls the durability of sent records. Possible values:
#acks=0: the producer does not wait for any acknowledgment from the server; the record is immediately added to the socket buffer and considered sent. There is no guarantee the server received the record, the retries configuration has no effect (the client generally won't learn of any failure), and the offset returned for each record is always -1.
#acks=1: the leader writes the record to its local log and responds without waiting for full acknowledgment from all replica servers. If the leader fails right after acknowledging the record but before the replicas have copied it, the record is lost.
#acks=all: the leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees the record is not lost as long as at least one in-sync replica stays alive; it is the strongest guarantee and is equivalent to acks=-1.
#Valid values: all, -1, 0, 1
#spring.kafka.producer.acks=1

#Whenever multiple records are sent to the same partition, the producer tries to batch them together into fewer requests.
#This helps performance on both the client and the server. This setting controls the default batch size (in bytes). Default: 16384.
#spring.kafka.producer.batch-size=16384

#Comma-separated list of host:port pairs used to establish the initial connection to the Kafka cluster.
#spring.kafka.producer.bootstrap-servers=

#Total bytes of memory the producer can use to buffer records waiting to be sent to the server. Default: 33554432.
#spring.kafka.producer.buffer-memory=33554432

#ID passed to the server when making requests; used for server-side logging.
#spring.kafka.producer.client-id=

#Compression type for all data generated by the producer. Accepts the standard codecs ('gzip', 'snappy', 'lz4'),
#plus 'uncompressed' (no compression) and 'producer' (keep the original codec set by the producer).
#Default: producer.
#spring.kafka.producer.compression-type=producer

#Serializer class for keys; implementations implement the interface org.apache.kafka.common.serialization.Serializer.
#spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer

#Serializer class for values; implementations implement the interface org.apache.kafka.common.serialization.Serializer.
#spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#When greater than zero, enables retrying of failed sends, up to this many attempts.
#spring.kafka.producer.retries=
#################producer configuration (end)#################
#################listener configuration (start)#################
#AckMode of the listener; see https://docs.spring.io/spring-kafka/reference/htmlsingle/#committing-offsets
#Only takes effect when enable.auto.commit is set to false; ignored when it is true.
#spring.kafka.listener.ack-mode=

#Number of threads to run in the listener containers.
#spring.kafka.listener.concurrency=

#Timeout (in milliseconds) to use when polling the consumer.
#spring.kafka.listener.poll-timeout=

#Number of records between offset commits when ackMode is "COUNT" or "COUNT_TIME".
#spring.kafka.listener.ack-count=

#Time (in milliseconds) between offset commits when ackMode is "TIME" or "COUNT_TIME".
#spring.kafka.listener.ack-time=
#################listener configuration (end)#################
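
The ack-mode settings above only take effect with enable-auto-commit=false. As a sketch of what manual acknowledgment looks like on the consumer side (this class is illustrative and not part of the demo project), a listener can implement AcknowledgingMessageListener and commit each offset itself:

// Illustrative sketch only, assuming spring.kafka.consumer.enable-auto-commit=false
// and the container ack mode set to ContainerProperties.AckMode.MANUAL_IMMEDIATE.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.AcknowledgingMessageListener;
import org.springframework.kafka.support.Acknowledgment;

public class ManualAckListener implements AcknowledgingMessageListener<String, String> {
    @Override
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        System.out.println("received: " + record.value());
        ack.acknowledge(); // commits this record's offset immediately under MANUAL_IMMEDIATE
    }
}

Wiring it up means setting this listener on the ContainerProperties and switching the ack mode, as in the commented-out MANUAL_IMMEDIATE line in section 3.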

 

3. The Kafka configuration class: KafkaConfig

package com.spring.shiro.shirospringboot.config;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaConfig {
    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String servers;
    // whether to auto-commit offsets; false is recommended
    @Value("${spring.kafka.consumer.enable-auto-commit}")
    private boolean enableAutoCommit;
    /**
     * not configured in application.properties
     */
    /* @Value("${kafka.consumer.session.timeout}") */
    private String sessionTimeout = "60000";
    @Value("${spring.kafka.consumer.auto-commit-interval}")
    private String autoCommitInterval;
    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;
    //@Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset = "latest";
    /**
     * not configured in application.properties
     */
    /* @Value("${kafka.consumer.concurrency}") */
    private int concurrency = 1;

    /**
     * Kafka consumer configuration parameters.
     */
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        // propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        // maximum number of records fetched per poll
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
        return propsMap;
    }

    /**
     * Kafka consumer factory; built from the consumerConfigs() map.
     */
    @Bean
    public DefaultKafkaConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    /**
     * Kafka listener container factory; needs the consumerFactory injected.
     */
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // Poll timeout (can also, loosely, be seen as delayed consumption).
        //factory.getContainerProperties().setPollTimeout(1500);
        // Concurrency: at most the number of partitions of the topic; also set the per-poll record count in the consumerFactory.
        //factory.setConcurrency(concurrency);
        // Enable batch listening.
        //factory.setBatchListener(true);
        // Manual acknowledgment for listeners built by this factory: either use the setting below or the enableAutoCommit parameter.
        //factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);

        // Reply template, loosely comparable to RabbitMQ's dead-letter exchange (though not the same);
        // this class only configures the consumer side, so there is no KafkaTemplate here.
        //factory.setReplyTemplate(kafkaTemplate());

        // Disable auto-startup, e.g. to let messages accumulate on the broker first and persist them on a fixed
        // schedule; this carries a risk of losing messages.
        //factory.setAutoStartup(false);

        // Record filtering: used together with a RecordFilterStrategy; filtered records are discarded
        // (see the RecordFilterStrategy sketch after this class).
        //factory.setAckDiscarded(true);
        //factory.setRecordFilterStrategy(kafkaRecordFilterStrategy);
        return factory;
    }

    /**
     * Kafka listener demo: messages are consumed directly inside this bean's MessageListener.
     */
    @Bean
    public KafkaMessageListenerContainer<String, String> demoListenerContainer() {
        ContainerProperties properties = new ContainerProperties("topic.xuj");

        properties.setGroupId(groupId);

        properties.setMessageListener(new MessageListener<String, String>() {
            private final Logger log = LoggerFactory.getLogger(this.getClass());

            @Override
            public void onMessage(ConsumerRecord<String, String> record) {
                log.info("topic.xuj receive : " + record.toString());
            }
        });

        return new KafkaMessageListenerContainer<>(consumerFactory(), properties);
    }
}
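
The commented-out filter lines in kafkaListenerContainerFactory() assume a RecordFilterStrategy bean named kafkaRecordFilterStrategy, which is not part of the original class. A minimal sketch of one (the null-value check is only an example condition):

import org.springframework.kafka.listener.adapter.RecordFilterStrategy;

// Sketch only: returning true from the strategy discards the record;
// with factory.setAckDiscarded(true), discarded records are still acknowledged.
@Bean
public RecordFilterStrategy<String, String> kafkaRecordFilterStrategy() {
    return record -> record.value() == null; // example: drop records whose value is null
}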

4. The main class: ShiroSpringbootApplication (the name doesn't matter; I didn't rename it, since this project was originally for testing Shiro permissions)

package com.spring.shiro.shirospringboot;

import com.spring.shiro.shirospringboot.config.ApplicationListenerTest;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;

/**
 * When no database connection is needed, add exclude = {DataSourceAutoConfiguration.class}
 * so that starting the Spring Boot project does not pull the data source auto-configuration
 * into the Spring container.
 */
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class ShiroSpringbootApplication {
    public static void main(String[] args) {
        /*
         * The code below is one of the ways to start the application context; I use it to test
         * other features. When testing only the Kafka consumer, this one-liner is enough:
         * SpringApplication.run(ShiroSpringbootApplication.class, args);
         */
        SpringApplication application = new SpringApplication(ShiroSpringbootApplication.class);
        application.addListeners(new ApplicationListenerTest());
        ConfigurableApplicationContext context = application.run(args);
        // publish events
        /*for (int i = 0; i < 5; i++) {
            context.publishEvent(new ApplicationEventTest(new Object()));
        }*/
        //context.close();
    }
}
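
ApplicationListenerTest belongs to the event-testing experiments, not to the Kafka demo, and its source is not shown in this post. A minimal sketch of such a listener, assuming it only logs application events, might look like this:

package com.spring.shiro.shirospringboot.config;

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

// Hypothetical reconstruction: listeners registered via SpringApplication.addListeners()
// receive the Spring Boot lifecycle events (ApplicationStartingEvent, ApplicationReadyEvent, ...).
public class ApplicationListenerTest implements ApplicationListener<ApplicationEvent> {
    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        System.out.println("application event: " + event.getClass().getSimpleName());
    }
}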

5. Sending a test message to Kafka through a web endpoint. Any other way of producing works too, as long as the message is actually delivered to the Kafka broker.

KafkaSendMessage is the controller that sends the message; here it is in full:
package com.spring.shiro.shirospringboot.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaSendMessage {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/sendMessage")
    public String sendMessage(String message) {
        // KafkaTemplate is provided by Spring Boot's Kafka auto-configuration (KafkaAutoConfiguration)
        kafkaTemplate.send("topic.xuj", message);
        return message + " sent into topic.xuj successfully";
    }
}
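
Note that kafkaTemplate.send() is asynchronous: the controller returns before the broker has acknowledged the record. To actually verify delivery, a callback can be attached to the ListenableFuture that send() returns (a sketch, using the spring-kafka 2.5 API):

// Sketch: confirm delivery asynchronously instead of fire-and-forget.
kafkaTemplate.send("topic.xuj", message).addCallback(
        result -> System.out.println("sent to " + result.getRecordMetadata().topic()
                + "-" + result.getRecordMetadata().partition()
                + " @ offset " + result.getRecordMetadata().offset()),
        ex -> System.err.println("send failed: " + ex.getMessage()));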

6. All of the above code has been tested and runs correctly, but the Kafka service must be started first; otherwise the Java program cannot connect to Kafka and reports connection failures on the console. Below are the commands for starting Kafka locally on Windows; my Windows Kafka is kafka-10.0.1 (i.e. Kafka 0.10.0.1), using the ZooKeeper bundled with Kafka.

Commands to start the Kafka service:

In a console window, cd to D:\xj-Java\idea\kafka-10.0.1 and run:

bin\windows\zookeeper-server-start.bat config\zookeeper.properties   (must be started first)
bin\windows\kafka-server-start.bat config\server.properties
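
If the broker does not auto-create topics, the test topic can be created manually before starting the consumer (command syntax for this 0.10.x Kafka; the --zookeeper address is the embedded ZooKeeper started above):

bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic.xuj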

After the consumer listener has started, a healthy service looks like this:

D:\xj-Java\JDK\bin\java.exe -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:64673,suspend=y,server=n -XX:TieredStopAtLevel=1 -noverify -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true -javaagent:C:\Users\lj\.IntelliJIdea2019.3\system\captureAgent\debugger-agent.jar -Dfile.encoding=UTF-8 -classpath "...(JDK jars, target\classes and the Maven dependency jars, omitted here for brevity)..." com.spring.shiro.shirospringboot.ShiroSpringbootApplication
Connected to the target VM, address: '127.0.0.1:64673', transport: 'socket'

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.0.RELEASE)

2020-05-30 23:15:05.763  INFO 24696 --- [           main] c.s.s.s.ShiroSpringbootApplication       : Starting ShiroSpringbootApplication on xujiang with PID 24696 (C:\Users\lj\IdeaProjects\shiro-springboot\target\classes started by lj in C:\Users\lj\IdeaProjects\shiro-springboot)
2020-05-30 23:15:05.768  INFO 24696 --- [           main] c.s.s.s.ShiroSpringbootApplication       : No active profile set, falling back to default profiles: default
2020-05-30 23:15:07.255  INFO 24696 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2020-05-30 23:15:07.268  INFO 24696 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2020-05-30 23:15:07.269  INFO 24696 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.35]
2020-05-30 23:15:07.271  INFO 24696 --- [           main] o.a.catalina.core.AprLifecycleListener   : Loaded Apache Tomcat Native library [1.2.23] using APR version [1.7.0].
2020-05-30 23:15:07.271  INFO 24696 --- [           main] o.a.catalina.core.AprLifecycleListener   : APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
2020-05-30 23:15:07.271  INFO 24696 --- [           main] o.a.catalina.core.AprLifecycleListener   : APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
2020-05-30 23:15:07.276  INFO 24696 --- [           main] o.a.catalina.core.AprLifecycleListener   : OpenSSL successfully initialized [OpenSSL 1.1.1c  28 May 2019]
2020-05-30 23:15:07.396  INFO 24696 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2020-05-30 23:15:07.396  INFO 24696 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1560 ms
2020-05-30 23:15:07.758  INFO 24696 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-05-30 23:15:08.022  INFO 24696 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.dns.lookup = default
    client.id = 
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = venus
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 50
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2020-05-30 23:15:08.112  INFO 24696 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.5.0
2020-05-30 23:15:08.113  INFO 24696 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 66563e712b0b9f84
2020-05-30 23:15:08.113  INFO 24696 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1590851708110
2020-05-30 23:15:08.117  INFO 24696 --- [           main] o.a.k.clients.consumer.KafkaConsumer     : [Consumer clientId=consumer-venus-1, groupId=venus] Subscribed to topic(s): topic.xuj
2020-05-30 23:15:08.122  INFO 24696 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService
2020-05-30 23:15:08.170  INFO 24696 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-05-30 23:15:08.181  INFO 24696 --- [           main] c.s.s.s.ShiroSpringbootApplication       : Started ShiroSpringbootApplication in 3.103 seconds (JVM running for 4.437)
2020-05-30 23:15:08.595  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2020-05-30 23:15:08.598  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] (Re-)joining group
2020-05-30 23:15:08.632  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] Finished assignment for group at generation 1: {consumer-venus-1-c83b1944-a620-4e9c-a88c-ec2f3fd6307f=Assignment(partitions=[topic.xuj-0])}
2020-05-30 23:15:08.651  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] Successfully joined group with generation 1
2020-05-30 23:15:08.656  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] Adding newly assigned partitions: topic.xuj-0
2020-05-30 23:15:08.656  INFO 24696 --- [erContainer-C-1] o.s.k.l.KafkaMessageListenerContainer    : venus: partitions assigned: [topic.xuj-0]
2020-05-30 23:15:08.671  INFO 24696 --- [erContainer-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-venus-1, groupId=venus] Setting offset for partition topic.xuj-0 to the committed offset FetchPosition{offset=9, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:9092 (id: 0 rack: null)], epoch=absent}}
 

After sending a message through the controller, the log looks like this:


2020-05-30 23:16:21.346  INFO 24696 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2020-05-30 23:16:21.354  INFO 24696 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 8 ms
2020-05-30 23:16:21.403  INFO 24696 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [localhost:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.2
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2020-05-30 23:16:21.430  INFO 24696 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.5.0
2020-05-30 23:16:21.430  INFO 24696 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 66563e712b0b9f84
2020-05-30 23:16:21.430  INFO 24696 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1590851781430
2020-05-30 23:16:21.481  INFO 24696 --- [erContainer-C-1] c.s.s.s.config.KafkaConfig$1             : topic.xuj receive : ConsumerRecord(topic = topic.xuj, partition = 0, leaderEpoch = null, offset = 9, CreateTime = 1590851781443, serialized key size = -1, serialized value size = 12, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = helloWorld15)

As you can see, helloWorld15 is the message I sent via KafkaSendMessage (http://localhost:8080/sendMessage?message=helloWorld15), picked up in the background by the listener.

 

7. This post is simply a record of code I explored outside of work, kept for future reference. Thanks for reading. Next time I'll share Shiro permissions and the @KafkaListener way of consuming messages.

 

 

Note:

Attached are the console outputs of ZooKeeper and Kafka starting up normally:

zookeeper:

Microsoft Windows [Version 10.0.17763.1217]
(c) 2018 Microsoft Corporation. All rights reserved.

D:\xj-Java\idea\kafka-10.0.1>bin\windows\zookeeper-server-start.bat config\zookeeper.properties
'#' is not recognized as an internal or external command,
operable program or batch file.
[2020-05-30 22:23:56,199] INFO Reading configuration from: config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-05-30 22:23:56,202] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-05-30 22:23:56,202] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-05-30 22:23:56,202] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-05-30 22:23:56,202] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-05-30 22:23:56,222] INFO Reading configuration from: config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-05-30 22:23:56,222] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-05-30 22:24:07,267] INFO Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,268] INFO Server environment:host.name=xujiang (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,270] INFO Server environment:java.version=1.8.0_121 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,270] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,270] INFO Server environment:java.home=D:\xj-Java\JDK\jre (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,271] INFO Server environment:java.class.path=D:\xj-Java\idea\kafka-10.0.1\libs\... (the Kafka 0.10.0.1 libs jars, omitted here for brevity) (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,278] INFO Server environment:java.library.path=D:\xj-Java\JDK\bin;... (omitted for brevity) (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,286] INFO Server environment:java.io.tmpdir=C:\Users\lj\AppData\Local\Temp\ (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,287] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,288] INFO Server environment:os.name=Windows 10 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,289] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,292] INFO Server environment:os.version=10.0 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,293] INFO Server environment:user.name=lj (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,294] INFO Server environment:user.home=C:\Users\lj (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,299] INFO Server environment:user.dir=D:\xj-Java\idea\kafka-10.0.1 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,331] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,331] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,337] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:24:07,371] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-05-30 22:25:10,578] INFO Accepted socket connection from /127.0.0.1:62611 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-05-30 22:25:10,588] INFO Client attempting to establish new session at /127.0.0.1:62611 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:25:10,600] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
[2020-05-30 22:25:10,615] INFO Established session 0x17265f7f0e80000 with negotiated timeout 6000 for client /127.0.0.1:62611 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:25:10,667] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80000 type:create cxid:0x5 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:25:10,689] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80000 type:create cxid:0xb zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:25:10,703] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:25:51,352] WARN Exception causing close of session 0x17265f7f0e80000 due to java.io.IOException: An existing connection was forcibly closed by the remote host. (org.apache.zookeeper.server.NIOServerCnxn)
[2020-05-30 22:25:51,353] INFO Closed socket connection for client /127.0.0.1:62611 which had sessionid 0x17265f7f0e80000 (org.apache.zookeeper.server.NIOServerCnxn)
[2020-05-30 22:25:57,001] INFO Expiring session 0x17265f7f0e80000, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:25:57,002] INFO Processed session termination for sessionid: 0x17265f7f0e80000 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:32,456] INFO Accepted socket connection from /127.0.0.1:62705 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-05-30 22:27:32,462] INFO Client attempting to establish new session at /127.0.0.1:62705 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:27:32,464] INFO Established session 0x17265f7f0e80001 with negotiated timeout 6000 for client /127.0.0.1:62705 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-05-30 22:27:32,757] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:setData cxid:0x11 zxid:0x14 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:32,810] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:delete cxid:0x20 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:32,878] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x27 zxid:0x17 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:32,879] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x28 zxid:0x18 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,172] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:setData cxid:0x31 zxid:0x1a txntype:-1 reqpath:n/a Error Path:/config/topics/topic.xuj Error:KeeperErrorCode = NoNode for /config/topics/topic.xuj (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,175] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x33 zxid:0x1b txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,204] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:setData cxid:0x3e zxid:0x1e txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,206] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x3f zxid:0x1f txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,217] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x43 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/brokers/topics/topic.xuj/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/topic.xuj/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,219] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x44 zxid:0x23 txntype:-1 reqpath:n/a Error Path:/brokers/topics/topic.xuj/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/topic.xuj/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,448] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0xa5 zxid:0x27 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/32 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/32 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,453] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0xa7 zxid:0x28 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[... the same user-level KeeperException (NoNode) lines repeat for each of the remaining /brokers/topics/__consumer_offsets/partitions/* znodes as the 50 offset-topic partitions are created ...]
[2020-05-30 22:27:39,822] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x188 zxid:0xa7 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/18 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/18 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,827] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x18e zxid:0xaa txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,833] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x194 zxid:0xad txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/12 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/12 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,839] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x199 zxid:0xb0 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/46 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/46 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,844] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x19e zxid:0xb3 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/43 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/43 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,850] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x1a3 zxid:0xb6 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/1 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,857] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x1a9 zxid:0xb9 txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/26 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/26 (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-05-30 22:27:39,864] INFO Got user-level KeeperException when processing sessionid:0x17265f7f0e80001 type:create cxid:0x1ad zxid:0xbc txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/30 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/30 (org.apache.zookeeper.server.PrepRequestProcessor)
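These INFO-level "NoNode" KeeperExceptions look alarming but are expected: they are logged while the broker registers the 50 partition znodes of the internal __consumer_offsets topic in ZooKeeper for the first time, before the parent paths exist. Startup is unaffected. To double-check that the topic came up correctly, the bundled topic tool can be used (a verification step added here for illustration; on this 0.10.x broker the tool still talks to ZooKeeper):

bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic __consumer_offsets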

 

Console output showing the Kafka server starting normally:

Microsoft Windows [Version 10.0.17763.1217]
(c) 2018 Microsoft Corporation. All rights reserved.

D:\xj-Java\idea\kafka-10.0.1>bin\windows\kafka-server-start.bat config\server.properties
'#' is not recognized as an internal or external command, operable program or batch file.
(This cmd message is harmless: it most likely comes from a '#'-prefixed comment line being handed to the Windows shell, and the broker startup proceeds normally, as the log below shows.)
[2020-05-30 22:27:21,266] INFO KafkaConfig values:
        advertised.host.name = null
        metric.reporters = []
        quota.producer.default = 9223372036854775807
        offsets.topic.num.partitions = 50
        log.flush.interval.messages = 9223372036854775807
        auto.create.topics.enable = true
        controller.socket.timeout.ms = 30000
        log.flush.interval.ms = null
        principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
        replica.socket.receive.buffer.bytes = 65536
        min.insync.replicas = 1
        replica.fetch.wait.max.ms = 500
        num.recovery.threads.per.data.dir = 1
        ssl.keystore.type = JKS
        sasl.mechanism.inter.broker.protocol = GSSAPI
        default.replication.factor = 1
        ssl.truststore.password = null
        log.preallocate = false
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        fetch.purgatory.purge.interval.requests = 1000
        ssl.endpoint.identification.algorithm = null
        replica.socket.timeout.ms = 30000
        message.max.bytes = 1000012
        num.io.threads = 8
        offsets.commit.required.acks = -1
        log.flush.offset.checkpoint.interval.ms = 60000
        delete.topic.enable = false
        quota.window.size.seconds = 1
        ssl.truststore.type = JKS
        offsets.commit.timeout.ms = 5000
        quota.window.num = 11
        zookeeper.connect = localhost:2181
        authorizer.class.name =
        num.replica.fetchers = 1
        log.retention.ms = null
        log.roll.jitter.hours = 0
        log.cleaner.enable = true
        offsets.load.buffer.size = 5242880
        log.cleaner.delete.retention.ms = 86400000
        ssl.client.auth = none
        controlled.shutdown.max.retries = 3
        queued.max.requests = 500
        offsets.topic.replication.factor = 3
        log.cleaner.threads = 1
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        socket.request.max.bytes = 104857600
        ssl.trustmanager.algorithm = PKIX
        zookeeper.session.timeout.ms = 6000
        log.retention.bytes = -1
        log.message.timestamp.type = CreateTime
        sasl.kerberos.min.time.before.relogin = 60000
        zookeeper.set.acl = false
        connections.max.idle.ms = 600000
        offsets.retention.minutes = 1440
        replica.fetch.backoff.ms = 1000
        inter.broker.protocol.version = 0.10.0-IV1
        log.retention.hours = 168
        num.partitions = 1
        broker.id.generation.enable = true
        listeners = PLAINTEXT://localhost:9092
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        log.roll.ms = null
        log.flush.scheduler.interval.ms = 9223372036854775807
        ssl.cipher.suites = null
        log.index.size.max.bytes = 10485760
        ssl.keymanager.algorithm = SunX509
        security.inter.broker.protocol = PLAINTEXT
        replica.fetch.max.bytes = 1048576
        advertised.port = null
        log.cleaner.dedupe.buffer.size = 134217728
        replica.high.watermark.checkpoint.interval.ms = 5000
        log.cleaner.io.buffer.size = 524288
        sasl.kerberos.ticket.renew.window.factor = 0.8
        zookeeper.connection.timeout.ms = 6000
        controlled.shutdown.retry.backoff.ms = 5000
        log.roll.hours = 168
        log.cleanup.policy = delete
        host.name =
        log.roll.jitter.ms = null
        max.connections.per.ip = 2147483647
        offsets.topic.segment.bytes = 104857600
        background.threads = 10
        quota.consumer.default = 9223372036854775807
        request.timeout.ms = 30000
        log.message.format.version = 0.10.0-IV1
        log.index.interval.bytes = 4096
        log.dir = /tmp/kafka-logs
        log.segment.bytes = 1073741824
        log.cleaner.backoff.ms = 15000
        offset.metadata.max.bytes = 4096
        ssl.truststore.location = null
        group.max.session.timeout.ms = 300000
        ssl.keystore.password = null
        zookeeper.sync.time.ms = 2000
        port = 9092
        log.retention.minutes = null
        log.segment.delete.delay.ms = 60000
        log.dirs = /var/local/soft/kafka_2.11-0.10.0.1/kafka-logs
        controlled.shutdown.enable = true
        compression.type = producer
        max.connections.per.ip.overrides =
        log.message.timestamp.difference.max.ms = 9223372036854775807
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        auto.leader.rebalance.enable = true
        leader.imbalance.check.interval.seconds = 300
        log.cleaner.min.cleanable.ratio = 0.5
        replica.lag.time.max.ms = 10000
        num.network.threads = 3
        ssl.key.password = null
        reserved.broker.max.id = 1000
        metrics.num.samples = 2
        socket.send.buffer.bytes = 102400
        ssl.protocol = TLS
        socket.receive.buffer.bytes = 102400
        ssl.keystore.location = null
        replica.fetch.min.bytes = 1
        broker.rack = null
        unclean.leader.election.enable = true
        sasl.enabled.mechanisms = [GSSAPI]
        group.min.session.timeout.ms = 6000
        log.cleaner.io.buffer.load.factor = 0.9
        offsets.retention.check.interval.ms = 600000
        producer.purgatory.purge.interval.requests = 1000
        metrics.sample.window.ms = 30000
        broker.id = 0
        offsets.topic.compression.codec = 0
        log.retention.check.interval.ms = 300000
        advertised.listeners = null
        leader.imbalance.per.broker.percentage = 10
 (kafka.server.KafkaConfig)
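Most of the values in the dump above are Kafka defaults. For this single-node demo, only a handful of entries in config\server.properties actually need attention; a minimal sketch (property names exactly as in the dump, values illustrative):

# config\server.properties -- minimal single-node setup (sketch)
broker.id=0
listeners=PLAINTEXT://localhost:9092
zookeeper.connect=localhost:2181
log.dirs=/var/local/soft/kafka_2.11-0.10.0.1/kafka-logs
num.partitions=1
auto.create.topics.enable=true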
[2020-05-30 22:27:21,322] INFO starting (kafka.server.KafkaServer)
[2020-05-30 22:27:21,331] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-05-30 22:27:21,343] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2020-05-30 22:27:32,388] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,389] INFO Client environment:host.name=xujiang (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,390] INFO Client environment:java.version=1.8.0_121 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,390] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,391] INFO Client environment:java.home=D:\xj-Java\JDK\jre (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,391] INFO Client environment:java.class.path=D:\xj-Java\idea\kafka-10.0.1\libs\aopalliance-repackaged-2.4.0-b34.jar;D:\xj-Java\idea\kafka-10.0.1\libs\argparse4j-0.5.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\connect-api-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\connect-file-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\connect-json-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\connect-runtime-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\guava-18.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\hk2-api-2.4.0-b34.jar;D:\xj-Java\idea\kafka-10.0.1\libs\hk2-locator-2.4.0-b34.jar;D:\xj-Java\idea\kafka-10.0.1\libs\hk2-utils-2.4.0-b34.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-annotations-2.6.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-core-2.6.3.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-databind-2.6.3.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-jaxrs-base-2.6.3.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-jaxrs-json-provider-2.6.3.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jackson-module-jaxb-annotations-2.6.3.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javassist-3.18.2-GA.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javax.annotation-api-1.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javax.inject-1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javax.inject-2.4.0-b34.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javax.servlet-api-3.1.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\javax.ws.rs-api-2.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-client-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-common-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-container-servlet-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-container-servlet-core-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-guava-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-media-jaxb-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jersey-server-2.22.2.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-continuation-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-http-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-io-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-security-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-server-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-servlet-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-servlets-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jetty-util-9.2.15.v20160210.jar;D:\xj-Java\idea\kafka-10.0.1\libs\jopt-simple-4.9.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka-clients-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka-log4j-appender-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka-streams-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka-streams-examples-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka-tools-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1-javadoc.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1-scaladoc.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1-sources.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1-test-sources.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1-test.jar;D:\xj-Java\idea\kafka-10.0.1\libs\kafka_2.11-0.10.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\log4j-1.2.17.jar;D:\xj-Java\idea\kafka-10.0.1\libs\lz4-1.3.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\metrics-core-2.2.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\osgi-resource-locator-1.0.1.jar;D:\xj-Java\idea\kafka-10.0.1\libs\reflections-0.9.10.jar;D:\xj-Java\idea\kafka-10.0.1\libs\rocksdbjni-4.8.0.jar;D:\xj-Java\idea\kafka-10.0.1\libs\scala-library-2.11.8.jar;D:\xj-Java\idea\kafka-10.0.1\libs\scala-parser-combinators_2.11-1.0.4.jar;D:\xj-Java\idea\kafka-10.0.1\libs\slf4j-api-1.7.21.jar;D:\xj-Java\idea\kafka-10.0.1\libs\slf4j-log4j12-1.7.21.jar;D:\xj-Java\idea\kafka-10.0.1\libs\snappy-java-1.1.2.6.jar;D:\xj-Java\idea\kafka-10.0.1\libs\validation-api-1.1.0.Final.jar;D:\xj-Java\idea\kafka-10.0.1\libs\zkclient-0.8.jar;D:\xj-Java\idea\kafka-10.0.1\libs\zookeeper-3.4.6.jar (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,398] INFO Client environment:java.library.path=D:\xj-Java\JDK\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;D:\xj-Java\xftp\;D:\xj-Java\xshell\;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;D:\xj-Java\JDK\bin;D:\xj-Java\tomcat8\apache-tomcat-8.5.53\bin;D:\xj-Java\tomcat8\apache-tomcat-8.5.53\lib;C:\Users\lj\AppData\Local\Microsoft\WindowsApps;;D:\xj-Java\ideaInstall\IntelliJ IDEA 2019.3.3\bin;;. (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,400] INFO Client environment:java.io.tmpdir=C:\Users\lj\AppData\Local\Temp\ (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,404] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,405] INFO Client environment:os.name=Windows 10 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,406] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,406] INFO Client environment:os.version=10.0 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,407] INFO Client environment:user.name=lj (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,407] INFO Client environment:user.home=C:\Users\lj (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,408] INFO Client environment:user.dir=D:\xj-Java\idea\kafka-10.0.1 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,410] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@6eda5c9 (org.apache.zookeeper.ZooKeeper)
[2020-05-30 22:27:32,447] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2020-05-30 22:27:32,454] INFO Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-05-30 22:27:32,457] INFO Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2020-05-30 22:27:32,467] INFO Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x17265f7f0e80001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-05-30 22:27:32,469] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2020-05-30 22:27:32,514] INFO Loading logs. (kafka.log.LogManager)
[2020-05-30 22:27:32,523] INFO Logs loading complete. (kafka.log.LogManager)
[2020-05-30 22:27:32,588] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-05-30 22:27:32,590] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-05-30 22:27:32,595] WARN No meta.properties file under dir D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs\meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-05-30 22:27:32,641] INFO Awaiting socket connections on localhost:9092. (kafka.network.Acceptor)
[2020-05-30 22:27:32,646] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
[2020-05-30 22:27:32,675] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-30 22:27:32,679] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-30 22:27:32,742] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2020-05-30 22:27:32,752] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2020-05-30 22:27:32,753] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2020-05-30 22:27:32,817] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-30 22:27:32,819] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-05-30 22:27:32,830] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:27:32,831] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:27:32,835] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 9 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:27:32,851] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2020-05-30 22:27:32,853] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2020-05-30 22:27:32,856] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2020-05-30 22:27:32,876] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2020-05-30 22:27:32,881] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2020-05-30 22:27:32,883] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(localhost,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2020-05-30 22:27:32,884] WARN No meta.properties file under dir D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs\meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2020-05-30 22:27:32,893] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2020-05-30 22:27:32,905] INFO Kafka version : 0.10.0.1 (org.apache.kafka.common.utils.AppInfoParser)
[2020-05-30 22:27:32,906] INFO Kafka commitId : a7a17cdec9eaa6c5 (org.apache.kafka.common.utils.AppInfoParser)
[2020-05-30 22:27:32,908] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
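The broker is now fully started and listening on localhost:9092; everything logged from here on is presumably triggered by the Spring Boot application (or a console client) connecting and requesting topic metadata. A quick smoke test from a second console is also possible with the bundled scripts (illustrative 0.10.x commands, not part of the original run; topic name taken from the log below):

bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic topic.xuj
bin\windows\kafka-console-consumer.bat --zookeeper localhost:2181 --topic topic.xuj --from-beginning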
[2020-05-30 22:27:39,179] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2020-05-30 22:27:39,183] INFO [KafkaApi-0] Auto creation of topic topic.xuj with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
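topic.xuj is created automatically here because auto.create.topics.enable = true (see the config dump above). To control the partition count and replication factor explicitly instead of relying on auto-creation, the topic could be created up front (illustrative command):

bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic.xuj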
[2020-05-30 22:27:39,211] INFO Topic creation {"version":1,"partitions":{"45":[0],"34":[0],"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"40":[0],"15":[0],"11":[0],"9":[0],"44":[0],"33":[0],"22":[0],"26":[0],"37":[0],"13":[0],"46":[0],"24":[0],"35":[0],"16":[0],"5":[0],"10":[0],"48":[0],"21":[0],"43":[0],"32":[0],"49":[0],"6":[0],"36":[0],"1":[0],"39":[0],"17":[0],"25":[0],"14":[0],"47":[0],"31":[0],"42":[0],"0":[0],"20":[0],"27":[0],"2":[0],"38":[0],"18":[0],"30":[0],"7":[0],"29":[0],"41":[0],"3":[0],"28":[0]}} (kafka.admin.AdminUtils$)
[2020-05-30 22:27:39,215] INFO [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
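Note that __consumer_offsets ends up with replication factor 1 even though offsets.topic.replication.factor = 3 in the config dump: brokers of this vintage cap the factor at the number of live brokers, and this is a single-node cluster. In production you would run at least three brokers so committed consumer offsets are actually replicated.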
[2020-05-30 22:27:39,283] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [topic.xuj,0] (kafka.server.ReplicaFetcherManager)
[2020-05-30 22:27:39,355] INFO Completed load of log topic.xuj-0 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,360] INFO Created log for partition [topic.xuj,0] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,363] INFO Partition [topic.xuj,0] on broker 0: No checkpointed highwatermark is found for partition [topic.xuj,0] (kafka.cluster.Partition)
[2020-05-30 22:27:39,896] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [__consumer_offsets,32],[__consumer_offsets,16],[__consumer_offsets,49],[__consumer_offsets,44],[__consumer_offsets,28],[__consumer_offsets,17],[__consumer_offsets,23],[__consumer_offsets,7],[__consumer_offsets,4],[__consumer_offsets,29],[__consumer_offsets,35],[__consumer_offsets,3],[__consumer_offsets,24],[__consumer_offsets,41],[__consumer_offsets,0],[__consumer_offsets,38],[__consumer_offsets,13],[__consumer_offsets,8],[__consumer_offsets,5],[__consumer_offsets,39],[__consumer_offsets,36],[__consumer_offsets,40],[__consumer_offsets,45],[__consumer_offsets,15],[__consumer_offsets,33],[__consumer_offsets,37],[__consumer_offsets,21],[__consumer_offsets,6],[__consumer_offsets,11],[__consumer_offsets,20],[__consumer_offsets,47],[__consumer_offsets,2],[__consumer_offsets,27],[__consumer_offsets,34],[__consumer_offsets,9],[__consumer_offsets,22],[__consumer_offsets,42],[__consumer_offsets,14],[__consumer_offsets,25],[__consumer_offsets,10],[__consumer_offsets,48],[__consumer_offsets,31],[__consumer_offsets,18],[__consumer_offsets,19],[__consumer_offsets,12],[__consumer_offsets,46],[__consumer_offsets,43],[__consumer_offsets,1],[__consumer_offsets,26],[__consumer_offsets,30] (kafka.server.ReplicaFetcherManager)
[2020-05-30 22:27:39,905] INFO Completed load of log __consumer_offsets-0 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,907] INFO Created log for partition [__consumer_offsets,0] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,909] INFO Partition [__consumer_offsets,0] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,0] (kafka.cluster.Partition)
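Unlike topic.xuj (cleanup.policy -> delete), the __consumer_offsets partitions are created with cleanup.policy -> compact, as the property dump in the line above shows: Kafka keeps only the latest committed offset per (group, topic, partition) key rather than a time-bounded history.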
[2020-05-30 22:27:39,916] INFO Completed load of log __consumer_offsets-29 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,919] INFO Created log for partition [__consumer_offsets,29] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,920] INFO Partition [__consumer_offsets,29] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,29] (kafka.cluster.Partition)
[2020-05-30 22:27:39,926] INFO Completed load of log __consumer_offsets-48 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,927] INFO Created log for partition [__consumer_offsets,48] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,929] INFO Partition [__consumer_offsets,48] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,48] (kafka.cluster.Partition)
[2020-05-30 22:27:39,935] INFO Completed load of log __consumer_offsets-10 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,936] INFO Created log for partition [__consumer_offsets,10] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,938] INFO Partition [__consumer_offsets,10] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,10] (kafka.cluster.Partition)
[2020-05-30 22:27:39,945] INFO Completed load of log __consumer_offsets-45 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,947] INFO Created log for partition [__consumer_offsets,45] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,948] INFO Partition [__consumer_offsets,45] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,45] (kafka.cluster.Partition)
[2020-05-30 22:27:39,953] INFO Completed load of log __consumer_offsets-26 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,955] INFO Created log for partition [__consumer_offsets,26] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,957] INFO Partition [__consumer_offsets,26] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,26] (kafka.cluster.Partition)
[2020-05-30 22:27:39,963] INFO Completed load of log __consumer_offsets-7 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,965] INFO Created log for partition [__consumer_offsets,7] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,967] INFO Partition [__consumer_offsets,7] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,7] (kafka.cluster.Partition)
[2020-05-30 22:27:39,972] INFO Completed load of log __consumer_offsets-42 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,973] INFO Created log for partition [__consumer_offsets,42] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,974] INFO Partition [__consumer_offsets,42] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,42] (kafka.cluster.Partition)
[2020-05-30 22:27:39,983] INFO Completed load of log __consumer_offsets-4 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,984] INFO Created log for partition [__consumer_offsets,4] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,986] INFO Partition [__consumer_offsets,4] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,4] (kafka.cluster.Partition)
[2020-05-30 22:27:39,995] INFO Completed load of log __consumer_offsets-23 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:39,996] INFO Created log for partition [__consumer_offsets,23] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:39,998] INFO Partition [__consumer_offsets,23] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,23] (kafka.cluster.Partition)
[2020-05-30 22:27:40,004] INFO Completed load of log __consumer_offsets-1 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,005] INFO Created log for partition [__consumer_offsets,1] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,009] INFO Partition [__consumer_offsets,1] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,1] (kafka.cluster.Partition)
[2020-05-30 22:27:40,015] INFO Completed load of log __consumer_offsets-20 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,016] INFO Created log for partition [__consumer_offsets,20] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,018] INFO Partition [__consumer_offsets,20] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,20] (kafka.cluster.Partition)
[2020-05-30 22:27:40,028] INFO Completed load of log __consumer_offsets-39 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,030] INFO Created log for partition [__consumer_offsets,39] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,033] INFO Partition [__consumer_offsets,39] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,39] (kafka.cluster.Partition)
[2020-05-30 22:27:40,039] INFO Completed load of log __consumer_offsets-17 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,040] INFO Created log for partition [__consumer_offsets,17] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,043] INFO Partition [__consumer_offsets,17] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,17] (kafka.cluster.Partition)
[2020-05-30 22:27:40,053] INFO Completed load of log __consumer_offsets-36 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,055] INFO Created log for partition [__consumer_offsets,36] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,056] INFO Partition [__consumer_offsets,36] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,36] (kafka.cluster.Partition)
[2020-05-30 22:27:40,065] INFO Completed load of log __consumer_offsets-14 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,066] INFO Created log for partition [__consumer_offsets,14] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,067] INFO Partition [__consumer_offsets,14] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,14] (kafka.cluster.Partition)
[2020-05-30 22:27:40,074] INFO Completed load of log __consumer_offsets-33 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,075] INFO Created log for partition [__consumer_offsets,33] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,077] INFO Partition [__consumer_offsets,33] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,33] (kafka.cluster.Partition)
[2020-05-30 22:27:40,083] INFO Completed load of log __consumer_offsets-49 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,085] INFO Created log for partition [__consumer_offsets,49] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,087] INFO Partition [__consumer_offsets,49] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,49] (kafka.cluster.Partition)
[2020-05-30 22:27:40,093] INFO Completed load of log __consumer_offsets-11 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,095] INFO Created log for partition [__consumer_offsets,11] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,097] INFO Partition [__consumer_offsets,11] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,11] (kafka.cluster.Partition)
[2020-05-30 22:27:40,104] INFO Completed load of log __consumer_offsets-30 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,106] INFO Created log for partition [__consumer_offsets,30] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,107] INFO Partition [__consumer_offsets,30] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,30] (kafka.cluster.Partition)
[2020-05-30 22:27:40,115] INFO Completed load of log __consumer_offsets-46 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,118] INFO Created log for partition [__consumer_offsets,46] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,119] INFO Partition [__consumer_offsets,46] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,46] (kafka.cluster.Partition)
[2020-05-30 22:27:40,128] INFO Completed load of log __consumer_offsets-27 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,129] INFO Created log for partition [__consumer_offsets,27] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,131] INFO Partition [__consumer_offsets,27] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,27] (kafka.cluster.Partition)
[2020-05-30 22:27:40,138] INFO Completed load of log __consumer_offsets-8 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,140] INFO Created log for partition [__consumer_offsets,8] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,141] INFO Partition [__consumer_offsets,8] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,8] (kafka.cluster.Partition)
[2020-05-30 22:27:40,151] INFO Completed load of log __consumer_offsets-24 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,154] INFO Created log for partition [__consumer_offsets,24] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,156] INFO Partition [__consumer_offsets,24] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,24] (kafka.cluster.Partition)
[2020-05-30 22:27:40,165] INFO Completed load of log __consumer_offsets-43 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,166] INFO Created log for partition [__consumer_offsets,43] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,168] INFO Partition [__consumer_offsets,43] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,43] (kafka.cluster.Partition)
[... similar log lines omitted: the broker goes on to create the remaining partitions of the 50-partition __consumer_offsets topic with exactly the same properties, differing only in partition number and timestamp ...]
[2020-05-30 22:27:40,431] INFO Completed load of log __consumer_offsets-13 with log end offset 0 (kafka.log.Log)
[2020-05-30 22:27:40,434] INFO Created log for partition [__consumer_offsets,13] in D:\var\local\soft\kafka_2.11-0.10.0.1\kafka-logs with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2020-05-30 22:27:40,436] INFO Partition [__consumer_offsets,13] on broker 0: No checkpointed highwatermark is found for partition [__consumer_offsets,13] (kafka.cluster.Partition)
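The lines above show the broker lazily creating all 50 partitions of the internal __consumer_offsets topic the first time consumer-group offsets are committed. Note cleanup.policy -> compact in the per-partition properties: offsets are stored as keyed messages, so only the latest offset per (group, topic, partition) key ever needs to be retained. As a quick sanity check, a small AdminClient program can confirm the topic exists with 50 partitions. This is only a sketch: it assumes the broker from this walkthrough is reachable at localhost:9092 and uses the kafka-clients jar that spring-kafka already pulls in.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicInspector {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumption: the demo broker from this article, running locally
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("__consumer_offsets"))
                    .all()
                    .get()
                    .get("__consumer_offsets");
            // prints 50 with the default offsets.topic.num.partitions
            System.out.println("partitions = " + desc.partitions().size());
        }
    }
}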
[2020-05-30 22:27:40,444] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,22] (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:27:40,455] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,22] in 9 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:27:40,456] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,25] (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:27:40,461] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,25] in 4 milliseconds. (kafka.coordinator.GroupMetadataManager)
[... similar log lines omitted: the coordinator loads offsets and group metadata from each of the remaining __consumer_offsets partitions, each load finishing in a few milliseconds ...]
[2020-05-30 22:27:40,656] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,48] (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:27:40,657] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,48] in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
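After creating the logs, the GroupMetadataManager reads every __consumer_offsets partition into the coordinator's in-memory cache, which is what lets the broker answer offset-fetch requests without touching disk. Once our consumer has committed offsets, they can be read back from the client side. A minimal sketch follows; the topic name "test" and partition 0 are placeholders for illustration only, so substitute the topic this article actually consumes from.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedOffsetChecker {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // the same group id that appears in the broker log below
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "venus");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // placeholder topic/partition for this sketch
        TopicPartition tp = new TopicPartition("test", 0);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp));
            System.out.println("committed offset for group venus: " + committed.get(tp));
        }
    }
}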
[2020-05-30 22:31:52,057] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 0 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:31:52,072] INFO [GroupCoordinator 0]: Stabilized group venus generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:31:52,085] INFO [GroupCoordinator 0]: Assignment received from leader for group venus for generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:37:32,826] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:37:59,202] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:37:59,207] INFO [GroupCoordinator 0]: Group venus generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:38:20,510] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 0 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:38:20,511] INFO [GroupCoordinator 0]: Stabilized group venus generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:38:20,530] INFO [GroupCoordinator 0]: Assignment received from leader for group venus for generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:41:30,250] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:41:30,251] INFO [GroupCoordinator 0]: Group venus generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:41:40,786] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 0 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:41:40,786] INFO [GroupCoordinator 0]: Stabilized group venus generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:41:40,799] INFO [GroupCoordinator 0]: Assignment received from leader for group venus for generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:43:00,317] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:43:00,318] INFO [GroupCoordinator 0]: Group venus generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:43:27,113] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 0 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:43:27,113] INFO [GroupCoordinator 0]: Stabilized group venus generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 22:43:27,125] INFO [GroupCoordinator 0]: Assignment received from leader for group venus for generation 1 (kafka.coordinator.GroupCoordinator)
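The repeating "Preparing to restabilize" / "Stabilized group venus" / "dead and removed" cycles above simply track the consumer application being started and stopped between test runs: each start triggers a rebalance and a new generation for group venus, and once the last member leaves, the empty group is deleted, which is why the generation resets and each restart forms "generation 1" again. With the raw, non-@KafkaListener container approach used in this article, that lifecycle is driven explicitly by starting and stopping the listener container. Below is a minimal sketch (not the article's exact code) of the start/stop calls that produce these broker-side log lines; the topic name "test" is again a placeholder.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.MessageListener;

public class ContainerLifecycleDemo {

    public static void main(String[] args) throws InterruptedException {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "venus");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        // placeholder topic name for this sketch
        ContainerProperties containerProps = new ContainerProperties("test");
        containerProps.setMessageListener((MessageListener<String, String>) msg ->
                System.out.println("received: " + msg.value()));

        ConcurrentMessageListenerContainer<String, String> container =
                new ConcurrentMessageListenerContainer<>(
                        new DefaultKafkaConsumerFactory<>(props), containerProps);

        container.start();    // broker logs: "Stabilized group venus generation 1"
        Thread.sleep(30_000); // consume for a while
        container.stop();     // last member leaves; broker eventually logs "dead and removed"
    }
}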
[2020-05-30 22:47:32,826] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-30 22:57:32,826] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
log4j:ERROR Failed to rename [D:\xj-Java\idea\kafka-10.0.1/logs/controller.log] to [D:\xj-Java\idea\kafka-10.0.1/logs/controller.log.2020-05-30-22].
log4j:ERROR Failed to rename [D:\xj-Java\idea\kafka-10.0.1/logs/server.log] to [D:\xj-Java\idea\kafka-10.0.1/logs/server.log.2020-05-30-22].
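The two log4j:ERROR "Failed to rename" lines are a well-known symptom of running Kafka on Windows: when log4j rolls the daily log file, Windows refuses to rename a file that the broker process still holds open. The errors are harmless for this demo; message data and committed offsets are unaffected, and only the broker's own text logs fail to rotate.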
[2020-05-30 23:07:32,827] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-30 23:15:07,286] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 23:15:07,287] INFO [GroupCoordinator 0]: Group venus generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2020-05-30 23:15:08,625] INFO [GroupCoordinator 0]: Preparing to restabilize group venus with old generation 0 (kafka.coordinator.GroupCoordinator)
[2020-05-30 23:15:08,625] INFO [GroupCoordinator 0]: Stabilized group venus generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 23:15:08,648] INFO [GroupCoordinator 0]: Assignment received from leader for group venus for generation 1 (kafka.coordinator.GroupCoordinator)
[2020-05-30 23:17:32,827] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
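Taken together, the broker log confirms the end-to-end flow: the __consumer_offsets topic was created, group venus joined and received its partition assignment from the leader, and offsets were committed, so the spring-kafka consumer integration described above is working as intended.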
 
