Spring Boot: Integrating Kafka for Message Publishing and Subscribing

This article integrates Spring Boot with the Kafka service set up via Docker in the previous article <>. The detailed steps are as follows:

1 Adding the Dependency

Add the following dependency to the build.gradle file:

implementation 'org.springframework.kafka:spring-kafka'
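
If the project is built with Maven instead of Gradle, the equivalent dependency coordinates are below (the version is managed by the Spring Boot dependency BOM, so it can usually be omitted):

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>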

2 Configuration

The application-dev.properties profile is currently in use; add the following Kafka settings to it:


#>>>>>>>>>> kafka
# Custom topic name
kafka.topic=crane-topic
# Comma-separated list of addresses used to establish the initial connection to the Kafka cluster
spring.kafka.bootstrap-servers=localhost:9092

# >>>>>>>>>> producer
# Number of times a message is resent after an error; defaults to the maximum int value
spring.kafka.producer.retries=5
# When multiple messages are headed for the same partition, the producer puts them into one batch.
# This parameter sets the amount of memory, in bytes, that a single batch may use.
spring.kafka.producer.batch-size=16384
# Size of the producer's memory buffer, in bytes
spring.kafka.producer.buffer-memory=33554432
## Serializers for the message key and the message body
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# acks=0   : the producer does not wait for any response from the server before considering the write successful.
# acks=1   : the producer receives a success response as soon as the cluster leader has received the message.
# acks=all : the producer receives a success response only after every in-sync replica has received the message.
spring.kafka.producer.acks=1

#>>>>>>>>>> consumer
# A default consumer group id must be specified
spring.kafka.consumer.group-id=crane-kafka-consumer
# What the consumer should do when reading a partition that has no committed offset, or an invalid one:
# latest (default): start reading from the newest records (those produced after the consumer started)
# earliest        : start reading the partition from the beginning
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
# With enable-auto-commit=true, the auto-commit interval; uses a duration format such as 1S, 1M, 2H, 5D
spring.kafka.consumer.auto-commit-interval=1S
## Deserializers for the message key and the message body
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

#>>>>>>>>>> listener
# With consumer.enable-auto-commit=false, specify the ack mode here and acknowledge manually in the listener
#spring.kafka.listener.ack-mode=manual_immediate

3 Creating the Kafka Classes

Create the Kafka-related classes that handle topic creation, message production, and message consumption.

3.1 Automatic Topic Creation

Add a configuration class that checks at application startup whether the topic exists and creates it automatically if it does not. The complete code:

Note: Kafka also creates the topic automatically the first time a producer sends to it, but declaring it here lets you configure the partition count (to improve concurrency) and the replication factor as needed. The code below creates a topic with 3 partitions and a replication factor of 1.

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaConfig {

    // Declaring a NewTopic bean lets spring-kafka's KafkaAdmin create the topic on startup if it is missing
    @Bean
    public NewTopic initialTopic(@Value("${kafka.topic}") String topicName) {
        return new NewTopic(topicName, 3, (short) 1);
    }
}
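
As an aside, spring-kafka 2.3+ also ships a small builder DSL for declaring the same bean; a minimal sketch, assuming that version is on the classpath:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.kafka.config.TopicBuilder;

@Bean
public NewTopic initialTopic(@Value("${kafka.topic}") String topicName) {
    // Equivalent to new NewTopic(topicName, 3, (short) 1)
    return TopicBuilder.name(topicName)
            .partitions(3)
            .replicas(1)
            .build();
}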

3.2 Creating the Message Object

The complete code for the custom message class KafkaMessage:


import java.util.Date;

/**
 * Kafka message object
 */
public class KafkaMessage {
    private long id;
    private String msg;
    private Date time;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getMsg() {
        return msg;
    }

    public void setMsg(String msg) {
        this.msg = msg;
    }

    public Date getTime() {
        return time;
    }

    public void setTime(Date time) {
        this.time = time;
    }
}
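
The producer below serializes this object to JSON with fastjson before sending. A quick round-trip sketch, assuming fastjson is on the classpath (the field values here are placeholders for illustration):

KafkaMessage message = new KafkaMessage();
message.setId(1L);
message.setMsg("hello");
message.setTime(new Date());
String json = JSONObject.toJSONString(message);                           // serialize to a JSON string
KafkaMessage restored = JSONObject.parseObject(json, KafkaMessage.class); // parse it back into an object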

3.3 Message Producer Component

Create the producer component; the controller will call it to send messages to Kafka. The complete code for KafkaProducer:


import com.alibaba.fastjson.JSONObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import java.util.Date;
import java.util.UUID;

@Component
public class KafkaProducer {
    private Logger logger = LoggerFactory.getLogger(this.getClass());
    private final KafkaTemplate<String, String> kafkaTemplate;
    @Value("${kafka.topic}")
    private String topicName;

    public KafkaProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Builds a random message and sends it, logging the asynchronous result
    public void send() {
        KafkaMessage message = new KafkaMessage();
        message.setId(System.currentTimeMillis());
        message.setMsg(UUID.randomUUID().toString());
        message.setTime(new Date());
        ListenableFuture<SendResult<String, String>> result =
                kafkaTemplate.send(this.topicName, String.valueOf(message.getId()), JSONObject.toJSONString(message));
        result.addCallback(
                success -> logger.info(">>>>> Kafka message sent successfully, {}", success.toString()),
                failure -> logger.error(">>>>> Kafka message send failed, {}", failure.getMessage())
        );
    }
}
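
One version note: ListenableFuture.addCallback matches the spring-kafka 2.x line used in this article; in spring-kafka 3.x, KafkaTemplate.send returns a CompletableFuture instead, so the same logic would look roughly like the following sketch:

kafkaTemplate.send(this.topicName, String.valueOf(message.getId()), JSONObject.toJSONString(message))
        .whenComplete((success, failure) -> {
            if (failure == null) {
                logger.info(">>>>> Kafka message sent successfully, {}", success);
            } else {
                logger.error(">>>>> Kafka message send failed, {}", failure.getMessage());
            }
        });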

3.4 Message Consumer Component

Create the consumer component, which automatically consumes new messages as they arrive on the configured topic. Here consumption only logs the message; real handling depends on your business logic. The complete code for KafkaConsumer:


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

@Component
public class KafkaConsumer {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @KafkaListener(topics = {"${kafka.topic}"})
    public void listen(ConsumerRecord<String, String> record
//            , @Header(KafkaHeaders.RECEIVED_TOPIC) String topic
//            , Consumer consumer
//            , Acknowledgment ack // only takes effect when an ack-mode is configured
    ) {
        Optional<String> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            String message = kafkaMessage.get();
            logger.info(">>>>> Consumed Kafka record: {}", record);
            logger.info(">>>>> Consumed Kafka message: {}", message);
        }
        // With enable-auto-commit=false, acknowledges that this and all earlier messages have been processed
        //ack.acknowledge();
        /*
         * For a manual synchronous commit:  consumer.commitSync();
         * For a manual asynchronous commit: consumer.commitAsync();
         */
    }
}
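
If you disable auto-commit (spring.kafka.consumer.enable-auto-commit=false) and enable the commented-out spring.kafka.listener.ack-mode=manual_immediate from section 2, the listener acknowledges explicitly; a minimal sketch under those assumptions:

import org.springframework.kafka.support.Acknowledgment;

@KafkaListener(topics = {"${kafka.topic}"})
public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
    logger.info(">>>>> Consumed Kafka message: {}", record.value());
    ack.acknowledge(); // commits the offset for this record immediately
}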

3.5 Sending a Kafka Message Request

Inject the producer component KafkaProducer into the existing DemoController and add a request mapping that sends a message:

    private final KafkaProducer kafkaProducer;

    public DemoController(StringRedisTemplate redisTemplate, CompanyRepository companyRepository,
                          SqlSession sqlSession, UserRepository userRepository, KafkaProducer kafkaProducer) {
        this.redisTemplate = redisTemplate;
        this.companyRepository = companyRepository;
        this.sqlSession = sqlSession;
        this.userRepository = userRepository;
        this.kafkaProducer = kafkaProducer;
    }

    ......

    @PostMapping("/demo/sendKafka")
    public HResponse sendKafka() {
        this.kafkaProducer.send();
        return HResponse.success();
    }
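
Besides Postman (used in section 5 below), the endpoint can be exercised from the command line; a one-liner, assuming the application listens on port 9080 as shown in the startup log:

curl -X POST http://localhost:9080/demo/sendKafka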

4 Starting the Spring Boot Application

Start Kafka, then start the Spring Boot application. The startup log shows Kafka-related information, including the version and configuration. Since the code above includes a consumer component that registers automatically, the consumer configuration is also printed during registration. The relevant log output:

[2020-05-11 08:30:14.810] [INFO ] [main]
                 - Kafka version: 2.3.1 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:117) 
[2020-05-11 08:30:14.810] [INFO ] [main]
                 - Kafka commitId: 18a913733fb71c01 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:118) 
[2020-05-11 08:30:14.810] [INFO ] [main]
                 - Kafka startTimeMs: 1589185814809 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:119) 
[2020-05-11 08:30:15.074] [INFO ] [main]
                 - AdminClientConfig values: 
	bootstrap.servers = [localhost:9092]
	client.dns.lookup = default
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 [o.a.k.c.a.AdminClientConfig] (AbstractConfig.java:347) 
[2020-05-11 08:30:15.077] [INFO ] [main]
                 - Kafka version: 2.3.1 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:117) 
[2020-05-11 08:30:15.077] [INFO ] [main]
                 - Kafka commitId: 18a913733fb71c01 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:118) 
[2020-05-11 08:30:15.077] [INFO ] [main]
                 - Kafka startTimeMs: 1589185815077 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:119) 
[2020-05-11 08:30:15.110] [INFO ] [main]
                 - ConsumerConfig values: 
	allow.auto.create.topics = true
	auto.commit.interval.ms = 1000
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.dns.lookup = default
	client.id = 
	client.rack = 
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = true
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = crane-kafka-consumer
	group.instance.id = null
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
 [o.a.k.c.c.ConsumerConfig] (AbstractConfig.java:347) 
[2020-05-11 08:30:15.157] [INFO ] [main]
                 - Kafka version: 2.3.1 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:117) 
[2020-05-11 08:30:15.157] [INFO ] [main]
                 - Kafka commitId: 18a913733fb71c01 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:118) 
[2020-05-11 08:30:15.157] [INFO ] [main]
                 - Kafka startTimeMs: 1589185815157 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:119) 
[2020-05-11 08:30:15.159] [INFO ] [main]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Subscribed to topic(s): crane-topic [o.a.k.c.c.KafkaConsumer] (KafkaConsumer.java:964) 
[2020-05-11 08:30:15.160] [INFO ] [main]
                 - Initializing ExecutorService [o.s.s.c.ThreadPoolTaskScheduler] (ExecutorConfigurationSupport.java:181) 
[2020-05-11 08:30:15.188] [INFO ] [main]
                 - Starting ProtocolHandler ["http-nio-9080"] [o.a.c.h.Http11NioProtocol] (DirectJDKLog.java:173) 
[2020-05-11 08:30:15.190] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Cluster ID: 474hdR5DSUyIlXasfDo8mA [o.a.k.c.Metadata] (Metadata.java:261) 
[2020-05-11 08:30:15.191] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null) [o.a.k.c.c.i.AbstractCoordinator] (AbstractCoordinator.java:728) 
[2020-05-11 08:30:15.194] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Revoking previously assigned partitions [] [o.a.k.c.c.i.ConsumerCoordinator] (ConsumerCoordinator.java:476) 
[2020-05-11 08:30:15.195] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - crane-kafka-consumer: partitions revoked: [] [o.s.k.l.KafkaMessageListenerContainer] (LogAccessor.java:279) 
[2020-05-11 08:30:15.195] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] (Re-)joining group [o.a.k.c.c.i.AbstractCoordinator] (AbstractCoordinator.java:505) 
[2020-05-11 08:30:15.212] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] (Re-)joining group [o.a.k.c.c.i.AbstractCoordinator] (AbstractCoordinator.java:505) 
[2020-05-11 08:30:15.212] [INFO ] [main]
                 - Tomcat started on port(s): 9080 (http) with context path '' [o.s.b.w.e.t.TomcatWebServer] (TomcatWebServer.java:204) 
[2020-05-11 08:30:15.216] [INFO ] [main]
                 - Started HBackendApplication in 7.571 seconds (JVM running for 8.536) [f.h.h.HBackendApplication] (StartupInfoLogger.java:61) 
[2020-05-11 08:30:15.231] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Successfully joined group with generation 5 [o.a.k.c.c.i.AbstractCoordinator] (AbstractCoordinator.java:469) 
[2020-05-11 08:30:15.235] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Setting newly assigned partitions: crane-topic-1, crane-topic-2, crane-topic-0 [o.a.k.c.c.i.ConsumerCoordinator] (ConsumerCoordinator.java:283) 
[2020-05-11 08:30:15.247] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Setting offset for partition crane-topic-1 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 1 rack: null), epoch=0}} [o.a.k.c.c.i.ConsumerCoordinator] (ConsumerCoordinator.java:525) 
[2020-05-11 08:30:15.248] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Setting offset for partition crane-topic-2 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 1 rack: null), epoch=0}} [o.a.k.c.c.i.ConsumerCoordinator] (ConsumerCoordinator.java:525) 
[2020-05-11 08:30:15.248] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - [Consumer clientId=consumer-1, groupId=crane-kafka-consumer] Setting offset for partition crane-topic-0 to the committed offset FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 1 rack: null), epoch=0}} [o.a.k.c.c.i.ConsumerCoordinator] (ConsumerCoordinator.java:525) 
[2020-05-11 08:30:15.249] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - crane-kafka-consumer: partitions assigned: [crane-topic-1, crane-topic-2, crane-topic-0] [o.s.k.l.KafkaMessageListenerContainer] (LogAccessor.java:279) 

5 Simulating a Send Request

Use Postman to send the request, which publishes a random message to Kafka; the response indicates success.
After the message is sent, the Spring Boot log output is as follows, including the producer configuration:

[2020-05-11 08:36:35.428] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - =====> Request(/demo/sendKafka) start, params:{} [f.h.h.c.s.GlobalRequestInterceptor] (GlobalRequestInterceptor.java:52) 
[2020-05-11 08:36:35.456] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
 [o.a.k.c.p.ProducerConfig] (AbstractConfig.java:347) 
[2020-05-11 08:36:35.482] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - Kafka version: 2.3.1 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:117) 
[2020-05-11 08:36:35.482] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - Kafka commitId: 18a913733fb71c01 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:118) 
[2020-05-11 08:36:35.483] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - Kafka startTimeMs: 1589186195482 [o.a.k.c.u.AppInfoParser] (AppInfoParser.java:119) 
[2020-05-11 08:36:35.494] [INFO ] [kafka-producer-network-thread | producer-1]
                 - [Producer clientId=producer-1] Cluster ID: 474hdR5DSUyIlXasfDo8mA [o.a.k.c.Metadata] (Metadata.java:261) 
[2020-05-11 08:36:35.548] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - >>>>> Consumed Kafka record: ConsumerRecord(topic = crane-topic, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1589186195500, serialized key size = 13, serialized value size = 86, headers = RecordHeaders(headers = [], isReadOnly = false), key = 1589186195444, value = {"id":1589186195444,"msg":"fed36982-0b38-47e4-bd39-127e8953b651","time":1589186195444}) [f.h.h.d.k.KafkaConsumer] (KafkaConsumer.java:28) 
[2020-05-11 08:36:35.548] [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
                 - >>>>> Consumed Kafka message: {"id":1589186195444,"msg":"fed36982-0b38-47e4-bd39-127e8953b651","time":1589186195444} [f.h.h.d.k.KafkaConsumer] (KafkaConsumer.java:29) 
[2020-05-11 08:36:35.548] [INFO ] [kafka-producer-network-thread | producer-1]
                 - >>>>> Kafka message sent successfully, SendResult [producerRecord=ProducerRecord(topic=crane-topic, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=1589186195444, value={"id":1589186195444,"msg":"fed36982-0b38-47e4-bd39-127e8953b651","time":1589186195444}, timestamp=null), recordMetadata=crane-topic-0@0] [f.h.h.d.k.KafkaProducer] (KafkaProducer.java:34) 
[2020-05-11 08:36:35.554] [INFO ] [http-nio-9080-exec-2]
                [anonymousUser-16:36:35.425] - =====> Request(/demo/sendKafka) end, response status:200 [f.h.h.c.s.GlobalRequestInterceptor] (GlobalRequestInterceptor.java:67) 
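
To double-check that the message actually reached the broker, the console consumer that ships with Kafka can replay the topic; a sketch, assuming a local Kafka installation (inside the Docker container from the previous article the script path may differ):

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic crane-topic --from-beginning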
