Spring Boot + Kafka Integration Example

  I have worked with message middleware such as RocketMQ before; a recent project used Kafka, whose mechanics are similar to RocketMQ's, so I am collecting the code and lessons learned here.

Running Kafka

  Before running Kafka you need a JDK and Scala installed.
  Scala: https://www.scala-lang.org/download/scala2.html (for the JDK, search on your own)
  Then download Kafka from https://kafka.apache.org/downloads.html, choosing a binary build
[Image: Kafka download page, binary downloads section]
  whose Scala version matches the one you installed.
  After extracting the download there is no need to run it in a Linux VM, because the Windows scripts are bundled. Create a logs folder inside the extracted directory, then edit server.properties in the config folder and add one line:
  log.dirs=<extracted path>/logs
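For reference, the edited server.properties might look like the fragment below. The path is only a hypothetical example (use your own extracted directory), and the listeners line is optional, since the broker already defaults to port 9092:

```properties
# store Kafka's log segments under the extracted directory (hypothetical path)
log.dirs=C:/kafka/logs
# optional: bind the broker explicitly to the address the client code below connects to
listeners=PLAINTEXT://127.0.0.1:9092
```

Note that forward slashes work in properties files even on Windows; backslashes would need to be escaped.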

  Open two Windows cmd windows and run one command in each:
  <extracted path>\bin\windows\zookeeper-server-start.bat <extracted path>\config\zookeeper.properties
  <extracted path>\bin\windows\kafka-server-start.bat <extracted path>\config\server.properties
  The first starts ZooKeeper, the second starts Kafka; the version installed here is 3.1.1.

Maven dependencies

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>2.3.4.RELEASE</version>
    </dependency>
</dependencies>

Configuration class

@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> configs = new HashMap<String, Object>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic topic1() {
        // topic with 2 partitions and a replication factor of 1
        return new NewTopic("bar2", 2, (short) 1);
    }

    // producer configuration
    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<String, String>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }

    // consumer configuration
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

}
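As an aside: when send() is called without an explicit partition, Kafka's default partitioner hashes the record key (with murmur2) and takes it modulo the partition count, so all records with the same key land on the same partition of bar2. A simplified sketch of that idea, using String.hashCode() instead of murmur2, so the resulting partition numbers will not match Kafka's actual choice:

```java
public class PartitionSketch {
    // simplified stand-in for Kafka's default key-based partitioning:
    // hash the key, then map it onto one of the topic's partitions
    static int partitionFor(String key, int numPartitions) {
        // mask off the sign bit so the result is non-negative,
        // mirroring Kafka's toPositive(murmur2(keyBytes)) step
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // "bar2" has 2 partitions, so every key maps to partition 0 or 1
        System.out.println("key 'sex' -> partition " + partitionFor("sex", 2));
    }
}
```

The point is only that the partition is a deterministic function of the key, which is why explicitly passing a partition (as the producer below does) overrides this behavior.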

Producer

public class Producer {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(KafkaConfig.class);
        KafkaTemplate<String, String> kafkaTemplate =
                (KafkaTemplate<String, String>) ctx.getBean("kafkaTemplate");
        String data = "male";
        // send a message to partition 0
        ListenableFuture<SendResult<String, String>> send =
                kafkaTemplate.send("bar2", 0, "sex", data);
        send.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            public void onFailure(Throwable throwable) {
                throwable.printStackTrace();
            }

            public void onSuccess(SendResult<String, String> result) {
                System.out.println("sent: " + result.getRecordMetadata());
            }
        });
        // block until the send completes so the JVM does not exit before delivery
        send.get();
    }

}

Consumer listener

public class SimpleConsumerListener {

    @KafkaListener(id = "myContainer0", topics = { "bar2" })
    public void listen(ConsumerRecord<?, ?> record) {
        System.out.println(record.topic());
        System.out.println(record.key());
        System.out.println(record.value());
    }
    
}

  This class must be registered as a bean in KafkaConfig:

@Bean
public SimpleConsumerListener simpleConsumerListener(){
    return new SimpleConsumerListener();
}

  The output looks like this:

[Image: console output showing the topic, key, and value of the consumed message]

Other uses of @KafkaListener

  • Consuming from specific topics and partitions
// receive only messages from partition 0 of topic bar2
@KafkaListener(
        id = "myContainer1",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0" }),
        })
public void listen1(ConsumerRecord<?, ?> record) {
   // business logic
}
  • Multi-threaded consumption
    By default a listener consumes on a single thread. Have Producer's main() send 2 messages:
// send one message to partition 0 and one to partition 1
kafkaTemplate.send("bar2", 0, "sex", "male");
kafkaTemplate.send("bar2", 1, "sex", "female");

  Listener:

@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" }),
        })
public void listen(ConsumerRecord<?, ?> record) {
    System.out.println(Thread.currentThread().getName());
    System.out.println(record.key());
    System.out.println(record.value());
}

  The output:

[Image: console output; both records are handled on the same thread]

  Multi-threaded consumption (2 threads):

// controlled by the concurrency attribute; do not set it higher than the partition count
@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" })
        }, concurrency = "2")
public void listen(ConsumerRecord<?, ?> record) {
    System.out.println(Thread.currentThread().getName());
    System.out.println(record.key());
    System.out.println(record.value());
}

[Image: console output; records from the two partitions are handled on two different threads]
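With concurrency = "2" and two partitions, the container starts two consumer threads and splits the partitions between them, one each. The exact split is an implementation detail of the container, but a rough sketch of a range-style assignment (contiguous chunks, with any remainder going to the first threads; the helper name is hypothetical) conveys the idea:

```java
import java.util.ArrayList;
import java.util.List;

public class AssignmentSketch {
    // hypothetical helper: distribute partition numbers across consumer
    // threads the way a range-style assignor would
    static List<List<Integer>> rangeAssign(int numPartitions, int concurrency) {
        List<List<Integer>> result = new ArrayList<>();
        int per = numPartitions / concurrency;      // base partitions per thread
        int extra = numPartitions % concurrency;    // first `extra` threads take one more
        int next = 0;
        for (int c = 0; c < concurrency; c++) {
            int take = per + (c < extra ? 1 : 0);
            List<Integer> mine = new ArrayList<>();
            for (int i = 0; i < take; i++) mine.add(next++);
            result.add(mine);
        }
        return result;
    }

    public static void main(String[] args) {
        // 2 partitions, concurrency 2 -> each thread owns exactly one partition
        System.out.println(rangeAssign(2, 2)); // prints [[0], [1]]
    }
}
```

This is also why concurrency should not exceed the partition count: with 3 threads and 2 partitions, one thread would own no partitions and sit idle.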

  • Batch consumption
    In KafkaConfig's consumerConfigs(), add:
// maximum number of records returned by a single poll
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "3");

  In kafkaListenerContainerFactory(), add:

factory.setBatchListener(true);

  This time send a few more messages; have Producer's main() send 5:

kafkaTemplate.send("bar2", 1, "age", "0");
kafkaTemplate.send("bar2", 1, "age", "1");
kafkaTemplate.send("bar2", 0, "age", "2");
kafkaTemplate.send("bar2", 1, "age", "3");
kafkaTemplate.send("bar2", 0, "age", "4");

  The listener's parameter must be changed to a list:

@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" })
        })
public void listen(List<ConsumerRecord<?, ?>> list) {
    System.out.println("batch size: " + list.size());
    for (int i = 0; i < list.size(); i++) {
        ConsumerRecord<?, ?> record = list.get(i);
        System.out.println(record.key());
        System.out.println(record.value());
    }
}

  The output:
[Image: console output showing the records arriving in batches]
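With max.poll.records set to 3, a single poll hands the batch listener at most 3 records, so the 5 messages above arrive over more than one invocation (e.g. 3 + 2, though the exact split also depends on timing and partitions). A sketch of that chunking effect, with a hypothetical helper:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // hypothetical helper: split pending records into poll-sized batches,
    // mimicking the effect of max.poll.records on a batch listener
    static <T> List<List<T>> chunk(List<T> records, int maxPollRecords) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += maxPollRecords) {
            batches.add(new ArrayList<>(
                    records.subList(i, Math.min(i + maxPollRecords, records.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> values = List.of("0", "1", "2", "3", "4");
        // 5 records with max.poll.records=3 -> batches of 3 and 2
        System.out.println(chunk(values, 3)); // prints [[0, 1, 2], [3, 4]]
    }
}
```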

  • Message retry and the dead-letter topic
    When the listener method throws an exception, the record can be retried; once the retry limit is reached, the record is published to a dead-letter topic named after the original topic with a .DLT suffix.

  Add the following to kafkaListenerContainerFactory() in KafkaConfig:

// retry every 5 seconds, at most 3 times; after that the record goes to the dead-letter topic
BackOff backOff = new FixedBackOff(5 * 1000L, 3L);
factory.setErrorHandler(new SeekToCurrentErrorHandler(
        new DeadLetterPublishingRecoverer(kafkaTemplate()), backOff));
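With FixedBackOff(5000L, 3L) the record is delivered once and then retried 3 more times, with a 5-second pause before each retry (4 deliveries in total) before the recoverer kicks in. A sketch of that schedule, computing when each delivery happens relative to the first attempt (the helper name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class BackOffSketch {
    // hypothetical helper: delivery times (in ms) for a fixed back-off with
    // the given interval and number of retries after the initial attempt
    static List<Long> deliveryTimes(long intervalMs, long maxRetries) {
        List<Long> times = new ArrayList<>();
        for (long attempt = 0; attempt <= maxRetries; attempt++) {
            times.add(attempt * intervalMs); // initial attempt at t=0, then one retry per interval
        }
        return times;
    }

    public static void main(String[] args) {
        // FixedBackOff(5000, 3): deliveries at 0s, 5s, 10s, 15s, then -> DLT
        System.out.println(deliveryTimes(5000L, 3L)); // prints [0, 5000, 10000, 15000]
    }
}
```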

  Listener class:

@KafkaListener(
        id = "myContainer0",
        topicPartitions = {
                @TopicPartition(topic = "bar2", partitions = { "0", "1" })
        })
public void listen(ConsumerRecord<?, ?> record) {
    System.out.println(record.key());
    System.out.println(record.value());
    throw new RuntimeException("simulated failure");
}

@KafkaListener(id = "myContainer2", topics = "bar2.DLT")
public void listen2(ConsumerRecord<?, ?> record) {
    System.out.println("record arrived in the dead-letter topic");
    System.out.println("topic: " + record.topic());
    System.out.println("key: " + record.key());
    System.out.println("value: " + record.value());
}
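By default, DeadLetterPublishingRecoverer routes the failed record to a topic named after the original with a .DLT suffix, on the same partition, which is why the second listener above subscribes to bar2.DLT. A trivial sketch of that naming rule:

```java
public class DltSketch {
    // mirrors the recoverer's default destination rule:
    // original topic name plus a ".DLT" suffix, same partition number
    static String dltTopic(String topic) {
        return topic + ".DLT";
    }

    public static void main(String[] args) {
        System.out.println(dltTopic("bar2")); // prints bar2.DLT
    }
}
```

Because the record keeps its original partition, the .DLT topic should have at least as many partitions as the source topic (or exist via broker auto-creation), otherwise publishing to it can fail.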
