Integrating Spring Boot with Kafka


Outline

  • 1. kafka-producer
  • 2. kafka-consumer
  • 3. Spring Boot integration

This project depends on psyche; the Kafka components live as modules under fast-plugins.

Runtime environment

Spring Boot + Kafka 2.11

1. Prerequisites

This article assumes you already have a basic understanding of Spring Boot and Kafka: you know that Kafka is a message-queue component, and you are familiar with the producer/consumer model.

  • Kafka installation guide

The overall project structure is as follows:

(figure: overall project structure)

Under the fast-plugins module, create fast-data-kafka, which in turn contains two modules: consumer and producer.
The web project structure is shown below.

(figure: web project structure)

The web module depends on the kafka and base projects.

  • pom.xml
    Since these dependencies are shared, they go into the parent fast-data-kafka project:

    <modules>
        <module>fast-data-kafka-consumer</module>
        <module>fast-data-kafka-producer</module>
    </modules>

    <dependencies>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>22.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-autoconfigure</artifactId>
        </dependency>
    </dependencies>

2. kafka-producer

The producer module contains two main classes:

  • KafkaProducerProperties: the bean bound to the configuration file
@Component
@ConfigurationProperties(prefix = KafkaProducerProperties.KAFKA_PRODUCER_PREFIX)
public class KafkaProducerProperties {

    public static final String KAFKA_PRODUCER_PREFIX = "kafka";

    private String brokerAddress;

    public String getBrokerAddress() {
        return brokerAddress;
    }

    public void setBrokerAddress(String brokerAddress) {
        this.brokerAddress = brokerAddress;
    }
}

This class binds to the kafka.brokerAddress property in the configuration file.
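As a sketch, the corresponding entry in application.properties could look like this (the host/port value is a placeholder for your own broker):

```properties
# kafka.brokerAddress is the property bound by KafkaProducerProperties
kafka.brokerAddress=localhost:9092
```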

  • KafkaProducerAutoConfiguration: depends on the KafkaProducerProperties configuration bean

@Configuration
@EnableKafka
@EnableConfigurationProperties(KafkaProducerProperties.class)
@ConditionalOnClass(value = org.apache.kafka.clients.producer.KafkaProducer.class)
public class KafkaProducerAutoConfiguration {
    private KafkaProducerProperties kafkaProducerProperties;

    public KafkaProducerAutoConfiguration(KafkaProducerProperties kafkaProducerProperties) {
        this.kafkaProducerProperties = kafkaProducerProperties;
    }

    public Map<String, Object> producerConfigs() {
        String brokers = kafkaProducerProperties.getBrokerAddress();
        if (StringUtils.isEmpty(brokers)) {
            throw new RuntimeException("kafka broker address is empty");
        }
        Map<String, Object> props = Maps.newHashMap();
        // list of host:port pairs used for establishing the initial connections
        // to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProducerProperties
                .getBrokerAddress());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // value to block, after which it will throw a TimeoutException
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);

        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 1);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);

        return props;
    }
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

This class holds the basic kafka-producer settings and exposes a KafkaTemplate bean.

That completes the kafka-producer configuration.

3. kafka-consumer

As with the producer, this module contains the following two classes:

  • KafkaConsumerProperties: the bean bound to the configuration file
/**
 * Describe:
 *
 * @Author sunliang
 * @Since 2019/06/10
 */
@ConfigurationProperties(prefix = KafkaConsumerProperties.KAFKA_CONSUMER_PREFIX)
public class KafkaConsumerProperties {

    public static final String KAFKA_CONSUMER_PREFIX = "kafka";

    private String brokerAddress;

    private String groupId;

    public String getBrokerAddress() {
        return brokerAddress;
    }

    public void setBrokerAddress(String brokerAddress) {
        this.brokerAddress = brokerAddress;
    }

    public String getGroupId() {
        return groupId;
    }

    public void setGroupId(String groupId) {
        this.groupId = groupId;
    }
}
  • KafkaConsumerAutoConfiguration: the auto-configuration class
/**
* Describe:
*
* @Author sunliang
* @Since 2019/06/10
*/
@EnableKafka
@Configuration
@EnableConfigurationProperties(KafkaConsumerProperties.class)
@ConditionalOnClass(value = org.apache.kafka.clients.consumer.KafkaConsumer.class)
public class KafkaConsumerAutoConfiguration {
   protected final Logger logger = LoggerFactory.getLogger(this.getClass());

   private KafkaConsumerProperties kafkaConsumerProperties;

   public KafkaConsumerAutoConfiguration(KafkaConsumerProperties kafkaConsumerProperties) {
       logger.info("KafkaConsumerAutoConfiguration kafkaConsumerProperties:{}",
               JSON.toJSONString(kafkaConsumerProperties));
       this.kafkaConsumerProperties = kafkaConsumerProperties;
   }

   @Bean
   public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
   kafkaListenerContainerFactory() {
       ConcurrentKafkaListenerContainerFactory<String, String> factory = new
               ConcurrentKafkaListenerContainerFactory<>();
       factory.setConsumerFactory(consumerFactory());
       factory.setConcurrency(3);
       factory.getContainerProperties().setPollTimeout(1000);
       return factory;
   }

   @Bean
   public ConsumerFactory<String, String> consumerFactory() {
       return new DefaultKafkaConsumerFactory<>(consumerConfigs());
   }

   @Bean
   public Map<String, Object> consumerConfigs() {
       String brokers = kafkaConsumerProperties.getBrokerAddress();
       if (StringUtils.isEmpty(brokers)) {
           throw new RuntimeException("kafka broker address is empty");
       }

       Map<String, Object> propsMap = new HashMap<>();
       propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConsumerProperties.getBrokerAddress());
       propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConsumerProperties.getGroupId());
       propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true); // commit offsets automatically
       propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100"); // auto-commit interval
       propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000"); // consumer liveness timeout
       propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
       propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
       propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // where to start consuming: "latest" (the default) reads only new messages, "earliest" reads from the beginning, "none" throws if no offset exists
       return propsMap;
   }

}

That completes the consumer configuration. Next, let's build a Spring Boot application to test Kafka.
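The demo application then needs both properties bound above, the shared broker address and the consumer group id. A sketch of its application.properties, with placeholder values:

```properties
# assumed example config; adjust the broker address and group id to your setup
kafka.brokerAddress=localhost:9092
kafka.groupId=fast-rest-group
```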

4. fast-rest

  • pom.xml
    <dependencies>
        <dependency>
            <groupId>com.liangliang</groupId>
            <artifactId>fast-base</artifactId>
            <version>0.0.1-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>com.liangliang</groupId>
            <artifactId>fast-data-kafka-consumer</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>com.liangliang</groupId>
            <artifactId>fast-data-kafka-producer</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
  • KafkaUtils: a Kafka helper class

/**
 * Describe:
 *
 * @Author sunliang
 * @Since 2019/06/11
 */
@Slf4j
@Component
public class KafkaUtils {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String topic, String data) {
        log.info("kafka sendMessage start");
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, data);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("kafka sendMessage error, ex = {}, topic = {}, data = {}", ex, topic, data);
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("kafka sendMessage success topic = {}, data = {}",topic, data);
            }
        });
        log.info("kafka sendMessage end");
    }
}

  • Listener: the consumer listener
    @KafkaListener(topics = {"test"})
    public void listen(ConsumerRecord<String, String> record){
        String json = record.value();
        log.info("kafka consumer sessionListener session json:{}", json);
    }

It listens on the test topic and logs each message.

  • Controller: accepts a parameter from the web, acts as the Kafka producer, and writes the message to Kafka
/**
 * Describe:
 *
 * @Author sunliang
 * @Since 2019/06/11
 */
@Slf4j
@RestController
public class KafkaProducerController {
    @Autowired
    private KafkaUtils kafkaUtils;

    @GetMapping("/chat/{msg}")
    public RestResult area(HttpServletResponse response, @PathVariable("msg")String msg){
        response.setHeader("Access-Control-Allow-Origin", "*");
        log.info(">>>>>msg = {}",msg);
        kafkaUtils.sendMessage("test",msg);
        return RestResultBuilder.builder().data(msg).success().build();
    }
}

This completes the integration of Kafka with Spring Boot.
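To try it end to end, assuming the application runs on Spring Boot's default port 8080 and a local broker is up, hitting the endpoint sends the path variable to the test topic, and the listener should log it shortly after:

```shell
# hypothetical local smoke test; adjust host and port to your setup
curl http://localhost:8080/chat/hello
```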
