Let's first look at how Spring Boot is typically integrated with a Kafka data source in examples found online, and what the pain points are.

Pain point 1:
Hand-writing Kafka configuration code full of hard-coded values, throwing away Spring Boot's convention-over-configuration advantage.

Pain point 2:
When a project needs to consume several topics that live on different clusters, you end up copy-pasting config and factory classes over and over. Once the project needs topics from five or more different clusters, that code becomes a serious maintenance burden and is extremely easy to get wrong.

Pain point 3:
When a new business feature arrives that also consumes Kafka and then applies some business logic, you find yourself setting up yet another project and repeating all of the steps above, copying the code and configuration one more time.

Having analyzed these pain points, it is clear that configuration and business code should be separated to eliminate meaningless repetitive work. So how do we abstract everything Kafka-related into a module, so that our attention can return to business development itself?

For example, with just the simple configuration below, the Spring Boot and Kafka integration is complete; all we need to care about is the business code in com.mmc.multi.kafka.starter.OneProcessor and com.mmc.multi.kafka.starter.TwoProcessor.
## Kafka config for topic one
spring.kafka.one.enabled=true
spring.kafka.one.consumer.bootstrapServers=${spring.embedded.kafka.brokers}
spring.kafka.one.topic=mmc-topic-one
spring.kafka.one.group-id=group-consumer-one
## name of the business processor class
spring.kafka.one.processor=com.mmc.multi.kafka.starter.OneProcessor
spring.kafka.one.consumer.auto-offset-reset=latest
spring.kafka.one.consumer.max-poll-records=10
spring.kafka.one.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.one.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
## Kafka config for topic two
spring.kafka.two.enabled=true
spring.kafka.two.consumer.bootstrapServers=${spring.embedded.kafka.brokers}
spring.kafka.two.topic=mmc-topic-two
spring.kafka.two.group-id=group-consumer-two
## name of the business processor class
spring.kafka.two.processor=com.mmc.multi.kafka.starter.TwoProcessor
spring.kafka.two.consumer.auto-offset-reset=latest
spring.kafka.two.consumer.max-poll-records=10
spring.kafka.two.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.two.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
As is customary, source code first: GitHub repo.

This article shows how to support multiple Kafka data sources by wrapping the configuration into a starter; by reading the source you can learn the features described below. Full series index.
1. Add the Spring Boot starter and the Kafka-related dependencies.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.28</version>
</dependency>
2. Create a META-INF/spring.factories file under the project's resources directory and register the configuration class in it, so that it can set up everything Kafka-related.
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.mmc.multi.kafka.starter.MmcMultiConsumerAutoConfiguration
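A side note not covered by the original article: since Spring Boot 2.7 the spring.factories mechanism for auto-configuration registration is deprecated, and Spring Boot 3.0 removed it entirely. If you target those versions, register the same class in a META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports file instead:

```
# src/main/resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
com.mmc.multi.kafka.starter.MmcMultiConsumerAutoConfiguration
```

On Spring Boot 2.7.x both registration files still work side by side, so a starter can ship both for compatibility.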
3. Define a configuration class, MmcMultiKafkaProperties, to receive the configuration shown earlier.
The configuration class is shown below. The Consumer field reuses the KafkaProperties.Consumer class from Spring Boot's own Kafka support, and this is the elegant part: you use this starter exactly the way you would configure Kafka with Spring Boot. The usage stays essentially identical, every consumer property Spring Boot supports works here without any code change, and you still get IDE auto-completion for the configuration keys:
@ToString
@Data
@ConfigurationProperties(prefix = "spring")
public class MmcMultiKafkaProperties {

    /**
     * Supports multiple Kafka configurations.
     */
    private Map<String, MmcKafkaProperties> kafka = new HashMap<>();

    /**
     * Per-consumer Kafka properties.
     */
    @Data
    static class MmcKafkaProperties {

        /**
         * Whether this consumer is enabled.
         */
        private boolean enabled;

        /**
         * Topic(s) to consume (multiple topics supported, comma-separated).
         */
        private String topic;

        /**
         * Consumer group.
         */
        private String groupId;

        /**
         * Concurrency.
         */
        private Integer concurrency = 1;

        /**
         * Listener type; defaults to batch consumption.
         */
        private String type = "batch";

        /**
         * Whether to deduplicate records within a batch; defaults to true.
         */
        private boolean duplicate = true;

        /**
         * Processor class name or bean name.
         */
        private String processor;

        /**
         * Consumer properties, reusing Spring Boot's KafkaProperties.Consumer.
         */
        private final KafkaProperties.Consumer consumer = new KafkaProperties.Consumer();

        public Map<String, Object> buildConsumerProperties() {
            return new HashMap<>(this.consumer.buildProperties());
        }
    }
}
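A detail worth noting: because the prefix is just `spring`, the map key is whatever follows `spring.kafka.` (here `one` and `two`), and Spring's relaxed binding maps kebab-case keys such as `group-id` onto the `groupId` field automatically. As an illustrative sketch only (this is not Spring's actual implementation), the kebab-to-camel step of relaxed binding behaves like:

```java
public class RelaxedNameSketch {

    // Illustrative only: converts a kebab-case property key segment to camelCase,
    // mimicking one rule of Spring Boot's relaxed binding.
    static String kebabToCamel(String key) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : key.toCharArray()) {
            if (c == '-') {
                upperNext = true; // drop the dash, uppercase the next character
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(kebabToCamel("group-id"));          // groupId
        System.out.println(kebabToCamel("auto-offset-reset")); // autoOffsetReset
    }
}
```

This is why both `spring.kafka.one.group-id` and `spring.kafka.one.groupId` bind to the same field.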
4. Write the MmcMultiConsumerAutoConfiguration class, which builds a separate Kafka factory for each configured consumer.
@Slf4j
@Configuration
@EnableConfigurationProperties(MmcMultiKafkaProperties.class)
@ConditionalOnProperty(prefix = "spring.kafka", value = "enabled", matchIfMissing = true)
public class MmcMultiConsumerAutoConfiguration extends BaseConsumerConfiguration {

    @Resource
    private MmcMultiKafkaProperties mmcMultiKafkaProperties;

    @Bean
    public MmcKafkaInputerContainer mmcKafkaInputerContainer(MmcKafkaProcessorFactory factory,
            MmcKafkaBeanPostProcessor beanPostProcessor) throws Exception {

        Map<String, MmcInputer> inputers = new HashMap<>();
        Map<String, MmcMultiKafkaProperties.MmcKafkaProperties> kafkas = mmcMultiKafkaProperties.getKafka();

        // iterate over each entry and create a consumer for it
        for (Map.Entry<String, MmcMultiKafkaProperties.MmcKafkaProperties> entry : kafkas.entrySet()) {

            // unique consumer name
            String name = entry.getKey();

            // consumer configuration
            MmcMultiKafkaProperties.MmcKafkaProperties properties = entry.getValue();

            // skip consumers that are not enabled
            if (properties.isEnabled()) {

                // build the processor instance for this consumer
                MmcKafkaKafkaAbastrctProcessor inputer = factory.buildInputer(name, properties, beanPostProcessor.getSuitableClass());

                // build the listener container
                ConcurrentMessageListenerContainer<Object, Object> container = concurrentMessageListenerContainer(properties);

                // wire the container into the processor
                inputer.setContainer(container);
                inputer.setName(name);
                inputer.setProperties(properties);

                // register the processor as the message listener
                container.setupMessageListener(inputer);

                // stop consuming on JVM shutdown
                Runtime.getRuntime().addShutdownHook(new Thread(inputer::stop));

                // start immediately
                container.start();

                // add to the collection
                inputers.put(name, inputer);
            }
        }

        return new MmcKafkaInputerContainer(inputers);
    }

    @Bean
    public MmcKafkaBeanPostProcessor mmcKafkaBeanPostProcessor() {
        return new MmcKafkaBeanPostProcessor();
    }

    @Bean
    public MmcKafkaProcessorFactory processorFactory() {
        return new MmcKafkaProcessorFactory();
    }
}
The core question here is how to build the business processor, MmcKafkaKafkaAbastrctProcessor. To keep the functionality well encapsulated, every business processor is required to extend MmcKafkaKafkaAbastrctProcessor and define an entity class for its Kafka messages. Since many developers are used to implementing their business logic in @Service classes, we use a BeanPostProcessor to collect every bean that extends MmcKafkaKafkaAbastrctProcessor.
public class MmcKafkaBeanPostProcessor implements BeanPostProcessor {

    @Getter
    private Map<String, MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>> suitableClass = new ConcurrentHashMap<>();

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof MmcKafkaKafkaAbastrctProcessor) {
            MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> target = (MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>) bean;
            // register under both the bean name and the class name,
            // so the processor can later be looked up either way
            suitableClass.putIfAbsent(beanName, target);
            suitableClass.putIfAbsent(bean.getClass().getName(), target);
        }
        return bean;
    }
}
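The effect of the double registration above can be shown with a plain map. This is an illustrative sketch (ProcessorRegistrySketch and its nested OneProcessor class are hypothetical stand-ins, not the starter's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProcessorRegistrySketch {

    // hypothetical stand-in for a business processor bean
    static class OneProcessor { }

    // mirror of what the BeanPostProcessor does: register the same instance
    // under both the bean name and the fully-qualified class name
    static Map<String, Object> collect(String beanName, Object bean) {
        Map<String, Object> suitableClass = new ConcurrentHashMap<>();
        suitableClass.putIfAbsent(beanName, bean);
        suitableClass.putIfAbsent(bean.getClass().getName(), bean);
        return suitableClass;
    }

    public static void main(String[] args) {
        OneProcessor bean = new OneProcessor();
        Map<String, Object> registry = collect("oneProcessor", bean);

        // the same processor resolves under either key, which is why
        // spring.kafka.xxx.processor accepts a bean name or a class name
        System.out.println(registry.size()); // 2
        System.out.println(registry.get("oneProcessor") == bean); // true
    }
}
```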
Each collected processor is then bound to the one named by the configuration:

## name of the business processor bean
spring.kafka.one.processor=oneProcessor
spring.kafka.two.processor=twoProcessor
@Slf4j
@Service
public class OneProcessor extends MmcKafkaKafkaAbastrctProcessor<DemoMsg> {

    @Resource
    private DemoService demoService;

    @Override
    protected Class<DemoMsg> getEntityClass() {
        return DemoMsg.class;
    }

    @Override
    protected void dealMessage(List<DemoMsg> datas) {
        demoService.dealMessage("one", datas.stream().map(x -> (MmcKafkaMsg) x).collect(Collectors.toList()));
    }
}
Many people also prefer to define a plain class for their business logic rather than a bean, so that is supported as well: the spring.kafka.xxx.processor property accepts either a bean name or a fully-qualified class name.
public class MmcKafkaProcessorFactory {

    @Resource
    private DefaultListableBeanFactory defaultListableBeanFactory;

    public MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> buildInputer(
            String name, MmcMultiKafkaProperties.MmcKafkaProperties properties,
            Map<String, MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>> suitableClass) throws Exception {

        // if no processor is configured, look one up among the registered beans
        if (!StringUtils.hasText(properties.getProcessor())) {
            return findProcessorByName(name, properties.getProcessor(), suitableClass);
        }

        // a processor is configured: check whether the value is a valid class/bean identifier
        if (!isClassName(properties.getProcessor())) {
            throw new IllegalArgumentException("It's not a class, wrong value of ${spring.kafka." + name + ".processor}.");
        }

        // if the IoC container already holds such an instance, reuse it; this covers the case
        // where the processor is both configured here and annotated with @Service
        MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> inc = findProcessorByClass(name, properties.getProcessor(), suitableClass);
        if (null != inc) {
            return inc;
        }

        // the configured processor class must extend MmcKafkaKafkaAbastrctProcessor
        Class<?> clazz = Class.forName(properties.getProcessor());
        boolean isSubclass = MmcKafkaKafkaAbastrctProcessor.class.isAssignableFrom(clazz);
        if (!isSubclass) {
            throw new IllegalStateException(clazz.getName() + " is not subClass of MmcKafkaKafkaAbastrctProcessor.");
        }

        // create the instance
        Constructor<?> constructor = clazz.getConstructor();
        MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> ins = (MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>) constructor.newInstance();

        // inject its dependencies
        defaultListableBeanFactory.autowireBean(ins);

        return ins;
    }

    private MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> findProcessorByName(String name, String processor,
            Map<String, MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>> suitableClass) {
        return suitableClass.entrySet()
                .stream()
                .filter(e -> e.getKey().startsWith(name) || e.getKey().equalsIgnoreCase(processor))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElseThrow(() -> new RuntimeException("Can't found any suitable processor class for the consumer which name is " + name
                        + ", please use the config ${spring.kafka." + name + ".processor} or set name of Bean like @Service(\"" + name + "Processor\") "));
    }

    private MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg> findProcessorByClass(String name, String processor,
            Map<String, MmcKafkaKafkaAbastrctProcessor<? extends MmcKafkaMsg>> suitableClass) {
        return suitableClass.entrySet()
                .stream()
                .filter(e -> e.getKey().startsWith(name) || e.getKey().equalsIgnoreCase(processor))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(null);
    }

    private boolean isClassName(String processor) {
        // validate the class-name format with a regular expression
        String regex = "^[a-zA-Z_$][a-zA-Z\\d_$]*([.][a-zA-Z_$][a-zA-Z\\d_$]*)*$";
        return Pattern.matches(regex, processor);
    }
}
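Note that the regex accepts any chain of valid Java identifiers, so a plain bean name like `oneProcessor` also passes it; resolution still works because bean names are registered in the lookup map. A standalone check of the same regex:

```java
import java.util.regex.Pattern;

public class ClassNameCheck {

    // the same validation regex used by MmcKafkaProcessorFactory.isClassName
    static boolean isClassName(String processor) {
        String regex = "^[a-zA-Z_$][a-zA-Z\\d_$]*([.][a-zA-Z_$][a-zA-Z\\d_$]*)*$";
        return Pattern.matches(regex, processor);
    }

    public static void main(String[] args) {
        System.out.println(isClassName("com.mmc.multi.kafka.starter.OneProcessor")); // true
        System.out.println(isClassName("oneProcessor"));  // true: bean names match too
        System.out.println(isClassName("one-processor")); // false: '-' is not a valid identifier character
        System.out.println(isClassName("1Processor"));    // false: identifiers cannot start with a digit
    }
}
```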
5. Since the Kafka messages to be consumed come in all shapes, we abstract a parent class, MmcKafkaKafkaAbastrctProcessor, and require every business processor to extend it. This too saves labor: there is no need to care about the message deserialization process. Messages are pulled from Kafka in batches by default; if you need per-record handling, simply iterate over the batch. Batch pulling brings two benefits:
1. higher message-processing throughput (the next article shows how to handle 100k-level message volume on a single Kafka partition);
2. the ability to aggregate and deduplicate messages within a batch.
@Slf4j
@Setter
public abstract class MmcKafkaKafkaAbastrctProcessor<T extends MmcKafkaMsg> implements MmcKafkaStringInputer {

    /**
     * The Kafka listener container.
     */
    protected ConcurrentMessageListenerContainer<Object, Object> container;

    /**
     * Consumer name.
     */
    protected String name;

    /**
     * Consumer configuration.
     */
    protected MmcMultiKafkaProperties.MmcKafkaProperties properties;

    public MmcKafkaKafkaAbastrctProcessor() {
    }

    public MmcKafkaKafkaAbastrctProcessor(String name, MmcMultiKafkaProperties.MmcKafkaProperties properties) {
        this.name = name;
        this.properties = properties;
    }

    /**
     * Consume a batch of Kafka records.
     */
    @Override
    public void onMessage(List<ConsumerRecord<String, String>> records) {

        if (null == records || CollectionUtils.isEmpty(records)) {
            log.warn("{} records is null or records.value is empty.", name);
            return;
        }

        Assert.hasText(name, "You must pass the field `name` to the Constructor or invoke the setName() after the class was created.");
        Assert.notNull(properties, "You must pass the field `properties` to the Constructor or invoke the setProperties() after the class was created.");

        try {
            // parse and filter the raw records
            Stream<T> dataStream = records.stream()
                    .map(ConsumerRecord::value)
                    .flatMap(this::doParse)
                    .filter(Objects::nonNull)
                    .filter(this::isRightRecord);

            // deduplicate within the batch: group by routing key and keep one record per group
            if (properties.isDuplicate()) {
                dataStream = dataStream.collect(Collectors.groupingBy(this::buildRoutekey))
                        .entrySet()
                        .stream()
                        .map(this::findLasted)
                        .filter(Objects::nonNull);
            }

            List<T> datas = dataStream.collect(Collectors.toList());
            if (CommonUtil.isNotEmpty(datas)) {
                this.dealMessage(datas);
            }
        } catch (Exception e) {
            log.error(name + "-dealMessage error ", e);
        }
    }
    /**
     * Parse a Kafka message into entities; supports both a JSON object and a JSON array.
     *
     * @param json the Kafka message
     * @return a stream of entities
     */
    protected Stream<T> doParse(String json) {
        if (json.startsWith("[")) {
            // JSON array
            List<T> datas = JsonUtil.parseJsonArray(json, getEntityClass());
            if (CommonUtil.isEmpty(datas)) {
                log.warn("{} doParse error, json={} is error.", name, json);
                return Stream.empty();
            }
            // run post-deserialization initialization on each entity
            datas = datas.stream().peek(this::doAfterParse).collect(Collectors.toList());
            return datas.stream();
        } else {
            // JSON object
            T data = JsonUtil.parseJsonObject(json, getEntityClass());
            if (null == data) {
                log.warn("{} doParse error, json={} is error.", name, json);
                return Stream.empty();
            }
            // run post-deserialization initialization
            doAfterParse(data);
            return Stream.of(data);
        }
    }

    /**
     * Hook for initialization work after deserialization.
     *
     * @param data the entity to process
     */
    protected void doAfterParse(T data) {
    }

    /**
     * Set the Kafka container.
     *
     * @param container the Kafka container
     */
    public void setContainer(ConcurrentMessageListenerContainer<Object, Object> container) {
        this.container = container;
    }

    /**
     * Stop the Kafka container.
     */
    public void stop() {
        container.stop();
    }

    /**
     * Start the Kafka container.
     */
    public void start() {
        container.start();
    }

    /**
     * If a batch contains duplicates, group by a key field and keep a single
     * record per group (the maximum by routing key).
     *
     * @param entry one group of records sharing a routing key
     * @return the record kept for this group
     */
    protected T findLasted(Map.Entry<String, List<T>> entry) {
        try {
            Optional<T> d = entry.getValue().stream()
                    .max(Comparator.comparing(T::getRoutekey));
            if (d.isPresent()) {
                return d.get();
            }
        } catch (Exception e) {
            String content = JsonUtil.toJsonStr(entry.getValue());
            log.error("findLasted error: {}", e.getMessage() + ": " + content, e);
        }
        return null;
    }

    /**
     * Build the unique routing key of an entity.
     *
     * @param t the entity
     * @return its unique routing key
     */
    protected String buildRoutekey(T t) {
        return t.getRoutekey();
    }

    /**
     * Filter messages.
     *
     * @param t the entity
     * @return true to keep the record, false to drop it
     */
    protected boolean isRightRecord(T t) {
        return true;
    }

    /**
     * The entity class to deserialize into.
     *
     * @return the entity class
     */
    protected abstract Class<T> getEntityClass();

    /**
     * Handle a batch of messages.
     *
     * @param datas the batch to handle
     */
    protected abstract void dealMessage(List<T> datas) throws ExecutionException, InterruptedException;
}
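The batch dedup pipeline (group by routing key, keep one record per group) can be illustrated standalone with plain Java streams. DedupDemo, Msg, and the sample data below are hypothetical; note also that, for illustration, the winner of each group here is chosen by timestamp, whereas the starter's findLasted compares by routing key:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DedupDemo {

    // hypothetical message type standing in for an MmcKafkaMsg implementation
    record Msg(String routekey, long timestamp) { }

    // group by routekey, then keep the record with the greatest timestamp per group
    static List<Msg> dedup(List<Msg> batch) {
        return batch.stream()
                .collect(Collectors.groupingBy(Msg::routekey))
                .values()
                .stream()
                .map(group -> group.stream().max(Comparator.comparing(Msg::timestamp)).orElseThrow())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Msg> batch = List.of(
                new Msg("order-1", 100L),
                new Msg("order-1", 200L), // newer duplicate of order-1
                new Msg("order-2", 150L));

        List<Msg> deduped = dedup(batch);
        System.out.println(deduped.size()); // 2: one record per routing key
    }
}
```

Within one batch, each routing key therefore survives exactly once, which is what makes per-batch aggregation safe.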
Now let's verify the starter with a unit test.

1. Add the jars needed for Kafka testing. Reference article: Kafka unit testing.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
2. Define a message entity and a business processor.
@Data
class DemoMsg implements MmcKafkaMsg {

    private String routekey;

    private String name;

    private Long timestamp;
}

@Slf4j
@Service
public class OneProcessor extends MmcKafkaKafkaAbastrctProcessor<DemoMsg> {

    @Resource
    private DemoService demoService;

    @Override
    protected Class<DemoMsg> getEntityClass() {
        return DemoMsg.class;
    }

    @Override
    protected void dealMessage(List<DemoMsg> datas) {
        demoService.dealMessage("one", datas.stream().map(x -> (MmcKafkaMsg) x).collect(Collectors.toList()));
    }
}
3. Configure the Kafka address and the business processor.

## Kafka config for topic one
spring.kafka.one.enabled=true
spring.kafka.one.consumer.bootstrapServers=${spring.embedded.kafka.brokers}
spring.kafka.one.topic=mmc-topic-one
spring.kafka.one.group-id=group-consumer-one
## name of the business processor class
spring.kafka.one.processor=com.mmc.multi.kafka.starter.OneProcessor
spring.kafka.one.consumer.auto-offset-reset=latest
spring.kafka.one.consumer.max-poll-records=10
spring.kafka.one.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.one.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
4. Write the test class. Note that both topics are declared on @EmbeddedKafka, since the test also produces to topic two.

@Slf4j
@ActiveProfiles("dev")
@ExtendWith(SpringExtension.class)
@SpringBootTest(classes = {MmcMultiConsumerAutoConfiguration.class, DemoService.class, OneProcessor.class})
@TestPropertySource(value = "classpath:application.properties")
@DirtiesContext
@EmbeddedKafka(topics = {"${spring.kafka.one.topic}", "${spring.kafka.two.topic}"})
class AppTest {

    @Resource
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Value("${spring.kafka.one.topic}")
    private String topicOne;

    @Value("${spring.kafka.two.topic}")
    private String topicTwo;

    @Test
    void testDealMessage() throws Exception {
        // produce some test data
        produceMessage();
        // give the consumers time to process it
        Thread.sleep(10 * 1000);
    }

    void produceMessage() {

        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        Producer<String, String> producer = new DefaultKafkaProducerFactory<>(configs, new StringSerializer(), new StringSerializer()).createProducer();

        for (int i = 0; i < 10; i++) {

            DemoMsg msg = new DemoMsg();
            msg.setRoutekey("routekey" + i);
            msg.setName("name" + i);
            msg.setTimestamp(System.currentTimeMillis());

            String json = JsonUtil.toJsonStr(msg);
            producer.send(new ProducerRecord<>(topicOne, "my-aggregate-id", json));
            producer.send(new ProducerRecord<>(topicTwo, "my-aggregate-id", json));
            producer.flush();
        }
    }
}
Building this project into a starter greatly improves development efficiency, since we only need to care about business code. GitHub source: tap here. If you find it useful, a star would be appreciated. In the next article, we upgrade this starter to reach 100k-level consumption speed on a single Kafka partition.