Given a correctly formatted message, how do we make sure it is eventually delivered to Kafka? The approach here is as follows.
The message format is JSON; Jackson serializes the class into a JSON byte array:
public class UserDTOSerializer implements Serializer<UserDTO> {

    // ObjectMapper is thread-safe; reuse one instance instead of allocating a new one per message
    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    @Override
    @SneakyThrows
    public byte[] serialize(final String topic, final UserDTO userDTO) {
        return OBJECT_MAPPER.writeValueAsBytes(userDTO);
    }
}
A few points to note:
/**
 * Tune the settings below yourself, guided by the official documentation, the relevant
 * chapters of "Kafka: The Definitive Guide", and the actual throughput requirements of
 * your workload.
 * When running locally, the bootstrap.servers address must match the EXTERNAL listener
 * declared in docker-compose.yml.
 * @return the producer configuration
 */
public static Properties loadProducerConfig(String valueSerializer) {
Properties result = new Properties();
result.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.102:9093");
result.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
result.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, valueSerializer);
result.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
// Each email message is roughly 20 KB; the defaults give poor throughput, so the settings below raise it.
// Default is 16384 bytes — far too small: messages would be sent to Kafka one at a time instead of
// batched, which defeats the purpose in the email scenario.
result.put(ProducerConfig.BATCH_SIZE_CONFIG, 1048576 * 10);
// Default is 1048576 bytes; it caps the size of a single request (and thus a batch), which is too
// tight for batches of 20 KB messages, so raise it to match the 10 MB batch.size above.
result.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576 * 10);
// Wait up to 10 ms so more messages accumulate into one batch, improving throughput.
result.put(ProducerConfig.LINGER_MS_CONFIG, 10);
return result;
}
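A quick sanity check on those sizes: with roughly 20 KB emails, the default 16384-byte batch.size cannot hold even a single message, while the 10 MB batch configured above holds about 512. A small arithmetic sketch (the 20 KB figure is the estimate from the comments above):

```java
public class BatchSizing {
    public static void main(String[] args) {
        final int messageBytes = 20 * 1024;       // ~20 KB per email message
        final int defaultBatchSize = 16384;       // Kafka's default batch.size
        final int tunedBatchSize = 1048576 * 10;  // 10 MB, as configured above

        // How many whole messages fit in one batch (integer division)
        System.out.println(defaultBatchSize / messageBytes); // 0 -> no batching at all
        System.out.println(tunedBatchSize / messageBytes);   // 512 -> real batching
    }
}
```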
@Log
public class MessageProducer {
public static final KafkaProducer<String, UserDTO> PRODUCER = new KafkaProducer<>(KafkaConfiguration.loadProducerConfig(UserDTOSerializer.class.getName()));
private MessageFailedService messageFailedService = new MessageFailedService();
/**
 * When a send fails, the Kafka producer retries automatically; the relevant settings are
 * retries and delivery.timeout.ms, and the official docs recommend bounding retries with
 * delivery.timeout.ms (default 2 minutes). So after at most 2 minutes the callback below
 * is invoked; since it may fire more than once across retries, we persist the failed
 * message only once during retries, and if a later retry succeeds we update that record.
 * @param userDTO the message payload to send
 */
public void sendMessage(final UserDTO userDTO) {
    ProducerRecord<String, UserDTO> record = new ProducerRecord<>("email", userDTO.getMessageId(), userDTO);
    try {
        PRODUCER.send(record, (recordMetadata, e) -> {
            // First failure: persist the message once, not on every callback
            if (Objects.nonNull(e) && !ProducerMessageIdCache.contains(userDTO.getMessageId())) {
                log.severe("message " + userDTO.getMessageId() + " failed to send");
                saveOrUpdateFailedMessage(userDTO);
                ProducerMessageIdCache.add(userDTO.getMessageId());
            // A retry succeeded: update the previously saved record
            } else if (ProducerMessageIdCache.contains(userDTO.getMessageId()) && Objects.isNull(e)) {
                saveOrUpdateFailedMessage(userDTO);
                ProducerMessageIdCache.remove(userDTO.getMessageId());
            } else {
                log.info("message sent to topic: " + recordMetadata.topic() + ", partition: " + recordMetadata.partition());
            }
        });
    } catch (TimeoutException e) {
        log.warning("send to kafka timed out, messageId: " + userDTO.getMessageId());
        // TODO: custom handling, e.g. email the Kafka administrators
    }
}
/**
 * Persist the failed message so it can be re-sent later.
 * @param userDTO the message that failed to send
 */
@SneakyThrows
private void saveOrUpdateFailedMessage(final UserDTO userDTO) {
MessageFailedEntity messageFailedEntity = new MessageFailedEntity();
messageFailedEntity.setMessageId(userDTO.getMessageId());
ObjectMapper mapper = new ObjectMapper();
messageFailedEntity.setMessageContentJsonFormat(mapper.writeValueAsString(userDTO));
messageFailedEntity.setMessageType(MessageType.EMAIL);
messageFailedEntity.setMessageFailedPhrase(MessageFailedPhrase.PRODUCER);
messageFailedService.saveOrUpdateMessageFailed(messageFailedEntity);
}
}
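The callback's save-once behavior can be checked without a broker. This is a minimal sketch that replays the branch logic above, with a plain `Set` standing in for `ProducerMessageIdCache` and a hypothetical counter standing in for `saveOrUpdateFailedMessage`:

```java
import java.util.HashSet;
import java.util.Set;

public class CallbackDedupSketch {
    static final Set<String> failedIds = new HashSet<>(); // stands in for ProducerMessageIdCache
    static int saveOrUpdateCalls = 0;                      // hypothetical counter for saveOrUpdateFailedMessage

    // Mirrors the branch logic of the send() callback; e == null means the attempt succeeded
    static void onCompletion(String messageId, Exception e) {
        if (e != null && !failedIds.contains(messageId)) {
            saveOrUpdateCalls++;          // first failure: persist once
            failedIds.add(messageId);
        } else if (e == null && failedIds.contains(messageId)) {
            saveOrUpdateCalls++;          // a retry succeeded: update the saved record
            failedIds.remove(messageId);
        }
        // otherwise: repeated failure (already saved) or plain success -> nothing to persist
    }

    public static void main(String[] args) {
        Exception err = new RuntimeException("broker unreachable");
        onCompletion("msg-1", err);   // first failure -> save
        onCompletion("msg-1", err);   // retry fails again -> no extra save
        onCompletion("msg-1", null);  // retry succeeds -> update
        System.out.println(saveOrUpdateCalls);   // prints 2: one save, one update
        System.out.println(failedIds.isEmpty()); // prints true
    }
}
```

However many times the failure callback fires for the same messageId, the database is written exactly twice: once on the first failure and once when (and if) a retry succeeds.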
A few notes on the code above:
public class ProducerMessageIdCache {
private static final Map<String, Integer> MESSAGE_IDS = new ConcurrentHashMap<>();
public static void add(String messageId) {
MESSAGE_IDS.put(messageId, 0);
}
public static void remove(String messageId) {
MESSAGE_IDS.remove(messageId);
}
public static boolean contains(String messageId) {
return MESSAGE_IDS.containsKey(messageId);
}
// TODO: periodically evict expired messageIds
}
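One way to implement that TODO, sketched here as an assumption rather than the author's actual solution: store an insertion timestamp per messageId and sweep expired entries on a schedule with a daemon thread. The class name, the 5-minute TTL, and the 1-minute sweep interval are all illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExpiringMessageIdCache {
    private static final Map<String, Long> MESSAGE_IDS = new ConcurrentHashMap<>();
    private static final long TTL_MILLIS = TimeUnit.MINUTES.toMillis(5); // assumed TTL

    static {
        ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "message-id-cache-sweeper");
            t.setDaemon(true); // do not keep the JVM alive just for the sweep
            return t;
        });
        sweeper.scheduleAtFixedRate(ExpiringMessageIdCache::evictExpired, 1, 1, TimeUnit.MINUTES);
    }

    public static void add(String messageId) {
        MESSAGE_IDS.put(messageId, System.currentTimeMillis());
    }

    public static void remove(String messageId) {
        MESSAGE_IDS.remove(messageId);
    }

    public static boolean contains(String messageId) {
        return MESSAGE_IDS.containsKey(messageId);
    }

    // Drop every entry older than the TTL
    static void evictExpired() {
        long cutoff = System.currentTimeMillis() - TTL_MILLIS;
        MESSAGE_IDS.entrySet().removeIf(e -> e.getValue() < cutoff);
    }
}
```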
Implement the ServletContextListener interface, then register it via a listener element in web.xml:
public class KafkaListener implements ServletContextListener {
private static final List<KafkaProducer> KAFKA_PRODUCERS = new LinkedList<>();
@Override
public void contextInitialized(ServletContextEvent sce) {
KAFKA_PRODUCERS.add(MessageProducer.PRODUCER);
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
KAFKA_PRODUCERS.forEach(KafkaProducer::close);
}
}
<web-app xmlns="https://jakarta.ee/xml/ns/jakartaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://jakarta.ee/xml/ns/jakartaee
https://jakarta.ee/xml/ns/jakartaee/web-app_6_0.xsd"
version="6.0">
<listener>
        <listener-class>com.business.server.listener.KafkaListener</listener-class>
    </listener>
</web-app>
The application is then available at http://localhost:8999/business-server