canal-server's sync to Kafka natively supports Kerberos authentication, but since the Kafka cluster used by this project authenticates with SASL/PLAIN, the canal-server-to-Kafka path needs some adaptation.
Setting up Kafka SASL/PLAIN authentication

I followed an article of the same title, "Setting up Kafka SASL/PLAIN authentication", to learn how to produce to and consume from a SASL/PLAIN-authenticated Kafka cluster in Java.
Option 1: put the credentials in a JAAS file and point the JVM at it via a system property:

```java
Properties props = new Properties();
props.put("bootstrap.servers", "127.0.0.1:8123");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Authentication: the credentials live in a jaas.conf file, referenced via a system property
System.setProperty("java.security.auth.login.config", "E:/work/saslconf/kafka_write_jaas.conf");
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");
Producer<String, String> producer = new KafkaProducer<>(props);
```
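The kafka_write_jaas.conf referenced above would contain a `KafkaClient` login section; a sketch, assuming the same sample credentials used later in this post:

```
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="testkafkaUser"
    password="testKafkaPwd";
};
```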
Option 2: put the credentials directly in the `Properties` object, with no external file:

```java
Properties props = new Properties();
props.put("bootstrap.servers", "127.0.0.1:8123");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Authentication: the credentials go into the props object itself
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required username='testkafkaUser' password='testKafkaPwd';");
Producer<String, String> producer = new KafkaProducer<>(props);
```
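The second approach can be factored into a small helper. A minimal sketch (the class and method names are my own; plain string keys stand in for the `CommonClientConfigs`/`SaslConfigs` constants so it compiles without kafka-clients on the classpath):

```java
import java.util.Properties;

public class SaslPlainConfig {

    // Build producer Properties carrying SASL/PLAIN credentials inline.
    // The string keys match the values of CommonClientConfigs.SECURITY_PROTOCOL_CONFIG
    // and SaslConfigs.SASL_MECHANISM used in the snippet above.
    public static Properties saslPlainProps(String servers, String user, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", servers);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required username='"
                + user + "' password='" + password + "';");
        return props;
    }

    public static void main(String[] args) {
        Properties p = saslPlainProps("127.0.0.1:8123", "testkafkaUser", "testKafkaPwd");
        System.out.println(p.getProperty("sasl.jaas.config"));
    }
}
```

The remaining producer settings (serializers, acks, batching) would be merged in before constructing the `KafkaProducer`.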
With Kafka's producer/consumer authentication understood, the next step is modifying canal-server's Kafka sync code. Reading the source, we find the relevant locations:
```java
public synchronized void start() throws Throwable {
    String serverMode = CanalController.getProperty(properties, CanalConstants.CANAL_SERVER_MODE);
    if (serverMode.equalsIgnoreCase("kafka")) {
        // the producer used for syncing to Kafka
        canalMQProducer = new CanalKafkaProducer();
    } else if (serverMode.equalsIgnoreCase("rocketmq")) {
        canalMQProducer = new CanalRocketMQProducer();
    } else if (serverMode.equalsIgnoreCase("rabbitmq")) {
        canalMQProducer = new CanalRabbitMQProducer();
    }
    MQProperties mqProperties = null;
    if (canalMQProducer != null) {
        // the configuration needed for the Kafka sync; note the buildMQProperties method
        mqProperties = buildMQProperties(properties);
        // ...
```
Part of the buildMQProperties method; we can see that the CanalConstants class holds the keys used to read canal-server's settings from its configuration file:

```java
private static MQProperties buildMQProperties(Properties properties) {
    MQProperties mqProperties = new MQProperties();
    String servers = CanalController.getProperty(properties, CanalConstants.CANAL_MQ_SERVERS);
    if (!StringUtils.isEmpty(servers)) {
        mqProperties.setServers(servers);
    }
    String retries = CanalController.getProperty(properties, CanalConstants.CANAL_MQ_RETRIES);
    if (!StringUtils.isEmpty(retries)) {
        mqProperties.setRetries(Integer.valueOf(retries));
    }
    // ...
```
And here is the existing authentication logic in CanalKafkaProducer's init method:

```java
public void init(MQProperties kafkaProperties) {
    super.init(kafkaProperties);
    this.kafkaProperties = kafkaProperties;
    Properties properties = new Properties();
    properties.put("bootstrap.servers", kafkaProperties.getServers());
    properties.put("acks", kafkaProperties.getAcks());
    properties.put("compression.type", kafkaProperties.getCompressionType());
    properties.put("batch.size", kafkaProperties.getBatchSize());
    properties.put("linger.ms", kafkaProperties.getLingerMs());
    properties.put("max.request.size", kafkaProperties.getMaxRequestSize());
    properties.put("buffer.memory", kafkaProperties.getBufferMemory());
    properties.put("key.serializer", StringSerializer.class.getName());
    properties.put("max.in.flight.requests.per.connection", 1);
    if (!kafkaProperties.getProperties().isEmpty()) {
        properties.putAll(kafkaProperties.getProperties());
    }
    properties.put("retries", kafkaProperties.getRetries());
    if (kafkaProperties.isKerberosEnable()) {
        File krb5File = new File(kafkaProperties.getKerberosKrb5FilePath());
        File jaasFile = new File(kafkaProperties.getKerberosJaasFilePath());
        logger.info("kafka auth... jaasFile exists: {}", jaasFile.exists());
        if (krb5File.exists() && jaasFile.exists()) {
            logger.info("authenticating with Kerberos...");
            // configure Kerberos authentication; absolute paths are required
            System.setProperty("java.security.krb5.conf", krb5File.getAbsolutePath());
            System.setProperty("java.security.auth.login.config", jaasFile.getAbsolutePath());
            System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
            properties.put("security.protocol", "SASL_PLAINTEXT");
            properties.put("sasl.kerberos.service.name", "kafka");
            // ...
```
With the relevant code located, the change is straightforward. First, add two settings (the Kafka account credentials) to canal-server's configuration file:

```properties
canal.mq.kafka.plain.linkUser = testkafkaUser
canal.mq.kafka.plain.linkPassword = testKafkaPwd
```
Then add constants for these two keys to the CanalConstants class:

```java
public static final String CANAL_MQ_KAFKA_PLAIN_LINK_USER = ROOT + "." + "mq.kafka.plain.linkUser";
public static final String CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD = ROOT + "." + "mq.kafka.plain.linkPassword";
```
Next, read these two settings in buildMQProperties, and add the corresponding fields with getters and setters to the MQProperties class:

```java
private static MQProperties buildMQProperties(Properties properties) {
    // ... existing property parsing ...
    String kafkaPlainLinkUser = CanalController.getProperty(properties,
        CanalConstants.CANAL_MQ_KAFKA_PLAIN_LINK_USER);
    if (!StringUtils.isEmpty(kafkaPlainLinkUser)) {
        mqProperties.setPlainLinkUser(kafkaPlainLinkUser);
    }
    String kafkaPlainLinkPwd = CanalController.getProperty(properties,
        CanalConstants.CANAL_MQ_KAFKA_PLAIN_LINK_PASSWORD);
    if (!StringUtils.isEmpty(kafkaPlainLinkPwd)) {
        mqProperties.setPlainLinkPassword(kafkaPlainLinkPwd);
    }
    // ...
}
```
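The MQProperties side of the change is just two String fields with accessors; a minimal sketch, with field names inferred from the setters used above:

```java
// Sketch of the two fields added to MQProperties; names are inferred from
// the setPlainLinkUser/setPlainLinkPassword calls in buildMQProperties.
public class MQProperties {

    private String plainLinkUser;
    private String plainLinkPassword;

    public String getPlainLinkUser() {
        return plainLinkUser;
    }

    public void setPlainLinkUser(String plainLinkUser) {
        this.plainLinkUser = plainLinkUser;
    }

    public String getPlainLinkPassword() {
        return plainLinkPassword;
    }

    public void setPlainLinkPassword(String plainLinkPassword) {
        this.plainLinkPassword = plainLinkPassword;
    }
}
```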
Finally, extend the init method of CanalKafkaProducer with a SASL/PLAIN branch:

```java
// use StringUtils null-safe checks so the branch is skipped (without an NPE)
// when the credentials are not configured
} else if (StringUtils.isNotEmpty(kafkaProperties.getPlainLinkUser())
        && StringUtils.isNotEmpty(kafkaProperties.getPlainLinkPassword())) {
    logger.info("authenticating with SASL_PLAINTEXT...");
    properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    properties.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    properties.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required username='"
            + kafkaProperties.getPlainLinkUser() + "' password='"
            + kafkaProperties.getPlainLinkPassword() + "';");
}
```
Then open a terminal in the project root and build the package:

```shell
mvn clean install -Denv=release
```
I hope this post was helpful. For more notes on canal-adapter pitfalls, follow my WeChat official account. Thanks for your support!