【Java】Spring Boot + Kafka: Configuring Kerberos/GSSAPI Authentication

Background

The Kafka cluster in our test environment was upgraded, and connecting now requires Kerberos (KRB/GSSAPI) authentication along with the corresponding permission management.
Kafka cluster version: 2.1.0.0
Versions in the pom file: Spring Boot 2.0.8.RELEASE, spring-kafka 2.1.7.RELEASE
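
For reference, the spring-kafka dependency in the pom would look like this (these are the standard coordinates; the version is the one noted above, which also matters later in the summary):

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.1.7.RELEASE</version>
</dependency>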

Required configuration files

  1. krb5.conf
  2. user.keytab

The user.keytab file has to be generated yourself, based on the Kafka server's Kerberos configuration.
krb5.conf can usually be copied straight from the Kafka server.
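
As a rough sketch of the keytab step, on an MIT Kerberos KDC the commands look something like this (the principal name kuser2@KAFKA.COM and the output path are illustrative, chosen to match the keytab file and realm used below):

# on the KDC host: create the principal, then export its key to a keytab
kadmin.local -q "addprinc -randkey kuser2@KAFKA.COM"
kadmin.local -q "ktadd -k /tmp/kuser2.keytab kuser2@KAFKA.COM"

The krb5.conf copied from the server looks like this: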

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = KAFKA.COM
 #default_ccache_name = KEYRING:persistent:%{uid}
 dns_lookup_kdc = true

[realms]
 KAFKA.COM = {
  kdc = user1.kafka.com
  admin_server = user1.kafka.com
 }

[domain_realm]
 .kafka.com = KAFKA.COM
 kafka.com = KAFKA.COM
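
One thing the snippets below don't show explicitly: the client JVM also has to be told where this krb5.conf lives. A common approach (the path here is illustrative) is to set the standard java.security.krb5.conf system property before the first Kafka client is created, or pass the equivalent -D flag:

	// run early, e.g. at the top of main(), before any Kafka client is built
	System.setProperty("java.security.krb5.conf", "/path/to/krb5.conf");
	// equivalent JVM flag: -Djava.security.krb5.conf=/path/to/krb5.conf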

Next comes the Kafka configuration itself. Since my project has multiple Kafka sources, I set this up in a configuration class rather than through the auto-configured properties.

The main pieces are as follows:

application.yml

# The backslashes here must be doubled for this to work; my guess is that when the
# JAAS config value is handed to Kafka, one backslash gets consumed as an escape character
kafka:
  krbKeytab: \\resources\\kuser2.keytab

KafkaConfig.java

	@Value("${kafka.krbKeytab}")
	private String krbKeytab;

	@Bean
    public Map batchConsumerConfigs() {
		Map props = new HashMap<>();
		/* 省略服务器集群、提交策略等配置 */
		// Kerberos认证
		props.put(SaslConfigs.SASL_JAAS_CONFIG, "com.sun.security.auth.module.Krb5LoginModule required " +
		        "useTicketCache=true " +
		        "renewTicket=true " +
		        "principal=\"[email protected]" " +
		        "debug=true " +
		        "useKeyTab=true " +
		        "keyTab=\"" + krbKeytab + "\";");
		props.put("security.protocol", "SASL_PLAINTEXT");
		props.put(SaslConfigs.SASL_MECHANISM, SaslConfigs.GSSAPI_MECHANISM);
		props.put(SaslConfigs.SASL_KERBEROS_SERVICE_NAME, "kafka");
	}
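
The config map alone doesn't create any consumers; it still has to be wired into a consumer factory and a listener container factory. A minimal sketch, continuing inside the same class (the generic String types and the batch-listener flag are my assumptions, based on the "batch" naming):

	@Bean
	public ConsumerFactory<String, String> batchConsumerFactory() {
		// org.springframework.kafka.core.{ConsumerFactory, DefaultKafkaConsumerFactory}
		return new DefaultKafkaConsumerFactory<>(batchConsumerConfigs());
	}

	@Bean
	public ConcurrentKafkaListenerContainerFactory<String, String> batchListenerContainerFactory() {
		// org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
		ConcurrentKafkaListenerContainerFactory<String, String> factory =
		        new ConcurrentKafkaListenerContainerFactory<>();
		factory.setConsumerFactory(batchConsumerFactory());
		factory.setBatchListener(true); // assumed, to match the "batch" consumer configs
		return factory;
	}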

Summary

At first I spent quite a while hitting the same error over and over:

Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner  authentication information from the user

It turned out that after the keytab path is stored in props, it still gets parsed again when Kafka reads the JAAS config, so each backslash (\) in the path has to be written twice.
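
Concretely: the value that ends up under sasl.jaas.config must still contain doubled backslashes when Kafka's JAAS parser sees it, because that parser collapses \\ into \ inside quoted strings. With the yml above, the effective line is:

com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true renewTicket=true principal="kuser2@KAFKA.COM" debug=true useKeyTab=true keyTab="\\resources\\kuser2.keytab";

so the path the login module actually opens is \resources\kuser2.keytab.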

After the login finally succeeded, another error appeared:

org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'version': java.nio.BufferUnderflowException

This turned out to be because the spring-kafka dependency was too old; after upgrading to 2.1.7.RELEASE or later, the error went away.
