Kafka 0.10.0 SASL/PLAIN Authentication and Authorization

Note: most of this content is adapted from http://matt33.com/2016/07/29/sasl-plain-kafka/

Kafka Security Mechanisms

Kafka's security mechanism consists of two main parts:

  • Authentication: verifying the identity of clients that connect to the brokers.
  • Authorization: message-level access control over what each client may do.

From the official documentation, the security features can be summarized as follows:

  1. Connections to brokers from clients (producers and consumers), other brokers, and tools can be authenticated using SSL or SASL; SASL/PLAIN is supported as of 0.10.0.
  2. Connections between brokers and ZooKeeper can be authenticated.
  3. Data in transit can be encrypted with SSL, at some cost in performance.
  4. Read and write operations by clients can be authorized.
  5. Authorization is pluggable and can be integrated with external authorization services.

1. Kafka Authentication

Kafka currently supports three authentication mechanisms: SSL, SASL/Kerberos, and SASL/PLAIN. For background on these mechanisms, see the following three articles:

  • How digital certificates work;
  • Digital certificates, digital signatures, SSL (TLS), and SASL;
  • The latency cost of SSL.

1.1 SASL/PLAIN Authentication

You can also refer to "kafka使用SASL验证", a Chinese translation of the official documentation on using SASL with Kafka.

1.1.1 Kafka Server-Side Configuration

Configuring SASL_PLAINTEXT on the server side lets the brokers in the Kafka cluster authenticate to each other and accept JAAS-based authentication from clients.

Step 1: Add the following settings to config/server.properties under the Kafka installation directory:

listeners=SASL_PLAINTEXT://ip:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin

To configure more than one superuser, separate the entries with semicolons:

super.users=User:admin;User:alice

Step 2: Create a JAAS configuration file named kafka_server_jaas.conf and place it in the config directory.

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin"
    user_alice="alice";
};

Here we define two users: admin (password admin) and alice (password alice). The username and password properties are the credentials the broker itself uses for inter-broker connections, while each user_<username>="<password>" entry defines a user that clients (or other brokers) are allowed to authenticate as.

Step 3: Finally, Kafka needs the java.security.auth.login.config JVM property pointing at the JAAS file. In bin/kafka-run-class.sh, define KAFKA_SASL_OPTS and add it to the launch commands:

KAFKA_SASL_OPTS='-Djava.security.auth.login.config=/opt/meituan/kafka_2.10-0.10.0.0/config/kafka_server_jaas.conf'
# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_SASL_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
  exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_SASL_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"
fi
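
Editing bin/kafka-run-class.sh is not the only option. Since the launch commands above already include $KAFKA_OPTS, a minimal alternative sketch, assuming the same JAAS file path as above, is to export the property through that variable before starting the broker:

# Pass the JAAS file through KAFKA_OPTS instead of patching the script
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/meituan/kafka_2.10-0.10.0.0/config/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties
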
1.1.2 Kafka Client Configuration

For producer and consumer programs to connect to and interact with the Kafka cluster, the programs themselves also need to be configured, so that the cluster can authenticate these external clients.

Step 1: On the client side, create a kafka_client_jaas.conf file:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice";
};

Step 2: In the producer and consumer programs, set the JAAS system property and add the SASL-related configuration, as shown below:

System.setProperty("java.security.auth.login.config", ".../kafka_client_jaas.conf"); // JVM system property: path to the client JAAS file
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");

With the above in place, the producer and consumer programs run normally. If the username or password is wrong, the program simply fails to make progress, without any error message; this is something to improve on later.
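
As a quick check from the command line, the console producer that ships with Kafka can be run against the SASL-enabled cluster in the same way. The JAAS path, broker address, topic name, and properties-file name below are placeholders for illustration:

# Point the tool's JVM at the client JAAS file (placeholder path)
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"

# client-sasl.properties contains just the two SASL client settings:
#   security.protocol=SASL_PLAINTEXT
#   sasl.mechanism=PLAIN
bin/kafka-console-producer.sh --broker-list broker1:9092 --topic test --producer.config client-sasl.properties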

2. Kafka Authorization

This section gives a brief introduction to Kafka's ACLs.

2.1 Available Permissions

Permission     Description
READ           Read messages from a topic
WRITE          Write messages to a topic
DELETE         Delete a topic
CREATE         Create a topic
ALTER          Alter a topic
DESCRIBE       Retrieve metadata about a topic
ClusterAction  Cluster-level actions, such as requests exchanged between brokers
ALL            All of the above

The access control lists (ACLs) are stored in ZooKeeper under the path /kafka-acl.
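
A quick way to confirm this, assuming a ZooKeeper instance at localhost:2181, is to list that path with the ZooKeeper shell that ships with Kafka:

bin/zookeeper-shell.sh localhost:2181 ls /kafka-acl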

2.2 Configuring ACLs

Kafka ships with a command-line tool for this, bin/kafka-acls.sh; its options are listed below.

  • --add (Action): Indicates to the script that the user is trying to add an ACL.
  • --remove (Action): Indicates to the script that the user is trying to remove an ACL.
  • --list (Action): Indicates to the script that the user is trying to list ACLs.
  • --authorizer (Configuration, default: kafka.security.auth.SimpleAclAuthorizer): Fully qualified class name of the authorizer.
  • --authorizer-properties (Configuration): key=val pairs passed to the authorizer for initialization. For the default authorizer, an example value is zookeeper.connect=localhost:2181.
  • --cluster (Resource): Specifies the cluster as the resource.
  • --topic [topic-name] (Resource): Specifies the topic as the resource.
  • --group [group-name] (Resource): Specifies the consumer group as the resource.
  • --allow-principal (Principal): Principal in PrincipalType:name format to be added to the ACL with Allow permission. Multiple --allow-principal options may be given in a single command.
  • --deny-principal (Principal): Principal in PrincipalType:name format to be added to the ACL with Deny permission. Multiple --deny-principal options may be given in a single command.
  • --allow-host (Host): IP address from which the principals listed in --allow-principal will have access. If --allow-principal is specified, defaults to *, which means "all hosts".
  • --deny-host (Host): IP address from which the principals listed in --deny-principal will be denied access. If --deny-principal is specified, defaults to *, which means "all hosts".
  • --operation (Operation, default: All): Operation that will be allowed or denied. Valid values are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, All.
  • --producer (Convenience): Convenience option to add/remove ACLs for the producer role. Generates ACLs that allow WRITE and DESCRIBE on the topic and CREATE on the cluster.
  • --consumer (Convenience): Convenience option to add/remove ACLs for the consumer role. Generates ACLs that allow READ and DESCRIBE on the topic and READ on the consumer group.
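
Putting these options together, the following commands sketch a typical workflow; the ZooKeeper address (localhost:2181), topic name (test), and group name (test-group) are placeholders for illustration:

# Allow alice to produce to topic "test" from any host
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --producer --topic test

# Allow alice to consume from topic "test" as part of consumer group "test-group"
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --consumer --topic test --group test-group

# List the ACLs currently set on topic "test"
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --list --topic test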
