Kafka SCRAM

SASL/SCRAM + ACL

  1. Create SCRAM credentials

Create the user for inter-broker communication (this is also the super user):

bin/kafka-configs --zookeeper 172.20.3.20:2188 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

Create the client user abc:

bin/kafka-configs --zookeeper 172.20.3.20:2188 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc],SCRAM-SHA-512=[password=abc]' --entity-type users --entity-name abc	

View the SCRAM credentials:

bin/kafka-configs  --zookeeper localhost:2188 --describe --entity-type users --entity-name abc
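
The output should look roughly like the following (the salt, key, and iteration values shown as ... are placeholders, not real output):

Configs for user-principal 'abc' are SCRAM-SHA-512=salt=...,stored_key=...,server_key=...,iterations=4096,SCRAM-SHA-256=salt=...,stored_key=...,server_key=...,iterations=8192

If a credential ever needs to be revoked, --delete-config is the counterpart of --add-config, for example:

bin/kafka-configs --zookeeper 172.20.3.20:2188 --alter --delete-config 'SCRAM-SHA-256,SCRAM-SHA-512' --entity-type users --entity-name abc
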
  2. Configure the brokers

    1. On each Kafka broker, add a JAAS file like the one below to the etc/kafka directory; we will call it kafka_server_jaas.conf:
    KafkaServer {
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="admin"
      password="admin-secret";
    };
    

    Note: do not leave out the trailing semicolon.

    2. Modify the last line of /opt/rdx/confluent/bin/kafka-server-start:
    ## exec $base_dir/kafka-run-class $EXTRA_ARGS  io.confluent.support.metrics.SupportedKafka "$@"
    exec $base_dir/kafka-run-class $EXTRA_ARGS -Djava.security.auth.login.config=/opt/rdx/confluent/etc/kafka/kafka_server_jaas.conf io.confluent.support.metrics.SupportedKafka "$@"
    
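    As an alternative to editing the start script, the same flag can be supplied through the KAFKA_OPTS environment variable, which kafka-run-class passes to the JVM; a minimal sketch:

    export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/rdx/confluent/etc/kafka/kafka_server_jaas.conf"
    bin/kafka-server-start etc/kafka/server.properties
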
    3. Configure the SASL listener and SASL mechanism in server.properties. For example:

    broker.id=1
    listeners=SASL_PLAINTEXT://0.0.0.0:9092
    advertised.listeners=SASL_PLAINTEXT://node1:9092
    advertised.host.name=node1
    
    security.inter.broker.protocol=SASL_PLAINTEXT
    
    # SCRAM
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    sasl.enabled.mechanisms=SCRAM-SHA-256
    
    # acl
    allow.everyone.if.no.acl.found=false
    super.users=User:admin
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/opt/confluent/data
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=node1:2188
    zookeeper.connection.timeout.ms=6000
    confluent.support.metrics.enable=true
    group.initial.rebalance.delay.ms=0
    confluent.support.customer.id=anonymous
    message.max.bytes=10485760
    
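    Note that instead of a separate JAAS file, brokers on Kafka 0.10.2+ can embed the login module directly in server.properties via a per-listener property; a sketch that would replace the -Djava.security.auth.login.config flag above:

    listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
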
  3. Restart ZooKeeper and the Kafka brokers

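    For example, with the stock Confluent layout (the paths and the zookeeper.properties file name are assumptions; adjust to your deployment):

    bin/kafka-server-stop
    bin/zookeeper-server-stop
    bin/zookeeper-server-start -daemon etc/kafka/zookeeper.properties
    bin/kafka-server-start -daemon etc/kafka/server.properties
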
  4. Configure the clients

    First, test the setup with kafka-console-producer and kafka-console-consumer.

    kafka-console-producer

    1. Create the etc/kafka/client-sasl.properties file:
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    
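    Alternatively, the separate JAAS file created in the next step can be avoided by embedding the login module in the client properties file itself (supported since Kafka 0.10.2); a sketch for the admin user:

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

    With this approach the stock kafka-console-producer can be used without modification.
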
    2. Create the etc/kafka/kafka_client_jaas_admin.conf file:
    KafkaClient {
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="admin"
      password="admin-secret";
    };
    
    3. Copy and modify the kafka-console-producer script:
    cp bin/kafka-console-producer bin/kafka-console-producer-admin
    vim bin/kafka-console-producer-admin
    
    ## exec $(dirname $0)/kafka-run-class  kafka.tools.ConsoleProducer "$@"
    exec $(dirname $0)/kafka-run-class -Djava.security.auth.login.config=/opt/rdx/confluent/etc/kafka/kafka_client_jaas_admin.conf kafka.tools.ConsoleProducer "$@"
    
    4. Create a topic:
    bin/kafka-topics --create --zookeeper localhost:2188 --partitions 1 --replication-factor 1 --topic test051201
    
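    Optionally verify that the topic was created:

    bin/kafka-topics --describe --zookeeper localhost:2188 --topic test051201
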
    5. Test producing messages:
    bin/kafka-console-producer-admin --broker-list 172.20.3.20:9092 --topic test051201 --producer.config etc/kafka/client-sasl.properties
    >hello
    >
    

    As shown, the admin user can produce messages without any ACLs configured, because it is listed in super.users.

    6. Test the abc user:
    vim etc/kafka/kafka_client_jaas_abc.conf
    
    KafkaClient {
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="abc"
      password="abc";
    };
    
    cp bin/kafka-console-producer-admin bin/kafka-console-producer-abc
    vi bin/kafka-console-producer-abc
    exec $(dirname $0)/kafka-run-class -Djava.security.auth.login.config=/opt/rdx/confluent/etc/kafka/kafka_client_jaas_abc.conf kafka.tools.ConsoleProducer "$@"
    

    Produce messages:

    [root@node1 confluent]# bin/kafka-console-producer-abc --broker-list 172.20.3.20:9092 --topic test051201 --producer.config etc/kafka/client-sasl.properties
    WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test051201=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)
    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test051201]
    

    The request fails with TOPIC_AUTHORIZATION_FAILED because user abc has not been granted any permissions yet.

kafka-console-consumer

  1. Create the consumer-abc.properties file (the group.id here must match the consumer group granted in the ACLs later)

    [root@node1 confluent]# vim etc/kafka/consumer-abc.properties
    
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    group.id=abc-group
    
  2. Create the kafka-console-consumer-abc script

    cp bin/kafka-console-consumer bin/kafka-console-consumer-abc
    vim bin/kafka-console-consumer-abc
    
    exec $(dirname $0)/kafka-run-class -Djava.security.auth.login.config=/opt/rdx/confluent/etc/kafka/kafka_client_jaas_abc.conf kafka.tools.ConsoleConsumer "$@"
    
  3. Test the consumer

    [root@node1 confluent]# bin/kafka-console-consumer-abc --bootstrap-server 172.20.3.20:9092 --topic test051201 --consumer.config etc/kafka/consumer-abc.properties --from-beginning
    

    This also fails with an authorization error; the output is omitted here.

ACL Configuration

  1. Grant user abc write access to topic test051201, allowed only from host 172.20.3.20

    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --add --allow-principal User:abc --operation Write --topic test051201 --allow-host 172.20.3.20
    
  2. Grant user abc read access to topic test051201, allowed only from host 172.20.3.20

    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --add --allow-principal User:abc --operation Read --topic test051201 --allow-host 172.20.3.20
    
  3. Grant user abc read access on the abc-group consumer group (needed to consume test051201), allowed only from host 172.20.3.20

    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --add --allow-principal User:abc --operation Read --group abc-group --allow-host 172.20.3.20
    

    When restricting hosts, a wildcard such as 172.20.3.* does not take effect; --allow-host appears to accept only a single IP address (or * for all hosts), not IP patterns, so hosts have to be granted one at a time.

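    The three grants above can also be expressed with the kafka-acls convenience flags --producer and --consumer, which add the standard operation sets for each role in one command; a sketch:

    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --add --allow-principal User:abc --producer --topic test051201 --allow-host 172.20.3.20
    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --add --allow-principal User:abc --consumer --topic test051201 --group abc-group --allow-host 172.20.3.20
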
  4. List the ACLs

    bin/kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2188 --list
    
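    The listing should look roughly like this (exact formatting varies by Kafka version):

    Current ACLs for resource `Topic:test051201`:
        User:abc has Allow permission for operations: Write from hosts: 172.20.3.20
        User:abc has Allow permission for operations: Read from hosts: 172.20.3.20

    Current ACLs for resource `Group:abc-group`:
        User:abc has Allow permission for operations: Read from hosts: 172.20.3.20
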

Running the producer and consumer tests again now succeeds.
