Hands-On Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 3: Kafka Cluster Installation

Continued from the previous post:

Hands-On Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 2

https://blog.csdn.net/lwlfox/article/details/119801897

Deploying Kafka

  • Create the kafka user
    useradd kafka
  • Upload kafka_2.13-2.8.0.tgz to the server and extract it to /data
    tar -zxvf kafka_2.13-2.8.0.tgz -C /data/ && chown -R kafka:kafka /data/kafka_2.13-2.8.0
  • Write the following into /data/kafka_2.13-2.8.0/config/kafka_server_jaas.conf. The KafkaServer section holds the credentials the brokers use to talk to each other (they must match the SCRAM admin account created in ZooKeeper below); the Client section holds the credentials the broker uses to connect to the ZooKeeper ensemble, so its username and password must match the Server section of the ZooKeeper JAAS file.
    KafkaServer {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin";
    };

    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="client"
        password="client";
    };
    
  • Add the following to the Kafka startup script /data/kafka_2.13-2.8.0/bin/kafka-server-start.sh, before the final exec line (a sketch of the result follows)
    export SECURITY_OPTS="-Djava.security.auth.login.config=/data/kafka_2.13-2.8.0/config/kafka_server_jaas.conf"
    export KAFKA_OPTS="$SECURITY_OPTS $KAFKA_OPTS"
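
    For reference, a minimal sketch of the tail of the script after the edit; the exec line already exists in the stock 2.8.0 script, and the two export lines go right before it:

    # Hand the JAAS file to the broker JVM before kafka-run-class.sh is exec'ed
    export SECURITY_OPTS="-Djava.security.auth.login.config=/data/kafka_2.13-2.8.0/config/kafka_server_jaas.conf"
    export KAFKA_OPTS="$SECURITY_OPTS $KAFKA_OPTS"
    exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"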
    
  • Overwrite the existing /data/kafka_2.13-2.8.0/config/server.properties with the following. Java properties files treat a trailing # as part of the value, so all comments must sit on their own lines. On each broker, replace the IPs with that node's own addresses (a per-node example follows the block).
    # Listen with SASL_PLAINTEXT rather than SASL_SSL, i.e. plaintext transport. Inside a corporate
    # network that should be fine; exposing the cluster externally likely calls for SSL.
    # The node's internal IP:
    listeners=SASL_PLAINTEXT://10.50.0.36:9092
    # The node's elastic (public) IP:
    advertised.listeners=SASL_PLAINTEXT://10.228.82.156:9092

    # ZooKeeper connection: the ZK ensemble's connection addresses
    zookeeper.connect=10.228.82.156:2181,10.228.82.214:2181,10.228.82.57:2181

    # Authentication
    # Enable the SCRAM mechanism with the SHA-256 algorithm. SCRAM user credentials are stored in
    # ZooKeeper, which is why an admin account has to be created there manually (see below).
    sasl.enabled.mechanisms=SCRAM-SHA-256
    # Inter-broker communication also authenticates with SCRAM, likewise using SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    # Inter-broker communication uses SASL_PLAINTEXT
    security.inter.broker.protocol=SASL_PLAINTEXT

    # Credentials for the inter-broker connections
    listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="admin" \
        password="admin";

    # Authorization
    # The authorizer class (deprecated since Kafka 2.4 in favor of kafka.security.authorizer.AclAuthorizer,
    # but still functional in 2.8)
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

    # Super users; separate multiple entries with semicolons, e.g. User:admin;User:root
    super.users=User:admin

    # If no matching ACL is found, topics would be visible to every user, so disable that behavior
    allow.everyone.if.no.acl.found=false
    # Log directory
    log.dirs=/data/kafkalogs
    # Allow topic deletion
    delete.topic.enable=true
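
    Only the node-specific values change between the three brokers: listeners and advertised.listeners must carry each node's own IPs, and broker.id must differ per node if you set it explicitly rather than rely on automatic broker ID generation (enabled by default in 2.8). A hypothetical second node, with example addresses, would differ only in these lines:

    # Node 2 only: that node's own internal and elastic IPs (example addresses)
    listeners=SASL_PLAINTEXT://10.50.0.37:9092
    advertised.listeners=SASL_PLAINTEXT://10.228.82.214:9092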
    
  • Create the Kafka log directory
    mkdir -p /data/kafkalogs && chown -R kafka:kafka /data/kafkalogs
  • Manually create the admin account in ZooKeeper. The configuration file above designates this account as a super user, so it has full privileges; running the command on a single node is enough
    /data/kafka_2.13-2.8.0/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin],SCRAM-SHA-512=[password=admin]' --entity-type users --entity-name admin
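    The credentials can be read back as a quick sanity check:
    /data/kafka_2.13-2.8.0/bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name admin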
  • Create an account for logstash and grant it permissions. logstash consumes messages; change the topic and group names to match your actual business (note that topic names are case-sensitive) and set the password as appropriate (I used 123456)
    /data/kafka_2.13-2.8.0/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name logstash

    #Since logstash consumes messages from Kafka in this setup, grant it the consumer role
    /data/kafka_2.13-2.8.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.228.82.156:2181,10.228.82.214:2181,10.228.82.57:2181 --add --allow-principal User:logstash --consumer --topic test --group logstash
    
    
  • Create an account for beats and grant it permissions. beats produces messages; change the topic name to match your actual business and set the password as appropriate (I used 123456)
    /data/kafka_2.13-2.8.0/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name beats
    #Since beats produces messages in this setup, grant it the producer role
    /data/kafka_2.13-2.8.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.228.82.156:2181,10.228.82.214:2181,10.228.82.57:2181 --add --allow-principal User:beats --producer --topic test
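
    Both grants can be listed back with the same tool for verification:
    /data/kafka_2.13-2.8.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.228.82.156:2181,10.228.82.214:2181,10.228.82.57:2181 --list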
    
  • Install Kafka as a systemd service by writing the following to /etc/systemd/system/kafka.service
    [Unit]
    Description=kafka.service
    After=network.target
     
    [Service]
    User=kafka
    Group=kafka
    Type=simple
    Environment=JAVA_HOME=/data/jdk1.8.0_301
    ExecStart=/data/kafka_2.13-2.8.0/bin/kafka-server-start.sh /data/kafka_2.13-2.8.0/config/server.properties
    ExecStop=/data/kafka_2.13-2.8.0/bin/kafka-server-stop.sh
    
    [Install]
    WantedBy=multi-user.target
    
  • Start the Kafka service
    systemctl daemon-reload && systemctl start kafka
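
    To have the broker start at boot and to smoke-test the SASL setup end to end, something like the following works (a sketch assuming the admin credentials created above and the first node's internal IP; adjust both for your environment):

    # Enable at boot and confirm the broker came up
    systemctl enable kafka && systemctl status kafka

    # Client-side SASL settings for the console tools
    printf '%s\n' \
        'security.protocol=SASL_PLAINTEXT' \
        'sasl.mechanism=SCRAM-SHA-256' \
        'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";' \
        > /tmp/client.properties

    # Create a test topic, produce one message, then read it back
    /data/kafka_2.13-2.8.0/bin/kafka-topics.sh --bootstrap-server 10.50.0.36:9092 --command-config /tmp/client.properties --create --topic test --partitions 3 --replication-factor 3
    echo "hello" | /data/kafka_2.13-2.8.0/bin/kafka-console-producer.sh --bootstrap-server 10.50.0.36:9092 --producer.config /tmp/client.properties --topic test
    /data/kafka_2.13-2.8.0/bin/kafka-console-consumer.sh --bootstrap-server 10.50.0.36:9092 --consumer.config /tmp/client.properties --topic test --from-beginning --max-messages 1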

Deploying Kafka-Eagle

Kafka-Eagle is a web UI for managing Kafka clusters.

  • Create the kafkaeagle user
    useradd kafkaeagle
  • Upload kafka-eagle-web-2.0.6-bin.tar.gz to the server and extract it to /data/
    tar -zxvf kafka-eagle-web-2.0.6-bin.tar.gz -C /data/ 
    chown -R kafkaeagle:kafkaeagle /data/kafka-eagle-web-2.0.6
  • Write the following into the configuration file /data/kafka-eagle-web-2.0.6/conf/system-config.properties
    # Multi zookeeper&kafka cluster list -- The client connection address of the Zookeeper cluster is set here
    kafka.eagle.zk.cluster.alias=cluster1
    cluster1.zk.list=10.228.82.156:2181,10.228.82.214:2181,10.228.82.57:2181
    
    
    # Add zookeeper acl
    cluster1.zk.acl.enable=false
    cluster1.zk.acl.schema=digest
    cluster1.zk.acl.username=test
    cluster1.zk.acl.password=test123
    
    # Kafka broker nodes online list
    cluster1.kafka.eagle.broker.size=3
    
    # Zkcli limit -- Zookeeper cluster allows the number of clients to connect to
    kafka.zk.limit.size=25
    
    # Kafka Eagle webui port -- WebConsole port access address
    kafka.eagle.webui.port=8048
    
    # Kafka offset storage -- Offset stored in a Kafka cluster, if stored in the zookeeper, you can not use this option
    cluster1.kafka.eagle.offset.storage=kafka
    
    
    # Whether the Kafka performance monitoring diagram is enabled
    kafka.eagle.metrics.charts=false
    
    # Kafka Eagle keeps data for 30 days by default
    kafka.eagle.metrics.retain=30
    
    # If offset is out of range occurs, enable this property -- Only suitable for kafka sql
    kafka.eagle.sql.fix.error=false
    kafka.eagle.sql.topic.records.max=5000
    
    # Delete kafka topic token -- Set to delete the topic token, so that administrators can have the right to delete
    kafka.eagle.topic.token=keadmin
    
    # Kafka sasl authenticate -- the username and password were created in Kafka earlier; this must be an admin-privileged account, i.e. the super user from the Kafka configuration file above
    cluster1.kafka.eagle.sasl.enable=true
    cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
    cluster1.kafka.eagle.sasl.mechanism=SCRAM-SHA-256
    cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";
    # If not set, the value can be empty
    cluster1.kafka.eagle.sasl.client.id=
    # Add kafka cluster cgroups
    cluster1.kafka.eagle.sasl.cgroup.enable=false
    cluster1.kafka.eagle.sasl.cgroup.topics=kafka_ads01,kafka_ads02
    
    
    
    # Default use sqlite to store data
    kafka.eagle.driver=org.sqlite.JDBC
    # It is important to note that the '/hadoop/kafka-eagle/db' path must be exist.
    kafka.eagle.url=jdbc:sqlite:/data/kafka-eagle/db/ke.db
    kafka.eagle.username=root
    kafka.eagle.password=smartloli
    
  • Create the kafka-eagle database directory
    mkdir -p /data/kafka-eagle/db/ && chown  -R kafkaeagle:kafkaeagle /data/kafka-eagle/
  • Install kafka-eagle as a systemd service by writing the following to /etc/systemd/system/kafka-eagle.service
    [Unit]
    Description=kafka-eagle.service
    After=network.target
     
    [Service]
    User=kafkaeagle
    Group=kafkaeagle
    Type=forking
    Environment=JAVA_HOME=/data/jdk1.8.0_301
    Environment=KE_HOME=/data/kafka-eagle-web-2.0.6
    ExecStart=/data/kafka-eagle-web-2.0.6/bin/ke.sh start
    ExecStop=/data/kafka-eagle-web-2.0.6/bin/ke.sh stop
    
     
    [Install]
    WantedBy=multi-user.target
    
  • Start kafka-eagle
    systemctl daemon-reload && systemctl start kafka-eagle
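
    Once it is up, ke.sh can report the process status, and the web console should answer on the port configured above (8048); Kafka-Eagle's default web login is admin/123456:
    /data/kafka-eagle-web-2.0.6/bin/ke.sh status
    curl -I http://localhost:8048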

Continued in the next post:

Hands-On Guide to Building a Kafka (with SASL Authentication) + ELK Cluster - Part 4

https://blog.csdn.net/lwlfox/article/details/119803107
