Offline installation of Kafka + ZooKeeper with SASL authentication on CentOS 7

1. Install ZooKeeper

1. Download apache-zookeeper-3.5.8-bin.tar.gz.
2. Extract it: tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz
3. Enter the conf directory: cd /opt/apache-zookeeper-3.5.8-bin/conf/
4. Copy the sample configuration file: cp zoo_sample.cfg zoo.cfg
5. Edit zoo.cfg (tickTime is the base time unit in milliseconds, initLimit and syncLimit are measured in ticks, and dataDir/dataLogDir are created in the next step):

vi /opt/apache-zookeeper-3.5.8-bin/conf/zoo.cfg
tickTime=5000
initLimit=10
syncLimit=5
dataDir=/opt/apache-zookeeper-3.5.8-bin/data
dataLogDir=/opt/apache-zookeeper-3.5.8-bin/logs
clientPort=2181

6. Create the data and log directories:

mkdir -p /opt/apache-zookeeper-3.5.8-bin/data
mkdir -p /opt/apache-zookeeper-3.5.8-bin/logs

7. Start ZooKeeper:

cd /opt/apache-zookeeper-3.5.8-bin/
./bin/zkServer.sh start >> /opt/apache-zookeeper-3.5.8-bin/logs/log 2>&1 &

8. Check the status: ./bin/zkServer.sh status
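
If the status looks good, you can also connect with the bundled CLI client and run ls / at its prompt as a quick sanity check (it should list at least the root znodes):

cd /opt/apache-zookeeper-3.5.8-bin/bin/
./zkCli.sh -server localhost:2181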

2. Install Kafka

1. Download Kafka: kafka_2.12-2.5.0.tgz
2. Upload the .tgz to /opt/ and extract it: tar -zxvf kafka_2.12-2.5.0.tgz
3. Edit the configuration:

vi /opt/kafka_2.12-2.5.0/config/server.properties
broker.id=1

listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092

num.network.threads=3
num.io.threads=8

socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

log.dirs=/opt/kafka_2.12-2.5.0/logs/kafka-logs

num.partitions=1
num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0

4. Start the Kafka service:

cd /opt/kafka_2.12-2.5.0/bin/
./kafka-server-start.sh -daemon /opt/kafka_2.12-2.5.0/config/server.properties
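
Optionally confirm the broker is up before continuing (a quick check; this assumes ss is installed and uses Kafka's default server log location):

ss -lntp | grep 9092
tail -n 50 /opt/kafka_2.12-2.5.0/logs/server.log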

5. Verify that the Kafka service works.
Create a single-partition topic with one replica, still from the bin directory:
Command: ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
This creates the topic; the following command lists the existing topics:
Command: ./kafka-topics.sh --list --zookeeper localhost:2181
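
To see the partition and replica assignment of the new topic, the same tool's --describe option can be used from the bin directory:

./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test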

6. Test the service end to end by producing and consuming messages from the console.
Produce messages: ./kafka-console-producer.sh --broker-list localhost:9092 --topic test
Type some text at the prompt; each line terminated by Enter is sent as one message. Press Ctrl+C to stop producing.
Consume messages: ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Note: the old --zookeeper form of the console consumer was removed in Kafka 2.0, so with kafka_2.12-2.5.0 only the --bootstrap-server form above works.

3. Configure Kafka SASL authentication

1. Add the following to server.properties to enable SASL authentication (the listeners and advertised.listeners entries replace the PLAINTEXT ones configured earlier):

listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT 
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
advertised.listeners=SASL_PLAINTEXT://127.0.0.1:9092

2. Create the server-side JAAS file. It must not contain anything extra, not even comments; the last entry must end with a semicolon and so must the closing brace. The username/password pair is the account the broker itself uses for inter-broker traffic, and each user_<name>="<password>" line defines an account that clients may authenticate with:

vi /opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf 
KafkaServer {
	org.apache.kafka.common.security.plain.PlainLoginModule required
	username="kafka"
	password="kafka#secret"
	user_kafka="kafka#secret" 
	user_alice="alice#secret";
};

3. Create the client JAAS file, which the console producer and consumer use in the tests below. Again, it must not contain anything extra, not even comments:

vi /opt/kafka_2.12-2.5.0/config/kafka_client_jaas.conf 
KafkaClient {
	org.apache.kafka.common.security.plain.PlainLoginModule required
	username="kafka"
	password="kafka#secret";
};

4. Point kafka-server-start.sh at the server JAAS file by adding the line below (for example near the top of the script). It is loaded when Kafka starts, and any application that wants to write to Kafka must then authenticate with a matching username and password:

vi /opt/kafka_2.12-2.5.0/bin/kafka-server-start.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf"

5. Point kafka-console-producer.sh at the client JAAS file, used when starting the test producer later:

vi /opt/kafka_2.12-2.5.0/bin/kafka-console-producer.sh 
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_client_jaas.conf"

6. Point kafka-console-consumer.sh at the client JAAS file, used when starting the test consumer later:

vi /opt/kafka_2.12-2.5.0/bin/kafka-console-consumer.sh
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_client_jaas.conf"

7. Edit /opt/kafka_2.12-2.5.0/config/producer.properties and append the following two lines:

vi /opt/kafka_2.12-2.5.0/config/producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
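
With these two lines in place, the console producer can also pick them up via its --producer.config option instead of repeating each property on the command line (a minimal sketch; it still relies on the client JAAS file from step 5 and assumes the bootstrap.servers entry in producer.properties points at this broker):

cd /opt/kafka_2.12-2.5.0/bin/
./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test --producer.config /opt/kafka_2.12-2.5.0/config/producer.properties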

8. Edit /opt/kafka_2.12-2.5.0/config/consumer.properties and append the same two lines as for the producer:

vi /opt/kafka_2.12-2.5.0/config/consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
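
Likewise, the console consumer can load this file through its --consumer.config option (same assumptions as above):

cd /opt/kafka_2.12-2.5.0/bin/
./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning --consumer.config /opt/kafka_2.12-2.5.0/config/consumer.properties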

9. Restart Kafka (if the broker from step 2.4 is still running, stop it first with ./kafka-server-stop.sh) and check logs/server.log for errors:

cd /opt/kafka_2.12-2.5.0/bin/ && ./kafka-server-start.sh -daemon /opt/kafka_2.12-2.5.0/config/server.properties

10. Test:
Start a producer:

cd /opt/kafka_2.12-2.5.0/bin/ && ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN

Start a consumer:

cd /opt/kafka_2.12-2.5.0/bin/ && ./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
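
As a quick negative check, starting a producer without the SASL properties should fail against the SASL_PLAINTEXT listener (expect connection errors or timeouts instead of successful sends):

cd /opt/kafka_2.12-2.5.0/bin/ && ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test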

4. Configure start on boot

1. Create a kafka script under /etc/init.d/:

vi /etc/init.d/kafka
#!/bin/bash

#
# Comments to support chkconfig
# chkconfig: - 50 10
# description: zookeeper-3.5.8+kafka_2.12-2.5.0 service script
#
# Source function library.
. /etc/init.d/functions

### Default variables

export JAVA_HOME=/opt/jdk1.8.0_191
export PATH=$JAVA_HOME/bin:$PATH
ZOOKEEPER_HOME=/opt/apache-zookeeper-3.5.8-bin
KAFKA_HOME=/opt/kafka_2.12-2.5.0


RETVAL=0

start(){
  $ZOOKEEPER_HOME/bin/zkServer.sh start
  echo "zookeeper is started"  
  sleep 3s
  cd $KAFKA_HOME/bin/ && ./kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
  echo "kafka is started"
  RETVAL=$?
  return $RETVAL
}
stop(){
  cd $KAFKA_HOME/bin/ && ./kafka-server-stop.sh
  echo "kafka is stopped"
  $ZOOKEEPER_HOME/bin/zkServer.sh stop
  echo "zookeeper is stopped"
  RETVAL=$?
  return $RETVAL
}
restart(){
  stop
  sleep 3s
  start
}

case $1 in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  *)
    echo "start|stop|restart"
    ;;  
esac
exit 0

2. Grant permissions:

chmod 777 /etc/init.d/kafka

3. Add or remove the service and set its startup mode (--add registers the service, --del unregisters it; see the chkconfig documentation for details):

chkconfig --add kafka
chkconfig --del kafka

4. Start and stop the service:

service kafka start    # start the service
service kafka stop     # stop the service
service kafka restart  # restart the service

5. Enable or disable start on boot:

chkconfig kafka on   # enable start on boot
chkconfig kafka off  # disable start on boot
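
Note: CentOS 7 is systemd-based, so instead of chkconfig you can wrap the same /etc/init.d/kafka script in a unit file. A minimal sketch (the unit name kafka.service is just an example; skip the chkconfig registration above if you go this route):

vi /etc/systemd/system/kafka.service
[Unit]
Description=ZooKeeper + Kafka (wrapper around /etc/init.d/kafka)
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/init.d/kafka start
ExecStop=/etc/init.d/kafka stop

[Install]
WantedBy=multi-user.target

systemctl daemon-reload && systemctl enable --now kafka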
