Setting up a Kafka cluster in KRaft mode
Start three containers named s1, s2, and s3 in Docker, then perform the following steps on each of them.
- Make sure the containers can reach each other; test this with the ping command. If ping fails, check the Docker network configuration first (a minimal container-setup sketch follows).
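A minimal sketch of creating the containers, assuming a CentOS 7 image and a user-defined bridge network (the image, the network name, and the sleep command are placeholders, not part of the original setup):
# Create a user-defined bridge network and three long-running containers
docker network create kafka-net
docker run -d --name s1 --hostname s1 --network kafka-net centos:7 sleep infinity
docker run -d --name s2 --hostname s2 --network kafka-net centos:7 sleep infinity
docker run -d --name s3 --hostname s3 --network kafka-net centos:7 sleep infinity
# Containers on the same user-defined network resolve each other by name
docker exec s1 ping -c 1 s2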
Download OpenJDK 8
wget https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u292-b10/OpenJDK8U-jdk_x64_linux_hotspot_8u292b10.tar.gz -P /opt/java
Extract OpenJDK 8
cd /opt/java
tar -vxf OpenJDK8U-jdk_x64_linux_hotspot_8u292b10.tar.gz
Configure the JDK
- Open /etc/profile.d/openjdk.sh
vi /etc/profile.d/openjdk.sh
- Add the following configuration
export JAVA_HOME=/opt/java/jdk8u292-b10
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
- Apply the configuration
source /etc/profile
- Verify that the JDK was installed successfully
java -version
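If everything is configured correctly, the output should look roughly like this (the exact build details may differ):
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)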
Download the Kafka binary package (Binary downloads)
wget https://archive.apache.org/dist/kafka/3.2.1/kafka_2.12-3.2.1.tgz
Extract the Kafka package (the commands below assume it is extracted under /)
tar xvf kafka_2.12-3.2.1.tgz -C /
Edit the configuration file on each node (a note on the per-node differences follows the three blocks)
vi /kafka_2.12-3.2.1/config/kraft/server.properties
- Configuration on s1
process.roles=broker,controller
node.id=1
controller.listener.names=CONTROLLER
controller.quorum.voters=1@s1:9093,2@s2:9093,3@s3:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://s1:9092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
log.dirs=/kafka/data
- Configuration on s2
process.roles=broker,controller
node.id=2
controller.listener.names=CONTROLLER
controller.quorum.voters=1@s1:9093,2@s2:9093,3@s3:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://s2:9092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
log.dirs=/kafka/data
- Configuration on s3
process.roles=broker,controller
node.id=3
controller.listener.names=CONTROLLER
controller.quorum.voters=1@s1:9093,2@s2:9093,3@s3:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://s3:9092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
log.dirs=/kafka/data
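Note that only node.id and advertised.listeners differ between the three files; every other key is identical. If the edited file from s1 is copied to s2 and s3, the two per-node values could be patched with sed, for example on s2 (a hypothetical convenience, not part of the original steps):
# Hypothetical sketch for s2: patch the two node-specific keys in place
sed -i 's|^node.id=.*|node.id=2|' /kafka_2.12-3.2.1/config/kraft/server.properties
sed -i 's|^advertised.listeners=.*|advertised.listeners=PLAINTEXT://s2:9092|' /kafka_2.12-3.2.1/config/kraft/server.properties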
Generate a unique cluster ID for the storage directories (referred to below as 【uuid】)
/kafka_2.12-3.2.1/bin/kafka-storage.sh random-uuid
Format the storage directories on s1, s2, and s3, using the same 【uuid】 on every node (all members of a KRaft cluster must share one cluster ID)
/kafka_2.12-3.2.1/bin/kafka-storage.sh format -t 【uuid】 -c /kafka_2.12-3.2.1/config/kraft/server.properties
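As a sketch of the full sequence (the shell variable is just a convenience; paths assume Kafka is extracted under /): generate the ID once on s1, then reuse the exact same value when formatting s2 and s3.
# On s1: generate the cluster ID and format the local storage directory
KAFKA_CLUSTER_ID=$(/kafka_2.12-3.2.1/bin/kafka-storage.sh random-uuid)
echo "$KAFKA_CLUSTER_ID"   # note this value down; s2 and s3 must use the same ID
/kafka_2.12-3.2.1/bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c /kafka_2.12-3.2.1/config/kraft/server.properties
# On s2 and s3: format with the ID generated on s1 (shown above as 【uuid】)
/kafka_2.12-3.2.1/bin/kafka-storage.sh format -t 【uuid】 -c /kafka_2.12-3.2.1/config/kraft/server.properties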
Start Kafka on s1, s2, and s3
/kafka_2.12-3.2.1/bin/kafka-server-start.sh -daemon /kafka_2.12-3.2.1/config/kraft/server.properties
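To confirm that each node started cleanly, one option (the log location and the ss tool are assumptions about the default setup, not from the original text) is to check the server log and the listening ports:
# Check the tail of the broker log for errors during startup
tail -n 50 /kafka_2.12-3.2.1/logs/server.log
# The broker (9092) and controller (9093) listeners should both be open
ss -lnt | grep -E ':9092|:9093'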
Create a topic
/kafka_2.12-3.2.1/bin/kafka-topics.sh --create --bootstrap-server s1:9092,s2:9092,s3:9092 --replication-factor 3 --partitions 3 --topic test
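To verify that the partitions and their replicas are spread across all three brokers, describe the topic:
/kafka_2.12-3.2.1/bin/kafka-topics.sh --describe --bootstrap-server s1:9092,s2:9092,s3:9092 --topic test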
List topics
/kafka_2.12-3.2.1/bin/kafka-topics.sh --list --bootstrap-server s1:9092,s2:9092,s3:9092
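As a quick end-to-end smoke test (an optional extra, not part of the original steps), produce a few messages on one node and consume them on another:
# On s1: type a few messages, then press Ctrl+C to exit
/kafka_2.12-3.2.1/bin/kafka-console-producer.sh --bootstrap-server s1:9092 --topic test
# On s2: read the topic from the beginning
/kafka_2.12-3.2.1/bin/kafka-console-consumer.sh --bootstrap-server s2:9092 --topic test --from-beginning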
Stop Kafka (run on each of s1, s2, and s3)
/kafka_2.12-3.2.1/bin/kafka-server-stop.sh
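If the containers were created from the Docker host as in the earlier sketch, a hypothetical convenience loop (container names and paths as assumed above) could stop all three at once:
# Run from the Docker host, not inside a container
for h in s1 s2 s3; do
  docker exec "$h" /kafka_2.12-3.2.1/bin/kafka-server-stop.sh
done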