JDK: 1.8
OS: CentOS Linux release 7.4.1708 (Core)
Install path: /home/install_package
Host: floating IP 10.11.0.200, real IP 192.168.57.30
apache-zookeeper-3.8.1-bin.tar.gz download path
kafka_2.13-3.4.0.tgz download path
Upload the ZooKeeper package to the install path and install it:
$ mkdir zk
# Create the ZooKeeper data directory
$ mkdir zk/data
# Create the ZooKeeper log directory
$ mkdir zk/logs
# Unpack the archive
$ tar -zxvf apache-zookeeper-3.8.1-bin.tar.gz
# Configure environment variables: append the following
$ vi /etc/profile
export ZK_HOME=/home/install_package/apache-zookeeper-3.8.1-bin
export PATH=$ZK_HOME/bin:$PATH
$ source /etc/profile
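After sourcing /etc/profile it is worth confirming the variable actually landed on PATH before calling any zkServer.sh commands. A minimal sketch, assuming this guide's install path:

```shell
# Check that the ZooKeeper bin directory made it onto PATH
# (the path below is this guide's assumed install location).
export ZK_HOME=/home/install_package/apache-zookeeper-3.8.1-bin
export PATH=$ZK_HOME/bin:$PATH
case ":$PATH:" in
  *":$ZK_HOME/bin:"*) ZK_ON_PATH=yes ;;
  *)                  ZK_ON_PATH=no  ;;
esac
echo "ZK_HOME on PATH: $ZK_ON_PATH"
```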
# Generate the ZooKeeper config file
$ cd apache-zookeeper-3.8.1-bin/conf
$ cp zoo_sample.cfg zoo.cfg
$ vi zoo.cfg
# Data directory
dataDir=/home/install_package/zk/data
# Transaction log directory (optional; left commented out here)
#dataLogDir=/home/install_package/zk/logs
# Heartbeat interval (one tick), in milliseconds
tickTime=2000
# Time allowed for a follower to initially connect and sync to the leader, in ticks (5 ticks here)
initLimit=5
# Maximum lag allowed between leader and follower, in ticks (2 ticks here)
syncLimit=2
# Client connection port
clientPort=2181
# Purge interval in hours; the default 0 disables auto-purge
autopurge.purgeInterval=1
# Used together with the setting above: number of snapshots to retain (default 3)
autopurge.snapRetainCount=5
# The settings below are not needed for a standalone deployment
# server.NUM=IP:port1:port2, where NUM is this server's number and IP its address;
# port1 is the leader/follower communication port, port2 the leader-election port
# Port pairs must not clash across instances, e.g.:
#server.0=192.168.101.136:12888:13888
#server.1=192.168.101.146:12888:13888
$ cd /home/install_package/zk/data
# The 0 here must match the x in this node's server.x line in zoo.cfg
$ echo '0' > myid
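In a multi-node ensemble, every myid must pair with a server.N line in zoo.cfg, and a mismatch only surfaces as a startup failure. A quick consistency check can be scripted; the sketch below uses temporary stand-in files so it does not touch the real config:

```shell
# Sanity-check that this node's myid has a matching server.N entry in zoo.cfg.
# CONF and DATA are temporary stand-ins for the real zoo.cfg and dataDir.
CONF=$(mktemp)
DATA=$(mktemp -d)
printf 'server.0=192.168.101.136:12888:13888\nserver.1=192.168.101.146:12888:13888\n' > "$CONF"
echo '0' > "$DATA/myid"
ID=$(cat "$DATA/myid")
if grep -q "^server\.$ID=" "$CONF"; then
  MYID_OK=yes
else
  MYID_OK=no
fi
echo "myid $ID matches zoo.cfg: $MYID_OK"
rm -rf "$CONF" "$DATA"
```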
# Start ZooKeeper
$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/install_package/apache-zookeeper-3.8.1-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check ZooKeeper status
$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/install_package/apache-zookeeper-3.8.1-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone
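The status output can also be parsed for a scripted health check. The sketch below hard-codes sample output so the parsing logic runs without a live server; on the real host, `STATUS=$(zkServer.sh status 2>/dev/null)` would supply it instead (the four-letter `srvr` command over port 2181 is an alternative, if whitelisted via `4lw.commands.whitelist`):

```shell
# Extract the server mode from zkServer.sh status output.
# STATUS is hard-coded sample output so this runs offline; replace it
# with real command output on the host.
STATUS='ZooKeeper JMX enabled by default
Using config: /home/install_package/apache-zookeeper-3.8.1-bin/bin/../conf/zoo.cfg
Mode: standalone'
MODE=$(printf '%s\n' "$STATUS" | awk -F': ' '/^Mode:/ {print $2}')
echo "zookeeper mode: $MODE"
```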
Upload the Kafka package to the install path and install it:
$ mkdir kafka
# Create the Kafka log directory
$ mkdir kafka/logs
# Unpack the archive
$ tar -zxvf kafka_2.13-3.4.0.tgz
# Configure environment variables: append the following
$ vi /etc/profile
export KAFKA_HOME=/home/install_package/kafka_2.13-3.4.0
export PATH=$KAFKA_HOME/bin:$PATH
$ source /etc/profile
# Edit the Kafka configuration
$ cd kafka_2.13-3.4.0/config
$ vi server.properties
# broker.id must be unique per instance
broker.id=0
# The host's IP and port
listeners=PLAINTEXT://192.168.57.30:9092
advertised.listeners=PLAINTEXT://10.11.0.203:9092
# Log storage path
log.dirs=/home/install_package/kafka/logs
# ZooKeeper connection string (comma-separated for an ensemble)
zookeeper.connect=10.11.0.200:2181
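When cloning this server.properties to additional brokers, broker.id and the listener addresses are the per-host values that must change. A minimal templating pass over a scratch copy, with a hypothetical second broker's id and IP made up for illustration:

```shell
# Rewrite the per-host fields in a scratch copy of server.properties.
# NEW_ID and NEW_IP are hypothetical values for a second broker.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
broker.id=0
listeners=PLAINTEXT://192.168.57.30:9092
EOF
NEW_ID=1
NEW_IP=192.168.57.31
sed -i "s/^broker\.id=.*/broker.id=$NEW_ID/; s|//[^:]*:|//$NEW_IP:|" "$PROPS"
RESULT=$(cat "$PROPS")
echo "$RESULT"
rm -f "$PROPS"
```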
# Start Kafka
$ kafka-server-start.sh -daemon /home/install_package/kafka_2.13-3.4.0/config/server.properties
# Create a topic named test-topic with 7 partitions and a replication factor of 2
# The replication factor cannot exceed the number of brokers
# Note: Kafka 3.x removed the --zookeeper option from kafka-topics.sh; use --bootstrap-server
$ kafka-topics.sh --create --bootstrap-server 10.11.0.203:9092 --replication-factor 2 --partitions 7 --topic test-topic
Created topic test-topic.
# Inspect test-topic
$ kafka-topics.sh --describe --bootstrap-server 10.11.0.203:9092 --topic test-topic
Topic:test-topic PartitionCount:7 ReplicationFactor:2 Configs:
Topic: test-topic Partition: 0 Leader: 0 Replicas: 0,1 Isr: 0,1
Topic: test-topic Partition: 1 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: test-topic Partition: 2 Leader: 0 Replicas: 0,1 Isr: 0,1
Topic: test-topic Partition: 3 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: test-topic Partition: 4 Leader: 0 Replicas: 0,1 Isr: 0,1
Topic: test-topic Partition: 5 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: test-topic Partition: 6 Leader: 0 Replicas: 0,1 Isr: 0,1
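The alternating Leader column above (0, 1, 0, 1, ...) is no accident: Kafka's default assignment spreads partition leadership round-robin across brokers, roughly leader(p) = (p + offset) % brokerCount, where the per-topic offset is actually chosen at random (0 here, to match the output above). A sketch of that spread:

```shell
# Round-robin leader spread over 2 brokers. OFFSET is the per-topic start
# index; Kafka picks it at random, 0 is assumed here for illustration.
BROKERS=2
OFFSET=0
LEADERS=""
for P in 0 1 2 3 4 5 6; do
  LEADERS="$LEADERS$(( (P + OFFSET) % BROKERS )) "
done
LEADERS=${LEADERS% }
echo "leaders: $LEADERS"
```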
Cause: in OpenStack, a VM's floating IP is mapped to its real IP by NAT. Traffic sent to the floating IP is forwarded to the real IP, but the floating IP is never bound to a local interface, so if Kafka's listeners is set to the floating IP the socket bind fails and Kafka cannot start.
Fix: use the real IP (visible via ifconfig) in listeners; the floating IP belongs in advertised.listeners, which is the address external clients are told to connect to.
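Whether an address is locally bindable can be checked before writing it into listeners. The sketch below hard-codes sample `ip -o addr` output so it runs anywhere; on the host itself, `ADDRS=$(ip -o addr)` would supply the real interface list:

```shell
# Classify candidate addresses: only an address present on a local interface
# can go in listeners; a floating IP belongs in advertised.listeners only.
# ADDRS is sample output; replace with $(ip -o addr) on the real host.
ADDRS='2: eth0    inet 192.168.57.30/24 brd 192.168.57.255 scope global eth0'
BINDABLE=""
NOT_BINDABLE=""
for CANDIDATE in 192.168.57.30 10.11.0.200; do
  if printf '%s\n' "$ADDRS" | grep -q "inet $CANDIDATE/"; then
    BINDABLE="$BINDABLE$CANDIDATE "
  else
    NOT_BINDABLE="$NOT_BINDABLE$CANDIDATE "
  fi
done
echo "bindable (listeners): $BINDABLE"
echo "floating only (advertised.listeners): $NOT_BINDABLE"
```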
Start a producer on 203
$ kafka-console-producer.sh --bootstrap-server 10.11.0.203:9092 --topic test-topic
Start a consumer on 203
$ kafka-console-consumer.sh --bootstrap-server 10.11.0.203:9092 --topic test-topic --from-beginning