Kafka depends on ZooKeeper, so ZooKeeper must be configured before setting up Kafka.
zookeeper:127.0.0.1:2181
kafka1: 127.0.0.1:9092
kafka2: 127.0.0.1:9093
kafka3: 127.0.0.1:9094
1. Install docker-compose
curl -L http://mirror.azure.cn/docker-toolbox/linux/compose/1.25.4/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
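To confirm the installation worked, check the version:

```shell
# Should print the installed version, e.g. docker-compose version 1.25.4
docker-compose --version
```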
2. Create the docker-compose.yml file
version: '3.3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
    volumes:
      - /data/zookeeper/data:/data
      - /data/zookeeper/datalog:/datalog
      - /data/zookeeper/logs:/logs
    restart: always
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 192.168.10.219:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.10.219:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka1/data:/data/kafka-data
    restart: unless-stopped
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka2
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 192.168.10.219:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.10.219:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka2/data:/data/kafka-data
    restart: unless-stopped
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka3
    ports:
      - 9094:9094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 192.168.10.219:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.10.219:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka3/data:/data/kafka-data
    restart: unless-stopped
Parameter notes:
KAFKA_BROKER_ID: unique ID of each broker in the cluster
KAFKA_ZOOKEEPER_CONNECT: ZooKeeper address the broker registers with
KAFKA_ADVERTISED_LISTENERS: the address advertised to clients; it must be reachable from outside the container
KAFKA_LISTENERS: the address the broker actually binds to inside the container
KAFKA_LOG_DIRS: directory where message data is stored
KAFKA_LOG_RETENTION_HOURS: how long messages are retained, in hours
3. Start the cluster
docker-compose up -d
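After starting, you can verify that all four containers are up:

```shell
# List the services defined in docker-compose.yml and their state
docker-compose ps
# Or with plain docker: show container names and status
docker ps --format '{{.Names}}\t{{.Status}}'
```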
Testing
1. Log in to the kafka1 container
docker-compose exec kafka1 bash
Change to the /opt/kafka/bin directory
cd /opt/kafka/bin/
Create a topic named first, with 3 partitions and 2 replicas:
./kafka-topics.sh --create --topic first --zookeeper 192.168.10.219:2181 --partitions 3 --replication-factor 2
The role of ZooKeeper in Kafka: https://www.jianshu.com/p/a036405f989c
Note: the replication factor cannot exceed the number of brokers (the partition count can); otherwise topic creation fails.
2. List topics
./kafka-topics.sh --list --zookeeper 192.168.10.219:2181
3. View the details of a topic
./kafka-topics.sh --describe --topic first --zookeeper 192.168.10.219:2181
4. On the host, change to /data/kafka1/data to see the topic's data.
Notes: each partition gets its own directory there (e.g. first-0, first-1), and the partitions are spread across the three brokers' data directories.
5. Create a producer and send messages to the topic.
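The original post does not show the command for this step; a minimal sketch using the console producer shipped with Kafka (broker address taken from the compose file above):

```shell
# Inside the kafka1 container
cd /opt/kafka/bin/
./kafka-console-producer.sh --broker-list 192.168.10.219:9092 --topic first
# Each line typed afterwards is sent as one message; exit with Ctrl+C
```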
6. Log in to the kafka2 or kafka3 container (see step 1) and create a consumer to receive messages from the topic.
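Again the command itself is not shown; a sketch using the console consumer, reading the topic from the beginning:

```shell
# From the host, enter the kafka2 container
docker-compose exec kafka2 bash
cd /opt/kafka/bin/
# --from-beginning replays all retained messages, not just new ones
./kafka-console-consumer.sh --bootstrap-server 192.168.10.219:9092 --topic first --from-beginning
```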
Common commands
1. Delete a topic
./kafka-topics.sh --delete --topic first --zookeeper 192.168.10.219:2181
Deleting a topic does not remove it immediately; the topic is first marked for deletion. Under /data/kafka1/data you can see the partition directories renamed with a delete marker.
2. Check the number of messages in a topic (--time -1 returns the latest offset of each partition)
./kafka-run-class.sh kafka.tools.GetOffsetShell --topic second --time -1 --broker-list 192.168.10.219:9092
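GetOffsetShell prints one "topic:partition:offset" line per partition, so to get a single total you can sum the third field (this equals the message count only if no messages have yet been deleted by retention):

```shell
# Sum the latest offsets across all partitions of the topic
./kafka-run-class.sh kafka.tools.GetOffsetShell --topic second --time -1 \
    --broker-list 192.168.10.219:9092 | awk -F ':' '{sum += $3} END {print sum}'
```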
3. List all consumer groups
./kafka-consumer-groups.sh --bootstrap-server 192.168.56.101:9092 --list
4. View the contents of a log file (message contents)
./kafka-dump-log.sh --files /data/kafka-data/my-topic-0/00000000000000000000.log --print-data-log
This inspects the messages of topic my-topic in log file 00000000000000000000.log; --print-data-log prints the message payloads (without it, payloads are not shown).
5. Change a topic's partition count (it can only be increased, never decreased)
./kafka-topics.sh --alter --topic al-test --partitions 2 --zookeeper 192.168.10.219:2181
6. Change how long a topic's data is retained (see the log.retention.hours setting in server.properties; the default is 168 hours)
./kafka-topics.sh --alter --zookeeper 192.168.56.101:2181 --topic al-test --config retention.ms=86400000
Here it is set to 24 hours = 24 * 3600 * 1000 ms.
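On newer Kafka versions, altering topic configs through kafka-topics.sh is deprecated; the equivalent sketch with kafka-configs.sh (same topic and ZooKeeper address as above):

```shell
# Set the per-topic retention to 24 hours via kafka-configs.sh
./kafka-configs.sh --zookeeper 192.168.56.101:2181 --alter \
    --entity-type topics --entity-name al-test \
    --add-config retention.ms=86400000
```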
Summary
A common error when a broker fails to start: "The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong." This means the cluster ID the broker stored on disk no longer matches the one in ZooKeeper, typically because zookeeper.connect points at the wrong ensemble or the ZooKeeper data was wiped while the broker's data directory was kept.
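A sketch of how to diagnose this, assuming the bind-mounted volume paths from the compose file above:

```shell
# The broker records its cluster ID in meta.properties inside its data dir,
# which is bind-mounted to the host at /data/kafka1/data
cat /data/kafka1/data/meta.properties   # shows cluster.id and broker.id
# If the stored cluster ID is stale (e.g. ZooKeeper was reinitialized),
# remove the file and restart the broker so it re-registers:
# rm /data/kafka1/data/meta.properties && docker-compose restart kafka1
```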
Reference: docker-compose 搭建 kafka 集群 - 仅此而已-远方 - 博客园