docker pull zookeeper
docker pull wurstmeister/kafka
docker run -d --name zookeeper -p 2181:2181 zookeeper
docker run -d --name kafka -p 9092:9092 \
--link zookeeper:zookeeper \
--env KAFKA_BROKER_ID=1 \
--env HOST_IP=192.168.218.131 \
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=192.168.218.131 \
--env KAFKA_ADVERTISED_PORT=9092 \
-t wurstmeister/kafka
Replace 192.168.218.131 in HOST_IP and KAFKA_ADVERTISED_HOST_NAME with your host machine's IP address; otherwise Kafka may be unreachable from other machines.
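To avoid editing the IP in several places, the address can be factored into a shell variable first (a minimal sketch; HOST_IP here is a local shell variable on the host, distinct from the container environment variable of the same name):

```shell
# Set once on the host, then reuse in every docker run and
# kafka-topics command below via ${HOST_IP}.
HOST_IP=192.168.218.131   # replace with your host's actual IP
echo "KAFKA_ADVERTISED_HOST_NAME=${HOST_IP}"
```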
Open a shell inside the kafka container:
docker exec -it kafka /bin/bash
Change to the Kafka installation directory:
cd /opt/kafka_2.11-2.0.1/
Create a topic:
bin/kafka-topics.sh --create \
--zookeeper 192.168.218.131:2181 \
--topic test1 \
--partitions 3 \
--replication-factor 1
View the topic just created:
bin/kafka-topics.sh --describe --zookeeper 192.168.218.131:2181 --topic test1
Output:
Topic:test1 PartitionCount:3 ReplicationFactor:1 Configs:
Topic: test1 Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: test1 Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: test1 Partition: 2 Leader: 1 Replicas: 1 Isr: 1
Start a producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1
In another terminal, start a consumer:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1
Type 123456 into the producer terminal. If 123456 appears in the consumer terminal, the message was delivered successfully.
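The same round trip can also be run non-interactively, which is handy as a scripted smoke test (a sketch, assuming the same paths inside the kafka container; `--max-messages` and `--timeout-ms` make the consumer exit after one record instead of blocking):

```shell
# Produce one record, then read it back from the beginning and exit.
echo "123456" | bin/kafka-console-producer.sh \
  --broker-list localhost:9092 --topic test1
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic test1 \
  --from-beginning --max-messages 1 --timeout-ms 10000
```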
With Docker, multiple Kafka brokers can be brought up quickly on the same machine; only the broker ID and the host port need to change:
docker run -d --name kafka2 -p 9093:9092 \
-e KAFKA_BROKER_ID=2 \
-e HOST_IP=192.168.218.131 \
-e KAFKA_ZOOKEEPER_CONNECT=192.168.218.131:2181 \
-e KAFKA_ADVERTISED_HOST_NAME=192.168.218.131 \
-e KAFKA_ADVERTISED_PORT=9093 \
-t wurstmeister/kafka
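To confirm that both brokers have registered with ZooKeeper, the broker IDs can be listed from the zookeeper container (a sketch; it assumes `zkCli.sh` is on the PATH inside the official zookeeper image):

```shell
# Lists the registered broker IDs; expect 1 and 2 once both are up.
docker exec zookeeper zkCli.sh -server localhost:2181 ls /brokers/ids
```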
Create a topic with a replication factor of 2 and 2 partitions:
bin/kafka-topics.sh --create \
--zookeeper 192.168.218.131:2181 \
--replication-factor 2 \
--partitions 2 \
--topic test2
From the same /opt/kafka_2.11-2.0.1/ directory inside the kafka container, describe the new topic:
bin/kafka-topics.sh --describe --zookeeper 192.168.218.131:2181 --topic test2
Output:
Topic:test2 PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test2 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: test2 Partition: 1 Leader: 2 Replicas: 2,1 Isr: 2,1
The output shows that partition 0's leader is broker 1 and partition 1's leader is broker 2, and that each partition has replicas on both brokers 1 and 2. Isr lists the replicas that are currently in sync (alive).
After stopping broker 1 (the container named kafka):
docker stop kafka
describe the topic again. Output:
Topic:test2 PartitionCount:2 ReplicationFactor:2 Configs:
Topic: test2 Partition: 0 Leader: 2 Replicas: 1,2 Isr: 2
Topic: test2 Partition: 1 Leader: 2 Replicas: 2,1 Isr: 2
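Restarting the stopped broker should let its replicas rejoin the Isr once they catch up (a sketch; the /opt/kafka_2.11-2.0.1/ path inside the container is assumed from the earlier steps):

```shell
docker start kafka
# Re-check after the replica has caught up; Isr should again list both brokers.
docker exec kafka /opt/kafka_2.11-2.0.1/bin/kafka-topics.sh --describe \
  --zookeeper 192.168.218.131:2181 --topic test2
```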