Kafka is a distributed message queue based on the publish/subscribe model, mainly used for real-time processing of big data.
Basic concepts:
There is a blog post that summarizes Kafka's basic concepts quite well; the figures here also come from that referenced article (link at the end).
On Docker Hub the two most commonly used Kafka images are wurstmeister/kafka and bitnami/kafka; bitnami/kafka appears to be updated more frequently.
In a production environment you would add more configuration, for example mapping the data and log directories out of the container. For convenience, this post uses wurstmeister/kafka for a simple installation.
Note:
This version of Kafka still depends on ZooKeeper, so a ZooKeeper service must be started first.
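For reference, here is a minimal sketch (not part of the original steps) of starting that ZooKeeper container first; it assumes the zookeeper:3.6.3 image, container name, and host IP that appear in the docker ps output in step 3:
docker run -d --name zookeeper_v3.6.3 \
  -p 192.168.xxx.xxx:2181:2181 \
  zookeeper:3.6.3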
1) Search for the image and check local images
[root@centos7 ~]# docker search wurstmeister/kafka
2) Pull the image
Pull the image if it is not present locally; if it already exists, skip this step.
[root@centos7 ~]# docker pull wurstmeister/kafka
3) Create and start the container
[root@centos7 ~]# docker run -d --name wurstmeister_kafka \
  -p 192.168.xxx.xxx:9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.xxx.xxx:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.xxx.xxx:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -t wurstmeister/kafka
071873d715c0a2473243a146b18d8e66b0ff8a9f7321b0de0f4b70751d65adbe
[root@centos7 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
071873d715c0 wurstmeister/kafka "start-kafka.sh" 13 seconds ago Up 10 seconds 192.168.xxx.xxx:9092->9092/tcp wurstmeister_kafka
9bb8ce0a893f zookeeper:3.6.3 "/docker-entrypoint.…" 11 months ago Up 12 hours 2888/tcp, 3888/tcp, 192.168.xxx.xxx:2181->2181/tcp, 8080/tcp zookeeper_v3.6.3
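In line with the earlier note about production use, below is a hedged sketch of the same command with the message data directory mapped out to the host. The host path /data/kafka is a hypothetical example; /kafka is where this image keeps its data (see log.dirs in server.properties further down). You would stop and remove the existing container, or pick a different container name, before running it:
docker run -d --name wurstmeister_kafka \
  -p 192.168.xxx.xxx:9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.xxx.xxx:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.xxx.xxx:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -v /data/kafka:/kafka \
  wurstmeister/kafka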
4) Enter the running Kafka container
Enter the running Kafka container and look around the directory structure, paying particular attention to the files under the /opt/kafka_2.13-2.8.1/bin directory.
[root@centos7 ~]# docker exec -it 071873d715c0 /bin/bash
bash-5.1# ls
bin dev etc home kafka lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
bash-5.1# ls /opt
kafka kafka_2.13-2.8.1 overrides
bash-5.1# ls /opt/kafka_2.13-2.8.1/
LICENSE NOTICE bin config libs licenses logs site-docs
bash-5.1# ls /opt/kafka_2.13-2.8.1/bin
connect-distributed.sh kafka-dump-log.sh kafka-storage.sh
connect-mirror-maker.sh kafka-features.sh kafka-streams-application-reset.sh
connect-standalone.sh kafka-leader-election.sh kafka-topics.sh
kafka-acls.sh kafka-log-dirs.sh kafka-verifiable-consumer.sh
kafka-broker-api-versions.sh kafka-metadata-shell.sh kafka-verifiable-producer.sh
kafka-cluster.sh kafka-mirror-maker.sh trogdor.sh
kafka-configs.sh kafka-preferred-replica-election.sh windows
kafka-console-consumer.sh kafka-producer-perf-test.sh zookeeper-security-migration.sh
kafka-console-producer.sh kafka-reassign-partitions.sh zookeeper-server-start.sh
kafka-consumer-groups.sh kafka-replica-verification.sh zookeeper-server-stop.sh
kafka-consumer-perf-test.sh kafka-run-class.sh zookeeper-shell.sh
kafka-delegation-tokens.sh kafka-server-start.sh
kafka-delete-records.sh kafka-server-stop.sh
At this point, the Kafka service has been installed and started.
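One quick way to verify that the broker is reachable (not shown in the original article) is the kafka-broker-api-versions.sh script from the bin directory listed above:
/opt/kafka_2.13-2.8.1/bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.xxx.xxx:9092
If the broker and its advertised listener are configured correctly, this prints the API versions supported by broker 0.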
When creating the container, a number of configuration parameters were passed as command-line options (the wurstmeister image maps KAFKA_-prefixed environment variables onto the corresponding server.properties keys); you can also modify the configuration file and then rebuild and restart the container.
Kafka's configuration file server.properties looks as follows (some comments have been removed):
bash-5.1# cat /opt/kafka_2.13-2.8.1/config/server.properties
# The id of the broker. This must be set to a unique integer for each broker.
# the broker.id property must be unique within a Kafka cluster
broker.id=0
# the IP address of the machine where Kafka is deployed and the port it serves on
listeners=PLAINTEXT://0.0.0.0:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.xxx.xxx:9092
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/kafka/kafka-logs-071873d715c0
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string: a comma-separated list of host:port pairs,
# e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
zookeeper.connect=192.168.xxx.xxx:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
############################# Group Coordinator Settings #############################
group.initial.rebalance.delay.ms=0
port=9092
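For reference (this command is not in the original article), you can print only the active, non-comment settings from inside the container like this:
grep -v '^#' /opt/kafka_2.13-2.8.1/config/server.properties | grep -v '^$'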
The commands below use absolute paths; relative paths work as well.
Note:
Every command accepts additional options; running a command without any arguments prints its detailed usage.
Let's manually create a topic named "tpcTest" with a single partition and a replication factor of 1:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --create --zookeeper 192.168.xxx.xxx:2181 --replication-factor 1 --partitions 1 --topic tpcTest
When a producer publishes a message to a topic that does not exist, Kafka creates the topic automatically (as long as auto.create.topics.enable, which defaults to true, has not been turned off).
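The --zookeeper option is used here because Kafka 2.8.1 still supports it; kafka-topics.sh can also talk to the broker directly via --bootstrap-server. As a sketch not run in the original article, this is how the second topic used in the multi-topic consumer example later (tpcTest2) could be created that way:
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --create --bootstrap-server 192.168.xxx.xxx:9092 --replication-factor 1 --partitions 1 --topic tpcTest2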
List the existing topics:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --list --zookeeper 192.168.xxx.xxx:2181
tpcTest
Describe the status of a topic:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --describe --zookeeper 192.168.xxx.xxx:2181 --topic tpcTest
Topic: tpcTest TopicId: vY-MixMxRzuzFgwnpd-DwQ PartitionCount: 1 ReplicationFactor: 1 Configs:
Topic: tpcTest Partition: 0 Leader: 0 Replicas: 0 Isr: 0
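Kafka only allows increasing, never decreasing, the partition count of an existing topic. As a sketch that was not run in the original article, tpcTest could be expanded to 3 partitions like this:
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --alter --zookeeper 192.168.xxx.xxx:2181 --topic tpcTest --partitions 3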
Delete a topic:
/opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --delete --zookeeper 192.168.xxx.xxx:2181 --topic tpc_test
Kafka ships with a console producer client that can read content from a local file, or take input typed directly on the command line, and send it to the Kafka cluster as messages. By default, each line is treated as a separate message.
First run the producer script, then type the messages to be sent at the prompt:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-console-producer.sh --broker-list 192.168.xxx.xxx:9092 --topic tpcTest
>hello kafka
>
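As mentioned above, the producer can also take its input from a local file, via shell redirection. A sketch; /tmp/messages.txt is a hypothetical file with one message per line:
/opt/kafka_2.13-2.8.1/bin/kafka-console-producer.sh --broker-list 192.168.xxx.xxx:9092 --topic tpcTest < /tmp/messages.txt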
Then open a new terminal window and start consuming messages.
Kafka likewise ships with a console consumer client that prints the messages it receives to the command line.
By default it consumes only newly arriving messages:
/opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --topic tpcTest
To consume messages produced earlier, add the --from-beginning option:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --topic tpcTest --from-beginning
hello kafka
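Besides --from-beginning, the console consumer can also start from an explicit position. A sketch not in the original article: read partition 0 of tpcTest starting at offset 0; --offset also accepts earliest or latest and must be combined with --partition:
/opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --topic tpcTest --partition 0 --offset 0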
You can also consume multiple topics at once with the --whitelist option, separating the topics with the | symbol:
/opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --whitelist "tpcTest|tpcTest2"
List all consumer groups:
/opt/kafka_2.13-2.8.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.xxx.xxx:9092 --list
Unicast consumption: a mode in which each message is consumed by only one consumer, similar to a queue. To get this behavior, simply put all consumers in the same consumer group.
Run the following consume command in two separate client windows:
bash-5.1# /opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --consumer-property group.id=testGroup --topic tpcTest
1aa
The consumer group testGroup is created automatically.
Then send messages to the topic; only one of the two clients receives each message.
Multicast consumption: a mode in which a message can be consumed by multiple consumers, similar to the publish-subscribe model.
Because Kafka delivers a given message to only one consumer within a consumer group, multicast is achieved by making sure the consumers belong to different consumer groups.
Let's add another consumer that belongs to the consumer group testGroup2:
/opt/kafka_2.13-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.xxx.xxx:9092 --consumer-property group.id=testGroup2 --topic tpcTest
Then send messages to the topic; this time both clients receive the messages.
View the consumption details of a specific consumer group (current offset, log end offset, lag, and members):
/opt/kafka_2.13-2.8.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.xxx.xxx:9092 --describe --group testGroup
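Consumer group offsets can also be managed with this tool. A sketch not run in the original article: rewind testGroup to the earliest offsets of tpcTest; this only works while the group has no active consumers:
/opt/kafka_2.13-2.8.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.xxx.xxx:9092 --group testGroup --topic tpcTest --reset-offsets --to-earliest --execute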
References:
– Stay hungry, stay foolish.