Starting Kafka with Docker: see https://www.jianshu.com/p/9552871bb40a
Check the Kafka version
$KAFKA_HOME/bin/kafka-consumer-groups.sh --version
Create a new topic named mykafka, with 1 partition and a replication factor of 1.
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper 192.168.0.4:2181 --replication-factor 1 --partitions 1 --topic mykafka
List / describe topics
$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181
Topic:mykafka PartitionCount:1 ReplicationFactor:1 Configs:
Topic: mykafka Partition: 0 Leader: 1006 Replicas: 1006 Isr: 1006
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: -1 Replicas: 1002 Isr: 1002
$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 # either the container name or the IP address works here
Send messages
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list 192.168.0.4:9092 --topic mykafka
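A rough Go equivalent of the console producer above, using the sarama client that also shows up later in these notes (the broker address 192.168.0.4:9092 and topic mykafka come from the commands above; the message text is made up):

package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // SyncProducer requires this

	producer, err := sarama.NewSyncProducer([]string{"192.168.0.4:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "mykafka",
		Value: sarama.StringEncoder("hello from sarama"), // made-up payload
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("written to partition %d at offset %d\n", partition, offset)
}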
Receive messages
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.4:9092 --topic mykafka --from-beginning
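And a rough Go/sarama equivalent of --from-beginning, reading partition 0 of mykafka from the oldest offset (a sketch; only the address and topic are taken from above):

package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	consumer, err := sarama.NewConsumer([]string{"192.168.0.4:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// sarama.OffsetOldest plays the role of --from-beginning.
	pc, err := consumer.ConsumePartition("mykafka", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		fmt.Printf("partition=%d offset=%d value=%s\n", msg.Partition, msg.Offset, msg.Value)
	}
}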
Check consumption status (consumer group offsets)
$KAFKA_HOME/bin/kafka-consumer-groups.sh --offsets --all-groups --bootstrap-server kafka:9092 --describe
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
example test_kafka 2 105 105 0 sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1 sarama
example test_kafka 4 20 20 0 sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1 sarama
example test_kafka 3 14 14 0 sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1 sarama
example test_kafka 1 101 101 0 sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1 sarama
example test_kafka 0 159 159 0 sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1 sarama
# after shutting down the consumer
$KAFKA_HOME/bin/kafka-consumer-groups.sh --offsets --all-groups --bootstrap-server kafka:9092 --describe
Consumer group 'example' has no active members.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
example test_kafka 0 159 159 0 - - -
example test_kafka 2 105 106 1 - - -
example test_kafka 1 101 101 0 - - -
example test_kafka 4 20 20 0 - - -
example test_kafka 3 14 14 0 - - -
Consumer groups
Each consumer tracks an offset for every partition it consumes.
Committed offsets are stored in an internal Kafka topic named __consumer_offsets (older clients kept them in ZooKeeper).
Unlike some other message queues, Kafka has no per-message acknowledgement/deletion on the broker; consuming only advances the committed offset.
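The 'example' group with sarama CLIENT-ID shown in the describe output above corresponds roughly to a sarama consumer group like the following sketch (the group name example and topic test_kafka come from that output; everything else is illustrative). Marking a message is what advances CURRENT-OFFSET; there is no per-message ack:

package main

import (
	"context"
	"log"

	"github.com/Shopify/sarama"
)

type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("topic=%s partition=%d offset=%d value=%s", msg.Topic, msg.Partition, msg.Offset, msg.Value)
		sess.MarkMessage(msg, "") // moves the committed offset forward
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_4_0_0 // consumer groups need the broker protocol version set
	cfg.Consumer.Offsets.Initial = sarama.OffsetOldest

	group, err := sarama.NewConsumerGroup([]string{"kafka:9092"}, "example", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	for {
		// Consume returns on rebalance or error, so it runs in a loop.
		if err := group.Consume(context.Background(), []string{"test_kafka"}, handler{}); err != nil {
			log.Fatal(err)
		}
	}
}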
1. The broker receives the producer's write request and appends the message to the topic; the target partition is chosen by the producer's partitioner (e.g. by hashing the message key, see the keyed-producer sketch after this list).
2. The broker writes the message for that partition into one of the segment .log files, and records the message's physical position in the corresponding .index file.
3. To read a message, the consumer sends the desired offset to the broker; the broker binary-searches by offset to locate the right .index file, and from there the message's position in the .log file.
A topic defaults to 1 partition, meaning all of the topic's data is written under a single partition directory.
Multiple partitions per topic increase the topic's message throughput; multiple replicas per partition improve availability, so data is not lost when some nodes go down.
Within a consumer group, each message is consumed by only one consumer (no duplicate consumption inside the group).
A partition's replicas are spread across broker nodes; messages are written to the partition leader first and then replicated to the followers (the followers fetch from the leader).
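To illustrate point 1 (partition routing) with Go/sarama: the default partitioner hashes the message key, so messages with the same key always land on the same partition, which is what preserves per-key ordering. The topic and address reuse the ones above; the keys are hypothetical:

package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true
	// sarama's default partitioner hashes the key to pick the partition.

	producer, err := sarama.NewSyncProducer([]string{"192.168.0.4:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	for _, key := range []string{"user-1", "user-2", "user-1"} { // hypothetical keys
		partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "test_kafka",
			Key:   sarama.StringEncoder(key),
			Value: sarama.StringEncoder("event for " + key),
		})
		if err != nil {
			log.Fatal(err)
		}
		// Both "user-1" messages report the same partition.
		fmt.Printf("key=%s -> partition=%d offset=%d\n", key, partition, offset)
	}
}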
Some errors encountered:
Running the console consumer when the topic does not exist yet:
WARN [Consumer clientId=consumer-console-consumer-69322-1, groupId=console-consumer-69322] Error while fetching metadata with correlation id 2 : {test_kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Running the console producer after changing the port:
WARN [Producer clientId=console-producer] Connection to node 100 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Starting the consumer from the command line with the wrong broker port:
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list xx.xxx.xxx.xxx:9092 --topic test_kafka
Error:
WARN [Consumer clientId=consumer-console-consumer-2317-1, groupId=console-consumer-2317] Error while fetching metadata with correlation id 18 : {test_kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Producing from the command line when the topic cannot be found: create the topic first
[2019-12-24 19:27:12,575] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {t=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
KAFKA_ADVERTISED_HOST_NAME must not be an address that producers/consumers cannot reach (seen from the Go client):
panic: dial tcp xxx5000: i/o timeout
goroutine 1 [running]:
main.main()
/main.go:33 +0x241
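The client first dials the bootstrap address, then reconnects to whatever the broker advertises (KAFKA_ADVERTISED_HOST_NAME / KAFKA_ADVERTISED_PORT), so the advertised address must be reachable from the Go program; a dial timeout like the panic above is the usual symptom when it is not. A small sketch of the relevant sarama settings (addresses are placeholders):

package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// Fail fast instead of hanging for the default 30s when the advertised
	// address is unreachable from this machine.
	cfg.Net.DialTimeout = 5 * time.Second

	// Placeholder bootstrap address; both it and the broker's advertised
	// listener must be reachable from the client.
	client, err := sarama.NewClient([]string{"192.168.0.4:9092"}, cfg)
	if err != nil {
		log.Fatalf("cannot reach the cluster: %v", err)
	}
	defer client.Close()

	for _, b := range client.Brokers() {
		// Addr() is the address the broker advertises, i.e. what the client dials next.
		log.Printf("broker id=%d advertised addr=%s", b.ID(), b.Addr())
	}
}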
Go client error when KAFKA_ADVERTISED_PORT is not specified:
kafka server: Request was for a topic or partition that does not exist on this broker
Go client error when the Kafka broker did not start successfully or the port cannot be reached:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
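Both Go errors above boil down to the topic not existing yet or the broker being unreachable. One way to rule out the first case is to create the topic from Go before producing; a sketch using sarama's ClusterAdmin (the protocol version and topic sizing are assumptions):

package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_4_0_0 // admin requests need the broker protocol version set

	admin, err := sarama.NewClusterAdmin([]string{"192.168.0.4:9092"}, cfg)
	if err != nil {
		// Typically the "client has run out of available brokers" case:
		// the broker is not up or the port cannot be reached.
		log.Fatal(err)
	}
	defer admin.Close()

	// Create test_kafka up front so the producer does not hit
	// "topic or partition that does not exist on this broker".
	err = admin.CreateTopic("test_kafka", &sarama.TopicDetail{
		NumPartitions:     1,
		ReplicationFactor: 1,
	}, false)
	if err != nil {
		log.Printf("create topic: %v", err) // e.g. it already exists
	}
}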
Increase partitions
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --alter --zookeeper zookeeper:2181 --topic test_kafka --partitions 3
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test_kafka
Topic: test_kafka PartitionCount: 3 ReplicationFactor: 1 Configs:
Topic: test_kafka Partition: 0 Leader: 100 Replicas: 100 Isr: 100
Topic: test_kafka Partition: 1 Leader: 100 Replicas: 100 Isr: 100
Topic: test_kafka Partition: 2 Leader: 100 Replicas: 100 Isr: 100
Trying to decrease partitions:
$KAFKA_HOME/bin/kafka-topics.sh --alter --zookeeper zookeeper:2181 --topic test_kafka --partitions 4
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : The number of partitions for a topic can only be increased. Topic test_kafka currently has 5 partitions, 4 would not be an increase.
[2020-01-05 15:47:52,110] ERROR org.apache.kafka.common.errors.InvalidPartitionsException: The number of partitions for a topic can only be increased. Topic test_kafka currently has 5 partitions, 4 would not be an increase.
(kafka.admin.TopicCommand$)
ZooKeeper data stored for Kafka (version 2.4)
1. Inspect a topic
[zk: localhost:2181(CONNECTED) 8] get /brokers/topics/test_kafka
{"version":2,"partitions":{"0":[100]},"adding_replicas":{},"removing_replicas":{}}
2. Inspect broker registration info
[zk: localhost:2181(CONNECTED) 11] ls /brokers/ids # list the broker ids; each broker has a unique ID
[100]
[zk: localhost:2181(CONNECTED) 12] get /brokers/ids/100
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://localhost:9092"],"jmx_port":-1,"host":"localhost","timestamp":"1577885461148","port":9092,"version":4}
[zk: localhost:2181(CONNECTED) 14] get /controller_epoch
15 # incremented by 1 each time the controller is re-elected
Error: Could not find or load main class kafka.tools.ConsumerOffsetChecker
bash-4.4# $KAFKA_HOME/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker .....
Fix: kafka.tools.ConsumerOffsetChecker was removed in newer Kafka versions; use kafka-consumer-groups.sh instead. See https://blog.csdn.net/lukabruce/article/details/89210463
For reference, the kafka-consumer-groups.sh options in Kafka 2.4:
$KAFKA_HOME/bin/kafka-consumer-groups.sh
This tool helps to list all consumer groups, describe a consumer group, delete consumer group info, or reset consumer group offsets.
Option Description
------ -----------
--all-groups Apply to all consumer groups.
--all-topics Consider all topics assigned to a
group in the `reset-offsets` process.
--bootstrap-server REQUIRED: The server(s) to connect to.
--by-duration Reset offsets to offset by duration
from current timestamp. Format:
'PnDTnHnMnS'
--command-config Property file containing configs to be
passed to Admin Client and Consumer.
--delete Pass in groups to delete topic
partition offsets and ownership
information over the entire consumer
group. For instance --group g1 --
group g2
--delete-offsets Delete offsets of consumer group.
Supports one consumer group at the
time, and multiple topics.
--describe Describe consumer group and list
offset lag (number of messages not
yet processed) related to given
group.
--dry-run Only show results without executing
changes on Consumer Groups.
Supported operations: reset-offsets.
--execute Execute operation. Supported
operations: reset-offsets.
--export Export operation execution to a CSV
file. Supported operations: reset-
offsets.
--from-file Reset offsets to values defined in CSV
file.
--group The consumer group we wish to act on.
--help Print usage information.
--list List all consumer groups.
--members Describe members of the group. This
option may be used with '--describe'
and '--bootstrap-server' options
only.
Example: --bootstrap-server localhost:
9092 --describe --group group1 --
members
--offsets Describe the group and list all topic
partitions in the group along with
their offset lag. This is the
default sub-action of and may be
used with '--describe' and '--
bootstrap-server' options only.
Example: --bootstrap-server localhost:
9092 --describe --group group1 --
offsets
--reset-offsets Reset offsets of consumer group.
Supports one consumer group at the
time, and instances should be
inactive
Has 2 execution options: --dry-run
(the default) to plan which offsets
to reset, and --execute to update
the offsets. Additionally, the --
export option is used to export the
results to a CSV format.
You must choose one of the following
reset specifications: --to-datetime,
--by-period, --to-earliest, --to-
latest, --shift-by, --from-file, --
to-current.
To define the scope use --all-topics
or --topic. One scope must be
specified unless you use '--from-
file'.
--shift-by Reset offsets shifting current offset
by 'n', where 'n' can be positive or
negative.
--state Describe the group state. This option
may be used with '--describe' and '--
bootstrap-server' options only.
Example: --bootstrap-server localhost:
9092 --describe --group group1 --
state
--timeout The timeout that can be set for some
use cases. For example, it can be
used when describing the group to
specify the maximum amount of time
in milliseconds to wait before the
group stabilizes (when the group is
just created, or is going through
some changes). (default: 5000)
--to-current Reset offsets to current offset.
--to-datetime Reset offsets to offset from datetime.
Format: 'YYYY-MM-DDTHH:mm:SS.sss'
--to-earliest Reset offsets to earliest offset.
--to-latest Reset offsets to latest offset.
--to-offset Reset offsets to a specific offset.
--topic The topic whose consumer group
information should be deleted or
topic whose should be included in
the reset offset process. In `reset-
offsets` case, partitions can be
specified using this format: `topic1:
0,1,2`, where 0,1,2 are the
partition to be included in the
process. Reset-offsets also supports
multiple topic inputs.
--verbose Provide additional information, if
any, when describing the group. This
option may be used with '--
offsets'/'--members'/'--state' and
'--bootstrap-server' options only.
Example: --bootstrap-server localhost:
9092 --describe --group group1 --
members --verbose
--version Display Kafka version.
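For completeness, similar information (list groups, committed offsets per partition) can also be pulled from Go through sarama's ClusterAdmin; a sketch roughly mirroring --list and --describe --offsets above (group, topic and partition numbers are taken from the earlier output, the rest is assumed):

package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_4_0_0

	admin, err := sarama.NewClusterAdmin([]string{"kafka:9092"}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Roughly what kafka-consumer-groups.sh --list prints.
	groups, err := admin.ListConsumerGroups()
	if err != nil {
		log.Fatal(err)
	}
	for group := range groups {
		fmt.Println(group)
	}

	// Committed offsets per partition for one group, similar to --describe --offsets.
	offsets, err := admin.ListConsumerGroupOffsets("example", map[string][]int32{"test_kafka": {0, 1, 2, 3, 4}})
	if err != nil {
		log.Fatal(err)
	}
	for topic, partitions := range offsets.Blocks {
		for p, block := range partitions {
			fmt.Printf("%s[%d] current-offset=%d\n", topic, p, block.Offset)
		}
	}
}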