In the previous post, [Kafka] Installation and Deployment, a three-node Kafka cluster was set up.
This post covers how to operate that cluster from the command line.
Create a topic named test01:
bin/kafka-topics.sh \
--create \
--zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
--replication-factor 2 \
--partitions 3 \
--topic test01
--zookeeper            the ZooKeeper connection string
--replication-factor num    set the number of replicas per partition to num
--partitions num       set the number of partitions for the topic
--topic name           set the topic name
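Note that --zookeeper is the old-style connection flag. On Kafka 2.2 and later, kafka-topics.sh can talk to the brokers directly via --bootstrap-server (and Kafka 3.0 removes --zookeeper entirely); assuming the brokers listen on port 9092, as they do later in this post, the equivalent create command would be:
bin/kafka-topics.sh \
--create \
--bootstrap-server 192.168.133.13:9092,192.168.133.14:9092,192.168.133.15:9092 \
--replication-factor 2 \
--partitions 3 \
--topic test01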
[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test
test01
test02
This command lists all topics registered in ZooKeeper.
bin/kafka-topics.sh \
--describe \
--zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
--topic test01
View the details of topic test01:
[app@test13 kafka]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
> --topic test01
Topic:test01 PartitionCount:3 ReplicationFactor:2 Configs:
Topic: test01 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: test01 Partition: 1 Leader: 0 Replicas: 2,0 Isr: 0,2
Topic: test01 Partition: 2 Leader: 0 Replicas: 0,1 Isr: 0,1
A quick explanation of the output: Topic is the topic name, PartitionCount is the number of partitions (i.e., how many pieces the topic's data is split into across the brokers), and ReplicationFactor is the number of replicas per partition.
The per-partition lines then show, for each partition, which broker is its leader, which brokers hold its replicas, and the ISR (in-sync replicas): the set of replicas that are alive and caught up with the leader.
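A handy variant of --describe is filtering for partitions whose ISR has fallen behind the replica list. kafka-topics.sh supports an --under-replicated-partitions flag for this; for example:
bin/kafka-topics.sh \
--describe \
--zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
--under-replicated-partitions
If every replica is in sync, this prints nothing.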
The partition count of an existing topic can be changed by re-running the command with --alter and a new --partitions value.
First, look at the topic's current state:
[app@test13 kafka]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
> --topic test
Topic:test PartitionCount:3 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
Topic: test Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Topic: test Partition: 2 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
Now try to change the partition count to 2:
[app@test13 kafka]$ bin/kafka-topics.sh \
> --alter \
> --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 \
> --topic test --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : The number of partitions for a topic can only be increased. Topic test currently has 3 partitions, 2 would not be an increase.
[2019-11-12 16:11:32,686] ERROR org.apache.kafka.common.errors.InvalidPartitionsException: The number of partitions for a topic can only be increased. Topic test currently has 3 partitions, 2 would not be an increase.
(kafka.admin.TopicCommand$)
As the output shows, the command failed: the number of partitions can only be increased, never decreased. Increasing partitions is typically done after adding brokers to the cluster, to spread the topic's load across them.
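If a topic genuinely needs fewer partitions, the usual workaround is to delete and recreate it. A rough sketch (it assumes delete.topic.enable=true and that losing or re-importing the existing data is acceptable):
bin/kafka-topics.sh --delete --zookeeper 192.168.133.13:2181 --topic test
bin/kafka-topics.sh --create --zookeeper 192.168.133.13:2181 --replication-factor 3 --partitions 2 --topic test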
Instead, increase the partition count to 4:
[app@test13 kafka]$ bin/kafka-topics.sh --alter --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 --topic test --partitions 4
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
Check the details again:
[app@test13 kafka]$ bin/kafka-topics.sh --describe --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 --topic test
Topic:test PartitionCount:4 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
Topic: test Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Topic: test Partition: 2 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
Topic: test Partition: 3 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
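Note the WARNING printed above: adding partitions changes which partition a keyed message maps to (the default partitioner hashes the key modulo the partition count), so per-key ordering only holds among messages produced after the change. To experiment with keyed messages, the console producer can parse a key from each input line, e.g.:
bin/kafka-console-producer.sh --broker-list 192.168.133.15:9092 --topic test \
--property parse.key=true --property key.separator=:
Each line typed is then split into key and value at the separator before being sent.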
The cluster currently holds two topics, test01 and test02:
[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test01
test02
Now delete the topic named test01:
[app@test13 kafka]$ bin/kafka-topics.sh --delete --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181 --topic test01
Topic test01 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[app@test13 kafka]$ bin/kafka-topics.sh --list --zookeeper 192.168.133.13:2181,192.168.133.14:2181,192.168.133.15:2181
test02
[app@test13 kafka]$
Deletion works by first marking the topic for deletion; Kafka then carries out the actual removal internally on its own schedule. As the note in the output says, this has no effect unless delete.topic.enable is set to true.
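delete.topic.enable is a broker setting; it defaults to true since Kafka 1.0, but on older releases it defaults to false and must be enabled explicitly. In that case, add the following line to each broker's config/server.properties and restart:
delete.topic.enable=true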
Usage of kafka-console-producer.sh, as printed by its --help output:
This tool helps to read data from standard input and publish it to Kafka.
Option Description
------ -----------
--batch-size <Integer: size> Number of messages to send in a single
batch if they are not being sent
synchronously. (default: 200)
--broker-list <String: broker-list> REQUIRED: The broker list string in
the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String: The compression codec: either 'none',
compression-codec] 'gzip', 'snappy', 'lz4', or 'zstd'.
If specified without value, then it
defaults to 'gzip'
--help Print usage information.
--line-reader <String: reader_class> The class name of the class to use for
reading lines from standard in. By
default each line is read as a
separate message. (default: kafka.
tools.
ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on The max time that the producer will
send> block for during a send request
(default: 60000)
--max-memory-bytes <Long: total memory The total memory used by the producer
in bytes> to buffer records waiting to be sent
to the server. (default: 33554432)
--max-partition-memory-bytes <Long: The buffer size allocated for a
memory in bytes per partition> partition. When records are received
which are smaller than this size the
producer will attempt to
optimistically group them together
until this size is reached.
(default: 16384)
--message-send-max-retries <Integer> Brokers can fail receiving the message
for multiple reasons, and being
unavailable transiently is just one
of them. This property specifies the
number of retries before the
producer give up and drop this
message. (default: 3)
--metadata-expiry-ms <Long: metadata The period of time in milliseconds
expiration interval> after which we force a refresh of
metadata even if we haven't seen any
leadership changes. (default: 300000)
--producer-property <String: A mechanism to pass user-defined
producer_prop> properties in the form key=value to
the producer.
--producer.config <String: config file> Producer config properties file. Note
that [producer-property] takes
precedence over this config.
--property <String: prop> A mechanism to pass user-defined
properties in the form key=value to
the message reader. This allows
custom configuration for a user-
defined message reader.
--request-required-acks <String: The required acks of the producer
request required acks> requests (default: 1)
--request-timeout-ms <Integer: request The ack timeout of the producer
timeout ms> requests. Value must be non-negative
and non-zero (default: 1500)
--retry-backoff-ms <Integer> Before each retry, the producer
refreshes the metadata of relevant
topics. Since leader election takes
a bit of time, this property
specifies the amount of time that
the producer waits before refreshing
the metadata. (default: 100)
--socket-buffer-size <Integer: size> The size of the tcp RECV size.
(default: 102400)
--sync If set message send requests to the
brokers are synchronously, one at a
time as they arrive.
--timeout <Integer: timeout_ms> If set and the producer is running in
asynchronous mode, this gives the
maximum amount of time a message
will queue awaiting sufficient batch
size. The value is given in ms.
(default: 1000)
--topic <String: topic> REQUIRED: The topic id to produce
messages to.
--version Display Kafka version.
First create a topic named test01 again (it was deleted above), then start a producer for it:
bin/kafka-console-producer.sh --broker-list 192.168.133.15:9092 --topic test01
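Once started, the producer reads lines from standard input and publishes each line as one message, showing a > prompt. A hypothetical session (the message text is purely illustrative) looks like:
>hello kafka
>this is the second message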
[app@node14 kafka]$ bin/kafka-console-consumer.sh --help
This tool helps to read data from Kafka topics and outputs it to standard output.
Option Description
------ -----------
--bootstrap-server <String: server to REQUIRED: The server(s) to connect to.
connect to>
--consumer-property <String: A mechanism to pass user-defined
consumer_prop> properties in the form key=value to
the consumer.
--consumer.config <String: config file> Consumer config properties file. Note
that [consumer-property] takes
precedence over this config.
--enable-systest-events Log lifecycle events of the consumer
in addition to logging consumed
messages. (This is specific for
system tests.)
--formatter <String: class> The name of a class to use for
formatting kafka messages for
display. (default: kafka.tools.
DefaultMessageFormatter)
--from-beginning If the consumer does not already have
an established offset to consume
from, start with the earliest
message present in the log rather
than the latest message.
--group <String: consumer group id> The consumer group id of the consumer.
--help Print usage information.
--isolation-level <String> Set to read_committed in order to
filter out transactional messages
which are not committed. Set to
read_uncommittedto read all
messages. (default: read_uncommitted)
--key-deserializer <String:
deserializer for key>
--max-messages <Integer: num_messages> The maximum number of messages to
consume before exiting. If not set,
consumption is continual.
--offset <String: consume offset> The offset id to consume from (a non-
negative number), or 'earliest'
which means from beginning, or
'latest' which means from end
(default: latest)
--partition <Integer: partition> The partition to consume from.
Consumption starts from the end of
the partition unless '--offset' is
specified.
--property <String: prop> The properties to initialize the
message formatter. Default
properties include:
print.timestamp=true|false
print.key=true|false
print.value=true|false
key.separator=<key.separator>
line.separator=<line.separator>
key.deserializer=<key.deserializer>
value.deserializer=<value.
deserializer>
Users can also pass in customized
properties for their formatter; more
specifically, users can pass in
properties keyed with 'key.
deserializer.' and 'value.
deserializer.' prefixes to configure
their deserializers.
--skip-message-on-error If there is an error when processing a
message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms> If specified, exit if no message is
available for consumption for the
specified interval.
--topic <String: topic> The topic id to consume on.
--value-deserializer <String:
deserializer for values>
--version Display Kafka version.
--whitelist <String: whitelist> Regular expression specifying
whitelist of topics to include for
consumption.
Now start two consumers (on 192.168.133.13 and 192.168.133.14) in the same consumer group. The first:
[app@node14 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.133.13:9092 --group hello --topic test01
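And the second, on the other node (the prompt and bootstrap address here are assumed by analogy with the first):
[app@node13 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.133.14:9092 --group hello --topic test01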
1. Send messages
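With the producer sending, the two consumers in group hello divide the topic's partitions between them, so each message is delivered to exactly one member of the group. To inspect the partition assignment and offsets within the group, kafka-consumer-groups.sh can be used along these lines (output omitted):
bin/kafka-consumer-groups.sh --bootstrap-server 192.168.133.13:9092 --describe --group hello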