In Kafka's bin directory there are two scripts, kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh, which are used to benchmark the producer and the consumer respectively.
[root@hostname bin]# ./kafka-producer-perf-test.sh --help
usage: producer-performance [-h] --topic TOPIC --num-records NUM-RECORDS [--payload-delimiter PAYLOAD-DELIMITER] --throughput THROUGHPUT
[--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]] [--producer.config CONFIG-FILE] (--record-size RECORD-SIZE |
--payload-file PAYLOAD-FILE)
This tool is used to verify the producer performance.
optional arguments:
-h, --help show this help message and exit
--topic TOPIC produce messages to this topic
--num-records NUM-RECORDS
number of messages to produce
--payload-delimiter PAYLOAD-DELIMITER
provides delimiter to be used when --payload-file is provided. Defaults to new line. Note that this parameter will be ignored if --
payload-file is not provided. (default: \n)
--throughput THROUGHPUT
throttle maximum message throughput to *approximately* THROUGHPUT messages/sec
--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]
kafka producer related configuration properties like bootstrap.servers,client.id etc. These configs take precedence over those passed via
--producer.config.
--producer.config CONFIG-FILE
producer config properties file.
either --record-size or --payload-file must be specified but not both.
--record-size RECORD-SIZE
message size in bytes. Note that you must provide exactly one of --record-size or --payload-file.
--payload-file PAYLOAD-FILE
file to read the message payloads from. This works only for UTF-8 encoded text files. Payloads will be read from this file and a payload
will be randomly selected when sending messages. Note that you must provide exactly one of --record-size or --payload-file.
The help output above lists the available parameters and what each one means.
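The benchmark runs below pass --producer.config ../config/producer.properties. The actual file isn't shown; a minimal sketch of what it might contain, assuming the acks=all setting mentioned later (the broker list and the other values here are placeholders, not taken from the original test):
# ../config/producer.properties -- sketch only; hostnames and tuning values are assumptions
bootstrap.servers=hadoop-sh1-core1:9092,hadoop-sh1-core2:9092,hadoop-sh1-core3:9092
acks=all
compression.type=none
batch.size=16384
linger.ms=0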
[root@hadoop-sh1-core1 bin]# ./kafka-topics.sh --zookeeper hadoop-sh1-master2:2181 --topic "test003" --describe
Topic:test003 PartitionCount:3 ReplicationFactor:1 Configs:
Topic: test003 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test003 Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: test003 Partition: 2 Leader: 2 Replicas: 2 Isr: 2
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test003 --num-records 1000000 --record-size 1024 --throughput -1 --producer.config ../config/producer.properties
[2018-07-02 14:29:03,086] WARN Error while fetching metadata with correlation id 1 : {test003=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
250992 records sent, 50198.4 records/sec (49.02 MB/sec), 425.2 ms avg latency, 1167.0 max latency.
526995 records sent, 103616.8 records/sec (101.19 MB/sec), 266.2 ms avg latency, 1080.0 max latency.
1000000 records sent, 83998.320034 records/sec (82.03 MB/sec), 317.24 ms avg latency, 1314.00 ms max latency, 259 ms 50th, 868 ms 95th, 1212 ms 99th, 1228 ms 99.9th.
With throughput unthrottled, a 3-partition topic, 3 replicas, and acks=all, throughput peaked at 101.19 MB/sec in one interval, average latency sat roughly between 200 ms and a little over 400 ms, and the 99.9th-percentile latency was 1228 ms (maximum 1314 ms).
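(The MB/sec figure is just records/sec times record size: 83998.320034 records/sec × 1024 bytes per record ≈ 86.0 MB of payload per second, which the tool reports as 82.03 MB/sec once 1 MB is counted as 1024 × 1024 bytes.)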
Next, adjust the topic parameters (still on the same three machines): raise the partition count to 10 and lower the replication factor to 1, again with no throughput limit.
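The command that created the new topic isn't captured in the log; it was presumably something along these lines (partition and replication values taken from the describe output below, the ZooKeeper address assumed to be the same one used above):
./kafka-topics.sh --create --zookeeper hadoop-sh1-master2:2181 --topic test004 --partitions 10 --replication-factor 1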
Created topic "test004".
[root@hadoop-sh1-core1 bin]# ./kafka-topics.sh --zookeeper hadoop-sh1-master2:2181 --topic "test004" --describe
Topic:test004 PartitionCount:10 ReplicationFactor:1 Configs:
Topic: test004 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test004 Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: test004 Partition: 2 Leader: 2 Replicas: 2 Isr: 2
Topic: test004 Partition: 3 Leader: 3 Replicas: 3 Isr: 3
Topic: test004 Partition: 4 Leader: 0 Replicas: 0 Isr: 0
Topic: test004 Partition: 5 Leader: 1 Replicas: 1 Isr: 1
Topic: test004 Partition: 6 Leader: 2 Replicas: 2 Isr: 2
Topic: test004 Partition: 7 Leader: 3 Replicas: 3 Isr: 3
Topic: test004 Partition: 8 Leader: 0 Replicas: 0 Isr: 0
Topic: test004 Partition: 9 Leader: 1 Replicas: 1 Isr: 1
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test004 --num-records 1000000 --record-size 1024 --throughput -1 --producer.config ../config/producer.properties
525946 records sent, 105189.2 records/sec (102.72 MB/sec), 209.1 ms avg latency, 1107.0 max latency.
1000000 records sent, 114155.251142 records/sec (111.48 MB/sec), 226.50 ms avg latency, 1203.00 ms max latency, 152 ms 50th, 1007 ms 95th, 1191 ms 99th, 1196 ms 99.9th.
As the figures above show, throughput rose noticeably while average and maximum latency both came down. Since there are still only three machines, throughput appears to top out at a little over 100 MB/sec; it is unclear whether raising the partition count further would push it any higher.
Back to the 3-partition, 3-replica topic, but now with throughput throttled to 10,000 records/sec:
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test003 --num-records 1000000 --record-size 1024 --throughput 10000 --producer.config ../config/producer.properties
49979 records sent, 9995.8 records/sec (9.76 MB/sec), 1.4 ms avg latency, 203.0 max latency.
50043 records sent, 10008.6 records/sec (9.77 MB/sec), 0.6 ms avg latency, 18.0 max latency.
49973 records sent, 9994.6 records/sec (9.76 MB/sec), 0.7 ms avg latency, 41.0 max latency.
50057 records sent, 10009.4 records/sec (9.77 MB/sec), 0.5 ms avg latency, 23.0 max latency.
50002 records sent, 10000.4 records/sec (9.77 MB/sec), 0.6 ms avg latency, 46.0 max latency.
50015 records sent, 10003.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 24.0 max latency.
49993 records sent, 9996.6 records/sec (9.76 MB/sec), 0.6 ms avg latency, 47.0 max latency.
50010 records sent, 10000.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 26.0 max latency.
50015 records sent, 10003.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 26.0 max latency.
50005 records sent, 10001.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 19.0 max latency.
50010 records sent, 10002.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 24.0 max latency.
50010 records sent, 10002.0 records/sec (9.77 MB/sec), 0.5 ms avg latency, 18.0 max latency.
50010 records sent, 10002.0 records/sec (9.77 MB/sec), 0.5 ms avg latency, 23.0 max latency.
50000 records sent, 10000.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 23.0 max latency.
50016 records sent, 10003.2 records/sec (9.77 MB/sec), 0.6 ms avg latency, 31.0 max latency.
49994 records sent, 9998.8 records/sec (9.76 MB/sec), 0.6 ms avg latency, 29.0 max latency.
50010 records sent, 10002.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 21.0 max latency.
50010 records sent, 10002.0 records/sec (9.77 MB/sec), 0.6 ms avg latency, 21.0 max latency.
50019 records sent, 10001.8 records/sec (9.77 MB/sec), 0.6 ms avg latency, 32.0 max latency.
1000000 records sent, 9999.200064 records/sec (9.76 MB/sec), 0.61 ms avg latency, 203.00 ms max latency, 1 ms 50th, 1 ms 95th, 2 ms 99th, 19 ms 99.9th.
Average latency dropped to 0.6 ms, and apart from a 203 ms spike in the first interval, maximum latencies stayed within a few tens of milliseconds.
Next, vary the record size to see whether it has an effect.
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test004 --num-records 1000000 --record-size 258 --throughput -1 --producer.config ../config/producer.properties
1000000 records sent, 323519.896474 records/sec (79.60 MB/sec), 4.77 ms avg latency, 210.00 ms max latency, 3 ms 50th, 18 ms 95th, 39 ms 99th, 46 ms 99.9th.
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test004 --num-records 1000000 --record-size 1024 --throughput -1 --producer.config ../config/producer.properties
525946 records sent, 105189.2 records/sec (102.72 MB/sec), 209.1 ms avg latency, 1107.0 max latency.
1000000 records sent, 114155.251142 records/sec (111.48 MB/sec), 226.50 ms avg latency, 1203.00 ms max latency, 152 ms 50th, 1007 ms 95th, 1191 ms 99th, 1196 ms 99.9th.
[root@hadoop-sh1-core1 bin]# ./kafka-producer-perf-test.sh --topic test004 --num-records 10000000 --record-size 258 --throughput -1 --producer.config ../config/producer.properties
1949374 records sent, 389874.8 records/sec (95.93 MB/sec), 3.6 ms avg latency, 212.0 max latency.
2392851 records sent, 478570.2 records/sec (117.75 MB/sec), 2.3 ms avg latency, 21.0 max latency.
2372680 records sent, 474536.0 records/sec (116.76 MB/sec), 2.4 ms avg latency, 26.0 max latency.
2362539 records sent, 472507.8 records/sec (116.26 MB/sec), 2.3 ms avg latency, 24.0 max latency.
10000000 records sent, 455435.624175 records/sec (112.06 MB/sec), 2.54 ms avg latency, 212.00 ms max latency, 2 ms 50th, 4 ms 95th, 10 ms 99th, 20 ms 99.9th.
Comparing the first two runs, shrinking the record size from 1024 to 258 bytes brings a marked improvement: latency drops considerably. In the third run, raising the record count to 10 million barely increases latency, and throughput settles at a little over 110 MB/sec, which is probably about the limit for these three machines.
From the tests above we can conclude that Kafka's producer performance is related to the record size, the number of partitions, the replication factor, the acks setting, and so on. (Of course, the performance of each individual machine also plays a role.)
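The consumer side can be measured in the same way with the kafka-consumer-perf-test.sh script mentioned at the start; a sketch of such a run against the topic above (the broker address, message count, and thread count are placeholders, not taken from this test):
./kafka-consumer-perf-test.sh --broker-list hadoop-sh1-core1:9092 --topic test003 --messages 1000000 --threads 1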