Customizing the kafka-producer-perf-test 0.10.2.1 Load-Testing Script


Recently our gateway and application logs have been running at about 40,000 messages/sec in production, and we need Flink to count exceptions per application and periodically emit the analysis results.

Since live traffic is 40,000/sec, I want to push at least 100,000/sec before I feel safe going live. So the question becomes: how do I generate 100,000 messages/sec of load (at 1 KB per message)?
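As a quick sanity check on what that target means: 100,000 msg/s at 1 KB each is roughly 100 MB/s, or about 0.8 Gbit/s on the wire before compression, so a single gigabit NIC on either the load generator or a broker is already close to saturation.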

Here I went with the official kafka-producer-perf-test script that ships with Kafka.

Obviously it just launches a Java program under the hood, so I can lift out the Java code, swap in my own requirements where needed, and customize it.

The analysis steps are below.

1) cat kafka-producer-perf-test.sh

#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M"
fi
exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance "$@"

As you can see, what actually runs is:

kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance "$@"
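
For reference, the stock tool is normally invoked like this (the flags match the argParser we will see in step 4):

bin/kafka-producer-perf-test.sh --topic test --num-records 1000000 --record-size 1024 --throughput 100000 --producer-props bootstrap.servers=localhost:9092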

2) kafka-run-class.sh

The key part is the final section:

# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
  exec $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@"
fi
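
Side note: step 1 showed the heap defaults to -Xmx512M. Because the wrapper honors KAFKA_HEAP_OPTS, a heavier run can raise it without editing any script:

KAFKA_HEAP_OPTS="-Xmx1G" bin/kafka-producer-perf-test.sh <same arguments as above>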

Here the non-daemon branch is taken, so the actual command line boils down to:

/home/ymmapp/java/jdk1.8.0_131//bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../logs/-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../logs -Dlog4j.configuration=file:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../config/tools-log4j.properties -cp /data/java/lib::/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/argparse4j-0.7.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/connect-api-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/connect-file-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/connect-json-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/connect-runtime-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/connect-transforms-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/guava-18.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/hk2-api-2.5.0-b05.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/hk2-locator-2.5.0-b05.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/hk2-utils-2.5.0-b05.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-annotations-2.8.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-annotations-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-core-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-databind-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javassist-3.20.0-GA.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javax.inject-1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javax.inject-2.5.0-b05.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-client-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-common-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-container-servlet-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-container-servlet-core-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-guava-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-media-jaxb-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jersey-server-2.24.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-http-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-io-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-security-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-server-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jetty-util-9.2.15.v20160210.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/jopt-simple-5.0.3.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka_2.10-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka_2.10-0.10.2.1-sources.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka_2.10-0.10.2.1-test-sources.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka-clients-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka-streams-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/kafka-tools-0.10.2.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/log4j-1.2.17.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/lz4-1.3.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/metrics-core-2.2.0.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/reflections-0.9.10.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/rocksdbjni-5.0.1.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/scala-library-2.10.6.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/slf4j-api-1.7.21.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/slf4j-log4j12-1.7.21.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/snappy-java-1.1.2.6.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/zkclient-0.10.jar:/usr/local/hadoop/kafka_2.10-0.10.2.1/bin/../libs/zookeeper-3.4.9.jar org.apache.kafka.tools.ProducerPerformance


3) Find the org.apache.kafka.tools.ProducerPerformance class

The class lives in the following Maven artifact:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-tools</artifactId>
    <version>0.10.2.1</version>
</dependency>
4) Customize ProducerPerformance (the result is saved as MyProducerPerformance.java)


/**
 * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE
 * file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file
 * to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
 * License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
import static net.sourceforge.argparse4j.impl.Arguments.store;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.Random;

import net.sourceforge.argparse4j.inf.MutuallyExclusiveGroup;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.Namespace;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.tools.ThroughputThrottler;

import log_exception_test.MyKafkaProducer;

public class MyProducerPerformance {

    public static void main(String[] args) throws Exception {
        // Build the command-line argument parser
        ArgumentParser parser = argParser();
        //
        try {
            // Parse the arguments
            Namespace res = parser.parseArgs(args);
            // Extract the values
            String topicName = res.getString("topic");
            System.out.println("topicName        --- " + topicName);
            //
            long numRecords = res.getLong("numRecords");
            System.out.println("numRecords       --- " + numRecords);
            //
            Integer recordSize = res.getInt("recordSize");
            System.out.println("recordSize       --- " + recordSize);
            //
            int throughput = res.getInt("throughput");
            System.out.println("throughput       --- " + throughput);
            //
            List<String> producerProps = res.getList("producerConfig");
            System.out.println("producerProps    --- " + producerProps);
            //
            String producerConfig = res.getString("producerConfigFile");
            System.out.println("producerConfig   --- " + producerConfig);
            //
            // The payload is generated programmatically instead of being read from a file
            // since default value gets printed with the help text, we are escaping \n there and replacing it with correct value here.
            String payloadDelimiter = res.getString("payloadDelimiter").equals("\\n") ? "\n"
                : res.getString("payloadDelimiter");
            System.out.println("payloadDelimiter --- " + payloadDelimiter);
            if (producerProps == null && producerConfig == null) {
                throw new ArgumentParserException(
                    "Either --producer-props or --producer.config must be specified.", parser);
            }
            // Payload-file handling below is effectively dead code (payloadFilePath is
            // hard-coded to null); payloads come from MyQueue instead.
            List<byte[]> payloadByteList = new ArrayList<>();
            String payloadFilePath = null;
            if (payloadFilePath != null) {
                Path path = Paths.get(payloadFilePath);
                System.out.println("Reading payloads from: " + path.toAbsolutePath());
                if (Files.notExists(path) || Files.size(path) == 0) {
                    throw new IllegalArgumentException(
                        "File does not exist or empty file provided.");
                }

                String[] payloadList = new String(Files.readAllBytes(path), "UTF-8")
                    .split(payloadDelimiter);
                System.out.println("Number of messages read: " + payloadList.length);

                for (String payload : payloadList) {
                    payloadByteList.add(payload.getBytes(StandardCharsets.UTF_8));
                }
            }
            System.out.println("payloads will be generated programmatically");

            // Build the producer Properties
            Properties props = new Properties();
            if (producerConfig != null) {
                props.putAll(Utils.loadProps(producerConfig));
            }
            if (producerProps != null)
                for (String prop : producerProps) {
                    String[] pieces = prop.split("=");
                    if (pieces.length != 2)
                        throw new IllegalArgumentException("Invalid property: " + prop);
                    props.put(pieces[0], pieces[1]);
                }

            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Construct the producer
            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);

            /* setup perf test */
            // Payloads are built by our own generator threads (see MyQueue below)
            byte[] payload = null;
            Random random = new Random(0);
            //
            ProducerRecord<byte[], byte[]> record;
            Stats stats = new Stats(numRecords, 1000);
            long startMs = System.currentTimeMillis();
            // numRecords is the total number of records to send
            // throughput is the target records/sec; the throttler paces the loop to it
            ThroughputThrottler throttler = new ThroughputThrottler(throughput, startMs);
            for (int i = 0; i < numRecords; i++) {
                // Take the next payload from the generator queue (blocks if empty)
                //long begin = System.currentTimeMillis();
                payload = MyQueue.getObject();
                //long end = System.currentTimeMillis();
                //System.out.println(end-begin);
                record = new ProducerRecord<>(topicName, payload);
                long sendStartMs = System.currentTimeMillis();
                Callback cb = stats.nextCompletion(sendStartMs, payload.length, stats);
                producer.send(record, cb);

                if (throttler.shouldThrottle(i, sendStartMs)) {
                    throttler.throttle();
                }
            }

            /* print final results */
            producer.close();
            stats.printTotal();
        } catch (ArgumentParserException e) {
            if (args.length == 0) {
                parser.printHelp();
                System.exit(0);
            } else {
                parser.handleError(e);
                System.exit(1);
            }
        }

    }

    /** Get the command-line argument parser. */
    private static ArgumentParser argParser() {
        ArgumentParser parser = ArgumentParsers.newArgumentParser("producer-performance")
            .defaultHelp(true).description("This tool is used to verify the producer performance.");

        MutuallyExclusiveGroup payloadOptions = parser.addMutuallyExclusiveGroup().required(false)
            .description("either --record-size or --payload-file must be specified but not both.");

        parser.addArgument("--topic").action(store()).required(false).type(String.class)
            .metavar("TOPIC").help("produce messages to this topic");

        parser.addArgument("--num-records").action(store()).required(false).type(Long.class)
            .metavar("NUM-RECORDS").dest("numRecords").help("number of messages to produce");

        payloadOptions.addArgument("--record-size").action(store()).required(false)
            .type(Integer.class).metavar("RECORD-SIZE").dest("recordSize").help(
                "message size in bytes. Note that you must provide exactly one of --record-size or --payload-file.");

        payloadOptions.addArgument("--payload-file").action(store()).required(false)
            .type(String.class).metavar("PAYLOAD-FILE").dest("payloadFile")
            .help(
                "file to read the message payloads from. This works only for UTF-8 encoded text files. "
                  + "Payloads will be read from this file and a payload will be randomly selected when sending messages. "
                  + "Note that you must provide exactly one of --record-size or --payload-file.");

        parser.addArgument("--payload-delimiter").action(store()).required(false).type(String.class)
            .metavar("PAYLOAD-DELIMITER").dest("payloadDelimiter").setDefault("\\n")
            .help("provides delimiter to be used when --payload-file is provided. "
                  + "Defaults to new line. "
                  + "Note that this parameter will be ignored if --payload-file is not provided.");

        parser.addArgument("--throughput").action(store()).required(false).type(Integer.class)
            .metavar("THROUGHPUT")
            .help("throttle maximum message throughput to *approximately* THROUGHPUT messages/sec");

        parser.addArgument("--producer-props").nargs("+").required(false)
            .metavar("PROP-NAME=PROP-VALUE").type(String.class).dest("producerConfig")
            .help(
                "kafka producer related configuration properties like bootstrap.servers,client.id etc. "
                  + "These configs take precedence over those passed via --producer.config.");

        parser.addArgument("--producer.config").action(store()).required(false).type(String.class)
            .metavar("CONFIG-FILE").dest("producerConfigFile")
            .help("producer config properties file.");

        return parser;
    }

    private static class Stats {
        private long  start;
        private long  windowStart;
        private int[] latencies;
        private int   sampling;
        private int   iteration;
        private int   index;
        private long  count;
        private long  bytes;
        private int   maxLatency;
        private long  totalLatency;
        private long  windowCount;
        private int   windowMaxLatency;
        private long  windowTotalLatency;
        private long  windowBytes;
        private long  reportingInterval;

        public Stats(long numRecords, int reportingInterval) {
            this.start = System.currentTimeMillis();
            this.windowStart = System.currentTimeMillis();
            this.index = 0;
            this.iteration = 0;
            this.sampling = (int) (numRecords / Math.min(numRecords, 500000));
            this.latencies = new int[(int) (numRecords / this.sampling) + 1];
            this.index = 0;
            this.maxLatency = 0;
            this.totalLatency = 0;
            this.windowCount = 0;
            this.windowMaxLatency = 0;
            this.windowTotalLatency = 0;
            this.windowBytes = 0;
            this.totalLatency = 0;
            this.reportingInterval = reportingInterval;
        }

        public void record(int iter, int latency, int bytes, long time) {
            this.count++;
            this.bytes += bytes;
            this.totalLatency += latency;
            this.maxLatency = Math.max(this.maxLatency, latency);
            this.windowCount++;
            this.windowBytes += bytes;
            this.windowTotalLatency += latency;
            this.windowMaxLatency = Math.max(windowMaxLatency, latency);
            if (iter % this.sampling == 0) {
                this.latencies[index] = latency;
                this.index++;
            }
            /* maybe report the recent perf */
            if (time - windowStart >= reportingInterval) {
                printWindow();
                newWindow();
            }
        }

        public Callback nextCompletion(long start, int bytes, Stats stats) {
            Callback cb = new PerfCallback(this.iteration, start, bytes, stats);
            this.iteration++;
            return cb;
        }

        public void printWindow() {
            long elapsed = System.currentTimeMillis() - windowStart;
            double recsPerSec = 1000.0 * windowCount / (double) elapsed;
            double mbPerSec = 1000.0 * this.windowBytes / (double) elapsed / (1024.0 * 1024.0);
            System.out.printf(
                "%d records sent, %.1f records/sec (%.2f MB/sec), %.1f ms avg latency, %.1f max latency.\n",
                windowCount, recsPerSec, mbPerSec, windowTotalLatency / (double) windowCount,
                (double) windowMaxLatency);
        }

        public void newWindow() {
            this.windowStart = System.currentTimeMillis();
            this.windowCount = 0;
            this.windowMaxLatency = 0;
            this.windowTotalLatency = 0;
            this.windowBytes = 0;
        }

        public void printTotal() {
            long elapsed = System.currentTimeMillis() - start;
            double recsPerSec = 1000.0 * count / (double) elapsed;
            double mbPerSec = 1000.0 * this.bytes / (double) elapsed / (1024.0 * 1024.0);
            int[] percs = percentiles(this.latencies, index, 0.5, 0.95, 0.99, 0.999);
            System.out.printf(
                "%d records sent, %f records/sec (%.2f MB/sec), %.2f ms avg latency, %.2f ms max latency, %d ms 50th, %d ms 95th, %d ms 99th, %d ms 99.9th.\n",
                count, recsPerSec, mbPerSec, totalLatency / (double) count, (double) maxLatency,
                percs[0], percs[1], percs[2], percs[3]);
        }

        private static int[] percentiles(int[] latencies, int count, double... percentiles) {
            int size = Math.min(count, latencies.length);
            Arrays.sort(latencies, 0, size);
            int[] values = new int[percentiles.length];
            for (int i = 0; i < percentiles.length; i++) {
                int index = (int) (percentiles[i] * size);
                values[i] = latencies[index];
            }
            return values;
        }
    }

    private static final class PerfCallback implements Callback {
        private final long  start;
        private final int   iteration;
        private final int   bytes;
        private final Stats stats;

        public PerfCallback(int iter, long start, int bytes, Stats stats) {
            this.start = start;
            this.stats = stats;
            this.iteration = iter;
            this.bytes = bytes;
        }

        public void onCompletion(RecordMetadata metadata, Exception exception) {
            long now = System.currentTimeMillis();
            int latency = (int) (now - start);
            this.stats.record(iteration, latency, bytes, now);
            if (exception != null)
                exception.printStackTrace();
        }
    }

}
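
For intuition about the pacing loop above: ThroughputThrottler (shipped in kafka-tools) is asked after every send whether the loop is running ahead of the target rate, and throttle() sleeps when it is. The following is a minimal sketch of that contract only, not the real org.apache.kafka.tools.ThroughputThrottler source:

// Illustrative only: a naive throttler with the same shape as the calls made above.
class NaiveThrottler {
    private final long targetPerSec; // desired records/sec
    private final long startMs;      // when the run started

    NaiveThrottler(long targetPerSec, long startMs) {
        this.targetPerSec = targetPerSec;
        this.startMs = startMs;
    }

    // true when the achieved rate so far exceeds the target
    boolean shouldThrottle(long sentSoFar, long nowMs) {
        long elapsedMs = Math.max(1, nowMs - startMs);
        return targetPerSec > 0 && sentSoFar * 1000.0 / elapsedMs > targetPerSec;
    }

    // back off briefly so the achieved rate can fall back to the target
    void throttle() {
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}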
MyQueue.java — a bounded queue that decouples payload generation from the send loop:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import log_exception_test.MyKafkaProducer;

public class MyQueue {
    // poll:   returns null when the queue is empty.
    // remove: throws NoSuchElementException when the queue is empty.
    // take:   blocks until an element becomes available.

    // put:   blocks while the queue is full.
    // add:   returns immediately when full and throws an exception.
    // offer: returns immediately when full without throwing (the element is dropped).
    public static BlockingQueue<byte[]> objectQueue = new LinkedBlockingQueue<>(50 * 10000);

    public static void addObject(byte[] obj) {
        // offer() drops the payload silently when the queue is full; the generator
        // threads only need to stay ahead of the send loop, so that is acceptable here.
        objectQueue.offer(obj);
    }

    public static byte[] getObject() throws InterruptedException {
        //System.out.println(objectQueue.size());
        return objectQueue.take();
    }

    private static List<Thread> list = new ArrayList<>();
    static {
        // Start the generator threads when the class is loaded
        int total = 3;
        for (int index = 0; index < total; index++) {
            Thread thread = new Thread(new Runnable() {

                @Override
                public void run() {
                    while (true) {
                        byte[] data = MyKafkaProducer.getData();
                        addObject(data);
                    }
                }

            });
            thread.start();
            list.add(thread);
        }
    }
}
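
Before wiring MyQueue into the send loop, it is worth checking that the three generator threads can keep up with a 100,000 msg/s drain. A hypothetical micro-benchmark (not part of the original code) could look like:

// Hypothetical harness: measures how fast MyQueue's generator threads fill the queue.
public class MyQueueDrainTest {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(1000); // let the generator threads warm up and pre-fill the queue
        long start = System.currentTimeMillis();
        long bytes = 0;
        for (int i = 0; i < 100000; i++) {
            bytes += MyQueue.getObject().length; // blocks if the queue runs empty
        }
        long elapsedMs = Math.max(1, System.currentTimeMillis() - start);
        System.out.printf("drained 100000 payloads (%.1f MB) in %d ms -> %.0f msg/s%n",
            bytes / (1024.0 * 1024.0), elapsedMs, 100000 * 1000.0 / elapsedMs);
    }
}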
MyKafkaProducer.java — builds the fake log payloads (and can also run standalone as a multi-threaded sender):

package log_exception_test;

import java.nio.charset.StandardCharsets;
import java.sql.Timestamp;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import com.alibaba.fastjson.JSON;

public class MyKafkaProducer {
    //    private static List data = new ArrayList();
    //    static {
    //        //        for (int i = 0; i <= 6; i++)
    //        //            data.add("1.0");
    //        //        //
    //        //        for (int i = 0; i <= 6; i++)
    //        //            data.add("2.0");
    //        //        //
    //        //        for (int i = 0; i <= 1; i++)
    //        //            data.add("3.0");
    //        data.add("1");
    //        data.add("1");
    //        data.add("1");
    //        data.add("1");
    //        data.add("1");
    //        data.add("1");
    //        data.add("1");
    //        data.add("2");
    //        data.add("2");
    //        data.add("2");
    //        data.add("11");
    //        data.add("11");
    //        data.add("11");
    //        data.add("11");
    //        data.add("11");
    //        data.add("11");
    //
    //    }

    private static AtomicLong totalSendSucceed = new AtomicLong(0);
    private static AtomicLong totalSendFail    = new AtomicLong(0);
    private static long       begin            = System.currentTimeMillis();
    private static String     topic            = "ymm-appmetric-dev-self1";
    // private static long       VALVE            = 1;//30 * 10000 + 367;

    private static String[]   levels           = new String[] { "DEBUG", "INFO", "WARN", "ERROR" };

    public static byte[] getData() {
        // Build a Send object populated with randomized fake log fields
        Send send = new Send();
        Random random = new Random();
        //msg
        send.setMsg(new Random().nextInt()
                    + "Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect"
                    + System.currentTimeMillis());
        //        send.setLoc(new Random().nextInt() + "org.apache.zookeeper.ClientCnxn"
        //                    + System.currentTimeMillis());
        send.setStack(new Random().nextInt()
                      + "java.net.ConnectException: Connection timed out\\r\\nsun.nio.ch.SocketChannelImpl.checkConnect(Native Method)\\r\\nsun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)\\r\\norg.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)\\r\\norg.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)\\r\\n"
                      + System.currentTimeMillis());
        send.setTimestamp(
            new Random().nextInt() + "2018-11-20T01:45:52.160Z+" + System.currentTimeMillis());
        send.setLevel(levels[random.nextInt(4)]);
        send.setIp(//
            random.nextInt(256)//
                   + "."//
                   + random.nextInt(256)
                   + "."//
                   + random.nextInt(256)//
                   + "."//
                   + random.nextInt(256)//
        );
        send.setThrowable(
            "java.net.ConnectException" + System.currentTimeMillis() + random.nextInt());
        send.setTimeInSec(
            "2018-11-20T09:45:51.000+0800" + System.currentTimeMillis() + random.nextInt());
        send.setThread("localhost-startStop-1-SendThread(10.80.58.128:2181)"
                       + System.currentTimeMillis() + new Random().nextInt());
        send.setTime("" + random.nextInt(100000) + new Timestamp(System.currentTimeMillis()));
        send.setPro("ymm-sms-web" + random.nextInt(256));
        send.setType("log");
        //
        String value = JSON.toJSONString(send);
        return value.getBytes(StandardCharsets.UTF_8);
    }

    private static void sendSingle(KafkaProducer<String, String> producer, int index) {
        long recordTime = System.currentTimeMillis();
        // Build a Send object populated with randomized fake log fields
        Send send = new Send();
        Random random = new Random();
        //msg
        send.setMsg(random.nextInt()
                    + "Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect"
                    + System.currentTimeMillis());
        //        send.setLoc(new Random().nextInt() + "org.apache.zookeeper.ClientCnxn"
        //                    + System.currentTimeMillis());
        send.setStack(random.nextInt()
                      + "java.net.ConnectException: Connection timed out\\r\\nsun.nio.ch.SocketChannelImpl.checkConnect(Native Method)\\r\\nsun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)\\r\\norg.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)\\r\\norg.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)\\r\\n"
                      + System.currentTimeMillis());
        send.setTimestamp(
            new Random().nextInt() + "2018-11-20T01:45:52.160Z+" + System.currentTimeMillis());
        send.setLevel(levels[random.nextInt(4)]);
        send.setIp(//
            random.nextInt(256)//
                   + "."//
                   + random.nextInt(256)
                   + "."//
                   + random.nextInt(256)//
                   + "."//
                   + random.nextInt(256)//
        );
        send.setThrowable(
            "java.net.ConnectException" + System.currentTimeMillis() + random.nextInt());
        send.setTimeInSec(
            "2018-11-20T09:45:51.000+0800" + System.currentTimeMillis() + random.nextInt());
        send.setThread("localhost-startStop-1-SendThread(10.80.58.128:2181)"
                       + System.currentTimeMillis() + new Random().nextInt());
        send.setTime("" + random.nextInt() + new Timestamp(recordTime));
        send.setPro("ymm-sms-web" + random.nextInt(256));
        send.setType("log");
        //
        String value = JSON.toJSONString(send);
        //System.out.println(value);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
        try {
            Future<RecordMetadata> future = producer.send(record,
                new DemoCallback(recordTime, topic, "value_" + recordTime));
            //future.get();
            // Note: this counts records handed to the producer, not broker-acknowledged sends
            totalSendSucceed.incrementAndGet();
        } catch (Exception e) {
            totalSendFail.incrementAndGet();
            System.out.println(e.toString());
        } finally {
            //System.exit(-1);
        }
    }

    private static Send[] sendArray = new Send[] {
        new Send("msg", "loc", "stack", "timestamp", "level", "ip",
                 "throwable", "timeInSec", "thread", "time", "type", "pro"),
        new Send("msg", "loc", "stack", "timestamp", "level", "ip",
                 "throwable", "timeInSec", "thread", "time", "type", "pro")
    };

    private static void unitTest() {
        KafkaProducer<String, String> producer = getKafkaProducer();
        //
        int total = 100;
        int mod = 1;
        for (int index = 0; index < total; index++) {
            Send send = new Send();
            send.setPro("pro" + Math.abs(new Random().nextInt()) % mod);
            send.setThrowable("throwable" + Math.abs(new Random().nextInt()) % mod);
            send.setLevel("level" + Math.abs(new Random().nextInt()) % mod);
            send.setIp("ip" + Math.abs(new Random().nextInt()) % mod);
            long recordTime = System.currentTimeMillis();
            String value = JSON.toJSONString(send);
            //System.out.println(value);
            ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
            try {
                Future<RecordMetadata> future = producer.send(record,
                    new DemoCallback(recordTime, topic, "value_" + recordTime));
                //future.get();
                totalSendSucceed.incrementAndGet();
                // System.out.println("record sent");
            } catch (Exception e) {
                totalSendFail.incrementAndGet();
                System.out.println(e.toString());
            } finally {

            }
        }

    }

    public static void main(String[] args) {

        //                unitTest();
        //                try {
        //                    Thread.sleep(1000);
        //                } catch (InterruptedException e) {
        //                    //logger.error("", e);
        //                }
        //                unitTest();
        //                try {
        //                    Thread.sleep(1000);
        //                } catch (InterruptedException e) {
        //                    //logger.error("", e);
        //                }
        //                unitTest();

        int tag = 30;
        // Start up to 7 sender threads; tag caps how many actually start.
        // Sender i stops after its valve count (90000000 + i records).
        for (int i = 1; i <= 7 && tag >= i; i++) {
            final int senderIndex = i;
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    main0(args, senderIndex, 90000000L + senderIndex);
                }
            });
            thread.start();
        }
        // Stats thread: prints per-second success/failure deltas
        Thread statsThread = new Thread(new Runnable() {

            @Override
            public void run() {
                long last = System.currentTimeMillis();
                long lastsuc = 0;
                long lastfail = 0;
                //
                while (true) {
                    long now = System.currentTimeMillis();
                    long elapsed = now - last;
                    if (elapsed >= 1000) {
                        long currentSuc = totalSendSucceed.get();
                        long currentFail = totalSendFail.get();
                        System.out
                            .println("elapsed " + elapsed + " - suc " + (currentSuc - lastsuc)
                                     + " total " + currentSuc + " fail " + (currentFail - lastfail)
                                     + " " + System.currentTimeMillis());
                        //
                        last = now;
                        lastsuc = currentSuc;
                        lastfail = currentFail;
                    }
                    // Sleep briefly so this loop does not busy-spin a whole core
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                //end
            }
        });
        statsThread.start();
        //
    }

    private static KafkaProducer<String, String> getKafkaProducer() {
        Properties kafkaProps = new Properties();

        //kafkaProps.put("bootstrap.servers", " 192.168.199.188:9092,192.168.198.109:9092,192.168.198.110:9092");
        kafkaProps.put("bootstrap.servers", "192.168.199.188:9092");
        kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // Network tuning
        // 1) Sending: compress and batch
        kafkaProps.put("compression.type", "lz4");
        kafkaProps.put("batch.size", "8196");
        //kafkaProps.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "1024000");
        // kafkaProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION,"1000");
        // 2) Wait for leader acknowledgement
        kafkaProps.put(ProducerConfig.ACKS_CONFIG, "1");
        kafkaProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "120000");
        // 3) Retry on timeout
        kafkaProps.put(ProducerConfig.RETRIES_CONFIG, 30);
        //
        // kafkaProps.put("batch.size", "10240");
        // kafkaProps.put("linger.ms", "0");

        KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProps);
        return producer;
    }

    public static void main0(String[] args, int index, Long valve) {
        //
        KafkaProducer<String, String> producer = getKafkaProducer();
        System.out.println("sending started");
        long sendCount = 0;
        while (true) {
            try {
                sendSingle(producer, index);
            } catch (Exception e) {
                System.out.println(e.toString());
            } finally {
                sendCount++;
                if (sendCount >= valve) {
                    // this sender is done
                    System.out.println("send " + index + " finished after " + valve + " records!");
                    producer.close();
                    return;
                }
            }

        }

    }

    static class DemoCallback implements Callback {
        private long   startTime;
        @SuppressWarnings("unused")
        private String key;
        @SuppressWarnings("unused")
        private String message;

        public DemoCallback(long startTime, String key, String message) {
            this.startTime = startTime;
            this.key = key;
            this.message = message;
        }

        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            long elapsedTime = System.currentTimeMillis() - startTime;
            if (null != exception) {
                System.out.println(exception.toString());
                return;
            }
            //            if (null != metadata && elapsedTime >= 1000) {
            //                 System.out.println("message(" + key + ", " + message + ") sent to partition("
            //                 + metadata.partition()
            //                 + "), " + "offset(" + metadata.offset() + " ) in " + elapsedTime + " ms");
            //
            //            }

        }

    }
}
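
One piece the post never shows is the Send POJO that fastjson serializes. A minimal reconstruction that satisfies all the calls above (field names inferred from the setters and the sendArray constructor; the real class may differ) would be:

package log_exception_test;

// Hypothetical reconstruction of the Send POJO; fastjson serializes it via the getters.
public class Send {
    private String msg, loc, stack, timestamp, level, ip, throwable, timeInSec, thread, time, type, pro;

    public Send() {
    }

    public Send(String msg, String loc, String stack, String timestamp, String level, String ip,
                String throwable, String timeInSec, String thread, String time, String type, String pro) {
        this.msg = msg; this.loc = loc; this.stack = stack; this.timestamp = timestamp;
        this.level = level; this.ip = ip; this.throwable = throwable; this.timeInSec = timeInSec;
        this.thread = thread; this.time = time; this.type = type; this.pro = pro;
    }

    public String getMsg() { return msg; }             public void setMsg(String v) { msg = v; }
    public String getLoc() { return loc; }             public void setLoc(String v) { loc = v; }
    public String getStack() { return stack; }         public void setStack(String v) { stack = v; }
    public String getTimestamp() { return timestamp; } public void setTimestamp(String v) { timestamp = v; }
    public String getLevel() { return level; }         public void setLevel(String v) { level = v; }
    public String getIp() { return ip; }               public void setIp(String v) { ip = v; }
    public String getThrowable() { return throwable; } public void setThrowable(String v) { throwable = v; }
    public String getTimeInSec() { return timeInSec; } public void setTimeInSec(String v) { timeInSec = v; }
    public String getThread() { return thread; }       public void setThread(String v) { thread = v; }
    public String getTime() { return time; }           public void setTime(String v) { time = v; }
    public String getType() { return type; }           public void setType(String v) { type = v; }
    public String getPro() { return pro; }             public void setPro(String v) { pro = v; }
}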

5) Package and run

java -Xms1024M -Xmx1024M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:./gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false   -jar  kafkadriver-0.0.1-SNAPSHOT-shaded.jar   --topic ymm-appmetric-dev-self1 --throughput 100000  --num-records 20000000 --producer-props bootstrap.servers=192.168.199.188:9092 client.id=myclientid batch.size=8196 compression.type=lz4
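
A note on packaging: the command runs the shaded jar with java -jar, which requires the jar's manifest to declare the main class. Assuming the project is built with Maven (the original post does not show its pom), a shade-plugin setup along these lines would produce such a jar:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
            <configuration>
                <transformers>
                    <!-- Writes Main-Class into the shaded jar's MANIFEST.MF -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>MyProducerPerformance</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>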

6) Results

.kafka.common.utils.AppInfoParser$AppInfo.(AppInfoParser.java:84) 
23804 records sent, 23638.5 records/sec (20.40 MB/sec), 16.6 ms avg latency, 304.0 max latency.
97041 records sent, 96944.1 records/sec (83.67 MB/sec), 14.1 ms avg latency, 84.0 max latency.
60257 records sent, 60257.0 records/sec (52.01 MB/sec), 9.4 ms avg latency, 108.0 max latency.
[2018-11-22 17:56:09,697] INFO Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer) org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:689) 
200000 records sent, 59790.732436 records/sec (51.61 MB/sec), 12.35 ms avg latency, 304.00 ms max latency, 7 ms 50th, 39 ms 95th, 64 ms 99th, 79 ms 99.9th.


Reposted from: https://my.oschina.net/qiangzigege/blog/2907021
