Flink: dynamically sinking to multiple Kafka topics with a custom partitioner

1. First, implement the serialization schema

 

public class CustomKeyedSerializationSchema implements KeyedSerializationSchema<TopicAndValueDemo> {

    @Override
    public byte[] serializeKey(TopicAndValueDemo topicAndValueDemo) {
        String pk = topicAndValueDemo.getPk();
        System.out.println("primary key pk = " + pk);
        return pk.getBytes();
    }

    @Override
    public byte[] serializeValue(TopicAndValueDemo topicAndValueDemo) {
        return topicAndValueDemo.getValues().getBytes();
    }

    @Override
    public String getTargetTopic(TopicAndValueDemo topicAndValueDemo) {
        // a non-null return value here routes each record to its own topic,
        // overriding the default topic passed to FlinkKafkaProducer
        return topicAndValueDemo.getTopic();
    }
}
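
The TopicAndValueDemo POJO is used throughout but never shown in the post. A minimal sketch, reconstructed from how it is used in the code (the field names and toString format are assumptions):

import java.io.Serializable;

public class TopicAndValueDemo implements Serializable {

    private String topic;   // target Kafka topic for this record
    private String pk;      // primary key; the demo also uses it as the partition index
    private String values;  // payload

    public TopicAndValueDemo(String topic, String pk, String values) {
        this.topic = topic;
        this.pk = pk;
        this.values = values;
    }

    public String getTopic() { return topic; }
    public String getPk() { return pk; }
    public String getValues() { return values; }

    @Override
    public String toString() {
        return "TopicAndValueDemo{topic='" + topic + "', pk='" + pk + "', values='" + values + "'}";
    }
}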

 
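Note that KeyedSerializationSchema is deprecated as of Flink 1.11. A minimal sketch of the same logic on its replacement, KafkaSerializationSchema, which builds the ProducerRecord itself and therefore also supports per-record topics:

import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CustomKafkaSerializationSchema implements KafkaSerializationSchema<TopicAndValueDemo> {

    @Override
    public ProducerRecord<byte[], byte[]> serialize(TopicAndValueDemo element, Long timestamp) {
        // topic, key and value all come from the element, so every record picks its own topic
        return new ProducerRecord<>(element.getTopic(), element.getPk().getBytes(), element.getValues().getBytes());
    }
}

With this interface there is no separate FlinkKafkaPartitioner argument in the producer constructor; a fixed partition can instead be set directly on the ProducerRecord.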

2. Next, implement the custom partitioner

public class kafkaPartitionerDemoJava extends FlinkKafkaPartitioner<TopicAndValueDemo> {


    @Override
    public int partition(TopicAndValueDemo topicAndValueDemo, byte[] key, byte[] value, String topic, int[] partitions) {
        String pk = topicAndValueDemo.getPk();
        System.out.println("received key pk = " + pk);
        System.out.println("target topic = " + topic);
        System.out.println("number of partitions: " + partitions.length);
        System.out.println("received key as new String(key): " + new String(key));
        // TODO production code: hash the key instead of using pk directly
        // (see the sketch after this class)
        // Math.abs(new String(key).hashCode() % partitions.length);
        return Integer.valueOf(pk);
    }

    // quick standalone check of the random pk values the demo source generates
    public static void main(String[] args) {
        Random random = new Random();
        while (true) {
            System.out.println("random.nextInt(3) = " + random.nextInt(3));
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
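
Returning Integer.valueOf(pk) works in this demo only because pk is always 0, 1 or 2 and each target topic is assumed to have at least three partitions; any other pk would yield an invalid partition index. A sketch of the hash-based variant the TODO comment points at (the class name HashKeyPartitioner is made up for illustration):

import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

public class HashKeyPartitioner extends FlinkKafkaPartitioner<TopicAndValueDemo> {

    @Override
    public int partition(TopicAndValueDemo record, byte[] key, byte[] value, String topic, int[] partitions) {
        // map the key's hash into [0, partitions.length); the modulo result is always
        // greater than -partitions.length, so Math.abs cannot overflow here
        return Math.abs(new String(key).hashCode() % partitions.length);
    }
}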

 

3. The main class. Note that some producer parameters must be set for the transactional guarantees to work when sinking.

 

public class KafkaSinkDemo {
    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment bsEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(bsEnv, bsSettings);
        bsEnv.enableCheckpointing(5000);
        bsEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<TopicAndValueDemo> ds = bsEnv.addSource(new SourceFunction<TopicAndValueDemo>() {
            @Override
            public void run(SourceContext<TopicAndValueDemo> out) throws Exception {
                Random random = new Random();

                while (true) {
                    int i = random.nextInt(3);
                    String pk = String.valueOf(i);
                    String topic = "partition_" + pk + "_test";
                    TopicAndValueDemo topicAndValueDemo = new TopicAndValueDemo(topic, pk, "produced at: " + System.currentTimeMillis());
                    System.out.println("produced record: " + topicAndValueDemo);
                    out.collect(topicAndValueDemo);
                    Thread.sleep(1000L);
                }
            }

            @Override
            public void cancel() {

            }
        });


        FlinkKafkaProducer<TopicAndValueDemo> flinkKafkaProducer = new FlinkKafkaProducer<>(
                "",  // default topic; unused here because getTargetTopic() routes every record
                new CustomKeyedSerializationSchema(),
                getProperties(),
                java.util.Optional.of(new kafkaPartitionerDemoJava()),
//                FlinkKafkaProducer.Semantic.EXACTLY_ONCE,
                FlinkKafkaProducer.Semantic.AT_LEAST_ONCE,
                FlinkKafkaProducer.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
        );
        ds.addSink(flinkKafkaProducer);
        bsEnv.execute("aaa");
    }

    public static Properties getProperties() {
        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "dev-ct6-dc-worker01:9092,dev-ct6-dc-worker02:9092,dev-ct6-dc-worker03:9092");
        producerConfig.setProperty("acks", "all");
        producerConfig.setProperty("buffer.memory", "102400");
        producerConfig.setProperty("compression.type", "snappy");
        producerConfig.setProperty("batch.size", "1000");
        producerConfig.setProperty("linger.ms", "1");
        // transaction-related settings: the transaction timeout must stay below the
        // broker's transaction.max.timeout.ms (15 minutes by default)
        producerConfig.setProperty(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 1000 * 60 * 3 + "");
        producerConfig.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
        producerConfig.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        return producerConfig;
    }
    }
}
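
The note in step 3 about transactional parameters matters on the consumer side too: with EXACTLY_ONCE, downstream consumers only see records once the checkpoint that committed them completes, and only if they run with isolation.level=read_committed. A minimal plain-Kafka consumer sketch for checking the demo output (the group id is a placeholder):

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReadCommittedConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "dev-ct6-dc-worker01:9092");
        props.setProperty("group.id", "demo-consumer");              // placeholder group id
        props.setProperty("isolation.level", "read_committed");     // hide uncommitted/aborted records
        props.setProperty("auto.offset.reset", "earliest");         // read the demo topics from the start
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("partition_0_test", "partition_1_test", "partition_2_test"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.println(record.topic() + "-" + record.partition() + " : " + record.value());
                }
            }
        }
    }
}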

 
