Flink Kafka consumer NullPointerException

Error:

2021-12-02 10:51:55,644 WARN org.apache.flink.runtime.taskmanager.Task [] - Source: Custom Source (1/3)#4 (92d97e5b78862c632d32b272b843bf4b) switched from RUNNING to FAILED with failure cause: java.lang.NullPointerException
    at java.lang.String.<init>(String.java:515)
    at org.apache.flink.api.common.serialization.SimpleStringSchema.deserialize(SimpleStringSchema.java:78)
    at org.apache.flink.api.common.serialization.SimpleStringSchema.deserialize(SimpleStringSchema.java:36)
    at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:82)
    at org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper.deserialize(KafkaDeserializationSchemaWrapper.java:58)
    at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:179)
    at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.runFetchLoop(KafkaFetcher.java:142)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:826)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269)


Cause and solution

This happens because, when a client edits a document in MongoDB, the change stream can produce three records: delete, null, insert (the null record may also be a tombstone message produced when Debezium (dbz) writes deletes to Kafka). SimpleStringSchema.deserialize() passes that null message body straight into new String(message, charset), which throws the NullPointerException above.
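If the null records really are Debezium tombstones, an alternative to handling them in Flink is to suppress them at the source. Assuming a Debezium source connector, its tombstones.on.delete setting controls whether a tombstone record is emitted after each delete event (connector name and topic are placeholders):

```properties
# Debezium source connector config (Kafka Connect), sketch only:
# do not emit the tombstone record that normally follows a delete event
tombstones.on.delete=false
```

Note that consumers relying on log compaction to drop deleted keys need those tombstones, so this is only safe when nothing downstream depends on them.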

Fix: implement a custom schema, e.g. static class MySimpleStringSchema implements DeserializationSchema<String>, SerializationSchema<String>, and guard against the null message body:

    @Override
    public String deserialize(byte[] message) {
        if (message != null) {
            return new String(message, charset);
        }
        // Tombstone record: value is null, so decode nothing instead of throwing NPE
        return "";
    }

In short: add a null check before constructing the String.
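The same null guard, extracted into a plain Java class so it can be exercised without a Flink runtime (the class name is my own; in the real job this logic lives inside the custom MySimpleStringSchema's deserialize method):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the null-safe deserialize logic. A tombstone record's
// value is null, so there is nothing to decode; map it to an empty string
// that a downstream filter can drop, instead of throwing an NPE.
public class NullSafeStringDeserializer {
    private static final Charset CHARSET = StandardCharsets.UTF_8;

    public static String deserialize(byte[] message) {
        if (message == null) {
            return "";
        }
        return new String(message, CHARSET);
    }

    public static void main(String[] args) {
        System.out.println(deserialize("insert".getBytes(CHARSET))); // prints "insert"
        System.out.println(deserialize(null).isEmpty());             // prints "true"
    }
}
```

Downstream, a simple `.filter(s -> !s.isEmpty())` on the source stream drops the tombstones before parsing.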
