[Flink Hands-On] Flink CDC DataStream API: Listening to MySQL and Dynamically Routing to Kafka Topics

[toc]

I. Background

Business context: incremental MySQL changes need to be synchronized to Kafka in real time for downstream consumers.

A look at the official Flink CDC documentation shows that the Features section describes different levels of support for the SQL and DataStream APIs:

Features

1. Supports reading database snapshot and continues to read binlogs with exactly-once processing even when failures happen.

2. CDC connectors for DataStream API, users can consume changes on multiple databases and tables in a single job without Debezium and Kafka deployed.

3. CDC connectors for Table/SQL API, users can use SQL DDL to create a CDC source to monitor changes on a single table.

The SQL API is smooth and simple to use, but with many business tables, spinning up one Flink job per monitored table would be wasteful in resources and painful to operate. I therefore decided to use the DataStream API to capture database-level MySQL CDC in a single job and route the records to different Kafka topics by table name.

II. Code Implementation

1. Key Maven dependencies

```xml
<dependency>
    <groupId>com.alibaba.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>1.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <!-- version not given in the original; typically managed elsewhere in the POM -->
    <exclusions>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>
```

2. Custom CDC data deserializer

Flink CDC defines the com.alibaba.ververica.cdc.debezium.DebeziumDeserializationSchema interface for deserializing CDC records. It ships with two default implementations, com.alibaba.ververica.cdc.debezium.table.RowDataDebeziumDeserializeSchema and com.alibaba.ververica.cdc.debezium.StringDebeziumDeserializationSchema. Since we need our own output schema, we use neither of these defaults and instead implement the interface ourselves.

Define JsonDebeziumDeserializeSchema, implementing the DebeziumDeserializationSchema interface:

```scala
class JsonDebeziumDeserializeSchema extends DebeziumDeserializationSchema[String] {

  private final val log: Logger = LoggerFactory.getLogger(classOf[JsonDebeziumDeserializeSchema])

  override def deserialize(sourceRecord: SourceRecord, collector: Collector[String]): Unit = {
    val op = Envelope.operationFor(sourceRecord)
    val source = sourceRecord.topic()
    val value = sourceRecord.value().asInstanceOf[Struct]
    val valueSchema: Schema = sourceRecord.valueSchema()
    if (op != Operation.CREATE && op != Operation.READ) {
      if (op == Operation.DELETE) {
        val data = extractBeforeData(value, valueSchema)
        val record = new JSONObject()
          .fluentPut("source", source)
          .fluentPut("data", data)
          .fluentPut("op", RowKind.DELETE.shortString())
          .toJSONString
        collector.collect(record)
      } else {
        val beforeData = extractBeforeData(value, valueSchema)
        val beforeRecord = new JSONObject()
          .fluentPut("source", source)
          .fluentPut("data", beforeData)
          .fluentPut("op", RowKind.UPDATE_BEFORE.shortString())
          .toJSONString
        collector.collect(beforeRecord)

        val afterData = extractAfterData(value, valueSchema)
        val afterRecord = new JSONObject()
          .fluentPut("source", source)
          .fluentPut("data", afterData)
          .fluentPut("op", RowKind.UPDATE_AFTER.shortString())
          .toJSONString
        collector.collect(afterRecord)
      }
    } else {
      val data = extractAfterData(value, valueSchema)
      val record = new JSONObject()
        .fluentPut("source", source)
        .fluentPut("data", data)
        .fluentPut("op", RowKind.INSERT.shortString())
        .toJSONString
      collector.collect(record)
    }
  }

  override def getProducedType: TypeInformation[String] = BasicTypeInfo.STRING_TYPE_INFO
  ...
}
```
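The trailing ... elides the extractBeforeData/extractAfterData helpers, which the original does not show. A minimal sketch of these class members, assuming the standard Debezium envelope fields (before/after) and fastjson, might look like this (the shared extractData helper is my own naming):

```scala
import scala.collection.JavaConverters._

import com.alibaba.fastjson.JSONObject
import io.debezium.data.Envelope
import org.apache.kafka.connect.data.{Schema, Struct}

// Hypothetical helpers inside JsonDebeziumDeserializeSchema: copy every field
// of the envelope's "before"/"after" Struct into a fastjson JSONObject,
// returning an empty object when that part of the envelope is absent.
private def extractData(value: Struct, valueSchema: Schema, fieldName: String): JSONObject = {
  val data = new JSONObject()
  val struct = value.getStruct(fieldName)
  if (struct != null) {
    val rowSchema = valueSchema.field(fieldName).schema()
    for (field <- rowSchema.fields().asScala) {
      data.put(field.name(), struct.get(field))
    }
  }
  data
}

private def extractBeforeData(value: Struct, valueSchema: Schema): JSONObject =
  extractData(value, valueSchema, Envelope.FieldName.BEFORE)

private def extractAfterData(value: Struct, valueSchema: Schema): JSONObject =
  extractData(value, valueSchema, Envelope.FieldName.AFTER)
```

With this schema, an UPDATE on table user in database test emits two records of the form {"source":"mysql_binlog_source.test.user","data":{...},"op":"-U"} and {"source":"mysql_binlog_source.test.user","data":{...},"op":"+U"}, using Flink's RowKind short strings.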

Define the MySqlSource that listens for database-level changes in MySQL (note that tableList entries must be qualified with the database name, e.g. db.table):

```scala
val properties = new Properties()
properties.setProperty("snapshot.mode", snapshotMode) // Debezium's property key is "snapshot.mode"

val mysqlCdcSource = MySQLSource.builder[String]()
   .hostname(hostname)
   .port(port)
   .databaseList(database)
   .tableList(tableName)
   .username(username)
   .password(password)
   .deserializer(new JsonDebeziumDeserializeSchema)
   .debeziumProperties(properties)
   .serverId(serverId)
   .build()
```

3. Dynamically routing records to different Kafka topics

From the custom schema above we know that the source field takes the form mysql_binlog_source.<database>.<table>. We can therefore implement a custom KafkaSerializationSchema, OverridingTopicSchema, that sends each record to a topic derived from that field; for example, with the default settings below, a record from table user in database test (source mysql_binlog_source.test.user) goes to topic topic_mysql_binlog_source_test_user:

```scala
abstract class OverridingTopicSchema extends KafkaSerializationSchema[String] {

  val topicPrefix: String

  val topicSuffix: String

  val topicKey: String

  override def serialize(element: String, timestamp: lang.Long): ProducerRecord[Array[Byte], Array[Byte]] = {
    // Derive the target topic from the record's topicKey field ("source" by
    // default), replacing the dots with underscores. A record without that
    // field cannot be routed, so fail fast rather than hand the producer a
    // null topic, which it would reject at runtime.
    if (element == null || !element.contains(topicKey))
      throw new IllegalArgumentException(s"Record has no '$topicKey' field: $element")
    val topicStr = JSON.parseObject(element).getString(topicKey).replaceAll("\\.", "_")
    val topic = topicPrefix.concat(topicStr).concat(topicSuffix)
    new ProducerRecord[Array[Byte], Array[Byte]](topic, element.getBytes(StandardCharsets.UTF_8))
  }
}
```

We also define a factory method that creates the Kafka producer used for dynamic topic routing. Because topicPrefix, topicSuffix and topicKey are abstract vals, each job configures them by instantiating an anonymous subclass, as the main class below does.

```scala
/**
 * Creates a Kafka producer that dynamically routes records to different topics.
 *
 * @param bootstrapServers         Kafka cluster address
 * @param kafkaSerializationSchema Kafka serialization schema
 * @return the configured FlinkKafkaProducer
 */
def createDynamicFlinkProducer(bootstrapServers: String,
                               kafkaSerializationSchema: KafkaSerializationSchema[String]): FlinkKafkaProducer[String] = {
  if (StringUtils.isEmpty(bootstrapServers))
    throw new IllegalArgumentException("bootstrapServers is required")
  val properties = initDefaultKafkaProducerConfig(bootstrapServers)
  properties.put(ACKS_CONFIG, "all")

  new FlinkKafkaProducer[String](DEFAULT_TOPIC, kafkaSerializationSchema,
    properties, FlinkKafkaProducer.Semantic.EXACTLY_ONCE)
}
```
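DEFAULT_TOPIC and initDefaultKafkaProducerConfig live in the author's KafkaUtils and are not shown. A minimal sketch of what initDefaultKafkaProducerConfig could contain, assuming it only sets baseline producer options:

```scala
import java.util.Properties

import org.apache.kafka.clients.producer.ProducerConfig

// Hypothetical baseline producer config; the property values are assumptions.
def initDefaultKafkaProducerConfig(bootstrapServers: String): Properties = {
  val properties = new Properties()
  properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
  // EXACTLY_ONCE uses Kafka transactions: the producer-side transaction timeout
  // must not exceed the broker's transaction.max.timeout.ms (15 minutes by
  // default), otherwise the producer fails to initialize.
  properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "900000")
  properties
}
```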

4. Complete main class

```scala
object Cdc2KafkaByStream {

  def main(args: Array[String]): Unit = {
    val parameterTool = ParameterTool.fromArgs(args)
    // CDC config
    val hostname = parameterTool.get("hostname")
    val port = parameterTool.getInt("port", 3306)
    val username = parameterTool.get("username")
    val password = parameterTool.get("password")
    val database = parameterTool.get("database")
    val tableName = parameterTool.get("tableName")
    val serverId = parameterTool.getInt("serverId")
    val snapshotMode = parameterTool.get("snapshotMode", "initial")
    // Kafka config
    val kafkaBrokers = parameterTool.get("kafkaBrokers")
    val kafkaTopicPrefix = parameterTool.get("kafkaTopicPrefix", "topic_")
    val kafkaTopicSuffix = parameterTool.get("kafkaTopicSuffix", "")
    val kafkaTopicKey = parameterTool.get("kafkaTopicKey", "source")

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    ExecutionEnvUtils.configStreamExecutionEnv(env, parameterTool)
    ExecutionEnvUtils.parameterPrint(parameterTool)

    val properties = new Properties()
    properties.setProperty("snapshot.mode", snapshotMode) // Debezium's property key is "snapshot.mode"

    val mysqlCdcSource = MySQLSource.builder[String]()
      .hostname(hostname)
      .port(port)
      .databaseList(database)
      .tableList(tableName)
      .username(username)
      .password(password)
      .deserializer(new JsonDebeziumDeserializeSchema)
      .debeziumProperties(properties)
      .serverId(serverId)
      .build()

    val kafkaSink = KafkaUtils.createDynamicFlinkProducer(kafkaBrokers, new OverridingTopicSchema() {
      override val topicPrefix: String = kafkaTopicPrefix
      override val topicSuffix: String = kafkaTopicSuffix
      override val topicKey: String = kafkaTopicKey
    })
    env.addSource(mysqlCdcSource).addSink(kafkaSink).setParallelism(1)
    env.execute()
  }
}
```
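ExecutionEnvUtils.configStreamExecutionEnv is another of the author's helpers that is not shown. Whatever else it configures, checkpointing must be enabled for this pipeline, because an EXACTLY_ONCE FlinkKafkaProducer only commits its Kafka transactions when a checkpoint completes. A minimal sketch of the assumed configuration:

```scala
import org.apache.flink.streaming.api.CheckpointingMode

// Without checkpoints the EXACTLY_ONCE producer never commits its
// transactions, and read_committed consumers see no data.
env.enableCheckpointing(60000) // checkpoint (and commit Kafka transactions) every 60s
env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
```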

After starting the job, you can see that Kafka creates a separate topic per table (provided the brokers allow automatic topic creation) and that each topic receives that table's data.

With that, a single DataStream API job captures database-level MySQL CDC and routes the records to different Kafka topics by table name.

III. Summary

This post showed how to use the Flink CDC DataStream API to stream changes from a MySQL database into different Kafka topics: a custom DebeziumDeserializationSchema parses the CDC records into our own JSON schema, and a custom KafkaSerializationSchema routes each message to a topic derived from its content.
