Keywords: Flink-CDC, Flink-CDC getting-started tutorial, Flink CDC Connectors, Flink-CDC 2.0.0
Before Flink CDC came along, the data-synchronization tools most people knew were Canal, Maxwell, and the like. Since its birth, Flink CDC has matured steadily over a year of development. It is very easy to get started with, but easy to use does not mean feature-poor; on the contrary, it is quite powerful. Today we will take a look at what Flink CDC is.
CDC is short for Change Data Capture. The core idea is to monitor and capture changes to a database (inserts, updates, and deletes of rows or tables), record those changes completely in the order they occur, and write them to a message broker for other services to subscribe to and consume.
CDC implementations fall into two main categories: query-based (polling) and log-based.
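The difference is easiest to see with a minimal sketch of query-based CDC: poll the table and pick out rows whose update timestamp is newer than the last poll. The class and the in-memory "table" below are purely illustrative, not part of any library; log-based CDC (what Flink CDC uses) reads the binlog instead, so it also sees deletes and never misses intermediate states between polls.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of query-based CDC. A real job would run
// "SELECT ... WHERE update_time > ?" against MySQL; here the "table"
// is an in-memory list of (id, updateTime) pairs so the idea is
// runnable standalone.
public class QueryBasedCdc {
    private long lastSeenTs = 0L;

    // Return every row changed since the previous poll, then advance
    // the watermark to the newest timestamp we saw.
    public List<long[]> poll(List<long[]> table) {
        List<long[]> changed = new ArrayList<>();
        long maxTs = lastSeenTs;
        for (long[] row : table) {
            long updateTime = row[1];
            if (updateTime > lastSeenTs) {
                changed.add(row);
                maxTs = Math.max(maxTs, updateTime);
            }
        }
        lastSeenTs = maxTs;
        return changed;
    }
}
```

Note what this sketch cannot do: a row deleted between two polls simply disappears, and two updates to the same row between polls collapse into one. Those gaps are exactly what log-based CDC closes.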
Flink CDC is a set of source connectors for Apache Flink that use change data capture (CDC) to read full snapshot data and incremental change data directly from databases such as MySQL and PostgreSQL. Flink CDC Connectors embed Debezium as the change-capture engine, so they can take full advantage of Debezium's capabilities.
This walkthrough uses a MySQL database.
Prerequisites
Make sure binlog is enabled on your MySQL server.
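You can check this from any MySQL client; the config fragment below is a typical example, assuming you have permission to edit my.cnf and restart the server:

```sql
-- Check whether binlog is enabled (ON means enabled):
SHOW VARIABLES LIKE 'log_bin';
-- If it is OFF, enable it in my.cnf and restart MySQL, e.g.:
--   [mysqld]
--   server-id     = 1
--   log-bin       = mysql-bin
--   binlog_format = ROW   -- Debezium-based capture requires ROW format
```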
Create a Maven project
The full example has been pushed to Gitee; if you need it, you can download it from my repository.
Add the dependencies
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.13.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <version>1.13.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.12</artifactId>
    <version>1.13.2</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.25</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.12</artifactId>
    <version>1.13.2</version>
</dependency>
<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.75</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7_2.12</artifactId>
    <version>1.13.2</version>
</dependency>
Write the demo
import com.ververica.cdc.connectors.mysql.MySqlSource;
import com.ververica.cdc.connectors.mysql.table.StartupOptions;
import com.ververica.cdc.debezium.DebeziumSourceFunction;
import com.ververica.cdc.debezium.StringDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkCDC {
    public static void main(String[] args) throws Exception {
        // 1. Get the Flink execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // 2. Build a SourceFunction with Flink CDC
        DebeziumSourceFunction<String> sourceFunction = MySqlSource.<String>builder()
                .hostname("127.0.0.1")
                .port(3306)
                .username("root")
                .password("root")
                .databaseList("flink-cdc") // without tableList(), all tables in the database are captured
                .deserializer(new StringDebeziumDeserializationSchema())
                .startupOptions(StartupOptions.initial()) // full snapshot first, then read the binlog
                .build();
        DataStreamSource<String> dataStreamSource = env.addSource(sourceFunction);
        // 3. Print the change records
        dataStreamSource.print();
        // 4. Start the job
        env.execute("FlinkCDC");
    }
}
Run it
Whenever we insert, update, or delete a row in the database, the change is printed to the console:
[debezium-mysqlconnector-mysql_binlog_source-change-event-source-coordinator] INFO io.debezium.connector.mysql.MySqlStreamingChangeEventSource - Keepalive thread is running
[Legacy Source Thread - Source: Custom Source -> (Sink: Print to Std. Out, Sink: Unnamed) (1/1)#0] INFO com.ververica.cdc.debezium.internal.DebeziumChangeFetcher - Database snapshot phase can't perform checkpoint, acquired Checkpoint lock.
SourceRecord{
sourcePartition={
server=mysql_binlog_source}, sourceOffset={
ts_sec=1630413635, file=mysql-bin.000003, pos=1234, snapshot=true}} ConnectRecord{
topic='mysql_binlog_source.flink-cdc.t_user', kafkaPartition=null, key=Struct{
id=1}, keySchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Key:STRUCT}, value=Struct{
after=Struct{
id=1,desc=数据测试,name=极客},source=Struct{
version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1630413635069,snapshot=true,db=flink-cdc,table=t_user,server_id=0,file=mysql-bin.000003,pos=1234,row=0},op=r,ts_ms=1630413635072}, valueSchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
SourceRecord{
sourcePartition={
server=mysql_binlog_source}, sourceOffset={
ts_sec=1630413635, file=mysql-bin.000003, pos=1234}} ConnectRecord{
topic='mysql_binlog_source.flink-cdc.t_user', kafkaPartition=null, key=Struct{
id=2}, keySchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Key:STRUCT}, value=Struct{
after=Struct{
id=2,desc=你爱我我爱你,name=极客688},source=Struct{
version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1630413635074,snapshot=last,db=flink-cdc,table=t_user,server_id=0,file=mysql-bin.000003,pos=1234,row=0},op=r,ts_ms=1630413635074}, valueSchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
[Legacy Source Thread - Source: Custom Source -> (Sink: Print to Std. Out, Sink: Unnamed) (1/1)#0] INFO com.ververica.cdc.debezium.internal.DebeziumChangeFetcher - Received record from streaming binlog phase, released checkpoint lock.
[debezium-engine] INFO io.debezium.connector.common.BaseSourceTask - 3 records sent during previous 00:01:31.149, last recorded offset: {
transaction_id=null, ts_sec=1630413724, file=mysql-bin.000003, pos=1299, row=1, server_id=1, event=2}
SourceRecord{
sourcePartition={
server=mysql_binlog_source}, sourceOffset={
transaction_id=null, ts_sec=1630413724, file=mysql-bin.000003, pos=1299, row=1, server_id=1, event=2}} ConnectRecord{
topic='mysql_binlog_source.flink-cdc.t_user', kafkaPartition=null, key=Struct{
id=3}, keySchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Key:STRUCT}, value=Struct{
after=Struct{
id=3,desc=66,name=666},source=Struct{
version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1630413724000,db=flink-cdc,table=t_user,server_id=1,file=mysql-bin.000003,pos=1436,row=0},op=c,ts_ms=1630413724566}, valueSchema=Schema{
mysql_binlog_source.flink_cdc.t_user.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
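Notice the `op` field in each record: the snapshot rows carry `op=r` while the row inserted afterwards carries `op=c`. Debezium's envelope uses four single-letter op codes; here is a small lookup helper to decode them (the codes are Debezium's, but the `OpCodes` class itself is our own convenience, not part of the Flink CDC API):

```java
import java.util.HashMap;
import java.util.Map;

// Decode Debezium's single-letter "op" codes into readable descriptions.
public class OpCodes {
    private static final Map<String, String> OPS = new HashMap<>();
    static {
        OPS.put("r", "read (snapshot phase)");
        OPS.put("c", "create (INSERT)");
        OPS.put("u", "update (UPDATE)");
        OPS.put("d", "delete (DELETE)");
    }

    public static String describe(String op) {
        return OPS.getOrDefault(op, "unknown");
    }
}
```

In a real pipeline you would branch on this field when writing downstream, e.g. turning `d` records into deletes in the sink.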
That concludes this Flink CDC getting-started example; pretty simple, isn't it? For more features, see the official Flink CDC documentation and explore on your own.
That's all for this article. We covered what CDC is, what Flink CDC is, and walked through a getting-started example. I'm sure the unstoppable you has already got every key point. This column will continue to introduce handy big-data tools, so stay tuned (*^▽^*).
The above content was gathered from around the web; if there are any mistakes, please bear with me.