Flink CDC: Full and Incremental Capture of SQL Server Data

This article walks through how Flink-CDC performs full (snapshot) and incremental capture from a SQL Server source. If you are preparing to adapt a SQL Server data source, this article should be a useful reference.

1. Installing SQL Server and enabling transaction-log capture

If you do not have a SQL Server environment but still want to learn this material, the simplest option is to spin up your own SQL Server in Docker. If you already have an environment, just check whether the SQL Server Agent (sqlagent.enabled) service and the CDC feature are enabled.

1.1 Pull the Docker image

The Flink-CDC docs on GitHub list the supported SQL Server versions as 2012, 2014, 2016, 2017 and 2019, but I wanted the newest image anyway (as it turns out, 2022-latest and latest are the same image, since their image IDs are identical, and later testing showed no problems), so I pulled it directly with:

docker pull mcr.microsoft.com/mssql/server:latest
1.2 Run SQL Server and enable the Agent

This is the standard startup mode, nothing special; the main thing is to set the password (the password policy is fairly strict, so you may want to grab a random-password generator online).

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=${your_password}" \
   -p 1433:1433 --name sqlserver \
   -d mcr.microsoft.com/mssql/server:latest

Enable the Agent with sqlagent.enabled. After the setting is applied, SQL Server must be restarted; since ours runs in Docker, docker restart sqlserver does the job.

[root@hdp-01 ~]# docker exec -it --user root sqlserver bash
root@0274812d0c10:/# /opt/mssql/bin/mssql-conf set sqlagent.enabled true
SQL Server needs to be restarted in order to apply this setting. Please run
'systemctl restart mssql-server.service'.
root@0274812d0c10:/# exit
exit
[root@hdp-01 ~]# docker restart sqlserver
sqlserver
1.3 Enable CDC

Run the commands below in order. If you see is_cdc_enabled = 1, CDC has been enabled for the current database.

root@0274812d0c10:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "${your_password}"
1> create database test;
2> go
1> use test;
2> go
Changed database context to 'test'.
1> EXEC sys.sp_cdc_enable_db;
2> go
1> SELECT is_cdc_enabled FROM sys.databases WHERE name = 'test';
2> go
is_cdc_enabled
--------------
             1

(1 rows affected)
1> CREATE TABLE t_info (id int,order_date date,purchaser int,quantity int,product_id int,PRIMARY KEY ([id]))
2> go
1> 
2> 
3> EXEC sys.sp_cdc_enable_table
4> @source_schema = 'dbo',
5> @source_name   = 't_info',
6> @role_name     = 'cdc_role';
7> go
Update mask evaluation will be disabled in net_changes_function because the CLR configuration option is disabled.
Job 'cdc.test_capture' started successfully.
Job 'cdc.test_cleanup' started successfully.
1> select * from t_info;
2> go
id          order_date       purchaser   quantity    product_id 
----------- ---------------- ----------- ----------- -----------

(0 rows affected)
1.4 Verify that CDC is enabled

Connect to SQL Server with a client and check INFORMATION_SCHEMA.TABLES in the test database for tables with TABLE_SCHEMA = cdc. If they show up, SQL Server is installed and CDC is enabled successfully.

1> use test;
2> go
Changed database context to 'test'.
1> select * from INFORMATION_SCHEMA.TABLES;
2> go
TABLE_CATALOG   TABLE_SCHEMA   TABLE_NAME         TABLE_TYPE
test            dbo            t_info             BASE TABLE
test            dbo            systranschemas     BASE TABLE
test            cdc            change_tables      BASE TABLE
test            cdc            ddl_history        BASE TABLE
test            cdc            lsn_time_mapping   BASE TABLE
test            cdc            captured_columns   BASE TABLE
test            cdc            index_columns      BASE TABLE
test            cdc            dbo_t_info_CT      BASE TABLE

2. Implementation

2.1 The Flink-CDC SQL Server capture main program

Add the dependency:

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-connector-sqlserver-cdc</artifactId>
            <version>3.0.0</version>
        </dependency>
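
The custom classes in the next sections also rely on fastjson2 (JSON serialization) and Lombok (@Slf4j logging), so you will likely need dependencies along these lines as well (version numbers are illustrative):

        <dependency>
            <groupId>com.alibaba.fastjson2</groupId>
            <artifactId>fastjson2</artifactId>
            <version>2.0.40</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.30</version>
            <scope>provided</scope>
        </dependency>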

Write the main method:

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // global parallelism
        env.setParallelism(1);
        // disable periodic watermark generation (no event-time semantics needed here)
        env.getConfig().setAutoWatermarkInterval(0);
        // trigger a checkpoint every 60s
        env.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE);
        // minimum pause between checkpoints
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
        // checkpoint timeout
        env.getCheckpointConfig().setCheckpointTimeout(60000);
        // allow only one checkpoint at a time
        // env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
        // retain checkpoint data after the job is cancelled
        //   env.getCheckpointConfig().setExternalizedCheckpointCleanup(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        SourceFunction<String> sqlServerSource = SqlServerSource.<String>builder()
                .hostname("localhost")
                .port(1433)
                .username("SA")
                .password("")   // your SA password
                .database("test")
                .tableList("dbo.t_info")
                // initial(): take a full snapshot first, then switch to the transaction log
                .startupOptions(StartupOptions.initial())
                .debeziumProperties(getDebeziumProperties())
                .deserializer(new CustomerDeserializationSchemaSqlserver())
                .build();

        DataStreamSource<String> dataStreamSource = env.addSource(sqlServerSource, "_transaction_log_source");
        dataStreamSource.print().setParallelism(1);
        env.execute("sqlserver-cdc-test");

    }
    
    
    public static Properties getDebeziumProperties() {
        Properties properties = new Properties();
        properties.put("converters", "sqlserverDebeziumConverter");
        // usually this should be the fully qualified class name of the converter
        // from section 2.3 (see the wiring note at the end of that section)
        properties.put("sqlserverDebeziumConverter.type", "SqlserverDebeziumConverter");
        properties.put("sqlserverDebeziumConverter.database.type", "sqlserver");
        // optional custom output patterns
        properties.put("sqlserverDebeziumConverter.format.datetime", "yyyy-MM-dd HH:mm:ss");
        properties.put("sqlserverDebeziumConverter.format.date", "yyyy-MM-dd");
        properties.put("sqlserverDebeziumConverter.format.time", "HH:mm:ss");
        return properties;
    }
2.2 Custom SQL Server deserialization schema

Flink-CDC is built on Debezium. The change records it captures from SQL Server (snapshot reads, inserts, updates and deletes) look like this:

# snapshot read (op=r)
Struct{after=Struct{id=1,order_date=2024-01-30,purchaser=1,quantity=100,product_id=1},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706574924473,snapshot=true,db=zeus,schema=dbo,table=orders,commit_lsn=0000002b:00002280:0003},op=r,ts_ms=1706603724432}

# insert (op=c)
Struct{after=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603786187,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002480:0002,commit_lsn=0000002b:00002480:0003,event_serial_no=1},op=c,ts_ms=1706603788461}


# update (op=u)
Struct{before=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},after=Struct{id=12,order_date=2024-01-11,purchaser=8,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603845603,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002500:0002,commit_lsn=0000002b:00002500:0003,event_serial_no=2},op=u,ts_ms=1706603850134}


# delete (op=d)
Struct{before=Struct{id=11,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603973023,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:000025e8:0002,commit_lsn=0000002b:000025e8:0005,event_serial_no=1},op=d,ts_ms=1706603973859}

Based on this, you can define your own deserialization schema and emit every record in one standardized format. In the Envelope, op=r is a snapshot READ, c a CREATE, u an UPDATE and d a DELETE. Below is the format I use, for reference:

import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.JSONObject;
import com.alibaba.fastjson2.JSONWriter;
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.HashMap;
import java.util.Map;

public class CustomerDeserializationSchemaSqlserver implements DebeziumDeserializationSchema<String> {

    private static final long serialVersionUID = -1L;


    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) {
        Map<String, Object> resultMap = new HashMap<>();
        // topic has the form <source name>.<schema>.<table>, e.g. sqlserver_transaction_log_source.dbo.t_info
        String topic = sourceRecord.topic();
        String[] split = topic.split("[.]");
        String database = split[1];   // note: this is actually the schema, e.g. dbo
        String table = split[2];
        resultMap.put("db", database);
        resultMap.put("tableName", table);
        // operation type (READ/CREATE/UPDATE/DELETE)
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        // the record payload itself
        Struct struct = (Struct) sourceRecord.value();
        Struct after = struct.getStruct("after");
        Struct before = struct.getStruct("before");
        String op = operation.name();
        resultMap.put("op", op);

        // insert, update, or snapshot read
        if (op.equals(Envelope.Operation.CREATE.name()) || op.equals(Envelope.Operation.READ.name()) || op.equals(Envelope.Operation.UPDATE.name())) {
            JSONObject afterJson = new JSONObject();
            if (after != null) {
                Schema schema = after.schema();
                for (Field field : schema.fields()) {
                    afterJson.put(field.name(), after.get(field.name()));
                }
                resultMap.put("after", afterJson);
            }
        }

        if (op.equals(Envelope.Operation.DELETE.name())) {
            JSONObject beforeJson = new JSONObject();
            if (before != null) {
                Schema schema = before.schema();
                for (Field field : schema.fields()) {
                    beforeJson.put(field.name(), before.get(field.name()));
                }
                resultMap.put("before", beforeJson);
            }
        }

        collector.collect(JSON.toJSONString(resultMap, JSONWriter.Feature.FieldBased, JSONWriter.Feature.LargeObject));

    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }

}
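
With this schema, the update event from the samples above would be emitted roughly as the following JSON (an illustrative sketch; order_date is shown as a string on the assumption that the date converter from section 2.3 is registered):

{"db":"dbo","tableName":"orders","op":"UPDATE","after":{"id":12,"order_date":"2024-01-11","purchaser":8,"quantity":233,"product_id":63}}

Note that "db" here actually carries the schema name (dbo), since the Debezium topic has the form <source name>.<schema>.<table>; for DELETE events the map contains "before" instead of "after".
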
2.3 Custom date-format converter

Debezium turns DATE values into 5-digit numbers (days since the epoch) and datetime values into 13-digit numbers (epoch milliseconds), so we need to convert them back into standard date or time strings according to the SQL Server column type (see the decoding sketch after the table). SQL Server's date/time types are mainly the following:

Column type      Snapshot type (JDBC type)             CDC type (JDBC type)
DATE             java.sql.Date (91)                    java.sql.Date (91)
TIME             java.sql.Timestamp (92)               java.sql.Time (92)
DATETIME         java.sql.Timestamp (93)               java.sql.Timestamp (93)
DATETIME2        java.sql.Timestamp (93)               java.sql.Timestamp (93)
DATETIMEOFFSET   microsoft.sql.DateTimeOffset (-155)   microsoft.sql.DateTimeOffset (-155)
SMALLDATETIME    java.sql.Timestamp (93)               java.sql.Timestamp (93)
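
Before the converter itself, here is a minimal decoding sketch (plain JDK, no Flink or Debezium needed) showing what those raw encodings mean; the values are taken from the samples above, where a DATE such as 2024-01-30 arrives as epoch day 19752 and a 13-digit value such as 1706574924473 is epoch milliseconds:

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class EpochDecodeDemo {
    public static void main(String[] args) {
        // 5-digit DATE value: days since 1970-01-01
        System.out.println(LocalDate.ofEpochDay(19752));      // 2024-01-30
        // 13-digit value: milliseconds since the epoch (UTC)
        System.out.println(Instant.ofEpochMilli(1706574924473L)
                .atOffset(ZoneOffset.UTC).toLocalDateTime()); // 2024-01-30T00:35:24.473
    }
}

The full converter:
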
import io.debezium.spi.converter.CustomConverter;
import io.debezium.spi.converter.RelationalColumn;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.connect.data.SchemaBuilder;

import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Properties;

@Slf4j
public class SqlserverDebeziumConverter implements CustomConverter<SchemaBuilder, RelationalColumn> {



    private static final String DATE_FORMAT = "yyyy-MM-dd";
    private static final String TIME_FORMAT = "HH:mm:ss";
    private static final String DATETIME_FORMAT = "yyyy-MM-dd HH:mm:ss";
    private DateTimeFormatter dateFormatter;
    private DateTimeFormatter timeFormatter;
    private DateTimeFormatter datetimeFormatter;
    private SchemaBuilder schemaBuilder;
    private String databaseType;
    private String schemaNamePrefix;


    @Override
    public void configure(Properties properties) {
        // required parameter database.type; only 'sqlserver' is supported
        this.databaseType = properties.getProperty("database.type");
        // throw if it is missing or set to anything other than sqlserver
        if (this.databaseType == null || !this.databaseType.equals("sqlserver")) {
            throw new IllegalArgumentException("database.type must be set to 'sqlserver'");
        }
        // optional parameters format.date / format.time / format.datetime: output patterns
        String dateFormat = properties.getProperty("format.date", DATE_FORMAT);
        String timeFormat = properties.getProperty("format.time", TIME_FORMAT);
        String datetimeFormat = properties.getProperty("format.datetime", DATETIME_FORMAT);
        // default schema name prefix: this class's fully qualified name plus the database type
        String className = this.getClass().getName();
        // allow overriding via schema.name.prefix
        this.schemaNamePrefix = properties.getProperty("schema.name.prefix", className + "." + this.databaseType);
        // initialize the formatters
        dateFormatter = DateTimeFormatter.ofPattern(dateFormat);
        timeFormatter = DateTimeFormatter.ofPattern(timeFormat);
        datetimeFormatter = DateTimeFormatter.ofPattern(datetimeFormat);

    }

    // register converters for SQL Server date/time column types
    public void registerSqlserverConverter(String columnType, ConverterRegistration<SchemaBuilder> converterRegistration) {
        String schemaName = this.schemaNamePrefix + "." + columnType.toLowerCase();
        schemaBuilder = SchemaBuilder.string().name(schemaName);
        switch (columnType) {
            case "DATE":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Date) {
                        return dateFormatter.format(((java.sql.Date) value).toLocalDate());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "TIME":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Time) {
                        return timeFormatter.format(((java.sql.Time) value).toLocalTime());
                    } else if (value instanceof java.sql.Timestamp) {
                        return timeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime().toLocalTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "DATETIME":
            case "DATETIME2":
            case "SMALLDATETIME":
            case "DATETIMEOFFSET":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Timestamp) {
                        return datetimeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime());
                    } else if (value instanceof microsoft.sql.DateTimeOffset) {
                        microsoft.sql.DateTimeOffset dateTimeOffset = (microsoft.sql.DateTimeOffset) value;
                        return datetimeFormatter.format(
                                dateTimeOffset.getOffsetDateTime().withOffsetSameInstant(ZoneOffset.UTC).toLocalDateTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            default:
                schemaBuilder = null;
                break;
        }
    }


    @Override
    public void converterFor(RelationalColumn relationalColumn, ConverterRegistration<SchemaBuilder> converterRegistration) {
        // column type name, e.g. DATE, DATETIME2
        String columnType = relationalColumn.typeName().toUpperCase();
        // dispatch by database type
        if (this.databaseType.equals("sqlserver")) {
            this.registerSqlserverConverter(columnType, converterRegistration);
        } else {
            log.warn("Unsupported database type: {}", this.databaseType);
            schemaBuilder = null;
        }
    }

    private String getClassName(Object value) {
        if (value == null) {
            return null;
        }
        return value.getClass().getName();
    }

    // when a value has an unexpected Java type: log it and fall back to toString()
    private String failConvert(Object value, String type) {
        String valueClass = this.getClassName(value);
        String valueString = valueClass == null ? null : value.toString();
        log.warn("Cannot convert value of class {} for schema {}, falling back to: {}", valueClass, type, valueString);
        return valueString;
    }
}
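
One wiring detail worth checking: Debezium resolves a converter via the '<name>.type' property, which normally expects the fully qualified class name. Assuming the converter above lives in a hypothetical package com.example.cdc, the properties from section 2.1 would be wired like this:

Properties properties = new Properties();
properties.put("converters", "sqlserverDebeziumConverter");
// fully qualified name of the converter class (com.example.cdc is a placeholder package)
properties.put("sqlserverDebeziumConverter.type", "com.example.cdc.SqlserverDebeziumConverter");
properties.put("sqlserverDebeziumConverter.database.type", "sqlserver");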

3. Summary

Flink-CDC has wrapped this kind of incremental capture from traditional databases very well, and the official docs provide detailed tutorials. Still, if you want to learn a skill in depth, I think it is worth walking through the whole process yourself: it speeds up your learning, and when problems surface you can attack them from more angles. I hope this article helps.
