Practicing the Iceberg Data Lake, Lesson 21: Flink 1.13.5 + Iceberg 0.13.1 CDC (INSERT succeeds; change operations fail)

Articles in This Series

Practicing the Iceberg Data Lake, Lesson 1: Getting started
Practicing the Iceberg Data Lake, Lesson 2: Iceberg's underlying data format on Hadoop
Practicing the Iceberg Data Lake, Lesson 3: Reading from Kafka into Iceberg with SQL in the SQL client
Practicing the Iceberg Data Lake, Lesson 4: Reading from Kafka into Iceberg with SQL in the SQL client (upgraded to Flink 1.12.7)
Practicing the Iceberg Data Lake, Lesson 5: Characteristics of the Hive catalog
Practicing the Iceberg Data Lake, Lesson 6: Fixing failed writes from Kafka to Iceberg
Practicing the Iceberg Data Lake, Lesson 7: Real-time writes into Iceberg
Practicing the Iceberg Data Lake, Lesson 8: Integrating Hive with Iceberg
Practicing the Iceberg Data Lake, Lesson 9: Compacting small files
Practicing the Iceberg Data Lake, Lesson 10: Snapshot expiration
Practicing the Iceberg Data Lake, Lesson 11: End-to-end test of a partitioned table (generating data, creating the table, compacting, expiring snapshots)
Practicing the Iceberg Data Lake, Lesson 12: What a catalog is
Practicing the Iceberg Data Lake, Lesson 13: Why metadata grows many times larger than the data files
Practicing the Iceberg Data Lake, Lesson 14: Metadata compaction (fixing metadata bloat over time)
Practicing the Iceberg Data Lake, Lesson 15: Installing Spark and integrating Iceberg (jersey jar conflict)
Practicing the Iceberg Data Lake, Lesson 16: Opening the door to Iceberg through Spark 3
Practicing the Iceberg Data Lake, Lesson 17: Configuration for running Iceberg with Hadoop 2.7 and Spark 3 on YARN
Practicing the Iceberg Data Lake, Lesson 18: Startup commands for the various clients that interact with Iceberg (common commands)
Practicing the Iceberg Data Lake, Lesson 19: Flink count on Iceberg returning no result
Practicing the Iceberg Data Lake, Lesson 20: Flink + Iceberg CDC (version issues; test failed)
Practicing the Iceberg Data Lake, Lesson 21: Flink 1.13.5 + Iceberg 0.13.1 CDC (INSERT succeeds; change operations fail)
Practicing the Iceberg Data Lake, Lesson 22: Flink 1.13.5 + Iceberg 0.13.1 CDC (CRUD tests succeed)


Table of Contents

  • Articles in this series
  • Overview
  • 1. Environment setup
    • 1.1 Prepare the installation package and jars
    • 1.2 Launch the Flink SQL client
    • 2.1 Prepare the MySQL source table
    • 2.2 Prepare the Iceberg sink table
  • 3. Write from MySQL into Iceberg via Flink
    • 3.1 Writing data into Iceberg
  • 4. Observing the effect of inserts, deletes, and updates
    • 4.1 Initialization: historical data is written to the sink
    • 4.2 Querying with spark-sql
    • 4.3 DELETE test (DELETE statements are not supported)
    • 4.4 UPDATE test (not supported)
  • 5. Other exceptions
    • 5.1 Handling the "The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled'" error
    • 5.2 A failed SQL statement leaves the job unable to run
  • Summary


Overview

Versions: Flink 1.13.5, flink-sql-connector-mysql-cdc-2.1.1.jar, Iceberg 0.13.1
This lesson: testing CRUD through CDC.

1. Environment setup

1.1 Prepare the installation package and jars

flink-1.13.5-bin-scala_2.12.tgz
Extract it.
I install through a symlink: on each version upgrade, repoint the symlink and copy the files from the old version's /conf into the new path.
The benefit is that environment variables never need to change.
As the directory listing below shows, I have already been through four Flink versions; the lessons were hard-won...

[root@hadoop101 module]# ll
total 94528
drwxr-xr-x 11 hadoop hadoop     4096 Jan 11 17:52 apache-hive-2.3.6-bin
drwxr-xr-x  2 root   root       4096 Feb 14 18:25 bin
lrwxrwxrwx  1 root   root         25 Feb 17 21:30 flink -> /opt/module/flink-1.13.5/
drwxr-xr-x 10 hadoop hadoop     4096 Jan 12 15:03 flink-1.11.6
drwxr-xr-x 10   1002   1003     4096 Dec 15 08:30 flink-1.12.7
drwxr-xr-x 10   1006   1007     4096 Dec 15 08:35 flink-1.13.5
drwxr-xr-x 10    501 games      4096 Jan 11 07:45 flink-1.14.3
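The symlink swap described above can be sketched as follows (run here in a scratch directory so it is self-contained; in the real setup the base directory is /opt/module and the config file contents are whatever your cluster uses):

```shell
# Sketch of the symlink-based upgrade, demonstrated in a temp directory.
base=$(mktemp -d)
mkdir -p "$base/flink-1.13.5/conf" "$base/flink-1.14.3/conf"
echo "taskmanager.numberOfTaskSlots: 2" > "$base/flink-1.13.5/conf/flink-conf.yaml"

# Point the generic 'flink' link at the current version
ln -sfn "$base/flink-1.13.5" "$base/flink"

# Upgrade: repoint the link to the new version, then carry the conf over.
# -s symbolic, -f replace an existing link, -n do not follow the old link
ln -sfn "$base/flink-1.14.3" "$base/flink"
cp "$base/flink-1.13.5/conf/"* "$base/flink/conf/"

readlink "$base/flink"    # now resolves to .../flink-1.14.3
```

Because PATH and FLINK_HOME point at the generic `flink` link, no environment variable changes are needed after the swap.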

Prepare the jars that integrate Flink with Kafka, Hive, and Iceberg:

[root@hadoop101 module]# ls /opt/software/flink1.13-iceberg0131/
flink-sql-connector-hive-2.3.6_2.12-1.13.5.jar  flink-sql-connector-kafka_2.12-1.13.5.jar  iceberg-flink-runtime-1.13-0.13.1.jar  iceberg-mr-0.13.1.jar
flink-sql-connector-mysql-cdc-2.1.1.jar

The jars can be found directly under https://repo.maven.apache.org/maven2/org/apache .
Build flink-sql-connector-mysql-cdc yourself: the artifact downloaded from the Maven repository is built for Scala 2.11, while I use Scala 2.12, and the two are incompatible (I hit this pitfall already).

[root@hadoop103 target]# pwd
/opt/software/flink-cdc-connectors-release-2.1.1/flink-sql-connector-mysql-cdc/target
[root@hadoop103 target]# ls
checkstyle-checker.xml  checkstyle-suppressions.xml  dependency-reduced-pom.xml               flink-sql-connector-mysql-cdc-2.1.1-tests.jar  maven-archiver                  maven-status                                      test-classes
checkstyle-result.xml   classes                      flink-sql-connector-mysql-cdc-2.1.1.jar  generated-sources                              maven-shared-archive-resources  original-flink-sql-connector-mysql-cdc-2.1.1.jar
[root@hadoop103 target]# 

1.2 Launch the Flink SQL client

[root@hadoop101 ~]# sql-client.sh embedded -j /opt/software/flink1.13-iceberg0131/iceberg-flink-runtime-1.13-0.13.1.jar -j /opt/software/flink1.13-iceberg0131/flink-sql-connector-hive-2.3.6_2.12-1.13.5.jar -j /opt/software/flink1.13-iceberg0131/flink-sql-connector-kafka_2.12-1.13.5.jar -j /opt/software/flink1.13-iceberg0131/flink-sql-connector-mysql-cdc-2.1.1.jar shell


2.1 Prepare the MySQL source table

CREATE TABLE stock_basic_source(
  `i`  INT NOT NULL,
  `ts_code`     CHAR(10) NOT NULL,
  `symbol`   CHAR(10) NOT NULL,
  `name` char(10) NOT NULL,
  `area`   CHAR(20) NOT NULL,
  `industry`   CHAR(20) NOT NULL,
  `list_date`   CHAR(10) NOT NULL,
  `actural_controller`   CHAR(100),
    PRIMARY KEY(i) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'hadoop103',
  'port' = '3306',
  'username' = 'hive',
  'password' = '123456',
  'database-name' = 'xxzh_stock',
  'table-name' = 'stock_basic'
);
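For reference, the underlying MySQL table is never shown in this series; a plausible sketch, with column types mirrored from the Flink DDL above (hypothetical, adjust to your actual schema), would be:

```sql
-- Hypothetical MySQL DDL for the source table xxzh_stock.stock_basic
CREATE TABLE stock_basic (
  `i`                  INT NOT NULL,
  `ts_code`            CHAR(10) NOT NULL,
  `symbol`             CHAR(10) NOT NULL,
  `name`               CHAR(10) NOT NULL,
  `area`               CHAR(20) NOT NULL,
  `industry`           CHAR(20) NOT NULL,
  `list_date`          CHAR(10) NOT NULL,
  `actural_controller` CHAR(100),
  PRIMARY KEY (`i`)
);
```

Note that the mysql-cdc connector reads the binlog, so the MySQL server must have binary logging enabled with `binlog_format=ROW`.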

First execute these 8 inserts in MySQL:
INSERT INTO `stock_basic` VALUES ('0', '000001.SZ', '000001', '平安银行', '深圳', '银行', '19910403', null);
INSERT INTO `stock_basic` VALUES ('1', '000002.SZ', '000002', '万科A', '深圳', '全国地产', '19910129', null);
INSERT INTO `stock_basic` VALUES ('2', '000004.SZ', '000004', '国华网安', '深圳', '软件服务', '19910114', '李映彤');
INSERT INTO `stock_basic` VALUES ('3', '000005.SZ', '000005', 'ST星源', '深圳', '环境保护', '19901210', '郑列列,丁芃');
INSERT INTO `stock_basic` VALUES ('4', '000006.SZ', '000006', '深振业A', '深圳', '区域地产', '19920427', '深圳市人民政府国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('5', '000007.SZ', '000007', '*ST全新', '深圳', '酒店餐饮', '19920413', null);
INSERT INTO `stock_basic` VALUES ('6', '000008.SZ', '000008', '神州高铁', '北京', '运输设备', '19920507', '国家开发投资集团有限公司');
INSERT INTO `stock_basic` VALUES ('7', '000009.SZ', '000009', '中国宝安', '深圳', '电气设备', '19910625', null);


INSERT INTO `stock_basic` VALUES ('8', '000010.SZ', '000010', '美丽生态', '深圳', '建筑工程', '19951027', '沈玉兴');
INSERT INTO `stock_basic` VALUES ('9', '000011.SZ', '000011', '深物业A', '深圳', '区域地产', '19920330', '深圳市人民政府国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('10', '000012.SZ', '000012', '南玻A', '深圳', '玻璃', '19920228', null);
INSERT INTO `stock_basic` VALUES ('11', '000014.SZ', '000014', '沙河股份', '深圳', '全国地产', '19920602', '深圳市人民政府国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('12', '000016.SZ', '000016', '深康佳A', '深圳', '家用电器', '19920327', '国务院国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('13', '000017.SZ', '000017', '深中华A', '深圳', '文教休闲', '19920331', null);
INSERT INTO `stock_basic` VALUES ('14', '000019.SZ', '000019', '深粮控股', '深圳', '其他商业', '19921012', '深圳市人民政府国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('15', '000020.SZ', '000020', '深华发A', '深圳', '元器件', '19920428', '李中秋');
INSERT INTO `stock_basic` VALUES ('16', '000021.SZ', '000021', '深科技', '深圳', 'IT设备', '19940202', '中国电子信息产业集团有限公司');
INSERT INTO `stock_basic` VALUES ('17', '000023.SZ', '000023', '深天地A', '深圳', '水泥', '19930429', '林宏润');
INSERT INTO `stock_basic` VALUES ('18', '000025.SZ', '000025', '特力A', '深圳', '汽车服务', '19930621', '深圳市人民政府国有资产监督管理委员会');
INSERT INTO `stock_basic` VALUES ('19', '000026.SZ', '000026', '飞亚达', '深圳', '其他商业', '19930603', '中国航空技术国际控股有限公司');
INSERT INTO `stock_basic` VALUES ('20', '000027.SZ', '000027', '深圳能源', '深圳', '火力发电', '19930903', '深圳市人民政府国有资产监督管理委员会');

The PRIMARY KEY(i) NOT ENFORCED clause is required; without a primary key you get:

Flink SQL> select * from stock_basic_source;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'

Watch the data change:
select * from stock_basic_source; returns all existing rows of the table, plus every subsequent change.

                                                                                                                           SQL Query Result (Table)                                                                                                                              
 Refresh: 1 s                                                                                                                     Page: Last of 1                                                                                                             Updated: 16:55:44.317 

                              i                        ts_code                         symbol                           name                           area                       industry                      list_date             actural_controller
                              7                      000009.SZ                         000009                           中国宝安                             深圳                           电气设备                       19910625                         (NULL)
                              6                      000008.SZ                         000008                           神州高铁                             北京                           运输设备                       19920507                   国家开发投资集团有限公司
                              1                      000002.SZ                         000002                            万科A                             深圳                           全国地产                       19910129                         (NULL)
                              0                      000001.SZ                         000001                           平安银行                             深圳                             银行                       19910403                         (NULL)
                              3                      000005.SZ                         000005                           ST星源                             深圳                           环境保护                       19901210                         郑列列,丁芃
                              2                      000004.SZ                         000004                           国华网安                             深圳                           软件服务                       19910114                            李映彤
                              5                      000007.SZ                         000007                          *ST全新                             深圳                           酒店餐饮                       19920413                         (NULL)
                              4                      000006.SZ                         000006                           深振业A                             深圳                           区域地产                       19920427             深圳市人民政府国有资产监督管理委员会

Insert two more rows into MySQL, and they appear in the result:

INSERT INTO `stock_basic` VALUES ('8', '000010.SZ', '000010', '美丽生态', '深圳', '建筑工程', '19951027', '沈玉兴');
INSERT INTO `stock_basic` VALUES ('9', '000011.SZ', '000011', '深物业A', '深圳', '区域地产', '19920330', '深圳市人民政府国有资产监督管理委员会');
                                                                                                       SQL Query Result (Table)                                                                                                                              
 Refresh: 1 s                                                                                                                     Page: Last of 1                                                                                                             Updated: 16:59:01.512 

                              i                        ts_code                         symbol                           name                           area                       industry                      list_date             actural_controller
                              7                      000009.SZ                         000009                           中国宝安                             深圳                           电气设备                       19910625                         (NULL)
                              8                      000010.SZ                         000010                           美丽生态                             深圳                           建筑工程                       19951027                            沈玉兴
                              3                      000005.SZ                         000005                           ST星源                             深圳                           环境保护                       19901210                         郑列列,丁芃
                              4                      000006.SZ                         000006                           深振业A                             深圳                           区域地产                       19920427             深圳市人民政府国有资产监督管理委员会
                              5                      000007.SZ                         000007                          *ST全新                             深圳                           酒店餐饮                       19920413                         (NULL)
                              6                      000008.SZ                         000008                           神州高铁                             北京                           运输设备                       19920507                   国家开发投资集团有限公司
                              1                      000002.SZ                         000002                            万科A                             深圳                           全国地产                       19910129                         (NULL)
                              2                      000004.SZ                         000004                           国华网安                             深圳                           软件服务                       19910114                            李映彤
                              9                      000011.SZ                         000011                           深物业A                             深圳                           区域地产                       19920330             深圳市人民政府国有资产监督管理委员会
                              0                      000001.SZ                         000001                           平安银行                             深圳                             银行                       19910403                            不知道

2.2 Prepare the Iceberg sink table

Set the number of metadata versions to retain:


CREATE CATALOG hive_catalog6 WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://hadoop101:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs:///user/hive/warehouse/hive_catalog6'
);
use catalog hive_catalog6;

CREATE DATABASE xxzh_stock_mysql_db;
USE xxzh_stock_mysql_db;

CREATE TABLE stock_basic_iceberg_sink(
  `i`  INT NOT NULL,
  `ts_code`    CHAR(10) NOT NULL,
  `symbol`   CHAR(10) NOT NULL,
  `name` char(10) NOT NULL,
  `area`   CHAR(20) NOT NULL,
  `industry`   CHAR(20) NOT NULL,
  `list_date`   CHAR(10) NOT NULL,
  `actural_controller`   CHAR(100) ,
   PRIMARY KEY(i) NOT ENFORCED
) WITH (
 'write.metadata.delete-after-commit.enabled'='true',
 'write.metadata.previous-versions-max'='5'
);


The general pattern for attaching properties to a table:

create table tablename(
   field1 field_type
) with (
   'key' = 'value'
)
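Properties can also be set after the fact with Flink's standard ALTER TABLE syntax, using the same keys as above:

```sql
-- Set or change table properties on an existing table
ALTER TABLE hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink SET (
  'write.metadata.delete-after-commit.enabled' = 'true',
  'write.metadata.previous-versions-max' = '5'
);
```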

3. Write from MySQL into Iceberg via Flink

3.1 Writing data into Iceberg

use catalog default_catalog;

Flink SQL>  insert into hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink select * from stock_basic_source;
[INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 3ed67670f3f008e19409e99d781d92d3


4. Observing the effect of inserts, deletes, and updates

4.1 Initialization: historical data is written to the sink

[root@hadoop101 module]# hadoop fs -ls  -R  hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/
drwxr-xr-x   - root supergroup          0 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/data
-rw-r--r--   2 root supergroup       2931 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/data/00000-0-2ffec193-5852-496d-89a5-d04b9a78fc7f-00001.parquet
drwxr-xr-x   - root supergroup          0 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata
-rw-r--r--   2 root supergroup       2587 2022-02-22 17:01 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata/00000-bea73d87-95d6-42e4-bc36-588ba5649884.metadata.json
-rw-r--r--   2 root supergroup       3685 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata/00001-3d0b5906-61c8-405e-958c-4bb5e52c5b1c.metadata.json
-rw-r--r--   2 root supergroup       6371 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata/3bc7dffc-6f4f-4e4b-bd54-a85d7ca1bc97-m0.avro
-rw-r--r--   2 root supergroup       3790 2022-02-22 17:03 hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata/snap-4577301483373728372-1-3bc7dffc-6f4f-4e4b-bd54-a85d7ca1bc97.avro
[root@hadoop101 module]# 

If the job is restarted, the history is written again as an append, duplicating the data.
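One way to avoid re-reading the full snapshot on restart is to checkpoint the job and resume from saved state. A sketch of the SQL-client settings (option names are from the Flink 1.13 documentation; the savepoint path is an example):

```sql
-- Checkpoint periodically so the CDC source records its binlog position
SET 'execution.checkpointing.interval' = '30s';

-- On restart, resume from a retained checkpoint/savepoint instead of
-- re-reading the whole table (path is hypothetical)
SET 'execution.savepoint.path' = 'hdfs://ns/flink/savepoints/savepoint-xxxx';
```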

4.2 Querying with spark-sql

Launch spark-sql:

[root@hadoop103 spark-3.2.0-bin-hadoop2.7]# bin/spark-sql --packages org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.1    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions     --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog     --conf spark.sql.catalog.spark_catalog.type=hive     --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog     --conf spark.sql.catalog.local.type=hadoop     --conf spark.sql.catalog.local.warehouse=/tmp/iceberg/warehouse
spark-sql (default)> select * from stock_basic_iceberg_sink;
i       ts_code symbol  name    area    industry        list_date       actural_controller
1       000002.SZ       000002  万科A   深圳    全国地产        19910129        NULL
0       000001.SZ       000001  平安银行        深圳    银行    19910403        不知道
5       000007.SZ       000007  *ST全新 深圳    酒店餐饮        19920413        NULL
4       000006.SZ       000006  深振业A 深圳    区域地产        19920427        深圳市人民政府国有资产监督管理委员会
3       000005.SZ       000005  ST星源  深圳    环境保护        19901210        郑列列,丁芃
2       000004.SZ       000004  国华网安        深圳    软件服务        19910114        李映彤
9       000011.SZ       000011  深物业A 深圳    区域地产        19920330        深圳市人民政府国有资产监督管理委员会
8       000010.SZ       000010  美丽生态        深圳    建筑工程        19951027        沈玉兴
7       000009.SZ       000009  中国宝安        深圳    电气设备        19910625        NULL
6       000008.SZ       000008  神州高铁        北京    运输设备        19920507        国家开发投资集团有限公司

4.3 DELETE test (DELETE statements are not supported)

delete from stock_basic where i=3;
delete from stock_basic where i=4;
INSERT INTO stock_basic VALUES ('13', '000017.SZ', '000017', '深中华A', '深圳', '文教休闲', '19920331', null);

with failure cause: java.lang.IllegalArgumentException: Cannot write delete files in a v1 table
	at org.apache.iceberg.ManifestFiles.writeDeleteManifest(ManifestFiles.java:154)
	at org.apache.iceberg.SnapshotProducer.newDeleteManifestWriter(SnapshotProducer.java:374)
	at org.apache.iceberg.MergingSnapshotProducer.lambda$newDeleteFilesAsManifests$10(MergingSnapshotProducer.java:681)
	at java.util.HashMap.forEach(HashMap.java:1289)
	at org.apache.iceberg.MergingSnapshotProducer.newDeleteFilesAsManifests(MergingSnapshotProducer.java:678)
	at org.apache.iceberg.MergingSnapshotProducer.prepareDeleteManifests(MergingSnapshotProducer.java:664)
	at org.apache.iceberg.MergingSnapshotProducer.apply(MergingSnapshotProducer.java:533)
	at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:164)
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:283)
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:404)
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:282)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitOperation(IcebergFilesCommitter.java:312)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitDeltaTxn(IcebergFilesCommitter.java:299)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitUpToCheckpoint(IcebergFilesCommitter.java:218)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.initializeState(IcebergFilesCommitter.java:153)
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:118)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:441)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:585)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:565)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:540)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
	at java.lang.Thread.run(Thread.java:748)

The job fails: the whole task fails and restarts.

4.4 UPDATE test (not supported)

Put an insert and an update into one transaction and observe:

INSERT INTO `stock_basic` VALUES ('14', '000019.SZ', '000019', '深粮控股', '深圳', '其他商业', '19921012', '深圳市人民政府国有资产监督管理委员会');
update stock_basic set actural_controller='深中华A实控人' where i='13';

Result: not supported. The error is as follows:

failure cause: java.lang.IllegalArgumentException: Cannot write delete files in a v1 table
	at org.apache.iceberg.ManifestFiles.writeDeleteManifest(ManifestFiles.java:154)
	at org.apache.iceberg.SnapshotProducer.newDeleteManifestWriter(SnapshotProducer.java:374)
	at org.apache.iceberg.MergingSnapshotProducer.lambda$newDeleteFilesAsManifests$10(MergingSnapshotProducer.java:681)
	at java.util.HashMap.forEach(HashMap.java:1289)
	at org.apache.iceberg.MergingSnapshotProducer.newDeleteFilesAsManifests(MergingSnapshotProducer.java:678)
	at org.apache.iceberg.MergingSnapshotProducer.prepareDeleteManifests(MergingSnapshotProducer.java:664)
	at org.apache.iceberg.MergingSnapshotProducer.apply(MergingSnapshotProducer.java:533)
	at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:164)
	at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:283)
	at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:404)
	at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
	at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
	at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:282)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitOperation(IcebergFilesCommitter.java:312)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitDeltaTxn(IcebergFilesCommitter.java:299)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitUpToCheckpoint(IcebergFilesCommitter.java:218)
	at org.apache.iceberg.flink.sink.IcebergFilesCommitter.notifyCheckpointComplete(IcebergFilesCommitter.java:188)
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.notifyCheckpointComplete(StreamOperatorWrapper.java:99)
	at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointComplete(SubtaskCheckpointCoordinatorImpl.java:334)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:1171)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointCompleteAsync$10(StreamTask.java:1136)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$12(StreamTask.java:1159)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
	at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:344)
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:330)
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:202)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:684)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:639)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
	at java.lang.Thread.run(Thread.java:748)

The job fails: the whole task fails and restarts.
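Both the DELETE and UPDATE failures share one root cause, visible in the stack traces: `Cannot write delete files in a v1 table`. The sink was created as an Iceberg format v1 table, and row-level deletes require format v2. A sketch of the fix, recreating the sink with the `format-version` table property (documented by Iceberg; Lesson 22 of this series reports CRUD succeeding along this path):

```sql
CREATE TABLE stock_basic_iceberg_sink (
  `i`                  INT NOT NULL,
  `ts_code`            CHAR(10) NOT NULL,
  `symbol`             CHAR(10) NOT NULL,
  `name`               CHAR(10) NOT NULL,
  `area`               CHAR(20) NOT NULL,
  `industry`           CHAR(20) NOT NULL,
  `list_date`          CHAR(10) NOT NULL,
  `actural_controller` CHAR(100),
  PRIMARY KEY (`i`) NOT ENFORCED
) WITH (
  'format-version' = '2',   -- v2 tables can store equality/position delete files
  'write.metadata.delete-after-commit.enabled' = 'true',
  'write.metadata.previous-versions-max' = '5'
);
```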

5. Other exceptions

5.1 Handling the "The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled'" error

Flink SQL> insert into hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink select * from stock_basic_source;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'

Fix: add a primary key to the table definition.

5.2 A failed SQL statement leaves the job unable to run

2022-02-22 16:29:02,798 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (2/2)#1 (b64d3cf5f3e1381cfd43edbfa191f353) switched from INITIALIZING to RUNNING.
2022-02-22 16:29:02,798 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (1/2)#1 (7c30148e43c1a292bdaec60ad6d3d3eb) switched from INITIALIZING to RUNNING.
2022-02-22 16:29:02,827 INFO  org.apache.iceberg.BaseMetastoreTableOperations              [] - Refreshing table metadata from new version: hdfs://ns/user/hive/warehouse/xxzh_stock_mysql_db.db/stock_basic_iceberg_sink/metadata/00000-84c85ba9-818c-4105-8827-9ec99fbedc07.metadata.json
2022-02-22 16:29:02,833 INFO  org.apache.iceberg.BaseMetastoreCatalog                      [] - Table loaded by catalog: hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink
2022-02-22 16:29:02,834 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66) switched from INITIALIZING to RUNNING.
2022-02-22 16:29:02,848 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Attempting to cancel task IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66).
2022-02-22 16:29:02,848 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66) switched from RUNNING to CANCELING.
2022-02-22 16:29:02,848 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Triggering cancellation of task code IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66).
2022-02-22 16:29:02,849 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66) switched from CANCELING to CANCELED.
2022-02-22 16:29:02,849 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Freeing task resources for IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 (cd8907160119541a73034cb08ee61f66).
2022-02-22 16:29:02,849 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Attempting to cancel task Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (2/2)#1 (b64d3cf5f3e1381cfd43edbfa191f353).
2022-02-22 16:29:02,849 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (2/2)#1 (b64d3cf5f3e1381cfd43edbfa191f353) switched from RUNNING to CANCELING.
2022-02-22 16:29:02,849 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Triggering cancellation of task code Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (2/2)#1 (b64d3cf5f3e1381cfd43edbfa191f353).
2022-02-22 16:29:02,851 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Attempting to cancel task Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (1/2)#1 (7c30148e43c1a292bdaec60ad6d3d3eb).
2022-02-22 16:29:02,851 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (1/2)#1 (7c30148e43c1a292bdaec60ad6d3d3eb) switched from RUNNING to CANCELING.
2022-02-22 16:29:02,851 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Triggering cancellation of task code Source: TableSourceScan(table=[[default_catalog, default_database, stock_basic_source]], fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> NotNullEnforcer(fields=[i, ts_code, symbol, name, area, industry, list_date, actural_controller]) -> IcebergStreamWriter (1/2)#1 (7c30148e43c1a292bdaec60ad6d3d3eb).
2022-02-22 16:29:02,852 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Un-registering task and sending final execution state CANCELED to JobManager for task IcebergFilesCommitter -> Sink: IcebergSink hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (1/1)#1 cd8907160119541a73034cb08ee61f66.
2022-02-22 16:29:02,852 WARN  org.apache.hadoop.ipc.Client                                 [] - interrupted waiting to send rpc request to server
java.lang.InterruptedException: null
	at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404) ~[?:1.8.0_212]
	at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:1.8.0_212]
	at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1059) ~[hadoop-common-2.7.2.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1454) ~[hadoop-common-2.7.2.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1412) ~[hadoop-common-2.7.2.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.2.jar:?]
	at com.sun.proxy.$Proxy34.delete(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540) ~[hadoop-hdfs-2.7.2.jar:?]
	at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.2.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.2.jar:?]
	at com.sun.proxy.$Proxy35.delete(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044) ~[hadoop-hdfs-2.7.2.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707) ~[hadoop-hdfs-2.7.2.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703) ~[hadoop-hdfs-2.7.2.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.2.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:703) ~[hadoop-hdfs-2.7.2.jar:?]
	at org.apache.iceberg.hadoop.HadoopFileIO.deleteFile(HadoopFileIO.java:72) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.io.FileIO.deleteFile(FileIO.java:61) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.io.BaseTaskWriter$BaseRollingWriter.closeCurrent(BaseTaskWriter.java:286) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.io.BaseTaskWriter$BaseRollingWriter.close(BaseTaskWriter.java:302) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.io.BaseTaskWriter$BaseEqualityDeltaWriter.close(BaseTaskWriter.java:176) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.flink.sink.UnpartitionedDeltaWriter.close(UnpartitionedDeltaWriter.java:58) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.iceberg.flink.sink.IcebergStreamWriter.dispose(IcebergStreamWriter.java:79) ~[blob_p-e30da2853472e9b543ae7b5d1cb94549195fc3d1-6cffa88d1185c4e23b94000d7618dc2c:?]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:864) ~[flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runAndSuppressThrowable(StreamTask.java:843) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUpInvoke(StreamTask.java:756) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:662) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) [flink-dist_2.12-1.13.5.jar:1.13.5]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
2022-02-22 16:29:02,854 WARN  org.apache.hadoop.ipc.Client                                 [] - interrupted waiting to send rpc request to server
java.lang.InterruptedException: null

The INSERT statement failed when running on the cluster.
Workaround: run a SELECT statement first to debug — the error messages in the SQL CLI are much more complete than the cluster logs!

For example:

Flink SQL>  insert into hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink select * from stock_basic_source;
[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Column types of query result and sink for registered table 'hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink' do not match.
Cause: Incompatible types for sink column 'ts_code' at position 1.

Query schema: [i: INT NOT NULL, ts_code: CHAR(10) NOT NULL, symbol: CHAR(10) NOT NULL, name: CHAR(10) NOT NULL, area: CHAR(20) NOT NULL, industry: CHAR(20) NOT NULL, list_date: CHAR(10) NOT NULL, actural_controller: CHAR(100)]
Sink schema:  [i: INT, ts_code: INT, symbol: STRING, name: STRING, area: STRING, industry: STRING, list_date: STRING, actural_controller: STRING]
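The error above pins down the mismatch: the sink declared `ts_code` as INT, while the query produces CHAR(10). One fix is to recreate the sink so every column type is compatible with the query schema. A minimal sketch (column names taken from the error message; dropping and recreating the table in this catalog is an assumption about how you want to fix it):

```sql
-- Recreate the sink with types compatible with the query schema;
-- ts_code changes from INT to STRING (CHAR(10) is assignable to STRING).
DROP TABLE IF EXISTS hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink;

CREATE TABLE hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink (
    i                  INT,
    ts_code            STRING,
    symbol             STRING,
    name               STRING,
    area               STRING,
    industry           STRING,
    list_date          STRING,
    actural_controller STRING
);
```

After recreating the sink, the same `insert into ... select * from stock_basic_source;` statement should pass schema validation.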

Summary

In this setup (flink1.13.5, flink-sql-connector-mysql-cdc-2.1.1.jar), only INSERT statements currently work; UPDATE and DELETE changes are not applied to the Iceberg table.

Earlier tests of Iceberg with Spark supported full CRUD. Is there some configuration parameter that would make UPDATE and DELETE work here as well?
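One likely candidate (an assumption at this point, not something verified in this lesson — the next lesson in this series reports CRUD succeeding): Flink can only apply row-level UPDATE/DELETE to an Iceberg table that uses format version 2 with upsert mode enabled and a primary key declared. A sketch of such a sink DDL, using a hypothetical table name `stock_basic_iceberg_sink_v2`:

```sql
-- Sketch: a format-v2 Iceberg table with a primary key and upsert enabled,
-- which Flink needs in order to apply CDC -U/+U/-D change rows.
CREATE TABLE hive_catalog6.xxzh_stock_mysql_db.stock_basic_iceberg_sink_v2 (
    i                  INT,
    ts_code            STRING,
    symbol             STRING,
    name               STRING,
    area               STRING,
    industry           STRING,
    list_date          STRING,
    actural_controller STRING,
    PRIMARY KEY (i) NOT ENFORCED
) WITH (
    'format-version'       = '2',   -- v2 spec supports row-level deletes
    'write.upsert.enabled' = 'true' -- write changelog rows as upserts
);
```

Without `'format-version' = '2'`, Iceberg has no delete-file mechanism, so a CDC stream's update/delete events have nowhere to land.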
