Flink 1.13: fixing the problem where data streamed from Kafka into Hive is not visible to Hive client queries

For versions before Flink 1.13, see the approach described in the earlier article: https://blog.csdn.net/m0_37592814/article/details/108044830

In Flink 1.13 this can be fixed quickly. The root cause: 'sink.partition-commit.watermark-time-zone' defaults to 'UTC', so when the watermark is defined on a TIMESTAMP_LTZ column the partition-time commit trigger compares the watermark against partition times in the wrong time zone. Partitions are then committed hours late, and until a partition is committed to the metastore the Hive client cannot see its data.

The fix is to set the option to your configured session time zone when creating the Hive table:

'sink.partition-commit.watermark-time-zone'='Asia/Shanghai'

SET table.sql-dialect=hive;
CREATE TABLE hive_table (
  user_id STRING,
  order_amount DOUBLE
) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
  -- extract a partition's timestamp from its dt/hr values
  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='1 h',
  -- assume the user-configured session time zone is 'Asia/Shanghai'
  'sink.partition-commit.watermark-time-zone'='Asia/Shanghai',
  -- committing to the metastore is what makes the partition visible to Hive clients
  'sink.partition-commit.policy.kind'='metastore,success-file'
);
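
To see where the watermark comes from, here is a matching Kafka source and the streaming insert, adapted from the official Flink 1.13 example; the connector options (topic, brokers, format) are placeholders for your own setup:

SET table.sql-dialect=default;
CREATE TABLE kafka_table (
  user_id STRING,
  order_amount DOUBLE,
  ts BIGINT,                          -- event time as epoch milliseconds from Kafka
  ts_ltz AS TO_TIMESTAMP_LTZ(ts, 3),  -- convert to TIMESTAMP_LTZ
  WATERMARK FOR ts_ltz AS ts_ltz - INTERVAL '5' SECOND  -- watermark on the TIMESTAMP_LTZ column
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',                             -- placeholder topic
  'properties.bootstrap.servers' = 'kafka:9092',  -- placeholder brokers
  'format' = 'json'
);

-- streaming insert into the Hive table, partitioned by event time
INSERT INTO hive_table
SELECT user_id, order_amount, DATE_FORMAT(ts_ltz, 'yyyy-MM-dd'), DATE_FORMAT(ts_ltz, 'HH')
FROM kafka_table;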

Reference (official documentation): https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/hive/hive_read_write/
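
Once the watermark (now interpreted in Asia/Shanghai) passes a partition's time plus the 1 h delay, the partition is committed to the metastore and immediately becomes queryable. A quick sanity check from the Hive client (the partition values below are placeholders):

SHOW PARTITIONS hive_table;
SELECT * FROM hive_table WHERE dt='2021-05-20' AND hr='12';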
