Flink SQL Practical Notes

  1. Sink Kafka
    Error 1: doesn't support consuming update and delete changes which is produced by node TableSourceScan
    Fix: Flink 1.11 introduced CDC (Change Data Capture) support, open-sourced by the Flink team at Alibaba. This error occurs because the source is mysql-cdc, so the data it produces is in changelog format; when declaring the Kafka sink table in the WITH clause you therefore need to specify 'format' = 'debezium-json'.
    Error 2: No operators defined in streaming topology. Cannot execute
    Fix: The streaming job contains no operator / operator chain to execute, e.g. the SQL only declared tables (DDL) but never ran an INSERT INTO or query that would build a job graph.
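    A minimal sketch of a Kafka sink table that accepts changelog (CDC) input, following the fix for error 1. The topic and column names here are hypothetical:

    ```sql
    CREATE TABLE kafka_sink (
      id BIGINT,
      name STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'user_changes',                       -- hypothetical topic name
      'properties.bootstrap.servers' = 'localhost:9092',
      'format' = 'debezium-json'                      -- required when the input is a changelog stream
    );
    ```

    With 'format' = 'json' instead, the planner rejects the update/delete changes coming from the mysql-cdc source; debezium-json encodes them as Debezium-style change events.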

  2. Source MySQL
        0. Error: The server time zone value 'Öйú±ê׼ʱ¼ä' is unrecognized or represents more than one time zone
           (the garbled value is mojibake for "中国标准时间", i.e. China Standard Time, caused by the MySQL server's encoding)
        1. set global time_zone = '+8:00';
        2. set time_zone = '+8:00';
        3. flush privileges;
        4. Alternatively, specify the time zone on the client side via the serverTimezone connection parameter
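        For the mysql-cdc connector, the client-side equivalent is the 'server-time-zone' option in the table's WITH clause; a sketch with hypothetical connection values:

        ```sql
        CREATE TABLE mysql_source (
          id BIGINT,
          name STRING,
          PRIMARY KEY (id) NOT ENFORCED
        ) WITH (
          'connector' = 'mysql-cdc',
          'hostname' = 'localhost',
          'port' = '3306',
          'username' = 'flink',                  -- hypothetical credentials
          'password' = 'flink',
          'database-name' = 'test',              -- hypothetical database/table
          'table-name' = 'user_behavior',
          'server-time-zone' = 'Asia/Shanghai'   -- avoids the unrecognized-time-zone error
        );
        ```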
  3. Flink SQL job runs fine locally but fails after deployment
     The planner jar already ships with the Flink distribution on the cluster, so bundling it into the job jar causes class conflicts; mark the dependency as provided in pom.xml:

        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-table-planner-blink_2.11</artifactId>
          <version>${flink.version}</version>
          <scope>provided</scope>  <!-- needs to be added -->
        </dependency>

     

  4. Before version 1.11, a user's DDL had to be declared as follows:
    CREATE TABLE user_behavior (
      ...
    ) WITH (
      'connector.type'='kafka',
      'connector.version'='universal',
      'connector.topic'='user_behavior',
      'connector.startup-mode'='earliest-offset',
      'connector.properties.zookeeper.connect'='localhost:2181',
      'connector.properties.bootstrap.servers'='localhost:9092',
      'format.type'='json'
    );

    In Flink SQL 1.11 and later this is simplified to:

    CREATE TABLE user_behavior (
      ...
    ) WITH (
      'connector'='kafka',
      'topic'='user_behavior',
      'scan.startup.mode'='earliest-offset',
      'properties.zookeeper.connect'='localhost:2181',
      'properties.bootstrap.servers'='localhost:9092',
      'format'='json'
    );
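    Note that a CREATE TABLE statement alone defines no operators (see error 2 above); only an INSERT INTO or a query actually builds the streaming topology. A hypothetical example, assuming a sink table named result_sink has been declared:

    ```sql
    -- This DML statement is what actually creates operators in the job graph;
    -- submitting only DDL yields "No operators defined in streaming topology".
    INSERT INTO result_sink
    SELECT * FROM user_behavior;
    ```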

