Flink Table SQL (10), Part 1

Contents

I. Query

  1. Specifying a Query

  2. Supported Syntax

  3. Operations

II. DDL

  1. Specifying a DDL

  2. Creating a Table

  3. Dropping a Table

III. Data Types

IV. Reserved Keywords


I. Query

A SQL query is specified with the sqlQuery() method of the TableEnvironment. The method returns the result of the SQL query as a Table. A Table can be used in subsequent SQL and Table API queries, converted into a DataSet or DataStream, or written to a TableSink. SQL and Table API queries can be seamlessly mixed, and are holistically optimized and translated into a single program.

In order to access a table in a SQL query, it must be registered in the TableEnvironment. A table can be registered from a TableSource, a Table, a CREATE TABLE statement, a DataStream, or a DataSet. Alternatively, users can also register external catalogs in the TableEnvironment to specify the location of the data sources.

For convenience, Table.toString() automatically registers the table under a unique name in its TableEnvironment and returns the name. Hence, Table objects can be directly inlined into SQL queries (by string concatenation) as shown in the examples below.

Note: Flink's SQL support is not yet feature complete. Queries that include unsupported SQL features cause a TableException. The supported features of SQL on batch and streaming tables are listed in the following sections.

1. Specifying a Query

The following examples show how to specify a SQL query on registered and inlined tables.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ingest a DataStream from an external source
DataStream<Tuple3<Long, String, Integer>> ds = env.addSource(...);

// SQL query with an inlined (unregistered) table
Table table = tableEnv.fromDataStream(ds, "user, product, amount");
Table result = tableEnv.sqlQuery(
  "SELECT SUM(amount) FROM " + table + " WHERE product LIKE '%Rubber%'");

// SQL query with a registered table
// register the DataStream as table "Orders"
tableEnv.registerDataStream("Orders", ds, "user, product, amount");
// run a SQL query on the Table and retrieve the result as a new Table
Table result2 = tableEnv.sqlQuery(
  "SELECT product, amount FROM Orders WHERE product LIKE '%Rubber%'");

// SQL update with a registered table
// create and register a TableSink
TableSink csvSink = new CsvTableSink("/path/to/file", ...);
String[] fieldNames = {"product", "amount"};
TypeInformation[] fieldTypes = {Types.STRING, Types.INT};
tableEnv.registerTableSink("RubberOrders", fieldNames, fieldTypes, csvSink);
// run a SQL update query on the Table and emit the result to the TableSink
tableEnv.sqlUpdate(
  "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product LIKE '%Rubber%'");

2. Supported Syntax

Flink parses SQL using Apache Calcite, which supports standard ANSI SQL. DDL statements are not specified with sqlQuery() but with sqlUpdate(); see the DDL section below.
The following BNF-grammar describes the superset of supported SQL features in batch and streaming queries. The Operations section shows examples for the supported features and indicates which features are only supported for batch or streaming queries.

insert:
  INSERT INTO tableReference
  query
  
query:
  values
  | {
      select
      | selectWithoutFrom
      | query UNION [ ALL ] query
      | query EXCEPT query
      | query INTERSECT query
    }
    [ ORDER BY orderItem [, orderItem ]* ]
    [ LIMIT { count | ALL } ]
    [ OFFSET start { ROW | ROWS } ]
    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY]

orderItem:
  expression [ ASC | DESC ]

select:
  SELECT [ ALL | DISTINCT ]
  { * | projectItem [, projectItem ]* }
  FROM tableExpression
  [ WHERE booleanExpression ]
  [ GROUP BY { groupItem [, groupItem ]* } ]
  [ HAVING booleanExpression ]
  [ WINDOW windowName AS windowSpec [, windowName AS windowSpec ]* ]
  
selectWithoutFrom:
  SELECT [ ALL | DISTINCT ]
  { * | projectItem [, projectItem ]* }

projectItem:
  expression [ [ AS ] columnAlias ]
  | tableAlias . *

tableExpression:
  tableReference [, tableReference ]*
  | tableExpression [ NATURAL ] [ LEFT | RIGHT | FULL ] JOIN tableExpression [ joinCondition ]

joinCondition:
  ON booleanExpression
  | USING '(' column [, column ]* ')'

tableReference:
  tablePrimary
  [ matchRecognize ]
  [ [ AS ] alias [ '(' columnAlias [, columnAlias ]* ')' ] ]

tablePrimary:
  [ TABLE ] [ [ catalogName . ] schemaName . ] tableName
  | LATERAL TABLE '(' functionName '(' expression [, expression ]* ')' ')'
  | UNNEST '(' expression ')'

values:
  VALUES expression [, expression ]*

groupItem:
  expression
  | '(' ')'
  | '(' expression [, expression ]* ')'
  | CUBE '(' expression [, expression ]* ')'
  | ROLLUP '(' expression [, expression ]* ')'
  | GROUPING SETS '(' groupItem [, groupItem ]* ')'

windowRef:
    windowName
  | windowSpec

windowSpec:
    [ windowName ]
    '('
    [ ORDER BY orderItem [, orderItem ]* ]
    [ PARTITION BY expression [, expression ]* ]
    [
        RANGE numericOrIntervalExpression {PRECEDING}
      | ROWS numericExpression {PRECEDING}
    ]
    ')'

matchRecognize:
      MATCH_RECOGNIZE '('
      [ PARTITION BY expression [, expression ]* ]
      [ ORDER BY orderItem [, orderItem ]* ]
      [ MEASURES measureColumn [, measureColumn ]* ]
      [ ONE ROW PER MATCH ]
      [ AFTER MATCH
            ( SKIP TO NEXT ROW
            | SKIP PAST LAST ROW
            | SKIP TO FIRST variable
            | SKIP TO LAST variable
            | SKIP TO variable )
      ]
      PATTERN '(' pattern ')'
      [ WITHIN intervalLiteral ]
      DEFINE variable AS condition [, variable AS condition ]*
      ')'

measureColumn:
      expression AS alias

pattern:
      patternTerm [ '|' patternTerm ]*

patternTerm:
      patternFactor [ patternFactor ]*

patternFactor:
      variable [ patternQuantifier ]

patternQuantifier:
      '*'
  |   '*?'
  |   '+'
  |   '+?'
  |   '?'
  |   '??'
  |   '{' { [ minRepeat ], [ maxRepeat ] } '}' ['?']
  |   '{' repeat '}'

Flink SQL uses a lexical policy for identifiers (table, attribute, function names) similar to Java:
  - The case of identifiers is preserved whether or not they are quoted.
  - Afterwards, identifiers are matched case-sensitively.
  - Unlike Java, back-ticks allow identifiers to contain non-alphanumeric characters (e.g. "SELECT a AS `my field` FROM t").

String literals must be enclosed in single quotes (e.g., SELECT 'Hello World'). Duplicate a quote for escaping (e.g., SELECT 'It''s me.'). Unicode characters are supported in string literals. If explicit unicode code points are required, use the following syntax:
  - Use the backslash (\) as escaping character (default): SELECT U&'\263A'
  - Use a custom escaping character: SELECT U&'#263A' UESCAPE '#'

3. Operations

(1) Show and Use

Show
Batch Streaming

Show all catalogs

 SHOW CATALOGS;
Show all databases in the current catalog

 SHOW DATABASES;
Show all tables in the current database in the current catalog

 SHOW TABLES;

Use
Batch Streaming

Set current catalog for the session

 USE CATALOG mycatalog;
Set current database of the current catalog for the session

 USE mydatabase;

(2) Scan, Projection, and Filter

Scan / Select / As
Batch Streaming

SELECT * FROM Orders

SELECT a, c AS d FROM Orders

Where / Filter
Batch Streaming

SELECT * FROM Orders WHERE b = 'red'

SELECT * FROM Orders WHERE a % 2 = 0

User-defined Scalar Functions (Scalar UDF)
Batch Streaming

UDFs must be registered in the TableEnvironment. See the UDF documentation for details on how to specify and register scalar UDFs.

SELECT PRETTY_PRINT(user) FROM Orders

(3) Aggregations

GroupBy Aggregation
Batch Streaming
Result Updating

Note: GroupBy on a streaming table produces an updating result. See the Dynamic Tables page in the streaming concepts documentation for details.

SELECT a, SUM(b) as d
FROM Orders
GROUP BY a

GroupBy Window Aggregation
Batch Streaming

Use a group window to compute a single result row per group. See the Group Windows section below for details.

SELECT user, SUM(amount)
FROM Orders
GROUP BY TUMBLE(rowtime, INTERVAL '1' DAY), user

Over Window aggregation
Streaming

Note: All aggregates must be defined over the same window, i.e., the same partitioning, sorting, and range. Currently, only windows with a PRECEDING (UNBOUNDED and bounded) to CURRENT ROW range are supported. Ranges with FOLLOWING are not supported yet. ORDER BY must be specified on a single time attribute.

SELECT COUNT(amount) OVER (
  PARTITION BY user
  ORDER BY proctime
  ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
FROM Orders

 

SELECT COUNT(amount) OVER w, SUM(amount) OVER w
FROM Orders 
WINDOW w AS (
  PARTITION BY user
  ORDER BY proctime
  ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)  

Distinct
Batch Streaming
Result Updating

SELECT DISTINCT users FROM Orders

Note: For streaming queries, the required state to compute the query result might grow infinitely depending on the number of distinct fields. Please provide a query configuration with a valid retention interval to prevent excessive state size. See Query Configuration for details.

Grouping sets, Rollup, Cube
Batch

SELECT SUM(amount)
FROM Orders
GROUP BY GROUPING SETS ((user), (product))

Having
Batch Streaming

SELECT SUM(amount)
FROM Orders
GROUP BY users
HAVING SUM(amount) > 50

User-defined Aggregate Functions (UDAGG)
Batch Streaming

UDAGGs must be registered in the TableEnvironment. See the UDF documentation for details on how to specify and register UDAGGs.

(4) Joins

Inner Equi-join
Batch Streaming

Currently, only equi-joins are supported, i.e., joins that have at least one conjunctive condition with an equality predicate. Arbitrary cross or theta joins are not supported.

Note: The order of joins is not optimized. Tables are joined in the order in which they are specified in the FROM clause. Make sure to specify tables in an order that does not yield a cross join (Cartesian product), which is not supported and would cause the query to fail.

SELECT * FROM Orders INNER JOIN Product ON Orders.productId = Product.id

Note: For streaming queries, the required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with a valid retention interval to prevent excessive state size. See Query Configuration for details.

Outer Equi-join
Batch Streaming 
Result Updating

Currently, only equi-joins are supported, i.e., joins that have at least one conjunctive condition with an equality predicate. Arbitrary cross or theta joins are not supported.

Note: The order of joins is not optimized. Tables are joined in the order in which they are specified in the FROM clause. Make sure to specify tables in an order that does not yield a cross join (Cartesian product), which is not supported and would cause the query to fail.

SELECT * FROM Orders LEFT JOIN Product ON Orders.productId = Product.id

SELECT * FROM Orders RIGHT JOIN Product ON Orders.productId = Product.id

SELECT * FROM Orders FULL OUTER JOIN Product ON Orders.productId = Product.id

Note: For streaming queries, the required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with a valid retention interval to prevent excessive state size. See Query Configuration for details.

Time-windowed Join
Batch Streaming

Note: Time-windowed joins are a subset of regular joins that can be processed in a streaming fashion.

A time-windowed join requires at least one equi-join predicate and a join condition that bounds the time on both sides. Such a condition can be defined by two appropriate range predicates (<, <=, >=, >), a BETWEEN predicate, or a single equality predicate that compares time attributes of the same type (i.e., processing time or event time) of both input tables.

For example, the following predicates are valid window join conditions:

  • ltime = rtime
  • ltime >= rtime AND ltime < rtime + INTERVAL '10' MINUTE
  • ltime BETWEEN rtime - INTERVAL '10' SECOND AND rtime + INTERVAL '5' SECOND

SELECT *
FROM Orders o, Shipments s
WHERE o.id = s.orderId AND
      o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime

 

The example above joins all orders with their corresponding shipments if the order was shipped four hours after the order was received.

Expanding arrays into a relation
Batch Streaming

Unnesting WITH ORDINALITY is not supported yet.
 

SELECT users, tag
FROM Orders CROSS JOIN UNNEST(tags) AS t (tag)

Join with Table Function (UDTF)
Batch Streaming

Joins a table with the results of a table function. Each row of the left (outer) table is joined with all rows produced by the corresponding call of the table function.

User-defined table functions (UDTFs) must be registered first. See the UDF documentation for details on how to specify and register UDTFs.

Inner Join

A row of the left (outer) table is dropped if its table function call returns an empty result.

SELECT users, tag
FROM Orders, LATERAL TABLE(unnest_udtf(tags)) t AS tag

Left Outer Join

If a table function call returns an empty result, the corresponding outer row is preserved and the result is padded with null values.

SELECT users, tag
FROM Orders LEFT JOIN LATERAL TABLE(unnest_udtf(tags)) t AS tag ON TRUE

Note: Currently, only literal TRUE is supported as the predicate for a left outer join against a lateral table.

Join with Temporal Table Function
Streaming

Temporal tables are tables that track changes over time.

A temporal table function provides access to the state of a temporal table at a specific point in time. The syntax to join a table with a temporal table function is the same as joining with a regular table function.

Note: Currently only inner joins with temporal tables are supported.

Assuming Rates is a temporal table function, the join can be expressed in SQL as follows:

SELECT
  o_amount, r_rate
FROM
  Orders,
  LATERAL TABLE (Rates(o_proctime))
WHERE
  r_currency = o_currency

For more information, see the more detailed description of temporal tables in the concepts documentation.

Join with Temporal Table
Batch Streaming

Temporal tables are tables that track changes over time. A temporal table provides access to the versions of a temporal table at a specific point in time.

Only inner and left joins with processing-time temporal tables are supported.

The following example assumes that LatestRates is a temporal table which is materialized with the latest rates.

SELECT
  o.amount, o.currency, r.rate, o.amount * r.rate
FROM
  Orders AS o
  JOIN LatestRates FOR SYSTEM_TIME AS OF o.proctime AS r
  ON r.currency = o.currency

For more information, see the more detailed description of temporal tables in the concepts documentation.

Note: Joins with temporal tables are only supported in the Blink planner.

(5) Set Operations

Union
Batch

SELECT *
FROM (
    (SELECT user FROM Orders WHERE a % 2 = 0)
  UNION
    (SELECT user FROM Orders WHERE b = 0)
)

UnionAll
Batch Streaming

SELECT *
FROM (
    (SELECT user FROM Orders WHERE a % 2 = 0)
  UNION ALL
    (SELECT user FROM Orders WHERE b = 0)
)

Intersect / Except
Batch

SELECT *
FROM (
    (SELECT user FROM Orders WHERE a % 2 = 0)
  INTERSECT
    (SELECT user FROM Orders WHERE b = 0)
)

SELECT *
FROM (
    (SELECT user FROM Orders WHERE a % 2 = 0)
  EXCEPT
    (SELECT user FROM Orders WHERE b = 0)
)
In
Batch Streaming

Returns true if an expression exists in a given table sub-query. The sub-query table must consist of a single column, and that column must have the same data type as the expression.

SELECT user, amount
FROM Orders
WHERE product IN (
    SELECT product FROM NewProducts
)

Note: For streaming queries, the operation is rewritten into a join and group operation. The required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with a valid retention interval to prevent excessive state size. See Query Configuration for details.

Exists
Batch Streaming

Returns true if the sub-query returns at least one row. Only supported if the operation can be rewritten into a join and group operation.

SELECT user, amount
FROM Orders
WHERE product EXISTS (
    SELECT product FROM NewProducts
)

Note: For streaming queries, the operation is rewritten into a join and group operation. The required state to compute the query result might grow infinitely depending on the number of distinct input rows. Please provide a query configuration with a valid retention interval to prevent excessive state size. See Query Configuration for details.

(6) OrderBy & Limit

Order By
Batch Streaming

Note: The result of streaming queries must be primarily sorted on an ascending time attribute. Additional sorting attributes are supported.

SELECT *
FROM Orders
ORDER BY orderTime

Limit
Batch

Note: The LIMIT clause requires an ORDER BY clause.

SELECT *
FROM Orders
ORDER BY orderTime
LIMIT 3

(7) Top-N

Note: Top-N is only supported in the Blink planner.

Top-N queries ask for the N smallest or largest values ordered by columns. Both smallest and largest value sets are considered Top-N queries. Top-N queries are useful in cases where the need is to display only the N bottom-most or the N top-most records from a batch/streaming table. This result set can be used for further analysis.

Flink uses the combination of an OVER window clause and a filter condition to express a Top-N query. With the power of partitioning the OVER window, Flink also supports per-group Top-N, for example, the top five products per category that have the maximum sales in realtime. Top-N queries are supported for SQL on batch and streaming tables.

The following shows the syntax of a Top-N statement:

SELECT [column_list]
FROM (
   SELECT [column_list],
     ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
       ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
   FROM table_name)
WHERE rownum <= N [AND conditions]

Parameter specification:

  • ROW_NUMBER(): Assigns a unique, sequential number to each row, starting with one, according to the ordering of rows within the partition. Currently, ROW_NUMBER is the only supported over-window function; RANK() and DENSE_RANK() will be supported in the future.
  • PARTITION BY col1[, col2...]: Specifies the partition columns. Each partition will have its own Top-N result.
  • ORDER BY col1 [asc|desc][, col2 [asc|desc]...]: Specifies the ordering columns. The ordering directions can differ per column.
  • WHERE rownum <= N: The rownum <= N condition is required for Flink to recognize that the query is a Top-N query. N represents how many of the smallest or largest records will be retained.
  • [AND conditions]: Other conditions may be added in the WHERE clause, but they can only be combined with rownum <= N using an AND conjunction.

Note: In streaming mode, a Top-N query is Result Updating. Flink SQL sorts the input data stream according to the order key, so if the top N records change, the changed records are sent downstream as retraction/update records. It is recommended to use a storage which supports updating as the sink of a Top-N query. In addition, if the top N records need to be stored in external storage, the result table should have the same unique key as the Top-N query.

The unique key of a Top-N query is the combination of the partition columns and the rownum column. A Top-N query can also derive the unique key of its upstream. Take the following job as an example: say product_id is the unique key of ShopSales, then the unique keys of the Top-N query are [category, rownum] and [product_id].

The following example shows how to specify a Top-N SQL query on a streaming table. It implements the "top five products per category that have the maximum sales in realtime" example mentioned above.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ingest a DataStream from an external source
DataStream<Tuple4<Long, String, String, Long>> ds = env.addSource(...);
// register the DataStream as table "ShopSales"
tableEnv.registerDataStream("ShopSales", ds, "product_id, category, product_name, sales");

// select top-5 products per category which have the maximum sales.
Table result1 = tableEnv.sqlQuery(
  "SELECT * " +
  "FROM (" +
  "   SELECT *," +
  "       ROW_NUMBER() OVER (PARTITION BY category ORDER BY sales DESC) as row_num" +
  "   FROM ShopSales) " +
  "WHERE row_num <= 5");

No Ranking Output Optimization

As described above, the rownum field will be written into the result table as one field of the unique key, which may lead to a lot of records being written to the result table. For example, when the record of ranking 9 (say product-1001) is updated and its rank is upgraded to 1, all the records from ranking 1 through 9 will be output to the result table as update messages. If the result table receives too much data, it will become the bottleneck of the SQL job.

The optimization is to omit the rownum field in the outer SELECT clause of the Top-N query. This is reasonable because the number of top N records is usually not large, so the consumers can sort the records themselves quickly. Without the rownum field, in the example above, only the changed record (product-1001) needs to be sent downstream, which can greatly reduce the IO to the result table.

The following example shows how to optimize the above Top-N example in this way:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ingest a DataStream from an external source
DataStream<Tuple4<Long, String, String, Long>> ds = env.addSource(...);
// register the DataStream as table "ShopSales"
tableEnv.registerDataStream("ShopSales", ds, "product_id, category, product_name, sales");

// select top-5 products per category which have the maximum sales.
Table result1 = tableEnv.sqlQuery(
  "SELECT product_id, category, product_name, sales " + // omit row_num field in the output
  "FROM (" +
  "   SELECT *," +
  "       ROW_NUMBER() OVER (PARTITION BY category ORDER BY sales DESC) as row_num" +
  "   FROM ShopSales) " +
  "WHERE row_num <= 5");

Note: In streaming mode, in order to output the above query to external storage and get a correct result, the external storage must have the same unique key as the Top-N query. In the above example query, if product_id is the unique key of the query, then the external table should also have product_id as its unique key.

(8) Deduplication

Note: Deduplication is only supported in the Blink planner.

Deduplication removes rows that duplicate over a set of columns, keeping only the first or the last one. In some cases, the upstream ETL jobs are not end-to-end exactly-once; this may result in duplicate records in the sink in case of failover. Duplicate records affect the correctness of downstream analytical jobs (e.g. SUM, COUNT), so deduplication is needed before further analysis.

Flink uses ROW_NUMBER() to remove duplicates, just like in a Top-N query. In theory, deduplication is a special case of Top-N in which N is one, ordered by processing time or event time.

The following shows the syntax of a deduplication statement:

SELECT [column_list]
FROM (
   SELECT [column_list],
     ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
       ORDER BY time_attr [asc|desc]) AS rownum
   FROM table_name)
WHERE rownum = 1

Parameter specification:

  • ROW_NUMBER(): Assigns a unique, sequential number to each row, starting with one.
  • PARTITION BY col1[, col2...]: Specifies the partition columns, i.e., the deduplication key.
  • ORDER BY time_attr [asc|desc]: Specifies the ordering column, which must be a time attribute. Currently only the proctime attribute is supported; the rowtime attribute will be supported in the future. Ordering by ASC keeps the first row, ordering by DESC keeps the last row.
  • WHERE rownum = 1: The rownum = 1 condition is required for Flink to recognize that the query performs deduplication.

The following example shows how to specify a deduplication SQL query on a streaming table.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ingest a DataStream from an external source
DataStream<Tuple4<Long, String, String, Integer>> ds = env.addSource(...);
// register the DataStream as table "Orders"
tableEnv.registerDataStream("Orders", ds, "order_id, user, product, number, proctime.proctime");

// remove duplicate rows on order_id and keep the first occurrence row,
// because there shouldn't be two orders with the same order_id.
Table result1 = tableEnv.sqlQuery(
  "SELECT order_id, user, product, number " +
  "FROM (" +
  "   SELECT *," +
  "       ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY proctime ASC) as row_num" +
  "   FROM Orders) " +
  "WHERE row_num = 1");

(9) Insert

Insert Into
Batch Streaming

The output table must be registered in the TableEnvironment (see Register a TableSink). Moreover, the schema of the registered table must match the schema of the query.

INSERT INTO OutputTable
SELECT users, tag
FROM Orders

(10) Group Windows

Group windows are defined in the GROUP BY clause of a SQL query. Just like queries with regular GROUP BY clauses, queries with a GROUP BY clause that includes a group window function compute a single result row per group. The following group window functions are supported for SQL on batch and streaming tables.

TUMBLE(time_attr, interval)

Defines a tumbling time window. A tumbling time window assigns rows to non-overlapping, continuous windows with a fixed duration (interval). For example, a tumbling window of 5 minutes groups rows in 5-minute intervals. Tumbling windows can be defined on event-time (stream + batch) or processing-time (stream).

HOP(time_attr, interval, interval)

Defines a hopping time window (called sliding window in the Table API). A hopping time window has a fixed duration (second interval parameter) and hops by a specified hop interval (first interval parameter). If the hop interval is smaller than the window size, hopping windows overlap; thus, rows can be assigned to multiple windows. For example, a hopping window of 15 minutes size and 5 minute hop interval assigns each row to 3 different windows of 15 minute size, which are evaluated in an interval of 5 minutes. Hopping windows can be defined on event-time (stream + batch) or processing-time (stream).

SESSION(time_attr, interval)

Defines a session time window. Session time windows do not have a fixed duration; their bounds are defined by a time interval of inactivity, i.e., a session window is closed if no event appears for the defined gap period. For example, a session window with a 30 minute gap starts when a row is observed after 30 minutes of inactivity (otherwise the row would be added to an existing window) and is closed if no row is added within 30 minutes. Session windows can work on event-time (stream + batch) or processing-time (stream).

Time Attributes

For SQL queries on streaming tables, the time_attr argument of the group window function must refer to a valid time attribute that specifies the processing time or event time of rows. See the documentation of time attributes to learn how to define time attributes.

For SQL on batch tables, the time_attr argument of the group window function must be an attribute of type TIMESTAMP.

Selecting Group Window Start and End Timestamps

The start and end timestamps of group windows as well as time attributes can be selected with the following auxiliary functions:

TUMBLE_START(time_attr, interval)
HOP_START(time_attr, interval, interval)
SESSION_START(time_attr, interval)

Returns the timestamp of the inclusive lower bound of the corresponding tumbling, hopping, or session window.

TUMBLE_END(time_attr, interval)
HOP_END(time_attr, interval, interval)
SESSION_END(time_attr, interval)

Returns the timestamp of the exclusive upper bound of the corresponding tumbling, hopping, or session window.

Note: The exclusive upper bound timestamp cannot be used as a rowtime attribute in subsequent time-based operations, such as time-windowed joins and group window or over window aggregations.

TUMBLE_ROWTIME(time_attr, interval)
HOP_ROWTIME(time_attr, interval, interval)
SESSION_ROWTIME(time_attr, interval)

Returns the timestamp of the inclusive upper bound of the corresponding tumbling, hopping, or session window.

The resulting attribute is a rowtime attribute that can be used in subsequent time-based operations such as time-windowed joins and group window or over window aggregations.

TUMBLE_PROCTIME(time_attr, interval)
HOP_PROCTIME(time_attr, interval, interval)
SESSION_PROCTIME(time_attr, interval)

Returns a proctime attribute that can be used in subsequent time-based operations such as time-windowed joins and group window or over window aggregations.

Note: Auxiliary functions must be called with exactly the same arguments as the group window function in the GROUP BY clause.

The following examples show how to specify SQL queries with group windows on streaming tables.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// ingest a DataStream from an external source
DataStream<Tuple3<Long, String, Integer>> ds = env.addSource(...);
// register the DataStream as table "Orders"
tableEnv.registerDataStream("Orders", ds, "user, product, amount, proctime.proctime, rowtime.rowtime");

// compute SUM(amount) per day (in event-time)
Table result1 = tableEnv.sqlQuery(
  "SELECT user, " +
  "  TUMBLE_START(rowtime, INTERVAL '1' DAY) as wStart,  " +
  "  SUM(amount) FROM Orders " +
  "GROUP BY TUMBLE(rowtime, INTERVAL '1' DAY), user");

// compute SUM(amount) per day (in processing-time)
Table result2 = tableEnv.sqlQuery(
  "SELECT user, SUM(amount) FROM Orders GROUP BY TUMBLE(proctime, INTERVAL '1' DAY), user");

// compute every hour the SUM(amount) of the last 24 hours in event-time
Table result3 = tableEnv.sqlQuery(
  "SELECT product, SUM(amount) FROM Orders GROUP BY HOP(rowtime, INTERVAL '1' HOUR, INTERVAL '1' DAY), product");

// compute SUM(amount) per session with 12 hour inactivity gap (in event-time)
Table result4 = tableEnv.sqlQuery(
  "SELECT user, " +
  "  SESSION_START(rowtime, INTERVAL '12' HOUR) AS sStart, " +
  "  SESSION_ROWTIME(rowtime, INTERVAL '12' HOUR) AS snd, " +
  "  SUM(amount) " +
  "FROM Orders " +
  "GROUP BY SESSION(rowtime, INTERVAL '12' HOUR), user");

(11) Pattern Recognition

MATCH_RECOGNIZE
Streaming

Searches for a given pattern in a streaming table according to the MATCH_RECOGNIZE ISO standard. This makes it possible to express complex event processing (CEP) logic in SQL queries.

For a more detailed description, see the dedicated page about detecting patterns in tables.

SELECT T.aid, T.bid, T.cid
FROM MyTable
MATCH_RECOGNIZE (
  PARTITION BY userid
  ORDER BY proctime
  MEASURES
    A.id AS aid,
    B.id AS bid,
    C.id AS cid
  PATTERN (A B C)
  DEFINE
    A AS name = 'a',
    B AS name = 'b',
    C AS name = 'c'
) AS T

II. DDL

DDL statements are specified with the sqlUpdate() method of the TableEnvironment. The method returns nothing for a successful table creation. A table can be registered into the catalog with a CREATE TABLE statement and can then be referenced in SQL queries passed to the sqlQuery() method of the TableEnvironment.

Note: Flink's DDL support is not yet feature complete. Statements that include unsupported SQL features cause a TableException. The supported features of SQL DDL on batch and streaming tables are listed in the following sections.

1. Specifying a DDL

The following example shows how to specify a SQL DDL.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// SQL query with a registered table
// register a table named "Orders"
tableEnv.sqlUpdate("CREATE TABLE Orders (`user` BIGINT, product VARCHAR, amount INT) WITH (...)");
// run a SQL query on the Table and retrieve the result as a new Table
Table result = tableEnv.sqlQuery(
  "SELECT product, amount FROM Orders WHERE product LIKE '%Rubber%'");

// SQL update with a registered table
// register a TableSink
tableEnv.sqlUpdate("CREATE TABLE RubberOrders(product VARCHAR, amount INT) WITH (...)");
// run a SQL update query on the Table and emit the result to the TableSink
tableEnv.sqlUpdate(
  "INSERT INTO RubberOrders SELECT product, amount FROM Orders WHERE product LIKE '%Rubber%'");

2. Creating a Table

CREATE TABLE [catalog_name.][db_name.]table_name
  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name1, col_name2, ...)]
  WITH (key1=val1, key2=val2, ...)

Creates a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown.

PARTITIONED BY

Partitions the created table by the specified columns. A directory is created for each partition if this table is used as a filesystem sink.

WITH OPTIONS

Table properties used to create a table source/sink. The properties are usually used to find and create the underlying connector.

Both the key and the value of the expression key1=val1 should be string literals. See Connect to External Systems for all the supported table properties of the different connectors.

Note: The table name can be in three formats: 1. catalog_name.db_name.table_name, 2. db_name.table_name, 3. table_name. For catalog_name.db_name.table_name, the table is registered into the metastore with the catalog named "catalog_name" and the database named "db_name"; for db_name.table_name, the table is registered into the current catalog of the execution table environment and the database named "db_name"; for table_name, the table is registered into the current catalog and database of the execution table environment.

Note: A table registered with a CREATE TABLE statement can be used as both a table source and a table sink; we cannot decide whether it is used as a source or a sink until it is referenced in a DML statement.

3. Dropping a Table

DROP TABLE [IF EXISTS] [catalog_name.][db_name.]table_name

Drops a table with the given table name. If the table to drop does not exist, an exception is thrown.

IF EXISTS

If the table does not exist, nothing happens.

III. Data Types

Please see the dedicated page about data types.

Generic types and (nested) composite types (e.g. POJOs, tuples, rows, Scala case classes) can be fields of a row as well.

Fields of composite types with arbitrary nesting can be accessed with value access functions, as sketched below.

Generic types are treated as a black box and can be passed on or processed by user-defined functions.

For DDLs, we support the full data types defined in the Data Types page.

Note: Some data types are not supported in SQL queries (in cast expressions or literals), e.g. STRING, BYTES, TIME(p) WITHOUT TIME ZONE, TIME(p) WITH LOCAL TIME ZONE, TIMESTAMP(p) WITHOUT TIME ZONE, TIMESTAMP(p) WITH LOCAL TIME ZONE, ARRAY, MULTISET, ROW.

IV. Reserved Keywords

Although not every SQL feature has been implemented yet, some string combinations are already reserved as keywords for future use. If you want to use one of the following strings as a field name, make sure to surround it with backticks (e.g. `value`, `count`).

A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LENGTH, CHARACTER_SET_CATALOG, CHARACTER_SET_NAME, CHARACTER_SET_SCHEMA, CHAR_LENGTH, CHECK, CLASS_ORIGIN, CLOB, CLOSE, COALESCE, COBOL, COLLATE, COLLATION, COLLATION_CATALOG, COLLATION_NAME, COLLATION_SCHEMA, COLLECT, COLUMN, COLUMN_NAME, COMMAND_FUNCTION, COMMAND_FUNCTION_CODE, COMMIT, COMMITTED, CONDITION, CONDITION_NUMBER, CONNECT, CONNECTION, CONNECTION_NAME, CONSTRAINT, CONSTRAINTS, CONSTRAINT_CATALOG, CONSTRAINT_NAME, CONSTRAINT_SCHEMA, CONSTRUCTOR, CONTAINS, CONTINUE, CONVERT, CORR, CORRESPONDING, COUNT, COVAR_POP, COVAR_SAMP, CREATE, CROSS, CUBE, CUME_DIST, CURRENT, CURRENT_CATALOG, CURRENT_DATE, CURRENT_DEFAULT_TRANSFORM_GROUP, CURRENT_PATH, CURRENT_ROLE, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_TRANSFORM_GROUP_FOR_TYPE, CURRENT_USER, CURSOR, CURSOR_NAME, CYCLE, DATA, DATABASE, DATE, DATETIME_INTERVAL_CODE, DATETIME_INTERVAL_PRECISION, DAY, DEALLOCATE, DEC, DECADE, DECIMAL, DECLARE, DEFAULT, DEFAULTS, DEFERRABLE, DEFERRED, DEFINED, DEFINER, DEGREE, DELETE, DENSE_RANK, DEPTH, DEREF, DERIVED, DESC, DESCRIBE, DESCRIPTION, DESCRIPTOR, DETERMINISTIC, DIAGNOSTICS, DISALLOW, DISCONNECT, DISPATCH, DISTINCT, DOMAIN, DOUBLE, DOW, DOY, DROP, DYNAMIC, DYNAMIC_FUNCTION, DYNAMIC_FUNCTION_CODE, EACH, ELEMENT, ELSE, END, END-EXEC, EPOCH, EQUALS, ESCAPE, EVERY, EXCEPT, EXCEPTION, EXCLUDE, EXCLUDING, EXEC, EXECUTE, EXISTS, EXP, EXPLAIN, EXTEND, EXTERNAL, EXTRACT, FALSE, FETCH, FILTER, FINAL, FIRST, FIRST_VALUE, FLOAT, FLOOR, FOLLOWING, FOR, FOREIGN, FORTRAN, FOUND, FRAC_SECOND, FREE, FROM, FULL, FUNCTION, FUSION, G, GENERAL, GENERATED, GET, GLOBAL, GO, GOTO, GRANT, GRANTED, GROUP, GROUPING, HAVING, HIERARCHY, HOLD, HOUR, IDENTITY, IMMEDIATE, IMPLEMENTATION, IMPORT, IN, INCLUDING, INCREMENT, INDICATOR, INITIALLY, INNER, INOUT, INPUT, INSENSITIVE, INSERT, INSTANCE, INSTANTIABLE, INT, INTEGER, INTERSECT, INTERSECTION, INTERVAL, INTO, INVOKER, IS, ISOLATION, JAVA, JOIN, K, KEY, KEY_MEMBER, KEY_TYPE, LABEL, LANGUAGE, LARGE, LAST, LAST_VALUE, LATERAL, LEADING, LEFT, LENGTH, LEVEL, LIBRARY, LIKE, LIMIT, LN, LOCAL, LOCALTIME, LOCALTIMESTAMP, LOCATOR, LOWER, M, MAP, MATCH, MATCHED, MAX, MAXVALUE, MEMBER, MERGE, MESSAGE_LENGTH, MESSAGE_OCTET_LENGTH, MESSAGE_TEXT, METHOD, MICROSECOND, MILLENNIUM, MIN, MINUTE, MINVALUE, MOD, MODIFIES, MODULE, MONTH, MORE, MULTISET, MUMPS, NAME, NAMES, NATIONAL, NATURAL, NCHAR, NCLOB, NESTING, NEW, NEXT, NO, NONE, NORMALIZE, NORMALIZED, NOT, NULL, NULLABLE, NULLIF, NULLS, NUMBER, NUMERIC, OBJECT, OCTETS, OCTET_LENGTH, OF, OFFSET, OLD, ON, ONLY, OPEN, OPTION, OPTIONS, OR, ORDER, ORDERING, ORDINALITY, OTHERS, OUT, OUTER, OUTPUT, OVER, OVERLAPS, OVERLAY, OVERRIDING, PAD, PARAMETER, PARAMETER_MODE, PARAMETER_NAME, PARAMETER_ORDINAL_POSITION, PARAMETER_SPECIFIC_CATALOG, PARAMETER_SPECIFIC_NAME, PARAMETER_SPECIFIC_SCHEMA, PARTIAL, PARTITION, PASCAL, PASSTHROUGH, PATH, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK, PLACING, PLAN, PLI, POSITION, POWER, PRECEDING, PRECISION, PREPARE, PRESERVE, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC, QUARTER, RANGE, RANK, READ, READS, REAL, RECURSIVE, 
REF, REFERENCES, REFERENCING, REGR_AVGX, REGR_AVGY, REGR_COUNT, REGR_INTERCEPT, REGR_R2, REGR_SLOPE, REGR_SXX, REGR_SXY, REGR_SYY, RELATIVE, RELEASE, REPEATABLE, RESET, RESTART, RESTRICT, RESULT, RETURN, RETURNED_CARDINALITY, RETURNED_LENGTH, RETURNED_OCTET_LENGTH, RETURNED_SQLSTATE, RETURNS, REVOKE, RIGHT, ROLE, ROLLBACK, ROLLUP, ROUTINE, ROUTINE_CATALOG, ROUTINE_NAME, ROUTINE_SCHEMA, ROW, ROWS, ROW_COUNT, ROW_NUMBER, SAVEPOINT, SCALE, SCHEMA, SCHEMA_NAME, SCOPE, SCOPE_CATALOGS, SCOPE_NAME, SCOPE_SCHEMA, SCROLL, SEARCH, SECOND, SECTION, SECURITY, SELECT, SELF, SENSITIVE, SEQUENCE, SERIALIZABLE, SERVER, SERVER_NAME, SESSION, SESSION_USER, SET, SETS, SIMILAR, SIMPLE, SIZE, SMALLINT, SOME, SOURCE, SPACE, SPECIFIC, SPECIFICTYPE, SPECIFIC_NAME, SQL, SQLEXCEPTION, SQLSTATE, SQLWARNING, SQL_TSI_DAY, SQL_TSI_FRAC_SECOND, SQL_TSI_HOUR, SQL_TSI_MICROSECOND, SQL_TSI_MINUTE, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_SECOND, SQL_TSI_WEEK, SQL_TSI_YEAR, SQRT, START, STATE, STATEMENT, STATIC, STDDEV_POP, STDDEV_SAMP, STREAM, STRING, STRUCTURE, STYLE, SUBCLASS_ORIGIN, SUBMULTISET, SUBSTITUTE, SUBSTRING, SUM, SYMMETRIC, SYSTEM, SYSTEM_USER, TABLE, TABLESAMPLE, TABLE_NAME, TEMPORARY, THEN, TIES, TIME, TIMESTAMP, TIMESTAMPADD, TIMESTAMPDIFF, TIMEZONE_HOUR, TIMEZONE_MINUTE, TINYINT, TO, TOP_LEVEL_COUNT, TRAILING, TRANSACTION, TRANSACTIONS_ACTIVE, TRANSACTIONS_COMMITTED, TRANSACTIONS_ROLLED_BACK, TRANSFORM, TRANSFORMS, TRANSLATE, TRANSLATION, TREAT, TRIGGER, TRIGGER_CATALOG, TRIGGER_NAME, TRIGGER_SCHEMA, TRIM, TRUE, TYPE, UESCAPE, UNBOUNDED, UNCOMMITTED, UNDER, UNION, UNIQUE, UNKNOWN, UNNAMED, UNNEST, UPDATE, UPPER, UPSERT, USAGE, USER, USER_DEFINED_TYPE_CATALOG, USER_DEFINED_TYPE_CODE, USER_DEFINED_TYPE_NAME, USER_DEFINED_TYPE_SCHEMA, USING, VALUE, VALUES, VARBINARY, VARCHAR, VARYING, VAR_POP, VAR_SAMP, VERSION, VIEW, WEEK, WHEN, WHENEVER, WHERE, WIDTH_BUCKET, WINDOW, WITH, WITHIN, WITHOUT, WORK, WRAPPER, WRITE, XML, YEAR, ZONE

Reference: the official Flink documentation.

你可能感兴趣的:(flink,flink)