From Flink's official documentation we know that Flink's programming model consists of four layers: SQL is the topmost API, the Table API is the middle layer, the DataStream/DataSet APIs are the core, and the stateful stream processing layer is the low-level implementation.
Earlier posts in this series:
"Flink DataSet API: usage and internals" introduced the DataSet API.
"Flink DataStream API: usage and internals" introduced the DataStream API.
"How do timestamps work in Flink? Watermarks: usage and internals" introduced watermarks, a building block of the low-level implementation.
"Flink windows by example" introduced the window concept and how windows work.
"State and fault tolerance in Flink" introduced state and the checkpoint/savepoint fault-tolerance mechanisms.
The previous post, "Building batch and streaming applications with the Flink Table & SQL API (1): basic concepts of Table", introduced what a Table is and how to use it.
This post looks at what the Table API offers.
The interface org.apache.flink.table.api.Table abstracts the capabilities of the Table API:
/**
 * A Table is the core component of the Table API.
 * Similar to how the batch and streaming APIs have DataSet and DataStream,
 * the Table API is built around {@link Table}.
 *
 * <p>Use the methods of {@link Table} to transform data. Use {@code TableEnvironment} to convert a
 * {@link Table} back to a {@code DataSet} or {@code DataStream}.
 *
 * <p>When using Scala a {@link Table} can also be converted using implicit conversions.
 *
 * <p>Java Example:
 *
 * <pre>
 * {@code
 *   ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 *   BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);
 *
 *   DataSet<Tuple2<String, Integer>> set = ...
 *   tEnv.registerTable("MyTable", set, "a, b");
 *
 *   Table table = tEnv.scan("MyTable").select(...);
 *   ...
 *   Table table2 = ...
 *   DataSet<MyType> set2 = tEnv.toDataSet(table2, MyType.class);
 * }
 * </pre>
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   val env = ExecutionEnvironment.getExecutionEnvironment
 *   val tEnv = BatchTableEnvironment.create(env)
 *
 *   val set: DataSet[(String, Int)] = ...
 *   val table = set.toTable(tEnv, 'a, 'b)
 *   ...
 *   val table2 = ...
 *   val set2: DataSet[MyType] = table2.toDataSet[MyType]
 * }
 * </pre>
 *
 * <p>Operations such as {@code join}, {@code select}, {@code where} and {@code groupBy} either
 * take arguments in a Scala DSL or as an expression String. Please refer to the documentation for
 * the expression syntax.
 */
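Before going through the individual methods, here is a minimal, runnable batch sketch against the 1.9-era API quoted above. The table name Orders and the fields a, b, c are illustrative; the shorter snippets in the following sections reuse this env / tEnv / orders setup.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.types.Row;

public class TableApiDemo {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);

        // A small in-memory data set: a = user, b = color, c = amount.
        DataSet<Tuple3<String, String, Integer>> input = env.fromElements(
                Tuple3.of("u1", "red", 3),
                Tuple3.of("u2", "blue", 5),
                Tuple3.of("u1", "red", 7));

        // Turn the DataSet into a Table with fields a, b, c and register it.
        Table orders = tEnv.fromDataSet(input, "a, b, c");
        tEnv.registerTable("Orders", orders);

        // Any of the operations below can be chained here.
        Table result = tEnv.scan("Orders").select("a, b, c");

        // Convert back to a DataSet of Rows and print (this triggers execution).
        tEnv.toDataSet(result, Row.class).print();
    }
}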
(1) Select
/**
 * Performs a selection operation. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions and aggregations.
 *
 * <p>Example: {@code tab.select("key, value.avg + ' The average' as average")}
 */
Table select(String fields);

/**
 * Performs a selection operation. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions and aggregations.
 *
 * <p>Scala Example: {@code tab.select('key, 'value.avg + " The average" as 'average)}
 */
Table select(Expression... fields);
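A usage sketch on the Orders table from the setup above (field names are illustrative):

// Project two fields and compute a derived column.
Table projected = orders.select("a, c * 2 as doubled");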
(2) Where
/**
 * Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE
 * clause.
 *
 * <p>Example: {@code tab.where("name = 'Fred'")}
 */
Table where(String predicate);

/**
 * Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE
 * clause.
 *
 * <p>Scala Example: {@code tab.where('name === "Fred")}
 */
Table where(Expression predicate);
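A usage sketch, again against the Orders table defined earlier:

// Keep only the rows whose color column equals 'red'.
Table reds = orders.where("b = 'red'");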
(3) Filter
/**
 * Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE
 * clause.
 *
 * <p>Example: {@code tab.filter("name = 'Fred'")}
 */
Table filter(String predicate);

/**
 * Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE
 * clause.
 *
 * <p>Scala Example: {@code tab.filter('name === "Fred")}
 */
Table filter(Expression predicate);
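As the identical Javadoc suggests, filter is a synonym for where; a sketch on the same Orders table:

// Equivalent to a SQL WHERE clause, same as where("c > 3").
Table large = orders.filter("c > 3");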
(4) Distinct
/**
 * Removes duplicate values and returns only distinct (different) values.
 *
 * <p>Example: {@code tab.select("key, value").distinct()}
 */
Table distinct();
(5) Group by
/**
 * Groups the elements on some grouping keys. Use this before a selection with aggregations
 * to perform the aggregation on a per-group basis. Similar to a SQL GROUP BY statement.
 *
 * <p>Example: {@code tab.groupBy("key").select("key, value.avg")}
 */
GroupedTable groupBy(String fields);

/**
 * Groups the elements on some grouping keys. Use this before a selection with aggregations
 * to perform the aggregation on a per-group basis. Similar to a SQL GROUP BY statement.
 *
 * <p>Scala Example: {@code tab.groupBy('key).select('key, 'value.avg)}
 */
GroupedTable groupBy(Expression... fields);
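A sketch computing per-user aggregates on the Orders table:

// Analogous to: SELECT a, SUM(c), AVG(c) FROM Orders GROUP BY a.
Table totals = orders
        .groupBy("a")
        .select("a, c.sum as total, c.avg as avgAmount");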
(6) Order by
/**
 * Sorts the given {@link Table}. Similar to SQL ORDER BY.
 * The resulting Table is globally sorted across all parallel partitions.
 *
 * <p>Example: {@code tab.orderBy("name.desc")}
 */
Table orderBy(String fields);

/**
 * Sorts the given {@link Table}. Similar to SQL ORDER BY.
 * The resulting Table is globally sorted across all parallel partitions.
 *
 * <p>Scala Example: {@code tab.orderBy('name.desc)}
 */
Table orderBy(Expression... fields);
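A sketch on the batch Orders table; note that on streaming tables the primary sort must be an ascending time attribute:

// Globally sorted across all parallel partitions, largest amount first.
Table sorted = orders.orderBy("c.desc");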
(7) Map and flatMap
/**
 * Performs a map operation with a user-defined scalar function or a built-in scalar function.
 * The output will be flattened if the output type is a composite type.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   ScalarFunction func = new MyMapFunction();
 *   tableEnv.registerFunction("func", func);
 *   tab.map("func(c)");
 * }
 * </pre>
 */
Table map(String mapFunction);

/**
 * Performs a map operation with a user-defined scalar function or a built-in scalar function.
 * The output will be flattened if the output type is a composite type.
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   val func = new MyMapFunction()
 *   tab.map(func('c))
 * }
 * </pre>
 */
Table map(Expression mapFunction);

/**
 * Performs a flatMap operation with a user-defined table function or a built-in table function.
 * The output will be flattened if the output type is a composite type.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   TableFunction<String> func = new MyFlatMapFunction();
 *   tableEnv.registerFunction("func", func);
 *   table.flatMap("func(c)");
 * }
 * </pre>
 */
Table flatMap(String tableFunction);

/**
 * Performs a flatMap operation with a user-defined table function or a built-in table function.
 * The output will be flattened if the output type is a composite type.
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   val func = new MyFlatMapFunction
 *   table.flatMap(func('c))
 * }
 * </pre>
 */
Table flatMap(Expression tableFunction);
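A sketch with two hypothetical functions, MyUpper (a ScalarFunction) and MySplit (a TableFunction); the class definitions would sit inside the demo class from the setup sketch:

import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.table.functions.TableFunction;

// Hypothetical scalar function: upper-cases a string.
public static class MyUpper extends ScalarFunction {
    public String eval(String s) {
        return s == null ? null : s.toUpperCase();
    }
}

// Hypothetical table function: one output row per '#'-separated token.
public static class MySplit extends TableFunction<String> {
    public void eval(String s) {
        for (String part : s.split("#")) {
            collect(part);
        }
    }
}

tEnv.registerFunction("myUpper", new MyUpper());
tEnv.registerFunction("mySplit", new MySplit());

Table mapped = orders.map("myUpper(b)");     // exactly one output row per input row
Table tokens = orders.flatMap("mySplit(b)"); // zero or more output rows per input row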
(8) Aggregate and flatAggregate
/**
 * Performs a global aggregate operation with an aggregate function. You have to close the
 * {@link #aggregate(String)} with a select statement. The output will be flattened if the
 * output type is a composite type.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   AggregateFunction aggFunc = new MyAggregateFunction();
 *   tableEnv.registerFunction("aggFunc", aggFunc);
 *   table.aggregate("aggFunc(a, b) as (f0, f1, f2)")
 *     .select("f0, f1")
 * }
 * </pre>
 */
AggregatedTable aggregate(String aggregateFunction);

/**
 * Performs a global aggregate operation with an aggregate function. You have to close the
 * {@link #aggregate(Expression)} with a select statement. The output will be flattened if the
 * output type is a composite type.
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   val aggFunc = new MyAggregateFunction
 *   table.aggregate(aggFunc('a, 'b) as ('f0, 'f1, 'f2))
 *     .select('f0, 'f1)
 * }
 * </pre>
 */
AggregatedTable aggregate(Expression aggregateFunction);

/**
 * Performs a global flatAggregate without groupBy. FlatAggregate takes a TableAggregateFunction
 * which returns multiple rows. Use a selection after the flatAggregate.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   TableAggregateFunction tableAggFunc = new MyTableAggregateFunction();
 *   tableEnv.registerFunction("tableAggFunc", tableAggFunc);
 *   tab.flatAggregate("tableAggFunc(a, b) as (x, y, z)")
 *     .select("x, y, z")
 * }
 * </pre>
 */
FlatAggregateTable flatAggregate(String tableAggregateFunction);

/**
 * Performs a global flatAggregate without groupBy. FlatAggregate takes a TableAggregateFunction
 * which returns multiple rows. Use a selection after the flatAggregate.
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   val tableAggFunc = new MyTableAggregateFunction
 *   tab.flatAggregate(tableAggFunc('a, 'b) as ('x, 'y, 'z))
 *     .select('x, 'y, 'z)
 * }
 * </pre>
 */
FlatAggregateTable flatAggregate(Expression tableAggregateFunction);
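A usage sketch; MyMinMax stands for a user-defined AggregateFunction (implementation not shown) whose composite result is flattened and named by the as (...) clause:

// Hypothetical AggregateFunction emitting a two-field (min, max) result.
tEnv.registerFunction("minMax", new MyMinMax());

Table range = orders
        .aggregate("minMax(c) as (lo, hi)") // must be closed with a select
        .select("lo, hi");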
(9) Column operations
/**
 * Adds additional columns. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions, but can not contain aggregations. It will throw an exception
 * if the added fields already exist.
 *
 * <p>Example: {@code tab.addColumns("a + 1 as a1, concat(b, 'sunny') as b1")}
 */
Table addColumns(String fields);

/**
 * Adds additional columns. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions, but can not contain aggregations. It will throw an exception
 * if the added fields already exist.
 *
 * <p>Scala Example: {@code tab.addColumns('a + 1 as 'a1, concat('b, "sunny") as 'b1)}
 */
Table addColumns(Expression... fields);

/**
 * Adds additional columns. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions, but can not contain aggregations. Existing fields will be
 * replaced if an added column's name matches an existing column name. Moreover, if the added
 * fields have duplicate field names, the last one is used.
 *
 * <p>Example: {@code tab.addOrReplaceColumns("a + 1 as a1, concat(b, 'sunny') as b1")}
 */
Table addOrReplaceColumns(String fields);

/**
 * Adds additional columns. Similar to a SQL SELECT statement. The field expressions
 * can contain complex expressions, but can not contain aggregations. Existing fields will be
 * replaced. If the added fields have duplicate field names, the last one is used.
 *
 * <p>Scala Example: {@code tab.addOrReplaceColumns('a + 1 as 'a1, concat('b, "sunny") as 'b1)}
 */
Table addOrReplaceColumns(Expression... fields);

/**
 * Renames existing columns. Similar to a field alias statement. The field expressions
 * should be alias expressions, and only the existing fields can be renamed.
 *
 * <p>Example: {@code tab.renameColumns("a as a1, b as b1")}
 */
Table renameColumns(String fields);

/**
 * Renames existing columns. Similar to a field alias statement. The field expressions
 * should be alias expressions, and only the existing fields can be renamed.
 *
 * <p>Scala Example: {@code tab.renameColumns('a as 'a1, 'b as 'b1)}
 */
Table renameColumns(Expression... fields);

/**
 * Drops existing columns. The field expressions should be field reference expressions.
 *
 * <p>Example: {@code tab.dropColumns("a, b")}
 */
Table dropColumns(String fields);

/**
 * Drops existing columns. The field expressions should be field reference expressions.
 *
 * <p>Scala Example: {@code tab.dropColumns('a, 'b)}
 */
Table dropColumns(Expression... fields);
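A sketch of the four column operations on the Orders table (concat is a built-in function):

Table t1 = orders.addColumns("concat(b, '_x') as b1");  // throws if b1 already exists
Table t2 = orders.addOrReplaceColumns("c + 1 as c");    // silently overwrites c
Table t3 = orders.renameColumns("a as userId");
Table t4 = orders.dropColumns("b");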
(10) Window operations
/**
 * Groups the records of a table by assigning them to windows defined by a time or row interval.
 *
 * <p>For streaming tables of infinite size, grouping into windows is required to define finite
 * groups on which group-based aggregates can be computed.
 *
 * <p>For batch tables of finite size, windowing essentially provides shortcuts for time-based
 * groupBy.
 *
 * <p>Note: Computing windowed aggregates on a streaming table is only a parallel operation
 * if additional grouping attributes are added to the {@code groupBy(...)} clause.
 * If the {@code groupBy(...)} only references a GroupWindow alias, the streamed table will be
 * processed by a single task, i.e., with parallelism 1.
 *
 * @param groupWindow groupWindow that specifies how elements are grouped.
 * @return A windowed table.
 */
GroupWindowedTable window(GroupWindow groupWindow);

/**
 * Defines over-windows on the records of a table.
 *
 * <p>An over-window defines for each record an interval of records over which aggregation
 * functions can be computed.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   table
 *     .window(Over partitionBy 'c orderBy 'rowTime preceding 10.seconds as 'ow)
 *     .select('c, 'b.count over 'ow, 'e.sum over 'ow)
 * }
 * </pre>
 *
 * <p>Note: Computing over-window aggregates on a streaming table is only a parallel
 * operation if the window is partitioned. Otherwise, the whole stream will be processed by a
 * single task, i.e., with parallelism 1.
 *
 * <p>Note: Over-windows for batch tables are currently not supported.
 *
 * @param overWindows windows that specify the record interval over which aggregations are
 *                    computed.
 * @return An OverWindowedTable to specify the aggregations.
 */
OverWindowedTable window(OverWindow... overWindows);
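A group-window sketch; it assumes a StreamTableEnvironment and a streaming table streamOrders whose event-time attribute rowtime was declared at registration (both hypothetical here):

import org.apache.flink.table.api.Tumble;

Table windowed = streamOrders
        .window(Tumble.over("10.minutes").on("rowtime").as("w"))
        .groupBy("w, a") // the extra key 'a' keeps the aggregation parallel
        .select("a, w.start, c.sum as total");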
(11) Joins
Including inner joins, outer joins, and lateral joins with table functions.
/**
 * Joins two {@link Table}s. Similar to a SQL join. The fields of the two joined
 * operations must not overlap, use {@code as} to rename fields if necessary. You can use
 * where and select clauses after a join to further specify the behaviour of the join.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.join(right).where("a = b && c > 3").select("a, b, d")}
 */
Table join(Table right);

/**
 * Joins two {@link Table}s. Similar to a SQL join. The fields of the two joined
 * operations must not overlap, use {@code as} to rename fields if necessary.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.join(right, "a = b")}
 */
Table join(Table right, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code left.join(right, 'a === 'b).select('a, 'b, 'd)}
 */
Table join(Table right, Expression joinPredicate);

/**
 * Joins two {@link Table}s. Similar to a SQL left outer join. The fields of the two joined
 * operations must not overlap, use {@code as} to rename fields if necessary.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment} and its
 * {@code TableConfig} must have null check enabled (default).
 *
 * <p>Example: {@code left.leftOuterJoin(right).select("a, b, d")}
 */
Table leftOuterJoin(Table right);

/**
 * Left outer join with an explicit join predicate.
 *
 * <p>Example: {@code left.leftOuterJoin(right, "a = b").select("a, b, d")}
 */
Table leftOuterJoin(Table right, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code left.leftOuterJoin(right, 'a === 'b).select('a, 'b, 'd)}
 */
Table leftOuterJoin(Table right, Expression joinPredicate);

/**
 * Joins two {@link Table}s. Similar to a SQL right outer join. The same notes on
 * {@code TableEnvironment} and null check apply.
 *
 * <p>Example: {@code left.rightOuterJoin(right, "a = b").select("a, b, d")}
 */
Table rightOuterJoin(Table right, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code left.rightOuterJoin(right, 'a === 'b).select('a, 'b, 'd)}
 */
Table rightOuterJoin(Table right, Expression joinPredicate);

/**
 * Joins two {@link Table}s. Similar to a SQL full outer join. The same notes on
 * {@code TableEnvironment} and null check apply.
 *
 * <p>Example: {@code left.fullOuterJoin(right, "a = b").select("a, b, d")}
 */
Table fullOuterJoin(Table right, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code left.fullOuterJoin(right, 'a === 'b).select('a, 'b, 'd)}
 */
Table fullOuterJoin(Table right, Expression joinPredicate);

/**
 * Joins this {@link Table} with a user-defined {@link TableFunction}. This join is similar to
 * a SQL inner join with ON TRUE predicate, but works with a table function. Each row of the
 * table is joined with all rows produced by the table function.
 *
 * <p>Example:
 *
 * <pre>
 * {@code
 *   class MySplitUDTF extends TableFunction<String> {
 *     public void eval(String str) {
 *       str.split("#").forEach(this::collect);
 *     }
 *   }
 *
 *   TableFunction<String> split = new MySplitUDTF();
 *   tableEnv.registerFunction("split", split);
 *   table.joinLateral("split(c) as (s)").select("a, b, c, s");
 * }
 * </pre>
 */
Table joinLateral(String tableFunctionCall);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example:
 *
 * <pre>
 * {@code
 *   class MySplitUDTF extends TableFunction[String] {
 *     def eval(str: String): Unit = {
 *       str.split("#").foreach(collect)
 *     }
 *   }
 *
 *   val split = new MySplitUDTF()
 *   table.joinLateral(split('c) as ('s)).select('a, 'b, 'c, 's)
 * }
 * </pre>
 */
Table joinLateral(Expression tableFunctionCall);

/**
 * Lateral inner join with an additional join predicate.
 *
 * <p>Example: {@code table.joinLateral("split(c) as (s)", "a = s").select("a, b, c, s")}
 */
Table joinLateral(String tableFunctionCall, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code table.joinLateral(split('c) as ('s), 'a === 's).select('a, 'b, 'c, 's)}
 */
Table joinLateral(Expression tableFunctionCall, Expression joinPredicate);

/**
 * Joins this {@link Table} with a user-defined {@link TableFunction}. This join is similar to
 * a SQL left outer join with ON TRUE predicate, but works with a table function. Each row of
 * the table is joined with all rows produced by the table function. If the table function does
 * not produce any row, the outer row is padded with nulls.
 *
 * <p>Example: {@code table.leftOuterJoinLateral("split(c) as (s)").select("a, b, c, s")}
 */
Table leftOuterJoinLateral(String tableFunctionCall);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example: {@code table.leftOuterJoinLateral(split('c) as ('s)).select('a, 'b, 'c, 's)}
 */
Table leftOuterJoinLateral(Expression tableFunctionCall);

/**
 * Lateral left outer join with an additional join predicate.
 *
 * <p>Example: {@code table.leftOuterJoinLateral("split(c) as (s)", "a = s").select("a, b, c, s")}
 */
Table leftOuterJoinLateral(String tableFunctionCall, String joinPredicate);

/**
 * Scala variant of the above.
 *
 * <p>Scala Example:
 * {@code table.leftOuterJoinLateral(split('c) as ('s), 'a === 's).select('a, 'b, 'c, 's)}
 */
Table leftOuterJoinLateral(Expression tableFunctionCall, Expression joinPredicate);
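A sketch, assuming two registered tables left (fields a, b, c) and right (fields d, e, f) in the same TableEnvironment, plus the mySplit table function from the map/flatMap sketch:

// Inner and left outer joins; field names of the two inputs must not overlap.
Table inner = left.join(right, "a = d").select("a, b, e");
Table outer = left.leftOuterJoin(right, "a = d").select("a, b, e");

// Lateral join: each order row is joined with the rows mySplit produces for it.
Table lateral = orders
        .joinLateral("mySplit(b) as (token)")
        .select("a, b, token");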
(12) Set operations
/**
 * Minus of two {@link Table}s with duplicate records removed.
 * Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not
 * exist in the right table. Duplicate records in the left table are returned
 * exactly once, i.e., duplicates are removed. Both tables must have identical field types.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.minus(right)}
 */
Table minus(Table right);

/**
 * Minus of two {@link Table}s. Similar to a SQL EXCEPT ALL clause. MinusAll returns the
 * records that do not exist in the right table. A record that is present n times in the left
 * table and m times in the right table is returned (n - m) times, i.e., as many duplicates as
 * are present in the right table are removed. Both tables must have identical field types.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.minusAll(right)}
 */
Table minusAll(Table right);

/**
 * Unions two {@link Table}s with duplicate records removed.
 * Similar to a SQL UNION. The fields of the two union operations must fully overlap.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.union(right)}
 */
Table union(Table right);

/**
 * Unions two {@link Table}s. Similar to a SQL UNION ALL. The fields of the two union
 * operations must fully overlap.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.unionAll(right)}
 */
Table unionAll(Table right);

/**
 * Intersects two {@link Table}s with duplicate records removed. Intersect returns records that
 * exist in both tables. If a record is present in one or both tables more than once, it is
 * returned just once, i.e., the resulting table has no duplicate records. Similar to a
 * SQL INTERSECT. The fields of the two intersect operations must fully overlap.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.intersect(right)}
 */
Table intersect(Table right);

/**
 * Intersects two {@link Table}s. IntersectAll returns records that exist in both tables.
 * If a record is present in both tables more than once, it is returned as many times as it
 * is present in both tables, i.e., the resulting table might have duplicate records. Similar
 * to a SQL INTERSECT ALL. The fields of the two intersect operations must fully overlap.
 *
 * <p>Note: Both tables must be bound to the same {@code TableEnvironment}.
 *
 * <p>Example: {@code left.intersectAll(right)}
 */
Table intersectAll(Table right);
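A sketch, assuming left and other are tables with identical schemas bound to the same TableEnvironment:

Table all = left.unionAll(other);     // keeps duplicates, like UNION ALL
Table dedup = left.union(other);      // removes duplicates, like UNION
Table onlyLeft = left.minus(other);   // like EXCEPT
Table common = left.intersect(other); // like INTERSECT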
(13) Creating a temporal table function
/**
 * Creates a {@link TemporalTableFunction} backed up by this table as a history table.
 * Temporal Tables represent a concept of a table that changes over time and for which
 * Flink keeps track of those changes. A {@link TemporalTableFunction} provides a way to
 * access this data.
 *
 * <p>For more information please check Flink's documentation on Temporal Tables.
 *
 * <p>Currently {@link TemporalTableFunction}s are only supported in streaming.
 *
 * @param timeAttribute Must point to a time attribute. Provides a way to compare which
 *                      records are a newer or older version.
 * @param primaryKey    Defines the primary key. With a primary key it is possible to update
 *                      a row or to delete it.
 * @return {@link TemporalTableFunction} which is an instance of {@link TableFunction}.
 *         It takes one single argument, the {@code timeAttribute}, for which it returns
 *         the matching version of the {@link Table} from which the
 *         {@link TemporalTableFunction} was created.
 */
TemporalTableFunction createTemporalTableFunction(String timeAttribute, String primaryKey);

/**
 * Expression variant of {@link #createTemporalTableFunction(String, String)}; the contract is
 * identical.
 */
TemporalTableFunction createTemporalTableFunction(Expression timeAttribute, Expression primaryKey);
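A sketch following the temporal-table pattern from Flink's documentation; it assumes a StreamTableEnvironment, a registered history table RatesHistory with fields r_currency, r_rate, r_proctime, and an ordersStream table (all names illustrative):

import org.apache.flink.table.functions.TemporalTableFunction;

Table ratesHistory = tEnv.scan("RatesHistory");
TemporalTableFunction rates =
        ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
tEnv.registerFunction("rates", rates);

// Join each order with the rate version that was valid at the order's time.
Table priced = ordersStream
        .joinLateral("rates(o_proctime)", "o_currency = r_currency");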
(14) Renaming with as
/**
 * Renames the fields of the expression result. Use this to disambiguate fields before
 * joining two operations.
 *
 * <p>Example: {@code tab.as("a, b")}
 */
Table as(String fields);

/**
 * Renames the fields of the expression result. Use this to disambiguate fields before
 * joining two operations.
 *
 * <p>Scala Example: {@code tab.as('a, 'b)}
 */
Table as(Expression... fields);
(15) Writing to a table sink (insertInto)
/**
 * Writes the {@link Table} to a {@link TableSink} that was registered under the specified path.
 * For the path resolution algorithm see {@link TableEnvironment#useDatabase(String)}.
 *
 * <p>A batch {@link Table} can only be written to a
 * {@code org.apache.flink.table.sinks.BatchTableSink}, a streaming {@link Table} requires a
 * {@code org.apache.flink.table.sinks.AppendStreamTableSink}, a
 * {@code org.apache.flink.table.sinks.RetractStreamTableSink}, or an
 * {@code org.apache.flink.table.sinks.UpsertStreamTableSink}.
 *
 * @param tablePath          The first part of the path of the registered {@link TableSink} to
 *                           which the {@link Table} is written. This is to ensure at least the
 *                           name of the {@link TableSink} is provided.
 * @param tablePathContinued The remaining part of the path of the registered {@link TableSink}
 *                           to which the {@link Table} is written.
 */
void insertInto(String tablePath, String... tablePathContinued);

/**
 * Writes the {@link Table} to a {@link TableSink} that was registered under the specified name
 * in the initial default catalog. The sink requirements are the same as above.
 *
 * @param tableName The name of the {@link TableSink} to which the {@link Table} is written.
 * @param conf      The {@link QueryConfig} to use.
 * @deprecated use {@link #insertInto(QueryConfig, String, String...)}
 */
@Deprecated
void insertInto(String tableName, QueryConfig conf);

/**
 * Writes the {@link Table} to a {@link TableSink} that was registered under the specified path.
 * For the path resolution algorithm see {@link TableEnvironment#useDatabase(String)}.
 * The sink requirements are the same as above.
 *
 * @param conf               The {@link QueryConfig} to use.
 * @param tablePath          The first part of the path of the registered {@link TableSink} to
 *                           which the {@link Table} is written. This is to ensure at least the
 *                           name of the {@link TableSink} is provided.
 * @param tablePathContinued The remaining part of the path of the registered {@link TableSink}
 *                           to which the {@link Table} is written.
 */
void insertInto(QueryConfig conf, String tablePath, String... tablePathContinued);
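A batch sketch: register a CsvTableSink under a name, then write the table into it; the path and schema are illustrative:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sinks.CsvTableSink;

tEnv.registerTableSink(
        "OutOrders",
        new String[] {"a", "c"},
        new TypeInformation[] {Types.STRING, Types.INT},
        new CsvTableSink("/tmp/orders-out", ","));

orders.insertInto("OutOrders");
env.execute("write orders"); // insertInto only declares the write; execute runs it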
Summary:
This post used the Table API's core class, Table, to survey what the API can do, with usage examples along the way. The Flink Table API covers the familiar SQL operations: select, where, filter, order by, group by, distinct, joins, renaming with as, and so on. On top of that it offers Flink-specific operations: window operations, table aggregation (flatAggregate), map, and aggregate.