<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.11</artifactId>
    <version>1.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table_2.11</artifactId>
    <version>1.7.1</version>
</dependency>
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(10);
// Checkpoint every 60 s with exactly-once semantics
env.enableCheckpointing(60000L, CheckpointingMode.EXACTLY_ONCE);

Properties properties = new Properties();
properties.setProperty("bootstrap.servers", broker);
properties.setProperty("group.id", groupId);
properties.setProperty("max.partition.fetch.bytes", "10485760");
properties.setProperty("request.timeout.ms", "120000");
properties.setProperty("session.timeout.ms", "60000");
properties.setProperty("heartbeat.interval.ms", "10000");

FlinkKafkaConsumer09<String> myConsumer =
        new FlinkKafkaConsumer09<>(topic, new SimpleStringSchema(), properties);
DataStream<String> sourceStream = env.addSource(myConsumer);
Getting the StreamTableEnvironment
// In Flink 1.7 the factory method lives on TableEnvironment
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
Several ways to register or obtain a Table (note: registerDataStreamInternal is an internal Flink method, so the public registerDataStream is used below):

// 1. Register the stream under a name, then scan it
tableEnv.registerDataStream("tableName", sourceStream);
Table table = tableEnv.scan("tableName");

// 2. Register with explicit field names, then scan
tableEnv.registerDataStream("tableName", sourceStream, "fieldName");
Table table = tableEnv.scan("tableName");

// 3. Convert directly to a Table with explicit field names (no registration)
Table table = tableEnv.fromDataStream(sourceStream, "fieldName");

// 4. Convert directly with default field names
Table table = tableEnv.fromDataStream(sourceStream);
In this example the DataStream carries String elements (an atomic type), so the resulting Table has a single field.
If the DataStream's element type is a POJO with multiple fields, pass the field names separated by commas; a sketch follows below.
If it is a DataStream of a Tuple type, the fields default to f0, f1, ... and can likewise be renamed through the field-name string.
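As a minimal sketch of the POJO case (UserEvent and its fields are made up for illustration; they are not part of the original example):

import org.apache.flink.api.common.functions.MapFunction;

// Hypothetical POJO: Flink POJOs need a public no-arg constructor
// and public fields (or getters/setters)
public class UserEvent {
    public String name;
    public int age;
    public UserEvent() {}
}

DataStream<UserEvent> events = sourceStream.map(new MapFunction<String, UserEvent>() {
    @Override
    public UserEvent map(String value) {
        UserEvent e = new UserEvent();
        e.name = value;   // treat the raw Kafka string as the name field
        e.age = 0;
        return e;
    }
});

// Comma-separated field names select the POJO fields for the Table schema
tableEnv.registerDataStream("userEvents", events, "name, age");
Table users = tableEnv.scan("userEvents");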
Table API
table.select(...).filter(...);
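For example, assuming the stream's single String field was registered under the name word (a made-up name), a string-expression query in the 1.7 Table API could look like:

Table result = table
        .filter("word === 'flink'")   // keep only rows whose value equals 'flink'
        .select("word");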
SQL API
tableEnv.sqlQuery("SELECT * FROM tableName WHERE ...");
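Against the table registered earlier, a concrete query might be (again assuming a word field):

Table sqlResult = tableEnv.sqlQuery(
        "SELECT word, COUNT(*) AS cnt FROM tableName GROUP BY word");

Note that a grouped aggregation like this produces an updating table, so it must be converted with toRetractStream (below) rather than toAppendStream.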
Converting a Table to a DataStream
DataStream<String> sinkStream = tableEnv.toAppendStream(table, String.class);
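toAppendStream only works for tables produced by insert-only queries (an updating table such as a grouped aggregation would throw an exception here). Once converted, the stream is sunk and the job launched as usual; the job name below is arbitrary:

sinkStream.print();               // or addSink(...) to write to an external system
env.execute("flink-table-demo");  // hypothetical job name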
Converting a Table to a DataStream with a flag (flag=true marks the record as an insert; flag=false retracts it)
DataStream<Tuple2<Boolean, String>> sinkStream = tableEnv.toRetractStream(table, String.class);
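As a sketch of consuming the flag, you can keep only the accumulate messages (flag=true) and unwrap the value:

import org.apache.flink.api.common.typeinfo.Types;

DataStream<String> inserts = sinkStream
        .filter(t -> t.f0)        // f0 is the Boolean add/retract flag
        .map(t -> t.f1)           // f1 carries the actual row value
        .returns(Types.STRING);   // type hint, since lambdas erase generics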
Finally, a link to the official Table API documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/tableApi.html#operations