[Flink] Flink Table API and SQL Development (4)

Table of Contents

Why Table API and SQL

API

Example

Import pom dependencies

Code implementation

Run and check the results

Table union operation


Introduction to Table API and SQL

Why Table API and SQL

The Table API is a SQL-like API. SQL is a declarative language, so data can be processed without having to care about the underlying implementation.

Apache Flink 1.12 Documentation: Concepts & Common API
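To make the contrast concrete, here is a minimal sketch of the same filter written once as SQL and once with the Table API. It assumes a StreamTableEnvironment named tEnv with a registered table Student(id, name, age) and the static import of Expressions.$; these names are placeholders for illustration, not from the original post.

// SQL: declarative, the planner decides how to execute the query
Table sqlResult = tEnv.sqlQuery("SELECT id, name, age FROM Student WHERE age >= 19");

// Table API: the same query expressed with SQL-like method calls
Table apiResult = tEnv.from("Student")
        .filter($("age").isGreaterOrEqual(19))
        .select($("id"), $("name"), $("age"));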


API

  1. Create a table
  2. Query the table
  3. Output the table (a minimal sketch of these three steps follows below)
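A minimal sketch of the three steps, assuming a table environment tEnv and a DataStream<Student> called studentStream already exist (the stream and variable names are placeholders):

// 1. create a table: turn an existing DataStream<Student> into a Table
Table student = tEnv.fromDataStream(studentStream, $("id"), $("name"), $("age"));

// 2. query the table: with SQL (sqlQuery) or with Table API calls
Table adults = tEnv.sqlQuery("SELECT id, name, age FROM " + student + " WHERE age >= 19");

// 3. output the table: convert it back to a DataStream and print it
tEnv.toAppendStream(adults, Student.class).print();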

 

Example

Register DataStreams as a Table and as a view, then query them with SQL.

Import pom dependencies




<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>flink-dataset-demo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.12</artifactId>
            <version>1.12.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.4</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>RELEASE</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-java-bridge_2.12</artifactId>
            <version>1.12.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.12</artifactId>
            <version>1.12.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.12</artifactId>
            <version>1.12.2</version>
        </dependency>
    </dependencies>
</project>


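The flink-table-planner-blink dependency pulls in the Blink planner, which is the default planner since Flink 1.11. If you want to select it explicitly, you can pass EnvironmentSettings (from org.apache.flink.table.api) when creating the table environment; a minimal sketch:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .useBlinkPlanner()     // the planner provided by flink-table-planner-blink
        .inStreamingMode()
        .build();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);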
 

Code implementation


package cn.edu.hgu.flink.table;

import cn.edu.hgu.flink.DataStream.model.Student;
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import java.util.Arrays;

import static org.apache.flink.table.api.Expressions.$;

/**
 * Demo of Flink's Table API and SQL
 */
public class FlinkTableDemo {
    public static void main(String[] args) throws Exception {
        // 1. env: the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        // 2. table env: bridges DataStream and Table/SQL
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        // 3. source: two in-memory DataStreams of Student POJOs
        DataStream<Student> studentA = env.fromCollection(Arrays.asList(
                new Student(101, "张三", 20),
                new Student(102, "李四", 18),
                new Student(103, "王五", 19)
        ));
        DataStream<Student> studentB = env.fromCollection(Arrays.asList(
                new Student(201, "赵六", 21),
                new Student(202, "钱七", 19),
                new Student(203, "孙八", 18)
        ));
        // 4. register tables
        // 4.1 convert a DataStream into a Table object
        Table tableA = tEnv.fromDataStream(studentA, $("id"), $("name"), $("age"));
        // 4.2 register a DataStream as a temporary view that SQL can reference by name
        tEnv.createTemporaryView("StudentB", studentB, $("id"), $("name"), $("age"));
        // 5. transformation
        Table tableB = tEnv.sqlQuery("select * from StudentB where age >= 19"); // SQL
        // union: concatenating tableA into the SQL string registers it under a generated name
        Table unionTable = tEnv.sqlQuery(
                "select * from " + tableA + " where age < 19 " +
                        "union all " +
                        "select * from StudentB where age >= 19"
        );
        // 6. sink
        // convert the tables back to append-only DataStreams
        DataStream<Student> resultA = tEnv.toAppendStream(tableA, Student.class);
        DataStream<Student> resultB = tEnv.toAppendStream(tableB, Student.class);
        DataStream<Student> resultUnion = tEnv.toAppendStream(unionTable, Student.class);
//        tableA.printSchema();
//        tableB.printSchema();
//        resultA.print();
//        resultB.print();
        resultUnion.print();
        // 7. execute
        env.execute();
    }
}
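The code imports a Student POJO from cn.edu.hgu.flink.DataStream.model that is not shown in the post. Below is a minimal sketch of what it presumably looks like, with the field names inferred from $("id"), $("name"), $("age") and the constructor calls, using the Lombok dependency from the pom:

package cn.edu.hgu.flink.DataStream.model;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

/**
 * Student POJO used by the Table API demo (assumed layout).
 * Flink treats it as a POJO because it is public and has a no-arg
 * constructor plus getters/setters, all generated here by Lombok.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Student {
    private Integer id;
    private String name;
    private Integer age;
}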

 

Run and check the results

With the sample data above, the union query keeps 李四 (age 18) from the first stream together with 赵六 (age 21) and 钱七 (age 19) from the StudentB view; the ordering of the printed records and the N> subtask prefixes in the console depend on the parallelism.

Table union operation

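The same union can also be written directly with the Table API instead of SQL. A minimal sketch, reusing tableA, tEnv, and the StudentB view from the code above:

// Table API equivalent of the "union all" SQL query
Table filteredA = tableA.filter($("age").isLess(19));
Table filteredB = tEnv.from("StudentB").filter($("age").isGreaterOrEqual(19));
Table unioned = filteredA.unionAll(filteredB);

tEnv.toAppendStream(unioned, Student.class).print();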

 

The next chapter will introduce Flink's four cornerstones.
