FlinkSQL 1.12: Moving Data from Kafka to MySQL with DDL, Writing Only Rows That Match a Filter Condition

1. FlinkSQL: using DDL to move data from Kafka to MySQL, filtering rows by condition before writing

package com.atguigu.day10;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @author dxy
 * @date 2021/4/21 20:58
 */

//TODO Move data from Kafka to MySQL using DDL
public class FlinkSQL15_SQL_DDL_Kafka_MySQL {
    public static void main(String[] args) throws Exception {
        //1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        //2. Register the source table with DDL (legacy connector.* property keys)
        tableEnv.executeSql("create table source_sensor(id string,ts bigint,vc double) " +
                "with (" +
                "'connector.type' = 'kafka'," +
                "'connector.version' = 'universal'," +
                "'connector.topic' = 'test'," +
                "'connector.properties.bootstrap.servers' = 'hadoop102:9092'," +
                "'connector.properties.group.id' = 'bigdata1109'," +
                "'format.type' = 'csv'"
                + ")");

        //Filter: keep only the rows whose id is 'ws_001'
        Table table = tableEnv.sqlQuery("select * from source_sensor where id = 'ws_001'");

        //3. Register the sink table with DDL: MySQL via the JDBC connector
        tableEnv.executeSql("create table sink_sensor(id string,ts bigint,vc double) " +
                "with (" +
                "'connector' = 'jdbc'," +
                "'url' = 'jdbc:mysql://hadoop102:3306/test'," +
                "'table-name' = 'sink_table'," +
                "'username' = 'root'," +
                "'password' = '123456'"
                + ")");

/*        //4. Unfiltered variant: read the whole Kafka source table
        Table source_sensor = tableEnv.from("source_sensor");

        //5. ...and write every row to MySQL
        source_sensor.executeInsert("sink_sensor");*/

        //Write the filtered result to MySQL; executeInsert() already submits the job
        table.executeInsert("sink_sensor");

        //6. Not actually needed: executeInsert() has already submitted the job, so
        //this call finds no DataStream operators and throws an exception
        //(see the test output below); it can be ignored or the line removed
        env.execute();
    }
}
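Note that the source table above is declared with the legacy connector.* property keys. A minimal sketch of the same source using the new-style connector options of Flink 1.12, reusing the tableEnv from the class above (same topic, brokers, and group id as in the example; the scan.startup.mode value is an assumption, pick whatever fits the test):

//Sketch only: the Kafka source declared with Flink 1.12's new-style options
tableEnv.executeSql("create table source_sensor(id string,ts bigint,vc double) " +
        "with (" +
        "'connector' = 'kafka'," +
        "'topic' = 'test'," +
        "'properties.bootstrap.servers' = 'hadoop102:9092'," +
        "'properties.group.id' = 'bigdata1109'," +
        "'scan.startup.mode' = 'latest-offset'," +  //assumption: read from latest
        "'format' = 'csv'" +
        ")");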

2. Testing

[atguigu@hadoop102 ~]$ kafka-console-producer.sh --broker-list hadoop102:9092 --topic test
>ws_001,1577844001,45
>ws_002,1577844001,45
>ws_001,1577844001,45

IDEA console

(Figure 1: IDEA console output)

An error is printed, but it does not affect the job: executeInsert() has already submitted it, so the trailing env.execute() finds no DataStream operators and throws "No operators defined in streaming topology", which can safely be ignored (or avoided by deleting the env.execute() call).

Check the data in MySQL

(Figure 2: rows written to the MySQL table sink_table)

The target table must be created in MySQL before running the job; the JDBC connector does not create it automatically, and without it there is nothing to test against.

-- FlinkSQL test: write Kafka data into MySQL
create table sink_table (
    id VARCHAR(255),
    ts BIGINT,
    vc DOUBLE
);
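After producing the three test records, the result can be checked from the MySQL client; only the two ws_001 rows should be present, since the job filters on id = 'ws_001':

-- quick check in the MySQL client
select * from sink_table;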

Building on this data, we can take one more step and use a MySQL feature: upsert semantics, i.e. update the row if it already exists, insert it if it does not. A sketch follows.
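A sketch of how this could look: the Flink 1.12 JDBC connector switches from append mode to upsert mode when the sink table declares a primary key, and for MySQL it then writes with INSERT ... ON DUPLICATE KEY UPDATE. Choosing id as the key is an assumption here (the original table has none):

-- MySQL side: give sink_table a primary key so duplicates can be detected
create table sink_table (
    id VARCHAR(255) PRIMARY KEY,
    ts BIGINT,
    vc DOUBLE
);

-- Flink side: declare the same key as PRIMARY KEY NOT ENFORCED;
-- the JDBC sink then upserts instead of appending
create table sink_sensor (
    id string,
    ts bigint,
    vc double,
    primary key (id) not enforced
) with (
    'connector' = 'jdbc',
    'url' = 'jdbc:mysql://hadoop102:3306/test',
    'table-name' = 'sink_table',
    'username' = 'root',
    'password' = '123456'
);

With id as the key, repeated ws_001 records overwrite one another instead of piling up as separate rows, which is exactly the "update if present, insert if absent" behavior.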

 
