Big Data - Playing with Data - FLINK - Consuming Data from Kafka

1. Based on the Earlier Kafka Deployment

This article assumes Kafka is already installed and running, as described in the earlier post: Big Data - Playing with Data - Kafka Installation.

2. Writing the Code in Flink

package com.lyh.flink04;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import java.util.Properties;

public class flink04_fromkafka {
    public static void main(String[] args) throws Exception {
        // Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Kafka consumer configuration: broker address and consumer group
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "hadoop100:9092");
        properties.setProperty("group.id", "test");

        // Read from the "wordsendertest" topic, deserializing each record
        // as a String, and print every message to stdout
        env.addSource(new FlinkKafkaConsumer<>("wordsendertest", new SimpleStringSchema(), properties))
                .print();

        // Submit the job; it runs until cancelled
        env.execute();
    }
}
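For the code above to compile, the project needs the Flink Kafka connector on its classpath. A minimal Maven dependency sketch is shown below; the version and Scala suffix are assumptions and must match your Flink installation:

```xml
<!-- Flink Kafka connector (provides FlinkKafkaConsumer).
     The version (1.13.0) and Scala suffix (_2.12) are examples;
     align them with your cluster's Flink version. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.12</artifactId>
    <version>1.13.0</version>
</dependency>
```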

3. Running the Test

Run the program. It will block and wait for data to arrive on the Kafka topic; each message produced to the topic is consumed and printed to the console.
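To feed the job some test data, you can start a console producer on the broker host. This is a sketch: the script path assumes a standard Kafka installation directory, and older Kafka versions use `--broker-list` instead of `--bootstrap-server`. The topic name and broker address match the code above.

```shell
# Run from the Kafka installation directory on hadoop100.
# Each line typed here should appear in the Flink job's output.
bin/kafka-console-producer.sh --bootstrap-server hadoop100:9092 --topic wordsendertest
```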
