Flink Tutorial, Lesson 2: Reading Various Data Sources with the Flink DataStream API

1. Reading from Kafka

First, add the Kafka connector dependency to your pom.xml:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
    <version>1.10.1</version>
</dependency>
package com.atguigu.Adatastream_api.source;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

import java.util.Properties;

/**
 * Requires the kafka-connector dependency.
 * To run: start zkServer first, then kafka-server, and finally a kafka producer;
 * messages produced there are consumed here.
 * zkServer.sh start
 * kafka-server-start.sh
 */
public class KafkaSourceTest {  // class name restored as a placeholder; the original was truncated
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka consumer configuration (broker address and group id are examples)
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "consumer-group");

        // Read the topic as a stream of strings (topic name "sensor" is a placeholder)
        DataStream<String> dataStream = env.addSource(
                new FlinkKafkaConsumer011<>("sensor", new SimpleStringSchema(), properties));

        dataStream.print();
        env.execute("kafka source");
    }
}
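The startup order mentioned in the comment can be sketched as the following shell commands; the config path and the topic name are assumptions, so adjust them to your installation:

```shell
# 1. Start ZooKeeper
zkServer.sh start

# 2. Start the Kafka broker (config file path depends on your installation)
kafka-server-start.sh $KAFKA_HOME/config/server.properties

# 3. Start a console producer; lines typed here are consumed by the Flink job
#    (topic name "sensor" is a placeholder matching the consumer code)
kafka-console-producer.sh --broker-list localhost:9092 --topic sensor
```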
