Apache Flink is a distributed big-data processing engine that performs stateful computations over bounded and unbounded data streams. It can be deployed on a variety of cluster environments and computes quickly over data of any scale.
Unified batch and stream processing
High-throughput, low-latency, high-performance stream processing
Window operations based on event time
Exactly-once semantics for stateful computation
Highly flexible window operations: windows based on time, count, and session
A continuous streaming model with backpressure support
Fault tolerance based on lightweight distributed snapshots
Support for iterative computation
Flink implements its own memory management inside the JVM
Automatic program optimization: expensive operations such as shuffle and sort are avoided where possible, and intermediate results are cached when necessary
Framework | Strengths | Weaknesses |
---|---|---|
Storm | Low latency | Low throughput, no exactly-once guarantee, limited programming API |
Spark Streaming | High throughput, exactly-once guarantee, rich programming API | Relatively high latency |
Flink | Low latency, high throughput, exactly-once guarantee, rich programming API | Still evolving quickly, so the API changes frequently |
Standalone mode is Flink's built-in distributed cluster mode; it does not depend on any external resource scheduling framework.
Download the Flink distribution: https://flink.apache.org/downloads.html
Upload the Flink package to the Linux servers
Extract the package (use -C to specify the target directory)
Edit flink-conf.yaml in the conf directory
Configuration:
#address of the JobManager
jobmanager.rpc.address: node-1.51doit.cn
#number of available slots per TaskManager
taskmanager.numberOfTaskSlots: 2
Edit the workers file in the conf directory to list the nodes where TaskManagers run
linux02
linux03
Copy the configured Flink directory to the other nodes
for i in {2..3}; do scp -r flink-1.9.1/ linux0$i:$PWD; done
Run the startup script
bin/start-cluster.sh
Use the jps command to check the Java processes
On node-1 you should see the StandaloneSessionClusterEntrypoint process (the JobManager); on the other nodes you should see TaskManagerRunner (the TaskManager)
Open the JobManager web UI on port 8081 to monitor the cluster
bin/flink run -m linux01:8081 -p 4 -c org.apache.flink.streaming.examples.socket.SocketWindowWordCount examples/streaming/SocketWindowWordCount.jar --hostname linux01 --port 8888
Parameter notes:
-m specifies the JobManager address; the port after the hostname is the JobManager's REST port, not the RPC port (the RPC port is 6123)
-p specifies the parallelism
-c specifies the fully qualified name of the class containing the main method
--hostname linux01 --port 8888 --- the args passed to the program's main method
Spark Streaming | Flink |
---|---|
DStream | DataStream |
Transformation | Transformation |
Action | Sink |
Task | SubTask |
Pipeline | Operator chains |
DAG | DataFlow Graph |
Master + Driver | JobManager |
Worker + Executor | TaskManager |
In Flink programs, you can read and set the parallelism with getParallelism() and setParallelism(), as in the sketch below.
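A minimal sketch (not from the original notes; the host, port, and class name are placeholders) showing the difference between job-level and operator-level parallelism:
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class ParallelismDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        System.out.println("environment default parallelism: " + env.getParallelism());
        //job-level default parallelism
        env.setParallelism(4);
        DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
        //operator-level parallelism overrides the job-level default for this operator only
        SingleOutputStreamOperator<String> upper = lines.map(String::toUpperCase).returns(Types.STRING).setParallelism(2);
        System.out.println("map operator parallelism: " + upper.getParallelism());
        upper.print();
        env.execute("ParallelismDemo");
    }
}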
Requires Maven 3.0.4 or later and JDK 8
Run the Maven command below; network access is needed if the required jars are not in the local Maven repository
mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-java \
-DarchetypeVersion=1.9.1 \
-DgroupId=cn._51doit.flink \
-DartifactId=flink-java \
-Dversion=1.0 \
-Dpackage=cn._51doit.flink \
-DinteractiveMode=false
Alternatively, run the following command (network access required):
curl https://flink.apache.org/q/quickstart.sh | bash -s 1.9.1
Run the Maven command below; network access is needed if the required jars are not in the local Maven repository
mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-scala \
-DarchetypeVersion=1.9.1 \
-DgroupId=cn._51doit.flink \
-DartifactId=flink-scala \
-Dversion=1.0 \
-Dpackage=cn._51doit.flink \
-DinteractiveMode=false
Alternatively, run the following command (network access required):
curl https://flink.apache.org/q/quickstart-scala.sh | bash -s 1.9.1
Flink provides programming abstractions at different levels.
By calling operators on these abstract data sets to build a dataflow, you can run both streaming and batch computations over distributed data.
Two abstraction models:
Three approaches:
Read data from a socket port in real time and continuously count how many times each word occurs. The program keeps running; before starting it, run nc -lk 8888 to open a socket that sends the data.
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
import java.util.Arrays;
public class WordCount01 {
public static void main(String[] args) throws Exception{
//create the stream execution environment (context)
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//create a DataStream (receives the messages)
DataStreamSource<String> lines = env.socketTextStream(args[0], 8888);
/**
* Write the execution logic with a lambda expression and call the Transformation
* In Java, the type information of a lambda cannot be inferred automatically,
* so the input and output types have to be declared explicitly
*/
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = lines
.flatMap(
(String line, Collector<Tuple2<String, Integer>> out) ->
Arrays.stream(line.split(" "))
.map(e -> Tuple2.of(e, 1))
.forEach(out::collect))
.returns(Types.TUPLE(Types.STRING, Types.INT))
.keyBy(e -> e.f0)
.sum(1);
//call the sink
summed.print();
//start the job
env.execute();
}
}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
public class WordCount01 {
public static void main(String[] args) throws Exception{
//create the stream execution environment (context)
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//create a DataStream (receives the messages)
DataStreamSource<String> lines = env.socketTextStream(args[0], 8888);
SingleOutputStreamOperator<String> flatMaped = lines.flatMap((String line, Collector<String> collector) -> {
String[] words = line.split(" ");
for (String word : words) {
collector.collect(word);
}
}).returns(Types.STRING);
SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndOne = flatMaped.map((String word) -> Tuple2.of(word, 1)).returns(Types.TUPLE(Types.STRING, Types.INT));
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndOne.keyBy((Tuple2<String, Integer> tp)-> tp.f0);
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.sum(1).returns(Types.TUPLE(Types.STRING, Types.INT));
//call the sink
summed.print();
//start the job
env.execute();
}
}
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
//For comparison, the steps of a Spark Streaming program:
//1. Create a StreamingContext
//2. Create a DStream from the StreamingContext
//3. Apply Transformation(s) to the DStream
//4. Call an Action on the DStream
//5. Call the StreamingContext's start method
//6. Block and await termination
public class WordCount02 {
public static void main(String[] args) throws Exception{
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> lines = env.socketTextStream(args[0], 8888);
DataStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
@Override
public void flatMap(String line, Collector<String> collector) throws Exception {
String[] words = line.split(" ");
for (String word : words) {
collector.collect(word);
}
}
});
SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndOne = words.map(new MapFunction<String, Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> map(String word) throws Exception {
return Tuple2.of(word, 1);
}
});
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndOne.keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
@Override
public String getKey(Tuple2<String, Integer> tp) throws Exception {
return tp.f0;
}
});
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.sum(1);
//end of the transformations
summed.print();
env.execute();
}
}
Source: the data source
In Flink, a Source is responsible for reading the data.
Usage demo:
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.LocalStreamEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.util.Properties;
public class KafkaSourceDemo {
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
//set the port of the web UI
conf.setInteger("rest.port", 9999);
LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(conf);
//get the parallelism of the StreamExecutionEnvironment
int par = env.getParallelism();
System.out.println("default parallelism of the program: " + par);
/**
* Create a Kafka source
* The three properties below are, in order:
* the broker addresses;
* the offset reset strategy: start from the earliest offset if none has been committed, otherwise resume from the committed offset;
* the consumer group
*/
//create the Kafka properties
Properties pro = new Properties();
pro.setProperty("bootstrap.servers", "linux01:9092,linux02:9092,linux03:9092");
pro.setProperty("auto.offset.reset", "earliest");
pro.setProperty("group.id", "g001");
//create a FlinkKafkaConsumer (topic to read; a deserialization schema; the Kafka properties)
FlinkKafkaConsumer<String> abb = new FlinkKafkaConsumer<>("abb", new SimpleStringSchema(), pro);
//create the Source
DataStreamSource<String> lines = env.addSource(abb);
//get the parallelism of the Source
int par2 = lines.getParallelism();
System.out.println("parallelism of the Source: " + par2);
lines.print();
env.execute();
}
}
In production you usually write your own Source: override the run method to implement the desired logic, use the open, cancel and close methods for auxiliary functionality, and plug it in with addSource.
/**
* A custom parallel Source that produces an unbounded stream of data
*/
private static class MySource extends RichParallelSourceFunction<String>{
private boolean flag = true;
@Override
public void open(Configuration parameters) throws Exception {
System.out.println("open method invoked");
}
@Override
public void run(SourceContext<String> sourceContext) throws Exception {
//get the index of this subtask
int index = getRuntimeContext().getIndexOfThisSubtask();
while (flag){
//generate a random string
String ss = UUID.randomUUID().toString();
sourceContext.collect(index+ "===>" + ss);
Thread.sleep(1000);
}
}
@Override
public void cancel() {
flag = false;
System.out.println("-------------cancel invoked---------------");
}
@Override
public void close() throws Exception {
System.out.println("close method invoked");
}
}
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
Reads data sent to the specified host and port.
Elements can be separated by a delimiter.
DataStreamSource<String> lines = env.readTextFile("/Users/xing/Desktop/data");
A Source that reads a file.
A bounded stream (once the file has been fully read, the resources are closed).
Runs with multiple parallel instances.
String path = "/Users/xing/Desktop/data";
TextInputFormat textInputFormat = new TextInputFormat(new Path(path));
//four arguments: the input format, the path, the read mode (once or continuously), and the scan interval
DataStreamSource<String> lines = env.readFile(textInputFormat, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 5000);
A Source that reads a file.
A continuous (repeatedly scanned) stream.
Runs with multiple parallel instances.
It does not track offsets: if the file content changes, the whole file is read again.
val list = List(1,2,3,4,5,6,7,8,9)
val inputStream = env.fromCollection(list)
val iterator = Iterator(1,2,3,4)
val inputStream = env.fromCollection(iterator)
//creates a data stream from a given sequence of objects; all objects must be of the same type
val lst1 = List(1,2,3,4,5)
val lst2 = List(6,7,8,9,10)
val inputStream = env.fromElements(lst1, lst2)
//generates a sequence of numbers from the given range, in parallel
val inputStream = env.generateSequence(1,10)
In the DataStream class, the map method in turn calls the transform method.
Inside transform, a StreamMap object is created, and our user-defined mapper logic is passed into it.
public <R> SingleOutputStreamOperator<R> map(
MapFunction<T, R> mapper, TypeInformation<R> outputType) {
return transform("Map", outputType, new StreamMap<>(clean(mapper)));
}
Looking into the StreamMap class, its constructor calls the constructor of its parent class, AbstractUdfStreamOperator.
public class StreamMap<IN, OUT> extends AbstractUdfStreamOperator<OUT, MapFunction<IN, OUT>>
implements OneInputStreamOperator<IN, OUT> {
private static final long serialVersionUID = 1L;
public StreamMap(MapFunction<IN, OUT> mapper) {
super(mapper);
chainingStrategy = ChainingStrategy.ALWAYS;
}
@Override
public void processElement(StreamRecord<IN> element) throws Exception {
output.collect(element.replace(userFunction.map(element.getValue())));
}
}
Stepping into the parent class, we see that the user logic is stored in a field called userFunction.
/** The user function. */
protected final F userFunction;
/** Flag to prevent duplicate function.close() calls in close() and dispose(). */
private transient boolean functionsClosed = false;
public AbstractUdfStreamOperator(F userFunction) {
this.userFunction = requireNonNull(userFunction);
checkUdfCheckpointingPreconditions();
}
Back in the StreamMap class, look at the processElement method.
In processElement, each incoming record is passed to the user function, and the result is emitted downstream via output.collect.
@Override
public void processElement(StreamRecord<IN> element) throws Exception {
output.collect(element.replace(userFunction.map(element.getValue())));
}
Implementing the map functionality yourself via the transform method:
import cn._51doit.flink.day02.functions.MyMapFunc;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.api.operators.StreamMap;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
/**
* Instead of using the map operator, call transform directly to achieve the same effect
*/
public class MyMapDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//spark
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
//SingleOutputStreamOperator<String> uppered = lines.transform("MyMap", TypeInformation.of(String.class), new StreamMap<>(String::toUpperCase));
//the two variants have the same effect
SingleOutputStreamOperator<String> uppered = lines.transform("MyMap", TypeInformation.of(new TypeHint<String>() {}), new MyStreamMap());
uppered.print();
env.execute();
}
private static class MyStreamMap extends AbstractStreamOperator<String> implements OneInputStreamOperator<String, String> {
@Override
public void processElement(StreamRecord<String> element) throws Exception {
String in = element.getValue();
String out = in.toUpperCase();
//emit the processed record via output
//output.collect(new StreamRecord<>(out));
output.collect(element.replace(out));
}
}
}
Typical usage (lambda expression):
public class FlatMapDemo {
public static void main(String[] args) throws Exception {
LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(new Configuration());
DataStreamSource<String> lines = env.socketTextStream("linux01", 8899);
SingleOutputStreamOperator<String> fla = lines
.flatMap(
(String e, Collector<String> out) -> Arrays.stream(e.split(" ")).forEach(out::collect)
).returns(Types.STRING);
fla.print();
env.execute();
}
}
Custom implementation:
public class FlatMapDemo02 {
public static void main(String[] args) throws Exception {
LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(new Configuration());
DataStreamSource<String> lines = env.socketTextStream("linux01", 8899);
SingleOutputStreamOperator<String> fla = lines.transform(
"MyFlatMap",
TypeInformation.of(new TypeHint<String>() {}),
new MyStreamFlatMap());
fla.print();
env.execute();
}
private static class MyStreamFlatMap extends AbstractStreamOperator<String> implements OneInputStreamOperator<String, String>{
@Override
public void processElement(StreamRecord<String> element) throws Exception {
String value = element.getValue();
String[] ss = value.split(" ");
for (String s : ss) {
output.collect(new StreamRecord<String>(s));
}
}
}
}
Adding extra logic inside the custom FlatMap:
public class FlatMapDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//DataStreamSource<Integer> nums = env.fromElements(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
SingleOutputStreamOperator<Tuple2<String, Integer>> res = lines.transform("MyFlatMap", TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {
}), new MyStreamFlatMap());
res.print();
env.execute();
}
private static class MyStreamFlatMap extends AbstractStreamOperator<Tuple2<String, Integer>> implements OneInputStreamOperator<String, Tuple2<String, Integer>> {
@Override
public void processElement(StreamRecord<String> element) throws Exception {
String line = element.getValue();
String[] words = line.split(" ");
for (String word : words) {
output.collect(new StreamRecord<>(Tuple2.of(word, 1)));
}
}
}
}
val nums = env.generateSequence(1,10)
val filtered = nums.filter(_ % 2 == 0)
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class KeyByDemo {
public static void main(String[] args) throws Exception{
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
DataStreamSource<String> lines = env.socketTextStream("linux01", 9988);
SingleOutputStreamOperator<Tuple3<String, String, Integer>> mapped = lines.map(e -> {
String[] ss = e.split(",");
return Tuple3.of(ss[0], ss[1], Integer.parseInt(ss[2]));
}).returns(Types.TUPLE(Types.STRING, Types.STRING, Types.INT));
//pass one or more field indices
// KeyedStream<Tuple3<String, String, Integer>, Tuple> keyed = mapped.keyBy(0, 1);
//pass one or more field names
// KeyedStream<Tuple3<String, String, Integer>, Tuple> keyed = mapped.keyBy("f0", "f1");
//concatenate the two fields into a single string key
// KeyedStream<Tuple3<String, String, Integer>, String> keyed = mapped.keyBy(e -> e.f0 + e.f1);
//when the key is a tuple built in a lambda, the return type cannot be inferred, so a TypeInformation must be supplied
// KeyedStream<Tuple3<String, String, Integer>, Tuple2<String, String>> keyed = mapped.keyBy(
//         e -> Tuple2.of(e.f0, e.f1),
//         TypeInformation.of(new TypeHint<Tuple2<String, String>>(){})
// );
//using an anonymous inner class
KeyedStream<Tuple3<String, String, Integer>, Tuple2<String, String>> keyed = mapped.keyBy(new KeySelector<Tuple3<String, String, Integer>, Tuple2<String, String>>() {
@Override
public Tuple2<String, String> getKey(Tuple3<String, String, Integer> stringStringIntegerTuple3) throws Exception {
return Tuple2.of(stringStringIntegerTuple3.f0, stringStringIntegerTuple3.f1);
}
});
SingleOutputStreamOperator<Tuple3<String, String, Integer>> summed = keyed.sum(2);
summed.print();
env.execute();
}
}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class KeyByDemo8 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//上海市,闵行区,2000
//北京市,昌平区,1000
//辽宁省,沈阳市,1000
//辽宁省,大连市,2000
//上海市,浦东区,3000
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
SingleOutputStreamOperator<DataBean2> beanDataStream = lines.map(e -> {
String[] fields = e.split(",");
String province = fields[0];
String city = fields[1];
double money = Double.parseDouble(fields[2]);
//of is a factory method defined on DataBean2
return DataBean2.of(province, city, money);
}).returns(Types.POJO(DataBean2.class));
//KeyedStream<DataBean2, Tuple> keyedStream = beanDataStream.keyBy("province", "city");
//SingleOutputStreamOperator<DataBean2> summed = keyedStream.sum("money");
KeyedStream<DataBean2, String> keyedStream = beanDataStream.keyBy(b -> b.province);
SingleOutputStreamOperator<DataBean2> summed = keyedStream.sum("money");
summed.print();
env.execute();
}
}
An aggregation operator.
It performs a rolling aggregation on an already keyed stream,
combining the current element with the last reduced value to produce a new value.
public class reduceDemo {
public static void main(String[] args) throws Exception{
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
DataStreamSource<String> lines = env.socketTextStream("linux01", 9988);
/**
* Sample input data format:
* //spark,2
* //hadoop,3
* //flink,4
*/
SingleOutputStreamOperator<Tuple2<String, Integer>> mapped = lines.map(e -> {
String[] ss = e.split(",");
return Tuple2.of(ss[0], Integer.parseInt(ss[1]));
}).returns(Types.TUPLE(Types.STRING, Types.INT));
KeyedStream<Tuple2<String, Integer>, String> keyed = mapped.keyBy(e -> e.f0);
//anonymous inner class
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> reduce(Tuple2<String, Integer> t1, Tuple2<String, Integer> t2) throws Exception {
return Tuple2.of(t2.f0, t1.f1 + t2.f1);
}
});
//equivalent lambda expression
// SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.reduce((e1, e2) -> Tuple2.of(e2.f0, e1.f1 + e2.f1));
summed.print();
env.execute();
}
}
Like reduce, these aggregation operators all aggregate in a rolling fashion; each of the following operators can also be expressed with reduce (see the sketch below).
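A minimal sketch (not from the original notes; it assumes max is one of the operators meant, and the host and port are placeholders) showing that a rolling aggregation such as max can be rewritten as a reduce:
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class RollingAggregationDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        //input lines such as: spark,3
        KeyedStream<Tuple2<String, Integer>, String> keyed = env
                .socketTextStream("localhost", 8888)
                .map(e -> {
                    String[] ss = e.split(",");
                    return Tuple2.of(ss[0], Integer.parseInt(ss[1]));
                })
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                .keyBy(t -> t.f0);
        //rolling max on field 1 ...
        SingleOutputStreamOperator<Tuple2<String, Integer>> maxed = keyed.max(1);
        //... is roughly equivalent to a reduce that keeps the larger count per key
        SingleOutputStreamOperator<Tuple2<String, Integer>> reducedMax =
                keyed.reduce((t1, t2) -> t2.f1 > t1.f1 ? Tuple2.of(t1.f0, t2.f1) : t1);
        maxed.print("max");
        reducedMax.print("reduce-as-max");
        env.execute();
    }
}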
Union of two or more data streams: creates a new stream containing all elements from all of the input streams.
Note: if you union a stream with itself, every element appears twice in the resulting stream.
The data types of the unioned streams must be identical (see the sketch below).
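A minimal sketch (not from the original notes; host and ports are placeholders) of union on two streams of the same type:
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class UnionDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        //two sources of the SAME type (String); union requires identical element types
        DataStreamSource<String> lines1 = env.socketTextStream("localhost", 8888);
        DataStreamSource<String> lines2 = env.socketTextStream("localhost", 9999);
        //elements from both streams flow into one stream; the relative order between the inputs is not guaranteed
        DataStream<String> union = lines1.union(lines2);
        union.print();
        env.execute();
    }
}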
"Connects" two data streams while keeping their types, allowing state to be shared between the two streams.
The two streams may have different types, but once map, flatMap, etc. are applied to the connected stream, a single output type is produced.
public class ConnectDemo1 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//spark
DataStreamSource<String> lines1 = env.socketTextStream("localhost", 8888);
//1
DataStreamSource<String> lines2 = env.socketTextStream("localhost", 9999);
SingleOutputStreamOperator<Integer> nums = lines2.map(Integer::parseInt);
//connect the two streams
ConnectedStreams<String, Integer> connected = lines1.connect(nums);
//the CoMapFunction's three type parameters are, in order: the type of the first stream, the type of the second stream, and the output type
SingleOutputStreamOperator<String> strs = connected.map(new CoMapFunction<String, Integer, String>() {
//state could be defined here
//processes elements of the first stream
@Override
public String map1(String value) throws Exception {
return value;
}
//processes elements of the second stream
@Override
public String map2(Integer value) throws Exception {
return value.toString();
}
});
strs.print();
env.execute();
}
}
An enhanced, distributed for loop.
It creates a "feedback" loop in the stream by redirecting the output of one operator back to an earlier operator. This is particularly useful for algorithms that continuously update a model. The code below starts from a stream and keeps applying the iteration body: elements greater than 0 are sent back through the feedback channel, and the remaining elements are forwarded downstream.
Usage demo:
public class IterateDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> strs = env.socketTextStream("linux01", 9988);
DataStream<Long> numbers = strs.map(Long::parseLong);
//call iterate: DataStream -> IterativeStream
//iterate over the numbers (keep feeding in integer values)
IterativeStream<Long> iteration = numbers.iterate();
//IterativeStream -> DataStream
//compute on the iterated data
//the processing logic applied to the input data, i.e. the model update
DataStream<Long> iterationBody = iteration.map(new MapFunction<Long, Long>() {
@Override
public Long map(Long value) throws Exception {
System.out.println("iterate input =>" + value);
return value -= 2;
}
});
//as long as value > 0 holds, the element loops back: the previous output becomes the next input and the processing logic is applied again
DataStream<Long> feedback = iterationBody.filter(new FilterFunction<Long>() {
@Override
public boolean filter(Long value) throws Exception {
return value > 0;
}
});
//register the feedback (iteration) condition
iteration.closeWith(feedback);
//elements that no longer satisfy the feedback condition are finally emitted
//the output condition
DataStream<Long> output = iterationBody.filter(new FilterFunction<Long>() {
@Override
public boolean filter(Long value) throws Exception {
return value <= 0;
}
});
//the resulting data
output.print("output value:");
env.execute();
}
}
Extracts a subset of the fields from a stream of tuples.
DataStream<Tuple3<Integer, Double, String>> in = // [...]
//select the third and first fields, in that order, to form a new tuple
DataStream<Tuple2<String, Integer>> out = in.project(2,0);
In Flink, the Sink is responsible for the final output of the data.
A brief overview of several sinks:
Prints each element's toString() value to standard output or standard error. An optional prefix can be added to the output, which helps distinguish different print calls; if the parallelism is greater than 1, the output is also tagged with the id of the subtask that produced it.
writeAsText
Writes elements line by line as strings (TextOutputFormat); the strings are obtained by calling each element's toString() method.
writeAsCsv
Writes tuples to a file as comma-separated values (CsvOutputFormat). The row and field delimiters are configurable. The value of each field comes from the object's toString() method.
A custom Sink that mimics the output format of print:
/**
* A custom Sink
*/
private static class MySink extends RichSinkFunction<String> {
private int number;
@Override
public void close() throws Exception {
System.out.println("close sink");
}
@Override
public void open(Configuration parameters) throws Exception {
number = getRuntimeContext().getIndexOfThisSubtask() + 1;
System.out.println("open sink");
}
@Override
public void invoke(String value, Context context) throws Exception {
System.out.println(number+"> "+ value);
}
}
Prints each element's toString() value to standard output or standard error. An optional prefix can be added to the output, which helps distinguish different print calls; if the parallelism is greater than 1, the output is also tagged with the id of the subtask that produced it.
Writes elements line by line as strings (TextOutputFormat); the strings are obtained by calling each element's toString() method.
Writes tuples to a file as comma-separated values (CsvOutputFormat). The row and field delimiters are configurable. The value of each field comes from the object's toString() method.
The method and base class for custom file output (FileOutputFormat); supports custom object-to-bytes conversion.
Writes elements to a socket according to a SerializationSchema.
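A minimal sketch (not from the original notes; the host, ports, and output path are placeholders) showing several of the built-in sinks side by side:
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class BuiltInSinkDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
        //print to stdout with a prefix; with parallelism > 1 each line is also tagged with the subtask id
        lines.print("demo");
        //write each element's toString() as one line of text (the path is just an example)
        lines.writeAsText("/tmp/flink-out");
        //serialize each element with the given schema and write it to a socket
        lines.writeToSocket("localhost", 9999, new SimpleStringSchema());
        env.execute();
    }
}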
Flink also provides the following methods so that, after a transformation, users can control the partitioning of the data at a finer granularity as needed.
By default, when the number of partitions increases downstream, records are distributed in a round-robin fashion.
dataStream.rebalance();
Sends the data to the partitions of the next stage randomly, according to a uniform distribution (note that with multiple parallel instances, the targets chosen within the same round are the same).
dataStream.shuffle();
Broadcasts elements to every partition.
dataStream.broadcast();
Uses a user-defined partitioner to select the target task for each element.
dataStream.partitionCustom(<custom partitioner>, <field used as the partition key>);
SingleOutputStreamOperator<Tuple2<String, Integer>> mapped = lines.map(new RichMapFunction<String, Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> map(String value) throws Exception {
int indexOfThisSubtask = getRuntimeContext().getIndexOfThisSubtask();
return Tuple2.of(value, indexOfThisSubtask);
}
});
//partition according to the custom rule
DataStream<Tuple2<String, Integer>> partitioned = mapped.partitionCustom(new Partitioner<String>() {
@Override
public int partition(String key, int numPartitions) {
//System.out.println("key: " + key + ", downstream parallelism: " + numPartitions);
int res = 0;
if("spark".equals(key)) {
res = 1;
} else if ("flink".equals(key)){
res = 2;
} else if("hadoop".equals(key)) {
res = 3;
}
return res;
}
}, tp -> tp.f0);
A Window is a way of splitting an unbounded data set into many bounded data sets and processing each of them separately. In essence, a window logically splits the data stream, according to certain rules, into many finite-sized "buckets", so that the bounded data inside each bucket can be processed in turn.
When Flink divides a stream into windows based on time, several notions of time are available: Event Time, Ingestion Time, and Processing Time. Flink uses Processing Time by default. Running the same program with a different time type can produce completely different results, so choose the time type according to your actual requirements.
In the big-data world, a record generated by a log server can also be called an event. Event Time is the time on that device when the data was produced; it is already part of the record before the data enters Flink.
Data may be produced on many different log servers and is usually written to a distributed message broker before Flink pulls it for processing, so the actual processing time necessarily lags behind the time the data was produced, and the event times may also arrive out of order.
Why still use Event Time, then? Because with Event Time, a Flink program can handle out-of-order events and late data.
Most importantly, it lets you compute metrics with respect to the time at which the data was actually produced.
Ingestion Time is the time at which event data enters Flink.
The Ingestion Time of each record is the system time of the machine on which the Source operator runs when the record enters it.
Processing Time is the system time of the machine on which the operator processes the event. It is Flink's default time characteristic and offers the best performance and lowest latency.
Flink is a distributed computing framework, and there is always some delay between when data is produced and when it is processed (for example, from the message queue to the Source, and from the Source to the processing operator), so Processing Time cannot precisely reflect what happened at the moment the data was produced.
//use EventTime as the time characteristic
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
//use IngestionTime as the time characteristic
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
//use ProcessingTime as the time characteristic
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
Creates a window based on a specified number of records, independent of time.
public class CountWindowAllDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
SingleOutputStreamOperator<Integer> nums = lines.map(Integer::parseInt); //4 parallel instances
AllWindowedStream<Integer, GlobalWindow> window = nums.countWindowAll(5);
//window function
SingleOutputStreamOperator<Integer> summed = window.sum(0);
summed.print();
env.execute();
}
}
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndCount.keyBy(t -> t.f0);
//create the window: a keyed window
WindowedStream<Tuple2<String, Integer>, String, GlobalWindow> window = keyed.countWindow(5);
When using the built-in window functions, sum and reduce aggregate incrementally by default.
For full-window aggregation, call the window's apply method.
SingleOutputStreamOperator<Tuple2<String, Integer>> reduced = window.apply(new WindowFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String, GlobalWindow>() {
@Override
public void apply(String key, GlobalWindow window, Iterable<Tuple2<String, Integer>> input, Collector<Tuple2<String, Integer>> out) throws Exception {
int count = 0;
String word = null;
for (Tuple2<String, Integer> tp : input) {
word = tp.f0;
count += tp.f1;
}
//emit the result
out.collect(Tuple2.of(word, count));
}
});
Time-based windows fall into three categories, depending on how they are formed: tumbling windows, sliding windows, and session windows.
Slices the data into windows of a fixed length. Characteristics: aligned in time, fixed window length, no overlap.
//non-keyed: tumbling windows by ProcessingTime, parallelism 1
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)));
//keyed first, then tumbling windows by ProcessingTime, multiple parallel instances
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(TumblingProcessingTimeWindows.of(Time.seconds(5)));
A sliding window is a generalization of the fixed-length window; it is defined by a fixed window length plus a slide interval.
Characteristics: aligned in time, fixed window length, with overlap.
//two arguments: the window length and the slide interval
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
//key the stream first, then create the window
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(SlidingProcessingTimeWindows.of(Time.seconds(10), Time.seconds(5)));
A new window is started when no new data has been received for a period of time.
Characteristics: not aligned in time.
//the argument is the session gap (once it is exceeded, the window is closed and emitted)
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(ProcessingTimeSessionWindows.withGap(Time.seconds(5)));
//key the stream first, then create the window
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(ProcessingTimeSessionWindows.withGap(Time.seconds(5)));
Use the EventTime carried inside each record as the time basis for windowing.
In a cluster environment, records do not necessarily enter Flink in time order, so in some scenarios windowing by event time (EventTime) gives results that match expectations more precisely.
The watermark is what triggers windows.
Within one Flink job the watermark is unique, and it is the sole criterion for triggering windows;
As each record enters Flink, the EventTime is usually extracted in the Source; the program keeps the largest EventTime seen so far and computes the watermark as EventTime - maxOutOfOrderness (the allowed delay);
The computed watermark is then compared against the current window range; if it passes the end of the range, the current window is triggered.
Note: if the watermark is assigned not in the Source but in a transformation, then a single partition that reaches the trigger condition fires on its own, and the other partitions do not fire.
Window ranges are generally half-open intervals (closed on the left, open on the right); for tumbling and sliding windows, the window boundaries are multiples of the window length;
The newer versions differ from the older ones (before 1.20): the old range was [0, 4999), while the new range is [0, 5000), which is more rigorous.
For example:
a tumbling window with a 5-second interval;
[0, 4999) is one window; with zero allowed delay, as soon as a record with EventTime 5000 enters Flink, the current window is triggered.
Because records do not necessarily enter Flink in time order in a cluster, a record with EventTime 4500 may arrive after the record with EventTime 5000; having arrived late, it would be discarded;
An allowed delay can be used to avoid this:
For example, with the delay set to 2s, records with timestamps in the range 4999-6998 do not close the window; only when a record later than 6998 arrives (x - 2000 >= 4999) is the [0, 4999) time window closed, as sketched below.
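A minimal sketch in plain Java (using the assumed values from the example above: a 5-second window covering timestamps 0 through 4999 and a 2-second allowed delay) of the bookkeeping behind this: the watermark trails the largest event time seen so far by the allowed delay, and the window fires once the watermark reaches 4999:
public class WatermarkArithmetic {
    public static void main(String[] args) {
        long lastTimestampOfWindow = 4999L; //the 5s window covers timestamps 0..4999
        long maxOutOfOrderness = 2000L;     //allowed delay of 2s, as in the example above
        long maxEventTimeSeen = Long.MIN_VALUE;
        long[] incomingEventTimes = {1000L, 4500L, 6998L, 6999L};
        for (long eventTime : incomingEventTimes) {
            maxEventTimeSeen = Math.max(maxEventTimeSeen, eventTime);
            //the watermark trails the largest event time by the allowed delay
            //(the newer WatermarkStrategy API additionally subtracts 1 ms, as noted later in these notes)
            long watermark = maxEventTimeSeen - maxOutOfOrderness;
            boolean windowFires = watermark >= lastTimestampOfWindow;
            System.out.println("event=" + eventTime + ", watermark=" + watermark + ", window fires=" + windowFires);
        }
    }
}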
Using the old API (before 1.20):
/**
* Call assignTimestampsAndWatermarks,
* pass in a new BoundedOutOfOrdernessTimestampExtractor(max delay),
* and extract and return the timestamp from each element
*/
SingleOutputStreamOperator<String> linesWithWaterMark = lines.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(0)) {
//convert the date string into a timestamp
private SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
@Override
public long extractTimestamp(String element) {
String timeStr = element.split(",")[0];
long timestamp = 0;
try {
Date date = dateFormat.parse(timeStr);
timestamp = date.getTime();
} catch (ParseException e) {
e.printStackTrace();
timestamp = System.currentTimeMillis();
}
return timestamp; //EventTime
}
});
Using the new API:
/**
* Call assignTimestampsAndWatermarks,
* pass in a WatermarkStrategy,
* and extract and return the timestamp from each element
*/
SingleOutputStreamOperator<String> dataWithWaterMark = lines.assignTimestampsAndWatermarks(WatermarkStrategy
.<String>forBoundedOutOfOrderness(Duration.ofMillis(0)) //set the allowed delay
.withTimestampAssigner((element, recordTimestamp) -> Long.parseLong(element.split(",")[0])));//extract the timestamp
//non-keyed: tumbling windows by EventTime, parallelism 1
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(TumblingEventTimeWindows.of(Time.seconds(5)));
//keyed first, then tumbling windows by EventTime, multiple parallel instances
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(TumblingEventTimeWindows.of(Time.seconds(5)));
//two arguments: the window length and the slide interval
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)));
//key the stream first, then create the window
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)));
Note: a session window fires only when the configured gap is exceeded (that is, the time difference between two records must be greater than 5s in this example).
For a keyed window, windows are formed per key; the trigger condition is that the latest watermark minus the largest EventTime of that key in that partition is greater than 5s.
//the argument is the session gap (once it is exceeded, the window is closed and emitted)
AllWindowedStream<Integer, TimeWindow> window = nums.windowAll(EventTimeSessionWindows.withGap(Time.seconds(5)));
//key the stream first, then create the window
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(EventTimeSessionWindows.withGap(Time.seconds(5)));
The following example uses EventTime as the time type, a sliding window with a window length of 10s and a slide of 5s, an allowed delay of 3s, and a keyed window:
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.WindowedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import java.time.Duration;
public class EventTimeSlidingWindow {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> lines = env.socketTextStream("linux01", 9988);
/**
* Sample input records:
* (1000, spark, 2)
* (3333, flink, 1)
*/
//assign watermarks and set the allowed delay to 3s
// SingleOutputStreamOperator<String> linesAndWaterMark = lines.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(3)) {
// @Override
// public long extractTimestamp(String element) {
// return Long.parseLong(element.split(",")[0]);
// }
// });
//new-API version
SingleOutputStreamOperator<String> linesAndWaterMark = lines.assignTimestampsAndWatermarks(WatermarkStrategy
.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
.withTimestampAssigner((e, re) -> Long.parseLong(e.split(",")[0])));
SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndNum = linesAndWaterMark.map(e -> {
String[] ss = e.split(",");
return Tuple2.of(ss[1], Integer.parseInt(ss[2]));
}).returns(Types.TUPLE(Types.STRING, Types.INT));
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndNum.keyBy(e -> e.f0);
//create the window: a sliding (Sliding) EventTime window with multiple parallel instances, window length 10s, slide 5s
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)));
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = window.sum(1);
summed.print();
env.execute();
}
}
Within a complete Flink runtime:
The JobManager manages all the TaskManagers.
Each individual machine in a Flink cluster is called a TaskManager.
In Flink, a Task corresponds to a TaskSet in Spark, and a SubTask corresponds to a task in Spark;
In a Flink job, multiple tasks linked together form a task chain, and each Task in turn contains multiple SubTasks (as many as the parallelism).
Multiple tasks linked together form a task chain.
When the parallelism does not change and there is no shuffle, Flink chains multiple operators into one task; each task is executed by a single thread.
Such a task is a SubTask, so the relationship between the operators inside a SubTask is an operator chain.
Each worker (TaskManager) is a JVM process and can execute one or more subtasks in separate threads. To control how many tasks a TaskManager accepts, it has so-called task slots (at least one).
Task Slots
Each task slot represents a subset of the TaskManager's resources and is configured when the cluster is set up. If a TaskManager has 3 slots, its managed memory is split evenly into 3 parts, one per slot, without interference between them. Note that there is no CPU isolation here; slots currently only separate the managed memory of tasks.
By default, Flink allows subtasks to share slots, provided they are subtasks of different tasks and belong to the same job.
As a result, one slot may hold an entire pipeline of the job.
By default, the maximum parallelism of a Flink cluster equals the total number of task slots, so when monitoring the cluster there is no need to count the program's SubTasks;
Resources are used more fully:
Without slot sharing, each SubTask occupies one slot, so resource-hungry (intensive) SubTasks get the same resources as non-intensive ones, and intensive SubTasks are prone to becoming bottlenecks;
With slot sharing, multiple SubTasks can run together in one task slot, which allows a higher parallelism, and both intensive and non-intensive SubTasks can make the most of the available resources;
Operator chaining: chaining two operators together lets them run in the same thread, which improves performance.
By default Flink chains operators whenever possible (for example, two consecutive map transformations). In addition, Flink provides APIs for finer-grained control over chaining:
Disable operator chaining for the entire job:
StreamExecutionEnvironment.disableOperatorChaining()
Prevent any operator from chaining with the current operator:
someStream.map(...).disableChaining();
Start a new chain beginning with the current operator. In the example below, the two mappers will be chained together, while the filter will not be chained to the first mapper:
someStream.filter(...).map(...).startNewChain().map(...);
Resource groups: in Flink, operators in the same resource group may run in the same slot, while operators in different resource groups are assigned to different slots, which provides slot isolation. The default resource group is named "default".
Inheritance rule: if no resource group name is set, an operator automatically inherits the resource group of the previous operator.
Setting a resource group: by giving an operator its own resource group name, we can make it occupy a slot exclusively;
someStream.filter(...).slotSharingGroup("name");
Flink provides several restart strategies for when a machine in the cluster fails. Two commonly used strategies are described below:
This strategy specifies a fixed number of restart attempts and a delay between restarts.
//restart 5s after a failure, at most 3 times
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000));
This strategy sets the maximum number of failures allowed within a time interval, plus the delay between restarts.
//restart with a 2s delay; keep restarting as long as there are fewer than 3 failures within 1 minute
env.setRestartStrategy(RestartStrategies.failureRateRestart(3, Time.minutes(1), Time.seconds(2)));
If you only configure a restart strategy without persisting the state, the previous data is lost after a restart; to resume the computation with the previous state, you need checkpointing.
To tolerate failures during computation, a Flink streaming program must store its intermediate results; this intermediate data is called State;
State can be thought of as a special data set and can take several types;
The place where state is stored is called the state backend. By default state is kept in the JobManager's memory, but it can be configured to be stored in the TaskManager's local file system or in a distributed file system such as HDFS.
Checkpointing is enabled via the StreamExecutionEnvironment; once it is enabled, the restart strategy automatically defaults to unlimited restarts.
//checkpoint every 5s; this also enables the default unlimited-restart strategy
env.enableCheckpointing(5000);
//set where state is stored: the state backend (if not set, state is kept in the JobManager's memory)
env.setStateBackend(new FsStateBackend("file:\\C:\\Users\\刘宾\\Desktop\\day06"));
This kind of state can only be used on a keyed stream (KeyedStream). After keying, you no longer specify the key when accessing state: Flink automatically uses the key of the current record to look up or store the state;
Several different state types are introduced below:
ValueState stores a single value of an arbitrary type: a String, an Integer, or even a List;
The value can be updated with update(T) and retrieved with T value().
public class ValueStateDemo01 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//checkpoint every 10s
env.enableCheckpointing(10000);
DataStreamSource<String> lines = env.socketTextStream("linux01", 9988);
KeyedStream<Tuple2<String, Integer>, String> keyed = lines.flatMap((String e, Collector<String> c) -> {
String[] ss = e.split(" ");
for (String s : ss) {
c.collect(s);
}
}).returns(Types.STRING)
.map(e -> {
if (e.equals("error")){
throw new RuntimeException("invalid input data");
}
return Tuple2.of(e, 1);
}).returns(Types.TUPLE(Types.STRING, Types.INT))
.keyBy(e -> e.f0);
//the usual way: call sum
// SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.sum(1);
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = keyed.map(new MySumFunction());
summed.print();
env.execute();
}
private static class MySumFunction extends RichMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>{
private transient ValueState<Integer> values;
@Override
public void open(Configuration parameters) throws Exception {
//define a state descriptor
ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("sum_state", Integer.class);
//initialize or restore the state
values = getRuntimeContext().getState(stateDescriptor);
}
@Override
public Tuple2<String, Integer> map(Tuple2<String, Integer> v1) throws Exception {
//do the aggregation
Integer historyCount = values.value();
if (historyCount == null){
historyCount = 0;
}
Integer totalCount = v1.f1 + historyCount;
//update the state
values.update(totalCount);
//return the updated value
v1.f1 = totalCount;
return v1;
}
}
}
A fully hand-written version that uses a HashMap to hold the data and mimics what ValueState does:
public class MyKeyedStateDemo1 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(5, 2000));
//checkpoint every 5 seconds
//once checkpointing is enabled, the default restart strategy is unlimited restarts
//env.enableCheckpointing(5000);
//set the state backend (if not set, state is kept in the JobManager's memory)
//env.setStateBackend(new FsStateBackend(args[0]));
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
for (String word : value.split(" ")) {
if (word.contains("error")) {
throw new RuntimeException("bad input data encountered");
}
out.collect(Tuple2.of(word, 1));
}
}
})
.keyBy(t -> t.f0)
.map(new MySumFunction())
.print();
env.execute();
}
private static class MySumFunction extends RichMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {
private Map<String, Integer> myState;
@Override
public void open(Configuration parameters) throws Exception {
int indexOfThisSubtask = getRuntimeContext().getIndexOfThisSubtask();
File file = new File("/Users/xing/Desktop/data/" + indexOfThisSubtask + ".txt");
if (file.exists()) {
//restore the historical data
ObjectInputStream objectInputStream = new ObjectInputStream(new FileInputStream(file));
myState = (Map<String, Integer>) objectInputStream.readObject();
} else {
myState = new HashMap<>();
}
//periodically persist myState to a file
new Thread(new Runnable() {
@Override
public void run() {
while (true) {
try {
Thread.sleep(10000);
if (!file.exists()) {
file.createNewFile();
}
ObjectOutputStream objectOutputStream = new ObjectOutputStream(new FileOutputStream(file));
objectOutputStream.writeObject(myState);
objectOutputStream.flush();
objectOutputStream.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}).start();
}
@Override
public Tuple2<String, Integer> map(Tuple2<String, Integer> value) throws Exception {
String word = value.f0;
Integer count = value.f1;
Integer historyCount = myState.get(word);
if(historyCount == null) {
historyCount = 0;
}
Integer totalCount = historyCount + count;
myState.put(word, totalCount); //update the state
return Tuple2.of(word, totalCount); //emit the result
}
}
}
Usage is similar to ValueState, except that the state stores a Map;
Use case:
Multi-level grouping. For example, the input contains province, city, and a count; the requirement is that records of the same province stay in the same partition, while the total count per city is computed;
Idea: key by province so the same province ends up in the same partition, then use a MapState with the city as the key and the count as the value;
public class MapStateDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//checkpoint every 5 seconds
//once checkpointing is enabled, the default restart strategy is unlimited restarts
env.enableCheckpointing(5000);
//辽宁省,大连市,3000.5
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
SingleOutputStreamOperator<Tuple3<String, String, Double>> tpDateStream = lines.map(new MapFunction<String, Tuple3<String, String, Double>>() {
@Override
public Tuple3<String, String, Double> map(String value) throws Exception {
String[] fields = value.split(",");
return Tuple3.of(fields[0], fields[1], Double.parseDouble(fields[2]));
}
});
tpDateStream
.keyBy(t -> t.f0)
.process(new KeyedProcessFunction<String, Tuple3<String, String, Double>, Tuple3<String, String, Double>>() {
private transient MapState<String, Double> mapState;
@Override
public void open(Configuration parameters) throws Exception {
//initialize or restore the state
//1. define a state descriptor (the name and the data types it will hold)
MapStateDescriptor<String, Double> stateDescriptor = new MapStateDescriptor<>("location-income", String.class, Double.class);
//2. get the state from the runtime context
mapState = getRuntimeContext().getMapState(stateDescriptor);
}
@Override
public void processElement(Tuple3<String, String, Double> value, Context ctx, Collector<Tuple3<String, String, Double>> out) throws Exception {
String city = value.f1;
Double money = value.f2;
Double historyMoney = mapState.get(city);
if(historyMoney == null) {
historyMoney = 0.0;
}
double totalMoney = historyMoney + money;
//update the state
mapState.put(city, totalMoney);
//emit the record
value.f2 = totalMoney;
out.collect(value);
}
})
.print();
env.execute();
}
}
Stores collection-typed data in state.
public class ListStateDemo1 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//checkpoint every 5 seconds
//once checkpointing is enabled, the default restart strategy is unlimited restarts
env.enableCheckpointing(5000);
//user1,领劵
//user1,领劵
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
SingleOutputStreamOperator<Tuple2<String, String>> tpDataStream = lines.map(new MapFunction<String, Tuple2<String, String>>() {
@Override
public Tuple2<String, String> map(String value) throws Exception {
String[] field = value.split(",");
return Tuple2.of(field[0], field[1]);
}
});
tpDataStream
.keyBy(t -> t.f0)
.process(new KeyedProcessFunction<String, Tuple2<String, String>, Tuple2<String, List<String>>>() {
private transient ListState<String> listState;
@Override
public void open(Configuration parameters) throws Exception {
ListStateDescriptor<String> stateDescriptor = new ListStateDescriptor<>("event-state", String.class);
listState = getRuntimeContext().getListState(stateDescriptor);
}
@Override
public void processElement(Tuple2<String, String> value, Context ctx, Collector<Tuple2<String, List<String>>> out) throws Exception {
String uid = value.f0;
String event = value.f1;
listState.add(event);
ArrayList<String> lst = new ArrayList<>();
for (String e : listState.get()) {
lst.add(e);
}
out.collect(Tuple2.of(uid, lst));
}
})
.print();
env.execute();
}
}
The three state types above are all keyed state. To keep state when the data is not keyed, you use Operator State.
Operator State comes in only one form: ListState.
Unlike keyed state, where every key in every partition has one or more pieces of state, Operator State is shared per partition (each partition holds one or more ListStates).
To use Operator State in custom code you usually implement the CheckpointedFunction interface (keyed streams can also use Operator State by implementing it). Two methods must be overridden:
initializeState: the initialization method, called once before run and before open;
snapshotState: executed once each time a checkpoint is triggered.
Using Operator State to record the read offsets of a file Source:
public class OperatorStateDemo01 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//enable automatic state snapshots (checkpointing)
env.enableCheckpointing(10000);
//custom file-reading Source; pass in the directory of the files to read
DataStreamSource<String> lines = env.addSource(new MyAtLeastOnceSource("/Users/xing/Desktop/data"));
//a Source reading from a socket, used to inject errors manually
DataStreamSource<String> lines2 = env.socketTextStream("localhost", 8888);
//throw an exception when an 'error' message is sent
SingleOutputStreamOperator<String> mapped = lines2.map(new MapFunction<String, String>() {
@Override
public String map(String value) throws Exception {
if (value.contains("error")) {
throw new RuntimeException("bad data received, throwing an exception!");
}
return value;
}
});
//subtasks of the same job start and stop together, so union the two streams to make the test work
DataStream<String> union = lines.union(mapped);
union.print();
env.execute();
}
//the custom Source
private static class MyAtLeastOnceSource extends RichParallelSourceFunction<String> implements CheckpointedFunction {
private boolean flag = true;
private long offset = 0;
private transient ListState<Long> listState;
//a constructor that takes the path
private String path;
public MyAtLeastOnceSource(String path) {
this.path = path;
}
/**
* Called once before the run method is invoked
* @param context
* @throws Exception
*/
@Override
public void initializeState(FunctionInitializationContext context) throws Exception {
//initialize or restore the operator state; define a state descriptor (Operator State has only one type: ListState)
ListStateDescriptor<Long> stateDescriptor = new ListStateDescriptor<>("offset-state", Long.class);
listState = context.getOperatorStateStore().getListState(stateDescriptor);
//check whether state has been restored
if(context.isRestored()) {
Iterable<Long> iter = listState.get();
for (Long l : iter) {
offset = l;
}
}
}
@Override
public void run(SourceContext<String> ctx) throws Exception {
int indexOfThisSubtask = getRuntimeContext().getIndexOfThisSubtask();
//create an IO object that can read the file from a given position
RandomAccessFile accessFile = new RandomAccessFile(path + "/" + indexOfThisSubtask + ".txt", "r");
//resume reading from the stored offset
accessFile.seek(offset);
while (flag) {
String line = accessFile.readLine();
if(line != null) {
line = new String(line.getBytes(Charsets.ISO_8859_1), Charsets.UTF_8);
synchronized (ctx.getCheckpointLock()) {
//get the latest file position and update the offset
offset = accessFile.getFilePointer();
ctx.collect(indexOfThisSubtask + ".txt : " + line);
}
} else {
Thread.sleep(500);
}
}
}
/**
* Executed once per subtask when a checkpoint is triggered
* @param context
* @throws Exception
*/
@Override
public void snapshotState(FunctionSnapshotContext context) throws Exception {
//System.out.println("snapshotState Invoked");
listState.clear(); //clear the old state
listState.add(offset); //store the new offset
}
@Override
public void cancel() {
flag = false;
}
}
}
One data stream is broadcast so that its data can be updated and written into broadcast state, where other streams can look it up via a join;
Compared with Spark: the broadcast data can be updated in real time.
Call process(new MyBroadcastFunction(stateDescriptor)), passing in the state descriptor created above:
public class BroadcastStateDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//INSERT,10,新人礼包
//INSERT,11,元旦活动
//UPDATE,10,新人活动
//INSERT,12,女神节活动
//DELETE,11,元旦活动
DataStreamSource<String> disStream = env.socketTextStream("localhost", 8888);
//parse the dimension (broadcast) data
SingleOutputStreamOperator<Tuple3<String, String, String>> disTupleStream = disStream.map(new MapFunction<String, Tuple3<String, String, String>>() {
@Override
public Tuple3<String, String, String> map(String value) throws Exception {
String[] fields = value.split(",");
return Tuple3.of(fields[0], fields[1], fields[2]);
}
});
//1. create a state descriptor, fixing the state type and the data types
MapStateDescriptor<String, String> stateDescriptor = new MapStateDescriptor<>("dis-state", String.class, String.class);
//2. broadcast the dimension stream, passing in the state descriptor
BroadcastStream<Tuple3<String, String, String>> broadcastStream = disTupleStream.broadcast(stateDescriptor);
//create a new Source for the fact data
//user1,10,10000.0 -> user1,10,新人礼包,10000.0
DataStreamSource<String> truthDataStream = env.socketTextStream("localhost", 9999);
//parse the fact data
SingleOutputStreamOperator<Tuple3<String, String, Double>> truthTupleStream = truthDataStream.map(new MapFunction<String, Tuple3<String, String, Double>>() {
@Override
public Tuple3<String, String, Double> map(String value) throws Exception {
String[] fields = value.split(",");
return Tuple3.of(fields[0], fields[1], Double.parseDouble(fields[2]));
}
});
//join the fact data with the (broadcast) dimension data
truthTupleStream
.connect(broadcastStream)
//a custom function; pass in the state descriptor
.process(new MyBroadcastFunction(stateDescriptor))
.print();
env.execute();
}
private static class MyBroadcastFunction extends BroadcastProcessFunction<Tuple3<String, String, Double>, Tuple3<String, String, String>, Tuple4<String, String,String,Double>> {
private MapStateDescriptor<String, String> stateDescriptor;
//a no-arg constructor and a constructor that takes the descriptor
public MyBroadcastFunction() {}
public MyBroadcastFunction(MapStateDescriptor<String, String> stateDescriptor) {
this.stateDescriptor = stateDescriptor;
}
//process elements of the broadcast (dimension) stream
@Override
public void processBroadcastElement(Tuple3<String, String, String> value, Context ctx, Collector<Tuple4<String, String, String, Double>> out) throws Exception {
BroadcastState<String, String> broadcastState = ctx.getBroadcastState(stateDescriptor);
//INSERT,UPDATE,DELETE
String type = value.f0;
String id = value.f1;
String name = value.f2;
if("DELETE".equals(type)) {
broadcastState.remove(id);
} else {
broadcastState.put(id, name);
}
}
//process elements of the fact stream
@Override
public void processElement(Tuple3<String, String, Double> value, ReadOnlyContext ctx, Collector<Tuple4<String, String, String, Double>> out) throws Exception {
ReadOnlyBroadcastState<String, String> broadcastState = ctx.getBroadcastState(stateDescriptor);
String uid = value.f0; //user ID
String aid = value.f1; //activity ID
Double money = value.f2;
//look up the broadcast state
String name = broadcastState.get(aid);
out.collect(Tuple4.of(uid, aid, name, money));
}
}
}
savepoint — when stopping a job, you explicitly store the latest state to a specified directory
checkpoint — a directory where state is saved automatically
When stopping a job, specify a directory in which to save the savepoint:
stop jobID -p hdfs://linux01:8020/sava_flinkpoint
When starting the job again, specify the savepoint to restore from by adding -s:
run -s hdfs://linux01:8020/sava_flinkpoint/xxxxx
TTL: Time To Live
In Flink, a time-to-live can be set on state via the state descriptor.
NeverReturnExpired: expired values are never returned, even if they have not yet been cleaned up in memory
ReturnExpiredIfNotCleanedUp: expired values may still be returned as long as they have not been cleaned up in memory
OnReadAndWrite: the timer is reset on both reads and writes of the key's value
OnCreateAndWrite: the timer is reset on creation and modification
Disabled: state never expires
//build a TTL configuration
StateTtlConfig stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(30)) //time to live
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired) //visibility of expired values
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite) //when the TTL timer is reset
.build();
//enable TTL on the state descriptor (pass in the TTL configuration)
stateDescriptor.enableTimeToLive(stateTtlConfig);
Not needed when running in cluster mode:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-queryable-state-runtime_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-queryable-state-client-java</artifactId>
    <version>${flink.version}</version>
</dependency>
Configuration config = new Configuration();
config.setInteger("rest.port", 8082);
//enable the queryable state proxy server
config.setBoolean(QueryableStateOptions.ENABLE_QUERYABLE_STATE_PROXY_SERVER, true);
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(config);
//initialize the state data or restore historical state data
ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>(
"wc-state", //the name of the state descriptor
Integer.class //the type of the stored data
);
//make the state queryable and give it a query name
stateDescriptor.setQueryable("my-query-name");
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.queryablestate.client.QueryableStateClient;
import java.util.concurrent.CompletableFuture;
public class QueryStateClientDemo {
public static void main(String[] args) throws Exception {
QueryableStateClient client = new QueryableStateClient("localhost", 9069);
//a state descriptor matching the one declared in the job
ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>(
"wc-state", //the name of the state descriptor
Integer.class //the type of the stored data
);
CompletableFuture<ValueState<Integer>> resultFuture = client.getKvState(
JobID.fromHexString("07d0b9a75d44c9b8cc9feb5fbb4e6e80"), //the job ID
"my-query-name", //the name of the queryable state
"flink", //the key to query
BasicTypeInfo.STRING_TYPE_INFO,
stateDescriptor);
resultFuture.thenAccept(response -> {
try {
Integer res = response.value();
System.out.println(res);
} catch (Exception e) {
e.printStackTrace();
}
});
//the query is asynchronous, so the main program must not exit immediately
Thread.sleep(5000);
}
}
In WordCount, if you want to keep the accumulated historical result instead of emitting a fresh count for each window, call reduce on the keyed window and pass, as the second argument, a window function that can keep state.
That class extends RichWindowFunction.
public class EventTimeTumblingWindowDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
//1000,spark,1
DataStreamSource<String> lines = env.socketTextStream("linux01", 8888);
//assign event-time timestamps and set the allowed delay
SingleOutputStreamOperator<String> linesWithWaterMark = lines.assignTimestampsAndWatermarks(
WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofMillis(2000))
.withTimestampAssigner((element, ts) -> Long.parseLong(element.split(",")[0])));
SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndCount = linesWithWaterMark.map(new MapFunction<String, Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> map(String value) throws Exception {
String[] fields = value.split(",");
return Tuple2.of(fields[1], Integer.parseInt(fields[2]));
}
});
//keyBy first, then create the window
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndCount.keyBy(t -> t.f0);
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed.window(TumblingEventTimeWindows.of(Time.seconds(5)));
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = window.reduce(new ReduceFunction<Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
value2.f1 = value1.f1 + value2.f1;
return value2;
}
//the second argument: a custom WindowFunction
}, new MyWindowFunction());
summed.print();
env.execute();
}
private static class MyWindowFunction extends RichWindowFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, String, TimeWindow> {
//keeps the count accumulated across windows
private transient ValueState<Integer> values;
@Override
public void open(Configuration parameters) throws Exception {
ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("word-count-state", Integer.class);
values = getRuntimeContext().getState(stateDescriptor);
}
@Override
public void apply(String key, TimeWindow window, Iterable<Tuple2<String, Integer>> input, Collector<Tuple2<String, Integer>> out) throws Exception {
Integer historyCount = values.value();
if (historyCount == null) {
historyCount = 0;
}
for (Tuple2<String, Integer> tp : input) {
historyCount += tp.f1;
}
values.update(historyCount);
out.collect(Tuple2.of(key, historyCount));
}
}
}
ProcessFunction is a low-level stream-processing operation that gives access to the basic building blocks of all (acyclic) streaming applications. — official description
Call process, passing in a new ProcessFunction<input type, output type> and implementing the processElement(input, Context ctx, Collector<output> out) method.
Because it can emit any number of records per input, it can achieve the same effect as flatMap;
and since process handles records one by one, it can also achieve the effect of filter and similar operators, as in the sketch below.
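A minimal sketch (not from the original notes; host and port are placeholders) of a non-keyed ProcessFunction that behaves like flatMap plus filter:
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
public class ProcessFunctionDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
        lines.process(new ProcessFunction<String, String>() {
            @Override
            public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
                //flatMap effect: emit zero or more records per input record
                for (String word : value.split(" ")) {
                    //filter effect: simply skip the records we do not want to emit
                    if (!word.isEmpty()) {
                        out.collect(word);
                    }
                }
            }
        }).print();
        env.execute();
    }
}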
Pass in a KeyedProcessFunction.
Methods such as open can be overridden;
implement processElement(Tuple2<String, Integer> value, Context ctx, Collector<Tuple2<String, Integer>> out).
Using process to aggregate keyed data (word count) while keeping state:
public class KeyedProcessFunctionDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//checkpoint every 5 seconds
//once checkpointing is enabled, the default restart strategy is unlimited restarts
env.enableCheckpointing(5000);
DataStreamSource<String> lines = env.socketTextStream("linux01", 8888);
lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
for (String word : value.split(" ")) {
if (word.contains("error")) {
throw new RuntimeException("bad input data encountered");
}
out.collect(Tuple2.of(word, 1));
}
}
}).keyBy(t -> t.f0).process(new MyKeyedProcessFunction()).print();
env.execute();
}
private static class MyKeyedProcessFunction extends KeyedProcessFunction<String, Tuple2<String, Integer>, Tuple2<String, Integer>> {
private transient ValueState<Integer> values;
@Override
public void open(Configuration parameters) throws Exception {
//initialize or restore the state
//1. define a state descriptor (the name and the data type it will hold)
ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("wc-state", Integer.class);
//2. get the state from the runtime context
values = getRuntimeContext().getState(stateDescriptor);
}
@Override
public void processElement(Tuple2<String, Integer> value, Context ctx, Collector<Tuple2<String, Integer>> out) throws Exception {
Integer historyCount = values.value();
if (historyCount == null) {
historyCount = 0;
}
int totalCount = historyCount + value.f1;
//更新状态
values.update(totalCount);
value.f1 = totalCount;
out.collect(value);
}
}
}
ProcessWindowFunction is similar to the WindowFunction (apply) used above and can likewise keep full history state across windows;
inside a custom process/processElement method you can register timers, which makes it possible to buffer several records and emit them later;
if several timers are registered for the same timestamp (per key), only one is kept, so it fires only once;
a timer fires when WaterMark >= the registered timer time (note: in recent versions the watermark is computed as max timestamp - out-of-orderness - 1);
when registering a timer, the trigger time is given directly as an epoch timestamp in milliseconds, for example:
@Override
public void processElement(Tuple2<String, Integer> value, Context ctx, Collector<Tuple2<String, Integer>> out) throws Exception {
//输入一条数据,先攒起来,不输出
List<Tuple2<String, Integer>> lst = valueState.value();
if(lst == null) {
lst = new ArrayList<>();
}
lst.add(value);
valueState.update(lst);
//注册定时器(ProcessingTime)
ctx.timerService().registerProcessingTimeTimer(1615347060000L);
}
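A sketch of the matching onTimer callback for this processing-time example, assuming the same valueState list as above; when processing time passes the registered timestamp, the buffered records are flushed and the buffer is cleared:
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<String, Integer>> out) throws Exception {
    List<Tuple2<String, Integer>> buffered = valueState.value();
    if (buffered != null) {
        for (Tuple2<String, Integer> tp : buffered) {
            out.collect(tp);
        }
        //clear the buffer so the next batch is accumulated from scratch
        valueState.clear();
    }
}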
How event-time timers fire, shown with a full example:
public class EventTimeTimerDemo {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//设置5秒钟做一次checkpoint
//如果开启checkpointing,默认的重启策略是无限重启
env.enableCheckpointing(5000);
//1000,spark,3
DataStreamSource<String> lines = env.socketTextStream("linux01", 8888);
SingleOutputStreamOperator<String> dataWithWaterMark = lines.assignTimestampsAndWatermarks(WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofMillis(0)).withTimestampAssigner((line, ts) -> Long.parseLong(line.split(",")[0])));
SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndCount = dataWithWaterMark.map(new MapFunction<String, Tuple2<String, Integer>>() {
@Override
public Tuple2<String, Integer> map(String value) throws Exception {
String[] fields = value.split(",");
return Tuple2.of(fields[1], Integer.parseInt(fields[2]));
}
});
wordAndCount.keyBy(t -> t.f0).process(new MyKeyedProcessFunction()).print();
env.execute();
}
private static class MyKeyedProcessFunction extends KeyedProcessFunction<String, Tuple2<String, Integer>, Tuple2<String, Integer>> {
private transient ValueState<List<Tuple2<String, Integer>>> valueState;
@Override
public void open(Configuration parameters) throws Exception {
//创建一个状态描述器,存储的状态为可以存储多条Tuple2数据的List集合
ValueStateDescriptor<List<Tuple2<String, Integer>>> stateDescriptor =
new ValueStateDescriptor<>("lst-state",
TypeInformation.of(new TypeHint<List<Tuple2<String, Integer>>>() {}));
valueState = getRuntimeContext().getState(stateDescriptor);
}
@Override
public void processElement(Tuple2<String, Integer> value, Context ctx, Collector<Tuple2<String, Integer>> out) throws Exception {
//输入一条数据,先攒起来,不输出
List<Tuple2<String, Integer>> lst = valueState.value();
if(lst == null) {
lst = new ArrayList<>();
}
lst.add(value);
valueState.update(lst);
/**
* 注册EventTime类型的定时器
*/
//获取当前数据内的时间戳(EventTime)
Long timestamp = ctx.timestamp();
System.out.println("定时器触发的时间为:" + (timestamp + 5000));
//注册定时器,触发时间为当前时间+5s
ctx.timerService().registerEventTimeTimer(timestamp + 5000);
//注意:触发条件为 WaterMark >= 注册的定时器的时间
}
/**
* 触发的时机:WaterMark >= 你注册的定时器的时间
* @param timestamp
* @param ctx
* @param out
* @throws Exception
*/
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<String, Integer>> out) throws Exception {
//定时器执行会调用onTimer方法
for (Tuple2<String, Integer> tp : valueState.value()) {
out.collect(tp);
}
}
}
}
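A quick worked example of when this timer fires (the inputs are illustrative): with the 0 ms out-of-orderness bound, sending 1000,spark,3 sets the watermark to 999 and registers a timer for 6000; the timer only fires once the watermark reaches 6000, i.e. after some element with a timestamp of at least 6001 arrives (say 7000,spark,1), and at that moment every record buffered in valueState for that key is emitted.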
aggregate is an aggregation operator similar to reduce; it takes an object implementing AggregateFunction;
it is typically used for aggregation inside a window;
window.aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<String, Integer>, Tuple2<String, Integer>>() {
//初始化
@Override
public Tuple2<String, Integer> createAccumulator() {
return Tuple2.of(null, 0);
}
//聚合
@Override
public Tuple2<String, Integer> add(Tuple2<String, Integer> value, Tuple2<String, Integer> accumulator) {
value.f1 = value.f1 + accumulator.f1;
return value;
}
//返回
@Override
public Tuple2<String, Integer> getResult(Tuple2<String, Integer> accumulator) {
return accumulator;
}
/**
* 只有SessionWindow可能会调用该方法,如果不是session window可以不实现该方法
* @param a
* @param b
* @return
*/
@Override
public Tuple2<String, Integer> merge(Tuple2<String, Integer> a, Tuple2<String, Integer> b) {
return null;
}
}).print();
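The three type parameters of AggregateFunction are <IN, ACC, OUT>. In the word-count case they are all Tuple2, but the accumulator may be a different type; below is a minimal illustrative sketch (not from the original notes) that computes the per-window average of the counts:
window.aggregate(new AggregateFunction<Tuple2<String, Integer>, Tuple2<Integer, Integer>, Double>() {
    @Override
    public Tuple2<Integer, Integer> createAccumulator() {
        return Tuple2.of(0, 0); //(sum, number of elements)
    }
    @Override
    public Tuple2<Integer, Integer> add(Tuple2<String, Integer> value, Tuple2<Integer, Integer> acc) {
        return Tuple2.of(acc.f0 + value.f1, acc.f1 + 1);
    }
    @Override
    public Double getResult(Tuple2<Integer, Integer> acc) {
        return acc.f1 == 0 ? 0.0 : (double) acc.f0 / acc.f1;
    }
    @Override
    public Tuple2<Integer, Integer> merge(Tuple2<Integer, Integer> a, Tuple2<Integer, Integer> b) {
        return Tuple2.of(a.f0 + b.f0, a.f1 + b.f1);
    }
}).print();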
Achieving data consistency (at-least-once) requires all of the following (a configuration sketch follows the list):
checkpointing is enabled, so state is persisted to a StateBackend;
the job has a restart strategy (with checkpointing enabled the default is to restart indefinitely);
the Source can record and replay offsets;
the Sink tolerates replays, i.e. writes are overwriting / idempotent.
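A minimal configuration sketch for the first two conditions; the interval, path and delay values are placeholders:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//1. enable checkpointing so that state (e.g. Kafka offsets) is periodically snapshotted into the StateBackend
env.enableCheckpointing(10000);
env.setStateBackend(new FsStateBackend("hdfs://linux01:8020/flink_state/ck"));
//2. restart strategy; with checkpointing enabled the default is already fixed-delay with Integer.MAX_VALUE attempts,
//   setting it explicitly just makes the behaviour visible in the code
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(Integer.MAX_VALUE, 3000));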
Example:
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;
import org.apache.flink.util.Collector;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
public class KafkaToRedis {
public static void main(String[] args) throws Exception {
//创建获取参数的工具类对象
ParameterTool par = ParameterTool.fromArgs(args);
//创建环境
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
/**
* 通过参数工具类传入指定名称对应的参数
* defaultValue: 未找到对应参数传入的值
* getRequired: 必传值,不传报错
*/
//开启checkpoint
env.enableCheckpointing(par.getLong("checkpoint-interval", 10000));
//设置state保存位置
env.setStateBackend(new FsStateBackend(par.getRequired("checkpoint-path")));
//设置在job Cancel后的,外部检查点的清除策略(RETAIN-保留;DELETE-删除)
env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
//将parameterTool中的所有参数整合为一个properties文件对象
Properties properties = par.getProperties();
/**
* 创建并添加KafkaSource
*/
//创建topicList
List<String> topicList = Arrays.asList(par.getRequired("topics").split(","));
//创建FlinkKafkaConsumer对象
FlinkKafkaConsumer<String> kafkaSource = new FlinkKafkaConsumer<>(topicList, new SimpleStringSchema(), properties);
//设置在checkpoint时,不将偏移量写入到Kafka的特殊topic中
kafkaSource.setCommitOffsetsOnCheckpoints(false);
//创建Source
DataStreamSource<String> lines = env.addSource(kafkaSource);
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = lines.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
for (String s : value.split(" ")) {
out.collect(Tuple2.of(s, 1));
}
}
})
.keyBy(e -> e.f0)
.sum(1);
//创建RedisSink
FlinkJedisPoolConfig redisConf = new FlinkJedisPoolConfig.Builder().setHost("linux01").setPassword("33851786").setDatabase(3).build();
RedisSink<Tuple2<String, Integer>> redisSink = new RedisSink<>(redisConf, new MyRedisMapper());
summed.addSink(redisSink);
env.execute();
}
public static class MyRedisMapper implements RedisMapper<Tuple2<String, Integer>>{
//设置要传入的总的Key
//WORD_COUNT -> {(spark, 5), (flink, 3)}
@Override
public RedisCommandDescription getCommandDescription() {
return new RedisCommandDescription(RedisCommand.HSET, "WORD_COUNT");
}
@Override
public String getKeyFromData(Tuple2<String, Integer> stringIntegerTuple2) {
return stringIntegerTuple2.f0;
}
@Override
public String getValueFromData(Tuple2<String, Integer> stringIntegerTuple2) {
return stringIntegerTuple2.f1.toString();
}
}
}
Take Kafka-to-Kafka Exactly Once as the example: end-to-end exactly-once needs Flink, the Kafka producer and the Kafka consumer to cooperate.
When enabling checkpointing you can pass the checkpointing mode explicitly; the default is already EXACTLY_ONCE:
env.enableCheckpointing(par.getLong("checkpoint-interval", 10000), CheckpointingMode.EXACTLY_ONCE);
The Kafka sink also needs transaction.timeout.ms lowered, otherwise the job fails: in EXACTLY_ONCE mode Flink's producer defaults this to 1 hour, which exceeds the broker's default transaction.max.timeout.ms of 15 minutes;
properties.setProperty("transaction.timeout.ms", 1000 * 60 * 5 + "");
The KafkaSink (FlinkKafkaProducer) is usually created with 4 arguments:
FlinkKafkaProducer<String> kafkaSink = new FlinkKafkaProducer<>(
producerTopic, //输出的kafka的topic
new MyKafkaSerializationSchema(producerTopic), //一个序列化模型
properties, //一些kafka配置参数
FlinkKafkaProducer.Semantic.EXACTLY_ONCE //输出模式(EXACTLY_ONCE/AT_LEAST_ONCE)
);
//kafka的序列化模型,需要实现KafkaSerializationSchema
public static class MyKafkaSerializationSchema implements KafkaSerializationSchema<String>{
private String topic;
public MyKafkaSerializationSchema(String topic) {
this.topic = topic;
}
@Override
public ProducerRecord<byte[], byte[]> serialize(String element, @Nullable Long timestamp) {
//转为字符数组,并指定编码集
return new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
}
}
Since Kafka 0.11, producers can write data transactionally (the downstream consumer has to cooperate by choosing an isolation level).
With Flink's producer in Exactly Once mode, records are still written to Kafka continuously, but inside an open transaction, so they are only visible at the uncommitted level; when a checkpoint completes, the transaction for that checkpoint is committed and exactly those records become visible as committed (they are not written a second time).
The downstream Kafka consumer therefore has to choose which level of data to read; for Exactly Once it must read only committed data:
//set the consumer's transaction isolation level: read only data from committed transactions
properties.setProperty("isolation.level", "read_committed");
/opt/apps/kafka_2.11-2.0.0/bin/kafka-console-consumer.sh --bootstrap-server linux01:9092,linux02:9092,linux03:9092 --topic kafka-out --from-beginning --isolation-level read_committed
/**
* KafkaToKafka Exactly Once
*/
//传入参数:
//--checkpoint-interval 30000 --checkpoint-path hdfs://linux01:8020/flink_state/ck666 --bootstrap.servers linux01:9092,linux02:9092,linux03:9092 --group.id test666 --auto.offset.reset earliest --consumer-topics kafka-in --producer-topic kafka-out
public class KafkaToKafkaExactlyOnce {
public static void main(String[] args) throws Exception {
//创建获取参数的工具类对象
ParameterTool par = ParameterTool.fromArgs(args);
//创建环境
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
/**
* 通过参数工具类传入指定名称对应的参数
* defaultValue: 未找到对应参数传入的值
* getRequired: 必传值,不传报错
*/
//开启checkpoint
env.enableCheckpointing(par.getLong("checkpoint-interval", 10000), CheckpointingMode.EXACTLY_ONCE);
//设置state保存位置
/**
* 该模式下不能设置保存地址???
*/
// env.setStateBackend(new FsStateBackend(par.getRequired("checkpoint-path")));
//将parameterTool中的所有参数整合为一个properties文件对象
Properties properties = par.getProperties();
/**
* 创建并添加KafkaSource
*/
//创建topicList
List<String> topicList = Arrays.asList(par.getRequired("consumer-topics").split(","));
//创建FlinkKafkaConsumer对象
FlinkKafkaConsumer<String> kafkaSource = new FlinkKafkaConsumer<>(
topicList,
new SimpleStringSchema(),
properties);
//设置在checkpoint时,不将偏移量写入到Kafka的特殊topic中
kafkaSource.setCommitOffsetsOnCheckpoints(false);
//创建Source
DataStreamSource<String> lines = env.addSource(kafkaSource);
SingleOutputStreamOperator<String> upperLine = lines.map(String::toUpperCase);
/**
* 创建制造错误的流,并将两个流union到一起
*/
DataStreamSource<String> lines1 = env.socketTextStream("linux01", 8888);
SingleOutputStreamOperator<String> lines2 = lines1.map(new MapFunction<String, String>() {
@Override
public String map(String value) throws Exception {
if (value.equals("error")) {
throw new RuntimeException("数据出现异常");
}
return value.toUpperCase();
}
});
DataStream<String> union = upperLine.union(lines2);
/**
* 创建KafkaSink
*/
//通过配置对象,获取producerTopic
String producerTopic = par.get("producer-topic", "kafka-out");
//允许事务最大的超时时间
properties.setProperty("transaction.timeout.ms",1000 * 60 * 5 + "");
FlinkKafkaProducer<String> kafkaSink = new FlinkKafkaProducer<>(
producerTopic, //输出的kafka的topic
new MyKafkaSerializationSchema(producerTopic), //一个序列化模型
properties, //一些kafka配置参数
FlinkKafkaProducer.Semantic.EXACTLY_ONCE //输出模式(EXACTLY_ONCE/AT_LEAST_ONCE)
);
union.addSink(kafkaSink);
env.execute();
}
public static class MyKafkaSerializationSchema implements KafkaSerializationSchema<String>{
private String topic;
public MyKafkaSerializationSchema(String topic) {
this.topic = topic;
}
@Override
public ProducerRecord<byte[], byte[]> serialize(String element, @Nullable Long timestamp) {
//转为字符数组,并指定编码集
return new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
}
}
}
When a checkpoint is triggered, the JobManager (checkpoint coordinator) injects barriers into all source streams;
the barriers travel downstream through the operators together with the data;
once a SubTask has received the barriers from all of its input channels, it snapshots its own state as part of that checkpoint.
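The barrier mechanism itself is internal to Flink; what user code controls are the checkpoint settings that drive it. A small sketch of commonly tuned knobs (the values are placeholders):
env.enableCheckpointing(10000, CheckpointingMode.EXACTLY_ONCE); //aligned, barrier-based checkpoints
CheckpointConfig checkpointConfig = env.getCheckpointConfig();
checkpointConfig.setMinPauseBetweenCheckpoints(5000); //minimum pause between the end of one checkpoint and the start of the next
checkpointConfig.setCheckpointTimeout(60000);         //a checkpoint taking longer than this is discarded
checkpointConfig.setMaxConcurrentCheckpoints(1);      //at most one checkpoint in flight at a time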
Side Outputs (essentially tagging records)
With side outputs a stream can be split: into several parallel sub-streams, into one main stream plus several side streams, or with part of the data copied into a side stream;
//先定义标签
//偶数
OutputTag<Integer> evenTag = new OutputTag<Integer>("even"){};
//奇数
OutputTag<Integer> oddTag = new OutputTag<Integer>("odd"){};
SingleOutputStreamOperator<Integer> mainStream = nums.process(new ProcessFunction<Integer, Integer>() {
@Override
public void processElement(Integer in, Context ctx, Collector<Integer> out) throws Exception {
if (in % 2 == 0) {
//打上Tag并输出
ctx.output(evenTag, in);
} else {
//打上Tag并输出
ctx.output(oddTag, in);
}
//希望在主流中可以获取到的数据
out.collect(in);
}
});
//侧流输出
DataStream<Integer> evenStream = mainStream.getSideOutput(evenTag);
DataStream<Integer> oddStream = mainStream.getSideOutput(oddTag);
evenStream.print("偶数");
oddStream.print("奇数");
mainStream.print("主流");
//标签:无法转换为数字的脏数据
OutputTag<String> strTag = new OutputTag<String>("str") {};
SingleOutputStreamOperator<Integer> mainStream = lines.process(new ProcessFunction<String, Integer>() {
@Override
public void processElement(String in, Context ctx, Collector<Integer> out) throws Exception {
try {
int i = Integer.parseInt(in);
out.collect(i);
} catch (NumberFormatException e) {
//将问题数据打上标签
ctx.output(strTag, in);
}
}
});
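A short usage sketch for the tag above: the dirty (non-numeric) records can be pulled out of the main stream and handled separately:
DataStream<String> strStream = mainStream.getSideOutput(strTag);
strStream.print("脏数据");
mainStream.print("数字");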
//迟到数据对应的tag名称
OutputTag<Tuple2<String, Integer>> lateDataTag = new OutputTag<Tuple2<String, Integer>>("late-data") {};
//先keyBy,再划分窗口
KeyedStream<Tuple2<String, Integer>, String> keyed = wordAndCount.keyBy(t -> t.f0);
WindowedStream<Tuple2<String, Integer>, String, TimeWindow> window = keyed
.window(TumblingEventTimeWindows.of(Time.seconds(5))) //划分窗口
.sideOutputLateData(lateDataTag); //将迟到数据打上给定的标签
SingleOutputStreamOperator<Tuple2<String, Integer>> summed = window.sum(1);
//获取迟到的数据
DataStream<Tuple2<String, Integer>> lateStream = summed.getSideOutput(lateDataTag);
Simply accessing data in an external database, for example from a MapFunction, usually means synchronous interaction: the MapFunction sends a request to the database and then blocks until the response arrives. In many cases this waiting takes up most of the function's time.
Asynchronous interaction with the database means that one parallel function instance can have many requests in flight and receive their responses concurrently. The time spent waiting can be overlapped with sending other requests and receiving other responses, or at least amortized over several requests. In most cases this significantly increases streaming throughput.
Simply raising the MapFunction's parallelism would also raise throughput, but at the cost of a large amount of extra resources; async I/O achieves the same effect far more cheaply, trading a modest amount of resources (buffered in-flight requests) for time.
Most async I/O use cases are asynchronous lookups against some database, which requires a client that supports asynchronous requests. If no such client exists, the fallback is a connection pool (for example a Druid pool) plus a thread pool of synchronous clients, although this is usually less efficient than a proper asynchronous client.
Given an asynchronous client, implementing async I/O against the database takes three parts: a RichAsyncFunction that dispatches the requests, a callback that hands the result to the ResultFuture, and applying the async I/O operation to a DataStream via AsyncDataStream (shown further below).
In the RichAsyncFunction you typically override three methods:
Override open to create the client / connection to the external system;
@Override
public void open(Configuration parameters) throws Exception {
RequestConfig requestConfig = RequestConfig.custom().build();
httpclient = HttpAsyncClients.custom() //创建HttpAsyncClients请求连接池
.setMaxConnTotal(maxConnTotal) //设置最大连接数
.setDefaultRequestConfig(requestConfig).build();
httpclient.start(); //启动异步请求httpClient
}
Override close to shut down the thread pool and the connection pool;
@Override
public void close() throws Exception {
dataSource.close(); //关闭数据库连接池
executorService.shutdown(); //关闭线程池
}
Override asyncInvoke to issue the request asynchronously and register the callback that completes the ResultFuture;
@Override
public void asyncInvoke(String id, ResultFuture<Tuple2<String, String>> resultFuture) throws Exception {
//调用线程池的submit方法,将查询请求丢入到线程池中异步执行,返回Future对象
Future<String> future = executorService.submit(() -> {
return queryFromMySql(id); //查询数据库的方法
});
//通过该静态类的该方法,将返回的future接收处理,返回result
CompletableFuture.supplyAsync(new Supplier<String>() {
@Override
public String get() {
try {
return future.get(); //获取查询的结果
} catch (Exception e) {
return null;
}
}
//将result处理为最终的输出结果
}).thenAccept((String result) -> {
//由于需要返回一个Collection,所以给返回一个只有一个值的单例集合
resultFuture.complete(Collections.singleton(Tuple2.of(id, result)));
});
}
In the main program, apply the async I/O operation to the stream:
SingleOutputStreamOperator<LogBean> result = AsyncDataStream.unorderedWait(
lines, //输入的数据流(会传入到重写的asyncInvoke方法中作为第一个参数)
new AsyncHttpGeoQueryFunction(url, key, capacity), //异步查询的Function实例
3000, //超时时间
TimeUnit.MILLISECONDS, //时间单位
capacity);//异步请求队列最大的数量,不传该参数默认值为100
public class AsyncQueryFromHttpDemo2 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//设置job的重启策略
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000));
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
String url = "https://restapi.amap.com/v3/geocode/regeo"; //异步请求高德地图的地址
String key = "4924f7ef5c86a278f5500851541cdcff"; //请求高德地图的秘钥,注册高德地图开发者后获得
int capacity = 50; //最大异步并发请求数量
//使用AsyncDataStream调用unorderedWait方法,并传入异步请求的Function
//unorderedWait 发送的请求和响应的结果是没有顺序的
//orderedWait 发送的请求和响应的结果是有顺序的,先请求先返回
SingleOutputStreamOperator<LogBean> result = AsyncDataStream.unorderedWait(
lines, //输入的数据流
new AsyncHttpGeoQueryFunction(url, key, capacity), //异步查询的Function实例
3000, //超时时间
TimeUnit.MILLISECONDS, //时间单位
capacity);//异步请求队列最大的数量,不传该参数默认值为100
result.print();
env.execute();
}
}
public class AsyncHttpGeoQueryFunction extends RichAsyncFunction<String, LogBean> {
private transient CloseableHttpAsyncClient httpclient; //异步请求的HttpClient
private String url; //请求高德地图URL地址
private String key; //请求高德地图的秘钥,注册高德地图开发者后获得
private int maxConnTotal; //异步HTTPClient支持的最大连接
public AsyncHttpGeoQueryFunction(String url, String key, int maxConnTotal) {
this.url = url;
this.key = key;
this.maxConnTotal = maxConnTotal;
}
@Override
public void open(Configuration parameters) throws Exception {
RequestConfig requestConfig = RequestConfig.custom().build();
httpclient = HttpAsyncClients.custom() //创建HttpAsyncClients请求连接池
.setMaxConnTotal(maxConnTotal) //设置最大连接数
.setDefaultRequestConfig(requestConfig).build();
httpclient.start(); //启动异步请求httpClient
}
@Override
public void asyncInvoke(String line, ResultFuture<LogBean> resultFuture) throws Exception {
//使用fastjson将json字符串解析成json对象
LogBean bean = JSON.parseObject(line, LogBean.class);
double longitude = bean.longitude; //获取经度
double latitude = bean.latitude; //获取纬度
//将经纬度和高德地图的key与请求的url进行拼接
HttpGet httpGet = new HttpGet(url + "?location=" + longitude + "," + latitude + "&key=" + key);
//发送异步请求,返回Future
Future<HttpResponse> future = httpclient.execute(httpGet, null);
CompletableFuture.supplyAsync(new Supplier<LogBean>() {
@Override
public LogBean get() {
try {
HttpResponse response = future.get();
String province = null;
String city = null;
if (response.getStatusLine().getStatusCode() == 200) {
//解析返回的结果,获取省份、城市等信息
String result = EntityUtils.toString(response.getEntity());
JSONObject jsonObj = JSON.parseObject(result);
JSONObject regeocode = jsonObj.getJSONObject("regeocode");
if (regeocode != null && !regeocode.isEmpty()) {
JSONObject address = regeocode.getJSONObject("addressComponent");
province = address.getString("province");
city = address.getString("city");
}
}
bean.province = province; //将返回的结果给省份赋值
bean.city = city; //将返回的结果给城市赋值
return bean;
} catch (Exception e) {
return null;
}
}
}).thenAccept((LogBean result) -> {
//将结果添加到resultFuture中输出(complete方法的参数只能为集合,如果只有一个元素,就返回一个单例集合)
resultFuture.complete(Collections.singleton(result));
});
}
@Override
public void close() throws Exception {
httpclient.close(); //关闭HttpAsyncClients请求连接池
}
}
public class AsyncQueryFromMySQL {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000)); //设置job的重启策略
DataStreamSource<String> lines = env.socketTextStream("localhost", 8888);
int capacity = 50;
DataStream<Tuple2<String, String>> result = AsyncDataStream.orderedWait(
lines, //输入的数据流
new MySQLAsyncFunction(capacity), //异步查询的Function实例
3000, //超时时间
TimeUnit.MILLISECONDS, //时间单位
capacity); //异步请求队列最大的数量,不传该参数默认值为100
result.print();
env.execute();
}
}
public class MySQLAsyncFunction extends RichAsyncFunction<String, Tuple2<String, String>> {
private transient DruidDataSource dataSource; //使用alibaba的Druid数据库连接池
private transient ExecutorService executorService; //用于提交多个异步请求的线程池
private int maxConnTotal; //线程池最大线程数量
public MySQLAsyncFunction(int maxConnTotal) {
this.maxConnTotal = maxConnTotal;
}
@Override
public void open(Configuration parameters) throws Exception {
executorService = Executors.newFixedThreadPool(maxConnTotal); //创建固定的大小的线程池
dataSource = new DruidDataSource(); //创建数据库连接池并指定对应的参数
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUsername("root");
dataSource.setPassword("123456");
dataSource.setUrl("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8");
dataSource.setMaxActive(maxConnTotal);
}
@Override
public void close() throws Exception {
dataSource.close(); //关闭数据库连接池
executorService.shutdown(); //关闭线程池
}
@Override
public void asyncInvoke(String id, ResultFuture<Tuple2<String, String>> resultFuture) throws Exception {
//调用线程池的submit方法,将查询请求丢入到线程池中异步执行,返回Future对象
Future<String> future = executorService.submit(() -> {
return queryFromMySql(id); //查询数据库的方法
});
//通过该静态类的该方法,将返回的future接收处理,返回result
CompletableFuture.supplyAsync(new Supplier<String>() {
@Override
public String get() {
try {
return future.get(); //获取查询的结果
} catch (Exception e) {
return null;
}
}
//将result处理为最终的输出结果
}).thenAccept((String result) -> {
//由于需要返回一个Collection,所以给返回一个只有一个值的单例集合
resultFuture.complete(Collections.singleton(Tuple2.of(id, result)));
});
}
private String queryFromMySql(String param) throws SQLException {
String sql = "SELECT id, info FROM t_data WHERE id = ?";
String result = null;
PreparedStatement stmt = null;
ResultSet rs = null;
Connection connection = dataSource.getConnection();
try {
stmt = connection.prepareStatement(sql);
stmt.setString(1, param); //设置查询参数
rs = stmt.executeQuery(); //执行查询
while (rs.next()) {
result = rs.getString("info"); //返回查询结果
}
} finally {
if (rs != null) {
rs.close();
}
if (stmt != null) {
stmt.close();
}
if (connection != null) {
connection.close();
}
}
return result;
}
}
Calling the API:
stream1.join(stream2)
.where(t -> t.f1) //stream1的连接字段
.equalTo(t -> t.f1) //stream2的连接字段
.window(TumblingProcessingTimeWindows.of(Time.seconds(60))) //窗口类型及长度
//该类中的三个参数:stream1的输入类型,stream2的输入类型,返回值类型
.apply(new JoinFunction<Tuple3<Long, String, String>, Tuple3<Long, String, String>, Tuple5<Long, Long, String, String, String>>() {
@Override
public Tuple5<Long, Long, String, String, String> join(Tuple3<Long, String, String> first, Tuple3<Long, String, String> second) throws Exception {
//如果调用了join方法,说明,数据在同一个窗口内,并且join的条件(两个连接字段相等)满足了
return Tuple5.of(first.f0, second.f0, first.f1, first.f2, second.f2);
}
})
.print();
How it works, briefly:
during the join both streams are shuffled by the specified join key, so matching records end up in the same SubTask;
if EventTime is used and watermarks are generated per partition, the window only fires when every partition's watermark has advanced far enough (effectively, the operator's watermark is the minimum of all input partitions' watermarks);
the default join is an inner join; to get a left outer join, right outer join, etc. you have to use coGroup instead.
Unlike join, where the function is only called for matching pairs, the CoGroupFunction is invoked for a key as soon as either stream has data in the window; the desired join semantics are implemented inside the overridden coGroup method:
//实现左外连接
stream1WithWaterMark.coGroup(stream2WithWaterMark)
.where(t -> t.f1) //第一个流的join的条件
.equalTo(t -> t.f1) //第二个流的join条件
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.apply(new CoGroupFunction<Tuple3<Long, String, String>, Tuple3<Long, String, String>, Tuple5<Long, Long, String, String, String>>() {
//窗口触发时,每个key调用一次该方法,first/second分别是该key在两个流中落入本窗口的数据(任意一侧可能为空)
@Override
public void coGroup(Iterable<Tuple3<Long, String, String>> first, Iterable<Tuple3<Long, String, String>> second, Collector<Tuple5<Long, Long, String, String, String>> out) throws Exception {
//第一种情况:first不为空,second也不为空
//第二种情况:first不为空,second为空
for (Tuple3<Long, String, String> left : first) {
boolean isJoin = false;
//如果for循环执行,左流有数据
for (Tuple3<Long, String, String> right : second) {
isJoin = true;
out.collect(Tuple5.of(left.f0, right.f0, left.f1, left.f2, right.f2));
}
if(!isJoin) {
out.collect(Tuple5.of(left.f0, null, left.f1, left.f2, null));
}
}
}
}).print();
Interval Join: joining over a time interval.
How it works, briefly: for two keyed streams A and B, A.intervalJoin(B).between(lowerBound, upperBound) joins a pair of same-key elements (a, b) whenever b.timestamp is in [a.timestamp + lowerBound, a.timestamp + upperBound]; equivalently, each element of the second stream looks up a time range on the first stream.
stream1WithWaterMark
.keyBy(t -> t.f1) //对流1进行分组
.intervalJoin(stream2WithWaterMark.keyBy(t -> t.f1)) //对流2进行分组,并调用intervalJoin
.between(Time.seconds(-1), Time.seconds(1)) //设置区间范围
.lowerBoundExclusive() //默认区间两端都是闭区间,可用lowerBoundExclusive()/upperBoundExclusive()将对应端改为开区间
.process(new ProcessJoinFunction<Tuple3<Long, String, String>, Tuple3<Long, String, String>, Tuple5<Long, Long, String, String, String>>() {
@Override
public void processElement(Tuple3<Long, String, String> left, Tuple3<Long, String, String> right, Context ctx, Collector<Tuple5<Long, Long, String, String, String>> out) throws Exception {
out.collect(Tuple5.of(left.f0, right.f0, left.f1, left.f2, right.f2));
}
}).print();
Named parameters are passed to the program through the main method's args array as --name value pairs, which ParameterTool then parses.
Example arguments:
--checkpoint-interval 10000 --checkpoint-path hdfs://node-1.51doit.cn:9000/ck666 --bootstrap.servers node-1.51doit.cn:9092,node-2.51doit.cn:9092,node-3.51doit.cn:9092 --group.id test6688 --auto.offset.reset earliest --topics wordcount
Usage:
//创建获取参数的工具类对象
ParameterTool par = ParameterTool.fromArgs(args);
//创建环境
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
/**
* 通过参数工具类传入指定名称对应的参数
* defaultValue: 未找到对应参数传入的值
* getRequired: 必传值,不传报错
*/
//开启checkpoint
env.enableCheckpointing(par.getLong("checkpoint-interval", 10000));
//设置state保存位置
env.setStateBackend(new FsStateBackend(par.getRequired("checkpoint-path")));
//设置在job Cancel后的,外部检查点的清除策略(RETAIN-保留;DELETE-删除)
env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
//将parameterTool中的所有参数整合为一个properties文件对象
Properties properties = par.getProperties();