Both stream and batch processing provide a grouping operation: keyBy for streams and groupBy for batches. So how do we specify the key?
Some transformations (join, coGroup, keyBy, groupBy) require that a key be defined on a collection of elements. Other transformations (Reduce, GroupReduce, Aggregate, Windows) allow the data to be grouped on a key before they are applied.
DataSet:
DataSet<...> input = // [...]
DataSet<...> reduced = input
    .groupBy(/*define key here*/)
    .reduceGroup(/*do something*/);
DataStream:
DataStream<...> input = // [...]
DataStream<...> windowed = input
    .keyBy(/*define key here*/)
    .window(/*window specification*/);
This is similar to a join in MySQL: select a.*, b.* from a join b on a.id = b.id
Here, keyBy plays the same role as a.id = b.id.
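As a rough analogy (this is not Flink's actual implementation, just a plain-Java sketch of the concept), grouping elements by a key means that elements with the same key end up in the same group:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupByKeySketch {
    // Hypothetical helper: groups words by their first letter.
    // Conceptually this is what keyBy/groupBy do: extract a key from
    // each element, then route same-key elements to the same group.
    public static Map<Character, List<String>> groupByFirstLetter(List<String> words) {
        Map<Character, List<String>> groups = new HashMap<>();
        for (String w : words) {
            char key = w.charAt(0); // the "key" extracted from each element
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(w);
        }
        return groups;
    }

    public static void main(String[] args) {
        Map<Character, List<String>> g =
            groupByFirstLetter(Arrays.asList("apple", "ant", "bee"));
        System.out.println(g.get('a')); // [apple, ant]
        System.out.println(g.get('b')); // [bee]
    }
}
```

In Flink the groups are additionally distributed across parallel instances by hashing the key, but the same-key-same-group contract is the same.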
DataStream<Tuple3<Integer, String, Long>> input = // [...]
KeyedStream<Tuple3<Integer, String, Long>, Tuple> keyed = input.keyBy(0)
You can pass the position of a field.
DataStream<Tuple3<Integer, String, Long>> input = // [...]
KeyedStream<Tuple3<Integer, String, Long>, Tuple> keyed = input.keyBy(0,1)
You can also pass a combination of field positions.
This works fine for simple Tuples. But consider a nested Tuple, as shown below:
DataStream<Tuple3<Tuple2<Integer, Float>, String, Long>> ds;
If you use keyBy(0), the system will use the entire Tuple2 as the key.
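To see what "the entire Tuple2 is the key" means, here is a plain-Java illustration (using `Map.entry` as a stand-in for Flink's Tuple2, which is an assumption for demonstration only): two elements fall into the same group only if BOTH fields of the pair are equal.

```java
import java.util.Map;

public class WholeTupleKey {
    public static void main(String[] args) {
        // Map.entry stands in for Tuple2<Integer, Float> here.
        Map.Entry<Integer, Float> k1 = Map.entry(1, 2.0f);
        Map.Entry<Integer, Float> k2 = Map.entry(1, 2.0f);
        Map.Entry<Integer, Float> k3 = Map.entry(1, 3.0f);
        // Equal only when both fields match -> same group.
        System.out.println(k1.equals(k2)); // true  -> same group
        System.out.println(k1.equals(k3)); // false -> different group
    }
}
```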
Instead, we can use string-based field expressions to reference nested fields when defining the key.
Previously our operator was written like this:
text.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
    @Override
    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
        String[] tokens = value.toLowerCase().split(",");
        for (String token : tokens) {
            if (token.length() > 0) {
                out.collect(new Tuple2<>(token, 1));
            }
        }
    }
}).keyBy(0).timeWindow(Time.seconds(5)).sum(1).print().setParallelism(1);
The FlatMapFunction above emits Tuple2 elements. Let's define a POJO class instead:
public static class WC {
    private String word;
    private int count;

    public WC() {
    }

    public WC(String word, int count) {
        this.word = word;
        this.count = count;
    }

    @Override
    public String toString() {
        return "WC{" +
                "word='" + word + '\'' +
                ", count=" + count +
                '}';
    }

    public String getWord() {
        return word;
    }

    public void setWord(String word) {
        this.word = word;
    }

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        this.count = count;
    }
}
Now change the operator to use the POJO:
text.flatMap(new FlatMapFunction<String, WC>() {
    @Override
    public void flatMap(String value, Collector<WC> out) throws Exception {
        String[] tokens = value.toLowerCase().split(",");
        for (String token : tokens) {
            if (token.length() > 0) {
                out.collect(new WC(token, 1));
            }
        }
    }
}).keyBy("word").timeWindow(Time.seconds(5)).sum("count").print().setParallelism(1);
The output type changes from Tuple2 to the WC POJO.
So in this example we have a POJO with two fields, "word" and "count", and we can pass the field names directly to keyBy("word") and sum("count").
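As a hedged illustration of how a string field expression like "word" can be resolved (this is only a conceptual sketch via reflection; Flink's actual field-expression resolution is more involved and also supports private fields through getters), assume a POJO with public fields:

```java
import java.lang.reflect.Field;

public class FieldExpressionSketch {
    // Simplified WC with public fields, for reflection demo purposes only.
    public static class WC {
        public String word;
        public int count;
        public WC(String word, int count) { this.word = word; this.count = count; }
    }

    // Hypothetical helper: look up a public field by name and return its value.
    public static Object keyByFieldName(Object pojo, String fieldName) throws Exception {
        Field f = pojo.getClass().getField(fieldName);
        return f.get(pojo);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(keyByFieldName(new WC("hello", 1), "word")); // hello
    }
}
```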
Syntax:
public static class WC {
    public ComplexNestedClass complex; // nested POJO
    private int count;

    // getter / setter for private field (count)
    public int getCount() {
        return count;
    }

    public void setCount(int c) {
        this.count = c;
    }
}

public static class ComplexNestedClass {
    public Integer someNumber;
    public float someFloat;
    public Tuple3<Long, Long, String> word;
    public IntWritable hadoopCitizen;
}
Here IntWritable is a Hadoop type. Valid field expressions for this class include "count" (the count field in WC), "complex" (recursively all fields of the nested POJO), "complex.word.f2" (the third field of the nested Tuple3), and "complex.hadoopCitizen" (the Hadoop IntWritable field).
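A nested expression such as "complex.someNumber" is a dotted path walked field by field. The following plain-Java sketch (a hypothetical analogy, not Flink's implementation; the class and field names are made up for the demo) shows the idea:

```java
import java.lang.reflect.Field;

public class NestedFieldPath {
    public static class Inner { public int someNumber = 42; }
    public static class Outer { public Inner complex = new Inner(); }

    // Hypothetical helper: walk a dotted path like "complex.someNumber",
    // resolving one public field per path segment.
    public static Object resolve(Object obj, String path) throws Exception {
        for (String part : path.split("\\.")) {
            Field f = obj.getClass().getField(part);
            obj = f.get(obj);
        }
        return obj;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve(new Outer(), "complex.someNumber")); // 42
    }
}
```

Note that Flink's Tuple classes expose their positions as public fields f0, f1, f2, ..., which is why an expression can end in "f2".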
The Scala version:
object StreamingWCScalaApp {

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // import the implicit conversions
    import org.apache.flink.api.scala._

    val text = env.socketTextStream("192.168.152.45", 9999)

    text.flatMap(_.split(","))
      .map(x => WC(x, 1))
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")
      .print()
      .setParallelism(1)

    env.execute("StreamingWCScalaApp")
  }

  case class WC(word: String, count: Int)
}
You can also define the key with a KeySelector function:
.keyBy(new KeySelector<WC, String>() {
    @Override
    public String getKey(WC value) throws Exception {
        return value.word;
    }
})
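Conceptually, a KeySelector is just a function from element to key. The following plain-Java sketch (an analogy under that assumption, using `java.util.function.Function` in place of Flink's KeySelector) shows how an arbitrary extractor function determines the groups:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class KeySelectorSketch {
    public static class WC {
        public String word;
        public int count;
        public WC(String word, int count) { this.word = word; this.count = count; }
    }

    // Plain-Java analogue of keyBy(KeySelector): group by an arbitrary
    // element-to-key function instead of a field position or name.
    public static <T, K> Map<K, List<T>> groupBy(List<T> in, Function<T, K> keySelector) {
        Map<K, List<T>> out = new HashMap<>();
        for (T t : in) {
            out.computeIfAbsent(keySelector.apply(t), k -> new ArrayList<>()).add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        List<WC> data = Arrays.asList(new WC("a", 1), new WC("a", 2), new WC("b", 1));
        // Comparable to .keyBy(value -> value.word) in the Flink snippet above.
        Map<String, List<WC>> groups = groupBy(data, wc -> wc.word);
        System.out.println(groups.get("a").size()); // 2
        System.out.println(groups.get("b").size()); // 1
    }
}
```

A KeySelector is the most flexible of the three approaches (position, field expression, selector function), since the key can be computed from the element rather than read directly from a field.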