Flink 1.8
DataSet Transformations
This document takes a deep dive into the transformations available on DataSets. For a general introduction to the Flink Java API, please refer to the Programming Guide.
For zipping elements in a DataSet with a dense index, please refer to the Zip Elements Guide.
Map
The Map transformation applies a user-defined map function to each element of a DataSet. It implements a one-to-one mapping, that is, the function must return exactly one element.
The following code transforms a DataSet of Integer pairs into a DataSet of Integers:
// MapFunction that adds two integer values
public class IntAdder implements MapFunction<Tuple2<Integer, Integer>, Integer> {
@Override
public Integer map(Tuple2<Integer, Integer> in) {
return in.f0 + in.f1;
}
}
// [...]
DataSet<Tuple2<Integer, Integer>> intPairs = // [...]
DataSet<Integer> intSums = intPairs.map(new IntAdder());
FlatMap
The FlatMap transformation applies a user-defined flat-map function to each element of a DataSet. This variant of a map function can return arbitrarily many result elements (including none) for each input element.
The following code transforms a DataSet of text lines into a DataSet of words:
// FlatMapFunction that tokenizes a String by whitespace characters and emits all String tokens.
public class Tokenizer implements FlatMapFunction<String, String> {
@Override
public void flatMap(String value, Collector<String> out) {
for (String token : value.split("\\W")) {
out.collect(token);
}
}
}
// [...]
DataSet<String> textLines = // [...]
DataSet<String> words = textLines.flatMap(new Tokenizer());
MapPartition
MapPartition transforms a parallel partition in a single function call. The map-partition function gets the partition as an Iterable and can produce an arbitrary number of result values. The number of elements in each partition depends on the degree of parallelism and previous operations.
The following code transforms a DataSet of text lines into a DataSet of counts per partition:
public class PartitionCounter implements MapPartitionFunction<String, Long> {
public void mapPartition(Iterable<String> values, Collector<Long> out) {
long c = 0;
for (String s : values) {
c++;
}
out.collect(c);
}
}
// [...]
DataSet<String> textLines = // [...]
DataSet<Long> counts = textLines.mapPartition(new PartitionCounter());
Filter
The Filter transformation applies a user-defined filter function to each element of a DataSet and retains only those elements for which the function returns true.
The following code removes all Integers smaller than zero from a DataSet:
// FilterFunction that filters out all Integers smaller than zero.
public class NaturalNumberFilter implements FilterFunction<Integer> {
@Override
public boolean filter(Integer number) {
return number >= 0;
}
}
// [...]
DataSet<Integer> intNumbers = // [...]
DataSet<Integer> naturalNumbers = intNumbers.filter(new NaturalNumberFilter());
IMPORTANT: The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption can lead to incorrect results.
Projection of Tuple DataSet
The Project transformation removes or moves Tuple fields of a Tuple DataSet. The project(int...) method selects the Tuple fields that should be retained by index and defines their order in the output Tuple.
Projections do not require the definition of a user function.
The following code shows different ways of applying a Project transformation on a DataSet:
DataSet<Tuple3<Integer, Double, String>> in = // [...]
// converts Tuple3<Integer, Double, String> into Tuple2<String, Integer>
DataSet<Tuple2<String, Integer>> out = in.project(2,0);
Projection with Type Hint
Note that the Java compiler cannot infer the return type of the project operator. This can cause problems if you call another operator on the result of a project operator, for example:
DataSet<Tuple5<String, String, String, String, String>> ds = // [...]
DataSet<Tuple1<String>> ds2 = ds.project(0).distinct(0);
This problem can be overcome by hinting the return type of the project operator as follows:
DataSet<Tuple1<String>> ds2 = ds.<Tuple1<String>>project(0).distinct(0);
Transformations on Grouped DataSet
The reduce operations can operate on grouped data sets. The key used for grouping can be specified in several ways: as a key expression, a key-selector function, one or more field position keys (Tuple DataSets only), or Case Class fields (Scala only).
Please look at the reduce examples below to see how the grouping keys are specified; a compact sketch of the most common variants follows.
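As a minimal illustrative sketch (not part of the original page), the three most common ways to specify a grouping key on an assumed DataSet<Tuple2<String, Integer>> named data might look like this:
// Hypothetical sketch; "data" is an assumed DataSet<Tuple2<String, Integer>>.
data.groupBy(0);    // field position key (Tuple DataSets only)
data.groupBy("f0"); // key expression
data.groupBy(new KeySelector<Tuple2<String, Integer>, String>() { // key-selector function
  @Override
  public String getKey(Tuple2<String, Integer> t) { return t.f0; }
});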
Reduce on Grouped DataSet
A Reduce transformation that is applied on a grouped DataSet reduces each group to a single element using a user-defined reduce function. For each group of input elements, a reduce function successively combines pairs of elements into one element until only a single element for each group remains.
Note that for a ReduceFunction the keyed fields of the returned object should match the input values. This is because reduce is implicitly combinable, and objects emitted from the combine operator are again grouped by key when passed to the reduce operator.
Reduce on DataSet Grouped by Key Expression
Key expressions specify one or more fields of each element of a DataSet. Each key expression is either the name of a public field or a getter method. A dot can be used to drill down into objects. The key expression "*" selects all fields. The following code shows how to group a POJO DataSet using a key expression and to aggregate it with a reduce function.
// some ordinary POJO
public class WC {
public String word;
public int count;
// [...]
}
// ReduceFunction that sums Integer attributes of a POJO
public class WordCounter implements ReduceFunction<WC> {
@Override
public WC reduce(WC in1, WC in2) {
return new WC(in1.word, in1.count + in2.count);
}
}
// [...]
DataSet<WC> words = // [...]
DataSet<WC> wordCounts = words
// DataSet grouping on field "word"
.groupBy("word")
// apply ReduceFunction on grouped DataSet
.reduce(new WordCounter());
Reduce on DataSet Grouped by KeySelector Function
A key-selector function extracts a key value from each element of a DataSet. The extracted key value is used to group the DataSet. The following code shows how to group a POJO DataSet using a key-selector function and to aggregate it with a reduce function.
// some ordinary POJO
public class WC {
public String word;
public int count;
// [...]
}
// ReduceFunction that sums Integer attributes of a POJO
public class WordCounter implements ReduceFunction<WC> {
@Override
public WC reduce(WC in1, WC in2) {
return new WC(in1.word, in1.count + in2.count);
}
}
// [...]
DataSet<WC> words = // [...]
DataSet<WC> wordCounts = words
// DataSet grouping on field "word"
.groupBy(new SelectWord())
// apply ReduceFunction on grouped DataSet
.reduce(new WordCounter());
public class SelectWord implements KeySelector<WC, String> {
@Override
public String getKey(WC w) {
return w.word;
}
}
Reduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
Field position keys specify one or more fields of a Tuple DataSet that are used as grouping keys. The following code shows how to use field position keys and apply a reduce function:
DataSet<Tuple3<String, Integer, Double>> tuples = // [...]
DataSet<Tuple3<String, Integer, Double>> reducedTuples = tuples
// group DataSet on first and second field of Tuple
.groupBy(0, 1)
// apply ReduceFunction on grouped DataSet
.reduce(new MyTupleReducer());
Reduce on DataSet Grouped by Case Class Fields
When using Case Classes you can also specify the grouping key using the names of the fields:
Java has no case classes.
Scala:
case class MyClass(val a: String, b: Int, c: Double)
val tuples: DataSet[MyClass] = // [...]
// group on the first and second field
val reducedTuples = tuples.groupBy("a", "b").reduce { ... }
GroupReduce on Grouped DataSet
A GroupReduce transformation that is applied on a grouped DataSet calls a user-defined group-reduce function for each group. The difference between this and Reduce is that the user-defined function gets the whole group at once. The function is invoked with an Iterable over all elements of a group and can return an arbitrary number of result elements.
GroupReduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
The following code shows how duplicate strings can be removed from a DataSet grouped by Integer.
public class DistinctReduce
implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
@Override
public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
Set<String> uniqStrings = new HashSet<String>();
Integer key = null;
// add all strings of the group to the set
for (Tuple2<Integer, String> t : in) {
key = t.f0;
uniqStrings.add(t.f1);
}
// emit all unique strings.
for (String s : uniqStrings) {
out.collect(new Tuple2<Integer, String>(key, s));
}
}
}
// [...]
DataSet<Tuple2<Integer, String>> input = // [...]
DataSet<Tuple2<Integer, String>> output = input
.groupBy(0) // group DataSet by the first tuple field
.reduceGroup(new DistinctReduce()); // apply GroupReduceFunction
GroupReduce on DataSet Grouped by Key Expression, KeySelector Function, or Case Class Fields
Key expressions, key-selector functions, and case class fields work analogously to the Reduce transformation.
GroupReduce on Sorted Groups
A group-reduce function accesses the elements of a group using an Iterable. Optionally, the Iterable can hand out the elements of a group in a specified order. In many cases this can help to reduce the complexity of a user-defined group-reduce function and improve its efficiency.
The following code shows another example of how to remove duplicate strings from a DataSet grouped by Integer and sorted by String.
package org.apache.flink.examples.java.dataBatchAPI;
import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
public class SortGroupDemo {
public static void main(String[] args) {
try {
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
Tuple2<Integer, String> tuple1 = new Tuple2<>(1, "bb");
Tuple2<Integer, String> tuple2 = new Tuple2<>(4, "bb");
Tuple2<Integer, String> tuple3 = new Tuple2<>(33, "cc");
Tuple2<Integer, String> tuple4 = new Tuple2<>(4, "bb");
Tuple2<Integer, String> tuple5 = new Tuple2<>(-5, "bb");
Tuple2<Integer, String> tuple6 = new Tuple2<>(-5, "ff");
Tuple2<Integer, String> tuple7 = new Tuple2<>(-1, "hh");
Tuple2<Integer, String> tuple8 = new Tuple2<>(-3, "jj");
DataSet<Tuple2<Integer, String>> text = env.fromElements(tuple1, tuple2, tuple3, tuple4,
tuple5, tuple6, tuple7, tuple8);
DataSet<Tuple2<Integer, String>> output = text
.groupBy(0) // group DataSet by first field
.sortGroup(1, Order.ASCENDING) // sort groups on second tuple field
.reduceGroup(new DistinctReduce());
output.print();
} catch (Exception e) {
e.printStackTrace();
}
}
}
class DistinctReduce
implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {
@Override
public void reduce(Iterable<Tuple2<Integer, String>> in, Collector<Tuple2<Integer, String>> out) {
Integer key = null;
String comp = null;
for (Tuple2<Integer, String> t : in) {
key = t.f0;
String next = t.f1;
// check if strings are different
if (comp == null || !next.equals(comp)) {
out.collect(new Tuple2<>(key, next));
comp = next;
}
}
}
}
Output: elements with the same key are assigned to the same group, and elements with the same value within a group are filtered out.
Note: GroupSort often comes for free if the grouping is established using a sort-based execution strategy of an operator before the reduce operation.
Combinable GroupReduceFunctions
In contrast to a reduce function, a group-reduce function is not implicitly combinable (implicit combining requires that a reduce function's input and output types are identical). In order to make a group-reduce function combinable, it must implement the GroupCombineFunction interface.
Important: The generic input and output types of the GroupCombineFunction interface must be equal to the generic input type of the GroupReduceFunction, as shown in the following example:
// Combinable GroupReduceFunction that computes a sum.
public class MyCombinableGroupReducer implements
GroupReduceFunction<Tuple2<String, Integer>, String>,
GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
{
// executed second, on the pre-combined results
@Override
public void reduce(Iterable<Tuple2<String, Integer>> in,
Collector<String> out) {
String key = null;
int sum = 0;
for (Tuple2<String, Integer> curr : in) {
key = curr.f0;
sum += curr.f1;
}
// concat key and sum and emit
out.collect(key + "-" + sum);
}
// executed first, as a pre-aggregation (combine) step
@Override
public void combine(Iterable<Tuple2<String, Integer>> in,
Collector<Tuple2<String, Integer>> out) {
String key = null;
int sum = 0;
for (Tuple2<String, Integer> curr : in) {
key = curr.f0;
sum += curr.f1;
}
// emit tuple with key and sum
out.collect(new Tuple2<>(key, sum));
}
}
GroupCombine on a Grouped DataSet
The GroupCombine transformation is the generalized form of the combine step in a combinable GroupReduceFunction. It is generalized in the sense that it allows combining an input type I into an arbitrary output type O. In contrast, the combine step in a GroupReduce only allows combining from input type I to output type I, because the reduce step of the GroupReduceFunction expects input type I.
In some applications it is desirable to combine a DataSet into an intermediate format before performing further transformations (e.g. to reduce the data size). This can be achieved with a CombineGroup transformation at very little cost.
Note: GroupCombine on a grouped DataSet is performed in memory with a greedy strategy which may not process all data at once but in multiple steps. It is also performed on the individual partitions without a data exchange like in a GroupReduce transformation. The results may therefore only be partial aggregates, so a GroupReduceFunction usually still has to be applied afterwards.
The following example demonstrates the use of a CombineGroup transformation for an alternative WordCount implementation.
package org.apache.flink.examples.java.dataBatchAPI;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.GroupCombineFunction;
import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.operators.GroupCombineOperator;
import org.apache.flink.api.java.operators.GroupReduceOperator;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.examples.java.wordcount.util.WordCountData;
import org.apache.flink.util.Collector;
public class CombineGroupDemo {
public static void main(String[] args) {
try {
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> text = WordCountData.getDefaultTextLineDataSet(env);
// env.setParallelism(1);
DataSet<Tuple2<String, Integer>> combineGroup = text.flatMap(new FlatMapFunction<String, String>() {
@Override
public void flatMap(String value, Collector<String> out) throws Exception {
String[] tokens = value.toLowerCase().split("\\W+");
// emit the pairs
for (String token : tokens) {
if (token.length() > 0) {
out.collect(token);
}
}
}
}).groupBy((String value) -> value)
.combineGroup(new GroupCombineFunction<String, Tuple2<String, Integer>>() {
@Override
public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) throws Exception {
String key = null;
int count = 0;
for (String word : words) {
key = word;
count++;
}
// emit tuple with word and count
out.collect(new Tuple2<>(key, count));
}
});
combineGroup.print();
GroupReduceOperator<Tuple2<String, Integer>, Object> output =
combineGroup.groupBy(0)
.reduceGroup(new GroupReduceFunction<Tuple2<String, Integer>, Object>() {
@Override
public void reduce(Iterable<Tuple2<String, Integer>> values, Collector<Object> out) throws Exception {
String key = null;
int count = 0;
// sum up the partial counts emitted by the combine step
for (Tuple2<String, Integer> value : values) {
key = value.f0;
count += value.f1;
}
out.collect(new Tuple2<>(key, count));
}
});
output.print();
} catch (Exception e) {
e.printStackTrace();
}
}
}
The WordCountData helper class used above provides the default input text:
package org.apache.flink.examples.java.wordcount.util;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
/**
* Provides the default data sets used for the WordCount example program.
* The default data sets are used, if no parameters are given to the program.
*
*/
public class WordCountData {
public static final String[] WORDS = new String[] {
"To be, or not to be,--that is the question:--",
"Whether 'tis nobler in the mind to suffer",
"The slings and arrows of outrageous fortune",
"Or to take arms against a sea of troubles,",
"And by opposing end them?--To die,--to sleep,--",
"No more; and by a sleep to say we end",
"The heartache, and the thousand natural shocks",
"That flesh is heir to,--'tis a consummation",
"Devoutly to be wish'd. To die,--to sleep;--",
"To sleep! perchance to dream:--ay, there's the rub;",
"For in that sleep of death what dreams may come,",
"When we have shuffled off this mortal coil,",
"Must give us pause: there's the respect",
"That makes calamity of so long life;",
"For who would bear the whips and scorns of time,",
"The oppressor's wrong, the proud man's contumely,",
"The pangs of despis'd love, the law's delay,",
"The insolence of office, and the spurns",
"That patient merit of the unworthy takes,",
"When he himself might his quietus make",
"With a bare bodkin? who would these fardels bear,",
"To grunt and sweat under a weary life,",
"But that the dread of something after death,--",
"The undiscover'd country, from whose bourn",
"No traveller returns,--puzzles the will,",
"And makes us rather bear those ills we have",
"Than fly to others that we know not of?",
"Thus conscience does make cowards of us all;",
"And thus the native hue of resolution",
"Is sicklied o'er with the pale cast of thought;",
"And enterprises of great pith and moment,",
"With this regard, their currents turn awry,",
"And lose the name of action.--Soft you now!",
"The fair Ophelia!--Nymph, in thy orisons",
"Be all my sins remember'd."
};
// public static final Integer[] MNUMBER = {1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
public static DataSet<String> getDefaultTextLineDataSet(ExecutionEnvironment env)
{
return env.fromElements(WORDS);
// return env.fromElements(MNUMBER);
}
}
Note: In this example, the output of the GroupCombineFunction and the GroupReduceFunction appears locally sorted in lexicographic order per partition.
The alternative WordCount implementation above demonstrates how the GroupCombine combines words before performing the GroupReduce transformation. The above example is only a proof of concept. Note how the combine step changes the type of the DataSet, which would normally require an additional Map transformation before the GroupReduce is executed.
Aggregate on Grouped Tuple DataSet
There are some common aggregation operations that are frequently used. The Aggregate transformation provides the following built-in aggregation functions: Sum, Min, and Max.
The Aggregate transformation can only be applied to Tuple DataSets and supports only field position keys for grouping.
The following code shows how to apply an Aggregation transformation on a DataSet grouped by field position keys:
package org.apache.flink.examples.java.dataBatchAPI;
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.SortPartitionOperator;
import org.apache.flink.api.java.tuple.Tuple3;
import static org.apache.flink.api.java.aggregation.Aggregations.MIN;
import static org.apache.flink.api.java.aggregation.Aggregations.SUM;
public class MinByDemo {
public static void main(String[] args) {
try {
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
Tuple3<String, Integer, Integer> tuple1 = new Tuple3<>("aa", 1, -5);
Tuple3<String, Integer, Integer> tuple2 = new Tuple3<>("aa", 2, -1);
Tuple3<String, Integer, Integer> tuple3 = new Tuple3<>("aa", 3, 3);
Tuple3<String, Integer, Integer> tuple4 = new Tuple3<>("dd", 4, -2);
Tuple3<String, Integer, Integer> tuple5 = new Tuple3<>("aa", -5, 2);
Tuple3<String, Integer, Integer> tuple6 = new Tuple3<>("ff", -5, -2);
Tuple3<String, Integer, Integer> tuple7 = new Tuple3<>("hh", -1, 0);
Tuple3<String, Integer, Integer> tuple8 = new Tuple3<>("jj", -3, 6);
DataSet<Tuple3<String, Integer, Integer>> text = env.fromElements(tuple1, tuple2, tuple3, tuple4, tuple5, tuple6, tuple7, tuple8);
// text.minBy(1,2).print();
DataSet<Tuple3<String, Integer, Integer>> output = text
.groupBy(0) // group DataSet on the first field
.aggregate(SUM, 1) // compute sum of the second field
.and(MIN, 2); // compute minimum of the third field
output.print();
} catch (Exception e) {
e.printStackTrace();
}
}
}
To apply multiple aggregations on a DataSet, it is necessary to use the .and() function after the first aggregation: .aggregate(SUM, 1).and(MIN, 2) produces the sum of field 1 and the minimum of field 2 of the original DataSet. In contrast, .aggregate(SUM, 1).aggregate(MIN, 2) applies an aggregation on an aggregation: in the given example, it produces the minimum of field 2 after calculating the sum of field 1 grouped by field 0.
Note: The set of aggregation functions will be extended in the future.
DataSet<Tuple3<String, Integer, Integer>> output = text
.groupBy(0) // group DataSet on the first field
.aggregate(SUM, 1) // compute sum of the second field
.aggregate(SUM, 2); // apply a second aggregation on the result of the first
output.print();
MinBy / MaxBy on Grouped Tuple DataSet
The MinBy (MaxBy) transformation selects a single tuple for each group of tuples. The selected tuple is the one whose values of one or more specified fields are minimal (maximal). The fields used for comparison must be valid key fields, i.e., comparable. If multiple tuples have the minimum (maximum) field values, an arbitrary one of these tuples is returned.
The following code shows how to select the tuple with the minimum values for the two Integer fields from a DataSet of Tuple3<String, Integer, Integer> elements:
package org.apache.flink.examples.java.dataBatchAPI;
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.SortPartitionOperator;
import org.apache.flink.api.java.tuple.Tuple3;
import static org.apache.flink.api.java.aggregation.Aggregations.MIN;
import static org.apache.flink.api.java.aggregation.Aggregations.SUM;
public class MinByDemo {
public static void main(String[] args) {
try {
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
Tuple3<String, Integer, Integer> tuple1 = new Tuple3<>("aa", 1, -5);
Tuple3<String, Integer, Integer> tuple2 = new Tuple3<>("aa", 2, -1);
Tuple3<String, Integer, Integer> tuple3 = new Tuple3<>("aa", 3, 3);
Tuple3<String, Integer, Integer> tuple4 = new Tuple3<>("dd", 4, -2);
Tuple3<String, Integer, Integer> tuple5 = new Tuple3<>("aa", -5, 2);
Tuple3<String, Integer, Integer> tuple6 = new Tuple3<>("ff", -5, -2);
Tuple3<String, Integer, Integer> tuple7 = new Tuple3<>("hh", -1, 0);
Tuple3<String, Integer, Integer> tuple8 = new Tuple3<>("jj", -3, 6);
DataSet<Tuple3<String, Integer, Integer>> text = env.fromElements(tuple1, tuple2, tuple3, tuple4, tuple5, tuple6, tuple7, tuple8);
text.minBy(1,2).print();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Reduce on Full DataSet
The Reduce transformation applies a user-defined reduce function to all elements of a DataSet. The reduce function subsequently combines pairs of elements into one element until only a single element remains.
The following code shows how to sum all elements of an Integer DataSet:
// ReduceFunction that sums Integers
public class IntSummer implements ReduceFunction<Integer> {
@Override
public Integer reduce(Integer num1, Integer num2) {
return num1 + num2;
}
}
// [...]
DataSet<Integer> intNumbers = // [...]
DataSet<Integer> sum = intNumbers.reduce(new IntSummer());
Reducing a full DataSet using the Reduce transformation implies that the final Reduce operation cannot be done in parallel. However, a reduce function is automatically combinable, such that a Reduce transformation does not limit scalability for most use cases.
GroupReduce on Full DataSet
The GroupReduce transformation applies a user-defined group-reduce function to all elements of a DataSet. The group-reduce can iterate over all elements of the DataSet and return an arbitrary number of result elements.
The following example shows how to apply a GroupReduce transformation on a full DataSet:
DataSet<Integer> input = // [...]
// apply a (preferably combinable) GroupReduceFunction to a DataSet
DataSet<Double> output = input.reduceGroup(new MyGroupReducer());
Note: A GroupReduce transformation on a full DataSet cannot be done in parallel if the group-reduce function is not combinable. Therefore, this can be a very compute-intensive operation. See the paragraph on "Combinable GroupReduceFunctions" above to learn how to implement a combinable group-reduce function.
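MyGroupReducer is not defined on this page; a minimal, hypothetical sketch that matches the Integer-to-Double types used above (here: computing the average of all elements) could look like this:
// Hypothetical sketch of MyGroupReducer (an assumption, not from the original page).
public class MyGroupReducer implements GroupReduceFunction<Integer, Double> {
  @Override
  public void reduce(Iterable<Integer> values, Collector<Double> out) {
    long sum = 0;
    long count = 0;
    for (Integer v : values) {
      sum += v;
      count++;
    }
    if (count > 0) {
      out.collect((double) sum / count); // emit a single result for the whole DataSet
    }
  }
}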
GroupCombine on a Full DataSet
The GroupCombine on a full DataSet works similarly to the GroupCombine on a grouped DataSet. The data is partitioned on all nodes and then combined in a greedy fashion (i.e., only data fitting into memory is combined at once).
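The original page gives no code for this case; as a small sketch under assumed types, a combineGroup call on a full DataSet<String> that pre-aggregates word counts per in-memory chunk might look like this (the combiner class name is hypothetical):
// Hypothetical sketch: greedily pre-aggregate word counts per in-memory chunk,
// without grouping and without a network shuffle.
public class LocalWordCountCombiner
    implements GroupCombineFunction<String, Tuple2<String, Integer>> {
  @Override
  public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) {
    Map<String, Integer> counts = new HashMap<>();
    for (String word : words) {
      counts.merge(word, 1, Integer::sum);
    }
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      out.collect(new Tuple2<>(e.getKey(), e.getValue()));
    }
  }
}
// [...]
DataSet<String> input = // [...]
// partial counts only; a grouped reduce is still needed for exact global counts
DataSet<Tuple2<String, Integer>> partialCounts = input.combineGroup(new LocalWordCountCombiner());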
Aggregate on Full Tuple DataSet
There are some common aggregation operations that are frequently used. The Aggregate transformation provides the following built-in aggregation functions: Sum, Min, and Max.
The Aggregate transformation can only be applied to Tuple DataSets.
The following code shows how to apply an Aggregation transformation on a full DataSet:
DataSet<Tuple2<Integer, Double>> input = // [...]
DataSet<Tuple2<Integer, Double>> output = input
.aggregate(SUM, 0) // compute sum of the first field
.and(MIN, 1); // compute minimum of the second field
Note: Extending the set of supported aggregation functions is on the roadmap.
MinBy / MaxBy on Full Tuple DataSet
The MinBy (MaxBy) transformation selects a single tuple from a DataSet of tuples. The selected tuple is the one whose values of one or more specified fields are minimal (maximal). The fields used for comparison must be valid key fields, i.e., comparable. If multiple tuples have the minimum (maximum) field values, an arbitrary one of these tuples is returned.
The following code shows how to select the tuple with the maximum values for the Integer and Double fields from a DataSet of Tuple3<Integer, String, Double> elements:
DataSet<Tuple3<Integer, String, Double>> input = // [...]
DataSet<Tuple3<Integer, String, Double>> output = input
.maxBy(0, 2); // select tuple with maximum values for first and third field.
The difference between calling it with and without groupBy:
text.groupBy(0).minBy(1,2).print();
With groupBy(), the minimum tuple is selected within each group after grouping;
without groupBy(), a single minimum tuple is selected over the whole DataSet.
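Putting the two variants side by side on the Tuple3 data from the MinByDemo above (a small illustrative sketch, not from the original page):
// per-group minimum: one tuple per distinct value of field 0
text.groupBy(0).minBy(1, 2).print();
// global minimum: a single tuple over the whole DataSet
text.minBy(1, 2).print();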
Distinct
The Distinct transformation finds the distinct elements of a DataSet and removes duplicate elements. The following code removes all duplicate elements from a DataSet:
DataSet<Tuple2<Integer, Double>> input = // [...]
DataSet<Tuple2<Integer, Double>> output = input.distinct();
It is also possible to change how the distinction of the elements in a DataSet is decided, using one or more field position keys (Tuple DataSets only), a key-selector function, or a key expression.
Distinct with Field Position Keys
package org.apache.flink.examples.java.dataBatchAPI;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.aggregation.Aggregations;
import org.apache.flink.api.java.tuple.Tuple3;
public class DistinctDemo {
public static void main(String[] args) {
try {
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
Tuple3<String, Integer, Integer> tuple1 = new Tuple3<>("aa", 1, -5);
Tuple3<String, Integer, Integer> tuple2 = new Tuple3<>("bb", 2, -1);
Tuple3<String, Integer, Integer> tuple3 = new Tuple3<>("bb", 2, -1);
Tuple3<String, Integer, Integer> tuple4 = new Tuple3<>("dd", 4, -2);
Tuple3<String, Integer, Integer> tuple5 = new Tuple3<>("ee", -5, 2);
Tuple3<String, Integer, Integer> tuple6 = new Tuple3<>("ff", -2, 9);
Tuple3<String, Integer, Integer> tuple7 = new Tuple3<>("hh", -2, 0);
Tuple3<String, Integer, Integer> tuple8 = new Tuple3<>("jj", -3, 6);
DataSet<Tuple3<String, Integer, Integer>> text = env.fromElements(tuple1, tuple2, tuple3, tuple4, tuple5, tuple6, tuple7, tuple8).setParallelism(2);
text.distinct(1).print();
// env.execute("AggregateDemo");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Distinct with KeySelector Function
private static class AbsSelector implements KeySelector<Integer, Integer> {
private static final long serialVersionUID = 1L;
@Override
public Integer getKey(Integer t) {
return Math.abs(t);
}
}
DataSet<Integer> input = // [...]
DataSet<Integer> output = input.distinct(new AbsSelector());
Distinct with Key Expression
// some ordinary POJO
public class CustomType {
public String aName;
public int aNumber;
// [...]
}
DataSet<CustomType> input = // [...]
DataSet<CustomType> output = input.distinct("aName", "aNumber");
It is also possible to indicate that all fields should be used with the wildcard character:
DataSet<CustomType> input = // [...]
DataSet<CustomType> output = input.distinct("*");
Join
The Join transformation joins two DataSets into one DataSet. The elements of both DataSets are joined on one or more keys, which can be specified using a key expression, a key-selector function, one or more field position keys (Tuple DataSets only), or case class fields.
There are a few different ways to perform a Join transformation, as shown below.
Default Join (Join into Tuple2)
The default Join transformation produces a new Tuple DataSet with two fields. Each tuple holds a joined element of the first input DataSet in the first tuple field and a matching element of the second input DataSet in the second field.
The following code shows a default Join transformation using key expressions:
public static class User { public String name; public int zip; }
public static class Store { public Manager mgr; public int zip; }
DataSet<User> input1 = // [...]
DataSet<Store> input2 = // [...]
// result dataset is typed as Tuple2<User, Store>
DataSet<Tuple2<User, Store>>
result = input1.join(input2)
.where("zip") // key of the first input (users)
.equalTo("zip"); // key of the second input (stores)
Join with Join Function
A Join transformation can also call a user-defined join function to process joining tuples. A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element.
The following code performs a join of a DataSet of custom Java objects with a Tuple DataSet using key expressions and shows how to use a user-defined join function:
// some POJO
public class Rating {
public String name;
public String category;
public int points;
}
// Join function that joins a custom POJO with a Tuple
public class PointWeighter
implements JoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
@Override
public Tuple2<String, Double> join(Rating rating, Tuple2<String, Double> weight) {
// multiply the points and rating and construct a new output tuple
return new Tuple2<String, Double>(rating.name, rating.points * weight.f1);
}
}
DataSet<Rating> ratings = // [...]
DataSet<Tuple2<String, Double>> weights = // [...]
DataSet<Tuple2<String, Double>>
weightedRatings =
ratings.join(weights)
// key of the first input
.where("category")
// key of the second input
.equalTo("f0")
// applying the JoinFunction on joining pairs
.with(new PointWeighter());
Join with Flat-Join Function
Analogous to Map and FlatMap, a FlatJoin behaves in the same way as a Join, but instead of returning one element, it can return (collect) zero, one, or more elements.
public class PointWeighter
implements FlatJoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {
@Override
public void join(Rating rating, Tuple2<String, Double> weight,
Collector<Tuple2<String, Double>> out) {
if (weight.f1 > 0.1) {
out.collect(new Tuple2<String, Double>(rating.name, rating.points * weight.f1));
}
}
}
DataSet<Tuple2<String, Double>>
weightedRatings =
ratings.join(weights) // [...]
// key of the first input
.where("category")
// key of the second input
.equalTo("f0")
// applying the JoinFunction on joining pairs
.with(new PointWeighter());
Join with Projection (Java/Python only)
A Join transformation can construct result tuples using a projection, as shown here:
DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
DataSet<Tuple2<Integer, Double>> input2 = // [...]
DataSet<Tuple4<Integer, String, Double, Byte>>
result =
input1.join(input2)
// key definition on first DataSet using a field position key
.where(0)
// key definition of second DataSet using a field position key
.equalTo(0)
// select and reorder fields of matching tuples
.projectFirst(0,2).projectSecond(1).projectFirst(1);
projectFirst(int...) and projectSecond(int...) select the fields of the first and second joined input that should be assembled into an output Tuple. The order of the indexes defines the order of the fields in the output tuple. The join projection also works for non-Tuple DataSets. In this case, projectFirst() or projectSecond() must be called without arguments to add a joined element to the output Tuple.
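As a small illustrative sketch (reusing the Rating POJO and weights DataSet from the join-function example above as assumptions), projecting a non-Tuple input adds the whole element to the output Tuple:
// Hypothetical sketch: projectFirst() without arguments inserts the full Rating object.
DataSet<Rating> ratings = // [...]
DataSet<Tuple2<String, Double>> weights = // [...]
DataSet<Tuple2<Rating, Double>> result =
  ratings.join(weights)
         .where("category")
         .equalTo("f0")
         .projectFirst()     // add the whole Rating element
         .projectSecond(1);  // add the Double weight field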
Join with DataSet Size Hint
In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to join, as shown here:
DataSet<Tuple2<Integer, String>> input1 = // [...]
DataSet<Tuple2<Integer, String>> input2 = // [...]
DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
result1 =
// hint that the second DataSet is very small
input1.joinWithTiny(input2)
.where(0)
.equalTo(0);
DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>>
result2 =
// hint that the second DataSet is very large
input1.joinWithHuge(input2)
.where(0)
.equalTo(0);
Join Algorithm Hints
The Flink runtime can execute joins in various ways. Each possible way outperforms the others under different circumstances. The system tries to pick a reasonable way automatically, but allows you to manually pick a strategy, in case you want to enforce a specific way of executing the join.
DataSet<SomeType> input1 = // [...]
DataSet<AnotherType> input2 = // [...]
DataSet<Tuple2<SomeType, AnotherType>> result =
input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
.where("id").equalTo("key");
The following hints are available:
OPTIMIZER_CHOOSES: Equivalent to not giving a hint at all; leaves the choice to the system.
BROADCAST_HASH_FIRST: Broadcasts the first input and builds a hash table from it, which is probed by the second input. A good strategy if the first input is very small.
BROADCAST_HASH_SECOND: Broadcasts the second input and builds a hash table from it, which is probed by the first input. A good strategy if the second input is very small.
REPARTITION_HASH_FIRST: The system partitions (shuffles) each input (unless an input is already partitioned) and builds a hash table from the first input. A good strategy if the first input is smaller than the second, but both inputs are still large.
REPARTITION_HASH_SECOND: The system partitions (shuffles) each input (unless an input is already partitioned) and builds a hash table from the second input. A good strategy if the second input is smaller than the first, but both inputs are still large.
REPARTITION_SORT_MERGE: The system partitions (shuffles) each input (unless an input is already partitioned) and sorts each input (unless it is already sorted). The inputs are joined by a streamed merge of the sorted inputs. A good strategy if one or both of the inputs are already sorted.
OuterJoin
The OuterJoin transformation performs a left, right, or full outer join on two DataSets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found on the other side. Matching pairs of elements (or one element and a null value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements.
The elements of both DataSets are joined on one or more keys, which can be specified using a key expression, a key-selector function, one or more field position keys (Tuple DataSets only), or case class fields.
OuterJoins are only supported by the Java and Scala DataSet API.
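A minimal sketch of the three variants (input1, input2, and MyJoinFunction are assumptions, not from the original page):
// A join (or flat-join) function is required, because the non-matching side is handed in as null.
input1.leftOuterJoin(input2).where(0).equalTo(0).with(new MyJoinFunction());
input1.rightOuterJoin(input2).where(0).equalTo(0).with(new MyJoinFunction());
input1.fullOuterJoin(input2).where(0).equalTo(0).with(new MyJoinFunction());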
OuterJoin with Join Function
An OuterJoin transformation calls a user-defined join function to process joining tuples. A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element. Depending on the type of the outer join (left, right, full), one of the two input elements of the join function can be null.
The following code performs a left outer join of a DataSet of custom Java objects with a Tuple DataSet using key expressions and shows how to use a user-defined join function:
// some POJO
public class Rating {
public String name;
public String category;
public int points;
}
// Join function that joins a custom POJO with a Tuple
public class PointAssigner
implements JoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
@Override
public Tuple2<String, Integer> join(Tuple2<String, String> movie, Rating rating) {
// Assigns the rating points to the movie.
// NOTE: rating might be null
return new Tuple2<String, Integer>(movie.f0, rating == null ? -1 : rating.points);
}
}
DataSet<Tuple2<String, String>> movies = // [...]
DataSet<Rating> ratings = // [...]
DataSet<Tuple2<String, Integer>>
moviesWithPoints =
movies.leftOuterJoin(ratings)
// key of the first input
.where("f0")
// key of the second input
.equalTo("name")
// applying the JoinFunction on joining pairs
.with(new PointAssigner());
OuterJoin with Flat-Join Function
Analogous to Map and FlatMap, an OuterJoin with a flat-join function behaves in the same way as an OuterJoin with a join function, but instead of returning one element, it can return (collect) zero, one, or more elements.
public class PointAssigner
implements FlatJoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {
@Override
public void join(Tuple2<String, String> movie, Rating rating,
Collector<Tuple2<String, Integer>> out) {
if (rating == null ) {
out.collect(new Tuple2<String, Integer>(movie.f0, -1));
} else if (rating.points < 10) {
out.collect(new Tuple2<String, Integer>(movie.f0, rating.points));
} else {
// do not emit
}
}
}
DataSet<Tuple2<String, Integer>>
moviesWithPoints =
movies.leftOuterJoin(ratings) // [...]
// key of the first input
.where("f0")
// key of the second input
.equalTo("name")
// applying the JoinFunction on joining pairs
.with(new PointAssigner());
Join Algorithm Hints
The Flink runtime can execute outer joins in various ways. Each possible way outperforms the others under different circumstances. The system tries to pick a reasonable way automatically, but allows you to manually pick a strategy, in case you want to enforce a specific way of executing the outer join.
DataSet<SomeType> input1 = // [...]
DataSet<AnotherType> input2 = // [...]
DataSet<Tuple2<SomeType, AnotherType>> result1 =
input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
.where("id").equalTo("key");
DataSet<Tuple2<SomeType, AnotherType>> result2 =
input1.rightOuterJoin(input2, JoinHint.BROADCAST_HASH_FIRST)
.where("id").equalTo("key");
Note: Not every execution strategy is supported by every outer join type yet. LeftOuterJoin supports OPTIMIZER_CHOOSES, BROADCAST_HASH_SECOND, REPARTITION_HASH_SECOND, and REPARTITION_SORT_MERGE. RightOuterJoin supports OPTIMIZER_CHOOSES, BROADCAST_HASH_FIRST, REPARTITION_HASH_FIRST, and REPARTITION_SORT_MERGE. FullOuterJoin supports OPTIMIZER_CHOOSES and REPARTITION_SORT_MERGE.
Cross
The Cross transformation combines two DataSets into one DataSet. It builds all pairwise combinations of the elements of both input DataSets, i.e., it builds a Cartesian product. The Cross transformation either calls a user-defined cross function on each pair of elements or outputs a Tuple2. Both modes are shown below.
Note: Cross is potentially a very compute-intensive operation which can challenge even large compute clusters!
Cross with User-Defined Function
A Cross transformation can call a user-defined cross function. A cross function receives one element of the first input and one element of the second input and returns exactly one result element.
The following code shows how to apply a Cross transformation on two DataSets using a cross function:
public class Coord {
public int id;
public int x;
public int y;
}
// CrossFunction computes the Euclidean distance between two Coord objects.
public class EuclideanDistComputer
implements CrossFunction<Coord, Coord, Tuple3<Integer, Integer, Double>> {
@Override
public Tuple3<Integer, Integer, Double> cross(Coord c1, Coord c2) {
// compute Euclidean distance of coordinates
double dist = sqrt(pow(c1.x - c2.x, 2) + pow(c1.y - c2.y, 2));
return new Tuple3<Integer, Integer, Double>(c1.id, c2.id, dist);
}
}
DataSet<Coord> coords1 = // [...]
DataSet<Coord> coords2 = // [...]
DataSet<Tuple3<Integer, Integer, Double>>
distances =
coords1.cross(coords2)
// apply CrossFunction
.with(new EuclideanDistComputer());
Cross with Projection
A Cross transformation can also construct result tuples using a projection, as shown here:
Using a cross function:
public class CrossDemoComputer
implements CrossFunction<Tuple3<Integer, Byte, String>, Tuple2<Integer, Double>, Tuple4<Integer, Byte, Integer, Double>> {
@Override
public Tuple4<Integer, Byte, Integer, Double> cross(Tuple3<Integer, Byte, String> a, Tuple2<Integer, Double> b) {
return new Tuple4<>(b.f0, a.f1, a.f0, b.f1);
}
}
DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
DataSet<Tuple2<Integer, Double>> input2 = // [...]
DataSet<Tuple4<Integer, Byte, Integer, Double>>
result =
input1.cross(input2)
// apply CrossFunction
.with(new CrossDemoComputer());
This is equivalent to the following implementation using a projection:
Using a projection:
DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
DataSet<Tuple2<Integer, Double>> input2 = // [...]
DataSet<Tuple4<Integer, Byte, Integer, Double>>
result =
input1.cross(input2)
// select and reorder fields of matching tuples
.projectSecond(0).projectFirst(1,0).projectSecond(1);
The field selection in a Cross projection works the same way as in the projection of Join results.
Cross with DataSet Size Hint
In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to cross, as shown here:
DataSet<Tuple2<Integer, String>> input1 = // [...]
DataSet<Tuple2<Integer, String>> input2 = // [...]
DataSet<Tuple3<Integer, Integer, String>>
udfResult =
// hint that the second DataSet is very small
input1.crossWithTiny(input2)
// apply any Cross function (or projection)
.with(new MyCrosser());
DataSet<Tuple3<Integer, String, String>>
projectResult =
// hint that the second DataSet is very large
input1.crossWithHuge(input2)
// apply a projection (or any Cross function)
.projectFirst(0,1).projectSecond(1);
CoGroup
The CoGroup transformation jointly processes groups of two DataSets. Both DataSets are grouped on a defined key, and groups of both DataSets that share the same key are handed together to a user-defined co-group function. If for a specific key only one DataSet has a group, the co-group function is called with this group and an empty group. A co-group function can separately iterate over the elements of both groups and return an arbitrary number of result elements.
Similar to Reduce, GroupReduce, and Join, keys can be defined using the different key-selection methods.
CoGroup on DataSets
The example shows how to group by field position keys (Tuple DataSets only). You can do the same with POJO types and key expressions.
// Some CoGroupFunction definition
class MyCoGrouper
implements CoGroupFunction<Tuple2<String, Integer>, Tuple2<String, Double>, Double> {
@Override
public void coGroup(Iterable<Tuple2<String, Integer>> iVals,
Iterable<Tuple2<String, Double>> dVals,
Collector<Double> out) {
Set<Integer> ints = new HashSet<Integer>();
// add all Integer values in group to set
for (Tuple2<String, Integer> val : iVals) {
ints.add(val.f1);
}
// multiply each Double value with each unique Integer values of group
for (Tuple2<String, Double> val : dVals) {
for (Integer i : ints) {
out.collect(val.f1 * i);
}
}
}
}
// [...]
DataSet<Tuple2<String, Integer>> iVals = // [...]
DataSet<Tuple2<String, Double>> dVals = // [...]
DataSet<Double> output = iVals.coGroup(dVals)
// group first DataSet on first tuple field
.where(0)
// group second DataSet on first tuple field
.equalTo(0)
// apply CoGroup function on each pair of groups
.with(new MyCoGrouper());
Union
Produces the union of two DataSets, which have to be of the same type. A union of more than two DataSets can be implemented with multiple union calls, as shown here:
DataSet<Tuple2<String, Integer>> vals1 = // [...]
DataSet<Tuple2<String, Integer>> vals2 = // [...]
DataSet<Tuple2<String, Integer>> vals3 = // [...]
DataSet<Tuple2<String, Integer>> unioned = vals1.union(vals2).union(vals3);
Rebalance
Evenly rebalances the parallel partitions of a DataSet to eliminate data skew.
DataSet<String> in = // [...]
// rebalance DataSet and apply a Map transformation.
DataSet<Tuple2<String, String>> out = in.rebalance()
.map(new Mapper());
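The Mapper used here is not defined on this page; a hypothetical sketch matching the String-to-Tuple2<String, String> types above could be:
// Hypothetical sketch of the Mapper used above (an assumption, not from the original page).
public class Mapper implements MapFunction<String, Tuple2<String, String>> {
  @Override
  public Tuple2<String, String> map(String value) {
    // emit the value together with its upper-case form
    return new Tuple2<>(value, value.toUpperCase());
  }
}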
Hash-Partition
Hash-partitions a DataSet on a given key. Keys can be specified as field position keys (e.g. field 0 of a Tuple), key expressions, or key-selector functions (see the Reduce examples for how to specify keys).
DataSet<Tuple2<String, Integer>> in = // [...]
// hash-partition DataSet by String value and apply a MapPartition transformation.
DataSet<Tuple2<String, String>> out = in.partitionByHash(0)
.mapPartition(new PartitionMapper());
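The PartitionMapper used in the partitioning examples is not defined on the page either; a hypothetical sketch matching the Tuple2<String, Integer>-to-Tuple2<String, String> types used here could be:
// Hypothetical sketch of PartitionMapper (an assumption, not from the original page).
public class PartitionMapper
    implements MapPartitionFunction<Tuple2<String, Integer>, Tuple2<String, String>> {
  @Override
  public void mapPartition(Iterable<Tuple2<String, Integer>> values,
                           Collector<Tuple2<String, String>> out) {
    // process all elements of one partition in a single call
    for (Tuple2<String, Integer> t : values) {
      out.collect(new Tuple2<>(t.f0, String.valueOf(t.f1)));
    }
  }
}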
Range-Partition
Range-partitions a DataSet on a given key. Keys can be specified as field position keys (e.g. field 0 of a Tuple), key expressions, or key-selector functions (see the Reduce examples for how to specify keys).
DataSet<Tuple2<String, Integer>> in = // [...]
// range-partition DataSet by String value and apply a MapPartition transformation.
DataSet<Tuple2<String, String>> out = in.partitionByRange(0)
.mapPartition(new PartitionMapper());
Sort Partition
Locally sorts all partitions of a DataSet on a specified field in a specified order. Fields can be specified as field expressions or field positions (see the Reduce examples for how to specify keys). Partitions can be sorted on multiple fields by chaining sortPartition() calls.
DataSet<Tuple2<String, String>> in = // [...]
// Locally sort partitions in ascending order on the second String field and
// in descending order on the first String field.
// Apply a MapPartition transformation on the sorted partitions.
DataSet<Tuple2<String, String>> out = in.sortPartition(1, Order.ASCENDING)
.sortPartition(0, Order.DESCENDING)
.mapPartition(new PartitionMapper());
First-n
Returns the first n (arbitrary) elements of a DataSet. First-n can be applied to a regular DataSet, a grouped DataSet, or a grouped-sorted DataSet. Grouping keys can be specified as key-selector functions or field position keys (see the Reduce examples for how to specify keys).
DataSet<Tuple2<String, Integer>> in = // [...]
// Return the first five (arbitrary) elements of the DataSet
DataSet<Tuple2<String, Integer>> out1 = in.first(5);
// Return the first two (arbitrary) elements of each String group
DataSet<Tuple2<String, Integer>> out2 = in.groupBy(0)
.first(2);
// Return the first three elements of each String group ordered by the Integer field
DataSet<Tuple2<String, Integer>> out3 = in.groupBy(0)
.sortGroup(1, Order.ASCENDING)
.first(3);
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/batch/dataset_transformations.html
https://flink.sojb.cn/dev/batch/dataset_transformations.html