I have been using the reduceByKey operator a lot lately, and most of that time was spent being confused, so I sat down and carefully worked through some posts from abroad. I found a good one, and here I go over the operator once more with my own understanding added. Let's begin.
The author opens with a sentence that, in my view, sums up what reduceByKey does very well:
Spark RDD reduceByKey function merges the values for each key using an associative reduce function. (So the gist is that reduceByKey is an operator/function that merges the values of each key.)
This basically establishes that reduceByKey operates on key-value (pair) RDDs, and that it only processes the values belonging to each key; if there are several keys, the values of each key are processed separately. The merge function is one we pass in ourselves, so it is under our control (admittedly that is stating the obvious: the operator would be useless if it were not). Let's look at an example:
scala> val x = sc.parallelize(Array(("a", 1), ("b", 1), ("a", 1),
     |                              ("a", 1), ("b", 1), ("b", 1),
     |                              ("b", 1), ("b", 1)), 3)
We create an Array of (String, Int) pairs, parallelize it onto the Spark cluster, and set three partitions (we will not worry about partitioning here, only about the operation). Now we call reduceByKey and pass in a function for the operation we want; here we add up the values of identical keys, much like counting word occurrences:
scala> val y = x.reduceByKey((pre, after) => (pre + after))
Logically, the two parameters stand for two different values of the same key, so you can probably guess the result:
scala> y.collect
res0: Array[(String, Int)] = Array((a,3), (b,5))
Right, at this point we have a first feel for reduceByKey.
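To build a bit more intuition about what the two parameters mean: for key "b" the function is applied pairwise to the five 1s, i.e. ((((1 + 1) + 1) + 1) + 1) = 5. The sketch below is my own addition, reusing the same x; it computes the same result the long way via groupByKey. reduceByKey is normally preferred because it pre-combines values inside each partition before the shuffle.

// Sketch only: group the values per key, then reduce each group with the same function.
scala> val viaGroup = x.groupByKey().mapValues(_.reduce(_ + _))
scala> viaGroup.collect
// Array((a,3), (b,5)) -- same result as reduceByKey, though the key order may differ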
There is also a passage in the post that is very helpful, so I will quote the original wording:
Basically reduceByKey function works only for RDDs which contain key and value pairs kind of elements (i.e. RDDs having tuple or Map as a data element). It is a transformation operation which means it is lazily evaluated. We need to pass one associative function as a parameter, which will be applied to the source RDD and will create a new RDD with the resulting values (i.e. key-value pairs). This operation is a wide operation as data shuffling may happen across the partitions.
In other words: reduceByKey only works on pair RDDs; it is a transformation, so it is lazily evaluated (nothing is computed until an action such as collect is called); the function we pass must be associative; and it is a wide operation, so the data may be shuffled across partitions.
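Both points are easy to verify yourself in the spark-shell. A minimal sketch (my own addition, reusing the x from the example above):

scala> val y = x.reduceByKey(_ + _)   // returns immediately: it is a lazy transformation
scala> println(y.toDebugString)       // the lineage contains a ShuffledRDD, i.e. a wide dependency
scala> y.collect                      // only this action actually triggers the computation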
By now you should have a deeper understanding of the operator. Here is a small Scala example of my own, again counting letter occurrences:
import org.apache.spark.{SparkContext, SparkConf}

/**
 * mhc
 * Created by Administrator on 2016/5/17.
 */
object MyTest {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("MyTestApp").setMaster("local[1]")
    val sc = new SparkContext(conf)
    // Parallelize a list of letters, then map each letter to a (letter, 1) pair
    val x = sc.parallelize(List("a", "b", "a", "a", "b", "b", "b", "b"))
    val s = x.map((_, 1))
    // Add up the values of each key; collect is the action that triggers the job
    val result = s.reduceByKey((pre, after) => pre + after)
    println(result.collect().toBuffer)   // e.g. ArrayBuffer((a,3), (b,5))
  }
}
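As a side note, in Scala the (pre, after) => pre + after function is usually written with placeholder syntax, which is equivalent:

val result = s.reduceByKey(_ + _)   // same as (pre, after) => pre + after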
Java:
package com.backtobazics.sparkexamples;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;

import scala.Tuple2;

public class ReduceByKeyExample {
    public static void main(String[] args) throws Exception {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("ReduceByKeyExample").setMaster("local[1]"));

        // Reduce function for sum
        Function2<Integer, Integer, Integer> reduceSumFunc = (accum, n) -> (accum + n);

        // Parallelized with 3 partitions
        JavaRDD<String> x = sc.parallelize(
                Arrays.asList("a", "b", "a", "a", "b", "b", "b", "b"), 3);

        // mapToPair maps the JavaRDD to a JavaPairRDD of (letter, 1) pairs
        JavaPairRDD<String, Integer> rddX = x.mapToPair(e -> new Tuple2<String, Integer>(e, 1));

        // reduceByKey merges the values of each key with the sum function
        JavaPairRDD<String, Integer> rddY = rddX.reduceByKey(reduceSumFunc);

        // Print tuples
        for (Tuple2<String, Integer> element : rddY.collect()) {
            System.out.println("(" + element._1 + ", " + element._2 + ")");
        }
    }
}
// Output:
// (b, 5)
// (a, 3)
Basic reduceByKey example in Python:
# creating PairRDD x with key value pairs
>>> x = sc.parallelize([("a", 1), ("b", 1), ("a", 1), ("a", 1),
...                     ("b", 1), ("b", 1), ("b", 1), ("b", 1)], 3)

# Applying reduceByKey operation on x
>>> y = x.reduceByKey(lambda accum, n: accum + n)
>>> y.collect()
[('b', 5), ('a', 3)]

# Define associative function separately
>>> def sumFunc(accum, n):
...     return accum + n
...
>>> y = x.reduceByKey(sumFunc)
>>> y.collect()
[('b', 5), ('a', 3)]

Thanks everyone for reading, and take care on your way out!