Spark RDD Operators -- The reduceByKey Operator

Syntax

val newRdd = oldRdd.reduceByKey(func, [numTasks])

func: the aggregation function, applied to two values at a time for each key

numTasks: optional number of reduce tasks (the number of partitions in the resulting RDD)
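
Both call forms are shown in the minimal sketch below; an existing SparkContext named sc is assumed, and the sample data and the partition count of 4 are arbitrary values chosen for illustration.

// A minimal sketch; `sc` is an assumed, already-created SparkContext.
val pairs = sc.makeRDD(Seq(("a", 1), ("b", 1), ("a", 1)))

// Default form: the result's partitioning follows Spark's default partitioner.
val summed = pairs.reduceByKey((x, y) => x + y)

// With numTasks: the shuffled result is placed into 4 partitions.
val summed4 = pairs.reduceByKey((x, y) => x + y, 4)
println(summed4.getNumPartitions) // 4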

Source code

def reduceByKey(func : scala.Function2[V, V, V]) : org.apache.spark.rdd.RDD[scala.Tuple2[K, V]] = { /* compiled code */ }

def reduceByKey(func : scala.Function2[V, V, V], numPartitions : scala.Int) : org.apache.spark.rdd.RDD[scala.Tuple2[K, V]] = { /* compiled code */ }

Both overloads are defined in org.apache.spark.rdd.PairRDDFunctions and become available on an RDD[(K, V)] through an implicit conversion.

Purpose

Aggregates the values of a (K, V)-pair RDD by key, applying func to two values at a time. The function is also applied locally within each partition before the shuffle (a map-side combine), so far less data crosses the network than with groupByKey.
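
For comparison, the minimal sketch below (again assuming an existing SparkContext sc) computes the same counts with groupByKey; the results are identical, but groupByKey ships every raw pair across the shuffle, while reduceByKey ships only one partial sum per key per partition.

// Assuming an existing SparkContext `sc`.
val pairs = sc.makeRDD(Seq(("a", 1), ("b", 1), ("a", 1)))

// reduceByKey: pre-aggregates inside each partition, then shuffles partial sums.
val viaReduce = pairs.reduceByKey(_ + _)            // (a,2), (b,1)

// groupByKey: shuffles all raw pairs, then sums on the reduce side.
val viaGroup = pairs.groupByKey().mapValues(_.sum)  // (a,2), (b,1)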

Example

package com.day1

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object oper {
    def main(args: Array[String]): Unit = {
        val config: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordCount")

        // Create the Spark context
        val sc = new SparkContext(config)

        // reduceByKey operator
        val arrayRdd: RDD[String] = sc.makeRDD(Array("张三","李四","王五","刘六","张三","李四","张三","刘六"))

        // Map each name to a (name, 1) pair, then sum the counts per key
        val mapRdd: RDD[(String, Int)] = arrayRdd.map(word => (word, 1))
        val reduceRdd: RDD[(String, Int)] = mapRdd.reduceByKey((x, y) => x + y)
        reduceRdd.collect().foreach(println)

        sc.stop()
    }
}
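
In the example above, the anonymous function (x, y) => x + y can also be written with Scala's placeholder syntax as mapRdd.reduceByKey(_ + _).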

Input
"张三" "李四" "王五" "刘六" "张三" "李四" "张三" "刘六"
Output
(张三,3)
(刘六,2)
(李四,2)
(王五,1)
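(王五,1)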

Diagram

[Figure 1: how reduceByKey shuffles and merges values by key; the original image is not reproduced here]
