Implementing reduceByKey in Scala with an implicit conversion

First, look at how word count is written with Spark:

val lines = sc.textFile(...)
val words = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)

If we drop the Spark framework and write this in plain Scala, there is a problem: the standard collections have no reduceByKey operation. We can add one ourselves through an implicit conversion.

Step 1: define a class Foo that takes a Seq of key-value pairs and exposes a reduceByKey method accepting a function f. Calling groupBy on the Seq yields a Map whose values have the shape Seq((k, v1), (k, v2), ...); we extract the v's from each group and reduce them with f:

class Foo[K, V](seq: Seq[(K, V)]) {
  def reduceByKey(f: (V, V) => V): Seq[(K, V)] = {
    seq.groupBy(_._1).map(x => (x._1, x._2.map(_._2).reduce(f))).toSeq
  }
}
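To make the intermediate shape concrete, here is a small standalone sketch (the object name and sample data are made up for illustration) showing what groupBy produces before the values are reduced:

```scala
object GroupByDemo {
  def main(args: Array[String]): Unit = {
    val pairs = Seq(("hello", 1), ("world", 1), ("hello", 1))
    // groupBy(_._1) keeps the whole pair in each group, e.g.
    // Map(hello -> Seq((hello,1), (hello,1)), world -> Seq((world,1)))
    // (iteration order of the Map may vary)
    val grouped = pairs.groupBy(_._1)
    // map(_._2) strips the key from each pair, leaving just the
    // values, which is what reduceByKey feeds into reduce(f)
    grouped.foreach { case (k, vs) => println(s"$k -> ${vs.map(_._2)}") }
  }
}
```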

Step 2: declare the implicit conversion:

implicit def Seq2Foo(seq: Seq[(String, Int)]): Foo[String, Int] = new Foo[String, Int](seq)

The complete code:


object Test {
  def main(args: Array[String]): Unit = {
    val lines = Seq("hello world", "hello scala", "hello")

    implicit def Seq2Foo(seq: Seq[(String, Int)]): Foo[String, Int] = new Foo[String, Int](seq)

    val words = lines.flatMap(_.split(" ")).map((_, 1))
    val wc = words.reduceByKey(_ + _)

    wc.foreach(println)
  }
}

class Foo[K, V](seq: Seq[(K, V)]) {
  def reduceByKey(f: (V, V) => V): Seq[(K, V)] = {
    seq.groupBy(_._1).map(x => (x._1, x._2.map(_._2).reduce(f))).toSeq
  }
}
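Since Scala 2.10, the wrapper class and the implicit conversion can be folded into a single implicit class, which is the idiomatic way to write this kind of extension method today. A minimal sketch (the names ImplicitClassDemo and RichPairSeq are made up for illustration):

```scala
object ImplicitClassDemo {
  // An implicit class combines the wrapper and the conversion:
  // any Seq[(K, V)] in scope automatically gains reduceByKey.
  implicit class RichPairSeq[K, V](seq: Seq[(K, V)]) {
    def reduceByKey(f: (V, V) => V): Seq[(K, V)] =
      seq.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).reduce(f)) }.toSeq
  }

  def main(args: Array[String]): Unit = {
    val lines = Seq("hello world", "hello scala", "hello")
    val wc = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    wc.foreach(println)
  }
}
```

Note that an implicit class must be declared inside an object, class, or trait, not at the top level of a source file.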

The output (Map iteration order may vary):

(scala,1)
(world,1)
(hello,3)

Alternatively, a for-loop version using a mutable Map also works:


import scala.collection.mutable

class Foo2[K, V](seq: Seq[(K, V)]) {
  def reduceByKey(f: (V, V) => V): Seq[(K, V)] = {
    val m = mutable.Map[K, V]()
    for ((k, v) <- seq)
      if (m contains k)
        m(k) = f(v, m(k))
      else
        m(k) = v
    m.toSeq
  }
}

// Declare inside an enclosing object or in the REPL; top-level defs
// are not allowed in a Scala 2 source file.
implicit def Seq2Foo2(seq: Seq[(String, Int)]): Foo2[String, Int] = new Foo2[String, Int](seq)
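For completeness, Scala 2.13's standard library can do this grouping and reduction directly, without any implicit conversion, via groupMapReduce. A minimal sketch, assuming Scala 2.13 or later (the object name is made up for illustration):

```scala
object GroupMapReduceDemo {
  def main(args: Array[String]): Unit = {
    val lines = Seq("hello world", "hello scala", "hello")
    // groupMapReduce(key)(map)(reduce): groups pairs by their key,
    // projects each pair to its count, and sums the counts per key
    // in a single pass, returning a Map[String, Int].
    val wc = lines.flatMap(_.split(" "))
      .map((_, 1))
      .groupMapReduce(_._1)(_._2)(_ + _)
    wc.foreach(println)
  }
}
```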
