1. Example Description
The input is a batch of files whose contents have the following format (a document ID followed by the document text on each line):
Id1 The Spark
……
Id2 The Hadoop
……
The output takes the form (word, merged string of document IDs):
The Id1 Id2
Hadoop Id2
……
2. Design Approach
First read all the input files into an RDD whose items are (document ID, document words) pairs; then map the data into an RDD of (word, document ID) pairs; deduplicate it; and finally aggregate each word's document IDs in the reduceByKey stage.
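For intuition, here is a minimal Spark-free sketch of the same pipeline on plain Scala collections (the sample lines and variable names below are illustrative only, not taken from the actual input files):
// The same logic on plain Scala collections, without Spark.
val lines = Seq("Id1\tThe Spark", "Id2\tThe Hadoop")
val pairs = lines
  .map(_.split("\t"))                        // Array(docId, text)
  .map(a => (a(0), a(1)))                    // (docId, text)
  .flatMap { case (id, text) =>
    text.split(" ").map(word => (word, id))  // (word, docId)
  }
  .distinct                                  // drop repeated (word, docId) pairs
val index = pairs
  .groupBy(_._1)                             // word -> Seq((word, docId), ...)
  .map { case (word, ps) => word -> ps.map(_._2).mkString(" ") }
println(index)   // e.g. Map(Spark -> Id1, The -> Id1 Id2, Hadoop -> Id2)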
3. Code
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._
import scala.collection.mutable

object InvertedIndex {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("InvertedIndex").setMaster("local[1]")
    val sc = new SparkContext(conf)

    // Each input line has the form "docId<TAB>document text".
    val textRdd = sc.textFile("hdfs://master:9000/wordIndex")
    val md = textRdd.map(file => file.split("\t"))
    val md2 = md.map(item => (item(0), item(1)))   // (docId, text)

    // Emit one (word, docId) pair for every word in the document.
    val fd = md2.flatMap(file => {
      val words = file._2.split(" ").iterator
      // Build a linked list of (word, docId) pairs by hand.
      val list = mutable.LinkedList[(String, String)]((words.next(), file._1))
      var temp = list
      while (words.hasNext) {
        temp.next = mutable.LinkedList[(String, String)]((words.next(), file._1))
        temp = temp.next
      }
      list
    })

    // Remove duplicate (word, docId) pairs, then concatenate the
    // document IDs of each word into a single string.
    val result = fd.distinct()
    val resRdd = result.reduceByKey(_ + " " + _)
    resRdd.saveAsTextFile("hdfs://master:9000/InvertIndex")
  }
}
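As a side note, the hand-built mutable.LinkedList (deprecated in newer Scala versions) is not required; the flatMap body can equivalently map over the split words directly. A minimal sketch of that alternative:
// Equivalent flatMap body: pair every word of a document with its docId
// and let flatMap flatten the per-document arrays into one RDD.
val fd = md2.flatMap { case (docId, text) =>
  text.split(" ").map(word => (word, docId))
}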
4. Notes
There are a few points worth noting.
The RDD flatMap method is defined as follows:
/**
 * Return a new RDD by first applying a function to all elements of this
 * RDD, and then flattening the results.
 */
def flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U] =
  new FlatMappedRDD(this, sc.clean(f))
The method's parameter is a function whose return type is TraversableOnce (a supertype of the collection types). flatMap applies this function to every element and merges the resulting collections into a new RDD; it does not remove duplicate elements, and it does not merge the RDD's partitions.
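A small illustration of this flattening behavior (a sketch, assuming a spark-shell session where sc is already defined):
// Each element maps to a collection of words; flatMap concatenates these
// collections into one RDD and keeps duplicate elements.
val demo = sc.parallelize(Seq("a b", "b c"))
val flat = demo.flatMap(line => line.split(" "))
// flat.collect() returns Array(a, b, b, c) -- the duplicate "b" is kept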
The inverted-index code above uses reduceByKey, which merges the values of each key with a binary operator; the closely related reduce method is defined as follows:
/**
 * Reduces the elements of this RDD using the specified commutative and
 * associative binary operator.
 */
def reduce(f: (T, T) => T): T = {
  val cleanF = sc.clean(f)
  val reducePartition: Iterator[T] => Option[T] = iter => {
    if (iter.hasNext) {
      Some(iter.reduceLeft(cleanF))
    } else {
      None
    }
  }
  var jobResult: Option[T] = None
  val mergeResult = (index: Int, taskResult: Option[T]) => {
    if (taskResult.isDefined) {
      jobResult = jobResult match {
        case Some(value) => Some(f(value, taskResult.get))
        case None => taskResult
      }
    }
  }
  sc.runJob(this, reducePartition, mergeResult)
  // Get the final result out of our Option, or throw an exception if the RDD was empty
  jobResult.getOrElse(throw new UnsupportedOperationException("empty collection"))
}
The reduce function amounts to running reduceLeft over the RDD's elements: within each partition, reduceLeft folds that partition's elements into a single value (reducePartition above), and the per-partition results are then merged with the same function on the driver (mergeResult above).
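For instance, summing a small RDD (a sketch, again assuming sc from spark-shell):
// Two partitions: (1, 2, 3) and (4, 5, 6). Each partition is folded with
// reduceLeft, then the partition results 6 and 15 are combined to 21.
val nums = sc.parallelize(1 to 6, 2)
val total = nums.reduce(_ + _)   // total == 21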
For example, suppose the user-defined function is:
f: (A, B) => (A._1 + "@" + B._1, A._2 + B._2)
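A quick sketch of what reduce produces with that operator on a small RDD of pairs (sc and the sample values are assumptions for illustration):
// f chains the first components with "@" and sums the second components.
val ps = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)), 1)
val merged = ps.reduce((a, b) => (a._1 + "@" + b._1, a._2 + b._2))
// merged == ("a@b@c", 6)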
As shown in the figure: