For two input files A and B, write a standalone Spark application that merges the two files, removes the duplicate lines, and produces a new file C. A sample of the input and output files is given below for reference.
A sample of input file A:
20170101 x
20170102 y
20170103 x
20170104 y
20170105 z
20170106 z
A sample of input file B:
20170101 y
20170102 y
20170103 x
20170104 z
20170105 y
A sample of the output file C obtained by merging input files A and B:
20170101 x
20170101 y
20170102 y
20170103 x
20170104 y
20170104 z
20170105 y
20170105 z
20170106 z
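Conceptually, the task is a union of the two files followed by deduplication and sorting. Before the full standalone program in step (1) below, here is a minimal sketch of the idea as it might be tried in spark-shell; the paths /path/to/A, /path/to/B, and /path/to/C are placeholders, not part of the exercise:

val a = sc.textFile("file:///path/to/A")
val b = sc.textFile("file:///path/to/B")
// merge the two RDDs, drop blank lines, deduplicate, and sort
val c = a.union(b).map(_.trim).filter(_.nonEmpty).distinct().sortBy(identity)
c.saveAsTextFile("file:///path/to/C")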
(1) Assume the current directory is /usr/local/spark/mycode/remdup. In this directory, create the source tree with mkdir -p src/main/scala, then create a file remdup.scala under /usr/local/spark/mycode/remdup/src/main/scala and copy in the following code:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.HashPartitioner

object RemDup {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("RemDup")
    val sc = new SparkContext(conf)
    // Path where files A and B are stored; the data directory must not contain any other files
    val dataFile = "file:///home/charles/data"
    val data = sc.textFile(dataFile, 2)
    // Drop blank lines, key each trimmed line with an empty value, gather all keys
    // into one partition, collapse duplicates, sort, and keep only the unique keys
    val res = data.filter(_.trim().length > 0)
                  .map(line => (line.trim, ""))
                  .partitionBy(new HashPartitioner(1))
                  .groupByKey()
                  .sortByKey()
                  .keys
    res.saveAsTextFile("result")
  }
}
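The chain above deduplicates by treating each trimmed line as a key paired with an empty value: partitionBy(new HashPartitioner(1)) moves all keys into a single partition, groupByKey() collapses identical lines, sortByKey() orders them, and keys discards the placeholder values; the single partition also means the sorted result lands in one output file. For comparison, a shorter formulation with the same effect (a sketch, not part of the original program) would be:

val res = data.map(_.trim).filter(_.nonEmpty).distinct(1).sortBy(identity)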
(2) In the directory /usr/local/spark/mycode/remdup, create a file simple.sbt and copy in the following:
name := "Simple Project"
version := "1.0" // project version (not the sbt version)
scalaVersion := "2.11.8" // Scala version; must match the Scala build of your Spark distribution
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
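Note that the %% operator tells sbt to append the Scala binary version to the artifact name, so the dependency above resolves to spark-core_2.11; it is equivalent to writing:

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.1.0"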
(3) In the directory /usr/local/spark/mycode/remdup, run the following command to package the program:
$ sudo /usr/local/sbt/sbt package
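If packaging succeeds, sbt writes the application jar to target/scala-2.11/simple-project_2.11-1.0.jar; the file name is derived from the name, scalaVersion, and version fields in simple.sbt, and can be checked with, for example:

$ ls /usr/local/spark/mycode/remdup/target/scala-2.11/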
(4) Finally, in the directory /usr/local/spark/mycode/remdup, run the following command to submit the program:
$ /usr/local/spark/bin/spark-submit --class "RemDup" /usr/local/spark/mycode/remdup/target/scala-2.11/simple-project_2.11-1.0.jar
(5) The result files can then be found in the directory /usr/local/spark/mycode/remdup/result. Since saveAsTextFile("result") is given a relative path, the output is written relative to the directory from which the program was submitted.
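To inspect the merged output directly, you can print the part files, for example:

$ cat /usr/local/spark/mycode/remdup/result/part-*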