Compiling a Spark WordCount program with sbt

  1. Running sbt directly creates a target directory under the current directory.

  2. The usual sbt layout is lib/ (jar files needed for compilation),
    project/, src/main/scala/, and src/test/scala/.

  3. Copy the jar file spark-assembly*hadoop2.5.1.jar into the lib directory:

    [root@localhost word]# find ../spark -name "spark*jar" |grep assem 
    ../spark/assembly/target/scala-2.10/spark-assembly-1.1.2-SNAPSHOT-hadoop2.5.1.jar 
    ../spark/dist/lib/spark-assembly-1.1.2-SNAPSHOT-hadoop2.5.1.jar 
    [root@localhost word]# cp ../spark/dist/lib/spark-assembly-1.1.2-SNAPSHOT-hadoop2.5.1.jar lib/ 
    [root@localhost word]# ls lib 
    spark-assembly-1.1.2-SNAPSHOT-hadoop2.5.1.jar

  4. Create wordcount.scala:

    import org.apache.spark.{SparkContext, SparkConf}
    import org.apache.spark.SparkContext._

    object wordCount {

      def main(args: Array[String]) {
        if (args.length == 0) {
          System.err.println("Usage: bin/spark-submit [options] --class wordCount wordCount.jar <file1:URI>")
          System.err.println("Example: bin/spark-submit [options] --class wordCount wordCount.jar hdfs://172.16.1.141:9000/test.txt")
          System.exit(1)
        }
        val conf = new SparkConf().setAppName("WordCount")
        val sc = new SparkContext(conf)
        val doc = sc.textFile(args(0))
        doc.cache()
        // split each line on spaces; split("") would split into single characters
        val words = doc.flatMap(_.split(" "))
        val pairs = words.map(x => (x, 1))
        val res = pairs.reduceByKey(_ + _)
        res.collect().foreach(println)
        sc.stop()
      }
    }
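The transformation chain in wordcount.scala can be sketched on plain Scala collections, with no cluster involved. This is an illustrative stand-in, not the Spark API: groupBy plus a per-group sum emulates what reduceByKey does across partitions. Note that it splits on a single space; splitting on the empty string would produce per-character counts.

```scala
object LocalWordCount {
  def main(argv: Array[String]): Unit = {
    // stand-in for sc.textFile(...): a couple of lines of sample text
    val doc = List("hello spark", "hello sbt")
    // split on " " (a space); split("") would cut lines into single characters
    val words = doc.flatMap(_.split(" "))
    // one (word, 1) pair per occurrence, as in the Spark version
    val pairs = words.map(x => (x, 1))
    // groupBy + sum plays the role of reduceByKey on a local collection
    val res = pairs.groupBy(_._1).map { case (w, ps) => (w, ps.map(_._2).sum) }
    assert(res == Map("hello" -> 2, "spark" -> 1, "sbt" -> 1))
    res.foreach(println)
  }
}
```

Running this prints each (word, count) pair, which is also the shape of the output the Spark job produces with res.collect().foreach(println).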

  5. Create build.sbt:

    [root@localhost word]# cat build.sbt 
    name := "wordCount" 

    version := "1.0" 

    scalaVersion := "2.11.4" 

    (In sbt 0.12.x each setting must be separated from the next by a blank line. The scalaVersion should match the Scala version the Spark assembly jar was built against.)
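As an alternative to dropping the assembly jar into lib/, sbt can resolve Spark as a managed dependency. The sketch below is an assumption-laden variant of the build.sbt above, using a published spark-core artifact (the version numbers shown are illustrative, not taken from this walkthrough):

```scala
name := "wordCount"

version := "1.0"

scalaVersion := "2.10.4"

// Managed dependency instead of an unmanaged jar in lib/.
// "provided" keeps Spark classes out of the packaged jar,
// since spark-submit supplies them at runtime.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"
```

With this form the lib/ copy step can be skipped entirely; sbt downloads the dependency into ~/.ivy2 on the first build.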
  6. Compile and package into a jar file:

    [root@localhost word]# sbt package  -Dsbt.ivy.home=/root/.ivy2
    

    [info] Set current project to wordCount (in build file:/opt/htt/temp_20140611/java/word/) 
    [info] Updating {file:/opt/htt/temp_20140611/java/word/}word... 
    [info] Resolving jline#jline;2.12 ... 
    [info] Done updating. 
    [info] Compiling 2 Scala sources to /opt/htt/temp_20140611/java/word/target/scala-2.11/classes... 
    [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list 
    [info] Packaging /opt/htt/temp_20140611/java/word/target/scala-2.11/wordcount_2.11-1.0.jar ... 
    [info] Done packaging. 
    [success] Total time: 11 s, completed Jan 5, 2015 8:37:38 AM 
    [root@localhost word]#

    (Alternatively, the class files can be compiled directly into the current directory with scalac:)

    scalac src/main/scala/wordCount.scala -cp lib/spark-assembly-1.1.2-SNAPSHOT-hadoop2.5.1.jar

  7. Run the job with spark-submit:

    ../spark/bin/spark-submit --class wordCount target/scala-2.11/wordcount_2.11-1.0.jar hdfs://172.16.1.141:9000/opt/old/htt/test/test.txt

Reference: http://www.aboutyun.com/thread-8587-1-1.html


About sbt
sbt is a build tool, the Scala world's equivalent of mvn. It can compile Scala, Java, and more, and requires Java 1.6 or later.

Setting up an sbt project
sbt requires a fixed directory layout and network access: it downloads dependency jars into .ivy2 under the user's home directory. The layout is as follows:
  |--build.sbt
  |--lib
  |--project
  |--src
  |   |--main
  |   |    |--scala
  |   |--test
  |         |--scala
  |--sbt
  |--target
Create the directories above as follows:
  mkdir -p ~/spark_wordcount/lib
  mkdir -p ~/spark_wordcount/project
  mkdir -p ~/spark_wordcount/src/main/scala
  mkdir -p ~/spark_wordcount/src/test/scala
  mkdir -p ~/spark_wordcount/target

Then copy the sbt script and the sbt jar from the sbt directory of the Spark installation:

  cp /path/to/spark/sbt/sbt* ~/spark_wordcount/

Because Spark's sbt script looks for a ./sbt directory by default, change

  JAR=sbt/sbt-launch-${SBT_VERSION}.jar

to

  JAR=sbt-launch-${SBT_VERSION}.jar

Copy the Spark assembly jar into the project's lib directory:
  cp /path/to/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar \
  > ~/spark_wordcount/lib/

Create the build.sbt configuration file; each line must be separated from the next by a blank line:
  name := "WordCount"

  version := "1.0.0"

  scalaVersion := "2.10.3"


Because Spark's sbt script reads the sbt version number from project/build.properties, create that file with the following content:
  sbt.version=0.12.4


Writing and compiling the Spark WordCount program
Create the WordCount.scala source file, assuming the package is spark.example:
  mkdir -p ~/spark_wordcount/src/main/scala/spark/example
  vi ~/spark_wordcount/src/main/scala/spark/example/WordCount.scala

Add the program code and save:
  package spark.example

  import org.apache.spark._
  import SparkContext._

  object WordCount {
    def main(args: Array[String]) {
      // check the number of command-line arguments
      if (args.length < 3) {
        System.err.println("Usage: spark.example.WordCount <master> <input> <output>")
        System.exit(1)
      }
      // HDFS root; replace hdfshost with the actual namenode address
      val hdfsPathRoot = "hdfs://hdfshost:9000"
      // instantiate the Spark context
      val spark = new SparkContext(args(0), "WordCount",
        System.getenv("SPARK_HOME"), SparkContext.jarOfClass(this.getClass))
      // read the input file
      val inputFile = spark.textFile(hdfsPathRoot + args(1))
      // run the word count: flatMap splits each line on spaces,
      // map emits a (word, 1) tuple per word, and reduceByKey
      // sums the counts of identical words
      val countResult = inputFile.flatMap(line => line.split(" "))
                        .map(word => (word, 1))
                        .reduceByKey(_ + _)
      // write the WordCount result to the given output directory
      countResult.saveAsTextFile(hdfsPathRoot + args(2))
    }
  }
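The YARN launch script later in this article passes three --args values; inside WordCount.main they arrive as args(0), args(1), and args(2). A plain-Scala sketch of how the HDFS paths are composed (hdfshost is a placeholder for the real namenode address, and the argument values are the ones the script uses):

```scala
object PathDemo {
  def main(argv: Array[String]): Unit = {
    // the three --args values from the YARN launch script, in order:
    // master, input path, output path
    val args = Array("yarn-standalone", "/testWordCount.txt", "/resultWordCount")
    val hdfsPathRoot = "hdfs://hdfshost:9000" // hypothetical namenode address
    val input  = hdfsPathRoot + args(1)       // what spark.textFile reads
    val output = hdfsPathRoot + args(2)       // where saveAsTextFile writes
    assert(input  == "hdfs://hdfshost:9000/testWordCount.txt")
    assert(output == "hdfs://hdfshost:9000/resultWordCount")
    println(input)
    println(output)
  }
}
```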

Go to the spark_wordcount directory and compile:
  cd ~/spark_wordcount/
  ./sbt compile

Package it into a jar:
  ./sbt package

During compilation, sbt downloads the dependencies it needs (jna, Scala, and so on) from the network. Once it finishes, the packaged jar can be found under target/scala-2.10/:
  [root@bd001 scala-2.10]# pwd
  /usr/local/hadoop/spark_wordcount/target/scala-2.10
  [root@bd001 scala-2.10]# ls
  cache  classes  wordcount_2.10-1.0.0.jar


Running WordCount
Following the approach for running Spark on Hadoop's YARN, write a launch script:
  #!/usr/bin/env bash

  SPARK_JAR=./assembly/target/scala-2.10/spark-assembly_2.10-0.9.0-incubating-hadoop2.2.0.jar \
      ./bin/spark-class org.apache.spark.deploy.yarn.Client \
        --jar ~/spark_wordcount/target/scala-2.10/wordcount_2.10-1.0.0.jar \
        --class spark.example.WordCount \
        --args yarn-standalone \
        --args /testWordCount.txt \
        --args /resultWordCount \
        --num-workers 3 \
        --master-memory 4g \
        --worker-memory 2g \
        --worker-cores 2

Then copy a file named testWordCount.txt into HDFS:
  hdfs dfs -copyFromLocal ./testWordCount.txt /testWordCount.txt



Then run the script; after a short while the results will be available.
