Submitting Scala code to Spark

Goal: submit Scala code to Spark with spark-submit.

Scala code must be compiled into a jar before it can run on Spark.

Tool: sbt

Download: sbt (from the official sbt website)

 

Getting started

Assume the working directory is /home/sparknode/scalacode. First create the source directory:

mkdir -p src/main/scala

(The path must follow this layout exactly; sbt will not pick up the sources otherwise.)
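As a quick sanity check, the layout can be created and verified like this (using /tmp/scalacode as an illustrative scratch directory, not the article's path):

```shell
# create the sbt source layout (directory is illustrative)
mkdir -p /tmp/scalacode/src/main/scala
# confirm the nested path exists
ls -d /tmp/scalacode/src/main/scala && echo "layout ok"
```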

Then create wordCount.scala under /home/sparknode/scalacode/src/main/scala:

// the word count program
import org.apache.spark.{SparkConf, SparkContext}

object wordCount {
    def main(args: Array[String]): Unit = {
        // note: setMaster("local") hard-codes local mode; settings made in
        // code take precedence over the --master flag of spark-submit
        val conf = new SparkConf().setAppName("wordCount").setMaster("local")
        val sc = new SparkContext(conf)
        // read the input file from the local filesystem
        val lines = sc.textFile("file:///home/sparknode/桌面/word.txt")
        // split each line into words and drop empty strings
        val words = lines.flatMap(_.split(" ")).filter(word => word != "")
        // count occurrences of each word
        val pairs = words.map(word => (word, 1))
        val wordscount = pairs.reduceByKey(_ + _)
        wordscount.collect.foreach(println)
        sc.stop()
    }
}
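To try the program, a small input file can be prepared first (the path and contents below are illustrative; the code above reads /home/sparknode/桌面/word.txt):

```shell
# write a tiny sample input (illustrative path)
cat > /tmp/word.txt <<'EOF'
hello spark
hello scala
EOF
cat /tmp/word.txt
```

With input like this, the program would print pairs such as (hello,2), (spark,1) and (scala,1).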

Then go back to /home/sparknode/scalacode and create sample.sbt:

// adjust the Scala and Spark versions to match your installation
name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.6"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.2"
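Note that the %% operator makes sbt append the project's Scala binary version to the artifact name, so with scalaVersion 2.11.6 the dependency line above is equivalent to this single-% form:

```scala
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.3.2"
```

This is why the scalaVersion and the Spark artifact must agree on the Scala binary version (2.11 here).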

Then, from /home/sparknode/scalacode, run:

<sbt install directory>/bin/sbt package

This compiles the project and produces target/scala-2.11/simple-project_2.11-1.0.jar.

Run it on Spark:

spark-submit --class "wordCount" --master local /home/sparknode/scalacode/target/scala-2.11/simple-project_2.11-1.0.jar

Here --class is the name of the main object defined in wordCount.scala.
