Creating the Classic Spark WordCount Example with Maven in IDEA

Open IDEA, create a Maven project, and write the following class:

 

package com.atguigu.wordcount

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object ScalaWorkCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration and set the application name
    val conf = new SparkConf().setAppName("ScalaWordCount")
    // Create the entry point for Spark execution
    val sc = new SparkContext(conf)
    // Specify where to read the data from and create the RDD (Resilient Distributed Dataset)
//    sc.textFile(args(0)).flatMap(_.split(" ")).map((_,1)).reduceByKey(_ +_).sortBy(_._2,false).saveAsTextFile(args(1))
    val line:RDD[String] = sc.textFile(args(0))
    // Split each line into words and flatten
    val words:RDD[String] = line.flatMap(_.split(" "))
    // Pair each word with 1
    val wordAndOne:RDD[(String,Int)] = words.map((_,1))
    // Aggregate (sum) by key
    val reduced:RDD[(String,Int)] = wordAndOne.reduceByKey(_ + _)
    // Sort by count, descending
    val sorted = reduced.sortBy(_._2,false)
    // Save the result to HDFS
    sorted.saveAsTextFile(args(1))
    sc.stop()

  }
}

/* Analysis of the computation:
* line.flatMap(_.split(" ")), or equivalently flatMap(line => line.split(" ")), splits each line of the file into words on spaces
* words.map((_,1)) maps each word to the pair (word, 1); the same word can appear many times
* wordAndOne.reduceByKey(_ + _), or equivalently reduceByKey((x, y) => x + y), adds up the counts for identical keys, so each word appears only once
* sortBy(_._2, false) sorts by word count in descending order
*/
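
The same pipeline can be checked locally before packaging. Below is a minimal sketch; the LocalWordCountTest object, the sample data, and setMaster("local[*]") are additions for local testing only and are not part of the cluster job above.

package com.atguigu.wordcount

import org.apache.spark.{SparkConf, SparkContext}

object LocalWordCountTest {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside the IDE on all local cores; no cluster needed
    val conf = new SparkConf().setAppName("LocalWordCountTest").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // A small in-memory dataset stands in for the HDFS input file
    val lines = sc.parallelize(Seq("hello spark", "hello scala", "hello spark"))
    val counts = lines
      .flatMap(_.split(" "))  // split into words
      .map((_, 1))            // pair each word with 1
      .reduceByKey(_ + _)     // sum the counts per word
      .sortBy(_._2, false)    // sort by count, descending
    counts.collect().foreach(println) // expected: (hello,3), (spark,2), (scala,1)
    sc.stop()
  }
}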

After the code is written,

package it into a jar (for a Maven project, mvn package builds it under target/) and copy the jar to the cluster.

Run it with spark-submit:

/opt/module/spark-2.1.1/bin/spark-submit \
--master spark://hadoop02:7077,hadoop03:7077 \
--class com.atguigu.wordcount.ScalaWorkCount \
/home/hadoops/spark-wordcount-1.0-SNAPSHOT.jar \
hdfs://s202:9000/wc hdfs://s202:9000/wcount20190105

com.atguigu.wordcount.ScalaWorkCount: the main class to run

/home/hadoops/spark-wordcount-1.0-SNAPSHOT.jar: the path to the jar

hdfs://s202:9000/wc and hdfs://s202:9000/wcount20190105: the input and output paths, passed to the program as args(0) and args(1); the output directory must not already exist, or saveAsTextFile will fail.
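
After the job finishes, saveAsTextFile leaves part-NNNNN files in the output directory. A quick way to inspect them is to read the directory back in spark-shell; this is a minimal sketch that reuses the output path from the command above.

// In spark-shell on the cluster, sc is already created.
// Read the saved result back and print each (word,count) line.
val result = sc.textFile("hdfs://s202:9000/wcount20190105")
result.collect().foreach(println)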
