Spark on YARN with IntelliJ IDEA: Installation, Build, Packaging, and Cluster Execution


  • 1. IntelliJ IDEA installation (CentOS 6.5)
    • Step 1
    • Step 2
    • Step 3: Local run
  • 2. Packaging and running on the YARN cluster

Note: Hadoop 2.2.0 is already installed in fully distributed mode, Scala and Spark are installed, and the environment is configured. The hosts are hadoop-master and hadoop-slave.

1. IntelliJ IDEA installation (CentOS 6.5)

1. Installer package: ideaIc-2017.1.tar.gz (http://pan.baidu.com/s/1nv7Emox)
2. Scala plugin package: scala-intellij-bin-2017.1.14 (http://pan.baidu.com/s/1i4PMZzf)

Step 1.

1. Place the two packages above on the master node (here, on hadoop-master under the root user's desktop).
2. Extract the IDE with the following command:
tar zxvf ideaIc-2017.1.tar.gz -C ~/
3. Place scala-intellij-bin-2017.1.14 into the plugins directory under idea-IC-171.3780.95. The commands below sketch these steps.
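
A minimal terminal sketch of Step 1, assuming both packages are on the root user's desktop; the Scala plugin's exact file name and packaging may differ, so adjust the copy step accordingly:

```bash
# Sketch of Step 1; file names and paths are taken from this setup and may differ on yours.
cd /root/Desktop

# Extract IntelliJ IDEA Community Edition into the root user's home directory
tar zxvf ideaIc-2017.1.tar.gz -C ~/

# Place the Scala plugin under IDEA's plugins directory
# (if the plugin is distributed as a zip, unzip it there instead of copying it)
cp scala-intellij-bin-2017.1.14* ~/idea-IC-171.3780.95/plugins/

# Start the IDE
~/idea-IC-171.3780.95/bin/idea.sh
```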

Step 2.

1. Open IntelliJ IDEA.
2. Create a new Project, as shown below, then click Next.
[Figure 2]
3. Project name: the name of the project.
Project location: the path where the project is stored.
JDK: the Java SDK (the JDK 1.7 bundled with CentOS 6.5 did not work here, so it was replaced with Oracle JDK 1.8).
Scala SDK: 2.10.4 (the version installed on this machine).
[Figure 3]
[Figure 4]
4. When developing an application in IDEA, source code is usually organized into a directory structure, for example a source directory and a test-source directory. The following shows how to create the main/scala source directory under src in IntelliJ IDEA.
Press F4, or right-click the project, and choose Open Module Settings.
[Figure 5]
5. Click Modules, select the src directory, right-click to create the main/scala folders, then mark the scala folder as Sources, as shown below.
[Figure 6]
6. Import the Spark dependency jar (this setup uses spark-assembly-1.0.0-hadoop2.2.0): click Libraries, click +, choose Java, select spark-assembly-1.0.0-hadoop2.2.0.jar, attach it to the module, and click OK.
[Figure 7]

The Spark development environment is now fully configured.

Step 3. Local run

1. Create a new Scala object and enter the code below.
[Figure 8]
2. Compile the code via Build -> Build Project.
3. Then edit the run parameters via Run -> Edit Configurations.
[Figure 9]

```scala
import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by root on 3/23/17.
 */
object spp {
  def main(args: Array[String]) {
    // The input file can be a local Linux file or come from another source such as HDFS
    if (args.length == 0) {
      System.err.println("Usage: SparkWordCount <inputfile>")
      System.exit(1)
    }
    // Run in local mode; the number of threads can be specified,
    // e.g. .setMaster("local[2]") runs with two threads.
    // The setting below runs with a single thread.
    val conf = new SparkConf().setAppName("SparkWordCount").setMaster("local")
    val sc = new SparkContext(conf)

    // WordCount: split each line into words and sum the counts per word
    val rdd2 = sc.textFile(args(0)).flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    // val count1 = rdd2.countByValue()

    // Print the number of distinct words
    // rdd2.saveAsTextFile(path = args(1))
    // rdd2.count()
    println(rdd2.count())
    sc.stop()
  }
}
```

After that, run the program via Run -> Run or Alt+Shift+F10. The console output is as follows:

```
/usr/lib/jvm/jdk1.8.0_60/bin/java -javaagent:/root/Desktop/idea-IC-171.3780.95/lib/idea_rt.jar=42032:/root/Desktop/idea-IC-171.3780.95/bin -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/jdk1.8.0_60/jre/lib/charsets.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/deploy.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/dnsns.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/jaccess.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/localedata.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/nashorn.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunec.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/ext/zipfs.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/javaws.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jce.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jfr.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jfxswt.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/jsse.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/management-agent.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/plugin.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/resources.jar:/usr/lib/jvm/jdk1.8.0_60/jre/lib/rt.jar:/root/IdeaProjects/my2/out/production/my2:/root/.ivy2/cache/org.scala-lang/scala-reflect/jars/scala-reflect-2.10.4.jar:/root/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.10.4.jar:/root/.ivy2/cache/org.scala-lang/scala-reflect/srcs/scala-reflect-2.10.4-sources.jar:/root/.ivy2/cache/org.scala-lang/scala-library/srcs/scala-library-2.10.4-sources.jar:/root/spark-1.0.0-bin-2.2.0/lib/spark-assembly-1.0.0-hadoop2.2.0.jar spp hdfs://10.6.3.200:8020/data/wordcount/1.txt
17/03/28 13:26:06 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/03/28 13:26:06 INFO SecurityManager: Changing view acls to: root
17/03/28 13:26:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
17/03/28 13:26:07 INFO Slf4jLogger: Slf4jLogger started
17/03/28 13:26:07 INFO Remoting: Starting remoting
17/03/28 13:26:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@hadoop-master:59842]
17/03/28 13:26:08 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@hadoop-master:59842]
17/03/28 13:26:08 INFO SparkEnv: Registering MapOutputTracker
17/03/28 13:26:08 INFO SparkEnv: Registering BlockManagerMaster
17/03/28 13:26:08 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20170328132608-36ca
17/03/28 13:26:08 INFO MemoryStore: MemoryStore started with capacity 528.0 MB.
17/03/28 13:26:08 INFO ConnectionManager: Bound socket to port 50620 with id = ConnectionManagerId(hadoop-master,50620)
17/03/28 13:26:08 INFO BlockManagerMaster: Trying to register BlockManager
17/03/28 13:26:08 INFO BlockManagerInfo: Registering block manager hadoop-master:50620 with 528.0 MB RAM
17/03/28 13:26:08 INFO BlockManagerMaster: Registered BlockManager
17/03/28 13:26:08 INFO HttpServer: Starting HTTP Server
17/03/28 13:26:08 INFO HttpBroadcast: Broadcast server started at http://10.6.3.200:50541
17/03/28 13:26:08 INFO HttpFileServer: HTTP File server directory is /tmp/spark-e484b842-2b2c-43e1-8c19-c1375c30dc92
17/03/28 13:26:08 INFO HttpServer: Starting HTTP Server
17/03/28 13:26:09 INFO SparkUI: Started SparkUI at http://hadoop-master:4040
17/03/28 13:26:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/28 13:26:11 INFO MemoryStore: ensureFreeSpace(133256) called with curMem=0, maxMem=553648128
17/03/28 13:26:11 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 130.1 KB, free 527.9 MB)
17/03/28 13:26:12 INFO FileInputFormat: Total input paths to process : 1
17/03/28 13:26:12 INFO SparkContext: Starting job: count at spp.scala:28
17/03/28 13:26:12 INFO DAGScheduler: Registering RDD 4 (reduceByKey at spp.scala:23)
17/03/28 13:26:12 INFO DAGScheduler: Got job 0 (count at spp.scala:28) with 1 output partitions (allowLocal=false)
17/03/28 13:26:12 INFO DAGScheduler: Final stage: Stage 0(count at spp.scala:28)
17/03/28 13:26:12 INFO DAGScheduler: Parents of final stage: List(Stage 1)
17/03/28 13:26:12 INFO DAGScheduler: Missing parents: List(Stage 1)
17/03/28 13:26:12 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[4] at reduceByKey at spp.scala:23), which has no missing parents
17/03/28 13:26:12 INFO DAGScheduler: Submitting 1 missing tasks from Stage 1 (MapPartitionsRDD[4] at reduceByKey at spp.scala:23)
17/03/28 13:26:12 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/28 13:26:12 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
17/03/28 13:26:12 INFO TaskSetManager: Serialized task 1.0:0 as 2076 bytes in 129 ms
17/03/28 13:26:12 INFO Executor: Running task ID 0
17/03/28 13:26:12 INFO BlockManager: Found block broadcast_0 locally
17/03/28 13:26:12 INFO HadoopRDD: Input split: hdfs://10.6.3.200:8020/data/wordcount/1.txt:0+15
17/03/28 13:26:12 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/03/28 13:26:12 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/03/28 13:26:12 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/03/28 13:26:12 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/03/28 13:26:12 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/03/28 13:26:13 INFO Executor: Serialized size of result for 0 is 783
17/03/28 13:26:13 INFO Executor: Sending result for 0 directly to driver
17/03/28 13:26:13 INFO Executor: Finished task ID 0
17/03/28 13:26:13 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
17/03/28 13:26:13 INFO TaskSetManager: Finished TID 0 in 667 ms on localhost (progress: 1/1)
17/03/28 13:26:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/28 13:26:13 INFO DAGScheduler: Stage 1 (reduceByKey at spp.scala:23) finished in 0.705 s
17/03/28 13:26:13 INFO DAGScheduler: looking for newly runnable stages
17/03/28 13:26:13 INFO DAGScheduler: running: Set()
17/03/28 13:26:13 INFO DAGScheduler: waiting: Set(Stage 0)
17/03/28 13:26:13 INFO DAGScheduler: failed: Set()
17/03/28 13:26:13 INFO DAGScheduler: Missing parents for Stage 0: List()
17/03/28 13:26:13 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[6] at reduceByKey at spp.scala:23), which is now runnable
17/03/28 13:26:13 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MapPartitionsRDD[6] at reduceByKey at spp.scala:23)
17/03/28 13:26:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/03/28 13:26:13 INFO TaskSetManager: Starting task 0.0:0 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
17/03/28 13:26:13 INFO TaskSetManager: Serialized task 0.0:0 as 1939 bytes in 1 ms
17/03/28 13:26:13 INFO Executor: Running task ID 1
17/03/28 13:26:13 INFO BlockManager: Found block broadcast_0 locally
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
17/03/28 13:26:13 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 13 ms
17/03/28 13:26:13 INFO Executor: Serialized size of result for 1 is 863
17/03/28 13:26:13 INFO Executor: Sending result for 1 directly to driver
17/03/28 13:26:13 INFO Executor: Finished task ID 1
17/03/28 13:26:13 INFO DAGScheduler: Completed ResultTask(0, 0)
17/03/28 13:26:13 INFO TaskSetManager: Finished TID 1 in 91 ms on localhost (progress: 1/1)
17/03/28 13:26:13 INFO DAGScheduler: Stage 0 (count at spp.scala:28) finished in 0.094 s
17/03/28 13:26:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/28 13:26:13 INFO SparkContext: Job finished: count at spp.scala:28, took 1.148558113 s
2
17/03/28 13:26:13 INFO SparkUI: Stopped Spark web UI at http://hadoop-master:4040
17/03/28 13:26:13 INFO DAGScheduler: Stopping DAGScheduler
17/03/28 13:26:14 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
17/03/28 13:26:14 INFO ConnectionManager: Selector thread was interrupted!
17/03/28 13:26:14 INFO ConnectionManager: ConnectionManager stopped
17/03/28 13:26:14 INFO MemoryStore: MemoryStore cleared
17/03/28 13:26:14 INFO BlockManager: BlockManager stopped
17/03/28 13:26:14 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
17/03/28 13:26:14 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/28 13:26:14 INFO SparkContext: Successfully stopped SparkContext
17/03/28 13:26:14 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/03/28 13:26:14 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
```

OK

2. Packaging and running on the YARN cluster

1. Check what is currently in HDFS:
hadoop fs -ls -R /
[Figure 10]
2. As shown above, the file 1.txt is already present. If not, put a suitable text file into HDFS with a command such as: hadoop fs -put /usr/local/cluster/hadoop/etc/hadoop/slaves /data/wordcount/
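
A minimal command-line sketch of preparing the input data, assuming the /data/wordcount path used in this article:

```bash
# Create the target directory (if it does not exist) and upload a text file to count
hadoop fs -mkdir -p /data/wordcount
hadoop fs -put /usr/local/cluster/hadoop/etc/hadoop/slaves /data/wordcount/

# Confirm the file is in place
hadoop fs -ls -R /data/wordcount
```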
3. Modify the code as follows:
```scala
// The package must match the class name passed to spark-submit (--class main.scala.spp)
package main.scala

import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by root on 3/23/17.
 */
object spp {
  def main(args: Array[String]) {
    // The input file can be a local Linux file or come from another source such as HDFS
    if (args.length < 2) {
      System.err.println("Usage: SparkWordCount <inputfile> <outputdir>")
      System.exit(1)
    }
    // Do not hard-code the master for a cluster run; it is supplied by spark-submit
    // (--master yarn-client below). A hard-coded .setMaster("spark://10.6.3.200:7077")
    // would send the job to the standalone master instead of YARN.
    val conf = new SparkConf().setAppName("SparkWordCount")
    val sc = new SparkContext(conf)

    // WordCount: split each line into words and sum the counts per word
    val rdd2 = sc.textFile(args(0)).flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    // val count1 = rdd2.countByValue()

    // Save the (word, count) pairs to the output directory
    rdd2.saveAsTextFile(args(1))
    // rdd2.count()
    // println(rdd2.count())
    sc.stop()
  }
}
```
4. Select the project my2, press F4 to open Project Structure, and choose Artifacts, as shown below.
Choose JAR -> From modules with dependencies.
[Figure 11]

5. Since the job will later be submitted to the cluster, where the required jars already exist, remove spark-assembly-1.0.0-hadoop2.2.0.jar and the other dependency jars from the artifact to reduce its size, as shown below.
[Figure 12]

6. After confirming, click Build -> Build Artifacts.
The generated artifact is shown below (it can also be checked from the command line, as sketched after the figure):
[Figure 13]
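
A quick command-line check of the artifact, assuming IDEA's default output path out/artifacts/my2_jar/my2.jar under the project directory; adjust the path if your artifact is written elsewhere:

```bash
# Verify the packaged artifact (path assumes IDEA's default artifact output location)
cd /root/IdeaProjects/my2/out/artifacts/my2_jar
jar tf my2.jar | grep -i spp   # the application class should appear in the listing
ls -lh my2.jar                 # the jar stays small once the Spark/Hadoop jars are removed
```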

7. Run in yarn-client mode (yarn-cluster mode hit an out-of-memory error that has not been resolved yet); a variant with explicit resource settings is sketched after the command:
/root/spark-1.0.0-bin-2.2.0/bin/spark-submit --master yarn-client --class main.scala.spp my2.jar hdfs://hadoop-master:8020/data/wordcount/1.txt hdfs://hadoop-master:8020/data/wordcount/read9
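
A minimal sketch of that variant with illustrative (untuned) values; raising driver and executor memory is a common first step when yarn-cluster mode fails with out-of-memory errors:

```bash
# Same job with explicit YARN resource options (values are illustrative, not tuned)
/root/spark-1.0.0-bin-2.2.0/bin/spark-submit \
  --master yarn-client \
  --class main.scala.spp \
  --num-executors 2 \
  --executor-memory 1g \
  --executor-cores 1 \
  --driver-memory 1g \
  my2.jar \
  hdfs://hadoop-master:8020/data/wordcount/1.txt \
  hdfs://hadoop-master:8020/data/wordcount/read9
```

Note that saveAsTextFile fails if the output directory already exists, so use a fresh output path on repeated runs.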

8. Check whether the read9 output directory now exists in HDFS (command-line checks are sketched below the figure).
[Figure 14]
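
Equivalently, the output can be inspected from the command line, assuming the paths used above:

```bash
# List the output directory and print the (word, count) pairs
hadoop fs -ls /data/wordcount/read9
hadoop fs -cat /data/wordcount/read9/part-*
```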
Check the Spark application in the web UI:

[Figure 15]

Cluster web UIs:
HDFS NameNode: 10.6.3.200:50070
YARN ResourceManager (Spark applications appear here): 10.6.3.200:8088
Spark standalone master: 10.6.3.200:8080

Reference: http://blog.csdn.net/lovehuangjiaju/article/details/48577281
