Spark examples: SparkPi



Environment:

Server: Ubuntu, Spark 1.5.2

Development environment: Windows, Eclipse


The example can be run directly on the cluster, but to get familiar with the workflow we first download the source to Windows, package it into a jar, and then upload and run it:

1. Download SparkPi.scala:

Path: /home/hadoop/cloud/spark-1.5.2/examples/src/main/scala/org/apache/spark/examples
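For context, SparkPi estimates pi with a Monte Carlo method: it samples random points in the square [-1, 1] x [-1, 1] and counts how many land inside the unit circle. The 1.5.x example source looks roughly like the sketch below (reproduced from memory, so treat the details as approximate):

    package org.apache.spark.examples

    import scala.math.random

    import org.apache.spark._

    /** Computes an approximation to pi by random sampling. */
    object SparkPi {
      def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("Spark Pi")
        val spark = new SparkContext(conf)
        // Optional first argument: the number of partitions ("slices") to use.
        val slices = if (args.length > 0) args(0).toInt else 2
        val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
        val count = spark.parallelize(1 until n, slices).map { i =>
          val x = random * 2 - 1
          val y = random * 2 - 1
          if (x * x + y * y < 1) 1 else 0 // 1 if the point falls inside the unit circle
        }.reduce(_ + _)
        // The circle-to-square area ratio is pi/4, so pi is about 4 * count / n.
        println("Pi is roughly " + 4.0 * count / n)
        spark.stop()
      }
    }

No argument is passed in the submit scripts below, so the default of 2 slices (200,000 sample points) is used, which is why the printed value is only a rough approximation.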


2. Build the jar SparkPi.jar and upload it to /home/hadoop/cloud/test/sh_spark_xubo/SparkPi
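Here the jar is exported from the Eclipse project set up on Windows. If you would rather build it from the command line, a minimal sbt build file along these lines should also work (this file is my own sketch, not part of the original setup; Spark 1.5.2 is built against Scala 2.10):

    name := "SparkPi"

    version := "1.0"

    scalaVersion := "2.10.4"

    // "provided": spark-core is needed to compile, but the cluster already supplies it
    // at runtime, so it is left out of the packaged jar.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.2" % "provided"

Running sbt package then writes a jar under target/scala-2.10/, which can be renamed to SparkPi.jar before uploading.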


3. The submit script:

hadoop@Master:~/cloud/test/sh_spark_xubo/SparkPi$ cat submitJob.sh 
    #!/usr/bin/env bash
    spark-submit --name SparkPi \
        --class org.apache.spark.examples.SparkPi \
        --master spark://Master:7077 \
        --executor-memory 512M \
        --total-executor-cores 1 SparkPi.jar
Here, spark://Master:7077 should be replaced with the IP address (or hostname) of the cluster's master node.

4. Run it:

hadoop@Master:~/cloud/test/sh_spark_xubo/SparkPi$ ./submitJob.sh 
Pi is roughly 3.1406  



If the total-executor-cores value in the script exceeds the number of cores in the cluster, the job is capped at the cluster's core count. For example, the script below requests 22 cores, but only 14 cores are actually used when it runs:

    #!/usr/bin/env bash
    spark-submit --name SparkPi \
        --class org.apache.spark.examples.SparkPi \
        --master spark://219.219.220.149:7077 \
        --executor-memory 512M \
        --total-executor-cores 22 SparkPi.jar

