【Spark】Computing Pi with Spark

cd $SPARK_HOME/bin
./spark-submit --master spark://node111:7077 --class org.apache.spark.examples.SparkPi ../examples/jars/spark-examples_2.11-2.1.1.jar 100


Command format: $SPARK_HOME/bin/spark-submit --master spark://[master hostname]:7077 --class [package.ClassName] [path to jar] [application arguments]

Here the master hostname is node111.

The examples jar lives in a different place in different Spark versions; in 2.1.1 it is under $SPARK_HOME/examples/jars/.
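If you are not sure where the jar sits in your installation, a standard find will locate it (assuming a stock Spark layout):

find $SPARK_HOME -name "spark-examples*.jar"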

  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
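These options are excerpted from spark-submit's built-in usage text; the full list can be printed with:

./spark-submit --help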

The trailing argument (100 above) is the number of partitions SparkPi splits the sampling into, i.e. the number of tasks, not threads; pick a value suited to the number of cores in your cluster.

Result: Pi is roughly 3.1419091141909115
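For context, SparkPi estimates Pi by Monte Carlo sampling: each partition throws random points at the unit square and counts how many land inside the unit circle. A minimal Scala sketch of the idea, simplified from the bundled example and not its exact source:

    import org.apache.spark.sql.SparkSession

    object PiSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("PiSketch").getOrCreate()
        val slices = if (args.length > 0) args(0).toInt else 2   // the trailing "100" above
        val n = 100000L * slices                                 // total random samples
        val inside = spark.sparkContext
          .parallelize(1L to n, slices)                          // one task per slice
          .map { _ =>
            // A point uniform in [-1, 1] x [-1, 1] falls inside the
            // unit circle with probability Pi / 4.
            val x = math.random * 2 - 1
            val y = math.random * 2 - 1
            if (x * x + y * y <= 1) 1L else 0L
          }
          .reduce(_ + _)
        println(s"Pi is roughly ${4.0 * inside / n}")
        spark.stop()
      }
    }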


Error:

org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:154)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:134)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:186)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:512)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:406)
    at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:518)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.org$apache$spark$scheduler$cluster$CoarseGrainedSchedulerBackend$DriverEndpoint$$removeExecutor(CoarseGrainedSchedulerBackend.scala:293)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$onDisconnected$1.apply(CoarseGrainedSchedulerBackend.scala:228)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$onDisconnected$1.apply(CoarseGrainedSchedulerBackend.scala:228)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.onDisconnected(CoarseGrainedSchedulerBackend.scala:228)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:143)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)


【Fixing the error above】

Fix: add the --total-executor-cores option:

./spark-submit --master spark://node111:7077 --total-executor-cores 3 --class org.apache.spark.examples.SparkPi ../examples/jars/spark-examples_2.11-2.1.1.jar 100


 --total-executor-cores NUM  Total cores for all executors.


This caps the total number of cores in the cluster that the current application may claim.
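The same cap can also be set through the spark.cores.max property instead of the command-line flag (a sketch of the same run as above, with hostname and paths following the earlier setup):

./spark-submit --master spark://node111:7077 --conf spark.cores.max=3 --class org.apache.spark.examples.SparkPi ../examples/jars/spark-examples_2.11-2.1.1.jar 100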

Result:

Pi is roughly 3.1417603141760315

This time it runs without error.



【cluster mode】

Add the option --deploy-mode cluster; a full command is sketched below.
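A complete invocation under the same setup might look like this (a sketch: in cluster mode the driver is launched on a worker node, so the jar path must be readable from there, which is why an absolute path is used instead of the relative one above):

./spark-submit --master spark://node111:7077 --deploy-mode cluster --total-executor-cores 3 --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar 100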


Result:

Pi is roughly 3.141410714141071

In cluster mode the result is not printed in the terminal, because the driver runs on a worker node. View it in the master web UI (by default http://node111:8080): under Completed Drivers, find the finished driver and open its stdout log.
