The usual way to submit a Spark program is through Spark's spark-submit script, for example by running ./spark-submit on a Linux machine to submit a custom Spark application. In many cases, however, we need to submit a Spark application programmatically. This article shows two ways of submitting Spark dynamically from a Java program; one of them is the approach StreamSets uses to submit Spark programs.

The first way is to call SparkSubmit.main(). This requires the spark-core dependency on the classpath, since the SparkSubmit class lives in the org.apache.spark.deploy package. A simple submission looks like this:
import org.apache.spark.deploy.SparkSubmit;

// The arguments take exactly the same form as on the spark-submit command line:
String[] args = {
        "--class", "com.fly.spark.WordCount",
        "--master", "spark://127.0.0.1:6066",
        "--deploy-mode", "cluster",
        "--executor-memory", "1g",
        "--total-executor-cores", "4",
        "hdfs:///wordcount.jar",      // application jar
        "hdfs:///wordcount.txt",      // application argument: input path
        "hdfs:///wordcount"           // application argument: output path
};
SparkSubmit.main(args);
The call hands the application to the cluster and returns the submission result; you can choose whether or not to wait for the job to finish.
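Whether the call itself blocks until the application ends depends on the deploy mode and on settings such as spark.yarn.submit.waitAppCompletion on YARN. If the calling program should stay responsive either way, a minimal sketch (an assumption on my part, not part of the original article) is to run the submission on a background thread, reusing the args array above:

// Hedged sketch: submit from a background thread so the caller does not block.
// Note that the enclosing method must handle or declare InterruptedException for join().
Thread submitThread = new Thread(() -> SparkSubmit.main(args), "spark-submit");
submitThread.start();
// ... the caller is free to do other work here ...
submitThread.join();   // optional: only call this if you do want to wait for SparkSubmit.main() to return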
The second way uses SparkLauncher, which is also how StreamSets submits its Spark pipelines. SparkLauncher can be used in two ways, and the first is usually the better choice.

The first is to call startApplication() on a SparkLauncher instance. Even with a completely correct configuration, a naive use of this method tends to fail: startApplication() uses a LauncherServer to start a child process that talks to the cluster, and this appears to happen asynchronously, so the main thread may finish before that process has come up, and the submission fails. The fix is either to let the main thread sleep for a while after calling new SparkLauncher().startApplication(), or to block on the handle as in the following example:
package com.learn.spark;

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

import java.io.IOException;
import java.util.HashMap;
import java.util.concurrent.CountDownLatch;

public class LanuncherAppV {
    public static void main(String[] args) throws IOException, InterruptedException {
        HashMap<String, String> env = new HashMap<>();
        // These two variables must be set.
        env.put("HADOOP_CONF_DIR", "/usr/local/hadoop/etc/overriterHaoopConf");
        env.put("JAVA_HOME", "/usr/local/java/jdk1.8.0_151");
        // Optional:
        //env.put("YARN_CONF_DIR", "");

        CountDownLatch countDownLatch = new CountDownLatch(1);
        // Note: even after calling setJavaHome(), the "JAVA_HOME is not set" error still occurs,
        // which is why JAVA_HOME is passed through the environment map above.
        SparkAppHandle handle = new SparkLauncher(env)
                .setSparkHome("/usr/local/spark")
                .setAppResource("/usr/local/spark/spark-demo.jar")
                .setMainClass("com.learn.spark.SimpleApp")
                .setMaster("yarn")
                .setDeployMode("cluster")
                .setConf("spark.app.id", "11222")
                .setConf("spark.driver.memory", "2g")
                .setConf("spark.akka.frameSize", "200")
                .setConf("spark.executor.memory", "1g")
                .setConf("spark.executor.instances", "32")
                .setConf("spark.executor.cores", "3")
                .setConf("spark.default.parallelism", "10")
                .setConf("spark.driver.allowMultipleContexts", "true")
                .setVerbose(true)
                .startApplication(new SparkAppHandle.Listener() {
                    // Listen for state changes; once the application ends (for whatever reason),
                    // isFinal() returns true, otherwise false.
                    @Override
                    public void stateChanged(SparkAppHandle sparkAppHandle) {
                        if (sparkAppHandle.getState().isFinal()) {
                            countDownLatch.countDown();
                        }
                        System.out.println("state:" + sparkAppHandle.getState().toString());
                    }

                    @Override
                    public void infoChanged(SparkAppHandle sparkAppHandle) {
                        System.out.println("Info:" + sparkAppHandle.getState().toString());
                    }
                });

        System.out.println("The task is executing, please wait ....");
        // Block the main thread until the application reaches a final state.
        countDownLatch.await();
        System.out.println("The task is finished!");
    }
}
Note: in cluster deploy mode, anything the application writes to standard output will not be visible to the submitter, so write results to HDFS instead; in client mode the output can be seen directly.
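As an illustration, the driver could persist its result to HDFS instead of printing it. The sketch below is only an assumption of what com.learn.spark.SimpleApp might look like; the class name and paths come from the examples in this article, while the line-count logic is made up for illustration:

package com.learn.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Collections;

public class SimpleApp {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("SimpleApp"));
        // In cluster mode the driver's System.out is not visible to the submitter,
        // so write the result to HDFS where it can be inspected afterwards.
        long nonEmptyLines = sc.textFile("hdfs:///wordcount.txt")
                .filter(line -> !line.isEmpty())
                .count();
        sc.parallelize(Collections.singletonList("non-empty lines: " + nonEmptyLines))
                .saveAsTextFile("hdfs:///wordcount-result");
        sc.stop();
    }
}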
The second way is to obtain a Process via SparkLauncher.launch() and then wait for it with process.waitFor(). With this approach you have to manage the process output yourself, which is more work, but the upside is that everything stays under your control: the output you capture is just as detailed as what you would see when submitting from the command line. An implementation:
package com.learn.spark;

import org.apache.spark.launcher.SparkLauncher;

import java.io.IOException;
import java.util.HashMap;

public class LauncherApp {
    public static void main(String[] args) throws IOException, InterruptedException {
        HashMap<String, String> env = new HashMap<>();
        // These two variables must be set.
        env.put("HADOOP_CONF_DIR", "/usr/local/hadoop/etc/overriterHaoopConf");
        env.put("JAVA_HOME", "/usr/local/java/jdk1.8.0_151");
        //env.put("YARN_CONF_DIR", "");

        SparkLauncher handle = new SparkLauncher(env)
                .setSparkHome("/usr/local/spark")
                .setAppResource("/usr/local/spark/spark-demo.jar")
                .setMainClass("com.learn.spark.SimpleApp")
                .setMaster("yarn")
                .setDeployMode("cluster")
                .setConf("spark.app.id", "11222")
                .setConf("spark.driver.memory", "2g")
                .setConf("spark.akka.frameSize", "200")
                .setConf("spark.executor.memory", "1g")
                .setConf("spark.executor.instances", "32")
                .setConf("spark.executor.cores", "3")
                .setConf("spark.default.parallelism", "10")
                .setConf("spark.driver.allowMultipleContexts", "true")
                .setVerbose(true);

        Process process = handle.launch();

        // Drain stdout and stderr of the child process on separate threads,
        // otherwise the process may block once the pipe buffers fill up.
        InputStreamReaderRunnable inputStreamReaderRunnable =
                new InputStreamReaderRunnable(process.getInputStream(), "input");
        Thread inputThread = new Thread(inputStreamReaderRunnable, "LogStreamReader input");
        inputThread.start();

        InputStreamReaderRunnable errorStreamReaderRunnable =
                new InputStreamReaderRunnable(process.getErrorStream(), "error");
        Thread errorThread = new Thread(errorStreamReaderRunnable, "LogStreamReader error");
        errorThread.start();

        System.out.println("Waiting for finish...");
        int exitCode = process.waitFor();
        System.out.println("Finished! Exit code:" + exitCode);
    }
}
The custom InputStreamReaderRunnable class used above is implemented as follows:
package com.learn.spark;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

/**
 * Reads an InputStream line by line and echoes it to standard output,
 * so the launcher process's logs are not lost.
 */
public class InputStreamReaderRunnable implements Runnable {

    private BufferedReader reader;
    private String name;

    public InputStreamReaderRunnable(InputStream is, String name) {
        this.reader = new BufferedReader(new InputStreamReader(is));
        this.name = name;
    }

    public void run() {
        System.out.println("InputStream " + name + ":");
        try {
            String line = reader.readLine();
            while (line != null) {
                System.out.println(line);
                line = reader.readLine();
            }
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
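As an aside, if the Spark version in use is recent enough (around 1.6 or later, to the best of my knowledge), SparkLauncher can redirect the child process output itself, which removes the need for these reader threads. A hedged sketch, with illustrative log file locations and the same env map as in LauncherApp above:

// Hedged sketch: let SparkLauncher redirect the spark-submit process output to files
// (redirectOutput/redirectError are available in newer Spark versions).
Process process = new SparkLauncher(env)
        .setSparkHome("/usr/local/spark")
        .setAppResource("/usr/local/spark/spark-demo.jar")
        .setMainClass("com.learn.spark.SimpleApp")
        .setMaster("yarn")
        .setDeployMode("cluster")
        .redirectOutput(new java.io.File("/tmp/spark-submit.out"))   // assumed log paths
        .redirectError(new java.io.File("/tmp/spark-submit.err"))
        .launch();
int exitCode = process.waitFor();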
After packaging, upload the jar to the server where Spark is deployed. Because the submitting class uses SparkLauncher, the spark-launcher jar (or a classpath that contains it) must also be available on that server.
SparkLauncher also needs SPARK_HOME to be specified, so if your machine can already run spark-submit, look inside the spark-submit script to see where its SPARK_HOME points and use that path.
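If SPARK_HOME is already exported on the machine, one small convenience (my own suggestion, not something the original code does) is to read it from the environment and fall back to a hard-coded path only when it is missing:

// Hedged sketch: prefer the exported SPARK_HOME, fall back to the install path used above.
String sparkHome = System.getenv("SPARK_HOME");
if (sparkHome == null || sparkHome.isEmpty()) {
    sparkHome = "/usr/local/spark";
}
SparkLauncher launcher = new SparkLauncher(env).setSparkHome(sparkHome);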
In summary, what we need on the server is: the packaged application jar, the SparkLauncher dependency on the classpath, a valid SPARK_HOME, and (for YARN) the HADOOP_CONF_DIR and JAVA_HOME environment variables used in the examples above.