Spark Local Debugging and Remote Debugging

Local debugging:
Code:

package com.atguigu

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {

  def main(args: Array[String]): Unit = {

    // Create SparkConf and set the application name
    val conf = new SparkConf().setAppName("WC").setMaster("local[*]")

    // Create SparkContext, the entry point for submitting a Spark application
    val sc = new SparkContext(conf)

    // Use sc to create an RDD and run the corresponding transformations and actions
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _, 1)
      .sortBy(_._2, false)
      .saveAsTextFile(args(1))

    sc.stop()
  }
}
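
The program reads its input path from args(0) and writes the result to args(1), so when running in IDEA these two paths must be supplied as Program arguments in the Run Configuration. An illustrative example (the paths are placeholders):

D:\data\word.txt D:\data\output

Note that the output directory must not exist before the job runs, otherwise saveAsTextFile fails with a FileAlreadyExistsException.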

If the following exception appears:
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
the cause is that local Spark debugging depends on a Hadoop environment; specify the Hadoop path in the run environment and the error goes away.
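Besides pointing the run environment at a Hadoop installation (for example by setting HADOOP_HOME in the IDEA Run Configuration), the Hadoop home can also be set from code. A minimal sketch, assuming winutils.exe has been placed under D:\hadoop\bin (the path is only an illustration):

    // must be set before the SparkContext is created
    System.setProperty("hadoop.home.dir", "D:\\hadoop")
    val conf = new SparkConf().setAppName("WC").setMaster("local[*]")
    val sc = new SparkContext(conf)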

Remote debugging:
Remote debugging through IDEA essentially uses IDEA as the Driver to submit the application. The configuration is as follows:
Modify the SparkConf: add the jar that will ultimately be run and the address of the Driver program, and set the Master submission address:

val conf = new SparkConf().setAppName("WC")
  .setMaster("spark://hadoop102:7077")
  .setJars(List("E:\\SparkIDEA\\spark_test\\target\\WordCount.jar"))

Then add breakpoints and start debugging directly.
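
The executors started on the cluster connect back to the Driver running inside IDEA, so the development machine must be reachable from the cluster nodes. If it is not resolved automatically, the driver address can be set explicitly; a sketch under that assumption (the IP below is a placeholder for the local machine's address):

    val conf = new SparkConf().setAppName("WC")
      .setMaster("spark://hadoop102:7077")
      .setJars(List("E:\\SparkIDEA\\spark_test\\target\\WordCount.jar"))
      .set("spark.driver.host", "192.168.1.100") // so executors can reach the IDEA driver
    val sc = new SparkContext(conf)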

The pom file for the Spark project:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
</dependencies>
<build>
    <finalName>WordCount</finalName>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <archive>
                    <manifest>
                        <!-- change to your own main class -->
                        <mainClass>com.atguigu.WordCount</mainClass>
                    </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
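
With this pom, mvn package produces both target/WordCount.jar and, from the assembly plugin, target/WordCount-jar-with-dependencies.jar. The path passed to setJars above should point to the jar that is actually built, and the jar typically has to be rebuilt after every code change before starting a new remote debug session.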
