Setting up a local Spark environment on Windows

  • Local system: Windows 10
  • Hadoop is hadoop-3.2.0; replace hadoop.dll and winutils.exe (simplest is to swap in the whole bin directory),
    taken from GitHub: https://github.com/steveloughran/winutils/blob/master/hadoop-3.0.0/bin
  • Scala SDK 2.12.10 with Spark 2.4.3; no local Spark installation is needed, the pom configuration alone is enough (note: Spark 3.0 fails here with a Java 9 error, and none of the fixes found online worked)
  • Configure the JAVA_HOME and HADOOP_HOME environment variables, and set JAVA_HOME inside hadoop-env.cmd
  • Because the Hadoop scripts cannot handle spaces in paths, use the 8.3 short names: Program Files becomes PROGRA~1 and Program Files (x86) becomes PROGRA~2, e.g. C:\PROGRA~1\Java\jdk1.8.0_172 (a minimal hadoop-env.cmd sketch follows this list)
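
A minimal sketch of the edit in %HADOOP_HOME%\etc\hadoop\hadoop-env.cmd, assuming the JDK path shown above (adjust to your own install):

    @rem Use the 8.3 short path so the value contains no spaces
    set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_172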

The pom.xml file:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>

        <groupId>org.example</groupId>
        <artifactId>HelloWord</artifactId>
        <version>1.0-SNAPSHOT</version>

        <dependencies>
            <dependency>
                <groupId>org.apache.hive</groupId>
                <artifactId>hive-exec</artifactId>
                <version>1.2.1</version>
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.12</artifactId>
                <version>2.4.3</version>
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-sql_2.12</artifactId>
                <version>2.4.3</version>
            </dependency>
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-mllib_2.12</artifactId>
                <version>2.4.3</version>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-client</artifactId>
                <version>2.7.4</version>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-mapreduce-client-core</artifactId>
                <version>2.7.4</version>
            </dependency>
            <dependency>
                <groupId>org.apache.hive</groupId>
                <artifactId>hive-jdbc</artifactId>
                <version>1.2.1</version>
            </dependency>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-lang3</artifactId>
                <version>3.8.1</version>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-common</artifactId>
                <version>2.7.4</version>
            </dependency>
        </dependencies>
    </project>
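
Note that the _2.12 suffix on the Spark artifacts must match the Scala SDK major version (2.12.10 above). As a quick check that the spark-sql dependency resolves and runs locally, here is a minimal smoke test; the object name SqlSmokeTest and the sample rows are illustrative, not from the original article:

import org.apache.spark.sql.SparkSession

// Hypothetical smoke test: build a tiny DataFrame in local mode to verify
// that the spark-sql_2.12 dependency from the pom above works end to end.
object SqlSmokeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local")
      .appName("SqlSmokeTest")
      .getOrCreate()
    import spark.implicits._ // enables .toDF on local Scala collections

    val df = Seq((1, "spark"), (2, "hadoop")).toDF("id", "name")
    df.show() // prints the two rows as a small console table

    spark.stop()
  }
}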

Test run

// If this import shows as unresolved (red), make sure the Hadoop dependencies above are in the pom
import org.apache.spark.{SparkConf, SparkContext}

object HelloWord1 {

  def main(args: Array[String]): Unit = {
    // "local" runs Spark in-process with a single worker thread
    val conf = new SparkConf().setMaster("local").setAppName("大爷")

    val sc = new SparkContext(conf)

    // Build an RDD from an in-memory list and print each element
    val helloWorld = sc.parallelize(List("Hello,大爷4!", "Hello,大爷3!", "Hello,大爷1!"))

    helloWorld.foreach(line => println(line))

    sc.stop()
  }
}

Run result

(result.png: screenshot of the console output, showing the three "Hello,大爷…!" lines)
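
If the run instead fails with an error like "java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries", the JVM is not seeing HADOOP_HOME. One workaround is to point hadoop.home.dir at the Hadoop directory in code before creating the SparkContext; the C:\hadoop-3.2.0 path below is an assumption, adjust it to wherever hadoop-3.2.0 was unpacked:

import org.apache.spark.{SparkConf, SparkContext}

object HelloWord1Fallback {
  def main(args: Array[String]): Unit = {
    // Assumed install path; change to your actual hadoop-3.2.0 directory
    System.setProperty("hadoop.home.dir", "C:\\hadoop-3.2.0")

    val conf = new SparkConf().setMaster("local").setAppName("大爷")
    val sc = new SparkContext(conf)
    sc.parallelize(List("Hello,大爷!")).foreach(println)
    sc.stop()
  }
}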
