Spark-submit: System memory 466092032 must be at least 471859200

When running a job in Standalone client mode, the following error occurred:

Spark Executor Command: "/usr/local/jdk1.8.0_181/bin/java" "-cp" "/usr/local/spark-2.3.0/conf/:/usr/local/spark-2.3.0/jars/*:/usr/local/hadoop-2.7.6/etc/hadoop/" "-Xmx500M" "-Dspark.driver.port=43614" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@hpmaster:43614" "--executor-id" "32" "--hostname" "192.168.199.212" "--cores" "1" "--app-id" "app-20190212233950-0001" "--worker-url" "spark://[email protected]:7079"
========================================

Exception in thread "main" java.lang.IllegalArgumentException: System memory 466092032 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
    at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:217)
    at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:199)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
    at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:200)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:228)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:65)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:64)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:293)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)

The check comes from UnifiedMemoryManager.getMaxMemory in the Spark source (shown here as in Spark 2.3.0, matching the message in the stack trace; older releases have a shorter message and a 0.75 default for spark.memory.fraction):

/**
 * Return the total amount of memory shared between execution and storage, in bytes.
 */
private def getMaxMemory(conf: SparkConf): Long = {
  val systemMemory = conf.getLong("spark.testing.memory", Runtime.getRuntime.maxMemory)
  val reservedMemory = conf.getLong("spark.testing.reservedMemory",
    if (conf.contains("spark.testing")) 0 else RESERVED_SYSTEM_MEMORY_BYTES)
  val minSystemMemory = (reservedMemory * 1.5).ceil.toLong
  if (systemMemory < minSystemMemory) {
    throw new IllegalArgumentException(s"System memory $systemMemory must " +
      s"be at least $minSystemMemory. Please increase heap size using the " +
      s"--driver-memory option or spark.driver.memory in Spark configuration.")
  }
  // SPARK-12759 Check executor memory to fail fast if memory is insufficient
  if (conf.contains("spark.executor.memory")) {
    val executorMemory = conf.getSizeAsBytes("spark.executor.memory")
    if (executorMemory < minSystemMemory) {
      throw new IllegalArgumentException(s"Executor memory $executorMemory must be at least " +
        s"$minSystemMemory. Please increase executor memory using the " +
        s"--executor-memory option or spark.executor.memory in Spark configuration.")
    }
  }
  val usableMemory = systemMemory - reservedMemory
  val memoryFraction = conf.getDouble("spark.memory.fraction", 0.6)
  (usableMemory * memoryFraction).toLong
}
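
The arithmetic behind the two numbers in the message: RESERVED_SYSTEM_MEMORY_BYTES is 300 MB, so minSystemMemory is 300 MB × 1.5 = 471859200 bytes. The executor was launched with -Xmx500M, but Runtime.getRuntime.maxMemory typically reports somewhat less than the -Xmx value (HotSpot excludes one GC survivor space), hence the 466092032 in the log, which fails the check. A minimal standalone sketch of the same check, outside Spark (the class name is illustrative):

// Reproduces the getMaxMemory size check outside Spark; run with a small
// heap, e.g. scala -J-Xmx500M MemoryCheck, to see it fail.
object MemoryCheck {
  def main(args: Array[String]): Unit = {
    val reservedMemory: Long = 300L * 1024 * 1024                   // 314572800 bytes
    val minSystemMemory: Long = (reservedMemory * 1.5).ceil.toLong  // 471859200 bytes
    val systemMemory: Long = Runtime.getRuntime.maxMemory           // ~466092032 with -Xmx500M
    println(s"systemMemory    = $systemMemory")
    println(s"minSystemMemory = $minSystemMemory")
    println(s"check passes?   ${systemMemory >= minSystemMemory}")
  }
}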

The workaround: set spark.testing.memory explicitly, so that systemMemory is read from the configuration instead of from Runtime.getRuntime.maxMemory. In spark-submit this can be passed as --driver-java-options "-Dspark.testing.memory=1073741824"; any value of at least 471859200 bytes (450 MB, i.e. 1.5 × the 300 MB reserved memory) passes the check, and 1073741824 bytes is 1 GB:

/usr/local/spark-2.3.0/bin/spark-submit --class com.chy.rdd.initSpark --master spark://hpmaster:7077 --deploy-mode client  --executor-memory 500m --driver-java-options "-Dspark.testing.memory=1073741824" --total-executor-cores 1 /usr/local/spark-2.3.0/examples/jars/sparkProject-1.0-SNAPSHOT.jar
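
Note that spark.testing.memory is a testing-only knob that overrides systemMemory wholesale; since the JVM that fails here is the executor (see CoarseGrainedExecutorBackend in the trace), the more conventional fix is to raise the executor heap itself to at least 471859200 bytes, e.g. --executor-memory 1g. If you do keep the override, it can also be set programmatically on the SparkConf before the SparkContext is created; a minimal sketch with a hypothetical driver class (the property set on the driver's conf is forwarded to executors when they register):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver class; only the spark.testing.memory line matters.
object InitSpark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("initSpark")
      .set("spark.testing.memory", "1073741824")  // 1 GB >= 471859200 bytes
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).sum())        // trivial action to exercise executors
    sc.stop()
  }
}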
