Common Spark Problems

  • Spark local mode reports Input path does not exist: hdfs://

Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://argo/data/resys/mingliang/shop_diary/sparktest/WordCount/input.dat
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)

outputfile=/data/resys/mingliang/shop_diary/sparktest/WordCount/output
rm -rf $outputfile

/data/resys/var/spark-1.6.0-bin-hadoop2.6/bin/spark-submit \
  --class SparkTest.WordCount \
  --master "local" \
  --deploy-mode "client" \
  spark-wordcount-in-scala.jar \
  "local[2]" \
  /data/resys/mingliang/shop_diary/sparktest/WordCount/input.dat \
  $outputfile

http://stackoverflow.com/questions/27299923/how-to-load-local-file-in-sc-textfile-instead-of-hdfs
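
Per the linked answer: when fs.defaultFS in the Hadoop configuration points at HDFS, a bare path passed to sc.textFile resolves against HDFS even in local mode. A minimal sketch of the workaround, adding an explicit file:// scheme (reusing the input path from the error above; class and app names are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCountLocal {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("WordCount").setMaster("local[2]"))
        // The explicit file:// scheme forces the local filesystem regardless of fs.defaultFS
        val lines = sc.textFile("file:///data/resys/mingliang/shop_diary/sparktest/WordCount/input.dat")
        println(lines.count())
        sc.stop()
      }
    }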

  • More than one scala library found in the build path (…org.scala-lang.scala-library_2.11.7.v…)
    Solution: Project Properties -> Scala Compiler ->
    check Use Project Settings and select Latest 2.10 bundle (dynamic).
    Click OK and the error no longer appears.

  • JAR creation failed. See details for additional information
    Right-click the project and choose "Refresh"; that fixes it.

  • java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
    This is caused by the Scala version used for the local build differing from the Scala version the Spark cluster runs. You can check the version numbers under the Environment tab of the Spark job monitoring UI.
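
To confirm a mismatch without digging through the UI, you can also print the Scala version at runtime on both the driver and the executors. A small sketch, assuming an existing SparkContext named sc:

    // Scala version the driver is running
    println(scala.util.Properties.versionString)
    // Scala version the executors are running (one-element RDD forces a single task)
    println(sc.parallelize(Seq(1), 1).map(_ => scala.util.Properties.versionString).first())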

  • Logs report org.apache.spark.shuffle.FetchFailedException
    Try repartitioning the RDD with a larger partition count: each partition gets smaller, so the shuffle no longer hits memory errors. See the sketch below.
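
A minimal sketch; rdd stands in for the RDD feeding the failing shuffle stage, and 1000 is an illustrative count to tune against your data volume and executor memory:

    // More partitions => smaller shuffle blocks per task => less memory pressure
    val repartitioned = rdd.repartition(1000)
    val counts = repartitioned.map(x => (x, 1)).reduceByKey(_ + _)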

  • Compilation fails with "can not resolve symbol"
    This usually means a required import is missing. This one cost me a lot of time to track down: code a colleague had previously built with sbt kept throwing this after I switched the build to Maven, and adding the import fixed it. Oddly, the sbt build never needed the explicit import.

  • java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
    The usual explanation online is that the Spark/Scala versions used for packaging differ from those on the cluster. What I actually hit was val rdd = xx.map(col => col -> 1):
    the result type really is a Tuple, but the arrow form apparently isn't supported by the Spark runtime on the cluster. Rewriting it as a standard Tuple fixed it: val rdd = xx.map(col => (col, 1))
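
For reference, a -> b is sugar for a call to scala.Predef.ArrowAssoc, whose bytecode signature changed between Scala 2.10 and 2.11, which is why a version mismatch surfaces as a NoSuchMethodError at runtime rather than at compile time. A plain tuple literal sidesteps the call entirely (words is a hypothetical RDD of strings):

    val pairs1 = words.map(w => w -> 1)  // desugars to Predef.ArrowAssoc(w).->(1)
    val pairs2 = words.map(w => (w, 1))  // builds the Tuple2 directly; no ArrowAssoc call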

  • JAR will be empty - no content was marked for inclusion!
    Using IDEA, mvn package produced a jar, but it contained no class files. The fix is to run [Build] -> [Make Project] first each time; then mvn package generates a complete jar.

  • java.lang.NoSuchMethodError: scala.runtime.DoubleRef.create(D)Lscala/runtime/DoubleRef;

    var total: Double = 0
    for (i <- 1 until arr.length) {
      val Array(topic, freqStr) = arr(i).split(":")
      val freq = freqStr.toDouble
      total += freq * freq
    }

The error is reported at the line var total: Double = 0.
The fix, quoting the Stack Overflow answer linked below:

The library requires Scala 2.11, not 2.10, and Spark 2.0, not 1.6.2, as you can see from

<scala.minor.version>2.11</scala.minor.version>
<scala.complete.version>${scala.minor.version}.8</scala.complete.version>
<spark.version>2.0.0</spark.version>

https://stackoverflow.com/questions/39775517/sryza-spark-timeseries-nosuchmethoderror-scala-runtime-intref-createilscala
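
If upgrading Scala/Spark is not an option, note that scalac only emits DoubleRef for a var captured by a closure, and the for loop above desugars to a foreach closure that captures total (DoubleRef.create was added in Scala 2.11, hence the failure on a 2.10 runtime). A sketch of an equivalent rewrite with no mutable state, so DoubleRef never appears:

    // arr(i) looks like "topic:freq"; arr(0) is skipped, as in the original loop
    val total = arr.drop(1)
      .map(_.split(":")(1).toDouble)
      .map(freq => freq * freq)
      .sum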
