Notes on some common Spark problems, especially environment-related ones, and how they were resolved.
Problem 1: Insufficient physical memory on the machine
Symptom: an exception is thrown when starting Spark with # ./start-all.sh. Following the hint, check the relevant log:
cat spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out, which shows:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
Possible reasons:
The system is out of physical RAM or swap space
In 32 bit mode, the process size limit was hit
Solution: free -m confirmed there was not enough free memory, so I upgraded the Aliyun instance to 1 core / 2 GB of RAM. Nearly 80 RMB gone.
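If upgrading the machine is not an option, a possible alternative (a sketch on my part, assuming it is the standalone Master/Worker daemons that run out of memory) is to shrink the heap they request in conf/spark-env.sh:
# conf/spark-env.sh
# SPARK_DAEMON_MEMORY sets the heap of the standalone Master and Worker daemons (default: 1g);
# 512m here is an illustrative value, not a recommendation.
export SPARK_DAEMON_MEMORY=512m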
Problem 2: Insufficient JVM memory
Symptom: the following appears when packaging a Scala program with sbt:
hadoop@master:~/sparkapp$ /usr/local/sbt/sbt package
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
[info] Set current project to Simple Project (in build file:/home/hadoop/sparkapp/)
[info] Updating {file:/home/hadoop/sparkapp/}sparkapp...
[info] Resolving org.scala-lang#scala-library;2.10.4 ...
[info] Updating {file:/home/hadoop/sparkapp/}sparkapp...
[info] Resolving org.scala-lang#scala-library;2.10.4 ...
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to create new native thread
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
Solution: increase the memory available to the JVM by setting JAVA_OPTS=-Xms512m -Xmx1024m (see the sketch below).
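On Linux, a sketch of applying this before packaging (assuming the sbt launcher script at /usr/local/sbt/sbt reads JAVA_OPTS, as the stock launcher script does):
export JAVA_OPTS="-Xms512m -Xmx1024m"   # initial and maximum heap for the JVM that sbt starts
/usr/local/sbt/sbt package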
Problem 3: Spark and Scala version incompatibility
Symptom: running the WordCount program from IDEA produces:
Exception in thread "main" java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
at akka.actor.ActorCell$.<init>(ActorCell.scala:336)
at akka.actor.ActorCell$.<init>(ActorCell.scala)
at akka.actor.RootActorPath.$div(ActorPath.scala:159)
at akka.actor.LocalActorRefProvider.<init>(ActorRefProvider.scala:464)
Solution: Scala 2.11.8 is incompatible with this Spark 1.6.0 build (the prebuilt Spark 1.6.0 binaries are compiled against Scala 2.10). After switching the Scala environment to Scala 2.10.4, the WordCount program ran successfully.
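To keep the versions aligned in an sbt project, a minimal build.sbt sketch (the project name mirrors the one in the sbt output above; the exact file contents are my assumption):
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"   // must match the Scala version the Spark binaries were built against
// %% appends the Scala binary version (_2.10) to the artifact name automatically
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0"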
Problem 4: running hive --service metastore fails: it cannot connect to the database with the configured MySQL user.
Solution: a bit baffling: changing the MySQL connection address in hive-site.xml to localhost:3306 fixed it. The MySQL installation itself was definitely fine; the cause is most likely the Aliyun environment or something else.
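For reference, the hive-site.xml property involved looks roughly like this (a sketch; the database name hive and the createDatabaseIfNotExist flag are assumptions on my part):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>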
Problem 5: [ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
Cause: an old version of jline exists under the Hadoop directory:
/hadoop-2.5.2/share/hadoop/yarn/lib:
-rw-r--r-- 1 root root 87325 Mar 10 18:10 jline-0.9.94.jar
Solution: copy the newer jline JAR shipped with Hive into the Hadoop directory:
cp /hive/apache-hive-1.1.0-bin/lib/jline-2.12.jar ./
/hadoop-2.5.2/share/hadoop/yarn/lib:
-rw-r--r-- 1 root root 87325 Mar 10 18:10 jline-0.9.94.jar.bak
-rw-r--r-- 1 root root 213854 Mar 11 22:22 jline-2.12.jar
The Hive CLI now starts successfully:
root@ubuntu:/hive# hive
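Put together, the fix is roughly this sequence (a sketch, assuming the installation paths shown above):
cd /hadoop-2.5.2/share/hadoop/yarn/lib
mv jline-0.9.94.jar jline-0.9.94.jar.bak             # take the old jline off the classpath
cp /hive/apache-hive-1.1.0-bin/lib/jline-2.12.jar .  # use the jline version that Hive 1.1.0 ships with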
Problem 6: the DataNode is running (jps confirms it), but the monitoring UI at http://ip:50070/ shows Live datanodes as 0
Cause: that morning I had edited the hosts file with # vim /etc/hosts:
127.0.0.1 localhost
127.0.1.1 localhost.localdomain localhost
112.74.21.122 master
and changed localhost on the first line to master, which is what triggered the problem above.
Solution: change master on the first line back to localhost.
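As an extra check (not part of the original fix), the DataNodes the NameNode currently sees can also be listed from the command line:
hdfs dfsadmin -report   # lists the registered DataNodes as seen by the NameNode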
Problem 7: java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries!
Cause: ifconfig showed that the local IP address had inexplicably changed from 192.168.0.5 to 192.168.0.2, while /etc/hosts still contained 192.168.0.5 sparker.
Solution: update the IP address in /etc/hosts to 192.168.0.2:
~$ vim /etc/hosts
127.0.0.1 localhost
192.168.0.2 sparker
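If the machine's IP keeps changing, another option (an assumption of mine, not what I did here) is to pin the address Spark binds to in conf/spark-env.sh:
# conf/spark-env.sh
export SPARK_LOCAL_IP=192.168.0.2   # IP address Spark binds to on this node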
Problem 8: a Spark project whose dependencies are managed with Maven runs fine locally in IDEA, but after manually packaging it into a jar and running it on a Linux server it fails with:
Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:286)
at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:239)
at java.util.jar.JarVerifier.processEntry(JarVerifier.java:317)
at java.util.jar.JarVerifier.update(JarVerifier.java:228)
at java.util.jar.JarFile.initializeVerifier(JarFile.java:348)
Solution: after wrestling with this for nearly a week, I finally found a workable fix: run the following command on the Linux console:
zip -d yourjar.jar 'META-INF/.SF' 'META-INF/.RSA' 'META-INF/*SF'
Result:
[root@spark1 sh]# zip -d spark-project.jar 'META-INF/.SF' 'META-INF/.RSA' 'META-INF/*SF'
zip warning: name not matched: META-INF/.SF
zip warning: name not matched: META-INF/.RSA
deleting: META-INF/ECLIPSEF.SF
deleting: META-INF/DUMMY.SF
Judging from the output, the cause was indeed *.SF signature files ending up under META-INF/ in the packaged jar, presumably carried over from signed dependencies; I will not dig into it further for now.
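Since the project is built with Maven, the signature files can also be stripped at build time instead; a sketch using the maven-shade-plugin (the plugin version and configuration are illustrative, not what I actually used):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <configuration>
    <filters>
      <filter>
        <!-- drop dependency signature files so the fat jar is not treated as signed -->
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>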
Reference: "Invalid signature file digest for Manifest main attributes" (solution post)
Summary
Yes, I love problems! I am a problem-solver.