Original post: https://mp.csdn.net/postedit/82423831
Problems like the ones below can have many causes. In my case, Spark itself was unchanged, but the Hive version had been swapped out, and the resulting version mismatch triggered the error.
0. Common problems:
1) If the program fails with: Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory — the project is missing the slf4j-api.jar and slf4j-log4j12.jar jars.
2) If the program fails with: java.lang.NoClassDefFoundError: org/apache/log4j/LogManager — the project is missing the log4j.jar jar.
3) Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.MDC.getCopyOfContextMap()Ljava/util/Map — caused by conflicting jar versions.
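If these errors appear when submitting a job, one way to attach the missing jars is spark-submit's --jars flag (a sketch; the paths and the log4j version are illustrative, not from the original post — driver-side failures may also need --driver-class-path with the same jars):
spark-submit --jars /path/to/slf4j-api-1.7.5.jar,/path/to/slf4j-log4j12-1.7.5.jar,/path/to/log4j-1.2.17.jar your-app.jar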
1. Configuring spark-submit (CDH version)
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream at org.apache.spark.deploy.SparkSubmitArguments.handleUnknown(SparkSubmitArguments.scala:451) at org.apache.spark.launcher.SparkSubmitOptionParser.parse(SparkSubmitOptionParser.java:178) at org.apache.spark.deploy.SparkSubmitArguments.(SparkSubmitArguments.scala:97) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:113) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 5 more
Solution:
Add the following to spark-env.sh:
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
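As a quick sanity check (assuming the hadoop binary is on your PATH), you can print the classpath this resolves to, then restart Spark so spark-env.sh is re-sourced:
hadoop classpath
# e.g. restart the standalone cluster:
$SPARK_HOME/sbin/stop-all.sh && $SPARK_HOME/sbin/start-all.sh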
2. Error when starting spark-shell
INFO cluster.YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@services07:34965/user/Executor#1736210263] with ID 1
INFO util.RackResolver: Resolved services07 to /default-rack
INFO storage.BlockManagerMasterActor: Registering block manager services07:51154 with 534.5 MB RAM
Solution:
Set the following in Spark's spark-env.sh configuration file:
Set export SPARK_WORKER_MEMORY, export SPARK_DRIVER_MEMORY, and export SPARK_YARN_AM_MEMORY to values smaller than 534.5 MB.
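For example (a sketch; the exact sizes are illustrative and just need to stay under the 534.5 MB reported in the log above):
export SPARK_WORKER_MEMORY=512m
export SPARK_DRIVER_MEMORY=512m
export SPARK_YARN_AM_MEMORY=512m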
3. Error when starting Spark SQL:
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver ") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
Solution:
Configure the following in the $SPARK_HOME/conf/spark-env.sh file:
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.6-bin.jar
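Note that SPARK_CLASSPATH is deprecated in newer Spark releases; an equivalent approach (a sketch, with the connector path adjusted to your install) is to put the jar on the driver class path instead:
spark-sql --driver-class-path $HIVE_HOME/lib/mysql-connector-java-5.1.6-bin.jar
# or, in spark-defaults.conf (no env-var expansion there, use a literal path):
# spark.driver.extraClassPath /path/to/mysql-connector-java-5.1.6-bin.jar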
4. Error when starting Spark SQL:
java.sql.SQLException: Access denied for user 'services02 '@'services02' (using password: YES)
Solution:
Check the following property in hive-site.xml:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>
Make sure this password matches the MySQL login password.
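A quick way to verify is to try the same credentials by hand; the user and database names below are examples, not values from the post:
mysql -h services02 -u hive -p
# if access is still denied, grant it from the MySQL console (example values):
# GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'services02' IDENTIFIED BY '123456'; FLUSH PRIVILEGES;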
5. Error when launching a compute job:
Error message:
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
Solution:
Not enough cores were allocated; give the job a few more CPU cores.
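For example, when submitting the job (the numbers and jar name are illustrative):
spark-submit --master yarn --num-executors 4 --executor-cores 2 your-app.jar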
6. Error when launching a compute job:
The following lines repeat endlessly:
status.SparkJobMonitor: 2017-01-04 11:53:51,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:54,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:55,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:56,564 Stage-0_0: 0(+1)/1
Solution:
Not enough resources; allocate more memory (the default is 512 MB).
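A sketch of raising the memory, either per job or globally (values and jar name illustrative):
spark-submit --executor-memory 2g --driver-memory 2g your-app.jar
# or in spark-defaults.conf:
# spark.executor.memory 2g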
7. Error when using Spark as the execution engine:
Error message:
java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "m1/192.168.179.201"; destination host is: "m1":9000;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1474)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:681)
17/01/06 11:01:43 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over m2/192.168.179.202:9000 after 9 fail over attempts. Trying to fail over immediately.
Solution:
This problem has several possible causes. I hit it while running Hive on Spark; the fix was to configure the following property correctly in hive-site.xml:
<property>
  <name>spark.yarn.jar</name>
  <value>hdfs://ns1/Jar/spark-assembly-1.6.0-hadoop2.6.0.jar</value>
</property>
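The jar must actually exist at that HDFS path. For Spark 1.6 builds the assembly normally ships under $SPARK_HOME/lib, so staging it looks roughly like this (paths follow the value above):
hdfs dfs -mkdir -p /Jar
hdfs dfs -put $SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar /Jar/
hdfs dfs -ls /Jar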
8. Error when starting the Spark cluster with start-master.sh
Error message:
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) at java.lang.Class.privateGetMethodRecursive(Class.java:3048) at java.lang.Class.getMethod0(Class.java:3018) at java.lang.Class.getMethod(Class.java:1784) at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526) Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 7 more
Solution:
Copy slf4j-api-1.7.5.jar, slf4j-log4j12-1.7.5.jar, and commons-logging-1.1.3.jar from /home/centos/soft/hadoop/share/hadoop/common/lib to /home/centos/soft/spark/lib.
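As shell commands (paths taken directly from the text above):
cp /home/centos/soft/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar \
   /home/centos/soft/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar \
   /home/centos/soft/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar \
   /home/centos/soft/spark/lib/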
9. Error when starting the Spark cluster with start-master.sh
Error message:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2570) at java.lang.Class.getMethod0(Class.java:2813) at java.lang.Class.getMethod(Class.java:1663) at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486) Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 6 more
Solution:
Official documentation: https://spark.apache.org/docs/latest/hadoop-provided.html#apache-hadoop
Edit the /home/centos/soft/spark/conf/spark-env.sh file and set:
export SPARK_DIST_CLASSPATH=$(/home/centos/soft/hadoop/bin/hadoop classpath)
10. Error when running an HPL/SQL stored procedure:
Error message:
2017-01-10T15:20:18,491 ERROR [HiveServer2-Background-Pool: Thread-97] exec.TaskRunner: Error in executeTask
java.lang.OutOfMemoryError: PermGen space
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
2017-01-10T15:20:18,491 ERROR [HiveServer2-Background-Pool: Thread-97] ql.Driver: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. PermGen space
2017-01-10T15:20:18,491 INFO [HiveServer2-Background-Pool: Thread-97] ql.Driver: Completed executing command(queryId=centos_20170110152016_240c1b5e-3153-4179-80af-9688fa7674dd); Time taken: 2.113 seconds
2017-01-10T15:20:18,500 ERROR [HiveServer2-Background-Pool: Thread-97] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. PermGen space
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:388)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:244)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
Caused by: java.lang.OutOfMemoryError: PermGen space
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
Solution:
Reference: http://blog.csdn.net/xiao_jun_0820/article/details/45038205
This happens because Spark by default grabs all available resources, and here the host's memory was exhausted, so the driver JVM ran out of PermGen space; the fix is to set explicit JVM memory limits in the Spark configuration. In hive-site.xml, configure:
<property>
  <name>spark.driver.extraJavaOptions</name>
  <value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
Or configure it in spark-defaults.conf:
spark.driver.extraJavaOptions -XX:PermSize=128M -XX:MaxPermSize=256M
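Note that the PermSize/MaxPermSize flags only apply to JDK 7 and earlier; on JDK 8+ PermGen was replaced by Metaspace, so the equivalent setting (same idea, sketched) would be:
spark.driver.extraJavaOptions -XX:MaxMetaspaceSize=256M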