Hive on Spark fails at execution time because it cannot load Spark's jars

The problem

Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 8ffa76f2-907e-4092-987d-6c89279f3c5b)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 8ffa76f2-907e-4092-987d-6c89279f3c5b

To dig into the logs, restart the Hive CLI with DEBUG logging sent to the console:

hive --hiveconf hive.root.logger=DEBUG,console
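The DEBUG output is very verbose, so a minimal way to work with it (the /tmp/hive-debug.log path below is just an example) is to capture it to a file and then grep around the exception:

hive --hiveconf hive.root.logger=DEBUG,console 2>&1 | tee /tmp/hive-debug.log
# After the failing query has run, pull out the lines around the exception:
grep -n -A 5 'HiveException' /tmp/hive-debug.log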

This reveals the cause of the error:

2020-07-03T05:42:51,624 ERROR [1c9bb3c6-d486-4968-8b94-c35812f50e75 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 1afe33ae-3b62-468c-8d2f-f04269421ed7)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session 1afe33ae-3b62-468c-8d2f-f04269421ed7
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
 at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
 at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
 at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
 at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAcce


The conclusion of the analysis: Hive cannot load Spark's jars.

Add the following to hive-env.sh:

# Point Hive at the Spark installation and collect every jar under $SPARK_HOME/jars
# into a colon-separated list for Hive's auxiliary classpath.
export SPARK_HOME=/opt/module/spark-2.4.5-bin-without-hive
export SPARK_JARS=""
for jar in $(ls $SPARK_HOME/jars); do
    export SPARK_JARS=$SPARK_JARS:$SPARK_HOME/jars/$jar
done
# SPARK_JARS already starts with ":", so it can be appended directly after the LZO jar.
export HIVE_AUX_JARS_PATH=/opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar$SPARK_JARS
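As a quick sanity check (a sketch, assuming $HIVE_HOME points at the Hive installation whose conf/hive-env.sh was edited), source the file and confirm that the Spark jars actually end up in HIVE_AUX_JARS_PATH:

# Re-source the edited file and count how many Spark jars landed on the aux path.
source $HIVE_HOME/conf/hive-env.sh
echo $HIVE_AUX_JARS_PATH | tr ':' '\n' | grep -c spark
# Then restart the Hive CLI and re-run the failing query.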
