HIVE--NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions

  • Exception
Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
    at java.lang.Class.getMethod(Class.java:1786)
    at org.apache.spark.sql.hive.client.Shim.findMethod(HiveShim.scala:114)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod$lzycompute(HiveShim.scala:404)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitionsMethod(HiveShim.scala:403)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadDynamicPartitions(HiveShim.scala:455)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply$mcV$sp(ClientWrapper.scala:577)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:577)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadDynamicPartitions$1.apply(ClientWrapper.scala:577)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:293)
    at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:239)
    at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:238)
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:281)
    at org.apache.spark.sql.hive.client.ClientWrapper.loadDynamicPartitions(ClientWrapper.scala:576)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:292)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:193)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:352)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$insertInto$1.apply$mcV$sp(DataFrameWriter.scala:224)
	at org.apache.spark.sql.DataFrameWriter.executeAndCallQEListener(DataFrameWriter.scala:154)
	at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:224)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:274)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:256)
	at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable$.execute(GenerateHiveTable.scala:60)
	at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable$.main(GenerateHiveTable.scala:17)
	at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable.main(GenerateHiveTable.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
  • Solution analysis
Approach: the hive-exec / hive-metastore versions are the problem
Analysis: (1) Searching the project for org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions shows that the loadDynamicPartitions on the classpath takes 9 parameters, while the one in the error log takes 8, so this points to a version mismatch.
    (2) The jar that provides org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions is hive-exec. The project currently pulls in org.spark-project.hive:hive-exec:1.2.1.spark, inherited transitively from the org.apache.spark:spark-hive_2.10:1.6.0 dependency.
    (3) Searching the Maven repository for hive-exec turns up two main artifacts: org.apache.hive's hive-exec and org.spark-project.hive's hive-exec. We go with org.apache.hive here (the other is also worth considering). Checking the GitHub source at https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java shows that loadDynamicPartitions gained one parameter in 1.2 compared with 1.1, so pom.xml pins version 1.1. A quick reflective probe of the signature is sketched after this list.
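To check which signature a given hive-exec jar actually exposes, a small reflective probe can replay the Class.getMethod lookup that Spark's Shim_v0_14 performs (the first frames of the stack trace above). This is a minimal sketch; the object name LoadDynamicPartitionsProbe is ours, while the class name and parameter list come straight from the exception message:

    import org.apache.hadoop.fs.Path

    // Replays the reflective lookup from the stack trace above. Against a
    // Hive 1.2.x hive-exec (9-parameter method) this throws the same
    // NoSuchMethodException that Spark's Shim_v0_14 reports.
    object LoadDynamicPartitionsProbe {
      def main(args: Array[String]): Unit = {
        val hive = Class.forName("org.apache.hadoop.hive.ql.metadata.Hive")
        // The 8-parameter signature from the exception message:
        hive.getMethod("loadDynamicPartitions",
          classOf[Path], classOf[String], classOf[java.util.Map[_, _]],
          classOf[Boolean], classOf[Int],
          classOf[Boolean], classOf[Boolean], classOf[Boolean])
        println("8-parameter loadDynamicPartitions found: this hive-exec is compatible")
      }
    }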
  • Final fix: pom.xml changes
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.10</artifactId>
            <version>1.6.0</version>
        </dependency>
        Change this to the following (be sure to change the hive-metastore version to 1.1 as well; otherwise you will hit a missing HiveConf field, see the note below):
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.10</artifactId>
            <version>1.6.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.spark-project.hive</groupId>
                    <artifactId>hive-exec</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.spark-project.hive</groupId>
                    <artifactId>hive-metastore</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-metastore</artifactId>
            <version>1.1.0</version>
        </dependency>
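        After changing the pom, it is worth confirming that the exclusions took effect. One way, using the standard maven-dependency-plugin (the includes filter below is just the two groupIds discussed above), is:

        mvn dependency:tree -Dincludes=org.spark-project.hive,org.apache.hive

        hive-exec and hive-metastore should now appear only under org.apache.hive at version 1.1.0; any remaining org.spark-project.hive entry means some other dependency still drags in the forked jars.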
  • Note: if the hive-metastore version is not also changed to 1.1, the following exception appears
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
    at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:330)
    at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:239)
    at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:444)
    at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
	at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
    at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable$.execute(GenerateHiveTable.scala:27)
    at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable$.main(GenerateHiveTable.scala:17)
    at com.goldeneggs.cmfang.model_analysis.run.GenerateHiveTable.main(GenerateHiveTable.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:552)
Caused by: java.lang.NoSuchFieldError: METASTORE_CLIENT_SOCKET_LIFETIME
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:79)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:98)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453)
    at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
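What breaks here: the hive-metastore 1.2.1.spark classes still on the classpath reference HiveConf.ConfVars.METASTORE_CLIENT_SOCKET_LIFETIME (see the RetryingMetaStoreClient frame above), an enum constant that the HiveConf shipped with Hive 1.1 does not define, hence the NoSuchFieldError. A hypothetical one-off probe (the object name HiveConfProbe is ours) to see which HiveConf version the classpath resolves to:

    object HiveConfProbe {
      def main(args: Array[String]): Unit = {
        // ConfVars constants are public static fields, so getField throws
        // NoSuchFieldException on a Hive 1.1-or-older classpath, mirroring
        // the NoSuchFieldError above.
        val confVars = Class.forName("org.apache.hadoop.hive.conf.HiveConf$ConfVars")
        try {
          confVars.getField("METASTORE_CLIENT_SOCKET_LIFETIME")
          println("Hive 1.2+ HiveConf on the classpath")
        } catch {
          case _: NoSuchFieldException =>
            println("Hive 1.1 or older HiveConf on the classpath")
        }
      }
    }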
  • Remaining risk
 After switching hive-exec and hive-metastore from org.spark-project.hive to org.apache.hive, it is unclear what problems may surface down the road.

