A pitfall when integrating Spark 2.3.0 with Hive 3.1.1: HikariCP


Starting spark-sql (or creating any related object) fails with the following error:

at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
… 124 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "HikariCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
… 142 more
Caused by: org.datanucleus.exceptions.NucleusUserException: The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:234)
… 144 more

After a lot of asking around and searching, I finally found the cause.
The database connection pool configured by default in hive-site.xml is HikariCP:

<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>HikariCP</value>
  <description>
    Expects one of [bonecp, dbcp, hikaricp, none].
    Specify connection pool library for datanucleus
  </description>
</property>
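Since DataNucleus loads the pool plugin from whatever is on Spark's classpath, a quick way to confirm the problem is to check whether any HikariCP jar is actually visible to Spark. This is only a diagnostic sketch; the paths are the ones used later in this post, so adjust them for your own installation.

# No output here means Spark cannot see HikariCP (assumed Spark install path)
ls /home/hadoop/spark-2.3.0-bin-hadoop2.7/jars | grep -i hikari

# The jar that ships with Hive 3.1.1 lives in Hive's lib directory
ls /home/hadoop/apache-hive-3.1.1-bin/lib | grep -i hikari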

Solutions:
1. Change the value to dbcp. It still failed after that; the error only went away once I also copied mysql-connector-java-5.1.38.jar into /home/hadoop/spark-2.3.0-bin-hadoop2.7/jars.
For some reason the original export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/apache-hive-3.1.1-bin/lib/mysql-connector-java-5.1.38.jar never took effect.
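Concretely, option 1 amounts to setting datanucleus.connectionPoolingType to dbcp in hive-site.xml and then making the MySQL JDBC driver visible to Spark. A minimal sketch of the copy step, assuming the same installation paths as above:

# Copy the MySQL JDBC driver into Spark's jars directory
# (SPARK_CLASSPATH alone did not work for me)
cp /home/hadoop/apache-hive-3.1.1-bin/lib/mysql-connector-java-5.1.38.jar \
   /home/hadoop/spark-2.3.0-bin-hadoop2.7/jars/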

2. Copy /home/hadoop/apache-hive-3.1.1-bin/lib/HikariCP-2.6.1.jar into /home/hadoop/spark-2.3.0-bin-hadoop2.7/jars.
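Option 2 is a single file copy, again assuming the paths used in this post:

# Put Hive's bundled HikariCP jar on Spark's classpath
cp /home/hadoop/apache-hive-3.1.1-bin/lib/HikariCP-2.6.1.jar \
   /home/hadoop/spark-2.3.0-bin-hadoop2.7/jars/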

That leads to the next error:
Caused by: MetaException(message:Hive Schema version 1.2.0 does not match metastore's schema version 3.1.0 Metastore is not upgraded or corrupt)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6679)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6645)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy5.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:572)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
… 23 more

Solutions: option 1 did not work for me; option 2 did.

1. Log in to MySQL and change the Hive metastore schema version:
Connect to MySQL: mysql -uroot -p (password 123456)
use hive;
select * from version;
update VERSION set SCHEMA_VERSION='2.1.1' where VER_ID=1;

2. The quick-and-dirty fix: disable schema verification in hive-site.xml:


<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
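If you would rather not change hive-site.xml permanently, the same property can, as far as I know, also be passed at launch time via the --hiveconf option of the spark-sql CLI; a sketch only, not verified on every version:

# Disable metastore schema verification for this session only
spark-sql --hiveconf hive.metastore.schema.verification=false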

Author: holomain
Source: CSDN
Original: https://blog.csdn.net/qq_27882063/article/details/79886935
Copyright notice: this is the author's original post; please include a link to it when reposting.
