Hive: Startup Problems and Solutions

Problem 1:

Caused by: javax.jdo.JDODataStoreException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
NestedThrowables:
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:461)
        at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)
        at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)
        at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:6664)
Solution:
Tracing through the source shows where these auto-create flags are decided:
if ((this.readOnlyDatastore) || (this.fixedDatastore))
{
  this.autoCreateTables = false;
  this.autoCreateColumns = false;
  this.autoCreateConstraints = false;
}
else
{
  boolean autoCreateSchema = conf.getBooleanProperty("datanucleus.autoCreateSchema");
  if (autoCreateSchema)
  {
    this.autoCreateTables = true;
    this.autoCreateColumns = true;
    this.autoCreateConstraints = true;
  }
  else
  {
    this.autoCreateColumns = conf.getBooleanProperty("datanucleus.autoCreateColumns");
    this.autoCreateTables = conf.getBooleanProperty("datanucleus.autoCreateTables");
    this.autoCreateConstraints = conf.getBooleanProperty("datanucleus.autoCreateConstraints");
  }
}
So the key fields are this.readOnlyDatastore and this.fixedDatastore: if either is true, auto-creation is forced off no matter what else is configured; otherwise, setting datanucleus.autoCreateSchema to true switches all of the auto-create options on, overriding the individual settings. Following those two fields into org.datanucleus.store.AbstractStoreManager reveals where they are set:

this.readOnlyDatastore = conf.getBooleanProperty("datanucleus.readOnlyDatastore");
this.fixedDatastore = conf.getBooleanProperty("datanucleus.fixedDatastore");
Accordingly, the configuration to add (in hive-site.xml) is:

<property>
    <name>datanucleus.readOnlyDatastore</name>
    <value>false</value>
</property>
<property>
    <name>datanucleus.fixedDatastore</name>
    <value>false</value>
</property>
<property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateColumns</name>
    <value>true</value>
</property>

Alternatively, change:


<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>false</value>
    <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>
to:

<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>

Problem 2:


16/06/02 10:49:52 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
16/06/02 10:49:52 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x
        at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:529)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:478)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
        ... 7 more
Solution: make the HDFS scratch directory writable:

[root@hadoop ~]# hadoop fs -chmod -R 777 /tmp/hive
[root@hadoop ~]# hadoop fs -ls /tmp
Found 1 items
drwxrwxrwx   - root supergroup          0 2016-06-02 08:48 /tmp/hive
Problem 3:
16/06/02 09:33:13 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
16/06/02 09:33:13 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call to hadoop/192.168.52.139:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.ConnectException: Call to hadoop/192.168.52.139:9000 failed on connection exception: java.net.ConnectException: Connection refused
Solution:
Hadoop was not running. Start Hadoop and the error goes away.
Problem 4:
[root@hadoop ~]# hive
16/06/02 09:38:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
16/06/02 09:38:06 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-0.14.0.jar!/hive-log4j.properties
hive>
Solution:
Since Hive 0.10/0.11, the hive.metastore.local property is no longer used. Remove the following block from the configuration file:
<property>
    <name>hive.metastore.local</name>
    <value>false</value>
    <description>controls whether to connect to remote metastore server or open a new metastore server in Hive Client JVM</description>
</property>
After deleting it, log in again and the warnings are gone.
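If you do run a remote metastore, the replacement for the removed property is hive.metastore.uris. A minimal sketch, assuming the metastore runs on the same host used elsewhere in this article (the host and port here are assumptions; adjust them to your deployment):

```xml
<property>
    <name>hive.metastore.uris</name>
    <!-- assumed host/port: point this at wherever "hive --service metastore" runs (default port 9083) -->
    <value>thrift://192.168.52.139:9083</value>
</property>
```

Leaving hive.metastore.uris empty keeps Hive in embedded/local metastore mode, which is why simply deleting hive.metastore.local is enough for a single-machine setup.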
Problem 5:
Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:444)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:672)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at org.apache.hadoop.fs.Path.initialize(Path.java:148)
        at org.apache.hadoop.fs.Path.<init>(Path.java:126)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:487)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:430)
        ... 7 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at java.net.URI.checkPath(URI.java:1804)
        at java.net.URI.<init>(URI.java:752)
        at org.apache.hadoop.fs.Path.initialize(Path.java:145)
        ... 10 more
Solution:
In hive-site.xml, find every occurrence of "${system:java.io.tmpdir}" and replace it with an absolute path, e.g. /home/grid/apache-hive-0.14.0-bin/iotmp.
A complete hive-site.xml then looks like this:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.52.139:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
    </property>

    <property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.fixedDatastore</name>
        <value>false</value>
    </property>

    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>true</value>
    </property>

    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>

    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>

</configuration>
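The "${system:java.io.tmpdir}" replacement described above would land in properties like the following. A sketch, assuming the Hive 0.14 property names (hive.exec.local.scratchdir, hive.downloaded.resources.dir, and hive.querylog.location are the entries in the default template that reference ${system:java.io.tmpdir}):

```xml
<property>
    <name>hive.exec.local.scratchdir</name>
    <!-- was ${system:java.io.tmpdir}/${system:user.name} in the template -->
    <value>/home/grid/apache-hive-0.14.0-bin/iotmp</value>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/grid/apache-hive-0.14.0-bin/iotmp</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/home/grid/apache-hive-0.14.0-bin/iotmp</value>
</property>
```

Make sure the chosen directory exists and is writable by the user who starts Hive.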

Problem 6:

hive> show tables;
FAILED: Error in metadata: MetaException(message:Got exception: javax.jdo.JDODataStoreException An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
Analysis:

This comes from MySQL's maximum index key length: an index key on a VARCHAR column is limited to 767 bytes, i.e. at most 767 single-byte characters, 383 double-byte characters (GBK is double-byte), or 255 triple-byte characters (utf8 is up to three bytes per character).

Solution:

Apart from the system schema being utf8, the metastore database's character set is best set to latin1, otherwise the exception above can occur. On the MySQL host run:

mysql> show variables like '%char%';
mysql> alter database <db_name> character set latin1;
mysql> flush privileges;
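The byte arithmetic behind those limits can be checked directly (767 bytes is the InnoDB index key limit quoted in the error message above):

```python
# InnoDB limits an index key to 767 bytes (the figure in the MySQL error)
MAX_KEY_BYTES = 767

# Maximum indexable VARCHAR length per character set
latin1_chars = MAX_KEY_BYTES // 1  # 1 byte/char
gbk_chars = MAX_KEY_BYTES // 2     # 2 bytes/char
utf8_chars = MAX_KEY_BYTES // 3    # 3 bytes/char

print(latin1_chars, gbk_chars, utf8_chars)  # 767 383 255
```

The metastore's VARCHAR(256) key columns fit comfortably under latin1 but exceed the 255-character utf8 limit, which is why switching the database to latin1 makes the error go away.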

Problem 7:

java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected

When setting up Hadoop 2.6 with Hive 1.1, Hive fails to start with:

[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected


Solution:
Copy the newer jline JAR shipped with Hive into Hadoop's yarn lib directory, moving the old one out of the way:

cd /hadoop-2.6.0/share/hadoop/yarn/lib
mv jline-0.9.94.jar jline-0.9.94.jar.bak
cp /hive/apache-hive-1.1.0-bin/lib/jline-2.12.jar .

The directory then contains:
-rw-r--r-- 1 root root   87325 Mar 10 18:10 jline-0.9.94.jar.bak
-rw-r--r-- 1 root root  213854 Mar 11 22:22 jline-2.12.jar
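An alternative that avoids touching Hadoop's lib directory is to make Hadoop's launcher put user-supplied JARs (here Hive's jline-2.12) ahead of its bundled ones on the classpath before starting Hive. A sketch using the standard HADOOP_USER_CLASSPATH_FIRST switch:

```shell
# Put user-supplied JARs before Hadoop's bundled ones (e.g. the old jline-0.9.94)
# on the classpath built by the hadoop launcher scripts
export HADOOP_USER_CLASSPATH_FIRST=true

# then start Hive as usual, in the same shell:
# hive
```

Set it in the shell (or in hadoop-env.sh / the hive startup script) before launching Hive so the CLI picks up the newer jline.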

Problem 8:

org.apache.hadoop.security.AccessControlException: Permission denied: user=root

Logged in to Hive as root, show tables worked, but a select failed with the following error:

FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="tmp":hadoop:supergroup:rwxr-xr-x)
java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="tmp":hadoop:supergroup:rwxr-xr-x
        at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:170)
        at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:210)
        at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:267)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1112)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="tmp":hadoop:supergroup:rwxr-xr-x
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1216)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:321)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1126)
        at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:165)
        ... 18 more
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="tmp":hadoop:supergroup:rwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:180)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5214)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5188)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2060)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2029)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:817)
        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

        at org.apache.hadoop.ipc.Client.call(Client.java:1070)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy6.mkdirs(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy6.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1214)
        ... 21 more

Solution:

The error says the root user does not have permission to write under /tmp in HDFS.

The cause is the user I used to enter Hive: the Hadoop cluster was installed and started as the hadoop user, so the HDFS directories belong to hadoop.

Switching to the hadoop user to run Hive makes the problem go away.
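If switching OS users is inconvenient, the HDFS client identifies you by the HADOOP_USER_NAME environment variable when security is disabled, so you can act as the hadoop user for one session. A sketch (this only works with simple, non-Kerberos security):

```shell
# With simple (non-Kerberos) authentication, the HDFS client trusts this
# variable, so Hive will act as the "hadoop" user that owns the HDFS dirs
export HADOOP_USER_NAME=hadoop

# then start Hive as usual, in the same shell:
# hive
```

Alternatively, grant root write access explicitly, e.g. `hadoop fs -chmod -R 777 /tmp` run as the hadoop user.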

Problem 10:

could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083

Solution:

This can seem baffling, but it happens when a Hive metastore is already running. Run jps to list the Java processes:

(screenshot: jps output; the highlighted line is the hive metastore process)

To restart the metastore, kill that process first:


kill -9 6006   # replace 6006 with the metastore's PID from jps

(screenshot: the process is gone after the kill)

Then start the metastore again:

hive --service metastore


(screenshot: the metastore starts successfully)



OK, startup succeeds.

Alternatively, check the port with the Linux netstat command. Here port 9083 is found to be occupied, while 9088 is free, so the metastore is started on 9088 instead:

[root@h1 bin]# netstat -apn|grep 9083
tcp        0      0 0.0.0.0:9083                0.0.0.0:*                   LISTEN      26235/java
tcp       48      0 192.168.170.69:9083         192.168.170.69:44742        CLOSE_WAIT  -
[root@h1 bin]# netstat -apn|grep 9088
[root@h1 bin]# hive --service metastore -p 9088



