Compiling, Installing and Configuring Hive 0.11

1. Obtaining the Hive 0.11 files

        1) Download a release build directly from the Apache website.

        2) Build from source yourself. Steps:

git clone https://github.com/amplab/hive.git -b shark-0.11 git_hive-0.11_shark

cd git_hive-0.11_shark

ant package
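        If the build succeeds, the packaged Hive tree should end up under the build directory of the source tree (build/dist, if I remember the old ant layout correctly; verify this on your machine), and that directory is what gets deployed as HIVE_HOME:

# build/dist is from memory of the ant build layout; adjust if your build puts it elsewhere
ls build/dist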

        I did it this way because I want to use Hive together with Shark later on, and in my testing the Hive from the Apache website has problems integrating with Shark, so I compiled the Shark-patched Hive source myself. At the time I was using Shark, its support for Hive 0.11 still had bugs and no release targeting Hive 0.11 had been published, so Shark also had to be built from source.

        On GitHub the Shark-patched Hive source is maintained by two different companies, so there are two repository addresses:

https://github.com/amplab/hive

https://github.com/WANdisco/amplab-hive

        Since this article is an after-the-fact summary, I no longer remember which company's code I actually used.

 

2. Set HADOOP_HOME, HADOOP_CONF_DIR, HIVE_HOME, CLASSPATH and related variables in the profile; that is all that is needed (a sketch of typical entries follows the notes below).

        If you only use Hive + Hadoop, deploying Hive once on the master is enough.

        If you want to use Shark, deploy Hive on every worker node as well, because Shark needs the .jar files under Hive's lib directory.
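        A minimal sketch of the profile entries (e.g. in /etc/profile or ~/.bash_profile); the installation paths are placeholders, substitute your own:

# placeholder paths, adjust to your installation
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export HIVE_HOME=/home/hadoop/hive-0.11
export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
# having HADOOP_CONF_DIR on the CLASSPATH also avoids Error 3 in section 7
export CLASSPATH=$CLASSPATH:$HADOOP_CONF_DIR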

 

3. Configure the conf/hive-site.xml file.

<configuration>
 
<property> 
   <name>hive.metastore.local</name> 
   <value>true</value> 
   <description>controls whether to connect to remote metastore server or open a new metastore server in Hive Client JVM</description> 
</property> 
 
<property> 
   <name>javax.jdo.option.ConnectionURL</name> 
   <value>jdbc:mysql://172.16.19.139:3306/hive_11</value> 
</property>
 
<property> 
   <name>javax.jdo.option.ConnectionDriverName</name> 
   <value>com.mysql.jdbc.Driver</value> 
</property>
 
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
 
<property> 
   <name>javax.jdo.option.ConnectionPassword</name> 
   <value>root</value> 
</property>
 
</configuration>
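The MySQL database named in ConnectionURL has to be created beforehand, since the URL above does not ask MySQL to create it automatically. A minimal sketch from the shell, reusing the host and credentials from the configuration above; creating it with the latin1 character set up front also sidesteps Error 2 in section 7:

# create the metastore database on the MySQL server referenced above
mysql -h 172.16.19.139 -u root -p -e "CREATE DATABASE hive_11 CHARACTER SET latin1;"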

4. Starting Hive:

1) hive                                                          // starts the Hive CLI, an interactive command-line interface.

2) hive --service hiveserver                      // starts the server side; this is what you need when doing Java development with Hive's JDBC package, but the server stops being usable once the Linux terminal is closed.

3) nohup hive --service hiveserver &              // keeps the Hive server running in the background after the Linux terminal is closed.
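After the nohup launch, a quick way to confirm the server is actually up and survives closing the terminal is to check that it is listening; this assumes HiveServer's default port of 10000, so adjust if you run it on a different port:

# HiveServer listens on port 10000 by default
netstat -ant | grep 10000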

5. Testing Hive:

show tables;

create table lam01(id int, name string);

select * from lam01;
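The statements above can be typed into the CLI started in section 4; they can also be run non-interactively from the shell with the -e option, which is handy for quick checks:

hive -e "show tables;"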

6. Miscellaneous

View Hive's files in HDFS: hadoop fs -ls -R /user/hive

7. Errors recorded during installation

Error 3:

MetaException(message:file:/user/hive/warehouse/xxxx is not a directory or unable to create one)

Fix: add HADOOP_CONF_DIR to the CLASSPATH.
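For example, one extra line in the profile (the same export shown in the sketch in section 2), followed by re-sourcing it; /etc/profile is just where I keep these settings, use whichever profile file you edited:

export CLASSPATH=$CLASSPATH:$HADOOP_CONF_DIR
source /etc/profile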

Error 2:

Error in metadata: MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes

Fix:

Change the character set of the MySQL database backing the Hive metastore (hive_11 in the configuration above):

alter database dbname character set latin1;

Error 1:

java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Fix: the MySQL JDBC driver must be on the CLASSPATH.
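The simplest way is usually to drop the connector jar into Hive's lib directory, which the hive launcher script puts on the classpath; the jar name below is only an example, use whichever version you downloaded:

# example jar name, substitute the version you actually have
cp mysql-connector-java-5.1.26-bin.jar $HIVE_HOME/lib/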

Build error:

mvn-init:

[echo] hcatalog-core

BUILD FAILED

/home/hadoop/git_hive-0.11_shark/build.xml:274: The following error occurred while executing this line:

/home/hadoop/git_hive-0.11_shark/build.xml:113: The following error occurred while executing this line:

/home/hadoop/git_hive-0.11_shark/build.xml:115: The following error occurred while executing this line:

/home/hadoop/git_hive-0.11_shark/hcatalog/build.xml:65: The following error occurred while executing this line:

/home/hadoop/git_hive-0.11_shark/hcatalog/build-support/ant/deploy.xml:77: get doesn't support the "skipexisting" attribute

Fix: find the corresponding line in the .xml file and, as far as I recall, remove the skipexisting attribute. The error seemed to be reported twice because another file has the same problem; just fix them one after the other.
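If you would rather not edit by hand, something like the following can strip the attribute; it assumes the attribute is written as skipexisting="true" in those files, so check the lines reported by ant first and repeat for the second file that reports the same error:

# assumed attribute spelling; verify against the reported lines before running
sed -i 's/ skipexisting="true"//g' hcatalog/build-support/ant/deploy.xml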
