Hive is a data warehouse analysis system built on top of Hadoop. It provides a rich SQL-style query interface for analyzing data stored in the Hadoop distributed file system. Within the Hadoop architecture it plays the role of a SQL parser: it exposes an entry point that accepts user statements, analyzes them, compiles each one into an executable plan composed of MapReduce jobs, submits the corresponding MapReduce tasks to the Hadoop cluster according to that plan, and returns the final result. Metadata, such as table schemas, is stored in a database called the metastore.
The cluster consists of three nodes; their /etc/hosts entries are:

192.168.15.60 master
192.168.15.61 slave1
192.168.15.62 slave2
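If you provision several machines, the entries can first be staged in a scratch file and then appended to /etc/hosts on each node (the file name hosts.cluster below is just an example; appending to the real /etc/hosts requires root):

```shell
# Stage the three cluster entries in a scratch file.
cat > hosts.cluster <<'EOF'
192.168.15.60 master
192.168.15.61 slave1
192.168.15.62 slave2
EOF

# Sanity check: all three nodes are listed.
grep -c '^192\.168\.15\.' hosts.cluster   # expect 3
```

Append with something like `cat hosts.cluster | sudo tee -a /etc/hosts` on every node.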
Download the binary release package; the latest version can be downloaded from the official site.
$ tar -zxf apache-hive-1.2.1-bin.tar.gz
Configure the environment variables:
# vi /etc/profile
HIVE_HOME=/home/hadoop/apache-hive-1.2.1-bin
PATH=$PATH:$HIVE_HOME/bin
export HIVE_HOME PATH
# source /etc/profile
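As a quick sanity check, the same exports can be run in the current shell and verified before committing them to /etc/profile (the path below assumes Hive was unpacked under /home/hadoop):

```shell
# Same variables as the /etc/profile snippet above.
export HIVE_HOME=/home/hadoop/apache-hive-1.2.1-bin
export PATH=$PATH:$HIVE_HOME/bin

# Confirm the Hive bin directory actually landed on PATH.
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH OK" ;;
  *)                    echo "PATH missing $HIVE_HOME/bin" ;;
esac
# prints "PATH OK"
```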
The metastore is the central repository for Hive metadata. It consists of two parts: a service and the backing data store. There are three ways to configure the metastore: embedded, local, and remote.
In this setup, MySQL serves as the remote metastore database, deployed on the hadoop-master node. The Hive server side is also installed on hadoop-master, and the Hive client on hadoop-slave accesses the Hive server.
Install MySQL:

# tar zxvf mysql-5.6.25.tar.gz
Initialize the database:
# cp /usr/share/mysql/support-files/my-default.cnf /etc/my.cnf
# cp /usr/share/mysql/support-files/mysql.server /etc/init.d/mysqld
# /usr/share/mysql/scripts/mysql_install_db --user=mysql --defaults-file=/etc/my.cnf
Start the MySQL service:
# chmod +x /etc/init.d/mysqld
# service mysqld start
# ln -s /data/mysql/bin/mysql
Set the initial password:
# mysql -uroot -h127.0.0.1 -p
mysql> SET PASSWORD = PASSWORD('123456');

Note: I actually installed MySQL from the mysql-server*.rpm and mysql-client*.rpm packages, directly with rpm -ivh mysql-server*.rpm and rpm -ivh mysql-client*.rpm. That part is already covered on my blog, so I won't repeat it here.
Create a hive user in MySQL and grant it privileges:

mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop-master' WITH GRANT OPTION;
mysql> flush privileges;
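To make these grants repeatable across rebuilds, one option (a sketch; the file name create-hive-user.sql is just an example) is to keep the statements in a script and replay it with `mysql -uroot -p < create-hive-user.sql`:

```shell
# Collect the statements from the session above into a replayable script.
cat > create-hive-user.sql <<'EOF'
CREATE USER 'hive' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop-master' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF

# Sanity check: three statements, one per line.
grep -c ';$' create-hive-user.sql   # expect 3
```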
Log in as the hive user and set its password:

mysql -h hadoop-master -uhive
mysql> set password = password('hive');
Create the metastore database:

mysql> create database hive;
Edit the configuration files
Go to Hive's configuration directory, find hive-default.xml.template, and copy it to hive-default.xml.
Then create a separate hive-site.xml and add the following parameters:
$ pwd
/home/hadoop/apache-hive-1.2.1-bin/conf
$ vi hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop-master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>
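Instead of editing by hand, the same four properties can be written with a heredoc, which makes re-provisioning the node easier (a sketch; run it from the conf directory):

```shell
# Generate hive-site.xml with the four JDBC metastore properties.
cat > hive-site.xml <<'EOF'
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop-master:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>
EOF

# Sanity check: four <property> blocks were written.
grep -c '<property>' hive-site.xml   # expect 4
```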
Download the MySQL JDBC driver and copy it into Hive's lib directory:
$ cp mysql-connector-java-5.1.33-bin.jar apache-hive-1.2.1-bin/lib/
Hive client configuration
$ scp -r apache-hive-1.2.1-bin/ hadoop@hadoop-slave:/home/hadoop
$ vi hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop-master:9083</value>
  </property>
</configuration>
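The client side only needs the metastore URI, so its hive-site.xml can also be generated with a heredoc (a sketch, run on hadoop-slave in the conf directory):

```shell
# Client config: point at the remote metastore's thrift endpoint.
cat > hive-site.xml <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop-master:9083</value>
  </property>
</configuration>
EOF

grep -q 'thrift://hadoop-master:9083' hive-site.xml && echo "client config written"
```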
The metastore service must be started:
$ hive --service metastore &
$ jps
RunJar            # one more process than before (the metastore service)
NameNode
SecondaryNameNode
Jps
NodeManager
ResourceManager
DataNode
Accessing Hive from the server side
$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.332 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.037 seconds
hive> create table test1(id int);
OK
Time taken: 0.572 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.057 seconds, Fetched: 3 row(s)
hive>
Accessing Hive from the client
$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.022 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.057 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.218 seconds, Fetched: 3 row(s)
hive> create table test2(id int, name string);
OK
Time taken: 5.518 seconds
hive> show tables;
OK
abc
test
test1
test2
Time taken: 0.102 seconds, Fetched: 4 row(s)
hive>
That completes the test; the installation succeeded.
Problem: after entering Hive, databases can be created but tables cannot.
hive> create table table_test(id string, name string);
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
Fix: log in to MySQL and change the hive database's character set:
mysql> alter database hive character set latin1;
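The arithmetic behind the error: InnoDB limits index keys to 767 bytes, and under utf8 MySQL reserves 3 bytes per character, so an index over a roughly 256-character VARCHAR column (the metastore schema uses key columns of about that width; the exact width here is an assumption) overflows the limit, while latin1 at 1 byte per character fits:

```shell
# Back-of-the-envelope check of the 767-byte InnoDB index key limit.
chars=256                     # assumed indexed column width
utf8_bytes=$((chars * 3))     # utf8: up to 3 bytes per character
latin1_bytes=$((chars * 1))   # latin1: 1 byte per character
echo "utf8: $utf8_bytes bytes (> 767), latin1: $latin1_bytes bytes (<= 767)"
# prints "utf8: 768 bytes (> 767), latin1: 256 bytes (<= 767)"
```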