Big Data Environment Deployment 5: Hive Installation and Deployment



1. Download Hive: wget http://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz

2. Extract the Hive archive with tar -zvxf apache-hive-1.2.1-bin.tar.gz, then move the extracted directory to the target path.
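A minimal sketch of this step, assuming the target path is /home/spark/opt (the path used throughout the rest of this guide):

tar -zvxf apache-hive-1.2.1-bin.tar.gz
# move the extracted directory to the assumed target path
mv apache-hive-1.2.1-bin /home/spark/opt/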

3. Configure the MySQL metadata database

3.1 Start mysqld, create the corresponding MySQL account, and grant it sufficient privileges

[root@localhost hadoop]# service mysqld start

[root@localhost hadoop]# chkconfig mysqld on    # enable automatic start on boot

mysql

mysql> create user 'hive' identified by 'spark';

mysql> grant all privileges on *.* to 'hive'@'%' with grant option;

mysql> flush privileges;

mysql> grant all privileges on *.* to 'hive'@'localhost' with grant option;

mysql> flush privileges;

mysql> grant all privileges on *.* to 'hive'@'mysqlserver' with grant option;

mysql> flush privileges;
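Before exiting, the new account's privileges can be sanity-checked from the same mysql session, for example:

mysql> show grants for 'hive'@'%';
mysql> show grants for 'hive'@'localhost';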

mysql> exit;

 

3.2 Log in as the hive user to test the account and create the hive database

[root@localhost hadoop]# mysql -h 172.16.107.9 -u hive -p

mysql> create database hive;

mysql> show databases;

mysql> use hive;

mysql> show tables;

 

4. Configure Hive's environment variables and initialize Hive's working directories on HDFS (so before deploying Hive, make sure Hadoop has been fully deployed and its environment is set up).

4.1 Edit .bash_profile with vi and add the environment variables:

 export HIVE_HOME=/home/spark/opt/apache-hive-1.2.1-bin
 export PATH=$HIVE_HOME/bin:$PATH
 source .bash_profile
Run source .bash_profile so that the modified environment variables take effect immediately.
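A quick way to confirm the variables took effect (the path matches the HIVE_HOME set above):

echo $HIVE_HOME     # should print /home/spark/opt/apache-hive-1.2.1-bin
which hive          # should resolve to $HIVE_HOME/bin/hive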

4.2 Initialize the Hive working directories on HDFS

 ./hadoop fs -mkdir /tmp
 ./hadoop fs -mkdir /home/spark/hive-warehouse
 ./hadoop fs -chmod g+w /tmp
 ./hadoop fs -chmod g+w /home/spark/hive-warehouse

If any of these directories do not exist yet, create them first using OS commands.
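On Hadoop 2.x, missing parent directories can also be created in one step with the -p flag, for example:

 ./hadoop fs -mkdir -p /home/spark/hive-warehouse    # -p creates any missing parent directories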

4.3 Create the iotmp directory

Create the directory /home/spark/opt/apache-hive-1.2.1-bin/iotmp

and grant permissions on it: chmod 733 iotmp

4.4 Configure Hive's configuration files:

In the /home/spark/opt/apache-hive-1.2.1-bin/conf directory:

Rename hive-default.xml.template to hive-site.xml
Rename hive-log4j.properties.template to hive-log4j.properties
Rename hive-exec-log4j.properties.template to hive-exec-log4j.properties

After the files are renamed, modify hive-env.sh and hive-site.xml accordingly (for the MySQL metadata database):

4.4.1 hive-env.sh

[root@hadoop0 conf]# pwd

/home/spark/opt/apache-hive-1.2.1-bin/conf

[root@hadoop0 conf]# cp hive-env.sh.template hive-env.sh

# HADOOP_HOME=${bin}/../../hadoop

HADOOP_HOME=/home/spark/opt/hadoop-2.6.0

# Hive Configuration Directory can be controlled by:

export HIVE_CONF_DIR=/home/spark/opt/apache-hive-1.2.1-bin/conf

4.4.2 hive-site.xml

[root@hadoop0 conf]# cp hive-default.xml.template hive-site.xml

<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://172.16.107.9:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>spark</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://172.16.107.9:9083</value>
</property>
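The iotmp directory created in step 4.3 is typically wired into hive-site.xml as well: in the 1.2.1 template, several local scratch and resource directories default to ${system:java.io.tmpdir}, which is commonly replaced with a fixed local path. A sketch of those entries, assuming the iotmp path created above:

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/spark/opt/apache-hive-1.2.1-bin/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/spark/opt/apache-hive-1.2.1-bin/iotmp</value>
</property>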

 

5. Download the MySQL JDBC driver package (mysql-connector-java-5.1.35.tar.gz) from http://dev.mysql.com/downloads/connector/j/, or fetch the jar directly with wget http://mirrors.ibiblio.org/pub/mirrors/maven2/mysql/mysql-connector-java/5.1.6/mysql-connector-java-5.1.6.jar, and place it in Hive's lib directory (/home/spark/opt/apache-hive-1.2.1-bin/lib).
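A sketch of the direct-download route, assuming the Hive lib path above:

wget http://mirrors.ibiblio.org/pub/mirrors/maven2/mysql/mysql-connector-java/5.1.6/mysql-connector-java-5.1.6.jar
cp mysql-connector-java-5.1.6.jar /home/spark/opt/apache-hive-1.2.1-bin/lib/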

 

6. Start the services and log in from a client

Run the following commands:

Start the metastore: hive --service metastore &

Start HiveServer2: hive --service hiveserver2 &   (the old hiveserver service was removed in Hive 1.x; hiveserver2 is its replacement)

Before running these commands, make sure Hadoop, MySQL, and the other prerequisites have already been started successfully.

cd $HIVE_HOME/bin
./hive
This opens the Hive console by default. Run show tables; if no error is reported, the default Hive installation is working.
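To confirm that metadata is really going to MySQL rather than the embedded Derby database, one rough check is to create a table in Hive (the table name below is just an example) and then look for it in the metastore's TBLS table on MySQL:

hive> create table metastore_check (id int);

mysql> use hive;
mysql> select TBL_NAME from TBLS;     -- should list metastore_check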

By default, Hive stores its metastore in a database called Derby, which is an embedded database; if two or more users operate on it at the same time, errors will occur. That is why this guide installs and configures MySQL as the metadata database.

 


 
