3. Configuring the Hive Environment

Installation directory: /usr/local

1. Basic environment setup

Extract the Hive tarball: tar -zxvf apache-hive-0.13.1-bin.tar.gz

Rename the Hive directory: mv apache-hive-0.13.1-bin hive

Configure the Hive-related environment variables:

vi .bashrc

export HIVE_HOME=/usr/local/hive

export PATH=$PATH:$HIVE_HOME/bin

source .bashrc


2. Install MySQL on spark1 (the namenode)

Both the database username and password will be: hive

Install the MySQL server with yum, start it, and enable it at boot:

yum install -y mysql-server

service mysqld start

chkconfig mysqld on
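Before moving on, it is worth confirming the service actually started and is enabled at boot (a quick check, assuming the CentOS 6-style service/chkconfig tools used above):

```shell
# Should report that mysqld is running.
service mysqld status
# Runlevels 2-5 should show "on".
chkconfig --list mysqld
```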

Install the MySQL connector with yum:

yum install -y mysql-connector-java

Copy the MySQL connector JAR into Hive's lib directory:

cp /usr/share/java/mysql-connector-java-5.1.17.jar /usr/local/hive/lib
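It is worth confirming the JAR actually landed on Hive's classpath (the connector version on your system may differ from 5.1.17):

```shell
# Lists the copied connector JAR; adjust the version if yours differs.
ls /usr/local/hive/lib | grep mysql-connector
```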

Log in to MySQL, create the Hive metastore database, and grant the hive user privileges on it:

mysql   (press Enter to open the MySQL shell)

create database if not exists hive_metadata;

grant all privileges on hive_metadata.* to 'hive'@'%' identified by 'hive';

grant all privileges on hive_metadata.* to 'hive'@'localhost' identified by 'hive';

grant all privileges on hive_metadata.* to 'hive'@'spark1' identified by 'hive';

flush privileges;

use hive_metadata;
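A quick way to verify the grants took effect is to reconnect from the shell as the hive user (this assumes the password 'hive' set above):

```shell
# Log in as hive and list databases; hive_metadata should appear.
mysql -u hive -phive -e "show databases;"
```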


The configuration files to edit live under the hive/conf directory (/usr/local/hive/conf).

3. Configure hive-site.xml

mv hive-default.xml.template hive-site.xml

vi hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://spark1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>


4. Configure hive-env.sh and hive-config.sh

Rename the template: mv hive-env.sh.template hive-env.sh

Add the environment variables to the config script: vi /usr/local/hive/bin/hive-config.sh

export JAVA_HOME=/usr/java/latest

export HIVE_HOME=/usr/local/hive

export HADOOP_HOME=/usr/local/hadoop
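A quick sanity check that hive-config.sh now exports what you expect (the paths are those set above):

```shell
# Source the script in a subshell and print the three variables.
bash -c 'source /usr/local/hive/bin/hive-config.sh; echo "$JAVA_HOME $HIVE_HOME $HADOOP_HOME"'
```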


5. Verify the installation

Typing the hive command on its own should drop you into the Hive CLI:

[root@spark1 bin]# 
[root@spark1 bin]# hive
17/05/13 19:17:14 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
Logging initialized using configuration in jar:file:/usr/local/spark/hive/lib/hive-common-0.13.1.jar!/hive-log4j.properties
hive> create table testtable(id int);
OK
Time taken: 0.819 seconds
hive> 
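The same check can be run non-interactively with hive -e, which is handy for scripting; this also cleans up the test table created above:

```shell
# Run HQL from the command line and drop the scratch table.
hive -e "show tables; drop table if exists testtable;"
```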



