Hive 1.2.1 Installation and Configuration Notes

This article supplements an earlier post: http://blog.csdn.net/nisjlvhudy/article/details/49338883
1. Configure the Hive metastore database (MySQL here)
On the already-installed MySQL server, create a new user and grant privileges:
create user 'hive' identified by 'iloveyou';
grant all privileges on *.* to 'hive'@'%' identified by 'iloveyou' with grant option;
grant all privileges on *.* to 'hive'@'localhost' identified by 'iloveyou' with grant option;
grant all privileges on *.* to 'hive'@'master' identified by 'iloveyou' with grant option;
flush privileges;
Note: identified by must be repeated in each grant statement; otherwise the new grant is created with an empty password.

Create the metastore database:
mysql -h master -uhive -piloveyou
create database hive;
show databases;
use hive;
show tables;
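The statements above can be collected into one script so the metastore bootstrap is repeatable. This is a sketch, not part of the original walkthrough; the hostname master and the password iloveyou are the values used throughout this article, so substitute your own:

```shell
# Write the metastore bootstrap SQL from step 1 into a single file,
# then feed it to the mysql client in one shot:
#   mysql -h master -uroot -p < hive_metastore_setup.sql
cat > hive_metastore_setup.sql <<'EOF'
CREATE USER 'hive' IDENTIFIED BY 'iloveyou';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'iloveyou' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' IDENTIFIED BY 'iloveyou' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'master' IDENTIFIED BY 'iloveyou' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE IF NOT EXISTS hive;
EOF
grep -c 'GRANT ALL' hive_metastore_setup.sql    # prints 3
```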

2. Download and unpack the Hive tarball
wget http://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
tar -zxvf apache-hive-1.2.1-bin.tar.gz
mv apache-hive-1.2.1-bin ~/opt/hive-1.2.1

3. Set Hive environment variables, initialize Hive's working directories on HDFS, and edit the configuration
Apart from a few generic files, only two files need changes: hive-env.sh, and hive-site.xml (the most important one).
3.1. Create the following HDFS paths if they do not already exist:
./hadoop fs -mkdir -p /tmp
./hadoop fs -mkdir -p /home/hs/opt/hive-1.2.1/hive-warehouse
./hadoop fs -chmod g+w /tmp
./hadoop fs -chmod g+w /home/hs/opt/hive-1.2.1/hive-warehouse
Also create a local scratch directory: /home/hs/opt/hive-1.2.1/iotmp
and set its permissions: chmod 733 iotmp
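As a quick illustration of what mode 733 gives the iotmp directory (owner full access; group and others write+execute but no directory listing), here is a sketch run against a throwaway directory rather than the real install path:

```shell
# Create a scratch stand-in for iotmp and apply mode 733: owner rwx,
# group -wx, other -wx. Hive can write scratch files there without
# letting other users list the directory's contents.
tmpd=$(mktemp -d)
mkdir "$tmpd/iotmp"
chmod 733 "$tmpd/iotmp"
stat -c '%a' "$tmpd/iotmp"    # prints 733
```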

3.2. Copy the configuration templates
[hs@master conf]$ cp hive-default.xml.template hive-site.xml
[hs@master conf]$ cp hive-log4j.properties.template hive-log4j.properties
[hs@master conf]$ cp hive-exec-log4j.properties.template hive-exec-log4j.properties
[hs@master conf]$ cp hive-env.sh.template hive-env.sh
The two log4j files only need to be copied; no further changes are required.
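The copies can also be expressed as a loop over the *.template files; note that hive-default.xml.template is the exception (its target name is hive-site.xml), so it still needs its own cp. This sketch runs against a scratch directory with stand-in template files, not the real conf directory:

```shell
# Copy every *.template in a conf directory to its non-template name,
# skipping targets that already exist. Demonstrated on a scratch dir
# seeded with empty stand-ins for the real templates.
conf=$(mktemp -d)
touch "$conf/hive-env.sh.template" \
      "$conf/hive-log4j.properties.template" \
      "$conf/hive-exec-log4j.properties.template"
for t in "$conf"/*.template; do
    [ -e "${t%.template}" ] || cp "$t" "${t%.template}"
done
ls "$conf"
```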

3.3. Add environment variables
vi ~/.bash_profile
export HIVE_HOME=/home/hs/opt/hive-1.2.1
export PATH=$HIVE_HOME/bin:$PATH

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95-2.6.4.0.el7_2.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hs/opt/hadoop-2.7.2
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export PATH=$PATH:$HADOOP_HOME/bin

source ~/.bash_profile
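After sourcing the profile, it is worth a quick check that $HIVE_HOME/bin actually landed on PATH. The variables are set inline here so the sketch is self-contained; normally they come from ~/.bash_profile as shown above:

```shell
# Sanity-check the environment: set the walkthrough's paths inline and
# verify that $HIVE_HOME/bin is a PATH component.
export HIVE_HOME=/home/hs/opt/hive-1.2.1
export HADOOP_HOME=/home/hs/opt/hadoop-2.7.2
export PATH=$HIVE_HOME/bin:$PATH

case ":$PATH:" in
    *":$HIVE_HOME/bin:"*) echo "HIVE_HOME/bin is on PATH" ;;
    *)                    echo "HIVE_HOME/bin missing from PATH" ;;
esac
```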

3.4. hive-env.sh
vi hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
HADOOP_HOME=/home/hs/opt/hadoop-2.7.2

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/home/hs/opt/hive-1.2.1/conf

3.5. Items in hive-site.xml that need special attention (for details, see: http://blog.csdn.net/nisjlvhudy/article/details/49338883)
Not present in the stock hive-site.xml; added deliberately, otherwise Hive will not run:

<property>
    <name>system:java.io.tmpdir</name>
    <value>/home/hs/opt/hive-1.2.1/iotmp</value>
</property>

Also not present in the stock hive-site.xml; added deliberately, otherwise Hive will not run:

<property>
    <name>system:user.name</name>
    <value>hive</value>
</property>

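An alternative to defining the two system:* properties is to substitute every ${system:java.io.tmpdir} and ${system:user.name} reference in hive-site.xml with concrete values. The sed command below is a sketch demonstrated on a one-line sample file; the same command works on the real hive-site.xml (back it up first):

```shell
# Rewrite the ${system:...} placeholders to fixed values with sed.
# Demonstrated on a miniature sample instead of the real hive-site.xml.
site=$(mktemp)
cat > "$site" <<'EOF'
<value>${system:java.io.tmpdir}/${system:user.name}</value>
EOF
sed -i 's|\${system:java.io.tmpdir}|/home/hs/opt/hive-1.2.1/iotmp|g; s|\${system:user.name}|hive|g' "$site"
cat "$site"    # prints <value>/home/hs/opt/hive-1.2.1/iotmp/hive</value>
```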
Modified to match the local environment:

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/home/hs/opt/hive-1.2.1/hive-warehouse</value>
    <description>location of default database for the warehouse</description>
</property>

Other items to pay attention to:

<property>
    <name>hive.stats.jdbcdriver</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>The JDBC driver for the database that stores temporary Hive statistics.</description>
</property>

 
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>

 
<property>
    <name>hive.stats.dbconnectionstring</name>
    <value>jdbc:mysql://master:3306/hive_stats?createDatabaseIfNotExist=true</value>
    <description>The default connection string for the database that stores temporary Hive statistics.</description>
</property>

 
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>

 
 
<property>
    <name>hive.stats.dbclass</name>
    <value>jdbc:mysql</value>
    <description>
      Expects one of the patterns in [jdbc(:.*), hbase, counter, custom, fs].
      The storage that stores temporary Hive statistics. In filesystem based statistics collection ('fs'),
      each task writes statistics it has collected in a file on the filesystem, which will be aggregated
      after the job has finished. Supported values are fs (filesystem), jdbc:database (where database
      can be derby, mysql, etc.), hbase, counter, and custom as defined in StatsSetupConst.java.
    </description>
</property>

 
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>

4. Download the MySQL JDBC driver (mysql-connector-java-5.1.6.jar)
wget http://mirrors.ibiblio.org/pub/mirrors/maven2/mysql/mysql-connector-java/5.1.6/mysql-connector-java-5.1.6.jar
and place the jar in Hive's lib directory (/home/hs/opt/hive-1.2.1/lib).
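A quick way to confirm the connector actually landed in Hive's lib directory; this sketch simulates $HIVE_HOME/lib with a scratch directory so it is self-checking, but the same if/ls test works against the real path:

```shell
# Verify a mysql-connector jar is present in a lib directory.
# Simulated here with a scratch dir holding an empty stand-in jar.
lib=$(mktemp -d)    # stand-in for /home/hs/opt/hive-1.2.1/lib
touch "$lib/mysql-connector-java-5.1.6.jar"
if ls "$lib"/mysql-connector-java-*.jar >/dev/null 2>&1; then
    echo "MySQL connector present"
else
    echo "MySQL connector missing: Hive cannot reach the MySQL metastore"
fi
```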

5. Start Hive and log in to test
Start the metastore: hive --service metastore &  (connects to the metadata database and initializes it)
Start HiveServer2: hive --service hiveserver2 &  (the service third-party clients log in through)
Run hive, or hive --service cli, to verify with the command-line client.
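Before testing a client login, it can help to confirm the two services are actually listening on their default ports (9083 for the metastore, 10000 for HiveServer2). This sketch assumes bash's /dev/tcp redirection and the coreutils timeout command; the hostname master is the walkthrough's:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device; returns
# non-zero when nothing is listening (or the connect times out).
port_open() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

port_open master 9083  && echo "metastore listening"   || echo "metastore not reachable"
port_open master 10000 && echo "hiveserver2 listening" || echo "hiveserver2 not reachable"
```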
