Installing Hive 3.1.1

  • Installation steps
    • Unpack the Hive tarball
    • Rename the Hive directory (personal preference)
    • Configure Hive environment variables
    • Modify the Hive configuration files
    • Configure MySQL as the Hive metastore database
    • Initialize the Hive metastore
    • Verify the Hive installation

Installation steps

This article covers installing Hive 3.1.1 on top of Hadoop 3.2.0.
Download link: http://mirrors.shu.edu.cn/apache/

Unpack the Hive tarball

[hadoop@master ~]$ tar -zxvf apache-hive-3.1.1-bin.tar.gz

Rename the Hive directory (personal preference)

[hadoop@master ~]$ mv apache-hive-3.1.1-bin hive-3.1.1

Configure Hive environment variables

[hadoop@master ~]$ vi .bash_profile
PATH=$PATH:$HOME/bin
export HADOOP_HOME=/home/hadoop/hadoop-3.2.0
export HIVE_HOME=/home/hadoop/hive-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin

Reload the profile so the variables take effect

[hadoop@master ~]$ source .bash_profile
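To confirm that the variables landed in the current shell, the exports can be checked directly. This is a self-contained sketch that repeats the three exports from the profile above; adjust the paths if your layout differs:

```shell
# Exports from .bash_profile, repeated here so the check is self-contained.
export HADOOP_HOME=/home/hadoop/hadoop-3.2.0
export HIVE_HOME=/home/hadoop/hive-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin

# PATH should now contain Hive's bin directory:
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok: $HIVE_HOME/bin" ;;
  *)                    echo "PATH is missing $HIVE_HOME/bin" ;;
esac
```

Once the real installation is on PATH, `hive --version` should report 3.1.1.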

Modify the Hive configuration files

Configure hive-config.sh

[hadoop@master ~]$ vi hive-3.1.1/bin/hive-config.sh
Append the following at the end:
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
export HIVE_HOME=/home/hadoop/hive-3.1.1
export HADOOP_HOME=/home/hadoop/hadoop-3.2.0

Configure hive-site.xml

[hadoop@master ~]$ cd hive-3.1.1/conf/
[hadoop@master conf]$ cp hive-default.xml.template hive-site.xml

Configure MySQL as the Hive metastore database

For setting up a MySQL database on Linux, see my other article:

https://blog.csdn.net/genus_yang/article/details/87939556

Create a hive user in MySQL

[root@master ~]# mysql -u root -p123
mysql> create user 'hive' identified by '123';
mysql> grant all privileges on *.* to 'hive'@'%' with grant option;
mysql> grant all privileges on *.* to hive@master identified by '123';
mysql> flush privileges;

Create a dedicated metastore database for Hive

[root@master ~]# mysql -h 169.254.1.100 -u hive -p123
mysql> create database hive;

Edit the Hive configuration file hive-site.xml

[hadoop@master ~]$ vi hive-3.1.1/conf/hive-site.xml
Append the following properties just before the closing </configuration> tag:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://169.254.1.100:3306/hive?characterEncoding=UTF-8</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123</value>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hadoop/hive-3.1.1/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hadoop/hive-3.1.1/tmp/resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/hive-3.1.1/tmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/home/hadoop/hive-3.1.1/tmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
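Since hive-site.xml runs to several thousand lines and is hand-edited, it is worth checking that the file is still well-formed XML before initializing the schema. A minimal sketch using Python's standard-library parser (assuming python3 is available on the machine; the path matches the layout above):

```shell
SITE=/home/hadoop/hive-3.1.1/conf/hive-site.xml   # adjust to your layout
if [ -f "$SITE" ]; then
  # A parse failure (e.g. an illegal character entity) is reported
  # together with its line and column number.
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$SITE" \
    && echo "hive-site.xml is well-formed"
else
  echo "no hive-site.xml at $SITE"
fi
```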

Create a tmp scratch directory inside the Hive directory

[hadoop@master ~]$ mkdir /home/hadoop/hive-3.1.1/tmp
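Note that the configuration above also points hive.downloaded.resources.dir and hive.server2.logging.operation.log.location at subdirectories of tmp. A sketch that creates the whole tree in one command, assuming the install sits under the current user's home as in the earlier steps:

```shell
# HIVE_HOME defaults to ~/hive-3.1.1, matching the layout used above.
HIVE_HOME="${HIVE_HOME:-$HOME/hive-3.1.1}"
# -p creates tmp itself plus both subdirectories in one go.
mkdir -p "$HIVE_HOME/tmp/resources" "$HIVE_HOME/tmp/operation_logs"
```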

Copy the MySQL JDBC driver jar into Hive's lib directory

[hadoop@master ~]$ cp mysql-connector-java-5.1.47.jar hive-3.1.1/lib/

Download link: http://central.maven.org/maven2/mysql/mysql-connector-java/
The mapping between Connector/J and MySQL server versions is listed at: https://www.cnblogs.com/peijie-tech/articles/4446011.html

Connector/J version   Driver Type   JDBC version   MySQL Server           Status
5.1                   4             3.0, 4.0       4.1, 5.0, 5.1, 5.5     Recommended version
5.0                   4             3.0            4.1, 5.0               Released version
3.1                   4             3.0            4.1, 5.0               Obsolete
3.0                   4             3.0            3.x, 4.1               Obsolete

Initialize the Hive metastore

[hadoop@master ~]$ schematool -dbType mysql -initSchema

The first run fails with the following error:

Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3210,96,"file:/home/hadoop/hive-3.1.1/conf/hive-site.xml"]

This means that line 3210 of hive-site.xml contains an illegal character:

3209     <description>
3210       Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for&#8;transactional tables. This ensures that inserts (w/o overwrite) running concurrently
3211       are not hidden by the INSERT OVERWRITE.
3212     </description>

Delete the illegal character and re-run the initialization
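One way to strip the character without hunting through a 3000-plus-line file in an editor is a sed one-liner. This is a sketch: &#8; is the character entity reported by the parser above, and a .bak copy of the original is kept in case anything else needs restoring:

```shell
SITE="$HOME/hive-3.1.1/conf/hive-site.xml"   # adjust to your layout
# Delete every occurrence of the illegal &#8; entity in place,
# keeping the untouched original as hive-site.xml.bak.
[ -f "$SITE" ] && sed -i.bak 's/&#8;//g' "$SITE" || echo "no hive-site.xml at $SITE"
```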

[hadoop@master ~]$ schematool -dbType mysql -initSchema
Closing: 0: jdbc:mysql://169.254.1.100:3306/hive?characterEncoding=UTF-8
beeline>
beeline> Initialization script completed
schemaTool completed

This output indicates the initialization completed successfully.

Verify the Hive installation

[Screenshot 1: verifying the Hive installation]
Check the Hive metadata in MySQL:
[Screenshot 2: Hive metastore tables in MySQL]
The hive database now contains many metadata tables, which means Hive was installed successfully!
