Hive 2.3.8 Installation and Deployment

1 Prerequisites

1.1 Hadoop Up and Running
Deploy and start the Hadoop cluster:

[hadoop@hadoop101 hadoop-3.2.2]$ sbin/start-dfs.sh
[hadoop@hadoop102 hadoop-3.2.2]$ sbin/start-yarn.sh

2 Installation Preparation

2.1 Hive Download Links

1. Hive official site: http://hive.apache.org/
2. Documentation: https://cwiki.apache.org/confluence/display/Hive/GettingStarted
3. Downloads: http://archive.apache.org/dist/hive/
4. GitHub: https://github.com/apache/hive

3 Hive Installation

By default, Hive stores its metadata in the embedded Derby database. In production, the metadata is usually kept in a relational database such as MySQL.

3.1 Derby Deployment

Install and configure Hive:
(1) Upload apache-hive-2.3.8-bin.tar.gz to /opt/softwares on the Linux host
(2) Extract apache-hive-2.3.8-bin.tar.gz into /opt/modules/

[hadoop@hadoop101 softwares]$ tar -xf apache-hive-2.3.8-bin.tar.gz -C /opt/modules/

(3) Rename the extracted apache-hive-2.3.8-bin directory to hive

[hadoop@hadoop101 modules]$ mv apache-hive-2.3.8-bin/ hive

(4) Rename hive-env.sh.template in /opt/modules/hive/conf to hive-env.sh

[hadoop@hadoop101 conf]$ mv hive-env.sh.template hive-env.sh

(5) Configure hive-env.sh:

export HADOOP_HOME=/opt/modules/hadoop-3.2.2
export HIVE_CONF_DIR=/opt/modules/hive/conf

(6) Initialize the metastore schema (the embedded Derby metastore creates a metastore_db directory in whatever directory Hive is launched from, and allows only one active session at a time):

bin/schematool -dbType derby -initSchema

3.2 MySQL Deployment

Install and configure Hive:
(1) Upload apache-hive-2.3.8-bin.tar.gz to /opt/softwares on the Linux host
(2) Extract apache-hive-2.3.8-bin.tar.gz into /opt/modules/

[hadoop@hadoop101 softwares]$ tar -xf apache-hive-2.3.8-bin.tar.gz -C /opt/modules/

(3) Rename the extracted apache-hive-2.3.8-bin directory to hive

[hadoop@hadoop101 modules]$ mv apache-hive-2.3.8-bin/ hive

(4) Rename hive-env.sh.template in /opt/modules/hive/conf to hive-env.sh

[hadoop@hadoop101 conf]$ mv hive-env.sh.template hive-env.sh

(5) Configure hive-env.sh:

export HADOOP_HOME=/opt/modules/hadoop-3.2.2
export HIVE_CONF_DIR=/opt/modules/hive/conf

(6) Create hive-site.xml in /opt/modules/hive/conf with the following content:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://linux1:3306/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
                <description>JDBC connect string for a JDBC metastore</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
                <description>Driver class name for a JDBC metastore</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
                <description>username to use against metastore database</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>root</value>
                <description>password to use against metastore database</description>
        </property>
</configuration>
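Beyond the four connection properties, two optional settings are often added inside the same <configuration> block. Both are standard Hive properties; the warehouse value shown is Hive's default and matches the HDFS directory created later in this guide:

```xml
<property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
        <description>location of the default database for the warehouse</description>
</property>
<property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
        <description>show the current database name in the CLI prompt</description>
</property>
```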
(7) Copy the MySQL JDBC driver into hive/lib

cp mysql-connector-java-5.1.49.jar /opt/modules/hive/lib

(8) Initialize the metastore schema

bin/schematool -dbType mysql -initSchema

Note: if initialization fails with an error like

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.

then Hive's bundled Guava jar is older than the one Hadoop uses; replace the guava jar under Hive's lib with the one from Hadoop's share/hadoop/common/lib.
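The replacement can be sketched as a small shell helper; the paths and jar version numbers below are illustrative, so substitute whatever your installs actually ship:

```shell
# Swap Hive's bundled guava jar for the newer one shipped with Hadoop.
# hive_home/hadoop_home are the install roots, e.g. /opt/modules/hive
# and /opt/modules/hadoop-3.2.2.
replace_guava() {
  hive_home="$1"
  hadoop_home="$2"
  # Remove Hive's old guava jar...
  rm -f "$hive_home"/lib/guava-*.jar
  # ...and copy in Hadoop's copy.
  cp "$hadoop_home"/share/hadoop/common/lib/guava-*.jar "$hive_home"/lib/
}

# Example: replace_guava /opt/modules/hive /opt/modules/hadoop-3.2.2
```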

4 Hadoop Cluster Configuration

(1) HDFS and YARN must be running:

[hadoop@hadoop101 hadoop-3.2.2]$ sbin/start-dfs.sh
[hadoop@hadoop102 hadoop-3.2.2]$ sbin/start-yarn.sh

(2) Create the /tmp and /user/hive/warehouse directories on HDFS and make them group-writable:

[hadoop@hadoop101 hadoop-3.2.2]$ bin/hadoop fs -mkdir /tmp
[hadoop@hadoop101 hadoop-3.2.2]$ bin/hadoop fs -mkdir -p /user/hive/warehouse

[hadoop@hadoop101 hadoop-3.2.2]$ bin/hadoop fs -chmod g+w /tmp
[hadoop@hadoop101 hadoop-3.2.2]$ bin/hadoop fs -chmod g+w /user/hive/warehouse

5 Hive Basics

(1) Start Hive

[hadoop@hadoop101 hive]$ bin/hive

(2) List databases

hive> show databases;

(3) Switch to the default database

hive> use default;

(4) Show the tables in the default database

hive> show tables;

(5) Create a table

hive> create table student(id int, name string);

(6) List the tables again to confirm

hive> show tables;

(7) Describe the table structure

hive> desc student;

(8) Insert a row

hive> insert into student values(100001,"jack");

(9) Query the table

hive> select * from student;

(10) Exit Hive

hive> quit;
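The interactive session above can also be scripted. A minimal sketch, assuming Hive is installed as configured earlier: write the statements to a file and run it with the CLI's -f flag (or pass a single statement with -e):

```shell
# Write the statements from the session above into a script file.
cat > /tmp/student.sql <<'EOF'
use default;
create table if not exists student(id int, name string);
show tables;
EOF

# Run the script non-interactively (requires the Hive install from this guide):
# bin/hive -f /tmp/student.sql
# A single statement can be run directly:
# bin/hive -e "select * from student;"
```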
