# Hive Installation Guide

## Introduction

Hive is a data warehouse tool built on top of Hadoop. It maps structured data files to database tables and provides an SQL-like query capability.

Under the hood it translates SQL into MapReduce jobs for computation, with HDFS providing the underlying storage. Put simply, Hive can be understood as a tool that turns SQL into MapReduce jobs, or, going one step further, as a MapReduce client.
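
For example, a simple aggregation like the one below (the student table is only an illustration; any existing Hive table works) is compiled by Hive into a MapReduce job that reads its input from HDFS:

## Run a one-off query from the shell; Hive compiles it into MapReduce stages
hive -e "select id, count(*) from student group by id;"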

## Official Website

https://hive.apache.org/

## Chinese-language Reference
https://www.docs4dev.com/docs/zh/apache-hive/3.1.1/reference/LanguageManual_DML.html

## Hive Installation Modes


Embedded mode: metadata is stored in an embedded Derby database, so no separate Metastore service needs to be started; the database and the Metastore service are both embedded in the main Hive server process. This is the default mode. It is simple to configure, but only one client can connect at a time, which makes it suitable for experiments and unsuitable for production.

Local mode: an external database stores the metadata; MySQL and PostgreSQL are currently supported. Local mode does not require a separate metastore service either; it uses a metastore running in the same process as Hive, so starting a Hive service implicitly starts a metastore inside it.

Remote mode: the metastore service is started separately, and every client configures its connection to that metastore in its configuration file. In remote mode the metastore service and Hive run in different processes. For production, remote mode is the recommended way to configure the Hive metastore, because other software that depends on Hive can then reach Hive through the metastore.
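
A minimal sketch of remote mode (the host name metastore-host is an assumption, and 9083 is the default metastore port; clients must point hive.metastore.uris at it in their hive-site.xml):

## On the metastore node: run the metastore as a standalone service
nohup hive --service metastore &
## On each client, hive-site.xml contains, for example:
##   hive.metastore.uris = thrift://metastore-host:9083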

## Installation

## Extract the Hive archive
cd /opt/software
tar -zxvf apache-hive-3.1.1-bin.tar.gz -C /opt/module/

## Edit the system environment variables
vi /etc/profile

## Add the following to /etc/profile
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

## Apply the changes
source /etc/profile
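
To confirm the new PATH entries took effect (this assumes Hadoop is already installed and HADOOP_HOME is exported, since the hive launcher script needs it):

## Should print the Hive 3.1.1 version banner
hive --version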

## Modify Hive's environment variables
cd  /opt/module/apache-hive-3.1.1-bin/bin/ && vi hive-config.sh


export JAVA_HOME=/opt/module/jdk1.8.0_11
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.1-bin/conf

## Copy the Hive configuration template
cd /opt/module/apache-hive-3.1.1-bin/conf/
cp hive-default.xml.template hive-site.xml



  • Modify the Hive configuration file hive-site.xml, locating each of the following properties and updating them in place; the main changes are the MySQL connection settings:

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root123</value>
    <description>password to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.202.131:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>

  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/opt/module/apache-hive-3.1.1-bin/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>

  <property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/module/apache-hive-3.1.1-bin/iotmp</value>
  </property>

  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/module/apache-hive-3.1.1-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

  <property>
    <name>hive.querylog.location</name>
    <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>

  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

  <property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of [derby, oracle, mysql, mssql, postgres].
      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
  </property>

  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>

  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
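
The local scratch and temp directories referenced above are not created by the tarball; creating them up front (a sketch matching the paths configured above) avoids missing-directory or permission errors on first start:

mkdir -p /opt/module/apache-hive-3.1.1-bin/tmp
mkdir -p /opt/module/apache-hive-3.1.1-bin/iotmp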

Upload the MySQL driver package to the /opt/module/apache-hive-3.1.1-bin/lib/ directory. The driver package is mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside.
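
A sketch of that step, assuming the zip was also uploaded to /opt/software (the folder layout inside the archive may differ by driver version):

cd /opt/software
unzip mysql-connector-java-8.0.15.zip
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/apache-hive-3.1.1-bin/lib/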

  • Make sure there is a database named hive in MySQL (a creation sketch follows this list)
  • Initialize the metastore schema
 schematool -dbType mysql -initSchema
  • Make sure Hadoop is running
  • Start Hive
# Start
hive
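
For the first bullet above, a one-line sketch of creating the metastore database (assuming the root/root123 credentials configured in hive-site.xml; adjust to your environment):

mysql -uroot -proot123 -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET utf8;"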


## Check whether the startup succeeded
show databases;

## Common Hive Commands

## Start Hive
[linux01 hive]$ bin/hive

## Show databases
hive>show databases;

## Use the default database
hive>use default;
## Show the tables in the default database
hive>show tables;

## Create the student table, declaring '\t' as the field delimiter
hive> create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
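
As a follow-up sketch, a local tab-separated file can then be loaded into the new table and queried (the file path is hypothetical; it should contain id and name columns separated by tabs):

## Load a local file into the student table
hive> load data local inpath '/opt/module/datas/student.txt' into table student;
## Query the loaded rows
hive> select * from student;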
