Yi Jin Jing Hive: Hive Installation and Basic Usage

Please credit the source when reposting: http://blog.csdn.net/dongdong9223/article/details/86030401
This post is from the blog 我是干勾鱼的博客.

Ingredients:

  • Java: Java SE Development Kit 8u162 (Oracle Java Archive); see "Installing the JDK on Linux and setting environment variables"

  • Hadoop: hadoop-2.9.1.tar.gz (Apache Hadoop Releases Downloads; all previous releases are available from the Apache release archive site)

  • Hive: hive-2.3.4 (mirrors.tuna.tsinghua.edu.cn, a mirror site for Hive)

1 Downloading Hive

Run the command:

wget -c https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.4/apache-hive-2.3.4-bin.tar.gz

This downloads apache-hive-2.3.4-bin.tar.gz into the directory:

/opt/hive/

2 Unpacking

Unpack the archive:

tar -xzvf apache-hive-2.3.4-bin.tar.gz

3 Configuration

Add the following to the file:

/etc/profile

# hive
export HIVE_HOME=/opt/hive/apache-hive-2.3.4-bin
export PATH=$HIVE_HOME/bin:$PATH

Run the source command to apply the change:

source /etc/profile
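
To confirm the new variables are visible, a quick sanity check (a sketch; any POSIX shell works):

echo $HIVE_HOME        # should print /opt/hive/apache-hive-2.3.4-bin
which hive             # should resolve to $HIVE_HOME/bin/hive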

4 Installing Hadoop

4.1 Install and start Hadoop

To install Hadoop, see the referenced post on building a "Cluster mode" Hadoop cluster with two Alibaba Cloud ECS servers. The command to start Hadoop is:

sbin/start-all.sh
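
To verify that the daemons came up, jps lists the running Java processes (a sketch; the exact set depends on your cluster layout):

jps    # expect processes such as NameNode, DataNode, ResourceManager, NodeManager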

4.2 Create HDFS directories and set permissions

# -p creates missing parents (e.g. /user/hive) and tolerates existing directories
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p    /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p    /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /user/hive/warehouse

5 Configuring the "hive-site.xml" file

Rename the file hive-default.xml.template to hive-site.xml, or simply create a fresh hive-site.xml (a sketch of the copy step follows), and add the content below to it:
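
A minimal sketch of that step, assuming the conf directory of this install (cp keeps the template around for reference; mv also works):

cd /opt/hive/apache-hive-2.3.4-bin/conf
cp hive-default.xml.template hive-site.xml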



<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://localhost:3306/myhive</value>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>123456</value>
        </property>

        <property>
                <name>hive.metastore.schema.verification</name>
                <value>false</value>
        </property>
        <property>
                <name>hive.server2.thrift.port</name>
                <value>10000</value>
        </property>
</configuration>

These properties hold the metastore connection details: the JDBC URL of the metadata database (a MySQL database named myhive here), the driver class, the credentials, and the HiveServer2 Thrift port.
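
Because this configuration points Hive at MySQL, the MySQL JDBC driver must also be on Hive's classpath. The steps above do not show this, so as a sketch (the connector version is an example):

# copy the MySQL JDBC driver into Hive's lib directory
cp mysql-connector-java-5.1.47.jar /opt/hive/apache-hive-2.3.4-bin/lib/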

6 Initializing the Metastore

If this is not the first time the metastore is initialized, be sure to delete the existing metadata first. There are two cases: Derby and MySQL.

6.1 Case 1: Derby

Derby keeps its metadata in the metastore_db folder under the bin directory; it must be deleted or renamed.

6.1.1 Rename "metastore_db" to "metastore_db.tmp"

Go into Hive's bin directory and rename the folder:

# cd /opt/hive/apache-hive-2.3.4-bin/bin
# mv metastore_db metastore_db.tmp

Reason: this follows the referenced post "Hive错误:Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)". Each initialization creates a fresh folder named metastore_db; if one is left over from an earlier initialization, the process fails with:

# schematool -dbType derby -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:	 jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :	 org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:	 APP
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.derby.sql
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

Of course, if the folder does not exist, there is nothing to do.

6.1.2 Initialize the metastore with the "schematool" command

As the Hive wiki notes in "Running HiveServer2 and Beeline", starting with Hive 2.1 the metastore must be initialized with the schematool command. The metadata can live either in Hive's bundled Derby database or in an external database such as MySQL; for simplicity, this walkthrough uses the default Derby database. The initialization command is:

schematool -dbType derby -initSchema
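
To check the result, schematool can also report the schema version (same Derby setup assumed):

schematool -dbType derby -info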

6.2 Case 2: MySQL

6.2.1 Delete the old metadata

For MySQL, simply drop the database that holds the metadata, as sketched below.
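
A sketch of that cleanup, assuming the myhive database from the hive-site.xml above:

# drop the old metastore database and recreate it empty
mysql -u root -p -e "DROP DATABASE IF EXISTS myhive; CREATE DATABASE myhive;"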

6.2.2 Initialize the metastore

schematool -dbType mysql -initSchema
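
If initialization succeeded, the metastore tables should now exist in MySQL (a sketch; DBS and TBLS are standard metastore table names):

mysql -u root -p -e "USE myhive; SHOW TABLES;"    # expect tables such as DBS, TBLS, COLUMNS_V2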

7 Running Hive

There are two ways to connect to Hive:

  • 1: HiveCLI
  • 2: Beeline via HiveServer2

In the words of the official wiki:

HiveServer2 (introduced in Hive 0.11) has its own CLI called Beeline. HiveCLI is now deprecated in favor of Beeline, as it lacks the multi-user, security, and other capabilities of HiveServer2.

HiveCLI is gradually being deprecated in favor of HiveServer2's Beeline. Both approaches are covered briefly here.

7.1 Using HiveCLI

7.1.1 Start Hive

Run the command:

# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

7.1.2 List all databases

hive> show databases;
OK
default
Time taken: 0.02 seconds, Fetched: 1 row(s)

7.1.3 List all tables

hive> show tables;
OK
Time taken: 0.049 seconds
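
With the CLI open you can run HiveQL directly. A minimal sketch, modeled on the pokes example from the official GettingStarted guide:

hive> CREATE TABLE pokes (foo INT, bar STRING);   -- a small test table
hive> SHOW TABLES;                                -- pokes should now be listed
hive> DROP TABLE pokes;                           -- clean up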

7.1.4 Exit Hive

To exit Hive:

quit;

7.2 Using HiveServer2 and Beeline

7.2.1 Start HiveServer2

hiveserver2
2019-01-10 10:30:23: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
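
Note that hiveserver2 stays in the foreground and occupies the terminal. To keep it running in the background, a common approach (an assumption, not part of the original steps) is:

# run HiveServer2 in the background; the log path is an example
nohup $HIVE_HOME/bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &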

7.2.2 Two ways to enter Beeline

7.2.2.1 Preparation before connecting

Note: if you log in as the root user on the current Linux server, you need to add the following to this file on the Hadoop server:

etc/hadoop/core-site.xml

<property>
	<name>hadoop.proxyuser.root.hosts</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.root.groups</name>
	<value>*</value>
</property>

Otherwise you will get this error:

Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)

This follows https://blog.csdn.net/github_38358734/article/details/77522798.
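
Hadoop only reads core-site.xml at startup, so restart it after the change (a sketch, using the scripts from section 4):

# restart Hadoop so the proxyuser settings take effect
$HADOOP_HOME/sbin/stop-all.sh
$HADOOP_HOME/sbin/start-all.sh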

7.2.2.2 Method 1: the official way

7.2.2.2.1 Change the permissions of "/tmp" in HDFS

A gripe about the Hive wiki here: the "Running HiveServer2 and Beeline" section just states the command and moves on, but there is a pitfall. The HDFS permission setup in "Running Hive" originally configures:

$ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /tmp

That permission is not sufficient! It should be:

$HADOOP_HOME/bin/hadoop fs -chmod 755 /tmp

Otherwise the official way of entering Beeline fails with:

beeline -u jdbc:hive2://localhost:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
19/01/10 10:33:36 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":root:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
...

7.2.2.2.2 Enter Beeline

With the permissions fixed, you can enter Beeline properly. The command is as the wiki describes, for example:

# beeline -u jdbc:hive2://localhost:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
Connected to: Apache Hive (version 2.3.4)
Driver: Hive JDBC (version 2.3.4)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.4 by Apache Hive

7.2.2.3 Method 2: the common way

The command differs slightly, but it avoids the hassle of changing permissions.

7.2.2.3.1 Enter Beeline

Run the beeline command:

# beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 2.3.4 by Apache Hive
beeline>

7.2.2.3.2 Connect to the Hive server

For example, to connect locally on the server, use the command:

!connect jdbc:hive2://localhost:10000

As shown here:

beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: root
Enter password for jdbc:hive2://localhost:10000: ***************
Connected to: Apache Hive (version 2.3.4)
Driver: Hive JDBC (version 2.3.4)
Transaction isolation: TRANSACTION_REPEATABLE_READ

Enter the corresponding username and password. (Alternatively, they can be supplied on the command line; see the sketch below.)
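
A sketch of passing the credentials with Beeline's -n and -p options (use whatever credentials your setup expects):

beeline -u jdbc:hive2://localhost:10000 -n root -p 123456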

7.2.2.3.3 Query test

0: jdbc:hive2://localhost:10000> show tables;
Interrupting... Please be patient this may take some time.
+-----------+
| tab_name  |
+-----------+
+-----------+
No rows selected (97.649 seconds)
0: jdbc:hive2://localhost:10000> show databases;
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (53.986 seconds)
7.2.2.3.4 Exit Beeline

0: jdbc:hive2://localhost:10000> !quit
Closing: 0: jdbc:hive2://localhost:10000
7.2.2.3.5 Common commands

!help                 //show help
!close                //close the current connection (e.g. a JDBC connection)
!table                //list tables
!sh clear             //run a shell command (clear here)
!quit                 //exit the beeline terminal

8 References

GettingStarted (Apache Hive wiki)

阿里云ECS上搭建Hadoop集群环境——使用两台ECS服务器搭建“Cluster mode”的Hadoop集群环境

Hive安装与配置详解

hive中的beeline上执行一些简单命令

Hive错误:Error: FUNCTION ‘NUCLEUS_ASCII’ already exists. (state=X0Y68,code=30000)

关于hive异常:Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStor

https://blog.csdn.net/github_38358734/article/details/77522798
