Hive's remote (metastore) mode requires a MySQL database, so install MySQL first.
Create a MySQL database to store Hive's metadata:
create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Grant access privileges:
grant all privileges on *.* to 'root'@'%' identified by 'root' with grant option;
Flush the privileges:
flush privileges;
Check whether remote access is enabled:
mysql> select user,host from mysql.user;
+------+------------+
| user | host       |
+------+------------+
| root | %          |
| root | 127.0.0.1  |
|      | bigdata111 |
| root | bigdata111 |
|      | localhost  |
| root | localhost  |
+------+------------+
6 rows in set (0.00 sec)
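To confirm that remote access really works, you can also try connecting from another node in the cluster. A quick check, assuming the MySQL host is reachable as bigdata111 and the root password is root as granted above:
# run from a different machine; lists databases over a remote connection
mysql -h bigdata111 -P 3306 -u root -proot -e "show databases;"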
Extract the Hive distribution:
tar -zxvf apache-hive-2.3.0-bin.tar.gz -C ~/training/
Configure the environment variables:
vi /etc/profile
export HIVE_HOME=/root/training/apache-hive-2.3.0-bin
export PATH=$PATH:$HIVE_HOME/bin
Apply the environment variables:
source /etc/profile
// Test the installation
hive --version
Hive 2.3.0
Git git://hw11077/Users/pxiong/Projects/backport/10/hive -r 6f4c35c9e904d226451c465effdc5bfd31d395a0
Compiled by pxiong on Thu Jul 13 22:32:59 PDT 2017
From source with checksum 6e6dbd81574fea419d3635f9cfcc08b0
Make a copy of the template configuration file in Hive's conf directory and add the MySQL connection settings:
[root@bigdata111 conf]# cp hive-default.xml.template hive-site.xml
[root@bigdata111 conf]# vi hive-site.xml
<!-- MySQL user name and password -->
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>
<!-- MySQL connection URL -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.37.11:3306/hive?useSSL=false</value>
</property>
<!-- MySQL JDBC driver class -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
Make sure the MySQL JDBC driver jar is present in Hive's lib directory:
[root@bigdata111 apache-hive-2.3.0-bin]# cd lib/
[root@bigdata111 lib]# ls mysql-connector-java-5.1.43-bin.jar
mysql-connector-java-5.1.43-bin.jar
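The MySQL JDBC driver does not ship with Hive, so if lib/ does not already contain it, copy the jar in yourself. A minimal sketch, where the source path is just a placeholder for wherever you downloaded the connector:
# the source path below is a placeholder; $HIVE_HOME/lib is where Hive picks up JDBC drivers
cp /path/to/mysql-connector-java-5.1.43-bin.jar $HIVE_HOME/lib/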
Initialize the metastore schema:
[root@bigdata111 conf]# schematool -dbType mysql -initSchema
a. Error message: Schema initialization FAILED! Metastore state would be inconsistent !!
Details:
// Hive is still connecting to the embedded Derby database instead of the configured MySQL
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
PS: hive-site.xml is the user-defined configuration file. On startup Hive reads two files, hive-default.xml.template and hive-site.xml. After the cp command above, hive-site.xml contains the entire contents of hive-default.xml.template (which still points at the embedded Derby database), so when you append the MySQL settings, save, and then initialize the metastore, this error is raised and the Hive metastore service cannot start.
Solution: delete all of the configuration entries that came from the template in hive-site.xml and keep only the MySQL properties added above (see the sketch below);
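One way to apply this fix is to recreate hive-site.xml from scratch rather than pruning the huge template copy. A sketch of the shell steps (the .bak name is arbitrary); it assumes you then paste only the four MySQL properties shown earlier inside a <configuration> element:
cd $HIVE_HOME/conf
mv hive-site.xml hive-site.xml.bak    # keep the template copy around for reference
vi hive-site.xml                      # new file: <configuration> wrapping the four MySQL properties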
Then initialize the metastore schema again: schematool -dbType mysql -initSchema
b. Error message: Underlying cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException : Communications link failure
// The connection to the MySQL database failed
Edit the configuration file: vi hive-site.xml
Change the IP address in the MySQL connection URL to localhost:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- the IP address used earlier is replaced with localhost -->
  <value>jdbc:mysql://localhost:3306/hive?useSSL=false</value>
</property>
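If you would rather keep the real IP address than fall back to localhost, first check that MySQL is actually reachable on that address from the Hive node. A quick diagnostic, assuming the mysql client is installed there:
# if this also fails, the problem is the network or MySQL's bind-address, not Hive
mysql -h 192.168.37.11 -P 3306 -u root -proot -e "select 1;"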
c. Initialize the metastore schema once more: schematool -dbType mysql -initSchema. This time it succeeds:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/training/apache-hive-2.3.0-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/training/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://localhost:3306/hive?useSSL=false
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
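To double-check that the schema really landed in MySQL rather than Derby, list the tables of the hive database; the names in the comment are the usual Hive 2.3.0 metastore tables:
# expect metastore tables such as DBS, TBLS, SDS, COLUMNS_V2 and VERSION
mysql -u root -proot hive -e "show tables;"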
5. Start Hive directly:
[root@bigdata111 bin]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/training/apache-hive-2.3.0-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/training/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/root/training/apache-hive-2.3.0-bin/lib/hive-common-2.3.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
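As a quick smoke test, create a database and a table from the hive> prompt; their metadata should then show up in the MySQL hive database. A minimal sketch, where demo_db and demo_tbl are just example names:
hive> create database demo_db;
hive> use demo_db;
hive> create table demo_tbl(id int, name string);
hive> show tables;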