Configuring a Remote Hive Connection


1. Configure hive-site.xml

Add the following properties to hive-site.xml; change the IP address to your own.

<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>192.168.199.105</value>
</property>

2. Configure core-site.xml

Add the following properties to core-site.xml; replace hadoop in the property names with your own username. These hadoop.proxyuser settings allow that user to impersonate other users (such as hive) when connecting through HiveServer2.

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>

3. Start Hadoop & HiveServer2

Start Hadoop:

cd /usr/local/hadoop
./sbin/start-dfs.sh

Start HiveServer2:

cd /usr/local/hive
./bin/hiveserver2
The hiveserver2 process stays in the foreground, so open another terminal, launch beeline, and enter !connect jdbc:hive2://192.168.199.105:10000 to connect to Hive; then enter the username and password configured in hive-site.xml. The complete configuration is given below; the two values correspond to javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword.

hadoop@dblab-VirtualBox:~$ beeline
ls: cannot access '/usr/local/hive/lib/hive-jdbc-*-standalone.jar': No such file or directory
Beeline version 2.1.0 by Apache Hive
beeline> !connect jdbc:hive2://192.168.199.105:10000
Connecting to jdbc:hive2://192.168.199.105:10000
Enter username for jdbc:hive2://192.168.199.105:10000: hive
Enter password for jdbc:hive2://192.168.199.105:10000: ****
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connected to: Apache Hive (version 2.1.0)
Driver: Hive JDBC (version 2.1.0)
19/06/08 22:33:29 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.199.105:10000> 
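Once connected, a quick query at the beeline prompt confirms that HiveServer2 is answering (show databases is standard HiveQL; the databases listed will depend on your metastore):

0: jdbc:hive2://192.168.199.105:10000> show databases;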

You can then open http://192.168.199.105:10002 to check the connection status in the HiveServer2 web UI.
After that, you can connect with a visual client such as DataGrip. You need to import Hive's JDBC jar hive-jdbc-2.1.0-standalone.jar together with the Hadoop jars hadoop-common-2.7.1.jar and hadoop-nfs-2.7.1.jar, and the jar versions must match the Hadoop and Hive versions you are running.
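The same jars also let you connect from your own code. Below is a minimal sketch, assuming the jars above are on the classpath; the class name HiveJdbcExample, the default database, and the show databases query are only illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        // Register the Hive JDBC driver shipped in hive-jdbc-2.1.0-standalone.jar
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Same URL, username, and password as used with beeline above
        String url = "jdbc:hive2://192.168.199.105:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "hive");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("show databases")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // print each database name
            }
        }
    }
}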

4. User: hadoop is not allowed to impersonate hive

You may hit this error when connecting:

User: hadoop is not allowed to impersonate hive

Solution: in hadoop.proxyuser.hadoop.hosts and hadoop.proxyuser.hadoop.groups, change hadoop to the user shown after User: in the error message.
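For example, if the error were User: alice is not allowed to impersonate hive (alice here is just a hypothetical client user), the two properties would become:

<property>
  <name>hadoop.proxyuser.alice.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.alice.groups</name>
  <value>*</value>
</property>

Remember to restart Hadoop after changing core-site.xml so the new proxy-user settings take effect.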

5. Cannot create directory /tmp/hive/hive. Name node is in safe mode.

Because this tmp directory must not already exist, simply remove the directory named in the error with rm -rf /tmp/hive/hive.
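The error message itself says the NameNode is in safe mode, so if the problem persists after deleting the directory, you can also take HDFS out of safe mode manually (hdfs dfsadmin -safemode is a standard HDFS admin command):

cd /usr/local/hadoop
./bin/hdfs dfsadmin -safemode leave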

6. Complete hive-site.xml and core-site.xml configuration files

hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.199.105:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>192.168.199.105</value>
  </property>
</configuration>

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
