Setting Up a Hive Cluster

Previous posts walked through setting up the hadoop, spark, zookeeper, and kafka clusters, so the big-data environment has begun to take shape. The next step is to set up Hive, a very important piece of the big-data ecosystem.
I. Introduction to Hive
Hive is a data warehouse tool built on top of Hadoop. It maps structured data files to database tables and provides full SQL query capability by translating SQL-like statements into MapReduce jobs for execution.
II. Prerequisites
hadoop-2.7.3
zookeeper-3.4.6
Three machines:
master, worker1, worker2
III. Setup
1. Download the Hive 2.1.1 package
Pick a mirror at http://www.apache.org/dyn/closer.cgi/hive/
or download directly from a domestic mirror such as Tsinghua University or NetEase.
Extract the archive to the /app/hive/ directory, which keeps the layout tidy.
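
A minimal download-and-extract sketch (assuming the Apache archive URL below is still valid; any mirror copy of apache-hive-2.1.1-bin.tar.gz works the same way):

wget https://archive.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
mkdir -p /app/hive
tar -zxf apache-hive-2.1.1-bin.tar.gz -C /app/hive/
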
2. Configure environment variables
Note that Hive only needs to be installed on a single node, in this case the master node.

[hadoop@master zookeeper]$ vim ~/.bash_profile

Contents of .bash_profile:

# User specific environment and startup programs
export JAVA_HOME=/app/java/jdk1.8.0_141
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export SCALA_HOME=/app/scala/scala-2.11.8
export SPARK_HOME=/app/spark/spark-2.1.1
export ZOOKEEPER_HOME=/app/zookeeper/zookeeper-3.4.6
export KAFKA_HOME=/app/kafka/kafka_2.10-0.9.0.0
export HIVE_HOME=/app/hive/apache-hive-2.1.1-bin
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin:$HIVE_HOME/bin
export PATH
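
After saving, reload the profile so the variables take effect in the current shell, and sanity-check the result:

[hadoop@master ~]$ source ~/.bash_profile
[hadoop@master ~]$ echo $HIVE_HOME
/app/hive/apache-hive-2.1.1-bin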

3. Install and configure MySQL. First check for, and remove, the MySQL packages that ship with CentOS.

[hadoop@master app]$ rpm -qa |grep mysql
mysql-libs-5.1.71-1.el6.x86_64

Removing the bundled package (and the install below) requires root privileges, so use sudo:

[hadoop@master app]$ sudo rpm -e mysql-libs-5.1.71-1.el6.x86_64 --nodeps
[hadoop@master app]$ rpm -qa |grep mysql
[hadoop@master app]$ sudo yum -y install mysql-server
Loaded plugins: fastestmirror, refresh-packagekit, security
Installed:
  mysql-server.x86_64 0:5.1.73-8.el6_8                                                                                                                                     

Dependency Installed:
  mysql.x86_64 0:5.1.73-8.el6_8          mysql-libs.x86_64 0:5.1.73-8.el6_8          perl-DBD-MySQL.x86_64 0:4.013-3.el6          perl-DBI.x86_64 0:1.609-4.el6         

Complete!
[hadoop@master app]$ rpm -qa |grep mysql
mysql-5.1.73-8.el6_8.x86_64
mysql-libs-5.1.73-8.el6_8.x86_64
mysql-server-5.1.73-8.el6_8.x86_64

Installation succeeded!
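
The initialization step below talks to a running server, so start MySQL first (on CentOS 6 the service is named mysqld) and, optionally, enable it at boot:

[hadoop@master app]$ sudo service mysqld start
[hadoop@master app]$ sudo chkconfig mysqld on
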
4. Initialize and configure MySQL
(1) Set the MySQL password (run with root privileges):

[hadoop@master usr]$ cd /usr/bin
[hadoop@master bin]$ sudo ./mysql_secure_installation

(2) Enter the current password for the MySQL root user. There is no initial password, so just press Enter:

Enter current password for root (enter for none):

(3) Set the password for MySQL's root user (it must match the Hive configuration below; here it is set to 123):

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
... Success!

(4) Remove anonymous users:

Remove anonymous users? [Y/n] Y
... Success!

(5) When asked whether to disallow remote root login, answer N (remote access is needed later):

Disallow root login remotely? [Y/n] N
... Success!

(6) Remove the test database:

Remove test database and access to it? [Y/n] Y
 Dropping test database...
... Success!
 Removing privileges on test database...
... Success!

(7) Reload the privilege tables:

Reload privilege tables now? [Y/n] Y
... Success!

(8) Done:

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.
Thanks for using MySQL!

(9) Log in to MySQL:

mysql -uroot -p
# list users:               select user from mysql.user;
# (not needed) create user: create user 'hive'@'%' identified by '123';
# drop a user:              drop user 'hive'@'%';
flush privileges;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123' WITH GRANT OPTION;
FLUSH PRIVILEGES;
exit;

MySQL configuration is now complete.
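
Since Hive will connect via jdbc:mysql://master:3306, it is worth confirming that the GRANT above really allows remote logins. A quick check from a worker node, assuming the mysql client is installed there:

[hadoop@worker1 ~]$ mysql -h master -uroot -p123 -e "show databases;"
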
5. Configure Hive
(1) Edit the hive-env.sh file:

[hadoop@master conf]$ cp hive-env.sh.template  hive-env.sh
[hadoop@master conf]$ vim hive-env.sh
JAVA_HOME=/app/java/jdk1.8.0_141
HADOOP_HOME=/app/hadoop/hadoop-2.7.3
HIVE_HOME=/app/hive/apache-hive-2.1.1-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
#export HIVE_AUX_JARS_PATH=$SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
#export HADOOP_OPTS="-Dorg.xerial.snappy.tempdir=/tmp -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib $HADOOP_OPTS"

(2) Edit hive-site.xml:

[hadoop@master conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@master conf]$ vim hive-site.xml 

The template contains a very large number of settings; clear out all the properties inside <configuration> and use a configuration like the following:

<configuration>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
    <description>password to use against metastore database</description>
</property>
<property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
</property>
<property>
    <name>datanucleus.autoCreateColumns</name>
    <value>true</value>
</property>

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive</value>
    <description>location of default database for the warehouse</description>
</property>

<property>
    <name>hive.exec.dynamic.partition</name>
    <value>true</value>
</property>
<property>
    <name>hive.exec.dynamic.partition.mode</name>
    <value>nonstrict</value>
</property>

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/app/hive/apache-hive-2.1.1-bin/tmp/HiveJobsLog</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/app/hive/apache-hive-2.1.1-bin/tmp/ResourcesLog</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/app/hive/apache-hive-2.1.1-bin/tmp/HiveRunLog</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/app/hive/apache-hive-2.1.1-bin/tmp/OpertitionLog</value>
    <description>Top level directory where operation tmp are stored if logging functionality is enabled</description>
</property>

<property>
    <name>hive.hwi.war.file</name>
    <value>/app/hive/apache-hive-2.1.1-bin/lib/hive-hwi-2.1.1.jar</value>
    <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
</property>
<property>
    <name>hive.hwi.listen.host</name>
    <value>master</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
</property>

<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>master</value>
</property>
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<property>
    <name>hive.server2.thrift.http.port</name>
    <value>10001</value>
</property>
<property>
    <name>hive.server2.thrift.http.path</name>
    <value>cliservice</value>
</property>

<property>
    <name>hive.server2.webui.host</name>
    <value>master</value>
</property>
<property>
    <name>hive.server2.webui.port</name>
    <value>10002</value>
</property>
<property>
    <name>hive.scratch.dir.permission</name>
    <value>755</value>
</property>

<property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
</property>

<property>
    <name>hive.auto.convert.join</name>
    <value>false</value>
</property>
<property>
    <name>spark.dynamicAllocation.enabled</name>
    <value>true</value>
    <description>Dynamically allocate executor resources</description>
</property>

<property>
    <name>spark.driver.extraJavaOptions</name>
    <value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
</configuration>
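
Because hive.metastore.warehouse.dir points at /hive on HDFS, that directory (and a group-writable HDFS /tmp) should exist before Hive writes to it. Once the Hadoop cluster is up (step 10 below), a sketch:

hdfs dfs -mkdir -p /hive /tmp
hdfs dfs -chmod g+w /hive /tmp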

(3) Configure the log location by editing the hive-log4j2.properties file (Hive 2.x reads the log4j2 file, so keep the "2" in the copied file name):

[hadoop@master conf]$ cp hive-log4j2.properties.template hive-log4j2.properties
[hadoop@master conf]$ vim hive-log4j2.properties

Change the hive.log location to the ${HIVE_HOME}/tmp directory:

property.hive.log.dir = /app/hive/apache-hive-2.1.1-bin/tmp

Create the tmp directory:

[hadoop@master conf]$ mkdir ${HIVE_HOME}/tmp
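
The hive-site.xml above also references several subdirectories under tmp; creating them up front avoids missing-directory surprises at startup (the names come straight from the config):

[hadoop@master conf]$ mkdir -p ${HIVE_HOME}/tmp/{HiveJobsLog,ResourcesLog,HiveRunLog,OpertitionLog}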

(4) Configure the hive-config.sh file (it lives in $HIVE_HOME/bin):

## Add the following three lines
export JAVA_HOME=/app/java/jdk1.8.0_141
export HIVE_HOME=/app/hive/apache-hive-2.1.1-bin
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
## Change the following line
HIVE_CONF_DIR=$HIVE_HOME/conf

(5) Copy the JDBC driver
Place the MySQL JDBC jar into the $HIVE_HOME/lib directory:

[hadoop@master tgz]$ cp mysql-connector-java-5.1.19-bin.jar /app/hive/apache-hive-2.1.1-bin/lib/

(6) Copy the jline jar
Copy jline-2.12.jar from the $HIVE_HOME/lib directory
into $HADOOP_HOME/share/hadoop/yarn/lib,
and delete the older jline jar already present in that directory (command shown after the copy).

[hadoop@master lib]$ cp jline-2.12.jar /app/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/
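
The old jline bundled with Hadoop 2.7.x is usually jline-0.9.94.jar; if that is the version you find in yarn/lib, remove it:

[hadoop@master lib]$ rm /app/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jline-0.9.94.jar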

(7) Copy the tools.jar package
Copy tools.jar from the $JAVA_HOME/lib directory into $HIVE_HOME/lib:

[hadoop@master tgz]$ cp  $JAVA_HOME/lib/tools.jar  ${HIVE_HOME}/lib

(8) Initialize Hive
Use either MySQL or Derby as the metastore database (this article uses MySQL).
Note: first check whether MySQL already contains leftover Hive metadata; if it does, delete it before initializing, for example:
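
A quick way to check and clean up (the drop statement removes the whole hive database, so run it only on a fresh setup):

mysql -uroot -p123 -e "show databases;"
mysql -uroot -p123 -e "drop database if exists hive;"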

In the hive/bin directory, run schematool -dbType mysql -initSchema  ## MySQL as the metastore database

[hadoop@master bin]$ schematool -dbType mysql -initSchema
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/app/java/jdk1.8.0_141/bin:/app/hadoop/hadoop-2.7.3/bin:/app/hadoop/hadoop-2.7.3/sbin:/app/scala/scala-2.11.8/bin:/app/spark/spark-2.1.1/bin:/app/spark/spark-2.1.1/sbin:/app/zookeeper/zookeeper-3.4.6/bin:/app/kafka/kafka_2.10-0.9.0.0/bin:/app/hive/apache-hive-2.1.1-bin/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hive/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:   root
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed
[hadoop@master bin]$ 

Here "mysql" means MySQL is used as the database that stores the Hive metadata. To use Derby instead, run
schematool -dbType derby -initSchema  ## Derby as the metastore database
This article uses MySQL as the metastore database.
The script hive-schema-2.1.0.mysql.sql creates the initial metastore tables in the configured database.
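
You can verify the initialization by listing the metastore tables the script just created in MySQL:

[hadoop@master bin]$ mysql -uroot -p123 -e "use hive; show tables;"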

(9) Start the metastore service
The metastore service must be running before Hive is executed, otherwise Hive will fail with an error.
Run ./hive --service metastore

[hadoop@master bin]$ ./hive --service metastore
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/app/java/jdk1.8.0_141/bin:/app/hadoop/hadoop-2.7.3/bin:/app/hadoop/hadoop-2.7.3/sbin:/app/scala/scala-2.11.8/bin:/app/spark/spark-2.1.1/bin:/app/spark/spark-2.1.1/sbin:/app/zookeeper/zookeeper-3.4.6/bin:/app/kafka/kafka_2.10-0.9.0.0/bin:/app/hive/apache-hive-2.1.1-bin/bin)
Starting Hive Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hive/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
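
Started this way, the metastore occupies the terminal in the foreground. A common alternative is to run it in the background and redirect its output to a log file, for example:

[hadoop@master bin]$ nohup hive --service metastore > ${HIVE_HOME}/tmp/metastore.log 2>&1 &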

(10) Start the Hadoop cluster. Hive depends on HDFS, so if Hadoop is not running, Hive fails with the error shown at the end of this step:

[hadoop@master bin]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
master: starting datanode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-master.out
worker2: starting datanode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-worker2.out
worker1: starting datanode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-worker1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /app/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-master.out
master: starting nodemanager, logging to /app/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-master.out
worker1: starting nodemanager, logging to /app/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-worker1.out
worker2: starting nodemanager, logging to /app/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-worker2.out
[hadoop@master bin]$ jps
7220 ResourceManager
6869 DataNode
7323 NodeManager
7053 SecondaryNameNode
6765 NameNode
7358 Jps

Open another terminal window and start the Hive CLI:

[hadoop@master bin]$ hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/app/java/jdk1.8.0_141/bin:/app/hadoop/hadoop-2.7.3/bin:/app/hadoop/hadoop-2.7.3/sbin:/app/scala/scala-2.11.8/bin:/app/spark/spark-2.1.1/bin:/app/spark/spark-2.1.1/sbin:/app/zookeeper/zookeeper-3.4.6/bin:/app/kafka/kafka_2.10-0.9.0.0/bin:/app/hive/apache-hive-2.1.1-bin/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hive/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/app/hive/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> 

Without the Hadoop cluster running, Hive fails with an error like this:

[hadoop@master bin]$ ./hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/app/java/jdk1.8.0_141/bin:/app/hadoop/hadoop-2.7.3/bin:/app/hadoop/hadoop-2.7.3/sbin:/app/scala/scala-2.11.8/bin:/app/spark/spark-2.1.1/bin:/app/spark/spark-2.1.1/sbin:/app/zookeeper/zookeeper-3.4.6/bin:/app/kafka/kafka_2.10-0.9.0.0/bin:/app/hive/apache-hive-2.1.1-bin/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hive/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/app/hive/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From master/192.168.163.145 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Call From master/192.168.163.145 to master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy31.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy32.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:689)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:635)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:563)
    ... 9 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 29 more

IV. Testing

hive> show databases;
OK
default
Time taken: 2.529 seconds, Fetched: 1 row(s)
hive> show tables;
OK
Time taken: 0.225 seconds
hive> create table employee (id bigint,name string) row format delimited fields terminated by '\t';
OK
Time taken: 2.264 seconds
hive> select * from employee;
OK
Time taken: 3.812 seconds
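
Since employee is declared with tab-delimited fields, a quick end-to-end check is to load a small tab-separated file and query it back (the file name and rows below are made up for illustration):

[hadoop@master ~]$ printf '1\tAlice\n2\tBob\n' > /tmp/employee.txt
[hadoop@master ~]$ hive -e "load data local inpath '/tmp/employee.txt' into table employee; select * from employee;"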

With that, the Hive setup is complete.
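
As a final optional check, the hive-site.xml above also configured HiveServer2 on port 10000; you can start it in the background and connect with Beeline (-n passes the OS user name):

[hadoop@master bin]$ nohup hive --service hiveserver2 > ${HIVE_HOME}/tmp/hiveserver2.log 2>&1 &
[hadoop@master bin]$ beeline -u jdbc:hive2://master:10000 -n hadoop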
