1. Download apache-hive-1.2.1-bin.tar.gz and extract it:
$tar -xf apache-hive-1.2.1-bin.tar.gz
$cd apache-hive-1.2.1-bin
$cd conf
2. Copy hive-env.sh.template to hive-env.sh, and copy hive-default.xml.template to hive-site.xml.
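As shell commands, the copy step above looks like this (run from the conf/ directory; the .template file names assume the stock 1.2.1 tarball):

```shell
# Run from apache-hive-1.2.1-bin/conf
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
```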
1) $vim hive-env.sh, and append the following:
HADOOP_HOME=/usr/local/hadoop-2.6.0
export HIVE_CONF_DIR=/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/conf
export HIVE_HOME=/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin
2) $vim hive-site.xml, and modify the following properties:
<property>
<name>hive.exec.scratchdir</name>
<!-- <value>/tmp/hive</value> -->
<value>hdfs://Master:9000/hive/scratchdir</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<!-- <value>${/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp}/${system:user.name}</value> -->
<value>/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<!-- <value>/user/hive/warehouse</value> -->
<value>hdfs://Master:9000/hive/warehousedir</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.metastore.uris</name>
<value/>
<!-- <value>thrift://Master:9084</value> -->
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<!-- <value>mine</value> -->
<value>hadoop</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<!-- <value>jdbc:derby:;databaseName=metastore_db;create=true</value> -->
<value>jdbc:mysql://Master:3306/hiveMeta?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<!-- <value>org.apache.derby.jdbc.EmbeddedDriver</value> -->
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<!-- <value>APP</value> -->
<value>hadoop</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/hive-hbase-handler-1.2.1.jar,file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/protobuf-java-2.5.0.jar,file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/hbase-client-1.0.1.1.jar,file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/hbase-common-1.0.1.1.jar,file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/zookeeper-3.4.6.jar,file:///usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar</value>
<!-- <value/> -->
<description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>lib/hive-hwi-0.14.0.war</value>
<description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>Master</value>
<!-- <value/> -->
<description>
List of ZooKeeper servers to talk to. This is needed for:
1. Read/write locks - when hive.lock.manager is set to
org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager,
2. When HiveServer2 supports service discovery via Zookeeper.
3. For delegation token storage if zookeeper store is used, if
hive.cluster.delegation.token.store.zookeeper.connectString is not set
</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
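The directories referenced by these properties should exist before the first run. A minimal sketch, assuming the local path and the Master:9000 namenode used throughout this guide (requires a running HDFS):

```shell
# Local scratch/resource directory used by several properties above
mkdir -p /usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/iotmp
# HDFS directories for the scratch space and the warehouse
hdfs dfs -mkdir -p /hive/scratchdir /hive/warehousedir
# hive.exec.scratchdir expects write-all (733) permission
hdfs dfs -chmod 733 /hive/scratchdir
```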
3) Modify hive-log4j.properties as follows:
$vi hive-log4j.properties
#log4j.appender.DRFA=org.apache.hadoop.hive.ql.log.PidDailyRollingFileAppender
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
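The two substitutions above can also be applied non-interactively; a sketch using sed, run from the conf/ directory (a .bak backup is kept; unlike the manual edit, it does not leave the old values behind as comments):

```shell
sed -i.bak \
  -e 's/org\.apache\.hadoop\.hive\.ql\.log\.PidDailyRollingFileAppender/org.apache.log4j.DailyRollingFileAppender/' \
  -e 's/org\.apache\.hadoop\.hive\.shims\.HiveEventCounter/org.apache.hadoop.log.metrics.EventCounter/' \
  hive-log4j.properties
```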
3. Installation is done! A few basic verification commands:
$jps
If RunJar appears in the output, Hive has started.
$netstat -nl|grep 10000
If a LISTEN entry appears, the startup succeeded.
$./hive
hive>show tables;
OK
Time taken: 2.585 seconds
4. Troubleshooting:
Various errors can come up along the way; since they may not occur for everyone, they are collected here rather than in the install steps. The fixes below were personally tested and work.
Error 1: the .log and .out files contain messages like the following:
14/08/21 10:41:19 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/21 10:41:20 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/21 10:41:21 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/21 10:41:22 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/21 10:41:23 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/08/21 10:41:24 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Fix 1: the problem was resolved after adding the following to yarn-site.xml:
<property>
<name>yarn.resourcemanager.address</name>
<value>Master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Master:8031</value>
</property>
Error 2: the hive command itself works, but show tables fails with:
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Fix 2: run the following in a Linux terminal:
$ hive --service hwi
13/05/23 17:13:33 INFO hwi.HWIServer: HWI is starting up
13/05/23 17:13:34 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/05/23 17:13:34 INFO mortbay.log: jetty-6.1.14
13/05/23 17:13:35 INFO mortbay.log: Extract jar:file:/home/niy/workspace1/hive/trunk/lib/hive-hwi-0.12.0-SNAPSHOT.war!/ to /tmp/Jetty_0_0_0_0_9999_hive.hwi.0.12.0.SNAPSHOT.war__hwi__.bt0qvz/webapp
13/05/23 17:13:36 INFO mortbay.log: Started [email protected]:9999
If that command itself errors out, as shown here:
13/04/26 00:21:17 INFO hwi.HWIServer: HWI is starting up
13/04/26 00:21:18 FATAL hwi.HWIServer: HWI WAR file not found at /usr/local/hive/usr/local/hive/lib/hive-hwi-0.12.0-SNAPSHOT.war
then it is a configuration problem; copy the HWI settings from hive-default.xml into hive-site.xml:
<property>
<name>hive.hwi.war.file</name>
<value>lib/hive-hwi-0.12.0-SNAPSHOT.war</value>
<description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>0.0.0.0</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
Run the command again: $ bin/hive --service hwi
13/04/26 00:24:51 INFO hwi.HWIServer: HWI is starting up
13/04/26 00:24:51 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/04/26 00:24:51 INFO mortbay.log: jetty-6.1.14
13/04/26 00:24:51 INFO mortbay.log: Extract jar:file:/home/niy/workspace1/hive/trunk/build/dist/lib/hive-hwi-0.12.0-SNAPSHOT.war!/ to /tmp/Jetty_0_0_0_0_9999_hive.hwi.0.12.0.SNAPSHOT.war__hwi__.bt0qvz/webapp
13/04/26 00:24:52 INFO mortbay.log: Started [email protected]:9999
Now open http://localhost:9999/hwi in a browser to verify that the service is up.
$ hive --service cli
Logging initialized using configuration in file:/home/niy/workspace1/hive/trunk/conf/hive-log4j.properties
Hive history file=/tmp/niy/hive_job_log_niy_201305231713_548210272.txt
Error 3: a CREATE statement fails:
hive> CREATE TABLE dummy(value STRING);
FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Fix 3:
Download the MySQL JDBC driver (mysql-connector-java-5.1.10-bin.jar) and place it in the lib directory under the Hive install.
Error 4: MySQL denies access
hive> show tables;
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Access denied for user 'hive'@'10.210.74.152' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hive'@'10.210.74.152' (using password: YES)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Fix 4:
MySQL is rejecting the remote connection because the user has not been granted privileges. In the mysql client, run:
grant all on *.* to 'root'@'%' identified by 'root';
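Note that the hive-site.xml earlier in this guide connects as user hadoop with password hadoop, so that account needs privileges on the metastore database too. A sketch using the example credentials and database name from this guide (MySQL 5.x syntax):

```shell
mysql -u root -p <<'SQL'
-- grant the metastore user from this guide's hive-site.xml
GRANT ALL ON hiveMeta.* TO 'hadoop'@'%' IDENTIFIED BY 'hadoop';
FLUSH PRIVILEGES;
SQL
```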
Error 5:
Logging initialized using configuration in jar:file:/home/duwei/Downloads/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101) (remaining stack trace omitted)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.console.ConsoleReader.<init>(ConsoleReader.java:230) (remaining stack trace omitted)
Fix 5: replace the jline jar under Hadoop's share/hadoop/yarn/lib with the matching jline jar shipped in Hive's lib directory.
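A sketch of that swap, assuming the stock jar versions (jline-0.9.94 in Hadoop 2.6.0, jline-2.12 in Hive 1.2.1; check the actual file names on your machine first):

```shell
# Move the old jline out of the way, then copy in Hive's newer one
mv /usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar \
   /usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
cp /usr/local/hadoop-2.6.0/apache-hive-1.2.1-bin/lib/jline-2.12.jar \
   /usr/local/hadoop-2.6.0/share/hadoop/yarn/lib/
```

Alternatively, exporting HADOOP_USER_CLASSPATH_FIRST=true before launching hive should make Hive's jline win on the classpath without touching Hadoop's jars.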