Installing apache-hive-2.0.0

Install Hive

Install Hive on the Hadoop NameNode. Copy the installation archive to /usr/tools/apache-hive-2.0.0-bin.tar.gz on the Linux machine.
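If the archive was downloaded on another machine, it can be copied over with scp, for example (the hostname namenode and the source location are only illustrative assumptions):

# copy the archive to /usr/tools on the NameNode (hostname/paths are assumptions)
scp apache-hive-2.0.0-bin.tar.gz root@namenode:/usr/tools/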

Extract the archive:

tar -zxvf apache-hive-2.0.0-bin.tar.gz

Add Hive to the environment variables:

vi /etc/profile

Append the following lines:

export HIVE_HOME=/usr/tools/apache-hive-2.0.0-bin

export PATH=$PATH:$HIVE_HOME/bin

Save the file, then make the changes take effect:

source /etc/profile
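As an optional check, the new variables and the hive launcher should now be visible in the current shell:

# confirm HIVE_HOME is set and the hive script is on PATH
echo $HIVE_HOME
which hive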

 

Install MySQL as Hive's metastore (skip this step if MySQL is already installed).
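If MySQL still needs to be installed, a minimal sketch for a yum-based system is shown below; package and service names vary by distribution and MySQL version, so treat them as assumptions:

# install and start the MySQL server (package/service names are assumptions for CentOS 6-style systems)
yum install -y mysql-server
service mysqld start
# log in as root to run the GRANT statements below
mysql -u root -p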

 

By default MySQL does not allow remote access. Enable remote access:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;

The statement above allows remote access without a password; if a password is required, use this instead:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;

 

Make the privileges take effect:

FLUSH PRIVILEGES;
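To confirm the grant, the account/host list can be checked from the shell; after the statements above, root should have an entry with host '%':

# list which hosts each account may connect from
mysql -u root -p -e "SELECT user, host FROM mysql.user;"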

 

Edit /etc/my.cnf and set binlog_format=mixed:

vi /etc/my.cnf

Uncomment the binlog_format=mixed line, save the file, then restart MySQL:

service mysql restart
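After the restart, the effective value can be verified from the shell; it should report mixed:

# confirm the running server picked up the new binlog format
mysql -u root -p -e "SHOW VARIABLES LIKE 'binlog_format';"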

Configure Hive

Create the /tmp and /user/hive/warehouse directories in HDFS:

hdfs dfs -mkdir /tmp

hdfs dfs -mkdir /user

hdfs dfs -mkdir /user/hive

hdfs dfs -mkdir /user/hive/warehouse

 

hadoop fs -chmod g+w /tmp

hadoop fs -chmod g+w /user/hive/warehouse
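An optional listing confirms the directories exist and that the group write bit was applied (drwxrwxr-x expected):

# verify the warehouse and tmp directories and their permissions
hadoop fs -ls /user/hive
hadoop fs -ls /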

Copy the MySQL JDBC driver jar, mysql-connector-java-5.1.7-bin.jar, into Hive's lib directory.
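Assuming the driver jar sits in the current directory (its source location is not specified here), the copy is simply:

# make the MySQL JDBC driver visible to Hive
cp mysql-connector-java-5.1.7-bin.jar $HIVE_HOME/lib/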

In Hive's conf directory, make a copy of hive-default.xml.template named hive-site.xml:

cp hive-default.xml.template hive-site.xml

 

Edit hive-site.xml and modify the following properties:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/usr/tools/apache-hive-2.0.0-bin/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/usr/tools/apache-hive-2.0.0-bin/tmp/resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/usr/tools/apache-hive-2.0.0-bin/tmp</value>
  <description>Location of Hive run time structured log file</description>
</property>

<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/usr/tools/apache-hive-2.0.0-bin/tmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
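Since several of these properties point at a local tmp directory under the Hive install, it may help to create those directories up front; the paths below simply mirror the values configured above:

# create the local scratch/resource/operation-log directories referenced in hive-site.xml
mkdir -p /usr/tools/apache-hive-2.0.0-bin/tmp/resources
mkdir -p /usr/tools/apache-hive-2.0.0-bin/tmp/operation_logs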

 

Use schematool to initialize the metastore schema:

schematool -initSchema -dbType mysql
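If initialization succeeds, schematool creates the metastore tables inside the hive database in MySQL; listing them is a quick sanity check (tables such as DBS and TBLS should appear):

# show the metastore tables created by schematool
mysql -u root -p -e "USE hive; SHOW TABLES;"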

 

Run Hive (the Hadoop cluster and MySQL must both be running):

hive
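A short smoke test from the shell confirms that Hive can reach both HDFS and the MySQL metastore; the table name test_tbl is just an arbitrary example:

# create, list, and drop a throwaway table
hive -e "CREATE TABLE test_tbl (id INT); SHOW TABLES; DROP TABLE test_tbl;"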
